Showing posts with label Oracle L1. Show all posts

Dec 26, 2023

Data file and Log File

Data file: Datafiles are physical files of the operating system that store the data of all logical structures in the database. They must be explicitly created for each tablespace. Oracle assigns each datafile two associated file numbers, an absolute file number and a relative file number, that are used to uniquely identify it.

In Oracle, you can check the internal free space in a data file and log file by querying specific data dictionary views. Here are the steps to check the internal free space in Oracle:

Checking Internal Free Space in Data File:

Query the DBA_FREE_SPACE view to get information about free space in data files.

SELECT tablespace_name, file_id, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
GROUP BY tablespace_name, file_id;


This query retrieves information about free space in data files. It displays the tablespace name, the file ID, and the free space in megabytes. Adjust the query to include additional columns or conditions as needed.
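To see allocated size and free space side by side, the two views can be combined. The following is a sketch (the LEFT JOIN keeps files that currently report no free extents):

```sql
-- Allocated vs. free space per data file (free space summed per file)
SELECT d.file_name,
       d.tablespace_name,
       d.bytes/1024/1024 AS allocated_mb,
       NVL(f.free_mb, 0) AS free_mb
FROM   dba_data_files d
LEFT JOIN (SELECT file_id, SUM(bytes)/1024/1024 AS free_mb
           FROM   dba_free_space
           GROUP  BY file_id) f
ON     d.file_id = f.file_id
ORDER  BY d.tablespace_name, d.file_name;
```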

Log File: Redo log files are operating system files used by Oracle to maintain logs of all transactions performed against the database. The primary purpose of these log files is to allow Oracle to recover changes made to the database in the case of a failure.

Checking Internal Free Space in Log File:

Query the V$LOGFILE view to get information about the log files.

SELECT a.GROUP#,
       a.THREAD#,
       a.SEQUENCE#,
       a.ARCHIVED,
       a.STATUS,
       b.MEMBER AS FILE_NAME,
       (a.BYTES/1024/1024) AS SIZE_MB
FROM   v$log a
JOIN   v$logfile b ON a.GROUP# = b.GROUP#
ORDER BY a.GROUP#;


This query retrieves information about the log files. It displays the log file group number, thread and sequence numbers, the archived flag and status, the member (file name), and the log file size in megabytes. The free space in a log file is typically managed automatically by Oracle, and you will not see explicit "free space" in the same way you would for a data file. Instead, Oracle manages space within the log files to accommodate ongoing transactions.

Note: It's important to have the necessary privileges to query the DBA views or V$ views. You might need to connect as a user with DBA privileges or request the required permissions from your database administrator.

Shrink Data File and Log File

In Oracle, the process of shrinking data files and log files is different compared to SQL Server. Oracle does not have a direct equivalent to the "shrink" operation as it exists in SQL Server. However, you can reclaim unused space and resize data files in Oracle. Below are steps to achieve this:

Shrink Data File:

Check for unused space within the data file by querying the dba_free_space view.

SELECT file_id, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
GROUP BY file_id;

Resize Data File:

If there is unused space and you want to shrink the data file, you can resize it using the ALTER DATABASE statement.

ALTER DATABASE DATAFILE '/u02/oradata/CLOTADB/clota_data.dbf' RESIZE 200M;


Replace the file path with the actual path to your data file, and 200M with the desired new size. Be cautious about setting the size too small, as it may impact the database's ability to accommodate future data growth.
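Before resizing, it helps to know how far a data file can actually be shrunk: Oracle will not resize a file below its highest allocated block. A hedged sketch of that check, which reads the block size from v$parameter and finds the highest-used block per file via DBA_EXTENTS:

```sql
-- Approximate smallest size (in MB) each file can be resized to,
-- based on its highest-used block and the database block size
SELECT e.file_id,
       CEIL((MAX(e.block_id + e.blocks - 1) *
             (SELECT TO_NUMBER(value) FROM v$parameter
              WHERE name = 'db_block_size')) / 1024 / 1024) AS min_size_mb
FROM   dba_extents e
GROUP  BY e.file_id
ORDER  BY e.file_id;
```

Attempting to resize below this value raises ORA-03297.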

Shrink Log File:

Oracle automatically manages the size of the online redo log files, and there is no direct equivalent to "shrinking" log files. Oracle will reuse and overwrite space in the online redo log files as needed.

However, if you need to resize the redo log files, you can follow these steps:

Check Redo Log Information:

Query the v$log view to get information about the redo log files.

SELECT a.group#, b.member, a.bytes / (1024 * 1024) AS log_file_size_mb
FROM v$log a
JOIN v$logfile b ON a.group# = b.group#;

Switch Logs:

Perform a log switch to force the database to start using a new redo log file.

ALTER SYSTEM SWITCH LOGFILE;

Resize Redo Log File:

After switching logs, note that a redo log file cannot be resized in place. Instead, once the target group's status is INACTIVE, drop the group using the ALTER DATABASE statement.

ALTER DATABASE DROP LOGFILE GROUP 3;

Then, add a new redo log group with the desired size.

ALTER DATABASE ADD LOGFILE GROUP 3 ('/u02/oradata/CLOTADB/redo03.log') SIZE 200M;

Replace the group number, the file path, and the size with values appropriate for your environment, and repeat for each group you need to resize.
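Putting the steps above together, a hedged end-to-end sketch (group numbers, paths, and sizes are examples only; always confirm a group is INACTIVE before dropping it):

```sql
-- 1. Inspect current groups and their status
SELECT group#, status, bytes/1024/1024 AS size_mb FROM v$log;

-- 2. Force a switch so the group to be rebuilt becomes inactive
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;  -- helps move a group from ACTIVE to INACTIVE

-- 3. Drop the inactive group (example: group 3)
ALTER DATABASE DROP LOGFILE GROUP 3;

-- 4. Re-create it at the new size (example path and size)
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/u02/oradata/CLOTADB/redo03.log') SIZE 200M;
```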

Remember to carefully plan and test any changes to file sizes, especially in a production environment, to avoid potential issues related to database performance and space management. Always perform these operations during periods of low database activity and with proper backups in place.


Dec 21, 2023

SQL Loader

Hi guys, in this blog I am going to explain SQL Loader.

 

SQL Loader-

Oracle has a powerful utility called SQL Loader, often written as SQL*Loader, that makes it easy to load data from external files into Oracle database tables. It is especially helpful when dealing with enormous volumes of data that must be loaded quickly and accurately.

 


Here's a brief explanation of SQL Loader in Oracle 19c:

 

Overview-

Using a command-line program called SQL Loader, you can load data into Oracle tables from external files, such as flat files. The format of the data file and the mapping between the data file and the database table are specified using a control file.

 

Key Components:

 

Control File:

This file specifies the data format, source file details, and how the data should be loaded into the database. It contains information such as table name, field names, data types, and data file location.


Data File:

The external file containing the actual data that needs to be loaded into the Oracle table.

 

Bad File:

A file that contains records that were rejected during the load process due to errors.

 

Discard File:

A file that contains records that were intentionally discarded during the load process.

 

Syntax:

The basic syntax for using SQL Loader is as follows:

sqlldr username/password@database control=your_control_file.ctl

 

Example Control File:

An example of a control file might look like this:

LOAD DATA
INFILE 'data_file.csv'
INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(employee_id, first_name, last_name, hire_date DATE 'YYYY-MM-DD')

 

This example specifies that data in the 'data_file.csv' file is to be loaded into the 'employees' table. The data is comma-separated, and certain fields are enclosed by double quotes. The 'hire_date' field is explicitly defined with a date format.
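For illustration, a data_file.csv matching this control file might look like the following (the rows are made-up sample data):

```
101,John,Doe,2023-01-15
102,"Jane","O'Brien",2023-03-02
103,Amit,Sharma,2023-07-21
```

Note how the quoted fields in the second row rely on the OPTIONALLY ENCLOSED BY '"' clause, and every hire_date value follows the 'YYYY-MM-DD' mask declared in the control file.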

 

Explanation:

LOAD DATA:

This is the keyword that signals the beginning of the SQL*Loader control file. It indicates that you are instructing SQL*Loader to load data.

 

INFILE 'data_file.csv':

Specifies the name of the input data file, in this case, 'data_file.csv'. This is the external file from which SQL*Loader will read the data.

 

INTO TABLE employees:

Specifies the target database table where the data will be loaded. In this example, the data will be loaded into the 'employees' table.

 

FIELDS TERMINATED BY ',':

Indicates that the fields in the data file are separated by commas. This is essential information for SQL*Loader to correctly parse the data.

 

OPTIONALLY ENCLOSED BY '"':

Specifies that fields may be optionally enclosed by double quotes ("). This is common in CSV (Comma-Separated Values) files, where data fields can be enclosed to handle cases where the field itself contains the delimiter.

 

(employee_id, first_name, last_name, hire_date DATE 'YYYY-MM-DD'):

Describes the mapping between the fields in the data file and the columns in the 'employees' table. Each field in the data file corresponds to a column in the table. The columns are listed within parentheses.

 

employee_id:

This is assumed to be a numeric column in the 'employees' table.

 

first_name:

This is assumed to be a text column.

 

last_name:

Another text column.

 

hire_date DATE 'YYYY-MM-DD':

This specifies that the 'hire_date' field in the data file should be interpreted as a date in the 'YYYY-MM-DD' format.

 

 

Modes of Operation:

SQL Loader supports various modes of operation, including direct path and conventional path. Direct path loading is generally faster, as it bypasses much of the Oracle database processing and writes data directly to the data files.

 

Enhancements in Oracle 19c:

Oracle Database 19c may introduce new features or enhancements to existing features. Be sure to consult the official Oracle documentation for the latest information on SQL Loader features and improvements specific to Oracle 19c.

 

Best Practices:

Optimize the control file for better performance.

Use direct path loading for large datasets.

Monitor and review the log files for any errors or warnings.
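As a hedged example of these practices together, a direct path load with an explicit log file, bad file, and error cap might be invoked like this (file names are placeholders):

```
sqlldr username/password@database control=your_control_file.ctl \
       log=load.log bad=load.bad direct=true errors=50
```

After the run, review load.log for a summary of loaded and rejected rows, and load.bad for the rejected records themselves.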

Dec 18, 2023

System Views in Oracle

 

In Oracle Database, the concept of a "system view" refers to views that provide access to metadata and information about database objects. Let's delve into this concept:

System Views:

System views are also known as data dictionary views or catalog views. These views are part of the Oracle data dictionary, which is a set of tables and views that store metadata about the database. System views provide information about database objects, user accounts, privileges, and various aspects of the database configuration.

Examples of system views include:

DBA_TABLES: Provides information about all tables in the database.

In Oracle Database, the DBA_TABLES view is a system view that belongs to the data dictionary. Data dictionary views, often referred to as system views or catalog views, store metadata information about the database objects. Specifically, DBA_TABLES provides information about tables in the database.

Here are some of the key columns in the DBA_TABLES view:

OWNER: The owner or schema to which the table belongs.

TABLE_NAME: The name of the table.

TABLESPACE_NAME: The tablespace in which the table is stored.

CLUSTER_NAME: The name of the cluster to which the table belongs (if any).

IOT_NAME: The name of the Index-Organized Table (IOT) if the table is an IOT.

LAST_ANALYZED: The timestamp of the last time the table was analyzed.

NUM_ROWS: The approximate number of rows in the table.

You can query the DBA_TABLES view to retrieve information about all tables in the database or filter the results based on specific criteria, such as the owner or table name.
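For example, a quick sketch listing a schema's largest tables by row count (the HR schema name is just an example):

```sql
SELECT owner, table_name, tablespace_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'HR'
ORDER  BY num_rows DESC NULLS LAST;
```

NUM_ROWS is only as fresh as the last statistics gathering, so check LAST_ANALYZED before trusting the counts.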

DBA_USERS: Contains information about all users in the database.

In Oracle Database, the DBA_USERS view is a system view that belongs to the data dictionary. Data dictionary views, also known as system views or catalog views, store metadata information about various aspects of the database. Specifically, the DBA_USERS view provides information about database users.

Here are some of the key columns in the DBA_USERS view:

USERNAME: The name of the user.

USER_ID: A unique identifier for the user.

CREATED: The timestamp indicating when the user was created.

DEFAULT_TABLESPACE: The default tablespace for the user.

TEMPORARY_TABLESPACE: The default temporary tablespace for the user.

PROFILE: The name of the security profile assigned to the user.

ACCOUNT_STATUS: The status of the user's account (e.g., OPEN, LOCKED).

You can query the DBA_USERS view to retrieve information about all users in the database or filter the results based on specific criteria.



SELECT username, account_status, last_login FROM dba_users WHERE username = 'TARIF';





DBA_VIEWS: Contains information about all views in the database.

In Oracle Database, the DBA_VIEWS view is a system view that belongs to the data dictionary. Data dictionary views, often referred to as system views or catalog views, store metadata information about various database objects. Specifically, the DBA_VIEWS view provides information about views in the database.

Here are some of the key columns in the DBA_VIEWS view:

OWNER: The owner or schema to which the view belongs.

VIEW_NAME: The name of the view.

TEXT_LENGTH: The length of the text defining the view.

TEXT: The text of the SELECT statement that defines the view.

TYPE_TEXT_LENGTH: The length of the type text (usually 'VIEW').

TYPE_TEXT: The type text (usually 'VIEW').

You can query the DBA_VIEWS view to retrieve information about all views in the database or filter the results based on specific criteria.
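For example, to list a schema's views together with the length of their defining text (the schema name is illustrative):

```sql
SELECT owner, view_name, text_length
FROM   dba_views
WHERE  owner = 'HR'
ORDER  BY view_name;
```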



To query these views, you typically need elevated privileges, such as DBA (Database Administrator) privileges or SELECT access on the specific views.


Dec 11, 2023

Data Integrity

 

Introduction to Data Integrity

It is important that data maintain integrity, which is adherence to the business rules determined by the database administrator or application developer.

Business rules specify conditions and relationships that must always be true or must always be false. For example, each company defines its own policies about salaries, employee numbers, inventory tracking, and so on.

Techniques for Guaranteeing Data Integrity

When designing a database application, developers have several options for guaranteeing the integrity of data stored in the database.

These options include:

  • Enforcing business rules with triggered stored database procedures
  • Using stored procedures to completely control access to data
  • Enforcing business rules in the code of a database application

Using Oracle Database integrity constraints, which are rules defined at the column or object level that restrict values in the database

Advantages of Integrity Constraints

An integrity constraint is a schema object that is created and dropped using SQL. To enforce data integrity, use integrity constraints whenever possible.

Advantages of integrity constraints over alternatives for enforcing data integrity include:

  • Declarative ease

Because you define integrity constraints using SQL statements, no additional programming is required when you define or alter a table. The SQL statements are easy to write and eliminate programming errors.

  • Centralized rules

Integrity constraints are defined for tables and are stored in the data dictionary. Thus, data entered by all applications must adhere to the same integrity constraints. If the rules change at the table level, then applications need not change. Also, applications can use metadata in the data dictionary to immediately inform users of violations, even before the database checks the SQL statement.


  • Flexibility when loading data

You can disable integrity constraints temporarily to avoid performance overhead when loading large amounts of data. When the data load is complete, you can re-enable the integrity constraints.
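As a sketch of this load-time pattern (the table and constraint names are hypothetical):

```sql
-- Disable the constraint before a bulk load
ALTER TABLE employees DISABLE CONSTRAINT emp_dept_fk;

-- ... perform the bulk load (e.g., via SQL*Loader direct path) ...

-- Re-enable it afterwards; Oracle validates existing rows
-- unless ENABLE NOVALIDATE is specified
ALTER TABLE employees ENABLE CONSTRAINT emp_dept_fk;
```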

Types of Integrity Constraints

NOT NULL

A NOT NULL constraint requires that a column of a table contain no null values. A null is the absence of a value. By default, all columns in a table allow nulls.

NOT NULL constraints are intended for columns that must not lack values.

 

For example, if you have a table of employees and the "birthdate" column is defined as "not null," it means that every employee record must have a birthdate specified; you cannot leave it blank.

 

In programming languages and SQL, you might encounter syntax like:

CREATE TABLE employees (
  employee_id INT PRIMARY KEY,
  first_name VARCHAR(50) NOT NULL,
  last_name VARCHAR(50) NOT NULL,
  birthdate DATE NOT NULL
);

In this example, both the first_name and last_name columns must have values, and the birthdate column must also have a non-null date value.

The use of "not null" constraints helps ensure data integrity and makes it clear that certain information is required for each record.
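Attempting to omit a mandatory column then fails at insert time. A hedged sketch against the table above:

```sql
-- Fails with ORA-01400: cannot insert NULL into ("..."."EMPLOYEES"."BIRTHDATE")
INSERT INTO employees (employee_id, first_name, last_name)
VALUES (1, 'John', 'Doe');
```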

Unique Constraints

A unique key constraint requires that every value in a column or set of columns be unique. With a unique key constraint, no two rows of a table may have duplicate values in a single column (the unique key) or in a set of columns (the composite unique key).

 

For example, let's say you have a table of employees, and you want to make sure that each employee has a unique employee ID. You can apply a unique constraint to the "employee_id" column to enforce this rule. Here's an example using SQL:

 

CREATE TABLE employees (
  employee_id INT,
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  UNIQUE (employee_id)
);

In this case, the UNIQUE (employee_id) constraint ensures that each value in the "employee_id" column is unique across all rows in the "employees" table. Attempting to insert a new record with an employee ID that already exists in the table would result in a constraint violation error.

Unique constraints are valuable for maintaining data integrity and preventing duplicate entries in critical columns. They are commonly used for columns such as usernames, email addresses, or any other data where uniqueness is a requirement.

 

Primary Key Constraints

In a primary key constraint, the values in the group of one or more columns subject to the constraint uniquely identify the row. Each table can have one primary key, which in effect names the row and ensures that no duplicate rows exist.

A primary key can be natural or a surrogate. A natural key is a meaningful identifier made of existing attributes in a table. For example, a natural key could be a postal code in a lookup table. In contrast, a surrogate key is a system-generated incrementing identifier that ensures uniqueness within a table. Typically, a sequence generates surrogate keys.

The Oracle Database implementation of the primary key constraint guarantees that the following statements are true:

  • No two rows have duplicate values in the specified column or set of columns.
  • The primary key columns do not allow nulls.

For example using SQL:

CREATE TABLE students (
  student_id INT PRIMARY KEY,
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  age INT
);

In this example, the student_id column is specified as the primary key for the "students" table. It means that each student in the table must have a unique student_id, and this column cannot contain null values.

 

The primary key is crucial for database relationships because it provides a way to uniquely identify each record in a table. It is often used as a reference in other tables (as a foreign key) to establish relationships between tables.

 

In addition to enforcing uniqueness and non-null values, the database management system typically uses the primary key to create an index for faster data retrieval and search operations.

 

Foreign Key Constraints

Whenever two tables contain one or more common columns, Oracle Database can enforce the relationship between the two tables through a foreign key constraint, also called a referential integrity constraint.

A foreign key constraint requires that for each value in the column on which the constraint is defined, a matching value must exist in the specified column of the referenced table. An example of a referential integrity rule is that an employee can work for only an existing department.

The following table lists terms associated with referential integrity constraints.

 

For example using SQL:

 

CREATE TABLE departments (
  department_id INT PRIMARY KEY,
  department_name VARCHAR(50)
);

CREATE TABLE employees (
  employee_id INT PRIMARY KEY,
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  department_id INT,
  FOREIGN KEY (department_id) REFERENCES departments(department_id)
);

In this example, the departments table has a primary key (department_id), and the employees table has a foreign key (department_id) that references the primary key in the departments table. This relationship indicates that the department_id in the employees table must correspond to a valid department_id in the departments table.

Key points about foreign keys:

Referential Integrity: Foreign keys ensure that relationships between tables remain consistent. They help maintain referential integrity, meaning that values in the foreign key column of one table correspond to values in the primary key column of another table.

CASCADE Options: When defining a foreign key, you can specify what actions should happen if the referenced row in the parent table is updated or deleted. For example, you can use the ON DELETE CASCADE option to automatically delete related rows in the child table when the corresponding row in the parent table is deleted.
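A hedged sketch of the CASCADE option, reusing the tables above:

```sql
-- Recreated with ON DELETE CASCADE: deleting a department
-- automatically deletes that department's employee rows
CREATE TABLE employees (
  employee_id   INT PRIMARY KEY,
  first_name    VARCHAR(50),
  last_name     VARCHAR(50),
  department_id INT,
  FOREIGN KEY (department_id)
    REFERENCES departments(department_id)
    ON DELETE CASCADE
);
```

Without a cascade (or SET NULL) action, deleting a referenced department fails with ORA-02292 while child rows still reference it.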

Multiple Foreign Keys: A table can have multiple foreign keys, each referencing a different table. This is common in complex database designs where multiple relationships exist between tables.

Foreign keys play a crucial role in maintaining data integrity and establishing connections between different entities in a relational database.

 

 


Dec 7, 2023

Overview of the Optimizer in oracle 19c and Types of Optimizer

In Oracle 19c, the optimizer plays a crucial role in optimizing SQL statements. The optimizer uses various strategies to transform the SQL statements into an optimal plan, thereby minimizing the overall cost of execution.




In this blog, we will provide an overview of the optimizer in Oracle 19c.


The Optimizer in Oracle 19c:

The optimizer is responsible for generating an efficient execution plan for a given SQL statement. The execution plan consists of various operations, such as joins, scans, and aggregations, that determine the sequence of operations required to retrieve the desired result set.


Cost-Based Optimization:

Oracle 19c utilizes a cost-based optimization (CBO) technique to select the most efficient execution plan. The CBO evaluates the cost of different plans based on factors such as table statistics, indexes, and system configuration. By comparing the costs, the optimizer can determine the optimal plan that will provide the best performance for a given SQL statement.
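You can inspect the plan the CBO chooses with EXPLAIN PLAN and the DBMS_XPLAN package; for example (the table name is illustrative):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The output shows the chosen operations (full scans, index accesses, joins) together with the optimizer's cost and cardinality estimates.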


Caching and Reusing Execution Plans:

Oracle 19c employs various techniques to improve query performance by caching and reusing execution plans. For example, Oracle can store a previously generated execution plan for a given SQL statement and reuse it when the same SQL statement is executed again. This technique reduces the overhead of plan generation and execution.


Parsing, Translation, and Execution:

The optimizer follows a series of steps during the parsing, translation, and execution of a SQL statement. These steps include parsing, which converts the SQL statement into an internal format; translation, which generates the execution plan based on the cost-based optimization techniques; and execution, which carries out the plan to retrieve the desired result set.


Advanced Optimization Techniques:

Oracle 19c provides several advanced optimization techniques, such as dynamic sampling, cardinality feedback, and adaptive execution plans. These techniques enhance the optimizer's ability to generate efficient execution plans for complex SQL statements.


Types of Optimizers-

Oracle 19c has several kinds of Optimizers.



Rule-Based Optimizer (RBO)

The Rule-Based Optimizer (RBO) is Oracle's legacy optimizer. It chooses an execution plan by applying a fixed ranking of access-path rules to the SQL statement and does not use statistics or cost-based optimization techniques. The RBO has been desupported since Oracle Database 10g, so in Oracle 19c it is relevant only as historical context.


Cost-Based Optimizer (CBO)

The Cost-Based Optimizer (CBO) in Oracle 19c generates the execution plan for a given SQL statement based on cost-based optimization techniques. It evaluates the cost of different plans by considering factors such as table statistics, indexes, and system configuration. By comparing the costs, CBO can determine the optimal plan that will provide the best performance for a given SQL statement.


Adaptive Optimizer (Adaptive CBO)

The Adaptive Optimizer (Adaptive CBO) in Oracle 19c is an enhanced version of the Cost-Based Optimizer (CBO). It incorporates additional features such as dynamic sampling, cardinality feedback, and adaptive execution plans to enhance the optimizer's ability to generate efficient execution plans for complex SQL statements.


Transparent Query Execution Optimizer (Transparent CBO)

The Transparent Query Execution Optimizer (Transparent CBO) in Oracle 19c is designed to optimize SQL statements that involve materialized views or other types of optimization views. It generates an execution plan for the SQL statement based on the properties of the optimization views and considers factors such as query transformation, statistics, and system configuration.


Hint-Based Optimizer

The Hint-Based Optimizer in Oracle 19c allows users to override the default optimizer decisions and enforce specific optimization techniques or strategies. It enables users to specify hints within the SQL statement, which instruct the optimizer to follow a specific optimization path.
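For example, a hint can request a full table scan or a particular index (the index name here is hypothetical):

```sql
-- Request a full table scan
SELECT /*+ FULL(e) */ * FROM employees e WHERE last_name = 'Doe';

-- Request a specific index
SELECT /*+ INDEX(e emp_last_name_idx) */ *
FROM   employees e
WHERE  last_name = 'Doe';
```

Hints are directives to the optimizer, not guarantees; a malformed or inapplicable hint is silently ignored.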

In summary, Oracle 19c offers a comprehensive set of optimizer types, each with its own set of capabilities and optimization techniques. These optimizers work together to generate efficient execution plans for complex SQL statements, thereby enhancing the overall performance and scalability of Oracle 19c data management and analytics applications.