Apr 8, 2024

Cloning In Oracle

In the context of Oracle databases, "cloning" typically refers to the process of creating an exact replica of a database, usually for purposes such as testing, development, or disaster recovery. There are several methods for cloning an Oracle database, each with its own advantages and use cases. Here's an overview of some common cloning methods in Oracle:

RMAN (Recovery Manager) Cloning: RMAN is a powerful Oracle utility for backup and recovery operations. With RMAN, you can create a clone of a database by taking a backup of the source database and then restoring it to a different location. RMAN also provides options for efficiently copying data files and performing point-in-time recovery if needed. The key components involved are described below, followed by a minimal duplication sketch.



RMAN Client: The RMAN client is the interface through which database administrators interact with RMAN to perform backup, recovery, and cloning tasks. Administrators use RMAN commands and scripts to initiate and manage database operations.

 

Control File: The control file is a critical component of the Oracle database that stores metadata about the database structure, including data file locations, redo log file information, and backup details. RMAN interacts with the control file to obtain information necessary for backup and recovery operations.

 

RMAN Repository: RMAN maintains a repository that stores metadata and configuration information related to backup and recovery operations. The repository can be stored in either the control file, a recovery catalog (a separate database schema created specifically for RMAN metadata), or both.

 

Channel: RMAN channels are mechanisms for communicating between the RMAN client and the database server during backup and recovery operations. Channels can be allocated for different tasks, such as reading data files, writing backup sets, or performing parallel operations.

 

Media Management Software: RMAN relies on media management software to perform backup and restore operations to physical storage devices, such as tape drives, disk arrays, or cloud storage. Media management software provides the interface between RMAN and the storage devices, handling tasks such as data transfer, compression, and encryption.

 

Data Files: Data files are the primary storage components of an Oracle database, containing user data, indexes, and other database objects. RMAN interacts with data files to back them up, restore them, and create copies during cloning or recovery operations.

 

Redo Logs: Redo logs are files that record changes made to the database, enabling recovery of transactions in the event of a failure. RMAN uses redo logs during recovery operations to apply changes and bring the database to a consistent state.

 

Backup Sets and Image Copies: RMAN can create backups in the form of backup sets (collections of data blocks) or image copies (exact duplicates of data files). These backups are stored in the configured backup destination, such as disk or tape.

 

Recovery Catalog (Optional): In addition to the control file repository, RMAN can use a recovery catalog database to store metadata and configuration information. The recovery catalog provides additional flexibility and management capabilities, especially in environments with multiple databases.
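To tie these components together, here is a minimal sketch of a clone created with the RMAN DUPLICATE command. It assumes a source database prod, an auxiliary (clone) instance clonedb already started in NOMOUNT mode, and Oracle Net connectivity to both; the names are illustrative:

RMAN> CONNECT TARGET sys@prod
RMAN> CONNECT AUXILIARY sys@clonedb
RMAN> DUPLICATE TARGET DATABASE TO clonedb FROM ACTIVE DATABASE NOFILENAMECHECK;

FROM ACTIVE DATABASE copies the live data files over the network, so no pre-existing backup is required; omitting that clause makes RMAN duplicate from existing backups instead.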

 

 

Data Pump Export/Import: Oracle Data Pump is another utility that allows you to export and import database objects and data. You can use Data Pump to export the schema(s) or the entire database from the source environment and then import it into the target environment. This method is suitable for smaller databases or when you need to clone specific schemas.


Data Pump Export (expdp):

Export Process:

1. When you run expdp, it connects to the source database and reads metadata about the objects to be exported.

2. It then creates one or more dump files containing the exported data and metadata.

3. These dump files are stored in the specified directory.

expdp username/password@database_name schemas=schema_name directory=dir_name dumpfile=export.dmp


Data Pump Import (impdp):

Importing Data:

 

Data Pump Import (impdp) is used to import data and metadata into Oracle databases.

It reads the dump files created by expdp and loads the data and metadata into the target database.

Similar to expdp, you can import entire databases, specific schemas, tablespaces, or individual tables.

Import Process:

 

When you run impdp, it connects to the target database and reads the dump files created by expdp.

It then loads the data and metadata into the target database based on the specified parameters.

You can control various aspects of the import process such as parallelism, data transformation, error handling, etc.

 

impdp username/password@database_name directory=dir_name dumpfile=export.dmp schemas=schema_name
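When cloning into a different schema or storage layout, impdp's remap parameters are useful. A hedged example (the hr and hr_clone schema names and the tablespace names are illustrative):

impdp username/password@database_name directory=dir_name dumpfile=export.dmp remap_schema=hr:hr_clone remap_tablespace=users:users_clone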

 

Cold Backup: This method involves shutting down the source database, copying its data files to the target location, and then starting up the database in the target environment. While this approach is straightforward, it requires downtime for the source database during the cloning process.

 

Oracle Database Cold Backup:

Oracle cold database backups are rarely used these days. DBAs seldom take them, but it is sometimes important to take one before decommissioning a database.

• Key points

• Take Cold Backup

 

Key Points

• Backup is a copy of the original data that can be used to recover the database

• If the data is reproducible and no backup exists, the data can still be recovered, but doing so is a tedious and time-consuming task

• A backup taken after shutting down the database is called a cold backup; because no transactions are in flight, the backup is consistent

• In practice, cold backups are performed very rarely

 

Take Cold Backup

SQL> select name from v$datafile;

SQL> select member from v$logfile;

SQL> select name from v$controlfile;

SQL> shutdown immediate

 

[oracle@server1 ~]$ mkdir /u03/coldbkp

[oracle@server1 ~]$ cp /datafiles/prod/*.dbf /u03/coldbkp

[oracle@server1 ~]$ cp /datafiles/prod/*.log /u03/coldbkp

[oracle@server1 ~]$ cp /datafiles/prod/*.ctl /u03/coldbkp

[oracle@server1 ~]$ cp $ORACLE_HOME/dbs/*.ora /u03/coldbkp

[oracle@server1 ~]$ sqlplus "/ as sysdba"

 

SQL> startup

SQL> alter database backup controlfile to trace;

Archived redo logs do not need to be copied with a cold backup.

 

Oracle GoldenGate: Oracle GoldenGate is a data replication and integration tool that enables real-time data movement between heterogeneous systems. It can be used for database cloning by continuously capturing changes from the source database and applying them to the target database, keeping the two databases in sync.

 

Real-time Data Integration: GoldenGate captures changes made to data in source systems and delivers them in real-time to target systems, ensuring that the data is synchronized across multiple databases and platforms.

 

High Availability and Disaster Recovery: GoldenGate provides solutions for high availability and disaster recovery by replicating data in real-time from primary systems to standby systems, allowing for rapid failover and minimal data loss in case of system failures.

 

Data Warehousing and Reporting: GoldenGate facilitates data replication and consolidation for data warehousing and reporting purposes, enabling organizations to analyze data from multiple sources in real-time.

 

Data Migration and Upgrades: GoldenGate can be used to migrate data between different database platforms or versions with minimal downtime, allowing organizations to upgrade their systems or move data to new environments seamlessly.

 

Business Intelligence and Analytics: GoldenGate enables real-time data feeds for business intelligence and analytics applications, providing up-to-date information for decision-making processes.

 

Here are some key features and components of Oracle GoldenGate:

 

Extract Process: The Extract process captures data changes (inserts, updates, deletes) from the source database's transaction logs (redo logs or archive logs) in real-time.

 

Data Pump Process: The Data Pump process collects captured data from the Extract process and writes it to trail files.

 

Replicat Process: The Replicat process reads data from trail files and applies it to the target database, ensuring that changes are replicated accurately and efficiently.

 

Manager Process: The Manager process is responsible for managing GoldenGate processes, monitoring their status, and handling administrative tasks.

 

Trail Files: Trail files are used to store captured data changes in a platform-independent format, allowing for easy transfer and replication of data between different systems.

 

Checkpointing: GoldenGate uses checkpoints to track the progress of data replication, ensuring that transactions are replicated in the correct order and consistency is maintained between source and target systems.
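As an illustration of how these components fit together, here is a minimal sketch of Extract and Replicat parameter files; the ogg user, password, hr schema, and trail path are assumptions:

-- Extract parameter file (ext1.prm)
EXTRACT ext1
USERID ogg, PASSWORD oggpwd
EXTTRAIL ./dirdat/lt
TABLE hr.*;

-- Replicat parameter file (rep1.prm)
REPLICAT rep1
USERID ogg, PASSWORD oggpwd
MAP hr.*, TARGET hr.*;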

 

Oracle Database Cloning Tools: Oracle offers additional tools such as Oracle Database Cloning Assistant for Standalone Database, Oracle Enterprise Manager (EM) Clone Database, and Oracle Cloud Control Database Clone feature. These tools provide user-friendly interfaces and automation capabilities for cloning databases.

Oracle provides several tools and methods for cloning databases, each suited for different scenarios and requirements. Here are some commonly used Oracle database cloning tools:

 

RMAN (Recovery Manager):

 

RMAN can be used for creating clones of databases using the DUPLICATE command.

It allows for point-in-time cloning, enabling you to create a clone of the database at a specific moment.

RMAN cloning can be disk-based or tape-based, depending on the backup destination.

RMAN provides flexibility in terms of customization and automation during the cloning process.
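For the point-in-time cloning mentioned above, a hedged example of a backup-based duplication (assuming target and auxiliary connections are already established and yesterday is the desired point in time):

RMAN> DUPLICATE TARGET DATABASE TO clonedb UNTIL TIME 'SYSDATE-1';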


Oracle Enterprise Manager (OEM):

 

Oracle Enterprise Manager provides a graphical user interface for managing Oracle databases, including cloning operations.

OEM allows for easy and intuitive cloning of databases with minimal manual intervention.

It provides features for scheduling and monitoring cloning jobs, as well as managing post-cloning tasks.

Data Pump Export/Import:

 

Oracle Data Pump is a utility for exporting and importing database objects and data.

It can be used for cloning by exporting data from the source database and importing it into a new database instance.

Data Pump export/import is suitable for smaller databases or when specific customization is required during the cloning process.

Manual Cloning (Cold Cloning):

 

Manual cloning involves manually copying database files (data files, control files, redo logs, etc.) from the source database to the target location.

This method is typically used for non-production environments or when other cloning methods are not feasible.

Manual cloning requires downtime for the source database during the copy process and may not be suitable for large databases.

Snap Clone:

 

Oracle Snap Clone is a feature available with Oracle Enterprise Manager and Oracle Database 12c and later versions.

It enables rapid provisioning of database clones using storage snapshots.

Snap Clone leverages storage technologies to create space-efficient copies of databases, minimizing storage requirements and improving cloning speed.

 

Snapshot Cloning: Some storage solutions offer snapshot-based cloning, where a point-in-time snapshot of the source database's storage is taken, and clones are created from that snapshot. This method can be very fast and efficient but requires compatible storage infrastructure.

When choosing a cloning method, consider factors such as database size, downtime requirements, data consistency, and available resources. It's also essential to carefully plan and test the cloning process to ensure the integrity of the cloned database and minimize any potential disruptions to production environments.

 

Mar 14, 2024

Correlated Subquery



Introduction-

 

A correlated subquery is a SQL query in which the values of the inner query (subquery) depend on the outer query. Essentially, the subquery is executed repeatedly, once for each row processed by the outer query. This makes correlated subqueries very useful for complex data retrieval scenarios, but they can also be more performance-intensive than non-correlated subqueries due to repeated execution.

 

Here's a step-by-step explanation to understand correlated subqueries in a structured way:


1. Basic Structure-

A correlated subquery typically looks something like this:

 

SELECT column_name(s)

FROM table_name outer_table

WHERE column_name OPERATOR

    (SELECT column_name

     FROM table_name inner_table

     WHERE outer_table.column = inner_table.column);

 

2. Key Components


Outer Query-

This is the main query. It retrieves rows from the outer table, and for each of those rows the subquery is evaluated.

 

Inner Query (Subquery)-

This query is nested inside the outer query. Unlike a regular subquery, it references columns from the outer query, creating a dependency.

 

Correlation-

The correlation happens through the condition that links the inner and outer queries (e.g., outer_table.column = inner_table.column). This condition is what makes the subquery "correlated" because it must reference a column from the outer query for each row it processes.

 

3. Execution Flow-

1. The outer query starts executing.

2. For each row in the outer query, the inner (correlated) subquery executes.

3. The subquery uses a value from the current row of the outer query to perform its operation.

4. The result of the subquery is used to complete the operation of the outer query for the current row.

5. Steps 2-4 repeat for each row processed by the outer query.

 

4. Use Cases


Correlated subqueries are useful for-


Row-by-row operations-

When you need to perform calculations or comparisons on each row individually.

 

Existence checks-

To check if some related data exists in another table for each row (see the EXISTS sketch after this list).

 

Aggregate calculations-

Such as finding the minimum, maximum, average, etc., of grouped data specific to each row in the outer query.
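As a sketch of the existence-check use case, assuming hypothetical departments and employees tables related by department_id:

SELECT d.department_name
FROM departments d
WHERE EXISTS (
    SELECT 1
    FROM employees e
    WHERE e.department_id = d.department_id);

The subquery references d.department_id from the outer query, so it is evaluated once per department row.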

 

 

5. Performance Considerations-

Since the subquery may be executed once for each row in the outer query, correlated subqueries can be slow, especially on large datasets. It's often recommended to look for alternatives, such as JOINs or temporary tables, if performance becomes an issue (see the JOIN rewrite after the example below).

 

Example

Consider an employees table, where you want to find the names of employees who earn more than the average salary in their respective departments-

 

SELECT e.name, e.salary

FROM employees e

WHERE e.salary > (

    SELECT AVG(salary)

    FROM employees

    WHERE department_id = e.department_id

);

 

In this example, for each employee in the outer query, the subquery calculates the average salary of their department and checks if their salary is above this average. The correlation between the outer and inner query is established through department_id.
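If this query becomes a bottleneck, the same result can usually be obtained with a join against pre-aggregated data, which computes each department's average once instead of once per row; a sketch against the same assumed employees table:

SELECT e.name, e.salary
FROM employees e
JOIN (SELECT department_id, AVG(salary) AS avg_sal
      FROM employees
      GROUP BY department_id) d
  ON e.department_id = d.department_id
WHERE e.salary > d.avg_sal;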

Mar 11, 2024

Database corruption

Database corruption in Oracle can occur due to various reasons, such as hardware failures, software bugs, or issues during database operations. It's crucial to identify and address corruption promptly to ensure data integrity. Here are some steps you can take to deal with database corruption in Oracle:

 

Physical corruption: This generally happens due to media corruption. The block has an invalid checksum, or it may be zeroed out. Physical corruption is usually the result of infrastructure issues such as media, network, or memory faults.

RMAN block recovery can be used to repair physical corruption.

We see the error ORA-01578: ORACLE data block corrupted in the alert log when Oracle detects physical corruption. Oracle detects this corruption by checking the block header and block footer.

Logical corruption: This happens when Oracle finds the block header and footer intact but detects an internal inconsistency within the block. It generally happens due to a bug in the Oracle software. When Oracle detects it, it throws an error in the alert log.

ORA-08103 is a logical corruption error for a particular data block.
ORA-00600 [2662] – This error is related to block corruption and occurs when a block's SCN is higher than the database SCN.

How to check block corruption in Oracle database

Block corruption in an Oracle database can be found using the below methods:

  • RMAN
  • Checking the alert log for possible occurrences
  • dbverify
  • Export
  • ANALYZE command

Using RMAN

Run the below command:

RMAN> backup validate check logical database;

The “CHECK LOGICAL” option is used to identify both Physical and Logical Block Corruptions.

We can use the below command in case we want to check selectively

BACKUP VALIDATE DATAFILE 1;
BACKUP VALIDATE check logical DATAFILE 1;
BACKUP VALIDATE TABLESPACE TOOLS;
BACKUP VALIDATE check logical TABLESPACE TOOLS;
BACKUP VALIDATE BACKUPSET 1;
BACKUP VALIDATE CURRENT CONTROLFILE;
For Multitenant:
VALIDATE CHECK LOGICAL PLUGGABLE DATABASE test1;

Query the below view to identify the corrupted blocks detected by RMAN.

SQL> select * from v$database_block_corruption;

For each row, find the segment impacted:
SQL> SELECT TABLESPACE_NAME, SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE 
FROM DBA_EXTENTS 
WHERE FILE_ID=&FILE_ID AND 
&BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

Please note that after a corrupt block is repaired, the row identifying the block is deleted from the view.

Checking alert log for possible occurrence

Check the alert log file for corrupted blocks and the affected data files.

For example:

ORA-01578: ORACLE data block corrupted (file # 11, block # 45)
ORA-01110: data file 11: '/xyz/db01.dbf'

dbverify

You can also use the dbverify utility to identify Physical and Logical Intra Block Corruptions.

dbv file=datafile_name blocksize=datafile_block_size

You can use the below command to generate the dbv command for all the datafiles

sqlplus /nolog
connect / as sysdba
set feedback off
set head off
set echo off
set linesize 200
set pagesize 2500
spool /tmp/dbvlogs/dbvchk.sh
select 'dbv file=' || name || ' blocksize='|| block_size || ' logfile=' ||
substr(name, instr(name, '/', -1, 1) +1) ||
'.' || file# || '.log' from v$datafile
/
spool off

In the case of RAW device

select 'dbv file='||name||' blocksize='||block_size||' end='||
(bytes/block_size)||' logfile='||
substr (name, instr (name, '/', -1, 1) + 1)||'.'||file#||'.log'
from  v$datafile;

In the case of ASM storage

select 'dbv file=' || name || ' blocksize='|| block_size || ' USERID=sys/&SYS_PASSWORD logfile=' ||
substr(name, instr(name, '/', -1, 1) +1) ||
'.' || file# || '.log' from v$datafile
/

Export

Exporting a table or schema will also reveal corruption in that table or schema.

The big limitation with export is that it fails immediately after encountering the first corruption, so the complete picture will be missed.
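For example, a full Data Pump export forces every table block to be read (directory and file names are illustrative); the run aborts at the first corrupt block it touches:

expdp system/password full=y directory=dump_dir dumpfile=full.dmp logfile=full.log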


ANALYZE command

ANALYZE is often cited as the utility that performs the most thorough checks.

It is also the easiest way to check for corruption associated with an index.

To check the corruption with a table do the following:

ANALYZE TABLE <OWNER.TABLE_NAME> VALIDATE STRUCTURE;

To check the corruption with an index do the following:

ANALYZE INDEX <OWNER.INDEX_NAME> VALIDATE STRUCTURE;

To check the corruption with the table and its index(s) to perform the cross-reference checkings do the following:

ANALYZE TABLE <OWNER.TABLE_NAME> VALIDATE STRUCTURE CASCADE;

For partitioned tables, we need to use the ANALYZE command with the INTO INVALID_ROWS option, similar to the following:

ANALYZE TABLE <OWNER.TABLE_NAME> VALIDATE STRUCTURE CASCADE INTO INVALID_ROWS;
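If ANALYZE detects problems, the offending row IDs are written to the INVALID_ROWS table, created beforehand by the $ORACLE_HOME/rdbms/admin/utlvalid.sql script. The findings can then be queried:

SQL> SELECT * FROM invalid_rows;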

How to Correct the Block Corruption using RMAN Blockrecover

1. Recovering Data blocks By Using All Available Backups

Run the BLOCKRECOVER command at the RMAN prompt, specifying the file and block numbers for the corrupted blocks:

RMAN> BLOCKRECOVER DATAFILE 11 BLOCK 15;

Recover multiple blocks in a single command:

RMAN> BLOCKRECOVER DATAFILE 11 BLOCK 15 DATAFILE 2 BLOCK 10;

2. Recovering Data blocks Using Selected Backups

Run the BLOCKRECOVER command at the RMAN prompt, specifying the data file and block numbers for the corrupted blocks and limiting the backup candidates by means of the available options. For example, specify what type of backup should be used to restore the blocks.

# restore from backupset


RMAN> BLOCKRECOVER DATAFILE 11 BLOCK 15 FROM BACKUPSET;

# Restore from datafile image copy

RMAN> BLOCKRECOVER DATAFILE 11 BLOCK 15 FROM DATAFILECOPY;

# Restore from backup set with tag “Sunday”


RMAN> BLOCKRECOVER DATAFILE 11 BLOCK 15 FROM TAG = Sunday;

3. Recovering blocks listed in the V$DATABASE_BLOCK_CORRUPTION view

Run the below command to recover all blocks marked corrupt in V$DATABASE_BLOCK_CORRUPTION

RMAN> BLOCKRECOVER CORRUPTION LIST;

Restores blocks from backup sets created more than 7 days ago

RMAN> BLOCKRECOVER CORRUPTION LIST FROM BACKUPSET RESTORE UNTIL TIME 'SYSDATE-7';

How to Correct the Block Corruption Using Backup

  • Using data file recovery from backup or standby (if you have one)
  • Complete database recovery
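For the data file option, a minimal RMAN sketch (data file 11 is illustrative): take the file offline, restore it from backup, recover it, and bring it back online.

RMAN> SQL 'ALTER DATABASE DATAFILE 11 OFFLINE';
RMAN> RESTORE DATAFILE 11;
RMAN> RECOVER DATAFILE 11;
RMAN> SQL 'ALTER DATABASE DATAFILE 11 ONLINE';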