Oracle Schema Refresh
Schema Refresh copies or synchronizes one or more schemas from a source Oracle database to a target database. Depending on the requirement, it can fully replace tables and objects, append data to existing tables, skip existing tables, remap schemas or tablespaces, and more. Oracle Data Pump (expdp/impdp) is the tool used throughout this guide.
Steps on Source Database
1. Prechecks & Backup on Target (Plan Ahead)
Check if target schema exists:
SELECT username FROM dba_users WHERE username = 'TARGET_SCHEMA';
Check row counts for important tables in the target schema:
SELECT COUNT(*) FROM target_schema.some_table;
Back up the target schema before the refresh (important for revert):
nohup expdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=target_schema_bkp_%U.dmp LOGFILE=target_schema_bkp.log SCHEMAS=target_schema COMPRESSION=ALL PARALLEL=4 REUSE_DUMPFILES=Y &
2. Export Schema(s) on Source DB
Export a single schema:
nohup expdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=source_schema_exp.log SCHEMAS=source_schema COMPRESSION=ALL PARALLEL=4 REUSE_DUMPFILES=Y &
Export multiple schemas:
nohup expdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=multi_schemas_%U.dmp LOGFILE=multi_schemas_exp.log SCHEMAS=schema1,schema2 COMPRESSION=ALL PARALLEL=4 REUSE_DUMPFILES=Y &
3. Transfer Dump Files to Target (If Needed)
If the source and target servers do not share a filesystem, transfer the files using SCP:
scp /path/to/dump/source_schema_*.dmp oracle@target_server:/path/to/dump/
scp /path/to/dump/source_schema_exp.log oracle@target_server:/path/to/dump/
More info: SCP File Transfer
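After copying, it is worth confirming the dump files arrived intact before starting the import. A minimal sketch using sha256sum; the paths and file names are illustrative, and the scratch directory only simulates the source/target locations:

```shell
# Simulate the flow in a scratch directory (in reality, use the DIRECTORY object path)
dir=$(mktemp -d)
cd "$dir"

# Stand-in for a real dump file
printf 'dummy dump content' > source_schema_01.dmp

# On the source: record checksums before transfer
sha256sum source_schema_*.dmp > dumps.sha256

# On the target, after scp'ing both the dumps and dumps.sha256: verify the copies
sha256sum -c dumps.sha256 && echo "TRANSFER OK"
```

Any file reported as FAILED by `sha256sum -c` was corrupted in transit and should be copied again before importing.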
Steps on Target Database
4. Create Directory Object (If Not Exists)
CREATE DIRECTORY my_dump_dir AS '/path/to/dump';
GRANT READ, WRITE ON DIRECTORY my_dump_dir TO target_user;
More info: Create Directory
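Note that CREATE DIRECTORY does not verify the OS path, so it helps to confirm the path behind the directory object exists and is writable by the oracle OS user. A small sketch; the function name and the paths used are illustrative:

```shell
# check_dump_path: verify the OS path behind the DIRECTORY object exists and is writable
check_dump_path() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "OK: $1 exists and is writable"
  else
    echo "ERROR: $1 missing or not writable" >&2
    return 1
  fi
}

# Illustrative call; use the path from your CREATE DIRECTORY statement
check_dump_path /tmp
```

Run this as the oracle OS user on the target server before kicking off the import.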
5. Lock Schema and Kill Sessions (If Schema Drop Is Planned)
If you’re performing a full refresh and plan to drop the target schema before import, follow these steps:
Step 1: Lock the Schema
Prevent new connections to the schema:
ALTER USER target_schema ACCOUNT LOCK;
✅ This prevents any new sessions from being created during the drop/import operation.
Step 2: Kill Active Sessions
If active sessions exist that prevent the drop, generate kill commands:
SELECT
'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;' AS kill_command
FROM v$session
WHERE username = 'TARGET_SCHEMA';
How to use:
- Run the above query.
- It will return output like:
ALTER SYSTEM KILL SESSION '123,456' IMMEDIATE;
ALTER SYSTEM KILL SESSION '124,457' IMMEDIATE;
- Copy and execute these statements in SQL*Plus or SQL Developer.
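If you prefer to script this step, the same statements can be generated outside the database from a spooled sid,serial# listing. A sketch using awk; the session numbers in sessions.txt are made up:

```shell
# sessions.txt: one "sid,serial#" pair per line, e.g. spooled from v$session
cat > sessions.txt <<'EOF'
123,456
124,457
EOF

# Turn each pair into an ALTER SYSTEM KILL SESSION statement
awk -F',' '{ printf "ALTER SYSTEM KILL SESSION '\''%s,%s'\'' IMMEDIATE;\n", $1, $2 }' sessions.txt > kill_sessions.sql

cat kill_sessions.sql
# kill_sessions.sql can then be run in SQL*Plus as SYSDBA
```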
6. Drop Schema (Only If Doing Full Refresh)
After locking and clearing sessions, drop the schema:
DROP USER target_schema CASCADE;
⚠️ Only do this if you’re importing the entire schema fresh and don’t need any existing objects.
7. Import Schema — Different Types of Schema Refresh
| Type | When to Use | Import Command Example |
|---|---|---|
| Full Refresh (Drop & Import) | Drop existing schema & import fresh | Kill sessions (step 5) and drop the schema (step 6) first, then: nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema TABLE_EXISTS_ACTION=REPLACE PARALLEL=4 & |
| Full Refresh (Replace Tables) | Replace all tables without drop | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema TABLE_EXISTS_ACTION=REPLACE PARALLEL=4 & |
| Append Data | Add new rows to existing tables | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema TABLE_EXISTS_ACTION=APPEND PARALLEL=4 & |
| Skip Existing Tables | Import only tables that don’t exist | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema TABLE_EXISTS_ACTION=SKIP PARALLEL=4 & |
| Truncate & Load | Truncate tables before import (keep structure) | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema TABLE_EXISTS_ACTION=TRUNCATE PARALLEL=4 & |
| Remap Schema Names | When source and target schema names differ | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log REMAP_SCHEMA=source_schema:target_schema TABLE_EXISTS_ACTION=REPLACE PARALLEL=4 & |
| Remap Tablespaces | Source tablespace missing on target, remap TS | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=source_schema_%U.dmp LOGFILE=target_schema_imp.log SCHEMAS=target_schema REMAP_TABLESPACE=old_ts:new_ts TABLE_EXISTS_ACTION=REPLACE PARALLEL=4 & |
| Multiple Schemas | Import multiple schemas at once | nohup impdp '"/ as sysdba"' DIRECTORY=my_dump_dir DUMPFILE=multi_schemas_%U.dmp LOGFILE=multi_schemas_imp.log SCHEMAS=target_schema1,target_schema2 TABLE_EXISTS_ACTION=REPLACE PARALLEL=4 & |
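Whichever variant you run, scan the import log for ORA- errors before declaring the refresh done. A minimal sketch against a simulated log file (the log lines are fabricated for illustration; a real run reads the LOGFILE from the DIRECTORY path):

```shell
# Simulated import log for demonstration purposes only
cat > target_schema_imp.log <<'EOF'
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-31684: Object type USER:"TARGET_SCHEMA" already exists
Job "SYS"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s)
EOF

# List any ORA- errors; decide which are acceptable (ORA-31684 is often benign)
grep -E 'ORA-[0-9]+' target_schema_imp.log || echo "No ORA- errors found"

# Confirm the job reached a completion line
grep -E 'Job .* (successfully completed|completed with)' target_schema_imp.log
```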
Important Parameters Summary (Useful for Schema Refresh)
nohup expdp|impdp '"/ as sysdba"' DIRECTORY=directory_name
DUMPFILE=your_table_%U.dmp [,more_files]
LOGFILE=logfile_name.log
FULL=Y|N
SCHEMAS=schema_name [,schema_name]
TABLES=table_name [,table_name] or owner.tablename
QUERY=table_name:"WHERE condition"
TABLE_EXISTS_ACTION=REPLACE|APPEND|TRUNCATE|SKIP
PARALLEL=number_of_threads
REUSE_DUMPFILES=Y|N
COMPRESSION=ALL|DATA_ONLY|METADATA_ONLY|NONE
CONTENT=ALL|DATA_ONLY|METADATA_ONLY
NETWORK_LINK=network_link_name
FLASHBACK_TIME="timestamp"
ENCRYPTION=ALL|DATA_ONLY|METADATA_ONLY|ENCRYPTED_COLUMNS_ONLY|NONE
ENCRYPTION_PASSWORD=password
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y (impdp; DISABLE_ARCHIVE_LOGGING is a TRANSFORM option, not a standalone parameter)
ROWS=Y|N (legacy exp/imp parameter; in Data Pump use CONTENT instead)
VERSION=version_number
JOB_NAME=job_name
When defining DUMPFILE in expdp or impdp, you can use substitution variables to auto-generate unique and informative file names:
| Variable | Meaning | Example Output |
|---|---|---|
| %U | Two-digit incrementing number (01-99); required when PARALLEL writes multiple files | schema_%U.dmp → schema_01.dmp, schema_02.dmp, … |
| %L | Like %U, but can scale past 99 files (Oracle 12.2 and later) | schema_%L.dmp → schema_01.dmp, …, schema_100.dmp, … |
Notes:
- %U is the most widely used; it guarantees unique dump file names in parallel jobs.
- %L behaves like %U for the first 99 files and then widens the number, so prefer it for very large exports on 12.2+.
- Other substitution variables (such as date-based ones) vary by Oracle release; check the Data Pump documentation for your version before relying on them.
📌 Recommended format for schema refresh:
DUMPFILE=schema_name_%U.dmp
For very large exports (Oracle 12.2+), %L scales past 99 files:
DUMPFILE=schema_name_%L.dmp
Notes
- Always back up the target schema before performing a refresh.
- Use TABLE_EXISTS_ACTION=REPLACE to drop and recreate tables during import. This is useful when doing a full refresh of existing objects.
- If the schema cannot be dropped due to active sessions, use the session kill block (step 5) to terminate them before the drop.
- %U is essential for parallel export/import, allowing the generation of multiple dump files for faster processing.
- Use REMAP_SCHEMA when importing into a schema with a different name than the source.
- Use REMAP_TABLESPACE if the source tablespace does not exist on the target, to remap objects to an available target tablespace.
- Use parallelism (PARALLEL=4 or higher) to significantly speed up both export and import operations.
- Ensure the directory object exists on the target and the target user has READ/WRITE privileges on it.
- Always adjust parameters according to your environment, data volume, and performance requirements.
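The long command lines in this guide can also be collected into a Data Pump parameter file and passed with PARFILE, which avoids shell quoting issues. A sketch of such a file; all names are illustrative and should be adapted to your environment:

```text
# refresh_imp.par -- invoked as: nohup impdp '"/ as sysdba"' PARFILE=refresh_imp.par &
DIRECTORY=my_dump_dir
DUMPFILE=source_schema_%U.dmp
LOGFILE=target_schema_imp.log
SCHEMAS=target_schema
TABLE_EXISTS_ACTION=REPLACE
PARALLEL=4
```

Keeping one parfile per environment also makes refreshes repeatable and easy to review before running.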
