Oracle Data Pump Export & Import
Oracle Data Pump is a fast and flexible tool used by DBAs to export and import database objects. It’s widely used for migrations, backups, and data movement between environments. This post covers the most commonly asked interview questions around Data Pump, from basics to advanced options.
1. What is Oracle Data Pump and how is it different from the old exp/imp utilities?
Oracle Data Pump, introduced in Oracle Database 10g, is a server-side utility for exporting and importing database objects. Unlike the older client-side exp/imp tools, Data Pump runs inside the database server, supports direct path access and parallel processing, and lets you stop, restart, and monitor jobs, making it much faster and more flexible.
2. What are the two main utilities provided by Data Pump?
expdp (Export Data Pump) and impdp (Import Data Pump).
3. What are the common use cases for Data Pump export and import?
Migrating databases, backing up data, copying data between environments, and refreshing test or development databases.
4. How do you perform a full database export and import using Data Pump?
Use expdp with FULL=Y to export the entire database and impdp with FULL=Y to import it.
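A minimal sketch of a full export followed by a full import. The credentials, the directory object DPUMP_DIR, and the file names are hypothetical placeholders for your environment:

```shell
# Full database export (DPUMP_DIR must exist as a directory object)
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=full_db.dmp \
      LOGFILE=full_exp.log FULL=Y

# Full database import from the same dump file
impdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=full_db.dmp \
      LOGFILE=full_imp.log FULL=Y
```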
5. What is the difference between FULL, SCHEMA, TABLE, and TABLESPACE export modes?
- FULL: Export the whole database.
- SCHEMA: Export all objects of specific schemas.
- TABLE: Export specific tables.
- TABLESPACE: Export all objects in specified tablespaces.
6. How do you perform a schema-level export and import?
Use expdp with SCHEMAS=schema_name and impdp with the same parameter to export/import schema objects.
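For example, a schema-level round trip for a hypothetical HR schema (connection details and file names are placeholders):

```shell
# Export only the HR schema
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp \
      LOGFILE=hr_exp.log SCHEMAS=HR

# Import the HR schema into another database
impdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp \
      LOGFILE=hr_imp.log SCHEMAS=HR
```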
7. What is a Data Pump directory object and how is it used?
It’s a database object that points to a physical directory on the server where dump files are stored. You must specify it in Data Pump commands to read/write dump files.
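Creating one looks like this (the name, path, and grantee are hypothetical; run as a user with CREATE ANY DIRECTORY privilege):

```sql
-- Map a logical directory name to a physical path on the database server
CREATE OR REPLACE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';

-- Allow a user to read and write dump files through it
GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;
```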
8. How do you monitor a running Data Pump job?
Use the STATUS parameter with expdp or impdp, or query views like DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS.
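A quick way to check on running jobs from SQL:

```sql
-- Current Data Pump jobs and their states
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;

-- Sessions attached to Data Pump jobs
SELECT owner_name, job_name, session_type
FROM   dba_datapump_sessions;
```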
9. What is the difference between conventional path and direct path in Data Pump?
Direct path bypasses the SQL engine and moves data blocks directly, which is faster. Conventional path uses ordinary SQL statements and is slower, but it supports some table features and object types that direct path doesn't.
10. How do you exclude or include specific objects during export or import?
Use the EXCLUDE or INCLUDE parameters with object types or names.
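Two hypothetical examples. Object-name filters contain quotes that most shells mangle, so the second example uses a parameter file, which is the usual workaround:

```shell
# Skip indexes and statistics during a schema export
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr_noidx.dmp \
      SCHEMAS=HR EXCLUDE=INDEX,STATISTICS

# Export only two named tables, using a parfile to avoid shell quoting issues
cat > hr_inc.par <<'EOF'
DIRECTORY=DPUMP_DIR
DUMPFILE=emp_dept.dmp
SCHEMAS=HR
INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
EOF
expdp system/password PARFILE=hr_inc.par
```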
11. What are the benefits of using parallelism in Data Pump?
Multiple workers run simultaneously, speeding up export/import, especially for large databases.
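A sketch of a parallel export. Each worker needs its own dump file, so the DUMPFILE name uses the %U substitution variable, which expands to a per-file number (names and credentials are hypothetical):

```shell
# Four workers writing to full_01.dmp, full_02.dmp, ...
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=full_%U.dmp \
      LOGFILE=full_exp.log FULL=Y PARALLEL=4
```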
12. What is the use of the REMAP_SCHEMA, REMAP_TABLESPACE, and REMAP_DATAFILE parameters?
They let you change the schema, tablespace, or datafile names during import, useful for moving data between different environments.
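For example, importing a production schema into a test schema and tablespace (all names here are hypothetical):

```shell
# Import HR's objects as HR_TEST, moving them into the TEST_DATA tablespace
impdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp \
      REMAP_SCHEMA=HR:HR_TEST REMAP_TABLESPACE=USERS:TEST_DATA
```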
13. How do you resume a stopped or failed Data Pump job?
Reattach to the stopped job with the ATTACH=job_name parameter on expdp or impdp, then issue START_JOB (or CONTINUE_CLIENT) at the interactive prompt. Job names can be found in DBA_DATAPUMP_JOBS.
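A sketch of resuming a stopped export job (the job name shown follows Oracle's default naming pattern but is a placeholder; look up the real name in DBA_DATAPUMP_JOBS):

```shell
# Reattach to the stopped job...
expdp system/password ATTACH=SYS_EXPORT_FULL_01

# ...then, at the interactive prompt that appears:
# Export> START_JOB
```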
14. What happens if you run an import over existing data?
By default, Data Pump skips tables that already exist. You can control this with the TABLE_EXISTS_ACTION parameter (SKIP, APPEND, TRUNCATE, or REPLACE).
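For example, reloading an existing table by truncating it first (names are hypothetical):

```shell
# Empty HR.EMPLOYEES, then load the rows from the dump into it
impdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp \
      TABLES=HR.EMPLOYEES TABLE_EXISTS_ACTION=TRUNCATE
```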
15. How can you export and import data between different Oracle versions?
Data Pump is upward compatible, so a dump taken from an older release can be imported into a newer one. To move data to an older release, set the VERSION parameter at export time so the dump file is written in a format the target database can read.
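For example, exporting from a newer database so that a 12.2 target can read the dump (names are hypothetical):

```shell
# Write the dump in a 12.2-compatible format
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr_122.dmp \
      SCHEMAS=HR VERSION=12.2
```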
16. What is the impact of a network link (NETWORK_LINK) in Data Pump?
It allows you to import/export data directly over a database link without creating dump files.
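A sketch of a network-mode import, run on the target database. It assumes a database link to the source (hypothetically named SRC_DB) already exists; no dump file is created, though a directory object is still needed for the log file:

```shell
# Pull the HR schema straight from the source database over the link
impdp system/password DIRECTORY=DPUMP_DIR LOGFILE=net_imp.log \
      NETWORK_LINK=SRC_DB SCHEMAS=HR
```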
17. How do you export/import with compression enabled?
Use the COMPRESSION parameter (e.g., COMPRESSION=ALL) to compress metadata and/or data in dump files, saving disk space and often transfer time. Note that compressing table data (ALL or DATA_ONLY) requires the Advanced Compression option.
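For example (hypothetical names, and assuming the Advanced Compression option is licensed):

```shell
# Compress both metadata and table data in the dump file
expdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr_comp.dmp \
      SCHEMAS=HR COMPRESSION=ALL
```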
18. What are dump files and log files in Data Pump?
Dump files contain exported data. Log files record job details, errors, and status.
19. How can you view the contents of a Data Pump export dump without importing?
Use impdp with the SQLFILE parameter to write the DDL contained in the dump to a SQL script, without actually importing anything.
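A sketch of extracting the DDL from a dump (file names are hypothetical):

```shell
# Writes the CREATE statements from hr.dmp into ddl.sql; no data is loaded
impdp system/password DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp \
      SQLFILE=ddl.sql
```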
20. What are best practices for using Data Pump in large databases?
Use parallelism, monitor jobs, compress dump files, use network link for direct transfers, and keep directory objects secure.