Oracle Import and Export
For the examples to work we must first unlock the SCOTT account and create a directory object it can access. The directory object is only a pointer to a physical directory; creating it does not create the physical directory on the file system of the database server.
CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;
GRANT CREATE ANY DIRECTORY TO scott;
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
Existing directories can be queried using the ALL_DIRECTORIES view.
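For example, a quick sanity check that the new directory object exists and points at the intended path (a sketch, run as any user that can see the directory):

```sql
-- Directory objects visible to the current user; TEST_DIR should be listed.
SELECT directory_name, directory_path
FROM   all_directories
WHERE  directory_name = 'TEST_DIR';
```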
Table Exports/Imports
The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax.
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
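A sketch of a re-import into the existing tables, reusing the dump file from the example above:

```shell
# Append rows from the dump file to the existing EMP and DEPT tables
# instead of skipping them (the default TABLE_EXISTS_ACTION is SKIP).
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp \
      logfile=impdpEMP_DEPT.log table_exists_action=APPEND
```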
Remap Schema
Example
impdp harshad/harshad@orcl tables=SCOTT.EMP,SCOTT.DEPT directory=TEST_DIR dumpfile=SCOTT_TABLE.DMP logfile=scott_table.log remap_schema=SCOTT:harshad
Remap_table
Using remap_table to Rename Table Names
One of the Oracle 11g new features is the remap_table parameter, which is used to rename a table during the import job. The syntax of this parameter is as follows:
1. remap_table=schema.table_name:new_table
2. remap_table=schema.table_name:schema.new_table
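As a sketch of the first form, reusing the dump file from the remap_schema example above (the new table name EMP_COPY is made up for illustration):

```shell
# Import SCOTT.EMP from the dump file, but create it as EMP_COPY.
impdp scott/tiger@orcl tables=SCOTT.EMP directory=TEST_DIR dumpfile=SCOTT_TABLE.DMP \
      logfile=impdp_remap_table.log remap_table=SCOTT.EMP:EMP_COPY
```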
Schema Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter, which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
EXPDP with Multiple Dump Files on Multiple Directories
Sometimes we have to take a logical backup of our database using the EXPDP utility. Consider a database size of 400GB. We cannot write the backup of a 400GB database to a single file system when no single file system has that much free space, so we have to spread it across multiple file systems.
In such a situation we use the multiple directories feature of the EXPDP utility. Using this feature we can create dump files in multiple directories and distribute our logical backup across them.
Example:
Consider two directories named DATAPUMP and TESTING in the Oracle database:
SQL > create or replace directory DATAPUMP as '/amit/datapump';
Directory created.
SQL > create or replace directory TESTING as '/amit/testing';
Directory created.
SQL > select * from dba_directories;

OWNER      DIRECTORY_NAME   DIRECTORY_PATH
---------- ---------------- ----------------------------------------
SYS        TESTING          /amit/testing
SYS        DATAPUMP         /amit/datapump
Now take a logical backup using multiple directories.
$ expdp amit/amit dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp compression=all parallel=2
We can check that multiple dump files have been created.
bash-3.2$ pwd
/amit/datapump
bash-3.2$ ls -ltr|tail
-rw-r--r-- 1 oracle oinstall     89785 Aug 30 19:45 import.log
-rw-r----- 1 oracle oinstall   7680000 Aug 31 02:34 expdp_datapump_01.dmp
bash-3.2$ cd -
/amit/testing
bash-3.2$ ls -ltr|tail
total 731952
-rw-r--r-- 1 oracle oinstall    185540 May 31 17:40 export.log
-rw-r----- 1 oracle oinstall 229363712 Aug 31 02:34 expdp_testing_01.dmp
The PARALLEL parameter must be used when writing to multiple directories; otherwise expdp will not fail, but it will write only to the first directory. The number of directories should equal the PARALLEL value, as only then will all directories be used for writing.
%U is used to create multiple files in the same directory. For example, expdp_datapump_%U.dmp will create expdp_datapump_01.dmp, expdp_datapump_02.dmp, and so on. If we want expdp to write to multiple %U files in parallel, we have to use the PARALLEL parameter.
We can also set the FILESIZE parameter while using EXPDP; no dump file will then grow larger than the allocated size.
$ expdp amit/amit dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp compression=all parallel=2 filesize=2g &
The above example will create dump files such as expdp_datapump_01.dmp, expdp_testing_01.dmp, expdp_datapump_02.dmp and expdp_testing_02.dmp, each with a maximum size of 2GB.
Database Exports/Imports
The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
INCLUDE and EXCLUDE
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import. The two parameters are mutually exclusive, so use the parameter that requires the least entries to give you the result you require. The basic syntax for both parameters is the same.
INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]
The following code shows how they can be used as command line parameters.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
If the parameter is used from the command line, depending on your OS, the special characters in the clause may need to be escaped, as follows. Because of this, it is easier to use a parameter file.
include=TABLE:\"IN (\'EMP\', \'DEPT\')\"
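For instance, a small parameter file keeps the clause unescaped (the file name expdp_scott.par is an assumption):

```shell
# expdp_scott.par
schemas=SCOTT
include=TABLE:"IN ('EMP', 'DEPT')"
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
```

It is then referenced with: expdp scott/tiger@db10g parfile=expdp_scott.par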
A single import/export can include multiple references to the parameters, so to export tables, views and some packages we could use either of the following approaches.
INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"
or
INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"
Multiple objects can be targeted in one statement using the LIKE and IN operators.
EXCLUDE=SCHEMA:"LIKE 'SYS%'"
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
The valid object type paths that can be included or excluded can be displayed using the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS views.
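For example, to look up valid paths for a schema-mode job (a sketch; the LIKE filter is only there to narrow the output):

```sql
-- Valid object paths containing 'GRANT' for schema-mode export/import jobs.
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%GRANT%';
```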
CONTENT and QUERY
The CONTENT parameter allows you to alter the contents of the export. The following command uses the METADATA_ONLY parameter value to export the contents of the schema without the data.
expdp system/password@db10g schemas=SCOTT directory=TEST_DIR dumpfile=scott_meta.dmp logfile=expdp.log content=METADATA_ONLY
To capture the data without the metadata use the DATA_ONLY parameter value.
expdp system/password@db10g schemas=SCOTT directory=TEST_DIR dumpfile=scott_data.dmp logfile=expdp.log content=DATA_ONLY
The QUERY parameter allows you to alter the rows exported from one or more tables. Each QUERY clause applies to a single table, so a separate clause is needed per table. The following example does a full database export, but doesn't include the data for the EMP and DEPT tables.
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=full.dmp logfile=expdp_full.log query='SCOTT.EMP:"WHERE ROWNUM = 0",SCOTT.DEPT:"WHERE ROWNUM = 0"'
Network Exports/Imports (NETWORK_LINK)
The NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import. The following database link will be used to demonstrate its use.
CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;
CONN test/test
CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';
In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.
expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log
For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is that the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.
impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpSCOTT.log remap_schema=SCOTT:TEST
Flashback Exports
The exp utility used the CONSISTENT=Y parameter to indicate the export should be consistent to a point in time. By default the expdp utility exports are only consistent on a per table basis. If you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.
The FLASHBACK_TIME parameter value is converted to the approximate SCN for the specified time.
expdp ..... flashback_time=systimestamp
# In parameter file.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"
# Escaped on command line.
expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"
Not surprisingly, you can make exports consistent to an earlier
point in time by specifying an earlier time or SCN, provided you have enough
UNDO space to keep a read consistent view of the data during the export
operation.
If you prefer to use the SCN, you can retrieve the current SCN
using one of the following queries.
SELECT current_scn FROM v$database;
SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
That SCN is then used with the FLASHBACK_SCN parameter.
expdp ..... flashback_scn=5474280
The following queries may prove useful for converting between
timestamps and SCNs.
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
SELECT SCN_TO_TIMESTAMP(5474751) FROM dual;
In 11.2, the introduction of legacy mode means that you can use the CONSISTENT=Y parameter with the expdp utility if you wish.
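In legacy mode the old parameter is accepted and translated; a sketch:

```shell
# 11.2 onwards: expdp accepts the legacy parameter and internally converts
# CONSISTENT=Y into an equivalent FLASHBACK_TIME setting (with a warning).
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp \
      logfile=expdpSCOTT.log consistent=y
```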
Miscellaneous Information
Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated Data Pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job.
Export> status
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\TEMP\DB10G.DMP
bytes written: 4,096
Worker 1 Status:
State: EXECUTING
Object Schema: SYSMAN
Object Name: MGMT_CONTAINER_CRED_ARRAY
Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Completed Objects: 261
Total Objects: 261
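The JOB_NAME parameter described above also makes it easy to re-attach to a running job from a new session (the job name FULL_EXP is an assumption):

```shell
# Start a named export job, then re-attach to it later to check its status.
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp job_name=FULL_EXP
expdp system/password@db10g attach=FULL_EXP
```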
Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log
The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.
system@db10g> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME           OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS
---------- ------------------ --------- -------- --------- ------ -----------------
SYSTEM     SYS_EXPORT_FULL_01 EXPORT    FULL     EXECUTING      1                 1