Export And Import In Oracle

Export

Introduction

Export and Import are used primarily for the following tasks:

  • data archival
  • upgrading to new releases of Oracle
  • backing up Oracle databases
  • moving data between Oracle databases

Export and Import allow you to accomplish the following tasks:

  • store Oracle data in operating system files independent of any database.
  • store definitions of database objects (such as tables, clusters, and indexes) with or without the data.
  • store inactive or temporary data.
  • back up only tables whose data has changed since the last export, using either an incremental or a cumulative export.
  • restore tables that were accidentally dropped, provided they were exported recently.
  • restore a database by importing from incremental or cumulative exports.
  • selectively back up parts of your database in a way that requires less storage space than a system backup.
  • move data from an older to a newer version of Oracle.
  • move data between Oracle databases.
  • move data between different hardware and operating system environments.
  • move data from one owner to another.
  • move data from one tablespace or schema to another.
  • save space or reduce fragmentation in your database.

The basic concept behind Export is to extract the object definitions and table data from an Oracle database and store them in an Oracle-binary format. Object definitions include the DDL (Data Definition Language) needed to create database objects such as tablespaces, tables, indexes, sequences, users, etc. The resulting Oracle-binary file can be stored on disk or any other auxiliary storage device. Because the export file is binary, it cannot be read by anything other than Oracle's Import utility. Export writes export files using the character set specified for the user session; for example, 7-bit ASCII or IBM Code Page 500 (EBCDIC). You can see the keywords accepted by Export by typing

exp HELP=y

Required Privileges

To use the Export utility, CATEXP.SQL or CATALOG.SQL (which in turn calls CATEXP.SQL) must be run after the database has been created. CATEXP.SQL has to be run only once, when the database is created. If CATEXP.SQL was not called by CATALOG.SQL (because CATALOG.SQL could not find CATEXP.SQL where it expected it), you can run CATEXP.SQL as user SYS. Be careful not to run CATALOG.SQL a second time, as it might destroy your data dictionary.

To use Export you must have the CREATE SESSION privilege on the Oracle database. With CREATE SESSION you can export objects belonging to your own schema. To export objects owned by another user, you must have the EXP_FULL_DATABASE role; the DBA role is granted this privilege.

Export parameters

I usually prefer to use a parameter file to drive exports, wherein all the directives are written in a flat file and the Export utility reads its directives from that file. Using a .par extension for your parameter file is preferable. Parameter files are nothing but text files, so use a text editor (vi, edit, etc.) to create the file. Let us look at a sample parameter file, say exp.par (you can also name the file junk.par or junk.jk; there are no restrictions):

  • FILE=exp020999.dmp
  • LOG=exp020999.log
  • FULL=Y
  • COMPRESS=Y

After writing the parameter file, invoke the Export utility:

exp username/password parfile=exp.par

The values in the parameter file can be as follows:

  • BUFFER - Determines the maximum number of rows in an array fetched by Export: buffer_size = rows_in_array * maximum_row_size. If a table containing a LONG datatype is exported, or if BUFFER is specified as zero, only one row at a time is fetched. (A worked sizing example follows this list.)
  • COMPRESS - Specifies how Export will manage the initial extent for the table data. This parameter is helpful during database reorganization. Export the objects (especially tables and indexes) with COMPRESS=Y. If your table was spanning 20 extents of 1M each (which is not desirable from a performance point of view) and you export the table with COMPRESS=Y, the generated DDL will have an initial extent of 20M, and on import the extents will be coalesced. Sometimes it is desirable to export with COMPRESS=N, for instance when you do not have that much contiguous space in your tablespace and you do not want your imports to fail.
  • CONSISTENT - If massive updates or rollbacks are taking place while the export is running, the export will not be consistent under normal conditions. If you set CONSISTENT=Y, a rollback segment is designated to back out the effect of any uncommitted transactions, so that a value changed after the export started is exported as it was when the export began. The negative effect is that exports are slower, because consistency is maintained by cross-referencing the rollback segments.
  • CONSTRAINTS - A flag to indicate whether to export table constraints.
  • DIRECT - Specifying DIRECT=Y causes Export to do a direct path export, bypassing the evaluation layer.
  • FEEDBACK - Specifies that Export should display a progress meter in the form of a dot for every x rows exported. For example, setting FEEDBACK=10 displays a dot each time 10 rows have been exported. This is done for each table exported.
  • FILE - The name of the export file.
  • FULL - Specifies whether Export should export the entire database. This includes all users, their schemas, and the data dictionary.
  • GRANTS - Specifies whether grants should be exported.
  • INCTYPE - Specifies the type of incremental export. Options are COMPLETE, CUMULATIVE, and INCREMENTAL.
  • INDEXES - Specifies whether indexes should be exported.
  • LOG - Specifies a file name to receive informational and error messages.
  • OWNER - Specifies a list of usernames whose objects will be exported.
  • RECORD - A flag to indicate whether to record an incremental or cumulative export in the database.
  • RECORDLENGTH - Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.
  • ROWS - Specifies whether the rows of table data should be exported.
  • STATISTICS - Specifies the type of database optimizer statistics to generate when the exported data is imported later. Options are ESTIMATE, COMPUTE, and NONE.
  • TABLES - Specifies the list of tables to export.
  • USERID - Specifies the username/password of the user initiating the export.
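As a worked illustration of the BUFFER formula above, suppose the largest row in the tables being exported is about 2,000 bytes and you want the fetch array to hold 100 rows at a time (both figures are assumptions chosen only for this example):

buffer_size = rows_in_array * maximum_row_size = 100 * 2,000 = 200,000

so the parameter file would contain BUFFER=200000.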

Tips

Some Tips While Exporting

  • If transactions continue to access sequence numbers during an export, sequence numbers can be skipped. Sequence numbers can be skipped only when cached sequence numbers are in use (for example, SEQUENCE_CACHE_ENTRIES=10 in your init.ora file). Let us assume a sequence vtx_seq accessed by your application, with sequence caching set to 10; sequence numbers 1-10 are then cached in memory. If you export at this point, the exported next sequence number for vtx_seq is 11, so the cached but unused sequence numbers 1-10 are lost. (A dictionary query for checking which of your sequences are cached follows this list.)
  • If a table is named emp#, and your parameter file contains the line TABLES=(emp#, dept, invoice), then everything after emp#, i.e. dept, invoice, is treated as a comment. To work around this, enclose the table name containing # in quotation marks: TABLES=("emp#", dept, invoice).
  • The maximum size of your parfile is limited to 8K on versions below 8i, so listing the tables one per line will go over 8K fairly quickly; 8i removes this restriction. To work around this limit, try to list all table names on a single line.
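If you are concerned about losing cached sequence numbers as described in the first tip, you can check which of your sequences use caching before exporting. The query below is only an illustration, run against the standard USER_SEQUENCES dictionary view:

SELECT sequence_name, cache_size, last_number FROM user_sequences;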

Export methods

Exports can be done in two methods:

  • Conventional - In a conventional path export, the SQL statement SELECT * FROM TABLE is used to extract data from database tables. Data is read from disk into a buffer (private buffer or buffer cache) and rows are transferred into the evaluation buffer. After passing expression evaluation (an equivalent INSERT statement is generated and validated), the data is transferred to the Export client, which then writes it to the export file. Some interesting points about conventional exports:
    o Conventional path export exports only in the user session character set, i.e. it honours settings made for the session, such as ALTER SESSION SET NLS_CHARACTERSET=US7ASCII.
    o From Oracle version 7.1 onwards, the format of the export file was changed to include stored procedures, functions, and packages that have comments embedded in them. As a result, if you want to import a dump file created by 7.1 or later into an Oracle database below version 7.1, a patch has to be applied. All other objects (except stored procedures, functions, and packages) can be imported without a patch.



  • Direct - In a direct path export the data is read directly, bypassing the evaluation buffer. It also optimizes the use of the SELECT * FROM TABLE statement. It is used in conjunction with the database in direct read mode, which makes use of private buffers instead of public buffers, so fewer resources are consumed and performance is improved. Some interesting points about direct exports (a sample invocation follows this list):
    o Direct path export is available from Oracle version 7.3 onwards.
    o The Export parameter BUFFER, used to specify the size of the fetch array, applies only to conventional exports and has no effect on direct path exports.
    o The Export parameter RECORDLENGTH can be used to specify the size of the Export I/O buffer.
    o If you have done a direct path export of a 7.3 release database and want to build a 7.2 release database from it, backward compatibility is not possible with direct path export; use a conventional export instead.
    o Direct path export exports only in the database character set, i.e. any settings made for the session have no effect; the session character set must be the same as the database character set, or the export will terminate with a warning. Check the parameter NLS_CHARACTERSET by logging in as "SYS AS SYSDBA" and issuing "show NLS_CHARACTERSET".
    o Direct path exports cannot be invoked in interactive mode.
    o To reduce contention with other users for database resources during a direct path export, you can use direct read mode. Remember, Oracle 7 users need to enable direct read mode by entering the following line in the init.ora file: COMPATIBLE = <db version number>, where the db version number is 7.1.5 or higher.
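For illustration, a direct path export of the full database might be invoked as below. The credentials, file names, and the RECORDLENGTH value are assumptions chosen for the example (RECORDLENGTH is shown because BUFFER has no effect in direct mode):

exp system/password FULL=Y DIRECT=Y FILE=expdirect020999.dmp LOG=expdirect020999.log RECORDLENGTH=65535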

Export categories

Exports can be of three categories:

  • Incremental Exports - An incremental export backs up only the tables that have changed since the last incremental, cumulative, or complete export. For example, if one row has been added or updated in a table since the last incremental, cumulative, or complete export, the entire table is exported; tables which have not been modified are not exported. Incremental exports cannot be specified as read consistent (CONSISTENT=Y). The activity is tracked in SYS.INCEXP, which is updated with a new ITIME and EXPID. Incremental exports can only be done in full database mode (FULL=Y), and only users having the EXP_FULL_DATABASE role can run incremental exports. A look at the parameter file incexp.par:
    o FILE=incremental020999.dmp
    o LOG=incremental020999.log
    o INCTYPE=INCREMENTAL



  • Cumulative Exports - A cumulative export backs up only the tables that have changed since the last cumulative or complete export. In essence, a cumulative export compresses a number of incremental exports into a single cumulative file. For example, assume at time "A" a cumulative export was done; at time "B" a record was deleted from table emp_mast; at time "C" an incremental export was done (the incremental export picks up table emp_mast, since a record was deleted); at time "D" a record was inserted into emp_dtl; at time "E" an incremental export was done (the incremental export picks up table emp_dtl, since a record was inserted). At time "F", when a cumulative export is done, it picks up both emp_mast and emp_dtl, since these two tables have changed since the last cumulative export. A look at the parameter file cumexp.par:
    o FILE=cumulative020999.dmp
    o LOG=cumulative020999.log
    o INCTYPE=CUMULATIVE



  • Complete Exports - A complete export establishes a base for incremental and cumulative exports. It is equivalent to a full database export, except that it updates the tables that track incremental and cumulative exports. If you do not specify INCTYPE=COMPLETE, then the tables that track incremental and cumulative exports are not updated. A look at the parameter file completeexp.par (a sample schedule combining the three categories follows this list):
    o FILE=complete020999.dmp
    o LOG=complete020999.log
    o INCTYPE=COMPLETE
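To tie the three categories together, a backup cycle might run the three parameter files shown above on a schedule. The schedule below is purely illustrative; the day names and the system account are assumptions:

exp system/password parfile=completeexp.par    (Sunday: complete export, establishes the base)
exp system/password parfile=cumexp.par         (Wednesday: cumulative export of tables changed since the last complete export)
exp system/password parfile=incexp.par         (remaining days: incremental export of tables changed since the last export)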

Export modes

There are three modes in which you can export. All users have at least two modes available by default; a user with the EXP_FULL_DATABASE role has three choices. The modes are:

  • Exporting Tables - You can export a table or a group of tables in a schema. You could write your parameter file (exptables.par) as shown below:
    o FILE=exptables020999.dmp
    o LOG=exptables020999.log
    o TABLES=(table1, table2, table3)



  • Exporting Users - If you desire, you can export all objects in a user's schema (such as tables, data, grants, and indexes). By default an ordinary user can only export his own schema; a user with the EXP_FULL_DATABASE privilege can export other users' schemas. You could write a parameter file (expschema.par) to export your own schema as below (an example for exporting another user's schema follows this list):
    o FILE=expschema.dmp
    o LOG=expschema.log



  • Exporting Full Database - You must have the EXP_FULL_DATABASE privilege to carry out this operation. You could write a parameter file (expfulldb.par) to export the full database as below:
    o FULL=Y
    o FILE=expfulldb.dmp
    o LOG=expfulldb.log
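To export another user's schema (as mentioned under Exporting Users above), a user holding the EXP_FULL_DATABASE role can add the OWNER parameter described earlier. The schema name scott and the file names below are only an illustration:

  • FILE=expscott.dmp
  • LOG=expscott.log
  • OWNER=scott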

Import

Import extracts data from an export file and puts it back into an Oracle database. Also see Export. Table objects are imported from the export file in the following order:

  • Table definitions
  • Table data
  • Table indexes
  • Integrity constraints and triggers

First, new tables are created, then data is imported. After all data is imported into all tables, indexes are built. Then triggers are imported, and integrity constraints are enabled. If you prefer to import by specifying a list of tables, then the order in which the tables are imported is important, to avoid rows being rejected for violating referential integrity (an example follows the command below). You can see the keywords accepted by Import by typing

imp HELP=y
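Following the ordering advice above: if dept is a parent table and emp references it through a foreign key (hypothetical tables used only for illustration), list the parent table before the child table in the parameter file so that parent rows exist before the child rows arrive:

TABLES=(dept, emp)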

To use the Import utility, CATEXP.SQL or CATALOG.SQL (which in turn calls CATEXP.SQL) must be run after the database has been created. CATEXP.SQL has to be run only once, when the database is created. If CATEXP.SQL was not called by CATALOG.SQL (because CATALOG.SQL could not find CATEXP.SQL where it expected it), you can run CATEXP.SQL as user SYS. Be careful not to run CATALOG.SQL a second time, as it might destroy your data dictionary.

By default, if an object does not exist when importing, the object will be created in the same tablespace it occupied in the export file. If that tablespace does not exist, then the object will not be created. For example, say we have exported a table emp_mast which originally exists in tablespace DATA01 in the production database. We now want to import this table into the test database (assuming emp_mast is not already in the test database); if the test database does not have the tablespace DATA01, the import will not be successful (a workaround is shown after the privilege list below).

Required Privileges

The privileges necessary for Import are listed below:

  • To use Import, you need the CREATE SESSION privilege. The CONNECT role is granted this privilege during database creation.
  • If user XYZ has created an export, then any user granted the IMP_FULL_DATABASE role (usually the DBA role is granted this privilege at database creation) can import the export file.
  • If user XYZ happens to be a DBA (a user granted the DBA role), then only users with IMP_FULL_DATABASE (the DBA role) can import this file.
  • If user XYZ has created an export file, then the same userid (XYZ) on a different database can import this file.
  • Only a user with the IMP_FULL_DATABASE privilege can do a full database import.
  • To be able to import privileges that a user has granted to others, the user initiating the import must either own the objects or have been granted the object privileges WITH GRANT OPTION.
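One way around the missing-tablespace problem described above is to create the tablespace on the target database before running the import. The statement below is only an illustration; the datafile path and size are assumptions that must be adapted to your own environment:

CREATE TABLESPACE DATA01 DATAFILE '/u01/oradata/test/data01_01.dbf' SIZE 100M;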



Tips

Some Tips While Importing

  • The storage parameter OPTIMAL for rollback segments is not preserved during export and import.
  • You cannot import into a read-only tablespace.
  • Tables with LONG data usually require a large insert buffer size. If you are trying to import LONG data and the import fails with the IMP-00020 message, you can increase the insert buffer size gradually (by 10,000 bytes at a time) up to 66,000 bytes or greater.
  • When a local stored procedure, function, or package is imported, it retains its original timestamp. If the timestamp of the imported version differs from the timestamp of the version currently in the database, it is marked for recompilation. If the timestamps match, it is not marked and will not be recompiled.
  • If a table is named emp#, and your parameter file contains the line TABLES=(emp#, dept, invoice), then everything after emp#, i.e. dept, invoice, is treated as a comment. To work around this, enclose the table name containing # in quotation marks: TABLES=("emp#", dept, invoice).



I usually prefer to use a parameter file to drive imports as well, wherein all the directives are written in a flat file and the Import utility reads its directives from that file. A sample parameter file (imp.par):

  • FILE=imp021699.dmp
  • LOG=imp021699.log
  • COMMIT=Y
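After writing the parameter file, the import is invoked in the same way as the export, for example:

imp username/password parfile=imp.par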

Import methods

Imports can be done in three modes:

  • Importing Tables - If you desire to import only a table or a group of tables in a schema, you could write your parameter file (imptables.par) as shown below:
    o FILE=imptables021699.dmp
    o LOG=imptables021699.log
    o TABLES=(table1, table2, table3)

  • Importing Users - If you desire, you can import all objects in a user's schema (such as tables, data, grants, and indexes). By default an ordinary user can only import his own schema; a user with the IMP_FULL_DATABASE privilege can import other users' schemas.
  • Importing Full Database - You must have the IMP_FULL_DATABASE privilege to carry out this operation. The objects that are imported are (a sample parameter file follows this list):
    o Profiles
    o Public database links
    o Public synonyms
    o Roles
    o Rollback segment definitions
    o System audit options
    o System privileges
    o Tablespace definitions
    o Tablespace quotas
    o User definitions
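For illustration, a full database import could use a parameter file such as the one below (impfulldb.par is a hypothetical name mirroring the export example, and the dump file is the one produced with expfulldb.par):

  • FULL=Y
  • FILE=expfulldb.dmp
  • LOG=impfulldb.log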
