9i To 10g Upgradation

  • Uploaded by: Rahul Gupta
  • December 2019
  • PDF, 4,176 words, 17 pages
Upgradation from 9i to 10g ( OS : AIX 5.3 )
By: Rahul Gupta, Path InfoTech Limited
Client : L.G. Electronics

/* Hi all, kindly find this doc containing all the information about the 10g upgradation. It is based on the setup of "L.G. Electronics". For further reference, refer to the doc 316889.1. Suggestions / modifications on this are highly appreciated. */

Prerequisites and recommendations :

 Install Oracle 10g Release 2 in a new Oracle Home. Kindly refer to the 10g installation guide for the same.
 Install the 10g Companion CD for the base Oracle 10g version.
 Compatibility matrix:

a) Minimum versions of the database that can be directly upgraded to 10gR2:

   8.1.7.4              ->  10.2.X.X.X
   9.0.1.4 or 9.0.1.5   ->  10.2.X.X.X
   9.2.0.4 or higher    ->  10.2.X.X.X
   10.1.0.2 or higher   ->  10.2.X.X.X

b) The following database versions require an indirect upgrade path:

   7.3.3 (or lower)  ->  7.3.4  ->  8.1.7  ->  8.1.7.4  ->  10.2.X.X.X
   7.3.4             ->  8.1.7  ->  8.1.7.4  ->  10.2.X.X.X
   8.0.n             ->  8.1.7  ->  8.1.7.4  ->  10.2.X.X.X
   8.1.n             ->  8.1.7  ->  8.1.7.4  ->  10.2.X.X.X

NOTE : Before starting the upgradation activity, take a COLD backup of the database and ORACLE_HOME.

================================  Upgradation Activity  ================================

Step 1 :  Run the "utlu102i.sql" pre-upgrade information tool

Copy the file "utlu102i.sql" from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the /tmp directory on your system, then run it:

$ sqlplus '/as sysdba'
SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off

Sample warnings from the output:

WARNING: --> DRSYS tablespace is not large enough for the upgrade
         ---> the DRSYS tablespace needs to be increased to at least 50 MB
====> WARNING: --> "streams_pool_size" is not currently defined and needs a value of at least 50331648
WARNING: --> "session_max_open_files" needs to be increased to at least 20

streams_pool_size=50331648
session_max_open_files=20

=======> Remove the obsolete/deprecated parameters "hash_join_enabled" and "log_archive_start" from the pfile of 9i.

===> ...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP) is required to be installed from the 10g Companion CD.

...The 'Oracle interMedia Image Accelerator' is required to be installed from the 10g Companion CD.

====> WARNING: --> Passwords exist in some database links.
.... Passwords will be encrypted during the upgrade.
.... Downgrade of database links with passwords is not supported.
WARNING: --> Deprecated CONNECT role granted to some user/roles.
.... CONNECT role after upgrade has only CREATE SESSION privilege.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 10g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
....   XDB
....   OLAPSYS
....   MDSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... USER ORAEQUS has 8 INVALID objects.

===>

--> New "SYSAUX" tablespace ....minimum required size for database upgrade: 500 MB

#########################################################################################

Step 2 :

Take a backup of sys.link$ and create a script of all the DB links.

===> DB link passwords will be encrypted in Oracle 10g, so in case of a downgrade the passwords will not be available. Therefore, take a backup of the database links first.

SQL> create table db_link_backup as select * from sys.link$;

SQL> select 'create database link '||name||' connect to '||userid||' identified by '||password||';' from sys.link$;

##########################################################################################

Step 3 :  Check for the deprecated CONNECT role

Users that have CONNECT:

SELECT grantee FROM dba_role_privs
 WHERE granted_role = 'CONNECT'
   AND grantee NOT IN ('SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
                       'LOGSTDBY_ADMINISTRATOR', 'ORDSYS', 'ORDPLUGINS', 'OEM_MONITOR',
                       'WKSYS', 'WKPROXY', 'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS',
                       'DMSYS', 'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER', 'OLAPSYS',
                       'EXFSYS', 'SYSMAN', 'MDDATA', 'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

===> Grant those users all the privileges of the old CONNECT role other than what remains in "CONNECT":

select 'GRANT CREATE VIEW, CREATE TABLE, ALTER SESSION, CREATE CLUSTER, CREATE SESSION, CREATE SYNONYM, CREATE SEQUENCE, CREATE DATABASE LINK TO '||grantee||';'
  from dba_role_privs
 WHERE granted_role = 'CONNECT'
   AND grantee NOT IN ('SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
                       'LOGSTDBY_ADMINISTRATOR', 'ORDSYS', 'ORDPLUGINS', 'OEM_MONITOR',
                       'WKSYS', 'WKPROXY', 'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS',
                       'DMSYS', 'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER', 'OLAPSYS',
                       'EXFSYS', 'SYSMAN', 'MDDATA', 'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

##########################################################################################

( There is no table on our site that requires this step. )

Step 4 :  Data in tables with "TIMESTAMP WITH TIME ZONE" columns can be affected by the upgrade. To check for this, copy the file "utltzuv2.sql" to a temp location, then run it:

$ sqlplus '/as sysdba'
SQL> spool TimeZone_Info.log
SQL> @utltzuv2.sql
SQL> spool off

The script populates sys.sys_tzuv2_temptab with the columns that use "TIMESTAMP WITH TIME ZONE". Back up that data in character format before you upgrade the database; after the upgrade, you must update the tables to ensure that the data is stored based on the new rules.

---------------------------------------------------------------------------------
SQL> @utltzuv2.sql
DROP TABLE sys.sys_tzuv2_temptab
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

Query sys.sys_tzuv2_temptab Table to see if any TIMEZONE data is
affected by version 2 transition rules

PL/SQL procedure successfully completed.

Commit complete.

SQL> spool off

[ilomsdb01:ora9equs] /tmp> sysdba

SQL*Plus: Release 9.2.0.8.0 - Production on Wed Jun 25 10:34:04 2008
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> select * from sys.sys_tzuv2_temptab;

no rows selected
---------------------------------------------------------------------------------

OR :

Suppose there is a table:

SQL> desc test_table
 x    number primary key
 y    timestamp with time zone

Now take a backup of this table:

SQL> create table test_table_backup as select * from test_table where 1=2;
SQL> insert into test_table_backup
     select x, to_char(y, 'YYYY-MM-DD HH24.MI.SSXFF TZR') from test_table;

After the upgrade, run an update command to convert the data according to the new transition rules:

update test_table t
   set t.y = (select to_timestamp_tz(t1.y,'YYYY-MM-DD HH24.MI.SSXFF TZR')
                from test_table_backup t1
               where t.x = t1.x);

##########################################################################################

Step 5 :  Starting with Oracle 9i, the national character set (NLS_NCHAR_CHARACTERSET) is limited to UTF8 and AL16UTF16; any other NLS_NCHAR_CHARACTERSET is no longer supported. Basically, the N-datatypes (NCHAR, NVARCHAR2, NCLOB) use this character set.

NOTE : If you are upgrading from Oracle9i to 10g, no NLS character set conversion is required.

+------------+---------+-----------------+
| Charset    | RDBMS   | Unicode version |
+------------+---------+-----------------+
| AL24UTFFSS | 7.2-8.1 | 1.1             |
|            |         |                 |
| UTF8       | 8.0-11g | 2.1 (8.0-8.1.6) |
|            |         | 3.0 (8.1.7-11g) |
|            |         |                 |
| UTFE       | 8.0-11g | 2.1 (8.0-8.1.6) |
|            |         | 3.0 (8.1.7-11g) |
|            |         |                 |
| AL32UTF8   | 9.0-11g | 3.0 (9.0)       |
|            |         | 3.1 (9.2)       |
|            |         | 3.2 (10.1)      |
|            |         | 4.01 (10.2)     |
|            |         | 5.0 (11.1)      |
|            |         |                 |
| AL16UTF16  | 9.0-10g | 3.0 (9.0)       |
|            |         | 3.1 (9.2)       |
|            |         | 3.2 (10.1)      |
|            |         | 4.01 (10.2)     |
|            |         | 5.0 (11.1)      |
+------------+---------+-----------------+

---------------- Internal Use ------------------------

SQL> select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
     where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB')
       and OWNER not in ('SYS','SYSTEM','XDB');

OWNER        TABLE_NAME
------------ ------------------------------
OE           BOMBAY_INVENTORY
OE           PRODUCTS
OE           PRODUCT_DESCRIPTIONS
OE           SYDNEY_INVENTORY
OE           TORONTO_INVENTORY
PM           PRINT_MEDIA

6 rows selected.

SQL> select * from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';

PARAMETER                  VALUE
-------------------------- ------------
NLS_NCHAR_CHARACTERSET     AL16UTF16

==========================  For 8i to 10g upgradation  ==========================

select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS
 where DATA_TYPE in ('NCHAR','NVARCHAR2','NCLOB')
   and OWNER not in ('SYS','SYSTEM','XDB');

If no rows are returned, the database is not using N-type columns for user data; simply go to the next step.

SQL> select * from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';

If you are using N-type columns AND your national character set is UTF8 or is in the following list:

JA16SJISFIXED, JA16EUCFIXED, JA16DBCSFIXED, ZHT32TRISFIXED,
KO16KSC5601FIXED, KO16DBCSFIXED, US16TSTFIXED, ZHS16CGB231280FIXED,
ZHS16GBKFIXED, ZHS16DBCSFIXED, ZHT16DBCSFIXED, ZHT16BIG5FIXED,
ZHT32EUCFIXED

then also simply go to the next step. Otherwise:

* change the tables to use CHAR, VARCHAR2 or CLOB instead of the N-type, or
* use export/import for the table(s) containing N-type columns and truncate those tables before migrating to 10g. The recommended NLS_LANG during export is simply the NLS_CHARACTERSET, not the NLS_NCHAR_CHARACTERSET.

======================  NLS Conversion for 8i  ======================
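When the export/import route is taken, the session might look like the sketch below. The NLS_LANG value and table name are illustrative examples only, not from this setup; check nls_database_parameters for your real database character set.

```shell
# During export of N-type tables, set NLS_LANG to the database character
# set (NLS_CHARACTERSET), not the national one -- value below is an example.
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
echo "exporting with NLS_LANG=$NLS_LANG"
# exp userid=system tables=OE.PRODUCT_DESCRIPTIONS file=ntype_tabs.dmp   # run on the 9i server
```

The exp command itself is commented out here because it must run on the database server against your own schema names.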

#########################################################################################

Step 6 :

Stats Gathering

When upgrading to Oracle Database 10g, optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade.

Note : It is better to take a stats backup of all the schemas in your database, not only of those mentioned below. To take a backup of all the existing schemas, use the steps given below.

$ sqlplus '/as sysdba'
SQL> set echo off
SQL> set feedback off
SQL> set heading off
SQL> spool stats_backup.sql
SQL> select 'exec dbms_stats.export_schema_stats('''||username||''',''dictstattab'',statown => ''SYS'');' from dba_users;
SQL> spool off

Now remove all the unimportant lines from stats_backup.sql. After this:

$ sqlplus '/as sysdba'
SQL> spool sdict
SQL> grant analyze any to sys;
SQL> exec dbms_stats.create_stat_table('SYS','dictstattab');   -- the stat table must exist before the exports
SQL> @stats_backup.sql
SQL> spool off

After this, skip step (A) and follow step (B).

( A ) --> First take a backup of the existing stats:

$ sqlplus '/as sysdba'
SQL> spool sdict
SQL> grant analyze any to sys;
SQL> exec dbms_stats.create_stat_table('SYS','dictstattab');   --- It'll create a new stat table

SQL> exec dbms_stats.export_schema_stats('WMSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('MDSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('CTXSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('XDB','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('WKSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('LBACSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('OLAPSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('DMSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('ODM','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('ORDSYS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('ORDPLUGINS','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('SI_INFORMTN_SCHEMA','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('OUTLN','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('DBSNMP','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('SYSTEM','dictstattab',statown => 'SYS');
SQL> exec dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS');

SQL> spool off

OR run the sql "stats_exp_backup.sql":

$ sqlplus '/as sysdba'
SQL> @stats_exp_backup.sql

( B ) ---> Now gather the new statistics. This will also reduce the downtime.

$ sqlplus '/as sysdba'
SQL> spool gdict_stats.txt
SQL> grant analyze any to sys;
SQL> exec dbms_stats.gather_schema_stats('WMSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('MDSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('CTXSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('XDB', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('WKSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('LBACSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('OLAPSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('DMSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('ODM', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('ORDSYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('ORDPLUGINS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('SI_INFORMTN_SCHEMA', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('OUTLN', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('DBSNMP', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('SYSTEM', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> exec dbms_stats.gather_schema_stats('SYS', options=>'GATHER', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', cascade=>TRUE);
SQL> spool off

OR use the sql "gather_stats_9i.sql":

$ sqlplus '/as sysdba'
SQL> @gather_stats_9i.sql

#########################################################################################

Step 7 :

Count the invalid objects and compile the invalids

Take a list of all the invalid objects before upgrading:

SQL> spool invalid_pre.lst
SQL> select substr(owner,1,12) owner, substr(object_name,1,30) object,
            substr(object_type,1,30) type, status
       from dba_objects where status = 'INVALID';
SQL> spool off

After taking the list of invalid objects, compile the invalids:

$ sqlplus '/as sysdba'
SQL> @?/rdbms/admin/utlrp.sql

OR

$ sqlplus '/as sysdba'
SQL> exec utl_recomp.recomp_parallel(4);    -- refer to the shift log regarding this

#########################################################################################

Step 8 :

Dictionary corruption check

Check for corruption in the dictionary using the following commands in SQL*Plus, connected as SYS:

set verify off
set space 0
set line 120
set heading off
set feedback off
set pages 1000
spool analyze.sql
select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
  from dba_clusters
 where owner = 'SYS'
union
select 'Analyze table "'||table_name||'" validate structure cascade;'
  from dba_tables
 where owner = 'SYS' and partitioned = 'NO'
   and (iot_type = 'IOT' or iot_type is NULL)
union
select 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
  from dba_tables
 where owner = 'SYS' and partitioned = 'YES';
spool off

This creates a script called analyze.sql. Now execute the following steps:

$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql

The analyze.sql script should not return any errors.

#########################################################################################

Step 9 :  Ensure that all snapshot refreshes have completed successfully and that replication is stopped.

$ sqlplus '/ as sysdba'
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;

(TRUNC(LA
---------
13-MAY-02

#########################################################################################

Step 10 :  Stop the listener and confirm that no file is in recovery/backup mode

Stop the listener for the database:

$ lsnrctl
LSNRCTL> stop

Ensure no files need media recovery (this should return no rows):

$ sqlplus '/ as sysdba'
SQL> select * from v$recover_file;

Ensure no files are in backup mode (this should return no rows):

SQL> select * from v$backup where status != 'NOT ACTIVE';

#########################################################################################

Step 11 :  Check for distributed transactions. Resolve any outstanding unresolved distributed transactions:

SQL> select * from dba_2pc_pending;

If this returns rows, you should do the following:

SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('<local_tran_id>');
SQL> commit;

====>> Now disable all the batch and cron jobs:

$ crontab -l > cronbackup

Then hash (comment out) all the programs in the crontab.
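Hashing the crontab entries can be scripted. A minimal sketch, assuming the file names cronbackup and cron_disabled (the two sample job lines are illustrative only):

```shell
# Save a copy of the crontab, then comment out ("hash") every job line.
# On the real server the first command would be:  crontab -l > cronbackup
printf '%s\n' \
  '0 2 * * * /scripts/nightly_batch.sh' \
  '30 3 * * * /scripts/purge_logs.sh' > cronbackup   # sample entries for illustration

sed 's/^/#/' cronbackup > cron_disabled
# Load the hashed copy so no job fires during the upgrade:
# crontab cron_disabled
cat cron_disabled
```

After the upgrade, reload the original with `crontab cronbackup`.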

#########################################################################################

Step 12 :  The default tablespace of SYS and SYSTEM, and the table AUD$, should be in the SYSTEM tablespace; otherwise change the default tablespace to SYSTEM.

SQL> select username, default_tablespace from dba_users where username in ('SYS','SYSTEM');

Ensure that AUD$ is in the SYSTEM tablespace when auditing is enabled:

SQL> select tablespace_name from dba_tables where table_name = 'AUD$';

#########################################################################################

Step 13 :  ===> Drop the table XDB.MIGR9202STATUS if it exists in the database.

#########################################################################################

Step 14 :  Note down where all the control files are located.

SQL> select * from v$controlfile;

#########################################################################################

Step 15 :  Now shut down the database and take a full cold backup including ORACLE_HOME (for safety), OR take an RMAN online full backup. [ No need to take the backup if it was already taken earlier. ]

------------------------------------------------------------------------
Issue the following RMAN commands:

RUN
{
  ALLOCATE CHANNEL chan_name TYPE DISK;
  BACKUP DATABASE FORMAT 'some_backup_directory%U' TAG before_upgrade;
  BACKUP CURRENT CONTROLFILE TO 'save_controlfile_location';
}
------------------------------------------------------------------------

$ sqlplus '/as sysdba'
SQL> shutdown immediate;

======================================================================================
=============================        PHASE 2        ==================================
===============  Upgrading to the New Oracle Database 10g Release 2  =================
======================================================================================

#########################################################################################

Step 16 :  Copy the init file to the new ORACLE_HOME

=====> Changes in the INIT.ORA file:

* Comment out obsolete parameters such as hash_join_enabled (please check this again).
* Change all deprecated parameters such as "log_archive_start".
* Change the value of the COMPATIBLE parameter:
  - If upgrading from 8i, set the value to 9.2.0.
  - If upgrading from 9.2.0 or 10.1.0, leave COMPATIBLE at its current value until the upgrade has completed successfully. This avoids unnecessary ORA-942 errors being reported in SMON trace files during the upgrade (the upgrade looks for 10.2 objects that have not yet been created).
  Note : In our case there is no need to change the value of the COMPATIBLE parameter.
* If the parameter NLS_LENGTH_SEMANTICS is currently set to CHAR, change the value to BYTE during the upgrade.
  Note : In our case there is no need to change the value, as this bug has been resolved in 10.2.0.3.
* Verify that the parameter DB_DOMAIN is set properly.
* PGA_AGGREGATE_TARGET >= 24MB.
* SHARED_POOL_SIZE and LARGE_POOL_SIZE should be greater than 150MB.
* JAVA_POOL_SIZE should be greater than 150MB.
* Ensure there is a value for DB_BLOCK_SIZE.
* Change the values of BDUMP, CDUMP and UDUMP.
* Comment out the AQ_TM_PROCESSES and JOB_QUEUE_PROCESSES parameters, then add:
    AQ_TM_PROCESSES=0
    JOB_QUEUE_PROCESSES=0
  (This is required for the upgrade; that is why the originals are hashed out above.)
* Set the parameter UNDO_MANAGEMENT=AUTO.
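The parameter changes above can be summed up as a pfile fragment. This is a sketch with values taken from this setup where stated and illustrative otherwise, not a complete init.ora:

```
# init<SID>.ora fragment -- illustrative, merge into your existing pfile
# hash_join_enabled=true         obsolete in 10g: removed
# log_archive_start=true         deprecated in 10g: removed
compatible=9.2.0                 # leave at the current value when coming from 9.2
streams_pool_size=50331648       # from the utlu102i.sql warnings above
session_max_open_files=20
aq_tm_processes=0                # 0 only for the duration of the upgrade
job_queue_processes=0
undo_management=AUTO
```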

#########################################################################################

Step 17 :  Ensure the NLS_LANG variable is set correctly:

$ env | grep NLS_LANG

#########################################################################################

Step 18 :  Copy the network configuration, init.ora and password files from the old ORACLE_HOME to the new ORACLE_HOME

$ cp $OLD_ORACLE_HOME/network/admin/*.ora $NEW_ORACLE_HOME/network/admin
$ cp $OLD_ORACLE_HOME/dbs/init<SID>.ora $NEW_ORACLE_HOME/dbs/
$ cp $OLD_ORACLE_HOME/dbs/orapw<SID> $NEW_ORACLE_HOME/dbs/

#########################################################################################

Step 18 :  Update the environment settings according to the new 10g environment

---> Update the oratab entry to set the new ORACLE_HOME and disable automatic startup:

     SID:ORACLE_HOME:N

---> Update the environment variables such as ORACLE_HOME and PATH:

     $ . oraenv

---> Make sure the following environment variables point to the new release (10g) directories:

$ env | grep ORACLE_HOME
$ env | grep PATH
$ env | grep ORA_NLS10
$ env | grep ORACLE_BASE
$ env | grep LD_LIBRARY_PATH
$ env | grep LIBPATH          (AIX only)
$ env | grep ORACLE_PATH

---> Unset ORA_NLS33 and set ORA_NLS10 to point to $ORACLE_HOME/nls/data #########################################################################################
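As a sketch, the variable changes might look like this in the oracle user's profile. The 10g home path is an example only; adjust it to your installation:

```shell
# Point the environment at the new 10g home (example path).
export ORACLE_HOME=/oracle/product/10.2.0
export PATH=$ORACLE_HOME/bin:$PATH
export LIBPATH=$ORACLE_HOME/lib:$LIBPATH         # AIX shared-library path
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORA_NLS10=$ORACLE_HOME/nls/data
unset ORA_NLS33                                  # 9i-only variable, must not linger

echo "$ORA_NLS10"
```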

Step 19 :  Datafile movement

As in our case we have created different mount points for the database, file movement is also required. You can skip this step if you are not changing the location of your database.

--------------------------  Datafile Movement Activity  --------------------------

Files that need to be moved
===========================
 -> datafile
 -> tempfile
 -> logfile
 -> controlfile

Step 1 :  Check the oradata folders in the mount points:

/equs10g2/oradata
/equs10g3/oradata
/equs10g4/oradata

Step 2 :  The control file locations need to be changed in init.ora:

*.control_files='/equs10g2/oradata/control01.ctl','/equs10g3/oradata/control02.ctl','/equs10g4/oradata/control03.ctl'

cp /u06/ilequs/oradata/ilequs/control01.ctl /equs10g2/oradata/
cp /u06/ilequs/oradata/ilequs/control02.ctl /equs10g3/oradata/
cp /u06/ilequs/oradata/ilequs/control03.ctl /equs10g4/oradata/

Step 3 :  Copy the files to the desired locations

Copy the datafiles to the desired locations:

nohup cp -pr /u06/ilequs/oradata/ilequs/cwmlite01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/odm01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/indx01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/xdb01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/users01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/example01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/drsys01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/rman01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/undotbs02.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/system01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/undotbs01.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/gts_idx01.dat /equs10g2/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsd10.dbf /equs10g2/oradata/ &
nohup cp -pr /u08/oradata/ilequs/tools02.dbf /equs10g2/oradata/ &
nohup cp -pr /u06/ilequs/oradata/ilequs/tools01.dbf /equs10g2/oradata/ &
nohup cp -pr /u07/oradata/ilequs/eaid01.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/eaix01.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/system02.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/ilrbisd01.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsx03.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsx05.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/gts_idx02.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsx01.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsx02.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsx04.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsd01.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsd02.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsd03.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsd04.dbf /equs10g3/oradata/ &
nohup cp -pr /u07/oradata/ilequs/equsd05.dbf /equs10g3/oradata/ &
nohup cp -pr /u08/oradata/ilequs/ilrbisx01.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsx08.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsx09.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/gts_dat02.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsx06.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsx07.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsx10.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsd06.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsd07.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsd08.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/equsd09.dbf /equs10g4/oradata/ &
nohup cp -pr /u08/oradata/ilequs/gts_dat01.dat /equs10g4/oradata/ &

##### Copy commands for the redo logfiles #####

cp /u06/ilequs/oradata/ilequs/redo03.log /equs10g2/oradata/redo03.log
cp /u06/ilequs/oradata/ilequs/redo02.log /equs10g2/oradata/redo02.log
cp /u06/ilequs/oradata/ilequs/redo01.log /equs10g2/oradata/redo01.log
cp /u07/oradata/ilequs/redo03b.rdo /equs10g3/oradata/redo03b.rdo
cp /u07/oradata/ilequs/redo02b.rdo /equs10g3/oradata/redo02b.rdo
cp /u07/oradata/ilequs/redo01b.rdo /equs10g3/oradata/redo01b.rdo

##### Copy command for the temp files #####

cp /u06/ilequs/oradata/ilequs/temp01.dbf /equs10g2/oradata/

Step 4 :

Rename Command

Then run the rename command…. alter database rename file alter database rename file alter database rename file alter database rename file alter database rename file

'/u06/ilequs/oradata/ilequs/cwmlite01.dbf' '/u06/ilequs/oradata/ilequs/odm01.dbf' to '/u06/ilequs/oradata/ilequs/indx01.dbf' to '/u06/ilequs/oradata/ilequs/xdb01.dbf' to '/u06/ilequs/oradata/ilequs/users01.dbf' to

to '/equs10g2/oradata/cwmlite01.dbf' ; '/equs10g2/oradata/odm01.dbf' ; '/equs10g2/oradata/indx01.dbf' ; '/equs10g2/oradata/xdb01.dbf' ; '/equs10g2/oradata/users01.dbf' ;

alter database rename file '/u06/ilequs/oradata/ilequs/example01.dbf' to '/equs10g2/oradata/example01.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/drsys01.dbf' to '/equs10g2/oradata/drsys01.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/rman01.dbf' to '/equs10g2/oradata/rman01.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/undotbs02.dbf' to '/equs10g2/oradata/undotbs02.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/system01.dbf' to '/equs10g2/oradata/system01.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/undotbs01.dbf' to '/equs10g2/oradata/undotbs01.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/gts_idx01.dat' to '/equs10g2/oradata/gts_idx01.dat';
alter database rename file '/u08/oradata/ilequs/equsd10.dbf' to '/equs10g2/oradata/equsd10.dbf';
alter database rename file '/u08/oradata/ilequs/tools02.dbf' to '/equs10g2/oradata/tools02.dbf';
alter database rename file '/u06/ilequs/oradata/ilequs/tools01.dbf' to '/equs10g2/oradata/tools01.dbf';
alter database rename file '/u07/oradata/ilequs/eaid01.dbf' to '/equs10g3/oradata/eaid01.dbf';
alter database rename file '/u07/oradata/ilequs/eaix01.dbf' to '/equs10g3/oradata/eaix01.dbf';
alter database rename file '/u07/oradata/ilequs/system02.dbf' to '/equs10g3/oradata/system02.dbf';
alter database rename file '/u07/oradata/ilequs/ilrbisd01.dbf' to '/equs10g3/oradata/ilrbisd01.dbf';
alter database rename file '/u07/oradata/ilequs/equsx03.dbf' to '/equs10g3/oradata/equsx03.dbf';
alter database rename file '/u07/oradata/ilequs/equsx05.dbf' to '/equs10g3/oradata/equsx05.dbf';
alter database rename file '/u07/oradata/ilequs/gts_idx02.dbf' to '/equs10g3/oradata/gts_idx02.dbf';
alter database rename file '/u07/oradata/ilequs/equsx01.dbf' to '/equs10g3/oradata/equsx01.dbf';
alter database rename file '/u07/oradata/ilequs/equsx02.dbf' to '/equs10g3/oradata/equsx02.dbf';
alter database rename file '/u07/oradata/ilequs/equsx04.dbf' to '/equs10g3/oradata/equsx04.dbf';
alter database rename file '/u07/oradata/ilequs/equsd01.dbf' to '/equs10g3/oradata/equsd01.dbf';
alter database rename file '/u07/oradata/ilequs/equsd02.dbf' to '/equs10g3/oradata/equsd02.dbf';
alter database rename file '/u07/oradata/ilequs/equsd03.dbf' to '/equs10g3/oradata/equsd03.dbf';
alter database rename file '/u07/oradata/ilequs/equsd04.dbf' to '/equs10g3/oradata/equsd04.dbf';
alter database rename file '/u07/oradata/ilequs/equsd05.dbf' to '/equs10g3/oradata/equsd05.dbf';
alter database rename file '/u08/oradata/ilequs/ilrbisx01.dbf' to '/equs10g4/oradata/ilrbisx01.dbf';
alter database rename file '/u08/oradata/ilequs/equsx08.dbf' to '/equs10g4/oradata/equsx08.dbf';
alter database rename file '/u08/oradata/ilequs/equsx09.dbf' to '/equs10g4/oradata/equsx09.dbf';
alter database rename file '/u08/oradata/ilequs/gts_dat02.dbf' to '/equs10g4/oradata/gts_dat02.dbf';
alter database rename file '/u08/oradata/ilequs/equsx06.dbf' to '/equs10g4/oradata/equsx06.dbf';
alter database rename file '/u08/oradata/ilequs/equsx07.dbf' to '/equs10g4/oradata/equsx07.dbf';
alter database rename file '/u08/oradata/ilequs/equsx10.dbf' to '/equs10g4/oradata/equsx10.dbf';
alter database rename file '/u08/oradata/ilequs/equsd06.dbf' to '/equs10g4/oradata/equsd06.dbf';
alter database rename file '/u08/oradata/ilequs/equsd07.dbf' to '/equs10g4/oradata/equsd07.dbf';
alter database rename file '/u08/oradata/ilequs/equsd08.dbf' to '/equs10g4/oradata/equsd08.dbf';
alter database rename file '/u08/oradata/ilequs/equsd09.dbf' to '/equs10g4/oradata/equsd09.dbf';
alter database rename file '/u08/oradata/ilequs/gts_dat01.dat' to '/equs10g4/oradata/gts_dat01.dat';

------ Redo file movement ( Rename Command ) ----------

alter database rename file '/u06/ilequs/oradata/ilequs/redo03.log' to '/equs10g2/oradata/redo03.log';
alter database rename file '/u06/ilequs/oradata/ilequs/redo02.log' to '/equs10g2/oradata/redo02.log';
alter database rename file '/u06/ilequs/oradata/ilequs/redo01.log' to '/equs10g2/oradata/redo01.log';
alter database rename file '/u07/oradata/ilequs/redo03b.rdo' to '/equs10g3/oradata/redo03b.rdo';
alter database rename file '/u07/oradata/ilequs/redo02b.rdo' to '/equs10g3/oradata/redo02b.rdo';
alter database rename file '/u07/oradata/ilequs/redo01b.rdo' to '/equs10g3/oradata/redo01b.rdo';

--------- Temp file movement ( Rename ) --------------------------

alter database rename file '/u06/ilequs/oradata/ilequs/temp01.dbf' to '/equs10g2/oradata/temp01.dbf';
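To avoid typing every statement by hand, the rename commands above can be generated from a simple "old-path new-path" list — a minimal shell sketch (the two sample pairs are taken from the list above; extend the here-document with your own mappings):

```shell
#!/bin/sh
# Emit one ALTER DATABASE RENAME FILE statement per "old new" path pair.
while read old new; do
  echo "alter database rename file '$old' to '$new';"
done <<'EOF'
/u06/ilequs/oradata/ilequs/system01.dbf /equs10g2/oradata/system01.dbf
/u07/oradata/ilequs/equsd01.dbf /equs10g3/oradata/equsd01.dbf
EOF
```

The pair list itself can be built from the NAME column of v$datafile, v$tempfile and v$logfile before the move, which also guards against typos in the source paths.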

#########################################################################################

Step 20: Now start the database upgrade.

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba

Start the instance with the UPGRADE option:

SQL> startup upgrade

After this, create the SYSAUX tablespace:

SQL> CREATE TABLESPACE sysaux DATAFILE '/sp04/equs10g/oradata/sysaux01.dbf'
     SIZE 1000M REUSE
     EXTENT MANAGEMENT LOCAL
     SEGMENT SPACE MANAGEMENT AUTO
     ONLINE;

Now run the "catupgrd.sql" script and spool the output:

SQL> spool upgrade.log
SQL> @catupgrd.sql
SQL> spool off

Note: This script determines which upgrade scripts need to be run and then runs each necessary script. The upgrade scripts create and alter certain data dictionary tables. Check the spool file and verify; if any script errored out, you can run it again.

Now run "utlu102s.sql", the Post-upgrade Status Tool. It displays the status of the database components in the upgraded database:

SQL> @utlu102s.sql

#########################################################################################
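Before moving on, it is worth scanning the upgrade.log spool for errors — a small sketch (the file name is the spool name used in this document; adjust it if you spooled elsewhere):

```shell
# Scan the catupgrd.sql spool for ORA- / PLS- errors.
if grep -nE 'ORA-|PLS-' upgrade.log 2>/dev/null; then
  echo "errors found -- review them before continuing"
else
  echo "no ORA-/PLS- errors found in upgrade.log"
fi
```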

Step 21:

Restart the database and do the following

SQL> shutdown immediate   (do NOT use "shutdown abort"!)
SQL> startup restrict

Run olstrig.sql to re-create DML triggers:

SQL> @olstrig.sql

This script will error out if Oracle Label Security (shown as [Oracle Label Security VALID 10.2.0.1.0] in the component list when present) is not installed; in that case you can skip it.

Run utlrp.sql to compile all invalid objects:

SQL> @utlrp.sql

Now take the list of invalid objects and compare it with the old list:

spool invalid_post_new.lst
select substr(owner,1,12) owner,
       substr(object_name,1,30) object,
       substr(object_type,1,30) type,
       status
from   dba_objects
where  status <> 'VALID';
spool off

NOTE: If you have upgraded from version 9.2 to version 10.2 and find that the following views are invalid, they can be safely ignored (or dropped): SYS.V_$KQRPD, SYS.V_$KQRSD, SYS.GV_$KQRPD, SYS.GV_$KQRSD
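The pre/post comparison can be scripted as well — a sketch assuming the post-upgrade spool name used above and a hypothetical "invalid_pre.lst" spool taken before the upgrade (adjust the names to match your own files):

```shell
# Show objects that are invalid after the upgrade but were valid before.
if [ -f invalid_pre.lst ] && [ -f invalid_post_new.lst ]; then
  sort invalid_pre.lst      > /tmp/pre.sorted
  sort invalid_post_new.lst > /tmp/post.sorted
  comm -13 /tmp/pre.sorted /tmp/post.sorted   # lines only in the post-upgrade list
else
  echo "spool files not found in the current directory"
fi
```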

Restart the database again:

% sqlplus '/as sysdba'
SQL> shutdown
SQL> startup

#########################################################################################

Step 22: If you changed the value of NLS_LENGTH_SEMANTICS from "CHAR" to "BYTE" prior to the upgrade, set it back to "CHAR". Also revert the init parameters AQ_TM_PROCESSES and JOB_QUEUE_PROCESSES to their old values.

Now start the database and create the spfile:

SQL> startup
SQL> create spfile from pfile;

This creates an spfile as a copy of the init.ora file located in the $ORACLE_HOME/dbs directory.

Now modify the listener.ora according to the new ORACLE_HOME settings and then start the listener:

$ lsnrctl
LSNRCTL> start
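The listener.ora edit usually amounts to pointing the SID entry's ORACLE_HOME at the new 10g home. An illustrative fragment — the SID name comes from this system's datafile paths, but the home path is a placeholder, not taken from this setup:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ilequs)
      (ORACLE_HOME = /sp04/equs10g/product/10.2.0)   # new 10g home -- example path
    )
  )
```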

#########################################################################################

Step 23:

Enable cron and batch jobs

crontab < cronbackup

#########################################################################################

Step 24: For better execution plans, fresh optimizer statistics can now be gathered using DBMS_STATS.

#########################################################################################
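The Step 24 statistics gathering might look like the following — a sketch only; the parameter values are illustrative and should be tuned for your system:

```sql
-- Gather optimizer statistics for the whole database after the upgrade.
BEGIN
  DBMS_STATS.GATHER_DATABASE_STATS(
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/
```

In 10g, DBMS_STATS.GATHER_DICTIONARY_STATS is also available to refresh the data dictionary statistics.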

Step 25: If your database uses an RMAN recovery catalog, upgrade that as well:

% rman CATALOG rcat/rcat@oemdb
connected to recovery catalog database
PL/SQL package rcat.DBMS_RCVCAT version 08.00.04 in RCVCAT database is too old

RMAN> UPGRADE CATALOG
recovery catalog owner is rcat
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> UPGRADE CATALOG
recovery catalog upgraded to version 10.01.00
DBMS_RCVMAN package upgraded to version 10.01.00
DBMS_RCVCAT package upgraded to version 10.01.00

==========================================================================================

===================== Suggestion on 10g upgrade ==========================================

If you are applying any CPUs (Critical Patch Updates), you will have to re-run catcpu.sql for that particular CPU after upgrading to 10g. Also make sure there are no invalid objects after the 10g upgrade, and again after applying the CPU.
