Oracle 8i, 9i, 10g Queries, Information, and Tips
=================================================
0. QUICK INFO/VIEWS ON SESSIONS, LOCKS, AND UNDO/ROLLBACK INFORMATION IN A SINGLE INSTANCE:
===========================================================================================
SINGLE INSTANCE QUERIES:
========================

-- ---------------------------
-- 0.1 QUICK VIEW ON SESSIONS:
-- ---------------------------

SELECT substr(username, 1, 10), osuser, sql_address,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), sid, serial#, command,
       substr(program, 1, 30), substr(machine, 1, 30), substr(terminal, 1, 30)
FROM v$session;

SELECT sql_text, rows_processed
FROM v$sqlarea
WHERE address='';     -- fill in the sql_address found in v$session

-- -----------------------------------------------------
-- 0.2 QUICK VIEW ON LOCKS: (use sys.obj$ to find ID1:)
-- -----------------------------------------------------

First, let's take a look at some important dictionary views with respect to locks:

SQL> desc v$lock;
Name       Null?    Type
---------- -------- -------------
ADDR                RAW(8)
KADDR               RAW(8)
SID                 NUMBER
TYPE                VARCHAR2(2)
ID1                 NUMBER
ID2                 NUMBER
LMODE               NUMBER
REQUEST             NUMBER
CTIME               NUMBER
BLOCK               NUMBER
This view stores all information relating to locks in the database. The interesting columns in this view are sid (identifying the session holding or acquiring the lock), type, and the lmode/request pair. Important possible values of type are TM (DML or Table Lock), TX (Transaction), MR (Media Recovery) and ST (Disk Space Transaction). Exactly one of the lmode/request pair is either 0 or 1 while the other indicates the lock mode. If lmode is not 0 or 1, the session has acquired the lock; if request is other than 0 or 1, the session is waiting to acquire the lock. The possible values for lmode and request are:

1: Null
2: Row Share (SS)
3: Row Exclusive (SX)
4: Share (S)
5: Share Row Exclusive (SSX)
6: Exclusive (X)

If the lock type is TM, the column id1 is the object's id, and the name of the object can then be queried like so:

select name from sys.obj$ where obj# = id1;

A lock type of JI indicates that a materialized view is being refreshed.

SQL> desc v$locked_object;
Name            Null?    Type
--------------- -------- -------------
XIDUSN                   NUMBER
XIDSLOT                  NUMBER
XIDSQN                   NUMBER
OBJECT_ID                NUMBER
SESSION_ID               NUMBER
ORACLE_USERNAME          VARCHAR2(30)
OS_USER_NAME             VARCHAR2(30)
PROCESS                  VARCHAR2(12)
LOCKED_MODE              NUMBER
SQL> desc dba_waiters;
Name            Null?    Type
--------------- -------- -------------
WAITING_SESSION          NUMBER
HOLDING_SESSION          NUMBER
LOCK_TYPE                VARCHAR2(26)
MODE_HELD                VARCHAR2(40)
MODE_REQUESTED           VARCHAR2(40)
LOCK_ID1                 NUMBER
LOCK_ID2                 NUMBER
SQL> desc v$transaction;
Name            Null?    Type
--------------- -------- -------------
ADDR                     RAW(8)
XIDUSN                   NUMBER
XIDSLOT                  NUMBER
XIDSQN                   NUMBER
UBAFIL                   NUMBER
UBABLK                   NUMBER
UBASQN                   NUMBER
UBAREC                   NUMBER
STATUS                   VARCHAR2(16)
START_TIME               VARCHAR2(20)
START_SCNB               NUMBER
START_SCNW               NUMBER
START_UEXT               NUMBER
START_UBAFIL             NUMBER
START_UBABLK             NUMBER
START_UBASQN             NUMBER
START_UBAREC             NUMBER
SES_ADDR                 RAW(8)
FLAG                     NUMBER
SPACE                    VARCHAR2(3)
RECURSIVE                VARCHAR2(3)
NOUNDO                   VARCHAR2(3)
PTX                      VARCHAR2(3)
NAME                     VARCHAR2(256)
PRV_XIDUSN               NUMBER
PRV_XIDSLT               NUMBER
PRV_XIDSQN               NUMBER
PTX_XIDUSN               NUMBER
PTX_XIDSLT               NUMBER
PTX_XIDSQN               NUMBER
DSCN-B                   NUMBER
DSCN-W                   NUMBER
USED_UBLK                NUMBER
USED_UREC                NUMBER
LOG_IO                   NUMBER
PHY_IO                   NUMBER
CR_GET                   NUMBER
CR_CHANGE                NUMBER
START_DATE               DATE
DSCN_BASE                NUMBER
DSCN_WRAP                NUMBER
START_SCN                NUMBER
DEPENDENT_SCN            NUMBER
XID                      RAW(8)
PRV_XID                  RAW(8)
PTX_XID                  RAW(8)
Queries you can use in investigating locks:
===========================================

SELECT xidusn, object_id, session_id, oracle_username, os_user_name, process
FROM v$locked_object;

SELECT d.object_id, substr(object_name,1,20), l.session_id,
       l.oracle_username, l.locked_mode
FROM v$locked_object l, dba_objects d
WHERE d.object_id=l.object_id;

SELECT addr, kaddr, sid, type, id1, id2, lmode, block
FROM v$lock;

SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT s.sid, l.lmode, l.block, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10), substr(s.osuser, 1, 10),
       substr(s.program, 1, 30), s.command
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT p.spid, s.sid, p.addr, s.paddr, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10), s.command, substr(s.osuser, 1, 10),
       substr(s.machine, 1, 10)
FROM v$session s, v$process p
WHERE s.paddr=p.addr;

SELECT sid, serial#, command, substr(username, 1, 10), osuser, sql_address, lockwait,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), substr(program, 1, 30)
FROM v$session;

SELECT sid, serial#, username, lockwait
FROM v$session;

SELECT v.sid, v.block_gets, v.block_changes, w.username, w.osuser, w.terminal
FROM v$sess_io v, v$session w
WHERE v.sid=w.sid
ORDER BY v.sid;

SELECT * FROM dba_waiters;

SELECT waiting_session, holding_session, lock_type, mode_held
FROM dba_waiters;

SELECT p.spid unix_spid, s.sid sid, p.addr, s.paddr,
       substr(s.username, 1, 10) username, substr(s.schemaname, 1, 10) schemaname,
       s.command command, substr(s.osuser, 1, 10) osuser,
       substr(s.machine, 1, 25) machine
FROM v$session s, v$process p
WHERE s.paddr=p.addr
ORDER BY p.spid;
Usage of v$session_longops:
===========================

SQL> desc v$session_longops;

SID              NUMBER         Session identifier
SERIAL#          NUMBER         Session serial number
OPNAME           VARCHAR2(64)   Brief description of the operation
TARGET           VARCHAR2(64)   The object on which the operation is carried out
TARGET_DESC      VARCHAR2(32)   Description of the target
SOFAR            NUMBER         The units of work done so far
TOTALWORK        NUMBER         The total units of work
UNITS            VARCHAR2(32)   The units of measurement
START_TIME       DATE           The starting time of the operation
LAST_UPDATE_TIME DATE           Time when the statistics were last updated
TIMESTAMP        DATE           Timestamp
TIME_REMAINING   NUMBER         Estimate (in seconds) of time remaining for the operation to complete
ELAPSED_SECONDS  NUMBER         The number of elapsed seconds from the start of the operation
CONTEXT          NUMBER         Context
MESSAGE          VARCHAR2(512)  Statistics summary message
USERNAME         VARCHAR2(30)   User ID of the user performing the operation
SQL_ADDRESS      RAW(4 | 8)     Used with SQL_HASH_VALUE to identify the SQL statement associated with the operation
SQL_HASH_VALUE   NUMBER         Used with SQL_ADDRESS to identify the SQL statement associated with the operation
SQL_ID           VARCHAR2(13)   SQL identifier of the SQL statement associated with the operation
QCSID            NUMBER         Session identifier of the parallel coordinator

This view displays the status of various operations that run for longer than 6 seconds (in absolute time). These operations currently include many backup and recovery functions, statistics gathering, and query execution, and more operations are added with every Oracle release. To monitor query execution progress, you must be using the cost-based optimizer and you must:

- Set the TIMED_STATISTICS or SQL_TRACE parameter to true
- Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package

You can add information to this view about application-specific long-running operations by using the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure (see the sketch after the queries below).

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.time_remaining), to_char(l.elapsed_seconds),
       l.opname, l.target, l.target_desc, l.message, s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid
and l.serial# = s.serial#;

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar), to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       s.username, s.osuser, s.lockwait
from v$session_longops l, v$session s
where l.sid = s.sid
and l.serial# = s.serial#;

select substr(username,1,15), target, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       sofar, substr(message,1,70)
from v$session_longops;

select username, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       substr(message,1,90), to_char(time_remaining)
from v$session_longops;
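A minimal sketch of publishing your own progress through DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS; the operation name, the unit and the work loop are made up for illustration:

DECLARE
  rindex    BINARY_INTEGER;
  slno      BINARY_INTEGER;
  totalwork NUMBER := 100;   -- hypothetical total number of batches
BEGIN
  -- ask for a new row in v$session_longops
  rindex := dbms_application_info.set_session_longops_nohint;
  FOR i IN 1 .. totalwork LOOP
    -- ... process one batch of work here ...
    dbms_application_info.set_session_longops(
      rindex    => rindex,           -- row index, kept between calls
      slno      => slno,             -- internal slot number, managed by the package
      op_name   => 'Custom batch load',
      sofar     => i,
      totalwork => totalwork,
      units     => 'batches');
  END LOOP;
END;
/

While this block runs, the session shows up in the v$session_longops queries above, including TIME_REMAINING estimates.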
9i and 10g note:
================
Oracle has a view inside the Oracle data buffers. The view is called v$bh, and while v$bh was originally developed for Oracle Parallel Server (OPS), it can be used to show the number of data blocks in the data buffer for every object type in the database. The following query is especially interesting because you can see which objects are consuming the data buffer cache. In Oracle9i, you can use this information to segregate tables to separate RAM buffers with different blocksizes. Here is a sample query that shows data buffer utilization for individual objects in the database. Note that this script uses an Oracle9i scalar sub-query and will not work in pre-Oracle9i systems unless you comment out column c3.

column c0 heading 'Owner'                     format a15
column c1 heading 'Object|Name'               format a30
column c2 heading 'Number|of|Buffers'         format 999,999
column c3 heading 'Percentage|of|Data Buffer' format 999.99

select owner                                       c0,
       object_name                                 c1,
       count(1)                                    c2,
       (count(1)/(select count(*) from v$bh))*100  c3
from   dba_objects o, v$bh bh
where  o.object_id = bh.objd
and    o.owner not in ('SYS','SYSTEM','AURORA$JIS$UTILITY$')
group by owner, object_name
order by count(1) desc;
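As a sketch of the multiple-blocksize idea mentioned above: in 9i you can create a separate buffer cache for a non-default block size and move a hot table into it. The cache size, datafile name and table name below are made-up examples:

ALTER SYSTEM SET db_16k_cache_size = 64M;    -- buffer cache for 16K blocks (example size)

CREATE TABLESPACE ts_16k                     -- hypothetical tablespace
DATAFILE '/u01/oradata/db1/ts_16k_01.dbf' SIZE 500M
BLOCKSIZE 16K;

-- example table; its indexes become UNUSABLE and must be rebuilt after the move
ALTER TABLE scott.big_table MOVE TABLESPACE ts_16k;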
-- -----------------------------
-- 0.3 QUICK VIEW ON TEMP USAGE:
-- -----------------------------

select total_extents, used_extents, total_extents, current_users, tablespace_name
from v$sort_segment;

select username, user, sqladdr, extents, tablespace
from v$sort_usage;

SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2),
       a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

-- --------------------------------
-- 0.4 QUICK VIEW ON UNDO/ROLLBACK:
-- --------------------------------
SELECT substr(n.name, 1, 10), s.writes, s.gets, s.waits, s.wraps, s.extents,
       s.status, s.optsize, s.rssize
FROM v$rollname n, v$rollstat s
WHERE n.usn=s.usn;

SELECT substr(r.name, 1, 10) "RBS", s.sid, s.serial#, s.taddr, t.addr,
       substr(s.username, 1, 10) "USER", t.status, t.cr_get, t.phy_io,
       t.used_ublk, t.noundo, substr(s.program, 1, 15) "COMMAND"
FROM sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE t.addr = s.taddr
AND t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io;

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       initial_extent, next_extent, min_extents, max_extents, pct_increase
FROM dba_rollback_segs;

select 'FREE', count(*) from sys.fet$
union
select 'USED', count(*) from sys.uet$;

-- Quick view on active transactions
SELECT name, xacts "ACTIVE TRANSACTIONS"
FROM v$rollname, v$rollstat
WHERE v$rollname.usn = v$rollstat.usn;

SELECT to_char(begin_time, 'DD-MM-YYYY;HH24:MI'),
       to_char(end_time, 'DD-MM-YYYY;HH24:MI'),
       undotsn, undoblks, txncount, maxconcurrency AS "MAXCON"
FROM v$undostat
WHERE trunc(begin_time)=trunc(SYSDATE);

select TO_CHAR(MIN(begin_time),'DD-MON-YYYY HH24:MI:SS') "Begin Time",
       TO_CHAR(MAX(end_time),'DD-MON-YYYY HH24:MI:SS') "End Time",
       SUM(undoblks)       "Total Undo Blocks Used",
       SUM(txncount)       "Total Num Trans Executed",
       MAX(maxquerylen)    "Longest Query(in secs)",
       MAX(maxconcurrency) "Highest Concurrent TrCount",
       SUM(ssolderrcnt),
       SUM(nospaceerrcnt)
from v$undostat;

-- used_urec = Used Undo records
SELECT used_urec
FROM v$session s, v$transaction t
WHERE s.audsid=sys_context('userenv', 'sessionid')
AND s.taddr = t.addr;

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr;

SELECT v.sid, v.block_gets, v.block_changes, w.username, w.osuser, w.terminal
FROM v$sess_io v, v$session w
WHERE v.sid=w.sid
ORDER BY v.sid;
-- -----------------------
-- 0.5 SOME EXPLANATIONS:
-- -----------------------
-- Explanation of "COMMAND":

 1: CREATE TABLE             2: INSERT                  3: SELECT                 4: CREATE CLUSTER
 5: ALTER CLUSTER            6: UPDATE                  7: DELETE                 8: DROP CLUSTER
 9: CREATE INDEX            10: DROP INDEX             11: ALTER INDEX           12: DROP TABLE
13: CREATE SEQUENCE         14: ALTER SEQUENCE         15: ALTER TABLE           16: DROP SEQUENCE
17: GRANT                   18: REVOKE                 19: CREATE SYNONYM        20: DROP SYNONYM
21: CREATE VIEW             22: DROP VIEW              23: VALIDATE INDEX        24: CREATE PROCEDURE
25: ALTER PROCEDURE         26: LOCK TABLE             27: NO OPERATION          28: RENAME
29: COMMENT                 30: AUDIT                  31: NOAUDIT               32: CREATE DATABASE LINK
33: DROP DATABASE LINK      34: CREATE DATABASE        35: ALTER DATABASE        36: CREATE ROLLBACK SEGMENT
37: ALTER ROLLBACK SEGMENT  38: DROP ROLLBACK SEGMENT  39: CREATE TABLESPACE     40: ALTER TABLESPACE
41: DROP TABLESPACE         42: ALTER SESSION          43: ALTER USER            44: COMMIT
45: ROLLBACK                46: SAVEPOINT              47: PL/SQL EXECUTE        48: SET TRANSACTION
49: ALTER SYSTEM SWITCH LOG 50: EXPLAIN                51: CREATE USER           52: CREATE ROLE
53: DROP USER               54: DROP ROLE              55: SET ROLE              56: CREATE SCHEMA
57: CREATE CONTROL FILE     58: ALTER TRACING          59: CREATE TRIGGER        60: ALTER TRIGGER
61: DROP TRIGGER            62: ANALYZE TABLE          63: ANALYZE INDEX         64: ANALYZE CLUSTER
65: CREATE PROFILE          66: DROP PROFILE           67: ALTER PROFILE         68: DROP PROCEDURE
70: ALTER RESOURCE COST     71: CREATE SNAPSHOT LOG    72: ALTER SNAPSHOT LOG    73: DROP SNAPSHOT LOG
74: CREATE SNAPSHOT         75: ALTER SNAPSHOT         76: DROP SNAPSHOT         79: ALTER ROLE
85: TRUNCATE TABLE          86: TRUNCATE CLUSTER       88: ALTER VIEW            91: CREATE FUNCTION
92: ALTER FUNCTION          93: DROP FUNCTION          94: CREATE PACKAGE        95: ALTER PACKAGE
96: DROP PACKAGE            97: CREATE PACKAGE BODY    98: ALTER PACKAGE BODY    99: DROP PACKAGE BODY

-- Explanation of locks:

Lock modes (to_char(b.lmode)):
0: 'None'            /* Mon Lock equivalent */
1: 'Null'            /* N */
2: 'Row-S (SS)'      /* L */
3: 'Row-X (SX)'      /* R */
4: 'Share'           /* S */
5: 'S/Row-X (SRX)'   /* C */
6: 'Exclusive'       /* X */

Lock types:
TX: transaction enqueue, waiting
TM: DDL on object
MR: Media Recovery

A TX lock is acquired when a transaction initiates its first change and is held until the transaction does a COMMIT or ROLLBACK. It is used mainly as a queuing mechanism so that other sessions can wait for the transaction to complete. TM per-table locks are acquired during the execution of a transaction when referencing a table with a DML statement, so that the object is not dropped or altered during the execution of the transaction, if and only if the dml_locks parameter is non-zero.

LOCKS: locks on user objects, such as tables and rows.
LATCHES: locks on system objects, such as shared data structures in memory and data dictionary rows.
LOCKS can be shared or exclusive; a LATCH is always exclusive.
UL = user locks, placed by application code using, for example, the DBMS_LOCK package.

DML LOCKS: data manipulation: table locks, row locks.
DDL LOCKS: preserve the structure of an object (no simultaneous DML and DDL statements).

DML locks:
- row lock (TX): for rows (insert, update, delete)
- row lock plus table lock: a row lock that also prevents DDL statements
- table lock (TM): acquired automatically on insert, update, delete, to prevent DDL on the table

Table lock modes:
S:   share lock
RS:  row share
RSX: row share exclusive
RX:  row exclusive
X:   exclusive (other transactions can only SELECT)
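The mode decodes listed above can be put directly into one monitoring query; a minimal sketch joining v$lock to v$session:

SELECT s.sid, s.serial#, substr(s.username, 1, 10) username,
       l.type,
       decode(l.lmode,   0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                         4,'Share', 5,'S/Row-X (SRX)', 6,'Exclusive',
                         to_char(l.lmode))   mode_held,
       decode(l.request, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                         4,'Share', 5,'S/Row-X (SRX)', 6,'Exclusive',
                         to_char(l.request)) mode_requested,
       l.id1, l.id2, l.block
FROM   v$lock l, v$session s
WHERE  l.sid = s.sid
ORDER BY l.block DESC, s.sid;

Rows with block=1 at the top are sessions blocking others; rows with a non-zero mode_requested are the waiters.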
Internal Implementation of Oracle Locks (Enqueues)

The Oracle server uses locks to provide concurrent access to shared resources, whereas it uses latches to provide exclusive, short-term access to memory structures inside the SGA. Latches also prevent more than one process from executing the same piece of code at the same time. A latch is a simple lock which provides serialized, exclusive-only access to a memory area in the SGA. Oracle doesn't use latches to provide shared access to resources because that would increase CPU usage. Latches are used for big memory structures and allow the operations required for locking the sub-structures. Shared resources can be tables, transactions, redo threads, etc. An enqueue can be local or global. In a single instance, enqueues are local to that instance. There are also global enqueues, like the ST enqueue, which must be held before any space transaction can occur on any tablespace in RAC. ST enqueues are held only for dictionary-managed tablespaces. These Oracle locks are generally known as enqueues, because whenever a session requests a lock on a shared resource structure, its lock data structure is queued to one of the linked lists attached to that resource structure (the resource structure is discussed later). Before proceeding further with this topic, here is a little background on Oracle locks. Oracle locks can be applied to compound and simple objects, like tables and the cache buffer. Locks can be held in different modes: shared, exclusive, null, sub-shared, sub-exclusive and shared sub-exclusive. Depending on the type of object, different modes are applicable. For example, for a compound object like a table with rows, all of the above modes could be applicable, whereas for simple objects only the first three apply. These lock modes don't have any importance of their own; what matters is how they are used by the subsystem. The lock modes (compatibility between locks) define how sessions can get a lock on an object.
-- Explanation of Waits:

SQL> desc v$system_event;
Name
------------------------
EVENT
TOTAL_WAITS
TOTAL_TIMEOUTS
TIME_WAITED
AVERAGE_WAIT
TIME_WAITED_MICRO

v$system_event:
This view displays the count (total_waits) of all wait events since startup of the instance. If timed_statistics is set to true, the sum of the wait times for all events is also displayed, in the column time_waited. The unit of time_waited is one hundredth of a second. Since 10g, an additional column (time_waited_micro) measures wait times in millionths of a second. total_waits where event='buffer busy waits' is equal to the sum of count in v$waitstat. v$enqueue_stat can be used to break down waits on the enqueue wait event. While this view totals all events in an instance, v$session_event shows the same information broken down per session.

select event, total_waits, time_waited
from v$system_event
where event like '%file%'
order by total_waits desc;

column c1 heading 'Event|Name'          format a30
column c2 heading 'Total|Waits'         format 999,999,999
column c3 heading 'Seconds|Waiting'     format 9,999,999
column c4 heading 'Total|Timeouts'      format 999,999,999
column c5 heading 'Average|Wait|(secs)' format 99.999
ttitle 'System-wide Wait Analysis|for current wait events'

select event              c1,
       total_waits        c2,
       time_waited / 100  c3,
       total_timeouts     c4,
       average_wait / 100 c5
from   sys.v_$system_event
where  event not in (
       'dispatcher timer',
       'lock element cleanup',
       'Null event',
       'parallel query dequeue wait',
       'parallel query idle wait - Slaves',
       'pipe get',
       'PL/SQL lock timer',
       'pmon timer',
       'rdbms ipc message',
       'slave wait',
       'smon timer',
       'SQL*Net break/reset to client',
       'SQL*Net message from client',
       'SQL*Net message to client',
       'SQL*Net more data to client',
       'virtual circuit status',
       'WMON goes to sleep'
)
AND event not like 'DFS%'
AND event not like '%done%'
AND event not like '%Idle%'
AND event not like 'KXFX%'
order by c2 desc
;
Snapshot method to determine the true wait events during a workload:

Create table beg_system_event as select * from v$system_event;
-- Run the workload through the system or user task
Create table end_system_event as select * from v$system_event;

-- Issue SQL to determine the true wait events:
SELECT b.event,
       (e.total_waits - b.total_waits) total_waits,
       (e.total_timeouts - b.total_timeouts) total_timeouts,
       (e.time_waited - b.time_waited) time_waited
FROM beg_system_event b, end_system_event e
WHERE b.event = e.event;

drop table beg_system_event;
drop table end_system_event;

Cumulative info, after startup:
-------------------------------
SELECT *
FROM v$system_event
WHERE event = 'enqueue';
SELECT * FROM v$sysstat WHERE class=4;

select c.name, a.addr, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, a.wait_time, b.pid
from v$latch a, v$latchholder b, v$latchname c
where a.addr = b.laddr(+)
and a.latch# = c.latch#
order by a.latch#;

-- -----------------------------------------------
-- 0.6 QUICK INFO ON HIT RATIO, SHARED POOL etc..
-- -----------------------------------------------

-- Hit ratio buffer cache:
SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';
SELECT * FROM V$SGA;

-- free memory shared pool:
SELECT * FROM v$sgastat
WHERE name = 'free memory';

-- hit ratio shared pool:
SELECT gethits, gets, gethitratio
FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem)
FROM v$db_object_cache;

-- finding literals in the shared pool:
SELECT substr(sql_text,1,50) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,50)
HAVING count(*) > 30
ORDER BY 2;

-- ----------------------------------------
-- 0.7 Quick Table and object information
-- ----------------------------------------

SELECT distinct substr(t.owner, 1, 25), substr(t.table_name,1,50),
       substr(t.tablespace_name,1,20), t.chain_cnt, t.logging, s.relative_fno
FROM dba_tables t, dba_segments s
WHERE t.owner not in ('SYS','SYSTEM','OUTLN','DBSNMP','WMSYS','ORDSYS','ORDPLUGINS','MDSYS','CTXSYS','XDB')
AND t.table_name=s.segment_name
AND s.segment_type='TABLE'
AND s.segment_name like 'CI_PAY%';

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM dba_segments
WHERE extents > max_extents - 100
AND owner not in ('SYS','SYSTEM');
SELECT segment_name, owner, tablespace_name, extents
FROM dba_segments
WHERE owner='SALES'     -- use the correct schema here
AND extents > 700;
SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE owner='RM_LIVE'
AND created > SYSDATE-1;

SELECT owner, substr(object_name, 1, 30), object_type, created, last_ddl_time, status
FROM dba_objects
WHERE status='INVALID';

Compare 2 owners:
-----------------
select table_name from dba_tables
where owner='MIS_OWNER'
and table_name not in
(SELECT table_name from dba_tables where owner='MARPAT');

Table and column information:
-----------------------------
select substr(table_name, 1, 3)  schema,
       table_name,
       column_name,
       substr(data_type, 1, 1)   data_type
from user_tab_columns
where column_name='ENV_ID'
and (table_name like 'ALG%'
  or table_name like 'STG%'
  or table_name like 'ODS%'
  or table_name like 'DWH%'
  or table_name like 'MKM%')
order by decode(substr(table_name, 1, 3),
                'ALG', 10, 'STG', 20, 'ODS', 30, 'DWH', 40, 'MKM', 50, 60),
         table_name,
         column_id;

Check on existence of JServer:
------------------------------
select count(*) from all_objects
where object_name = 'DBMS_JAVA';

This should return a count of 3.

-- ---------------------------------------
-- 0.8 QUICK INFO ON PRODUCT INFORMATION:
-- ---------------------------------------

SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM NLS_DATABASE_PARAMETERS;
SELECT * FROM NLS_SESSION_PARAMETERS;
SELECT * FROM NLS_INSTANCE_PARAMETERS;
SELECT * FROM V$OPTION;
SELECT * FROM V$LICENSE;
SELECT * FROM V$VERSION;

Oracle RDBMS releases:
----------------------
9.2 is the terminal release of Oracle 9i (Release 2); 9.2.0.1 is its base patch level. Normally it is patched to 9.2.0.4. As from October, patch 9.2.0.5 and, a little later, 9.2.0.6 became available. 9.2.0.4 is patch ID 3095277.
9.0.1.4
8.1.7
8.0.6
7.3.4
IS ORACLE 32BIT or 64BIT?
-------------------------
Starting with version 8, Oracle began shipping 64-bit versions of its RDBMS product on UNIX platforms that support 64-bit software.

IMPORTANT: 64-bit Oracle can only be installed on operating systems that are 64-bit enabled.

In general, if Oracle is 64-bit, '64bit' will be displayed in the opening banners of Oracle executables such as 'svrmgrl', 'exp' and 'imp'. It will also be displayed in the headers of Oracle trace files. If '64bit' is not displayed at these locations, it can be assumed that Oracle is 32-bit.

Or, from the OS level:

% cd $ORACLE_HOME/bin
% file oracle

...if 64-bit, '64bit' will be indicated.
To verify the wordsize of a downloaded patchset:
------------------------------------------------
The filename of the downloaded patchset usually dictates which version and wordsize of Oracle it should be applied against. For instance: p1882450_8172_SOLARIS64.zip is the 8.1.7.2 patchset for 64-bit Oracle on Solaris. Also refer to the README that is included with the patch or patch set.

Win2k Server Certifications:
----------------------------
OS            Product             Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
2000          10g                 N/A             N/A      Certified    Yes           None        None   None
2000          9.2 32-bit -Opteron N/A             N/A      Certified    Yes           None        None   None
2000          9.2                 N/A             N/A      Certified    Yes           None        None   None
2000          9.0.1               N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.7 (8i)          N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.6 (8i)          N/A             N/A      Desupported  Yes           None        N/A    N/A
2000, Beta 3  8.1.5 (8i)          N/A             N/A      Withdrawn    Yes           N/A         N/A    N/A
Solaris Server certifications:
------------------------------
OS    Product            Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
9     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     10g 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    10g 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
8     9.2 64-bit         N/A             N/A      Certified    Yes           None        None   None
10    9.2 64-bit         N/A             N/A      Projected    None          N/A         N/A    N/A
2.6   9.2                N/A             N/A      Certified    Yes           None        None   None
9     9.2                N/A             N/A      Certified    Yes           None        None   None
8     9.2                N/A             N/A      Certified    Yes           None        None   None
7     9.2                N/A             N/A      Certified    Yes           None        None   None
10    9.2                N/A             N/A      Projected    None          N/A         N/A    N/A
9     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1 64-bit       N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6   9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
8     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
7     9.0.1              N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i) 64-bit  N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6   8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
9     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
8     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A
7     8.1.7 (8i)         N/A             N/A      Desupported  Yes           None        N/A    N/A

Everything below: desupported.

Oracle clients:
---------------
Client/server interoperability (Yes = supported, Was = was supported but now desupported, No = not supported):

Client \ Server  10.1.0  9.2.0  9.0.1  8.1.7  8.1.6  8.1.5  8.0.6  8.0.5  7.3.4
10.1.0           Yes     Yes    Was    Yes    No     No     No     No     No
9.2.0            Yes     Yes    Was    Yes    No     No     Was    No     Was
9.0.1            Was     Was    Was    Was    Was    No     Was    No     Was
8.1.7            Yes     Yes    Was    Yes    Was    Was    Was    Was    Was
8.1.6            No      No     Was    Was    Was    Was    Was    Was    Was
8.1.5            No      No     No     Was    Was    Was    Was    Was    Was
8.0.6            No      Was    Was    Was    Was    Was    Was    Was    Was
8.0.5            No      No     No     Was    Was    Was    Was    Was    Was
7.3.4            No      Was    Was    Was    Was    Was    Was    Was    Was
-- -------------------------------------------------------
-- 0.9 QUICK INFO WITH REGARDS TO LOGS AND BACKUP RECOVERY:
-- -------------------------------------------------------

SELECT * FROM V$BACKUP;

-- from the controlfile:
SELECT file#, substr(name, 1, 30), status, checkpoint_change#
FROM V$DATAFILE;

SELECT d.file#, d.status, d.checkpoint_change#, b.status, b.change#,
       to_char(b.time,'DD-MM-YYYY;HH24:MI'), substr(d.name, 1, 40)
FROM V$DATAFILE d, V$BACKUP b
WHERE d.file#=b.file#;

-- from the file header:
SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#
FROM V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40),
       completion_time, first_change#, first_time
FROM V$ARCHIVED_LOG
WHERE completion_time > SYSDATE-2;

SELECT recid, first_change#, sequence#, next_change#
FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM V$DATABASE;

-- Which file needs recovery:
SELECT *
FROM V$RECOVER_FILE;
-- ----------------------------------------------------------------------------
-- 0.10 QUICK INFO WITH REGARDS TO TABLESPACES, DATAFILES, REDO LOGFILES etc..:
-- ----------------------------------------------------------------------------

-- online redo log information: V$LOG, V$LOGFILE:

SELECT l.group#, l.members, l.status, l.bytes, substr(lf.member, 1, 50)
FROM V$LOG l, V$LOGFILE lf
WHERE l.group#=lf.group#;

SELECT thread#, sequence#, first_change#, first_time,
       to_char(first_time, 'DD-MM-YYYY;HH24:MI')
FROM V$LOG_HISTORY;
-- WHERE SEQUENCE# ...

SELECT group#, archived, status FROM V$LOG;

-- tablespace free-used:
SELECT Total.name "Tablespace Name",
       Free_space,
       (total_space-Free_space) Used_space,
       total_space
FROM (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
      FROM sys.dba_free_space
      GROUP BY tablespace_name) Free,
     (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
      FROM sys.v_$datafile a, sys.v_$tablespace b
      WHERE a.ts# = b.ts#
      GROUP BY b.name) Total
WHERE Free.Tablespace_name = Total.name;
SELECT substr(file_name, 1, 70), tablespace_name
FROM dba_data_files;

-- ---------------------------------------------
-- 0.11 AUDIT Statements:
-- ---------------------------------------------

select v.sql_text, v.first_load_time, v.parsing_schema_id, v.disk_reads,
       v.rows_processed, v.cpu_time, b.username
from v$sqlarea v, dba_users b
where v.first_load_time > '2008-05-12'
and v.parsing_schema_id=b.user_id
order by v.first_load_time;

-- ---------------------------------------------
-- 0.12 EXAMPLE OF DYNAMIC SQL:
-- ---------------------------------------------

select 'UPDATE '||t.table_name||' SET '||c.column_name||'=REPLACE('||
       c.column_name||','''',CHR(7));'
from user_tab_columns c, user_tables t
where c.table_name=t.table_name
and t.num_rows>0
and c.data_length>10
and data_type like '%CHAR%'
ORDER BY t.table_name desc;

create public synonym EMPLOYEE for HARRY.EMPLOYEE;

select 'create public synonym '||table_name||' for CISADM.'||table_name||';'
from dba_tables where owner='CISADM';

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||table_name||' TO CISUSER;'
from dba_tables where owner='CISADM';

select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from dba_tables where owner='CISADM';
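Instead of spooling the generated statements and running them by hand, the same idea can be scripted with EXECUTE IMMEDIATE. A minimal sketch, assuming the CISADM/CISREAD schemas from the examples above:

BEGIN
  FOR r IN (SELECT table_name FROM dba_tables WHERE owner = 'CISADM') LOOP
    -- build and run each GRANT directly; no spool file needed
    EXECUTE IMMEDIATE 'GRANT SELECT ON CISADM.' || r.table_name || ' TO CISREAD';
  END LOOP;
END;
/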
-- ---------------------------------------------
-- 0.13 ORACLE MOST COMMON DATATYPES:
-- ---------------------------------------------
Example: NUMBER as INTEGER in comparison to SMALLINT
----------------------------------------------------
SQL> create table a
  2  (id number(3));
Table created.

SQL> create table b
  2  (id smallint);
Table created.

SQL> create table c
  2  (id integer);
Table created.

SQL> insert into a values (5);
1 row created.

SQL> insert into a values (999);
1 row created.

SQL> insert into a
  2  values
  3  (1001);
(1001)
 *
ERROR at line 3:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into b values (5);
1 row created.

SQL> insert into b values (99);
1 row created.

SQL> insert into b values (999);
1 row created.

SQL> insert into b values (1001);
1 row created.

SQL> insert into b values (65536);
1 row created.

SQL> insert into b values (1048576);
1 row created.

SQL> insert into b values (1099511627776);
1 row created.

SQL> insert into b values (9.5);
1 row created.

SQL> insert into b values (100.23);
1 row created.

SQL> select * from b;

        ID
----------
         5
        99
       999
      1001
     65536
   1048576
1.0995E+12
        10
       100

9 rows selected.
SMALLINT is really not that "small": it is implemented as NUMBER(38,0), so it accepts very large integers and rounds fractional input to whole numbers.
SQL> insert into c values (5);
1 row created.

SQL> insert into c values (9999);
1 row created.

SQL> insert into c values (92.7);
1 row created.

SQL> insert into c values (1099511627776);
1 row created.

SQL> select * from c;

        ID
----------
         5
      9999
        93
1.0995E+12

INTEGER behaves the same way: it is NUMBER(38,0) as well, which is why 92.7 comes back as 93.
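How these ANSI types are stored can be verified in the data dictionary; a quick check against the demo tables above:

SELECT table_name, column_name, data_type, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name IN ('A','B','C');

-- B (smallint) and C (integer) both come back as data_type NUMBER with
-- data_scale 0; A shows NUMBER with data_precision 3.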
========================
1. NOTES ON PERFORMANCE:
========================

1.1 POOLS:
==========

-- SHARED POOL:
-- ------------

A literal SQL statement is considered one which uses literals in the predicates rather than bind variables, where the value of the literal is likely to differ between executions of the statement.

Eg 1:
SELECT * FROM emp WHERE ename='CLARK';
is used by the application instead of
SELECT * FROM emp WHERE ename=:bind1;

The second form is referred to as a sharable SQL statement, as it can be shared between sessions.

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool, then it has to be parsed fully. Eg: Oracle has to allocate memory for the statement from the shared pool, check the statement syntactically and semantically, etc. This is referred to as a hard parse, and it is very expensive in terms of both CPU used and the number of latch gets performed.

-- Soft Parse
If a session issues a SQL statement which is already in the shared pool AND it can use an existing version of that statement, then this is known as a 'soft parse'. As far as the application is concerned, it has asked to parse the statement.

If two statements are textually identical but cannot be shared, then these are called 'versions' of the same statement. If Oracle matches to a statement with many versions, it has to check each version in turn to see if it is truly identical to the statement currently being parsed. Hence high version counts are best avoided. The best approach is that all SQL should be sharable, unless it is ad hoc or infrequently used SQL, where it is important to give the CBO as much information as possible in order for it to produce a good execution plan.

-- Eliminating Literal SQL
If you have an existing application, it is unlikely that you can eliminate all literal SQL, but you should be prepared to eliminate some if it is causing problems. By looking at the V$SQLAREA view it is possible to see which literal statements are good candidates for converting to use bind variables. The following query shows SQL in the SGA where there are a large number of similar statements:

SELECT substr(sql_text,1,40) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query is looking for different statements whose first 40 characters are the same, which have each been executed only a few times, and of which there are at least 30 different occurrences in the shared pool. This query uses the idea that it is common for literal statements to begin "SELECT col1,col2,col3 FROM table WHERE ..." with the leading portion of each statement being the same.
-- Avoid Invalidations
Some specific operations will change the state of cursors to INVALID. These operations directly modify the context of the objects associated with the cursors: TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables or indexes, and grant changes on underlying objects. The associated cursors stay in the SQLAREA, but the next time they are referenced they will be reloaded and fully reparsed, so global performance is impacted. The following query can help identify the cursors concerned:

SELECT substr(sql_text, 1, 40) "SQL", invalidations
FROM v$sqlarea
ORDER BY invalidations DESC;

-- CURSOR_SHARING parameter (8.1.6 onwards)
CURSOR_SHARING is a new parameter introduced in Oracle 8.1.6. It should be used with caution in that release. If this parameter is set to FORCE, literals will be replaced by system-generated bind variables where possible. For multiple similar statements which differ only in the literals used, this allows the cursors to be shared even though the application-supplied SQL uses literals. The parameter can be set dynamically at the system or session level:

ALTER SESSION SET cursor_sharing = FORCE;
or
ALTER SYSTEM SET cursor_sharing = FORCE;

or it can be set in the init.ora file.

Note: As the FORCE setting causes system-generated bind variables to be used in place of literals, a different execution plan may be chosen by the cost based optimizer (CBO), as it no longer has the literal values available to it when costing the best execution plan.

In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. This enhancement improves the usability of the parameter for situations where FORCE would normally cause a different, undesired execution plan. With CURSOR_SHARING=SIMILAR, Oracle determines which literals are "safe" for substitution with bind variables. This will result in some SQL not being shared in an attempt to provide a more efficient execution plan.

-- SESSION_CACHED_CURSORS parameter
SESSION_CACHED_CURSORS is a numeric parameter which can be set at the instance level or at the session level using the command:

ALTER SESSION SET session_cached_cursors = NNN;

The value NNN determines how many 'cached' cursors there can be in your session. Whenever a statement is parsed, Oracle first looks at the statements pointed to by your private session cache; if a sharable version of the statement exists, it can be used. This provides shortcut access to frequently parsed statements, using less CPU and far fewer latch gets than a soft or hard parse.
To get placed in the session cache, the same statement has to be parsed 3 times within the same cursor; a pointer to the shared cursor is then added to your session cache. If all session cache cursors are in use, then the least recently used entry is discarded. If you do not have this parameter set already, it is advisable to set it to a starting value of about 50. The statistics section of the bstat/estat report includes a value for 'session cursor cache hits' which shows whether the cursor cache is giving any benefit. The size of the cursor cache can then be increased or decreased as necessary. SESSION_CACHED_CURSORS is particularly useful with Oracle Forms applications when forms are frequently opened and closed.
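To see whether the session cursor cache described above is actually paying off on a running instance, the relevant statistics can be compared directly; a simple check:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits',
                'parse count (total)',
                'parse count (hard)');

-- a high 'session cursor cache hits' value relative to the total parse
-- count indicates the cache is being used effectively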
-- SHARED_POOL_RESERVED_SIZE parameter
There are quite a few notes explaining SHARED_POOL_RESERVED_SIZE already in circulation. The parameter was introduced in Oracle 7.1.5 and provides a means of reserving a portion of the shared pool for large memory allocations. The reserved area comes out of the shared pool itself. From a practical point of view, one should set SHARED_POOL_RESERVED_SIZE to about 10% of SHARED_POOL_SIZE, unless either the shared pool is very large OR SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value:

If the shared pool is very large, then 10% may waste a significant amount of memory when a few Mb will suffice. If SHARED_POOL_RESERVED_MIN_ALLOC has been lowered, then many space requests may be eligible to be satisfied from this portion of the shared pool, and so 10% may be too little. It is easy to monitor the space usage of the reserved area using the V$SHARED_POOL_RESERVED view, which has a column FREE_SPACE.

-- SHARED_POOL_RESERVED_MIN_ALLOC parameter
In Oracle8i this parameter is hidden. SHARED_POOL_RESERVED_MIN_ALLOC should generally be left at its default value, although in certain cases values of 4100 or 4200 may help relieve some contention on a heavily loaded shared pool.

-- SHARED_POOL_SIZE parameter
SHARED_POOL_SIZE controls the size of the shared pool itself. The size of the shared pool can impact performance. If it is too small, then it is likely that sharable information will be flushed from the pool and later need to be reloaded (rebuilt). If there is heavy use of literal SQL and the shared pool is too large, then over time a lot of small chunks of memory can build up on the internal memory freelists, causing the shared pool latch to be held longer, which in turn can impact performance. In this situation a smaller shared pool may perform better than a larger one. This problem is greatly reduced in 8.0.6 and in 8.1.6 onwards due to an enhancement in those releases. NB: The shared pool itself should never be made so large that paging or swapping occurs, as performance can then decrease by many orders of magnitude.

-- _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)
This is a hidden parameter which was introduced in Oracle 8.1.5. The parameter is included here as the default setting has caused some problems with SQL sharability. Setting this parameter to 0 can avoid these issues, which result in multiple versions of statements in the shared pool. Eg: Add the following to the init.ora file:

# _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues
# See Note:62143.1 for details
_sqlexec_progression_cost=0

Note that a side effect of setting this to '0' is that the V$SESSION_LONGOPS view is not populated by long running queries.

-- MTS, Shared Server and XA
The multi-threaded server (MTS) adds to the load on the shared pool and can contribute to any problems, as the User Global Area (UGA) resides in the shared pool. This is also true of XA sessions in Oracle7, as their UGA is located in the shared pool. (In Oracle8/8i, XA sessions do NOT put their UGA in the shared pool.) In Oracle8 the Large Pool can be used for MTS, reducing its impact on shared pool activity; however, memory allocations in the Large Pool still make use of the "shared pool latch". See the separate note for a description of the Large Pool. Using dedicated connections rather than MTS causes the UGA to be allocated out of process private memory rather than the shared pool. Private memory allocations do not use the "shared pool latch", so a switch from MTS to dedicated connections can help reduce contention in some cases. In Oracle9i, MTS was renamed to "Shared Server". For the purposes of the shared pool, the behaviour is essentially the same.

Useful SQL for looking at memory and Shared Pool problems
---------------------------------------------------------

SGA layout:
-----------
SELECT * FROM V$SGA;

Free memory shared pool:
------------------------
SELECT * FROM v$sgastat
WHERE name = 'free memory';

Hit ratio shared pool:
----------------------
SELECT gethits, gets, gethitratio
FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem)
FROM v$db_object_cache;

Statistics:
-----------
SELECT class, value, name FROM v$sysstat;

Executions:
-----------
SELECT substr(sql_text,1,90) "SQL", count(*), sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions > 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 10
ORDER BY 2;

The values 90, 5 and 10 are example values; this query looks for different statements whose first 90 characters are the same, which have each been executed more than 5 times, and of which there are at least 10 different occurrences in the shared pool.

V$SQLAREA:
----------
SQL_TEXT          VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM      NUMBER          Amount of shared memory used by a cursor. If multiple child cursors exist, then the sum of all shared memory used by all child cursors.
PERSISTENT_MEM    NUMBER          Fixed amount of memory used for the lifetime of an open cursor. If multiple child cursors exist, the fixed sum of memory used for the lifetime of all the child cursors.
RUNTIME_MEM       NUMBER          Fixed amount of memory required during execution of a cursor. If multiple child cursors exist, the fixed sum of all memory required during execution of all the child cursors.
SORTS             NUMBER          Sum of the number of sorts that were done for all the child cursors
VERSION_COUNT     NUMBER          Number of child cursors that are present in the cache under this parent
LOADED_VERSIONS   NUMBER          Number of child cursors that are present in the cache and have their context heap (KGL heap 6) loaded
OPEN_VERSIONS     NUMBER          The number of child cursors that are currently open under this current parent
USERS_OPENING     NUMBER          The number of users that have any of the child cursors open
FETCHES           NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS        NUMBER          Total number of executions, totalled over all the child cursors
USERS_EXECUTING   NUMBER          Total number of users executing the statement over all child cursors
LOADS             NUMBER          The number of times the object was loaded or reloaded
FIRST_LOAD_TIME   VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS     NUMBER          Total number of invalidations over all the child cursors
PARSE_CALLS       NUMBER          The sum of all parse calls to all the child cursors under this parent
DISK_READS        NUMBER          The sum of the number of disk reads over all child cursors
BUFFER_GETS       NUMBER          The sum of buffer gets over all child cursors
ROWS_PROCESSED    NUMBER          The total number of rows processed on behalf of this SQL statement
COMMAND_TYPE      NUMBER          The Oracle command type definition
OPTIMIZER_MODE    VARCHAR2(10)    Mode under which the SQL statement is executed
PARSING_USER_ID   NUMBER          The user ID of the user that has parsed the very first cursor under this parent
PARSING_SCHEMA_ID NUMBER          The schema ID that was used to parse this child cursor
KEPT_VERSIONS     NUMBER          The number of child cursors that have been marked to be kept using the DBMS_SHARED_POOL package
ADDRESS           RAW(4)          The address of the handle to the parent for this cursor
HASH_VALUE        NUMBER          The hash value of the parent statement in the library cache
MODULE            VARCHAR2(64)    Name of the module that was executing when the SQL statement was first parsed, as set by DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH       NUMBER          The hash value of the module that is named in the MODULE column
ACTION            VARCHAR2(64)    Name of the action that was executing when the SQL statement was first parsed, as set by DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH       NUMBER          The hash value of the action that is named in the ACTION column
SERIALIZABLE_ABORTS NUMBER        Number of times the transaction fails to serialize, producing ORA-08177 errors, totalled over all the child cursors
IS_OBSOLETE       VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can happen if the number of child cursors is too large.
CHILD_LATCH       NUMBER          Child latch number that is protecting the cursor

V$SQL:
------
V$SQL lists statistics on the shared SQL area without the GROUP BY clause and contains one row for each child of the original SQL text entered.

Column            Datatype        Description
SQL_TEXT          VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM      NUMBER          Amount of shared memory used by this child cursor (in bytes)
PERSISTENT_MEM    NUMBER          Fixed amount of memory used for the lifetime of this child cursor (in bytes)
RUNTIME_MEM       NUMBER          Fixed amount of memory required during the execution of this child cursor
SORTS             NUMBER          Number of sorts that were done for this child cursor
LOADED_VERSIONS   NUMBER          Indicates whether the context heap is loaded (1) or not (0)
OPEN_VERSIONS     NUMBER          Indicates whether the child cursor is locked (1) or not (0)
USERS_OPENING     NUMBER          Number of users executing the statement
FETCHES           NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS        NUMBER          Number of executions that took place on this object since it was brought into the library cache
USERS_EXECUTING   NUMBER          Number of users executing the statement
LOADS             NUMBER          Number of times the object was either loaded or reloaded
FIRST_LOAD_TIME   VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS     NUMBER          Number of times this child cursor has been invalidated
PARSE_CALLS       NUMBER          Number of parse calls for this child cursor
DISK_READS        NUMBER          Number of disk reads for this child cursor
BUFFER_GETS       NUMBER          Number of buffer gets for this child cursor
ROWS_PROCESSED    NUMBER          Total number of rows the parsed SQL statement returns
COMMAND_TYPE      NUMBER          Oracle command type definition
OPTIMIZER_MODE    VARCHAR2(10)    Mode under which the SQL statement is executed
OPTIMIZER_COST    NUMBER          Cost of this query given by the optimizer
PARSING_USER_ID   NUMBER          User ID of the user who originally built this child cursor
PARSING_SCHEMA_ID NUMBER          Schema ID that was used to originally build this child cursor
KEPT_VERSIONS     NUMBER          Indicates whether this child cursor has been marked to be kept pinned in the cache using the DBMS_SHARED_POOL package
ADDRESS           RAW(4)          Address of the handle to the parent for this cursor
TYPE_CHK_HEAP     RAW(4)          Descriptor of the type check heap for this child cursor
HASH_VALUE        NUMBER          Hash value of the parent statement in the library cache
PLAN_HASH_VALUE   NUMBER          Numerical representation of the SQL plan for this cursor. Comparing one PLAN_HASH_VALUE to another easily identifies whether or not two plans are the same (rather than comparing the two plans line by line).
CHILD_NUMBER      NUMBER          Number of this child cursor
MODULE            VARCHAR2(64)    Name of the module that was executing when the SQL statement was first parsed, as set by DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH       NUMBER          Hash value of the module listed in the MODULE column
ACTION            VARCHAR2(64)    Name of the action that was executing when the SQL statement was first parsed, as set by DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH       NUMBER          Hash value of the action listed in the ACTION column
SERIALIZABLE_ABORTS NUMBER        Number of times the transaction fails to serialize, producing ORA-08177 errors, per cursor
OUTLINE_CATEGORY  VARCHAR2(64)    If an outline was applied during construction of the cursor, then this column displays the category of that outline. Otherwise the column is left blank.
CPU_TIME          NUMBER          CPU time (in microseconds) used by this cursor for parsing/executing/fetching
ELAPSED_TIME      NUMBER          Elapsed time (in microseconds) used by this cursor for parsing/executing/fetching
OUTLINE_SID       NUMBER          Outline session identifier
CHILD_ADDRESS     RAW(4)          Address of the child cursor
SQLTYPE           NUMBER          Denotes the version of the SQL language used for this statement
REMOTE            VARCHAR2(1)     (Y/N) Identifies whether the cursor is remote mapped or not
OBJECT_STATUS     VARCHAR2(19)    Status of the cursor (VALID/INVALID)
LITERAL_HASH_VALUE NUMBER         Hash value of the literals which are replaced with system-generated bind variables and are to be matched, when CURSOR_SHARING is used. This is not the hash value for the SQL statement. If CURSOR_SHARING is not used, then the value is 0.
LAST_LOAD_TIME    VARCHAR2(19)
IS_OBSOLETE       VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can happen if the number of child cursors is too large.
CHILD_LATCH       NUMBER          Child latch number that is protecting the cursor
Checking for high version counts:
---------------------------------
SELECT address, hash_value, version_count, users_opening, users_executing,
       substr(sql_text,1,40) "SQL"
FROM v$sqlarea
WHERE version_count > 10;

"Versions" of a statement occur where the SQL is character-for-character identical but the underlying objects or binds etc. are different.

Finding statements which use lots of shared pool memory:
---------------------------------------------------------
SELECT substr(sql_text,1,60) "Stmt", count(*),
       sum(sharable_mem)  "Mem",
       sum(users_opening) "Open",
       sum(executions)    "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,60)
HAVING sum(sharable_mem) > 20000;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(sharable_mem)  "Mem",
       sum(users_opening) "Open",
       sum(executions)    "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

SELECT substr(sql_text,1,100) "Stmt", count(*),
       sum(executions) "Exec"
FROM v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

Here 20000 stands in for MEMSIZE, which should be about 10% of the shared pool size in bytes. This should show whether there are similar literal statements, or multiple versions of a statement, which account for a large portion of the memory in the shared pool.
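Large, frequently used packages that keep getting flushed can be pinned in the shared pool with the DBMS_SHARED_POOL package (created by dbmspool.sql, which may first need to be run as SYS). A small sketch; the package name is just an example:

-- pin a package in the shared pool so it is not flushed out
exec DBMS_SHARED_POOL.KEEP('SYS.STANDARD');

-- list objects currently kept
SELECT owner, name, type, kept
FROM   v$db_object_cache
WHERE  kept = 'YES';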
1.2 Statistics:
---------------
- Rule based / Cost based
- apply EXPLAIN PLAN to the query
- ANALYZE command:

ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS FOR ALL INDEXES;
ANALYZE INDEX scott.indx1 COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE TABLE EMPLOYEE DELETE STATISTICS;

- DBMS_UTILITY.ANALYZE_SCHEMA() procedure:

DBMS_UTILITY.ANALYZE_SCHEMA (
   schema           VARCHAR2,
   method           VARCHAR2,
   estimate_rows    NUMBER   DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT NULL,
   method_opt       VARCHAR2 DEFAULT NULL);

DBMS_UTILITY.ANALYZE_DATABASE (
   method           VARCHAR2,
   estimate_rows    NUMBER   DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT NULL,
   method_opt       VARCHAR2 DEFAULT NULL);

method = compute, estimate, delete

To execute:
exec DBMS_UTILITY.ANALYZE_SCHEMA('CISADM','COMPUTE');
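On 8i and later, DBMS_STATS is the preferred interface for gathering optimizer statistics (the ANALYZE examples above remain valid but are less flexible). A minimal sketch using the same schema name as above:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'CISADM',
    estimate_percent => 10,       -- sample 10 percent of the rows
    cascade          => TRUE);    -- also gather statistics on the indexes
END;
/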
1.3 Storage parameters:
-----------------------
Per segment: PCTFREE, PCTUSED, and the number and size of extents in the STORAGE clause. Tune these to the workload: very low updates, update-heavy OLTP, or insert-only tables each call for different settings.
1.4 Rebuild indexes on a regular basis:
---------------------------------------
alter index SCOTT.EMPNO_INDEX rebuild
tablespace INDEX
storage (initial 5M next 5M pctincrease 0);

You should next use the ANALYZE TABLE ... COMPUTE STATISTICS command.

1.5 Is an index used in a query?:
---------------------------------
The WHERE clause of a query must use the 'leading column' of (one of the) index(es). Suppose an index 'indx1' exists on EMPLOYEE(city, state, zip), and a user issues the query:

SELECT .. FROM EMPLOYEE WHERE state='NY'

Then this query will not use that index! Therefore you must pay attention to the leading column of any index, as the examples below illustrate.
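A few hypothetical predicates against the same indx1 on EMPLOYEE(city, state, zip) to make the rule concrete:

SELECT * FROM EMPLOYEE WHERE city = 'Albany';                  -- leading column present: indx1 can be used
SELECT * FROM EMPLOYEE WHERE city = 'Albany' AND state = 'NY'; -- leading column present: indx1 can be used
SELECT * FROM EMPLOYEE WHERE state = 'NY' AND zip = '12207';   -- leading column absent: full table scan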
1.6 Set transaction parameters:
-------------------------------
ONLY ORACLE 7, 8, 8i:

Suppose you must perform an action which will generate a lot of redo and rollback. If you want to influence which rollback segment will be used by your transaction, you can use the statement:

set transaction use rollback segment SEGMENT_NAME;
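A sketch of the complete sequence; the segment and table names are made-up examples:

COMMIT;                                        -- SET TRANSACTION must be the first statement of the transaction
SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;  -- rbs_big: a hypothetical large rollback segment
DELETE FROM sales_history WHERE year < 1995;   -- the bulk operation
COMMIT;                                        -- ends the transaction; the assignment lasts only until here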
1.7 Reduce fragmentation of a dictionary-managed tablespace:
------------------------------------------------------------
alter tablespace DATA coalesce;

1.8 Normalisation of tables:
----------------------------
The more tables are 'normalized', the higher the performance costs for queries joining the tables.

1.9 Commits after every so many rows:
-------------------------------------
declare
  i number := 0;
  cursor s1 is SELECT * FROM tab1 WHERE col1 = 'value1' FOR UPDATE;
begin
  for c1 in s1 loop
    update tab1 set col1 = 'value2' WHERE current of s1;
    i := i + 1;
    if i > 1000 then      -- commit after every 1000 records
      commit;
      i := 0;
    end if;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1000;
begin
  while i>1 loop
    insert into TEST values (1, sysdate+i, 'joop');
    i := i - 1;
    commit;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST2
( I     NUMBER       NULL,
  ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);
declare
  i number := 1;
  j date;
  k varchar2(10);
begin
  while i<1000000 loop
    j:=sysdate+i;
    k:=TO_CHAR(SYSDATE+i,'DAY');
    insert into TEST2 values (i, 1, j, k, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;
end;
/

-- ------------------------------
CREATE TABLE TEST3
( ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  VORIG VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
  l varchar2(10);
begin
  while i<1000 loop
    j:=sysdate+i;
    k:=TO_CHAR(SYSDATE+i,'DAY');
    l:=TO_CHAR(SYSDATE+i-1,'DAY');
    insert into TEST3 (ID,DATUM,DAG,VORIG,NAME) values (i, j, k, l, 'joop');
    i := i + 1;
    commit;
  end loop;
  commit;
end;
/

1.10 Explain plan command, autotrace:
-------------------------------------

1. Explain plan command:
------------------------
First execute the utlxplan.sql script. This script will create the PLAN_TABLE table, needed for storage of performance data. Now it is possible to do the following:

-- optionally, delete the former performance data
DELETE FROM plan_table WHERE statement_id = 'XXX';
COMMIT;

-- now you can run the query that is to be analyzed
EXPLAIN PLAN
SET STATEMENT_ID = 'XXX'
FOR SELECT * FROM EMPLOYEE WHERE city > 'Y%';

To view the results, you can use the utlxpls.sql script.

2. set autotrace on / off
-------------------------
This also uses the PLAN_TABLE, and the PLUSTRACE role must exist. If desired, the plustrce.sql script can be run (as SYS).

Remark: execution plan / access path for a join query:
- nested loop: one table is the driving table, accessed with a full table scan or via an index, and the second table is accessed using an index on that second table, based on the WHERE clause.
- merge join: if there is no usable index, all rows are fetched, sorted, and joined into a result set.
- hash join: certain init.ora parameters must be in place (HASH_JOIN_ENABLED=TRUE, HASH_AREA_SIZE=..., or via ALTER SESSION SET HASH_JOIN_ENABLED=TRUE). Usually very effective for a join of a small table with a large table. The small table becomes the driving table in memory, and what follows is an algorithm resembling the nested loop. It can also be forced with a hint:

SELECT /*+ USE_HASH(COMPANY) */ COMPANY.Name, SUM(Dollar_Amount)
FROM COMPANY, SALES
WHERE COMPANY.Company_ID = SALES.Company_ID
GROUP BY COMPANY.Name;

3. SQL trace and TKPROF
-----------------------
SQL trace can be activated via the init.ora or via:

ALTER SESSION SET SQL_TRACE=TRUE;

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(12, 398, FALSE);
DBMS_SUPPORT.START_TRACE_IN_SESSION(12,398);

Turn SQL tracing on in session 448. The trace information gets written to user_dump_dest:
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,TRUE);

Turn SQL tracing off in session 448:
SQL> exec dbms_system.set_sql_trace_in_session(448,2288,FALSE);

Init.ora parameters:
MAX_DUMP_FILE_SIZE   in OS blocks
SQL_TRACE=TRUE       (can produce very large files; applies to all sessions)
USER_DUMP_DEST=      location of the trace files
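The raw trace file is then formatted with the tkprof utility from the OS prompt; the trace file and output names below are examples:

% tkprof ora_0_12345.trc /tmp/trace_report.txt explain=scott/tiger sys=no sort=fchela

-- explain= : runs EXPLAIN PLAN for each statement as the given user
-- sys=no   : suppresses recursive SYS statements in the report
-- sort=    : orders the statements, here by elapsed fetch time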
ALL_ROWS, FIRST_ROWS, CHOOSE, RULE FULL, ROWID, CLUSTER, HASH, INDEX
SELECT /*+ INDEX(emp emp_pk) */ *
FROM emp
WHERE empno=12345;

SELECT /*+ RULE */ ename, dname
FROM emp, dept
WHERE emp.deptno=dept.deptno;
==============================================
3. Data dictionary queries regarding performance:
==============================================

3.1 Reads and writes in files:
------------------------------
V$FILESTAT, V$DATAFILE

- Relative File I/O (1)

SELECT fs.file#, df.file#, substr(df.name, 1, 50), fs.phyrds, fs.phywrts, df.status
FROM v$filestat fs, v$datafile df
WHERE fs.file#=df.file#;

- Relative File I/O (2)

set pagesize 60 linesize 80 newpage 0 feedback off
ttitle skip centre 'Datafile IO Weights' skip centre
column Total_IO format 999999999
column Weight format 999.99
column file_name format A40
break on drive skip 2
compute sum of Weight on Drive

SELECT substr(DF.Name, 1, 6) Drive, DF.Name File_Name,
       FS.Phyblkrd+FS.Phyblkwrt Total_IO,
       100*(FS.Phyblkrd+FS.Phyblkwrt) / MaxIO Weight
FROM V$FILESTAT FS, V$DATAFILE DF,
     (SELECT MAX(Phyblkrd+Phyblkwrt) MaxIO FROM V$FILESTAT)
WHERE DF.File#=FS.File#
ORDER BY Weight desc
/

3.2 Undocumented init parameters:
---------------------------------
SELECT * FROM SYS.X$KSPPI WHERE SUBSTR(KSPPINM,1,1) = '_';

3.3 Will an index be used or not?:
----------------------------------
Look at DBA_TAB_COLUMNS.NUM_DISTINCT and DBA_TABLES.NUM_ROWS:
if num_distinct comes close to num_rows, an index is favored over a full table scan.
Look at DBA_INDEXES / USER_INDEXES.CLUSTERING_FACTOR:
if the clustering_factor is close to the number of blocks, the table is well
ordered with respect to the index.

3.4 Quick overview of the buffer cache hit ratio:
-------------------------------------------------
Hit ratio = (LR - PR) / LR

Suppose there are hardly any Physical Reads (PR), i.e. PR=0; then the
hit ratio = LR/LR = 1, and no blocks are read from disk.
In practice, the hit ratio should on average be > 0.8 - 0.9.

V$sess_io, v$sysstat, and v$session can be consulted to determine the hit ratio.

SELECT name, value
FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
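As a worked example (numbers made up for illustration): with db block gets =
10,000, consistent gets = 90,000, and physical reads = 5,000, the logical reads
are LR = 10,000 + 90,000 = 100,000, so the hit ratio = (100,000 - 5,000) /
100,000 = 0.95, which is within the desired range.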
SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE pr.name = 'physical reads'
AND dbg.name = 'db block gets'
AND cg.name = 'consistent gets';

-- more extensive queries regarding the hit ratio
CLEAR
SET HEAD ON
SET VERIFY OFF

col HitRatio format 999.99 heading 'Hit Ratio'
col CGets format 9999999999999 heading 'Consistent Gets'
col DBGets format 9999999999999 heading 'DB Block Gets'
col PhyGets format 9999999999999 heading 'Physical Reads'
SELECT substr(Username, 1, 10), v$sess_io.sid, consistent_gets, block_gets, physical_reads, 100*(consistent_gets+block_gets-physical_reads)/ (consistent_gets+block_gets) HitRatio FROM v$session, v$sess_io WHERE v$session.sid = v$sess_io.sid AND (consistent_gets+block_gets) > 0 AND Username is NOT NULL / SELECT 'Hit Ratio' Database, cg.value CGets, db.value DBGets, pr.value PhyGets,
100*(cg.value+db.value-pr.value)/(cg.value+db.value) HitRatio FROM v$sysstat db, v$sysstat cg, v$sysstat pr WHERE db.name = 'db block gets' AND cg.name = 'consistent gets' AND pr.name = 'physical reads' /
3.6 Which transactions are active?:
-----------------------------------
SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS",
       decode(t.space, 'YES', 'SPACE TX',
         decode(t.recursive, 'YES', 'RECURSIVE TX',
           decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
       )) status
FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE t.xidusn = r.usn
AND t.ses_addr = s.saddr;

3.7 SIDs, resource load, and locks:
-----------------------------------
SELECT sid, lmode, ctime, block
FROM v$lock;

SELECT s.sid, substr(s.username, 1, 10), substr(s.schemaname, 1, 10),
       substr(s.osuser, 1, 10), substr(s.program, 1, 10), s.command,
       l.lmode, l.block
FROM v$session s, v$lock l
WHERE s.sid=l.sid;

SELECT l.addr, s.saddr, l.sid, s.sid, l.type, l.lmode, s.status,
       substr(s.schemaname, 1, 10), s.lockwait, s.row_wait_obj#
FROM v$lock l, v$session s
WHERE l.addr=s.saddr;

SELECT sid, substr(owner, 1, 10), substr(object, 1, 10)
FROM v$access;

SID     Session number that is accessing an object
OWNER   Owner of the object
OBJECT  Name of the object
TYPE    Type identifier for the object

SELECT substr(s.username, 1, 10), s.sid, t.log_io, t.phy_io
FROM v$session s, v$transaction t
WHERE t.ses_addr=s.saddr;

3.8 Latch use in the SGA (locks on processes):
----------------------------------------------
SELECT c.name, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, b.pid
FROM v$latch a, v$latchholder b, v$latchname c
WHERE a.addr = b.laddr(+)
AND a.latch# = c.latch#
AND (c.name like 'redo%' or c.name like 'row%')
ORDER BY a.latch#;

column latch_name format a40

SELECT name latch_name, gets, misses,
       round(decode(gets-misses,0,1,gets-misses)/
             decode(gets,0,1,gets),3) hit_ratio
FROM v$latch
WHERE name = 'redo allocation';

column latch_name format a40

SELECT name latch_name, immediate_gets, immediate_misses,
       round(decode(immediate_gets-immediate_misses,0,1,
                    immediate_gets-immediate_misses)/
             decode(immediate_gets,0,1,immediate_gets),3) hit_ratio
FROM v$latch
WHERE name = 'redo copy';

column name format a40
column value format a10

SELECT name, value
FROM v$parameter
WHERE name in ('log_small_entry_max_size','log_simultaneous_copies','cpu_count');

-- latches and locks at a glance
set pagesize 23
set pause on
set pause 'Hit any key...'

col sid format 999999
col serial# format 999999
col username format a12 trunc
col process format a8 trunc
col terminal format a12 trunc
col type format a12 trunc
col lmode format a4 trunc
col lrequest format a4 trunc
col object format a73 trunc
/

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM v$sess_io v, V$session w
WHERE v.SID=w.SID
ORDER BY v.SID;

SQL> desc v$sess_io
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 SID                                    NUMBER
 BLOCK_GETS                             NUMBER
 CONSISTENT_GETS                        NUMBER
 PHYSICAL_READS                         NUMBER
 BLOCK_CHANGES                          NUMBER
 CONSISTENT_CHANGES                     NUMBER
SQL> desc v$session;
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 SADDR                                  RAW(8)
 SID                                    NUMBER
 SERIAL#                                NUMBER
 AUDSID                                 NUMBER
 PADDR                                  RAW(8)
 USER#                                  NUMBER
 USERNAME                               VARCHAR2(30)
 COMMAND                                NUMBER
 OWNERID                                NUMBER
 TADDR                                  VARCHAR2(16)
 LOCKWAIT                               VARCHAR2(16)
 STATUS                                 VARCHAR2(8)
 SERVER                                 VARCHAR2(9)
 SCHEMA#                                NUMBER
 SCHEMANAME                             VARCHAR2(30)
 OSUSER                                 VARCHAR2(30)
 PROCESS                                VARCHAR2(12)
 MACHINE                                VARCHAR2(64)
 TERMINAL                               VARCHAR2(30)
 PROGRAM                                VARCHAR2(48)
 TYPE                                   VARCHAR2(10)
 SQL_ADDRESS                            RAW(8)
 SQL_HASH_VALUE                         NUMBER
 SQL_ID                                 VARCHAR2(13)
 SQL_CHILD_NUMBER                       NUMBER
 PREV_SQL_ADDR                          RAW(8)
 PREV_HASH_VALUE                        NUMBER
 PREV_SQL_ID                            VARCHAR2(13)
 PREV_CHILD_NUMBER                      NUMBER
 PLSQL_ENTRY_OBJECT_ID                  NUMBER
 PLSQL_ENTRY_SUBPROGRAM_ID              NUMBER
 PLSQL_OBJECT_ID                        NUMBER
 PLSQL_SUBPROGRAM_ID                    NUMBER
 MODULE                                 VARCHAR2(48)
 MODULE_HASH                            NUMBER
 ACTION                                 VARCHAR2(32)
 ACTION_HASH                            NUMBER
 CLIENT_INFO                            VARCHAR2(64)
 FIXED_TABLE_SEQUENCE                   NUMBER
 ROW_WAIT_OBJ#                          NUMBER
 ROW_WAIT_FILE#                         NUMBER
 ROW_WAIT_BLOCK#                        NUMBER
 ROW_WAIT_ROW#                          NUMBER
 LOGON_TIME                             DATE
 LAST_CALL_ET                           NUMBER
 PDML_ENABLED                           VARCHAR2(3)
 FAILOVER_TYPE                          VARCHAR2(13)
 FAILOVER_METHOD                        VARCHAR2(10)
 FAILED_OVER                            VARCHAR2(3)
 RESOURCE_CONSUMER_GROUP                VARCHAR2(32)
 PDML_STATUS                            VARCHAR2(8)
 PDDL_STATUS                            VARCHAR2(8)
 PQ_STATUS                              VARCHAR2(8)
 CURRENT_QUEUE_DURATION                 NUMBER
 CLIENT_IDENTIFIER                      VARCHAR2(64)
 BLOCKING_SESSION_STATUS                VARCHAR2(11)
 BLOCKING_INSTANCE                      NUMBER
 BLOCKING_SESSION                       NUMBER
 SEQ#                                   NUMBER
 EVENT#                                 NUMBER
 EVENT                                  VARCHAR2(64)
 P1TEXT                                 VARCHAR2(64)
 P1                                     NUMBER
 P1RAW                                  RAW(8)
 P2TEXT                                 VARCHAR2(64)
 P2                                     NUMBER
 P2RAW                                  RAW(8)
 P3TEXT                                 VARCHAR2(64)
 P3                                     NUMBER
 P3RAW                                  RAW(8)
 WAIT_CLASS_ID                          NUMBER
 WAIT_CLASS#                            NUMBER
 WAIT_CLASS                             VARCHAR2(64)
 WAIT_TIME                              NUMBER
 SECONDS_IN_WAIT                        NUMBER
 STATE                                  VARCHAR2(19)
 SERVICE_NAME                           VARCHAR2(64)
 SQL_TRACE                              VARCHAR2(8)
 SQL_TRACE_WAITS                        VARCHAR2(5)
 SQL_TRACE_BINDS                        VARCHAR2(5)
SQL>
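As a quick use of the 10g columns listed above (a sketch, in the spirit of the
lock queries in section 3.7), blocked sessions and their blockers can be found
directly:

SELECT sid, serial#, username, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;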
======================================================== 4. IMP and EXP, IMPDP and EXPDP, and SQL*Loader Examples ======================================================== 4.1 EXPDP and IMPDP examples: =============================
New in Oracle 10g are the impdp and expdp utilities.

EXPDP practice/practice PARFILE=par1.par
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir:hr_schema.explog
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS,
that allow the DBA to monitor the progress of all Data Pump operations.

SELECT owner_name
,      job_name
,      operation
,      job_mode
,      state
,      degree
,      attached_sessions
FROM   dba_datapump_jobs;

SELECT DPS.owner_name
,      DPS.job_name
,      S.osuser
FROM   dba_datapump_sessions DPS
,      v$session S
WHERE  S.saddr = DPS.saddr;

Example 1. EXPDP parfile
------------------------
JOB_NAME=NightlyDRExport
DIRECTORY=export_dir
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:NightlyDRExport.explog
FULL=Y
PARALLEL=2
FILESIZE=650M
CONTENT=ALL
STATUS=30

Example 2. EXPDP parfile, only for getting an estimate of the export size
-------------------------------------------------------------------------
JOB_NAME=EstimateOnly
DIRECTORY=export_dir
LOGFILE=export_dir:EstimateOnly.explog
FULL=Y
CONTENT=DATA_ONLY
ESTIMATE=STATISTICS
ESTIMATE_ONLY=Y
STATUS=60
Example 3. EXPDP parfile, only 1 schema, writing to multiple files with the %U variable, limited to 650M per file
-----------------------------------------------------------------------------------------------------------------
JOB_NAME=SH_TABLESONLY
DIRECTORY=export_dir
DUMPFILE=export_dir:SHONLY_%U.dmp
LOGFILE=export_dir:SH_TablesOnly.explog
SCHEMAS=SH
PARALLEL=2
FILESIZE=650M
STATUS=60

Example 4. EXPDP parfile, multiple tables, writing to multiple files with the %U variable, limited file size
------------------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_REFRESH
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_REFRESH.explog
STATUS=20
FILESIZE=132K
CONTENT=ALL
TABLES=HR.EMPLOYEES,HR.DEPARTMENTS,HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS

Example 5. EXPDP parfile, exports all objects in the HR schema, including metadata, as of just before midnight on April 10, 2005
--------------------------------------------------------------------------------------------------------------------------------
JOB_NAME=HREXPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HREXPORT_%U.dmp
LOGFILE=export_dir:2005-04-10_HRExport.explog
SCHEMAS=HR
CONTENT=ALL
FLASHBACK_TIME="TO_TIMESTAMP('04-10-2005 23:59', 'MM-DD-YYYY HH24:MI')"

Example 6. IMPDP parfile, imports data +only+ into selected tables in the HR schema; multiple dump files will be used
----------------------------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_IMPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_IMPORT.implog
STATUS=20
TABLES=HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE
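A parfile such as Example 6 would be run as shown below, and a running Data
Pump job can be re-attached for monitoring or control (the parfile name is an
assumption; the job name comes from Example 6):

impdp system/******** PARFILE=hr_payroll_import.par

impdp system/******** ATTACH=HR_PAYROLL_IMPORT
Import> STATUS
Import> STOP_JOB=IMMEDIATE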
Example 7. IMPDP parfile, 3 tables in the SH schema are the only tables to be refreshed; these tables will be truncated before loading
---------------------------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=RefreshSHTables
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:RefreshSHTables.implog
STATUS=30
CONTENT=DATA_ONLY
SCHEMAS=SH
INCLUDE=TABLE:"IN('COUNTRIES','CUSTOMERS','PRODUCTS','SALES')"
TABLE_EXISTS_ACTION=TRUNCATE

Example. IMPDP parfile, generates SQLFILE output showing the DDL statements; note that this DDL is +not+ executed!
-------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=GenerateImportDDL
DUMPFILE=export_dir:hr_payroll_refresh_%U.dmp
LOGFILE=export_dir:GenerateImportDDL.implog
SQLFILE=export_dir:GenerateImportDDL.sql
INCLUDE=TABLE

Example: schedule a procedure which uses DBMS_DATAPUMP
------------------------------------------------------
BEGIN
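  -- A minimal sketch of what such a procedure could do, using the
  -- DBMS_DATAPUMP API; the job name, directory, dump file, and schema below
  -- are assumptions for illustration only, written as a nested block so it
  -- fits inside the enclosing BEGIN/END:
  DECLARE
    h1 NUMBER;
  BEGIN
    h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                             job_name  => 'NIGHTLY_HR_EXPORT');
    DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'hr_schema.dmp',
                           directory => 'EXPORT_DIR');
    DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR',
                                  value => 'IN (''HR'')');
    DBMS_DATAPUMP.START_JOB(h1);
    DBMS_DATAPUMP.DETACH(handle => h1);
  END;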
END;
/

======================================
How to use the NETWORK_LINK parameter:
======================================

Note 1:
=======
Lora, the DBA at Acme Bank, is at the center of attention in a high-profile
meeting of the bank's top management team. The objective is to identify ways
of enabling end users to slice and dice the data in the company's main data
warehouse. At the meeting, one idea presented is to create several small data
marts, each based on a particular functional area, that
can each be used by specialized teams. To effectively implement the data mart approach, the data specialists must get data into the data marts quickly and efficiently. The challenge the team faces is figuring out how to quickly refresh the warehouse data to the data marts, which run on heterogeneous platforms. And that's why Lora is at the meeting. What options does she propose for moving the data? An experienced and knowledgeable DBA, Lora provides the meeting attendees with three possibilities, as follows: Using transportable tablespaces Using Data Pump (Export and Import) Pulling tablespaces This article shows Lora's explanation of these options, including their implementation details and their pros and cons. Transportable Tablespaces: Lora starts by describing the transportable tablespaces option. The quickest way to transport an entire tablespace to a target system is to simply transfer the tablespace's underlying files, using FTP (file transfer protocol) or rcp (remote copy). However, just copying the Oracle data files is not sufficient; the target database must recognize and import the files and the corresponding tablespace before the tablespace data can become available to end users. Using transportable tablespaces involves copying the tablespace files and making the data available in the target database. A few checks are necessary before this option can be considered. First, for a tablespace TS1 to be transported to a target system, it must be self-contained. That is, all the indexes, partitions, and other dependent segments of the tables in the tablespace must be inside the tablespace. Lora explains that if a set of tablespaces contains all the dependent segments, the set is considered to be self-contained. For instance, if tablespaces TS1 and TS2 are to be transferred as a set and a table in TS1 has an index in TS2, the tablespace set is self-contained. However, if another index of a table in TS1 is in tablespace TS3, the tablespace set (TS1, TS2) is not self-contained. To transport the tablespaces, Lora proposes using the Data Pump Export utility in Oracle Database 10g. Data Pump is Oracle's next-generation data transfer tool, which replaces the earlier Oracle Export (EXP) and Import (IMP) tools. Unlike those older tools, which use regular SQL to extract and insert data, Data Pump uses proprietary APIs that bypass the SQL buffer, making the process extremely fast. In addition, Data Pump can extract specific objects, such as a particular
stored procedure or a set of tables from a particular tablespace. Data Pump
Export and Import are controlled by jobs, which the DBA can pause, restart,
and stop at will.

Lora has run a test before the meeting to see if Data Pump can handle Acme's
requirements. Lora's test transports the TS1 and TS2 tablespaces as follows:

1. Check that the set of TS1 and TS2 tablespaces is self-contained. Issue the
   following command (the tablespace list is passed as a single
   comma-separated string):

BEGIN
  SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TS1,TS2', TRUE);
END;
/
2. Identify any nontransportable sets. If no rows are selected, the tablespaces are self-contained: SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS; no rows selected 3. Ensure the tablespaces are read-only: SELECT STATUS FROM DBA_TABLESPACES WHERE TABLESPACE_NAME IN ('TS1','TS2'); STATUS --------READ ONLY READ ONLY 4. Transfer the data files of each tablespace to the remote system, into the directory /u01/oradata, using a transfer mechanism such as FTP or rcp. 5. In the target database, create a database link to the source database (named srcdb in the line below). CREATE DATABASE LINK srcdb USING 'srcdb'; 6. In the target database, import the tablespaces into the database, using Data Pump Import. impdp lora/lora123 TRANSPORT_DATAFILES="'/u01/oradata/ts1_1.dbf','/u01/oradata/ts2_1.dbf'" NETWORK_LINK='srcdb'
TRANSPORT_TABLESPACES=\(TS1,TS2\) NOLOGFILE=Y This step makes the TS1 and TS2 tablespaces and their data available in the target database. Note that Lora doesn't export the metadata from the source database. She merely specifies the value srcdb, the database link to the source database, for the parameter NETWORK_LINK in the impdp command above. Data Pump Import fetches the necessary metadata from the source across the database link and re-creates it in the target. 7. Finally, make the TS1 and TS2 tablespaces in the source database read-write. ALTER TABLESPACE TS1 READ WRITE; ALTER TABLESPACE TS2 READ WRITE; Note 2: ======= One of the most significant characteristics of an import operation is its mode, because the mode largely determines what is imported. The specified mode applies to the source of the operation, either a dump file set or another database if the NETWORK_LINK parameter is specified. The NETWORK_LINK parameter initiates a network import. This means that the impdp client initiates the import request, typically to the local database. That server contacts the remote source database referenced by the database link in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to the target database. There are no dump files involved. In the following example, the source_database_link would be replaced with the name of a valid database link that must already exist. impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT This example results in an import of the employees table (excluding constraints) from the source database. The log file is written to dpump_dir1, specified on the DIRECTORY parameter.
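The same mechanism also works for an ordinary schema-level network import,
without any dump file; a sketch (the database link, schema, and target schema
names are assumptions):

impdp lora/lora123 SCHEMAS=HR NETWORK_LINK=srcdb REMAP_SCHEMA=HR:HR_COPY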
4.2 Export / Import examples: ============================= In all Oracle versions 7,8,8i,9i,10g you can use the exp and imp utilities. exp system/manager file=expdat.dmp compress=Y owner=(HARRY, PIET)
exp system/manager file=hr.dmp owner=HR indexes=Y
exp system/manager file=expdat.dmp TABLES=(john.SALES)

imp system/manager file=hr.dmp full=Y buffer=64000 commit=Y
imp system/manager file=expdat.dmp fromuser=ted touser=john indexes=N commit=Y buffer=64000
imp rm_live/rm file=dump.dmp tables=(employee)
imp system/manager file=expdat.dmp fromuser=ted touser=john buffer=4194304

c:\> cd [oracle_db_home]\bin
c:\> set nls_lang=american_america.WE8ISO8859P15
# export NLS_LANG=AMERICAN_AMERICA.UTF8
# export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
c:\> imp system/manager fromuser=mis_owner touser=mis_owner file=[yourexport.dmp]

From Oracle8i onwards, one can use the QUERY= export parameter to selectively
unload a subset of the data from a table. Look at this example:

exp scott/tiger tables=emp query=\"WHERE deptno=10\"

-- Export metadata only:
The Export utility is used to export the metadata describing the objects
contained in the transported tablespace. For our example scenario, the Export
command could be:

EXP TRANSPORT_TABLESPACE=y TABLESPACES=ts_temp_sales FILE=jan_sales.dmp

This operation will generate an export file, jan_sales.dmp. The export file
will be small, because it contains only metadata. In this case, the export
file will contain information describing the table temp_jan_sales, such as
the column names, column datatypes, and all other information that the target
Oracle database will need in order to access the objects in ts_temp_sales.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

Extended example:
-----------------
CASE 1:
=======
We create a user Albert on a 10g DB. This user will create a couple of tables
with referential constraints (PK-FK relations). Then we will export this user,
drop the user, and do an import, and see what we have after the import.

-- User:
create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
QUOTA 10M ON sysaux
QUOTA 20M ON users
QUOTA 50M ON TS_CDC
;
-- GRANTS:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
GRANT connect TO albert;
GRANT resource TO albert;

-- connect albert/albert

-- create tables
create table LOC                -- table of locations
( LOCID int,
  CITY  varchar2(16),
  constraint pk_loc primary key (locid)
);

create table DEPT               -- table of departments
( DEPID    int,
  DEPTNAME varchar2(16),
  LOCID    int,
  constraint pk_dept primary key (depid),
  constraint fk_dept_loc foreign key (locid) references loc(locid)
);

create table EMP                -- table of employees
( EMPID   int,
  EMPNAME varchar2(16),
  DEPID   int,
  constraint pk_emp primary key (empid),
  constraint fk_emp_dept foreign key (depid) references dept(depid)
);

-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME
     from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME
------------------------------ - ------------------------------ -----------------
FK_EMP_DEPT                    R EMP                            PK_DEPT
FK_DEPT_LOC                    R DEPT                           PK_LOC
PK_LOC                         P LOC
PK_DEPT                        P DEPT
-- make an export C:\oracle\expimp>exp '/@test10g2 as sysdba' file=albert.dat owner=albert Export: Release 10.2.0.1.0 - Production on Sat Mar 1 08:03:59 2008 Copyright (c) 1982, 2005, Oracle.
All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 Production With the Partitioning, OLAP and Data Mining options Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set server uses AL32UTF8 character set (possible charset conversion) About to export specified users ... . exporting pre-schema procedural objects and actions . exporting foreign function library names for user ALBERT . exporting PUBLIC type synonyms . exporting private type synonyms . exporting object type definitions for user ALBERT About to export ALBERT's objects ... . exporting database links . exporting sequence numbers . exporting cluster definitions . about to export ALBERT's tables via Conventional Path ... . . exporting table DEPT 5 rows exported . . exporting table EMP 7 rows exported . . exporting table LOC 4 rows exported . exporting synonyms . exporting views . exporting stored procedures . exporting operators . exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

C:\oracle\expimp>

-- drop user albert
SQL> drop user albert cascade;

-- create user albert (see above)

-- do the import
C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:09:26 2008

Copyright (c) 1982, 2005, Oracle.
All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table                        "DEPT"          5 rows imported
. . importing table                         "EMP"          7 rows imported
. . importing table                         "LOC"          4 rows imported
About to enable constraints...
Import terminated successfully without warnings.

C:\oracle\expimp>

-- connect albert/albert
SQL> select * from emp;

EMPID
----------
1
2
-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME
     from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME
------------------------------ - ------------------------------ -----------------
FK_DEPT_LOC                    R DEPT                           PK_LOC
FK_EMP_DEPT                    R EMP                            PK_DEPT
PK_DEPT                        P DEPT
PK_EMP                         P EMP
PK_LOC                         P LOC
Everything is back again.

CASE 2:
=======
We are not going to drop the user, but empty the tables:

SQL> alter table dept disable constraint FK_DEPT_LOC;
SQL> alter table emp disable constraint FK_EMP_DEPT;
SQL> alter table dept disable constraint PK_DEPT;
SQL> alter table emp disable constraint pk_emp;
SQL> alter table loc disable constraint pk_loc;
SQL> truncate table emp;
SQL> truncate table loc;
SQL> truncate table dept;

-- do the import
C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat ignore=y fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:25:27 2008

Copyright (c) 1982, 2005, Oracle.
All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table                        "DEPT"          5 rows imported
. . importing table                         "EMP"          7 rows imported
. . importing table                         "LOC"          4 rows imported
About to enable constraints...
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "EMP" ENABLE CONSTRAINT "FK_EMP_DEPT""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "DEPT" ENABLE CONSTRAINT "FK_DEPT_LOC""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
Import terminated successfully with warnings.

So the data gets imported, but we have a problem with the FOREIGN KEYS:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS
     from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME STATUS
------------------------------ - ------------------------------ ----------------- --------
FK_DEPT_LOC                    R DEPT                           PK_LOC            DISABLED
FK_EMP_DEPT                    R EMP                            PK_DEPT           DISABLED
PK_LOC                         P LOC                                              DISABLED
PK_EMP                         P EMP                                              DISABLED
PK_DEPT                        P DEPT                                             DISABLED
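To get back to a consistent state, the constraints have to be re-enabled in
dependency order, primary keys before the foreign keys that reference them.
A sketch of the statements involved (same constraint names as above):

SQL> alter table loc  enable constraint pk_loc;
SQL> alter table dept enable constraint pk_dept;
SQL> alter table emp  enable constraint pk_emp;
SQL> alter table dept enable constraint fk_dept_loc;
SQL> alter table emp  enable constraint fk_emp_dept;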
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS
     from user_constraints;

CONSTRAINT_NAME                C TABLE_NAME                     R_CONSTRAINT_NAME STATUS
------------------------------ - ------------------------------ ----------------- --------
FK_DEPT_LOC                    R DEPT                           PK_LOC            ENABLED
FK_EMP_DEPT                    R EMP                            PK_DEPT           ENABLED
PK_DEPT                        P DEPT                                             ENABLED
PK_EMP                         P EMP                                              ENABLED
PK_LOC                         P LOC                                              ENABLED
SQL> Everything is back again. $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ $$$$$$$$$$$$$$$$$$$$
What is exported?:
------------------
Tables, indexes, data, and database links get exported.

Example:
--------
exp system/manager file=oemuser.dmp owner=oemuser

Connected to: Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
With the Partitioning option
JServer Release 9.0.1.4.0 - Production

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OEMUSER
. exporting object type definitions for user OEMUSER
About to export OEMUSER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OEMUSER's tables via Conventional Path ...
. . exporting table                    CUSTOMERS          2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

D:\temp>

Can one import tables to a different tablespace?
------------------------------------------------
- Import the dump file using the INDEXFILE= option.
- Edit the indexfile: remove the remarks and specify the correct tablespaces.
- Run this indexfile against your database; this will create the required
  tables in the appropriate tablespaces.
- Import the table(s) with the IGNORE=Y option.

Change the default tablespace for the user:
- Revoke the "UNLIMITED TABLESPACE" privilege from the user.
- Revoke the user's quota on the tablespace from which the object was exported.
  This forces the import utility to create tables in the user's default tablespace.
- Make the tablespace to which you want to import the default tablespace for the user.
- Import the table.

Can one export to multiple files? / Can one beat the Unix 2 Gig limit?
----------------------------------------------------------------------
From Oracle8i, the export utility supports multiple output files:

exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log

Use the following technique if you use an Oracle version prior to 8i:
create a compressed export on the fly.

# create a named pipe
mknod exp.pipe p
# read the pipe - output to zip file in the background gzip < exp.pipe > scott.exp.gz & # feed the pipe exp userid=scott/tiger file=exp.pipe ... Some famous Errors: ------------------Error 1: -------EXP-00008: ORACLE error 6550 encountered ORA-06550: line 1, column 31: PLS-00302: component 'DBMS_EXPORT_EXTENSION' must be declared 1. The errors indicate that $ORACLE_HOME/rdbms/admin/CATALOG.SQL and $ORACLE_HOME/rdbms/admin/CATPROC.SQL Should be run again, as has been previously suggested. Were these scripts run connected as SYS? Try SELECT OBJECT_NAME, OBJECT_TYPE FROM DBA_OBJECTS WHERE STATUS = 'INVALID' AND OWNER = 'SYS'; Do you have invalid objects? Is DBMS_EXPORT_EXTENSION invalid? If so, try compiling it manually: ALTER PACKAGE DBMS_EXPORT_EXTENSION COMPILE BODY; If you receive errors during manual compilation, please show errors for further information. 2. Or possibly different imp/exp versions are run to another version of the database. The problem can be resolved by copying the higher version CATEXP.SQL and executed in the lesser version RDBMS. 3. Other fix: If there are problems in exp/imp from single byte to multibyte databases: - Analyze which tables/rows could be affected by national characters before running the export - Increase the size of affected rows. - Export the table data once again. Error 2: -------EXP-00091: Exporting questionable statistics. Hi. This warning is generated because the statistics are questionable due to the client character set difference from the server character set. There is an article which discusses the causes of questionable statistics available via the MetaLink Advanced Search option by Doc ID: Doc ID: 159787.1 9i: Import STATISTICS=SAFE If you do not want this conversion to occur, you need to ensure the client NLS environment
performing the export is set to match the server.

Fix
~~~
a) If the statistics of a table are not required in the export, take the
   export with the parameter STATISTICS=NONE. Example:

   $ exp scott/tiger file=emp1.dmp tables=emp STATISTICS=NONE

b) If the statistics do need to be included, use STATISTICS=ESTIMATE or
   COMPUTE (the default is ESTIMATE).

Error 3:
--------
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully
You can't export a database with an exp utility of a newer version than the
database: the exp version must be equal to or older than the database version.

Doc ID: Note:281780.1    Content Type: TEXT/PLAIN
Subject: Oracle 9.2.0.4.0: Schema Export Fails with ORA-1403 (No Data Found)
         on Exporting Cluster Definitions
Creation Date: 29-AUG-2004    Type: PROBLEM
Last Revision Date: 29-AUG-2004    Status: PUBLISHED

The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Personal Edition   - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Standard Edition   - Version: 9.2.0.4 to 9.2.0.4
This problem can occur on any platform.

ERRORS
------
EXP-56 ORACLE error encountered
ORA-1403 no data found
EXP-0: Export terminated unsuccessfully

SYMPTOMS
--------
A schema level export with the 9.2.0.4 export utility from a 9.2.0.4 or higher
release database in which XDB has been installed, fails when exporting the
cluster definitions with:

...
. exporting cluster definitions
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can confirm that XDB has been installed in the database:
SQL> SELECT substr(comp_id,1,15) comp_id, status,
            substr(version,1,10) version, substr(comp_name,1,30) comp_name
     FROM dba_registry ORDER BY 1;

COMP_ID         STATUS      VERSION    COMP_NAME
--------------- ----------- ---------- ------------------------------
...
XDB             INVALID     9.2.0.4.0  Oracle XML Database
XML             VALID       9.2.0.6.0  Oracle XDK for Java
XOQ             LOADED      9.2.0.4.0  Oracle OLAP API
You create a trace file of the ORA-1403 error: SQL> SHOW PARAMETER user_dump SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack level 3'; System altered. -- Re-run the export SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack off'; System altered. The trace file that was written to your USER_DUMP_DEST directory, shows: ksedmp: internal or fatal error ORA-01403: no data found Current SQL statement for this session: SELECT xdb_uid FROM SYS.EXU9XDBUID You can confirm that you have no invalid XDB objects in the database: SQL> SET lines 200 SQL> SELECT status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects WHERE owner='XDB' AND status != 'VALID' ORDER BY 4,2; no rows selected Note: If you do have invalid XDB objects, and the same ORA-1403 error occurs when performing a full database export, see the solution mentioned in: [NOTE:255724.1] <ml2_documents.showDocument?p_id=255724.1&p_database_id=NOT> "Oracle 9i: Full Export Fails with ORA-1403 (No Data Found) on Exporting Cluster Defintions" CHANGES ------You recently restored the database from a backup or you recreated the controlfile, or you performed Operating System actions on your database tempfiles. CAUSE
----The Temporary tablespace does not have any tempfiles. Note that the errors are different when exporting with a 9.2.0.3 or earlier export utility: . exporting cluster definitions EXP-00056: ORACLE error 1157 encountered ORA-01157: cannot identify/lock data file 201 - see DBWR trace file ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9201WA\TEMP01.DBF' ORA-06512: at "SYS.DBMS_LOB", line 424 ORA-06512: at "SYS.DBMS_METADATA", line 1140 ORA-06512: at line 1 EXP-00000: Export terminated unsuccessfully The errors are also different when exporting with a 9.2.0.5 or later export utility: . exporting cluster definitions EXP-00056: ORACLE error 1157 encountered ORA-01157: cannot identify/lock data file 201 - see DBWR trace file ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9205WA\TEMP01.DBF' EXP-00000: Export terminated unsuccessfully FIX --1. If the controlfile does not have any reference to the tempfile(s), add the tempfile(s): SQL> SET lines 200 SQL> SELECT status, enabled, name FROM v$tempfile; no rows selected SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE; or: If the controlfile has a reference to the tempfile(s), but the files are missing on disk, re-create the temporary tablespace, e.g.: SQL> SET lines 200 SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2; SQL> DROP TABLESPACE temp; SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp; SQL> SHUTDOWN IMMEDIATE SQL> STARTUP SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
2. Now re-run the export. Other errors: ------------Doc ID : Note:175624.1 Content Type: TEXT/X-HTML Subject: Oracle Server - Export and Import FAQ Creation Date: 08-FEB-2002 Type: FAQ Last Revision Date: 16-FEB-2005 Status: PUBLISHED PURPOSE ======= This Frequently Asked Questions (FAQ) provides common Export and Import issues in the following sections: - GENERIC - LARGE FILES - INTERMEDIA - TOP EXPORT DEFECTS - COMPATIBILITY - TABLESPACE - ADVANCED QUEUING - TOP IMPORT DEFECTS - PARAMETERS - ORA-942 - REPLICATION - PERFORMANCE - NLS - FREQUENT ERRORS GENERIC ======= Question: What is actually happening when I export and import data? See Note 61949.1 "Overview of Export and Import in Oracle7" Question: What is important when doing a full database export or import? See Note 10767.1 "How to perform full system Export/Import" Question: Can data corruption occur using export & import (version 8.1.7.3 to 9.2.0)? See Note 199416.1 "ALERT: EXP Can Produce Dump File with Corrupted Data" Question: How to Connect AS SYSDBA when Using Export or Import? See Note 277237.1 "How to Connect AS SYSDBA when Using Export or Import" COMPATIBILITY ============= Question: Which version should I use when moving data between different database releases? See Note 132904.1 "Compatibility Matrix for Export & Import Between Different Oracle Versions" See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import" See Note 76542.1 "NT: Exporting from Oracle8, Importing Into Oracle7" Question: How to resolve the IMP-69 error when importing into a database? See Note 163334.1 "Import Gets IMP-00069 when Importing 8.1.7 Export" See Note 1019280.102 "IMP-69 on Import"
PARAMETERS ========== Question: What is the difference between a Direct Path and a Conventional Path Export? See Note 155477.1 "Parameter DIRECT: Conventional Path Export versus Direct Path Export" Question: What is the meaning of the Export parameter CONSISTENT=Y and when should I use it? See Note 113450.1 "When to Use CONSISTENT=Y During an Export" Question: How to use the Oracle8i/9i Export parameter QUERY=... and what does it do? See Note 91864.1 "Query= Syntax in Export in 8i" See Note 277010.1 "How to Specify a Query in Oracle10g Export DataPump and Import DataPump" Question: How to create multiple export dumpfiles instead of one large file? See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files" PERFORMANCE =========== Question: Import takes so long to complete. How can I improve the performance of Import? See Note 93763.1 "Tuning Considerations when Import is slow" Question: Why has export performance decreased after creating tables with LOB columns? See Note 281461.1 "Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance" LARGE FILES =========== Question: Which commands to use for solving Export dump file problems on UNIX platforms? See Note 30528.1 "QREF: Export/Import/SQL*Load Large Files in Unix - Quick Reference" Question: How to solve the EXP-15 and EXP-2 errors when Export dump file is larger than 2Gb? See Note 62427.1 "2Gb or Not 2Gb - File limits in Oracle" See Note 1057099.6 "Unable to export when export file grows larger than 2GB" See Note 290810.1 "Parameter FILESIZE - Make Export Write to Multiple Export Files" Question: How to export to a tape device by using a named pipe? See Note 30428.1 "Exporting to Tape on Unix System"
TABLESPACE ========== Question: How to transport tablespace between different versions? See Note 291024.1 "Compatibility and New Features when Transporting Tablespaces with Export and Import" Question: How to move tables to a different tablespace and/or different user? See Note 1012307.6 "Moving Tables Between Tablespaces Using EXPORT/IMPORT" See Note 1068183.6 "How to change the default tablespace when importing using the INDEXFILE option" Question: How can I export all tables of a specific tablespace? See Note 1039292.6 "How to Export Tables for a specific Tablespace" ORA-942 ======= Question: How to resolve an ORA-942 during import of the ORDSYS schema? See Note 109576.1 "Full Import shows Errors when adding Referential Constraint on Cartrige Tables" Question: How to resolve an ORA-942 during import of a snapshot (log) into a different schema? See Note 1017292.102 "IMP-00017 IMP-00003 ORA-00942 USING FROMUSER/TOUSER ON SNAPSHOT [LOG] IMPORT" Question: How to resolve an ORA-942 during import of a trigger on a renamed table? See Note 1020026.102 "ORA-01702, ORA-00942, ORA-25001, When Importing Triggers" Question: How to resolve an ORA-942 during import of one specific table? See Note 1013822.102 "ORA-00942: ON TABLE LEVEL IMPORT" NLS === Question: Which effect has the client's NLS_LANG setting on an export and import? See Note 227332.1 "NLS considerations in Import/Export - Frequently Asked Questions" See Note 15656.1 "Export/Import and NLS Considerations" Question: How to prevent the loss of diacritical marks during an export/import? See Note 96842.1 "Loss Of Diacritics When Performing EXPORT/IMPORT Due To Incorrect Charactersets" INTERMEDIA OBJECTS ================== Question: How to solve an EXP-78 when exporting metadata for an interMedia Text index? See Note 130080.1 "Problems
with EXPORT after upgrading from 8.1.5 to 8.1.6" Question: I dropped the ORDSYS schema, but now I get ORA-6550 and PLS-201 when exporting? See Note 120540.1 "EXP-8 PLS-201 After Drop User ORDSYS" ADVANCED QUEUING OBJECTS ======================== Question: Why does export show ORA-1403 and ORA-6512 on an AQ object, after an upgrade? See Note 159952.1 "EXP-8 and ORA-1403 When Performing A Full Export" Question: How to resolve export errors on DBMS_AQADM_SYS and DBMS_AQ_SYS_EXP_INTERNAL? See Note 114739.1 "ORA-4068 while performing full database export" REPLICATION OBJECTS =================== Question: How to resolve import errors on DBMS_IJOB.SUBMIT for Replication jobs? See Note 137382.1 "IMP-3, PLS-306 Unable to Import Oracle8i JobQueues into Oracle8" Question: How to reorganize Replication base tables with Export and Import? See Note 1037317.6 "Move Replication System Tables using Export/Import for Oracle 8.X" FREQUENTLY REPORTED EXPORT/IMPORT ERRORS ======================================== EXP-00002: Error in writing to export file Note 1057099.6 "Unable to export when export file grows larger than 2GB" EXP-00002: error in writing to export file The export file could not be written to disk anymore, probably because the disk is full or the device has an error. Most of the time this is followed by a device (filesystem) error message indicating the problem. Possible causes are file systems that do not support a certain limit (eg. dump file size > 2Gb) or a disk/filesystem that ran out of space. EXP-00003: No storage definition found for segment(%s,%s) (EXP-3 EXP-0) Note 274076.1 "EXP-00003 When Exporting From Oracle9i 9.2.0.5.0 with a Pre-9.2.0.5.0 Export Utility" Note 124392.1 "EXP-3 while exporting Rollback Segment definitions during FULL Database Export" EXP-00067: "Direct path cannot export %s which contains object or lob data." Note 1048461.6 "EXP-00067 PERFORMING DIRECT PATH EXPORT"
EXP-00079: Data in table %s is protected (EXP-79) Note 277606.1 "How to Prevent EXP-00079 or EXP-00080 Warning (Data in Table xxx is Protected) During Export" EXP-00091: Exporting questionable statistics Note 159787.1 "9i: Import STATISTICS=SAFE" IMP-00016: Required character set conversion (type %lu to %lu) not supported Note 168066.1 "IMP-16 When Importing Dumpfile into a Database Using Multibyte Characterset" IMP-00020: Long column too large for column buffer size Note 148740.1 "ALERT: Export of table with dropped functional index may cause IMP-20 on import" ORA-00904: Invalid column name (EXP-8 ORA-904 EXP-0) Note 106155.1 "EXP-00008 ORA-1003 ORA-904 During Export" Note 172220.1 "Export of Database fails with EXP-00904 and ORA-01003" Note 158048.1 "Oracle8i Export Fails on Synonym Export with EXP-8 and ORA-904" Note 130916.1 "ORA-904 using EXP73 against Oracle8/8i Database" Note 1017276.102 "Oracle8i Export Fails on Synonym Export with EXP-8 and ORA-904" ORA-01406: Fetched column value was truncated (EXP-8 ORA-1406 EXP-0) Note 163516.1 "EXP-0 and ORA-1406 during Export of Object Types" ORA-01422: Exact fetch returns more than requested number of rows Note 221178.1 "PLS-201 and ORA-06512 at 'XDB.DBMS_XDBUTIL_INT' while Exporting Database" Note 256548.1 "Export of Database with XDB Throws ORA-1422 Error" ORA-01555: Snapshot too old Note 113450.1 "When to Use CONSISTENT=Y During an Export" ORA-04030: Out of process memory when trying to allocate %s bytes (%s,%s) (IMP-3 ORA-4030 ORA-3113) Note 165016.1 "Corrupt Packages When Export/Import Wrapper PL/SQL Code" ORA-06512: at "SYS.DBMS_STATS", line ... (IMP-17 IMP-3 ORA-20001 ORA-6512) Note 123355.1 "IMP-17 and IMP-3 errors referring dbms_stats package during import" ORA-29344: Owner validation failed - failed to match owner 'SYS' Note 294992.1 "Import DataPump: Transport Tablespace Fails with ORA-39123 and 29344 (Failed to match owner SYS)"
ORA-29516: Aurora assertion failure: Assertion failure at %s (EXP-8 ORA-29516 EXP-0)
Note 114356.1 "Export Fails With ORA-29516 Aurora Assertion Failure EXP-8"

PLS-00103: Encountered the symbol "," when expecting one of the following ...
(IMP-17 IMP-3 ORA-6550 PLS-103)
Note 123355.1 "IMP-17 and IMP-3 errors referring dbms_stats package during import"
Note 278937.1 "Import DataPump: ORA-39083 and PLS-103 when Importing Statistics
              Created with Non "." NLS Decimal Character"

EXPORT TOP ISSUES CAUSED BY DEFECTS
===================================
Release   : 8.1.7.2 and below
Problem   : Export may fail with ORA-1406 when exporting object type definitions
Solution  : apply patch-set 8.1.7.3
Workaround: no, see Note 163516.1
            "EXP-0 and ORA-1406 during Export of Object Types"

Bug 1098503
Release   : Oracle8i (8.1.x) and Oracle9i (9.x)
Problem   : EXP-79 when Exporting Protected Tables
Solution  : this is not a defect
Workaround: N/A, see Note 277606.1
            "How to Prevent EXP-00079 or EXP-00080 Warning (Data in Table xxx
            is Protected) During Export"

Bug 2410612
Release   : 8.1.7.3 and higher and 9.0.1.2 and higher
Problem   : Conventional export may produce an export file with corrupt data
Solution  : 8.1.7.5 and 9.2.0.x or check for Patch 2410612 (for 8.1.7.x),
            2449113 (for 9.0.1.x)
Workaround: yes, see Note 199416.1
            "ALERT: Client Program May Give Incorrect Query Results
            (EXP Can Produce Dump File with Corrupted Data)"

Release   : Oracle8i (8.1.x)
Problem   : Full database export fails with EXP-3: no storage definition found
            for segment
Solution  : Oracle9i (9.x)
Workaround: yes, see Note 124392.1
            "EXP-3 while exporting Rollback Segment definitions during FULL
            Database Export"
Bug 2900891 Release : 9.0.1.4 and below and 9.2.0.3 and below Problem : Export with 8.1.7.3 and 8.1.7.4 from Oracle9i fails with invalid identifier SPOLICY (EXP-8 ORA-904 EXP-0) Solution : 9.2.0.4 or 9.2.0.5 Workaround: yes, see Bug 2900891 how to recreate view sys.exu81rls Bug 2685696 Release : 9.2.0.3 and below
Problem   : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
            (EXP-56 ORA-1422 ORA-6512)
Solution  : 9.2.0.4 or check for Patch 2410612 (for 9.2.0.2 and 9.2.0.3)
Workaround: yes, see Note 221178.1
            "ORA-01422 ORA-06512: at "XDB.DBMS_XDBUTIL_INT" while exporting
            full database"

Bug 2919120
Release   : 9.2.0.4 and below
Problem   : Export fails when exporting triggers in call to XDB.DBMS_XDBUTIL_INT
            (EXP-56 ORA-1422 ORA-6512)
Solution  : 9.2.0.5 or check for Patch 2919120 (for 9.2.0.4)
Workaround: yes, see Note 256548.1
            "Export of Database with XDB Throws ORA-1422 Error"

IMPORT TOP ISSUES CAUSED BY DEFECTS
===================================
Bug 1335408
Release   : 8.1.7.2 and below
Problem   : Bad export file using a locale with a ',' decimal separator
            (IMP-17 IMP-3 ORA-6550 PLS-103)
Solution  : apply patch-set 8.1.7.3 or 8.1.7.4
Workaround: yes, see Note 123355.1
            "IMP-17 and IMP-3 errors referring DBMS_STATS package during import"

Bug 1879479
Release   : 8.1.7.2 and below and 9.0.1.2 and below
Problem   : Export of a wrapped package can result in a corrupt package being
            imported (IMP-3 ORA-4030 ORA-3113 ORA-7445 ORA-600[16201]).
Solution  : in Oracle8i with 8.1.7.3 and higher; in Oracle9iR1 with 9.0.1.3 and higher
Workaround: no, see Note 165016.1
            "Corrupt Packages When Export/Import Wrapper PL/SQL Code"

Bug 2067904
Release   : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem   : Trigger-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY
            to fail during Import (IMP-17 IMP-3 ORA-931 ORA-23308 ORA-6512).
Solution  : in Oracle9iR1 with patchset 9.0.1.3
Workaround: yes, see Note 239821.1
            "ORA-931 or ORA-23308 in SET_TRIGGER_FIRING_PROPERTY on Import of
            Trigger in 8.1.7.x and 9.0.1.x"

Bug 2854856
Release   : Oracle8i (8.1.7.x) and 9.0.1.2 and below
Problem   : Schema-name causes call to DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY
            to fail during Import (IMP-17 IMP-3 ORA-911 ORA-6512).
Solution  : in Oracle9iR2 with patchset 9.2.0.4
Workaround: yes, see Note 239890.1
            "ORA-911 in SET_TRIGGER_FIRING_PROPERTY on Import of Trigger
in 8.1.7.x and Oracle9i"
4.3 SQL*Loader examples:
========================
SQL*Loader is used for loading data from text files into Oracle tables. The
text file can have fixed column positions or columns separated by a special
character, for example a ",".

To call SQL*Loader:

sqlldr system/manager control=smssoft.ctl
sqlldr parfile=bonus.par

Example 1:
----------
BONUS.PAR:

userid=scott
control=bonus.ctl
bad=bonus.bad
log=bonus.log
discard=bonus.dis
rows=2
errors=2
skip=0

BONUS.CTL:

LOAD DATA
INFILE bonus.dat
APPEND
INTO TABLE BONUS
(name   position(01:08) char,
 city   position(09:19) char,
 salary position(20:22) integer external)

Now you can use the command:

$ sqlldr parfile=bonus.par

Example 2:
----------
LOAD1.CTL:

LOAD DATA
INFILE 'PLAYER.TXT'
INTO TABLE BASEBALL_PLAYER
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(player_id,last_name,first_name,middle_initial,start_date)

SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG
BAD=LOAD1.BAD DISCARD=LOAD1.DSC

Example 3: another controlfile:
-------------------------------
SMSSOFT.CTL:

LOAD DATA
INFILE 'SMSSOFT.TXT'
TRUNCATE INTO TABLE SMSSOFTWARE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DWMACHINEID, SERIALNUMBER, NAME, SHORTNAME, SOFTWARE, CMDB_ID, LOGONNAME)

Example 4: another controlfile:
-------------------------------
LOAD DATA
INFILE *
BADFILE 'd:\stage\loader\load.bad'
DISCARDFILE 'd:\stage\loader\load.dsc'
APPEND
INTO TABLE TEST
FIELDS TERMINATED BY "" TRAILING NULLCOLS
( c1, c2 char, c3 date(8) "DD-MM-YY" )
BEGINDATA
1X25-12-00
2Y31-12-00

Note: the placeholder is only for illustration purposes; in the actual
implementation, one would use a real tab character, which is not visible.

- Conventional path load: when the DIRECT=Y parameter is not used, the
  conventional path is used. This means that essentially INSERT statements
  are used, triggers and referential integrity are in normal use, and the
  buffer cache is used.

- Direct path load: the buffer cache is not used. Existing used blocks are
  not reused; new blocks are written as needed. Referential integrity and
  triggers are disabled during the load.

Example 5:
----------
The following shows the control file (sh_sales.ctl) loading the sales table:

LOAD DATA
INFILE sh_sales.dat
APPEND INTO TABLE sales
FIELDS TERMINATED BY "|"
(PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD)

It can be loaded with the following command:
$ sqlldr sh/sh control=sh_sales.ctl direct=true
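For a conventional-path load (no DIRECT=TRUE), throughput is tuned instead
with the ROWS, BINDSIZE, and READSIZE parameters; a sketch, reusing the bonus
example from above:

$ sqlldr scott/tiger control=bonus.ctl rows=5000 bindsize=1048576 readsize=1048576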
4.4 Creation of a new table on the basis of an existing table:
==============================================================
CREATE TABLE EMPLOYEE_2 AS SELECT * FROM EMPLOYEE;

CREATE TABLE temp_jan_sales NOLOGGING TABLESPACE ts_temp_sales
AS SELECT * FROM sales
WHERE time_id BETWEEN '31-DEC-1999' AND '01-FEB-2000';

insert into t SELECT * FROM t2;
insert into DSA_IMPORT SELECT * FROM MDB_DW_COMPONENTEN@SALES;
4.5 COPY command to fetch data from a remote database:
======================================================
set copycommit 1
set arraysize 1000

copy FROM HR/PASSWORD@loc -
create EMPLOYEE -
using SELECT * FROM employee WHERE state='NM'

4.6 Simple differences between table versions:
==============================================
SELECT * FROM new_version
MINUS
SELECT * FROM old_version;

SELECT * FROM old_version
MINUS
SELECT * FROM new_version;
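The two MINUS queries can also be combined into a single pass that labels
which side each difference came from; a sketch, using the same table names:

SELECT 'NEW' AS side, t.*
FROM  (SELECT * FROM new_version MINUS SELECT * FROM old_version) t
UNION ALL
SELECT 'OLD' AS side, t.*
FROM  (SELECT * FROM old_version MINUS SELECT * FROM new_version) t;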
=======================================================
5. Add, Move and Size Datafiles, Tablespaces, Logfiles:
=======================================================

5.1 ADD OR DROP REDO LOGFILE GROUP:
===================================
ADD:
----
alter database
add logfile group 4 ('/db01/oracle/CC1/log_41.dbf',
                     '/db02/oracle/CC1/log_42.dbf') size 5M;

ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
SIZE 500K;

Add logfile plus group:

ALTER DATABASE ADD LOGFILE GROUP 4
('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo04.log') SIZE 50M;

ALTER DATABASE ADD LOGFILE GROUP 5
('/dbms/tdbaeduc/educslot/recovery/redo_logs/redo05.log') SIZE 50M;

ALTER DATABASE ADD LOGFILE ('G:\ORADATA\AIRM\REDO05.LOG') SIZE 20M;

DROP:
-----
- An instance requires at least two groups of online redo log files,
  regardless of the number of members in the groups. (A group is one or
  more members.)
- You can drop an online redo log group only if it is inactive. If you need
  to drop the current group, first force a log switch to occur.

ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE 'G:\ORADATA\AIRM\REDO02.LOG';

5.2 ADD REDO LOGFILE MEMBER:
============================
alter database add logfile member '/db03/oracle/CC1/log_3c.dbf' to group 4;

Note: More on ONLINE LOGFILES:
------------------------------

-- Log Files Without Redundancy
LOGFILE
   GROUP 1 ('/u01/oradata/redo1a.log') SIZE 10M,
   GROUP 2 ('/u02/oradata/redo2a.log') SIZE 10M,
   GROUP 3 ('/u03/oradata/redo3a.log') SIZE 10M,
   GROUP 4 ('/u04/oradata/redo4a.log') SIZE 10M
-- Log Files With Redundancy LOGFILE GROUP 1 ('/u01/oradata/redo1a.log','/u05/oradata/redo1b.log') SIZE 10M, GROUP 2 ('/u02/oradata/redo2a.log','/u06/oradata/redo2b.log') SIZE 10M, GROUP 3 ('/u03/oradata/redo3a.log','/u07/oradata/redo3b.log') SIZE 10M,
GROUP 4 ('/u04/oradata/redo4a.log','/u08/oradata/redo4b.log') SIZE 10M

-- Related Queries

View information on log files:
SELECT * FROM gv$log;

View information on log file history:
SELECT thread#, first_change#,
       TO_CHAR(first_time,'MM-DD-YY HH12:MIPM'), next_change#
FROM gv$log_history;

-- Forcing log file switches
ALTER SYSTEM SWITCH LOGFILE;

-- Clear a log file if it has become corrupt
ALTER DATABASE CLEAR LOGFILE GROUP <group#>;

This statement overcomes two situations where dropping redo logs is not possible:
- there are only two log groups;
- the corrupt redo log file belongs to the current group.

ALTER DATABASE CLEAR LOGFILE GROUP 4;

-- Clear a log file if it has become corrupt and avoid archiving
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group#>;

-- Use this version of clearing a log file if the corrupt log file has not
-- been archived.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Managing Log File Groups

Adding a redo log file group:
ALTER DATABASE ADD LOGFILE ('<file1>', '<file2>') SIZE <size>;
ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;

Adding a redo log file group and specifying the group number:
ALTER DATABASE ADD LOGFILE GROUP <group#> ('<file>') SIZE <size>;
ALTER DATABASE ADD LOGFILE GROUP 4 ('c:\temp\newlog1.log') SIZE 100M;

Relocating redo log files:
ALTER DATABASE RENAME FILE '<existing_path_and_file_name>' TO '<new_path_and_file_name>';

conn / as sysdba
SELECT member FROM v_$logfile;
SHUTDOWN;
host
$ cp /u03/logs/log1a.log /u04/logs/log1a.log
$ cp /u03/logs/log1b.log /u05/logs/log1b.log
$ exit
startup mount
ALTER DATABASE RENAME FILE '/u03/logs/log1a.log' TO '/u04/oradata/log1a.log';
ALTER DATABASE RENAME FILE '/u04/logs/log1b.log' TO '/u05/oradata/log1b.log';
ALTER DATABASE OPEN;
host
$ rm /u03/logs/log1a.log
$ rm /u03/logs/log1b.log
$ exit
SELECT member FROM v_$logfile;

Drop a redo log file group:
ALTER DATABASE DROP LOGFILE GROUP <group#>;
ALTER DATABASE DROP LOGFILE GROUP 4;

Managing Log File Members

Adding log file group members:
ALTER DATABASE ADD LOGFILE MEMBER '<file>' TO GROUP <group#>;
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

Dropping log file group members:
ALTER DATABASE DROP LOGFILE MEMBER '<file>';
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

Dumping Log Files

Dumping a log file to trace:
ALTER SYSTEM DUMP LOGFILE '<file_name>' DBA MIN <file#> <block#> DBA MAX <file#> <block#>;
or
ALTER SYSTEM DUMP LOGFILE '<file_name>' TIME MIN <value> TIME MAX <value>;

conn uwclass/uwclass

alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS';

SELECT SYSDATE FROM dual;

CREATE TABLE test AS
SELECT owner, object_name, object_type
FROM all_objects
WHERE SUBSTR(object_name,1,1) BETWEEN 'A' AND 'W';

INSERT INTO test
(owner, object_name, object_type)
VALUES
('UWCLASS', 'log_dump', 'TEST');
COMMIT;

conn / as sysdba

SELECT ((SYSDATE-1/1440)-TO_DATE('01/01/2007','MM/DD/YYYY'))*86400 ssec
FROM dual;

ALTER SYSTEM DUMP LOGFILE 'c:\oracle\product\oradata\orabase\redo01.log'
TIME MIN 579354757;

Disable Log Archiving

Stop log file archiving. The following is undocumented and unsupported and
should be used only with great care and after thorough testing. One might
consider this for loading a data warehouse. Be sure to restart logging as
soon as the load is complete, or the system will be at extremely high risk.

The rest of the database remains unchanged. The buffer cache works in exactly
the same way: old buffers get overwritten, old dirty buffers get written to
disk. It is just the process of physically flushing the redo buffer that gets
disabled. I used it in a very large test environment where I wanted to perform
a massive amount of changes (a process to convert blobs to clobs, actually)
and it was going to take days to complete. By disabling logging, I completed
the task in hours, and if anything untoward were to have happened, I was quite
happy to restore the test database back from backup.
~ the above paraphrased from a private email from Richard Foote.

conn / as sysdba
SHUTDOWN;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
ALTER SYSTEM SET "_disable_logging"=TRUE;
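To restore normal behaviour after such a load, the reverse sequence would be
something like the following sketch (it assumes the database was in
ARCHIVELOG mode beforehand; test before relying on it):

ALTER SYSTEM SET "_disable_logging"=FALSE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;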
5.3 RESIZE DATABASE FILE:
=========================

alter database datafile '/db05/oracle/CC1/data01.dbf' resize 400M;   (increase or decrease size)

Note: a datafile is resized via ALTER DATABASE DATAFILE, not via ALTER TABLESPACE.

5.4 ADD FILE TO TABLESPACE:
===========================
alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf' size 50M
autoextend ON maxsize unlimited;

5.5 ALTER STORAGE FOR FILE:
===========================

alter database datafile '/db05/oracle/CC1/data01.dbf'
autoextend ON maxsize unlimited;

alter database datafile '/oradata/temp/temp.dbf' autoextend off;

The AUTOEXTEND option cannot be turned off for the entire tablespace with a single
command. Each datafile within the tablespace must explicitly turn off the AUTOEXTEND
option via the ALTER DATABASE command.

5.6 MOVE OF DATA FILE:
======================

connect internal
shutdown

mv /db05/oracle/CC1/data01.dbf /db02/oracle/CC1
connect / as SYSDBA
startup mount CC1

alter database rename file '/db01/oracle/CC1/data01.dbf'
to '/db02/oracle/CC1/data01.dbf';

alter database open;

alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/sysaux01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/system01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/system01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/temp01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/temp01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/undotbs01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/users01.dbf' to
'/dbms/tdbaplay/playdwhs/database/default/users01.dbf';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo01.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo02.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log';
alter database rename file '/dbms/tdbaplay/playdwhs/database/playdwhs/redo03.log' to
'/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log';

5.7 MOVE OF REDO LOG FILE:
==========================

connect internal
shutdown

mv /db05/oracle/CC1/redo01.dbf /db02/oracle/CC1
connect / as SYSDBA
startup mount CC1

alter database rename file '/db05/oracle/CC1/redo01.dbf'
to '/db02/oracle/CC1/redo01.dbf';

alter database open;

in case of problems:
ALTER DATABASE CLEAR LOGFILE GROUP n

example:
--------
shutdown immediate

on Unix:
mv /u01/oradata/spltst1/redo01.log /u02/oradata/spltst1/
mv /u03/oradata/spltst1/redo03.log /u02/oradata/spltst1/

startup mount pfile=/apps/oracle/admin/SPLTST1/pfile/init.ora

alter database rename file '/u01/oradata/spltst1/redo01.log' to '/u02/oradata/spltst1/redo01.log';
alter database rename file '/u03/oradata/spltst1/redo03.log' to '/u02/oradata/spltst1/redo03.log';

alter database open;
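After the database is open again, it is worth checking that the members point at their
new locations and that no group is left in an odd state; a quick look at the standard
views:

-- member paths and per-group status
SELECT group#, status, member FROM v$logfile;
SELECT group#, sequence#, status FROM v$log;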
5.8 Put a datafile or tablespace ONLINE or OFFLINE: =================================================== alter tablespace data offline; alter tablespace data online; alter database datafile 8 offline; alter database datafile 8 online; 5.9 ALTER DEFAULT STORAGE: ========================== alter tablespace AP_INDEX_SMALL default storage (initial 5M next 5M pctincrease 0); 5.10 CREATE TABLESPACE STORAGE PARAMETERS: ========================================== locally managed 9i style: -- autoallocate: ---------------CREATE TABLESPACE DEMO DATAFILE '/u02/oracle/data/lmtbsb01.dbf' size 100M extent management local autoallocate; -- uniform size, 1M is default: ------------------------------CREATE TABLESPACE LOBS DATAFILE 'f:\oracle\oradata\pegacc\lobs01.dbf' SIZE 3000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K; CREATE TABLESPACE LOBS2 DATAFILE 'f:\oracle\oradata\pegacc\lobs02.dbf' SIZE 3000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M; CREATE TABLESPACE CISTS_01 DATAFILE '/u04/oradata/pilactst/cists_01.dbf' SIZE 1000M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; CREATE TABLESPACE CISTS_01 DATAFILE '/u01/oradata/spldev1/cists_01.dbf' SIZE 400M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; CREATE TABLESPACE PUB DATAFILE 'C:\ORACLE\ORADATA\TEST10G\PUB.DBF' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; CREATE TABLESPACE STAGING DATAFILE 'C:\ORACLE\ORADATA\TEST10G\STAGING.DBF' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; CREATE TABLESPACE RMAN DATAFILE 'C:\ORACLE\ORADATA\RMAN\RMAN.DBF' SIZE 100M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE CISTS_01 DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 1200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE USERS DATAFILE '/u06/oradata/splpack/users01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TABLESPACE INDX DATAFILE '/u06/oradata/splpack/indx01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u07/oradata/spldevp/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
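To read back how the tablespaces just created are actually managed, query the
dictionary; for example:

SELECT tablespace_name, extent_management, allocation_type, segment_space_management
FROM dba_tablespaces;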
ALTER TABLESPACE CISTS_01 ADD DATAFILE '/u03/oradata/splplay/cists_02.dbf' SIZE 1000M;

ALTER TABLESPACE UNDOTBS ADD DATAFILE '/dbms/tdbaprod/prodross/database/default/undotbs03.dbf' SIZE 2000M;

alter tablespace DATA
add datafile '/db05/oracle/CC1/data02.dbf' size 50M
autoextend ON maxsize unlimited;

-- segment management manual or automatic:
-- ---------------------------------------

We can have a locally managed tablespace in which segment space management, via the
free lists and the pctfree and pctused parameters, is still done manually.
To specify manual space management, use the SEGMENT SPACE MANAGEMENT MANUAL clause:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT MANUAL;

or if you want segment space management to be automatic:

CREATE TABLESPACE INDX2 DATAFILE '/u06/oradata/bcict2/indx09.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

-- temporary tablespace:
-- ---------------------

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/pilactst/temp01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M; create user cisadm identified by cisadm default tablespace cists_01 temporary tablespace temp; create user cisuser identified by cisuser default tablespace cists_01 temporary tablespace temp; create user cisread identified by cisread default tablespace cists_01 temporary tablespace temp; grant connect to cisadm; grant connect to cisuser; grant connect to cisread; grant resource to cisadm; grant resource to cisuser; grant resource to cisread;
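To confirm the new accounts picked up the intended defaults, read them back from
dba_users; for example:

SELECT username, default_tablespace, temporary_tablespace
FROM dba_users
WHERE username IN ('CISADM','CISUSER','CISREAD');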
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/bcict2/tempt01.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

alter tablespace TEMP add tempfile '/u04/oradata/bcict2/temp02.dbf' SIZE 5000M;

alter tablespace UNDO add datafile '/u04/oradata/bcict2/undo07.dbf' size 500M;

ALTER DATABASE datafile '/u04/oradata/bcict2/undo07.dbf' RESIZE 3000M;

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u04/oradata/bcict2/temp01.dbf' SIZE 5000M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;

ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/oradata/bcict2/tempt4.dbf'
SIZE 5000M;
Example tempfile listing (file# and name):

1 /u03/oradata/bcict2/temp.dbf
2 /u03/oradata/bcict2/temp01.dbf
3 /u03/oradata/bcict2/temp02.dbf

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

The extent management clause is optional for temporary tablespaces because all
temporary tablespaces are created with locally managed extents of a uniform size.
The Oracle default for SIZE is 1M. But if you want to specify another value for SIZE,
you can do so as shown in the
above statement. The AUTOALLOCATE clause is not allowed for temporary tablespaces. If you get errors: -----------------If the controlfile does not have any reference to the tempfile(s), add the tempfile(s): SQL> SET lines 200 SQL> SELECT status, enabled, name FROM v$tempfile; no rows selected SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE; or: If the controlfile has a reference to the tempfile(s), but the files are missing on disk, re-create the temporary tablespace, e.g.: SQL> SET lines 200 SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2; SQL> DROP TABLESPACE temp; SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M; SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp; SQL> SHUTDOWN IMMEDIATE SQL> STARTUP SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
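To verify which tablespace is now the database default temporary tablespace, you can
query database_properties; for example:

SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';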
################################################################################## ##### CREATE TABLESPACE "DRSYS" LOGGING DATAFILE '/u02/oradata/pegacc/drsys01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "INDX" LOGGING DATAFILE '/u02/oradata/pegacc/indx01.dbf' SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "TOOLS" LOGGING DATAFILE '/u02/oradata/pegacc/tools01.dbf' SIZE 100M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "USERS" LOGGING DATAFILE '/u02/oradata/pegacc/users01.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "XDB" LOGGING DATAFILE '/u02/oradata/pegacc/xdb01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ; CREATE TABLESPACE "LOBS" LOGGING DATAFILE '/u02/oradata/pegacc/lobs01.dbf' SIZE 2000M REUSE AUTOEXTEND ON NEXT 1024K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M ; ################################################################################## #####
General form of an 8i-type statement:

CREATE TABLESPACE DATA DATAFILE 'G:\ORADATA\RCDB\DATA01.DBF' size 100M
EXTENT MANAGEMENT DICTIONARY
default storage
( initial 512K
  next 512K
  minextents 1
  pctincrease 0 )
minimum extent 512K
logging
online
permanent;

More info:
----------

By declaring a tablespace as DICTIONARY managed, you are specifying that extent
management for segments in this tablespace will be managed using the dictionary tables
sys.fet$ and sys.uet$. Oracle updates these tables in the data dictionary whenever an
extent is allocated, or freed for reuse. This is the default
in Oracle8i when no extent management clause is used in the CREATE TABLESPACE statement.

The sys.fet$ table is clustered in the C_TS# cluster. Because it is created without a
SIZE clause, one block will be reserved in the cluster for each tablespace. However, if
a tablespace has more free extents than can be contained in a single cluster block,
then cluster block chaining will occur, which can significantly impact performance on
the data dictionary, and on space management transactions in particular. Unfortunately,
chaining in this cluster cannot be repaired without recreating the entire database.
Preferably, the number of free extents in a tablespace should never be greater than can
be recorded in the primary cluster block for that tablespace, which is about 500 free
extents for a database with an 8K database block size.

Used extents, on the other hand, are recorded in the data dictionary table sys.uet$,
which is clustered in the C_FILE#_BLOCK# cluster. Unlike the C_TS# cluster,
C_FILE#_BLOCK# is sized on the assumption that segments will have an average of just
4 or 5 extents each. Unless your data dictionary was specifically customized prior to
database creation to allow for more used extents per segment, creating segments with
thousands of extents (as mentioned in the previous section) will cause excessive
cluster block chaining in this cluster.

The major dilemma with an excessive number of used and/or free extents is that they can
degrade the operation of the dictionary cache LRU mechanism. Extents should therefore
not be allowed to grow into the thousands, not because of the impact of full table
scans, but rather because of the performance of the data dictionary and dictionary
cache.

A Locally Managed Tablespace is a tablespace that manages its own extents by
maintaining a bitmap in each datafile to keep track of the free or used status of
blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of
blocks. When the extents are allocated or freed for reuse, Oracle simply changes the
bitmap values to show the new status of the blocks. These changes do not generate
rollback information because they do not update tables in the data dictionary (except
for tablespace quota information).

This is the default in Oracle9i. If COMPATIBLE is set to 9.0.0, then the default extent
management for any new tablespace is locally managed in Oracle9i. If COMPATIBLE is less
than 9.0.0, then the default extent management for any new tablespace is dictionary
managed in Oracle9i.

While free space is represented in a bitmap within the tablespace, used extents are
only recorded in the extent map in the segment header block of each segment, and if
necessary, in additional extent map blocks within the segment. Keep in mind though,
that this information is not cached in the dictionary cache.
It must be obtained from the database block every time that it is required, and if
those blocks are not in the buffer cache, that involves I/O, and potentially lots of
it. Take for example a query against DBA_EXTENTS. This query would be required to read
every segment header and every additional extent map block in the entire database.
It is for this reason that it is recommended that the number of extents per segment in
locally managed tablespaces be limited to the number of rows that can be contained in
the extent map within the segment header block. This is approximately
(db_block_size / 16) - 7.
For a database with a db block size of 8K, the above formula gives about 505 extents.
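Given the ~505-extent guideline above (for an 8K block size), segments that have grown
past it can be spotted with a simple aggregate. Note that, as just explained, this
query itself walks every extent map in the database, so run it at a quiet moment:

SELECT owner, segment_name, COUNT(*) AS extents
FROM dba_extents
GROUP BY owner, segment_name
HAVING COUNT(*) > 505;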
5.11 DEALLOCATE AND TRACK DOWN UNUSED SPACE IN A TABLE:
=======================================================

alter table emp deallocate unused;
alter table emp deallocate unused keep 100K;

alter table emp allocate extent ( size 100K
datafile '/db05/oradata/CC1/user05.dbf');

This datafile must exist in the same tablespace.

-- using the dbms_space.unused_space package

declare
var1 number;
var2 number;
var3 number;
var4 number;
var5 number;
var6 number;
var7 number;
begin
dbms_space.unused_space('AUTOPROV1', 'MACADDRESS_INDEX', 'INDEX',
var1, var2, var3, var4, var5, var6, var7);
dbms_output.put_line('OBJECT_NAME = YET ANOTHER BAD INDEX');
dbms_output.put_line('TOTAL_BLOCKS ='||var1);
dbms_output.put_line('TOTAL_BYTES ='||var2);
dbms_output.put_line('UNUSED_BLOCKS ='||var3);
dbms_output.put_line('UNUSED_BYTES ='||var4);
dbms_output.put_line('LAST_USED_EXTENT_FILE_ID ='||var5);
dbms_output.put_line('LAST_USED_EXTENT_BLOCK_ID ='||var6);
dbms_output.put_line('LAST_USED_BLOCK ='||var7);
end;
/

5.12 CREATE TABLE:
==================

-- STORAGE PARAMETERS EXAMPLE:
-- ----------------------------

create table emp
( id number,
  name varchar(2) )
tablespace users
pctfree 10
storage (initial 1024K next 1024K pctincrease 10 minextents 2);

ALTER a COLUMN:
===============

ALTER TABLE GEWEIGERDETRANSACTIE MODIFY (VERBRUIKTIJD DATE);

-- Creation of new table on basis of existing table:
-- -------------------------------------------------

CREATE TABLE EMPLOYEE_2 AS SELECT * FROM EMPLOYEE;

insert into t SELECT * FROM t2;

insert into DSA_IMPORT SELECT * FROM MDB_DW_COMPONENTEN@SALES;

-- Creation of a table with an autoincrement:
-- ------------------------------------------

CREATE SEQUENCE seq_customer
INCREMENT BY 1
START WITH 1
MAXVALUE 99999
NOCYCLE;

CREATE SEQUENCE seq_employee
INCREMENT BY 1
START WITH 1218
MAXVALUE 99999
NOCYCLE;

CREATE SEQUENCE seq_a
INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCLE; CREATE TABLE CUSTOMER ( CUSTOMER_ID NUMBER (10) NOT NULL, NAAM VARCHAR2 (30) NOT NULL, CONSTRAINT PK_CUSTOMER PRIMARY KEY ( CUSTOMER_ID ) USING INDEX TABLESPACE INDX PCTFREE 10 STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0 )) TABLESPACE USERS PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE ( INITIAL 80K NEXT 80K PCTINCREASE 0 MINEXTENTS 1 MAXEXTENTS 2147483645 ) NOCACHE; CREATE OR REPLACE TRIGGER tr_CUSTOMER_ins BEFORE INSERT ON CUSTOMER FOR EACH ROW BEGIN SELECT seq_customer.NEXTVAL INTO :NEW.CUSTOMER_ID FROM dual; END;
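With the sequence and trigger in place, inserts can leave the key column out entirely;
a small usage sketch against the CUSTOMER table above (the name value is just an
example):

INSERT INTO CUSTOMER (NAAM) VALUES ('Jansen');
COMMIT;
SELECT CUSTOMER_ID, NAAM FROM CUSTOMER;   -- CUSTOMER_ID was filled by tr_CUSTOMER_ins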
CREATE SEQUENCE seq_brains_verbruik
INCREMENT BY 1
START WITH 1750795
MAXVALUE 100000000
NOCYCLE;

CREATE OR REPLACE TRIGGER tr_PARENTEENHEID_ins
BEFORE INSERT ON PARENTEENHEID
FOR EACH ROW
BEGIN
  SELECT seq_brains_verbruik.NEXTVAL INTO :NEW.VERBRUIKID FROM dual;
END;

5.13 REBUILD OF INDEX:
======================

ALTER INDEX emp_pk REBUILD          -- online possible in 8.1.6 or higher
NOLOGGING
TABLESPACE INDEX_BIG
PCTFREE 10
STORAGE ( INITIAL 5M NEXT 5M pctincrease 0 );

ALTER INDEX emp_ename
INITRANS 5
MAXTRANS 10
STORAGE (PCTINCREASE 50);

In situations where you have B*-tree index leaf blocks that can be freed up for reuse,
you can merge those leaf blocks using the following statement:

ALTER INDEX vmoore COALESCE;

-- Basic example of creating an index:

CREATE INDEX emp_ename ON emp(ename)
TABLESPACE users
STORAGE (INITIAL 20K NEXT 20k PCTINCREASE 75)
PCTFREE 0;

If you have a LMT, you can just do:

create index cust_indx on customers(id) nologging;

This statement is without storage parameters.

-- Dropping an index:

DROP INDEX emp_ename;

5.14 MOVE TABLE TO OTHER TABLESPACE:
====================================

ALTER TABLE CHARLIE.CUSTOMERS MOVE TABLESPACE USERS2;

5.15 SYNONYM (pointer to an object):
====================================

example:
create public synonym EMPLOYEE for HARRY.EMPLOYEE;

5.16 DATABASE LINK:
===================

CREATE PUBLIC DATABASE LINK SALESLINK
CONNECT TO FRONTEND IDENTIFIED BY cygnusx1
USING 'SALES';

SELECT * FROM employee@MY_LINK;

For example, using a database link to database sales.division3.acme.com, a user or
application can reference remote data as follows:

SELECT * FROM [email protected];   # emp table in scott's schema
SELECT loc FROM [email protected];
If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to sales.division3.acme.com. For example, you can call the link foo. Then, you can access the remote database as follows: SELECT name FROM scott.emp@foo;
# link name different FROM global name
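To see your own database's global name, and how GLOBAL_NAMES is currently set, a quick
check from SQL*Plus:

SQL> SELECT * FROM global_name;
SQL> show parameter global_names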
Synonyms for Schema Objects:
Oracle lets you create synonyms so that you can hide the database link name from the
user. A synonym allows access to a table on a remote database using the same syntax
that you would use to access a table on a local database. For example, assume you issue
the following query against a table in a remote database:

SELECT * FROM [email protected];

You can create the synonym emp for [email protected] so that you can
issue the following query instead to access the same data:

SELECT * FROM emp;

View DATABASE LINKS:

select substr(owner,1,10), substr(db_link,1,50), substr(username,1,25),
substr(host,1,40), created
from dba_db_links;

5.17 TO CLEAR TABLESPACE TEMP:
==============================

alter tablespace TEMP default storage (pctincrease 0);
alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

5.18 RENAME OF OBJECT:
======================

RENAME sales_staff TO dept_30;
RENAME emp2 TO emp;

5.19 CREATE PROFILE:
====================

CREATE PROFILE DEVELOP_FIN LIMIT
SESSIONS_PER_USER 4
IDLE_TIME 30;

CREATE PROFILE PRIOLIMIT LIMIT
SESSIONS_PER_USER 10;

ALTER USER U_ZKN
PROFILE EXTERNLIMIT;

ALTER PROFILE EXTERNLIMIT LIMIT
PASSWORD_REUSE_TIME 90
PASSWORD_REUSE_MAX UNLIMITED;

ALTER PROFILE EXTERNLIMIT LIMIT
SESSIONS_PER_USER 20
IDLE_TIME 20;

5.20 RECOMPILE OF FUNCTION, PACKAGE, PROCEDURE:
===============================================

ALTER FUNCTION schema.function COMPILE;
example: ALTER FUNCTION oe.get_bal COMPILE;

ALTER PACKAGE schema.package COMPILE specification/body/package
example: ALTER PACKAGE emp_mgmt COMPILE PACKAGE;

ALTER PROCEDURE schema.procedure COMPILE;
example: ALTER PROCEDURE hr.remove_emp COMPILE;

TO FIND OBJECTS:

SELECT 'ALTER '||decode( object_type, 'PACKAGE SPECIFICATION'
,'PACKAGE'
,'PACKAGE BODY'
,'PACKAGE'
,object_type)
||' '||owner
||'.'|| object_name
||' COMPILE '
||decode( object_type, 'PACKAGE SPECIFICATION'
,'SPECIFICATION'
,'PACKAGE BODY'
,'BODY'
, NULL)
||';'
FROM dba_objects
WHERE status = 'INVALID';

5.21 CREATE PACKAGE:
====================

A package is a set of related functions and / or routines. Packages are used to group
together PL/SQL code blocks which make up a common application or are attached to a
single business function. Packages consist of a specification and a body. The package
specification lists the public interfaces to the blocks within the package body.
The package body contains the public and private PL/SQL blocks which make up the
application; private blocks are not defined in the package specification and cannot be
called by any routine other than one defined within the package body.
The benefits of packages are that they improve the organisation of procedure
and function blocks, allow you to update the blocks that make up the package body
without affecting the specification (which is the object that users have rights to)
and allow you to grant execute rights once instead of for each and every block.

To create a package specification we use a variation on the CREATE command; all we need
put in the specification is each PL/SQL block header that will be public within the
package. An example follows:

CREATE OR REPLACE PACKAGE MYPACK1 AS
  PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER);
  FUNCTION MYFUNC1 RETURN NUMBER;
END MYPACK1;

To create a package body we now specify each PL/SQL block that makes up the package;
note that we are not creating these blocks separately (no CREATE OR REPLACE is required
for the procedure and function definitions). An example follows:

CREATE OR REPLACE PACKAGE BODY MYPACK1 AS
  PROCEDURE MYPROC1 (REQISBN IN NUMBER, MYVAR1 IN OUT CHAR, TCOST OUT NUMBER) IS
    TEMP_COST NUMBER(10,2);
  BEGIN
    SELECT COST INTO TEMP_COST FROM JD11.BOOK WHERE ISBN = REQISBN;
    IF TEMP_COST > 0 THEN
      UPDATE JD11.BOOK SET COST = (TEMP_COST*1.175) WHERE ISBN = REQISBN;
    ELSE
      UPDATE JD11.BOOK SET COST = 21.32 WHERE ISBN = REQISBN;
    END IF;
    TCOST := TEMP_COST;
    COMMIT;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      INSERT INTO JD11.ERRORS (CODE, MESSAGE) VALUES(99, 'ISBN NOT FOUND');
  END MYPROC1;

  FUNCTION MYFUNC1 RETURN NUMBER IS
    RCOST NUMBER(10,2);
  BEGIN
    SELECT COST INTO RCOST FROM JD11.BOOK WHERE ISBN = 21;
    RETURN (RCOST);
  END MYFUNC1;
END MYPACK1;

You can execute a public package block like this:

EXECUTE :PCOST := JD11.MYPACK1.MYFUNC1

- where JD11 is the schema name that owns the package. You can use DROP PACKAGE and
DROP PACKAGE BODY to remove the package objects from the database.

General form:
CREATE OR REPLACE PACKAGE schema.package

CREATE PACKAGE emp_mgmt AS
FUNCTION hire (last_name VARCHAR2, job_id VARCHAR2,
  manager_id NUMBER, salary NUMBER,
  commission_pct NUMBER, department_id NUMBER)
  RETURN NUMBER;
FUNCTION create_dept(department_id NUMBER, location NUMBER)
  RETURN NUMBER;
PROCEDURE remove_emp(employee_id NUMBER);
PROCEDURE remove_dept(department_id NUMBER);
PROCEDURE increase_sal(employee_id NUMBER, salary_incr NUMBER);
PROCEDURE increase_comm(employee_id NUMBER, comm_incr NUMBER);
no_comm EXCEPTION;
no_sal EXCEPTION;
END emp_mgmt;
/

Before you can call this package's procedures and functions, you must define these
procedures and functions in the package body.

5.22 View a view:
=================

set long 2000

SELECT text FROM sys.dba_views
WHERE view_name = 'CONTROL_PLAZA_V';

5.23 ALTER SYSTEM:
==================

ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM ENABLE/DISABLE RESTRICTED SESSION;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SUSPEND/RESUME;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
ALTER SYSTEM SET LICENSE_MAX_USERS = 300;
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
ALTER SYSTEM SET COMPATIBLE = '9.2.0' SCOPE=SPFILE;
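A changed parameter can be read back from v$parameter to confirm it took effect;
for example, after the RESOURCE_LIMIT, LICENSE_MAX_USERS and GLOBAL_NAMES commands
above:

SELECT name, value FROM v$parameter
WHERE name IN ('resource_limit','license_max_users','global_names');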
5.24 HOW TO ENABLE OR DISABLE TRIGGERS:
=======================================

Disable / enable a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or in one go for all triggers on a table:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;
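The current state of a table's triggers can be read from the dictionary; for example,
for the Inventory table used above:

SELECT trigger_name, status FROM user_triggers
WHERE table_name = 'INVENTORY';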
5.25 DISABLING AND ENABLING AN INDEX:
=====================================

alter index HEAT_CUSTOMER_POSTAL_CODE unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;

5.26 CREATE A VIEW:
===================

CREATE VIEW v1 AS
SELECT LPAD(' ',40-length(size_tab.size_col)/2,' ') size_col
FROM size_tab;

CREATE VIEW X AS SELECT * FROM gebruiker@aptest;

5.27 MAKE A USER:
=================

CREATE USER jward
IDENTIFIED BY aZ7bC2
DEFAULT TABLESPACE data_ts
QUOTA 100M ON test_ts
QUOTA 500K ON data_ts
TEMPORARY TABLESPACE temp_ts
PROFILE clerk;

GRANT connect TO jward;

create user jaap identified by jaap
default tablespace users
temporary tablespace temp;
grant connect to jaap;
grant resource to jaap;

Dynamic queries:
----------------

-- CREATE USER AND GRANT PERMISSION STATEMENTS
-- dynamic queries

SELECT 'CREATE USER '||USERNAME||' identified by '||USERNAME||' default tablespace '||
DEFAULT_TABLESPACE||' temporary tablespace '||TEMPORARY_TABLESPACE||';'
FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT CREATE SESSION to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT connect to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT resource to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

SELECT 'GRANT unlimited tablespace to '||USERNAME||';' FROM DBA_USERS
WHERE USERNAME NOT IN ('SYS','SYSTEM','OUTLN','CTXSYS','ORDSYS','MDSYS');

Becoming another user:
======================

- Do the query:
  select 'ALTER USER '||username||' IDENTIFIED BY VALUES '||''''||
  password||''''||';' from dba_users;
- change the password
- do what you need to do as the other account
- change the password back to the original value

-- disable the foreign key and primary key constraints of RM_LIVE:

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||
constraint_name||';' from dba_constraints
where owner='RM_LIVE' and CONSTRAINT_TYPE='R';

SELECT 'ALTER TABLE RM_LIVE.'||table_name||' disable constraint '||
constraint_name||';' from dba_constraints
where owner='RM_LIVE' and CONSTRAINT_TYPE='P';

5.28 CREATE A SEQUENCE:
=======================

Sequences are database objects from which multiple users can generate unique integers.
You can use sequences to automatically generate primary key values.

CREATE SEQUENCE <sequence name>
INCREMENT BY <increment>
START WITH <start number>
MAXVALUE <maximum value>
CYCLE;            -- or NOCYCLE
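Values are then drawn with NEXTVAL, and the session's last drawn value re-read with
CURRVAL; a short sketch using the department_seq sequence created below:

SELECT department_seq.NEXTVAL FROM dual;
SELECT department_seq.CURRVAL FROM dual;   -- only valid after a NEXTVAL in this session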
CREATE SEQUENCE department_seq INCREMENT BY 1 START WITH 1 MAXVALUE 99999 NOCYCLE; 5.29 STANDARD USERS IN 9i: ========================== CTXSYS is the primary schema for interMedia. MDSYS, ORDSYS, and ORDPLUGINS are schemas required when installing any of the cartridges.
MTSSYS is required for the Oracle Service for MTS and is specific to NT.
OUTLN is an integral part of the database required for the plan stability feature in
Oracle8i. While the interMedia and cartridge schemas can be recreated by running their
associated scripts as needed, I am not 100% sure on the steps associated with the
MTSSYS user. Unfortunately, the OUTLN user is created at database creation time when
sql.bsq is run. The OUTLN user owns the package OUTLN_PKG which is used to manage
stored outlines and their outline categories. There are other tables (base tables),
indexes, grants, and synonyms related to this package.

By default, the following users are automatically created during database creation:

SCOTT   by script $ORACLE_HOME/rdbms/admin/utlsampl.sql
OUTLN   by script $ORACLE_HOME/rdbms/admin/sql.bsq

Optionally:

DBSNMP      if Enterprise Manager Intelligent Agent is installed
TRACESVR    if Enterprise Manager is installed
AURORA$ORB$UNAUTHENTICATED \
AURORA$JIS$UTILITY$         -- if Oracle Servlet Engine (OSE) is installed
OSE$HTTP$ADMIN             /
MDSYS       if Oracle Spatial option is installed
ORDSYS      if interMedia Audio option is installed
ORDPLUGINS  if interMedia Audio option is installed
CTXSYS      if Oracle Text option is installed
REPADMIN    if Replication Option is installed
LBACSYS     if Oracle Label Security option is installed
ODM         if Oracle Data Mining option is installed
ODM_MTR     idem
OLAPSYS     if OLAP option is installed
WMSYS       if Oracle Workspace Manager script owmctab.plb is executed
ANONYMOUS, XDB if catqm.sql (catalog script for SQL XML management) is executed

5.30 FORCED LOGGING:
====================

alter database force logging;
alter database no force logging;

If a database is in force logging mode, all changes, except those in temporary
tablespaces, will be logged, independently from any nologging specification.
It is also possible to put arbitrary tablespaces into force logging mode:

alter tablespace <tablespace_name> force logging;

A force logging command might take a while to complete, because it waits until all
ongoing unlogged operations have finished.

alter database add supplemental log data;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
ALTER TABLESPACE TDBA_CDC NO FORCE LOGGING;
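Whether force logging is active can be read straight from the dictionary (both columns
exist from 9.2 on); for example:

SELECT force_logging FROM v$database;
SELECT tablespace_name, force_logging FROM dba_tablespaces;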
==================================
6.1. Install Oracle 92 on Solaris:
==================================

6.1 Tutorial 1:
===============

Short Guide to install Oracle 9.2.0 on SUN Solaris 8

--------------------------------------------------------------------------------

The Oracle 9i Distribution can be found on Oracle Technet (http://technet.oracle.com)
The following short Installation Guide shows how to install Oracle 9.2.0 for SUN
Solaris 8. You may download our scripts to create a database; we suggest this way and
NOT using DBASSIST. Besides these scripts, you can download our SQLNET configuration
files TNSNAMES.ORA, LISTENER.ORA and SQLNET.ORA.

Check Hardware Requirements
Operating System Software Requirements
Java Runtime Environment (JRE)
Check Software Limits
Setup the Solaris Kernel
Create Unix Group 'dba'
Create Unix User 'oracle'
Setup ORACLE environment ($HOME/.profile) as follows
Install from CD-ROM ...
... or Unpacking downloaded installation files
Check oraInst.loc File
Install with Installer in interactive mode
Create the Database
Start Listener
Automatically Start / Stop the Database
Install Oracle Options (optional)
Download Scripts for Sun Solaris

For our installation, we used the following ORACLE_HOME and ORACLE_SID; please adjust
these parameters for your own environment.
ORACLE_HOME = /opt/oracle/product/9.2.0
ORACLE_SID = TYP2

--------------------------------------------------------------------------------

Check Hardware Requirements

Minimal Memory: 256 MB
Minimal Swap Space: Twice the amount of the RAM

To determine the amount of RAM memory installed on your system, enter the following
command:

$ /usr/sbin/prtconf

To determine the amount of SWAP installed on your system, enter the following command
and multiply the BLOCKS column by 512:

$ swap -l

Use the latest kernel patch from Sun Microsystems (http://sunsolve.sun.com)

Operating System Software Requirements

Use the latest kernel patch from Sun Microsystems.
- Download the Patch from: http://sunsolve.sun.com
- Read the README File included in the Patch
- Usually the only thing you have to do is:

$ cd <patch cluster directory>
$ ./install_cluster
$ cat /var/sadm/install_data/_log
$ showrev -p
- Reboot the system

To determine your current operating system information:
$ uname -a

To determine which operating system patches are installed:
$ showrev -p

To determine which operating system packages are installed:
$ pkginfo -i [package_name]

To determine if your X-windows system is working properly, run xclock on your local
system; you can also redirect the X-windows output to another system.
$ xclock

To determine if you are using the correct system executables:
$ which make
$ which ar
$ which ld
$ which nm
Each of the four commands above should point to the /usr/ccs/bin directory. If not,
add /usr/ccs/bin to the beginning of the PATH environment variable in the current
shell.

Java Runtime Environment (JRE)

The JRE shipped with Oracle9i is used by Oracle Java applications such as the Oracle
Universal Installer, and is the only one supported. You should not modify this JRE,
unless it is done through a patch provided by Oracle Support Services. The inventory
can contain multiple versions of the JRE, each of which can be used by one or more
products or releases. The Installer creates the oraInventory directory the first time
it is run to keep an inventory of products that it installs on your system as well as
other installation information. The location of oraInventory is defined in
/var/opt/oracle/oraInst.loc.
Products in an ORACLE_HOME access the JRE through a symbolic link in $ORACLE_HOME/JRE
to the actual location of a JRE within the inventory. You should not modify the
symbolic link.

Check Software Limits

Oracle9i includes native support for files greater than 2 GB. Check your shell to
determine whether it will impose a limit.

To check current soft shell limits, enter the following command:
$ ulimit -Sa

To check maximum hard limits, enter the following command:
$ ulimit -Ha

The file (blocks) value should be multiplied by 512 to obtain the maximum file size
imposed by the shell. A value of unlimited means the operating system default, up to
a maximum of 1 TB.

Setup the Solaris Kernel

Set SEMMNS to the sum of the PROCESSES parameter for each Oracle database, adding the
largest one twice, then add an additional 10 for each database. For example, consider
a system that has three Oracle instances with the PROCESSES parameter in their
initSID.ora files set to the following values:

ORACLE_SID=TYP1, PROCESSES=100
ORACLE_SID=TYP2, PROCESSES=100
ORACLE_SID=TYP3, PROCESSES=200
The value of SEMMNS is calculated as follows:

SEMMNS = [(A=100) + (B=100)] + [(C=200) * 2] + [(# of instances=3) * 10] = 630

Setting parameters too high for the operating system can prevent the machine from
booting up. Refer to Sun Microsystems Sun SPARC Solaris system administration
documentation for parameter limits.

*
* Kernel Parameters on our SUN Enterprise with 640MB for Oracle 9
*
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

-- remarks:
The parameter for shared memory (shminfo_shmmax) can be set to the maximum value;
it will not impact Solaris in any way. The values for semaphores (seminfo_semmni and
seminfo_semmns) depend on the number of clients you want to collect data from.
As a rule of thumb, the values should be set to at least (2 * number of clients + 15).
You will have to reboot the system after making changes to the /etc/system file.
Solaris doesn't automatically allocate shared memory, unless you specify the value in
/etc/system and reboot.

Were I you, I'd put lines in /etc/system that look like the block shown above; only
the first value (shmmax) is *really* important. It specifies the maximum amount of
shared memory to allocate. I'd make this parameter about 70-75% of your physical RAM
(assuming you have nothing else running on this machine besides Oracle; if not, adjust
down accordingly). This value will then dictate your maximum SGA size as you build
your database.
-- end remarks

Create Unix Group 'dba'

$ groupadd -g 400 dba
(to remove the group again: $ groupdel dba)
Create Unix User 'oracle'

$ useradd -u 400 -c "Oracle Owner" -d /export/home/oracle \
  -g "dba" -m -s /bin/ksh oracle

Setup ORACLE environment ($HOME/.profile) as follows

# Setup ORACLE environment
ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=TYP2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=/export/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/dt/lib:/usr/ucblib:/usr/local/lib
export LD_LIBRARY_PATH

# Set up the search paths:
PATH=/bin:/usr/bin:/usr/sbin:/opt/bin:/usr/ccs/bin:/opt/local/GNU/bin
PATH=$PATH:/opt/local/bin:/opt/NSCPnav/bin:$ORACLE_HOME/bin
PATH=$PATH:/usr/local/samba/bin:/usr/ucb:.
export PATH

# CLASSPATH must include the following JRE location(s):
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib

Install from CD-ROM ...

Usually the CD-ROM will be mounted automatically by the Solaris Volume Manager;
if not, do it as follows as user root:

$ su root
$ mkdir /cdrom
$ mount -r -F hsfs /dev/.... /cdrom
exit or CTRL-D

... or Unpacking downloaded installation files

If you downloaded database installation files from Oracle site (901solaris_disk1.cpio.gz,
901solaris_disk2.cpio.gz and 901solaris_disk3.cpio.gz), gunzip them somewhere and you'll
get three .cpio files. The best way to download the huge files is to use the tool
GetRight ( http://www.getright.com/ )

$ cd <somewhere>
$ mkdir Disk1 Disk2 Disk3
$ cd Disk1
$ gunzip 901solaris_disk1.cpio.gz
$ cat 901solaris_disk1.cpio | cpio -icd
This will extract all the files for Disk1; repeat the steps for Disk2 and Disk3. Now
you should have three directories (Disk1, Disk2 and Disk3) containing installation
files.

Check oraInst.loc File

If you used Oracle before on your system, then you must edit the Oracle Inventory
File, usually located in: /var/opt/oracle/oraInst.loc

inventory_loc=/opt/oracle/product/oraInventory

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

$ cd /Disk1
$ DISPLAY=:0.0
$ export DISPLAY
$ ./runInstaller

example display:
$ export DISPLAY=192.168.1.10:0.0
Answer the questions in the Installer; we use the following install directories:

Inventory Location: /opt/oracle/product/oraInventory
Oracle Universal Installer in: /opt/oracle/product/oui
Java Runtime Environment in: /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /var/opt/oracle/oratab

TYP2:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initTYP2.sql in $ORACLE_HOME/dbs, or create a
symbolic link from $ORACLE_HOME/dbs to your location.

$ cd $ORACLE_HOME/dbs
$ ln -s /export/home/oracle/config/9.2.0/initTYP2.ora initTYP2.ora
$ ls -l
initTYP2.ora -> /export/home/oracle/config/9.2.0/initTYP2.ora

First start the Instance, just to test your initTYP2.ora file for correct syntax and
system resources.

$ cd /export/home/oracle/config/9.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the database
SQL> @initTYP2.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initTYP2.log

Start Listener

$ lsnrctl start LSNRTYP2

Automatically Start / Stop the Database

To start the Database automatically at boot time, create or use our startup scripts
dbora and lsnrora (included in ora_config_sol_920.tar.gz), which must be installed in
/etc/init.d. Create symbolic links from the startup directories:

lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Install Oracle Options (optional)

You may want to install the following Options:

Oracle JVM
Oracle XML
Oracle Spatial
Oracle Ultra Search
Oracle OLAP
Oracle Data Mining
Example Schemas

Run the script install_options.sh to enable these options in the database. Before
running this script, adjust the initSID.ora parameters as follows for the build
process. After this, you can reset the parameters to smaller values.

parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000

$ ./install_options.sh

Download Scripts for Sun Solaris

These Scripts can be used as Templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH must be adjusted for your own environment. Besides
this, you should check the initSID.ora parameters for your Database (Size,
Archivelog, ...)

6.2 Environment oracle user:
----------------------------

typical profile for Oracle account on most unix systems:
=====================================
7. install Oracle 9i on Linux:
=====================================

====================
7.1. Article 1:
====================

The Oracle 9i Distribution can be found on Oracle Technet (http://technet.oracle.com)
The following short Guide shows how to install and configure Oracle 9.2.0 on RedHat
Linux 7.2 / 8.0. You may download our Scripts to create a database; we suggest this
way and NOT using DBASSIST. Besides these scripts, you can download our NET
configuration files: LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA.

System Requirements
Create Unix Group 'dba'
Create Unix User 'oracle'
Setup Environment ($HOME/.bash_profile) as follows
Mount the Oracle 9i CD-ROM (only if you have the CD) ...
... or Unpacking downloaded installation files
Install with Installer in interactive mode
Create the Database
Create your own DB-Create Script (optional)
Start Listener
Automatically Start / Stop the Database
Setup Kernel Parameters ( if necessary )
Install Oracle Options (optional)
Download Scripts for RedHat Linux 7.2

For our installation, we used the following ORACLE_HOME and ORACLE_SID; please adjust
these parameters for your own environment.

ORACLE_HOME = /opt/oracle/product/9.2.0
ORACLE_SID = VEN1

--------------------------------------------------------------------------------

System Requirements

Oracle 9i needs Kernel Version 2.4 and glibc 2.2, which is included in RedHat Linux 7.2.
Component                 Check with ...    ... Output
Linux Kernel Version 2.4  rpm -q kernel     kernel-2.4.7-10
System Libraries          rpm -q glibc      glibc-2.2.4-19.3
Proc*C/C++                rpm -q gcc        gcc-2.96-98

Create Unix Group 'dba'

$ groupadd -g 400 dba

Create Unix User 'oracle'

$ useradd -u 400 -c "Oracle Owner" -d /home/oracle \
  -g "dba" -m -s /bin/bash oracle

Setup Environment ($HOME/.bash_profile) as follows

# Setup ORACLE environment
ORACLE_HOME=/opt/oracle/product/9.2.0; export ORACLE_HOME
ORACLE_SID=VEN1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
TNS_ADMIN=/home/oracle/config/9.2.0; export TNS_ADMIN
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
CLASSPATH=$ORACLE_HOME/jdbc/lib/classes111.zip
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
### see JSDK: export CLASSPATH

# Set up JAVA and JSDK environment:
export JAVA_HOME=/usr/local/jdk
export JSDK_HOME=/usr/local/jsdk
CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JSDK_HOME/lib/jsdk.jar
export CLASSPATH

# Set up the search paths:
PATH=$POSTFIX/bin:$POSTFIX/sbin:$POSTFIX/sendmail
PATH=$PATH:/usr/local/jre/bin:/usr/local/jdk/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=$PATH:/usr/local/bin:$ORACLE_HOME/bin:/usr/local/jsdk/bin
PATH=$PATH:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin:/root/bin
PATH=$PATH:/usr/local/samba/bin
export PATH

Mount the Oracle 9i CD-ROM (only if you have the CD) ...

Mount the CD-ROM as user root:

$ su root
$ mkdir /cdrom
$ mount -t iso9660 /dev/cdrom /cdrom
$ exit

... or Unpacking downloaded installation files
If you downloaded database installation files from Oracle site (Linux9i_Disk1.cpio.gz,
Linux9i_Disk2.cpio.gz and Linux9i_Disk3.cpio.gz), gunzip them somewhere and you'll get
three .cpio files. The best way to download the huge files is to use the tool
GetRight ( http://www.getright.com/ )

$ cd <somewhere>
$ mkdir Disk1 Disk2 Disk3
$ cd Disk1
$ gunzip Linux9i_Disk1.cpio.gz
$ cat Linux9i_Disk1.cpio | cpio -icd

Now you should have three directories (Disk1, Disk2 and Disk3) containing installation
files.

Install with Installer in interactive mode

Install Oracle 9i with Oracle Installer

$ cd Disk1
$ DISPLAY=:0.0
$ export DISPLAY
$ ./runInstaller
Answer the questions in the Installer; we use the following install directories:

Inventory Location: /opt/oracle/product/oraInventory
Oracle Universal Installer in: /opt/oracle/product/oui
Java Runtime Environment in: /opt/oracle/product/jre/1.1.8

Edit the Database Startup Script /etc/oratab

VEN1:/opt/oracle/product/9.2.0:Y

Create the Database

Edit and save the CREATE DATABASE File initVEN1.sql in $ORACLE_HOME/dbs, or create a
symbolic link from $ORACLE_HOME/dbs to your location.

$ cd $ORACLE_HOME/dbs
$ ln -s /home/oracle/config/9.2.0/initVEN1.ora initVEN1.ora
$ ls -l
initVEN1.ora -> /home/oracle/config/9.2.0/initVEN1.ora
First start the Instance, just to test your initVEN1.ora file for correct syntax and
system resources.

$ cd /home/oracle/config/9.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the database

SQL> @initVEN1.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initVEN1.log

Create your own DB-Create Script (optional)

You can generate your own DB-Create Script using the Tool: $ORACLE_HOME/bin/dbca

Start Listener

$ lsnrctl start LSNRVEN1

Automatically Start / Stop the Database

To start the Database automatically at boot time, create or use our startup scripts
dbora and lsnrora (included in ora_config_linux_901.tar.gz), which must be installed
in /etc/rc.d/init.d. Create symbolic links from the startup directories in /etc/rc.d
(e.g. /etc/rc.d/rc2.d).

lrwxrwxrwx 1 root root S99dbora -> ../init.d/dbora*
lrwxrwxrwx 1 root root S99lsnrora -> ../init.d/lsnrora*

Setup Kernel Parameters ( if necessary )

Oracle9i uses UNIX resources such as shared memory, swap space, and semaphores
extensively for interprocess communication. If your kernel parameter settings are
insufficient for Oracle9i, you will experience problems during installation and
instance startup. The greater the amount of data you can store in memory, the faster
your database will operate. In addition, by maintaining data in memory, the UNIX
kernel reduces disk I/O activity.

Use the ipcs command to obtain a list of the system's current shared memory and
semaphore segments, and their identification number and owner.

You can modify the kernel parameters by using the /proc file system.
To modify kernel parameters using the /proc file system:

1. Log in as root user.
2. Change to the /proc/sys/kernel directory.
3. Review the current semaphore parameter values in the sem file using the cat or
   more utility:

   # cat sem

   The output will list, in order, the values for the SEMMSL, SEMMNS, SEMOPM, and
   SEMMNI parameters. The following example shows how the output will appear:

   250 32000 32 128

   In the preceding example, 250 is the value of the SEMMSL parameter, 32000 is the
   value of the SEMMNS parameter, 32 is the value of the SEMOPM parameter, and 128 is
   the value of the SEMMNI parameter.

4. Modify the parameter values using the following command:

   # echo SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value > sem

   In the preceding command, all parameters must be entered in order.

5. Review the current shared memory parameters using the cat or more utility:

   # cat shared_memory_parameter

   In the preceding example, the shared_memory_parameter is either the SHMMAX or
   SHMMNI parameter. The parameter name must be entered in lowercase letters.

6. Modify the shared memory parameter using the echo utility. For example, to modify
   the SHMMAX parameter, enter the following:

   # echo 2147483648 > shmmax

7. Write a script to initialize these values during system startup and include the
   script in your system init files.

Refer to the following table to determine if your system shared memory and semaphore
kernel parameters are set high enough for Oracle9i. The parameters in the following
table are the minimum values required to run Oracle9i with a single database instance.

You can put the initialization in the file /etc/rc.d/rc.local

# Setup Kernel Parameters for Oracle 9i
echo 250 32000 100 128 > /proc/sys/kernel/sem
echo 2147483648 > /proc/sys/kernel/shmmax
echo 4096 > /proc/sys/kernel/shmmni

Install Oracle Options (optional)

You may want to install the following Options:

Oracle JVM
Oracle XML
Oracle Spatial
Oracle Ultra Search
Oracle OLAP
Oracle Data Mining
Example Schemas

Run the script install_options.sh to enable these options in the database. Before
running this script, adjust the initSID.ora parameters as follows for the build
process. After this, you can reset the parameters to smaller values.

parallel_automatic_tuning = false
shared_pool_size = 200000000
java_pool_size = 100000000

$ ./install_options.sh

Download Scripts for RedHat Linux 7.2

These Scripts can be used as Templates. Please note that some parameters like
ORACLE_HOME, ORACLE_SID and PATH must be adjusted for your own environment. Besides
this, you should check the initSID.ora parameters for your Database (Size,
Archivelog, ...)
==================== 7.2.Article 2: ==================== Installing Oracle9i (9.2.0.5.0) on Red Hat Linux (Fedora Core 2) by Jeff Hunter, Sr. Database Administrator -------------------------------------------------------------------------------Contents Overview Swap Space Considerations Configuring Shared Memory Configuring Semaphores Configuring File Handles Create Oracle Account and Directories Configuring the Oracle Environment Configuring Oracle User Shell Limits Downloading / Unpacking the Oracle9i Installation Files Update Red Hat Linux System - (Oracle Metalink Note: 252217.1) Install the Oracle 9.2.0.4.0 RDBMS Software Install the Oracle 9.2.0.5.0 Patchset Post Installation Steps Creating the Oracle Database -------------------------------------------------------------------------------Overview The following article is a summary of the steps required to successfully install the Oracle9i (9.2.0.4.0) RDBMS software on Red Hat Linux Fedora Core 2. Also
included in this article is a detailed overview for applying the Oracle9i (9.2.0.5.0)
patchset. Keep in mind the following assumptions throughout this article:

When installing Red Hat Linux Fedora Core 2, I install ALL components (everything).
This makes it easier than trying to troubleshoot missing software components.

As of March 26, 2004, Oracle includes the Oracle9i RDBMS software with the 9.2.0.4.0
patchset already included. This will save considerable time since the patchset does
not have to be downloaded and installed. We will, however, be applying the 9.2.0.5.0
patchset. Although it is not required, it is recommended to apply the 9.2.0.5.0
patchset.

The post installation section includes steps for configuring the Oracle Networking
files, configuring the database to start and stop when the machine is cycled, and
other miscellaneous tasks. Finally, at the end of this article, we will be creating
an Oracle 9.2.0.5.0 database named ORA920 using supplied scripts.

--------------------------------------------------------------------------------

Swap Space Considerations

Ensure enough swap space is available. Installing Oracle9i requires a minimum of 512MB
of memory. (An inadequate amount of swap during the installation will cause the Oracle
Universal Installer to either "hang" or "die".)

To check the amount of memory / swap you have allocated, type either:

# free
- OR -
# cat /proc/swaps
- OR -
# cat /proc/meminfo | grep MemTotal

If you have less than 512MB of memory (between your RAM and SWAP), you can add
temporary swap space by creating a temporary swap file. This way you do not have to
use a raw device or, even more drastic, rebuild your system.

As root, make a file that will act as additional swap space, let's say about 300MB:

# dd if=/dev/zero of=tempswap bs=1k count=300000

Now we should change the file permissions:

# chmod 600 tempswap

Finally we format the "partition" as swap and add it to the swap space:

# mke2fs tempswap
# mkswap tempswap
# swapon tempswap
--------------------------------------------------------------------------------

Configuring Shared Memory

The Oracle RDBMS uses shared memory in UNIX to allow processes to access common data
structures and data. These data structures and data are placed in a shared memory
segment to allow processes the fastest form of Interprocess Communications (IPC)
available. The speed is primarily a result of processes not needing to copy data
between each other to share common data and structures, relieving the kernel from
having to get involved.

Oracle uses shared memory in UNIX to hold its Shared Global Area (SGA). This is an
area of memory within the Oracle instance that is shared by all Oracle background and
foreground processes. It is important to size the SGA to efficiently hold the database
buffer cache, shared pool, redo log buffer as well as other shared Oracle memory
structures. Inadequate sizing of the SGA can cause a dramatic decrease in performance
of the database.

To determine all shared memory limits you can use the ipcs command. The following
example shows the values of my shared memory limits on a fresh RedHat Linux install
using the defaults:

# ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

Let's continue this section with an overview of the parameters that are responsible
for configuring the shared memory settings in Linux.

SHMMAX

The SHMMAX parameter is used to define the maximum size (in bytes) for a shared memory
segment and should be set large enough for the largest SGA size. If the SHMMAX is set
incorrectly (too low), it is possible that the Oracle SGA (which is held in shared
segments) may be limited in size. An inadequate SHMMAX setting would result in the
following:

ORA-27123: unable to attach to shared memory segment

You can determine the value of SHMMAX by performing the following:

# cat /proc/sys/kernel/shmmax
33554432

As you can see from the output above, the default value for SHMMAX is 32MB. This is
often too small to configure the Oracle SGA. I generally set the SHMMAX parameter
to 2GB.
NOTE: With a 32-bit Linux operating system, the default maximum size of the SGA is
1.7GB. This is the reason I will often set the SHMMAX parameter to 2GB, since a larger
SGA requires a larger value for SHMMAX.

On a 32-bit Linux operating system, without Physical Address Extension (PAE), the
physical memory is divided into a 3GB user space and a 1GB kernel space. It is
therefore possible to create a 2.7GB SGA, but you will need to make several changes
at the Linux operating system level by changing the mapped base. In the case of a
2.7GB SGA, you would want to set the SHMMAX parameter to 3GB.

Keep in mind that the maximum value of the SHMMAX parameter is 4GB.
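To relate SHMMAX to what Oracle actually allocates, the current SGA size can be read
from the instance itself; for example, from SQL*Plus:

SQL> SELECT ROUND(SUM(value)/1024/1024) AS sga_mb FROM v$sga;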
To change the value of SHMMAX, you can use any of the following three methods:

This is the method I use most often. This method sets the SHMMAX on startup by
inserting the following kernel parameter in the /etc/sysctl.conf startup file:

# echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

If you wanted to dynamically alter the value of SHMMAX without rebooting the machine,
you can make this change directly to the /proc file system. This command can be made
permanent by putting it into the /etc/rc.local startup file:

# echo "2147483648" > /proc/sys/kernel/shmmax

You can also use the sysctl command to change the value of SHMMAX:

# sysctl -w kernel.shmmax=2147483648

SHMMNI

We now look at the SHMMNI parameter. This kernel parameter is used to set the maximum
number of shared memory segments system wide. The default value for this parameter is
4096. This value is sufficient and typically does not need to be changed.

You can determine the value of SHMMNI by performing the following:

# cat /proc/sys/kernel/shmmni
4096

SHMALL

Finally, we look at the SHMALL shared memory kernel parameter. This parameter controls
the total amount of shared memory (in pages) that can be used at one time on the
system. In short, the value of this parameter should always be at least:

ceil(SHMMAX/PAGE_SIZE)

The default size of SHMALL is 2097152 and can be queried using the following command:

# cat /proc/sys/kernel/shmall
2097152

From the above output, the total amount of shared memory (in bytes) that can be used
at one time on the system is:

SM = (SHMALL * PAGE_SIZE) = 2097152 * 4096 = 8,589,934,592 bytes

The default setting for SHMALL should be adequate for our Oracle installation.

NOTE: The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can,
however, use bigpages which supports the configuration of larger memory page sizes.
-------------------------------------------------------------------------------Configuring Semaphores Now that we have configured our shared memory settings, it is time to take care of configuring our semaphores. A semaphore can be thought of as a counter that is used to control access to a shared resource. Semaphores provide low level synchronization between processes (or threads within a process) so that only one process (or thread) has access to the shared segment, thereby ensureing the integrity of that shared resource. When an application requests semaphores, it does so using "sets". To determine all semaphore limits, use the following: # ipcs -ls ------ Semaphore Limits -------max number of arrays = 128 max semaphores per array = 250 max semaphores system wide = 32000 max ops per semop call = 32 semaphore max value = 32767 You can also use the following command: # cat /proc/sys/kernel/sem 250 32000 32 128 SEMMSL The SEMMSL kernel parameter is used to control the maximum number of semaphores per semaphore set. Oracle recommends setting SEMMSL to the largest PROCESS instance parameter setting in the init.ora file for all databases hosted on the Linux system plus 10. Also, Oracle recommends setting the SEMMSL to a value of no less than 100. SEMMNI The SEMMNI kernel parameter is used to control the maximum number of semaphore sets on the entire Linux system. Oracle recommends setting the SEMMNI to a value of no less than 100. SEMMNS The SEMMNS kernel parameter is used to control the maximum number of semaphores (not semaphore sets) on the entire Linux system. Oracle recommends setting the SEMMNS to the sum of the PROCESSES instance parameter setting for each database on the system, adding the largest PROCESSES twice, and then finally adding 10 for each Oracle database on the system. To summarize: SEMMNS =
sum of PROCESSES setting for each database on the system
+ (2 * [largest PROCESSES setting])
+ (10 * [number of databases on the system])

To determine the maximum number of semaphores that can be allocated on a Linux
system, use the following calculation. It will be the lesser of:

SEMMNS
-or-
(SEMMSL * SEMMNI)
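As a worked example of the SEMMNS recommendation (the PROCESSES values here are hypothetical): for three databases with PROCESSES set to 100, 100, and 200, the formula above gives:

SEMMNS = (100 + 100 + 200) + (2 * 200) + (10 * 3)
       = 400 + 400 + 30
       = 830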
SEMOPM

The SEMOPM kernel parameter is used to control the number of semaphore operations that can be performed per semop system call. The semop system call (function) provides the ability to perform operations on multiple semaphores with one semop system call. Since a semaphore set can have at most SEMMSL semaphores per semaphore set, it is recommended to set SEMOPM equal to SEMMSL. Oracle recommends setting SEMOPM to a value of no less than 100.

Setting Semaphore Kernel Parameters

Finally, we see how to set all semaphore parameters using several methods. In the following, the only parameter I care about changing (raising) is SEMOPM. All other default settings should be sufficient for our example installation.

This is the method I use most often. It sets all semaphore kernel parameters on startup by inserting the following kernel parameter in the /etc/sysctl.conf startup file:

# echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf

If you want to dynamically alter the value of all semaphore kernel parameters without rebooting the machine, you can make this change directly to the /proc file system. This command can be made permanent by putting it into the /etc/rc.local startup file:

# echo "250 32000 100 128" > /proc/sys/kernel/sem

You can also use the sysctl command to change the value of all semaphore settings:

# sysctl -w kernel.sem="250 32000 100 128"

--------------------------------------------------------------------------------
Configuring File Handles

When configuring our Linux database server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles designates the number of open files that you can have on the entire Linux system. Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max
103062

Oracle recommends that the file handles for the entire system be set to at least 65536. In most cases, the default for Red Hat Linux is 103062. I have seen others (Red Hat Linux AS 2.1, Fedora Core 1, and Red Hat version 9) that will only default to 32768. If this is the case, you will want to increase this value to at least 65536.

This is the method I use most often. It sets the maximum number of file handles (using the kernel parameter file-max) on startup by inserting the following kernel parameter in the /etc/sysctl.conf startup file:
# echo "fs.file-max=65536" >> /etc/sysctl.conf If you wanted to dynamically alter the value of all semaphore kernel parameters without rebooting the machine, you can make this change directly to the /proc file system. This command can be made permanent by putting it into the /etc/rc.local startup file: # echo "65536" > /proc/sys/fs/file-max You can also use the sysctl command to change the maximum number of file handles: # sysctl -w fs.file-max=65536 NOTE: It is also possible to query the current usage of file handles using the following command: # cat /proc/sys/fs/file-nr 1140 0 103062 In the above example output, here is an explanation of the three values from the file-nr command: Total number of allocated file handles. Total number of file handles currently being used. Maximum number of file handles that can be allocated. This is essentially the value of file-max - (see above).
NOTE: If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for 2.4.20 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited
--------------------------------------------------------------------------------
Create Oracle Account and Directories

Now let's create the Oracle UNIX account and all required directories:

Login as the root user id.
% su -

Create directories.
# mkdir -p /u01/app/oracle
# mkdir -p /u03/app/oradata
# mkdir -p /u04/app/oradata
# mkdir -p /u05/app/oradata
# mkdir -p /u06/app/oradata

Create the UNIX group for the Oracle user id.
# groupadd -g 115 dba

Create the UNIX user for the Oracle software.
# useradd -u 173 -c "Oracle Software Owner" -d /u01/app/oracle -g "dba" -m -s /bin/bash oracle
# passwd oracle
Changing password for user oracle.
New UNIX password: ************
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: ************
passwd: all authentication tokens updated successfully.

Change ownership of all Oracle directories to the Oracle UNIX user.
# chown -R oracle:dba /u01
# chown -R oracle:dba /u03
# chown -R oracle:dba /u04
# chown -R oracle:dba /u05
# chown -R oracle:dba /u06

Oracle Environment Variable Settings

NOTE: Ensure that you set the environment variable: LD_ASSUME_KERNEL=2.4.1
Failing to set the LD_ASSUME_KERNEL parameter will cause the Oracle Universal Installer to hang!
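For example, a line like the following in the oracle user's login script would take care of this (a sketch; the variable and value come from the note above):

# In ~oracle/.bash_profile - required before launching the Oracle Universal Installer
export LD_ASSUME_KERNEL=2.4.1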
Verify all mount points. Please keep in mind that all of the following mount points can simply be directories if you only have one hard drive. For our installation, we will be using five mount points (or directories) as follows:

/u01 : The Oracle RDBMS software will be installed to /u01/app/oracle.

/u03 : This mount point will contain the physical Oracle files:
       Control File 1
       Online Redo Log File - Group 1 / Member 1
       Online Redo Log File - Group 2 / Member 1
       Online Redo Log File - Group 3 / Member 1

/u04 : This mount point will contain the physical Oracle files:
       Control File 2
       Online Redo Log File - Group 1 / Member 2
       Online Redo Log File - Group 2 / Member 2
       Online Redo Log File - Group 3 / Member 2

/u05 : This mount point will contain the physical Oracle files:
       Control File 3
       Online Redo Log File - Group 1 / Member 3
       Online Redo Log File - Group 2 / Member 3
       Online Redo Log File - Group 3 / Member 3

/u06 : This mount point will contain all of the physical Oracle data files. This will be one large RAID 0 stripe for all Oracle data files: all tablespaces including System, UNDO, Temporary, Data, and Index.

--------------------------------------------------------------------------------
Configuring the Oracle Environment

After configuring the Linux operating environment, it is time to set up the Oracle UNIX user ID for the installation of the Oracle RDBMS software. Keep in mind that the following steps need to be performed by the oracle user id. Before delving into the details for configuring the Oracle user ID, I packaged an archive of shell scripts and configuration files to assist
with the Oracle preparation and installation. You should download the archive "oracle_920_installation_files_linux.tar" as the Oracle user ID and place it in that user's HOME directory.

Login as the oracle user id.
% su - oracle

Unpackage the contents of the oracle_920_installation_files_linux.tar archive. After extracting the archive, you will have a new directory called oracle_920_installation_files_linux that contains all required files. The following set of commands describes how to extract the file and where to copy/extract all required files:

$ id
uid=173(oracle) gid=115(dba) groups=115(dba)
$ pwd
/u01/app/oracle
$ tar xvf oracle_920_installation_files_linux.tar
oracle_920_installation_files_linux/
oracle_920_installation_files_linux/admin.tar
oracle_920_installation_files_linux/common.tar
oracle_920_installation_files_linux/dbora
oracle_920_installation_files_linux/dbshut
oracle_920_installation_files_linux/.bash_profile
oracle_920_installation_files_linux/dbstart
oracle_920_installation_files_linux/ldap.ora
oracle_920_installation_files_linux/listener.ora
oracle_920_installation_files_linux/sqlnet.ora
oracle_920_installation_files_linux/tnsnames.ora
oracle_920_installation_files_linux/crontabORA920.txt
$ cp oracle_920_installation_files_linux/.bash_profile ~/.bash_profile
$ tar xvf oracle_920_installation_files_linux/admin.tar
$ tar xvf oracle_920_installation_files_linux/common.tar
$ . ~/.bash_profile
.bash_profile executed
$

--------------------------------------------------------------------------------
Configuring Oracle User Shell Limits

Many of the Linux shells (including BASH) implement controls over certain critical resources, like the number of file descriptors that can be opened and the maximum number of processes available to a user's session. In most cases, you will not need to alter any of these shell limits, but if you find yourself getting errors when creating or maintaining the Oracle database, you may want to read through this section. You can use the following command to query these shell limits:

# ulimit -a
core file size (blocks, -c)       0
data seg size (kbytes, -d)        unlimited
file size (blocks, -f)            unlimited
max locked memory (kbytes, -l)    unlimited
max memory size (kbytes, -m)      unlimited
open files (-n)                   1024
pipe size (512 bytes, -p)         8
stack size (kbytes, -s)           10240
cpu time (seconds, -t)            unlimited
max user processes (-u)           16383
virtual memory (kbytes, -v)       unlimited

Maximum Number of Open File Descriptors for Shell Session

Let's first talk about the maximum number of open file descriptors for a user's shell session.

NOTE: Make sure that throughout this section you are logged in as the oracle user account, since this is the shell account we want to test!

Ok, you are first going to tell me, "But I've already altered my Linux environment by setting the system wide kernel parameter /proc/sys/fs/file-max". Yes, this is correct, but there is still a per user limit on the number of open file descriptors. This typically defaults to 1024. To check that, use the following command:

% su - oracle
% ulimit -n
1024

If you want to change the maximum number of open file descriptors for a user's shell session, you can edit /etc/security/limits.conf as the root account. For your Linux system, you would add the following lines:

oracle soft nofile 4096
oracle hard nofile 101062

The first line above sets the soft limit, which is the number of file handles (or open files) that the oracle user will have after logging in to the shell account. The hard limit defines the maximum number of file handles (or open files) that are possible for the user's shell account. If the oracle user account starts to receive error messages about running out of file handles, the user should increase the number of file handles using the hard limit setting. You can increase the value of this parameter to 101062 for the current session by using the following:

% ulimit -n 101062

Keep in mind that the above command will only affect the current shell session. If you were to log out and log back in, the value would be set back to its default for that shell session.

NOTE: Although you can set the soft and hard file limits higher, it is critical to understand that you should never set the hard limit for nofile for your shell account equal to /proc/sys/fs/file-max. If you were to do this, your shell session could use up all of the file descriptors for the entire Linux system, which means that the entire Linux system would run out of file descriptors. At this point, you would not be able to initiate any new logins since the system would not be able to open any PAM modules, which are required for login. Notice that I set my hard limit to 101062 and not 103062. In short, I am leaving 2000 spare!

We're not totally done yet. We still need to ensure that pam_limits is configured in the /etc/pam.d/system-auth file. The steps defined below should already be
performed with a normal Red Hat Linux installation, but should still be validated! The PAM module will read the /etc/security/limits.conf file. You should have an entry in the /etc/pam.d/system-auth file as follows:

session required /lib/security/$ISA/pam_limits.so

I typically validate that my /etc/pam.d/system-auth file has the following two entries:

session required /lib/security/$ISA/pam_limits.so
session required /lib/security/$ISA/pam_unix.so

Finally, let's test our new settings for the maximum number of open file descriptors for the oracle shell session. Logout and log back in as the oracle user account, then run the following commands. Let's first check all current soft shell limits:

$ ulimit -Sa
core file size (blocks, -c)       0
data seg size (kbytes, -d)        unlimited
file size (blocks, -f)            unlimited
max locked memory (kbytes, -l)    unlimited
max memory size (kbytes, -m)      unlimited
open files (-n)                   4096
pipe size (512 bytes, -p)         8
stack size (kbytes, -s)           10240
cpu time (seconds, -t)            unlimited
max user processes (-u)           16383
virtual memory (kbytes, -v)       unlimited

Finally, let's check all current hard shell limits:

$ ulimit -Ha
core file size (blocks, -c)       unlimited
data seg size (kbytes, -d)        unlimited
file size (blocks, -f)            unlimited
max locked memory (kbytes, -l)    unlimited
max memory size (kbytes, -m)      unlimited
open files (-n)                   101062
pipe size (512 bytes, -p)         8
stack size (kbytes, -s)           unlimited
cpu time (seconds, -t)            unlimited
max user processes (-u)           16383
virtual memory (kbytes, -v)       unlimited

The soft limit is now set to 4096 while the hard limit is now set to 101062.

NOTE: There may be times when you cannot get access to the root user account to change the /etc/security/limits.conf file. You can set this value in the user's login script for the shell as follows:

su - oracle
cat >> ~oracle/.bash_profile << EOF
ulimit -n 101062
EOF
NOTE: For this section, I used the BASH shell. The session values will not always be the same for other shells.
Maximum Number of Processes for Shell Session This section is very similar to the previous section, "Maximum Number of Open File Descriptors for Shell Session" and deals with the same concept of soft limits and hard limits as well as configuring pam_limits. For most default Red Hat Linux installations, you will not need to be concerned with the maximum number of user processes as this value is generally high enough! NOTE: For this section, I used the BASH shell. The session values will not always be the same for other shells. Let's start by querying the current limit of the maximum number of processes for the oracle user: % su - oracle % ulimit -u 16383 If you wanted to change the soft and hard limits for the maximum number of processes for the oracle user, (and for that matter, all users), you could edit the /etc/security/limits.conf as the root account. For your Linux system, you would add the following lines: oracle soft nproc 2047 oracle hard nproc 16384 NOTE: There may be times when you cannot get access to the root user account to change the /etc/security/limits.conf file. You can set this value in the user's login script for the shell as follows: su - oracle cat >> ~oracle/.bash_profile << EOF ulimit -u 16384 EOF
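After editing /etc/security/limits.conf, you can log out and back in as oracle and confirm the new soft and hard process limits (expected output assuming the values above):

% su - oracle
% ulimit -Su
2047
% ulimit -Hu
16384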
Miscellaneous Notes

To check all current soft shell limits, enter the following command:

$ ulimit -Sa
core file size (blocks, -c)       0
data seg size (kbytes, -d)        unlimited
file size (blocks, -f)            unlimited
max locked memory (kbytes, -l)    unlimited
max memory size (kbytes, -m)      unlimited
open files (-n)                   4096
pipe size (512 bytes, -p)         8
stack size (kbytes, -s)           10240
cpu time (seconds, -t)            unlimited
max user processes (-u)           16383
virtual memory (kbytes, -v)       unlimited

To check maximum hard limits, enter the following command:

$ ulimit -Ha
core file size (blocks, -c)       unlimited
data seg size (kbytes, -d)        unlimited
file size (blocks, -f)            unlimited
max locked memory (kbytes, -l)    unlimited
max memory size (kbytes, -m)      unlimited
open files (-n)                   101062
pipe size (512 bytes, -p)         8
stack size (kbytes, -s)           unlimited
cpu time (seconds, -t)            unlimited
max user processes (-u)           16383
virtual memory (kbytes, -v)       unlimited

The file (blocks) value should be multiplied by 512 to obtain the maximum file size imposed by the shell. A value of unlimited is the operating system default and typically has a maximum value of 1 TB.

NOTE: Oracle9i Release 2 (9.2.0) includes native support for files greater than 2 GB. Check your shell to determine whether it will impose a limit.
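For example, if a shell reported a finite file-size limit, the conversion would look like this (the value is hypothetical):

file size (blocks, -f)            4194304
maximum file size = 4194304 * 512 = 2,147,483,648 bytes (2 GB)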
--------------------------------------------------------------------------------
Downloading / Unpacking the Oracle9i Installation Files

Most of the actions throughout the rest of this document should be done as the "oracle" user account unless otherwise noted. If you are not logged in as the "oracle" user account, do so now.

Download Oracle9i from Oracle's OTN Site. (If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!)

http://www.oracle.com/technology/software/products/oracle9i/htdocs/linuxsoft.html

Download the following files to a temporary directory (e.g. /u01/app/oracle/orainstall):

ship_9204_linux_disk1.cpio.gz (538,906,295 bytes) (cksum - 245082434)
ship_9204_linux_disk2.cpio.gz (632,756,922 bytes) (cksum - 2575824107)
ship_9204_linux_disk3.cpio.gz (296,127,243 bytes) (cksum - 96915247)

Directions to extract the files:

Run "gunzip <filename>" on all the files.
% gunzip ship_9204_linux_disk1.cpio.gz

Extract the cpio archives with the command "cpio -idmv < <filename>".
% cpio -idmv < ship_9204_linux_disk1.cpio

NOTE: Some browsers will uncompress the files but leave the extension the same (gz) when downloading. If the above steps do not work for you, try skipping step 1 and go directly to step 2 without changing the filename.
% cpio -idmv < ship_9204_linux_disk1.cpio.gz
You should now have three directories called "Disk1, Disk2 and Disk3" containing the Oracle9i Installation files:

/Disk1
/Disk2
/Disk3
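If an extraction fails, it is worth first verifying each download against the cksum values listed above. cksum prints the checksum followed by the byte count, so for the first disk you would expect:

% cksum ship_9204_linux_disk1.cpio.gz
245082434 538906295 ship_9204_linux_disk1.cpio.gz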
--------------------------------------------------------------------------------
Update Red Hat Linux System - (Oracle Metalink Note: 252217.1)

The following RPMs, all of which are available on the Red Hat Fedora Core 2 CDs, will need to be updated as per the steps described in Metalink Note: 252217.1 "Requirements for Installing Oracle 9iR2 on RHEL3". All of these packages will need to be installed as the root user:

From Fedora Core 2 / Disk #1
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh libpng-1.2.2-22.i386.rpm

From Fedora Core 2 / Disk #2
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh gnome-libs-1.4.1.2.90-40.i386.rpm

From Fedora Core 2 / Disk #3
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh compat-libstdc++-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-libstdc++-devel-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-db-4.1.25-2.1.i386.rpm
# rpm -Uvh compat-gcc-7.3-2.96.126.i386.rpm
# rpm -Uvh compat-gcc-c++-7.3-2.96.126.i386.rpm
# rpm -Uvh openmotif21-2.1.30-9.i386.rpm
# rpm -Uvh pdksh-5.2.14-24.i386.rpm

From Fedora Core 2 / Disk #4
# cd /mnt/cdrom/Fedora/RPMS
# rpm -Uvh sysstat-5.0.1-2.i386.rpm

Set gcc296 and g++296 in PATH
Put gcc296 and g++296 first in the $PATH variable by creating the following symbolic links:
# mv /usr/bin/gcc /usr/bin/gcc323
# mv /usr/bin/g++ /usr/bin/g++323
# ln -s /usr/bin/gcc296 /usr/bin/gcc
# ln -s /usr/bin/g++296 /usr/bin/g++

Check hostname
Make sure the hostname command returns a fully qualified host name by amending the /etc/hosts file if necessary:
# hostname

Install the 3006854 patch:
The Oracle / Linux Patch 3006854 can be downloaded from Oracle Metalink.
# unzip p3006854_9204_LINUX.zip
# cd 3006854
# sh rhel3_pre_install.sh

--------------------------------------------------------------------------------
Install the Oracle 9.2.0.4.0 RDBMS Software

As the "oracle" user account:
Set your DISPLAY variable to a valid X Windows display.
% DISPLAY=:0.0
% export DISPLAY

NOTE: If you forgot to set the DISPLAY environment variable and you get the following error:

Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server

you will then need to execute the following command to get "runInstaller" working again:

% rm -rf /tmp/OraInstall

If you don't do this, the Installer will hang without giving any error messages. Also make sure that "runInstaller" has stopped running in the background. If not, kill it.

Change directory to the Oracle installation files you downloaded and extracted. Then run runInstaller:

$ su - oracle
$ cd orainstall/Disk1
$ ./runInstaller
Initializing Java Virtual Machine from /tmp/OraInstall2004-05-02_08-4513PM/jre/bin/java. Please wait...

Screen Name: Response
Welcome Screen: Click "Next"
Inventory Location: Click "OK"
UNIX Group Name: Use "dba"
Root Script Window: Open another window, login as the root userid, and run "/tmp/orainstRoot.sh". When the script has completed, return to the dialog from the Oracle Installer and hit Continue.
File Locations: Leave the "Source Path" at its default setting. For the Destination name, I like to use "OraHome920". You can leave the Destination path at its default value, which should be "/u01/app/oracle/product/9.2.0".
Available Products: Select "Oracle9i Database 9.2.0.4.0" and click "Next"
Installation Types: Select "Enterprise Edition (2.84GB)" and click "Next"
Database Configuration: Select "Software Only" and click "Next"
Summary: Click "Install"

Running root.sh script: When the "Link" phase is complete, you will be prompted to run the $ORACLE_HOME/root.sh script as the "root" user account.

Shutdown any started Oracle processes: The Oracle Universal Installer will succeed in starting some Oracle programs, in particular the Oracle HTTP Server (Apache), the Oracle Intelligent Agent, and possibly the Oracle TNS Listener. Make sure all programs are shut down before attempting to continue on to installing the Oracle 9.2.0.5.0 patchset:

% $ORACLE_HOME/Apache/Apache/bin/apachectl stop
% agentctl stop
% lsnrctl stop

--------------------------------------------------------------------------------
Install the Oracle 9.2.0.5.0 Patchset

Once you have completed installing the Oracle9i (9.2.0.4.0) RDBMS software, you should now apply the 9.2.0.5.0 patchset.

NOTE: The details and instructions for applying the 9.2.0.5.0 patchset in this article are not absolutely necessary. I provide them here simply as a convenience for those who do want to apply the latest patchset.

The 9.2.0.5.0 patchset can be downloaded from Oracle Metalink:

Patch Number: 3501955
Description: ORACLE 9i DATABASE SERVER RELEASE 2 - PATCH SET 4 VERSION 9.2.0.5.0
Product: Oracle Database Family
Release: Oracle 9.2.0.5
Select a Platform or Language: Linux x86
Last Updated: 26-MAR-2004
Size: 313M (328923077 bytes)
Use the following steps to install the Oracle10g Universal Installer and then the Oracle 9.2.0.5.0 patchset. To start, let's unpack the Oracle 9.2.0.5.0 patchset to a temporary directory:

% cd orapatch
% unzip p3501955_9205_LINUX.zip
% cpio -idmv < 9205_lnx32_release.cpio

Next, we need to install the Oracle10g Universal Installer into the same $ORACLE_HOME we used to install the Oracle9i RDBMS software.

NOTE: The old Universal Installer that was used to install the Oracle9i RDBMS software (OUI release 2.2) cannot be used to install the 9.2.0.5.0 patchset and higher!
Starting with the Oracle 9.2.0.5.0 patchset, Oracle requires the use of the Oracle10g Universal Installer to apply the 9.2.0.5.0 patchset and to perform all subsequent maintenance operations on the Oracle software $ORACLE_HOME. Let's get this thing started by installing the Oracle10g Universal Installer. This must be done by running the runInstaller that is included with the 9.2.0.5.0 patchset we extracted in the above step: % cd orapatch/Disk1 % ./runInstaller -ignoreSysPrereqs Starting Oracle Universal Installer... Checking installer requirements...
Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3, SuSE-7 or SuSE-8 Failed <<<< >>> Ignoring required pre-requisite failures. Continuing... Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_0748-15PM. Please wait ... Oracle Universal Installer, Version 10.1.0.2.0 Production Copyright (C) 1999, 2004, Oracle. All rights reserved. Use the following options in the Oracle Universal Installer to install the Oracle10g OUI: Screen Name Response Welcome Screen: Click "Next" File Locations: The "Source Path" should be pointing to the products.xml file by default. For the Destination name, choose the same one you created when installing the Oracle9i software. The name we used in this article was "OraHome920" and the destination path should be "/u01/app/oracle/product/9.2.0". Select a Product to Install: Select "Oracle Universal Installer 10.1.0.2.0" and click "Next" Summary: Click "Install"
Exit from the Oracle Universal Installer. Correct the runInstaller symbolic link bug. (Bug 3560961) After the installation of Oracle10g Universal Installer, there is a bug that does NOT update the $ORACLE_HOME/bin/runInstaller symbolic link to point to the new 10g installation location. Since the symbolic link does not get updated, the runInstaller command still points to the old installer (2.2) and will be run instead of the new 10g installer. To correct this, you will need to manually update the $ORACLE_HOME/bin/runInstaller symbolic link: % cd $ORACLE_HOME/bin % ln -s -f $ORACLE_HOME/oui/bin/runInstaller.sh runInstaller We now install the Oracle 9.2.0.5.0 patchset by executing the newly installed 10g Universal Installer: % cd % runInstaller -ignoreSysPrereqs Starting Oracle Universal Installer... Checking installer requirements... Checking operating system version: must be redhat-2.1, UnitedLinux-1.0, redhat-3, SuSE-7 or SuSE-8 Failed <<<< >>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2004-08-30_0759-30PM. Please wait ... Oracle Universal Installer, Version 10.1.0.2.0 Production Copyright (C) 1999, 2004, Oracle. All rights reserved. Here is an overview of the selections I made while performing the 9.2.0.5.0 patchset install: Screen Name Response Welcome Screen: Click "Next" File Locations: The "Source Path" should be pointing to the products.xml file by default. For the Destination name, choose the same one you created when installing the Oracle9i software. The name we used in this article was "OraHome920" and the destination path should be "/u01/app/oracle/product/9.2.0". Select a Product to Install: Select "Oracle 9iR2 Patchsets 9.2.0.5.0" and click "Next" Summary: Click "Install"
Running root.sh script. When the Link phase is complete, you will be prompted to run the $ORACLE_HOME/root.sh script as the "root" user account. Go ahead and run the root.sh script. Exit Universal Installer Exit from the Universal Installer and continue on to the Post Installation section of this article.
--------------------------------------------------------------------------------
Post Installation Steps

After applying the Oracle 9.2.0.5.0 patchset, we should perform several miscellaneous tasks like configuring the Oracle Networking files and setting up startup and shutdown scripts for when the machine is cycled.

Configuring Oracle Networking Files:

I already included sample configuration files (contained in the oracle_920_installation_files_linux.tar file) that can be simply copied to their proper location and started. Change to the oracle HOME directory and copy the files as follows:

% % % % % %
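The six commands above were lost in this copy. Given the files unpacked earlier, a plausible reconstruction (an assumption on my part, not the author's exact commands) is:

% cd /u01/app/oracle
% cp oracle_920_installation_files_linux/listener.ora $ORACLE_HOME/network/admin
% cp oracle_920_installation_files_linux/tnsnames.ora $ORACLE_HOME/network/admin
% cp oracle_920_installation_files_linux/sqlnet.ora $ORACLE_HOME/network/admin
% cp oracle_920_installation_files_linux/ldap.ora $ORACLE_HOME/network/admin
% lsnrctl start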
The dbora script (below) relies on an entry in the /etc/oratab. Perform the following actions as the oracle user account: % echo "ORA920:/u01/app/oracle/product/9.2.0:Y" >> /etc/oratab Configuring Startup / Shutdown Scripts: Also included in the oracle_920_installation_files_linux.tar file is a script called dbora. This script can be used by the init process to startup and shutdown the database when the machine is cycled. The following tasks will need to be performed by the root user account: % su # cp /u01/app/oracle/oracle_920_installation_files_linux/dbora /etc/init.d # chmod 755 /etc/init.d/dbora # # # # #
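The hash-prompt commands above were lost in this copy. They typically register dbora with the run levels so it is called at boot and shutdown; a common pattern (an assumption, not necessarily the author's exact commands) is:

# ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora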
--------------------------------------------------------------------------------
Creating the Oracle Database

Finally, let's create an Oracle9i database. This can be done using scripts that I already included with the oracle_920_installation_files_linux.tar download. The scripts are included in the ~oracle/admin/ORA920/create directory. To create the database, perform the following steps:

% su - oracle
% cd admin/ORA920/create
% ./RUN_CRDB.sh

After starting RUN_CRDB.sh, there will be no screen activity until the database creation is complete. You can, however, bring up a new console window to the Linux database server as the oracle user account, navigate to the same directory you started the database creation from, and tail the crdb.log log file.

$ telnet linux3
...
Fedora Core release 2 (Tettnang)
Kernel 2.6.5-1.358 on an i686
login: oracle
Password: xxxxxx
.bash_profile executed
[oracle@linux3 oracle]$ cd admin/ORA920/create
[oracle@linux3 create]$ tail -f crdb.log
===================================== 8. Install Oracle 9.2.0.2 on OpenVMS: ===================================== VMS: ====
Using OUI to install Oracle9i Release 2 on an OpenVMS System

We have a PC running Xcursion and a 16 Processor GS1280 with the 2 built-in disks. In the examples we booted on disk DKA0:. The Oracle account is on disk DKA100. Oracle and the database will be installed on DKA100. The install disk MUST be ODS-5.

Installation uses the 9.2 kit downloaded from the Oracle website. It comes in a Java JAR file. Oracle ships a JRE with its product; however, you will have to install Java on OpenVMS so you can unpack the 9.2 JAR file that comes from the Oracle website.

Unpack the JAR file as described on the Oracle website. This will create two .BCK files. Follow the instructions in the VMS_9202_README.txt file on how to restore the 2 backup save sets. When the two backup save sets are restored, you should end up with two directories:

[disk1] directory
[disk2] directory

These directories will be in the root of a disk. In this example they are in the root of DKA100.

The OUI requires X-Windows. If the Alpha system you are using does not have a graphic head, use a PC with an X-Windows terminal such as Xcursion.

During this install we discovered a problem: the instructions tell you to run @DKA100:[disk1]runinstaller. This will not work because the RUNINSTALLER.COM file is not in the root of DKA100:[disk1]. You must first copy RUNINSTALLER.COM from the dka100:[disk1.000000] directory into dka100:[disk1]:

$ Copy dka100:[disk1.000000]runinstaller.com dka100:[disk1]

From a terminal window execute:
@DKA100:[disk1]runinstaller

- Oracle Installer starts
Click Next to start the installation.

- Assign name and directory structure for the Oracle Home ORACLE_HOME
Assign a name for your Oracle home. Assign the directory structure for the home, for example Ora_home
Dka100:[oracle.oracle9]

This is where the OUI will install Oracle. The OUI will create the directories as necessary.

- Select product to install
Select Database. Click Next.

- Select type of installation
Select Enterprise Edition (or Standard Edition or Custom). Click Next.

- Enable RAC
Select No. Click Next.

- Database summary
View the list of products that will be installed. Click Install.

- Installation begins
Installation takes from 45 minutes to an hour.

- Installation ends
Click Exit.

Oracle is now installed in DKA100:[oracle.oracle9]. To create the first database, you must first set up the Oracle logicals. To do this, use a terminal and execute @[.oracle9]orauser . The tool to create and manage databases is DBCA. On the terminal, type DBCA to launch the Database Assistant.

- Welcome to Database Configuration Assistant
DBCA starts. Click Next.

- Select an operation
Select Create a Database. Click Next.

- Select a template
Select New Database. Click Next.

- Enter database name and SID
Enter the name of the database and Oracle System Identifier (SID). In this example, the database name is DB9I. The SID is DB9I1. Click Next.

- Select database features
Select which demo databases are installed. In the example, we selected all possible databases. Click Next.

- Select default mode
Select the mode in which you want your database to operate by default. In the example, we selected Shared Server Mode. Click Next.

- Select memory
In the example, we selected the default. Click Next.

- Specify database storage parameters
Select the device and directory. Use the UNIX device syntax.
For example, DKA100:[oracle.oracle9.database] would be:
/DKA100/oracle/oracle9/database/
In the example, we kept the default settings. Click Next.

- Select database creation options
Creating a template saves time when creating a database. Click Finish.

- Create a template
Click OK.

- Creating and starting Oracle Instance
The database builds. If it completes successfully, click Exit. If it does not complete successfully, build it again.

- Running the database
Enter "show system" to see the Oracle database up and running. Set up some files to start and stop the database.

- Example of a start file
This command sets the logicals to manage the database:
$ @dka100:[oracle.oracle9]orauser db9i1
The next line starts the Listener (needed for client connects). The final lines start the database.

- Stop database example
Example of how to stop the database.

- Test database server
Use the Enterprise Manager console to test the database server.

- Oracle Enterprise Manager
Enter the address of the server and the SID. Name the server. Click OK.

- Databases connect information
Select the database. Enter the system account and password. Change the connection box to "AS SYSDBA". Click OK.

- Open database
Database is opened and exposed.

- Listener
The Listener automatically picks up the SID from the database. Start the Listener before the database and the SID will display in the Listener. If you start the database before the Listener, the SID may not appear immediately. To see if the SID is registered in the Listener, enter:
$ lsnrctl stat

- Alter a user
User is altered:
SQL> alter user oe identified by oe account unlock;
SQL> exit
The preferred method is to use the Enterprise Manager Console.
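A minimal sketch of such a start file in DCL (the file and script names here are hypothetical; only the orauser and Listener steps come from the text above):

$! START_DB9I.COM - hypothetical start file for the DB9I1 example
$ @DKA100:[ORACLE.ORACLE9]ORAUSER DB9I1    ! set up the Oracle logicals/symbols
$ LSNRCTL START                            ! start the Listener first (needed for client connects)
$ SQLPLUS /NOLOG @START_DB9I1.SQL          ! a SQL script containing: CONNECT / AS SYSDBA, STARTUP, EXIT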
================================================== 9. Installation of Oracle 9i on AIX and other UNIX ================================================== AIX: ==== 9.1 Installation of Oracle 9i on AIX Doc ID: Note:201019.1 Content Type: TEXT/PLAIN Subject: AIX: Quick Start Guide - 9.2.0 RDBMS Installation 25-JUN-2002 Type: REFERENCE Last Revision Date: 14-APR-2004 Status: PUBLISHED Quick Start Guide Oracle9i Release 2 (9.2.0) RDBMS Installation AIX Operating System
Purpose
=======
This document is designed to be a quick reference that can be used when installing Oracle9i Release 2 (9.2.0) on an AIX platform. It is NOT designed to replace the Installation Guide or other documentation. A familiarity with the AIX Operating System is assumed. If more detailed information is needed, please see the Appendix at the bottom of this document for additional resources. Each step should be done in the order that it is listed. These steps are the bare minimum that is necessary for a typical install of the Oracle9i RDBMS.

Verify OS version is certified with the RDBMS version
======================================================
The following steps are required to verify your version of the AIX operating system is certified with the version of the RDBMS (Oracle9i Release 2 (9.2.0)):

1. Point your web browser to http://metalink.oracle.com.
2. Click the "Certify & Availability" button near the left.
3. Click the "Certifications" button near the top middle.
4. Click the "View Certifications by Platform" link.
5. Select "IBM RS/6000 AIX" and click "Submit".
6. Select Product Group "Oracle Server" and click "Submit".
7. Select Product "Oracle Server - Enterprise Edition" and click "Submit".
8. Read any general notes at the top of the page.
9. Select "9.2 (9i) 64-bit" and click "Submit".
The "Status" column displays the certification status. The links in the "Addt'l Info" and "Install Issue" columns may contain additional information relevant to a given version. Note that if patches are listed under one of these links, your installation is not considered certified unless you apply them. The "Addt'l Info" link also contains information about available patchsets. Installation of patchsets is not required to be considered certified, but they are highly recommended.
Pre-Installation Steps for the System Administrator
====================================================
The following steps are required to verify your operating system meets minimum requirements for installation, and should be performed by the root user. For assistance with system administration issues, please contact your system administrator or operating system vendor. Use these steps to manually check the operating system requirements before attempting to install Oracle RDBMS software, or you may choose to use the convenient "Unix InstallPrep script" which automates these checks for you. For more information about the script, including download information, please review the following article: Note:189256.1
UNIX: Script to Verify Installation Requirements for Oracle 9.x version of RDBMS
The InstallPrep script currently does not check requirements for AIX5L systems.

The Following Steps Need to be Performed by the Root User:

1. Configure Operating System Resources:
Ensure that the system has at least the following resources:
- 400 MB in /tmp *
- 256 MB of physical RAM memory
- Two times the amount of physical RAM memory for Swap/Paging space (On systems with more than 2 GB of physical RAM memory, the requirements for Swap/Paging space can be lowered, but Swap/Paging space should never be less than physical RAM memory.)

* You may also redirect /tmp by setting the TEMP environment variable. This is only recommended in rare circumstances where /tmp cannot be expanded to meet free space requirements.

2. Create an Oracle Software Owner and Group:
Create an AIX user and group that will own the Oracle software. (user = oracle, group = dba)
- Use the "smit security" command to create a new group and user
Please ensure that the user and group you use are defined in the local /etc/passwd (user) and /etc/group (group) files rather than resolved via a network service such as NIS.

3. Create a Software Mount Point and Datafile Mount Points:
Create a mount point for the Oracle software installation. (at least 3.5 GB, typically /u01)
Create a second, third, and fourth mount point for the database files. (typically /u02, /u03, and /u04)
Use of multiple mount points is not required, but is highly recommended for best performance and ease of
recoverability. 4. Ensure that Asynchronous Input Output (AIO) is "Available": Use the following command to check the current AIO status: # lsdev -Cc aio Verify that the status shown is "Available". If the status shown is "Defined", then change the "STATE to be configured at system restart" to "Available" after running the following command: # smit chaio 5. Ensure that the math library is installed on your system: Use the following command to determine if the math library is installed: # lslpp -l bos.adt.libm If this fileset is not installed and "COMMITTED", then you must install it from the AIX operating system CD-ROM from IBM. With the correct CD-ROM mounted, run the following command to begin the process to load the required bos.adt.libm fileset: # smit install_latest AIX5L systems also require the following filesets: # lslpp -l bos.perf.perfstat # lslpp -l bos.perf.libperfstat 6. Download and install JDK 1.3.1 from IBM. At the time this article was created, the JDK could be downloaded from the following URL: http://www.ibm.com/developerworks/java/jdk/aix/index.html Please contact IBM Support if you need assistance downloading or installing the JDK. 7. Mount the Oracle CD-ROM: Mount the Oracle9i Release 2 (9.2.0) CD-ROM using the command: # mount -rv cdrfs /dev/cd0 /cdrom 8. Run the rootpre.sh script: NOTE: You must shutdown ALL Oracle database instances (if any) before running the rootpre.sh script. Do not run the rootpre.sh script if you have a newer version of an Oracle database already installed on this system. Use the following command to run the rootpre.sh script: # /cdrom/rootpre.sh
Installation Steps for the Oracle User
=======================================
The Following Steps Need to be Performed by the Oracle User:

1. Set Environment Variables
Environment variables should be set in the login script for the oracle user. If the oracle user's default shell is the C-shell (/usr/bin/csh), then the login script will be named ".login". If the oracle user's default shell is the Bourne-shell (/usr/bin/bsh) or the Korn-shell (/usr/bin/sh or /usr/bin/ksh), then the login script will be named ".profile". In either case, the login script will be located in the oracle user's home directory ($HOME). The examples below assume that your software mount point is /u01.

Parameter     Value
-----------   -----------------------------
ORACLE_HOME   /u01/app/oracle/product/9.2.0
PATH          /u01/app/oracle/product/9.2.0/bin:/usr/ccs/bin:/usr/bin/X11:
              (followed by any other directories you wish to include)
ORACLE_SID    Set this to what you will call your database instance.
              (typically 4 characters in length)
DISPLAY       :0.0 (review Note:153960.1 for detailed information)
2. Set the umask:
Set the oracle user's umask to "022" in your ".profile" or ".login" file.
Example: umask 022

3. Verify the Environment
Log off and log on as the oracle user to ensure all environment variables are set correctly. Use the following command to view them:
% env | more
Before attempting to run the Oracle Universal Installer (OUI), verify that you can successfully run the following command:
% /usr/bin/X11/xclock
If this does not display a clock on your display screen, please review the following article: Note:153960.1
FAQ: X Server testing and troubleshooting
4. Start the Oracle Universal Installer and install the RDBMS software:
Use the following commands to start the installer:
% cd /tmp
% /cdrom/runInstaller

Respond to the installer prompts as shown below:
- When prompted for whether rootpre.sh has been run by root, enter "y". This should have been done in Pre-Installation step 8 above.
- At the "Welcome Screen", click Next.
- If prompted, enter the directory to use for the "Inventory Location". This can be any directory, but is usually not under ORACLE_HOME because the oraInventory is shared with all Oracle products on the system.
- If prompted, enter the "UNIX Group Name" for the oracle user (dba).
- At the "File Locations Screen", verify the Destination listed is your ORACLE_HOME directory. Also enter a NAME to identify this ORACLE_HOME. The NAME can be anything, but is typically "DataServer" and the first three digits of the version. For example: "DataServer920"
- At the "Available Products Screen", choose Oracle9i Database, then click Next.
- At the "Installation Types Screen", choose Enterprise Edition, then click Next.
- If prompted, click Next at the "Component Locations Screen" to accept the default directories.
- At the "Database Configuration Screen", choose the configuration based on how you plan to use the database, then click Next.
- If prompted, click Next at the "Privileged Operating System Groups Screen" to accept the default values (your current OS primary group).
- If prompted, enter the Global Database Name in the format "ORACLE_SID.hostname" at the "Database Identification Screen". For example: "TEST.AIXhost". The SID entry should be filled in with the value of ORACLE_SID. Click Next.
- If prompted, enter the directory where you would like to put datafiles at the "Database File Location Screen". Click Next.
- If prompted, select "Use the default character set" (WE8ISO8859P1) at the "Database Character Set Screen". Click Next.
- At the "Choose JDK Home Directory", enter the directory where you have previously installed the JDK 1.3.1 from IBM. This should have been done in Pre-Installation step 6 above.
- At the "Summary Screen", review your choices, then click Install.

The install will begin. Follow instructions regarding running "root.sh" and any other prompts. When completed, the install will have created a
default database, configured a Listener, and started both for you. Note: If you are having problems changing CD-ROMs when prompted to do so, please review the following article: Note:146566.1
How to Unmount / Eject First Cdrom
Your Oracle9i Release 2 (9.2.0) RDBMS installation is now complete and ready for use.

Appendix A
==========
Documentation is available from the following resources:

Oracle9i Release 2 (9.2.0) CD-ROM Disk1
----------------------------------------
Mount the CD-ROM, then use a web browser to open the file "index.htm" located at the top level directory of the CD-ROM. On this CD-ROM you will find the Installation Guide, Administrator's Reference, and other useful documentation.

Oracle Documentation Center
---------------------------
Point your web browser to the following URL:
http://otn.oracle.com/documentation/content.html
Select the highest version CD-pack displayed to ensure you get the most up-to-date information.

Unattended install:
-------------------
Note 1:
-------
This note describes how to start the unattended install of patch 9.2.0.5 on AIX 5L, which can be applied to 9.2.0.2, 9.2.0.3, or 9.2.0.4.

Shut down the existing Oracle server instance with normal or immediate priority. For example, shut down all instances (cleanly) if running Parallel Server. Stop all listener, agent and other processes running in or against the ORACLE_HOME that will have the patch set installation.

Run slibclean (/usr/sbin/slibclean) as root to remove any currently unused modules in kernel and library memory.

To perform a silent installation requiring no user intervention:
Copy the response file template provided in the response directory where you unpacked the patch set tar file. Edit the values for all fields that are marked as requiring a value, according to the comments and examples in the template. Start the Oracle Universal Installer from the directory described in Step 4 which applies to your situation. You should pass the full path of the response file template you have edited locally as the last argument, with your own value of ORACLE_HOME and FROM_LOCATION. The following is an example of the command:

% ./runInstaller -silent -responseFile full_path_to_your_response_file

Run the $ORACLE_HOME/root.sh script from a root session. If you are applying the patch set in a cluster database environment, the root.sh script should be run in the same way on both the local node and all participating nodes.

Note 2:
-------
In order to make an unattended install of 9.2.0.1 on Win2K:

Running Oracle Universal Installer and Specifying a Response File

To run Oracle Universal Installer and specify the response file:
Go to the MS-DOS command prompt.
Go to the directory where Oracle Universal Installer is installed.
Run the appropriate response file. For example:

C:\program files\oracle\oui\install> setup.exe -silent -nowelcome -responseFile filename

Where:
filename - Identifies the full path of the specific response file.
-silent - Runs Oracle Universal Installer in complete silent mode. The Welcome window is suppressed automatically. This parameter is optional. If you use -silent, -nowelcome is not necessary.
-nowelcome - Suppresses the Welcome window that appears during installation. This parameter is optional.

Note 3:
-------
Unattended install of 9.2.0.5 on Win2K:

To perform a silent installation requiring no user intervention:
Make a copy of the response file template provided in the response directory where you unzipped the patch set file. Edit the values for all fields that are marked as requiring a value, according to the comments and examples in the template. Start Oracle Universal Installer release 10.1.0.2 located in the unzipped area of the patch set. For example, Disk1\setup.exe. You should pass the full path of the response file template you have edited locally as the last argument, with your own value of ORACLE_HOME and FROM_LOCATION. The syntax is as follows:

setup.exe -silent -responseFile ORACLE_BASE\ORACLE_HOME\response_file_path
===============================
9.2 Oracle and UNIX and other OS:
===============================
You have the following options for creating your new Oracle database:

- Use the Database Configuration Assistant (DBCA).
DBCA can be launched by the Oracle Universal Installer, depending upon the type of install that you select, and provides a graphical user interface (GUI) that guides you through the creation of a database. You can choose not to use DBCA, or you can launch it as a standalone tool at any time in the future to create a database. Run DBCA as:
% dbca

- Create the database manually from a script.
If you already have existing scripts for creating your database, you can still create your database manually. However, consider editing your existing script to take advantage of new Oracle features. Oracle provides a sample database creation script and a sample initialization parameter file with the database software files it distributes, both of which can be edited to suit your needs.

- Upgrade an existing database.

In all cases, the Oracle software needs to be installed on your host machine.
9.1.1 Operating system dependencies:
------------------------------------
First, determine for this version of Oracle what OS settings must be made, and whether any patches must be installed. For example, on Linux, glibc 2.1.3 is needed with Oracle version 8.1.7. Linux can be quite critical with respect to libraries in combination with Oracle. You may also need to adjust shmmax (max size of a shared memory segment) and similar kernel parameters:

# sysctl -w kernel.shmmax=100000000
# echo "kernel.shmmax = 100000000" >> /etc/sysctl.conf

Remark: The material below is general, but is also derived from an Oracle 8.1.7 installation on Red Hat Linux 6.2. For the 8.1.7 installation, the Java JDK 1.1.8 is also needed. It can be downloaded from www.blackdown.org.

Download jdk-1.1.8_v3 jdk118_v3-glibc-2.1.3.tar.bz2 into /usr/local
tar xvIf jdk118_v3-glibc-2.1.3.tar.bz2
ln -s /usr/local/jdk118_v3 /usr/local/java

9.1.2 Environment variables:
----------------------------
Make sure you have the following environment variables set:

ON UNIX:
========
Example 1:
----------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE (root of the Oracle software tree)
ORACLE_HOME=$ORACLE_BASE/product/8.1.5; export ORACLE_HOME (determines the directory containing the instance software)
ORACLE_SID=brdb; export ORACLE_SID (determines the name of the current instance)
ORACLE_TERM=xterm, vt100, ansi or something else; export ORACLE_TERM
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33 (determines the NLS directory for multi-language data files)
NLS_LANG="Dutch_The Netherlands.WE8ISO8859P1"; export NLS_LANG (this specifies the language, territory, and character set for the client applications)
LD_LIBRARY_PATH=/u01/app/oracle/product/8.1.7/lib; export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin; export PATH

Place these variables in the oracle user's profile file: .profile, .bash_profile, etc.
Example 6:
----------
ORACLE_BASE   /u01/app/oracle
ORACLE_HOME   $ORACLE_BASE/product/10.1.0/db_1
ORACLE_PATH   /u01/app/oracle/product/10.1.0/db_1/bin:.
              Note: The period adds the current working directory to the search path.
ORACLE_SID    SAL1
ORAENV_ASK    NO
SQLPATH       /home:/home/oracle:/u01/oracle
TNS_ADMIN     $ORACLE_HOME/network/admin

TWO_TASK
Function: Specifies the default connect identifier to use in the connect string. If this environment variable is set, you do not need to specify the connect identifier in the connect string. For example, if the TWO_TASK environment variable is set to sales, you can connect to a database using the CONNECT username/password command rather than the CONNECT username/password@sales command.
Syntax: Any connect identifier.
Example: PRODDB_TCP

To identify the SID and Oracle home directory for the instance that you want to shut down, enter the following command:

Solaris:
$ cat /var/opt/oracle/oratab

Other operating systems:
$ cat /etc/oratab

ON NT/2000:
===========
SET SET SET SET SET
ON OpenVMS: =========== When Oracle is installed on VMS, a root directory is chosen which is pointed to by the logical name ORA_ROOT. This directory can be placed anywhere on the VMS system. The majority of code, configuration files and command procedures are found below this root directory. When a new database is created a new directory is created in the root directory to store database specific configuration files. This directory is called [.DB_dbname].
This directory will normally hold the system tablespace data file as well as the database specific startup, shutdown and orauser files. The Oracle environment for a VMS user is set up by running the appropriate ORAUSER_dbname.COM file. This sets up the necessary command symbols and logical names to access the various ORACLE utilities. Each database created on a VMS system will have an ORAUSER file in its home directory and will be named ORAUSER_dbname.COM, e.g. for a database SALES the file specification could be:

ORA_ROOT:[DB_SALES]ORAUSER_SALES.COM

To have the environment set up automatically on login, run this command file in your login.com file.

To access SQLPLUS use the following command with a valid username and password:
$ SQLPLUS username/password

SQLDBA is also available on VMS and can be invoked similarly:
$ SQLDBA username/password

9.1.3 OFA directory structure:
------------------------------
Stick to OFA. An example for database PROD:

/opt/oracle/product/8.1.6
/opt/oracle/product/8.1.6/admin/PROD
/opt/oracle/product/8.1.6/admin/pfile
/opt/oracle/product/8.1.6/admin/adhoc
/opt/oracle/product/8.1.6/admin/bdump
/opt/oracle/product/8.1.6/admin/udump
/opt/oracle/product/8.1.6/admin/adump
/opt/oracle/product/8.1.6/admin/cdump
/opt/oracle/product/8.1.6/admin/create
/u02/oradata/PROD
/u03/oradata/PROD
/u04/oradata/PROD
etc.

Example mount points and disks:
-------------------------------
Mount points:
/
/usr
/var
/home
/opt
/u01
/u02
9.1.4 Users and groups:
-----------------------
If you want to work with OS authentication, the init.ora must contain:
remote_login_passwordfile=none
(password file authentication is enabled via the value exclusive)

Required groups in UNIX: group dba. This must exist in the /etc/group file. The group oinstall is often needed as well:

groupadd dba
groupadd oinstall
groupadd oper

Now create the oracle user:

adduser -g oinstall -G dba -d /home/oracle oracle

# # # # # # #
9.1.5 Mount points and disks:
-----------------------------
Create the mount points:

mkdir /opt/u01
mkdir /opt/u02
mkdir /opt/u03
mkdir /opt/u04

For a production environment these must be separate disks.

Now give ownership of these mount points to user oracle and group oinstall:

chown oracle:oinstall /opt/u01
chown oracle:oinstall /opt/u02
chown oracle:oinstall /opt/u03
chown oracle:oinstall /opt/u04
chmod u+x filename
chmod ug+x filename

9.1.6 Testing the oracle user:
------------------------------
Log in as user oracle and issue the commands:

$ groups     (shows the groups: oinstall, dba)
$ umask      (shows 022; if not, put the line "umask 022" in the .profile)

umask determines the default mode of a file or directory when it is created.

rwxrwxrwx = 777
rw-rw-rw- = 666
rw-r--r-- = 644, which corresponds to umask 022

Now edit the .profile or .bash_profile of the oracle user and place the environment variables from section 9.1.2 in the profile. Log out and back in as user oracle, and test the environment:

% env
% echo $variablename
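For example (a quick demonstration of the umask effect described above; demofile and demodir are hypothetical names, and the expected permissions assume umask 022):

% umask 022
% touch demofile && ls -l demofile
-rw-r--r--    1 oracle   oinstall        0 ... demofile
% mkdir demodir && ls -ld demodir
drwxr-xr-x    2 oracle   oinstall     4096 ... demodir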
9.1.7 Oracle Installer for 8.1.x on Linux:
------------------------------------------
Log in as user oracle and run the Oracle installer:

Linux:
startx
cd /usr/local/src/Oracle8iR3
./runInstaller

or go to install/linux on the CD and run runIns.sh

A graphical setup now follows; answer the questions. The installer may ask you to run scripts such as orainstRoot.sh and root.sh. To run these, open a new window:

su root
cd $ORACLE_HOME
./orainstRoot.sh

Database installation on Unix:
------------------------------
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ dbca &

or

$ echo "db1:/usr/oracle/9.0:Y" >> /etc/oratab
$ cd $ORACLE_HOME/dbs
$ cat initdw.ora | sed s/"#db_name = MY_DB_NAME"/"db_name = db1"/ | sed s/#control_files/control_files/ > initdb1.ora

Start and create the database:

$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ export ORACLE_SID=db1
$ sqlplus /nolog <<EOF
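The body of the here-document did not survive in this copy. A minimal sketch of what such a creation script typically contains (file names, sizes, and the EOF marker are assumptions, not the author's original script):

CONNECT / AS SYSDBA
STARTUP NOMOUNT PFILE=initdb1.ora
CREATE DATABASE db1
  LOGFILE GROUP 1 ('/u02/oradata/db1/redo01.log') SIZE 10M,
          GROUP 2 ('/u02/oradata/db1/redo02.log') SIZE 10M
  DATAFILE '/u02/oradata/db1/system01.dbf' SIZE 250M AUTOEXTEND ON;
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
EXIT
EOF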
9.1.8 Database administrator authentication:
--------------------------------------------
-- Using Operating System Authentication

CONNECT / AS SYSDBA
CONNECT / AS SYSOPER

For a remote database connection over a secure connection, the user must also specify the net service name of the remote database:

CONNECT /@net_service_name AS SYSDBA
CONNECT /@net_service_name AS SYSOPER

OSDBA group:  unix: dba,  windows: ORA_DBA
OSOPER group: unix: oper, windows: ORA_OPER

-- Preparing to Use Password File Authentication

To enable authentication of an administrative user using password file authentication you must do the following:

1. Create an operating system account for the user.
2. If not already created, create the password file using the ORAPWD utility:

   ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users

3. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE.
4. Connect to the database as user SYS (or as another user with the administrative privilege). If the user does not already exist in the database, create the user.
5. Grant the SYSDBA or SYSOPER system privilege to the user:

   GRANT SYSDBA to scott;

This statement adds the user to the password file, thereby enabling connection AS SYSDBA. For example, user scott has been granted the SYSDBA privilege, so he can connect as follows:

CONNECT scott/tiger AS SYSDBA

9.1.9 Create a 9i database:
---------------------------
Step 1: Decide on Your Instance Identifier (SID)
Step 2: Establish the Database Administrator Authentication Method
Step 3: Create the Initialization Parameter File
Step 4: Connect to the Instance
Step 5: Start the Instance
Step 6: Issue the CREATE DATABASE Statement
Step 7: Create Additional Tablespaces
Step 8: Run Scripts to Build Data Dictionary Views
Step 9: Run Scripts to Install Additional Options (Optional)
Step 10: Create a Server Parameter File (Recommended)
Step 11: Back Up the Database

Step 1:
-------
% ORACLE_SID=ORATEST; export ORACLE_SID

Step 2: see above
-----------------

Step 3: init.ora
----------------
Note on DB_CACHE_SIZE in 10g:

Parameter type: Big integer
Syntax        : DB_CACHE_SIZE = integer [K | M | G]
Default value : If SGA_TARGET is set:
                - If the parameter is not specified, then the default is 0
                  (internally determined by the Oracle Database).
                - If the parameter is specified, then the user-specified value
                  indicates a minimum value for the memory pool.
                If SGA_TARGET is not set, then the default is either 48 MB or
                4MB * number of CPUs * granule size, whichever is greater.
Modifiable    : ALTER SYSTEM
Basic         : No

Oracle10g obsolete SGA parameters:

Using AMM via the sga_target parameter renders several parameters obsolete. Remember,
you can continue to perform manual SGA tuning if you like, but if you set sga_target,
then these parameters will default to zero:

db_cache_size - This parameter determines the number of database block buffers in the
Oracle SGA and is the single most important parameter in Oracle memory.

db_xk_cache_size - This set of parameters (with x replaced by 2, 4, 8, 16, or 32) sets
the size for specialized areas of the buffer area used to store data from tablespaces
with varying blocksizes. When these are set, they impose a hard limit on the maximum
size of their respective areas.

db_keep_cache_size - This is used to store small tables that perform full table scans.
This data buffer pool was buffer_pool_keep, a sub-pool of db_block_buffers, in Oracle8i.

db_recycle_cache_size - This is reserved for table blocks from very large tables that
perform full table scans. This was buffer_pool_recycle in Oracle8i.

large_pool_size - This is a special area of the shared pool that is reserved for SGA
usage when using the multi-threaded server. The large pool is used for parallel query
and RMAN processing, as well as setting the size of the Java pool.

log_buffer - This parameter determines the amount of memory to allocate for Oracle's
redo log buffers. If there is a high amount of update activity, the log_buffer should
be allocated more space.

shared_pool_size - This parameter defines the pool that is shared by all users in the
system, including SQL areas and data dictionary caching. A large shared_pool_size is
not always better than a smaller shared pool. If your application contains non-reusable
SQL, you may get better performance with a smaller shared pool.

java_pool_size - This parameter specifies the size of the memory area used by Java,
which is similar to the shared pool used by SQL and PL/SQL.

streams_pool_size - This is a new area in Oracle Database 10g that is used to provide
buffer areas for the streams components of Oracle.

This is exactly the same automatic tuning principle behind the Oracle9i
pga_aggregate_target parameter that made these parameters obsolete. If you set
pga_aggregate_target, then these parameters are ignored:

sort_area_size - This parameter determines the memory region that is allocated for
in-memory sorting. When the v$sysstat value "sorts (disk)" becomes excessive, you may
want to allocate additional memory.

hash_area_size - This parameter determines the memory region reserved for hash joins.
Starting with Oracle9i, Oracle Corporation does not recommend using hash_area_size
unless the instance is configured with the shared server option. Oracle recommends that
you enable automatic sizing of SQL work areas by setting pga_aggregate_target;
hash_area_size is retained only for backward compatibility purposes.
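Where no manual tuning is required, the two umbrella parameters can simply be set and
the individual pools left to the database. A minimal sketch (10g; the sizes are purely
illustrative, and SCOPE=BOTH assumes the instance runs on an spfile):

ALTER SYSTEM SET sga_target = 600M SCOPE=BOTH;
ALTER SYSTEM SET pga_aggregate_target = 200M SCOPE=BOTH;

-- the automatically tuned SGA components and their current sizes:
SELECT component, current_size FROM v$sga_dynamic_components;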
Sample Initialization Parameter File:

# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520

# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300

# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump

# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
               "/vobs/oracle/oradata/mynewdb/control02.ctl",
               "/vobs/oracle/oradata/mynewdb/control03.ctl")

# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE

# Shared Server
# Uncomment and use first DISPATCHERS parameter below when your listener is
# configured for SSL (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
#               "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
            "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
            (PROTOCOL=TCP)

# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb

# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

# Network Registration
INSTANCE_NAME=mynewdb

# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800

# Processes and Sessions
PROCESSES=150

# Redo Log and Recovery
FAST_START_MTTR_TARGET=300

# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN

# Sort, Hash Joins, Bitmap Indexes
SORT_AREA_SIZE=524288

# Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Reasonable 10g init.ora:
------------------------
remote_login_passwordfile=EXCLUSIVE
###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=test10gXDB)"
###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=95420416
###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_tablespace=UNDOTBS1

LOG_ARCHIVE_DEST=c:\oracle\oradata\log
LOG_ARCHIVE_FORMAT=arch_%t_%s_%r.dbf

Flash recovery area: the location where RMAN stores disk-based backups.
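As a hedged sketch of that last point (10g; path and size are illustrative), the flash
recovery area is defined by two parameters, with the size set first:

ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/flash_recovery_area' SCOPE=BOTH;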
Step 4: Connect to the Instance: -------------------------------Start SQL*Plus and connect to your Oracle instance AS SYSDBA. $ SQLPLUS /nolog CONNECT SYS/password AS SYSDBA Step 5: Start the Instance: --------------------------Start an instance without mounting a database. Typically, you do this only during database creation or while performing maintenance on the database. Use the STARTUP command with the NOMOUNT option. In this example, because the initialization parameter file is stored in the default location, you are not required to specify the PFILE clause: STARTUP NOMOUNT At this point, there is no database. Only the SGA is created and background processes are started in preparation for the creation of a new database. Step 6: Issue the CREATE DATABASE Statement: -------------------------------------------To create the new database, use the CREATE DATABASE statement. The following statement creates database mynewdb: CREATE DATABASE mynewdb USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M, GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M, GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBERS 5 MAXLOGHISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIONAL CHARACTER SET AL16UTF16 DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE EXTENT MANAGEMENT LOCAL DEFAULT TEMPORARY TABLESPACE tempts1 DATAFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE UNDO TABLESPACE undotbs DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
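Once the statement completes, a quick sanity check can be done from the same SYSDBA
session. Note that the DBA_* dictionary views exist only after catalog.sql has been
run (Step 8), so at this point only the v$ views are usable:

SELECT name, created, log_mode FROM v$database;
SELECT name FROM v$tablespace;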
Oracle 10g create statement: CREATE DATABASE playdwhs USER SYS IDENTIFIED BY cactus USER SYSTEM IDENTIFIED BY cactus LOGFILE GROUP 1 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo01.log') SIZE 100M, GROUP 2 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo02.log') SIZE 100M, GROUP 3 ('/dbms/tdbaplay/playdwhs/recovery/redo_logs/redo03.log') SIZE 100M MAXLOGFILES 5 MAXLOGMEMBERS 5 MAXLOGHISTORY 1 MAXDATAFILES 100 MAXINSTANCES 1 CHARACTER SET US7ASCII NATIONAL CHARACTER SET AL16UTF16 DATAFILE '/dbms/tdbaplay/playdwhs/database/default/system01.dbf' SIZE 500M REUSE EXTENT MANAGEMENT LOCAL SYSAUX DATAFILE '/dbms/tdbaplay/playdwhs/database/default/sysaux01.dbf' SIZE 300M REUSE DEFAULT TEMPORARY TABLESPACE temp TEMPFILE '/dbms/tdbaplay/playdwhs/database/default/temp01.dbf' SIZE 1000M REUSE UNDO TABLESPACE undotbs DATAFILE '/dbms/tdbaplay/playdwhs/database/default/undotbs01.dbf' SIZE 1000M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; CONNECT SYS/password AS SYSDBA -- create a user tablespace to be assigned as the default tablespace for users CREATE TABLESPACE users LOGGING DATAFILE '/u01/oracle/oradata/mynewdb/users01.dbf' SIZE 25M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/u01/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUSE
AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;

For information about creating tablespaces, see Chapter 8, "Managing Tablespaces".

Step 9: Run Scripts to Build Data Dictionary Views

Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA
@/u01/oracle/rdbms/admin/catalog.sql
@/u01/oracle/rdbms/admin/catproc.sql
EXIT

catalog.sql   All databases              Creates the data dictionary and public synonyms
                                         for many of its views; grants PUBLIC access to
                                         the synonyms.
catproc.sql   All databases              Runs all scripts required for, or used with, PL/SQL.
catclust.sql  Real Application Clusters  Creates Real Application Clusters data dictionary
                                         views.

Oracle supplies other scripts that create additional structures you can use in managing
your database and creating database applications. These scripts are listed in Table B-2.
See also your operating system-specific Oracle documentation for the exact names and
locations of these scripts on your operating system.

Table B-2 Creating Additional Data Dictionary Structures
(Script Name - Needed For - Run By - Description)

catblock.sql - Performance management - SYS - Creates views that can dynamically display
  lock dependency graphs.
catexp7.sql - Exporting data to Oracle7 - SYS - Creates the dictionary views needed for
  the Oracle7 Export utility to export data from the Oracle Database in Oracle7 Export
  file format.
caths.sql - Heterogeneous Services - SYS - Installs packages for administering
  heterogeneous services.
catio.sql - Performance management - SYS - Allows I/O to be traced on a table-by-table
  basis.
catoctk.sql - Security - SYS - Creates the Oracle Cryptographic Toolkit package.
catqueue.sql - Advanced Queuing - Creates the dictionary objects required for Advanced
  Queuing.
catrep.sql - Oracle Replication - SYS - Runs all SQL scripts for enabling database
  replication.
catrman.sql - Recovery Manager - RMAN or any user granted the RECOVERY_CATALOG_OWNER
  role - Creates recovery manager tables and views (schema) to establish an external
  recovery catalog for the backup, restore, and recovery functionality provided by the
  Recovery Manager (RMAN) utility.
dbmsiotc.sql - Storage management - Any user - Analyzes chained rows in index-organized
  tables.
dbmsotrc.sql - Performance management - SYS or SYSDBA - Enables and disables generation
  of Oracle Trace output.
dbmspool.sql - Performance management - SYS or SYSDBA - Enables a DBA to lock PL/SQL
  packages, SQL statements, and triggers into the shared pool.
userlock.sql - Concurrency control - SYS or SYSDBA - Provides a facility for user-named
  locks that can be used in a local or clustered environment to aid in sequencing
  application actions.
utlbstat.sql and utlestat.sql - Performance monitoring - SYS - Respectively start and
  stop collecting performance tuning statistics.
utlchn1.sql - Storage management - Any user - For use with the Oracle Database. Creates
  tables for storing the output of the ANALYZE command with the CHAINED ROWS option.
  Can handle both physical and logical rowids.
utlconst.sql - Year 2000 compliance - Any user - Provides functions to validate that
  CHECK constraints on date columns are year 2000 compliant.
utldtree.sql - Metadata management - Any user - Creates tables and views that show
  dependencies between objects.
utlexpt1.sql - Constraints - Any user - For use with the Oracle Database. Creates the
  default table (EXCEPTIONS) for storing exceptions from enabling constraints. Can
  handle both physical and logical rowids.
utlip.sql - PL/SQL - SYS - Used primarily for upgrade and downgrade operations. It
  invalidates all existing PL/SQL modules by altering certain dictionary tables so that
  subsequent recompilations will occur in the format required by the database. It also
  reloads the packages STANDARD and DBMS_STANDARD, which are necessary for any PL/SQL
  compilations.
utlirp.sql - PL/SQL - SYS - Used to change from 32-bit to 64-bit word size or vice
  versa. This script recompiles existing PL/SQL modules in the format required by the
  new database. It first alters some data dictionary tables. Then it reloads the
  packages STANDARD and DBMS_STANDARD, which are necessary for using PL/SQL. Finally,
  it triggers a recompilation of all PL/SQL modules, such as packages, procedures, and
  types.
utllockt.sql - Performance monitoring - SYS or SYSDBA - Displays a lock wait-for graph,
  in tree structure format.
utlpwdmg.sql - Security - SYS or SYSDBA - Creates PL/SQL functions for default password
  complexity verification. Sets the default password profile parameters and enables
  password management features.
utlrp.sql - PL/SQL - SYS - Recompiles all existing PL/SQL modules that were previously
  in an INVALID state, such as packages, procedures, and types.
utlsampl.sql - Examples - SYS or any user with DBA role - Creates sample tables, such
  as emp and dept, and users, such as scott.
utlscln.sql - Oracle Replication - Any user - Copies a snapshot schema from another
  snapshot site.
utltkprf.sql - Performance management - SYS - Creates the TKPROFER role to allow the
  TKPROF profiling utility to be run by non-DBA users.
utlvalid.sql - Partitioned tables - Any user - Creates tables required for storing
  output of ANALYZE TABLE ... VALIDATE STRUCTURE of a partitioned table.
utlxplan.sql - Performance management - Any user - Creates the output table (PLAN_TABLE)
  for the EXPLAIN PLAN statement.
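As a side note on that last entry: utlxplan.sql is typically run once in the schema
that wants to use EXPLAIN PLAN. A minimal sketch (table EMP is illustrative; DBMS_XPLAN
is available from 9iR2 onward):

@?/rdbms/admin/utlxplan.sql
EXPLAIN PLAN FOR SELECT * FROM emp WHERE empno = 7369;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);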
+++++++
Please create on pl003 the following two instances:
- playdwhs
- accpdwhs

And on pl101 the following instance:
- proddwhs

In line with the current standard for filesystems. That is, all of these databases go
on volume group roca_vg, with the following mount points underneath:

/dbms/tdba[env]/[env]dwhs/admin
/dbms/tdba[env]/[env]dwhs/database
/dbms/tdba[env]/[env]dwhs/recovery
/dbms/tdba[env]/[env]dwhs/export

/dev/fslv32  0.25  0.23  /dbms/tdbaaccp/accproca/admin
/dev/fslv33 15.00 11.78  /dbms/tdbaaccp/accproca/database
/dev/fslv34  4.00  3.51  /dbms/tdbaaccp/accproca/recovery
/dev/fslv35  5.00  4.99  /dbms/tdbaaccp/accproca/export
+++++++

Step 7: Create Additional Tablespaces:
--------------------------------------
To make the database functional, you need to create additional files and tablespaces
for users. The following sample script creates some additional tablespaces:

CONNECT SYS/password AS SYSDBA
-- create a user tablespace to be assigned as the default tablespace for users
CREATE TABLESPACE users LOGGING
DATAFILE '/vobs/oracle/oradata/mynewdb/users01.dbf' SIZE 25M REUSE
AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
-- create a tablespace for indexes, separate from user tablespace
CREATE TABLESPACE indx LOGGING
DATAFILE '/vobs/oracle/oradata/mynewdb/indx01.dbf' SIZE 25M REUSE
AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL;
EXIT

Step 8: Run Scripts to Build Data Dictionary Views:
---------------------------------------------------
Run the scripts necessary to build views, synonyms, and PL/SQL packages:

CONNECT SYS/password AS SYSDBA
@/vobs/oracle/rdbms/admin/catalog.sql
@/vobs/oracle/rdbms/admin/catproc.sql
EXIT

Do not forget to run, as SYSTEM, the script sqlplus/admin/pupbld.sql:

@/dbms/tdbaaccp/ora10g/home/sqlplus/admin/pupbld.sql
@/dbms/tdbaaccp/ora10g/home/rdbms/admin/catexp.sql

The following table contains descriptions of the scripts:

Script Description

CATALOG.SQL:
Creates the views of the data dictionary tables, the dynamic performance views,
and public synonyms for many of the views. Grants PUBLIC access to the synonyms.

CATPROC.SQL:
Runs all scripts required for or used with PL/SQL.
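A common sanity check after running these scripts is to count invalid objects and, if
necessary, recompile them with utlrp.sql (a sketch; '?' is the SQL*Plus shorthand for
$ORACLE_HOME):

SELECT count(*) FROM dba_objects WHERE status = 'INVALID';
@?/rdbms/admin/utlrp.sql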
Step 10: Create a Server Parameter File (Recommended):
------------------------------------------------------
Oracle recommends that you create a server parameter file as a dynamic means of
maintaining initialization parameters. The following script creates a server parameter
file from the text initialization parameter file and writes it to the default location.
The instance is shut down, then restarted using the server parameter file (in the
default location).

CONNECT SYS/password AS SYSDBA
-- create the server parameter file
CREATE SPFILE='/vobs/oracle/dbs/spfilemynewdb.ora'
FROM PFILE='/vobs/oracle/admin/mynewdb/scripts/init.ora';

SHUTDOWN
-- this time you will start up using the server parameter file
CONNECT SYS/password AS SYSDBA
STARTUP
EXIT

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfileOWS.ora'
FROM PFILE='/opt/app/oracle/admin/OWS/pfile/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGACC.ora'
FROM PFILE='/opt/app/oracle/admin/PEGACC/scripts/init.ora';

CREATE SPFILE='/opt/app/oracle/product/9.2/dbs/spfilePEGTST.ora'
FROM PFILE='/opt/app/oracle/admin/PEGTST/scripts/init.ora';

9.10 Oracle 9i licenses:
------------------------
Setting License Parameters

Oracle no longer offers licensing by the number of concurrent sessions. Therefore the
LICENSE_MAX_SESSIONS and LICENSE_SESSIONS_WARNING initialization parameters have been
deprecated.

- named user licensing:
If you use named user licensing, Oracle can help you enforce this form of licensing.
You can set a limit on the number of users created in the database. Once this limit is
reached, you cannot create more users.

Note: This mechanism assumes that each person accessing the database has a unique user
name and that no people share a user name. Therefore, so that named user licensing can
help you ensure compliance with your Oracle license agreement, do not allow multiple
users to log in using the same user name.

To limit the number of users created in a database, set the LICENSE_MAX_USERS
initialization parameter in the database's initialization parameter file, as shown in
the following example:

LICENSE_MAX_USERS = 200

- per-processor licensing:
Oracle encourages customers to license the database on the per-processor licensing
model. With this licensing method you count up the number of CPUs in your computer, and
multiply that number by the licensing cost of the database and database options you
need. Currently the Standard (STD) edition of the database is priced at $15,000 per
processor, and the Enterprise (EE) edition is priced at $40,000 per processor. The RAC
feature is $20,000 per processor extra, and you need to add 22 percent annually for the
support contract.

It's possible to license the database on a per-user basis, which makes financial sense
if there'll never be many users accessing the database. However, the licensing method
can't be changed after it is initially licensed. So if the business grows and requires
significantly more users to access the database, the costs could exceed the costs under
the per-processor model.

You also have to understand what Oracle Corporation considers to be a user for
licensing purposes. If 1,000 users access the database through an application server,
which only makes five connections to the database, then Oracle will require that either
1,000 user licenses be purchased or that the database be licensed via the per-processor
pricing model.

The Oracle STD edition is licensed at $300 per user (with a five user minimum), and the
EE edition costs $800 per user (with a 25 user minimum). There is still an annual
support fee of 22 percent, which should be budgeted in addition to the licensing fees.
If the support contract is not paid each year, then the customer is not licensed to
upgrade to the latest version of the database and must re-purchase all of the licenses
over again in order to upgrade versions.

This section only gives you a brief overview of the available licensing options and
costs, so if you have additional questions you really should contact an Oracle sales
representative.

Note about 10g init.ora:
------------------------
PARALLEL_MAX_SERVERS=(> number of apply or capture processes)
Each capture process and apply process may use multiple parallel execution servers.
The apply process by default needs two parallel servers, so this parameter needs to be
set to at least 2 even for a single non-parallel apply process. Specify a value for
this parameter to ensure that there are enough parallel execution servers. In our
installation we went for 12 apply servers, so we increased parallel_max_servers above
this figure of 12.

_kghdsidx_count=1
This parameter prevents the shared_pool from being divided among CPUs.

LOG_PARALLELISM=1
This parameter must be set to 1 at each database that captures events.

Parameters set using the DBMS_CAPTURE_ADM package:
Using the DBMS_CAPTURE_ADM.SET_PARAMETER procedure, there are 3 parameters that are
commonly used to affect the installation:
PARALLELISM=3
There may be only one logminer session for the whole ruleset and only one enqueuer
process that will push the objects. You can safely define as many as 3 execution
capture processes per CPU.

_CHECKPOINT_FREQUENCY=1
Increases the frequency of logminer checkpoints, especially in a database with
significant LOB or DDL activity. A logminer checkpoint is requested by default every
10Mb of redo mined.

_SGA_SIZE
Amount of memory available from the shared pool for logminer processing. The default
amount of shared_pool memory allocated to logminer is 10Mb. Increase this value
especially in environments where large LOBs are processed.

9.11 Older Database installations:
----------------------------------
CREATE DATABASE Examples on 8.x

The easiest way to create an 8i or 9i database is using the "Database Configuration
Assistant". Using this tool, you are able to create a database and set up the NET
configuration and the listener, in a graphical environment. It is also possible to use
a script running in sqlplus (8i, 9i) or svrmgrl (only in 8i).

Charactersets that are used a lot in Europe: WE8ISO8859P15, WE8MSWIN1252

Example 1:
----------
$ SQLPLUS /nolog
CONNECT username/password AS sysdba
STARTUP NOMOUNT PFILE=<path to init.ora>

-- Create database
CREATE DATABASE rbdb1
CONTROLFILE REUSE
LOGFILE
'/u01/oracle/rbdb1/redo01.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo02.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo03.log' SIZE 1M REUSE,
'/u01/oracle/rbdb1/redo04.log' SIZE 1M REUSE
DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 10M REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 200M
CHARACTER SET WE8ISO8859P1;
run catalog.sql
run catproc.sql

-- Create an extra (temporary) rollback segment in the SYSTEM tablespace
CREATE ROLLBACK SEGMENT rb_temp STORAGE (INITIAL 100 k NEXT 250 k);

-- Bring the temporary rollback segment online before proceeding
ALTER ROLLBACK SEGMENT rb_temp ONLINE;

-- Create additional tablespaces ...
-- RBS: For rollback segments
-- USERS: Create user sets this as the default tablespace
-- TEMP: Create user sets this as the temporary tablespace
CREATE TABLESPACE rbs
DATAFILE '/u01/oracle/rbdb1/rbs01.dbf' SIZE 5M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;

CREATE TABLESPACE users
DATAFILE '/u01/oracle/rbdb1/users01.dbf' SIZE 3M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;

CREATE TABLESPACE temp
DATAFILE '/u01/oracle/rbdb1/temp01.dbf' SIZE 2M REUSE AUTOEXTEND ON
NEXT 5M MAXSIZE 150M;

-- Create rollback segments.
CREATE ROLLBACK SEGMENT rb1 STORAGE(INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb2 STORAGE(INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb3 STORAGE(INITIAL 50K NEXT 250K) TABLESPACE rbs;
CREATE ROLLBACK SEGMENT rb4 STORAGE(INITIAL 50K NEXT 250K) TABLESPACE rbs;
-- Bring new rollback segments online and drop the temporary one
ALTER ROLLBACK SEGMENT rb1 ONLINE;
ALTER ROLLBACK SEGMENT rb2 ONLINE;
ALTER ROLLBACK SEGMENT rb3 ONLINE;
ALTER ROLLBACK SEGMENT rb4 ONLINE;

ALTER ROLLBACK SEGMENT rb_temp OFFLINE;
DROP ROLLBACK SEGMENT rb_temp;

Example 2:
----------
connect internal
startup nomount pfile=/disk00/oracle/software/7.3.4/dbs/initDB1.ora

create database "DB1"
    maxinstances 2
    maxlogfiles  32
    maxdatafiles 254
    character set "US7ASCII"
datafile
    '/disk02/oracle/oradata/DB1/system01.dbf' size 128M autoextend on next 8M maxsize 256M
logfile
    group 1 ('/disk03/oracle/oradata/DB1/redo1a.log',
             '/disk04/oracle/oradata/DB1/redo1b.log') size 5M,
    group 2 ('/disk05/oracle/oradata/DB1/redo2a.log',
             '/disk06/oracle/oradata/DB1/redo2b.log') size 5M

REM * install data dictionary views
@/disk00/oracle/software/7.3.4/rdbms/admin/catalog.sql
@/disk00/oracle/software/7.3.4/rdbms/admin/catproc.sql

create rollback segment SYSROLL tablespace system
storage (initial 2M next 2M minextents 2 maxextents 255);

alter rollback segment SYSROLL online;

create tablespace RBS datafile '/disk01/oracle/oradata/DB1/rbs01.dbf' size 25M
default storage (initial 500K next 500K pctincrease 0 minextents 2);

create rollback segment RBS01 tablespace RBS
storage (initial 512K next 512K minextents 50);

create rollback segment RBS02 tablespace RBS
storage (initial 500K next 500K minextents 2 optimal 1M);
etc..

alter rollback segment RBS01 online;
alter rollback segment RBS02 online;
etc..

create tablespace DATA datafile '/disk05/oracle/oradata/DB1/data01.dbf' size 25M
default storage (initial 500K next 500K pctincrease 0 maxextents UNLIMITED);
etc.. other tablespaces you need

run other scripts you need.

alter user sys temporary tablespace TEMP;
alter user system default tablespace TOOLS temporary tablespace TEMP;

connect system/manager
@/disk00/oracle/software/7.3.4/rdbms/admin/catdbsyn.sql
@/disk00/oracle/software/7.3.4/sqlplus/admin/pupbld.sql
(for PRODUCT_USER_PROFILE, SQLPLUS_USER_PROFILE)

Example 3: on NT/2000 8i best example:
--------------------------------------
Suppose you want a second database on an NT/2000 Server:

1. create a service with oradim:

oradim -new -sid <SID> -startmode <auto|manual> -pfile <path to init.ora>

2. sqlplus /nolog (or use svrmgrl)

startup nomount pfile="G:\oracle\admin\hd\pfile\init.ora"

SVRMGR> CREATE DATABASE hd
LOGFILE 'G:\oradata\hd\redo01.log' SIZE 2048K,
        'G:\oradata\hd\redo02.log' SIZE 2048K,
        'G:\oradata\hd\redo03.log' SIZE 2048K
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXLOGHISTORY 1
DATAFILE 'G:\oradata\hd\system01.dbf' SIZE 264M REUSE AUTOEXTEND ON NEXT 10240K
MAXDATAFILES 254
MAXINSTANCES 1
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET WE8ISO8859P1;
@catalog.sql
@catproc.sql

Oracle 9i:
----------
Example 1:
----------
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('/vobs/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
        GROUP 2 ('/vobs/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
        GROUP 3 ('/vobs/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/vobs/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE tempts1
DATAFILE '/vobs/oracle/oradata/mynewdb/temp01.dbf' SIZE 20M REUSE UNDO TABLESPACE undotbs DATAFILE '/vobs/oracle/oradata/mynewdb/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
9.2 Automatic start of Oracle at system boot:
==============================================

9.2.1 oratab:
-------------
Contents of ORATAB in /etc or /var/opt:

Example:
# $ORACLE_SID:$ORACLE_HOME:[N|Y]
# ORCL:/u01/app/oracle/product/8.0.5:Y

The oracle scripts to start and stop the database are:
$ORACLE_HOME/bin/dbstart and dbshut, or startdb and stopdb, or something similar.
These look in ORATAB to see which databases must be started.

9.2.2 dbstart and dbshut:
-------------------------
The dbstart script reads oratab and also performs tests to determine the oracle version.
Beyond that, its core consists of:
- starting sqldba, svrmgrl or sqlplus
- then performing a connect
- then issuing the startup command.
A similar story applies to dbshut.

9.2.3 init, sysinit, rc:
------------------------
For an automatic start, add the correct entries to the /etc/rc2.d/S99dbstart
(or equivalent) file. During Unix startup, the scripts in /etc/rc2.d that begin with
an 'S' are executed, in alphabetical order. The Oracle database processes will be
started as (one of) the last processes. The file S99oracle is linked into this
directory.

Contents of S99oracle:

su - oracle -c "/path/to/$ORACLE_HOME/bin/dbstart"         # Start DB's
su - oracle -c "/path/to/$ORACLE_HOME/bin/lsnrctl start"   # Start listener
su - oracle -c "/path/to/$ORACLE_HOME/bin/namesctl start"  # Start OraNames (optional)
The dbstart script is a standard Oracle script. It checks in oratab which sids are set
to 'Y', and will start those databases.

Or customized, via a custom startdb script:

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
su - oracle -c "$ORACLE_ADMIN/bin/startdb WPRD 1>$ORACLE_ADMIN/log/WPRD/startWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WTST 1>$ORACLE_ADMIN/log/WTST/startWTST.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/startdb WCUR 1>$ORACLE_ADMIN/log/WCUR/startWCUR.$$ 2>&1"
9.3 Stopping Oracle on Unix:
----------------------------
During Unix shutdown (shutdown -i 0), the scripts in the directory /etc/rc2.d that
begin with a 'K' are executed, in alphabetical order. The Oracle database processes are
among the first processes to be shut down. The file K10oracle is linked to
/etc/rc2.d/K10oracle.

# Configuration File: /opt/oracle/admin/bin/K10oracle
ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WPRD 1>$ORACLE_ADMIN/log/WPRD/stopWPRD.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WCUR 1>$ORACLE_ADMIN/log/WCUR/stopWCUR.$$ 2>&1"
su - oracle -c "$ORACLE_ADMIN/bin/stopdb WTST 1>$ORACLE_ADMIN/log/WTST/stopWTST.$$ 2>&1"

9.4 startdb and stopdb:
-----------------------
startdb [ORACLE_SID]
--------------------
This script is part of the S99oracle script. This script takes one
parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/startdb

# Set general environment
. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
echo $ORACLE_SID

# Set RDBMS environment
. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Start the database
sqlplus /nolog << EOF
connect / as sysdba
startup
EOF

# Start the listener
lsnrctl start $ORACLE_SID

# Start the intelligent agent for all instances
#lsnrctl dbsnmp_start
stopdb [ORACLE_SID]
-------------------
This script is part of the K10oracle script. This script takes one parameter, ORACLE_SID.

# Configuration File: /opt/oracle/admin/bin/stopdb

# Set general environment
. $ORACLE_ADMIN/env/profile

ORACLE_SID=$1
export ORACLE_SID

# Settings of the RDBMS
. $ORACLE_ADMIN/env/$ORACLE_SID.env

# Stop the intelligent agent
#lsnrctl dbsnmp_stop

# Stop the listener
lsnrctl stop $ORACLE_SID

# Stop the database.
sqlplus /nolog << EOF
connect / as sysdba
shutdown immediate
EOF
9.5 Batches:
------------
The batches (jobs) are started by the Unix cron process.

# Batches (Oracle)
# Configuration File: /var/spool/cron/crontabs/root
# Format of lines:
# min hour daymo month daywk cmd
#
# Dayweek 0=sunday, 1=monday...
0 9 * * 6 /sbin/sh /opt/oracle/admin/batches/bin/batches.sh >> /opt/oracle/admin/batches/log/batcheserroroutput.log 2>&1

# Configuration File: /opt/oracle/admin/batches/bin/batches.sh
# By putting ' BL_TRACE=T ; export BL_TRACE ' on the command line, all commands are shown.
case $BL_TRACE in T) set -x ;; esac

ORACLE_ADMIN=/opt/oracle/admin; export ORACLE_ADMIN
ORACLE_HOME=/opt/oracle/product/8.1.6; export ORACLE_HOME

ORACLE_SID=WCUR ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

ORACLE_SID=WCON ; export ORACLE_SID
su - oracle -c ". $ORACLE_ADMIN/env/profile ; . $ORACLE_ADMIN/env/$ORACLE_SID.env; cd $ORACLE_ADMIN/batches/bin; sqlplus /NOLOG @$ORACLE_ADMIN/batches/bin/Analyse_WILLOW2K.sql 1> $ORACLE_ADMIN/batches/log/batches$ORACLE_SID.`date +"%y%m%d"` 2>&1"

9.6 Autostart in NT/Win2K:
--------------------------
1) Older versions

delete the existing instance from the command prompt:
oradim80 -delete -sid SID

recreate the instance from the command prompt:
oradim -new -sid SID -intpwd <password> -startmode auto -pfile <path\initSID.ora>

Execute the command file from the command prompt:
oracle_home\database\strt<sid>.cmd

Check the log file generated from this execution:
oracle_home\rdbmsxx\oradimxx.log
2) NT Registry

value HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0\ORA_SID_AUTOSTART REG_EXPAND_SZ TRUE

9.7 Tools:
----------
Relink of Oracle:
-----------------
info:
showrev -p
pkginfo -i

relink:
make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk install
make -f $ORACLE_HOME/svrmgr/lib/ins_svrmgr.mk install
make -f $ORACLE_HOME/network/lib/ins_network.mk install

$ORACLE_HOME/bin/relink all

Relinking Oracle

Background: Applications for UNIX are generally not distributed as complete
executables. Oracle, like many application vendors who create products for UNIX,
distributes individual object files, library archives of object files, and some source
files which then get "relinked" at the operating system level during installation to
create usable executables. This guarantees a reliable integration with functions
provided by the OS system libraries.

Relinking occurs automatically under these circumstances:
- An Oracle product has been installed with an Oracle provided installer.
- An Oracle patch set has been applied via an Oracle provided installer.

[Step 1] Log into the UNIX system as the Oracle software owner.
Typically this is the user 'oracle'.

[Step 2] Verify that your $ORACLE_HOME is set correctly:
For all Oracle Versions and Platforms, perform this basic environment check first:
% cd $ORACLE_HOME
% pwd
...Doing this will ensure that $ORACLE_HOME is set correctly in your current environment.

[Step 3] Verify and/or Configure the UNIX Environment for Proper Relinking:
For all Oracle Versions and UNIX Platforms:
The platform-specific environment variables LIBPATH, LD_LIBRARY_PATH, & SHLIB_PATH
typically are already set to include system library locations like '/usr/lib'.
In most cases, you need only check what they are set to first, then add the
$ORACLE_HOME/lib directory to them where appropriate, i.e.:

% setenv LD_LIBRARY_PATH ${ORACLE_HOME}/lib:${LD_LIBRARY_PATH}

(see [NOTE:131207.1] How to Set UNIX Environment Variables for help with setting UNIX
environment variables)

If on SOLARIS (Sparc or Intel) with Oracle 7.3.X, 8.0.X, or 8.1.X:
- Ensure that /usr/ccs/bin is before /usr/ucb in $PATH
  % which ld
  ....should return '/usr/ccs/bin/ld'
If using 32bit (non 9i) Oracle:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib
If using 64bit (non 9i) Oracle:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib64
Oracle 9.X.X (64Bit) on Solaris (64Bit) OS:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib32
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib
Oracle 9.X.X (32Bit) on Solaris (64Bit) OS:
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib

[Step 4] For all Oracle Versions and UNIX Platforms:
Verify that you performed Step 3 correctly:
% env | pg
....make sure that you see the correct absolute path for $ORACLE_HOME in the variable
definitions.

[Step 5] Run the OS Commands to Relink Oracle:
Before relinking Oracle, shut down both the database and the listener.

Oracle 8.1.X or 9.X.X
---------------------
*** NEW IN 8i AND ABOVE ***
A 'relink' script is provided in the $ORACLE_HOME/bin directory.

% cd $ORACLE_HOME/bin
% relink
...this will display all of the command's options.

usage: relink <parameter>
accepted values for parameter:
  all               Every product executable that has been installed
  oracle            Oracle Database executable only
  network           net_client, net_server, cman
  client            net_client, plsql
  client_sharedlib  Client shared library
  interMedia        ctx
  ctx               Oracle Text utilities
  precomp           All precompilers that have been installed
  utilities         All utilities that have been installed
  oemagent          oemagent
  ldap              ldap, oid

Note: To give the correct permissions to the nmo and nmb executables, you must run the
root.sh script after relinking oemagent.

Note: the ldap option is available only from 9i. In 8i, you would have to manually
relink ldap.

You can relink most of the executables associated with an Oracle Server Installation
by running the following command:

% relink all
This will not relink every single executable Oracle provides (you can discern which
executables were relinked by checking their timestamp with 'ls -l' in the
$ORACLE_HOME/bin directory). However, 'relink all' will recreate the shared libraries
that most executables rely on and thereby resolve most issues that require a proper
relink.

-or-

Since the 'relink' command merely calls the traditional 'make' commands, you still have
the option of running the 'make' commands independently:

For executables: oracle, exp, imp, sqlldr, tkprof, mig, dbv, orapwd, rman, svrmgrl,
ogms, ogmsctl
% cd $ORACLE_HOME/rdbms/lib
% make -f ins_rdbms.mk install

For executables: sqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus.mk install

For executables: isqlplus
% cd $ORACLE_HOME/sqlplus/lib
% make -f ins_sqlplus install_isqlplus

For executables: dbsnmp, oemevent, oratclsh
% cd $ORACLE_HOME/network/lib
% make -f ins_oemagent.mk install

For executables: names, namesctl
% cd $ORACLE_HOME/network/lib
% make -f ins_names.mk install

For executables: osslogin, trcasst, trcroute, onrsd, tnsping
% cd $ORACLE_HOME/network/lib
% make -f ins_net_client.mk install

For executables: tnslsnr, lsnrctl
% cd $ORACLE_HOME/network/lib
% make -f ins_net_server.mk install

For executables related to ldap (for example Oracle Internet Directory):
% cd $ORACLE_HOME/ldap/lib
% make -f ins_ldap.mk install

Note: from a Unix Installation/OS RDBMS Technical Forum thread:

From: Ray Stell 20-Apr-05
Subject: solaris upgrade
RDBMS Version: 9.2.0.4
Operating System and Version: Solaris 8

I need to move a server from solaris 5.8 to 5.9. Does this
require a new oracle 9.2.0 ee server install or relink or nothing at all? Thanks.

From: Samir Saad 21-Apr-05
Subject: Re: solaris upgrade
You must relink even if you find that the databases came up after the Solaris upgrade
and they seem fine. As for the existing Oracle installations, they will all be fine.
Samir.

From: Oracle, soumya anand 21-Apr-05
Subject: Re: solaris upgrade
Hello Ray,
As rightly pointed out by Samir, after an OS upgrade it is sufficient to relink the
executables.
Regards, Soumya

Note: troubles after relink:
----------------------------
If you see on AIX something that resembles the following:

P522:/home/oracle $lsnrctl
exec(): 0509-036 Cannot load program lsnrctl because of the following errors:
        0509-130 Symbol resolution failed for /usr/lib/libc.a[aio_64.o] because:
        0509-136   Symbol kaio_rdwr64 (number 0) is not exported from dependent module /unix.
        0509-136   Symbol listio64 (number 1) is not exported from dependent module /unix.
        0509-136   Symbol acancel64 (number 2) is not exported from dependent module /unix.
        0509-136   Symbol iosuspend64 (number 3) is not exported from dependent module /unix.
        0509-136   Symbol aio_nwait (number 4) is not exported from dependent module /unix.
        0509-150   Dependent module libc.a(aio_64.o) could not be loaded.
        0509-026 System error: Cannot run a file that does not have a valid format.
        0509-192 Examine .loader section symbols with the 'dump -Tv' command.

then you have asynchronous I/O turned off. To turn on asynchronous I/O:
1. Run smitty chgaio and set "STATE to be configured at system restart" from
   defined to available. Press Enter.
2. Do one of the following:
   - Restart your system, or
   - Run smitty aio, move the cursor to "Configure defined Asynchronous I/O",
     then press Enter.
trace:
------
truss -aef -o /tmp/trace svrmgrl

To trace what a Unix process is doing, enter:
truss -rall -wall -p <pid>
truss -p <pid>

$ lsnrctl dbsnmp_start

NOTE: The "truss" command works on SUN and Sequent. Use "tusc" on HP-UX, "strace" on
Linux, "trace" on SCO Unix, or call your system administrator to find the equivalent
command on your system.

Monitor your Unix system:

Logfiles:
---------
Unix message files record all system problems like disk errors, swap errors, NFS
problems, etc. Monitor the following files on your system to detect system problems:

tail -f /var/adm/SYSLOG
tail -f /var/adm/messages
tail -f /var/log/syslog
10.2 PK and FK constraint relations:
------------------------------------
SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       SUBSTR(b.column_name, 1, 40)       as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER in ('TRIDION_CM','TCMLOGDBUSER','VPOUSERDB')
AND c.constraint_type in ('P', 'R', 'U');

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       SUBSTR(b.column_name, 1, 40)       as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE'
AND c.constraint_type in ('P', 'R', 'U');

SELECT distinct c.constraint_type         as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

-----------------------------------------------------------------------
create table reftables
(TYPE            varchar2(32),
 TABLE_NAME      varchar2(40),
 CONSTRAINT_NAME varchar2(40),
 REF_KEY         varchar2(40),
 REF_TABLE       varchar2(40));
insert into reftables (type, table_name, constraint_name, ref_key)
SELECT distinct c.constraint_type         as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

update reftables r
set REF_TABLE=(select distinct table_name from dba_cons_columns
               where owner='RM_LIVE' and CONSTRAINT_NAME=r.REF_KEY);

----------------------------------------------------------------------
SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE'
AND c.constraint_type ='R';

SELECT c.constraint_type                  as TYPE,
       SUBSTR(c.table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 40) as REF_KEY,
       (select distinct x.table_name from dba_cons_columns x
        where x.constraint_name=c.r_constraint_name) as REF_TABLE
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='RM_LIVE'
AND c.constraint_type in ('R', 'P');

select c.constraint_name, c.constraint_type, c.table_name,
       c.r_constraint_name, o.constraint_name, o.column_name
from   dba_constraints c, dba_cons_columns o
where  c.constraint_name=o.constraint_name
and    c.constraint_type='R'
and    c.owner='BRAINS';

SELECT 'SELECT * FROM '||c.table_name||' WHERE '||b.column_name||' '||c.search_condition
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name
AND c.OWNER='BRAINS'
AND c.constraint_type = 'C';

SELECT 'ALTER TABLE PROJECTS.'||table_name||' enable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='PROJECTS' AND constraint_type='R';

SELECT 'ALTER TABLE BRAINS.'||table_name||' disable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='BRAINS' AND constraint_type='R';
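Such generated ALTER statements are typically spooled to a file from SQL*Plus and then
executed; a minimal sketch (the file name disable_fk.sql is illustrative):

SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SPOOL disable_fk.sql
SELECT 'ALTER TABLE BRAINS.'||table_name||' disable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='BRAINS' AND constraint_type='R';
SPOOL OFF
@disable_fk.sql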
10.3 PK and FK constraint information: DBA_CONSTRAINTS
------------------------------------------------------
-- owner and all foreign key constraints

SELECT SUBSTR(owner, 1, 10)             as OWNER,
       constraint_type                  as TYPE,
       SUBSTR(table_name, 1, 40)        as TABLE_NAME,
       SUBSTR(constraint_name, 1, 40)   as CONSTRAINT_NAME,
       SUBSTR(r_constraint_name, 1, 40) as REF_KEY,
       DELETE_RULE                      as DELETE_RULE,
       status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS'
AND constraint_type in ('R', 'P', 'U');

SELECT SUBSTR(owner, 1, 10)             as OWNER,
       constraint_type                  as TYPE,
       SUBSTR(table_name, 1, 30)        as TABLE_NAME,
       SUBSTR(constraint_name, 1, 30)   as CONSTRAINT_NAME,
       SUBSTR(r_constraint_name, 1, 30) as REF_KEY,
       DELETE_RULE                      as DELETE_RULE,
       status
FROM DBA_CONSTRAINTS
WHERE OWNER='BRAINS'
AND constraint_type in ('R');

-- determine the owner and all primary key constraints of a given user, on given objects.
-- Same query: set OWNER='desired_owner' AND constraint_type='P'

select owner, CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS
from dba_constraints
where owner='FIN_VLIEG'
and constraint_type in ('P','R','U');
10.4 Finding the index belonging to a given constraint: DBA_INDEXES, DBA_CONSTRAINTS
------------------------------------------------------------------------------------
SELECT c.constraint_type                as TYPE,
       substr(x.index_name, 1, 40)      as INDEX_NAME,
       substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
       substr(x.tablespace_name, 1, 40) as TABLESPACE_NAME
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name
AND c.constraint_name='UN_DEMO1';

SELECT c.constraint_type                as TYPE,
       substr(x.index_name, 1, 40)      as INDEX_NAME,
       substr(c.constraint_name, 1, 40) as CONSTRAINT_NAME,
       substr(c.table_name, 1, 40)      as TABLE_NAME,
       substr(c.owner, 1, 10)           as OWNER
FROM DBA_CONSTRAINTS c, DBA_INDEXES x
WHERE c.constraint_name=x.index_name
AND c.owner='JOOPLOC';
10.5 Finding the tablespace of a constraint or constraint owner:
----------------------------------------------------------------
SELECT substr(s.segment_name, 1, 40)    as Segmentname,
       substr(c.constraint_name, 1, 40) as Constraintname,
       substr(s.tablespace_name, 1, 40) as Tablespace,
       substr(s.segment_type, 1, 10)    as Type
FROM DBA_SEGMENTS s, DBA_CONSTRAINTS c
WHERE s.segment_name=c.constraint_name
AND c.owner='PROJECTS';

10.6 Retrieving index create statements:
----------------------------------------
DBA_INDEXES
DBA_IND_COLUMNS

SELECT substr(i.index_name, 1, 40)  as INDEX_NAME,
       substr(i.index_type, 1, 15)  as INDEX_TYPE,
       substr(i.table_name, 1, 40)  as TABLE_NAME,
       substr(c.index_owner, 1, 10) as INDEX_OWNER,
       substr(c.column_name, 1, 40) as COLUMN_NAME,
       c.column_position            as POSITION
FROM DBA_INDEXES i, DBA_IND_COLUMNS c
WHERE i.index_name=c.index_name
AND i.owner='SALES';
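From 9i onward, the complete CREATE INDEX DDL can also be extracted directly with the
DBMS_METADATA package, instead of assembling it from DBA_INDEXES/DBA_IND_COLUMNS.
A sketch (index and owner names are illustrative):

SET LONG 20000
SELECT DBMS_METADATA.GET_DDL('INDEX', 'EMP_IDX1', 'SALES') FROM dual;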
10.7 Enabling and disabling constraints:
----------------------------------------
-- enable:
alter table tablename enable constraint constraint_name

-- disable:
alter table tablename disable constraint constraint_name

-- example:
ALTER TABLE EMPLOYEE DISABLE CONSTRAINT FK_DEPNO;
ALTER TABLE EMPLOYEE ENABLE CONSTRAINT FK_DEPNO;

but this also works:
ALTER TABLE DEMO ENABLE PRIMARY KEY;

-- Disable all FK constraints of a schema at once:
SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' disable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER' AND constraint_type='R' AND TABLE_NAME LIKE 'MKM%';
SELECT 'ALTER TABLE MIS_OWNER.'||table_name||' enable constraint '||constraint_name||';'
FROM DBA_CONSTRAINTS
WHERE owner='MIS_OWNER' AND constraint_type='R' AND TABLE_NAME LIKE 'MKM%';

10.8 Creating a constraint that is initially disabled:
------------------------------------------------------
This can be useful when, for example, loading a table in which duplicate values may occur.

ALTER TABLE CUSTOMERS ADD CONSTRAINT PK_CUST PRIMARY KEY (custid) DISABLE;

If it turns out, when enabling the constraint, that duplicate records exist, we can
place these duplicate records in the EXCEPTIONS table:

1. Create the EXCEPTIONS table:
@ORACLE_HOME\rdbms\admin\utlexcpt.sql

2. Enable the constraint:
ALTER TABLE CUSTOMERS ENABLE PRIMARY KEY exceptions INTO EXCEPTIONS;

Now the EXCEPTIONS table contains the duplicate rows.

3. Which duplicate rows:
SELECT c.custid, c.name
FROM CUSTOMERS c, EXCEPTIONS s
WHERE c.rowid=s.row_id;

10.9 Using PK and FK constraints:
---------------------------------
10.9.1: Example of normal use with DRI:

create table customers
( custid   number not null,
  custname varchar(10),
  CONSTRAINT pk_cust PRIMARY KEY (custid)
);

create table contacts
( contactid   number not null,
  custid      number,
  contactname varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid)
);

With this setup you cannot simply delete a row with a given custid from customers
if a row exists in contacts with the same custid.

10.9.2: Example with ON DELETE CASCADE:

create table contacts
( contactid   number not null,
  custid      number,
  contactname varchar(10),
  CONSTRAINT pk_contactid PRIMARY KEY (contactid),
  CONSTRAINT fk_cust FOREIGN KEY (custid) REFERENCES customers(custid) ON DELETE CASCADE
);

The clause "ON DELETE SET NULL" can also be used.
Now it is possible to delete a row in customers while a matching custid exists in
contacts: the row in contacts is then deleted as well.

10.10 Procedures for insert, delete:
------------------------------------
As an example on table customers:

CREATE OR REPLACE PROCEDURE newcustomer (custid NUMBER, custname VARCHAR) IS
BEGIN
  INSERT INTO customers values (custid, custname);
  commit;
END;
/

CREATE OR REPLACE PROCEDURE delcustomer (cust NUMBER) IS
BEGIN
  delete from customers where custid=cust;
  commit;
END;
/

10.11 User data dictionary views:
---------------------------------
We have already seen that for constraint information we mainly consult the views below:

DBA_TABLES
DBA_INDEXES,
DBA_CONSTRAINTS,
DBA_IND_COLUMNS,
DBA_SEGMENTS

These, however, are for the DBA. Ordinary users can retrieve information from the
USER_ and ALL_ views:

USER_ : the objects owned by the user
ALL_  : the objects the user has access to

There are also the old-style shorthand dictionary views/synonyms: cat, tab, col, dict.

10.12 Create and drop index examples:
-------------------------------------
CREATE UNIQUE INDEX HEATCUST0 ON HEATCUST(CUSTTYPE)
TABLESPACE INDEX_SMALL
PCTFREE 10
STORAGE(INITIAL 163840 NEXT 163840 PCTINCREASE 0);

DROP INDEX indexname;

10.13 Check the height of indexes:
----------------------------------
Is an index rebuild necessary?

SELECT index_name, owner, blevel,
       decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
       2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
FROM dba_indexes
WHERE owner='SALES' and blevel > 3;

10.14 Make indexes unusable (before a large dataload):
------------------------------------------------------
-- Make indexes unusable
alter index HEAT_CUSTOMER_DISCON_DATE unusable;
alter index HEAT_CUSTOMER_EMAIL_ADDRESS unusable;
alter index HEAT_CUSTOMER_POSTAL_CODE unusable;

-- Enable indexes again
alter index HEAT_CUSTOMER_DISCON_DATE rebuild; alter index HEAT_CUSTOMER_EMAIL_ADDRESS rebuild; alter index HEAT_CUSTOMER_POSTAL_CODE rebuild;
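During the load itself, the session can be told to skip the unusable indexes instead of
raising ORA-01502; a sketch (in 10g, SKIP_UNUSABLE_INDEXES also exists as an instance
parameter):

ALTER SESSION SET skip_unusable_indexes = TRUE;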
================================
11. DBMS_JOB and scheduled Jobs:
================================
Used in Oracle 9i and lower versions.

11.1 SNP background process:
----------------------------
Scheduled jobs are possible when the SNP background process is activated.
This can be done via the init.ora:

JOB_QUEUE_PROCESSES=1   number of SNP processes (SNP0, SNP1), max 36,
                        for replication and job queues
JOB_QUEUE_INTERVAL=60   check interval

11.2 DBMS_JOB package:
----------------------
DBMS_JOB.SUBMIT()
DBMS_JOB.REMOVE()
DBMS_JOB.CHANGE()
DBMS_JOB.WHAT()
DBMS_JOB.NEXT_DATE()
DBMS_JOB.INTERVAL()
DBMS_JOB.RUN()

11.2.1 DBMS_JOB.SUBMIT()
------------------------
There are actually two versions: SUBMIT() and ISUBMIT().

PROCEDURE DBMS_JOB.SUBMIT
(job       OUT BINARY_INTEGER,
 what      IN  VARCHAR2,
 next_date IN  DATE DEFAULT SYSDATE,
 interval  IN  VARCHAR2 DEFAULT 'NULL',
 no_parse  IN  BOOLEAN DEFAULT FALSE);

PROCEDURE DBMS_JOB.ISUBMIT
(job       IN  BINARY_INTEGER,
 what      IN  VARCHAR2,
 next_date IN  DATE DEFAULT SYSDATE,
 interval  IN  VARCHAR2 DEFAULT 'NULL',
 no_parse  IN  BOOLEAN DEFAULT FALSE);

The difference between ISUBMIT and SUBMIT is that ISUBMIT specifies a job number,
whereas SUBMIT returns a job number generated by the DBMS_JOB package.

Look for submitted jobs:
------------------------
select job, last_date, next_date, interval, substr(what, 1, 50)
from dba_jobs;

Submit a job:
-------------
The job number (if you use SUBMIT()) will be derived from the sequence SYS.JOBSEQ.

Suppose you have the following procedure:

create or replace procedure test1
is
begin
  dbms_output.put_line('Hallo grapjas.');
end;
/

Example 1:
----------
variable jobno number;
begin
  DBMS_JOB.SUBMIT(:jobno, 'test1;', Sysdate, 'Sysdate+1');
  commit;
end;
/

DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (job       => jobno
                  ,what      => 'test1;'
                  ,next_date => SYSDATE
                  ,interval  => 'SYSDATE+1/24');
  COMMIT;
END;
/

So suppose you submit the above job at 08.15h. Then the next, and first, time that the
job will run is at 09.15h.

Example 2:
----------
variable jobno number;
begin
  DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1),
                  'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
  commit;
end;
/

Example 3:
----------
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno,
    'DBMS_DDL.ANALYZE_OBJECT(''TABLE'', ''CHARLIE'', ''X1'', ''ESTIMATE'', NULL, 50);',
    SYSDATE, 'SYSDATE + 1');
  COMMIT;
END;
/
PRINT jobno

JOBNO
----------
14144

Example 4: this job is scheduled every hour
-------------------------------------------
DECLARE
  jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (job       => jobno
                  ,what      => 'begin space_logger; end;'
                  ,next_date => SYSDATE
                  ,interval  => 'SYSDATE+1/24');
  COMMIT;
END;
/

Example 5: Examples of intervals
--------------------------------
'SYSDATE + 7'                                   : exactly seven days from the last execution
'SYSDATE + 1/48'                                : every half hour
'NEXT_DAY(TRUNC(SYSDATE), ''MONDAY'') + 15/24'  : every Monday at 3PM
'NEXT_DAY(ADD_MONTHS(TRUNC(SYSDATE, ''Q''), 3), ''THURSDAY'')'
                                                : first Thursday of each quarter
'TRUNC(SYSDATE + 1)'                            : every day at 12:00 midnight
'TRUNC(SYSDATE + 1) + 8/24'                     : every day at 8:00 a.m.
'NEXT_DAY(TRUNC(SYSDATE), ''TUESDAY'') + 12/24' : every Tuesday at 12:00 noon
'TRUNC(LAST_DAY(SYSDATE) + 1)'                  : first day of the month at midnight
'TRUNC(ADD_MONTHS(SYSDATE + 2/24, 3), ''Q'') - 1/24'
                                                : last day of the quarter at 11:00 p.m.
'TRUNC(LEAST(NEXT_DAY(SYSDATE,''MONDAY''), NEXT_DAY(SYSDATE,''WEDNESDAY''), NEXT_DAY(SYSDATE,''FRIDAY''))) + 9/24'
                                                : every Monday, Wednesday, and Friday at 9:00 a.m.
---------------------------------------------------------------------------------
Example 6:
----------
You have this test procedure:

create or replace procedure test1 as
  id_next number;
begin
  select max(id) into id_next from iftest;
  insert into iftest (id) values (id_next+1);
  commit;
end;
/

Suppose on 16 July at 9:26h you do:

variable jobno number;
begin
  DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE+1),
                  'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
  commit;
end;
/

select job, to_char(this_date,'DD-MM-YYYY;HH24:MI'), to_char(next_date,'DD-MM-YYYY;HH24:MI')
from dba_jobs;

JOB  TO_CHAR(THIS_DAT  TO_CHAR(NEXT_DAT
---- ----------------  ----------------
25                     31-07-2004;09:26

Suppose on 16 July at 9:38h you do:

variable jobno number;
begin
  DBMS_JOB.SUBMIT(:jobno, 'test1;', LAST_DAY(SYSDATE)+1,
                  'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
  commit;
end;
/

JOB  TO_CHAR(THIS_DAT  TO_CHAR(NEXT_DAT
---- ----------------  ----------------
25                     31-07-2004;09:26
26                     01-08-2004;09:38
Suppose on July 16 at 9:41h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE,
 'LAST_DAY(ADD_MONTHS(LAST_DAY(SYSDATE+1),1))');
 commit;
end;
/

       JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
---------- ---------------- ----------------
        27                  31-08-2004;09:41
        25                  31-07-2004;09:26
        26                  01-08-2004;09:39
Suppose on July 16 at 9:46h you do:

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', SYSDATE,
 'TRUNC(LAST_DAY(SYSDATE + 1/24 ) )');
 commit;
end;
/

      JOB TO_CHAR(THIS_DAT TO_CHAR(NEXT_DAT
--------- ---------------- ----------------
       27                  31-08-2004;09:41
       28                  31-07-2004;00:00
       25                  31-07-2004;09:26
       29                  31-07-2004;00:00

-------------------------------------------------------------------------------------

variable jobno number;
begin
 DBMS_JOB.SUBMIT(:jobno, 'test1;', null,
 'TRUNC(LAST_DAY(SYSDATE ) + 1)' );
 commit;
end;
/

In the job definition, use two single quotation marks around strings.
Always include a semicolon at the end of the job definition.
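As a hedged illustration of both rules, and of ISUBMIT, the following sketch picks
the job number itself (the number 999 and the put_line text are arbitrary choices
for this example); note the doubled single quotes inside the WHAT string and its
terminating semicolon:

BEGIN
  DBMS_JOB.ISUBMIT(
    job       => 999,                                            -- fixed job number (arbitrary)
    what      => 'BEGIN dbms_output.put_line(''hello''); END;',  -- doubled quotes and ; inside
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1');                                 -- run daily
  COMMIT;
END;
/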
11.2.2 DBMS_JOB.REMOVE()
------------------------
Removing a job from the job queue:
To remove a job from the job queue, use the REMOVE procedure in the DBMS_JOB
package.
The following statements remove job number 14144 from the job queue:

BEGIN
 DBMS_JOB.REMOVE(14144);
END;
/

11.2.3 DBMS_JOB.CHANGE()
------------------------
In this example, job number 14144 is altered to execute every three days:

BEGIN
 DBMS_JOB.CHANGE(14144, NULL, NULL, 'SYSDATE + 3');
END;
/

If you specify NULL for WHAT, NEXT_DATE, or INTERVAL when you call the procedure
DBMS_JOB.CHANGE, the current value remains unchanged.

11.2.4 DBMS_JOB.WHAT()
----------------------
You can alter the definition of a job by calling the DBMS_JOB.WHAT procedure.
The following example changes the definition for job number 14144:

BEGIN
 DBMS_JOB.WHAT(14144,
 'DBMS_DDL.ANALYZE_OBJECT(''TABLE'', ''HR'', ''DEPARTMENTS'', ''ESTIMATE'', NULL, 50);');
END;
/

11.2.5 DBMS_JOB.NEXT_DATE()
---------------------------
You can alter the next execution time for a job by calling the DBMS_JOB.NEXT_DATE
procedure, as shown in the following example:

BEGIN
 DBMS_JOB.NEXT_DATE(14144, SYSDATE + 4);
END;
/

11.2.6 DBMS_JOB.INTERVAL():
---------------------------
The following example illustrates changing the execution interval for a job by
calling the DBMS_JOB.INTERVAL procedure:

BEGIN
 DBMS_JOB.INTERVAL(14144, 'NULL');
END;
/
In this case (interval 'NULL'), the job will not run again after it successfully
executes, and it will be deleted from the job queue. To have it run every half
hour instead, you would use:

execute dbms_job.interval(<job number>,'SYSDATE+(1/48)');

11.2.7 DBMS_JOB.BROKEN():
-------------------------
A job is labeled as either broken or not broken. Oracle does not attempt to run
broken jobs.

Example:
BEGIN
 DBMS_JOB.BROKEN(10, TRUE);
END;
/

Example:
The following example marks job 14144 as not broken and sets its next execution
date to the following Monday:

BEGIN
 DBMS_JOB.BROKEN(14144, FALSE, NEXT_DAY(SYSDATE, 'MONDAY'));
END;
/

Example:
exec DBMS_JOB.BROKEN( V_JOB_ID, true);

Example:
select JOB into V_JOB_ID from DBA_JOBS where WHAT like '%SONERA%';
DBMS_SNAPSHOT.REFRESH( 'SONERA', 'C');
DBMS_JOB.BROKEN( V_JOB_ID, false);

Fix broken jobs:
----------------
/* Filename on companion disk: job5.sql */
CREATE OR REPLACE PROCEDURE job_fixer
AS
/*
|| calls DBMS_JOB.BROKEN to try and set
|| any broken jobs to unbroken
*/
/* cursor selects user's broken jobs */
CURSOR broken_jobs_cur
IS
SELECT job
FROM user_jobs
WHERE broken = 'Y'; BEGIN FOR job_rec IN broken_jobs_cur LOOP DBMS_JOB.BROKEN(job_rec.job,FALSE); END LOOP; END job_fixer; 11.2.8 DBMS_JOB.RUN(): ---------------------BEGIN DBMS_JOB.RUN(14144); END; / 11.3 DBMS_SCHEDULER: -------------------Used in Oracle 10g. BEGIN DBMS_SCHEDULER.create_job ( job_name => 'test_self_contained_job', job_type => 'PLSQL_BLOCK', job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''JOHN''); END;', start_date => SYSTIMESTAMP, repeat_interval => 'freq=hourly; byminute=0', end_date => NULL, enabled => TRUE, comments => 'Job created using the CREATE JOB procedure.'); End; / BEGIN DBMS_SCHEDULER.run_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB', use_current_session => FALSE); END; / BEGIN DBMS_SCHEDULER.stop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB'); END; / Jobs can be deleted using the DROP_JOB procedure: BEGIN DBMS_SCHEDULER.drop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB'); DBMS_SCHEDULER.drop_job (job_name => 'test_self_contained_job'); END; /
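To verify what the 10g Scheduler created and how the runs went, the Scheduler
dictionary views can be queried; a minimal sketch:

SELECT job_name, enabled, state, next_run_date
FROM dba_scheduler_jobs;

SELECT job_name, status, actual_start_date
FROM dba_scheduler_job_run_details;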
==================
12. Net8 / SQLNet:
==================

In, for example, SQL*Plus you enter:

-----------------
Username: system
Password: manager
Host String: XXX
-----------------

NET8 on the client looks in TNSNAMES.ORA for the first entry XXX=
(description.. protocol..host...port.. SERVICE_NAME=Y)

XXX is really an alias and thus arbitrary, although it will usually correspond
to the instance name or database name you want to connect to. But it could even
be "pipo".
If XXX is not found, the client reports:
ORA-12154 TNS: could not resolve SERVICE NAME

Next, NET8 uses the connect descriptor Y to contact the listener on the server
that listens for Y.
If Y is not what the listener expects, the listener reports to the client:
TNS: listener could not resolve SERVICE_NAME in connect descriptor

12.1 sqlnet.ora example:
------------------------
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES)

12.2 tnsnames.ora examples:
---------------------------
12.4: CONNECT TIME FAILOVER:
----------------------------
The connect-time failover feature allows clients to connect to another listener
if the initial connection to the first listener fails. Multiple listener
locations are specified in the client's tnsnames.ora file. If a connection
attempt to the first listener fails, a connection request to the next listener
in the list is attempted. This feature increases the availability of the Oracle
service should a listener location be unavailable.
Here is an example of what a tnsnames.ora file looks like with connect-time
failover enabled:

ORCL=
(DESCRIPTION=
 (ADDRESS_LIST=
  (ADDRESS=(PROTOCOL=TCP)(HOST=DBPROD)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=DBFAIL)(PORT=1521))
 )
 (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
 )
)

12.5: CLIENT LOAD BALANCING:
----------------------------
Client Load Balancing is a feature that allows clients to randomly select from
a list of listeners. Oracle Net moves through the list of listeners and
balances the load of connection requests across the available listeners.
Here is an example of the tnsnames.ora entry that allows for load balancing:

ORCL=
(DESCRIPTION=
 (LOAD_BALANCE=ON)
 (ADDRESS_LIST=
  (ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1522))
  (ADDRESS=(PROTOCOL=TCP)(HOST=MWEISHAN-DELL)(PORT=1521))
 )
 (CONNECT_DATA=(SERVICE_NAME=PROD)(SERVER=DEDICATED)
 )
)

Notice the additional parameter LOAD_BALANCE. This enables load balancing
between the two listener locations specified.

12.6: ORACLE SHARED SERVER:
---------------------------
With the dedicated server, each server process has a PGA, outside the SGA.
When Shared Server is used, the user program areas are kept in the SGA, in the
large pool.
With a few init.ora parameters, you can configure Shared Server.

1. DISPATCHERS:
The DISPATCHERS parameter defines the number of dispatchers that should start
when the instance is started. For example, if you want to configure 3 TCP/IP
dispatchers and two IPC dispatchers, you set the parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=3)(PRO=IPC)(DIS=2)"

For example, if you have 500 concurrent TCP/IP connections, and you want each
dispatcher to manage 50 concurrent connections, you need 10 dispatchers.
You set your DISPATCHERS parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=10)"

2. SHARED_SERVERS:
The SHARED_SERVERS parameter specifies the minimum number of shared servers to
start and retain when the Oracle instance is started.

View information about dispatchers and shared servers with the following
commands and queries:

lsnrctl services

SELECT name, status, messages, idle, busy, bytes, breaks
FROM v$dispatcher;

12.7: Keeping Oracle connections alive through a Firewall:
----------------------------------------------------------
Keep-alive packets are implemented with SQLNET.EXPIRE_TIME (Dead Connection
Detection) in the server-side sqlnet.ora. SQLNET.INBOUND_CONNECT_TIMEOUT, by
contrast, only limits how long a client may take to complete a connection
attempt.
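A minimal server-side sqlnet.ora sketch (the 10-minute value is only an
assumption; pick something below the firewall's idle timeout):

# sqlnet.ora on the database server:
# send a probe packet to each session every 10 minutes; this detects dead
# connections and also keeps idle connections from being dropped by a firewall
SQLNET.EXPIRE_TIME = 10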
Notes: ======= Note 1: -------
Doc ID:             Note:274130.1
Subject:            SHARED SERVER CONFIGURATION
Type:               BULLETIN
Status:             PUBLISHED
Content Type:       TEXT/PLAIN
Creation Date:      25-MAY-2004
Last Revision Date: 24-JUN-2004

PURPOSE
-------
This article discusses the configuration of shared servers on a 9i database.

SHARED SERVER CONFIGURATION:
============================
1. Add the parameter SHARED_SERVERS in the init.ora.
SHARED_SERVERS specifies the number of server processes that you want to create
when an instance is started up. If system load decreases, this minimum number
of servers is maintained. Therefore, you should take care not to set
SHARED_SERVERS too high at system startup.

Parameter type:  Integer
Parameter class: Dynamic: ALTER SYSTEM
2. Add the parameter DISPATCHERS in the init.ora.
DISPATCHERS configures dispatcher processes in the shared server architecture.

USAGE:
-----
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"

3. Save the init.ora file.

4. Change the connect string in tnsnames.ora from

ORACLE.IDC.ORACLE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = oracle)
    )
  )

to

ORACLE.IDC.ORACLE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xyzac)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = SHARED)
      (SERVICE_NAME = oracle)
    )
  )

That is, change to SERVER=SHARED.

5. Shutdown and startup the database.

6. Make a new connection to the database other than SYSDBA.
(NOTE: SYSDBA will always acquire a dedicated connection by default.)

7. Check whether the connection is made through a shared server:

SQL> SELECT server FROM v$session;

SERVER
---------
DEDICATED
DEDICATED
DEDICATED
SHARED
DEDICATED
Other parameters affected by shared server that may require adjustment: LARGE_POOL_SIZE: =============== Specifies the size in bytes of the large pool allocation heap. Shared server may force the default value to be set too high, causing performance problems or problems starting the database. SESSIONS: ======== Specifies the maximum number of sessions that can be created in the system. May need to be adjusted for shared server.
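To get a quick impression of how many connections actually come in shared
versus dedicated, a simple count on v$session can be used:

SELECT server, COUNT(*)
FROM v$session
GROUP BY server;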
12.8 Password for the listener:
-------------------------------
Note 1:
LSNRCTL> set password <password>
where <password> is the password you want to use. To change a password, use
"Change_Password".
You can also designate a password when you configure the listener with the Net8
Assistant. These passwords are stored in the listener.ora file, and although
they will not show in the Net8 Assistant, they are readable in the listener.ora
file.

Note 2:
The password can be set either by specifying it through the command
CHANGE_PASSWORD, or through a parameter in the listener.ora file. We saw how to
do that through the CHANGE_PASSWORD command earlier. If the password is changed
this way, it should not be specified in the listener.ora file. The password is
not displayed anywhere.
When supplying the password in the listener control utility, you must supply it
at the Password: prompt. You cannot specify the password in one line, as shown
below:

LSNRCTL> set password t0p53cr3t
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
TNS-01169: The listener has not recognized the password
LSNRCTL>

Note 3:
A more correct method would be to password protect the listener functions. See
the Net8 admin guide for details, but in short you can:

LSNRCTL> change_password
Old password: <just hit enter if you don't have one yet> New password: Reenter new password: Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521))) Password changed for LISTENER The command completed successfully LSNRCTL> set password Password: The command completed successfully LSNRCTL> save_config Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=slackdog)(PORT=1521))) Saved LISTENER configuration parameters. Listener Parameter File /d01/home/oracle8i/network/admin/listener.ora Old Parameter File /d01/home/oracle8i/network/admin/listener.bak The command completed successfully LSNRCTL> Now, you need to use a password to do various operations (such as STOP) but not others (such as STATUS)
=============================================
13. Data dictionary queries Rollback segments:
=============================================

13.1 Name, location and status of rollback segments:
-----------------------------------------------------
SELECT substr(segment_name, 1, 10), substr(tablespace_name, 1, 20), status,
INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM DBA_ROLLBACK_SEGS;

13.2 Impression of the number of active transactions per rollback segment:
---------------------------------------------------------------------------
number of active transactions: V$ROLLSTAT
name of the rollback segment:  V$ROLLNAME

SELECT n.name, s.xacts
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

(usn = undo segment number)

13.3 Size, name, extents, bytes of the rollback segments:
----------------------------------------------------------
SELECT substr(segment_name, 1, 15), bytes/1024/1024 Size_in_MB, blocks, extents,
substr(tablespace_name, 1, 15)
FROM DBA_SEGMENTS
WHERE segment_type='ROLLBACK';

SELECT n.name, s.extents, s.rssize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;
Create Tablespace RBS datafile '/db1/oradata/oem/rbs.dbf' SIZE 200M
AUTOEXTEND ON NEXT 20M MAXSIZE 500M
LOGGING
DEFAULT STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 2
MAXEXTENTS 100
PCTINCREASE 0 )
ONLINE
PERMANENT;

13.4 The OPTIMAL parameter:
---------------------------
SELECT n.name, s.optsize
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.5 Writes to rollback segments:
---------------------------------
Run the query at the start of the measurement and again at the end, and look at
the difference:

SELECT n.name, s.writes
FROM V$ROLLNAME n, V$ROLLSTAT s
WHERE n.usn=s.usn;

13.6 Who and which processes use the rollback segments:
--------------------------------------------------------
Query 1: query on v$lock, v$session, v$rollname

column rr heading 'RB Segment' format a15
column us heading 'Username'   format a10
column os heading 'OS user'    format a10
column te heading 'Terminal'   format a15
SELECT R.name rr, nvl(S.username, 'no transaction') us, S.Osuser os, S.Terminal te
FROM V$LOCK L, V$SESSION S, V$ROLLNAME R
WHERE L.Sid=S.Sid(+)
AND trunc(L.Id1/65536)=R.usn
AND L.Type='TX'
AND L.Lmode=6
ORDER BY R.name
/

Query 2:

SELECT r.name "RBS", s.sid, s.serial#, s.username "USER", t.status, t.cr_get,
       t.phy_io, t.used_ublk, t.noundo,
       substr(s.program, 1, 78) "COMMAND"
FROM   sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE  t.addr = s.taddr
AND    t.xidusn = r.usn
ORDER  BY t.cr_get, t.phy_io
/
13.7 Determining the minimum number of rollback segments:
----------------------------------------------------------
Determine in the init.ora, via "show parameter transactions":

transactions = a                        (max no of transactions, say 100)
transactions_per_rollback_segment = b   (allowed no of concurrent transactions per rbs, say 10)

minimum = a/b                           (100/10 = 10)
13.8 Determining the minimum size of rollback segments:
--------------------------------------------------------
lts = largest transaction size (normal production, not the occasional batch load)
min_size = minimum size of a rollback segment

min_size = lts * 100 / (100 - (40 {%free} + 15 {iaiu} + 5 {header}))
min_size = lts * 1.67

Suppose lts = 700K; then the starting value for the rollback segment is about 1400K.

=============================================================
14. Data dictionary queries regarding security, permissions:
=============================================================
14.1 user information in datadictionary --------------------------------------SELECT username, user_id, password FROM DBA_USERS WHERE username='Kees'; 14.2 default tablespace, account_status of users -----------------------------------------------SELECT username, default_tablespace, account_status FROM DBA_USERS; 14.3 tablespace quotas of users ------------------------------SELECT tablespace_name, bytes, max_bytes, blocks, max_blocks FROM DBA_TS_QUOTAS WHERE username='CHARLIE'; 14.4 Systeem rechten van een user opvragen: DBA_SYS_PRIVS ---------------------------------------------------------
SELECT substr(grantee, 1, 15), substr(privilege, 1, 40), admin_option FROM DBA_SYS_PRIVS WHERE grantee='CHARLIE'; SELECT * FROM dba_sys_privs WHERE grantee='Kees'; 14.5 Invalid objects in DBA_OBJECTS: -----------------------------------SELECT substr(owner, 1, 10), substr(object_name, 1, 40), substr(object_type, 1, 40), status FROM DBA_OBJECTS WHERE status='INVALID'; 14.6 session information -----------------------SELECT sid, serial#, substr(username, 1, 10), substr(osuser, 1, 10), substr(schemaname, 1, 10), substr(program, 1, 15), substr(module, 1, 15), status, logon_time, substr(terminal, 1, 15), substr(machine, 1, 15) FROM V$SESSION; 14.7 kill a session ------------------alter system kill session 'SID, SERIAL#'
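For example, first look up the SID and SERIAL# in V$SESSION, then kill the
session (the values 12 and 3456 below are hypothetical):

SELECT sid, serial# FROM v$session WHERE username='CHARLIE';

ALTER SYSTEM KILL SESSION '12,3456';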
=========================
15. init.ora parameters:
=========================

SHARED_POOL_SIZE: in bytes or K or M
SHARED_POOL_SIZE specifies (in bytes) the size of the shared pool. The shared
pool contains shared cursors, stored procedures, control structures, and other
structures. If you set PARALLEL_AUTOMATIC_TUNING to false, Oracle also
allocates parallel execution message buffers from the shared pool.
Larger values improve performance in multi-user systems. Smaller values use
less memory.
You can monitor utilization of the shared pool by querying the view V$SGASTAT.

SHARED_POOL_RESERVED_SIZE:
The parameter was introduced in Oracle 7.1.5 and provides a means of reserving
a portion of the shared pool for large memory allocations. The reserved area
comes out of the shared pool itself.
From a practical point of view one should set SHARED_POOL_RESERVED_SIZE to
about 10% of SHARED_POOL_SIZE, unless either the shared pool is very large or
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value.
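A minimal init.ora sketch following the 10% rule of thumb above (the sizes
themselves are only an assumption for a small instance):

shared_pool_size          = 104857600   # 100M
shared_pool_reserved_size = 10485760    # 10M, i.e. 10% of the shared pool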
15.3 init.ora and jobs:
-----------------------
JOB_QUEUE_PROCESSES=1    number of SNP processes (SNP0, SNP1, ..), max 36, used for replication and job queues
JOB_QUEUE_INTERVAL=60    check interval in seconds

15.4 instance name, sid:
------------------------
db_name       = CC1
global_names  = TRUE
instance_name = CC1
db_domain     = antapex.net
15.5 Other parameters:
----------------------
OS_AUTHENT_PREFIX         = ""
REMOTE_OS_AUTHENTICATION  = TRUE or FALSE   (whether OS authentication is also possible over the network)
REMOTE_LOGIN_PASSWORDFILE = NONE or EXCLUSIVE

distributed_transactions  = 0 or >0         (starts the RECO process)
aq_tm_processes           =                 (advanced queuing, message queues)

mts_servers               =                 (number of shared server processes in multithreaded server)
mts_max_servers           =

audit_file_dest           = /dbs01/app/oracle/admin/AMI_PRD/adump
background_dump_dest      = /dbs01/app/oracle/admin/AMI_PRD/bdump
user_dump_dest            = /dbs01/app/oracle/admin/AMI_PRD/udump
core_dump_dest            = /dbs01/app/oracle/admin/AMI_PRD/cdump
resource_limit            = true            (specifies whether resource limits in profiles are in effect)
license_max_sessions      =                 (maximum number of concurrent user sessions)
license_sessions_warning  =                 (warning level for concurrent user sessions, written to the alert log)
license_max_users         =                 (maximum number of users that can be created in the database)
Reference: init.ora parameters and their descriptions:

O7_DICTIONARY_ACCESSIBILITY  Version 7 Dictionary Accessibility support [TRUE | FALSE]
active_instance_count  Number of active instances in the cluster database [NUMBER]
aq_tm_processes  Number of AQ Time Managers to start [NUMBER]
archive_lag_target  Maximum number of seconds of redo the standby could lose [NUMBER]
asm_diskgroups  Disk groups to mount automatically [CHAR]
asm_diskstring  Disk set locations for discovery [CHAR]
asm_power_limit  Number of processes for disk rebalancing [NUMBER]
audit_file_dest  Directory in which auditing files are to reside
audit_sys_operations  Enable sys auditing [TRUE | FALSE]
audit_trail  Enable system auditing [NONE | DB | DB_EXTENDED | OS]
background_core_dump  Core size for background processes [PARTIAL | FULL]
background_dump_dest  Detached process dump directory [file_path]
backup_tape_io_slaves  BACKUP tape I/O slaves [TRUE | FALSE]
bitmap_merge_area_size  Maximum memory allowed for BITMAP MERGE [NUMBER]
blank_trimming  Blank trimming semantics parameter [TRUE | FALSE]
buffer_pool_keep  Number of database blocks/latches in keep buffer pool [CHAR: (buffers:n, latches:m)]
buffer_pool_recycle  Number of database blocks/latches in recycle buffer pool [CHAR: (buffers:n, latches:m)]
circuits  Max number of virtual circuits [NUMBER]
cluster_database  If TRUE startup in cluster database mode [TRUE | FALSE]
cluster_database_instances  Number of instances to use for sizing cluster db SGA structures [NUMBER]
cluster_interconnects  Interconnects for RAC use [CHAR]
commit_point_strength  Bias this node has toward not preparing in a two-phase commit [NUMBER (0-255)]
compatible  Database will be completely compatible with this software version [CHAR: 9.2.0.0.0]
control_file_record_keep_time  Control file record keep time in days [NUMBER]
control_files  Control file names list [file_path,file_path..]
core_dump_dest  Core dump directory [file_path]
cpu_count  Initial number of cpu's for this instance
create_bitmap_area_size  Size of create bitmap buffer for bitmap index [INTEGER]
create_stored_outlines  Create stored outlines for DML statements [TRUE | FALSE]
cursor_sharing  Cursor sharing mode [EXACT | SIMILAR | FORCE]
cursor_space_for_time  Use more memory in order to get faster execution [TRUE | FALSE]
db_16k_cache_size  Size of cache for 16K buffers [bytes]
db_2k_cache_size  Size of cache for 2K buffers [bytes]
db_32k_cache_size  Size of cache for 32K buffers [bytes]
db_4k_cache_size  Size of cache for 4K buffers [bytes]
db_8k_cache_size  Size of cache for 8K buffers [bytes]
db_block_buffers  Number of database blocks to cache in memory [bytes: 8M or NUMBER of blocks (Ora7)]
db_block_checking  Data and index block checking [TRUE | FALSE]
db_block_checksum  Store checksum in db blocks and check during reads [TRUE | FALSE]
db_block_size  Size of database block [bytes]
db_cache_advice  Buffer cache sizing advisory [internal use only]
db_cache_size  Size of DEFAULT buffer pool for standard block size buffers [bytes]
db_create_file_dest  Default database location ['Path_to_directory']
db_create_online_log_dest_n  Online log/controlfile destination (where n=1-5)
db_domain  Directory part of global database name stored with CREATE DATABASE [CHAR]
db_file_multiblock_read_count  Db blocks to be read each IO [NUMBER]
db_file_name_convert  Datafile name convert patterns and strings for standby/clone db
db_files  Max allowable # db files [NUMBER]
db_flashback_retention_target  Maximum Flashback Database log retention time in minutes
db_keep_cache_size  Size of KEEP buffer pool for standard block size buffers [bytes]
db_name  Database name specified in CREATE DATABASE [CHAR]
db_recovery_file_dest  Default database recovery file location [CHAR]
db_recovery_file_dest_size  Database recovery files size limit [bytes]
db_recycle_cache_size  Size of RECYCLE buffer pool for standard block size buffers [bytes]
db_unique_name  Database Unique Name [CHAR]
db_writer_processes  Number of background database writer processes to start [NUMBER]
dblink_encrypt_login  Enforce password for distributed login always be encrypted [TRUE | FALSE]
dbwr_io_slaves  DBWR I/O slaves [NUMBER]
ddl_wait_for_locks  Disable NOWAIT DML lock acquisitions [TRUE | FALSE]
dg_broker_config_file1  Data guard broker configuration file #1 ['Path']
dg_broker_config_file2  Data guard broker configuration file #2 ['Path']
dg_broker_start  Start Data Guard broker framework (DMON process) [TRUE | FALSE]
disk_asynch_io  Use asynch I/O for random access devices [TRUE | FALSE]
dispatchers (MTS_dispatchers in Ora 8)  Specifications of dispatchers [CHAR]
distributed_lock_timeout  Number of seconds a distributed transaction waits for a lock [Internal]
dml_locks  DML locks - one for each table modified in a transaction [NUMBER]
drs_start  Start DG Broker monitor (DMON process) [TRUE | FALSE]
enqueue_resources  Resources for enqueues [NUMBER]
event  Debug event control - default null string [CHAR]
fal_client  FAL client [CHAR]
fal_server  FAL server list [CHAR]
fast_start_io_target  Upper bound on recovery reads [NUMBER]
fast_start_mttr_target  MTTR target of forward crash recovery in seconds [NUMBER]
fast_start_parallel_rollback  Max number of parallel recovery slaves that may be used [LOW | HIGH | FALSE]
file_mapping  Enable file mapping [TRUE | FALSE]
fileio_network_adapters  Network Adapters for File I/O [CHAR]
filesystemio_options  IO operations on filesystem files [Internal]
fixed_date  Fix SYSDATE value for debugging [NONE or a date value]
gc_files_to_locks  RAC/OPS lock granularity: number of global cache locks per file (DFS) [CHAR]
gcs_server_processes  Number of background gcs server processes to start [NUMBER]
global_context_pool_size  Global Application Context Pool Size in Bytes [bytes]
global_names  Enforce that database links have same name as remote database [TRUE | FALSE]
hash_area_size  Size of in-memory hash work area [bytes]
hash_join_enabled  Enable/disable hash join (CBO) [TRUE | FALSE]
hi_shared_memory_address  SGA starting address (high order 32-bits on 64-bit platforms) [NUMBER]
hs_autoregister  Enable automatic server DD updates in HS agent self-registration [TRUE | FALSE]
ifile  Include file in init.ora ['path_to_file']
instance_groups  List of instance group names [CHAR]
instance_name  Instance name supported by the instance [CHAR]
instance_number  Instance number [NUMBER]
instance_type  Type of instance to be executed [RDBMS | ASM (Automated Storage Management)]
java_max_sessionspace_size  Max allowed size in bytes of a Java sessionspace [bytes]
java_pool_size  Size in bytes of the Java pool [bytes]
java_soft_sessionspace_limit  Warning limit on size in bytes of a Java sessionspace [NUMBER]
job_queue_processes  Number of job queue slave processes [NUMBER]
large_pool_size  Size in bytes of the large allocation pool [bytes]
ldap_directory_access  RDBMS's LDAP access option [NONE | PASSWORD | SSL]
license_max_sessions  Maximum number of non-system user sessions (concurrent licensing) [NUMBER]
license_max_users  Maximum number of named users that can be created (named user licensing) [NUMBER]
license_sessions_warning  Warning level for number of non-system user sessions [NUMBER]
local_listener  Define which listeners instances register with [CHAR]
lock_name_space  Used for generating lock names; assigns each a unique name space [CHAR]
lock_sga  Lock entire SGA in physical memory [Internal]
log_archive_config  Log archive config [SEND|NOSEND] [RECEIVE|NORECEIVE] [DG_CONFIG]
log_archive_dest  Archive logs destination ['path_to_directory']
log_archive_dest_n  Archive logging parameters (n=1-10), Enterprise Edition [CHAR]
log_archive_dest_state_n  Archive logging parameter status (n=1-10), Enterprise Edition [CHAR]
log_archive_duplex_dest  Duplex archival destination ['path_to_directory']
log_archive_format  Archive log filename format [CHAR: "MyApp%S.ARC"]
log_archive_local_first  Establish EXPEDITE attribute default value [TRUE | FALSE]
log_archive_max_processes  Maximum number of active ARCH processes [NUMBER]
log_archive_min_succeed_dest  Minimum number of archive destinations that must succeed [NUMBER]
log_archive_start  Start archival process on SGA initialization
log_archive_trace  Archive log tracing level [NUMBER]
log_buffer  Redo circular buffer size [bytes]
log_checkpoint_interval  Checkpoint threshold, # redo blocks [NUMBER]
log_checkpoint_timeout  Checkpoint threshold, maximum time interval between checkpoints in seconds [NUMBER]
log_checkpoints_to_alert  Log checkpoint begin/end to alert file [TRUE | FALSE]
log_file_name_convert  Logfile name convert patterns and strings for standby/clone db
log_parallelism  Number of log buffer strands [NUMBER]
logmnr_max_persistent_sessions  Maximum number of threads to mine [NUMBER]
max_commit_propagation_delay  Max age of new snapshot in .01 seconds [NUMBER]
max_dispatchers  Max number of dispatchers [NUMBER]
max_dump_file_size  Maximum size (blocks) of dump file [UNLIMITED or NUMBER]
max_enabled_roles  Max number of roles a user can have enabled [NUMBER]
max_rollback_segments  Max number of rollback segments in SGA cache [NUMBER]
max_shared_servers  Max number of shared servers [NUMBER]
mts_circuits  Max number of circuits [NUMBER]
mts_dispatchers  Specifications of dispatchers [CHAR]
mts_listener_address  Address(es) of network listener [CHAR]
mts_max_dispatchers  Max number of dispatchers [NUMBER]
mts_max_servers  Max number of shared servers [NUMBER]
mts_multiple_listeners  Are multiple listeners enabled? [TRUE | FALSE]
mts_servers  Number of shared servers to start up [NUMBER]
mts_service  Service supported by dispatchers [CHAR]
mts_sessions  Max number of shared server sessions [NUMBER]
nls_calendar  NLS calendar system name (Default=GREGORIAN)
nls_comp  NLS comparison, Enterprise Edition [BINARY | ANSI]
nls_currency  NLS local currency symbol [CHAR]
nls_date_format  NLS Oracle date format [CHAR]
nls_date_language  NLS date language name (Default=AMERICAN) [CHAR]
nls_dual_currency  Dual currency symbol [CHAR]
nls_iso_currency  NLS ISO currency territory name; overrides the default set by NLS_TERRITORY [CHAR]
nls_language  NLS language name (session default) [CHAR]
nls_length_semantics  Create columns using byte or char semantics by default [BYTE | CHAR]
nls_nchar_conv_excp  NLS raise an exception instead of allowing implicit conversion [CHAR]
nls_numeric_characters  NLS numeric characters [CHAR]
nls_sort  Case-sensitive or insensitive sort [Language]; may be BINARY, BINARY_CI, BINARY_AI, GERMAN, GERMAN_CI, etc.
nls_territory  NLS territory name (country settings) [CHAR]
nls_time_format  Time format [CHAR]
nls_time_tz_format  Time with timezone format [CHAR]
nls_timestamp_format  Time stamp format [CHAR]
nls_timestamp_tz_format  Timestamp with timezone format [CHAR]
object_cache_max_size_percent  Percentage of maximum size over optimal of the user session's object cache [NUMBER]
object_cache_optimal_size  Optimal size of the user session's object cache in bytes [bytes]
olap_page_pool_size  Size of the olap page pool in bytes [bytes]
open_cursors  Max # cursors per session [NUMBER]
open_links  Max # open links per session [NUMBER]
open_links_per_instance  Max # open links per instance [NUMBER]
optimizer_dynamic_sampling  Optimizer dynamic sampling [NUMBER]
optimizer_features_enable  Optimizer plan compatibility (oracle version e.g. 8.1.7) [CHAR]
optimizer_index_caching  Optimizer index caching percent [NUMBER]
optimizer_index_cost_adj  Optimizer index cost adjustment [NUMBER]
optimizer_max_permutations  Optimizer maximum join permutations per query block [NUMBER]
optimizer_mode  Optimizer mode [RULE | CHOOSE | FIRST_ROWS | ALL_ROWS]
oracle_trace_collection_name  Oracle TRACE default collection name
oracle_trace_collection_path  Oracle TRACE collection path
oracle_trace_collection_size  Oracle TRACE collection file max size
oracle_trace_enable  Oracle TRACE enabled/disabled [TRUE | FALSE]
oracle_trace_facility_name  Oracle TRACE default facility name
oracle_trace_facility_path  Oracle TRACE facility path
os_authent_prefix  Prefix for auto-logon accounts [CHAR]
os_roles  Retrieve roles from the operating system [TRUE | FALSE]
parallel_adaptive_multi_user  Enable adaptive setting of degree for multiple user streams [TRUE | FALSE]
parallel_automatic_tuning  Enable intelligent defaults for parallel execution parameters [TRUE | FALSE]
parallel_execution_message_size  Message buffer size for parallel execution [bytes]
parallel_instance_group  Instance group to use for all parallel operations [CHAR]
parallel_max_servers  Maximum parallel query servers per instance [NUMBER]
parallel_min_percent  Minimum percent of threads required for parallel query [NUMBER]
parallel_min_servers  Minimum parallel query servers per instance [NUMBER]
parallel_server  If TRUE startup in parallel server mode [TRUE | FALSE]
parallel_server_instances  Number of instances to use for sizing OPS SGA structures [NUMBER]
parallel_threads_per_cpu  Number of parallel execution threads per CPU [NUMBER]
partition_view_enabled  Enable/disable partitioned views [TRUE | FALSE]
pga_aggregate_target  Target size for the aggregate PGA memory consumed by the instance [bytes]
plsql_code_type  PL/SQL code-type [INTERPRETED | NATIVE]
plsql_compiler_flags  PL/SQL compiler flags [CHAR]
plsql_debug  PL/SQL debug [TRUE | FALSE]
plsql_native_c_compiler  plsql native C compiler [CHAR]
plsql_native_library_dir  plsql native library dir ['Path_to_directory']
plsql_native_library_subdir_count  plsql native library number of subdirectories [NUMBER]
plsql_native_linker  plsql native linker [CHAR]
plsql_native_make_file_name  plsql native compilation make file [CHAR]
plsql_native_make_utility  plsql native compilation make utility [CHAR]
plsql_optimize_level  PL/SQL optimize level [NUMBER]
plsql_v2_compatibility  PL/SQL version 2.x compatibility flag [TRUE | FALSE]
plsql_warnings  PL/SQL compiler warnings settings [CHAR]; see also DBMS_WARNING and DBA_PLSQL_OBJECT_SETTINGS
pre_page_sga  Pre-page sga for process [TRUE | FALSE]
processes  User processes [NUMBER]
query_rewrite_enabled  Allow rewrite of queries using materialized views if enabled [FORCE | TRUE | FALSE]
query_rewrite_integrity  Perform rewrite using materialized views with desired integrity [STALE_TOLERATED | TRUSTED | ENFORCED]
rdbms_server_dn  RDBMS's Distinguished Name [CHAR]
read_only_open_delayed  If TRUE delay opening of read only files until first access [TRUE | FALSE]
recovery_parallelism  Number of server processes to use for parallel recovery [NUMBER]
remote_archive_enable  Remote archival enable setting [RECEIVE[,SEND] | FALSE | TRUE]
remote_dependencies_mode  Remote-procedure-call dependencies mode parameter [TIMESTAMP | SIGNATURE]
remote_listener  Remote listener [CHAR]
remote_login_passwordfile  Use a password file [NONE | SHARED | EXCLUSIVE]
remote_os_authent  Allow non-secure remote clients to use auto-logon accounts [TRUE | FALSE]
remote_os_roles  Allow non-secure remote clients to use os roles [TRUE | FALSE]
replication_dependency_tracking  Tracking dependency for Replication parallel propagation [TRUE | FALSE]
resource_limit  Master switch for resource limit [TRUE | FALSE]
resource_manager_plan  Resource mgr top plan [Plan_Name]
resumable_timeout  Set resumable_timeout, seconds [NUMBER]
rollback_segments  Undo segment list [CHAR]
row_locking  Row-locking [ALWAYS | DEFAULT | INTENT] (Default=always)
serial_reuse  Reuse the frame segments [DISABLE | SELECT | DML | PLSQL | ALL | NULL]
serializable  Serializable [Internal]
service_names  Service names supported by the instance [CHAR]
session_cached_cursors  Number of cursors to save in the session cursor cache [NUMBER]
session_max_open_files  Maximum number of open files allowed per session [NUMBER]
sessions  User and system sessions [NUMBER]
sga_max_size  Max total SGA size [bytes]
sga_target  Target size of SGA [bytes]
shadow_core_dump  Core Size for Shadow Processes [PARTIAL | FULL | NONE]
shared_memory_address  SGA starting address (low order 32-bits on 64-bit platforms) [NUMBER]
shared_pool_reserved_size  Size in bytes of reserved area of shared pool [bytes]
shared_pool_size  Size in bytes of shared pool [bytes]
shared_server_sessions  Max number of shared server sessions [NUMBER]
shared_servers  Number of shared servers to start up [NUMBER]
skip_unusable_indexes  Skip unusable indexes if set to true [TRUE | FALSE]
smtp_out_server  utl_smtp server and port configuration parameter [server_clause]
sort_area_retained_size  Size of in-memory sort work area retained between fetch calls [bytes]
sort_area_size  Size of in-memory sort work area [bytes]
sp_name  Service Provider Name [CHAR]
spfile  Server parameter file [CHAR]
sql92_security  Require select privilege for searched update/delete [TRUE | FALSE]
sql_trace  Enable SQL trace [TRUE | FALSE]
sql_version  Sql language version parameter for compatibility issues [CHAR]
sqltune_category  Category qualifier for applying hintsets [CHAR]
standby_archive_dest  Standby database archivelog destination text string ['Path_to_directory']
standby_file_management  If auto then files are created/dropped automatically on standby [MANUAL | AUTO]
star_transformation_enabled  Enable the use of star transformation [TRUE | FALSE | DISABLE_TEMP_TABLE]
statistics_level  Statistics level [ALL | TYPICAL | BASIC]
streams_pool_size  Size in bytes of the streams pool [bytes]
tape_asynch_io  Use asynch I/O requests for tape devices [TRUE | FALSE]
thread  Redo thread to mount [NUMBER]
timed_os_statistics  Internal os statistic gathering interval in seconds [NUMBER]
timed_statistics  Maintain internal timing statistics [TRUE | FALSE]
trace_enabled  Enable KST tracing (Internal parameter) [TRUE | FALSE]
tracefile_identifier  Trace file custom identifier [CHAR]
transaction_auditing  Transaction auditing records generated in the redo log [TRUE | FALSE]
transactions  Max. number of concurrent active transactions [NUMBER]
transactions_per_rollback_segment  Number of active transactions per rollback segment [NUMBER]
undo_management  Instance runs in SMU mode if AUTO, else in RBU mode [MANUAL | AUTO]
undo_retention  Undo retention in seconds [NUMBER]
undo_suppress_errors  Suppress RBU errors in SMU mode [TRUE | FALSE]
undo_tablespace  Use or switch undo tablespace [Undo_tbsp_name]
use_indirect_data_buffers  Enable indirect data buffers (very large SGA on 32-bit platforms) [TRUE | FALSE]
user_dump_dest  User process dump directory ['Path_to_directory']
utl_file_dir  utl_file accessible directories list: utl_file_dir='Path1', 'Path2'.. or utl_file_dir='Path1' / utl_file_dir='Path2' (must be consecutive entries)
workarea_size_policy  Policy used to size SQL working areas [MANUAL | AUTO]
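The same names and descriptions can also be pulled from a running instance; a
simple sketch:

SELECT name, value, substr(description, 1, 60)
FROM v$parameter
ORDER BY name;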
db_file_multiblock_read_count: The db_file_multiblock_read_count initialization parameter determines the number of database blocks read in one I/O operation during a full table scan. The setting of this parameter can reduce the number of I/O calls required for a full table scan, thus improving performance. 15.6 9i UNDO or ROLLBACK parameters: ------------------------------------ UNDO_MANAGEMENT If AUTO, use automatic undo management mode. If MANUAL, use manual undo management mode. - UNDO_TABLESPACE A dynamic parameter specifying the name of an undo tablespace to use.
- UNDO_RETENTION
A dynamic parameter specifying the length of time to retain undo.
Default is 900 seconds.

- UNDO_SUPPRESS_ERRORS
If TRUE, suppress error messages if manual undo management SQL statements are
issued when operating in automatic undo management mode. If FALSE, issue error
message. This is a dynamic parameter.

If your database is in manual undo management mode, you can still use the
following 8i-type parameters:

- ROLLBACK_SEGMENTS
Specifies the rollback segments to be acquired at instance startup

- TRANSACTIONS
Specifies the maximum number of concurrent transactions

- TRANSACTIONS_PER_ROLLBACK_SEGMENT
Specifies the number of concurrent transactions that each rollback segment is
expected to handle

- MAX_ROLLBACK_SEGMENTS
Specifies the maximum number of rollback segments that can be online for any
instance
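Since UNDO_RETENTION and UNDO_TABLESPACE are dynamic, they can be changed on a
running 9i instance; a hedged sketch (the tablespace name UNDOTBS2 is
hypothetical):

ALTER SYSTEM SET undo_retention = 10800;
ALTER SYSTEM SET undo_tablespace = UNDOTBS2;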
15.7 Oracle 9i init file examples:
----------------------------------

Example 1:
----------
# Cache and I/O
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=20971520
# Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300
# Diagnostics and Statistics
BACKGROUND_DUMP_DEST=/vobs/oracle/admin/mynewdb/bdump
CORE_DUMP_DEST=/vobs/oracle/admin/mynewdb/cdump
TIMED_STATISTICS=TRUE
USER_DUMP_DEST=/vobs/oracle/admin/mynewdb/udump
# Control File Configuration
CONTROL_FILES=("/vobs/oracle/oradata/mynewdb/control01.ctl",
"/vobs/oracle/oradata/mynewdb/control02.ctl",
"/vobs/oracle/oradata/mynewdb/control03.ctl")
# Archive
LOG_ARCHIVE_DEST_1='LOCATION=/vobs/oracle/oradata/mynewdb/archive'
LOG_ARCHIVE_FORMAT=%t_%s.dbf
LOG_ARCHIVE_START=TRUE
# Shared Server
# Uncomment and use first DISPATCHERS parameter below when your listener is
# configured for SSL
# (listener.ora and sqlnet.ora)
# DISPATCHERS = "(PROTOCOL=TCPS)(SER=MODOSE)",
# "(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)"
DISPATCHERS="(PROTOCOL=TCP)(SER=MODOSE)",
"(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)",
(PROTOCOL=TCP)
# Miscellaneous
COMPATIBLE=9.2.0
DB_NAME=mynewdb
# Distributed, Replication and Snapshot
DB_DOMAIN=us.oracle.com
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
# Network Registration
INSTANCE_NAME=mynewdb
# Pools
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=52428800
# Processes and Sessions
PROCESSES=150
# Redo Log and Recovery
FAST_START_MTTR_TARGET=300
# Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN
# Sort, Hash Joins, Bitmap Indexes
SORT_AREA_SIZE=524288
# Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undotbs

Example 2:
----------
##############################################################################
# Copyright (c) 1991, 2001 by Oracle Corporation
##############################################################################
###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_cache_size=50331648
processes=150 ########################################### # Redo Log and Recovery ########################################### fast_start_mttr_target=300 ########################################### # Sort, Hash Joins, Bitmap Indexes ########################################### pga_aggregate_target=33554432 sort_area_size=524288 ########################################### # System Managed Undo and Rollback Segments ########################################### undo_management=AUTO undo_tablespace=UNDOTBS
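Once such a text init.ora is stable, it can be converted to a server parameter
file (available since 9i); a minimal sketch with hypothetical paths:

CREATE SPFILE='/vobs/oracle/dbs/spfilemynewdb.ora'
FROM PFILE='/vobs/oracle/admin/mynewdb/pfile/init.ora';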
==============
17. Snapshots:
==============

Snapshots allow you to replicate data based on column- and/or row-level
subsetting, while multimaster replication requires replication of the entire
table. You need a database link to implement replication.

17.1 Database link:
-------------------
In the "local" database, where the snapshot copy will reside, issue a statement
like:

CREATE PUBLIC DATABASE LINK MY_LINK
CONNECT TO HARRY IDENTIFIED BY password
USING 'DB1';

The service name "DB1" is resolved via the tnsnames.ora into a connect
descriptor, which supplies the remote server name, the protocol, and the SID of
the remote database.
Now it is possible to query, for example, the table employee in the remote
database "DB1":

SELECT * FROM employee@MY_LINK;

Two-phase commit (2PC) is also implemented:

update employee
set amount=amount-100;
update employee@my_link
set amount=amount+100;
commit; 17.2 Snapshots: --------------There are in general 2 styles of snapshots available Simple snapshot: One to one replication of a remote table to a local snapshot (=table). The refresh of the snapshot can be a complete refresh, with the refresh rate specified in the "create snapshot" command. Also a snapshot log can be used at the remote original table in order to replicate only the transaction data. Complex snapshot: If multiple remote tables are joined in order to create/refresh a local snapshot, it is a "complex snapshot". Only complete refreshes are possible. If joins or complex query clauses are used, like group by, one can only use a "complex snapshot". -> Example COMPLEX snapshot: On the local database: CREATE SNAPSHOT EMP_DEPT_COUNT pctfree 5 tablespace SNAP storage (initial 100K next 100K pctincrease 0) REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE+7 AS SELECT DEPTNO, COUNT(*) Dept_count FROM EMPLOYEE@MY_LINK GROUP BY Deptno; Because the records in this snapshot will not correspond one to one with the records in the master table (since the query contains a group by clause) this is a complex snapshot. Thus the snapshot will be completely recreated every time it is refreshed. -> Example SIMPLE snapshot: On the local database: CREATE SNAPSHOT EMP_DEPT_COUNT pctfree 5 tablespace SNAP
storage (initial 100K next 100K pctincrease 0)
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE+7
AS SELECT * FROM EMPLOYEE@MY_LINK;

In this case the refresh fast clause tells Oracle to use a snapshot log to
refresh the local snapshot. When a snapshot log is used, only the changes to
the master table are sent to the targets.
The snapshot log must be created in the master database (where the original
object is):

create snapshot log on employee
tablespace data
storage (initial 100K next 100K pctincrease 0);

Snapshot groups:
----------------
A snapshot group in a replication system maintains a partial or complete copy
of the objects at the target master group. Snapshot groups cannot span master
group boundaries.
Figure 3-7 displays the correlation between Groups A and B at the master site
and Groups A and B at the snapshot site.
Group A at the snapshot site (see Figure 3-7) contains only some of the objects
in the corresponding Group A at the master site. Group B at the snapshot site
contains all objects in Group B at the master site. Under no circumstances,
however, could Group B at the snapshot site contain objects from Group A at the
master site.
As illustrated in Figure 3-7, a snapshot group has the same name as the master
group on which the snapshot group is based. For example, a snapshot group based
on a "PERSONNEL" master group is also named "PERSONNEL."
In addition to maintaining organizational consistency between snapshot sites
and master sites, snapshot groups are required for supporting updateable
snapshots. If a snapshot does not belong to a snapshot group, then it must be a
read-only snapshot. A snapshot group is used to organize snapshots in a logical
manner.

Refresh groups:
---------------
If two or more master tables that have a PK-FK relationship are replicated, it
is possible that the two corresponding snapshots violate the referential
integrity, because of different refresh times and schedules. Related snapshots
can therefore be collected into refresh groups. The purpose of a refresh group
is to coordinate
the refresh schedules of its members. This is achieved via the DBMS_REFRESH
package. The procedures in this package are MAKE, ADD, SUBTRACT, CHANGE,
DESTROY, and REFRESH.
A refresh group can contain more than one snapshot group.
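A hedged sketch of creating such a group with DBMS_REFRESH (the group and
snapshot names are hypothetical):

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'HR_REFGROUP',
    list      => 'EMP_SNAP, DEPT_SNAP',   -- related snapshots refreshed together
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');       -- hourly
END;
/

-- refresh all members in a single consistent operation:
EXECUTE DBMS_REFRESH.REFRESH('HR_REFGROUP');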
Types of snapshots:
-------------------

Primary Key
-----------
Primary key snapshots are the default type of snapshot. They are updateable if
the snapshot was created as part of a snapshot group and "FOR UPDATE" was
specified when defining the snapshot. Changes are propagated according to the
row-level changes that have occurred, as identified by the primary key value of
the row (not the ROWID). The SQL statement for creating an updateable, primary
key snapshot might look like:

CREATE SNAPSHOT sales.customer FOR UPDATE
AS SELECT * FROM [email protected];

Primary key snapshots may contain a subquery so that you can create a
horizontally partitioned subset of data at the remote snapshot site. This
subquery may be as simple as a basic WHERE clause or as complex as a multilevel
WHERE EXISTS clause. Primary key snapshots that contain a selected class of
subqueries can still be incrementally (fast) refreshed. The following is a
subquery snapshot with a WHERE clause containing a subquery:

CREATE SNAPSHOT sales.orders REFRESH FAST
AS SELECT * FROM [email protected] o
WHERE EXISTS
  (SELECT 1 FROM [email protected] c
   WHERE o.c_id = c.c_id AND zip = 19555);

ROWID
-----
For backwards compatibility, Oracle supports ROWID snapshots in addition to the
default primary key snapshots. A ROWID snapshot is based on the physical row
identifiers (ROWIDs) of the rows in a master table. ROWID snapshots should be
used only for snapshots based on master tables from an Oracle7 database, and
should not be used when creating new snapshots based on master tables from
Oracle release 8.0 or greater databases.

CREATE SNAPSHOT sales.customer REFRESH WITH ROWID
AS SELECT * FROM [email protected];
Complex
-------
To be fast refreshed, the defining query for a snapshot must observe certain
restrictions. If you require a snapshot whose defining query is more general
and cannot observe the restrictions, then the snapshot is complex and cannot be
fast refreshed.
Specifically, a snapshot is considered complex when the defining query of the
snapshot contains:
- A CONNECT BY clause
- Clauses that do not comply with the requirements detailed in Table 3-1,
  "Restrictions for Snapshots with Subqueries"
- A set operation, such as UNION, INTERSECT, or MINUS
- In most cases, a distinct or aggregate function, although it is possible to
  have a distinct or aggregate function in the defining query and still have a
  simple snapshot

See Also: Oracle8i Data Warehousing Guide for more information about complex
materialized views. "Snapshot" is synonymous with "materialized view" in Oracle
documentation, and "materialized view" is used in the Oracle8i Data Warehousing
Guide.

The following statement is an example of a complex snapshot CREATE statement:

CREATE SNAPSHOT scott.snap_employees AS
 SELECT emp.empno, emp.ename FROM [email protected]
 UNION ALL
 SELECT new_emp.empno, new_emp.ename FROM [email protected];

Read Only
---------
Any of the previously described types of snapshots can be made read-only by
omitting the FOR UPDATE clause or disabling the equivalent checkbox in the
Replication Manager interface. Read-only snapshots use many of the same
mechanisms as updateable snapshots, except that they do not need to belong to a
snapshot group.

Snapshot Registration at a Master Site
--------------------------------------
At the master site, an Oracle database automatically registers information
about the snapshots based on its master table(s). The following sections
explain more about Oracle's snapshot registration mechanism.

DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_REFRESH_TIMES dictionary views
You can query the DBA_REGISTERED_SNAPSHOTS data dictionary view to list the following information about a remote snapshot: The owner, name, and database that contains the snapshot The snapshot's defining query Other snapshot characteristics, such as its refresh method (fast or complete) You can also query the DBA_SNAPSHOT_REFRESH_TIMES view at the master site to obtain the last refresh times for each snapshot. Administrators can use this information to monitor snapshot activity FROM master sites and coordinate changes to snapshot sites if a master table needs to be dropped, altered, or relocated. Internal Mechanisms Oracle automatically registers a snapshot at its master database when you create the snapshot, and unregisters the snapshot when you drop it. Caution: Oracle cannot guarantee the registration or unregistration of a snapshot at its master site during the creation or drop of the snapshot, respectively. If Oracle cannot successfully register a snapshot during creation, Oracle completes snapshot registration during a subsequent refresh of the snapshot. If Oracle cannot successfully unregister a snapshot when you drop the snapshot, the registration information for the snapshot persists in the master database until it is manually unregistered. Complex snapshots might not be registered. Manual registration ------------------If necessary, you can maintain registration manually. Use the REGISTER_SNAPSHOT and UNREGISTER_SNAPSHOT procedures of the DBMS_SNAPSHOT package at the master site to add, modify, or remove snapshot registration information. Snapshot Log -----------When you create a snapshot log for a master table, Oracle creates an underlying table as the snapshot log. A snapshot log holds the primary keys and/or the ROWIDs of rows that have been updated in the master table. A snapshot log can also contain filter columns to support fast refreshes of snapshots with subqueries. The name of a snapshot log's table is MLOG$_master_table_name. The snapshot log is created in the same schema as the target master table. One snapshot log can support multiple snapshots on its master table. As described in the previous section, the internal trigger adds change information to the snapshot log whenever a DML transaction has taken place on the target
master table.

There are three types of snapshot logs:
- Primary Key: The snapshot log records changes to the master table based on
  the primary key of the affected rows.
- Row ID: The snapshot log records changes to the master table based on the
  ROWID of the affected rows.
- Combination: The snapshot log records changes to the master table based on
  both the primary key and the ROWID of the affected rows. This snapshot log
  supports both primary key and ROWID snapshots, which is helpful for mixed
  environments. A combination snapshot log works in the same manner as the
  primary key and ROWID snapshot log, except that both the primary key and the
  ROWID of the affected row are recorded.

Though the difference between snapshot logs based on primary keys and ROWIDs is
small (one records affected rows using the primary key, while the other records
affected rows using the physical ROWID), the practical impact is large. Using
ROWID snapshots and snapshot logs makes reorganizing and truncating your master
tables difficult, because it prevents your ROWID snapshots from being fast
refreshed. If you reorganize or truncate your master table, your ROWID snapshot
must be COMPLETE refreshed because the ROWIDs of the master table have changed.

To delete a snapshot log, execute the DROP SNAPSHOT LOG SQL statement in
SQL*Plus. For example, the following statement deletes the snapshot log for a
table named CUSTOMERS in the SALES schema:

DROP SNAPSHOT LOG ON sales.customers;

To truncate the master table and purge its snapshot log in one statement, use:

TRUNCATE TABLE table_name PURGE SNAPSHOT LOG;
=============
18. Triggers:
=============

A trigger is a PL/SQL code block that is attached to a database table and is
executed when a triggering event occurs on that table. Triggers are implicitly
invoked by DML commands.
Triggers are stored as text and compiled at execution time; because of this, it
is wise not to include much code in them, but to call out to previously stored
procedures or packages, as this will greatly improve performance.
You may not use COMMIT, ROLLBACK and SAVEPOINT statements within trigger
blocks.
Remember that triggers may be executed thousands of times for a large update;
they can seriously affect SQL execution performance.
Triggers may be fired BEFORE or AFTER the following events: INSERT, UPDATE and
DELETE.
Triggers may be STATEMENT or ROW types.
- STATEMENT triggers fire BEFORE or AFTER the execution of the statement that
  caused the trigger to fire.
- ROW triggers fire BEFORE or AFTER any affected row is processed.

An example of a statement trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG1
BEFORE DELETE OR INSERT OR UPDATE ON JD11.BOOK
BEGIN
 IF (TO_CHAR(SYSDATE,'DY') IN ('SAT','SUN'))
 OR (TO_CHAR(SYSDATE,'hh24:mi') NOT BETWEEN '08:30' AND '18:30')
 THEN
  RAISE_APPLICATION_ERROR(-20500,'Table is secured');
 END IF;
END;

After the CREATE OR REPLACE statement is the object identifier (TRIGGER) and
the object name (MYTRIG1). This trigger specifies that before any data change
event on the BOOK table this PL/SQL code block will be compiled and executed.
The user will not be allowed to update the table outside of normal working
hours.

An example of a row trigger follows:

CREATE OR REPLACE TRIGGER MYTRIG2
AFTER DELETE OR INSERT OR UPDATE ON JD11.BOOK
FOR EACH ROW
BEGIN
 IF DELETING THEN
  INSERT INTO JD11.XBOOK (PREVISBN, TITLE, DELDATE)
  VALUES (:OLD.ISBN, :OLD.TITLE, SYSDATE);
 ELSIF INSERTING THEN
  INSERT INTO JD11.NBOOK (ISBN, TITLE, ADDDATE)
  VALUES (:NEW.ISBN, :NEW.TITLE, SYSDATE);
 ELSIF UPDATING ('ISBN') THEN
  INSERT INTO JD11.CBOOK (OLDISBN, NEWISBN, TITLE, UP_DATE)
  VALUES (:OLD.ISBN, :NEW.ISBN, :NEW.TITLE, SYSDATE);
 ELSE /* UPDATE TO ANYTHING ELSE THAN ISBN */
  INSERT INTO JD11.UBOOK (ISBN, TITLE, UP_DATE)
  VALUES (:OLD.ISBN, :NEW.TITLE, SYSDATE);
 END IF;
END;

In this case we have specified that the trigger will be executed after any data
change event on any affected row. Within the PL/SQL block body we can check
which update action is being performed for the currently affected row and take
whatever action we feel is appropriate. Note that we can specify the old and
new values of updated rows by prefixing column names with the
:OLD and :NEW qualifiers. --------------------------------------------------------------------------------
The following statement creates a trigger for the Emp_tab table:

CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE DELETE OR INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Empno > 0)
DECLARE
    sal_diff number;
BEGIN
    sal_diff := :new.sal - :old.sal;
    dbms_output.put('Old salary: ' || :old.sal);
    dbms_output.put(' New salary: ' || :new.sal);
    dbms_output.put_line(' Difference ' || sal_diff);
END;
/

If you enter a SQL statement, such as the following:

UPDATE Emp_tab SET sal = sal + 500.00 WHERE deptno = 10;

Then, the trigger fires once for each row that is updated, and it prints the
new and old salaries, and the difference.

CREATE OR REPLACE TRIGGER "SALES".HENKILOROOLI_CHECK2
AFTER INSERT OR UPDATE OR DELETE ON AH_HENKILOROOLI
BEGIN
 IF INSERTING OR DELETING THEN                                        /* FE */
  handle_delayed_triggers ('AH_HENKILOROOLI', 'HENKILOROOLI_CHECK');  /* FE */
 END IF;                                                              /* FE */
 IF INSERTING OR UPDATING OR DELETING THEN
  handle_delayed_triggers('AH_HENKILOROOLI', 'FRONTEND_FLAG');
 END IF;
END; A trigger is either a stored PL/SQL block or a PL/SQL, C, or Java procedure associated with a table, view, schema, or the database itself. Oracle automatically executes a trigger when a specified event takes place, which may be in the form of a system event or a DML statement being issued against the table. Triggers can be: -DML triggers on tables. -INSTEAD OF triggers on views. -System triggers on DATABASE or SCHEMA: With DATABASE, triggers fire for each event for all users; with SCHEMA, triggers fire for each event
for that specific user. BEFORE and AFTER Options The BEFORE or AFTER option in the CREATE TRIGGER statement specifies exactly when to fire the trigger body in relation to the triggering statement that is being run. In a CREATE TRIGGER statement, the BEFORE or AFTER option is specified just before the triggering statement. For example, the PRINT_SALARY_CHANGES trigger in the previous example is a BEFORE trigger. INSTEAD OF Triggers The INSTEAD OF option can also be used in triggers. INSTEAD OF triggers provide a transparent way of modifying views that cannot be modified directly through UPDATE, INSERT, and DELETE statements. These triggers are called INSTEAD OF triggers because, unlike other types of triggers, Oracle fires the trigger instead of executing the triggering statement. The trigger performs UPDATE, INSERT, or DELETE operations directly on the underlying tables.
CREATE TABLE Project_tab (
  Prj_level NUMBER,
  Projno    NUMBER,
  Resp_dept NUMBER);

CREATE TABLE Emp_tab (
  Empno    NUMBER NOT NULL,
  Ename    VARCHAR2(10),
  Job      VARCHAR2(9),
  Mgr      NUMBER(4),
  Hiredate DATE,
  Sal      NUMBER(7,2),
  Comm     NUMBER(7,2),
  Deptno   NUMBER(2) NOT NULL);

CREATE TABLE Dept_tab (
  Deptno    NUMBER(2) NOT NULL,
  Dname     VARCHAR2(14),
  Loc       VARCHAR2(13),
  Mgr_no    NUMBER,
  Dept_type NUMBER);
The following example shows an INSTEAD OF trigger for inserting rows into the MANAGER_INFO view. CREATE OR REPLACE VIEW manager_info AS SELECT e.ename, e.empno, d.dept_type, d.deptno, p.prj_level, p.projno FROM Emp_tab e, Dept_tab d, Project_tab p WHERE e.empno = d.mgr_no
AND    d.deptno = p.resp_dept;
CREATE OR REPLACE TRIGGER manager_info_insert INSTEAD OF INSERT ON manager_info REFERENCING NEW AS n -- new manager information FOR EACH ROW DECLARE rowcnt number; BEGIN SELECT COUNT(*) INTO rowcnt FROM Emp_tab WHERE empno = :n.empno; IF rowcnt = 0 THEN INSERT INTO Emp_tab (empno,ename) VALUES (:n.empno, :n.ename); ELSE UPDATE Emp_tab SET Emp_tab.ename = :n.ename WHERE Emp_tab.empno = :n.empno; END IF; SELECT COUNT(*) INTO rowcnt FROM Dept_tab WHERE deptno = :n.deptno; IF rowcnt = 0 THEN INSERT INTO Dept_tab (deptno, dept_type) VALUES(:n.deptno, :n.dept_type); ELSE UPDATE Dept_tab SET Dept_tab.dept_type = :n.dept_type WHERE Dept_tab.deptno = :n.deptno; END IF; SELECT COUNT(*) INTO rowcnt FROM Project_tab WHERE Project_tab.projno = :n.projno; IF rowcnt = 0 THEN INSERT INTO Project_tab (projno, prj_level) VALUES(:n.projno, :n.prj_level); ELSE UPDATE Project_tab SET Project_tab.prj_level = :n.prj_level WHERE Project_tab.projno = :n.projno; END IF; END;
FOR EACH ROW Option
The FOR EACH ROW option determines whether the trigger is a row trigger or a
statement trigger. If you specify FOR EACH ROW, then the trigger fires once
for each row of the table that is affected by the triggering statement. The
absence of the FOR EACH ROW option indicates that the trigger fires only once
for each applicable statement, but not separately for each row affected by the
statement. For example, you define the following trigger:

--------------------------------------------------------------------------------
Note: You may need to set up the following data structures for certain
examples to work:

CREATE TABLE Emp_log (
    Emp_id      NUMBER,
    Log_date    DATE,
    New_salary  NUMBER,
    Action      VARCHAR2(20));
--------------------------------------------------------------------------------

CREATE OR REPLACE TRIGGER Log_salary_increase
AFTER UPDATE ON Emp_tab
FOR EACH ROW
WHEN (new.Sal > 1000)
BEGIN
    INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
    VALUES (:new.Empno, SYSDATE, :new.Sal, 'NEW SAL');
END;

Then, you enter the following SQL statement:

UPDATE Emp_tab SET Sal = Sal + 1000.0 WHERE Deptno = 20;

If there are five employees in department 20, then the trigger fires five
times when this statement is entered, because five rows are affected.

The following trigger fires only once for each UPDATE of the Emp_tab table:

CREATE OR REPLACE TRIGGER Log_emp_update
AFTER UPDATE ON Emp_tab
BEGIN
    INSERT INTO Emp_log (Log_date, Action)
    VALUES (SYSDATE, 'Emp_tab COMMISSIONS CHANGED');
END;

Trigger Size
The size of a trigger cannot be more than 32K.

Valid SQL Statements in Trigger Bodies
The body of a trigger can contain DML SQL statements. It can also contain
SELECT statements, but they must be SELECT ... INTO ... statements or the
SELECT statement in the definition of a cursor. DDL statements are not allowed
in the body of a trigger. Also, no transaction control statements are allowed
in a trigger: ROLLBACK, COMMIT, and SAVEPOINT cannot be used. For system
triggers, {CREATE/ALTER/DROP} TABLE statements and ALTER ... COMPILE are
allowed.

Recompiling Triggers
Use the ALTER TRIGGER statement to recompile a trigger manually. For example,
the following statement recompiles the PRINT_SALARY_CHANGES trigger:
ALTER TRIGGER Print_salary_changes COMPILE;

Disabling and enabling a trigger:

ALTER TRIGGER Reorder DISABLE;
ALTER TRIGGER Reorder ENABLE;

Or, for all triggers on a table at once:

ALTER TABLE Inventory DISABLE ALL TRIGGERS;
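To verify the current state of the triggers, you can query the data
dictionary. A minimal sketch; the table name INVENTORY is taken from the
example above:

SELECT trigger_name, status
FROM   user_triggers
WHERE  table_name = 'INVENTORY';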
19.1 SCN:
---------
The control files and all datafiles contain the last SCN (System Change
Number) after:
- a checkpoint, for example via ALTER SYSTEM CHECKPOINT,
- shutdown normal/immediate/transactional,
- a log switch, either by the system or via ALTER SYSTEM SWITCH LOGFILE,
- ALTER TABLESPACE ... BEGIN BACKUP, etc.
At a checkpoint the following occurs:
-------------------------------------
- The database writer (DBWR) writes all modified database blocks in the buffer
  cache back to the datafiles, and
- the log writer (LGWR) or checkpoint process (CKPT) updates both the
  controlfile and the datafiles to indicate when the last checkpoint occurred
  (SCN).
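You can observe the recorded checkpoint SCNs directly. A minimal sketch using
standard v$ views:

-- checkpoint SCN as recorded in the controlfile:
SELECT checkpoint_change# FROM v$database;

-- checkpoint SCN as recorded in each datafile header:
SELECT file#, checkpoint_change# FROM v$datafile_header;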
Log switching causes a checkpoint, but a checkpoint does not cause a log
switch.

LGWR writes log buffers to the online redo log:
-----------------------------------------------
- at commit
- when the redo log buffer is 1/3 full, or holds more than 1 MB of changes
- before DBWR writes modified blocks to the datafiles

LOG_CHECKPOINT_INTERVAL init.ora parameter:
-------------------------------------------
The LOG_CHECKPOINT_INTERVAL init.ora parameter controls how often a checkpoint
operation will be performed based upon the number of operating system blocks
that have been written to the redo log. If this value is larger than the size
of the redo log, then the checkpoint will only occur when Oracle performs a
log switch from one group to another, which is preferred.

NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_INTERVAL is interpreted to mean
that the incremental checkpoint should not lag the tail of the log by more
than LOG_CHECKPOINT_INTERVAL redo blocks.

On most Unix systems the operating system block size is 512 bytes. This means
that setting LOG_CHECKPOINT_INTERVAL to a value of 10,000 (the default
setting) causes a checkpoint to occur after 5,120,000 bytes (5M) are written
to the redo log. If the size of your redo log is 20M, you are taking 4
checkpoints for each log.

LOG_CHECKPOINT_TIMEOUT init.ora parameter:
------------------------------------------
The LOG_CHECKPOINT_TIMEOUT init.ora parameter controls how often a checkpoint
will be performed based on the number of seconds that have passed since the
last checkpoint.

NOTE: Starting with Oracle 8.1, LOG_CHECKPOINT_TIMEOUT is interpreted to mean
that the incremental checkpoint should be at the log position where the tail
of the log was LOG_CHECKPOINT_TIMEOUT seconds ago.

Checkpoint frequency impacts the time required for the database to recover
from an unexpected failure. Longer intervals between checkpoints mean that
more time will be required during database recovery.

LOG_CHECKPOINTS_TO_ALERT init.ora parameter:
--------------------------------------------
The LOG_CHECKPOINTS_TO_ALERT init.ora parameter, when set to a value of TRUE,
allows you to log checkpoint start and stop times in the alert log. This is
very helpful in determining if checkpoints are occurring at the optimal
frequency and gives a chronological view of checkpoints and other database
activities occurring in the background.

It is a misconception that setting LOG_CHECKPOINT_TIMEOUT to a given value
will initiate a log switch at that interval, enabling a recovery window used
for a stand-by database configuration. Log switches cause a checkpoint, but a
checkpoint does not cause a log switch. The only way to cause a log switch is
manually, with ALTER SYSTEM SWITCH LOGFILE, or by resizing the redo logs so
that they fill (and therefore switch) more often.

FAST_START_MTTR_TARGET init.ora parameter:
------------------------------------------
FAST_START_MTTR_TARGET enables you to specify the number of seconds the
database takes to perform crash recovery of a single instance. The lower the
value, the more often DBWR will write the blocks to disk.
FAST_START_MTTR_TARGET can be overridden by either FAST_START_IO_TARGET or
LOG_CHECKPOINT_INTERVAL.
FAST_START_IO_TARGET init.ora parameter:
----------------------------------------
FAST_START_IO_TARGET (available only with the Oracle Enterprise Edition)
specifies the number of I/Os that should be needed during crash or instance
recovery. Smaller values for this parameter result in faster recovery times.
This improvement in recovery performance is achieved at the expense of
additional writing activity during normal processing.

ARCHIVE_LAG_TARGET init.ora parameter:
--------------------------------------
The following initialization parameter setting sets the log switch interval
to 30 minutes (a typical value).

ARCHIVE_LAG_TARGET = 1800
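To see these settings in action you can set an MTTR target and compare it with
Oracle's running estimate. A hedged sketch; the target of 300 seconds is
illustrative only, and the ALTER SYSTEM form assumes a 9i spfile:

-- illustrative value; requires an spfile (9i)
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;

-- compare the requested target with Oracle's current estimate:
SELECT target_mttr, estimated_mttr
FROM   v$instance_recovery;

-- check the checkpoint-related parameters that can override it:
SELECT name, value
FROM   v$parameter
WHERE  name LIKE 'log_checkpoint%' OR name LIKE 'fast_start%';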
Note: More on SCN: ================== >>>> thread from asktom You Asked Tom, Would you tell me what snapshot too old error. When does it happen? What's the possible causes? How to fix it? Thank you very much. Jane and we said... I think support note covers this topic very well: ORA-01555 "Snapshot too old" - Detailed Explanation =================================================== Overview ~~~~~~~~ This article will discuss the circumstances under which a query can return the Oracle error ORA-01555 "snapshot too old (rollback segment too small)". The article will then proceed to discuss actions that can be taken to avoid the error and finally will provide
some simple PL/SQL scripts that illustrate the issues discussed. Terminology ~~~~~~~~~~~ It is assumed that the reader is familiar with standard Oracle terminology such as 'rollback segment' and 'SCN'. If not, the reader should first read the Oracle Server Concepts manual and related Oracle documentation. In addition to this, two key concepts are briefly covered below which help in the understanding of ORA-01555: 1. READ CONSISTENCY: ==================== This is documented in the Oracle Server Concepts manual and so will not be discussed further. However, for the purposes of this article this should be read and understood if not understood already. Oracle Server has the ability to have multi-version read consistency which is invaluable to you because it guarantees that you are seeing a consistent view of the data (no 'dirty reads'). 2. DELAYED BLOCK CLEANOUT: ========================== This is best illustrated with an example: Consider a transaction that updates a million row table. This obviously visits a large number of database blocks to make the change to the data. When the user commits the transaction Oracle does NOT go back and revisit these blocks to make the change permanent. It is left for the next transaction that visits any block affected by the update to 'tidy up' the block (hence the term 'delayed block cleanout'). Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in the header of the data block which identifies the rollback segment used to hold the rollback information for the changes made by the transaction. (This is required if the user later elects to not commit the changes and wishes to 'undo' the changes made.) Upon commit, the database simply marks the relevant rollback segment header entry as committed. Now, when one of the changed blocks is revisited Oracle examines the
header of the data block which indicates that it has been changed at some
point. The database needs to confirm whether the change has been committed or
whether it is currently uncommitted. To do this, Oracle determines the
rollback segment used for the previous transaction (from the block's header)
and then determines whether the rollback header indicates whether it has been
committed or not.

If it is found that the block is committed then the header of the data block
is updated so that subsequent accesses to the block do not incur this
processing.

This behaviour is illustrated in a very simplified way below. Here we walk
through the stages involved in updating a data block.

STAGE 1 - No changes made

Description: This is the starting point. At the top of the data block we have
an area used to link active transactions to a rollback segment (the 'tx'
part), and the rollback segment header has a table that stores information
upon all the latest transactions that have used that rollback segment. In our
example, we have two active transaction slots (01 and 02) and the next free
slot is slot 03. (Since we are free to overwrite committed transactions.)

Data Block 500                    Rollback Segment Header 5
+----+--------------+            +----------------------+---------+
| tx | None         |            | transaction entry 01 |ACTIVE   |
+----+--------------+            | transaction entry 02 |ACTIVE   |
| row 1             |            | transaction entry 03 |COMMITTED|
| row 2             |            | transaction entry 04 |COMMITTED|
| ... ..            |            | ...     ...       .. | ...     |
| row n             |            | transaction entry nn |COMMITTED|
+-------------------+            +--------------------------------+

STAGE 2 - Row 2 is updated

Description: We have now updated row 2 of block 500. Note that the data block
header is updated to point to rollback segment 5, transaction slot 3 (5.3),
and that it is marked uncommitted (Active).

Data Block 500                    Rollback Segment Header 5
+----+--------------+            +----------------------+---------+
| tx |5.3uncommitted|-+          | transaction entry 01 |ACTIVE   |
+----+--------------+ |          | transaction entry 02 |ACTIVE   |
| row 1             | +--------->| transaction entry 03 |ACTIVE   |
| row 2 *changed*   |            | transaction entry 04 |COMMITTED|
| ... ..            |            | ...     ...       .. | ...     |
| row n             |            | transaction entry nn |COMMITTED|
+-------------------+            +--------------------------------+
STAGE 3 - The user issues a commit Description: Next the user hits commit. Note that all that this does is it updates the rollback segment header's corresponding transaction slot as committed. It does *nothing* to the data block. Data Block 500 Rollback Segment Header 5 +----+--------------+ +----------------------+---------+ | tx |5.3uncommitted|--+ | transaction entry 01 |ACTIVE | +----+--------------+ | | transaction entry 02 |ACTIVE | | row 1 | +--->| transaction entry 03 |COMMITTED| | row 2 *changed* | | transaction entry 04 |COMMITTED| | ... .. | | ... ... .. | ... | | row n | | transaction entry nn |COMMITTED| +------------------+ +--------------------------------+ STAGE 4 - Another user selects data block 500 Description: Some time later another user (or the same user) revisits data block 500. We can see that there is an uncommitted change in the data block according to the data block's header. Oracle then uses the data block header to look up the corresponding rollback segment transaction table slot, sees that it has been committed, and changes data block 500 to reflect the true state of the datablock. (i.e. it performs delayed cleanout). Data Block 500 +----+--------------+ | tx | None | +----+--------------+ | row 1 | | row 2 | | ... .. | | row n | +------------------+
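If you want to force the delayed cleanout to happen at a time of your
choosing, a full scan after a large commit is the usual trick. A minimal
sketch; your_table is a placeholder name:

-- a full scan visits every block; any block left 'dirty' by a committed
-- transaction is tidied up (delayed block cleanout) as it is read
SELECT COUNT(*) FROM your_table;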
ORA-01555 Explanation ~~~~~~~~~~~~~~~~~~~~~ There are two fundamental causes of the error ORA-01555 that are a result of Oracle trying to attain a 'read consistent' image. These are : o The rollback information itself is overwritten so that Oracle is unable to rollback the (committed) transaction entries to attain a sufficiently old enough version of
the block. o The transaction slot in the rollback segment's transaction table (stored in the rollback segment's header) is overwritten, and Oracle cannot rollback the transaction header sufficiently to derive the original rollback segment transaction slot. Note: If the transaction of User A is not committed, the rollback segment entries will NOT be reused, but if User A commits, the entries become free for reuse, and if a query of User B takes a lot of time, and "meet" those overwritten entries, user B gets an error. Both of these situations are discussed below with the series of steps that cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short for 'Query Environment', which can be thought of as the environment that existed when a query is first started and to which Oracle is trying to attain a read consistent image. Associated with this environment is the SCN (System Change Number) at that time and hence, QENV 50 is the query environment with SCN 50. CASE 1 - ROLLBACK OVERWRITTEN This breaks down into two cases: another session overwriting the rollback that the current session requires or the case where the current session overwrites the rollback information that it requires. The latter is discussed in this article because this is usually the harder one to understand. Steps: 1. Session 1 starts query at time T1 and QENV 50 2. Session 1 selects block B1 during this query 3. Session 1 updates the block at SCN 51 4. Session 1 does some other work that generates rollback information. 5. Session 1 commits the changes made in steps '3' and '4'. (Now other transactions are free to overwrite this rollback information) 6. Session 1 revisits the same block B1 (perhaps for a different row). Now, Oracle can see from the block's header that it has been changed and it is later than the required QENV (which was 50). Therefore we need to get an image of the block as of this QENV.
If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required QENV. It is under this condition that Oracle may not be able to get the required rollback information because Session 1's changes have generated rollback information that has overwritten it and returns the ORA-1555 error. CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN 1. Session 1 starts query at time T1 and QENV 50 2. Session 1 selects block B1 during this query 3. Session 1 updates the block at SCN 51 4. Session 1 commits the changes (Now other transactions are free to overwrite this rollback information) 5. A session (Session 1, another session or a number of other sessions) then use the same rollback segment for a series of committed transactions. These transactions each consume a slot in the rollback segment transaction table such that it eventually wraps around (the slots are written to in a circular fashion) and overwrites all the slots. Note that Oracle is free to reuse these slots since all transactions are committed. 6. Session 1's query then visits a block that has been changed since the initial QENV was established. Oracle therefore needs to derive an image of the block as at that point in time. Next Oracle attempts to lookup the rollback segment header's transaction slot pointed to by the top of the data block. It then realises that this has been overwritten and attempts to rollback the changes made to the rollback segment header to get the original transaction slot entry. If it cannot rollback the rollback segment transaction table sufficiently it will return ORA-1555 since Oracle can no longer derive the required version of the data block. It is also possible to encounter a variant of the transaction slot being overwritten when using block cleanout. This is briefly described below :
Session 1 starts a query at QENV 50. After this another process updates the
blocks that Session 1 will require. When Session 1 encounters these blocks it
determines that the blocks have changed and have not yet been cleaned out (via
delayed block cleanout). Session 1 must determine whether the rows in the
block existed at QENV 50 or were subsequently changed.

In order to do this, Oracle must look at the relevant rollback segment
transaction table slot to determine the committed SCN. If this SCN is after
the QENV then Oracle must try to construct an older version of the block, and
if it is before then the block just needs cleanout to be good enough for the
QENV.

If the transaction slot has been overwritten and the transaction table cannot
be rolled back to a sufficiently old version then Oracle cannot derive the
block image and will return ORA-1555.

(Note: Normally Oracle can use an algorithm for determining a block's SCN
during block cleanout even when the rollback segment slot has been
overwritten. But in this case Oracle cannot guarantee that the version of the
block has not changed since the start of the query.)

Solutions
~~~~~~~~~
This section lists some of the solutions that can be used to avoid the
ORA-01555 problems discussed in this article. It addresses the cases where
rollback segment information is overwritten by the same session and when the
rollback segment transaction table entry is overwritten.

It is worth highlighting that if a single session experiences the ORA-01555
and it is not one of the special cases listed at the end of this article, then
the session must be using an Oracle extension whereby fetches across commits
are tolerated. This does not follow the ANSI model and in the rare cases where
ORA-01555 is returned one of the solutions below must be used.

CASE 1 - ROLLBACK OVERWRITTEN

1. Increase the size of the rollback segment, which will reduce the likelihood
   of overwriting rollback information that is needed (see the sketch after
   this list).

2. Reduce the number of commits (same reason as 1).
3. Run the processing against a range of data rather than the whole table.
   (Same reason as 1.)

4. Add additional rollback segments. This will allow the updates etc. to be
   spread across more rollback segments, thereby reducing the chances of
   overwriting required rollback information.

5. If fetching across commits, the code can be changed so that this is not
   done.

6. Ensure that the outer select does not revisit the same block at different
   times during the processing. This can be achieved by:
   - Using a full table scan rather than an index lookup
   - Introducing a dummy sort so that we retrieve all the data, sort it and
     then sequentially visit these data blocks.
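For the rollback-segment sizing fix in point 1, a hedged sketch; under 9i
automatic undo management the equivalent is to raise UNDO_RETENTION and grow
the undo tablespace. The segment name, retention value, size, and datafile
path below are illustrative only:

-- dictionary-managed rollback segments (pre-9i style):
ALTER ROLLBACK SEGMENT rbs01 STORAGE (MAXEXTENTS 500);

-- 9i automatic undo management equivalent:
ALTER SYSTEM SET undo_retention = 10800;
ALTER DATABASE DATAFILE '/u02/oradata/undotbs01.dbf' RESIZE 2000M;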
CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Use any of the methods outlined above except for '6'. This will allow
   transactions to spread their work across multiple rollback segments,
   thereby reducing the likelihood of rollback segment transaction table
   slots being consumed.

2. If it is suspected that the block cleanout variant is the cause, then force
   block cleanout to occur prior to the transaction that returns the ORA-1555.
   This can be achieved by issuing the following in SQL*Plus, SQL*DBA or
   Server Manager:

   alter session set optimizer_goal = rule;
   select count(*) from table_name;

   If indexes are being accessed then the problem may be an index block, and
   cleanout can be forced by ensuring that all of the index is traversed.
   E.g., if the index is on a numeric column with a minimum value of 25 then
   the following query will force cleanout of the index:

   select index_column from table_name where index_column > 24;

Examples
~~~~~~~~
Listed below are some PL/SQL examples that can be used to illustrate the
ORA-1555 cases given above. Before these PL/SQL examples will return this
error the database must be configured as follows:
o Use a small buffer cache (db_block_buffers).
  REASON: You do not want the session executing the script to be able to find
  old versions of the block in the buffer cache which can be used to satisfy a
  block visit without requiring the rollback information.

o Use one rollback segment other than SYSTEM.
  REASON: You need to ensure that the work being done is generating rollback
  information that will overwrite the rollback information required.

o Ensure that the rollback segment is small.
  REASON: See the reason for using one rollback segment.

ROLLBACK OVERWRITTEN

rem * 1555_a.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              session overwriting the rollback information required
rem *              by the same session.
drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

drop table dummy1;
create table dummy1 (a varchar2(200));

rem * Populate the example tables.
begin
  for i in 1..4000 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      insert into dummy1 values ('ssssssssssss');
      commit;
    end if;
  end loop;
  commit;
end;
/

rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;

declare
  -- Must use a predicate so that we revisit a changed block at a different
  -- time.
  -- If another tx is updating the table then we may not need the predicate.
  cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
  for c1rec in c1 loop
    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done = 'Y' where c1rec.rowid = rowid;
    commit;
  end loop;
end;
/

ROLLBACK TRANSACTION SLOT OVERWRITTEN

rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              overwriting the transaction slot in the rollback
rem *              segment header. This just uses one session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

rem * Populate demo table.
begin
  for i in 1..200 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/

drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;

rem * Cleanout demo table.
select count(*) from bigemp;

declare
  cursor c1 is select * from bigemp;
begin
  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
  -- out then the update and commit statements can be commented and the
  -- script will fail with ORA-1555 for the block cleanout variant.
  update bigemp set b = 'aaaaa';
  commit;
  for c1rec in c1 loop
    for i in 1..20 loop
      update mydual set a = a;
      commit;
    end loop;
  end loop;
end;
/
Special Cases
~~~~~~~~~~~~~
There are other special cases that may result in an ORA-01555. These are given
below but are rare and so not discussed in this article:

o Trusted Oracle can return this if configured in OS MAC mode. Decreasing
  LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.

o If a query visits a data block that has been changed by using the Oracle
  discrete transaction facility then it will return ORA-01555.

o It is feasible that a rollback segment created with the OPTIMAL clause may
  cause a query to return ORA-01555 if it has shrunk during the life of the
  query, causing rollback segment information required to generate consistent
  read versions of blocks to be lost.

Summary
~~~~~~~
This article has discussed the reasons behind the error ORA-01555 "Snapshot
too old", has provided a list of possible methods to avoid the error when it
is encountered, and has provided simple PL/SQL scripts that illustrate the
cases discussed.
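When chasing ORA-01555 on a live system, it can also help to see how often
each rollback segment wraps and shrinks. A minimal diagnostic sketch using
standard v$ views:

SELECT n.name, s.extents, s.rssize, s.optsize, s.wraps, s.shrinks
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn;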
>>>>> thread about SCN

Do It Yourself (DIY) Oracle replication

Here's a demonstration. First I create a simple table, called TBL_SRC. This is
the table on which we want to perform change-data-capture (CDC).

create table tbl_src
( x number primary key,
  y number
);

Next, I show a couple of CDC tables, and the trigger on TBL_SRC that will load
the CDC tables.

create table trx
( trx_id   varchar2(25) primary key,
  scn      number,
  username varchar2(30)
);

create table trx_detail
( trx_id    varchar2(25),
  step_id   number,
  step_tms  date,
  old_x     number,
  old_y     number,
  new_x     number,
  new_y     number,
  operation char(1)
);

alter table trx_detail add constraint xp_trx_detail
  primary key ( trx_id, step_id );

create or replace trigger b4_src
before insert or update or delete on tbl_src
for each row
DECLARE
  l_trx_id  VARCHAR2(25);
  l_step_id NUMBER;
BEGIN
  BEGIN
    l_trx_id  := dbms_transaction.local_transaction_id;
    l_step_id := dbms_transaction.step_id;
    INSERT INTO trx VALUES (l_trx_id, userenv('COMMITSCN'), USER);
  EXCEPTION
    WHEN dup_val_on_index THEN NULL;
  END;
  INSERT INTO trx_detail (trx_id, step_id, step_tms, old_x, old_y, new_x, new_y)
  VALUES (l_trx_id, l_step_id, SYSDATE, :OLD.x, :OLD.y, :NEW.x, :NEW.y);
END;
/
Let's see the magic in action. I'll insert a record. We'll see the
'provisional' SCN in the TRX table. Then we'll commit, and see the
'true'/post-commit SCN:

insert into tbl_src values ( 1, 1 );

1 row created.

select * from trx;

TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931665 CIDW

commit;

Commit complete.

select * from trx;
TRX_ID                           SCN USERNAME
------------------------- ---------- -------------------
3.4.33402                 3732931668 CIDW

Notice how the SCN "changed" from 3732931665 to 3732931668. Oracle was doing
some background transactions in between. And we can look at the details of
the transaction:

column step_id format 999,999,999,999,999,999,999;
/

TRX_ID           STEP_ID                      STEP_TMS   OLD_X  OLD_Y  NEW_X  NEW_Y  O
---------------- ---------------------------- ---------  -----  -----  -----  -----  -
3.4.33402        4,366,162,821,393,448        11-NOV-06                    1      1

This approach works back to at least Oracle 7.3.4. Not perfect, because it
only captures DML. A TRUNCATE is DDL, and that's not captured. For the actual
implementation, I stored the before and after values as CSV strings. For 9i
or later, I'd use built-in Oracle functionality.
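As the last remark hints, on later releases you can read the current SCN
directly instead of relying on userenv('COMMITSCN'). A minimal sketch:

-- 9i: current SCN via the flashback package
SELECT dbms_flashback.get_system_change_number FROM dual;

-- 10g: exposed directly in v$database
SELECT current_scn FROM v$database;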
19.2 init.ora parameters and ARCHIVE MODE:
------------------------------------------
LOG_ARCHIVE_DEST=/oracle/admin/cc1/arch
LOG_ARCHIVE_DEST_1=d:\oracle\oradata\arc
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_FORMAT=arc_%s.log
LOG_ARCHIVE_DEST_1=
LOG_ARCHIVE_DEST_2=
LOG_ARCHIVE_MAX_PROCESSES=2

19.3 Enabling or disabling archive mode:
----------------------------------------
ALTER DATABASE ARCHIVELOG    (mounted, not open)
ALTER DATABASE NOARCHIVELOG  (mounted, not open)
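The full sequence to switch a running database into archivelog mode, including
the bounce that 19.3 assumes, looks like this (a minimal sketch; SQL*Plus
syntax shown, svrmgrl works the same way):

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;    -- confirm "Archive Mode" and automatic archival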
19.4 Implementation backup in archive mode via OS script:
---------------------------------------------------------

19.4.1 OS backup script in unix
-------------------------------
###############################################
# Example archive log backup script in UNIX:  #
###############################################

# Set up the environment to point to the correct database
ORACLE_SID=CC1; export ORACLE_SID
ORAENV_ASK=NO;  export ORAENV_ASK
. oraenv

# Backup the tablespaces
svrmgrl <<EOFarch1
connect internal
alter tablespace SYSTEM begin backup;
! tar -cvf /dev/rmt/0hc /u01/oradata/sys01.dbf
alter tablespace SYSTEM end backup;
alter tablespace DATA begin backup;
! tar -rvf /dev/rmt/0hc /u02/oradata/data01.dbf
alter tablespace DATA end backup;
etc .. ..

# Now we backup the archived redo logs before we delete them.
# We must briefly stop the archiving process in order that
# we do not miss the latest files for sure.
archive log stop;
exit
EOFarch1

# Get a listing of all archived files.
FILES=`ls /db01/oracle/arch/cc1/arch*.dbf`; export FILES

# Start archiving again
svrmgrl <<EOFarch2
connect internal
archive log start;
exit
EOFarch2

# Now backup the archived files to tape
tar -rvf /dev/rmt/0hc $FILES

# Delete the backed-up archived files
rm -f $FILES

# Backup the control file
svrmgrl <<EOFarch3
connect internal
alter database backup controlfile to '/db01/oracle/cc1/cc1controlfile.bck';
exit
EOFarch3

tar -rvf /dev/rmt/0hc /db01/oracle/cc1/cc1controlfile.bck

###############################
#  End backup script example  #
###############################

19.5 Tablespaces and datafiles online/offline in non-archive and archive mode:
-------------------------------------------------------------------------------
Tablespace:
A tablespace can be taken offline in both archive mode and non-archive mode
without media recovery being needed. This holds for the NORMAL clause:
alter tablespace ... offline normal;
With the IMMEDIATE clause, recovery is needed.

Datafile:
A datafile can be taken offline in archive mode. When the datafile is brought
back online, media recovery must first be applied. In non-archive mode a
datafile cannot be taken offline.

Backup mode:
When you issue ALTER TABLESPACE .. BEGIN BACKUP, it freezes the datafile
header. This is so that we know what redo logs we need to apply to a given
file to make it consistent. While you are backing up that file hot, we are
still writing to it -- it is logically inconsistent. Some of the backed up
blocks could be from the SCN in place at the time the backup began -- others
from the time it ended, and others from various points in between.

19.6 Recovery in archive mode:
------------------------------

19.6.1: Recovery when a current controlfile exists
==================================================
Media recovery after the loss of datafile(s) and the like is normally based
on the SCN in the controlfile.

A1: complete recovery:
----------------------
RECOVER DATABASE           (database not open)
RECOVER TABLESPACE DATA    (database open, except this tablespace)
RECOVER DATAFILE 5         (database open, except this datafile)

A2: incomplete recovery:
------------------------
time based:   recover database until time '1999-12-31:23.40.00';
cancel based: recover database until cancel;
change based: recover database until change 60747681;

In both kinds of recovery the archived redo logs are applied. Always finish an
incomplete recovery with "alter database open resetlogs;" so that the stale
entries are purged from the online redo log files.

19.6.2: Recovery without a current controlfile
==============================================
This is media recovery when no current controlfile exists. The controlfile
then contains an SCN that is too old compared to the SCNs in the archived
redo logs. You must let Oracle know this via

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;

Specifying "using backup controlfile" is effectively telling Oracle that
you've lost your controlfile, and thus SCNs in file headers cannot be compared
to anything. So Oracle will happily keep applying archives until you tell it
to stop (or it runs out).
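A typical session for such a recovery looks like this (a minimal sketch; the
prompts for individual archived logs are omitted):

STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
-- apply the suggested archived logs one by one, then type CANCEL
ALTER DATABASE OPEN RESETLOGS;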
19.7 Queries to find the SCN:
-----------------------------
Every redo log is associated with a high and a low SCN.
SCNs can be found in V$LOG_HISTORY, V$ARCHIVED_LOG, V$DATABASE,
V$DATAFILE_HEADER, and V$DATAFILE.

Find the latest archived redo logs:

SELECT name FROM v$archived_log
WHERE sequence# = (SELECT max(sequence#) FROM v$archived_log
                   WHERE 1699499 >= first_change#);

sequence#          : gives the sequence number of the archived redo log
first_change#      : first SCN in the archived redo log
next_change#       : last SCN in the archived redo log, and the first SCN of
                     the next log
checkpoint_change# : latest actual SCN
FUZZY              : Y/N; if YES, the file contains changes that are later
                     than the SCN in its header

A datafile that contains a block whose SCN is more recent than the SCN of its
header is called a fuzzy datafile.

19.8 Archived redo logs needed for recovery:
--------------------------------------------
V$RECOVERY_LOG lists the archived logs that are needed for a recovery. You can
also use V$RECOVER_FILE to determine which files need to be recovered.

SELECT * FROM v$recover_file;

This gives you the FILE#, which you can then use against v$datafile and
v$tablespace:

SELECT d.name, t.name
FROM   v$datafile d, v$tablespace t
WHERE  t.ts# = d.ts#
AND    d.file# in (14,15,21);  -- use values obtained from the V$RECOVER_FILE query

19.9 Example: recovery of one datafile:
---------------------------------------
Suppose one datafile is corrupt. Then only that single file needs to be
restored, after which recovery is applied.

SVRMGRL> alter database datafile '/u01/db1/users01.dbf' offline;
$ cp /stage/users01.dbf /u01/db1
SVRMGRL> recover datafile '/u01/db1/users01.dbf';
and Oracle will come with a suggestion of which archived logfiles to apply
SVRMGRL> alter database datafile '/u01/db1/users01.dbf' online;
19.10 Example: recovery of the database:
----------------------------------------
Suppose multiple datafiles are lost. Restore the backup files, then:

SVRMGRL> startup mount;
SVRMGRL> recover database;
and Oracle will apply the archived redo logfiles:
media recovery complete
SVRMGRL> alter database open;
19.11 Restore to other disks:
-----------------------------
- alter database backup controlfile to trace;
- restore the files to their new location
- edit the controlfile trace with the new file locations
- save this as a .sql script and run it:
  SVRMGRL>@new.sql

controlfile:

startup nomount
create controlfile reuse database "brdb" noresetlogs archivelog
maxlogfiles 16
maxlogmembers 2
maxdatafiles 100
maxinstances 1
maxloghistory 226
logfile
  group 1 ('/disk03/db1/redo/redo01a.dbf',
           '/disk04/db1/redo/redo01b.dbf') size 2M,
  group 2 ('/disk03/db1/redo/redo02a.dbf',
           '/disk04/db1/redo/redo02b.dbf') size 2M
datafile
  '/disk04/oracle/db1/sys01.dbf',
  '/disk05/oracle/db1/rbs01.dbf',
  '/disk06/oracle/db1/data01.dbf',
  '/disk04/oracle/db1/index01.dbf'
character set 'us7ascii'
;

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.12 Copying a database to another server:
-------------------------------------------
1. Copy all files exactly from the one location to the other.
2. On the source server: alter database backup controlfile to trace
3. Create a proper init.ora with references to the new server.
4. Edit the ascii version of the controlfile from step 2 so that all disk
   locations point to the target:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "FSYS" RESETLOGS noARCHIVELOG
MAXLOGFILES 8
MAXLOGMEMBERS 4
etc..
ALTER DATABASE OPEN resetlogs;

or

CREATE CONTROLFILE REUSE SET DATABASE "TEST" RESETLOGS ARCHIVELOG
..
#RECOVER DATABASE
ALTER DATABASE OPEN RESETLOGS;

or

CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
..
..
RECOVER DATABASE
# All logs need archiving AND a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;

5. SVRMGRL>@script
   If there is a problem: delete the original controlfiles and leave out
   REUSE.

Example create controlfile:
---------------------------
If you want another database name, use CREATE CONTROLFILE SET DATABASE.

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "O901" RESETLOGS NOARCHIVELOG
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\O901\REDO01.LOG' SIZE 100M,
  GROUP 2 'D:\ORACLE\ORADATA\O901\REDO02.LOG' SIZE 100M,
  GROUP 3 'D:\ORACLE\ORADATA\O901\REDO03.LOG' SIZE 100M
DATAFILE
  '/oradata/data_small/fe_heat_data_small.dbf'
;

# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

19.13 PROBLEMS DURING RECOVERY:
-------------------------------
[Timeline sketch: normal business at t=t0; BEGIN BACKUP at t=t1, freezing the
datafile headers (system=453, users=455, tools=459); END BACKUP and a logfile
switch at t=t2; CRASH at t=t3.]

ORA-01194, ORA-01195:
---------------------

-------
Note 1:
-------
Suppose the system comes up with:

ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf'

Either you had the database in archive mode or in non-archive mode:

archive mode:
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

non-archive mode:
# RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

If you have checked that the SCNs of all files are the same number, you might
try in the init.ora file:

_allow_resetlogs_corruption = true

-------
Note 2:
-------
Problem Description
------------------You restored your hot backup and you are trying to do a point-in-time recovery. When you tried to open your database you received the following error: ORA-01195: online backup of file needs more recovery to be consistent Cause: An incomplete recovery session was started, but an insufficient number of redo logs were applied to make the file consistent. The reported file is an online backup that must be recovered to the time the backup ended. Action: Either apply more redo logs until the file is consistent or restore the file from an older backup and repeat the recovery. For more information about online backup, see the index entry "online backups" in the . This is assuming that the hot backup completed error free. Solution Description -------------------Continue to apply the requested logs until you are able to open the
database.
Explanation
-----------
When you perform hot backups on a file, the file header is frozen. For
example, datafile01 may have a file header frozen at SCN #456. When you backup
the next datafile the SCN # may be different. For example, the file header for
datafile02 may be frozen with SCN #457. Therefore, you must apply archive logs
until you reach the SCN # of the last file that was backed up. Usually,
applying one or two more archive logs will solve the problem, unless there was
a lot of activity on the database during the backup.
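To see how far each restored file still lags, you can compare the datafile
header SCNs and check for fuzziness. A minimal check using standard v$ views:

SELECT file#, checkpoint_change#, fuzzy
FROM   v$datafile_header;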
-------
Note 3:
-------
ORA-01194: file 1 needs more recovery to be consistent

I am working with a test server, I can load it again, but I would like to know
if this kind of problem can be solved or not. Just to let you know, I am new
in Oracle Database Administration.

I ran a hot backup script, which deleted the old archive logs at the end.
After checking the script's log, I realized that the hot backup was not
successful and it deleted the archives. I tried to startup the database and an
error occurred:
"ORA-01589: must use RESETLOGS or NORESETLOGS option for database open"
I tried to open it with the RESETLOGS option and then another error occurred:
"ORA-01195: online backup of file 1 needs more recovery to be consistent"

Just because it was a test environment, I have never taken any cold backups.
I still have hot backups. I don't know how to recover from those.
If anyone can tell me how to do it from SQLPLUS (SVRMGRL is not loaded), I
would really appreciate it.
Thanks,

Hi Hima,
The following might help. You now have a database that is operating like it's
in noarchive mode since the logs are gone.

1. Mount the database.

2. Issue the following query:

SELECT V1.GROUP#, MEMBER, SEQUENCE#, FIRST_CHANGE#
FROM   V$LOG V1, V$LOGFILE V2
WHERE  V1.GROUP# = V2.GROUP#;

This will list all your online redolog files and their respective sequence and
first change numbers.

3. If the database is in NOARCHIVELOG mode, issue the query:

SELECT FILE#, CHANGE# FROM V$RECOVER_FILE;

If the CHANGE# is GREATER than the minimum FIRST_CHANGE# of your logs, the
datafile can be recovered.

4. Recover the datafile after taking it offline (note that you cannot take
SYSTEM offline, which is the file in error in your case):

RECOVER DATAFILE ''

5. Confirm each of the logs that you are prompted for until you receive the
message "Media recovery complete". If you are prompted for a nonexisting
archived log, Oracle probably needs one or more of the online logs to proceed
with the recovery. Compare the sequence number referenced in the ORA-280
message with the sequence numbers of your online logs. Then enter the full
path name of one of the members of the redo group whose sequence number
matches the one you are being asked for. Keep entering online logs as
requested until you receive the message "Media recovery complete".

6. Bring the datafile online. No need for SYSTEM.

7. If the database is at the mount stage, open it.

Perform a full closed backup of the existing database.

-------
Note 4:
-------
Recover until time using backup controlfile
Hi,
I am trying to perform an incomplete recovery to an arbitrary point in time in
the past. E.g. I want to go back five minutes.

I have a hot backup of my database. (Tablespaces into hot backup mode, copy
files, tablespaces out of hot backup mode, archive current log, backup
controlfile to a file and also to a trace.) (Yep, I'm in archivelog mode as
well.)

I shut down the current database and blow the datafiles, online redo logs and
controlfiles away. I restore my backup copy of the database (just the
datafiles), startup nomount, and then run an edited controlfile trace backup
(with resetlogs). I then RECOVER DATABASE UNTIL TIME 'whenever' USING BACKUP
CONTROLFILE.

I'm prompted for logs in the usual way, but the recovery ends with an
ORA-1547: Recover succeeded but open resetlogs would give the following error.
The next error is that datafile 1 (system ts) would need more recovery.

Now Metalink tells me that this is usually due to backups being restored that
are older than the archive redo logs - this isn't the case. I have all the
archive redo logs I need to cover the time the backup was taken up to the
present. The time specified in the recovery is after the backup as well.

What am I missing here? It's driving me nuts. I'm off back to the docs again!
Thanks in advance
Tim

--------------------------------------------------------------------------------
From: Anand Devaraj 15-Aug-02 15:15
Subject: Re : Recover until time using backup controlfile

The error indicates that Oracle requires a few more SCNs to get all the
datafiles in sync. It is quite possible that those SCNs are present in the
online redo logfiles which were lost. In such cases, when Oracle asks for a
non-existent archive log, you should provide the complete path of the online
log file for the recovery to succeed.

Since you don't have an online log file, you should use RECOVER DATABASE UNTIL
CANCEL USING BACKUP CONTROLFILE. In this case, when you exhaust all the
archive log files, you issue the cancel command, which will
automatically roll back all the incomplete transactions and get all the
datafile headers in sync with the controlfile.

To do an incomplete recovery using time, you usually require the online
logfiles to be present.

Anand

--------------------------------------------------------------------------------
From: Radhakrishnan paramukurup 15-Aug-02 16:19
Subject: Re : Recover until time using backup controlfile

I am not sure whether you have missed this step or just missed it in the note.
You need to also switch the log at the end of the backup (I do as a matter of
practice); else you need the next log, which is not sure to be available in
case of a failure. Otherwise some of the changes needed to reach a consistent
state are still in the online log and you can never open until you reach a
consistent state.
Hope this helps ........

--------------------------------------------------------------------------------
From: Mark Gokman 15-Aug-02 16:41
Subject: Re : Recover until time using backup controlfile

To successfully perform incomplete recovery, you need a full db backup that
was completed prior to the point to which you want to recover, plus you need
all archive logs containing all SCNs up to the point to which you want to
recover. Applying these rules to your case, I have two questions:
- are you recovering to a point in time AFTER the time the successful full
  backup was completed?
- is there an archive log that was generated AFTER the time you specify in
  until time?
If both answers are yes, then you should have no problems. I actually recently
performed such a recovery several times.

--------------------------------------------------------------------------------
From: Tim Palmer 15-Aug-02 18:02
Subject: Re : Re : Recover until time using backup controlfile

Thanks Guys!

I think Mark has hit the nail on the head here. I was being an idiot! I've run
this exercise a few more times (with success) and I am convinced that what I
was doing was trying to recover to a point in time that basically was before
the latest SCN of any one file in the hot backup set I was using - convinced
myself that I wasn't - but I must have been..... perhaps I need a holiday!

Thanks again
Tim

--------------------------------------------------------------------------------
From: Oracle, Rowena Serna 16-Aug-02 15:44
Subject: Re : Recover until time using backup controlfile

Thanks to Mark for his input in helping you out.

-------
Note 5:
-------
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 2 was not restored from a sufficiently old backup
ORA-01110: data file 2: 'D:\ORACLE\ORADATA\\UNDOTBS01.DBF'

File number, name and directory may vary depending on Oracle configuration.

Details:

Undo tablespace data description

In an Oracle database, Undo tablespace data is an image or snapshot of the
original contents of a row (or rows) in a table. This data is stored in Undo
segments (formerly Rollback segments in earlier releases of Oracle) in the
Undo tablespace.

When a user begins to make a change to the data in a row in an Oracle table,
the original data is first written to Undo segments in the Undo tablespace.
The entire process (including the creation of the Undo data) is recorded in
Redo logs before the change is completed and written in the Database Buffer
Cache, and then the data files via the database writer (DBWn) process. If the
transaction does not complete due to some error, or should there be a user
decision to reverse (rollback) the change, this Undo data is critical for the
ability to roll back or undo the changes that were made.

Undo data also ensures a way to provide read consistency in the database. Read
consistency means that if there is a data change in a row of data that is not
yet committed, a new query of this same row or table will not display any of
the uncommitted data to other users, but will use the information from the
Undo segments in the Undo tablespace to actually construct and present a
consistent view of the data that only includes committed transactions or
information.

During recovery, Oracle uses its Redo logs to play forward through
transactions in
a database so that all lost transactions (data changes and their Undo data
generation) are replayed into the database. Then, once all the Redo data is
applied to the data files, Oracle uses the information in the Undo segments
to undo or roll back all uncommitted transactions. Once recovery is complete,
all data in the database is committed data, the System Change Numbers (SCN)
on all data files and the control_files match, and the database is considered
consistent.

As for Oracle 9i, the default method of Undo management is no longer manual,
but automatic; there are no Rollback segments in individual user tablespaces,
and all Undo management is processed by the Oracle server, using the Undo
tablespace as the container to maintain the Undo segments for the user
tablespaces in the database. The tablespace that still maintains its own
Rollback segments is the System tablespace, but this behavior is by design
and irrelevant to the discussion here.

If this configuration is left as the default for the database, and the 5.022
or 5.025 version of the VERITAS Backup Exec (tm) Oracle Agent is used to
perform Oracle backups, the Undo tablespace will not be backed up. If
Automatic Undo Management is disabled and the database administrator (DBA)
has modified the locations for the Undo segments (if the Undo data is no
longer in the Undo tablespace), this data may be located elsewhere, and the
issues addressed by this TechNote may not affect the ability to fully recover
the database, although it is still recommended that the upgrade to the 5.026
Oracle Agent be performed.

Scenario 1

The first scenario would be a recovery of the entire database to a previous
point-in-time. This type of recovery would utilize the RECOVER DATABASE USING
BACKUP CONTROLFILE statement and its customizations to restore the entire
database to a point before the entry of improper or corrupt data, or to roll
back to a point before the accidental deletion of critical data. In this type
of situation, the most common procedure for the restore is to just restore
the entire online backup over the existing Oracle files with the database
shut down. (See the Related Documents section for the appropriate
instructions on how to restore and recover an Oracle database to a
point-in-time using an online backup.) In this scenario, where the entire
database would be rolled back in time, an offline restore would include all
data files, archived log files, and the backup control_file from
the tape or backup media. Once the RECOVER DATABASE USING BACKUP CONTROLFILE command was executed, Oracle would begin the recovery process to roll forward through the Redo log transactions, and it would then roll back or undo uncommitted transactions. At the point when the recovery process started on the actual Undo tablespace, Oracle would see that the SCN of that tablespace was too high (in relation to the record in the control_file). This would happen simply because the Undo tablespace wasn't on the tape or backup media that was restored, so the original Undo tablespace wouldn't have been overwritten, as were the other data files, during the restore operation. The failure would occur because the Undo tablespace would still be at its SCN before the restore from backup (an SCN in the future as related to the restored backup control_file). All other tablespaces and control_files would be back at their older SCNs (not necessarily consistent yet), and the Oracle server would respond with the following error messages: ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below ORA-01152: file 2 was not restored from a sufficiently old backup ORA-01110: data file 2: 'D:\ORACLE\ORADATA\\UNDOTBS01.DBF' At this point, the database cannot be opened with the RESETLOGS option, nor in a normal mode. Any attempt to do so yields the error referenced above. SQL> alter database open resetlogs; alter database open resetlogs * Error at line 1: ORA-01152: file 2 was not restored from a sufficiently old backup ORA-01110: data file 2: 'D:\ORACLE\ORADATA\DRTEST\UNDOTBS01.DBF' The only recourse here is to recover or restore an older backup that contains an Undo tablespace, whether from an older online backup, or from a closed or offline backup or copy of the database. Without this ability to acquire an older Undo tablespace to rerun the recovery operation, it will not be possible to start the database. At this point, Oracle Technical Support must be contacted. Scenario 2 The second scenario would involve the actual corruption or loss of the Undo tablespace's data files. If the Undo tablespace data is lost or corrupted due to media failure or other internal logical error or user error, this data/tablespace must be recovered. Oracle 9i does offer the ability to create a new Undo tablespace and to alter the
Oracle Instance to use this new tablespace when deemed necessary by the DBA. One of the requirements to accomplish this change, though, is that there cannot be any active transactions in the Undo segments of the tablespace when it is time to actually drop it. In the case of data file corruption, uncommitted transactions in the database that have data in Undo segments can be extremely troublesome because the existence of any uncommitted transactions will lock the Undo segments holding the data so that they cannot be dropped. This will be evidenced by an "ORA-01548" error if this is attempted. This error, in turn, prevents the drop and recreation of the Undo tablespace, and thus prevents the successful recovery of the database. To overcome this problem, the transaction tables of the Undo segments can be traced to provide details on transactions that Oracle is trying to recover via rollback and these traces will also identify the objects that Oracle is trying to apply the undo to. Oracle Doc ID: 94114.1 may be referenced to set up a trace on the database startup so that the actual transactions that are locking the Undo segments can be identified and dropped. Dropping objects that contain uncommitted transactions that are holding locks on Undo segments does entail data loss, and the amount of loss depends on how much uncommitted data was in the Undo segments at the point of failure. When utilized, this trace is actually monitoring or dumping data from the transaction tables in the headers of the Undo segments (where the records that track the data in the Undo segments are located), but if the Undo tablespace's data file is actually missing, has been offline dropped, or if these Undo segment headers have been corrupted, even the ability to dump the transaction table data is lost and the only recourse at this point may be to open the database, export, and rebuild. At this point, Oracle Technical Support must be contacted. Backup Exec Agent for Oracle 5.022 and 5.025 should be upgraded to 5.026 When using the 5.022 or 5.025 version of the Backup Exec for Windows Servers Oracle Agent (see the Related Documents section for the appropriate instructions on how to identify the version of the Oracle Agent in use), the Oracle Undo tablespace is not available for backup because the Undo tablespace falls into the type category of Undo, and only tablespaces with a content type of PERMANENT are located and made available for backup. Normal full backups with all Oracle components selected will run without error and will complete with a successful status since the Undo tablespace is not actually flagged as a selection. In most Oracle recovery situations, this absence of the Undo tablespace data for restore would not
cause any problem because the original Undo tablespace is still available on
the database server. Restores of User tablespaces, which do not require a
rollback in time, would proceed normally since lost data or changes would be
replayed back into the database, and Undo data would be available to roll back
uncommitted transactions to leave the database in a consistent state and ready
for user access.

However, in certain recovery scenarios (in which a rollback in time or full
database recovery is attempted, or in the case of damaged or missing Undo
tablespace data files), this missing Undo data can result in the inability to
properly recover tablespaces back to a point-in-time, and could potentially
render the database unrecoverable without an offline backup or the assistance
of Oracle Technical Support. The scenarios in this TechNote describe two
examples (this does not necessarily imply that these are the only scenarios)
of how this absence of the Undo tablespace on tape or backup media, and thus
its inability to be restored, can result in failure of the database to open
and can result in actual data loss.

The only solution to the problems referenced within this TechNote is to
upgrade the Backup Exec for Windows Servers Oracle Agent to version 5.026, and
to take new offline (closed database) and then new online (running database)
backups of the entire Oracle 9i database as per the Oracle Agent documentation
in the Backup Exec 9.0 for Windows Servers Administrator's Guide. Oracle 9i
database backups made with the 5.022 and 5.025 Agent that shipped with Backup
Exec 9.0 for Windows Servers build 4367 or build 4454 should be considered
suspect in the context of the information provided in this TechNote.

Note: The 5.022, 5.025, and 5.026 versions of the Oracle Agent are compatible
with Backup Exec 8.6 for Windows NT and Windows 2000, which includes support
for Oracle 9i, as well as Backup Exec 9.0 for Windows Servers. See the Related
Documents section for instructions on how to identify the version of the
Oracle Agent in use.

-------
Note 6:
-------
- Backup

a) Consistent backups
A consistent backup means that all data files and control files are consistent
to a point in time, i.e. they have the same SCN. This is the only method of
backup when the database is in NOARCHIVELOG mode.
b) Inconsistent backups
An inconsistent backup is possible only when the database is in Archivelog
mode and proper Oracle-aware software is used. Most default backup software
cannot back up open files. Special precautions need to be used and testing
needs to be done. You must apply redo logs to the data files in order to
restore the database to a consistent state.

c) Database Archive mode
The database can run in either Archivelog mode or Noarchivelog mode. When you
first create the database, you specify if it is to be in Archivelog mode.
Then in the init.ora file you set the parameter log_archive_start=true so
that archiving will start automatically on startup. If the database has not
been created with Archivelog mode enabled, you can issue the command whilst
the database is mounted, not open:

SVRMGR> alter database Archivelog;
SVRMGR> log archive start
SVRMGR> alter database open
SVRMGR> archive log list

This command will show you the log mode and if automatic archival is set.

d) Backup Methods
Essentially, there are two backup methods, hot and cold, also known as online
and offline, respectively. A cold backup is one taken when the database is
shut down. A hot backup is one taken when the database is running.

Commands for a hot backup:

1.  Svrmgr> alter database Archivelog
    Svrmgr> log archive start
    Svrmgr> alter database open
2.  Svrmgr> archive log list
    -- This will show what the oldest online log sequence is. As a
    -- precaution, always keep all archived log files starting from the
    -- oldest online log sequence.
3.  Svrmgr> Alter tablespace tablespace_name BEGIN BACKUP
4.  -- Using an OS command, backup the datafile(s) of this tablespace.
5.  Svrmgr> Alter tablespace tablespace_name END BACKUP
    -- Repeat steps 3, 4, 5 for each tablespace.
6.  Svrmgr> archive log list
    -- Do this again to obtain the current log sequence. You will want to
    -- make sure you have a copy of this redo log file.
7.  So to force an archived log, issue:
    Svrmgr> ALTER SYSTEM SWITCH LOGFILE
    A better way to force this would be:
    Svrmgr> alter system archive log current;
8.  Svrmgr> archive log list
    -- This is done again to check if the log file has been archived and to
    -- find the latest archived sequence number.
9.  Backup all archived log files determined from steps 2 and 8. Do not
    backup the online redo logs. These will contain the end-of-backup marker
    and can cause corruption if used during recovery.
10. Back up the control file:
    Svrmgr> Alter database backup controlfile to 'filename'

e) Incremental backups
These are backups that are taken on blocks that have been modified since the
e) Incremental backups
These are backups of only those blocks that have been modified since the last
backup. They are useful because they take less space and time. There are two
kinds of incremental backups: cumulative and noncumulative.
Cumulative incremental backups include all blocks that were changed since the
last backup at a lower level. This reduces the work during restoration, as only
one backup contains all the changed blocks. Noncumulative backups include only
blocks that were changed since the previous backup at the same or lower level.
Using RMAN, you issue the command "backup incremental level n".

f) Support scenarios
When the database crashes, you now have a backup. You restore the backup and
then recover the database. Also, don't forget to take a backup of the control
file whenever there is a schema change.

RECOVERY
========
There are several kinds of recovery you can perform, depending on the type of
failure and the kind of backup you have. Essentially, if you are not running in
ARCHIVELOG mode, then you can only restore the cold backup of the database, and
you will lose any new data and changes made since that backup was taken. If,
however, the database is in ARCHIVELOG mode, you will be able to recover the
database up to the time of failure.
There are three basic types of recovery:
1. Online block recovery. This is performed automatically by Oracle (PMON).
   It occurs when a process dies while changing a buffer. Oracle reconstructs
   the buffer using the online redo logs and writes it to disk.
2. Thread recovery. This is also performed automatically by Oracle. It occurs
   when an instance crashes while the database is open. Oracle applies all the
   redo changes in the thread that occurred since the last time the thread was
   checkpointed.
3. Media recovery. This is required when a data file is restored from backup:
   the checkpoint count in the data file header is then not equal to the
   checkpoint count in the control file. It is also required when a file was
   taken offline without a checkpoint, and when recovering with a backup
   control file.
Now let's explain a little about redo vs rollback. Redo information is recorded
so that all changes that took place can be repeated during recovery. Rollback
information is recorded so that changes made by transactions that were not
committed can be undone. The redo logs are used to roll forward the changes
made, both committed and uncommitted. Then, using the undo information in the
rollback segments, the uncommitted changes are rolled back.

Media failure and recovery in NOARCHIVELOG mode
In this case, your only option is to restore a backup of your Oracle files. The
files you need are all the datafiles and the control files. You only need to
restore the password file or parameter files if they are lost or corrupted.

Media failure and recovery in ARCHIVELOG mode
In this case, there are several kinds of recovery you can perform, depending on
what has been lost. The three basic kinds of recovery are:
1. Recover database - here you use the recover database command; the database
   must be closed and mounted. Oracle will recover all datafiles that are
   online.
2. Recover tablespace - use the recover tablespace command. The database can be
   open, but the tablespace must be offline.
3. Recover datafile - use the recover datafile command. The database can be
   open, but the specified datafile must be offline.
Note: You must have all archived logs since the backup you restored from, or
else you will not have a complete recovery.
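In each case the damaged file(s) must first be restored from backup. A sketch of
the two open-database variants, reusing the user_data tablespace and datafile
names used as examples elsewhere in this note:

SQL> alter database datafile 'c:\orant\database\usr1orcl.ora' offline;
SQL> recover datafile 'c:\orant\database\usr1orcl.ora';
SQL> alter database datafile 'c:\orant\database\usr1orcl.ora' online;

SQL> alter tablespace user_data offline immediate;
SQL> recover tablespace user_data;
SQL> alter tablespace user_data online;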
a) Point-in-time recovery
A typical scenario is that you dropped a table at, say, noon, and want to
recover it. You will have to restore the appropriate datafiles and do a
point-in-time recovery to a time just before noon. Note: you will lose any
transactions that occurred after noon.
After you have recovered until noon, you must open the database with resetlogs.
This is necessary to reset the log sequence numbers, and it protects the
database from ever having the redo that was discarded by the incomplete
recovery applied.
The four incomplete recovery scenarios all work the same way:
Recover database until time '1999-12-01:12:00:00';
Recover database until cancel;    (you type in cancel to stop)
Recover database until change n;
Recover database until cancel using backup controlfile;
Note: when performing an incomplete recovery, the datafiles must be online. Do a
select name, status from v$datafile
to find out whether any files are offline. If you were to perform a recovery on
a database which has tablespaces offline, and they had not been taken offline in
a normal state, you would lose them when you issue the open resetlogs command.
This is because those data files would need recovery from a point in time before
the resetlogs option was used.

b) Recovery without a control file
If you have lost the current control file, or the current control file is
inconsistent with the files that you need to recover, you need to recover either
by using a backup control file or by creating a new control file. You can also
recreate the control file based on the current one using the
'alter database backup controlfile to trace' command, which will create a script
for you to run to create a new one.
The 'recover database using backup controlfile' command must be used whenever
the control file in use is not the current one. The database must then be
opened with the resetlogs option.

c) Recovery of a missing datafile containing rollback segments
The tricky part here is if you are performing online recovery; otherwise you can
just use the recover datafile command. If you are performing online recovery,
you must first remove the rollback_segments parameter from the init.ora file;
otherwise Oracle will want to use those rollback segments when opening the
database, cannot find them, and won't open. Until you recover the datafiles that
contain the rollback segments, you need to create some temporary rollback
segments so that new transactions can work. Even if other rollback segments are
OK, they will have to be taken offline; so all the rollback segments that belong
to the datafile need to be recovered. If all the datafiles belonging to the
tablespace rollback_data were lost, you can now issue a
recover tablespace rollback_data. Next, bring the tablespace online and check
the status of the rollback segments by doing a
select segment_name, status from dba_rollback_segs;
You will see the list of rollback segments that have status NEEDS RECOVERY.
Simply issue an alter rollback segment ... online command for each of them to
complete the process (a consolidated sketch follows section d) below). Don't
forget to restore the rollback_segments parameter in the init.ora.

d) Recovery of a missing datafile without rollback segments
There are three ways to recover in this scenario, as mentioned above:
1. recover database
2. recover datafile 'c:\orant\database\usr1orcl.ora'
3. recover tablespace user_data
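A consolidated sketch of the procedure in section c), assuming the lost file
belongs to the rollback_data tablespace from the example above; the datafile
name rbs1orcl.ora and the segment names rbs_temp and rbs01 are made up for
illustration, and the exact sequence will vary with the situation:

--Remove the rollback_segments parameter from the init.ora and restore the
--lost file(s) from backup, then:
SQL> startup mount
SQL> alter database datafile 'c:\orant\database\rbs1orcl.ora' offline;
SQL> alter database open
SQL> create rollback segment rbs_temp tablespace system;
SQL> alter rollback segment rbs_temp online;
SQL> alter tablespace rollback_data offline immediate;
SQL> recover tablespace rollback_data;
SQL> alter tablespace rollback_data online;
SQL> select segment_name, status from dba_rollback_segs;
SQL> alter rollback segment rbs01 online;  --repeat for each NEEDS RECOVERY one
--Finally, take rbs_temp offline and drop it, restore the rollback_segments
--parameter in the init.ora, and restart.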
e) Recovery with missing online redo logs
Missing online redo logs means that somehow you have lost your redo logs before
they had a chance to be archived. This means that crash recovery cannot be
performed, so media recovery is required instead. All datafiles will need to be
restored and rolled forward until the last available archived log file is
applied. This is thus an incomplete recovery, and as such the recover database
command is necessary (i.e., you cannot do a datafile or tablespace recovery).
As always when an incomplete recovery is performed, you must open the database
with resetlogs.
Note: the best way to avoid this kind of loss is to mirror your online log
files.
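A sketch of that sequence, assuming all datafiles have already been restored
from the last good backup:

SQL> startup mount
SQL> recover database until cancel;
--Apply the archived logs as prompted, and type CANCEL after the last
--available one has been applied.
SQL> alter database open resetlogs;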
f) Recovery with missing archived redo logs
If your archives are missing, the only way to recover the database is to restore
from your latest backup. You will lose the changes recorded in the missing
archived redo logs. Again, this is why Oracle strongly suggests mirroring your
online redo logs and keeping duplicate copies of the archives.

g) Recovery with the resetlogs option
The resetlogs option should be the last resort; however, as we have seen above,
it may be required by incomplete recoveries (recovery using a backup control
file, or a point-in-time recovery). It is imperative that you back up the
database immediately after you have opened it with resetlogs. The reason is
that Oracle updates the control file and resets the log sequence numbers, and
you will not be able to recover using the old logs.
The next concern is what to do if the database crashes after you have opened it
with resetlogs, but before you have had time to back up the database. How to
recover?
  Shut down the database
  Back up all the datafiles and the control file
  Startup mount
  Alter database open resetlogs
This works because you now have a copy of a control file taken after the
resetlogs point.
Media failure before a backup taken after resetlogs:
if a media failure occurs before a backup was made after you opened the
database using resetlogs, you will most likely lose data. The reason is that
restoring a lost datafile from a backup taken prior to the resetlogs will give
an error that the file is from an earlier point in time, and you no longer have
the redo from before the resetlogs to bring it forward.

h) Recovery with corrupted/missing rollback segments
If a rollback segment is missing or corrupted, you will not be able to open the
database. The first step is to find out what object is causing the rollback
segment to appear corrupted. If we can determine that, we can drop that object;
if we can't, we will need to log an iTar to engage Oracle Support.
So, how do we find out if it's actually a bad object?
1. Make sure that all tablespaces are online and all datafiles are online. This
   can be checked through v$datafile, under the status column. For the
   tablespaces associated with the datafiles, look in dba_tablespaces.
   If this doesn't show us anything, i.e. all are online, then:
2. Put the following in the init.ora:
   event = "10015 trace name context forever, level 10"
   This event will generate a trace file that reveals information about the
   transaction Oracle is trying to roll back and, most importantly, what object
   Oracle is trying to apply the undo to. Stop and start the database.
3. Check in the directory that is specified by the user_dump_dest parameter
   (see the init.ora or the show parameter command) for a trace file that was
   generated at startup time.
4. In the trace file, there should be a message similar to:
   error recovery tx(#,#) object #
   TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects.
5. Use the following query to find out what object Oracle is trying to perform
   recovery on:
   select owner, object_name, object_type, status
   from dba_objects
   where object_id = <object # from the trace file>;
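Pulled together, the diagnosis in h) looks roughly like this (the object_id
value 12345 is a made-up example; substitute the object # reported in your own
trace file):

--In the init.ora (remove again once the diagnosis is complete):
--  event = "10015 trace name context forever, level 10"
SQL> select name, status from v$datafile;
SQL> select tablespace_name, status from dba_tablespaces;
--Restart the instance, then locate the new trace file:
SQL> show parameter user_dump_dest
--Look for "error recovery tx(#,#) object #" in the trace, then:
SQL> select owner, object_name, object_type, status
     from dba_objects
     where object_id = 12345;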