How to use LogMiner to locate Archive Logs Flow

R. Wang

Keywords: LogMiner, STATSPACK, archive log files

Environment: Oracle 9iR2 on Solaris 5.9

Circumstance: Without the introduction of any new jobs, we observed disk space filling up rapidly over a very short period. A new archive log file (18 MB) was being created every minute, and disk occupancy climbed quickly, as shown below:

-rw-r-----   1 oracle   dba   18873856 Jul 24 20:38 PROD_0001_0000045102.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:39 PROD_0001_0000045103.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:40 PROD_0001_0000045104.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:41 PROD_0001_0000045105.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:41 PROD_0001_0000045106.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:43 PROD_0001_0000045107.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:43 PROD_0001_0000045108.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:45 PROD_0001_0000045109.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:45 PROD_0001_0000045110.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:46 PROD_0001_0000045111.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:47 PROD_0001_0000045112.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:49 PROD_0001_0000045113.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:50 PROD_0001_0000045114.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:51 PROD_0001_0000045115.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:52 PROD_0001_0000045116.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:53 PROD_0001_0000045117.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:54 PROD_0001_0000045118.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:56 PROD_0001_0000045119.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:56 PROD_0001_0000045120.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:57 PROD_0001_0000045121.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:58 PROD_0001_0000045122.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 20:59 PROD_0001_0000045123.arc
-rw-r-----   1 oracle   dba   18873856 Jul 24 21:00 PROD_0001_0000045124.arc
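Before diagnosing further, it helps to quantify the archive generation rate from inside the database as well. The following query is a sketch (not part of the original diagnosis) that counts log switches per hour using v$log_history:

SQL> select trunc(first_time, 'HH24') hr, count(*) switches
     from v$log_history
     group by trunc(first_time, 'HH24')
     order by 1;

At one 18 MB log per minute, the affected hours would show roughly 60 switches against a normally far lower baseline.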

Steps of Diagnosis

1. Checking memory, CPU, and I/O

Checking memory, CPU, and I/O, we failed to find any abnormal process that was excessively consuming database or OS resources.
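A complementary check (again a sketch, not from the original diagnosis) is to look for sessions that have generated an unusually large amount of redo, since redo volume is what drives archive log creation:

SQL> select s.sid, s.username, st.value redo_bytes
     from v$sesstat st, v$statname sn, v$session s
     where st.statistic# = sn.statistic#
     and sn.name = 'redo size'
     and st.sid = s.sid
     order by st.value desc;

If a single session dominates the redo figure, it is usually the place to start digging.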

Note: For guidance on monitoring Oracle resource consumption in Unix, see the Oracle Metalink article "Monitor Oracle Resource Consumption in Unix", Note 148466.1.

2. Using LogMiner to locate the targeted archive log files

We then examined the newly created archive log files with LogMiner to see what the database was actually doing. Before starting LogMiner, make sure that you:

  • Log in as SYS with the SYSDBA role.
  • Stay in the same session throughout; LogMiner data is visible only in the session that started it.

The steps to use LogMiner are detailed below.

1) Check the init parameter UTL_FILE_DIR

SQL> show parameter UTL_FILE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------
utl_file_dir                         string      /tmp

2) Create a text file containing the data dictionary contents

SQL> execute dbms_logmnr_d.build('shwdict.ora','/tmp');

3) Add the targeted archive log files to the log list

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045087.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045088.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045089.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045288.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045289.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045290.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045291.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045297.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045308.arc', dbms_logmnr.addfile);
PL/SQL procedure successfully completed.

Note: To track down what had happened, we loaded not only archive log files from the period of excessive generation (one per minute) but also normal archive log files created at a reasonable interval.

Then check that the archive log files have been loaded and are available for analysis:

SQL> select log_id, filename from v$logmnr_logs;

    LOG_ID FILENAME
---------- -------------------------------------------
     45087 /dbPROD/archive/PROD_0001_0000045087.arc
     45088 /dbPROD/archive/PROD_0001_0000045088.arc
     45089 /dbPROD/archive/PROD_0001_0000045089.arc
     45288 /dbPROD/archive/PROD_0001_0000045288.arc
     45289 /dbPROD/archive/PROD_0001_0000045289.arc
     45290 /dbPROD/archive/PROD_0001_0000045290.arc
     45291 /dbPROD/archive/PROD_0001_0000045291.arc
     45297 /dbPROD/archive/PROD_0001_0000045297.arc
     45308 /dbPROD/archive/PROD_0001_0000045308.arc

9 rows selected.
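Adding files one at a time becomes tedious when many are involved. A small PL/SQL block can register a consecutive range instead; this is a convenience sketch that assumes the path and naming pattern shown above:

SQL> begin
       for seq in 45087 .. 45089 loop
         dbms_logmnr.add_logfile(
           logfilename => '/dbPROD/archive/PROD_0001_' || lpad(to_char(seq), 10, '0') || '.arc',
           options     => dbms_logmnr.addfile);
       end loop;
     end;
     /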

4) Alter the date format in the current session

To see exactly when things happened, we alter the date format in the current session to include hours, minutes, and seconds.

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.

5) Identify the SCN and time ranges

Next, the SCN and time range of each loaded log is available from the view v$logmnr_logs. These ranges help you limit the window of activity you are going to examine.

SQL> select low_time, high_time, low_scn, next_scn from v$logmnr_logs;

LOW_TIME            HIGH_TIME              LOW_SCN   NEXT_SCN
------------------- ------------------- ---------- ----------
2007-07-24 20:22:13 2007-07-24 20:23:15 1343378330 1343379060
2007-07-24 20:23:15 2007-07-24 20:24:20 1343379060 1343379595
2007-07-24 20:24:20 2007-07-24 20:25:35 1343379595 1343380558
2007-07-24 22:57:47 2007-07-24 22:58:05 1343485178 1343485332
2007-07-24 22:58:05 2007-07-24 22:58:21 1343485332 1343485451
2007-07-24 22:58:21 2007-07-24 22:58:38 1343485451 1343485597
2007-07-24 22:58:38 2007-07-24 22:58:52 1343485597 1343485674
2007-07-25 00:12:52 2007-07-25 00:15:52 1343494920 1343495568
2007-07-25 10:42:05 2007-07-25 10:50:41 1343712495 1343715303

9 rows selected.

6) Do the analysis

Referencing the data dictionary text file "shwdict.ora" created earlier in directory "/tmp", we can now start LogMiner with the following statement.

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/tmp/shwdict.ora', starttime => to_date('2007-07-24 20:22:13','yyyy-mm-dd hh24:mi:ss'), endtime => to_date('2007-07-25 10:50:41','yyyy-mm-dd hh24:mi:ss'));
PL/SQL procedure successfully completed.

Here, we set the time range from the very beginning to the very end so that all the activity we loaded is analyzed.
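Since step 5 also gave us SCN ranges, start_logmnr can equally be bounded by SCN instead of time. A sketch using the boundary SCNs from the table above:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/tmp/shwdict.ora', startscn => 1343378330, endscn => 1343715303);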

7) User-defined queries

From here, we can pick out the activities recorded in the archive log files we loaded. A query like the following pulls plenty of information from the view V$LOGMNR_CONTENTS:

SQL> select scn, timestamp, session# session_num, table_space, rollback, operation, operation_code, username, sql_redo, sql_undo
     from v$logmnr_contents
     order by 1;

Examining the output, we noticed that SQL_REDO and SQL_UNDO were NULL throughout the time range in which the excessive archive log files were created. We then narrowed down with another query against V$LOGMNR_CONTENTS, focusing on the object IDs of the activities with NULL SQL_REDO and SQL_UNDO:

SQL> select scn, data_obj#, data_objd#, seg_owner, rollback, operation, username, sql_redo, sql_undo
     from v$logmnr_contents
     order by 1;

The filtered output showed a handful of DATA_OBJ# values repeating over and over: 120049, 120048, 120071, 120061, 120070, and so on.

8) Locate the database objects causing the excessive archive log files

SQL> select object_id, object_name, owner from dba_objects
     where object_id in (120049, 120048, 120071, 120070, 120061);

 OBJECT_ID OBJECT_NAME                     OWNER
---------- ------------------------------- ---------------
    120061 STATS$PGASTAT                   PERFSTAT
    120070 STATS$INSTANCE_RECOVERY         PERFSTAT
    120071 STATS$INSTANCE_RECOVERY_PK      PERFSTAT
    120048 STATS$SQL_PLAN_USAGE            PERFSTAT
    120049 STATS$SQL_PLAN_USAGE_PK         PERFSTAT

Conclusion

It is now very clear that internal activity related to STATSPACK caused the excessive archive log files. Going back to check the STATSPACK jobs, we found that the job had started two years earlier with a one-hour interval. That means roughly 365 x 2 x 24 = 17,520 snapshots had been created and saved in the database. We suspect that the internal maintenance of this many snapshots is what caused the problem we were facing. Our solution was to stop the existing STATSPACK job (which ended the internal activity) and reschedule it to run only at peak times, namely 8:00, 9:00, 10:00, 11:00, 13:00, 14:00, 15:00, 16:00, and 17:00. We are also planning to clean up some historical snapshots at our convenience.
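As a footnote on the fix: assuming the snapshots were scheduled the usual way, with DBMS_JOB calling statspack.snap, the job can be located and suspended as sketched below (the job number 21 is hypothetical; use the one your query returns). Old snapshots can then be purged with the sppurge.sql script shipped in $ORACLE_HOME/rdbms/admin.

SQL> select job, what, interval from dba_jobs where lower(what) like '%statspack.snap%';

SQL> execute dbms_job.broken(21, true);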

About the Author

R. Wang works as an Oracle DBA in Canada, responsible for database performance tuning and high availability. With over 10 years of experience architecting and building Oracle systems, Rui is an evangelist for Oracle technology and products. Rui is an OCP and received a master's degree in computer science from Simon Fraser University in Canada. Visit Rui's blog at http://www.oraclepoint.com/oralife or join the forum at http://www.oraclepoint.com for many more Oracle resources.
