LogMiner can analyze both online redo logs and offline/archived log files. It consists of a set of PL/SQL packages plus a few dynamic views, and it is free.

This walkthrough follows Lao Yang's article 《Logminer:oracle人为误操作之恢复神器》 and tries the procedure once on 11g. Note that on 12c and later the exact steps have changed; that is worth looking into when there is a chance.

If the two packages LogMiner needs, dbms_logmnr and dbms_logmnr_d, are not installed, run the two scripts dbmslmd.sql and dbmslm.sql located under @?/rdbms/admin.
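To check whether the packages are already present before running the scripts, a quick lookup against dba_objects can be used (a verification step added here for convenience, not part of the original procedure):

SQL> select owner, object_name, status
  2  from dba_objects
  3  where object_type = 'PACKAGE'
  4  and object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D');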

The contents of dbmslmd.sql:

Rem
Rem $Header: dbmslmd.sql 13-feb-2006.15:15:25 ajadams Exp $
Rem
Rem dbmslmd.sql
Rem
Rem Copyright (c) 1998, 2006, Oracle. All rights reserved.
Rem
Rem    NAME
Rem      dbmslmd.sql - DBMS Logminer Dictionary package specification
Rem      for DBMS_LOGMNR_D
Rem
Rem    DESCRIPTION
Rem      This file contains the logminer package specification for DBMS_LOGMNR_D
Rem
Rem    NOTES
Rem
Rem    MODIFIED   (MM/DD/YY)
Rem    ajadams     02/13/06 - create synonym
Rem    abrown      09/13/05 - bug 3776830: unwind dictionary
Rem    jnesheiw    02/17/05 - Bug 4028220 Relocated change history and logmnr
Rem                           metada creation to prvtlmd.sql
Rem    doshaugh    04/14/98 - Created
Rem
Rem
Rem  PUBLIC PROCEDURES
Rem
Rem     BUILD (FileName, FileLocation, Options)
Rem
Rem     SET_TABLESPACE(NewTablespace);
Rem
Rem  PUBLIC CONSTANTS
Rem
Rem     STORE_IN_FLAT_FILE
Rem
Rem     STORE_IN_REDO_LOGS
Rem
Rem     MARK_SAFE_MINE_POINT
Rem
Rem
Rem-- --------------------------------------------------------------
--
CREATE or REPLACE PACKAGE dbms_logmnr_d AS
--
--    PACKAGE NAME
--      dbms_logmnr_d
--
--    DESCRIPTION
--      This package contains Logminer Dictionary related procedures.
--      "build" is used to gather the logminer dictionary.
--
--      "set_tablespace" is used to alter the default tablespace of
--      Logminer tables.
--
--      BUILD
--      The logminer dictionary can be gathered
--      into a flat file (Logminer V1 behavior) or it can be gathered
--      into the redo log stream.
--
--      When creating a Flat File dictionary the procedure queries the
--      dictionary tables of the current database and creates a text based
--      file containing their contents. Each table is represented by
--      "pseudo" SQL statements. A description of the columns in a
--      table is created by a "CREATE_TABLE" line (one statement for
--      table). It contains the name, datatype and length for each
--      column. A "INSERT_INTO" statement is created for each row in a
--      selected table. It contains the values for each row. The file
--      is created in preparation of future analysis of databases
--      log files using the logminer tool.
--
--      When gathering the system dictionary into the logstream the procedure
--      queries the dictionary tables inserting the results into a special
--      set of Logminer Gather tables (SYS.LOGMNRG_*).  A side effect of
--      each query is that the resultant inserts cause redo to be generated.
--      Down stream processing can mine this redo to determine the contents
--      of this system's system dictionary at the time this procedure was
--      executed.
--
--      NOTE:  Database must be in "Archivelog Mode" and supplemental logging
--             must be enabled for this procedure to run
--
--      BUILD INPUTS
--      dictionary_filename - name of the dictionary file
--      dictionary_location - path to file directory
--      options - To explicitly indicate flat file or log stream destination.
--
--      BUILD EXAMPLE1
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('dict.ora',
--                                       '/usr/ora',
--                                       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
--
--      BUILD EXAMPLE2
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Logminer V1 syntax.
--
--      SQL> execute dbms_logmnr_d.build('dict.ora', '/usr/ora');
--
--      BUILD EXAMPLE3
--      Gathering a dictionary into the log stream
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('', '',
--                                          DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
--
--      BUILD NOTES
--      The dictionary gather should be done after all dictionary
--      changes to a database and prior to the creation of any log
--      files that are to be analyzed.
--
--
--      SET_TABLESPACE
--      By default all Logminer tables are created to use the SYSAUX
--      tablespace.  All users will find it desirable to alter Logminer
--      tables to employ an alternate tablespace.  Use this routine to
--      recreate all Logminer tables in an alternate tablespace.
--
--      SET_TABLESPACE INPUTS
--      new_tablespace         - a string naming a preexistant tablespace.
--
STORE_IN_FLAT_FILE CONSTANT INTEGER := 1;
STORE_IN_REDO_LOGS CONSTANT INTEGER := 2;
MARK_SAFE_MINE_POINT  CONSTANT INTEGER := 8;

PROCEDURE  build(dictionary_filename IN VARCHAR2 DEFAULT '',
                 dictionary_location IN VARCHAR2 DEFAULT '',
                 options IN NUMBER DEFAULT 0);
--
--
PROCEDURE set_tablespace( new_tablespace IN VARCHAR2 );
--
--
END dbms_logmnr_d; -- End Definition of package
/

CREATE OR REPLACE PUBLIC SYNONYM dbms_logmnr_d FOR sys.dbms_logmnr_d;

The contents of dbmslm.sql:

create or replace PACKAGE dbms_logmnr IS
--------------------
-- OVERVIEW
--
--   This package contains the procedures used by LogMiner ad-hoc query
--   interface that allows for redo log stream analysis.
--   There are three procedures and two functions available to the user:
--   dbms_logmnr.add_logfile()    : to register logfiles to be analyzed
--   dbms_logmnr.remove_logfile() : to remove logfiles from being analyzed
--   dbms_logmnr.start_logmnr()   : to provide window of analysis and
--                                  meta-data information
--   dbms_logmnr.end_logmnr()     : to end the analysis session
--   dbms_logmnr.column_present() : whether a particular column value
--                                  is presnet in a redo record
--   dbms_logmnr.mine_value()     : extract data value from a redo record
-----------------------------
--  PROCEDURE INFORMATION:
--  #1 dbms_logmnr.add_logfile():
--     DESCRIPTION:
--       Registers a redo log file with LogMiner. Multiple redo logs can be
--       registered by calling the procedure repeatedly. The redo logs
--       do not need to be registered in any particular order.
--       Both archived and online redo logs can be mined.  If a successful
--       call to the procedure is made a call to start_logmnr() must be
--       made before selecting from v$logmnr_contents.
--
--     CONSTANTS:
--       dbms_logmnr.NEW:  Starts a new list. Any file registered prior to
--         this call is discarded.
--       dbms_logmnr.ADDFILE:  Adds the redo log to the existing
--         list. LogMiner checks to make sure that the redo log is from
--         the same database (DB_ID and RESETLOGS_SCN) incarnation as the
--         ones previously added.
--
--     EXCEPTIONS:
--       ORA: 1284   Redo log file specified can not be opened.
--       ORA: 1285   Error reading the header of the redo log file
--       ORA: 1286   Redo log file specified is from a database with a
--                   different DB_ID
--       ORA: 1287   Redo log file specified is from a database with
--                   different incarnation
--       ORA: 1289   Redo log file specified is a duplicate of a previously
--                   specified redo log. LogMiner matches redo logs by the
--                   log sequence number. Thus two redo logs with different
--                   names but with the same log sequence# (for instance
--                   the online counterpart of an archived redo log has
--                   a different name, but attempting to register it with
--                   LogMiner after registering the archived counterpart
--                   will return this exception).
--
--  #2 dbms_logmnr.remove_logfile()
--     DESCRIPTION:
--       Unregisters a redo log file from LogMiner. Multiple redo logs can be
--       unregistered by calling the procedure repeatedly. The redo logs
--       do not need to be unregistered in any particular order.  If a
--       successful call to the procedure is made a call to start_logmnr()
--       must be made before selecting from v$logmnr_contents.
--
--     EXCEPTIONS:
--       ORA: 1290   Attempt was made to remove a redo log that has not been
--                   registered with LogMiner
--
--  #3 dbms_logmnr.start_logmnr()
--     DESCRIPTION:
--       Loads the data dictionary used by LogMiner to translate internal
--       schema object identifiers to names. The redo stream does not
--       contain names of schema objects and columns. The data dictionary
--       extract can be provided in three ways:
--         (i) use Oracle's online catalog. This is only valid when the
--         mining of redo logs is done in the same system that generated
--         them.
--         (ii) use data dictionary extract taken to a flat file.
--         (See description of dbms_logmnr_d.build())
--         (iii) use data dictionary extracted in the redo stream. This
--         option can ONLY be used when LogMiner is being run on an open
--         database, and the source and the mining database instances are
--         Oracle9i or higher.
--       The user can also restrict the analysis inside an SCN range or a
--       time range. If both SCN range and time range are specified, only
--       the SCN range is used.
--       The user needs to be mindful of the fact that use of time range
--       can be imprecise.  If a start_time or start_scn is specified, it
--       must be contained in a redo log added by a previous call to
--       dbms_logmnr.add_logfile().  If a start_time and start_scn is not
--       specified, LogMiner will set it based on the earliest added redo
--       log.  If a end_time or end_scn is specified and it is beyond the
--       latest added redo log, LogMiner will overwrite the end_time and
--       and end_scn with information from the latest added redo log.  When
--       the CONTINOUS_MINE option is in use the semantics of
--       start and end time/scn ranges may be different.
--       See additional documentation below.
--
--       CONSTANTS (used in options parameter)
--       dbms_logmnr.NO_DICT_RESET_ONSELECT:  (will be deprecated soon)
--       dbms_logmnr.COMMITED_DATA_ONLY: Groups DMLs belonging to the
--         same transaction. Transactions are shown in their commit order.
--         Internal redo records (those related to index operations, space
--         management etc) are filtered out. So are rolled back
--         transactions, rollback to savepoints and in-flight transactions.
--       dbms_logmnr.SKIP_CORRUPTION: Usually LogMiner returns an error
--         on encountering corrupt redo records. With this option set
--         LogMiner will skip the corrupted redo records and continue
--         mining. LogMiner can not handle a redo log that has a corrupt
--         header.
--       dbms_logmnr.DDL_DICT_TRACKING: LogMiner will apply the DDL
--         statements encountered in the redo stream to its internal
--         dictionary. Only available with Oracle9i redo logs and later.
--         Mining database needs to be open.
--       dbms_logmnr.DICT_FROM_ONLINE_CATALOG: Use the online data
--         dictionary for SQL reconstruction. Mining database must be the
--         same one that generated the redo logs. User should expect to
--         see "Dictionary Version Mismatch" in SQL_REDO if the current
--         object version is newer than the ones encountered in the redo
--         stream.
--       dbms_logmnr.DICT_FROM_REDO_LOGS: Use the dictionary extract logged
--         in the redo stream.
--       dbms_logmnr.NO_SQL_DELIMITER: By default, the SQL_REDO and SQL_UNDO
--         statements are delimited with a ';'. However, this is
--         inconvenient for applications that want to open a cursor and
--         execute the reconstructed statements. With this option set,
--         the SQL_DELIMITER is not placed at the end of reconstructed
--         statements.
--       dbms_logmnr.NO_ROWID_IN_STMT: By default, the SQL_REDO and SQL_UNDO
--         statements for UPDATE and DELETE operations contain a 'ROWID = '
--         in the where clause.  However, this is inconvenient for
--         applications that want to re-execute the SQL statement.  With
--         this option set, 'ROWID' is not placed at the end of reconstructed
--         statements.  Note: The onus is on the user to ensure that
--         supplemental logging was enabled in the source database at the
--         appropriate level and that no duplicate rows exist in tables of
--         interest.  LogMiner Adhoc Query does NOT make any quarantee
--         regarding uniqueness of logical row identifiers.
--       dbms_logmnr.PRINT_PRETTY_SQL: Useful for basic report for
--         analysis. With large number of columns the reconstructed
--         SQL statements become visually busy. With this option set
--         LogMiner formats the reconstructed SQL statements for ease
--         of reading. The reconstructed SQL statements look as follow:
--            insert into "SCOTT"."EMP" values
--              EMPNO: 101010,
--              ENAME: "Valued Employee",
--              SAL:   101010,
--              DEPT:  NULL;
--             update "SCOTT"."EMP"
--              set
--              "EMPNO" = 101011 and
--              "SAL"   = 101011
--              where
--              "EMPNO" = 101010 and
--              "SAL"   = 101010 and
--              ROWID   = AABBCEXFGHA;
--       dbms_logmnr.CONTINUOUS_MINE: Need to mine in the same instance
--         that is generating the redo logs. The user needs to register
--         only one archived log file. LogMiner will automatically add
--         and mine subsequent archived redo logs, and eventually
--         mine online logfiles.
--       dbms_logmnr.STRING_LITERALS_IN_STMT: By default, the SQL_REDO and
--         SQL_UNDO statements honor a database session's NLS setting
--         (e.g. NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, etc.) when
--         formating the reconstructed SQL statements.  With this option,
--         the reconstructed SQL statements will be formatted using ANSI/ISO
--         string literal formats.
--
--    EXCEPTIONS:
--      ORA: 1281     startScn or endSCN parameter specified is not a valid
--                    SCN or endScn is greater then startScn
--      ORA: 1282     startTime parameter is greater than year 2110 or
--                    endTime parameter is greater than year 2110 or
--                    startTime parameter is less then year 1988
--      ORA: 1283     The value specified in the Options parameter is not a
--                    NUMBER or is not a known LogMiner Adhoc option
--      ORA: 1284     The dictionary file specified in the DictFileName
--                    parameter has a full path length greater then 256 or
--                    cannot be opened
--      ORA: 1285     DictFileName parameter is not a valid VARCHAR2
--      ORA: 1291     Redo files are missing which are needed to satisfy
--                    the user's requested SCN/time range.
--                    The user can specify ALLOW_MISSING_LOGS option.
--                    Missing logs are not allowed under any circumstance
--                    when DDL tracking is in use
--      ORA: 1292     No log file has been registered with LogMiner
--      ORA: 1293     Mounted database required for options specified
--                    (CONTINIOUS_MINE)
--      ORA: 1294     Error while processing the data dictionary extract
--      ORA: 1295     DB_ID of the data dictionary does not match that of
--                    the redo logs
--      ORA: 1296     Character set specified in the data dictionary does
--                    not match (and is incompatible with) that of the
--                    mining database
--      ORA: 1297     Redo version mismatch between the dictionary and
--                    the registered redo logs
--      ORA: 1298     More than one dictionary source was specified or
--                    DDL_DICT_TRACKING was requested with
--                    DICT_FROM_ONLINE_CATALOG
--      ORA: 1299     Dictionary is from a different database incarnation
--      ORA: 1300     Writable database required for options specified
--                    (DDL_DICT_TRACKING, DICT_FROM_REDO_LOGS,
--                     DICT_FROM_ONLINE_CATALOG)
--      ORA: 1371     A logfile containing the dictionary dump to redo logs
--                    is missing
--      ORA: 1286     Options specified require start time or start SCN
--
--  #4 dbms_logmnr.end_logmnr()
--     DESCRIPTION:
--       Ends the LogMiner session. Releases all PGA memory allocated
--       to stage internal data structures etc.
--
--     EXCEPTIONS:
--       ORA: 1307    No LogMiner session is currently active.
--                    Attempt to end_logmnr() without calling
--                    add_logfile() or start_logmnr()
--
--  #5 dbms_logmnr.mine_value()
--     DESCRIPTION:
--       This facilitates query by data value. For instance, the user
--       can formulate a query that says "Show me all updates to
--       SCOTT.EMP where the SAL column is updated to twice its
--       original value"
--       select sql_redo from v$logmnr_contents where
--           operation = 'UPDATE" and
--           owner_name = 'SCOTT' and seg_name = 'EMP' and
--         dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') >
--         2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL');
--      The function returns NULL if the column does not exist in
--      the redo record or if the column value is actually null.
--      To decipher between the two different null possibilities
--      use dbms_logmnr.column_present() function.
--
--      PARAMETERS:
--        sql_redo_undo:  which column in v$logmnr_contents to
--        extract data value from
--        column_name:    fully qualified column name of the
--        column that needs to be extracted
--
--      EXCEPTIONS:
--      ORA 1302:     Specified table or column does not exist
--
--  #6 dbms_logmnr.column_present()
--     DESCRIPTION:
--       Can be used to decipher null returns from mine_value function
--       The query described above can be rewritten to filter out
--       redo records that do not contain update to the 'SAL'
--       columns
--         select sql_redo from v$logmnr_contents where
--           operation = 'UPDATE"
--           owner_name = 'SCOTT' and seg_name = 'EMP' and
--           dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') >
--           2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') and
--           dbms_logmnr.column_present(redo_value, 'SCOTT.EMP.SAL');
--
--      PARAMETERS:
--        sql_redo_undo:  which column in v$logmnr_contents to
--        extract data value from
--        column_name:    fully qualified column name of the
--        column that needs to be extracted
--
--      EXCEPTIONS:
--      ORA 1302:     Specified table or column does not exist
--
--------------------------------------------------------------------
-- SUBTYPES and related CONSTANTS
--
--
-- Constants for add_archivelog options flag
NEW                       CONSTANT BINARY_INTEGER := 1;
REMOVEFILE                CONSTANT BINARY_INTEGER := 2;
ADDFILE                   CONSTANT BINARY_INTEGER := 3;
--
-- Constants for start_logmnr options flag
NO_DICT_RESET_ONSELECT    CONSTANT BINARY_INTEGER := 1;
COMMITTED_DATA_ONLY       CONSTANT BINARY_INTEGER := 2;
SKIP_CORRUPTION           CONSTANT BINARY_INTEGER := 4;
DDL_DICT_TRACKING         CONSTANT BINARY_INTEGER := 8;
DICT_FROM_ONLINE_CATALOG  CONSTANT BINARY_INTEGER := 16;
DICT_FROM_REDO_LOGS       CONSTANT BINARY_INTEGER := 32;
NO_SQL_DELIMITER          CONSTANT BINARY_INTEGER := 64;
PRINT_PRETTY_SQL          CONSTANT BINARY_INTEGER := 512;
CONTINUOUS_MINE           CONSTANT BINARY_INTEGER := 1024;
NO_ROWID_IN_STMT          CONSTANT BINARY_INTEGER := 2048;
STRING_LITERALS_IN_STMT   CONSTANT BINARY_INTEGER := 4096;
--
SUBTYPE Length            IS BINARY_INTEGER;
SUBTYPE ThreadId          IS BINARY_INTEGER;
--
-- Constants for STATUS column of v$logmnr_contents
-- NOTE: Make sure that new ones match the values defined
-- in the krvfsri struct in krv0.h
VALID_SQL                 CONSTANT BINARY_INTEGER := 0;
INVALID_SQL               CONSTANT BINARY_INTEGER := 2;
UNGUARANTEED_SQL          CONSTANT BINARY_INTEGER := 3;
CORRUPTED_BLK_IN_REDO     CONSTANT BINARY_INTEGER := 4;
ASSEMBLY_REQUIRED_SQL     CONSTANT BINARY_INTEGER := 5;
HOLE_IN_LOGSTREAM         CONSTANT BINARY_INTEGER := 1291;

-- Workaround for the lack of constrained subtypes
LogFileNameTemplate          VARCHAR2(256);
SUBTYPE LogFileName          IS LogFileNameTemplate%TYPE;
LogFileDescTemplate          VARCHAR2(256);
SUBTYPE LogFileDescription   IS LogFileDescTemplate%TYPE;

-------------
-- PROCEDURES
-----------------------------------------------------------------------------
---------------------------------------------------------------------------
-- Initialize LOGMINER
--
-- Supplies LOGMINER with the list of filenames and SCNs required
-- to initialize the tool.  Once this procedure completes, the server is ready
-- to process selects against the v$logmnr_contents fixed view.
--
---------------------------------------------------------------------------
PROCEDURE start_logmnr(
     startScn           IN  NUMBER default 0,
     endScn             IN  NUMBER default 0,
     startTime          IN  DATE default '',
     endTime            IN  DATE default '',
     DictFileName       IN  VARCHAR2 default '',
     Options            IN  BINARY_INTEGER default 0 );

PROCEDURE add_logfile(
     LogFileName        IN  VARCHAR2,
     Options            IN  BINARY_INTEGER default ADDFILE );

PROCEDURE end_logmnr;

FUNCTION column_present(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN BINARY_INTEGER;

FUNCTION mine_value(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN VARCHAR2;

PROCEDURE remove_logfile(
     LogFileName        IN  VARCHAR2);

---------------------------------------------------------------------------
pragma TIMESTAMP('1998-05-05:11:25:00');

END;
/
grant execute on dbms_logmnr to execute_catalog_role;
create or replace public synonym dbms_logmnr for sys.dbms_logmnr;

There is actually not much to these scripts; they are mostly comments. What they do is create the packages and grant the privileges.

The execution looks like this,

SQL> @?/rdbms/admin/dbmslmd.sql

Package created.

Synonym created.

SQL> @?/rdbms/admin/dbmslm.sql

Package created.

Grant succeeded.

Synonym created.

Create a dedicated tablespace for LogMiner,

SQL> create tablespace lm_tbs datafile '/opt/applog/oracle/oradata/DCM/lm_tbs01.dbf' size 500M autoextend on maxsize 2G;

Tablespace created.

SQL> exec dbms_logmnr_d.set_tablespace(new_tablespace=>'lm_tbs');

PL/SQL procedure successfully completed.
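To confirm that the LogMiner tables were actually rebuilt in the new tablespace, something along these lines can be checked (a rough verification sketch, not part of the original steps; in 11g the relevant tables are owned by SYSTEM and named LOGMNR*, details vary by version):

SQL> select owner, segment_name, tablespace_name
  2  from dba_segments
  3  where segment_name like 'LOGMNR%';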

Create a directory for LogMiner,

SQL> create directory utlfile as '/opt/applog/oracle/oradata/DCM';

Directory created.

SQL> show parameter utl_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string

The UTL_FILE_DIR parameter needs to be set to specify where the data dictionary file will be written,

P.S. This parameter is deprecated in 12c and later.

SQL> alter system set utl_file_dir='/opt/applog/oracle/oradata/DCM/' scope=spfile;

System altered.

UTL_FILE_DIR is a static parameter, so a restart is required for it to take effect,

SQL> show parameter utl_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string      /opt/applog/oracle/oradata/DCM

It is recommended to enable supplemental logging for LogMiner,

SQL> alter database add supplemental log data;

Database altered.
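Whether minimal supplemental logging is on can be confirmed from v$database (a quick check, not in the original walkthrough; the column should show YES after the statement above):

SQL> select supplemental_log_data_min from v$database;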

Create a LogMiner account and grant it privileges,

SQL> create user logminer identified by logminer;

User created.

SQL> grant connect, resource, dba to logminer;

Grant succeeded.

Log in as an ordinary user, drop a table, create a new table with the same name, and insert a few rows,

[oracle@vm-osvm77512-app admin]$ sqlplus bisal/bisal

SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 1 16:21:05 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc bisal
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 NAME                                               VARCHAR2(1)

SQL> drop table bisal;

Table dropped.

SQL> create table bisal(id number primary key, a varchar2(1), b number, c date);

Table created.

SQL> insert into bisal values(1, 'a', 1, sysdate-1);

1 row created.

SQL> insert into bisal values(2, 'b', 2, sysdate);

1 row created.

SQL> insert into bisal values(3, 'c', 3, sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from bisal;

        ID A          B C
---------- - ---------- ------------------
         1 a          1 30-NOV-20
         2 b          2 01-DEC-20
         3 c          3 01-DEC-20

Now the requirement is: how do we get back the original table that was just dropped?

Because the operations above include DDL, the database objects have changed, so the data dictionary file has to be regenerated. Before using LogMiner to analyze redo log / archived log files, it is recommended to export the data dictionary to a text file with the dbms_logmnr_d package. The dictionary file is optional, but without it, the dictionary-related parts of the statements LogMiner reconstructs (table names, column names, etc.) and the values appear in hexadecimal form and cannot be read directly,

[oracle@vm-osvm77512-app admin]$ sqlplus logminer/logminer

SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 1 16:56:59 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/opt/applog/oracle/oradata/DCM', options => dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

The reasons for exporting the data dictionary to a text file with dbms_logmnr_d:

1. If the tables being analyzed have changed, the database's data dictionary has changed as well, and the dictionary file needs to be recreated.

2. When analyzing another database's redo logs on a new database, a data dictionary file from the source database must also be generated.
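As BUILD EXAMPLE3 in the package header quoted above shows, the dictionary can alternatively be captured into the redo stream rather than a flat file (not used in this walkthrough; per the package notes this requires archivelog mode and supplemental logging). A sketch:

SQL> exec dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);

A downstream mining session would then start LogMiner with options => dbms_logmnr.dict_from_redo_logs instead of a DictFileName.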

Confirm which online redo logs the system is currently using. As shown, group 1 (redo01.log) is the CURRENT log, and there are no archived logs yet,

SQL> select group#, sequence#, status, first_change#, first_time from v$log order by first_change#;

    GROUP#  SEQUENCE# STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------------- ------------- ------------------
         2       2915 INACTIVE             396421406 27-NOV-20
         3       2916 INACTIVE             396547945 29-NOV-20
         1       2917 CURRENT              396658312 01-DEC-20

SQL> select sequence#, FIRST_CHANGE#, NEXT_CHANGE#, name from v$archived_log order by sequence# desc;

no rows selected

Use dbms_logmnr.add_logfile to register the log files to be analyzed. The first file is registered with the dbms_logmnr.new option,

SQL> begin
  2  dbms_logmnr.add_logfile(logfilename=>'/opt/applog/oracle/oradata/DCM/redo01.log', options=>dbms_logmnr.new);
  3  end;
  4  /

PL/SQL procedure successfully completed.

Additional files are then registered with dbms_logmnr.addfile,

SQL> begin
  2  dbms_logmnr.add_logfile(logfilename=>'/opt/applog/oracle/oradata/DCM/redo02.log', options=>dbms_logmnr.addfile);
  3  end;
  4  /

PL/SQL procedure successfully completed.

Start the log analysis. It can be run with or without restrictions: without restrictions, all log files added to the list are analyzed; with restrictions, only the log range matching the given conditions is analyzed,

SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'/opt/applog/oracle/oradata/DCM/dictionary.ora');

PL/SQL procedure successfully completed.
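Since the mining here is done on the same database that generated the redo, the flat-file dictionary could also be replaced by the online catalog, optionally combined with COMMITTED_DATA_ONLY (an alternative invocation sketched from the package spec quoted above, not what was actually run in this walkthrough):

SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.committed_data_only);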

With restrictions, various parameter values can be set to narrow the range of logs to analyze, for example start and end time parameters to limit the analysis to a certain time window,

START_LOGMNR Procedure

This procedure starts LogMiner by loading the dictionary that LogMiner
will use to translate internal schema object identifiers to names.

Syntax

DBMS_LOGMNR.START_LOGMNR (
   startScn           IN NUMBER default 0,
   endScn             IN NUMBER default 0,
   startTime          IN DATE default '01-jan-1988',
   endTime            IN DATE default '31-dec-2110',
   DictFileName       IN VARCHAR2 default '',
   Options            IN BINARY_INTEGER default 0 );

For example, to analyze the whole day of June 5, 2021,

EXECUTE dbms_logmnr.start_logmnr( -
   DictFileName => '/opt/applog/oracle/oradata/DCM/dictionary.ora', -
   StartTime    => to_date('2021-6-5 00:00:00','YYYY-MM-DD HH24:MI:SS'), -
   EndTime      => to_date('2021-6-5 23:59:59','YYYY-MM-DD HH24:MI:SS'));
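The same window can also be expressed with SCNs instead of times. The sketch below uses the StartScn from the v$log output shown earlier; the EndScn is only a placeholder value:

SQL> begin
  2    -- StartScn from v$log above; EndScn is an illustrative placeholder
  3    dbms_logmnr.start_logmnr(
  4      DictFileName => '/opt/applog/oracle/oradata/DCM/dictionary.ora',
  5      StartScn     => 396658312,
  6      EndScn       => 396660000);
  7  end;
  8  /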

Once the analysis has started, all the information LogMiner extracted can be retrieved from the dynamic performance view v$logmnr_contents. Filtering by username and table name locates the accidentally dropped table, and executing the corresponding rollback statements gets the original data back,

SQL> select sql_redo, sql_undo from v$logmnr_contents where username='BISAL' and table_name='BISAL';

SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
ALTER TABLE "BISAL"."BISAL" RENAME TO "BIN$tWPQZiEzW2ngVfgWPgtqEw==$0" ;

drop table bisal AS "BIN$tWPQZiEzW2ngVfgWPgtqEw==$0" ;

create table bisal(id number primary key, a varchar2(1), b number, c date);

SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
insert into "BISAL"."BISAL"("ID","A","B","C") values ('1','a','1',TO_DATE('30-NOV-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '1' and "A" = 'a' and "B" = '1' and "C" = TO_DATE('30-NOV-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAA';

insert into "BISAL"."BISAL"("ID","A","B","C") values ('2','b','2',TO_DATE('01-DEC-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '2' and "A" = 'b' and "B" = '2' and "C" = TO_DATE('01-DEC-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAB';

SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
insert into "BISAL"."BISAL"("ID","A","B","C") values ('3','c','3',TO_DATE('01-DEC-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '3' and "A" = 'c' and "B" = '3' and "C" = TO_DATE('01-DEC-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAC';

6 rows selected.

SQL_REDO is the SQL statement that can redo the row change recorded in the redo record (the forward operation).

SQL_UNDO is the SQL statement that can roll back or recover the row change recorded in the redo record (the reverse operation).

Note, however, that the analysis results in v$logmnr_contents exist only in the session that ran dbms_logmnr.start_logmnr; other sessions cannot see them, because all LogMiner storage lives in PGA memory. To make the data visible to other sessions, or to keep it around, copy it into a table with CTAS, as sketched below.
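A minimal CTAS sketch (the table name logmnr_result is just an example, run from the mining session while the LogMiner session is still active):

SQL> create table logminer.logmnr_result as select * from v$logmnr_contents;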

P.S. The first record above shows how Oracle drops a table: the drop first renames the table (to a BIN$ recycle-bin name) and then drops it, which is why the table can still be recovered from the recycle bin.
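Given that, the quickest way to get the original table back in this scenario would be a flashback-drop from the recycle bin, along these lines (a sketch; the RENAME TO clause is needed only because a new table named bisal was created afterwards, and bisal_old is an arbitrary name):

SQL> flashback table "BIN$tWPQZiEzW2ngVfgWPgtqEw==$0" to before drop rename to bisal_old;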

To end the analysis, run dbms_logmnr.end_logmnr, and the PGA area is cleared,

SQL> exec dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

Querying the view again now raises an error, showing that the data has been cleared,

SQL> select count(*) from v$logmnr_contents;
select count(*) from v$logmnr_contents
                     *
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents

The whole procedure has quite a few steps, but it is fairly routine. It can serve as an emergency measure for everyday accidental operations, or the mined log data can be analyzed to support other functionality.

