The SQL Server transaction log is akin to the ‘black box’ in an airliner: it contains a record of every transaction made against a database. This information is a proverbial goldmine for database audits, recoveries and similar tasks, but it was never meant to be exposed to end users, let alone visualized in an easy-to-read manner or used for day-to-day DBA work. As such, utilizing this information can be a challenge, to say the least.

To illustrate, let’s open up an LDF file in a hex editor and see what’s inside. As you can see, this is data that was never meant to be read directly by humans.

Despite the challenges inherent in the data format of LDF and TRN files, there are some solutions that de-mystify this information, un-obfuscating it to the point where it can begin to be useful. We’ll examine a few of these in this article.

Using fn_dblog

fn_dblog is a SQL Server function, albeit an undocumented one, that can be used to decode the active portion of an online SQL Server transaction log

Let’s take a walk through this solution and examine the results

  1. Run fn_dblog

    SELECT * FROM sys.fn_dblog(NULL, NULL);
    

    The result set includes 129 columns by default, so our next step will be to narrow it down to something more manageable, such as a specific transaction type, e.g. inserts

  2. Narrow results to match our search

    So let’s drill down into the results to get a more precise set of information

    To see transactions for inserted rows, run:

    SELECT [Current LSN], Operation, Context, [Transaction ID], [Begin Time]
    FROM sys.fn_dblog(NULL, NULL)
    WHERE Operation IN ('LOP_INSERT_ROWS');
    

    Similarly, we can do the same for deleted rows

    SELECT [Begin Time], [RowLog Contents 1], [Transaction Name], Operation
    FROM sys.fn_dblog(NULL, NULL)
    WHERE Operation IN ('LOP_DELETE_ROWS');
    
  3. Find the appropriate columns that store the values

    To find the columns that store values for inserted or deleted rows, look for these headers ‘RowLog Contents 0’, ‘RowLog Contents 1’, ‘RowLog Contents 2’, ‘RowLog Contents 3’, ‘RowLog Contents 4’, ‘Description’ and ‘Log Record’
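
    Which of these columns actually carries the row image varies by operation type. As a hedged illustration (not a query from the original article), something along these lines will surface them all for inspection:

    SELECT [Current LSN], Operation, [Transaction ID],
           [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 2],
           [RowLog Contents 3], [RowLog Contents 4],
           [Description], [Log Record]
    FROM sys.fn_dblog(NULL, NULL)
    WHERE Operation IN ('LOP_INSERT_ROWS', 'LOP_DELETE_ROWS');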

    The challenge is that data is scattered across different columns for different operations. This requires the user to map the columns for each transaction type. Since this is an undocumented function, no such map or documentation exists, making reading the proverbial tea leaves that are the results of fn_dblog quite difficult.
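
    Since no official map exists, one pragmatic starting point (a hedged exploratory query, not part of the original walkthrough) is simply to see which operation/context combinations your log actually contains:

    -- List the operation/context pairs present in the active log and how often
    -- each occurs, as a starting point for mapping columns per transaction type
    SELECT Operation, Context, COUNT(*) AS LogRecordCount
    FROM sys.fn_dblog(NULL, NULL)
    GROUP BY Operation, Context
    ORDER BY LogRecordCount DESC;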

    To compound the difficulty further, rows for deleted and inserted data are presented as hexadecimal. Converting this data to meaningful information requires deep knowledge of the format, status bits, total columns etc. Reading Egyptian hieroglyphics might be easier in some cases

  4. Convert the data to usable information

    To complete our process, we must convert the binary data to textual/tabular data, taking into consideration the data type of each column. Conversion mechanisms will vary depending on the column, data type, etc.
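
    As a minimal sketch of that conversion (an illustration under assumptions, not a general decoder), here is how a single value could be decoded from the row image of a table whose first fixed-length column is an INT; dbo.YourTable is a hypothetical name:

    -- In the FixedVar row format, bytes 1-2 hold status bits and bytes 3-4 the
    -- fixed-length-portion size, so the first fixed-length column starts at byte 5.
    -- Integers are stored little-endian, while CAST reads big-endian, hence the
    -- manual byte reversal.
    DECLARE @row VARBINARY(8000);

    SELECT TOP (1) @row = [RowLog Contents 0]
    FROM sys.fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_INSERT_ROWS'
      AND AllocUnitName LIKE N'dbo.YourTable%';   -- hypothetical table name

    SELECT CAST(SUBSTRING(@row, 8, 1) + SUBSTRING(@row, 7, 1)
              + SUBSTRING(@row, 6, 1) + SUBSTRING(@row, 5, 1) AS INT) AS DecodedIntValue;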

    fn_dblog is a very powerful and useful tool, but it does require an expert-level user to be utilized to its full extent. And even so, it has some quite significant limitations, including:

    1. DDL auditing is complicated and involves reconstructing the state of certain system tables
    2. Only the active part of the online transaction log can be read
    3. There is no means to reconstruct Update DML transactions or BLOBs (see next)

    Update operations in SQL Server are not fully logged in the transaction log. Unfortunately, full before-and-after values don’t exist; only the delta of the change for that record is logged. For example, SQL Server may show a change from “H” to “M” when the actual record was changed from “House” to “Mouse”. To piece together the full picture, a process must be devised to manually reconstruct the history of changes, including the state of the record prior to the update. This requires painstakingly re-constructing every record from the original insert to the final update, and everything in between.

    BLOBs are another challenge when trying to use fn_dblog to read transaction history. When BLOBs are deleted, their values are not written to the transaction log, so examining the log won’t provide information about their contents unless the original insert can be located. Only by combining these two pieces of data will you be able to recover a deleted BLOB. This obviously requires that the original insert exists in the active/online portion of the transaction log, the only part accessible to fn_dblog. This may be problematic if the original insert was done weeks, months or years earlier and the transaction log has since been backed up or truncated.

Using fn_dump_dblog

Another solution exists in the form of fn_dump_dblog, which overcomes fn_dblog’s limitation of being able to read only the active/online transaction log.

fn_dump_dblog is able to read native transaction log backups.

Please keep in mind that this function is also undocumented

  1. Run fn_dump_dblog for a specific transaction log backup.

    Please note that you have to specify all parameters, 63 in this case

    SELECT *
    FROM fn_dump_dblog
    (NULL,NULL,N'DISK',1,N'F:\Mybackups\InventoryDatabase_4589725r.trn',
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT);
    

    As with fn_dblog, 129 columns are returned, so selecting only the specific columns you need is recommended:

    SELECT [Current LSN], Context, [transaction name], Operation, [Transaction ID], Description
    FROM fn_dump_dblog
    (NULL,NULL,N'DISK',1,N'F:\Mybackups\InventoryDatabase_4589725r.trn',
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT,
    DEFAULT);
    

    Once again, we need to dust off our hex editor to decipher this information, and we face similar challenges and obstacles with UPDATEs and BLOBs

    But this function does offer a potentially very helpful and powerful capability: you are able to restore a database to a specific point in time, just before a particular operation occurred. By doing this, in essence, you could “recover” from a bad/rogue transaction by going back in time and eliminating it

    To do this, you can follow these steps:

  2. Determine the LSN for the particular transaction you want to “reverse”
  3. Convert this LSN into the format used in the WITH STOPBEFOREMARK = ‘<mark_name>’ clause. For example, 00000070:00000011:0001 should be transformed into 112000000001700001 (see the conversion sketch after this list)
  4. Restore the log, stopping just before that LSN:

     RESTORE LOG InventoryDatabase
     FROM DISK = N'F:\Mybackups\InventoryDatabase_4589725r.trn'
     WITH STOPBEFOREMARK = 'lsn:112000000001700001',
     NORECOVERY;
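
To illustrate step 3, here is a minimal T-SQL sketch of the LSN conversion (an assumed helper, not part of the original article): each colon-separated hexadecimal part is converted to decimal, with the second part zero-padded to 10 digits and the third to 5 digits:

DECLARE @lsn NVARCHAR(30) = N'00000070:00000011:0001';
DECLARE @p1 INT = CONVERT(INT, CONVERT(VARBINARY(4), '0x' + PARSENAME(REPLACE(@lsn, ':', '.'), 3), 1));
DECLARE @p2 INT = CONVERT(INT, CONVERT(VARBINARY(4), '0x' + PARSENAME(REPLACE(@lsn, ':', '.'), 2), 1));
DECLARE @p3 INT = CONVERT(INT, CONVERT(VARBINARY(2), '0x' + PARSENAME(REPLACE(@lsn, ':', '.'), 1), 1));

SELECT CAST(@p1 AS VARCHAR(10))
     + RIGHT(REPLICATE('0', 10) + CAST(@p2 AS VARCHAR(10)), 10)
     + RIGHT(REPLICATE('0', 5)  + CAST(@p3 AS VARCHAR(5)),  5) AS RestoreLsn;
-- Returns 112000000001700001 for the example above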
    

Using DBCC PAGE

Next on our list is DBCC PAGE, another useful command for reading the SQL Server transaction log; it is able to read only MDF and LDF database files. Like the others, it is also undocumented.

DBCC PAGE ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ])

Use the following syntax to read the first page in our example database’s online transaction log:

-- Determine the log file ID (here, 2)
SELECT FILE_ID ('InventoryDatabase_Log') AS 'File ID';

DBCC PAGE (InventoryDatabase, 2, 0, 2);

The results will be as follows

DBCC execution completed. If DBCC printed error messages,
contact your system administrator.

The output is not displayed by default unless you turn on trace flag 3604 first. Once it is set, the output will be displayed.

DBCC TRACEON (3604, -1)

Now re-execute

DBCC PAGE (InventoryDatabase, 2, 0, 2)

The result includes quite a few errors, bad headers and other visual noise. These can be ignored, but the results will still be in hexadecimal format.

Hexadecimal output is really not meant to be interpreted visually by humans, which makes converting this data into meaningful information difficult.
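
If you prefer the output as a result set rather than printed messages, DBCC PAGE also accepts the (equally undocumented, but commonly used) WITH TABLERESULTS option; the values are still hexadecimal, but at least they can be filtered and sorted like any other query result:

-- Return the page dump as rows (ParentObject, Object, Field, VALUE) instead of
-- message output; trace flag 3604 is not required in this form
DBCC PAGE (InventoryDatabase, 2, 0, 2) WITH TABLERESULTS;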

Using ApexSQL Log

ApexSQL Log is a well-known, 3rd party SQL Server transaction log reader, auditing and recovery tool. It can read online and detached transaction logs as well as transaction log backups, including compressed ones.

ApexSQL Log can read all transactions, including DDL and DML, in a chain of transaction log data sources and create scripts to Undo or Redo them. It can even read transactions before the product was installed.

ApexSQL Log, unlike the aforementioned solutions, provides significant value-added capabilities for log reading, by processing the encoded and obfuscated log data into easily processed and understood information, including the object, dates and times, and change history with before and after values.

To use this tool, follow these steps

  1. Start the ApexSQL Log application
  2. Connect to a database. Normally this is the database whose transaction logs you want to read, but it doesn’t necessarily have to be, as you can add offline and backed-up logs for other databases

  3. Select the logs you want to read in the Select datasources step, making sure to construct a full chain. Use the Add button to add other datasources like detached LDF files and transaction log backups

  4. Select the output type in the next step. To view the data in a grid, select Open results in grid. This will populate the ApexSQL Log visual grid with the contents of the transaction log in an easy-to-read manner and offer lots of value-added processing features like sorting, searching, etc.

  5. To refine your results, use the Filter setup to filter by operation type (e.g. DML), object or user name, time ranges and more

  6. Click the Finish button

    Detailed results, without the visual noise and extraneous data, are presented in an easy-to-understand and easy-to-use manner

    Now you can really decipher the transaction log and unlock its true value. You will be able to see the time the operation began and ended, the operation type, the schema and name of the object affected, the name of the user who executed the operation, and the computer and application used to execute it. For UPDATEs, you’ll see the old and the new values of the updated fields, aka before-and-after auditing. In addition, you can use ApexSQL Log, for disaster recovery purposes, to easily create Undo scripts to reverse batches of transactions or Redo scripts to replay them.

    ApexSQL Log can be set up, in continuous auditing mode, to run unattended to continuously write the transaction log back to the database itself, with no gaps in records or duplicated records, so the transaction log data can be queried just like the database

    For database replication, ApexSQL Log can easily be configured to read the transactions from one database and write to another

    To avoid hex values, undocumented functions, unclear column content, long queries, complex action steps, incomplete UPDATE and BLOB reconstruction when reading SQL Server transaction logs, use ApexSQL Log. It will read transaction logs for you and present the results “in plain English”. Besides that, undo and redo scripts, database replication and continuous auditing are just a click away

Translated from: https://www.sqlshack.com/how-to-read-a-sql-server-transaction-log/
