In-Memory OLTP – Three key points to entertain your watchdog – Checkpoint Files

Following the first article, about the importance of server memory, we will look at another In-Memory OLTP key point: the checkpoint files. See what can go wrong, and how to avoid it!

Continuing the previous article about the three key points to monitor in In-Memory OLTP, where we talked about the server memory and its (obvious) importance, we will now look at the second key point of In-Memory OLTP: the checkpoint files!

If you are starting this three-article series with this article, I recommend checking the previous one and the last one (when available). All three key points explored are equally important to In-Memory OLTP health, and also to all the other instance components.

Case #2: Checkpoint Files

One of the new concepts that In-Memory OLTP brought to SQL Server is the checkpoint file. In order to support this kind of file, there is a new filegroup flavor, specific to memory-optimized data.

This “new flavor” is based on FILESTREAM and is basically a container for checkpoint files.

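As a minimal sketch (the database name and file paths below are assumptions, not taken from the original article), such a container is declared through a filegroup marked with CONTAINS MEMORY_OPTIMIZED_DATA:

    -- Minimal sketch: a database with a MEMORY_OPTIMIZED_DATA filegroup.
    -- The filegroup points to a folder (the "container"), not to a single file;
    -- the checkpoint file pairs are created inside it.
    CREATE DATABASE InMemoryDemo
    ON PRIMARY
        (NAME = N'InMemoryDemo_data', FILENAME = N'E:\Data\InMemoryDemo_data.mdf'),
    FILEGROUP InMemoryDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = N'InMemoryDemo_mod', FILENAME = N'E:\Data\InMemoryDemo_mod')
    LOG ON
        (NAME = N'InMemoryDemo_log', FILENAME = N'E:\Data\InMemoryDemo_log.ldf');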

But what are checkpoint files?
The mission of the checkpoint files is to store all the information needed to recreate the rows of data in memory-optimized tables. This way, in case of a crash or a voluntary server restart, SQL Server is able to preserve the data.

When you create a memory-optimized table, you have two choices: persistent (durable) tables and non-persistent (non-durable) tables. The checkpoint files apply only to the persistent ones.

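To illustrate the difference (table and column names here are just assumptions), the choice is made with the DURABILITY option at creation time: SCHEMA_AND_DATA rows are persisted through the checkpoint files, while SCHEMA_ONLY rows are lost on restart.

    -- Persistent table: rows survive a restart, so they are written to checkpoint files.
    CREATE TABLE dbo.OrdersDurable
    (
        OrderId   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CreatedAt DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Non-persistent table: only the schema survives a restart; no checkpoint files involved.
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Payload   NVARCHAR(1000) NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);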

We talk about checkpoint files, but in reality a checkpoint file is a pair of two distinct files:

  • Data file
  • Delta file

Those files are paired by the same timestamp, recording the activity of a defined range of time.

The data file stores all the inserted rows. An UPDATE command also writes to the data file, as an update is nothing more than a DELETE (hiding the row from the “table scope”) followed by an INSERT.

The delta file always has a data file as its pair and stores the IDs of the deleted rows. This way, by “summing up” the data and delta files, we are able to reach the latest table state.

As you can imagine, over time a memory-optimized table will be the target of various INSERTs, UPDATEs and DELETEs, making the checkpoint files grow, and grow… As the data file only appends the inserted data and never removes what was deleted, its size can only grow! The same happens with the delta file. All the deleted rows, generated by DELETEs and UPDATEs, keep being tracked, and the tendency is to just grow, and grow…

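One way to keep an eye on this accumulation is the sys.dm_db_xtp_checkpoint_files DMV. The query below is a simple sketch; the exact column list and the state names exposed by this DMV vary between SQL Server versions:

    -- How many checkpoint files exist per type and state, and how much space they take.
    SELECT file_type_desc,
           state_desc,
           COUNT(*)                              AS file_count,
           SUM(file_size_in_bytes) / 1024 / 1024 AS total_size_mb
    FROM sys.dm_db_xtp_checkpoint_files
    GROUP BY file_type_desc, state_desc
    ORDER BY file_type_desc, state_desc;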

Merge Process

At a certain point, a data file will have most of its data invalidated by its delta file. In order to avoid an enormous number of checkpoint files on disk, as well as wasting space storing invalid data, SQL Server has a process called “Merge”.

What does the Merge process do? Basically, it selects two contiguous (based on the timestamp sequence) checkpoint file pairs, removes all the invalid information from the data files based on their paired delta files, and then merges the valid data, generating a brand new checkpoint file pair that covers the same time range as the original ones.

What is the big problem here? The disk! Yes, even using memory-optimized tables, we are still hostages of the disk when we are talking about persistent tables. There are two possible situations here that we need to keep an eye on:

  • Disk performance.
  • Free space.

Both are well known to any SQL Server DBA, and usually we take care of them… But what really happens when there is no space left for the checkpoint files to grow?

I tested this, and was surprised by the final result. So, what I did was:

  1. Created a database with memory-optimized tables.
  2. Filled the remaining disk space with a dummy file: fsutil file createnew E:\Data\DummyFile.txt 18307298099
  3. Started loading data into the memory-optimized table.

As a result of this test, I was expecting some behavior comparable to the “full transaction log on a full disk” that we sometimes see out there… But no!

As expected, the transactions started to fail, and to my surprise, the entire database went into the “recovery” state. The way I found to make it work again was to free up some disk space (I just deleted the dummy file), take the database “offline” and, right after, bring it back “online”. This way everything started to work again.

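For reference, that offline/online cycle is just a couple of ALTER DATABASE commands (the database name here is an assumption):

    -- ROLLBACK IMMEDIATE kills open sessions so the database can go offline right away.
    ALTER DATABASE InMemoryDemo SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE InMemoryDemo SET ONLINE;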

The main point here is that even if you have a database mixing memory-optimized tables with traditional tables, no matter whether you still have space for your MDF, NDF and LDF files, the database is going to stop! So it is a very critical situation, where we need to take care and never relax. Remember that with a low percentage of free disk space you risk that an “out-of-normal” insert, loading more data than usual, makes your database unresponsive, reflecting, of course, in application unavailability and all the “snowball” kind of flow that comes with those situations.

In order to monitor this situation, I created a PowerShell script that watches the free space on the disk.

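A minimal sketch of such a check is shown below (this is not the original script; the drive letter and the threshold are assumptions, so adjust them to the disk that hosts your memory-optimized container):

    # Free-space check for the drive hosting the memory-optimized container (sketch).
    $driveLetter = 'E'    # drive to watch (assumption)
    $minFreeGB   = 20     # minimum free space threshold, in GB (assumption)

    $disk   = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DeviceID = '${driveLetter}:'"
    $freeGB = [math]::Round($disk.FreeSpace / 1GB, 2)

    if ($freeGB -lt $minFreeGB) {
        # Throwing makes a SQL Agent PowerShell job step fail, so an operator can be alerted.
        throw "Drive ${driveLetter}: has only $freeGB GB free (minimum required: $minFreeGB GB)."
    }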

This job returns an error message if the minimum free space threshold is not met.

The presented solution follows the same idea as the ones presented in the previous article. You can create a job with the provided code and add an operator, who will be alerted in case of job failure. I suggest running this job every five minutes.

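As a rough sketch of that setup (the operator name, e-mail address and job name are assumptions, and the PowerShell check goes into the job step command), the pieces look like this:

    -- Operator to be notified on failure.
    EXEC msdb.dbo.sp_add_operator
        @name          = N'DBA Team',
        @email_address = N'dba@example.com';

    -- Job that fails (and e-mails the operator) when the free-space check throws.
    EXEC msdb.dbo.sp_add_job
        @job_name                   = N'Monitor - In-Memory OLTP container free space',
        @notify_level_email         = 2,   -- notify on failure
        @notify_email_operator_name = N'DBA Team';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name  = N'Monitor - In-Memory OLTP container free space',
        @step_name = N'Check free disk space',
        @subsystem = N'PowerShell',
        @command   = N'<paste the free-space check script here>';

    -- Run every 5 minutes.
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name             = N'Monitor - In-Memory OLTP container free space',
        @name                 = N'Every 5 minutes',
        @freq_type            = 4,  -- daily
        @freq_interval        = 1,
        @freq_subday_type     = 4,  -- minutes
        @freq_subday_interval = 5;

    EXEC msdb.dbo.sp_add_jobserver
        @job_name = N'Monitor - In-Memory OLTP container free space';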

As said before, disk monitoring is something we are used to doing. However, more than trying to show how to monitor free disk space, I wanted to show the effects of not having enough free space on the disk hosting the memory-optimized tables' container.

I hope it was useful! See you in the next article, where we will talk about the remaining key point: the BUCKET NUMBER.

Translated from: https://www.sqlshack.com/memory-oltp-three-key-points-entertain-watchdog-checkpoint-files/
