In the previous article, we examined the physical structure of the transaction log and discussed the WAL algorithm. Now I will take you on a journey through the most common reasons for experiencing a log bottleneck in your systems.

Unused Non-Clustered Indexes

Nowadays, the expectations for modern databases are set very high and we anticipate almost instantaneous answers. To meet these expectations, indexes are must-have objects. However, one of the most widespread mistakes is to create every index proposed by SQL Server or by various performance tools. Whenever we update our data, the records in every non-clustered index must be updated as well (filtered indexes being the possible exception). To do this, SQL Server generates additional log records, placing unnecessary overhead on the log. We need to monitor our tables and remove unused indexes, which can be achieved by using “sys.dm_db_index_usage_stats”.

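A minimal sketch of such a check (the zero-reads filter and the restriction to non-clustered indexes are my illustrative choices, not a universal rule) that lists indexes which are being maintained but never read:

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       ISNULL(s.user_updates, 0) AS user_updates   -- maintenance cost paid on every write
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON s.object_id = i.object_id
   AND s.index_id = i.index_id
   AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  -- no seeks, scans or lookups recorded since the counters were last cleared
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0;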

Since the information in this view is not persisted, it is cleared under any of the following circumstances:

  • SQL Server is restarted
  • A database is detached/attached
  • A database is taken offline/online
  • A database is restored
  • An index is rebuilt (clears the stats only for the index/indexes that were rebuilt)

You cannot rely solely on the information from a one-off query against this view, so a solution should be developed. This could be capturing the data from this DMV every hour and storing it in a dedicated table in your database; alternatively, some kind of central solution for all of your databases should be considered.

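A minimal sketch of such a snapshot (the table name and the hourly schedule are hypothetical – the INSERT could run as a SQL Agent job step):

CREATE TABLE dbo.IndexUsageSnapshot (
    capture_time DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    database_id  INT NOT NULL,
    object_id    INT NOT NULL,
    index_id     INT NOT NULL,
    user_seeks   BIGINT NOT NULL,
    user_scans   BIGINT NOT NULL,
    user_lookups BIGINT NOT NULL,
    user_updates BIGINT NOT NULL
);
GO
-- Scheduled step: persist the current counters before anything can clear them
INSERT INTO dbo.IndexUsageSnapshot
    (database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates)
SELECT database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();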

This view has another “nasty” characteristic – it shows nothing when only an index’s statistics, rather than the index itself, are used to satisfy your query. Let’s see an example. I will use the “AdventureWorksDW2008R2” sample database:


USE [master]
GO
ALTER DATABASE [AdventureWorksDW2008R2] SET AUTO_CREATE_STATISTICS OFF WITH NO_WAIT
GO
USE [AdventureWorksDW2008R2]
GO
SELECT RevisionNumber
FROM dbo.FactInternetSales
WHERE TaxAmt = 5.08 

Here is the execution plan:

The cardinality estimate we get is not good at all! Now let’s create a new index to improve this and check the plan again:


USE [AdventureWorksDW2008R2]
GO
CREATE INDEX IX_FactInternetSales_TaxAmt ON
dbo.FactInternetSales (TaxAmt)
GO
SELECT RevisionNumber
FROM dbo.FactInternetSales
WHERE TaxAmt = 5.08 

The situation looks a lot better after we have created the index, so I would say that we are using it and all is well. Let’s examine the DMV “sys.dm_db_index_usage_stats”:


SELECT i.name, u.user_seeks, u.user_lookups, u.user_scans
FROM sys.dm_db_index_usage_stats u
INNER JOIN sys.indexes i
    ON u.object_id = i.object_id AND u.index_id = i.index_id
WHERE u.object_id = OBJECT_ID('dbo.FactInternetSales')
  AND i.name = 'IX_FactInternetSales_TaxAmt'

The result does not match the expected one – it looks like we are not using this index at all. It should then be safe to drop it and test the query again:


USE [AdventureWorksDW2008R2]
GO
DROP INDEX [IX_FactInternetSales_TaxAmt] ON [dbo].[FactInternetSales] WITH ( ONLINE = OFF )
GO
SELECT RevisionNumber
FROM dbo.FactInternetSales
WHERE TaxAmt = 5.08 

Credit for the demo: Joe Sack

We are back to the initial cardinality issue. SQL Server used only the statistics of the index, not the index itself, to improve the cardinality estimate of our query. In these cases the DMV does not show any information, and this can easily mislead us.

Log Autogrowth Setting

Among the most common causes of log performance problems is the log autogrowth setting. It is a prevalent mistake to leave the autogrowth step at the default – it is 10%, not what is configured for the “model” database:

Let’s create a new test database without specifying anything:


CREATE DATABASE LogSetting
GO
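
We can check which growth settings the new database inherited (in sys.database_files, is_percent_growth tells you whether the increment is a percentage; for fixed increments, growth is expressed in 8-KB pages):

SELECT name, type_desc, growth, is_percent_growth
FROM LogSetting.sys.database_files;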

For the majority of databases, the default log settings are not the optimal ones. Why is this so critical and important?

SQL Server must always zero-initialize new space added to the transaction log, regardless of the “Perform volume maintenance tasks” policy! You only benefit from this option when adding new space to a data file; it has no effect on the transaction log. SQL Server uses the transaction log to recover our databases to a consistent state. There is a very small chance that old data previously occupying the portion of the file system now used to extend the log could fool the database into treating it as valid log records. If this happened, the result of recovery would be unpredictable, and it is almost guaranteed that we would end up with corrupted data. Furthermore, while the zero-initialization is happening, log records cannot be flushed to disk! Dramatic performance degradation can take place if your autogrowth setting is not well configured – for example, set to a very large number.

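To keep each growth event short and predictable, a reasonable approach is to replace the percentage default with a moderate fixed increment. A sketch (the 512 MB value is purely illustrative, and I am assuming the default logical log file name LogSetting_log for our test database):

ALTER DATABASE LogSetting
MODIFY FILE (NAME = LogSetting_log, FILEGROWTH = 512MB);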

In a perfect world you would avoid log autogrowth entirely. This is achievable by allowing your log to be cleared regularly so there is always free space that can be reused. I have seen many discussions on this topic, but when your database is in the FULL or BULK-LOGGED recovery model, log clearing (also known as log truncation) only happens during log backups! In the SIMPLE recovery model, this task is handled whenever a CHECKPOINT is fired.

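Two quick ways to verify that your log really is being cleared and reused: check how full each log file is, and check what, if anything, is holding truncation back:

DBCC SQLPERF (LOGSPACE);          -- percentage of each transaction log currently in use

SELECT name, log_reuse_wait_desc  -- why the log of each database cannot be truncated
FROM sys.databases;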

High Availability Features

For the sake of completeness, I will also briefly talk about High Availability features as one of the top contributors to transaction log throughput issues. The following high availability features can introduce additional delay in log clearing:

  • Database mirroring running in synchronous mode
  • Availability groups running in synchronous mode
  • Transactional replication

When you are using mirroring or availability groups in synchronous mode, any transaction that executes on the principal will only return an acknowledgment to the application once all the associated log records have been hardened on the mirror or the replica. If your average transaction size is large, this may not be that problematic, but if transactions are relatively small, the effect is stronger and more noticeable. In this situation you can change your workload if possible, increase your network bandwidth, or simply switch to asynchronous mode.

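One way to gauge whether synchronous commit is what your sessions are actually waiting on is to look at the cumulative wait statistics for the corresponding wait types:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('HADR_SYNC_COMMIT',   -- synchronous availability groups
                    'DBMIRROR_SEND')      -- synchronous database mirroring
ORDER BY wait_time_ms DESC;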

With transactional replication, you rely on the Log Reader Agent to run regularly and scan the log records associated with the objects participating in the replication. Sometimes the agent fails or falls behind; I have even seen it deliberately configured to run infrequently. When that happens, all the log records that have not yet been scanned must be kept, and this prevents log truncation.

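If you suspect this is happening, a quick check is whether replication is the reported reason that the log cannot be truncated:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc = 'REPLICATION';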

Fragmented Indexes

Fragmented indexes can be a cause of various problems, among which is performance degradation for queries that need to process relatively large amounts of data. Beyond that, fragmentation causes trouble through the very way it occurs – an activity called the “page split”. The most widespread situation is when an index record must be inserted on a particular page and the page does not have enough free space to accommodate it. In this case, SQL Server needs to perform the following actions:

  • A new index page is allocated and formatted
  • Some of the records are moved to the new page in order to release space
  • The new page is linked into the structure of the index
  • The new data is inserted on the respective page

The biggest downside is not that these additional operations slow our queries down; it is the fact that extra transaction log records are generated for each of these activities.

Now, let’s test this. First, create a test database with one simple table and a clustered index:


CREATE DATABASE PageSplitDemo;
GO
USE PageSplitDemo;
GO

CREATE TABLE PageSplit (c1 INT, c2 CHAR (100));
CREATE CLUSTERED INDEX PageSplit_CL ON PageSplit (c1);
GO

Then, insert the values from 1 to 71, leaving a gap for the 30th one:


-- Insert 1..71, skipping 30 to leave a gap on the page
DECLARE @i INT = 1;
WHILE @i <= 71
BEGIN
    IF @i <> 30
        INSERT INTO PageSplit VALUES (@i, 'a');
    SET @i += 1;
END
GO

Next, insert the next value (which is 72) and check how much log this generates:


BEGIN TRAN
INSERT INTO PageSplit VALUES (72, 'a');
GO

SELECT [database_transaction_log_bytes_used]
FROM sys.dm_tran_database_transactions
WHERE [database_id] = DB_ID ('PageSplitDemo');
GO

Lastly, commit the transaction and try to insert the missing 30th value:


COMMIT TRAN
GO

BEGIN TRAN
INSERT INTO PageSplit VALUES (30, 'a');
GO

SELECT [database_transaction_log_bytes_used]
FROM sys.dm_tran_database_transactions
WHERE [database_id] = DB_ID ('PageSplitDemo');
GO

As you can see, the difference is striking – the ratio is almost 19x! Also keep in mind that the smaller the rows, the worse this ratio becomes.

To mitigate the impact of such indexes on our environment, we need to keep an eye on the indexes that regularly end up fragmented and tune their fill factor. Fragmentation can be measured by using “sys.dm_db_index_physical_stats”. This DMF has three different execution modes (a sample query follows the list):

  • LIMITED – can only return the logical fragmentation of the leaf level. It examines the next level up in the index, which consists of an ordered list of leaf-level page IDs; this list is compared with the actual allocation order of the pages to derive the fragmentation.
  • SAMPLED – calculates the fragmentation and bases the other statistics on a sample of the pages: if the index has more than 10,000 pages, read every 100th page, otherwise read all pages.
  • DETAILED – calculates the fragmentation in the same way and reads all the pages to produce all the other statistics.
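
As a minimal sketch, here is how you might list the leaf-level fragmentation of every index in the current database using LIMITED mode (the 10% filter is an illustrative threshold, not a rule):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- illustrative threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;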

Note that nothing comes without a price. If you are utilizing the fill factor, you are proactively reserving additional space in your indexes in order to lessen the page splits and the extra log records they generate.

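For example (the value 90 is purely illustrative), the fill factor is applied when an index is built or rebuilt:

ALTER INDEX PageSplit_CL ON PageSplit
REBUILD WITH (FILLFACTOR = 90);   -- leave 10% free space on each leaf-level page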

Many Tiny Transactions

We will skip the circumstances under which log records are flushed to disk, as we already covered them in part 1. If your workload uses a lot of transactions that each process a small amount of data (for example, inserting a single row), you will see a lot of small log flushes. To achieve proper log throughput, SQL Server performs the log flushes asynchronously. There is, however, a limitation: we cannot have more than 32 concurrent log flushes outstanding for a single log file. Unfortunately, we cannot circumvent this rule.

Basically we have two general cases:

  • For a poorly performing I/O subsystem, the number of log flushes will overwhelm the system, leading to log throughput problems
  • For a well-performing I/O subsystem, the limit of 32 concurrent log flushes can produce a saturation point when trying to make the transactions durable

A very counter-intuitive solution that can help in both scenarios is to make your transactions longer (let them process a larger amount of data), thus decreasing the number of log flushes. For a slow I/O subsystem, moving your log to faster storage will always help, but then you can run into the log’s own limitations. Unfortunately, if you end up there and you cannot rewrite your code to utilize longer transactions, the only solution (until recently) was to cut your workload and split it over multiple databases.

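A minimal sketch of the idea, reusing our demo table (the row count is illustrative): instead of issuing 1,000 auto-committed single-row inserts, group them into one explicit transaction so the log is hardened once at COMMIT rather than once per statement:

-- 1,000 implicit transactions => up to 1,000 tiny log flushes
-- One explicit transaction    => far fewer, larger log flushes
DECLARE @i INT = 1;
BEGIN TRAN;
WHILE @i <= 1000
BEGIN
    INSERT INTO PageSplit VALUES (100 + @i, 'a');
    SET @i += 1;
END
COMMIT TRAN;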

Luckily for us, there is a great new feature in SQL Server 2014 called “Delayed Durability”. It can be very beneficial in situations where we cannot rewrite our code but can tolerate some data loss. There is a great article by Aaron Bertrand on this topic and I strongly recommend checking it out: http://sqlperformance.com/2014/04/io-subsystem/delayed-durability-in-sql-server-2014

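A minimal sketch of how it is enabled, using our demo database (DISABLED, ALLOWED and FORCED are the database-level options):

ALTER DATABASE PageSplitDemo SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRAN;
INSERT INTO PageSplit VALUES (72, 'a');
-- The commit returns before the log records are flushed to disk
COMMIT TRAN WITH (DELAYED_DURABILITY = ON);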

In this article, we have learned that transaction log problems can also originate in places other than the sheer volume of log records being generated. Keeping in mind these top causes of log throughput issues, in the next article I will focus on how to mitigate the risk of encountering them by adhering to several configuration best practices.

Translated from: https://www.sqlshack.com/sql-server-transaction-log-part-2-top-reasons-log-performance-problems/
