Reposted from: http://preshing.com/20120612/an-introduction-to-lock-free-programming/#sequential-consistency

Lock-free programming is a challenge, not just because of the complexity of the task itself, but because of how difficult it can be to penetrate the subject in the first place.

I was fortunate in that my first introduction to lock-free (also known as lockless) programming was Bruce Dawson’s excellent and comprehensive white paper, Lockless Programming Considerations. And like many, I’ve had the occasion to put Bruce’s advice into practice developing and debugging lock-free code on platforms such as the Xbox 360.

Since then, a lot of good material has been written, ranging from abstract theory and proofs of correctness to practical examples and hardware details. I’ll leave a list of references in the footnotes. At times, the information in one source may appear orthogonal to other sources: For instance, some material assumes sequential consistency, and thus sidesteps the memory ordering issues which typically plague lock-free C/C++ code. The new C++11 atomic library standard throws another wrench into the works, challenging the way many of us express lock-free algorithms.

In this post, I’d like to re-introduce lock-free programming, first by defining it, then by distilling most of the information down to a few key concepts. I’ll show how those concepts relate to one another using flowcharts, then we’ll dip our toes into the details a little bit. At a minimum, any programmer who dives into lock-free programming should already understand how to write correct multithreaded code using mutexes, and other high-level synchronization objects such as semaphores and events.

What Is It?

People often describe lock-free programming as programming without mutexes, which are also referred to as locks. That’s true, but it’s only part of the story. The generally accepted definition, based on academic literature, is a bit more broad. At its essence, lock-free is a property used to describe some code, without saying too much about how that code was actually written.

Basically, if some part of your program satisfies the following conditions, then that part can rightfully be considered lock-free. Conversely, if a given part of your code doesn’t satisfy these conditions, then that part is not lock-free.

In this sense, the lock in lock-free does not refer directly to mutexes, but rather to the possibility of “locking up” the entire application in some way, whether it’s deadlock, livelock – or even due to hypothetical thread scheduling decisions made by your worst enemy. That last point sounds funny, but it’s key. Shared mutexes are ruled out trivially, because as soon as one thread obtains the mutex, your worst enemy could simply never schedule that thread again. Of course, real operating systems don’t work that way – we’re merely defining terms.

Here’s a simple example of an operation which contains no mutexes, but is still not lock-free. Initially, X = 0. As an exercise for the reader, consider how two threads could be scheduled in a way such that neither thread exits the loop.

while (X == 0)
{
    X = 1 - X;
}

Nobody expects a large application to be entirely lock-free. Typically, we identify a specific set of lock-free operations out of the whole codebase. For example, in a lock-free queue, there might be a handful of lock-free operations such as push and pop, perhaps isEmpty, and so on.

Herlihy & Shavit, authors of The Art of Multiprocessor Programming, tend to express such operations as class methods, and offer the following succinct definition of lock-free (see slide 150): “In an infinite execution, infinitely often some method call finishes.” In other words, as long as the program is able to keep calling those lock-free operations, the number of completed calls keeps increasing, no matter what. It is algorithmically impossible for the system to lock up during those operations.

One important consequence of lock-free programming is that if you suspend a single thread, it will never prevent other threads from making progress, as a group, through their own lock-free operations. This hints at the value of lock-free programming when writing interrupt handlers and real-time systems, where certain tasks must complete within a certain time limit, no matter what state the rest of the program is in.

A final precision: Operations that are designed to block do not disqualify the algorithm. For example, a queue’s pop operation may intentionally block when the queue is empty. The remaining codepaths can still be considered lock-free.

Lock-Free Programming Techniques

It turns out that when you attempt to satisfy the non-blocking condition of lock-free programming, a whole family of techniques fall out: atomic operations, memory barriers, avoiding the ABA problem, to name a few. This is where things quickly become diabolical.

So how do these techniques relate to one another? To illustrate, I’ve put together the following flowchart. I’ll elaborate on each one below.

Atomic Read-Modify-Write Operations

Atomic operations are ones which manipulate memory in a way that appears indivisible: No thread can observe the operation half-complete. On modern processors, lots of operations are already atomic. For example, aligned reads and writes of simple types are usually atomic.
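As a minimal sketch of what that looks like in C++11 terms (the variable names below are illustrative, not from the original post), declaring the shared variable std::atomic guarantees an indivisible read and write, so no thread ever observes a torn, half-written value:

#include <atomic>

// A plain aligned int write is usually atomic on modern processors, but C++11
// only guarantees a race-free, indivisible read/write when the variable is
// declared std::atomic.
std::atomic<int> g_sharedValue(0);

void writer()
{
    g_sharedValue.store(42);        // no thread can observe a half-written value
}

int reader()
{
    return g_sharedValue.load();    // sees either 0 or 42, never a torn value
}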

Read-modify-write (RMW) operations go a step further, allowing you to perform more complex transactions atomically. They’re especially useful when a lock-free algorithm must support multiple writers, because when multiple threads attempt an RMW on the same address, they’ll effectively line up in a row and execute those operations one-at-a-time. I’ve already touched upon RMW operations in this blog, such as when implementing a lightweight mutex, a recursive mutex and a lightweight logging system.

Examples of RMW operations include _InterlockedIncrement on Win32, OSAtomicAdd32 on iOS, and std::atomic<int>::fetch_add in C++11. Be aware that the C++11 atomic standard does not guarantee that the implementation will be lock-free on every platform, so it’s best to know the capabilities of your platform and toolchain. You can call std::atomic<>::is_lock_free to make sure.
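Here is a minimal C++11 sketch of both points (the names are illustrative): fetch_add performs the whole increment as one indivisible step, and is_lock_free reports whether the platform's implementation actually avoids locks:

#include <atomic>
#include <cassert>

std::atomic<int> g_counter(0);

void incrementCounter()
{
    // The entire read-modify-write happens as one indivisible step;
    // concurrent callers effectively line up and execute one at a time.
    g_counter.fetch_add(1);
}

void checkPlatform()
{
    // The C++11 standard does not promise a lock-free implementation on
    // every platform, so it can be worth verifying at runtime.
    assert(g_counter.is_lock_free());
}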

Different CPU families support RMW in different ways. Processors such as PowerPC and ARM expose load-link/store-conditional instructions, which effectively allow you to implement your own RMW primitive at a low level, though this is not often done. The common RMW operations are usually sufficient.

As illustrated by the flowchart, atomic RMWs are a necessary part of lock-free programming even on single-processor systems. Without atomicity, a thread could be interrupted halfway through the transaction, possibly leading to an inconsistent state.

Compare-And-Swap Loops

Perhaps the most often-discussed RMW operation is compare-and-swap (CAS). On Win32, CAS is provided via a family of intrinsics such as _InterlockedCompareExchange. Often, programmers perform compare-and-swap in a loop to repeatedly attempt a transaction. This pattern typically involves copying a shared variable to a local variable, performing some speculative work, and attempting to publish the changes using CAS:

void LockFreeQueue::push(Node* newHead)
{
    for (;;)
    {
        // Copy a shared variable (m_Head) to a local.
        Node* oldHead = m_Head;

        // Do some speculative work, not yet visible to other threads.
        newHead->next = oldHead;

        // Next, attempt to publish our changes to the shared variable.
        // If the shared variable hasn't changed, the CAS succeeds and we return.
        // Otherwise, repeat.
        if (_InterlockedCompareExchange(&m_Head, newHead, oldHead) == oldHead)
            return;
    }
}

Such loops still qualify as lock-free, because if the test fails for one thread, it means it must have succeeded for another – though some architectures offer a weaker variant of CAS where that’s not necessarily true. Whenever implementing a CAS loop, special care must be taken to avoid the ABA problem.
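For comparison, here is a hedged sketch of the same push expressed with C++11 atomics (the Node and m_Head names are illustrative here, and this sketch does not by itself address the ABA problem). compare_exchange_weak is exactly the weaker variant mentioned above: it may fail spuriously on load-link/store-conditional architectures, which is why it belongs inside a loop:

#include <atomic>

struct Node
{
    Node* next;
    // ... payload ...
};

std::atomic<Node*> m_Head(nullptr);

void push(Node* newHead)
{
    // Copy the shared variable to a local.
    Node* oldHead = m_Head.load();
    do
    {
        // Speculative work, not yet visible to other threads.
        newHead->next = oldHead;
        // compare_exchange_weak publishes newHead only if m_Head still equals
        // oldHead; on failure it reloads the current head into oldHead and we retry.
    }
    while (!m_Head.compare_exchange_weak(oldHead, newHead));
}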

Sequential Consistency

Sequential consistency means that all threads agree on the order in which memory operations occurred, and that order is consistent with the order of operations in the program source code. Under sequential consistency, it’s impossible to experience memory reordering shenanigans like the one I demonstrated in a previous post.

A simple (but obviously impractical) way to achieve sequential consistency is to disable compiler optimizations and force all your threads to run on a single processor. A processor never sees its own memory effects out of order, even when threads are pre-empted and scheduled at arbitrary times.

Some programming languages offer sequential consistency even for optimized code running in a multiprocessor environment. In C++11, you can declare all shared variables as C++11 atomic types with default memory ordering constraints. In Java, you can mark all shared variables as volatile. Here’s the example from my previous post, rewritten in C++11 style:

std::atomic<int> X(0), Y(0);
int r1, r2;

void thread1()
{
    X.store(1);
    r1 = Y.load();
}

void thread2()
{
    Y.store(1);
    r2 = X.load();
}

Because the C++11 atomic types guarantee sequential consistency, the outcome r1 = r2 = 0 is impossible. To achieve this, the compiler outputs additional instructions behind the scenes – typically memory fences and/or RMW operations. Those additional instructions may make the implementation less efficient compared to one where the programmer has dealt with memory ordering directly.

Memory Ordering

As the flowchart suggests, any time you do lock-free programming for multicore (or any symmetric multiprocessor), and your environment does not guarantee sequential consistency, you must consider how to prevent memory reordering.

On today’s architectures, the tools to enforce correct memory ordering generally fall into three categories, which prevent both compiler reordering and processor reordering:

  • A lightweight sync or fence instruction, which I’ll talk about in future posts;
  • A full memory fence instruction, which I’ve demonstrated previously;
  • Memory operations which provide acquire or release semantics.

Acquire semantics prevent memory reordering of operations which follow it in program order, and release semantics prevent memory reordering of operations preceding it. These semantics are particularly suitable in cases when there’s a producer/consumer relationship, where one thread publishes some information and the other reads it. I’ll also talk about this more in a future post.
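As a hedged sketch of that producer/consumer pattern in C++11 (the variable names are mine, not from the post), the release store keeps the payload write from being reordered past the guard flag, and the acquire load keeps the payload read from being reordered ahead of it:

#include <atomic>

int g_payload = 0;                  // ordinary data being published
std::atomic<bool> g_ready(false);   // guard variable

void producer()
{
    g_payload = 42;                                   // 1. write the data
    g_ready.store(true, std::memory_order_release);   // 2. publish: the payload write cannot move below this store
}

void consumer()
{
    if (g_ready.load(std::memory_order_acquire))      // the payload read cannot move above this load
    {
        // If we observed the flag, the payload write is guaranteed to be visible.
        int value = g_payload;
        (void)value;
    }
}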

Different Processors Have Different Memory Models

Different CPU families have different habits when it comes to memory reordering. The rules are documented by each CPU vendor and followed strictly by the hardware. For instance, PowerPC and ARM processors can change the order of memory stores relative to the instructions themselves, but normally, the x86/64 family of processors from Intel and AMD do not. We say the former processors have a more relaxed memory model.

There’s a temptation to abstract away such platform-specific details, especially with C++11 offering us a standard way to write portable lock-free code. But currently, I think most lock-free programmers have at least some appreciation of platform differences. If there’s one key difference to remember, it’s that at the x86/64 instruction level, every load from memory comes with acquire semantics, and every store to memory provides release semantics – at least for non-SSE instructions and non-write-combined memory. As a result, it’s been common in the past to write lock-free code which works on x86/64, but fails on other processors.

If you’re interested in the hardware details of how and why processors perform memory reordering, I’d recommend Appendix C of Is Parallel Programming Hard. In any case, keep in mind that memory reordering can also occur due to compiler reordering of instructions.

In this post, I haven’t said much about the practical side of lock-free programming, such as: When do we do it? How much do we really need? I also haven’t mentioned the importance of validating your lock-free algorithms. Nonetheless, I hope for some readers, this introduction has provided a basic familiarity with lock-free concepts, so you can proceed into the additional reading without feeling too bewildered. As usual, if you spot any inaccuracies, let me know in the comments.

[This article was featured in Issue #29 of Hacker Monthly.]

Additional References

  • Anthony Williams’ blog and his book, C++ Concurrency in Action
  • Dmitriy V’jukov’s website and various forum discussions
  • Bartosz Milewski’s blog
  • Charles Bloom’s Low-Level Threading series on his blog
  • Doug Lea’s JSR-133 Cookbook
  • Howells and McKenney’s memory-barriers.txt document
  • Hans Boehm’s collection of links about the C++11 memory model
  • Herb Sutter’s Effective Concurrency series

Reposted via: https://www.cnblogs.com/hugb/p/7905161.html
