Benefits, drawbacks of lock-free programming for multicore

Posted: 01 Sep 2011

Keywords: parallel software, multi-core, lock-free programming

When I teach courses in multicore software design, I observe that many professional engineers feel uncomfortable in this Brave New World of truly parallel software, which is considerably different from the "pseudo-concurrency" of traditional single-CPU multi-tasking.

When I get to the part of the course about shared resources, attendees start shifting nervously in their seats as I describe four different kinds of semaphores.1 By the time I add three or more varieties of "spin-locks," with their characteristic active "spinning," to the discussion, some attendees begin to groan audibly. As I try to explain the often subtle differences in how these seven or more mechanisms are used, attendees frequently stop me mid-sentence to ask: "Hey David, is there any way I can just avoid that entire can of worms?"

What they're really asking is: "Hey David, is there a way I can do resource sharing efficiently without locks in a multi-core environment?"

My answer? Facilities for lock-free programming are indeed available in the hardware of many multi-core systems-on-chip (SoCs), as well as in the software of some multi-core operating systems.

The caveat is that, unfortunately, developing application software algorithms with these facilities is both different from and more challenging than working in the traditional way. When using traditional locks, a software designer first identifies "critical sections" of code and then "protects" them by selecting one of the traditional locking mechanisms to "guard" those critical sections. In the past, avoiding locks seemed dangerous or tended to involve intricate, convoluted algorithms. For these reasons, lock-free programming has not been widely practised.
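To make the traditional pattern concrete, here is a minimal sketch of a lock-based critical section using a POSIX mutex; the shared counter and function name are hypothetical, chosen only for illustration.

#include <pthread.h>

/* Hypothetical shared resource protected by a traditional lock. */
static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void increment_counter(void)
{
    pthread_mutex_lock(&counter_lock);    /* enter the critical section */
    shared_counter++;                     /* protected access to shared data */
    pthread_mutex_unlock(&counter_lock);  /* leave the critical section */
}

If another task calls increment_counter() while the lock is held, it blocks until the lock is released; that waiting is exactly what the rest of this article is concerned with.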

But as application software developers gain experience with multi-core SoCs and multi-core operating systems, they frequently discover that traditional multi-tasking design approaches are often inappropriate in a multi-core environment. Multiple tasks, and often multiple cores, can spend a great deal of time waiting on locked locks, thus reducing parallelism, increasing latencies, and reducing the benefits of using a multi-core SoC.

While lock-free programming is not a cure-all, it can be used sensibly to provide a significant performance advantage over lock-based programming. This advantage is most often achieved by using lock-free programming within a small percentage of application software's tightest, most deeply-nested and heavily-executed loops.

The very best form of lock-free programming is simply to design application software as very large and totally independent parallel chunks of code. Such immense blocks of code could run concurrently (typically on different cores) for long periods of time without any interaction at all between them. There would be no need for locks, since there would be no data interactions. But this is not practical in many applications.

If data interactions are unavoidable, some simple data interactions between parallel chunks of code can be governed by a lock-free operation called CAS, or atomic compare-and-swap, that is offered in hardware on various multi-core SoCs and also in a number of multi-core operating systems. I'll give several examples of how to use CAS later in this article.
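As a preview of those examples, here is a minimal sketch of a lock-free increment built on CAS, written with the C11 <stdatomic.h> interface; the counter name is illustrative, and the exact CAS primitive available to you will depend on your SoC and operating system.

#include <stdatomic.h>

/* Hypothetical shared counter updated without any lock. */
static atomic_long shared_counter = 0;

void lock_free_increment(void)
{
    long old_value = atomic_load(&shared_counter);
    /* The swap succeeds only if no other core has changed the counter
       since old_value was read. On failure, old_value is refreshed
       with the current value and the operation is retried. */
    while (!atomic_compare_exchange_weak(&shared_counter,
                                         &old_value,
                                         old_value + 1)) {
        /* retry with the freshly observed value */
    }
}

No task ever blocks here: a core that loses the race simply retries with the value it just observed.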

Problems with locks
Locking mechanisms, such as semaphores, mutexes, and multiple reader-writer locks, are problematic even in single-CPU multi-tasking environments. Using them in a multi-core environment tends to exacerbate their problems because of the true parallelism involved and the more chaotic nature of task scheduling done by multi-core operating systems. In multi-core environments, additional locking mechanisms called spin-locks join the fray. Here are some oft-encountered problems involving these locks.
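For reference, the core of a spin-lock can be sketched in a few lines with a C11 atomic flag; real multi-core kernels add back-off, priority handling, and per-core bookkeeping that this illustration deliberately omits.

#include <stdatomic.h>

/* Minimal spin-lock sketch: the calling core actively "spins",
   burning CPU cycles, until the current holder releases the flag. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;  /* busy-wait instead of blocking */
}

void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}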

Deadlocks
A deadlock is a situation where two (or more) tasks wait for each other to release a resource that the other is holding. Since they're waiting, they won't release anything; and as a result, they won't do any useful work ever again. An example is shown in figure 1.

Figure 1: A deadlock scenario.
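The scenario of figure 1 is easy to reproduce in code: two tasks each take two locks, but in opposite order. The task and lock names below are hypothetical.

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Task 1 takes A then B; Task 2 takes B then A. If each task
   acquires its first lock before the other acquires its second,
   both wait forever: a deadlock. */
void *task_1(void *arg)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   /* blocks if task_2 holds lock_b */
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return arg;
}

void *task_2(void *arg)
{
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);   /* blocks if task_1 holds lock_a */
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return arg;
}

The usual lock-based cure is a strict global lock-ordering discipline; lock-free programming sidesteps the problem entirely, since a task holds nothing while it waits.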

