Book Review
Author: Dan Nagle
Publication year: 2011
Publication title: Scientific Programming
Language: English
DOI: 10.3233/spr-2011-0318
Subject: Computer software (QA76.75-76.765)
The Art of Concurrency is subtitled A Thread Monkey’s Guide to Writing Parallel Applications. It has a Preface, 11 chapters, a Glossary and 285 pages, including the index. The author wants to help experienced professional applications programmers master the task of multi-threaded programming. The idea is to take simple problems, find a suitable algorithm, and apply a multi-threaded solution using one (or more) of OpenMP, pthreads, Intel Threading Building Blocks (hereinafter TBB), or Windows threads. Most of the examples are written in C, except that the TBB examples require C++. C is a good choice, for it is a lingua franca of professional programmers and it clearly displays all the steps involved.

The Preface explains the hardware reasons for multicore chips, and the software response: applications are now to be written multi-threaded. This book is intended for all programmers everywhere. The programmer should have some familiarity with multi-threaded programming methods, specifically with whichever scheme is actually to be used. The focus here is algorithms rather than library details. An outline of the chapters is then presented. Not that scientific programmers are not professionals, but where does this book fit in the scientific programming literature? Let’s keep reading, and find the answer to that question.

The first chapter, “Want to Go Faster? Raise Your Hand If You Want to Go Faster!”, tells us that the author wants to share his experience of parallel programming. And he certainly has a great deal of experience to share. We next learn that “Thread Monkey” is good, just like “Grease Monkey” is good. On the other hand, “Code Monkey” is bad. This not only assuaged my ego, but it also helped to get my filters adjusted to the author’s sense of humor. The author clearly defines and distinguishes parallelism and concurrency: concurrent means that something may be in progress at the same time as something else; parallel means that it may be executing at the same time as something else.
The author distinguishes the kind of parallelism he will discuss from the scalable and popular, if tedious, message passing. So why would a programmer need to understand multi-threading? Because that’s where the hardware is going. Isn’t multi-threading hard? Not if you follow the rules, which aren’t all that hard to learn. We’ll assume that the programmer has a working serial program. What are the steps towards parallelism? We are told: first, Analysis: Identify Possible Concurrency; next, Design and Implementation: Threading the Algorithm; next, Test for Correctness: Detecting and Fixing Threading Errors; finally, Tune for Performance: Removing Performance Bottlenecks. And so we’ve already gotten some guidance. Why not start our parallel application from scratch? Because then you’ve got two sources of error: logic (ordinary bugs) and parallelism. Now we can examine some idealized hardware (including the Parallel Random Access Machine, PRAM) and we’re off to examine algorithms for concurrency.

The next chapter, Concurrent or Not Concurrent, discusses design models, specifically data decomposition and task decomposition. We’ll start with task decomposition. We seek tasks that are independent, that is, tasks that do not have dependencies. How does one identify independent tasks? Experience, which to the novice means practice. There are only two data dependencies: wanting the new value but getting the old one, or wanting the old value but getting the new one. One must guard against both. One may have first encountered them when vectorizing. Try imagining two sections of code executing simultaneously: do they interfere? Then one must ask how to map tasks to threads, and how much work will be in each task, that is, the granularity of each task. The greater the work per thread, the better. The author warns us away from Thread Local Storage (TLS), as it incurs high latency, advising other language constructs to achieve the desired end. How are tasks assigned to threads?
With OpenMP and TBB, it’s done automatically, although the programmer can exert some influence.
