Book! Scalable Planning: Concurrent Actions from Elemental Thinking


From cover blurb: Concurrency describes any potential time overlap in a set of activities. Its most onerous complexities have been tackled mostly by parallel programmers trying to speed up their applications by harnessing the power of multiple computers (processors or cores) tied together. Other programmers have mostly remained content with the relative simplicity and ever-increasing speeds of standard sequential (non-parallel) computers, and the rest of us have stuck with one-step-at-a-time approaches. But those days are ending: typical processor speeds have leveled off, and now even laptops and phones are picking up the slack by integrating multiple processors and graphics coprocessors. In the human realm, as communication of all sorts becomes faster and more ubiquitous, we have ever more services (by computers and people) at our disposal, their decentralized nature implying concurrency. How can we humans plan for, and keep track of, all this available concurrency with our "one track" minds? Can these concurrent plans scale up to exploit ever larger collections of processors and/or services?

This text uses simple analogies, examples, and thought experiments to explain basic concepts in concurrency to a broad audience, and to devise an intuitive "elementary particle of activity". A new graphical representation called ScalPL (Scalable Planning Language) is then introduced for building even complex concurrent activities of all kinds from those elemental activities, one mind-sized bite at a time. For programmers, structured and object-oriented programming are extended into the concurrent realm, and performance techniques are explored. For the more serious student, axiomatic semantics and proof techniques are covered.

As the world becomes flatter, communication speeds increase, organizations become decentralized, and processors become ubiquitous, Scalable Planning will help you master the trend toward increased concurrency, a trend that is here to stay.


Dr. David C. DiNucci has been developing software for almost 40 years, and has been involved in parallel computing for over 25. He spent most of the 1990s at NASA Ames Research Center developing and studying parallel tools, and in 1997, led a national planning team for an early grid (cloud) research project. In the years since, he has continued to refine the techniques and notations described here.

In addition to writing the papers cited in the bibliography here, Dr. DiNucci served on the MPI standards committee and the original Grid Forum, and wrote two chapters in the 1987 book "Programming Parallel Processors".