Concurrency in Software

[via Hemant] Herb Sutter calls it the biggest change in software development after the object-oriented revolution.

The major processor manufacturers and architectures, from Intel and AMD to Sparc and PowerPC, have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds and straight-line instruction throughput ever higher, they are instead turning en masse to hyperthreading and multicore architectures. Both of these features are already available on chips today; in particular, multicore is available on current PowerPC and Sparc IV processors, and is coming in 2005 from Intel and AMD. Indeed, the big theme of the 2004 In-Stat/MDR Fall Processor Forum was multicore devices, as many companies showed new or updated multicore processors. Looking back, it's not much of a stretch to call 2004 the year of multicore.

And that puts us at a fundamental turning point in software development.

The performance lunch isn't free any more. Sure, there will continue to be generally applicable performance gains that everyone can pick up, thanks mainly to cache size improvements. But if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written concurrent (usually multithreaded) application. And that's easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard.

The clear primary consequence is that applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip's potential throughput. "Oh, performance doesn't matter so much, computers just keep getting faster" has always been a naïve statement to be viewed with suspicion, and for the near future it will almost always be simply wrong.

Perhaps a less obvious consequence is that applications are likely to become increasingly CPU-bound. Of course, not every application operation will be CPU-bound, and even those that will be affected won't become CPU-bound overnight if they aren't already, but we seem to have reached the end of the "applications are increasingly I/O-bound or network-bound or database-bound" trend, because performance in those areas is still improving rapidly (gigabit WiFi, anyone?) while traditional CPU performance-enhancing techniques have maxed out.

Published by

Rajesh Jain

An Entrepreneur based in Mumbai, India.