Code Parallel or Die

by Frank Chism on April 14, 2008 03:22pm

“What we got here is a failure to communicate.”
- Luke in “Cool Hand Luke”

A Fistful of Cores in a Server-Rich Environment
If you haven’t noticed that the world has changed, you had better wake up and smell the coffee. The era of killer apps written in serial code is ending. As in species extinction, many ISVs will not survive this change because the environment in which we develop code has changed rapidly, and not all species can adapt to the new world. The change was preceded by a hint of ‘The Shape of Things to Come’ in the Beowulf and compute cluster revolution, which created the first generation of commodity clusters built from microprocessor nodes connected by commodity networks. One evolutionary lesson from that generation is that if you are building a cluster, you really ought to use server-class nodes. You _can_ build a cluster from desktop parts, but the higher you scale your node count, the more you want the convenience and reliability of server parts.

I remember the look of outrage on the face of an Intel marketing person when I commented that their then brand new Itanium Tiger platform was a really nice part for building a cluster. He thought of the new platform as a High Performance Server, not as a ‘part’. Well, get used to it. We don’t build computers out of discrete components like transistors any more (which is good for him, because Intel makes really big integrated circuits, not transistors), and the level of integration has now moved up to the server, not the processor.
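The programming model that grew up with those clusters is distributed memory message passing, which in practice almost always means MPI. For anyone who has never seen that style, here is a minimal sketch in C; the MPI calls are standard, but the program itself is only an illustration, not code from any real application:

```c
/* Minimal sketch of the message-passing style used on Beowulf-class
 * clusters: every rank is a separate process, possibly on a separate
 * server, and nothing is shared except what you explicitly send.
 * Build with an MPI compiler wrapper (e.g. mpicc) and launch with
 * mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us? */

    if (rank != 0) {
        /* each worker sends its rank number to rank 0 */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int who;
        for (int i = 1; i < size; i++) {
            MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank %d checked in\n", who);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Nothing in that code cares whether the nodes are desktop parts or server parts; the model only cares that ranks can send each other messages.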

And Then There Were Cores
So into this environment, where servers are parts and many high performance computing applications were written to a distributed memory, communications-aware parallel programming model, the microprocessor has changed in the most radical way I can remember. Our equilibrium has been punctuated (see Eldredge, N. & Gould, S.J., “Punctuated equilibria: an alternative to phyletic gradualism,” pp. 82–115 in Models in Paleobiology, ed. T.J.M. Schopf, Freeman, Cooper & Co., San Francisco, 1972), and we can expect rapid change to a whole new kind of parallelism. The change I refer to is the introduction and near ubiquity of multicore processors.

For years software developers have been able to count on dramatic increases in CPU speed and processing power on a Moore’s Law schedule. But Moore did not say that the processing speed and power of CPUs would double; he said that the number of transistors on a chip would double roughly every two years. All was good for a long time, because that doubling of transistors meant smaller _and_ faster gates, greater architectural complexity, and thus ‘faster’ CPUs. By about 2005, just about every good idea for a better processor architecture ever thought of had been incorporated into the major microprocessors on the market. So when 2006 rolled around, the obvious thing to do with the next doubling of transistors was to double the number of ‘processors’ on the chip. Dual core was here, and the trend will continue for at least a couple more doublings. Who knows what strange beasties will emerge when ‘many core’ processors become ‘too many core’ processors. Until then, though, we can be sure: Multicore killed the serial star...
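Multicore flips the picture: the cores on a chip share memory, so the low-effort way to use them is threading, and in HPC circles that usually means OpenMP. Here is a minimal, illustrative sketch in C; the pragma and the library call are standard OpenMP, while the loop itself is just a stand-in for real work:

```c
/* Minimal sketch of shared-memory parallelism on a multicore chip:
 * one pragma spreads the loop iterations across the available cores.
 * Build with an OpenMP-capable compiler, e.g. gcc -fopenmp. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 100000000;
    double sum = 0.0;

    /* iterations are divided among the cores; reduction(+:sum)
     * safely combines each thread's partial sum at the end */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += 1.0 / (double)(i + 1);  /* stand-in computation */

    printf("cores available: %d, sum = %f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```

Delete the pragma and the program still runs, on exactly one core of however many you paid for. That, in one line, is why serial code is the species that will not survive.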

More on this tomorrow.