The Economist writes:
The ability to build powerful computers cheaply, combined with growing commercial demand for high-end computing power, is creating a renaissance in the field of supercomputing.
Two applications in particular have driven the development of supercomputers: the modelling of climate change and of what happens inside a nuclear explosion, the latter being necessary because of the ban on actual nuclear testing observed by the established nuclear powers.
Will the future be full of supercomputers built from more off-the-shelf parts? The American government wants researchers to focus on more customised and expensive systems (reversing the policy in place since the early 1990s). A report this year by the president’s science adviser warned that research in high-end supercomputing has not kept pace with demand. Consequently, this month Congress passed legislation to increase funding of supercomputer research.
America’s Defence Advanced Research Projects Agency, an arm of the Pentagon, also wants a petaflop machine for its research work. Whether the competition for this contract will be won by a supercomputer built from off-the-shelf components, or by one built from scratch, is as yet unclear. However, some think that the real limiting factor in achieving such a machine is software rather than hardware.
At the International Supercomputer Conference, held in Heidelberg, Germany last month, Steve Wallach, a vice-president of Chiaro, a router manufacturer based in Richardson, Texas, and a supercomputer expert, suggested that supercomputer hardware may have to relinquish some performance in order to make the systems easier to program. This is a particular problem for machines built from off-the-shelf systems, which often have very low efficiencies. While they may be excellent at running the benchmark programs that set the speed rankings, many of their multiple processors remain idle when confronted with real computing tasks.
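The gap between benchmark rankings and real-world efficiency described above can be illustrated with Amdahl's law, the standard formula relating a workload's serial fraction to its achievable speedup. The sketch below is an illustrative model, not drawn from the article; the serial fractions and processor counts are hypothetical values chosen to show the effect.

```python
# A minimal illustration, via Amdahl's law, of why a machine with many
# processors can score well on a highly parallel benchmark yet leave most
# of its processors idle on a real task with some unavoidable serial work.
#
#   speedup(p) = 1 / (s + (1 - s) / p)   where s is the serial fraction
#   efficiency = speedup / p             (fraction of peak actually used)

def speedup(serial_fraction: float, processors: int) -> float:
    """Ideal speedup on `processors` CPUs for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

def efficiency(serial_fraction: float, processors: int) -> float:
    """Fraction of the machine's peak performance the workload uses."""
    return speedup(serial_fraction, processors) / processors

# Hypothetical cases: a near-perfectly parallel benchmark (s = 0.1%)
# versus a real application with 5% serial work.
for s in (0.001, 0.05):
    for p in (64, 1024):
        print(f"serial fraction {s:.3f}, {p:5d} processors: "
              f"efficiency {efficiency(s, p):.1%}")
```

With 5% serial work, a 1,024-processor machine runs at under 2% of its peak, which is the sense in which most of the processors "remain idle" on real tasks.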