Grid computing is about sharing processing power. The lead story in the Business section of the October 7, 2004 issue of The Economist provides an update on efforts to create a computer the size of the world.
The stated goal of grid computing is to create a worldwide network of computers interconnected so well and so fast that they act as one. Yet most of the time, this over-hyped catchphrase is used to describe rather mundane improvements that allow companies to manage their workload more flexibly by tapping into idle time on their computers. At a meeting on computing in high-energy physics held in late September in Interlaken, Switzerland, physicists and engineers reviewed progress towards an altogether more ambitious type of computing grid, which aims to create a truly seamless system.
Physicists’ demand for computing power is being spurred by the flood of data that will pour out of the Large Hadron Collider (LHC), the next-generation particle smasher due to start operation in 2007 at CERN, the European particle physics laboratory near Geneva. This machine will produce some 15 petabytes (millions of billions of bytes) of data a year…Some 100,000 of today’s fastest personal computers, with accompanying bits and bobs such as tape and disk storage and high-speed networking equipment, will be needed to analyse all this data.
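A quick back-of-envelope check on the figures quoted above shows why a single site is impractical: 15 petabytes a year spread over roughly 100,000 machines still leaves each machine a substantial annual share. (The per-machine figure below is my own arithmetic from the article's numbers, not a figure from the article itself.)

```python
# Rough arithmetic from the article's figures: 15 PB/year, ~100,000 PCs.
PETABYTE = 10**15  # bytes (decimal petabyte)
GIGABYTE = 10**9   # bytes

annual_data = 15 * PETABYTE
machines = 100_000

per_machine = annual_data / machines  # bytes per machine per year
print(per_machine / GIGABYTE)  # → 150.0 (gigabytes per machine per year)
```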
The decision to build a distributed computing system to deal with this deluge of data predates the hype about grid technology and is purely pragmatic: it would be difficult to fund the necessary computational power and storage capacity if it were concentrated on one site. If, on the other hand, the computations are distributed among the hundreds of institutes worldwide that are involved in the LHC, each institute can tap into national or regional funding sources to raise cash, spreading the pain.
The vision of a single grid, in the same sense that most users perceive a single web, remains a long way off.
Not all problems are best solved using the distributed clusters that underpin grids. True supercomputers are irreplaceable for some scientific problems, such as weather forecasting, where many processors must communicate frequently with one another. At the other extreme, scavenging spare computer power from personal computers on the internet is proving an increasingly effective approach for problems that can be split into a large number of small, independent parts. SETI@home, a screensaver which was the first and remains the best-known of these programs, uses idle time to analyse radio signals, looking for messages from aliens. For now, SETI@home is still the largest of these projects, although a new general-purpose platform called BOINC has been launched to tackle more diverse problems.
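The "large number of small, independent parts" approach can be sketched in a few lines: a server splits one big dataset into self-contained work units, volunteers' machines analyse them in any order with no communication between units, and the server merges the small results. This is a minimal illustrative sketch of that model; the function names and the toy analysis are my own, not BOINC's or SETI@home's actual API.

```python
def make_work_units(signal, chunk_size):
    """Split one large dataset into independent chunks ("work units")."""
    return [signal[i:i + chunk_size] for i in range(0, len(signal), chunk_size)]

def analyse(unit):
    """Toy per-unit analysis. Each unit needs no data from any other unit,
    so scavenged machines can process them in any order, at any time."""
    return max(unit)

def combine(results):
    """The server merges the small, independent per-unit results."""
    return max(results)

# Toy "signal" standing in for a stream of radio data.
signal = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
units = make_work_units(signal, chunk_size=4)
peak = combine(analyse(u) for u in units)
print(peak)  # → 9
```

The contrast with supercomputing problems such as weather forecasting is exactly that `analyse` never needs to talk to another unit mid-computation; once inter-unit communication is required, cheap scavenged cycles stop being a good fit.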
Dan Farber wrote in the September 2004 issue of Release 1.0: To date, grid computing has been used to coordinate computing resources from multiple owners to handle a single large scientific task, such as the SETI@Home project, which harnesses 5 million PCs to search for deep-space radio signals from extraterrestrials, or IBM's Butterfly.net, which uses a grid for a multiplayer game network. Grid.org, a website for large-scale research projects powered by Austin-based United Devices' grid computing solution, harnesses 2.5 million systems in more than 225 countries to deliver in excess of 150 teraflops of power to applications. Using grid.org, the Anthrax Research Project screened 3.57 billion molecules for suitability as a treatment for advanced-stage anthrax in 24 days. The screening would have taken years using conventional methods…The next phase for grid computing is to apply grid computing and IT automation concepts broadly as a framework for administering commercial enterprise IT infrastructure.
Monday: Recent Developments (continued)