Supercomputing: US vs Japan

Business Week asks if the US can recapture the lead from Japan:

For years, some U.S. supercomputing gurus had been warning that Washington’s support of high-performance computing was too narrowly focused on the needs of the Pentagon’s nuclear-weapons programs. Even acknowledging the U.S. strength in software, they warned that scientific research was being hobbled because U.S. supers were not designed to solve the really tough issues facing civilian scientists and engineers. Earth Simulator, built by Japan’s NEC Corp., was proof positive of just how far behind the U.S. had fallen in scientific supercomputing.

Academic scientists who model the birth of stars and the origin of life may have the greatest hunger for supercomputing power. But supercomputers are used in a wide swath of industries, including finance, insurance, semiconductors, and telecommunications. Indeed, roughly half of the world’s top 500 supers are owned by corporations.

While the machines used by business today don’t have the muscle to tackle the “grand challenge” problems in science, such as predicting climate change, they have become essential in developing better products and speeding them to market. Procter & Gamble Co. even engineers the superabsorbent materials in its baby diapers with a supercomputer. Now, IBM and other suppliers are evolving designs that promise a new class of ultrafast supers and innovative software development tools.

The U.S. may need the extra brawn. The power of Japan’s Earth Simulator “will contribute to fundamental changes in every field,” says Tetsuya Sato, director of the Earth Simulator Center (ESC). The Center is now nailing down a collaboration with Japan’s auto makers to harness the super for automotive engineering and simulated crash testing.

Earth Simulator isn’t the only threat. In computational biology — using software to tackle problems ranging from medical diagnosis to drug discovery — the U.S. has an even bigger handicap. In 2001, Japan’s Institute of Physical & Chemical Research, known as Riken, built a special-purpose computer for such notoriously difficult jobs as simulating the function of proteins. Called the Molecular Dynamics Machine, it has a speed of 78 teraflops — twice as fast as Earth Simulator.

There are two basic approaches to supercomputer design. NEC’s supers use a so-called vector architecture, meaning they have custom silicon processors for brains. These chips are specifically designed for the heavy-duty math in science and engineering. In contrast, virtually all U.S. supers do their thinking with ordinary microprocessors — the chips found in PCs and video games. Until Earth Simulator came along, the U.S. was smug about this approach. Because commercial off-the-shelf (COTS) chips are produced in huge volumes, they’re much less expensive than NEC’s chips. So when more speed is needed, IBM, Hewlett-Packard, or Dell can just “scale up,” lashing together 100 or 1,000 more chips — the “scalar” approach.
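The trade-off between the two designs can be sketched with a toy Python analogy (illustrative only — real vector processors execute a single hardware instruction across an array of operands, while a scalar cluster spreads the same array over many commodity chips; the function names and node count here are invented for the example):

```python
def vector_add(a, b):
    # Vector approach: conceptually one "instruction" applied to the
    # entire pair of operand arrays at once.
    return [x + y for x, y in zip(a, b)]

def scalar_cluster_add(a, b, n_nodes=4):
    # Scalar (COTS) approach: split the same work into chunks and
    # hand each chunk to one of n_nodes commodity "processors".
    size = len(a)
    chunk = (size + n_nodes - 1) // n_nodes
    out = []
    for node in range(n_nodes):
        lo, hi = node * chunk, min((node + 1) * chunk, size)
        out.extend(a[i] + b[i] for i in range(lo, hi))
    return out

a, b = list(range(8)), list(range(8, 16))
assert vector_add(a, b) == scalar_cluster_add(a, b)
```

Both routes produce the same answer; the difference lies in cost and efficiency — custom vector silicon is expensive but well matched to array-heavy scientific math, while the scalar route buys cheap chips in bulk and scales by adding more of them.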

However, the peak-speed ratings of COTS clusters can be deceptive. When running the complex software used to tackle really difficult issues in physics, chemistry, and simulated crash tests of cars, COTS systems rarely eke out even 10% of their peak speed over extended periods. NEC’s machines chug along at 30% to 60%.
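The gap between a peak rating and delivered performance is simple arithmetic. A rough sketch, using the efficiency ranges quoted above and a hypothetical 40-teraflop peak rating (the peak figure is an assumption for illustration, not a number from the article):

```python
def sustained_tflops(peak_tflops, efficiency):
    """Sustained throughput = peak rating x fraction of peak actually achieved."""
    return peak_tflops * efficiency

# Hypothetical 40-TFLOPS peak machines at the efficiencies quoted in the text.
cots = sustained_tflops(40.0, 0.10)    # COTS cluster at ~10% of peak
vector = sustained_tflops(40.0, 0.45)  # vector machine, mid-range of 30-60%

print(f"COTS sustained:   {cots:.1f} TFLOPS")
print(f"Vector sustained: {vector:.1f} TFLOPS")
```

On these assumptions, a vector machine with the same paper rating delivers several times the sustained throughput on the hard scientific workloads — which is why peak-speed comparisons between the two architectures can mislead.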

Published by

Rajesh Jain

An entrepreneur based in Mumbai, India.