Beyond that, the teams will pursue different game plans. “There’s competition as well as collaboration,” says Intel project leader Wilfred Pinfold, “and there won’t be just one answer.”
Sandia National Laboratories’ effort, dubbed X-caliber, will attempt to further limit data shuffling with something called smart memory, a form of data storage with rudimentary processing capabilities. Performing simple calculations without moving data out of memory consumes an order of magnitude less energy than the shuttling approach of today’s supercomputers. “We move the work to the data rather than move the data to where the computing happens,” Murphy says.
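To make the idea concrete, here is a minimal Python sketch of processing-in-memory. The class, the data, and the bookkeeping are illustrative assumptions, not Sandia's design; the point is only that a local reduction sends one result across the bus instead of every word.

```python
# Illustrative sketch of the "smart memory" idea: the memory bank applies a
# simple reduction where the data lives, so only the result crosses the bus.
# Class names and sizes are hypothetical.

class SmartMemoryBank:
    """A memory bank that can run simple reductions in place."""

    def __init__(self, data):
        self.data = list(data)

    def local_sum(self):
        # The calculation happens inside the bank: one scalar leaves,
        # instead of every element being shipped to a distant CPU.
        return sum(self.data)

def conventional_sum(banks):
    # Conventional path: every word moves across the memory bus to the CPU.
    words_moved = sum(len(b.data) for b in banks)
    total = sum(x for b in banks for x in b.data)
    return total, words_moved

def smart_memory_sum(banks):
    # Smart-memory path: each bank reduces locally; only partial sums move.
    words_moved = len(banks)
    total = sum(b.local_sum() for b in banks)
    return total, words_moved

banks = [SmartMemoryBank(range(1000)) for _ in range(8)]
print(conventional_sum(banks))  # (3996000, 8000) -- 8,000 words cross the bus
print(smart_memory_sum(banks))  # (3996000, 8)    -- only 8 partial sums move
```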
Intel’s project, called Runnemede, is wringing more efficiency from its system using innovative techniques that selectively reduce or turn off power to individual components, says Josep Torrellas, a computer scientist at the University of Illinois who is an architect with the team. He and his colleagues are designing chips with about 1,000 processors arranged in groups whose voltage can be controlled independently, so that each group receives only what it needs at a given moment.
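A toy model suggests how per-group voltage control saves power. The grouping, voltage floor, and workload numbers below are assumptions for illustration, not Intel's design; the one grounded physical fact is that dynamic CMOS power scales roughly with voltage squared times frequency, which is why even modest voltage cuts pay off.

```python
# Toy model of per-group voltage scaling in the spirit of Runnemede's
# independently controlled processor groups. Values are hypothetical.

NOMINAL_V = 1.0   # volts, assumed nominal supply
IDLE_V = 0.0      # a group with no work is power-gated entirely

def group_voltage(load):
    """Pick a supply voltage for a group from its current load (0..1)."""
    if load == 0.0:
        return IDLE_V                  # turn the group off
    # Scale voltage down with load, with a floor so circuits stay stable.
    return max(0.6, NOMINAL_V * load)

def dynamic_power(voltage, frequency):
    # Classic CMOS approximation: P is proportional to V^2 * f.
    return voltage ** 2 * frequency

# Sample per-group utilizations; assume frequency tracks load (normalized).
for load in [0.0, 0.25, 0.5, 1.0]:
    v = group_voltage(load)
    print(f"load={load:.2f}  V={v:.2f}  relative power={dynamic_power(v, load):.3f}")
```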
Graphics chip maker NVIDIA leads a third research thrust, called Echelon, which builds on the capabilities of the company’s graphics-processing chips. Such processors consume just one-seventh as much energy per instruction as a conventional processor, according to architecture director Stephen Keckler. Graphics chips efficiently execute many operations at once, in contrast to traditional processors, which perform one operation at a time as quickly as possible. The Echelon team plans to combine its graphics processors with standard processors so that the computer can automatically choose the most appropriate combination for the task at hand.
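The routing decision can be sketched in a few lines. The threshold and task figures below are made up for illustration and are not NVIDIA's; only the roughly one-seventh energy-per-instruction ratio comes from the article.

```python
# Sketch of heterogeneous scheduling in the spirit of Echelon: send highly
# parallel work to throughput (GPU-style) cores, serial work to latency
# (CPU-style) cores. Threshold and task numbers are hypothetical.

ENERGY_PER_OP = {"gpu": 1.0, "cpu": 7.0}  # GPU ~1/7 the energy per instruction

def pick_processor(parallel_fraction, threshold=0.9):
    """Choose a core type for a task based on how parallel it is."""
    return "gpu" if parallel_fraction >= threshold else "cpu"

def estimated_energy(ops, parallel_fraction):
    proc = pick_processor(parallel_fraction)
    return proc, ops * ENERGY_PER_OP[proc]

for name, ops, pf in [("matrix multiply", 1e9, 0.99), ("branchy parser", 1e6, 0.2)]:
    proc, energy = estimated_energy(ops, pf)
    print(f"{name}: run on {proc}, estimated energy {energy:.0f} units")
```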
Finally, the Angstrom project, based at MIT, is creating a computer that self-adjusts on the fly to reduce energy use. The system goes through a search process to optimize settings such as the number of processors in use, says Anant Agarwal, the MIT computer scientist who heads the project. In a computing first, it will even be able to automatically select algorithms based on their energy efficiency, he says. This self-regulation should help make life easier for software engineers working with the machine. “Other approaches often require programmers to worry about optimizing performance and energy use simultaneously, which is awfully hard to do,” Agarwal says.
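In outline, that search is an autotuning loop: try each candidate configuration, score its energy, keep the cheapest. The candidate algorithms, operation counts, and cost model below are hypothetical stand-ins for real measurements, not MIT's system.

```python
# Sketch of Angstrom-style self-tuning: search over algorithm x core-count
# settings and keep the lowest-energy choice. All figures are hypothetical.

import itertools

ALGORITHMS = [
    {"name": "merge_sort", "ops": lambda n: n * 17, "max_parallel": 64},
    {"name": "radix_sort", "ops": lambda n: n * 9,  "max_parallel": 8},
]

def energy_model(algorithm, cores, problem_size):
    # Stand-in for a real measurement: work divided by usable parallelism,
    # plus a small static-power penalty for every core kept awake.
    speedup = min(cores, algorithm["max_parallel"])
    return algorithm["ops"](problem_size) / speedup + 0.1 * cores

def autotune(problem_size, core_options=(1, 8, 64)):
    """Return the (algorithm, core count) pair with the lowest modeled energy."""
    best_alg, best_cores = min(
        itertools.product(ALGORITHMS, core_options),
        key=lambda cfg: energy_model(cfg[0], cfg[1], problem_size),
    )
    return best_alg["name"], best_cores

print(autotune(1_000_000))  # -> ('merge_sort', 64) under this toy model
```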
Though the Darpa challenge focuses on supercomputers, the technology it spawns will probably ripple throughout the industry, making its way into data centers, automotive computers, and cell phones. Today’s desktops rival the top supercomputers of the late 1980s; 2020 may find us using laptops that outperform Tianhe-1A. And if Darpa’s four ultraefficient developer teams succeed, maybe we can even leave the chargers at home.
Flops: Floating point operations per second, a standard measure of computing power.
Exascale computing: Supercomputing three orders of magnitude above the current frontier, with quintillions of calculations per second.
Smart memory: A form of data storage with its own computing capabilities. Such memory reduces the need to move data to a processor.
Distributed memory: A computer system in which each processor has its own dedicated set of memory chips.