
IBM shares plans for supercomputing future

Agam Shah | Nov. 14, 2014
IBM plans to load future supercomputers with more co-processors and accelerators to increase computing speed and power efficiency.

"At the individual compute element level, we continue the Von Neumann approach. At the level of the system, however, we are providing an additional way to compute, which is to move the compute to the data. There are multiple ways to reduce latency in a system and reduce the amount of data which has to be moved. This saves time and energy," Turek said.

Moving computing closer to data in storage or memory is not a new concept. IBM has been building appliances and servers with CPUs targeted at specific workloads, and has been disaggregating memory, storage and processing subsystems into separate boxes. But IBM is looking at optimizing entire supercomputing workloads that involve modeling, simulation, visualization and complex analytics on massive data sets.
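To make the data-centric idea concrete, here is a minimal, hypothetical sketch (not IBM code; the function names and data layout are invented for illustration). Rather than pulling every raw partition across the interconnect and reducing it on a central node, each node reduces its own slice locally and ships back only a small summary.

```python
# Illustrative sketch of "move the compute to the data" (hypothetical, not IBM code).
import numpy as np

def ship_data_then_compute(partitions):
    """Classic approach: gather all raw data centrally, then reduce it."""
    gathered = np.concatenate(partitions)           # simulates a bulk transfer
    bytes_moved = sum(p.nbytes for p in partitions)
    return gathered.mean(), bytes_moved

def ship_compute_to_data(partitions):
    """Data-centric approach: each storage/memory node reduces its own slice
    and returns only a (sum, count) pair to the central node."""
    partials = [(p.sum(), p.size) for p in partitions]   # runs "near" the data
    bytes_moved = len(partials) * 16                     # two 8-byte values per node
    total, count = map(sum, zip(*partials))
    return total / count, bytes_moved

if __name__ == "__main__":
    data = [np.random.rand(1_000_000) for _ in range(4)]  # four remote partitions
    for fn in (ship_data_then_compute, ship_compute_to_data):
        mean, moved = fn(data)
        print(f"{fn.__name__}: mean={mean:.4f}, bytes moved={moved:,}")
```

In this toy case, moving four million double-precision values becomes moving 64 bytes of partial results; that reduction in data movement, applied at system scale, is the time and energy saving Turek describes.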

The model will work in research areas like oil and gas exploration, life sciences, weather modeling, and materials research. Applications will need to be written and well-defined for processing at different levels, and IBM is working with companies, institutions and researchers to define software models for key sectors.

The fastest supercomputers today are ranked using the LINPACK benchmark, a simple measurement based on floating-point operations. IBM isn't ignoring the Top500 list, but it is taking a different approach to speeding up supercomputing.
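As a rough illustration of what LINPACK counts, the sketch below (a toy, not the actual HPL benchmark) times a dense linear solve with NumPy and converts it to floating-point operations per second using the standard HPL operation count of 2/3·n³ + 2·n².

```python
# Toy illustration of a LINPACK-style measurement: time a dense linear solve
# and convert the elapsed time into floating-point operations per second.
# This is not the real HPL benchmark, only the same flop-counting idea.
import time
import numpy as np

def linpack_style_gflops(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    b = rng.random(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard HPL operation count
    residual = np.linalg.norm(A @ x - b)      # sanity check on the solution
    return flops / elapsed / 1e9, residual

if __name__ == "__main__":
    gflops, resid = linpack_style_gflops()
    print(f"~{gflops:.1f} GFLOP/s (residual {resid:.2e})")
```

A score like this says nothing about integer work, data movement or mixed analytics workloads, which is the gap Turek points to next.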

LINPACK is good at measuring basic speed but under-represents the utility of supercomputers, Turek said, adding that the benchmark doesn't account for specialized processing elements such as integer units and FPGAs.

"The Top500 list measures some elements of the behavior of compute nodes, but is incomplete in terms of its characterization of workflows that require merging modeling, simulation and analytics. Our own research shows that many classic HPC applications are only moderately related to the measure of LINPACK," Turek said.

Organizations building supercomputers have learned to tune their software to score well on LINPACK, even though it is a poor measure of real supercomputing performance, said Nathan Brookwood, principal analyst at Insight 64.

"Top500 takes a very simple view of computer performance. Everybody loves simplicity," Brookwood said.

The real performance of some specialized applications goes far beyond LINPACK, and IBM's approach makes sense, Brookwood said.

"IBM is right, there's a lot of ways to skin the cat for different applications. Those with different applications will have a different effect, and it's hard to capture those numbers," Brookwood said.

Several companies are developing computers that put a new spin on how data is accessed and interpreted. D-Wave Systems offers what is believed to be the world's first and only quantum computer, which NASA, Lockheed Martin and Google are using for specific tasks. Other efforts remain experimental: IBM has built an experimental computer with a chip designed to mimic the human brain, and Hewlett-Packard's Machine pairs a new type of memory called the memristor with data transfer over light beams.

 
