IBM shares plans for supercomputing future

Agam Shah | Nov. 14, 2014
IBM plans to load future supercomputers with more co-processors and accelerators to increase computing speed and power efficiency.

Supercomputers with this new architecture could be out within the next year. The aim is to boost data processing at the storage, memory and I/O levels, said Dave Turek, vice president of technical computing for OpenPower at IBM.

That will help break parallel computational tasks into smaller chunks, reducing the compute cycles required to solve problems. It is one way to overcome the scaling and economic limitations of parallel computing that constrain conventional computing models, Turek said.

"We looked at this and said, we can't keep doing what we've done, but that won't even work [any longer] when you look at the volume of data people are starting to entertain," Turek said.

Memory, storage and I/O work in tandem to boost system performance, but current supercomputing models have bottlenecks: a lot of time and energy is wasted continuously moving large chunks of data between processors, memory and storage. IBM wants to cut the amount of data that has to be moved, which it says could help systems process data up to three times faster than current models.

"When we are working with petabytes and exabytes of data, moving this amount of data is extremely inefficient and time consuming, so we have to move processing to the data. We do this by providing compute capability throughout the system hierarchy," Turek said.

IBM has built some of the world's fastest computers for decades, including the third- and fifth-fastest systems on a recent Top500 list. But the amount of data being fed to servers is outpacing the growth in supercomputing speed: networks aren't getting faster, chip clock speeds aren't increasing and data-access times aren't improving much, Turek said.

"Applications no longer just live in the classic compute microprocessors, instead application and workflow computation are distributed throughout the system hierarchy," Turek said.

IBM's execution model is proprietary, but Turek offered a simple example: reducing the size of data sets by decomposing information in storage, so that only the reduced data is moved to memory. Applied to an oil and gas workflow, which typically takes months, such a model would significantly shorten the time required to make drilling decisions.
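
Turek's oil and gas example can be sketched in the same spirit: decompose records into small summaries in the storage tier, and promote only the reduced set to memory for detailed analysis. The seismic-trace shape, the amplitude threshold and the function names below are illustrative assumptions, not IBM's proprietary workflow.

# A hedged sketch of the data-reduction idea: shrink records in the storage
# tier, then move only the reduced set to memory for detailed analysis.

import random
from statistics import mean


def decompose_in_storage(traces, threshold):
    """Runs in the storage tier: reduce each (survey_id, samples) trace to a
    small summary and keep only the ones worth a closer look."""
    summaries = []
    for survey_id, samples in traces:
        peak = max(abs(s) for s in samples)
        if peak >= threshold:                     # coarse screen near the data
            summaries.append((survey_id, peak, mean(samples)))
    return summaries                              # small enough to move to memory


def analyze_in_memory(summaries):
    """Runs on the compute node, touching only the reduced data set."""
    return sorted(summaries, key=lambda s: s[1], reverse=True)[:10]


if __name__ == "__main__":
    random.seed(0)
    traces = [(i, [random.gauss(0, 1) for _ in range(1000)]) for i in range(500)]
    shortlist = analyze_in_memory(decompose_in_storage(traces, threshold=4.0))
    print(len(shortlist), "candidate surveys promoted to memory out of", len(traces))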

"We see a hierarchy of storage and memory including nonvolatile RAM, which means much lower latency, higher bandwidths, without the requirement to move the data all the way back to central storage," Turek said.

IBM is not trying to challenge conventional computing architectures such as the von Neumann approach, in which data is pushed into a processor, operated on and pushed back into memory. Most computer systems today are built on the von Neumann architecture, which mathematician John von Neumann described in the 1940s.
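
For reference, the von Neumann pattern described above looks roughly like this in miniature: a value is loaded from memory into the processor, operated on, and written back. This is a toy illustration, not a model of any particular system.

# Toy illustration of the load-compute-store cycle the article describes.
memory = list(range(16))              # "main memory" holding the data

def von_neumann_step(addr):
    value = memory[addr]              # load: move data from memory to the processor
    result = value * value            # compute: operate on the value in the processor
    memory[addr] = result             # store: push the result back into memory

for address in range(len(memory)):
    von_neumann_step(address)

print(memory)                         # every element has been squared in place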
