Stanford University researchers have built a multi-layered "high-rise" chip that could significantly outperform traditional computer chips, taking on the hefty workloads that will be needed for the Internet of Things and big data.
Using nanotechnology, the new chips are built with layers of processing on top of layers of memory, greatly cutting down on the time and energy typically needed to move information from memory to processing and back.
Max Shulaker, a researcher on the project and a Ph.D. candidate in Stanford's Department of Electrical Engineering, said the team has built a four-layer chip but could easily see building a 100-layer chip if that were needed.
"The slowest part of any computer is sending information back and forth from the memory to the processor and back to the memory. That takes a lot of time and a lot of energy," Shulaker told Computerworld. "If you look at where the new exciting apps are, it's with big data... For these sorts of new applications, we need to find a way to handle this big data."
The conventional separation of memory and logic is not well-suited for these types of heavy workloads. With traditional chip design, information is passed from the memory to the processor for computing, and then it goes back to the memory to be saved again.
In relative terms, that data movement takes far more energy and time than the computation itself.
"People talk about the Internet of Things, where we're going to have millions and trillions of sensors beaming information all around," said Shulaker. "You can beam all the data to the cloud to organize all the data there, but that's a huge data deluge. You need [a chip] that can process on all this data... You want to make sense of this data before you send it off to the cloud."
That, he noted, would make working with the cloud, as well as with the Internet of Things, more efficient.
The new high-rise chip is based on three emerging technologies, according to Stanford.
The researchers, led by Subhasish Mitra, a Stanford associate professor of electrical engineering and computer science, and H.S. Philip Wong, a professor in Stanford's School of Engineering, used carbon nanotube transistors instead of silicon and replaced typical memory with resistive random-access memory (RRAM) or spin-transfer torque magnetic random-access memory (STT-RAM). Both use less power and are more efficient than traditional memory systems.
The third new technique is to build the logic and memory technologies in layers that sit on top of each other in what scientists describe as "high-rise" structures.
"The connectivity between the layers increases by three orders of magnitude, a thousand-times benefit in the bandwidth of how much data you can move back and forth," Shulaker said. "For all of these Internet of Things applications, all of them would run much, much more efficiently and much, much faster. For way less energy, you'd be able to do way more work."