

MIT invention to speed up data centers should cheer developers

Stephen Lawson | July 21, 2014
Fastpass uses parallel processing to eliminate the need for complicated network queues, researchers say.

The current decentralized way of forwarding packets allows for vast networks with little oversight. But because traffic is unpredictable, network designers have to either invest in fat enough pipes to carry the highest possible load or put a queue in each switch to hold packets until they can go out. Usually, it's a balancing act between the two.

"It's very hard to figure out how big the queues need to be. ... This has been a difficult question since 1960," Balakrishnan said. Making them too big can slow performance, while making them too small can lead to dropped packets and time-consuming retransmissions.

Fastpass assigns a transmission time and selects a path for each packet, and it can do so faster than a typical switch can, according to MIT. It is so much faster that even though every packet must first travel over the network to the arbiter, a trip that may take about 40 microseconds, the system still speeds things up overall.
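To make the arbiter idea concrete, here is a minimal sketch of the host-to-arbiter exchange the article describes. This is an illustrative toy, not the real Fastpass protocol: the class and method names, the message format, and the slot-search and path-spreading logic are all assumptions for demonstration.

```python
class Arbiter:
    """Toy centralized arbiter: grants each request the earliest timeslot
    in which both endpoints are idle, plus a path through the fabric."""

    def __init__(self, paths):
        self.paths = paths   # candidate paths (hypothetical names)
        self.busy = {}       # slot number -> set of endpoints already scheduled

    def request(self, src, dst):
        # Find the first slot where neither endpoint is already transmitting.
        slot = 0
        while src in self.busy.get(slot, set()) or dst in self.busy.get(slot, set()):
            slot += 1
        self.busy.setdefault(slot, set()).update((src, dst))
        # Naive path spreading: rotate through the available paths.
        path = self.paths[slot % len(self.paths)]
        return slot, path

arb = Arbiter(paths=["spine-1", "spine-2"])
print(arb.request("host-A", "host-B"))  # -> (0, 'spine-1')
print(arb.request("host-A", "host-C"))  # host-A busy in slot 0 -> (1, 'spine-2')
```

A host would send its demand to the arbiter, wait for the (timeslot, path) grant, and transmit exactly then, which is why no switch queue needs to absorb contention.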

With that kind of speed, there's essentially no need for queues. In experiments in a Facebook data center, Fastpass cut the average length of a queue by 99.6 percent, the researchers say. Latency, or the delay between requesting and receiving an item, went from 3.56 microseconds to 0.23 microseconds.

In the test, an arbiter with just eight cores was able to make decisions for a network carrying 2.2 terabits of data per second, which is equal to a 2,000-server data center with gigabit-speed links running at full speed, MIT said. The arbiter was linked to a twin system for redundancy.

Instead of having all eight cores collaborate on assignments for a single time slot, Balakrishnan's team gave each core its own slot: one core fills the next time slot while another works two slots ahead, a third three slots ahead, and so on.

"You want to allocate for many time slots into the future, in parallel," Balakrishnan said. Each core looks through the full list of transmission requests, assigns one, and modifies the list, and all the cores can work on the problem simultaneously.

Fastpass, or software like it, could be implemented in dedicated server clusters or even built into specialized chips, Balakrishnan said. The researchers plan to release the Fastpass software as open source, though they warned it's not production-ready code.

"Anyone with a high-speed data center should be interested," he said.


