
Research community looks to SDN to help distribute data from the Large Hadron Collider

John Dix | May 27, 2015
Most advanced research and education networks have transitioned to 100 Gbps, but as fast as the core networks progress, the capabilities at the edge progress even faster.

Once set up, we quickly achieved more than 1 Tbps on the conference floor and about 400 Gbps over the wide-area networks. The whole facility was set up, operated with all the SDN-related aspects mentioned above, and torn down and packed for shipment in just over one intense week.

The exercise was a great success, and one that we hope will show the way toward next-generation, extreme-scale global systems that are intelligently managed and efficiently configured on the fly.

After Supercomputing in 2014, we set up a test bed, and Julian has started to work with a number of different SDN-capable switches, including Brocade's SDN-enabled MLXe router at Caltech and switches at other sites. So we're progressing, and we expect to go from testing to preproduction and, we hope, into production in the next year or so.

This is just one cycle in an ongoing development effort: keeping pace with expanding needs, working at the limits of the latest technologies, and developing new concepts for dealing with data on a massive scale in each area. One target is the Large Hadron Collider, but projects in astrophysics, climatology and genomics, among other fields, could no doubt benefit from these developments as well.

So the goal of all these efforts is to enable users to set up large flows using SDN?

NEWMAN: Yes. The first users are data managers with very large volumes of data, from tens of terabytes to petabytes, who need to transfer data in an organized way. We can assign those flows to circuits, and give them dedicated bandwidth while the transfers are in progress, to make the task of transferring the data shorter and more predictable.
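To make the idea concrete, here is a minimal sketch, in Python, of how a data manager's bulk transfer might be turned into a circuit request to an SDN controller. The controller URL, endpoint, and JSON fields are hypothetical illustrations, not the actual interface used by the Caltech or LHC teams.

```python
# Illustrative sketch only: the controller URL, endpoint, and JSON fields
# below are hypothetical, not the real API used by the LHC/Caltech teams.
import requests

CONTROLLER = "https://sdn-controller.example.org/api"  # hypothetical

def request_dedicated_circuit(src_site, dst_site, volume_tb, deadline_hours):
    """Ask the controller to pin a large transfer to a circuit with enough
    reserved bandwidth to finish before the deadline."""
    # Rough bandwidth needed: volume (terabytes -> bits) / deadline (seconds)
    gbps_needed = (volume_tb * 8e12) / (deadline_hours * 3600) / 1e9

    resp = requests.post(f"{CONTROLLER}/circuits", json={
        "src": src_site,
        "dst": dst_site,
        "reserved_gbps": round(gbps_needed, 1),
        "purpose": "bulk-data-transfer",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["circuit_id"]  # hypothetical response field

# Example: move 500 TB between two sites within 24 hours (~46 Gbps reserved)
# circuit = request_dedicated_circuit("site-A", "site-B", 500, 24)
```

The point of the sketch is simply that the transfer's size and deadline determine the bandwidth the circuit needs to reserve, which is what makes the transfer time predictable.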

Then there are thousands of physicists who access and process the data remotely, and repeatedly, as they continue to improve their software and analysis methods in the search for the next round of discoveries. This large community also uses dynamic caching methods, where chunks of the data are brought to the user so that the processing power available locally to each group of users can be put to good use. We'll probably treat each research team, or a set of research teams in a given region of the world, as a group, in order to reduce the overall complexity of an already complex global undertaking.
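The dynamic caching Newman describes can be pictured with a short sketch: a chunk of a remote dataset is copied to local storage the first time a group needs it and reused afterward. The directory layout, chunk naming, and the local copy standing in for a wide-area transfer are all illustrative assumptions.

```python
# Minimal sketch of the dynamic-caching idea: chunks of a remote dataset are
# pulled to local storage on first use and reused after that. Paths and the
# fetch mechanism are illustrative assumptions, not the project's software.
import os
import shutil

LOCAL_CACHE = "/data/cache"          # hypothetical local cache directory
REMOTE_STORE = "/remote/lhc-data"    # stand-in for remote storage

def get_chunk(dataset, chunk_id):
    """Return a local path to the requested chunk, fetching it only once."""
    local_path = os.path.join(LOCAL_CACHE, dataset, f"chunk-{chunk_id}")
    if not os.path.exists(local_path):
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        # In a real system this would be a wide-area transfer, possibly over
        # a dedicated circuit; a local copy stands in for it here.
        shutil.copy(os.path.join(REMOTE_STORE, dataset, f"chunk-{chunk_id}"),
                    local_path)
    return local_path
```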

So some folks will have direct access to the controller while others will have to make requests of you folks?

NEWMAN: People are authorized once they have enough data to deal with. You see, there's a scale-matching problem. Given the throughput we deal with, if you have less than, let's say, a terabyte of data, it hardly matters. If I have a data center with tens to hundreds of terabytes to transfer at a time, there would be some interaction between the data manager side and the network side. The data manager can make a request, "I've got this data to transfer from A to B," and the network side can use a set of controllers to help manage the flows and see that the entire set of data arrives in an acceptable time.
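A rough sketch of that scale-matching rule, under assumed numbers: requests below a cutoff simply go over the general-purpose network, while data-center-scale requests are handed to the network side so its controllers can manage the flows. The cutoff value and the two handling modes are hypothetical, chosen only to mirror the thresholds Newman mentions.

```python
# Hypothetical sketch of the scale-matching rule described above; the cutoff
# and handling modes are assumptions, not project policy.

MIN_MANAGED_TB = 10  # assumed cutoff, between the ~1 TB "hardly matters"
                     # scale and the tens-of-terabytes data-center scale

def handle_transfer_request(src, dst, volume_tb):
    """Decide whether a transfer is big enough to involve the controllers."""
    if volume_tb < MIN_MANAGED_TB:
        # Small at these speeds: send it over the general-purpose network
        # without any interaction with the network side.
        return {"mode": "best-effort", "src": src, "dst": dst}
    # Data-center-scale request: the network side uses its controllers to
    # manage the flows and see that the whole set arrives in acceptable time.
    return {"mode": "managed-circuit", "src": src, "dst": dst,
            "volume_tb": volume_tb}

# Example: a 300 TB transfer goes the managed route.
# handle_transfer_request("site-A", "site-B", 300)
```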

 
