
Research community looks to SDN to help distribute data from the Large Hadron Collider

John Dix | May 27, 2015
Most advanced research and education networks have transitioned to 100 Gbps, but as fast as the core networks advance, the capabilities at the edge grow even faster.

When the Large Hadron Collider (LHC) starts back up in June, the data collected and distributed worldwide for research will surpass the 200 petabytes exchanged among LHC sites the last time the collider was operational. Network challenges at this scale are different from what enterprises typically confront, but Harvey Newman, Professor of Physics at Caltech, who has been a leader in global scale networking and computing for the high energy physics community for the last 30 years, and Julian Bunn, Principal Computational Scientist at Caltech, hope to introduce a technology to this rarified environment that enterprises are also now contemplating: Software Defined Networking (SDN). Network World Editor in Chief John Dix recently sat down with Newman and Bunn to get a glimpse inside the demanding world of research networks and the promise of SDN.

Can we start with an explanation of the different players in your world?

NEWMAN: My group is a high energy physics group with a focus on the Large Hadron Collider (LHC) program that is about to start data taking at a higher energy than ever before, but over the years we've also had responsibility for the development of international networking for our field. So we deal with many teams of users located at sites throughout the world, as well as individuals and groups that are managing data operations, and network organizations like the Energy Sciences Network, Internet2, and GEANT (in addition to the national networks in Europe and the regional networks of the United States and Brazil).

For the last 12 years or so we've developed the concept of including networks along with computing and storage as an active part of global grid systems, and had a number of projects along these lines. Working together with some of the networks, like Internet2 and ESnet, we were able to use dynamic circuits to support a set of flows or dataset transfers and give them priority, with some level of guaranteed bandwidth.
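
For readers unfamiliar with dynamic circuits, the sketch below illustrates the shape of such a reservation: two endpoints, a guaranteed rate, and a time window. The class, field names, and endpoints here are hypothetical placeholders for illustration, not the API of any real provisioning system such as ESnet's OSCARS.

# Illustrative only: the general shape of a dynamic-circuit reservation request.
# Class, fields, and endpoint names are hypothetical, not a real provisioning API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src_endpoint: str       # border router port at the sending site
    dst_endpoint: str       # border router port at the receiving site
    bandwidth_gbps: float   # guaranteed rate reserved for the dataset transfer
    start: datetime
    end: datetime

# Ask for 40 Gbps between two hypothetical LHC sites for a 12-hour transfer
# window, giving the dataset transfer priority without modifying the
# application that performs the transfer.
request = CircuitRequest(
    src_endpoint="site-a-border:eth7/1",
    dst_endpoint="site-b-border:eth3/2",
    bandwidth_gbps=40.0,
    start=datetime.utcnow(),
    end=datetime.utcnow() + timedelta(hours=12),
)
print(f"Reserve {request.bandwidth_gbps} Gbps: "
      f"{request.src_endpoint} -> {request.dst_endpoint}")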

So that was a useful thing to do, and it's been used to some extent. But that approach is not generalizable, in that not everybody can slice up the network into pieces and assign themselves guaranteed bandwidth. And then you face the issue of how well they're using the bandwidth they reserved, and whether we would be better off assigning large data flows to slices or just doing things in a more traditional way using a shared general purpose network.

So I presume that's where the interest in SDN came in?

NEWMAN: What's happening with SDN is a couple of things. We saw the possibility to intercept selected sets of packets on the network side and assign flow rules to them, so we don't really have to interact much with the application. There is some interaction but it's not very extensive, and that allows us to identify certain flows without requiring that a lot of the applications be changed in any pervasive way. The other thing is, beyond circuits you have different classes of flows to load-balance, and you want to prevent saturating any sector of the infrastructure. In other words, we want mechanisms so our community can handle these large flows without impeding what other people are doing. And in the end, once the basic mechanisms are in place, we want to apply machine learning methods to optimize the LHC experiments' distributed data analysis operations.
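
To make the flow-rule idea concrete, here is a minimal, hypothetical sketch (not the Caltech team's actual controller) of how an OpenFlow controller can recognize a selected class of packets on the network side and steer it onto its own queue, without changing the application generating the traffic. It uses the Ryu OpenFlow 1.3 API; the subnet and queue number are placeholder assumptions, and the queue itself would have to be configured on the switch separately.

# Hypothetical sketch using the Ryu controller framework and OpenFlow 1.3.
# The subnet and queue number are assumptions; a real deployment would learn
# which flows matter from the experiments' data management systems.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

LHC_SUBNET = ('192.168.100.0', '255.255.255.0')  # placeholder storage subnet
SCIENCE_QUEUE = 1                                # placeholder pre-configured queue

class LargeFlowSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofproto, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4 traffic sourced from the placeholder data-transfer subnet.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=LHC_SUBNET)

        # Put matching packets on a dedicated queue, then forward them through
        # the switch's normal pipeline, so bulk transfers cannot starve other traffic.
        actions = [parser.OFPActionSetQueue(SCIENCE_QUEUE),
                   parser.OFPActionOutput(ofproto.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))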

 
