Research community looks to SDN to help distribute data from the Large Hadron Collider

John Dix | May 27, 2015
Most advanced research and education networks have transitioned to 100 Gbps, but as fast as the core networks progress, the capabilities at the edge progress even faster.

So the experiments at the Large Hadron Collider are the data sources and the networks are used to distribute this data to users for analysis?

Right. Data is taken by the experiments, processed for the first time at CERN, and then distributed to the Tier 1 centers over a sort of star network with some dedicated crosslinks, for further processing and analysis by hundreds of physics teams. Once the data is at the Tier 1s it can be further distributed to the Tier 2s and Tier 3s, and once there, any site can act as a source of data to be accessed or transferred to another site for further analysis. It is important to realize that the software base of the experiments, each consisting of several million lines of code, is under continual development. The physics groups keep improving their algorithms and their understanding and calibration of the particle detector systems used to take the data, with the goal of optimally separating the new physics "signals" from the "backgrounds" that result from physics processes we already understand.
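
To make the tiered flow a bit more concrete, here is a small Python sketch of the distribution pattern described above; the site names and counts are illustrative placeholders, not the actual LHC topology.

```python
# Conceptual sketch (not project code): the tiered LHC data distribution
# described above. Site names below are illustrative placeholders.

TIERS = {
    "Tier0": ["CERN"],                          # first-pass processing
    "Tier1": ["FNAL", "BNL", "KIT", "IN2P3"],   # a few of the Tier 1 centers
    "Tier2": ["Caltech", "Wisconsin", "DESY"],  # stand-ins for hundreds of Tier 2/3 sites
}

# CERN pushes processed data to the Tier 1s over a star topology,
# with some dedicated crosslinks among the Tier 1s.
distribution = {("CERN", t1) for t1 in TIERS["Tier1"]}

# From the Tier 1s, data fans out to Tier 2/3 sites ...
distribution |= {(t1, t2) for t1 in TIERS["Tier1"] for t2 in TIERS["Tier2"]}

# ... and once a dataset is resident anywhere, any site can serve it to any
# other site, which is what makes Tier 2/3 traffic patterns so complex.
any_to_any = {(a, b) for a in TIERS["Tier2"] for b in TIERS["Tier2"] if a != b}
print(f"{len(distribution)} hierarchical paths, "
      f"{len(any_to_any)} peer paths among just {len(TIERS['Tier2'])} Tier 2 sites")
```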

The data distribution from CERN to the Tier 1s is relatively straightforward, but data distribution to and among the Tier 2s and Tier 3s at sites throughout the world is complex. That is why we invented the LHCONE concept in 2010, together with CERN: to improve operations involving the Tier 2 and Tier 3 sites, and allow them to make better use of their computing and storage resources in order to accelerate the progress of the LHC program.

To understand the scale, and the level of challenge, you have to realize that more than 200 petabytes of data were exchanged among the LHC sites during the past year, and the prospect is for even greater data transfer volumes once the next round of data taking starts at the LHC this June.
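
For a sense of what that volume implies, a rough back-of-the-envelope calculation (mine, not the interviewee's) converts 200 petabytes per year into an average sustained rate:

```python
# 200 PB moved in a year corresponds to roughly 50 Gbps of sustained
# aggregate traffic, before accounting for bursts around new data releases.
petabyte = 1e15                      # bytes (decimal petabyte)
seconds_per_year = 365 * 24 * 3600

volume_bits = 200 * petabyte * 8
avg_rate_gbps = volume_bits / seconds_per_year / 1e9
print(f"Average aggregate rate: {avg_rate_gbps:.0f} Gbps")   # ~51 Gbps
```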

The first thing that was done in LHCONE was to create a virtual routing and forwarding fabric (VRF). This was something proposed and implemented by all the research and education networks, including Internet2, ESnet, GEANT, and some of the leading national networks in Europe and Asia, soon to be joined by Latin America.
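
As a rough illustration of what the VRF fabric does, the toy Python sketch below keeps a separate routing table per virtual network. The prefixes and peer names are invented for the example, and the lookup is a simplified exact match rather than real longest-prefix routing.

```python
# Conceptual sketch only: a VRF keeps a separate routing table per virtual
# network, so LHCONE traffic is routed independently of general R&E traffic.

vrfs = {
    "default": {                       # general-purpose routing table
        "0.0.0.0/0": "upstream-transit",
    },
    "lhcone": {                        # LHCONE VRF: only participating sites' prefixes
        "192.0.2.0/24": "peer-esnet",
        "198.51.100.0/24": "peer-geant",
    },
}

def lookup(vrf_name: str, prefix: str) -> str:
    """Resolve a destination within one VRF's table (exact-match toy lookup)."""
    table = vrfs[vrf_name]
    return table.get(prefix, table.get("0.0.0.0/0", "unreachable"))

print(lookup("lhcone", "192.0.2.0/24"))   # routed via the LHCONE peering
print(lookup("default", "192.0.2.0/24"))  # falls back to the general table
```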

That has really improved access; we can see that data flow has improved. But it was a very complex undertaking, and it is hard to scale because of all of these special routing tables that have to be maintained. Now the next part of LHCONE, which was the original idea, is a set of point-to-point circuits.

You remember I talked about dynamic circuits and assigning flows to circuits with bandwidth guarantees. A member of our team at Caltech, together with a colleague at Princeton, has developed an application that sets up a circuit across LHCONE and then assigns a dataset transfer (consisting of many files, typically totaling from 1 to 100 terabytes) to the circuit.
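
The sketch below illustrates that workflow in Python under stated assumptions: request_circuit and assign_transfer are hypothetical placeholders standing in for whatever interface the Caltech/Princeton application actually exposes, and the site names, bandwidth, and dataset size are just examples.

```python
# A minimal sketch of the circuit-plus-transfer workflow described above.
# The functions here are hypothetical stand-ins, not the real tool's API.

from dataclasses import dataclass

@dataclass
class Circuit:
    src: str
    dst: str
    bandwidth_gbps: float
    circuit_id: str

def request_circuit(src: str, dst: str, bandwidth_gbps: float) -> Circuit:
    """Stand-in for a dynamic-circuit request across LHCONE with a bandwidth guarantee."""
    return Circuit(src, dst, bandwidth_gbps,
                   circuit_id=f"{src}-{dst}-{bandwidth_gbps:g}G")

def assign_transfer(circuit: Circuit, files: list[str], dataset_tb: float) -> None:
    """Bind a multi-file dataset transfer (typically 1-100 TB) to the reserved circuit."""
    hours = dataset_tb * 8e12 / (circuit.bandwidth_gbps * 1e9) / 3600
    print(f"Transferring {len(files)} files ({dataset_tb} TB) on {circuit.circuit_id}: "
          f"~{hours:.1f} h at {circuit.bandwidth_gbps} Gbps guaranteed")

# Example: move a 20 TB dataset over a 10 Gbps guaranteed circuit.
circuit = request_circuit("Caltech-Tier2", "Princeton-Tier3", bandwidth_gbps=10)
assign_transfer(circuit, files=[f"data_{i}.root" for i in range(200)], dataset_tb=20)
```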

 
