CHEP ’09: clouds, data, Grids and the LHC

15 July 2009

Alan Silverman reports from the international conference held in Prague.

The CHEP series of conferences is held every 18 months and covers the wide field of computing in high-energy and nuclear physics. CHEP ’09, the 17th in the series, was held in Prague on 21–27 March and attracted 615 attendees from 41 countries. It was co-organized by the Czech academic-network operator CESNET, Charles University in Prague (Faculty of Mathematics and Physics), the Czech Technical University, and the Institute of Physics and the Nuclear Physics Institute of the Czech Academy of Sciences. Throughout the week some 500 papers and posters were presented. As usual, given the CHEP tradition of devoting the morning sessions to plenary talks and limiting the number of afternoon parallel sessions to six or seven, the organizers found themselves short of capacity for oral presentations. They received 500 offers for the 200 programme slots, so the remainder were shown as posters, split into three full-day sessions of around 100 posters each. The morning coffee break was extended specifically to allow time to browse the posters and discuss them with their authors.

A large number of the presentations related to some aspect of computing for the upcoming LHC experiments, but there was also a healthy number of contributions from experiments elsewhere in the world, including Brookhaven National Laboratory, Fermilab and SLAC (where BaBar is still analysing its data although the experiment has stopped data-taking) in the US, KEK in Japan and DESY in Germany.

Data and performance

The conference was preceded by a Worldwide LHC Computing Grid (WLCG) Workshop, summarized at CHEP ’09 by Harry Renshall from CERN. There was a good mixture of Tier-0, Tier-1 and Tier-2 representatives among the 228 people present at the workshop, which began with a review of each LHC experiment’s plans. All of these include more stress-testing in some form or other before the restart of the LHC. The transition from the Enabling Grids for E-sciencE project to the European Grid Initiative is clearly an issue, as is the lack of a winter shutdown in the LHC plans. There was discussion on whether or not there should be a new “Computing Challenge” to test the readiness of the WLCG. The eventual decision was “yes”, but to rename it STEP ’09 (Scale Testing for the Experimental Programme), schedule it for May or June 2009 and concentrate on tape recall and event processing. The workshop concluded that ongoing emphasis should be put on stability, preparing for a 44-week run and continuing the good work that has now started on data analysis.

Sergio Bertolucci, CERN’s director for research and scientific computing, gave the opening talk of the conference. He reviewed the LHC start-up and initial running, the steps being taken for the repairs following the incident of 19 September 2008 as well as to avoid any repetition, and the plans for the restart. He also described the work currently being done at Fermilab and how CERN will learn from this in the search for the Higgs boson. Les Robertson of CERN, who led the WLCG project through its first six years, discussed how we got here and what will come next. A very simple Grid was first presented at CHEP in Padova in 2000, leading Robertson to label the 2000s as the decade of the Grid. Thanks to the development and adoption of standards, Grids have now developed and matured, with an increasing number of sciences and industrial applications making use of them. However, Robertson thinks that we should be looking at locating Grid centres where energy is cheap, using virtualization to share processing power better, and starting to look at “clouds”: what are they in comparison to Grids?

The theme of using clouds, which enable access to leased computing power and storage capacity, came up several times in the meeting. For example, the Belle experiment at KEK is experimenting with the use of clouds for Monte Carlo simulations in its planning for SuperBelle; and the STAR experiment at Brookhaven is also considering clouds for Monte Carlo production. Another of Robertson’s suggestions for future work, “virtualization”, was also one of the most common topics in terms of contributions throughout the week, with different uses cropping up time and again in the conference’s various streams.

Other notable plenary talks included those by Neil Geddes, Kors Bos and Ruth Pordes. Geddes, of the UK Science and Technology Facilities Council Rutherford Appleton Laboratory, asked “can WLCG deliver?” He deduced that it can, and in fact does, but that there are many challenges still to face. Bos, of Nikhef and the ATLAS collaboration, compared the different computing approaches across the LHC experiments, pointing out similarities and contrasts. Fermilab’s Pordes, who is executive director of the Open Science Grid, described work in the US on evolving Grids to make them easier to use and more accessible to a wider audience of researchers and scientists.

The conference had a number of commercial sponsors, in particular IBM, Intel and Sun Microsystems, and part of Wednesday morning was devoted to speakers from these corporations. IBM used its slot to describe a machine that aims to offer cooler, denser and more efficient computing power. Intel focused on its effort to get more computing for less energy, noting work done under the openlab partnership with CERN (CERN openlab enters phase three). Recognizing that power is constraining growth in every part of computing, the company hopes to address this partly by increasing computing-energy efficiency (denser packaging, more cores, more parallelism, etc.). The speaker from Sun presented ideas on building state-of-the-art data centres. He claimed that raised floors are dead and instead proposed “containers” or a similar “pod architecture” with built-in cooling and a modular structure connected to overhead, hot-pluggable busways. Another issue is building “green” centres; he cited solar farms in Abu Dhabi as well as a scheme to use free ocean-cooling for floating ship-based computing centres.

It is impossible to summarize in a short report the seven streams of material presented in the afternoon sessions, but some highlights deserve to be mentioned. The CERN-developed Indico conference tool was presented with statistics showing that it has been adopted by more than 40 institutes and manages material for an impressive 60,000 workshops, conferences and meetings. The 44 Grid-middleware talks and 76 poster presentations can be summarized as follows: production Grids are here; Grid middleware is usable and is being used; standards are evolving but have a long way to go; and the use of network bandwidth is keeping pace with technology. From the stream of talks on distributed processing and analysis, the clear message is that much work has been done on user-analysis tools since the last CHEP, with some commonalities between the LHC experiments. Data-management and access protocols for analysis are a major concern and the storage fabric is expected to be stressed when the LHC starts running.

Dario Barberis of Genova/INFN and ATLAS presented the conference summary. He had searched for the most common words in the 500 submitted abstracts and the winner was “data”, sometimes linked with “access”, “management” or “analysis”. He noted that users want simple access to data, so the computing community needs to provide easy-to-use tools that hide the complexity of the Grid. Of course “Grid” was another of the most common words, but the word “cloud” did not appear in the top 100 although clouds were much discussed in plenary and parallel talks. For Barberis, a major theme was “performance” – at all levels, from individual software codes to global Grid performance. He felt that networking is a neglected but important topic (for example the famous digital divide and end-to-end access times). His conclusion was that performance will be a major area of work in the future as well as the major topic at the next CHEP in Taipei, on 17–22 October 2010.
