AI and GPUs take centre stage at vCHEP

18 July 2021
vCHEP 2021 group photo

The 25th International Conference on Computing in High-Energy and Nuclear Physics (CHEP) gathered more than 1000 participants online from 17 to 21 May. Dubbed “vCHEP”, the event took place virtually after this year’s in-person event in Norfolk, Virginia, had to be cancelled due to the COVID-19 pandemic. Participants in 20 time zones, from Brisbane to Honolulu, tuned in to live talks, recorded sessions, lively discussions on chat apps (replacing the traditional coffee-break interactions) and special sessions linking job seekers with recruiters.

The virtual format also changed the focus of the content. Plenary speakers are usually invited, but this time the organisers called for papers of up to 10 pages and chose the plenary programme from the most interesting and innovative submissions. Just 30 could be selected from more than 200 submissions, twice as many as expected, but the outcome was a diverse programme tackling the huge challenges of data rate and event complexity at future experiments in nuclear and high-energy physics (HEP).

Artificial intelligence

So what were the hot topics at vCHEP? One standout was artificial intelligence and machine learning: more papers were submitted on this theme than on any other, showing that the field continues to innovate in this domain.

Interest in using graph neural networks for the problem of charged-particle tracking was very high, with three plenary talks. Representing the hits in a tracker as the nodes of a graph, with possible connections between hits as edges, is a very natural fit for the data that experiments produce. The network can then be trained to pick out the edges corresponding to true tracks and to reject spurious connections. The time needed to reach a good solution has improved dramatically in just a few years, and the scaling of the solution to dense environments, such as at the High-Luminosity LHC (HL-LHC), is very promising for this relatively new technique.
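To make this concrete, here is a minimal sketch of edge classification for track finding, assuming PyTorch. A full graph neural network would add message passing between neighbouring hits; the plain pairwise classifier, feature choices and threshold below are illustrative assumptions, not any experiment’s actual code.

import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Scores candidate edges (hit pairs) in a tracker graph."""
    def __init__(self, node_dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, edge_index):
        # x: (num_hits, node_dim) hit features, e.g. cylindrical (r, phi, z)
        # edge_index: (2, num_edges) indices of candidate hit pairs
        src, dst = edge_index
        pair = torch.cat([x[src], x[dst]], dim=1)
        return torch.sigmoid(self.mlp(pair)).squeeze(-1)  # per-edge track score

# Toy usage: five hits, four candidate edges; keep edges scoring above threshold.
hits = torch.randn(5, 3)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
scores = EdgeClassifier()(hits, edges)
track_edges = edges[:, scores > 0.5]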

On the simulation side, work was presented showcasing new neural-network architectures that use a “bounded information-bottleneck autoencoder” to improve training stability, replicating important features such as the way real minimum-ionising particles interact with calorimeters. ATLAS also showed off their new fast-simulation framework, which combines traditional parametric simulation with generative adversarial networks to provide better agreement with Geant4 than ever before.
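For readers new to the bottleneck idea, the following is a sketch of a plain autoencoder on calorimeter-like data, in PyTorch. The actual BIB-AE adds further loss terms and constraints; the layer sizes and toy shower batch here are assumptions for illustration only.

import torch
import torch.nn as nn

class CaloAutoencoder(nn.Module):
    def __init__(self, n_cells=256, latent=16):
        super().__init__()
        # the encoder squeezes a shower (n_cells energies) through a narrow latent space
        self.encoder = nn.Sequential(nn.Linear(n_cells, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        # the decoder reconstructs the shower from that bottleneck
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_cells))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CaloAutoencoder()
showers = torch.rand(32, 256)  # toy batch of flattened cell energies
loss = nn.functional.mse_loss(model(showers), showers)
loss.backward()  # one reconstruction step; optimiser and extra BIB-AE losses omitted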

New architectures

Machine learning is very well suited to new computing architectures such as graphics processing units (GPUs), but many other experimental-physics codes are also being rewritten to take advantage of them. IceCube are simulating photon transport in the Antarctic ice on GPUs, and presented a detailed performance analysis that led to significant recent speed-ups. Meanwhile, LHCb will introduce GPUs to their trigger farm for Run 3, and showed how much this will reduce the energy consumption per event of the high-level trigger. This will help to meet the physical constraints on power and cooling close to the detector, and is a first step towards treating HEP’s overall computing energy consumption as an important parameter.

Encouraging work on porting event generation to GPUs was also presented, particularly timely given the spiralling cost of higher-order generators for HL-LHC physics. Looking to the long-term future of these new code bases, there were investigations of porting calorimeter-simulation and liquid-argon time-projection-chamber software to different toolkits for heterogeneous programming, a topic that will become even more important as computing centres diversify their offerings.

Benchmarking and accounting for these heterogeneous resources is an important topic for the Worldwide LHC Computing Grid, and a report from the HEPiX Benchmarking group pointed the way to evaluating modern CPUs and GPUs on a variety of real-world HEP applications. Staying with facilities, R&D was presented on delivering reliable and affordable storage for HEP based on CephFS and the CERN-developed EOS storage system, which will be critical to providing the massive storage capacity needed in the future. The network between facilities is likely to become dynamically configurable, and how best to exploit machine learning for traffic prediction is being investigated.

Quantum computing

vCHEP was also the first edition of CHEP with a dedicated parallel session on quantum computing. Meshing well with CERN’s Quantum Initiative, the session showed how seriously the community is taking investigations of how to use this technology in the future. Interesting results were highlighted on using quantum support-vector machines for signal/background classification in B-meson decays.
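As a rough illustration of the quantum-kernel idea behind such classifiers, here is a sketch of a support-vector machine whose kernel is the fidelity between classically simulated quantum feature states, assuming NumPy and scikit-learn. The angle-encoding feature map and the toy dataset are assumptions for illustration, not the method or data of the presented results.

import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    # Encode a 2D feature vector as a 2-qubit product state (toy angle encoding).
    qubits = [np.array([np.cos(xi / 2), np.sin(xi / 2)]) for xi in x]
    return np.kron(qubits[0], qubits[1])  # 4-component state vector

def quantum_kernel(A, B):
    # Kernel entry = fidelity |<phi(a)|phi(b)>|^2 between encoded states.
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

# Toy events: two features each; label 1 = "signal", 0 = "background".
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(200, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)

clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print("train accuracy:", clf.score(quantum_kernel(X, X), y))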

On a meta note, presentations also explored how to adapt outreach events to a virtual setup, to keep up public engagement during lockdown, and how best to use online software training to equip the future generation of physicists with the advanced software skills they will need.

Was vCHEP a success? So far, the feedback is overwhelmingly positive. It was a showcase for the excellent work going on in the field, and 11 of the best papers will be published in a special edition of Computing and Software for Big Science — another first for CHEP in 2021.
