Topics

How to unfold with AI

Open-science unfolding

All scientific measurements are affected by the limitations of measuring devices. To make a fair comparison between data and a scientific hypothesis, theoretical predictions must typically be smeared to approximate the known distortions of the detector. Data is then compared with theory at the level of the detector’s response. This works well for targeted measurements, but the detector simulation must be reapplied to the underlying physics model for every new hypothesis.

The alternative is to try to remove detector distortions from the data, and compare with theoretical predictions at the level of the theory. Once detector effects have been “unfolded” from the data, analysts can test any number of hypotheses without having to resimulate or re-estimate detector effects – a huge advantage for open science and data preservation that allows comparisons between datasets from different detectors. Physicists without access to the smearing functions can only use unfolded data.

No simple task

But unfolding detector distortions is no simple task. If the mathematical problem is solved through a straightforward inversion, using linear algebra, noisy fluctuations are amplified, resulting in large uncertainties. Some sort of “regularisation” must be imposed to smooth the fluctuations, but algorithms vary substantively and none is preeminent. Their scope has remained limited for decades. No traditional algorithm is capable of reliably unfolding detector distortions from data relative to more than a few observables at a time.

In the past few years, a new technique has emerged. Rather than unfolding detector effects from only one or two observables, it can unfold detector effects from multiple observables in a high-dimensional space; and rather than unfolding detector effects from binned histograms, it unfolds detector effects from an unbinned distribution of events. This technique is inspired by both artificial-intelligence techniques and the uniquely sparse and high-dimensional data sets of the LHC.

An ill-posed problem

Unfolding is used in many fields. Astronomers unfold point-spread functions to reveal true sky distributions. Medical physicists unfold detector distortions from CT and MRI scans. Geophysicists use unfolding to infer the Earth’s internal structure from seismic-wave data. Economists attempt to unfold the true distribution of opinions from incomplete survey samples. Engineers use deconvolution methods for noise reduction in signal processing. But in recent decades, no field has had a greater need to innovate unfolding techniques than high-energy physics, given its complex detectors, sparse datasets and stringent standards for statistical rigour.

In traditional unfolding algorithms, analysers first choose the quantity they are interested in measuring. An event generator then creates a histogram of the true values of this observable for a large sample of simulated events. Next, a Monte Carlo simulation of the detector response accounts for noise, backgrounds, acceptance effects, reconstruction errors, misidentification and energy smearing. A matrix is constructed that transforms the histogram of the true values of the observable into the histogram of detector-level events. Finally, analysts “invert” the matrix and apply it to data, to unfold detector effects from the measurement.
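In code, this pipeline is short enough to sketch end to end. The following is a minimal toy illustration, not any experiment’s actual software: a toy generator and Gaussian smearing stand in for the event generator and detector simulation, a response matrix is built from the simulated truth–reco pairs, and naive inversion is applied to pseudo-data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "event generator": true values of one observable for many simulated events
truth_mc = rng.exponential(scale=10.0, size=200_000)
# Toy "detector simulation": Gaussian smearing of each true value
reco_mc = truth_mc + rng.normal(0.0, 2.0, size=truth_mc.size)

bins = np.linspace(0.0, 40.0, 21)

# Response matrix R[i, j]: probability for an event in truth bin j
# to be reconstructed in detector-level bin i
counts_2d, _, _ = np.histogram2d(reco_mc, truth_mc, bins=[bins, bins])
response = counts_2d / counts_2d.sum(axis=0, keepdims=True)

# Pseudo-data: an independent toy sample, smeared and Poisson-fluctuated
truth_data = rng.exponential(scale=10.0, size=20_000)
reco_data = truth_data + rng.normal(0.0, 2.0, size=truth_data.size)
data_hist = rng.poisson(np.histogram(reco_data, bins=bins)[0])

# Naive unfolding: "invert" the response matrix and apply it to the data.
# The result typically shows large, oscillating bin-to-bin fluctuations,
# which is exactly why regularisation is needed in practice.
unfolded_naive = np.linalg.solve(response, data_hist.astype(float))
```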

How to unfold traditionally

Diverse algorithms have been invented to unfold distortions from data, with none yet achieving preeminence.

• Developed by Soviet mathematician Andrey Tikhonov in the late 1940s, Tikhonov regularisation (TR) frames unfolding as a minimisation problem with a penalty term added to suppress fluctuations in the solution.

• In the 1950s, the physicist Edwin Jaynes took inspiration from information theory to seek maximum-entropy solutions, which minimise bias beyond the constraints imposed by the data.

• Between the 1960s and the 1990s, high-energy physicists increasingly drew on the linear algebra of 19th-century mathematicians Eugenio Beltrami and Camille Jordan to develop singular value decomposition as a pragmatic way to suppress noisy fluctuations.

• In the 1990s, Giulio D’Agostini and other high-energy physicists developed iterative Bayesian unfolding (IBU) – a technique similar to Lucy–Richardson deconvolution, which was developed independently in astronomy in the 1970s. An explicitly probabilistic approach well suited to complex detectors, IBU may be considered a forerunner of the neural-network-based technique described in this article.

IBU and TR are the most widely used approaches in high-energy physics today, with the RooUnfold tool started by Tim Adye serving countless analysts.

At this point in the analysis, the ill-posed nature of the problem presents a major challenge. A simple matrix inversion seldom suffices as statistical noise produces large changes in the estimated input. Several algorithms have been proposed to regularise these fluctuations. Each comes with caveats and constraints, and there is no consensus on a single method that outperforms the rest (see “How to unfold traditionally” panel).
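As one concrete example, the iterative Bayesian update described in the panel can be written in a few lines. This is a minimal sketch rather than any experiment’s implementation, and it reuses the `response`, `bins`, `truth_mc` and `data_hist` arrays from the toy example above; the number of iterations plays the role of the regularisation strength.

```python
import numpy as np

def ibu(response, data_hist, prior, n_iterations=4):
    """Iterative Bayesian unfolding (a D'Agostini / Richardson-Lucy style update).

    response[i, j] = P(reco bin i | truth bin j); prior is a truth-level MC histogram.
    Stopping after a few iterations regularises the result: more iterations reduce
    dependence on the prior but amplify statistical fluctuations.
    """
    t = prior.astype(float).copy()
    for _ in range(n_iterations):
        folded = response @ t                    # expected detector-level counts
        ratio = np.where(folded > 0, data_hist / folded, 0.0)
        t = t * (response.T @ ratio)             # reweight each truth bin
    return t

prior = np.histogram(truth_mc, bins=bins)[0]
unfolded_ibu = ibu(response, data_hist, prior)
```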

While these approaches have been successfully applied to thousands of measurements at the LHC and beyond, they have limitations. Histogramming is an efficient way to describe the distributions of one or two observables, but the number of bins grows exponentially with the number of parameters, restricting the number of observables that can be simultaneously unfolded. When unfolding only a few observables, model dependence can creep in, for example due to acceptance effects, and if another scientist wants to change the bin sizes or measure a different observable, they will have to redo the entire process.

New possibilities

AI opens up new possibilities for unfolding particle-physics data. Choosing good parameterisations in a high-dimensional space is difficult for humans, and binning is a way to limit the number of degrees of freedom in the problem, making it more tractable. Machine learning (ML) offers flexibility due to the large number of parameters in a deep neural network. Dozens of observables can be unfolded at once, and unfolded datasets can be published as an unbinned collection of individual events that have been corrected for detector distortions as an ensemble.

Unfolding performance

One way to represent the result is as a set of simulated events with weights that encode information from the data. For example, if there are 10 times as many simulated events as real events, the average weight would be about 0.1, with the distribution of weights correcting the simulation to match reality, and errors on the weights reflecting the uncertainties inherent in the unfolding process. This approach gives maximum flexibility to future analysts, who can recombine the weighted events into any binning or observable they desire. The weights can be used to build histograms or compute statistics. The full covariance matrix can also be extracted from the weights, which is important for downstream fits.
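A minimal sketch of how such a weighted, unbinned result might be used downstream is shown below. The event arrays, nominal weights and ensemble of weight variations are placeholders standing in for a published unfolded dataset; the point is that any binning can be chosen after the fact, and a covariance matrix follows from propagating the weight variations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_variations = 100_000, 100

# Placeholder unfolded result: one row per simulated event (columns = observables),
# a nominal weight per event, and an ensemble of weight variations encoding the
# statistical and systematic uncertainties of the unfolding.
events = rng.exponential(10.0, size=(n_events, 3))
weights = rng.normal(0.1, 0.02, size=n_events).clip(min=0.0)   # ~0.1 if 10x more MC than data
weight_replicas = weights[:, None] * rng.normal(1.0, 0.05, size=(n_events, n_variations))

# Any analyst can pick their own observable and binning after the fact
bins = np.linspace(0.0, 40.0, 21)
observable = events[:, 0]
nominal_hist = np.histogram(observable, bins=bins, weights=weights)[0]

# Bin-to-bin covariance from the ensemble of weight variations
replica_hists = np.stack([
    np.histogram(observable, bins=bins, weights=weight_replicas[:, k])[0]
    for k in range(n_variations)
])
covariance = np.cov(replica_hists, rowvar=False)
uncertainties = np.sqrt(np.diag(covariance))
```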

But how do we know the unfolded values are capturing the truth, and not just “hallucinations” from the AI model?

An important validation step for these analyses is to perform tests on synthetic data with a known answer. Analysts take new simulation models, different from the one used for the primary analysis, and treat them as if they were real data. By unfolding these alternative simulations, researchers can compare their results with the known truth. If the biases are large, analysts will need to refine their methods to reduce the model dependence. If the biases are small compared to the other uncertainties, the remaining difference can be added to the total uncertainty estimate, which is calculated in the traditional way using hundreds of simulations. In unfolding problems, the choice of regularisation method and strength always involves some trade-off between bias and variance.
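In code, this validation amounts to a closure test: unfold an alternative simulation as if it were data and compare the result with its known truth. A minimal sketch, reusing the toy `response`, `bins`, `prior` and `ibu` function from the earlier snippets (an alternative generator is mimicked here simply by changing the toy model’s parameter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Alternative "physics model", treated as if it were real data
alt_truth = rng.exponential(scale=9.0, size=20_000)
alt_reco = alt_truth + rng.normal(0.0, 2.0, size=alt_truth.size)

alt_truth_hist = np.histogram(alt_truth, bins=bins)[0]
alt_data_hist = rng.poisson(np.histogram(alt_reco, bins=bins)[0])

# Unfold with the nominal machinery and quantify the bias against the known truth
unfolded_alt = ibu(response, alt_data_hist, prior)
bias = unfolded_alt - alt_truth_hist
pull = bias / np.sqrt(np.maximum(alt_truth_hist, 1))   # rough per-bin significance
```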

Just as unfolding in two dimensions instead of one with traditional methods can reduce model dependence by incorporating more aspects of the detector response, ML methods use the same underlying principle to include as much of the detector response as possible. Learning differences between data and simulation in high-dimensional spaces is the kind of task that ML excels at, and the results are competitive with established methods (see “Better performance” figure).

Neural learning

In the past few years, AI techniques have proven to be useful in practice, yielding publications from the LHC experiments, the H1 experiment at HERA and the STAR experiment at RHIC. The key idea underpinning the strategies used in each of these results is to use neural networks to learn a function that can reweight simulated events to look like data. The neural network is given a list of relevant features about an event such as the masses, energies and momenta of reconstructed objects, and trained to output the probability that it is from a Monte Carlo simulation or the data itself. Neural connections that reweight and combine the inputs across multiple layers are iteratively adjusted depending on the network’s performance. The network thereby learns the relative densities of the simulation and data throughout phase space. The ratio of these densities is used to transform the simulated distribution into one that more closely resembles real events (see “OmniFold” figure).

Illustration of AI unfolding using the OmniFold algorithm
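The reweighting step at the heart of these methods can be sketched with an off-the-shelf classifier. The snippet below is a schematic illustration of the general idea rather than the implementation used by any experiment: the feature arrays are placeholders, and the classifier’s output probability p is converted into a per-event weight via the ratio p/(1 − p), which estimates the data-to-simulation density ratio.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder per-event features (masses, energies, momenta, ...) for simulation and data
rng = np.random.default_rng(3)
X_sim = rng.normal(0.0, 1.0, size=(50_000, 4))
X_data = rng.normal(0.1, 1.1, size=(50_000, 4))

X = np.vstack([X_sim, X_data])
y = np.concatenate([np.zeros(len(X_sim)), np.ones(len(X_data))])   # 0 = simulation, 1 = data

# Small neural network trained to distinguish simulation from data;
# early stopping limits how aggressively it fits statistical fluctuations
clf = MLPClassifier(hidden_layer_sizes=(64, 64), early_stopping=True, max_iter=200)
clf.fit(X, y)

# Likelihood-ratio trick: p/(1-p) approximates the data/simulation density ratio,
# so applying it as a per-event weight pulls the simulation towards the data
p = clf.predict_proba(X_sim)[:, 1]
weights = p / np.clip(1.0 - p, 1e-6, None)
```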

As this is a recently developed technique, there are plenty of opportunities for new developments and improvements. These strategies are in principle capable of handling significant levels of background subtraction as well as acceptance and efficiency effects, but existing LHC measurements using AI-based unfolding generally have small backgrounds. And as with traditional methods, there is a risk in trying to estimate too many parameters from not enough data. This is typically controlled by stopping the training of the neural network early, combining multiple trainings into a single result, and performing cross-validation on different subsets of the data.

Beyond the “OmniFold” methods we are developing, an active community is also working on alternative techniques, including ones based on generative AI. Researchers are also considering creative new ways to use these unfolded results that aren’t possible with traditional methods. One possibility in development is unfolding not just a selection of observables, but the full event. Another intriguing direction could be to generate new events with the corrections learnt by the network built-in. At present, the result of the unfolding is a reweighted set of simulated events, but once the neural network has been trained, its reweighting function could be used to simulate the unfolded sample from scratch, simplifying the output.

CERN and ESA: a decade of innovation

Sky maps

Particle accelerators and spacecraft both operate in harsh radiation environments, extreme temperatures and high vacuum. Each must process large amounts of data quickly and autonomously. Much can be gained from cooperation between scientists and engineers in each field.

Ten years ago, the European Space Agency (ESA) and CERN signed a bilateral cooperation agreement to share expertise and facilities. The goal was to expand the limits of human knowledge and keep Europe at the leading edge of progress, innovation and growth. A decade on, CERN and ESA have collaborated on projects ranging from cosmology and planetary exploration to Earth observation and human spaceflight, supporting new space-tech ventures and developing electronic systems, radiation-monitoring instruments and irradiation facilities.

1. Mapping the universe

The Euclid space telescope is exploring the dark universe by mapping the large-scale structure of billions of galaxies out to 10 billion light-years across more than a third of the sky. With tens of petabytes expected in its final data set – already a substantial reduction of the 850 billion bits of compressed images Euclid processes each day – it will generate more data than any other ESA mission by far.

With many CERN cosmologists involved in testing theories of beyond-the-Standard-Model physics, Euclid first became a CERN-recognised experiment in 2015. CERN also contributes to the development of Euclid’s “science ground segment” (SGS), which processes raw data received from the Euclid spacecraft into usable scientific products such as galaxy catalogues and dark-matter maps. CERN’s virtual-machine file system (CernVM-FS) has been integrated into the SGS to allow continuous software deployment across Euclid’s nine data centres and on developers’ laptops.

The telescope was launched in July 2023 and began observations in February 2024. The first piece of its great map of the universe was released in October 2024, showing millions of stars and galaxies and covering 132 square degrees of the southern sky (see “Sky map” figure). Based on just two weeks of observations, it accounts for just 1% of the project’s six-year survey, which will be the largest cosmic map ever made.

Future CERN–ESA collaborations on cosmology, astrophysics and multimessenger astronomy are likely to include the Laser Interferometer Space Antenna (LISA) and the NewAthena X-ray observatory. LISA will be the first space-based observatory to study gravitational waves. NewAthena will study the most energetic phenomena in the universe. Both projects are expected to be ready to launch about 10 years from now.

2. Planetary exploration

Though planetary exploration is conceptually far from fundamental physics, its technical demands require similar expertise. A good example is the Jupiter Icy Moons Explorer (JUICE) mission, which will make detailed observations of the gas giant and its three large ocean-bearing moons Ganymede, Callisto and Europa.

Jupiter’s magnetosphere is roughly a million times greater in volume than Earth’s, trapping large fluxes of highly energetic electrons and protons. Before JUICE, the direct and indirect impact of high-energy electrons on modern electronic devices, and in particular their ability to cause “single event effects”, had never been studied. Two test campaigns took place in the VESPER facility, which is part of the CERN Linear Electron Accelerator for Research (CLEAR) project. Components were tested with tuneable beam energies between 60 and 200 MeV, and average fluxes of roughly 10⁸ electrons per square centimetre per second, mirroring expected radiation levels in the Jovian system.

JUICE radiation-monitor measurements

JUICE was successfully launched in April 2023, starting an epic eight-year journey to Jupiter including several flyby manoeuvres that will be used to commission the onboard instruments (see “Flyby” figure). JUICE should reach Jupiter in July 2031. It remains to be seen whether test results obtained at CERN have successfully de-risked the mission.

Another interesting example of cooperation on planetary exploration is the Mars Sample Return mission, which must operate in low temperatures during eclipse phases. CERN supported the main industrial partner, Thales Alenia Space, in qualifying the orbiter’s thermal-protection systems in cryogenic conditions.

3. Earth observation

Earth observation from orbit has applications ranging from environmental monitoring to weather forecasting. CERN and ESA collaborate both on developing the advanced technologies required by these applications and ensuring they can operate in the harsh radiation environment of space.

In 2017 and 2018, ESA teams came to CERN’s North Area with several partner companies to test the performance of radiation monitors, field-programmable gate arrays (FPGAs) and electronics chips in ultra-high-energy ion beams at the Super Proton Synchrotron. The tests mimicked the ultra-high-energy part of the galactic cosmic-ray spectrum, whose effects had never previously been measured on the ground beyond 10 GeV/nucleon. In 2017, ESA’s standard radiation-environment monitor and several FPGAs and multiprocessor chips were tested with xenon ions. In 2018, the highlight of the campaign was the testing of Intel’s Myriad-2 artificial intelligence (AI) chip with lead ions (see “Space AI” figure). Following its radiation characterisation and qualification, in 2020 the chip embarked on the φ-sat-1 mission to autonomously detect clouds using images from a hyperspectral camera.

Myriad 2 chip testing

More recently, CERN joined Edge SpAIce – an EU project to monitor ecosystems and track plastic pollution in the oceans by processing images onboard the Balkan-1 satellite. The project will use CERN’s high-level synthesis for machine learning (hls4ml) AI technology to run inference models on an FPGA that will be launched in 2025.

Looking further ahead, ESA’s φ-lab and CERN’s Quantum Technology Initiative are sponsoring two PhD programmes to study the potential of quantum machine learning, generative models and time-series processing to advance Earth observation. Applications may accelerate the task of extracting features from images to monitor natural disasters, deforestation and the impact of environmental effects on the lifecycle of crops.

4. Dosimetry for human spaceflight

In space, nothing is more important than astronauts’ safety and wellbeing. To this end, in August 2021 ESA astronaut Thomas Pesquet activated the LUMINA experiment inside the International Space Station (ISS), as part of the ALPHA mission (see “Space dosimetry” figure). Developed under the coordination of the French Space Agency and the Laboratoire Hubert Curien at the Université Jean-Monnet-Saint-Étienne and iXblue, LUMINA uses two phosphorus-doped optical fibres, each several kilometres long, as active dosimeters to measure ionising radiation aboard the ISS.

ESA astronaut Thomas Pesquet

When exposed to radiation, optical fibres experience a partial loss of transmitted power. Using a reference control channel, the radiation-induced attenuation can be accurately measured and related to the total ionising dose, with the sensitivity of the device primarily governed by the length of the fibre. Having studied optical-fibre-based technologies for many years, CERN helped optimise the architecture of the dosimeters and performed irradiation tests to calibrate the instrument, which will operate on the ISS for a period of up to five years.
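As a rough illustration of the measurement principle (the linear calibration constant below is a hypothetical placeholder, not LUMINA’s calibration, which may be dose- and dose-rate-dependent): comparing the sensing and reference channels gives the radiation-induced attenuation in decibels, which a ground calibration converts into total ionising dose, with a longer fibre accumulating more attenuation per unit dose and hence giving greater sensitivity.

```python
import math

def total_ionising_dose(p_ref_mw, p_sense_mw, fibre_km, k_db_per_km_per_gy):
    """Estimate total ionising dose from radiation-induced attenuation (RIA) in a fibre.

    p_ref_mw / p_sense_mw: optical power in the reference and sensing channels (mW).
    k_db_per_km_per_gy: hypothetical linear calibration constant from irradiation tests.
    """
    ria_db = 10.0 * math.log10(p_ref_mw / p_sense_mw)     # attenuation in dB
    return ria_db / (k_db_per_km_per_gy * fibre_km)       # dose in gray

# Example: a 2% power loss over a few-kilometre fibre with an assumed calibration
dose_gy = total_ionising_dose(p_ref_mw=1.00, p_sense_mw=0.98, fibre_km=3.0,
                              k_db_per_km_per_gy=0.5)
```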

LUMINA complements dosimetry measurements performed on the ISS using CERN’s Timepix technology – an offshoot of the hybrid-pixel-detector technology developed for the LHC experiments (CERN Courier September/October 2024 p37). Timepix dosimeters have been integrated in multiple NASA payloads since 2012.

5. Radiation-hardness assurance

It’s no mean feat to ensure that CERN’s accelerator infrastructure functions in increasingly challenging radiation environments. Similar challenges are found in space. Damage can be caused by accumulating ionising doses, single-event effects (SEEs) or so-called displacement damage dose, which dislodges atoms within a material’s crystal lattice rather than ionising them. Radiation-hardness assurance (RHA) reduces radiation-induced failures in space through environment simulations, part selection and testing, radiation-tolerant design, worst-case analysis and shielding definition.

Since its creation in 2008, CERN’s Radiation to Electronics project has amplified the work of many equipment and service groups in modelling, mitigating and testing the effect of radiation on electronics. A decade later, joint test campaigns with ESA demonstrated the value of CERN’s facilities and expertise to RHA for spaceflight. This led to the signing, in 2019, of a joint protocol on radiation environments, technologies and facilities, which also covers radiation detectors, radiation-tolerant systems and components, and simulation tools.

CHARM facility

Among CERN’s facilities is CHARM: the CERN high-energy-accelerator mixed-field facility, which offers an innovative approach to low-cost RHA. CHARM’s radiation field is generated by the interaction between a 24 GeV/c beam from the Proton Synchrotron and a metallic target. CHARM offers a uniquely wide spectrum of radiation types and energies, the possibility to adjust the environment using mobile shielding, and enough space to test a medium-sized satellite in full operating conditions.

Radiation testing is particularly challenging for the new generation of rapidly developed and often privately funded “new space” projects, which frequently make use of commercial off-the-shelf (COTS) components. Here, RHA relies on testing and mitigation rather than radiation hardening by design. For “flip chip” configurations, which have their active circuitry facing inward toward the substrate, and dense three-dimensional structures that cannot be directly exposed without compromising their performance, heavy-ion beams accelerated to between 10 and 100 MeV/nucleon are the only way to induce SEEs in the sensitive semiconductor volumes of the devices.

To enable testing of highly integrated electronic components, ESA supported studies to develop the CHARM heavy ions for micro-electronics reliability-assurance facility – CHIMERA for short (see “CHIMERA” figure). ESA has sponsored key feasibility activities such as: tuning the ion flux in a large dynamic range; tuning the beam size for board-level testing; and reducing beam energy to maximise the frequency of SEE while maintaining a penetration depth of a few millimetres in silicon.

6. In-orbit demonstrators

Weighing 1 kg and measuring just 10 cm on each side – a nanosatellite standard – the CELESTA satellite was designed to study the effects of cosmic radiation on electronics (see “CubeSat” figure). Initiated in partnership with the University of Montpellier and ESA, and launched in July 2022, CELESTA was CERN’s first in-orbit technology demonstrator.

Radiation-testing model of the CELESTA satellite

As well as providing the first opportunity for CHARM to test a full satellite, CELESTA offered the opportunity to flight-qualify SpaceRadMon, which counts single-event upsets (SEUs) and single-event latchups (SELs) in static random-access memory while using a field-effect transistor for dose monitoring. (SEUs are temporary errors caused by a high-energy particle flipping a bit and SELs are short circuits induced by high-energy particles.) More than 30 students contributed to the mission development, partly within the framework of ESA’s Fly Your Satellite programme. Built from COTS components calibrated in CHARM, SpaceRadMon has since been adopted by other ESA missions such as Trisat and GENA-OT, and could be used in the future as a low-cost predictive-maintenance tool to reduce space debris and improve space sustainability.

The maiden flight of the Vega-C launcher placed CELESTA on an atypical quasi-circular medium-Earth orbit in the middle of the inner Van Allen proton belt at roughly 6000 km. Two months of flight data sufficed to validate the performance of the payload and the ground-testing procedure in CHARM, though CELESTA will fly for thousands of years in a region of space where debris is not a problem due to the harsh radiation environment.

The CELESTA approach has since been adopted by industrial partners to develop radiation-tolerant cameras, radios and on-board computers.

7. Stimulating the space economy

Space technology is a fast-growing industry replete with opportunities for public–private cooperation. The global space economy will be worth $1.8 trillion by 2035, according to the World Economic Forum – up from $630 billion in 2023 and growing at double the projected rate for global GDP.

Whether spun off from space exploration or particle physics, ESA and CERN look to support start-up companies and high-tech ventures in bringing to market technologies with positive societal and economic impacts (see “Spin offs” figure). The use of CERN’s Timepix technology in space missions is a prime example. Private company Advacam collaborated with the Czech Technical University to provide a Timepix-based radiation-monitoring payload called SATRAM to ESA’s Proba-V mission to map land cover and vegetation growth across the entire planet every two days.

The Hannover Messe fair

Advacam is now testing a pixel-detector instrument on JoeySat – an ESA-sponsored technology demonstrator for OneWeb’s next-generation constellation of satellites designed to expand global connectivity. Advacam is also working with ESA on radiation monitors for Space Rider and NASA’s Lunar Gateway. Space Rider is a reusable spacecraft whose maiden voyage is scheduled for the coming years, and Lunar Gateway is a planned space station in lunar orbit that could act as a staging post for Mars exploration.

Another promising example is SigmaLabs – a Polish startup founded by CERN alumni specialising in radiation detectors and predictive-maintenance R&D for space applications. SigmaLabs was recently selected by ESA and the Polish Space Agency to provide one of the experiments expected to fly on Axiom Mission 4 – a private spaceflight to the ISS in 2025 that will include Polish astronaut and CERN engineer Sławosz Uznański (CERN Courier May/June 2024 p55). The experiment will assess the scalability and versatility of the SpaceRadMon radiation-monitoring technology initially developed at CERN for the LHC and flight tested on the CELESTA CubeSat.

In radiation-hardness assurance, the CHIMERA facility is associated with the High-Energy Accelerators for Radiation Testing and Shielding (HEARTS) programme sponsored by the European Commission. Its 2024 pilot user run is already stimulating private innovation, with high-energy heavy ions used to perform business-critical research on electronic components for a dozen aerospace companies.

A word with CERN’s next Director-General

Mark Thomson

What motivates you to be CERN’s next Director-General?

CERN is an incredibly important organisation. I believe my deep passion for particle physics, coupled with the experience I have accumulated in recent years, including leading the Deep Underground Neutrino Experiment, DUNE, through a formative phase, and running the Science and Technology Facilities Council in the UK, has equipped me with the right skill set to lead CERN through a particularly important period.

How would you describe your management style?

That’s a good question. My overarching approach is built around delegating and trusting my team. This has two advantages. First, it builds an empowering culture, which in my experience provides the right environment for people to thrive. Second, it frees me up to focus on strategic planning and engagement with numerous key stakeholders. I like to focus on transparency and openness, to build trust both internally and externally.

How will you spend your familiarisation year before you take over in 2026?

First, by getting a deep understanding of CERN “from within”, to plan how I want to approach my mandate. Second, by lending my voice to the scientific discussion that will underpin the third update to the European strategy for particle physics. The European strategy process is a key opportunity for the particle-physics community to provide genuine bottom-up input and shape the future. This is going to be a really varied and exciting year.

What open question in fundamental physics would you most like to see answered in your lifetime?

I am going to have to pick two. I would really like to understand the nature of dark matter. There are a wide range of possibilities, and we are addressing this question from multiple angles; the search for dark matter is an area where the collider and non-collider experiments can both contribute enormously. The second question is the nature of the Higgs field. The Higgs boson is just so different from anything else we’ve ever seen. It’s not just unique – it’s unique and very strange. There are just so many deep questions, such as whether it is fundamental or composite. I am confident that we will make progress in the coming years. I believe the High-Luminosity LHC will be able to make meaningful measurements of the self-coupling at the heart of the Higgs potential. If you’d asked me five years ago whether this was possible, I would have been doubtful. But today I am very optimistic because of the rapid progress with advanced analysis techniques being developed by the brilliant scientists on the LHC experiments.

What areas of R&D are most in need of innovation to meet our science goals?

Artificial intelligence is changing how we look at data in all areas of science. Particle physics is the ideal testing ground for artificial intelligence, because our data is complex and there are none of the issues around the sensitive nature of the data that exist in other fields. Complex multidimensional datasets are where you’ll benefit the most from artificial intelligence. I’m also excited by the emergence of new quantum technologies, which will open up fresh opportunities for our detector systems and also new ways of doing experiments in fundamental physics. We’ve only scratched the surface of what can be achieved with entangled quantum systems.

How about in accelerator R&D?

There are two areas that I would like to highlight: making our current technologies more sustainable, and the development of high-field magnets based on high-temperature superconductivity. This connects to the question of innovation more broadly. To quote one example among many, high-temperature superconducting magnets are likely to be an important component of fusion reactors just as much as particle accelerators, making this a very exciting area where CERN can deploy its engineering expertise and really push that programme forward. That’s not just a benefit for particle physics, but a benefit for wider society.

How has CERN changed since you were a fellow back in 1994?

The biggest change is that the collider experiments are larger and more complex, and the scientific and technical skills required have become more specialised. When I first came to CERN, I worked on the OPAL experiment at LEP – a collaboration of fewer than 400 people. Everybody knew everybody, and it was relatively easy to understand the science of the whole experiment.

My overarching approach is built around delegating and trusting my team

But I don’t think the scientific culture of CERN and the particle-physics community has changed much. When I visit CERN and meet with the younger scientists, I see the same levels of excitement and enthusiasm. People are driven by the wonderful mission of discovery. When planning the future, we need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics. Today we have an amazing machine that’s running beautifully: the LHC. I also don’t think it is possible to overstate the excitement of the High-Luminosity LHC. So there’s a clear and exciting future out to the early 2040s for today’s early-career researchers. The question is what happens beyond that? This is one reason to ensure that there is not a large gap between the end of the High-Luminosity LHC and the start of whatever comes next.

Should the world be aligning on a single project?

Given the increasing scale of investment, we do have to focus as a global community, but that doesn’t necessarily mean a single project. We saw something similar about 10 years ago when the global neutrino community decided to focus its efforts on two complementary long-baseline projects, DUNE and Hyper-Kamiokande. From the perspective of today’s European strategy, the Future Circular Collider (FCC) is an extremely appealing project that would map out an exciting future for CERN for many decades. I think we’ll see this come through strongly in an open and science-driven European strategy process.

How do you see the scientific case for the FCC?

For me, there are two key points. First, gaining a deep understanding of the Higgs boson is the natural next step in our field. We have discovered something truly unique, and we should now explore its properties to gain deeper insights into fundamental physics. Scientifically, the FCC provides everything you want from a Higgs factory, both in terms of luminosity and the opportunity to support multiple experiments.

Second, investment in the FCC tunnel will provide a route to hadron–hadron collisions at the 100 TeV scale. I find it difficult to foresee a future where we will not want this capability.

These two aspects make the FCC a very attractive proposition.

How successful do you believe particle physics is in communicating science and societal impacts to the public and to policymakers?

I think we communicate science well. After all, we’ve got a great story. People get the idea that we work to understand the universe at its most basic level. It’s a simple and profound message.

Going beyond the science, the way we communicate the wider industrial and societal impact is probably equally important. Here we also have a good story. In our experiments we are always pushing beyond the limits of current technology, doing things that have not been done before. The technologies we develop to do this almost always find their way back into something that will have wider applications. Of course, when we start, we don’t know what the impact will be. That’s the strength and beauty of pushing the boundaries of technology for science.

Would the FCC give a strong return on investment to the member states?

Absolutely. Part of the return is the science, part is the investment in technology, and we should not underestimate the importance of the training opportunities for young people across Europe. CERN provides such an amazing and inspiring environment for young people. The scale of the FCC will provide a huge number of opportunities for young scientists and engineers.

We need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics

In terms of technology development, the detectors for the electron–positron collider will provide an opportunity for pushing forward and deploying new, advanced technologies to deliver the precision required for the science programme. In parallel, the development of the magnet technologies for the future hadron collider will be really exciting, particularly the potential use of high-temperature superconductors, as I said before.

It is always difficult to predict the specific “return on investment” of the technologies developed for big scientific research infrastructure. Part of the challenge is that some of the benefits might come 20, 30 or 40 years down the line. Nevertheless, every retrospective that has tried has demonstrated a huge downstream benefit.

Do we reward technical innovation well enough in high-energy physics?

There needs to be a bit of a culture shift within our community. Engineering and technology innovation are critical to the future of science and critical to the prosperity of Europe. We should be striving to reward individuals working in these areas.

Should the field make it more flexible for physicists and engineers to work in industry and return to the field having worked there?

This is an important question. I actually think things are changing. The fluidity between academia and industry is increasing in both directions. For example, an early-career researcher in particle physics with a background in deep artificial-intelligence techniques is valued incredibly highly by industry. It also works the other way around, and I experienced this myself in my career when one of my post-doctoral researchers joined from an industry background after a PhD in particle physics. The software skills they picked up from industry were incredibly impactful.

I don’t think there is much we need to do to directly increase flexibility – it’s more about culture change, to recognise that fluidity between industry and academia is important and beneficial. Career trajectories are evolving across many sectors. People move around much more than they did in the past.

Does CERN have a future as a global laboratory?

CERN already is a global laboratory. The amazing range of nationalities working here is both inspiring and a huge benefit to CERN.

How can we open up opportunities in low- and middle-income countries?

I am really passionate about the importance of diversity in all its forms and this includes national and regional inclusivity. It is an agenda that I pursued in my last two positions. At the Deep Underground Neutrino Experiment, I was really keen to engage the scientific community from Latin America, and I believe this has been mutually beneficial. At STFC, we used physics as a way to provide opportunities for people across Africa to gain high-tech skills. Going beyond the training, one of the challenges is to ensure that people use these skills in their home nations. Otherwise, you’re not really helping low- and middle-income countries to develop.

What message would you like to leave with readers?

That we have really only just started the LHC programme. With more than a factor of 10 increase in data to come, coupled with new data tools and upgraded detectors, the High-Luminosity LHC represents a major opportunity for a new discovery. Its nature could be a complete surprise. That’s the whole point of exploring the unknown: you don’t know what’s out there. This alone is incredibly exciting, and it is just a part of CERN’s amazing future.

The other 99%

Quarks contribute less than 1% to the mass of protons and neutrons. This provokes an astonishing question: where does the other 99% of the mass of the visible universe come from? The answer lies in the gluon, and how it interacts with itself to bind quarks together inside hadrons.

Much remains to be understood about gluon dynamics. At present, the chief experimental challenge is to observe the onset of gluon saturation – a dynamic equilibrium between gluon splitting and recombination predicted by QCD. The experimental key looks likely to be a rare but intriguing type of LHC interaction known as an ultraperipheral collision (UPC), and the breakthrough may come as soon as the next experimental run.

Gluon saturation is expected to end the rapid growth in gluon density measured at the HERA electron–proton collider at DESY in the 1990s and 2000s. HERA observed this growth as the energy of interactions increased and as the fraction of the proton’s momentum borne by the gluons (Bjorken x) decreased.

So gluons become more numerous in hadrons as their energy decreases – but to what end?

Gluonic hotspots are now being probed with unprecedented precision at the LHC and are central to understanding the high-energy regime of QCD

Nonlinear effects are expected to arise due to processes like gluon recombination, wherein two gluons combine to become one. When gluon recombination becomes a significant factor in QCD dynamics, gluon saturation sets in – an emergent phenomenon whose energy scale is a critical parameter to determine experimentally. At this scale, gluons begin to act like classical fields and gluon density plateaus. A dilute partonic picture transitions to a dense, saturated state. For recombination to take precedence over splitting, gluon momenta must be very small, corresponding to low values of Bjorken x. The saturation scale should also be directly proportional to the colour-charge density, making heavy nuclei like lead ideal for studying nonlinear QCD phenomena.
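A commonly used parametrisation, taken from dipole-model fits to HERA data (the precise values are model dependent), captures both expectations: the saturation scale grows as Bjorken x falls, and is enhanced in a large nucleus roughly in proportion to A to the power 1/3:

```latex
Q_s^2(x) \simeq Q_0^2 \left(\frac{x_0}{x}\right)^{\lambda},
\qquad
Q_{s,A}^2(x) \sim A^{1/3}\, Q_s^2(x),
\quad \text{with } Q_0 \sim 1\ \text{GeV},\ x_0 \sim 3\times10^{-4},\ \lambda \approx 0.3 .
```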

But despite strong theoretical reasoning and tantalising experimental hints, direct evidence for gluon saturation remains elusive.

Since the conclusion of the HERA programme, the quest to explore gluon saturation has shifted focus to the LHC. But with no point-like electron to probe the hadronic target, LHC physicists had to find a new point-like probe: light itself. UPCs at the LHC exploit the flux of quasi-real high-energy photons generated by ultra-relativistic particles. For heavy ions like lead, this flux of photons is enhanced by the square of the nuclear charge, enabling studies of photon-proton (γp) and photon-nucleus interactions at centre-of-mass energies reaching the TeV scale.

Keeping it clean

What really sets UPCs apart is their clean environment. UPCs occur at large impact parameters well outside the range of the strong nuclear force, allowing the nuclei to remain intact. Unlike hadronic collisions, which can produce thousands of particles, UPCs often involve only a few final-state particles, for example a single J/ψ, providing an ideal laboratory for gluon saturation. J/ψ are produced when a cc̄ pair created by two or more gluons from one nucleus is brought on-shell by interacting with a quasi-real photon from the other nucleus (see “Sensitivity to saturation” figure).

Power-law observation

Gluon saturation models predict deviations in the γp → J/ψp cross section from the power-law behaviour observed at HERA. The LHC experiments are placing a significant focus on investigating the energy dependence of this process to identify potential signatures of saturation, with ALICE and LHCb extending studies to higher γp centre-of-mass energies (Wγp) and lower Bjorken x than HERA. The results so far reveal that the cross-section continues to increase with energy, consistent with the power-law trend (see “Approaching the plateau?” figure).

The symmetric nature of pp collisions introduces significant challenges. In pp collisions, either proton can act as the photon source, leading to an intrinsic ambiguity in identifying the photon emitter. In proton–lead (pPb) collisions, the lead nucleus overwhelmingly dominates photon emission, eliminating this ambiguity. This makes pPb collisions an ideal environment for precise studies of the photoproduction of J/ψ by protons.

During LHC Run 1, the ALICE experiment probed Wγp up to 706 GeV in pPb collisions, more than doubling HERA’s maximum reach of 300 GeV. This translates to probing Bjorken-x values as low as 10⁻⁵, significantly beyond the regime explored at HERA. LHCb took a different approach. The collaboration inferred the behaviour of pp collisions at high energies (“W+ solutions”) by assuming knowledge of their energy dependence at low energies (“W- solutions”), allowing LHCb to probe Bjorken-x values as small as 10⁻⁶ and Wγp up to 2 TeV.
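The quoted values of Bjorken x follow from the standard kinematics of exclusive J/ψ photoproduction, in which (neglecting the momentum transfer and the proton mass) x is fixed by the vector-meson mass and the photon–proton energy:

```latex
x \simeq \frac{M_{J/\psi}^2}{W_{\gamma p}^2}
\quad\Rightarrow\quad
x \simeq \frac{(3.1\ \text{GeV})^2}{(706\ \text{GeV})^2} \approx 2\times10^{-5},
\qquad
x \simeq \frac{(3.1\ \text{GeV})^2}{(2\ \text{TeV})^2} \approx 2\times10^{-6}.
```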

There is not yet any theoretical consensus on whether LHC data align with gluon-saturation predictions, and the measurements remain statistically limited, leaving room for further exploration. Theoretical challenges include incomplete next-to-leading-order calculations and the reliance of some models on fits to HERA data. Progress will depend on robust and model-independent calculations and high-quality UPC data from pPb collisions in LHC Run 3 and Run 4.

Some models predict a slowing increase in the γp → J/ψp cross section with energy at small Bjorken x. If these models are correct, gluon saturation will likely be discovered in LHC Run 4, where we expect to see a clear observation of whether pPb data deviate from the power law observed so far.

Gluonic hotspots

If a UPC photon interacts with the collective colour field of a nucleus – coherent scattering – it probes its overall distribution of gluons. If a UPC photon interacts with individual nucleons or smaller sub-nucleonic structures – incoherent scattering – it can probe smaller-scale gluon fluctuations.

Simulations of the transverse density of gluons in protons

These fluctuations, known as gluonic hotspots, are theorised to become more numerous and overlap in the regime of gluon saturation (see “Onset of saturation” figure). Now being probed with unprecedented precision at the LHC, they are central to understanding the high-energy regime of QCD.

Gluonic hotspots are used to model the internal transverse structure of colliding protons or nuclei (see “Hotspot snapshots” figure). The saturation scale is inherently impact-parameter dependent, with the densest colour charge densities concentrated at the core of the proton or nucleus, and diminishing toward the periphery, though subject to fluctuations. Researchers are increasingly interested in exploring how these fluctuations depend on the impact parameter of collisions to better characterise the spatial dynamics of colour charge. Future analyses will pinpoint contributions from localised hotspots where saturation effects are most likely to be observed.

The energy dependence of incoherent or dissociative photoproduction promises a clear signature for gluon saturation, independent of the coherent power-law method described above. As saturation sets in, all gluon configurations in the target converge to similar densities, causing the variance of the gluon field to decrease, and with it the dissociative cross section. Detecting a peak and a decline in the incoherent cross-section as a function of energy would represent a clear signature of gluon saturation.
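This expectation follows from the Good–Walker picture of diffraction, in which the coherent cross section probes the average gluon field of the target while the incoherent (dissociative) cross section probes its event-by-event fluctuations:

```latex
\frac{\mathrm{d}\sigma_{\text{coh}}}{\mathrm{d}t} \propto \bigl|\langle A \rangle_{\Omega}\bigr|^{2},
\qquad
\frac{\mathrm{d}\sigma_{\text{incoh}}}{\mathrm{d}t} \propto \bigl\langle |A|^{2} \bigr\rangle_{\Omega} - \bigl|\langle A \rangle_{\Omega}\bigr|^{2},
```

where A is the scattering amplitude and the averages run over the target’s gluon-field configurations Ω: as saturation makes all configurations look alike, the variance, and with it the dissociative cross section, shrinks.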

Simulations of the transverse density of gluons in lead nuclei

The ALICE collaboration has taken significant steps in exploring this quantum terrain, demonstrating the possibility of studying different geometrical configurations of quantum fluctuations in processes where protons or lead nucleons dissociate. The results highlight a striking correlation between momentum transfer, which is inversely proportional to the impact parameter, and the size of the target structure. The observation that sub-nucleonic structures impart the greatest momentum transfer is compelling evidence for gluonic quantum fluctuations at the sub-nucleon level.

Into the shadows

In 1982 the European Muon Collaboration observed an intriguing phenomenon: nuclei appeared to contain fewer gluons than expected based on the contributions from their individual protons and neutrons. This effect, known as nuclear shadowing, was observed in experiments conducted at CERN at moderate values of Bjorken x. It is now known to occur because the interaction of a probe with one gluon reduces the likelihood of the probe interacting with other gluons within the nucleus – the gluons hiding behind them, in their shadow, so to speak. At smaller values of Bjorken x, saturation further suppresses the number of gluons contributing to the interaction.

Nuclear suppression factor for lead relative to protons

The relationship between gluon saturation and nuclear shadowing is poorly understood, and separating their effects remains an open challenge. The situation is further complicated by an experimental reliance on lead–lead (PbPb) collisions, which, like pp collisions, suffer from ambiguity in identifying the interacting nucleus, unless the interaction is accompanied by an ejected neutron.

The ALICE, CMS and LHCb experiments have extensively studied nuclear shadowing via the exclusive production of vector mesons such as J/ψ in ultraperipheral PbPb collisions. Results span photon–nucleus collision energies from 10 to 1000 GeV. The onset of nuclear shadowing, or another nonlinear QCD phenomenon like saturation, is clearly visible as a function of energy and Bjorken x (see “Nuclear shadowing” figure).

Multidimensional maps

While both saturation-based and gluon shadowing models describe the data reasonably well at high energies, neither framework captures the observed trends across the entire kinematic range. Future efforts must go beyond energy dependence by being differential in momentum transfer and studying a range of vector mesons with complementary sensitivities to the saturation scale.

Soon to be constructed at Brookhaven National Laboratory, the Electron-Ion Collider (EIC) promises to transform our understanding of gluonic matter. Designed specifically for QCD research, the EIC will probe gluon saturation and shadowing in unprecedented detail, using a broad array of reactions, collision species and energy levels. By providing a multidimensional map of gluonic behaviour, the EIC will address fundamental questions such as the origin of mass and nuclear spin.

ALICE’s high-granularity forward calorimeter

Before then, a tenfold increase in PbPb statistics in LHC Runs 3 and 4 will allow a transformative leap in low Bjorken-x physics. Though not originally designed for this purpose, the LHC’s unparalleled energy reach and diverse range of colliding systems offer unique opportunities to explore gluon dynamics at the highest energies.

Enhanced capabilities

Surpassing the gains from increased luminosity alone, ALICE’s new triggerless detector readout mode will offer a vast improvement over previous runs, which were constrained by dedicated triggers and bandwidth limitations. Subdetector upgrades will also play an important role. The muon forward tracker has already enhanced ALICE’s capabilities, and the high-granularity forward calorimeter set to be installed in time for Run 4 is specifically designed to improve sensitivity to small Bjorken-x physics (see “Saturation specific” figure).

Ultraperipheral-collision physics at the LHC is far more than a technical exploration of QCD. Gluons govern the structure of all visible matter. Saturation, hotspots and shadowing shed light on the origin of 99% of the mass of the visible universe. 

Charm and synthesis

In 1955, after a year of graduate study at Harvard, I joined a group of a dozen or so students committed to studying elementary particle theory. We approached Julian Schwinger, one of the founders of quantum electrodynamics, hoping to become his thesis students – and we all did.

Schwinger lined us up in his office, and spent several hours assigning thesis subjects. It was a remarkable performance. I was the last in line. Having run out of well-defined thesis problems, he explained to me that weak and electromagnetic interactions share two remarkable features: both are vectorial and both display aspects of universality. Schwinger suggested that I create a unified theory of the two interactions – an electroweak synthesis. How I was to do this he did not say, aside from slyly hinting at the Yang–Mills gauge theory.

By the summer of 1958, I had convinced myself that weak and electromagnetic interactions might be described by a badly broken gauge theory, and Schwinger that I deserved a PhD. I had hoped to spend part of a postdoctoral fellowship in Moscow at the invitation of the recent Russian Nobel laureate Igor Tamm, and sought to visit Niels Bohr’s institute in Copenhagen while awaiting my Soviet visa. With Bohr’s enthusiastic consent, I boarded the SS Île de France with my friend Jack Schnepps. Following a memorable and luxurious crossing – one of the great ship’s last – Jack drove south to Padova to work with Milla Baldo-Ceolin’s emulsion group, and I took the slow train north to Copenhagen. Thankfully, my Soviet visa never arrived. I found the SU(2) × U(1) structure of the electroweak model in the spring of 1960 at Bohr’s famous institute at Blegdamsvej 19, and wrote the paper that would earn my share of the 1979 Nobel Prize.

We called the new quark flavour charm, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day

A year earlier, in 1959, Augusto Gamba, Bob Marshak and Susumu Okubo had proposed lepton–hadron symmetry, which regarded protons, neutrons and lambda hyperons as the building blocks of all hadrons, to match the three known leptons at the time: neutrinos, electrons and muons. The idea was falsified by the discovery of a second neutrino in 1962, and superseded in 1964 by the invention of fractionally charged hadron constituents, first by George Zweig and André Petermann, and then decisively by Murray Gell-Mann with his three flavours of quarks. Later in 1964, while on sabbatical in Copenhagen, James Bjorken and I realised that lepton–hadron symmetry could be revived simply by adding a fourth quark flavour to Gell-Mann’s three. We called the new quark flavour “charm”, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day.

Annus mirabilis

1964 was a remarkable year. In addition to the invention of quarks, Nick Samios spotted the triply strange Ω baryon, and Oscar Greenberg devised what became the critical notion of colour. Arno Penzias and Robert Wilson stumbled on the cosmic microwave background radiation. James Cronin, Val Fitch and others discovered CP violation. Robert Brout, François Englert, Peter Higgs and others invented spontaneously broken non-Abelian gauge theories. And to top off the year, Abdus Salam rediscovered and published my SU(2) × U(1) model, after I had more-or-less abandoned electroweak thoughts due to four seemingly intractable problems.

Four intractable problems of early 1964

How could the W and Z bosons acquire masses while leaving the photon massless?

Steven Weinberg, my friend from both high school and college, brilliantly solved this problem in 1967 by subjecting the electroweak gauge group to spontaneous symmetry breaking, initiating the half-century-long search for the Higgs boson. Salam published the same solution in 1968.

How could an electroweak model of leptons be extended to describe the weak interactions of hadrons?

John Iliopoulos, Luciano Maiani and I solved this problem in 1970 by introducing charm and quark-lepton symmetry to avoid unobserved strangeness-changing neutral currents.

Was the spontaneously broken electroweak gauge model mathematically consistent?

Gerard ’t Hooft announced in 1971 that he had proven Steven Weinberg’s electroweak model to be renormalisable. In 1972, Claude Bouchiat, John Iliopoulos and Philippe Meyer demonstrated the electroweak model to be free of Adler anomalies provided that lepton–quark symmetry is maintained.

Could the electroweak model describe CP violation without invoking additional spinless fields?

In 1973, Makoto Kobayashi and Toshihide Maskawa showed that the electroweak model could easily and naturally violate CP if there are more than four quark flavours.

Much to my surprise and delight, all of them would be solved within just a few years, with the last theoretical obstacle removed by Makoto Kobayashi and Toshihide Maskawa in 1973 (see “Four intractable problems” panel). A few months later, Paul Musset announced that CERN’s Gargamelle detector had won the race to detect weak neutral-current interactions, giving the electroweak model the status of a predictive theory. Remarkably, the year had begun with Gell-Mann, Harald Fritzsch and Heinrich Leutwyler proposing QCD, and David Gross, Frank Wilczek and David Politzer showing it to be asymptotically free. The Standard Model of particle physics was born.

Charmed findings

But where were the charmed quarks? Early on the Monday morning of 11 November 1974, I was awakened by a phone call from Sam Ting, who asked me to come to his MIT office as soon as possible. He and Ulrich Becker were waiting for me impatiently. They showed me an amazingly sharp resonance. Could it be a vector meson like the ρ or ω and be so narrow, or was it something quite different? I hopped in my car and drove to Harvard, where my colleagues Alvaro de Rújula and Howard Georgi excitedly regaled me about the Californian side of the story. A few days later, experimenters in Frascati confirmed the BNL–SLAC discovery, and de Rújula and I submitted our paper “Is Bound Charm Found?” – one of two papers on the J/ψ discovery printed in Physical Review Letters on 5 July 1975 that would prove to be correct. Among five false papers was one written by my beloved mentor, Julian Schwinger.

Sam Ting at CERN in 1976

The second correct paper was by Tom Appelquist and David Politzer. Well before that November, they had realised (without publishing) that bound states of a charmed quark and its antiquark lying below the charm threshold would be exceptionally narrow due to the asymptotic freedom of QCD. De Rújula suggested to them that such a system be called charmonium in an analogy with positronium. His term made it into the dictionary. Shortly afterward, the 1976 Nobel Prize in Physics was jointly awarded to Burton Richter and Sam Ting for “their pioneering work in the discovery of a heavy elementary particle of a new kind” – evidence that charm was not yet a universally accepted explanation. Over the next few years, experimenters worked hard to confirm the predictions of theorists at Harvard and Cornell by detecting and measuring the masses, spins and transitions among the eight sub-threshold charmonium states. Later on, they would do the same for 14 relatively narrow states of bottomonium.

Abdus Salam, Tom Ball and Paul Musset

Other experimenters were searching for particles containing just one charmed quark or antiquark. In our 1975 paper “Hadron Masses in a Gauge Theory”, de Rújula, Georgi and I included predictions of the masses of several not-yet-discovered charmed mesons and baryons. The first claim to have detected charmed particles was made in 1975 by Robert Palmer and Nick Samios at Brookhaven, again with a bubble-chamber event. It seemed to show a cascade decay process in which one charmed baryon decays into another charmed baryon, which itself decays. The measured masses of both of the charmed baryons were in excellent agreement with our predictions. Though the claim was not widely accepted, I believe to this day that Samios and Palmer were the first to detect charmed particles.

Sheldon Glashow and Steven Weinberg

The SLAC electron–positron collider, operating well above charm threshold, was certainly producing charmed particles copiously. Why were they not being detected? I recall attending a conference in Wisconsin that was largely dedicated to this question. On the flight home, I met my old friend Gerson Goldhaber, who had been struggling unsuccessfully to find them. I think I convinced him to try a bit harder. A couple of weeks later in 1976, Goldhaber and François Pierre succeeded. My role in charm physics had come to a happy ending. 

  • This article is adapted from a presentation given at the Institute of High-Energy Physics in Beijing on 20 October 2024 to celebrate the 50th anniversary of the discovery of the J/ψ.

Muon cooling kickoff at Fermilab

More than 100 accelerator scientists, engineers and particle physicists gathered in person and remotely at Fermilab from 30 October to 1 November for the first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. High-energy muon colliders offer a unique combination of discovery potential and precision. Unlike protons, muons are point-like particles, so their full collision energy is available to the hard interaction, allowing comparable physics reach at lower centre-of-mass energies. The large mass of the muon also suppresses synchrotron radiation, making muon colliders promising candidates for exploration at the energy frontier.
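A textbook scaling, quoted here for orientation rather than taken from the workshop, makes the second point concrete: the synchrotron-radiation energy lost per turn in a ring of bending radius ρ grows as the fourth power of the Lorentz factor,

\Delta E_{\mathrm{turn}} \;\propto\; \frac{\gamma^{4}}{\rho} \;=\; \frac{1}{\rho}\left(\frac{E}{mc^{2}}\right)^{4},

so at the same beam energy and bending radius a muon radiates only about (mₑ/mμ)⁴ ≈ (1/207)⁴ ≈ 5 × 10⁻¹⁰ as much as an electron – the reason circular muon colliders can contemplate multi-TeV energies that are impractical for circular electron–positron machines.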

The International Muon Collider Collaboration (IMCC), supported by the EU MuCol study, is working to assess the potential of a muon collider as a future facility, along with the R&D needed to make it a reality. European engagement in this effort crystallised following the 2020 update to the European Strategy for Particle Physics (ESPPU), which identified the development of bright muon beams as a high-priority initiative. Worldwide interest in a muon collider is quickly growing: the 2023 Particle Physics Project Prioritization Panel (P5) recently identified it as an important future possibility for the US particle-physics community; Japanese colleagues have proposed a muon-collider concept, muTRISTAN (CERN Courier July/August 2024 p8); and Chinese colleagues have actively contributed to IMCC efforts as collaboration members.

Lighting the way

The workshop focused on reviewing the scope and design progress of a muon-cooling demonstrator facility, identifying potential host sites and timelines, and exploring science programmes that could be developed alongside it. Diktys Stratakis (Fermilab) began by reviewing the requirements and challenges of muon cooling. Delivering a high-brightness muon beam will be essential to achieving the luminosity needed for a muon collider. The technique proposed for this is ionisation cooling, wherein the phase-space volume of the muon beam decreases as it traverses a sequence of cells, each containing an energy-absorbing material and accelerating radiofrequency (RF) cavities.
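As a rough guide, the balance that ionisation cooling must strike is captured by the standard rate equation for the normalised transverse emittance – a textbook expression rather than a figure presented at the workshop:

\frac{d\epsilon_N}{ds} \;\simeq\; -\,\frac{1}{\beta^{2}}\left|\frac{dE_\mu}{ds}\right|\frac{\epsilon_N}{E_\mu} \;+\; \frac{\beta_\perp\,(13.6\ \mathrm{MeV})^{2}}{2\,\beta^{3}\,E_\mu\,m_\mu c^{2}\,X_0},

where β is the muon velocity, |dEμ/ds| the ionisation energy loss per unit length in the absorber, β⊥ the transverse betatron function at the absorber and X₀ its radiation length. The first term cools; the second, driven by multiple Coulomb scattering, heats. This is why the cells combine tight focusing (small β⊥) with low-Z absorbers (large X₀), and why RF cavities must restore the energy lost at each stage.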

Roberto Losito (CERN) called for a careful balance between ambition and practicality – the programme must be executed in a timely way if a muon collider is to be a viable next-generation facility. The Muon Cooling Demonstrator programme was conceived to prove that this technology can be developed, built and reliably operated. This is a critical step for any muon-collider programme, as highlighted in the ESPPU–LDG Accelerator R&D Roadmap published in 2022. The plan is to pursue a staged approach, starting with the development of the magnet, RF and absorber technology, and demonstrating the robust operation of high-gradient RF cavities in high magnetic fields. The components will then be integrated into a prototype cooling cell. The programme will conclude with a demonstration of the operation of a multi-cell cooling system with a beam, building on the cooling proof of principle made by the Muon Ionisation Cooling Experiment.

Chris Rogers (STFC RAL) summarised an emerging consensus that it is critical to demonstrate the reliable operation of a cooling lattice formed of multiple cells. While the technological complexity of the cooling-cell prototype will undergo further review, the preliminary choice represents a moderately challenging performance target that could be achieved within five to seven years with reasonable investment. The target cooling performance of a whole cooling lattice remains to be established and depends on future funding levels. However, delegates agreed that a timely demonstration is more important than an ambitious cooling target.


The workshop also provided an opportunity to assess progress in designing the cooling-cell prototype. Given that the muon beam originates from hadron decays and is initially the size of a watermelon, solenoid magnets were chosen as they can contain large beams in a compact lattice and provide focusing in both horizontal and vertical planes simultaneously. Marco Statera (INFN LASA) presented preliminary solutions for the solenoid coil configuration based on high-temperature superconductors operating at 20 K: the challenge is to deliver the target magnetic field profile given axial forces, coil stresses and compact integration.

In ionisation cooling, low-Z absorbers are used to reduce the transverse momenta of the muons while keeping the multiple scattering at manageable levels. Candidate materials are lithium hydride and liquid hydrogen. Chris Rogers discussed the need to test absorbers and containment windows at the highest intensities. The potential for performance tests using muons or intensity tests using another particle species such as protons was considered to verify understanding of the collective interaction between the beam and the absorber. RF cavities are required to replace longitudinal energy lost in the absorbers. Dario Giove (INFN LASA) introduced the prototype of an RF structure based on three coupled 704 MHz cavities and presented a proposal to use existing INFN capabilities to carry out a test programme for materials and cavities in magnetic fields. The use of cavity windows was also discussed, as it would enable greater accelerating gradients, though at the cost of beam degradation, increased thermal loads and possible cavity detuning. The first steps in integrating these latest hardware designs into a compact cooling cell were presented by Lucio Rossi (INFN LASA and UMIL). Future work needs to address the management of the axial forces and cryogenic heat loads, Rossi observed.

Many institutes expressed a strong interest in contributing to the programme, both in the hardware R&D and in hosting the eventual demonstrator. The final sessions of the workshop focused on potential host laboratories.


At CERN, two potential sites were discussed, with ongoing studies focusing on the TT7 tunnel, where a moderate-power 10 kW proton beam from the Proton Synchrotron could be used for muon production. Preliminary beam physics studies of muon beam production and transport are already underway. Lukasz Krzempek (CERN) and Paul Jurj (Imperial College London) presented the first integration and beam-physics studies of the demonstrator facility in the TT7 tunnel, highlighting civil engineering and beamline design requirements, logistical challenges and safety considerations, finding no apparent showstoppers.

Jeff Eldred (Fermilab) gave an overview of Fermilab’s broad range of candidate sites and proton-beam energies. While further feasibility studies are required, Eldred highlighted that using 8 GeV protons from the Booster is an attractive option due to the favourable existing infrastructure and its alignment with Fermilab’s muon-collider scenario, which envisions a proton driver based on the same Booster proton energy.

The Fermilab workshop represented a significant milestone in advancing the Muon Cooling Demonstrator, highlighting enthusiasm from the US community to join forces with the IMCC and growing interest in Asia. As Mark Palmer (BNL) observed in his closing remarks, the event underscored the critical need for sustained innovation, timely implementation and global cooperation to make the muon collider a reality.

CLOUD explains Amazon aerosols

In a paper published in the journal Nature, the CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

Aerosols are microscopic particles suspended in the atmosphere that arise from both natural sources and human activities. They play an important role in Earth’s climate system because they seed clouds and influence their reflectivity and coverage. Most aerosols arise from the spontaneous condensation of molecules that are present in the atmosphere only in minute concentrations. However, the vapours responsible for their formation are not well understood, particularly in the remote upper troposphere.

The CLOUD (Cosmics Leaving Outdoor Droplets) experiment at CERN is designed to investigate the formation and growth of atmospheric aerosol particles in a controlled laboratory environment. CLOUD comprises a 26 m³ ultra-clean chamber and a suite of advanced instruments that continuously analyse its contents. The chamber contains a precisely selected mixture of gases under atmospheric conditions, into which beams of charged pions are fired from CERN’s Proton Synchrotron to mimic the influence of galactic cosmic rays.

“Large concentrations of aerosol particles have been observed high over the Amazon rainforest for the past 20 years, but their source has remained a puzzle until now,” says CLOUD spokesperson Jasper Kirkby. “Our latest study shows that the source is isoprene emitted by the rainforest and lofted in deep convective clouds to high altitudes, where it is oxidised to form highly condensable vapours. Isoprene represents a vast source of biogenic particles in both the present-day and pre-industrial atmospheres that is currently missing in atmospheric chemistry and climate models.”

Isoprene is a hydrocarbon containing five carbon atoms and eight hydrogen atoms. It is emitted by broad-leaved trees and other vegetation and is the most abundant non-methane hydrocarbon released into the atmosphere. Until now, isoprene’s ability to form new particles has been considered negligible.

Seeding clouds

The CLOUD results change this picture. By studying the reaction of hydroxyl radicals with isoprene at upper tropospheric temperatures of –30 °C and –50 °C, the collaboration discovered that isoprene oxidation products form copious particles at ambient isoprene concentrations. This new source of aerosol particles does not require any additional vapours. However, when minute concentrations of sulphuric acid or iodine oxoacids were introduced into the CLOUD chamber, a 100-fold increase in aerosol formation rate was observed. Although sulphuric acid derives mainly from anthropogenic sulphur dioxide emissions, the acid concentrations used in CLOUD can also arise from natural sources.

In addition, the team found that isoprene oxidation products drive rapid growth of particles to sizes at which they can seed clouds and influence the climate – a behaviour that persists in the presence of nitrogen oxides produced by lightning at upper-tropospheric concentrations. After continued growth and descent to lower altitudes, these particles may provide a globally important source for seeding shallow continental and marine clouds, which influence Earth’s radiative balance – the amount of incoming solar radiation compared to outgoing longwave radiation (see “Seeding clouds” figure).

“This new source of biogenic particles in the upper troposphere may impact estimates of Earth’s climate sensitivity, since it implies that more aerosol particles were produced in the pristine pre-industrial atmosphere than previously thought,” adds Kirkby. “However, until our findings have been evaluated in global climate models, it’s not possible to quantify the effect.”

The CLOUD findings are consistent with aircraft observations over the Amazon, as reported in an accompanying paper in the same issue of Nature. Together, the two papers provide a compelling picture of the importance of isoprene-driven aerosol formation and its relevance for the atmosphere.

Since it began operation in 2009, the CLOUD experiment has unearthed several mechanisms by which aerosol particles form and grow in different regions of Earth’s atmosphere. “In addition to helping climate researchers understand the critical role of aerosols in Earth’s climate, the new CLOUD result demonstrates the rich diversity of CERN’s scientific programme and the power of accelerator-based science to address societal challenges,” says CERN Director for Research and Computing, Joachim Mnich.

Painting Higgs’ portrait in Paris

The 14th Higgs Hunting workshop took place from 23 to 25 September 2024 at Orsay’s IJCLab and Paris’s Laboratoire Astroparticule et Cosmologie. More than 100 participants joined lively discussions to decipher the latest developments in theory and results from the ATLAS and CMS experiments.

The portrait of the Higgs boson painted by experimental data is becoming more and more precise. Many new Run 2 and first Run 3 results have developed the picture this year. Highlights included the latest di-Higgs combinations, with cross-section upper limits reaching down to 2.5 times the Standard Model (SM) expectation. A few excesses seen in various analyses were also discussed. The CMS collaboration reported a new excess of top–antitop events near the top–antitop production threshold, with a local significance of more than 5σ above the background expected from perturbative quantum chromodynamics (QCD) alone; the excess could be due to a pseudoscalar top–antitop bound state. A new W-boson mass measurement by the CMS collaboration – a subject deeply connected to electroweak symmetry breaking – was also presented, reporting a value consistent with the SM prediction with a precision of 9.9 MeV (CERN Courier November/December 2024 p7).

Parton-shower event generators were in the spotlight. Historical talks by Torbjörn Sjöstrand (Lund University) and Bryan Webber (University of Cambridge) described the evolution of the PYTHIA and HERWIG generators, the crucial role they played in the discovery of the Higgs boson, and the role they now play in the LHC’s physics programme. Differences in the modelling of the parton-shower systematics by the ATLAS and CMS collaborations led to lively discussions!

The vision talk was given by Lance Dixon (SLAC), on the reconstruction of scattering amplitudes directly from their analytic properties as a complementary approach to Lagrangians and Feynman diagrams. Oliver Bruning (CERN) conveyed the message that the HL-LHC accelerator project is well on track, and Patricia McBride (Fermilab) reached a similar conclusion regarding the Phase-2 upgrades of ATLAS and CMS, enjoining new and young people to join the effort to ensure the upgrades are ready and commissioned for the start of Run 4.

The next Higgs Hunting workshop will be held in Orsay and Paris from 15 to 17 July 2025, following EPS-HEP in Marseille from 7 to 11 July.

Trial trap on a truck

Thirty years ago, physicists from Harvard University set out to build a portable antiproton trap. They tested it on electrons, transporting them 5000 km from Nebraska to Massachusetts, but it was never used to transport antimatter. Now, a spin-off project of the Baryon Antibaryon Symmetry Experiment (BASE) at CERN has tested its own antiproton trap, this time using protons. The ultimate goal is to deliver antiprotons to laboratories beyond CERN.

“For studying the fundamental properties of protons and antiprotons, you need to take extremely precise measurements – as precise as you can possibly make it,” explains principal investigator Christian Smorra. “This level of precision is extremely difficult to achieve in the antimatter factory, and can only be reached when the accelerator is shut down. This is why we need to relocate the measurements – so we can get rid of these problems and measure anytime.”

The team has made considerable strides in miniaturising their apparatus. BASE-STEP is far and away the most compact design for an antiproton trap yet built, measuring just 2 metres in length, 1.58 metres in height and 0.87 metres across. Even so, at a weight of 1 tonne, transporting it is a complex operation. On 24 October, 70 protons were loaded into the trap, which was then lifted onto a truck using two overhead cranes. The protons made a round trip through CERN’s main site before returning home to the antimatter factory. All 70 protons were safely transported and the experiment with these particles continued seamlessly, successfully demonstrating the trap’s performance.

Antimatter needs to be handled carefully, to avoid it annihilating with the walls of the trap. This is hard enough to achieve in the controlled environment of a laboratory, let alone on a moving truck. Just like in the BASE laboratory, BASE-STEP uses a Penning trap with two electrode stacks inside a single solenoid. The magnetic field confines charged particles radially, and the electric fields trap them axially. The first electrode stack collects antiprotons from CERN’s antimatter factory and serves as an “airlock”, protecting the stored antiprotons from annihilation with molecules of gas entering from outside. The second is used for long-term storage. While in transit, non-destructive image-current detection monitors the particles and makes sure they have not hit the walls of the trap.
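For orientation, the textbook relations for an ideal Penning trap – generic expressions, not BASE-STEP design values – show how the two fields share the work. With a magnetic field B along the trap axis and an electrostatic quadrupole potential of depth V₀ and characteristic size d, a particle of charge q and mass m oscillates along the axis at

\omega_z = \sqrt{\frac{q V_0}{m d^{2}}},

while the radial motion splits the free cyclotron frequency ω_c = qB/m into a fast modified-cyclotron mode and a slow magnetron mode,

\omega_\pm = \frac{\omega_c}{2} \pm \sqrt{\frac{\omega_c^{2}}{4} - \frac{\omega_z^{2}}{2}}, \qquad \omega_c^{2} = \omega_+^{2} + \omega_-^{2} + \omega_z^{2}.

The tiny image currents induced in the electrodes by these oscillations are what the non-destructive detection system listens to while the trap is on the road.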

“We originally wanted a system that you can put in the back of your car,” says Smorra. “Next, we want to try using permanent magnets instead of a superconducting solenoid. This would make the trap even smaller and save CHF 300,000. With this technology, there will be so much more potential for future experiments at CERN and beyond.”

With or without a superconducting magnet, continuous cooling is essential to prevent heat from degrading the trap’s ultra-high vacuum. Penning traps conventionally require two separate cooling systems – one for the trap and one for the superconducting magnet. BASE-STEP combines the cooling systems into one, as the Harvard team proposed in 1993. Ultimately, the transport system will have a cryocooler attached to a mobile power generator, with a liquid-helium buffer tank as a backup. Should the power generator be interrupted, the backup cooling system provides a grace period of four hours to fix it and save the precious cargo of antiprotons. But such a scenario carries no safety risk given the minuscule amount of antimatter being transported. “The worst that can happen is the antiprotons annihilate, and you have to go back to the antimatter factory to refill the trap,” explains Smorra.

With the proton trial-run a success, the team are confident they will be able to use this apparatus to deliver antiprotons to precision laboratories in Europe. Next summer, BASE-STEP will load up the trap with 1000 antiprotons and hit the road. Their first stop is scheduled to be Heinrich Heine University Düsseldorf in Germany.

“We can use the same apparatus for the antiproton transport,” says Smorra. “All we need to do is switch the polarity of the electrodes.”

Emphasising the free circulation of scientists

Physics is a universal language that unites scientists worldwide. No event illustrates this more vividly than the general assembly of the International Union of Pure and Applied Physics (IUPAP). The 33rd assembly convened 100 delegates representing territories around the world in Haikou, China, from 10 to 14 October 2024. Amid today’s polarised global landscape, one clear commitment emerged: to uphold the universality of science and ensure the free movement of scientists.

IUPAP was established in 1922 in the aftermath of World War I to coordinate international efforts in physics. Its logo is recognisable from conferences and proceedings, but its mission is less widely understood. IUPAP is the only worldwide organisation dedicated to the advancement of all fields of physics. Its goals include promoting global development and cooperation in physics by sponsoring international meetings; strengthening physics education, especially in developing countries; increasing diversity and inclusion in physics; advancing the participation and recognition of women and of people from under-represented groups; enhancing the visibility of early-career talents; and promoting international agreements on symbols, units, nomenclature and standards. At the 33rd assembly, 300 physicists were elected to the executive council and specialised commissions for a period of three years.

Global scientific initiatives were highlighted, including the International Year of Quantum Science and Technology (IYQ2025) and the International Decade on Science for Sustainable Development (IDSSD) from 2024 to 2033, which was adopted by the United Nations General Assembly in August 2023. A key session addressed the importance of industry partnerships, with delegates exploring strategies to engage companies in IYQ2025 and IDSSD to further IUPAP’s mission of using physics to drive societal progress. Nobel laureate Giorgio Parisi discussed the role of physics in promoting a sustainable future, and public lectures by fellow laureates Barry Barish, Takaaki Kajita and Samuel Ting filled the 1820-seat Oriental Universal Theater with enthusiastic students.

A key focus of the meeting was visa-related issues affecting international conferences. Delegates reaffirmed the union’s commitment to scientists’ freedom of movement. IUPAP stands against any discrimination in physics and will continue to sponsor events only in locations that uphold this value – a stance that runs counter to the policies of countries imposing sanctions on scientists affiliated with specific institutions.

A joint session with the fall meeting of the Chinese Physical Society celebrated the 25th anniversary of the IUPAP working group “Women in Physics” and emphasised diversity, equity and inclusion in the field. Since 2002, IUPAP has established precise guidelines for the sponsorship of conferences to ensure that women are fairly represented among participants, speakers and committee members, and has actively monitored the data ever since. This has contributed to a significant change in the participation of women in IUPAP-sponsored conferences. IUPAP is now building on this still-necessary work on gender by focusing on discrimination on the grounds of disability and ethnicity.

The closing ceremony brought together the themes of continuity and change. Incoming president Silvina Ponce Dawson (University of Buenos Aires) and president-designate Sunil Gupta (Tata Institute) outlined their joint commitment to maintaining an open dialogue among all physicists in an increasingly fragmented world, and to promoting physics as an essential tool for development and sustainability. Outgoing leaders Michel Spiro (CNRS) and Bruce McKellar (University of Melbourne) were honoured for their contributions, and the ceremonial handover symbolised a smooth transition of leadership.

As the general assembly concluded, there was a palpable sense of momentum. From strategic modernisation to deeper engagement with global issues, IUPAP is well-positioned to make physics more relevant and accessible. The resounding message was one of unity and purpose: the physics community is dedicated to leveraging science for a brighter, more sustainable future.
