
A new phase for the FCC

FCC Week 2025 gathered more than 600 participants from 34 countries together in Vienna from 19 to 23 May. The meeting was the first following the submission of the FCC’s feasibility study to the European Strategy for Particle Physics (CERN Courier May/June 2025 p9). Comprising three volumes – covering physics and detectors, accelerators and infrastructure, and civil engineering and sustainability – the study represents the most comprehensive blueprint to date for a next-generation collider facility. The next phase will focus on preparing a robust implementation strategy, via technical design, cost assessment, environmental planning and global engagement.

CERN Director-General Fabiola Gianotti said the integrated FCC programme offers unparalleled opportunities to explore physics at the shortest distances, and noted growing support and enthusiasm for the programme within the community. That enthusiasm is reflected in the collaboration’s growth: the FCC collaboration now includes 162 institutes from 38 countries, with 28 new Memoranda of Understanding signed in the past year. These include new partnerships in Latin America, Asia and Ukraine, as well as Statements of Intent from the US and Canada. The FCC vision has also gained visibility in high-level policy dialogues, including the Draghi report on European competitiveness. Scientific plenaries and parallel sessions highlighted updates on simulation tools, rare-process searches and strategies to probe beyond the Standard Model. Detector R&D has progressed significantly, with prototyping, software development and AI-driven simulations advancing rapidly.

In accelerator design, developments included updated lattice and optics concepts involving global “head-on” compensation (using opposing beam interactions) and local chromaticity corrections (to the dependence of beam optics on particle energy). Refinements were also presented to injection schemes, beam collimation and the mitigation of collective effects. A central tool in these efforts is the Xsuite simulation platform, whose capabilities now include spin tracking and modelling based on real collider environments such as SuperKEKB.

Technical innovations also came to the fore. The superconducting RF system for FCC-ee includes 400 MHz Nb/Cu cavities for low-energy operation and 800 MHz Nb cavities for higher-energy modes. The introduction of reverse-phase operation and new RF source concepts – such as the tristron, with energy efficiencies above 90% (CERN Courier May/June 2025 p30) – represents major design advances.

Design developments

Vacuum technologies based on ultrathin NEG coating and discrete photon stops, as well as industrialisation strategies for cost control, are under active development. For FCC-hh, high-field magnet R&D continues on both Nb₃Sn prototypes and high-temperature superconductors.

Sessions on technical infrastructure explored everything from grid design, cryogenics and RF power to heat recovery, robotics and safety systems. Sustainability concepts, including renewable energy integration and hydrogen storage, showcased the project’s interdisciplinary scope and long-term environmental planning.

FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach

The Early Career Researchers forum drew nearly 100 participants for discussions on sustainability, governance and societal impact. The session culminated in a commitment to inclusive collaboration, echoed by the quote from Austrian-born artist, architect and environmentalist Friedensreich Hundertwasser (1928–2000): “Those who do not honour the past lose the future. Those who destroy their roots cannot grow.”

This spirit of openness and public connection also defined the week’s city-wide engagement. FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach. In particular, the “Big Science, Big Impact” session – co-organised with the Austrian Federal Economic Chamber (WKO) – highlighted CERN’s broader role in economic development. Daniel Pawel Zawarczynski (WKO) shared examples of small and medium enterprise growth and technology transfer, noting that CERN participation can open new markets, from tunnelling to aerospace. Economist Gabriel Felbermayr referred to a recent WIFO analysis indicating a benefit-to-cost ratio for the FCC greater than 1.2 under conservative assumptions. The FCC is not only a tool for discovery, observed Johannes Gutleber (CERN), but also a platform enabling technology development, open software innovation and workforce training.

The FCC awards celebrate the creativity, rigour and passion that early-career researchers bring to the programme. This year, Tsz Hong Kwok (University of Zürich) and Audrey Piccini (CERN) won poster prizes, Sara Aumiller (TU München) and Elaf Musa (DESY) received innovation awards, and Ivan Karpov (CERN) and Nicolas Vallis (PSI) were honoured with paper prizes sponsored by Physical Review Accelerators and Beams. As CERN Council President Costas Fountas reminded participants, the FCC is not only about pushing the frontiers of knowledge, but also about enabling a new generation of ideas, collaborations and societal progress.

Discovering the neutrino sky

Lake Baikal, the Mediterranean Sea and the deep, clean ice at the South Pole: trackers. The atmosphere: a calorimeter. Mountains and even the Moon: targets. These will be the tools of the neutrino astrophysicist in the next two decades. Potentially observable energies dwarf those of the particle physicist doing repeatable experiments, rising up to 1 ZeV (10²¹ eV) for some detector concepts.

The natural accelerators of the neutrino astrophysicist are also humbling. Consider, for instance, the extraordinary relativistic jets emerging from the supermassive black hole in Messier 87 – an accelerator that stretches for about 5000 light years, or roughly 315 million times the distance from the Earth to the Sun.

Alongside gravitational waves, high-energy neutrinos have opened up a new chapter in astronomy. They point to the most extreme events in the cosmos. They can escape from regions where high-energy photons are attenuated by gas and dust, such as NGC 1068, the first steady neutrino emitter to be discovered (see “The neutrino sky” figure). Their energies can rise orders of magnitude above 1 PeV (10¹⁵ eV), where the universe becomes opaque to photons due to pair production with the cosmic microwave background. Unlike charged cosmic rays, they are not deflected by magnetic fields, preserving their original direction.

Breaking into the exascale calls for new thinking

High-energy neutrinos therefore offer a unique window into some of the most profound questions in modern physics. Are there new particles beyond the Standard Model at the highest energies? What acceleration mechanisms allow nature to propel them to such extraordinary energies? And is dark matter implicated in these extreme events? With the observation of a 220 +570/−110 PeV neutrino confounding the limits set by prior observatories and opening up the era of ultra-high-energy neutrino astronomy (CERN Courier March/April 2025 p7), the time is ripe for a new generation of neutrino detectors on an even grander scale (see “Thinking big” table).

A cubic-kilometre ice cube

Detecting high-energy neutrinos is a serious challenge. Though the neutrino–nucleon cross section increases a little less than linearly with neutrino energy, the flux of cosmic neutrinos drops as the inverse square or faster, reducing the event rate by nearly an order of magnitude per decade. A cubic-kilometre-scale detector is required to measure cosmic neutrinos beyond 100 TeV, and Earth starts to be opaque as energies rise beyond a PeV or so, when the odds of a neutrino being absorbed as it passes through the planet are roughly even, depending on the direction of the event.
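As a rough, illustrative check of this scaling (assuming a cross section rising as E^0.36 and a diffuse flux falling as E^-2.3 – indicative values, not figures quoted here), the number of events per decade of energy indeed falls by close to a factor of ten:

```python
import numpy as np

# Assumed scalings (illustrative): cross-section sigma ~ E^0.36, diffuse flux dN/dE ~ E^-2.3.
# Events per decade of energy then scale as E * flux * sigma ~ E^(1 - 2.3 + 0.36).
E = np.logspace(0, 3, 4)                      # four decades in energy (arbitrary units)
rate_per_decade = E ** (1 - 2.3 + 0.36)
print(rate_per_decade / rate_per_decade[0])   # ~ [1, 0.11, 0.013, 0.0015]
```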

Thinking big

The journey of cosmic neutrino detection began off the coast of the Hawaiian Islands in the 1980s, led by John Learned of the University of Hawaii at Mānoa. The DUMAND (Deep Underwater Muon And Neutrino Detector) project sought to use both an array of optical sensors to measure Cherenkov light and acoustic detectors to measure the pressure waves generated by energetic particle cascades in water. It was ultimately cancelled in 1995 due to engineering difficulties related to deep-sea installation, data transmission over long underwater distances and sensor reliability under high pressure.

The next generation of cubic-kilometre-scale neutrino detectors built on DUMAND’s experience. The IceCube Neutrino Observatory has pioneered neutrino astronomy at the South Pole since 2011, probing energies from 10 GeV to 100 PeV, and is now being joined by experiments under construction such as KM3NeT in the Mediterranean Sea, which observed the 220 PeV candidate, and Baikal–GVD in Lake Baikal, the deepest lake on Earth. All three experiments watch for the deep inelastic scattering of high-energy neutrinos, using optical sensors to detect Cherenkov photons emitted by secondary particles.

Exascale from above

A decade of data-taking from IceCube has been fruitful. The Milky Way has been observed in neutrinos for the first time. A neutrino candidate event has been observed that is consistent with the Glashow resonance – the resonant production in the ice of a real W boson by a 6.3 PeV electron–antineutrino – confirming a longstanding prediction from 1960. Neutrino emission has been observed from supermassive black holes in NGC 1068 and TXS 0506+056. A diffuse neutrino flux has been discovered beyond 10 TeV. Neutrino mixing parameters have been measured. And flavour ratios have been constrained: due to the averaging of neutrino oscillations over cosmological distances, significant deviations from a 1:1:1 ratio of electron, muon and tau neutrinos could imply new physics such as the violation of Lorentz invariance, non-standard neutrino interactions or neutrino decay.
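The flavour-averaging argument can be made concrete with a short calculation. The sketch below uses assumed best-fit mixing angles and neglects the CP phase (both simplifications, not values taken from this article); it shows how the canonical pion-decay source ratio of 1:2:0 averages to roughly 1:1:1 at Earth:

```python
import numpy as np

# Assumed mixing angles in degrees (illustrative best-fit values); CP phase set to zero.
t12, t23, t13 = np.radians([33.4, 49.1, 8.6])
s12, c12 = np.sin(t12), np.cos(t12)
s23, c23 = np.sin(t23), np.cos(t23)
s13, c13 = np.sin(t13), np.cos(t13)

# Standard PMNS parameterisation with delta_CP = 0 (real matrix).
U = np.array([
    [ c12*c13,                s12*c13,               s13    ],
    [-s12*c23 - c12*s23*s13,  c12*c23 - s12*s23*s13, s23*c13],
    [ s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13, c23*c13],
])

# Oscillation-averaged conversion probability: P(alpha -> beta) = sum_i |U_ai|^2 |U_bi|^2.
P = (U**2) @ (U**2).T
source = np.array([1/3, 2/3, 0])   # nu_e : nu_mu : nu_tau = 1 : 2 : 0 from pion decay
print(P @ source)                  # ~ [0.33, 0.35, 0.32]: close to 1:1:1 at Earth
```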

The sensitivity and global coverage of water-Cherenkov neutrino observatories are set to increase still further. The Pacific Ocean Neutrino Experiment (P-ONE) aims to establish a cubic-kilometre-scale deep-sea neutrino telescope off the coast of Canada; IceCube will expand the volume of its optical array by a factor of eight; and the TRIDENT and HUNT experiments, currently being prototyped in the South China Sea, may offer the largest detector volumes of all. These detectors will improve sky coverage, enhance angular resolution, and increase statistical precision in the study of neutrino sources from 1 TeV to 10 PeV and above.

Breaking into the exascale calls for new thinking.

Into the exascale

Optical Cherenkov detectors have been exceptionally successful in establishing neutrino astronomy, but the attenuation of optical photons in water and ice limits the horizontal spacing of photodetectors to a few hundred metres at most, constraining the scalability of the technology. To achieve sensitivity to ultra-high energies measured in EeV (10¹⁸ eV), an instrumented area of order 100 km² would be required. Constructing an optical-based detector on such a scale is impractical.

Earth skimming

One solution is to exchange the tracking volume of IceCube and its siblings with a larger detector that uses the atmosphere as a calorimeter: the deposited energy is sampled on the Earth’s surface.

The Pierre Auger Observatory in Argentina epitomises this approach. If IceCube is presently the world’s largest detector by volume, the Pierre Auger Observatory is the world’s largest detector by area. Over an area of 3000 km², 1660 water Cherenkov detectors and 24 fluorescence telescopes sample the particle showers generated when cosmic rays with energies beyond 10 EeV strike the atmosphere, producing billions of secondary particles. Among the showers it detects are surely events caused by ultra-high-energy neutrinos, but how might they be identified?

Out on a limb

One of the most promising approaches is to filter events based on where the air shower reaches its maximum development in the atmosphere. Cosmic rays tend to interact after traversing much less atmosphere than neutrinos, since the weakly interacting neutrinos have a much smaller cross-section than the hadronically interacting cosmic rays. In some cases, tau neutrinos can even skim the Earth’s atmospheric edge or “limb” as seen from space, interacting to produce a strongly boosted tau lepton that emerges from the rock (unlike an electron) to produce an upward-going air shower when it decays tens of kilometres later – though not so much later (unlike a muon) that it has escaped the atmosphere entirely. This signature is not possible for charged cosmic rays. So far, Auger has detected no neutrino candidate events of either topology, imposing stringent upper limits on the ultra-high-energy neutrino flux that are compatible with limits set by IceCube. The AugerPrime upgrade, soon expected to be fully operational, will equip each surface detector with scintillator panels and improved electronics.

Pole position

Experiments in space are being developed to detect these rare showers with an even larger instrumentation volume. POEMMA (Probe of Extreme Multi-Messenger Astrophysics) is a proposed satellite mission designed to monitor the Earth’s atmosphere from orbit. Two satellites equipped with fluorescence and Cherenkov detectors will search for ultraviolet photons produced by extensive air showers (see “Exascale from above” figure). EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon 2) will test the same detection methods from the vantage point of high-atmosphere balloons. These instruments can help distinguish cosmic rays from neutrinos by identifying shallow showers and up-going events.

Another way to detect ultra-high-energy neutrinos is by using mountains and valleys as natural neutrino targets. This Earth-skimming technique also primarily relies on tau neutrinos, as the tau leptons produced via deep inelastic scattering in the rock can emerge from Earth’s crust and decay within the atmosphere to generate detectable particle showers in the air.

The Giant Radio Array for Neutrino Detection (GRAND) aims to detect radio signals from these tau-induced air showers using a large array of radio antennas spread over thousands of square kilometres (see “Earth skimming” figure). GRAND is planned to be deployed in multiple remote, mountainous locations, with the first site in western China, followed by others in South America and Africa. The Tau Air-Shower Mountain-Based Observatory (TAMBO) has been proposed to be deployed on the face of the Colca Canyon in the Peruvian Andes, where an array of scintillators will detect the electromagnetic signals from tau-induced air showers.

Another proposed strategy that builds upon the Earth-skimming principle is the Trinity experiment, which employs an array of Cherenkov telescopes to observe nearby mountains. Ground-based air Cherenkov detectors are known for their excellent angular resolution, allowing for precise pointing to trace back to the origin of the high-energy primary particles. Trinity is a proposed system of 18 wide-field Cherenkov telescopes optimised for detecting neutrinos in the 10 PeV–1000 PeV energy range from the direction of nearby mountains – an approach validated by experiments such as Ashra–NTA, deployed on Hawaii’s Big Island utilising the natural topography of the Mauna Loa, Mauna Kea and Hualālai volcanoes.

Diffuse neutrino landscape

All these ultra-high-energy experiments detect particle showers as they develop in the atmosphere, whether from above, below or skimming the surface. But “Askaryan” detectors operate deep within the ice of the Earth’s poles, where both the neutrino interaction and detection occur.

In 1962 Soviet physicist Gurgen Askaryan reasoned that electromagnetic showers must develop a net negative charge excess as they develop, due to the Compton scattering of photons off atomic electrons and the ionisation of atoms by charged particles in the shower. As the charged shower propagates faster than the phase velocity of light in the medium, it should emit radiation in a manner analogous to Cherenkov light. However, there are key differences: Cherenkov radiation is typically incoherent and emitted by individual charged particles, while Askaryan radiation is coherent, being produced by a macroscopic buildup of charge, and is significantly stronger at radio frequencies. The Askaryan effect was experimentally confirmed at SLAC in 2001.
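A toy comparison shows why coherence matters so much for detectability: with N emitters radiating in phase the power grows as N², whereas random phases give only N (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (1_000, 1_000_000):
    random_phases = rng.uniform(0, 2 * np.pi, N)
    incoherent = np.abs(np.exp(1j * random_phases).sum()) ** 2   # fluctuates around N
    coherent = float(N) ** 2                                     # all amplitudes add in phase
    print(f"N = {N:>9}: incoherent ~ {incoherent:.1e}, coherent = {coherent:.1e}")
```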

Optimised arrays

Because the attenuation length of radio waves is an order of magnitude longer than for optical photons, it becomes feasible to build much sparser arrays of radio antennas to detect the Askaryan signals than the compact optical arrays used in deep ice Cherenkov detectors. Such detectors are optimised to cover thousands of square kilometres, with typical energy thresholds beyond 100 PeV.

The Radio Neutrino Observatory in Greenland (RNO-G) is a next-generation in-ice radio detector currently under construction on the ~3 km-thick ice sheet above central Greenland, operating at frequencies in the 150–700 MHz range. RNO-G will consist of a sparse array of 35 autonomous radio detector stations, each separated by 1.25 km, making it the first large-scale radio neutrino array in the northern hemisphere.

Moon skimming

In the southern hemisphere, the proposed IceCube-Gen2 will complement the aforementioned eightfold expanded optical array with a radio component covering a remarkable 500 km². The cold Antarctic ice provides an optimal medium for radio detection, with radio attenuation lengths of roughly 2 km facilitating cost-efficient instrumentation of the large volumes needed to measure the low ultra-high-energy neutrino flux. The radio array will combine in-ice omnidirectional antennas 150 m below the surface with high-gain antennas at a depth of 15 m and upward-facing antennas on the surface to veto the cosmic-ray background.

The IceCube-Gen2 radio array will have the sensitivity to probe features of the astrophysical neutrino spectrum beyond the PeV scale, addressing the tension between upper limits from Auger and IceCube, and KM3NeT’s 220 +570/−110 PeV neutrino candidate – the sole ultra-high-energy neutrino yet observed. Extrapolating an isotropic and diffuse flux, IceCube should have detected 75 events in the 72–2600 PeV energy range over its operational period. However, no events have been observed above 70 PeV.

Perhaps the most ambitious way to observe ultra-high-energy neutrinos is to use the Moon as a target

If the detected KM3NeT event has a neutrino energy of around 100 PeV, it could originate from the same astrophysical sources responsible for accelerating ultra-high-energy cosmic rays. In this case, interactions between accelerated protons and ambient photons from starlight or synchrotron radiation would produce pions that decay into ultra-high-energy neutrinos. Alternatively, if its true energy is closer to 1 EeV, it is more likely cosmogenic: arising from the Greisen–Zatsepin–Kuzmin process, in which ultra-high-energy cosmic rays interact with cosmic microwave background photons, producing a Δ-resonance that decays into pions and ultimately neutrinos. IceCube-Gen2 will resolve the spectral shape from PeV to 10 EeV and differentiate between these two possible production mechanisms (see “Diffuse neutrino landscape” figure).

Moonshots

Remarkably, the Radar Echo Telescope (RET) is exploring using radar to actively probe the ice for transient signals. Unlike Askaryan-based detectors, which passively listen for radio pulses generated by charge imbalances in particle cascades, RET’s concept is to beam a radar signal and watch for reflections off the ionisation caused by particle showers. SLAC’s T576 experiment demonstrated the concept in the lab in 2022 by observing a radar echo from a beam of high-energy electrons scattering off a plastic target. RET has now been deployed in Greenland, where it seeks echoes from down-going cosmic rays as a proof of concept.

Full-sky coverage

Perhaps the most ambitious way to observe ultra-high-energy neutrinos foresees using the Moon as a target. When neutrinos with energies above 100 EeV interact near the rim of the Moon, they can induce particle cascades that generate coherent Askaryan radio emission which could be detectable on Earth (see “Moon skimming” figure). Observations could be conducted from Earth-based radio telescopes or from satellites orbiting the Moon to improve detection sensitivity. Lunar Askaryan detectors could potentially be sensitive to neutrinos up to 1 ZeV (10²¹ eV). No confirmed detections have been reported so far.

Neutrino network

Proposed neutrino observatories are distributed across the globe – a necessary requirement for full sky coverage, given the Earth is not transparent to ultra-high-energy neutrinos (see “Full-sky coverage” figure). A network of neutrino telescopes ensures that transient astrophysical events can always be observed as the Earth rotates. This is particularly important for time-domain multi-messenger astronomy, enabling coordinated observations with gravitational wave detectors and electromagnetic counterparts. The ability to track neutrino signals in real time will be key to identifying the most extreme cosmic accelerators and probing fundamental physics at ultra-high energies.

Accelerators on autopilot

The James Webb Space Telescope and the LHC

Particle accelerators can be surprisingly temperamental machines. Expertise, specialisation and experience are needed to maintain their performance. Nonlinear and resonant effects keep accelerator engineers and physicists up late into the night. With so many variables to juggle and fine-tune, even the most seasoned experts will be stretched by future colliders. Can artificial intelligence (AI) help?

Proposed solutions take inspiration from space telescopes. The two fields have been jockeying to innovate since the Hubble Space Telescope launched with minimal automation in 1990. In the 2000s, multiple space missions tested AI for fault detection and onboard decision-making, before the LHC took a notable step forward for colliders in the 2010s by incorporating machine learning (ML) in trigger decisions. Most recently, the James Webb Space Telescope launched in 2021 using AI-driven autonomous control systems for mirror alignment, thermal balancing and scheduling science operations with minimal intervention from the ground. The new Efficient Particle Accelerators project at CERN, which I have led since its approval in 2023, is now rolling out AI at scale across CERN’s accelerator complex (see “Dynamic and adaptive” image).

AI-driven automation will only become more necessary in the future. As well as being unprecedented in size and complexity, future accelerators will also have to navigate new constraints such as fluctuating energy availability from intermittent sources like wind and solar power, requiring highly adaptive and dynamic machine operation. This would represent a step change in complexity and scale. A new equipment integration paradigm would automate accelerator operation, equipment maintenance, fault analysis and recovery. Every item of equipment will need to be fully digitalised and able to auto-configure, auto-stabilise, auto-analyse and auto-recover. As in a driverless car, layers of instrumentation and software must also be added to ensure safe and efficient performance.

On-site human intervention of the LHC could be treated as a last resort – or perhaps designed out entirely

The final consideration is full virtualisation. While space telescopes are famously inaccessible once deployed, a machine like the Future Circular Collider (FCC) would present similar challenges. Given the scale and number of components, on-site human intervention should be treated as a last resort – or perhaps designed out entirely. This requires a new approach: equipment must be engineered for autonomy from the outset – with built-in margins, high reliability, modular designs and redundancy. Emerging technologies like robotic inspection, automated recovery systems and digital twins will play a central role in enabling this. A digital twin – a real-time, data-driven virtual replica of the accelerator – can be used to train and constrain control algorithms, test scenarios safely and support predictive diagnostics. Combined with differentiable simulations and layered instrumentation, these tools will make autonomous operation not just feasible, but optimal.

The field is moving fast. Recent advances allow us to rethink how humans interact with complex machines – not by tweaking hardware parameters, but by expressing intent at a higher level. Generative pre-trained transformers, a class of large language models, open the door to prompting machines with concepts rather than step-by-step instructions. While further R&D is needed for robust AI copilots, tailor-made ML models have already become standard tools for parameter optimisation, virtual diagnostics and anomaly detection across CERN’s accelerator landscape.

Progress is diverse. AI can reconstruct LHC bunch profiles using signals from wall current monitors, analyse camera images to spot anomalies in the “dump kickers” that safely remove beams, or even identify malfunctioning beam-position monitors. In the following, I identify four different types of AI that have been successfully deployed across CERN’s accelerator complex. They are merely the harbingers of a whole new way of operating CERN’s accelerators.

1. Beam steering with reinforcement learning

In 2020, LINAC4 became the new first link in the LHC’s modernised proton accelerator chain – and quickly became an early success story for AI-assisted control in particle accelerators.

Small deviations in a particle beam’s path within the vacuum chamber can have a significant impact, including beam loss, equipment damage or degraded beam quality. Beams must stay precisely centred in the beampipe to maintain stability and efficiency. But their trajectory is sensitive to small variations in magnet strength, temperature, radiofrequency phase and even ground vibrations. Worse still, errors typically accumulate along the accelerator, compounding the problem. Beam-position monitors (BPMs) provide measurements at discrete points – often noisy – while steering corrections are applied via small dipole corrector magnets, typically using model-based correction algorithms.
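For orientation, here is a minimal sketch of such a model-based correction: invert the response matrix that links corrector kicks to BPM readings with an SVD pseudo-inverse and apply the kicks that best cancel the measured offsets. The matrix and readings below are randomly generated placeholders, not machine data:

```python
import numpy as np

rng = np.random.default_rng(42)
R = rng.normal(size=(20, 10))                 # response: orbit shift at 20 BPMs per unit kick of 10 correctors
x_measured = rng.normal(scale=2.0, size=20)   # noisy trajectory readings [mm]

# Pseudo-inverse correction; rcond truncates small singular values for robustness.
kicks = -np.linalg.pinv(R, rcond=1e-2) @ x_measured
x_corrected = x_measured + R @ kicks
print(f"rms before: {x_measured.std():.2f} mm, after: {x_corrected.std():.2f} mm")
```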

Beam steering

In 2019, the reinforcement learning (RL) algorithm normalised advantage function (NAF) was trained online to steer the H⁻ beam in the horizontal plane of LINAC4 during commissioning. In RL, an agent learns by interacting with its environment and receiving rewards that guide it toward better decisions. NAF uses a neural network to model the so-called Q-function that estimates rewards in RL and uses this to continuously refine its control policy.
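The quadratic form at the heart of NAF can be sketched in a few lines. The toy module below illustrates the published NAF idea rather than the code used at LINAC4, and the layer sizes are arbitrary: Q(s, a) is decomposed into a state value plus an advantage term that is never positive, so the greedy action is simply the learned mean μ(s).

```python
import torch
import torch.nn as nn

class NAFHead(nn.Module):
    """Q(s, a) = V(s) - 0.5 (a - mu(s))^T P(s) (a - mu(s)), with P = L L^T positive definite."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.V = nn.Linear(hidden, 1)                               # state value
        self.mu = nn.Linear(hidden, action_dim)                     # greedy corrector settings
        self.L = nn.Linear(hidden, action_dim * (action_dim + 1) // 2)
        self.action_dim = action_dim

    def forward(self, state, action):
        h = self.body(state)
        V, mu = self.V(h), self.mu(h)
        # Assemble a lower-triangular matrix; exponentiate the diagonal to keep P positive definite.
        L = torch.zeros(state.shape[0], self.action_dim, self.action_dim)
        rows, cols = torch.tril_indices(self.action_dim, self.action_dim)
        L[:, rows, cols] = self.L(h)
        diag = torch.arange(self.action_dim)
        L[:, diag, diag] = L[:, diag, diag].exp()
        P = L @ L.transpose(1, 2)
        d = (action - mu).unsqueeze(-1)
        advantage = -0.5 * (d.transpose(1, 2) @ P @ d).squeeze(-1)  # always <= 0
        return V + advantage                                        # Q(s, a)
```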

Initially, the algorithm required many attempts to find an effective strategy, and in early iterations it occasionally worsened the beam trajectory, but as training progressed, performance improved rapidly. Eventually, the agent achieved a final trajectory with an RMS deviation better than the 1 mm goal (see “Beam steering” figure).

This experiment demonstrated that RL can learn effective control policies for accelerator-physics problems within a reasonable amount of time. The agent was fully trained after about 300 iterations, or 30 minutes of beam time, making online training feasible. Since 2019, the use of AI techniques has expanded significantly across accelerator labs worldwide, targeting more and more problems that don’t have any classical solution. At CERN, tools such as GeOFF (Generic Optimisation Framework and Frontend) have been developed to standardise and scale these approaches throughout the accelerator complex.

2. Efficient injection with Bayesian optimisation

Bayesian optimisation (BO) is a global optimisation technique that uses a probabilistic model to find the optimal parameters of a system by balancing exploration and exploitation, making it ideal for expensive or noisy evaluations. A game-changing example of its use is the record-breaking LHC ion run in 2024. BO was extensively used all along the ion chain, and made a significant difference in LEIR (the low-energy ion ring, the first synchrotron in the chain) and in the Super Proton Synchrotron (SPS, the last accelerator before the LHC). In LEIR, most processes are no longer manually optimised, but the multi-turn injection process is still non-trivial and depends on various longitudinal and transverse parameters from its injector LINAC3.
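A generic BO loop of the kind described here can be sketched as follows. This is illustrative only – the production tools at CERN, such as GeOFF, add physics constraints and more careful acquisition handling – and `measure_injection_loss` in the usage comment is a hypothetical stand-in for a real machine measurement:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X, gp, y_best, xi=0.01):
    # Acquisition for minimisation: expected amount by which a candidate beats the best point so far.
    mu, sigma = gp.predict(X, return_std=True)
    imp = y_best - mu - xi
    z = imp / np.maximum(sigma, 1e-12)
    return np.where(sigma > 1e-12, imp * norm.cdf(z) + sigma * norm.pdf(z), 0.0)

def bayesian_optimise(objective, bounds, n_init=5, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))        # initial random probes
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                           # refit the surrogate model
        cand = rng.uniform(lo, hi, size=(2000, len(bounds)))   # cheap candidate search
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X, y = np.vstack([X, x_next]), np.append(y, objective(x_next))  # one machine shot per iteration
    return X[np.argmin(y)], y.min()

# e.g. best_settings, loss = bayesian_optimise(measure_injection_loss, bounds=[(-1, 1)] * 21)
```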

Quick recovery

In heavy-ion accelerators, particles are injected in a partially stripped charge state and must be converted to higher charge states at different stages for efficient acceleration. In the LHC ion injector chain, the stripping foil between LINAC3 and LEIR raises the charge of the lead ions from Pb²⁷⁺ to Pb⁵⁴⁺. A second stripping foil, between the Proton Synchrotron (PS) and the SPS, fully ionises the beam to Pb⁸²⁺ ions for final acceleration toward the LHC. These foils degrade over time due to thermal stress, radiation damage and sputtering, and must be remotely exchanged using a rotating wheel mechanism. Because each new foil has slightly different stripping efficiency and scattering properties, beam transmission must be re-optimised – a task that traditionally required expert manual tuning.

In 2024 it was successfully demonstrated that BO with embedded physics constraints can efficiently optimise the 21 most important parameters between LEIR and the LINAC3 injector. Following a stripping foil exchange, the algorithm restored the accumulated beam intensity in LEIR to better than nominal levels within just a few dozen iterations (see “Quick recovery” figure).

This example shows how AI can now match or outperform expert human tuning, significantly reducing recovery time, freeing up operator bandwidth and improving overall machine availability.

3. Adaptively correcting the 50 Hz ripple

In high-precision accelerator systems, even tiny perturbations can have significant effects. One such disturbance is the 50 Hz ripple in power supplies – small periodic fluctuations in current that originate from the electrical grid. While these ripples were historically only a concern for slow-extracted proton beams sent to fixed-target experiments, 2024 revealed a broader impact.

SPS intensity

In the SPS, adaptive Bayesian optimisation (ABO) was deployed to control this ripple in real time. ABO extends BO by learning the objective not only as a function of the control parameters, but also as a function of time, which then allows continuous control through forecasting.
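Conceptually the extension is small: time becomes one more input to the surrogate model, and the optimum is forecast for the next shot rather than the current one. A minimal, purely illustrative sketch (all numbers made up):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Model the measured loss as f(correction, time) and forecast the best correction for the next cycle.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
history_X = np.array([[0.2, 0.0], [0.5, 1.0], [0.3, 2.0]])   # (ripple correction, shot time)
history_y = np.array([0.8, 0.4, 0.5])                        # measured beam loss (arbitrary units)
gp.fit(history_X, history_y)

t_next = 3.0
candidates = np.column_stack([np.linspace(0, 1, 200), np.full(200, t_next)])
mu = gp.predict(candidates)
best_correction = candidates[np.argmin(mu), 0]               # applied feed-forward before the next shot
```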

The algorithm generated shot-by-shot feed-forward corrections to inject precise counter-noise into the voltage regulation of one of the quadrupole magnet circuits. This approach was already in use for the North Area proton beams, but in summer 2024 it was discovered that even for high-intensity proton beams bound for the LHC, the same ripple could contribute to beam losses at low energy.

Thanks to existing ML frameworks, prior experience with ripple compensation and available hardware for active noise injection, the fix could be implemented quickly. While the gains for protons were modest – around 1% improvement in losses – the impact for LHC ion beams was far more dramatic. Correcting the 50 Hz ripple increased ion transmission by more than 15%. ABO is therefore now active whenever ions are accelerated, improving transmission and supporting the record beam intensity achieved in 2024 (see “SPS intensity” figure).

4. Predicting hysteresis with transformers

Another outstanding issue in today’s multi-cycling synchrotrons with iron-dominated electromagnets is correcting for magnetic hysteresis – a phenomenon where the magnetic field depends not only on the current but also on its cycling history. Cumbersome mitigation strategies include playing dummy cycles and manually re-tuning parameters after each change in magnetic history.

SPS hysteresis

While phenomenological hysteresis models exist, their accuracy is typically insufficient for precise beam control. ML offers a path forward, especially when supported by high-quality field measurement data. Recent work using temporal fusion transformers – a deep-learning architecture designed for multivariate time-series prediction – has demonstrated that ML-based models can accurately predict field deviations from the programmed transfer function across different SPS magnetic cycles (see “SPS hysteresis” figure). This hysteresis model is now used in the SPS control room to provide feed-forward corrections – pre-emptive adjustments to magnet currents based on the predicted magnetic state – ensuring field stability without waiting for feedback from beam measurements and manual adjustments.
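The framing can be illustrated with a generic sequence-to-one regressor – below, a plain transformer encoder rather than the temporal fusion transformer actually used, with a hypothetical feature layout: the recent history of programmed currents goes in, and the predicted field deviation for the coming cycle comes out to drive the feed-forward correction.

```python
import torch
import torch.nn as nn

class FieldErrorPredictor(nn.Module):
    """Toy sequence-to-one regressor: read a history of programmed currents and
    predict the field deviation from the nominal transfer function for the next cycle."""
    def __init__(self, n_features=2, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, history):                  # history: (batch, time steps, features)
        h = self.encoder(self.embed(history))
        return self.head(h[:, -1])               # deviation predicted for the coming cycle

model = FieldErrorPredictor()
past_cycles = torch.randn(8, 50, 2)              # e.g. (current, ramp rate) over the recent magnetic history
predicted_deviation = model(past_cycles)         # input to the feed-forward current correction
```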

A blueprint for the future

With the Efficient Particle Accelerators project, CERN is developing a blueprint for the next generation of autonomous equipment. This includes concepts for continuous self-analysis, anomaly detection and new layers of “Internet of Things” instrumentation that support auto-configuration and predictive maintenance. The focus is on making it easier to integrate smart software layers. Full results are expected by the end of LHC Run 3, with robust frameworks ready for deployment in Run 4.

AI can now match or outperform expert human tuning, significantly reducing recovery time and improving overall machine availability

The goal is ambitious: to reduce maintenance effort by at least 50% wherever these frameworks are applied. This is based on a realistic assumption – already today, about half of all interventions across the CERN accelerator complex are performed remotely, a number that continues to grow. With current technologies, many of these could be fully automated.

Together, these developments will not only improve the operability and resilience of today’s accelerators, but also lay the foundation for CERN’s future machines, where human intervention during operation may become the exception rather than the rule. AI is set to transform how we design, build and operate accelerators – and how we do science itself. It opens the door to new models of R&D, innovation and deep collaboration with industry. 

Powering into the future

The Higgs boson is the most intriguing and unusual object yet discovered by fundamental science. There is no higher experimental priority for particle physics than building an electron–positron collider to produce it copiously and study it precisely. Given the importance of energy efficiency and cost effectiveness in the current geopolitical context, this gives unique strategic importance to developing a humble technology called the klystron – a technology that will consume the majority of site power at every major electron–positron collider under consideration, but which has historically only achieved 60% energy efficiency.

The klystron was invented in 1937 by two American brothers, Russell and Sigurd Varian. The Varians wanted to improve aircraft radar systems. At the time, there was a growing need for better high-frequency amplification to detect objects at a distance using radar, a critical technology in the lead-up to World War II.

The Varians’ RF source operated around 3.2 GHz, or a wavelength of about 9.4 cm, in the microwave region of the electromagnetic spectrum. At the time, this was an extraordinarily high frequency – conventional vacuum tubes struggled beyond 300 MHz. Microwave wavelengths promised better resolution, less noise, and the ability to penetrate rain and fog. Crucially, antennas could be small enough to fit on ships and planes. But the source was far too weak for radar.

Klystrons are ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories

The Varians’ genius was to invent a way to amplify the electromagnetic signal by up to 30 dB, or a factor of 1000. The US and British military used the klystron for airborne radar, submarine detection of U-boats in the Atlantic and naval gun targeting beyond visual range. Radar helped win the Battle of Britain, the Battle of the Atlantic and Pacific naval battles, making surprise attacks harder by giving advance warning. Winston Churchill called radar “the secret weapon of WWII”, and the klystron was one of its enabling technologies.

With its high gain and narrow bandwidth, the klystron was the first practical microwave amplifier and became foundational in radio-frequency (RF) technology. This was the first time anyone had efficiently amplified microwaves with stability and directionality. Klystrons have since been used in satellite communication, broadcasting and particle accelerators, where they power the resonant RF cavities that accelerate the beams. Klystrons are therefore ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories, which are central to the future of high-energy physics.

Klystrons and the Higgs

Hadron colliders like the LHC tend to be circular. Their fundamental energy limit is given by the maximum strength of the bending magnets and the circumference of the tunnel. A handful of RF cavities repeatedly accelerate beams of protons or ions after hundreds or thousands of bending magnets force the beams to loop back through them.

Operating principle

Thanks to their clean and precisely controllable collisions, all Higgs factories under consideration are electron–positron colliders. Electron–positron colliders can be either circular or linear in construction. The dynamics of circular electron–positron colliders are radically different as the particles are 2000 times lighter than protons. The strength required from the bending magnets is relatively low for any practical circumference; however, the energy of the particles must be continually replenished, as they radiate away energy in the bends through synchrotron radiation, requiring hundreds of RF cavities. RF cavities are equally important in the linear case. Here, all the energy must be imparted in a single pass, with each cavity accelerating the beam only once, requiring hundreds or even thousands of RF cavities.
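The scale of the problem follows from the textbook expression for the energy an electron radiates per turn, U₀ [GeV] ≈ 88.5 × 10⁻⁶ × E⁴ [GeV] / ρ [m]. With FCC-ee-like numbers chosen purely for illustration (they are not quoted in this article):

```python
# Synchrotron-radiation loss per turn for an electron: U0 [GeV] ~ 88.5e-6 * E^4 / rho.
E_beam = 182.5    # beam energy in GeV (illustrative, top-quark running)
rho = 10_000.0    # bending radius in metres (illustrative)
U0 = 88.5e-6 * E_beam**4 / rho
print(f"energy lost per turn: {U0:.1f} GeV")   # ~10 GeV, to be restored by the RF cavities every turn
```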

Either way, 50 to 60% of the total energy consumed by an electron-positron collider is used for RF acceleration, compared to a relatively small fraction in a hadron collider. Efficiently powering the RF cavities is of paramount importance to the energy efficiency and cost effectiveness of the facility as a whole. RF acceleration is therefore of far greater significance at electron–positron colliders than at hadron colliders.

From a pen to a mid-size car

RF cavities cannot simply be plugged into the wall. These finely tuned resonant structures must be excited by RF power – an alternating microwave electromagnetic field that is supplied through waveguides at the appropriate frequency. Due to the geometry of resonant cavities, this excites an on-axis oscillating electrical field. Particles that arrive when the electrical field has the right direction are accelerated. For this reason, particles in an accelerator travel in bunches separated by a long distance, during which the RF field is not optimised for acceleration.

CLIC klystron

Despite the development of modern solid-state amplifiers, the Varians’ klystron is still the most practical technology for generating RF power at the MW level. Klystrons can be as small as a pen or as large and heavy as a mid-size car, depending on the frequency and power required. Linear colliders use higher frequencies, which permit higher accelerating gradients and a shorter linac, whereas a circular collider does not need high gradients because the energy that must be restored on each turn is smaller.

Klystrons fall under the general classification of vacuum tubes – fully enclosed miniature electron accelerators with their own source, accelerating path and “interaction region” where the RF field is produced. Their name is derived from the Greek verb describing the action of waves crashing against the seashore. In a klystron, RF power is generated when electrons crash against a decelerating electric field.

Every klystron contains at least two cavities: an input and an output. The input cavity is powered by a weak RF source that must be amplified. The output cavity generates the strongly amplified RF signal generated by the klystron. All this comes encapsulated in an ultra-high vacuum volume inside the field of a solenoid for focusing (see “Operating principle” figure).

Thanks to the efforts made in recent years, high-efficiency klystrons are now approaching the ultimate theoretical limit

Inside the klystron, electrons leave a heated cathode and are accelerated by a high voltage applied between the cathode and the anode. As the electrons are pushed forward, a small RF signal applied to the input cavity either accelerates or decelerates them according to their time of arrival. After a long drift, late-emitted accelerated electrons catch up with early-emitted decelerated electrons, overlapping with those that saw no net accelerating force. This is called velocity bunching.
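A ballistic toy model shows the effect: a small sinusoidal velocity modulation at the input cavity turns into pronounced density bunching after a suitable drift (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, f = 10_000, 1.0                                   # electrons, modulation frequency (arbitrary units)
t_emit = rng.uniform(0, 2, n)                        # emission times spread over two RF periods
v0, dv = 1.0, 0.05                                   # mean velocity and modulation depth
v = v0 * (1 + dv * np.sin(2 * np.pi * f * t_emit))   # velocity kick from the input cavity

for drift in (0.0, 2.5, 5.0):                        # arrival-time density at different drift lengths
    phase = ((t_emit + drift / v) * f) % 1.0
    counts, _ = np.histogram(phase, bins=20)
    print(f"drift {drift:4.1f}: peak/average density = {counts.max() / counts.mean():.2f}")
```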

A second, passive accelerating cavity is placed at the location where maximum bunching occurs. Though of a comparable design, this cavity behaves in an inverse fashion to those used in particle accelerators. Rather than converting the energy of an electromagnetic field into the kinetic energy of particles, the kinetic energy of particles is converted into RF electromagnetic waves. This process can be enhanced by the presence of other passive cavities in between the already mentioned two, as well as by several iterations of bunching and de-bunching before reaching the output cavity. Once decelerated, the spent beam finishes its life in a dump or a water-cooled collector.

Optimising efficiency

Klystrons are ultimately RF amplifiers with a very high gain of the order of 30 to 60 dB and a very narrow bandwidth. They can be built at any frequency from a few hundred MHz to tens of GHz, but each operates within a very small range of frequencies called the bandwidth. After broadcasting became reliant on wider bandwidth vacuum tubes, their application in particle accelerators turned into a small market for high-power klystrons. Most klystrons for science are manufactured by a handful of companies which offer a limited number of models that have been in operation for decades. Their frequency, power and duty cycle may not correspond to the specifications of a new accelerator being considered – and in most cases, little or no thought has been given to energy efficiency or carbon footprint.

Battling space charge

When searching for suitable solutions for the next particle-physics collider, however, optimising the energy efficiency of klystrons and other devices that will determine the final energy bill and CO2 emissions is a task of the utmost importance. Therefore, nearly a decade ago, RF experts at CERN and the University of Lancaster began the High-Efficiency Klystron (HEK) project to maximise beam-to-RF efficiency: the fraction of the power contained in the klystron’s electron beam that is converted into RF power by the output cavity.

The complexity of klystrons lies in the highly nonlinear fields to which the electrons are subjected. In the cathode and the first stages of electrostatic acceleration, the collective effect of “space-charge” forces between the electrons determines the strongly nonlinear dynamics of the beam. The same is true when the bunching tightens along the tube, with mutual repulsion between the electrons preventing optimal bunching at the output cavity.

For this reason, klystron design is not amenable to simple analytical calculation. Since 2017, CERN has developed a code called KlyC that simulates the beam along the klystron channel and optimises parameters such as frequency and distance between cavities 100 to 1000 times faster than commercial 3D codes. KlyC is available in the public domain and is being used by an ever-growing list of labs and industrial partners.

Perveance

The main characteristic of a klystron is an obscure quantity inherited from electron-gun design called perveance: the beam current divided by the gun voltage raised to the power 3/2. For small perveances, space-charge forces are small, due to either high energy or low intensity, making bunching easy. For large perveances, space-charge forces oppose bunching, lowering beam-to-RF efficiency. High-power klystrons require large currents and therefore high perveances. One way to produce highly efficient, high-power klystrons is therefore for multiple cathodes to generate multiple low-perveance electron beams in a “multi-beam” (MB) klystron.
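For orientation, perveance is usually quoted as a micro-perveance, K × 10⁶ with K = I/V^(3/2). The numbers below are hypothetical, chosen only to show why splitting the current over several beams helps:

```python
# Perveance K = I / V**1.5; quoted as micro-perveance (K * 1e6). Numbers are illustrative only.
V = 110e3   # gun voltage [V]
I = 20.0    # total beam current [A]
print(f"single beam : micro-perveance = {I / V**1.5 * 1e6:.2f}")         # ~0.55
print(f"8-beam tube : micro-perveance = {(I / 8) / V**1.5 * 1e6:.2f}")   # ~0.07 per beam, easier to bunch
```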

High-luminosity gains

Overall, there is an almost linear dependence between perveance and efficiency. Thanks to the efforts made in recent years, high-efficiency klystrons are now outperforming industrial klystrons by 10% in efficiency for all values of perveance, and approaching the ultimate theoretical limit (see “Battling space charge” figure).

One of the first designs to be brought to life was based on the E37113, a pulsed klystron with 6 MW peak power working in the X-band at 12 GHz, commercialised by CANON ETD. This klystron is currently used in the test facility at CERN for validating CLIC RF prototypes, which could greatly benefit from higher power. As part of a collaboration with CERN, CANON ETD built a new tube, according to the design optimised at CERN, to reach a beam-to-RF efficiency of 57% instead of the original 42% (see “CLIC klystron” image and CERN Courier September/October 2022 p9).

As its interfaces with the high-voltage (HV) source and solenoid were kept identical, one can now benefit from 8 MW of RF power for the same energy consumption as before. As changes in the manufacturing of the tube channel are just a small fraction of the manufacture of the instrument, its price should not increase considerably, even if more accurate production methods are required.

In pursuit of power

Towards an FCC klystron

Another successful example of re-designing a tube for high efficiency is the TH2167 – the klystron behind the LHC, which is manufactured by Thales. Originally exhibiting a beam-to-RF efficiency of 60%, it was re-designed by the CERN team to gain 10 percentage points and reach 70% efficiency, while again using the same HV source and solenoid. The tube prototype has been built and is currently at CERN, where it has demonstrated the capacity to generate 350 kW of RF power with the same input energy as previously required to produce 300 kW. This power will be decisive when dealing with the higher intensity beam expected after the LHC luminosity upgrade. And all this again for a price comparable to previous models (see “High-luminosity gains” image).

The quest for the highest efficiency is not over yet. The CERN team is currently working on a design that could power the proposed Future Circular Collider (FCC). With about a hundred accelerating cavities, the electron and positron beams would need to be replenished with 100 MW of RF power, making energy efficiency imperative.

The quest for the highest efficiency is not over yet

Although the same tube in use for the LHC, now boosted to 70% efficiency, could be used to power the FCC, CERN is working towards a vacuum tube that could reach an efficiency over 80%. A two-stage multi-beam klystron was initially designed that was capable of reaching 86% efficiency and generating 1 MW of continuous-wave power (see “Towards an FCC klystron” figure).

Motivated by recent changes in FCC parameters, we have rediscovered an old device called a tristron, which is not a conventional klystron but a “gridded tube” in which the electron-beam bunching mechanism is different. Tristrons have a lower power gain but much greater flexibility. Simulations have confirmed that they can reach efficiencies as high as 90%. This could be a disruptive technology with applications well beyond accelerators. Manufacturing a prototype is an excellent opportunity for knowledge transfer from fundamental research to industrial applications.

FCC feasibility study complete

The final report of a detailed study investigating the technical and financial feasibility of a Future Circular Collider (FCC) at CERN was released on 31 March. Building on a conceptual design study conducted between 2014 and 2018, the three-volume report is authored by over 1400 scientists and engineers in more than 400 institutes worldwide, and covers aspects of the project ranging from civil engineering to socioeconomic impact. As recommended in the 2020 update to the European Strategy for Particle Physics (ESPP), it was completed in time to serve as an input to the ongoing 2026 update to the ESPP (see “European strategy update: the community speaks”).

The FCC is a proposed collider infrastructure that could succeed the LHC in the 2040s. Its scientific motivation stems from the discovery in 2012 of the final particle of the Standard Model (SM), the Higgs boson, with a mass of just 125 GeV, and the wealth of precision measurements and exploratory searches during 15 years of LHC operations that have excluded many signatures of new physics at the TeV scale. The report argues that the FCC is particularly well equipped to study the Higgs and associated electroweak sectors in detail and that it provides a broad and powerful exploratory tool that would push the limits of the unknown as far as possible.

The report describes how the FCC will seek to address key domains formulated in the 2013 and 2020 ESPP updates, including: mapping the properties of the Higgs and electroweak gauge bosons with accuracies orders of magnitude better than today to probe the processes that led to the emergence of the Brout–Englert–Higgs field’s nonzero vacuum expectation value; ensuring a comprehensive and accurate campaign of precision electroweak, quantum chromodynamics, flavour and top-quark measurements sensitive to tiny deviations from the SM, probing energy scales far beyond the direct kinematic reach; improving by orders of magnitude the sensitivity to rare and elusive phenomena at low energies, including the possible discovery of light particles with very small couplings such as those relevant to the search for dark matter; and increasing by at least an order of magnitude the direct discovery reach for new particles at the energy frontier.

This technology has significant potential for industrial and societal applications

The FCC research programme outlines two possible stages: an electron–positron collider (FCC-ee) running at several centre-of-mass energies to serve as a Higgs, electroweak and top-quark factory, followed at a later stage by a proton–proton collider (FCC-hh) operating at an unprecedented collision energy. An FCC-ee with four detectors is judged to be “the electroweak, Higgs and top factory project with the highest luminosity proposed to date”, able to produce 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs, almost 3 × 10⁶ Higgs bosons, and 2 × 10⁶ top-quark pairs over 15 years of operations. Its versatile RF system would enable flexibility in the running sequence, states the report, allowing experimenters to move between physics programmes and scan through energies at ease. The report also outlines how the FCC-ee injector offers opportunities for other branches of science, including the production of spatially coherent photon beams with a brightness several orders of magnitude higher than any existing or planned light source.

The estimated cost of the construction of the FCC-ee is CHF 15.3 billion. This investment, which would be distributed over a period of about 15 years starting from the early 2030s, includes civil engineering, technical infrastructure, electron and positron accelerators, and four detectors.

Ready for construction

The report describes how key FCC-ee design approaches, such as a double-ring layout, top-up injection with a full-energy booster, a crab-waist collision scheme, and precise energy calibration, have been demonstrated at several previous or presently operating colliders. The FCC-ee is thus “technically ready for construction” and is projected to deliver four-to-five orders of magnitude higher luminosity per unit electrical power than LEP. During operation, its energy consumption is estimated to vary from 1.1 to 1.8 TWh/y depending on the operation mode, compared with CERN’s current consumption of about 1.3 TWh/y. Decarbonised energy including an ever-growing contribution from renewable sources would be the main source of energy for the FCC. Ongoing technology R&D aims at further increasing FCC-ee’s energy efficiency (see “Powering into the future”).

Assuming 14 T Nb₃Sn magnet technology as a baseline design, a subsequent hadron collider with a centre-of-mass energy of 85 TeV entering operation in the early 2070s would extend the energy frontier by a factor of six and provide an integrated luminosity five to 10 times higher than that of the HL-LHC during 25 years of operation. With four detectors, FCC-hh would increase the mass reach of direct searches for new particles to several tens of TeV, probing a broad spectrum of beyond-the-SM theories and potentially identifying the sources of any deviations found in precision measurements at FCC-ee, especially those involving the Higgs boson. An estimated sample of more than 20 billion Higgs bosons would allow the absolute determination of its couplings to muons, to photons, to the top quark and to Zγ below the percent level, while di-Higgs production would bring the uncertainty on the Higgs self-coupling below the 5% level. FCC-hh would also significantly advance understanding of the hot QCD medium by enabling lead–lead and other heavy-ion collisions at unprecedented energies, and could be configured to provide electron–proton and electron–ion collisions, says the report.

The FCC-hh design is based on LHC experience and would leverage a substantial amount of the technical infrastructure built for the first FCC stage. Two hadron injector options are under study involving a superconducting machine in either the LHC or SPS tunnel. For the purpose of a technical feasibility analysis, a reference scenario based on 14 T Nb₃Sn magnets cooled to 1.9 K was considered, yielding 2.4 MW of synchrotron radiation and a power consumption of 360 MW or 2.3 TWh/y – a comparable power consumption to FCC-ee.

FCC-hh’s power consumption might be reduced below 300 MW if the magnet temperature can be raised to 4.5 K. Outlining the potential use of high-temperature superconductors for 14 to 20 T dipole magnets operating at temperatures between 4.5 K and 20 K, the report notes that such technology could either extend the centre-of-mass energy of FCC-hh to 120 TeV or lead to significantly improved operational sustainability at the same collision energy. “The time window of more than 25 years opened by the lepton-collider stage is long enough to bring that technology to market maturity,” says FCC study leader Michael Benedikt (CERN). “High-temperature superconductors have significant potential for industrial and societal applications, and particle accelerators can serve as pilots for market uptake, as was the case with the Tevatron and the LHC for NbTi technology.”

Society and sustainability

The report details the concepts and paths to keep the FCC’s environmental footprint low while boosting new technologies to benefit society and developing territorial synergies such as energy reuse. The civil construction process for FCC-ee, which would also serve FCC-hh, is estimated to result in about 500,000 tCO₂(eq) over a period of 10 years, which the authors say corresponds to approximately one-third of the carbon budget of the Paris Olympic Games. A socio-economic impact assessment of the FCC integrating environmental aspects throughout its entire lifecycle reveals a positive cost–benefit ratio, even under conservative assumptions and adverse implementation conditions.

The actual journey towards the realisation of the FCC starts now

A major achievement of the FCC feasibility study has been the development of the layout and placement of the collider ring and related infrastructure, which have been optimised for scientific benefit while taking into account territorial compatibility, environmental and construction constraints, and cost. No fewer than 100 scenarios were developed and analysed before settling on the preferred option: a ring circumference of 90.7 km with shaft depths ranging between 200 and 400 m, with eight surface sites and four experiments. Throughout the study, CERN has been accompanied by its host states, France and Switzerland, working with entities at the local, regional and national levels to ensure a constructive dialogue with territorial stakeholders.

The final report of the FCC feasibility study, together with numerous referenced technical documents, has been submitted to the ongoing ESPP 2026 update, along with studies of alternative projects proposed by the community. The CERN Council may take a decision around 2028.

“After four years of effort, perseverance and creativity, the FCC feasibility study was concluded on 31 March 2025,” says Benedikt. “The actual journey towards the realisation of the FCC starts now and promises to be at least as fascinating as the successive steps that brought us to the present state.”

Gaseous detectors school at CERN

How do wire-based detectors compare to resistive-plate chambers? How well do micropattern gaseous detectors perform? Which gas mixtures optimise operation? And how will detectors face the challenges of future, more powerful accelerators?

Thirty-two students attended the first DRD1 Gaseous Detectors School at CERN last November. The EP-DT Gas Detectors Development (GDD) lab hosted academic lectures and varied hands-on laboratory exercises. Students assembled their own detectors, learnt about their operating characteristics and explored radiation-imaging methods with state-of-the-art readout approaches – all under the instruction of more than 40 distinguished lecturers and tutors, including renowned scientists, pioneers of innovative technologies and emerging experts.

DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. The collaboration focuses on knowledge sharing and scientific exchange, in addition to the development of novel gaseous detector technologies to address the needs of future experiments. This instrumentation school, initiated in DRD1’s first year, marks the start of a series of regular training events for young researchers that will also serve to exchange ideas between research groups and encourage collaboration.

The school will take place annually, with future editions hosted at different DRD1 member institutes to reach students from a number of regions and communities.

Educational accelerator open to the public

What better way to communicate accelerator physics to the public than using a functioning particle accelerator? From January, visitors to CERN’s Science Gateway were able to witness a beam of protons being accelerated and focused before their very eyes. Its designers believe it to be the first working proton accelerator to be exhibited in a museum.

“ELISA gives people who visit CERN a chance to really see how the LHC works,” says Science Gateway’s project leader Patrick Geeraert. “This gives visitors a unique experience: they can actually see a proton beam in real time. It then means they can begin to conceptualise the experiments we do at CERN.”

The model accelerator is inspired by a component of LINAC 4 – the first stage in the chain of accelerators used to prepare beams of protons for experiments at the LHC. Hydrogen is injected into a low-pressure chamber and ionised; a one-metre-long RF cavity then accelerates the protons to 2 MeV, after which they pass through a thin vacuum-sealed window into the open air. In dim light, the protons ionise gas molecules in the air, producing a faint glow that lets visitors follow the beam’s path (see “Accelerating education” figure).
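
To give a sense of scale (a hedged aside, not from the article): a 2 MeV proton travels at only about 6.5% of the speed of light, as a short relativistic-kinematics calculation shows.

```python
import math

# Speed of a proton with 2 MeV kinetic energy (the ELISA beam energy quoted above).
m_p_mev = 938.272    # proton rest energy [MeV], standard value
kinetic_mev = 2.0    # beam kinetic energy [MeV]
c = 2.998e8          # speed of light [m/s]

gamma = 1.0 + kinetic_mev / m_p_mev     # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c

print(f"beta = {beta:.3f}, v = {beta * c:.2e} m/s")  # ~0.065 c, about 2e7 m/s
```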

ELISA – the Experimental Linac for Surface Analysis – will also be used to analyse the composition of cultural artefacts, geological samples and objects brought in by members of the public. This is an established application of low-energy proton accelerators: for example, a particle accelerator is hidden 15 m below the famous glass pyramids of the Louvre in Paris, though it is almost 40 m long and not freely accessible to the public.

“The proton-beam technique is very effective because it has higher sensitivity and lower backgrounds than electron beams,” explains applied physicist and lead designer Serge Mathot. “You can also perform the analysis in the ambient air, instead of in a vacuum, making it more flexible and better suited to fragile objects.”

For ELISA’s first experiment, researchers from the Australian Nuclear Science and Technology Organisation and from Oxford’s Ashmolean Museum have proposed a joint research project to optimise ELISA’s analysis of paint samples designed to mimic ancient cave art. The ultimate goal is to work towards a portable accelerator that can be taken to regions of the world that lack access to proton beams.

Chamonix looks to CERN’s future

The Chamonix Workshop 2025, held from 27 to 30 January, brought together CERN’s accelerator and experimental communities to reflect on achievements, address challenges and chart a course for the future. As the discussions made clear, CERN is at a pivotal moment. The past decade has seen transformative developments across the accelerator complex, while the present holds significant potential and opportunity.

The workshop opened with a review of accelerator operations, supported by input from December’s Joint Accelerator Performance Workshop. Maintaining current performance levels requires an extraordinary effort across all the facilities. Performance data from the ongoing Run 3 shows steady improvements in availability and beam delivery. These results are driven by dedicated efforts from system experts, operations teams and accelerator physicists, all working to ensure excellent performance and high availability across the complex.

Electron clouds parting

Attention is now turning to Run 4 and the High-Luminosity LHC (HL-LHC) era. Several challenges have been identified, including the demand for high-intensity beams, radiofrequency (RF) power limitations and electron-cloud effects. In the latter case, synchrotron-radiation photons strike the beam-pipe walls, releasing electrons which are then accelerated by proton bunches, triggering a cascading electron-cloud buildup. Measures to address these issues will be implemented during Long Shutdown 3 (LS3), ensuring CERN’s accelerators continue to meet the demands of its diverse physics community.

LS3 will be a crucial period for CERN. In addition to the deployment of the HL-LHC and major upgrades to the ATLAS and CMS experiments, it will see a widespread programme of consolidation, maintenance and improvements across the accelerator complex to secure future exploitation over the coming decades.

Progress on the HL-LHC upgrade was reviewed in detail, with a focus on key systems – magnets, cryogenics and beam instrumentation – and on the construction of critical components such as crab cavities. The next two years will be decisive, with significant system testing scheduled to ensure that these technologies meet ambitious performance targets.

Planning for LS3 is already well advanced. Coordination between all stakeholders has been key to aligning complex interdependencies, and the experienced teams are making strong progress in shaping a resource-loaded plan. The scale of LS3 will require meticulous coordination, but it also represents a unique opportunity to build a more robust and adaptable accelerator complex for the future. Looking beyond LS3, CERN’s unique accelerator complex is well positioned to support an increasingly diverse physics programme. This diversity is one of CERN’s greatest strengths, offering complementary opportunities across a wide range of fields.

The high demand for beam time at ISOLDE, n_TOF, AD-ELENA and the North and East Areas underscores the need for a well-balanced approach that supports a broad range of physics. The discussions highlighted the importance of balancing these demands while ensuring that the full potential of the accelerator complex is realised.

Future opportunities such as those highlighted by the Physics Beyond Colliders study will be shaped by discussions being held as part of the update of the European Strategy for Particle Physics (ESPP). Defining the next generation of physics programmes entails striking a careful balance between continuity and innovation, and the accelerator community will play a central role in setting the priorities.

A forward-looking session at the workshop focused on the Future Circular Collider (FCC) Feasibility Study and the next steps. The physics case was presented alongside updates on territorial implementation and civil-engineering investigations and plans. How the FCC-ee injector complex would fit into the broader strategic picture was examined in detail, along with the goals and deliverables of the pre-technical design report (pre-TDR) phase that is planned to follow the Feasibility Study’s conclusion.

While the FCC remains a central focus, other future projects were also discussed in the context of the ESPP update. These include mature linear-collider proposals, the potential of a muon collider and plasma wakefield acceleration. Development of key technologies, such as high-field magnets and superconducting RF systems, will underpin the realisation of future accelerator-based facilities.

The next steps – preparing for Run 4, implementing the LS3 upgrade programmes and laying the groundwork for future projects – are ambitious but essential. CERN’s future will be shaped by how well we seize these opportunities.

The shared expertise and dedication of CERN’s personnel, combined with a clear strategic vision, provide a solid foundation for success. The path ahead is challenging, but with careful planning, collaboration and innovation, CERN’s accelerator complex will remain at the heart of discovery for decades to come.

The triggering of tomorrow

The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India, in 2013 and Puebla, Mexico, in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen LHC triggers evolve away from hardware-based decisions using coarse-grained information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and their ability to cope with multiple interactions in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level at which events can be reconstructed for further selection by high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant: at the HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is an increased latency – the time delay between input and output – of 10 µs in ATLAS and 12.5 µs in CMS.
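
As an illustrative aside (assuming the LHC’s nominal 40 MHz bunch-crossing rate, a figure not quoted above), these rates and latencies translate into first-level rejection factors and on-detector buffer depths roughly as follows:

```python
# Rough bookkeeping for the HL-LHC first-level triggers (illustrative sketch).
bunch_crossing_rate_hz = 40e6  # assumed nominal LHC crossing rate (25 ns spacing)

for expt, accept_rate_hz, latency_s in [("ATLAS", 1.0e6, 10e-6),
                                        ("CMS",   760e3, 12.5e-6)]:
    rejection = bunch_crossing_rate_hz / accept_rate_hz  # input-to-output rate ratio
    buffered = bunch_crossing_rate_hz * latency_s        # crossings held during the decision
    print(f"{expt}: rejection ~{rejection:.0f}x, ~{buffered:.0f} crossings buffered")
```

For ATLAS this gives a rejection factor of about 40 with roughly 400 buffered crossings; for CMS, about 53 and 500 respectively.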

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including its trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz of lead–lead (PbPb) collisions and several MHz of proton–proton (pp) collisions. In PbPb collisions the full data stream is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.
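
If one assumes, purely for illustration, that the 100 PB buffer is sized to hold roughly the three weeks of data it retains before the raw copies are removed – an assumption, not a figure stated above – the implied sustained write rate is of order tens of GB/s:

```python
# Order-of-magnitude estimate of the ALICE disk-buffer write rate (assumption-laden sketch).
buffer_bytes = 100e15             # quoted buffer size: 100 PB
turnaround_s = 3 * 7 * 24 * 3600  # quoted ~three-week raw-data turnaround

avg_rate_gb_s = buffer_bytes / turnaround_s / 1e9
print(f"Implied average write rate: ~{avg_rate_gb_s:.0f} GB/s")  # ~55 GB/s
```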

The third edition of TDHEP suggests that innovation in this field is only set to accelerate

The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying via a particular channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels as the instantaneous luminosity increased. To solve this issue for Run 3, a high-level software trigger was developed that can handle a 30 MHz event readout with a 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded in the trigger system) are calculated during data taking just after HLT1 and feed the full event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
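
Taking the quoted figures at face value (a back-of-envelope sketch, not part of the workshop report), the average raw event size and the rate reduction at each stage work out as follows:

```python
# LHCb Run 3 data-flow arithmetic from the quoted figures (illustrative only).
readout_rate_hz = 30e6      # quoted event readout rate: 30 MHz
raw_throughput_b_s = 4e12   # quoted raw data flow: 4 TB/s
hlt1_out_hz = 1e6           # quoted HLT1 output rate: 1 MHz
hlt2_out_hz = 20e3          # quoted HLT2 output rate: 20 kHz

event_size_kb = raw_throughput_b_s / readout_rate_hz / 1e3
print(f"Average raw event size: ~{event_size_kb:.0f} kB")               # ~130 kB
print(f"HLT1 rate reduction: ~{readout_rate_hz / hlt1_out_hz:.0f}x")    # 30x
print(f"HLT2 rate reduction: ~{hlt1_out_hz / hlt2_out_hz:.0f}x")        # 50x
print(f"Overall rate reduction: ~{readout_rate_hz / hlt2_out_hz:.0f}x") # 1500x
```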

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km2 of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.

Probing the quark–gluon plasma in Nagasaki

The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted 346 physicists to Nagasaki, Japan, from 22 to 27 September 2024. Delegates discussed the recent experimental and theoretical findings on perturbative probes of the quark–gluon plasma (QGP) – a hot and deconfined state of matter formed in ultrarelativistic heavy-ion collisions.

The four main LHC experiments played a prominent role at the conference, presenting a large set of newly published results from data collected during LHC Run 2, as well as several new preliminary results based on the new data samples from Run 3.

Jet modifications

A number of significant results on the modification of jets in heavy-ion collisions were presented. Splitting functions characterising the evolution of parton showers are expected to be modified in the presence of the QGP, providing experimental access to the medium properties. A more differential look at these modifications was presented through a correlated measurement of the shared momentum fraction and opening angle of the first jet splitting satisfying the “soft drop” condition. Additionally, energy–energy correlators have recently emerged as promising observables in which the properties of jet modification in the medium might be imprinted at different angular scales.

The first measurements of two-particle energy–energy correlators in p–Pb and Pb–Pb collisions were presented, showing modifications of both the small- and large-angle correlations in both systems compared to pp collisions. A long-sought-after effect of energy exchange between the jet and the medium is a correlated response of the medium in the jet direction. For the first time, measurements of hadron–boson correlations in events containing photons or Z bosons showed a clear depletion of the bulk medium in the direction of the Z boson, providing direct evidence of a medium response correlated with the propagating back-to-back jet. In pp collisions, the first direct measurement of the dead cone of beauty quarks, using novel machine-learning methods to reconstruct the beauty hadron from partial decay information, was also shown.

Several new results from studies of particle production in ultraperipheral heavy-ion collisions were discussed. These studies allow us to investigate the possible onset of gluon saturation at low Bjorken-x values. In this context, new results on charm photoproduction, with measurements of incoherent and coherent J/ψ mesons as well as of D0 mesons, were released. Photonuclear production cross-sections of di-jets, covering a large interval of photon energies to scan over different regions of Bjorken-x, were also presented. These measurements pave the way for setting constraints on the gluon component of nuclear parton distribution functions at low Bjorken-x values, over a wide Q2 range, in the absence of significant final-state effects.

New experiments will explore higher-density regions of the QCD–matter phase diagram

During the last few years, a significant enhancement of charm- and beauty-baryon production in proton–proton collisions was observed, compared to measurements in e+e− and ep collisions. These observations have challenged the assumption of the universality of heavy-quark fragmentation across different collision systems. Several intriguing measurements on this topic were released at the conference. In addition to an extended set of charm meson-to-meson and baryon-to-meson production yield ratios, the first measurements of the production of Σc0,++(2520) relative to Σc0,++(2455) at the LHC, obtained using the new Run 3 data samples, were discussed. New insights into the structure of the exotic χc1(3872) state and its hadronisation mechanism were garnered by measuring the ratio of its production yield to that of ψ(2S) mesons in hadronic collisions.

Additionally, strange-to-non-strange production-yield ratios for charm and beauty mesons as a function of the collision multiplicity were released, pointing towards enhanced strangeness production in a higher colour-density environment. Several theoretical approaches implementing hadronisation mechanisms modified with respect to in-vacuum fragmentation have proven able to reproduce at least part of the measurements, but a comprehensive description of heavy-quark hadronisation, in particular in the baryonic sector, has yet to be achieved.

A glimpse into the future of the experimental opportunities in this field was also provided. A new and intriguing set of physics observables for a complete characterisation of the QGP with hard probes will become accessible with the planned upgrades of the ALICE, ATLAS, CMS and LHCb detectors, both during the next long LHC shutdown and in the more distant future. New experiments at CERN, such as NA60+, or in other facilities like the Electron–Ion Collider in the US and J-PARC-HI in Japan, will explore higher-density regions of the QCD–matter phase diagram.

The next edition of this conference series is scheduled to be held in Nashville, US, from 1 to 5 June 2026.
