Automated space telescopes are inspiring a new generation of particle accelerators that are primarily operated by AI. Verena Kain highlights four ways machine learning is already making the LHC more efficient.

Particle accelerators can be surprisingly temperamental machines. Expertise, specialisation and experience are needed to maintain their performance. Nonlinear and resonant effects keep accelerator engineers and physicists up late into the night. With so many variables to juggle and fine-tune, even the most seasoned experts will be stretched by future colliders. Can artificial intelligence (AI) help?
Proposed solutions take inspiration from space telescopes. The two fields have been jockeying to innovate since the Hubble Space Telescope launched with minimal automation in 1990. In the 2000s, multiple space missions tested AI for fault detection and onboard decision-making, before the LHC took a notable step forward for colliders in the 2010s by incorporating machine learning (ML) in trigger decisions. Most recently, the James Webb Space Telescope launched in 2021 using AI-driven autonomous control systems for mirror alignment, thermal balancing and scheduling science operations with minimal intervention from the ground. The new Efficient Particle Accelerators project at CERN, which I have led since its approval in 2023, is now rolling out AI at scale across CERN’s accelerator complex (see “Dynamic and adaptive” image).
AI-driven automation will only become more necessary in the future. As well as being unprecedented in size and complexity, future accelerators will also have to navigate new constraints such as fluctuating energy availability from intermittent sources like wind and solar power, requiring highly adaptive and dynamic machine operation. This would represent a step change in complexity and scale. A new equipment-integration paradigm would automate accelerator operation, equipment maintenance, fault analysis and recovery. Every item of equipment will need to be fully digitalised and able to auto-configure, auto-stabilise, auto-analyse and auto-recover. As in a driverless car, layers of instrumentation and software must also be added to ensure safe and efficient performance.
On-site human intervention could be treated as a last resort – or perhaps designed out entirely
The final consideration is full virtualisation. Space telescopes are famously inaccessible once deployed, and a machine like the Future Circular Collider (FCC) would present similar challenges: given its scale and sheer number of components, on-site human intervention should be treated as a last resort – or perhaps designed out entirely. This requires a new approach: equipment must be engineered for autonomy from the outset – with built-in margins, high reliability, modular designs and redundancy. Emerging technologies like robotic inspection, automated recovery systems and digital twins will play a central role in enabling this. A digital twin – a real-time, data-driven virtual replica of the accelerator – can be used to train and constrain control algorithms, test scenarios safely and support predictive diagnostics. Combined with differentiable simulations and layered instrumentation, these tools will make autonomous operation not just feasible, but optimal.
The field is moving fast. Recent advances allow us to rethink how humans interact with complex machines – not by tweaking hardware parameters, but by expressing intent at a higher level. Generative pre-trained transformers, a class of large language models, open the door to prompting machines with concepts rather than step-by-step instructions. While further R&D is needed for robust AI copilots, tailor-made ML models have already become standard tools for parameter optimisation, virtual diagnostics and anomaly detection across CERN’s accelerator landscape.
Progress is diverse. AI can reconstruct LHC bunch profiles using signals from wall current monitors, analyse camera images to spot anomalies in the “dump kickers” that safely remove beams, or even identify malfunctioning beam-position monitors. In the following, I identify four AI techniques that have been successfully deployed across CERN’s accelerator complex. They are merely the harbingers of a whole new way of operating CERN’s accelerators.
1. Beam steering with reinforcement learning
In 2020, LINAC4 became the new first link in the LHC’s modernised proton accelerator chain – and it quickly proved to be an early success story for AI-assisted control in particle accelerators.
Small deviations in a particle beam’s path within the vacuum chamber can have a significant impact, including beam loss, equipment damage or degraded beam quality. Beams must stay precisely centred in the beampipe to maintain stability and efficiency. But their trajectory is sensitive to small variations in magnet strength, temperature, radiofrequency phase and even ground vibrations. Worse still, errors typically accumulate along the accelerator, compounding the problem. Beam-position monitors (BPMs) provide measurements at discrete points – often noisy – while steering corrections are applied via small dipole corrector magnets, typically using model-based correction algorithms.
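To see what the classical baseline looks like, the sketch below outlines a model-based correction in its simplest form: a response matrix relating corrector kicks to BPM readings is pseudo-inverted via singular-value decomposition, and the measured trajectory is mapped back to corrector settings. The matrix, noise levels and truncation threshold are illustrative assumptions, not LINAC4 values.

```python
# Minimal sketch of SVD-based trajectory correction (illustrative values only).
import numpy as np

rng = np.random.default_rng(seed=1)
n_bpms, n_correctors = 20, 10

# Response matrix R: change in each BPM reading per unit corrector kick.
# In practice this comes from the optics model or from measured kick responses.
R = rng.normal(size=(n_bpms, n_correctors))

# Noisy BPM measurement of the current trajectory (mm), to be steered to zero.
trajectory = rng.normal(scale=2.0, size=n_bpms)

# Pseudo-inverse via SVD; rcond truncates weak singular values so that
# noisy BPM readings do not drive unreasonably large corrector strengths.
corrector_kicks = -np.linalg.pinv(R, rcond=1e-2) @ trajectory

corrected = trajectory + R @ corrector_kicks
print(f"rms before: {np.sqrt(np.mean(trajectory**2)):.2f} mm, "
      f"after: {np.sqrt(np.mean(corrected**2)):.2f} mm")
```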

In 2019, a reinforcement-learning (RL) algorithm based on the normalised advantage function (NAF) was trained online to steer the H– beam in the horizontal plane of LINAC4 during commissioning. In RL, an agent learns by interacting with its environment and receiving rewards that guide it toward better decisions. NAF uses a neural network to model the so-called Q-function, which estimates the expected reward of taking a given action in a given state, and continuously refines its control policy from it.
Initially, the algorithm required many attempts to find an effective strategy, and in early iterations it occasionally worsened the beam trajectory, but as training progressed, performance improved rapidly. Eventually, the agent steered the trajectory to better than the goal of 1 mm RMS (see “Beam steering” figure).
This experiment demonstrated that RL can learn effective control policies for accelerator-physics problems within a reasonable amount of time. The agent was fully trained after about 300 iterations, or 30 minutes of beam time, making online training feasible. Since 2019, the use of AI techniques has expanded significantly across accelerator labs worldwide, targeting more and more problems that don’t have any classical solution. At CERN, tools such as GeOFF (Generic Optimisation Framework and Frontend) have been developed to standardise and scale these approaches throughout the accelerator complex.
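To make the NAF idea concrete, the sketch below shows the heart of such an agent in PyTorch: a single network outputs a state value, a preferred action and a positive-definite matrix defining a quadratic advantage, so the greedy action can be read off directly. The network sizes and the toy choice of BPM readings as state and corrector kicks as action are assumptions for illustration, not the production LINAC4 implementation.

```python
# Minimal sketch of a NAF Q-function network (illustrative, not the CERN code).
import torch
import torch.nn as nn

class NAFQNetwork(nn.Module):
    """Q(s, a) = V(s) - 0.5 (a - mu(s))^T P(s) (a - mu(s)), with P(s) = L(s) L(s)^T
    positive definite, so the best action in state s is simply mu(s)."""
    def __init__(self, n_state, n_action, hidden=64):
        super().__init__()
        self.n_action = n_action
        self.body = nn.Sequential(
            nn.Linear(n_state, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.mu = nn.Linear(hidden, n_action)          # preferred corrector settings
        self.l_entries = nn.Linear(hidden, n_action * (n_action + 1) // 2)
        self.tril_idx = torch.tril_indices(n_action, n_action)

    def forward(self, state, action):
        h = self.body(state)
        V, mu = self.value(h), self.mu(h)
        # Assemble a lower-triangular L with a positive diagonal so that P = L L^T.
        tril = torch.zeros(state.shape[0], self.n_action, self.n_action)
        tril[:, self.tril_idx[0], self.tril_idx[1]] = self.l_entries(h)
        diag = nn.functional.softplus(torch.diagonal(tril, dim1=1, dim2=2)) + 1e-3
        L = tril.tril(-1) + torch.diag_embed(diag)
        P = L @ L.transpose(1, 2)
        d = (action - mu).unsqueeze(-1)
        advantage = -0.5 * (d.transpose(1, 2) @ P @ d).squeeze(-1)
        return V + advantage, mu

# state: normalised BPM readings, action: corrector kick increments (toy sizes)
q_net = NAFQNetwork(n_state=10, n_action=10)
q_value, greedy_action = q_net(torch.randn(4, 10), torch.zeros(4, 10))
print(q_value.shape, greedy_action.shape)   # torch.Size([4, 1]) torch.Size([4, 10])
```

In training, the network is fitted to observed rewards as in standard Q-learning; because the advantage is quadratic in the action, the optimal action is available in closed form as mu(s), which is what makes NAF practical for continuous knobs such as corrector currents.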
2. Efficient injection with Bayesian optimisation
Bayesian optimisation (BO) is a global optimisation technique that uses a probabilistic model to find the optimal parameters of a system by balancing exploration and exploitation, making it ideal for expensive or noisy evaluations. A game-changing example of its use is the record-breaking LHC ion run in 2024. BO was extensively used all along the ion chain, and made a significant difference in LEIR (the low-energy ion ring, the first synchrotron in the chain) and in the Super Proton Synchrotron (SPS, the last accelerator before the LHC). In LEIR, most processes are no longer manually optimised, but the multi-turn injection process is still non-trivial and depends on various longitudinal and transverse parameters from its injector LINAC3.
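The principle can be sketched with a Gaussian-process surrogate and an expected-improvement acquisition function, as below. The noisy “injection efficiency” objective and the four tuning knobs are made-up stand-ins for illustration, not the operational LEIR parameters or CERN’s production framework.

```python
# Minimal Bayesian-optimisation sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(seed=0)

def measure_injection_efficiency(x):
    """Noisy toy objective to be maximised (stand-in for a beam measurement)."""
    return -np.sum((x - 0.3) ** 2) + rng.normal(scale=0.02)

bounds = np.array([[-1.0, 1.0]] * 4)                       # four tuning knobs (assumed)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 4))   # initial random settings
y = np.array([measure_injection_efficiency(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-3)

for iteration in range(30):
    gp.fit(X, y)
    # Expected improvement balances exploitation (high predicted mean)
    # against exploration (high predictive uncertainty).
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 4))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, measure_injection_efficiency(x_next))

print("best settings:", X[np.argmax(y)], "efficiency:", y.max())
```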

In heavy-ion accelerators, particles are injected in a partially stripped charge state and must be converted to higher charge states at different stages for efficient acceleration. In the LHC ion injector chain, the stripping foil between LINAC3 and LEIR raises the charge of the lead ions from Pb27+ to Pb54+. A second stripping foil, between the PS and SPS, fully ionises the beam to Pb82+ ions for final acceleration toward the LHC. These foils degrade over time due to thermal stress, radiation damage and sputtering, and must be remotely exchanged using a rotating wheel mechanism. Because each new foil has slightly different stripping efficiency and scattering properties, beam transmission must be re-optimised – a task that traditionally required expert manual tuning.
In 2024 it was successfully demonstrated that BO with embedded physics constraints can efficiently optimise the 21 most important parameters between LEIR and the LINAC3 injector. Following a stripping foil exchange, the algorithm restored the accumulated beam intensity in LEIR to better than nominal levels within just a few dozen iterations (see “Quick recovery” figure).
This example shows how AI can now match or outperform expert human tuning, significantly reducing recovery time, freeing up operator bandwidth and improving overall machine availability.
3. Adaptively correcting the 50 Hz ripple
In high-precision accelerator systems, even tiny perturbations can have significant effects. One such disturbance is the 50 Hz ripple in power supplies – small periodic fluctuations in current that originate from the electrical grid. While these ripples were historically only a concern for slow-extracted proton beams sent to fixed-target experiments, 2024 revealed a broader impact.

In the SPS, adaptive Bayesian optimisation (ABO) was deployed to control this ripple in real time. ABO extends BO by learning the objective not only as a function of the control parameters, but also as a function of time, which then allows continuous control through forecasting.
The algorithm generated shot-by-shot feed-forward corrections to inject precise counter-noise into the voltage regulation of one of the quadrupole magnet circuits. This approach was already in use for the North Area proton beams, but in summer 2024 it was discovered that even for high-intensity proton beams bound for the LHC, the same ripple could contribute to beam losses at low energy.
Thanks to existing ML frameworks, prior experience with ripple compensation and available hardware for active noise injection, the fix could be implemented quickly. While the gains for protons were modest – around 1% improvement in losses – the impact for LHC ion beams was far more dramatic. Correcting the 50 Hz ripple increased ion transmission by more than 15%. ABO is therefore now active whenever ions are accelerated, improving transmission and supporting the record beam intensity achieved in 2024 (see “SPS intensity” figure).
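In outline, the adaptive variant simply treats time as an additional input to the surrogate model, so that the optimum can be forecast for the upcoming shot and applied feed-forward. The sketch below illustrates this with a single drifting “ripple-compensation amplitude” knob; the toy objective and numbers are assumptions for illustration, not the operational SPS implementation.

```python
# Minimal adaptive-BO sketch: the GP also takes time, enabling shot-by-shot forecasts.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(seed=2)

def beam_loss(amplitude, t):
    """Toy loss: the optimal compensation amplitude drifts slowly with time."""
    return (amplitude - 0.1 * np.sin(0.05 * t)) ** 2 + rng.normal(scale=1e-3)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-4)
X, y = [], []                                   # rows of (amplitude, time) and losses

for shot in range(200):
    t = float(shot)
    if shot < 10:
        amp = rng.uniform(-0.2, 0.2)            # initial exploration
    else:
        gp.fit(np.array(X), np.array(y))
        cand = np.column_stack([np.linspace(-0.2, 0.2, 400), np.full(400, t)])
        mu, sigma = gp.predict(cand, return_std=True)
        # Lower-confidence-bound acquisition, evaluated at the *upcoming* shot time,
        # gives a feed-forward setting before the shot is taken.
        amp = cand[np.argmin(mu - 0.5 * sigma), 0]
    X.append([amp, t])
    y.append(beam_loss(amp, t))

print(f"mean loss over last 50 shots: {np.mean(y[-50:]):.4f}")
```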
4. Predicting hysteresis with transformers
Another outstanding issue in today’s multi-cycling synchrotrons with iron-dominated electromagnets is correcting for magnetic hysteresis – a phenomenon where the magnetic field depends not only on the current but also on its cycling history. Cumbersome mitigation strategies include playing dummy cycles and manually re-tuning parameters after each change in magnetic history.

While phenomenological hysteresis models exist, their accuracy is typically insufficient for precise beam control. ML offers a path forward, especially when supported by high-quality field measurement data. Recent work using temporal fusion transformers – a deep-learning architecture designed for multivariate time-series prediction – has demonstrated that ML-based models can accurately predict field deviations from the programmed transfer function across different SPS magnetic cycles (see “SPS hysteresis” figure). This hysteresis model is now used in the SPS control room to provide feed-forward corrections – pre-emptive adjustments to magnet currents based on the predicted magnetic state – ensuring field stability without waiting for feedback from beam measurements and manual adjustments.
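In much-simplified form, the idea is a sequence model that reads the recent cycle history and predicts the field deviation for the next cycle, which is then subtracted feed-forward from the programmed settings. The sketch below uses a plain transformer encoder as a stand-in for the temporal fusion transformer; the dimensions, features and dummy data are illustrative assumptions.

```python
# Simplified stand-in for the hysteresis predictor: history in, field deviation out.
import torch
import torch.nn as nn

class FieldDeviationPredictor(nn.Module):
    def __init__(self, n_features=2, d_model=32, n_layers=2, horizon=16):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, horizon)    # predicted deviation over next cycle

    def forward(self, history):
        # history: (batch, past_steps, n_features), e.g. programmed current and
        # measured field over preceding cycles - the magnetic "history".
        encoded = self.encoder(self.embed(history))
        return self.head(encoded[:, -1])           # (batch, horizon)

model = FieldDeviationPredictor()
past = torch.randn(8, 128, 2)                      # dummy batch of cycle histories
predicted_deviation = model(past)                  # feed-forward correction = -deviation
print(predicted_deviation.shape)                   # torch.Size([8, 16])
```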
A blueprint for the future
With the Efficient Particle Accelerators project, CERN is developing a blueprint for the next generation of autonomous equipment. This includes concepts for continuous self-analysis, anomaly detection and new layers of “Internet of Things” instrumentation that support auto-configuration and predictive maintenance. The focus is on making it easier to integrate smart software layers. Full results are expected by the end of LHC Run 3, with robust frameworks ready for deployment in Run 4.
AI can now match or outperform expert human tuning, significantly reducing recovery time and improving overall machine availability
The goal is ambitious: to reduce maintenance effort by at least 50% wherever these frameworks are applied. This is based on a realistic assumption – already today, about half of all interventions across the CERN accelerator complex are performed remotely, a number that continues to grow. With current technologies, many of these could be fully automated.
Together, these developments will not only improve the operability and resilience of today’s accelerators, but also lay the foundation for CERN’s future machines, where human intervention during operation may become the exception rather than the rule. AI is set to transform how we design, build and operate accelerators – and how we do science itself. It opens the door to new models of R&D, innovation and deep collaboration with industry.