Fritz A Ferger 1933–2025

Fritz Ferger, a multi-talented engineer who had a significant impact on the technical development and management of CERN, passed away on 22 March 2025.

Born in Reutlingen, Germany, on 5 April 1933, Fritz obtained his electrical engineering degree in Stuttgart and a doctorate at the University of Grenoble. With a contract from General Electric in his pocket, he visited CERN, curious about the 25 GeV Proton Synchrotron, whose construction was receiving its finishing touches in the late 1950s. He met senior CERN staff and was offered a contract that he, impressed by the visit, accepted in early 1959.

Fritz’s first assignment was the development of a radio-frequency (RF) accelerating cavity for a planned fixed-field alternating-gradient (FFAG) accelerator. This was abandoned in early 1960 in favour of the study of a 2 × 25 GeV proton–proton collider, the Intersecting Storage Rings (ISR). As a first step, the CERN Electron Storage and Accumulation Ring (CESAR) was constructed to test high-vacuum technology and RF accumulation schemes; Fritz designed and constructed the RF system. With CESAR in operation, he moved on to the construction and tests of the high-power RF system of the ISR, a project that was approved in 1965.

After the smooth running-in of the ISR, and after a period in charge of the General Engineering Group, he became leader of the ISR Division in 1974, a position he held until 1982. Under his leadership the ISR unfolded its full potential, with proton beam currents up to 50 A and a luminosity 35 times the design value, giving CERN the confidence that colliders were the way to go. Thanks to his foresight, the development of new technologies for the accelerator was encouraged, including superconducting quadrupoles and pumping by cryogenic and getter surfaces. Both were applied on a grand scale in LEP and remain essential for the LHC today.

Under his ISR leadership CERN acquired the confidence that colliders were the way to go

When the resources of the ISR Division were refocussed on LEP in 1983, Fritz became the leader of the Technical Inspection and Safety Commission. This absorbed the activities of the previous health and safety groups, but its main task was to scrutinise the LEP project from all technical and safety aspects. Fritz’s responsibility widened considerably when he became leader of the Technical Support Division in 1986. All of CERN’s civil engineering – the tunnelling for the 27 km-circumference LEP ring and its auxiliary tunnels, the concreting of the enormous caverns for the experiments and the construction of a dozen surface buildings – was in full swing and was brought to a successful conclusion in the following years. New buildings on the Meyrin site were added, including the attractive Building 40 for the large experimental groups, in which he took particular pride. At the same time, and under pressure to reduce expenditure, he had to manage several difficult outsourcing contracts.

When he retired in 1997, he could look back on almost 40 years dedicated to CERN, in which his scientific and technical competence was paired with exceptional organisational and administrative talent. We shall always remember him as an exacting colleague with a wide range of interests, and as a friend, appreciated for his open and helpful attitude.

We grieve his loss and offer our sincere condolences to his widow Catherine and their daughters Sophie and Karina.

The minimalism of many worlds

Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.

But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.

The view from the lab

The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.

Valued measurements

The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?

Dynamical probes

One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)

One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.

Perhaps this was viable in the 1930s, but today measurement devices rely on quantum principles

Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.

Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.

Practice makes perfect

We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, then after the measurement the device is in a similarly weighted superposition of readout values.

Origins

In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.

Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
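The arithmetic behind this is easy to reproduce. Below is a minimal numerical sketch of an idealised, lossless Mach–Zehnder interferometer (all values illustrative): with both paths open, the detector-A probability swings anywhere between 0 and 1 as the relative phase is varied, while blocking either path gives the 50:50 split described above, conditional on the particle being detected at all.

```python
import numpy as np

# Idealised Mach-Zehnder interferometer: a single particle described by two path amplitudes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)          # 50/50 beam splitter

def detector_probs(phase, blocked_path=None):
    """Mod-squared amplitudes at detectors A and B for a given relative phase.
    blocked_path = 0 or 1 removes that path's amplitude (the particle is absorbed there)."""
    amp = BS @ np.array([1.0, 0.0])                       # first beam splitter
    amp = amp * np.array([1.0, np.exp(1j * phase)])       # relative phase on the second path
    if blocked_path is not None:
        amp[blocked_path] = 0.0                           # blocking a path destroys interference
    amp = BS @ amp                                        # recombination at the second beam splitter
    return np.abs(amp) ** 2

# Both paths open: detector A fires with probability anywhere between 0 and 1.
for phase in (0.0, np.pi / 2, np.pi):
    print(f"phase {phase:.2f}: P(A), P(B) =", np.round(detector_probs(phase), 3))

# Either path blocked: conditional on detection, A and B are 50:50 whichever path was open.
for blocked in (0, 1):
    p = detector_probs(0.0, blocked_path=blocked)
    print(f"path {blocked} blocked: P(A), P(B) given detection =", np.round(p / p.sum(), 3))
```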

Unfeasible interference

But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.

And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
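To see the mechanism in miniature, here is a toy numerical sketch, not a model of any real apparatus: a single two-level system is entangled, one by one, with N environment spins, and the off-diagonal, interference-carrying term of its reduced density matrix shrinks roughly exponentially with N. The states and angles are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence_after_entangling(n_env):
    """Magnitude of the off-diagonal term of a qubit's reduced density matrix
    after each of n_env environment spins has weakly 'measured' it.
    Global state: (|0>|E0> + |1>|E1>)/sqrt(2), with |E0>, |E1> products of single-spin states."""
    overlap = 1.0
    for _ in range(n_env):
        # Each environment spin ends up in a slightly different state depending on
        # the system state; the overlap of the two alternatives is < 1.
        theta = rng.uniform(0.2, 0.6)
        e0 = np.array([np.cos(theta), np.sin(theta)])
        e1 = np.array([np.cos(theta), -np.sin(theta)])
        overlap *= e0 @ e1
    # Off-diagonal of the reduced density matrix = overlap / 2; interference visibility ~ |overlap|.
    return abs(overlap) / 2

for n in (1, 5, 20, 100):
    print(f"{n:4d} environment spins -> residual coherence {coherence_after_entangling(n):.2e}")
```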

Emergence and scale

Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.

On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).

Two views of quantum mechanics

Quantum phenomenon | Lab view | Decoherent view
Dynamics | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements | Cannot be treated internal to the formalism | Just one more dynamical interaction
Role of the observer | Conceptually central | Just one more physical system

But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.

Many worlds

At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.

Superpositions are not probabilities

(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)

Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)

Alternative views

Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli (see “Four ways to interpret quantum mechanics”), offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden-variables interpretations modify the formalism, and so fall outside either category.)

The Everett interpretation is just the decoherent view taken fully seriously

Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.

Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.

The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.

Discovering the neutrino sky

Lake Baikal, the Mediterranean Sea and the deep, clean ice at the South Pole: trackers. The atmosphere: a calorimeter. Mountains and even the Moon: targets. These will be the tools of the neutrino astrophysicist in the next two decades. Potentially observable energies dwarf those of the particle physicist doing repeatable experiments, rising up to 1 ZeV (10²¹ eV) for some detector concepts.

The natural accelerators of the neutrino astrophysicist are also humbling. Consider, for instance, the extraordinary relativistic jets emerging from the supermassive black hole in Messier 87 – an accelerator that stretches for about 5000 light years, or roughly 315 million times the distance from the Earth to the Sun.

Alongside gravitational waves, high-energy neutrinos have opened up a new chapter in astronomy. They point to the most extreme events in the cosmos. They can escape from regions where high-energy photons are attenuated by gas and dust, such as NGC 1068, the first steady neutrino emitter to be discovered (see “The neutrino sky” figure). Their energies can rise orders of magnitude above 1 PeV (10¹⁵ eV), where the universe becomes opaque to photons due to pair production with the cosmic microwave background. Unlike charged cosmic rays, they are not deflected by magnetic fields, preserving their original direction.

Breaking into the exascale calls for new thinking

High-energy neutrinos therefore offer a unique window into some of the most profound questions in modern physics. Are there new particles beyond the Standard Model at the highest energies? What acceleration mechanisms allow nature to propel them to such extraordinary energies? And is dark matter implicated in these extreme events? With the observation of a 220 (+570/−110) PeV neutrino confounding the limits set by prior observatories and opening up the era of ultra-high-energy neutrino astronomy (CERN Courier March/April 2025 p7), the time is ripe for a new generation of neutrino detectors on an even grander scale (see “Thinking big” table).

A cubic-kilometre ice cube

Detecting high-energy neutrinos is a serious challenge. Though the neutrino–nucleon cross section increases a little less than linearly with neutrino energy, the flux of cosmic neutrinos drops as the inverse square of the energy or faster, reducing the event rate by nearly an order of magnitude per decade. A cubic-kilometre-scale detector is required to measure cosmic neutrinos beyond 100 TeV, and the Earth starts to become opaque as energies rise beyond a PeV or so, when the odds of a neutrino being absorbed as it passes through the planet become roughly even, depending on the direction of the event.
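To make that scaling concrete, here is a rough back-of-envelope sketch. The spectral index and cross-section exponent below are assumptions chosen for illustration (a differential flux falling as E^(−2.3) and a cross section growing as E^0.36), not measured values; with them, the event rate in each successive energy decade drops by close to an order of magnitude.

```python
import numpy as np
from scipy.integrate import quad

# Assumed scalings, for illustration only:
# differential flux  Phi(E) ~ E**(-gamma)  with gamma >= 2 ("inverse square or faster"),
# cross-section      sigma(E) ~ E**alpha   with alpha < 1  ("a little less than linearly").
gamma, alpha = 2.3, 0.36

def events_per_decade(e_low):
    """Relative number of events in the decade [e_low, 10*e_low] for a fixed exposure."""
    integrand = lambda e: e**(-gamma) * e**alpha
    value, _ = quad(integrand, e_low, 10 * e_low)
    return value

rates = [events_per_decade(10.0**k) for k in range(5)]   # successive decades, arbitrary energy units
for k, r in enumerate(rates):
    print(f"decade {k}: relative event rate {r / rates[0]:.3f}")
```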

Thinking big

The journey of cosmic neutrino detection began off the coast of the Hawaiian Islands in the 1980s, led by John Learned of the University of Hawaii at Mānoa. The DUMAND (Deep Underwater Muon And Neutrino Detector) project sought to use both an array of optical sensors to measure Cherenkov light and acoustic detectors to measure the pressure waves generated by energetic particle cascades in water. It was ultimately cancelled in 1995 due to engineering difficulties related to deep-sea installation, data transmission over long underwater distances and sensor reliability under high pressure.

The next generation of cubic-kilometre-scale neutrino detectors built on DUMAND’s experience. The IceCube Neutrino Observatory has pioneered neutrino astronomy at the South Pole since 2011, probing energies from 10 GeV to 100 PeV, and is now being joined by experiments under construction such as KM3NeT in the Mediterranean Sea, which observed the 220 PeV candidate, and Baikal–GVD in Lake Baikal, the deepest lake on Earth. All three experiments watch for the deep inelastic scattering of high-energy neutrinos, using optical sensors to detect Cherenkov photons emitted by secondary particles.

Exascale from above

A decade of data-taking from IceCube has been fruitful. The Milky Way has been observed in neutrinos for the first time. A neutrino candidate event has been observed that is consistent with the Glashow resonance – the resonant production in the ice of a real W boson by a 6.3 PeV electron–antineutrino – confirming a longstanding prediction from 1960. Neutrino emission has been observed from supermassive black holes in NGC 1068 and TXS 0506+056. A diffuse neutrino flux has been discovered beyond 10 TeV. Neutrino mixing parameters have been measured. And flavour ratios have been constrained: due to the averaging of neutrino oscillations over cosmological distances, significant deviations from a 1:1:1 ratio of electron, muon and tau neutrinos could imply new physics such as the violation of Lorentz invariance, non-standard neutrino interactions or neutrino decay.
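The 6.3 PeV figure for the Glashow resonance follows from elementary kinematics: for an electron antineutrino striking an atomic electron essentially at rest, a real W boson is produced when the centre-of-mass energy equals the W mass, i.e.

$$
E_\nu^{\mathrm{res}} \simeq \frac{m_W^2}{2m_e} = \frac{(80.4\ \mathrm{GeV})^2}{2 \times 0.511\ \mathrm{MeV}} \approx 6.3\ \mathrm{PeV}.
$$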

The sensitivity and global coverage of water-Cherenkov neutrino observatories is set to increase still further. The Pacific Ocean Neutrino Experiment (P-ONE) aims to establish a cubic-kilometre-scale deep-sea neutrino telescope off the coast of Canada; IceCube will expand the volume of its optical array by a factor of eight; and the TRIDENT and HUNT experiments, currently being prototyped in the South China Sea, may offer the largest detector volumes of all. These detectors will improve sky coverage, enhance angular resolution, and increase statistical precision in the study of neutrino sources from 1 TeV to 10 PeV and above.

Breaking into the exascale calls for new thinking.

Into the exascale

Optical Cherenkov detectors have been exceptionally successful in establishing neutrino astronomy. However, the attenuation of optical photons in water and ice limits the horizontal spacing of photodetectors to a few hundred metres at most, constraining the scalability of the technology. To achieve sensitivity to ultra-high energies measured in EeV (10¹⁸ eV), an instrumented area of order 100 km² would be required. Constructing an optical-based detector on such a scale is impractical.

Earth skimming

One solution is to exchange the tracking volume of IceCube and its siblings for a larger detector that uses the atmosphere as a calorimeter: the deposited energy is sampled at the Earth’s surface.

The Pierre Auger Observatory in Argentina epitomises this approach. If IceCube is presently the world’s largest detector by volume, the Pierre Auger Observatory is the world’s largest detector by area. Over an area of 3000 km², 1660 water-Cherenkov detectors and 24 fluorescence telescopes sample the particle showers generated when cosmic rays with energies beyond 10 EeV strike the atmosphere, producing billions of secondary particles. Among the showers it detects are surely events caused by ultra-high-energy neutrinos, but how might they be identified?

Out on a limb

One of the most promising approaches is to filter events based on where the air shower reaches its maximum development in the atmosphere. Cosmic rays tend to interact after traversing much less atmosphere than neutrinos, since the weakly interacting neutrinos have a much smaller cross-section than the hadronically interacting cosmic rays. In some cases, tau neutrinos can even skim the Earth’s atmospheric edge or “limb” as seen from space, interacting to produce a strongly boosted tau lepton that emerges from the rock (unlike an electron) to produce an upward-going air shower when it decays tens of kilometres later – though not so much later (unlike a muon) that it has escaped the atmosphere entirely. This signature is not possible for charged cosmic rays. So far, Auger has detected no neutrino candidate events of either topology, imposing stringent upper limits on the ultra-high-energy neutrino flux that are compatible with limits set by IceCube. The AugerPrime upgrade, soon expected to be fully operational, will equip each surface detector with scintillator panels and improved electronics.

Pole position

Experiments in space are being developed to detect these rare showers with an even larger instrumentation volume. POEMMA (Probe of Extreme Multi-Messenger Astrophysics) is a proposed satellite mission designed to monitor the Earth’s atmosphere from orbit. Two satellites equipped with fluorescence and Cherenkov detectors will search for ultraviolet photons produced by extensive air showers (see “Exascale from above” figure). EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon 2) will test the same detection methods from the vantage point of high-atmosphere balloons. These instruments can help distinguish cosmic rays from neutrinos by identifying shallow showers and up-going events.

Another way to detect ultra-high-energy neutrinos is by using mountains and valleys as natural neutrino targets. This Earth-skimming technique also primarily relies on tau neutrinos, as the tau leptons produced via deep inelastic scattering in the rock can emerge from Earth’s crust and decay within the atmosphere to generate detectable particle showers in the air.

The Giant Radio Array for Neutrino Detection (GRAND) aims to detect radio signals from these tau-induced air showers using a large array of radio antennas spread over thousands of square kilometres (see “Earth skimming” figure). GRAND is planned to be deployed in multiple remote, mountainous locations, with the first site in western China, followed by others in South America and Africa. The Tau Air-Shower Mountain-Based Observatory (TAMBO) has been proposed to be deployed on the face of the Colca Canyon in the Peruvian Andes, where an array of scintillators will detect the electromagnetic signals from tau-induced air showers.

Another proposed strategy that builds upon the Earth-skimming principle is the Trinity experiment, which employs an array of Cherenkov telescopes to observe nearby mountains. Ground-based air Cherenkov detectors are known for their excellent angular resolution, allowing for precise pointing to trace back to the origin of the high-energy primary particles. Trinity is a proposed system of 18 wide-field Cherenkov telescopes optimised for detecting neutrinos in the 10 PeV–1000 PeV energy range from the direction of nearby mountains – an approach validated by experiments such as Ashra–NTA, deployed on Hawaii’s Big Island utilising the natural topography of the Mauna Loa, Mauna Kea and Hualālai volcanoes.

Diffuse neutrino landscape

All these ultra-high-energy experiments detect particle showers as they develop in the atmosphere, whether from above, below or skimming the surface. But “Askaryan” detectors operate deep within the ice of the Earth’s poles, where both the neutrino interaction and detection occur.

In 1962 Soviet physicist Gurgen Askaryan reasoned that electromagnetic showers must build up a net negative charge excess as they develop, due to the Compton scattering of photons off atomic electrons and the ionisation of atoms by charged particles in the shower. As the charged shower propagates faster than the phase velocity of light in the medium, it should emit radiation in a manner analogous to Cherenkov light. However, there are key differences: Cherenkov radiation is typically incoherent and emitted by individual charged particles, while Askaryan radiation is coherent, being produced by a macroscopic buildup of charge, and is significantly stronger at radio frequencies. The Askaryan effect was experimentally confirmed at SLAC in 2001.
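Why coherence matters so much can be seen from a standard scaling argument (a generic estimate, not specific to any detector). At wavelengths long compared with the shower, the fields radiated by the N particles of the net charge excess add in phase, so the radiated power scales as the square of their number, whereas incoherent emission only adds powers:

$$
P_{\mathrm{coh}} \propto \Bigl|\sum_{i=1}^{N} E_i\Bigr|^2 \propto N^2,
\qquad
P_{\mathrm{incoh}} \propto \sum_{i=1}^{N} |E_i|^2 \propto N,
$$

and for ultra-high-energy showers N is enormous, which is what makes the radio-frequency signal so strong.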

Optimised arrays

Because the attenuation length of radio waves is an order of magnitude longer than for optical photons, it becomes feasible to build much sparser arrays of radio antennas to detect the Askaryan signals than the compact optical arrays used in deep ice Cherenkov detectors. Such detectors are optimised to cover thousands of square kilometres, with typical energy thresholds beyond 100 PeV.

The Radio Neutrino Observatory in Greenland (RNO-G) is a next-generation in-ice radio detector currently under construction on the ~3 km-thick ice sheet above central Greenland, operating at frequencies in the 150–700 MHz range. RNO-G will consist of a sparse array of 35 autonomous radio detector stations, each separated by 1.25 km, making it the first large-scale radio neutrino array in the northern hemisphere.

Moon skimming

In the southern hemisphere, the proposed IceCube-Gen2 will complement the aforementioned eightfold expanded optical array with a radio component covering a remarkable 500 km². The cold Antarctic ice provides an optimal medium for radio detection, with radio attenuation lengths of roughly 2 km facilitating cost-efficient instrumentation of the large volumes needed to measure the low ultra-high-energy neutrino flux. The radio array will combine in-ice omnidirectional antennas 150 m below the surface with high-gain antennas at a depth of 15 m and upward-facing antennas on the surface to veto the cosmic-ray background.

The IceCube-Gen2 radio array will have the sensitivity to probe features of the spectrum of astrophysical neutrinos beyond the PeV scale, addressing the tension between the upper limits from Auger and IceCube and KM3NeT’s 220 (+570/−110) PeV neutrino candidate – the sole ultra-high-energy neutrino yet observed. Extrapolating the isotropic, diffuse flux implied by this event, IceCube should have detected 75 events in the 72–2600 PeV energy range over its operational period. However, no events have been observed above 70 PeV.

Perhaps the most ambitious way to observe ultra-high-energy neutrinos is to use the Moon as a target

If the detected KM3NeT event has a neutrino energy of around 100 PeV, it could originate from the same astrophysical sources responsible for accelerating ultra-high-energy cosmic rays. In this case, interactions between accelerated protons and ambient photons from starlight or synchrotron radiation would produce pions that decay into ultra-high-energy neutrinos. Alternatively, if its true energy is closer to 1 EeV, it is more likely cosmogenic: arising from the Greisen–Zatsepin–Kuzmin process, in which ultra-high-energy cosmic rays interact with cosmic microwave background photons, producing a Δ-resonance that decays into pions and ultimately neutrinos. IceCube-Gen2 will resolve the spectral shape from PeV to 10 EeV and differentiate between these two possible production mechanisms (see “Diffuse neutrino landscape” figure).

Moonshots

Remarkably, the Radar Echo Telescope (RET) is exploring the use of radar to actively probe the ice for transient signals. Unlike Askaryan-based detectors, which passively listen for radio pulses generated by charge imbalances in particle cascades, RET’s concept is to beam a radar signal and watch for reflections off the ionisation caused by particle showers. SLAC’s T576 experiment demonstrated the concept in the lab in 2022 by observing a radar echo from a beam of high-energy electrons scattering off a plastic target. RET has now been deployed in Greenland, where it seeks echoes from down-going cosmic rays as a proof of concept.

Full-sky coverage

Perhaps the most ambitious way to observe ultra-high-energy neutrinos is to use the Moon as a target. When neutrinos with energies above 100 EeV interact near the rim of the Moon, they can induce particle cascades that generate coherent Askaryan radio emission which could be detectable on Earth (see “Moon skimming” figure). Observations could be conducted from Earth-based radio telescopes or from satellites orbiting the Moon to improve detection sensitivity. Lunar Askaryan detectors could potentially be sensitive to neutrinos up to 1 ZeV (10²¹ eV). No confirmed detections have been reported so far.

Neutrino network

Proposed neutrino observatories are distributed across the globe – a necessary requirement for full sky coverage, given the Earth is not transparent to ultra-high-energy neutrinos (see “Full-sky coverage” figure). A network of neutrino telescopes ensures that transient astrophysical events can always be observed as the Earth rotates. This is particularly important for time-domain multi-messenger astronomy, enabling coordinated observations with gravitational wave detectors and electromagnetic counterparts. The ability to track neutrino signals in real time will be key to identifying the most extreme cosmic accelerators and probing fundamental physics at ultra-high energies.

Accelerators on autopilot

The James Webb Space Telescope and the LHC

Particle accelerators can be surprisingly temperamental machines. Expertise, specialisation and experience are needed to maintain their performance. Nonlinear and resonant effects keep accelerator engineers and physicists up late into the night. With so many variables to juggle and fine-tune, even the most seasoned experts will be stretched by future colliders. Can artificial intelligence (AI) help?

Proposed solutions take inspiration from space telescopes. The two fields have been jockeying to innovate since the Hubble Space Telescope launched with minimal automation in 1990. In the 2000s, multiple space missions tested AI for fault detection and onboard decision-making, before the LHC took a notable step forward for colliders in the 2010s by incorporating machine learning (ML) in trigger decisions. Most recently, the James Webb Space Telescope launched in 2021 using AI-driven autonomous control systems for mirror alignment, thermal balancing and scheduling science operations with minimal intervention from the ground. The new Efficient Particle Accelerators project at CERN, which I have led since its approval in 2023, is now rolling out AI at scale across CERN’s accelerator complex (see “Dynamic and adaptive” image).

AI-driven automation will only become more necessary in the future. As well as being unprecedented in size and complexity, future accelerators will also have to navigate new constraints such as fluctuating energy availability from intermittent sources like wind and solar power, requiring highly adaptive and dynamic machine operation. This would represent a step change in complexity and scale. A new equipment-integration paradigm would automate accelerator operation, equipment maintenance, fault analysis and recovery. Every item of equipment will need to be fully digitalised and able to auto-configure, auto-stabilise, auto-analyse and auto-recover. As in a driverless car, instrumentation and software layers must also be added for safe and efficient performance.

On-site human intervention at the LHC could be treated as a last resort – or perhaps designed out entirely

The final consideration is full virtualisation. Space telescopes are famously inaccessible once deployed, and a machine like the Future Circular Collider (FCC) would present similar challenges. Given the scale and number of components, on-site human intervention should be treated as a last resort – or perhaps designed out entirely. This requires a new approach: equipment must be engineered for autonomy from the outset – with built-in margins, high reliability, modular designs and redundancy. Emerging technologies like robotic inspection, automated recovery systems and digital twins will play a central role in enabling this. A digital twin – a real-time, data-driven virtual replica of the accelerator – can be used to train and constrain control algorithms, test scenarios safely and support predictive diagnostics. Combined with differentiable simulations and layered instrumentation, these tools will make autonomous operation not just feasible, but optimal.

The field is moving fast. Recent advances allow us to rethink how humans interact with complex machines – not by tweaking hardware parameters, but by expressing intent at a higher level. Generative pre-trained transformers, a class of large language models, open the door to prompting machines with concepts rather than step-by-step instructions. While further R&D is needed for robust AI copilots, tailor-made ML models have already become standard tools for parameter optimisation, virtual diagnostics and anomaly detection across CERN’s accelerator landscape.

Progress is diverse. AI can reconstruct LHC bunch profiles using signals from wall current monitors, analyse camera images to spot anomalies in the “dump kickers” that safely remove beams, or even identify malfunctioning beam-position monitors. In the following, I identify four different types of AI that have been successfully deployed across CERN’s accelerator complex. They are merely the harbingers of a whole new way of operating CERN’s accelerators.

1. Beam steering with reinforcement learning

In 2020, LINAC4 became the new first link in the LHC’s modernised proton accelerator chain – and it quickly provided an early success story for AI-assisted control in particle accelerators.

Small deviations in a particle beam’s path within the vacuum chamber can have a significant impact, including beam loss, equipment damage or degraded beam quality. Beams must stay precisely centred in the beampipe to maintain stability and efficiency. But their trajectory is sensitive to small variations in magnet strength, temperature, radiofrequency phase and even ground vibrations. Worse still, errors typically accumulate along the accelerator, compounding the problem. Beam-position monitors (BPMs) provide measurements at discrete points – often noisy – while steering corrections are applied via small dipole corrector magnets, typically using model-based correction algorithms.

Beam steering

In 2019, the reinforcement learning (RL) algorithm normalised advantage function (NAF) was trained online to steer the H⁻ beam in the horizontal plane of LINAC4 during commissioning. In RL, an agent learns by interacting with its environment and receiving rewards that guide it toward better decisions. NAF uses a neural network to model the so-called Q-function, which estimates the expected future reward of each action, and uses this to continuously refine its control policy.

Initially, the algorithm required many attempts to find an effective strategy, and in early iterations it occasionally worsened the beam trajectory, but as training progressed, performance improved rapidly. Eventually, the agent achieved a final trajectory better than the target of 1 mm RMS (see “Beam steering” figure).

This experiment demonstrated that RL can learn effective control policies for accelerator-physics problems within a reasonable amount of time. The agent was fully trained after about 300 iterations, or 30 minutes of beam time, making online training feasible. Since 2019, the use of AI techniques has expanded significantly across accelerator labs worldwide, targeting more and more problems that don’t have any classical solution. At CERN, tools such as GeOFF (Generic Optimisation Framework and Front­end) have been developed to standardise and scale these approaches throughout the accelerator complex.
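As an illustration of the general idea, the sketch below sets up a toy steering problem and improves a corrector policy by reinforcement: it is not the NAF implementation used on LINAC4, and the response matrix, noise levels and step sizes are all invented. The agent perturbs its corrector settings, scores the resulting trajectory, and uses the perturbations to estimate a policy gradient (a simple evolution-strategies-style update).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy steering environment (all numbers invented): corrector kicks map linearly,
# via a response matrix R, onto BPM readings, on top of a static error orbit and noise.
n_bpm, n_corr = 10, 5
R = rng.normal(size=(n_bpm, n_corr))
error_orbit = rng.normal(scale=2.0, size=n_bpm)

def read_bpms(kicks):
    return error_orbit + R @ kicks + rng.normal(scale=0.05, size=n_bpm)

def reward(kicks):
    # Negative RMS of the measured trajectory: the agent wants to centre the beam.
    return -np.sqrt(np.mean(read_bpms(kicks) ** 2))

kicks = np.zeros(n_corr)          # corrector settings the agent is learning
step, sigma = 0.1, 0.05           # learning rate and exploration amplitude
for it in range(301):
    noise = rng.normal(scale=sigma, size=n_corr)
    r_plus, r_minus = reward(kicks + noise), reward(kicks - noise)
    # Antithetic finite-difference estimate of the policy gradient.
    grad_estimate = (r_plus - r_minus) / (2 * sigma**2) * noise
    kicks += step * grad_estimate
    if it % 100 == 0:
        print(f"iteration {it:3d}: trajectory RMS = {-reward(kicks):.3f} (arbitrary units)")
```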

2. Efficient injection with Bayesian optimisation

Bayesian optimisation (BO) is a global optimisation technique that uses a probabilistic model to find the optimal parameters of a system by balancing exploration and exploitation, making it ideal for expensive or noisy evaluations. A game-changing example of its use is the record-breaking LHC ion run in 2024. BO was extensively used all along the ion chain, and made a significant difference in LEIR (the low-energy ion ring, the first synchrotron in the chain) and in the Super Proton Synchrotron (SPS, the last accelerator before the LHC). In LEIR, most processes are no longer manually optimised, but the multi-turn injection process is still non-trivial and depends on various longitudinal and transverse parameters from its injector LINAC3.
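The core loop of Bayesian optimisation is compact enough to sketch. The example below tunes a single made-up “knob” against a noisy, invented response curve using a Gaussian-process surrogate and an expected-improvement acquisition function; the operational tools at CERN tune tens of parameters and add physics constraints on top of this basic recipe.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)

def measure(x):
    # Stand-in for a noisy machine observable, e.g. accumulated intensity vs one setting.
    return float(np.exp(-(x - 0.6) ** 2 / 0.02) + rng.normal(scale=0.02))

X = np.linspace(0, 1, 200).reshape(-1, 1)             # candidate settings of the knob
x_obs = list(rng.uniform(0, 1, 3))                    # a few random initial measurements
y_obs = [measure(x) for x in x_obs]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True)

for _ in range(15):
    gp.fit(np.array(x_obs).reshape(-1, 1), np.array(y_obs))
    mu, sigma = gp.predict(X, return_std=True)
    best = max(y_obs)
    # Expected improvement: balances exploring uncertain settings vs exploiting good ones.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = float(X[np.argmax(ei), 0])
    x_obs.append(x_next)
    y_obs.append(measure(x_next))

print(f"best setting found: {x_obs[int(np.argmax(y_obs))]:.3f} with value {max(y_obs):.3f}")
```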

Quick recovery

In heavy-ion accelerators, particles are injected in a partially stripped charge state and must be converted to higher charge states at different stages for efficient acceleration. In the LHC ion injector chain, the stripping foil between LINAC3 and LEIR raises the charge of the lead ions from Pb²⁷⁺ to Pb⁵⁴⁺. A second stripping foil, between the PS and SPS, fully ionises the beam to Pb⁸²⁺ ions for final acceleration toward the LHC. These foils degrade over time due to thermal stress, radiation damage and sputtering, and must be remotely exchanged using a rotating wheel mechanism. Because each new foil has slightly different stripping efficiency and scattering properties, beam transmission must be re-optimised – a task that traditionally required expert manual tuning.

In 2024 it was successfully demonstrated that BO with embedded physics constraints can efficiently optimise the 21 most important parameters between LEIR and the LINAC3 injector. Following a stripping foil exchange, the algorithm restored the accumulated beam intensity in LEIR to better than nominal levels within just a few dozen iterations (see “Quick recovery” figure).

This example shows how AI can now match or outperform expert human tuning, significantly reducing recovery time, freeing up operator bandwidth and improving overall machine availability.

3. Adaptively correcting the 50 Hz ripple

In high-precision accelerator systems, even tiny perturbations can have significant effects. One such disturbance is the 50 Hz ripple in power supplies – small periodic fluctuations in current that originate from the electrical grid. While these ripples were historically only a concern for slow-extracted proton beams sent to fixed-target experiments, 2024 revealed a broader impact.

SPS intensity

In the SPS, adaptive Bayesian optimisation (ABO) was deployed to control this ripple in real time. ABO extends BO by learning the objective not only as a function of the control parameters, but also as a function of time, which then allows continuous control through forecasting.

The algorithm generated shot-by-shot feed-forward corrections to inject precise counter-noise into the voltage regulation of one of the quadrupole magnet circuits. This approach was already in use for the North Area proton beams, but in summer 2024 it was discovered that even for high-intensity proton beams bound for the LHC, the same ripple could contribute to beam losses at low energy.

Thanks to existing ML frameworks, prior experience with ripple compensation and available hardware for active noise injection, the fix could be implemented quickly. While the gains for protons were modest – around 1% improvement in losses – the impact for LHC ion beams was far more dramatic. Correcting the 50 Hz ripple increased ion transmission by more than 15%. ABO is therefore now active whenever ions are accelerated, improving transmission and supporting the record beam intensity achieved in 2024 (see “SPS intensity” figure).
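The core of the correction is easy to illustrate. The sketch below (all signals synthetic) estimates the amplitude and phase of a 50 Hz component in a measured signal by lock-in projection and injects counter-noise in antiphase; the deployed system wraps this kind of feed-forward in adaptive Bayesian optimisation so that the correction tracks slow drifts shot by shot.

```python
import numpy as np

rng = np.random.default_rng(3)
f_grid, fs = 50.0, 10_000.0                      # ripple frequency (Hz) and sampling rate
t = np.arange(0, 1.0, 1 / fs)                    # one second of data (an integer number of cycles)

def ripple_amplitude(x):
    # Lock-in estimate of the 50 Hz content: project onto sin and cos at 50 Hz.
    s = 2 * np.mean(x * np.sin(2 * np.pi * f_grid * t))
    c = 2 * np.mean(x * np.cos(2 * np.pi * f_grid * t))
    return np.hypot(s, c), np.arctan2(c, s)

# Synthetic measurement carrying an unknown 50 Hz ripple plus broadband noise.
true_amp, true_phase = 0.8, 1.1
signal = true_amp * np.sin(2 * np.pi * f_grid * t + true_phase) + rng.normal(scale=0.3, size=t.size)

amp_est, phase_est = ripple_amplitude(signal)
# Feed-forward correction: inject counter-noise in antiphase into the regulation.
correction = -amp_est * np.sin(2 * np.pi * f_grid * t + phase_est)
residual_amp, _ = ripple_amplitude(signal + correction)

print(f"estimated ripple: amplitude {amp_est:.2f} (true {true_amp}), phase {phase_est:.2f} (true {true_phase})")
print(f"50 Hz amplitude before correction: {amp_est:.2f}, after: {residual_amp:.2f}")
```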

4. Predicting hysteresis with transformers

Another outstanding issue in today’s multi-cycling synchrotrons with iron-dominated electromagnets is correcting for magnetic hysteresis – a phenomenon where the magnetic field depends not only on the current but also on its cycling history. Cumbersome mitigation strategies include playing dummy cycles and manually re-tuning parameters after each change in magnetic history.

SPS hysteresis

While phenomenological hysteresis models exist, their accuracy is typically insufficient for precise beam control. ML offers a path forward, especially when supported by high-quality field measurement data. Recent work using temporal fusion transformers – a deep-learning architecture designed for multivariate time-series prediction – has demonstrated that ML-based models can accurately predict field deviations from the programmed transfer function across different SPS magnetic cycles (see “SPS hysteresis” figure). This hysteresis model is now used in the SPS control room to provide feed-forward corrections – pre-emptive adjustments to magnet currents based on the predicted magnetic state – ensuring field stability without waiting for feedback from beam measurements and manual adjustments.
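The framing of the problem can be sketched without the transformer itself: treat the field error as a time series to be regressed on the recent current history, then turn the prediction into a feed-forward trim. Below, a gradient-boosted regressor stands in for the temporal fusion transformer, and the “hysteresis” in the synthetic data is just a crude memory term; the model, data and transfer-function value are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in for magnet data: a programmed current cycle plus a
# history-dependent field error (a crude memory term mimicking hysteresis).
n = 5000
current = 1000 + 500 * np.sin(np.linspace(0, 40 * np.pi, n)) + rng.normal(scale=5, size=n)
kernel = np.exp(-np.arange(200) / 50.0)
memory = np.convolve(np.diff(current, prepend=current[0]), kernel, mode="full")[:n]
field_error = 1e-4 * memory + rng.normal(scale=1e-3, size=n)   # deviation from the transfer function

# Features: the present current and its recent history (lagged samples) -> predict the field error.
lags = 64
X = np.stack([current[lags - k: n - k] for k in range(lags)], axis=1)
y = field_error[lags:]
split = int(0.8 * len(y))
model = HistGradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
rms = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"prediction RMS error {rms:.2e} vs field-error RMS {np.std(y[split:]):.2e}")

# Feed-forward use: trim the programmed current so that the predicted error cancels,
# given an assumed nominal transfer function dB/dI (invented value).
dB_dI = 1e-3
current_trim = -pred / dB_dI
```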

A blueprint for the future

With the Efficient Particle Accelerators project, CERN is developing a blueprint for the next generation of autonomous equipment. This includes concepts for continuous self-analysis, anomaly detection and new layers of “Internet of Things” instrumentation that support auto-configuration and predictive maintenance. The focus is on making it easier to integrate smart software layers. Full results are expected by the end of LHC Run 3, with robust frameworks ready for deployment in Run 4.

AI can now match or outperform expert human tuning, significantly reducing recovery time and improving overall machine availability

The goal is ambitious: to reduce maintenance effort by at least 50% wherever these frameworks are applied. This is based on a realistic assumption – already today, about half of all interventions across the CERN accelerator complex are performed remotely, a number that continues to grow. With current technologies, many of these could be fully automated.

Together, these developments will not only improve the operability and resilience of today’s accelerators, but also lay the foundation for CERN’s future machines, where human intervention during operation may become the exception rather than the rule. AI is set to transform how we design, build and operate accelerators – and how we do science itself. It opens the door to new models of R&D, innovation and deep collaboration with industry. 

Powering into the future

The Higgs boson is the most intriguing and unusual object yet discovered by fundamental science. There is no higher experimental priority for particle physics than building an electron–positron collider to produce it copiously and study it precisely. Given the importance of energy efficiency and cost effectiveness in the current geopolitical context, this gives unique strategic importance to developing a humble technology called the klystron – a technology that will consume the majority of site power at every major electron–positron collider under consideration, but which has historically only achieved 60% energy efficiency.

The klystron was invented in 1937 by two American brothers, Russell and Sigurd Varian. The Varians wanted to improve aircraft radar systems. At the time, there was a growing need for better high-frequency amplification to detect objects at a distance using radar, a critical technology in the lead-up to World War II.

The Varians’ RF source operated around 3.2 GHz, or a wavelength of about 9.4 cm, in the microwave region of the electromagnetic spectrum. At the time, this was an extraordinarily high frequency – conventional vacuum tubes struggled beyond 300 MHz. Microwave wavelengths promised better resolution, less noise, and the ability to penetrate rain and fog. Crucially, antennas could be small enough to fit on ships and planes. But the source was far too weak for radar.

Klystrons are ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories

The Varians’ genius was to invent a way to amplify the electromagnetic signal by up to 30 dB, or a factor of 1000. The US and British military used the klystron for airborne radar, submarine detection of U-boats in the Atlantic and naval gun targeting beyond visual range. Radar helped win the Battle of Britain, the Battle of the Atlantic and Pacific naval battles, making surprise attacks harder by giving advance warning. Winston Churchill called radar “the secret weapon of WWII”, and the klystron was one of its enabling technologies.

With its high gain and narrow bandwidth, the klystron was the first practical microwave amplifier and became foundational in radio-frequency (RF) technology. This was the first time anyone had efficiently amplified microwaves with stability and directionality. Klystrons have since been used in satellite communication, broadcasting and particle accelerators, where they power the resonant RF cavities that accelerate the beams. Klystrons are therefore ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories, which are central to the future of high-energy physics.

Klystrons and the Higgs

Hadron colliders like the LHC tend to be circular. Their fundamental energy limit is given by the maximum strength of the bending magnets and the circumference of the tunnel. A handful of RF cavities repeatedly accelerate beams of protons or ions after hundreds or thousands of bending magnets force the beams to loop back through them.

Operating principle

Thanks to their clean and precisely controllable collisions, all Higgs factories under consideration are electron–positron colliders. Electron–positron colliders can be either circular or linear in construction. The dynamics of circular electron–positron colliders are radically different from those of hadron machines, as the particles are 2000 times lighter than protons. The strength required from the bending magnets is relatively low for any practical circumference; however, the energy of the particles must be continually replenished, as they radiate away energy in the bends through synchrotron radiation, requiring hundreds of RF cavities. RF cavities are equally important in the linear case. Here, all the energy must be imparted in a single pass, with each cavity accelerating the beam only once, requiring hundreds or even thousands of RF cavities.

Either way, 50 to 60% of the total energy consumed by an electron–positron collider is used for RF acceleration, compared to a relatively small fraction in a hadron collider. Efficiently powering the RF cavities is of paramount importance to the energy efficiency and cost effectiveness of the facility as a whole. RF acceleration is therefore of far greater significance at electron–positron colliders than at hadron colliders.

From a pen to a mid-size car

RF cavities cannot simply be plugged into the wall. These finely tuned resonant structures must be excited by RF power – an alternating microwave electromagnetic field that is supplied through waveguides at the appropriate frequency. Due to the geometry of resonant cavities, this excites an on-axis oscillating electric field. Particles that arrive when the electric field has the right direction are accelerated. For this reason, particles in an accelerator travel in bunches, separated by long gaps during which the RF field is not suitable for acceleration.

CLIC klystron

Despite the development of modern solid-state amplifiers, the Varians’ klystron is still the most practical technology for generating RF when the power required is at the MW level. Klystrons can be as small as a pen or as large and heavy as a mid-size car, depending on the frequency and power required. Linear colliders use higher frequencies, which come with higher accelerating gradients and make the linac shorter, whereas a circular collider does not need high gradients, as the energy to be delivered each turn is smaller.

Klystrons fall under the general classification of vacuum tubes – fully enclosed miniature electron accelerators with their own source, accelerating path and “interaction region” where the RF field is produced. Their name is derived from the Greek verb describing the action of waves crashing against the seashore. In a klystron, RF power is generated when electrons crash against a decelerating electric field.

Every klystron contains at least two cavities: an input and an output. The input cavity is powered by a weak RF source that must be amplified. The output cavity delivers the strongly amplified RF signal produced by the klystron. All this comes encapsulated in an ultra-high-vacuum volume inside the field of a focusing solenoid (see “Operating principle” figure).

Thanks to the efforts made in recent years, high-efficiency klystrons are now approaching the ultimate theoretical limit

Inside the klystron, electrons leave a heated cathode and are accelerated by a high voltage applied between the cathode and the anode. As they are being pushed forward, a small input RF signal is applied to the input cavity, either accelerating or decelerating the electrons according to their time of arrival. After a long drift, late-emitted accelerated electrons catch up with early-emitted decelerated electrons, intersecting with those that did not see any net accelerating force. This is called velocity bunching.

A second, passive accelerating cavity is placed at the location where maximum bunching occurs. Though of a comparable design, this cavity behaves in an inverse fashion to those used in particle accelerators. Rather than converting the energy of an electromagnetic field into the kinetic energy of particles, the kinetic energy of particles is converted into RF electromagnetic waves. This process can be enhanced by the presence of other passive cavities in between the already mentioned two, as well as by several iterations of bunching and de-bunching before reaching the output cavity. Once decelerated, the spent beam finishes its life in a dump or a water-cooled collector.

Optimising efficiency

Klystrons are ultimately RF amplifiers with a very high gain of the order of 30 to 60 dB and a very narrow bandwidth. They can be built at any frequency from a few hundred MHz to tens of GHz, but each operates within a very small range of frequencies called the bandwidth. After broadcasting moved to vacuum tubes with wider bandwidth, particle accelerators were left as a small market for high-power klystrons. Most klystrons for science are manufactured by a handful of companies which offer a limited number of models that have been in operation for decades. Their frequency, power and duty cycle may not correspond to the specifications of a new accelerator being considered – and in most cases, little or no thought has been given to energy efficiency or carbon footprint.

Battling space charge

When searching for suitable solutions for the next particle-physics collider, however, optimising the energy efficiency of klystrons and the other devices that will determine the final energy bill and CO2 emissions is of the utmost importance. Nearly a decade ago, therefore, RF experts at CERN and the University of Lancaster began the High-Efficiency Klystron (HEK) project to maximise beam-to-RF efficiency: the fraction of the power carried by the klystron’s electron beam that is converted into RF power by the output cavity.
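In symbols (an illustrative definition consistent with the text, with V and I the gun voltage and beam current):

```latex
\eta_{\mathrm{beam\to RF}} \;=\; \frac{P_{\mathrm{RF,\,out}}}{P_{\mathrm{beam}}}
\;=\; \frac{P_{\mathrm{RF,\,out}}}{V\,I}.
```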

The complexity of klystrons resides in the highly nonlinear fields to which the electrons are subjected. In the cathode region and the first stages of electrostatic acceleration, the collective “space-charge” forces between the electrons determine the strongly nonlinear dynamics of the beam. The same is true as the bunching tightens along the tube, with mutual repulsion between the electrons preventing optimal bunching at the output cavity.

For this reason, klystron design is not amenable to simple analytical calculation. Since 2017, CERN has developed a code called KlyC that simulates the beam along the klystron channel and optimises parameters such as frequency and the distances between cavities 100 to 1000 times faster than commercial 3D codes. KlyC is available in the public domain and is used by an ever-growing list of labs and industrial partners.

Perveance

The main characteristic of a klystron is an obscure quantity inherited from electron-gun design called perveance. For small perveances, space-charge forces are weak – because the energy is high or the current is low – and bunching is easy. For large perveances, space-charge forces oppose bunching, lowering the beam-to-RF efficiency. High-power klystrons require large currents and therefore high perveances. One way to produce highly efficient, high-power klystrons is therefore to generate several low-perveance electron beams from multiple cathodes in a “multi-beam” (MB) klystron.
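For reference (the standard electron-gun definition, with illustrative numbers not taken from the article), perveance relates the gun current to the accelerating voltage:

```latex
K \;=\; \frac{I}{V^{3/2}},
```

so a 50 A beam at 100 kV, for example, corresponds to K ≈ 1.6 × 10⁻⁶ A V⁻³ᐟ² (1.6 microperveance); splitting the same current between many cathodes in a multi-beam tube lowers the perveance of each individual beam.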

High-luminosity gains

Overall, there is an almost linear dependence of efficiency on perveance. Thanks to the efforts made in recent years, high-efficiency klystrons now outperform industrial klystrons by around 10 percentage points in efficiency at all values of perveance, and are approaching the ultimate theoretical limit (see “Battling space charge” figure).

One of the first designs to be brought to life was based on the E37113, a pulsed klystron with 6 MW peak power working in the X-band at 12 GHz, commercialised by CANON ETD. This klystron is currently used in CERN’s test facility for validating CLIC RF prototypes, which could greatly benefit from higher power. As part of a collaboration with CERN, CANON ETD built a new tube to the design optimised at CERN, reaching a beam-to-RF efficiency of 57% instead of the original 42% (see “CLIC klystron” image and CERN Courier September/October 2022 p9).

As its interfaces with the high-voltage (HV) source and solenoid were kept identical, the test facility can now draw 8 MW of RF power for the same energy consumption as before. And because changes to the tube channel represent only a small fraction of the cost of manufacturing the instrument, its price should not increase considerably, even if more accurate production methods are required.
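A quick consistency check of these numbers (assuming, for illustration, that the beam power is unchanged):

```latex
P_{\mathrm{beam}} \approx \frac{6\ \mathrm{MW}}{0.42} \approx 14.3\ \mathrm{MW},
\qquad
0.57 \times 14.3\ \mathrm{MW} \approx 8.1\ \mathrm{MW},
```

consistent with the quoted jump from 6 MW to about 8 MW of RF output for the same electrical input.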

In pursuit of power

Towards an FCC klystron

Another successful example of redesigning a tube for high efficiency is the TH2167 – the klystron behind the LHC, manufactured by Thales. Originally exhibiting a beam-to-RF efficiency of 60%, it was redesigned by the CERN team to gain 10 percentage points and reach 70%, again using the same HV source and solenoid. The prototype tube has been built and is currently at CERN, where it has demonstrated the capacity to generate 350 kW of RF power with the same input energy as previously required to produce 300 kW. This extra power will be decisive when dealing with the higher-intensity beam expected after the LHC luminosity upgrade – and all this, again, for a price comparable to previous models (see “High-luminosity gains” image).

The quest for the highest efficiency is not over yet. The CERN team is currently working on a design that could power the proposed Future Circular Collider (FCC). With about a hundred accelerating cavities, the electron and positron beams would need to be replenished with 100 MW of RF power, so energy efficiency is imperative.

The quest for the highest efficiency is not over yet

Although the same tube used for the LHC, now boosted to 70% efficiency, could be used to power the FCC, CERN is working towards a vacuum tube that could reach an efficiency above 80%. A two-stage multi-beam klystron was initially designed that is capable of reaching 86% efficiency and generating 1 MW of continuous-wave power (see “Towards an FCC klystron” figure).
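To gauge what is at stake (an illustrative estimate that ignores all other losses in the RF chain), the electrical beam power needed to deliver 100 MW of RF scales inversely with the beam-to-RF efficiency:

```latex
\frac{100\ \mathrm{MW}}{0.70} \approx 143\ \mathrm{MW}
\qquad\text{versus}\qquad
\frac{100\ \mathrm{MW}}{0.86} \approx 116\ \mathrm{MW},
```

a saving of roughly 27 MW of continuous power for the collider’s RF system alone.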

Motivated by recent changes in the FCC parameters, we have rediscovered an old device called the tristron, which is not a conventional klystron but a “gridded tube” in which the electron-beam bunching mechanism is different. Tristrons have a lower power gain but much greater flexibility, and simulations have confirmed that they can reach efficiencies as high as 90%. This could be a disruptive technology with applications well beyond accelerators, and manufacturing a prototype is an excellent opportunity for knowledge transfer from fundamental research to industrial applications.

Charting DESY’s future

How would you describe DESY’s scientific culture?

DESY is a large laboratory with just over 3000 employees. It was founded 65 years ago as an accelerator lab, and at its heart it remains one, though what we do with the accelerators has evolved over time. It is fully funded by Germany.

In particle physics, DESY has performed many important studies, for example to understand the charm quark following the November Revolution of 1974. The gluon was discovered here in the late 1970s. In the 1980s, DESY ran the first experiments to study B mesons, laying the groundwork for core programmes such as LHCb at CERN and the Belle II experiment in Japan. In the 1990s, the HERA accelerator focused on probing the structure of the proton, which, incidentally, was the subject of my PhD, and those results have been crucial for precision studies of the Higgs boson.

Over time, DESY has become much more than an accelerator and particle-physics lab. Even in the early days, it used what is called synchrotron radiation, the light emitted when electrons change direction in the accelerator. This light is incredibly useful for studying matter in detail. Today, our accelerators are used primarily for this purpose: they generate X-rays that image tiny structures, for example viruses.

DESY’s culture is shaped by its very engaged and loyal workforce. People often call themselves “DESYians” and strongly identify with the laboratory. At its heart, DESY is really an engineering lab. You need an amazing engineering workforce to be able to construct and operate these accelerators.

Which of DESY’s scientific achievements are you most proud of?

The discovery of the gluon is, of course, an incredible achievement, but actually I would say that DESY’s greatest accomplishment has been building so many cutting-edge accelerators: delivering them on time, within budget, and getting them to work as intended.

Take the PETRA accelerator, for example – an entirely new concept when it was first proposed in the 1970s. The decision to build it was made in 1975; construction was completed by 1978; and by 1979 the gluon was discovered. So in just four years, we went from approving a 2.3 km accelerator to making a fundamental discovery, something that is absolutely crucial to our understanding of the universe. That’s something I’m extremely proud of.

I’m also very proud of the European X-ray Free-Electron Laser (XFEL), completed in 2017 and now fully operational. Before that, in 2005 we launched the world’s first free-electron laser, FLASH, and of course in the 1990s HERA, another pioneering machine. Again and again, DESY has succeeded in building large, novel and highly valuable accelerators that have pushed the boundaries of science.

What can we look forward to during your time as chair?

We are currently working on 10 major projects in the next three years alone! PETRA III will be running until the end of 2029, but our goal is to move forward with PETRA IV, the world’s most advanced X-ray source. Securing funding for that first, and then building it, is one of my main objectives. In Germany, there’s a roadmap process, and by July this year we’ll know whether an independent committee has judged PETRA IV to be one of the highest-priority science projects in the country. If all goes well, we aim to begin operating PETRA IV in 2032.

Our FLASH soft X-ray facility is also being upgraded to improve beam quality, and we plan to relaunch it in early September. That will allow us to serve more users and increase its impact.

In parallel, we’re contributing significantly to the HL-LHC upgrade. More than 100 people at DESY are working on building trackers for the ATLAS and CMS detectors, and parts of the forward calorimeter of CMS. That work needs to be completed by 2028.

Hunting axions

Astroparticle physics is another growing area for us. Over the next three years we’re completing telescopes for the Cherenkov Telescope Array and building detectors for the IceCube upgrade. For the first time, DESY is also constructing a space camera for the satellite UltraSat, which is expected to launch within the next three years.

At the Hamburg site, DESY is diving further into axion research. We’re currently running the ALPS II experiment, which has a fascinating “light shining through a wall” setup. Normally, of course, light can’t pass through something like a thick concrete wall. But in ALPS II, light inside a magnet can convert into an axion, a hypothetical dark-matter particle that can travel through matter almost unhindered. On the other side, another magnet converts the axion back into light. So, it appears as if the light has passed through the wall, when in fact it was briefly an axion. We started the experiment last year. As with most experiments, we began carefully, because not everything works at once, but two more major upgrades are planned in the next two years, and that’s when we expect ALPS II to reach its full scientific potential.

We’re also developing additional axion experiments. One of them, in collaboration with CERN, is called BabyIAXO. It’s designed to look for axions from the Sun, where you have both light and magnetic fields. We hope to start construction before the end of the decade.

Finally, DESY also has a strong and diverse theory group. Their work spans many areas, and it’s exciting to see what ideas will emerge from them over the coming years.

How does DESY collaborate with industry to deliver benefits to society?

We already collaborate quite a lot with industry. The beamlines at PETRA, in particular, are of strong interest. For example, BioNTech conducted some of its research for the COVID-19 vaccine here. We also have a close relationship with the Fraunhofer Society in Germany, which focuses on translating basic research into industrial applications. They famously developed the MP3 format, for instance. Our collaboration with them is quite structured, and there have also been several spinoffs and start-ups based on technology developed at DESY. Looking ahead, we want to significantly strengthen our ties with industry through PETRA IV. With much higher data rates and improved beam quality, it will be far easier to obtain results quickly. Our goal is for 10% of PETRA IV’s capacity to be dedicated to industrial use. Furthermore, we are developing a strong ecosystem for innovation on the campus and the surrounding area, with DESY in the centre, called the Science City Hamburg Bahrenfeld.

What’s your position on “dual use” research, which could have military applications?

The discussion around dual-use research is complicated. Personally, I find the term “dual use” a bit odd – almost any high-tech equipment can be used for both civilian and military purposes. Take a transistor for example, which has countless applications, including military ones, but it wasn’t invented for that reason. At DESY, we’re currently having an internal discussion about whether to engage in projects that relate to defence. This is part of an ongoing process where we’re trying to define under what conditions, if any, DESY would take on targeted projects related to defence. There are a range of views within DESY, and I think that diversity of opinion is valuable. Some people are firmly against this idea, and I respect that. Honestly, it’s probably how I would have felt 10 or 20 years ago. But others believe DESY should play a role. Personally, I’m open to it.

If our expertise can help people defend themselves and our freedom in Europe, that’s something worth considering. Of course, I would love to live in a world without weapons, where no one attacks anyone. But if I were attacked, I’d want to be able to defend myself. I prefer to work on shields, not swords, like in Asterix and Obelix, but, of course, it’s never that simple. That’s why we’re taking time with this. It’s a complex and multifaceted issue, and we’re engaging with experts from peace and security research, as well as the social sciences, to help us understand all dimensions. I’ve already learned far more about this than I ever expected to. We hope to come to a decision on this later this year.

You are DESY’s first female chair. What barriers do you think still exist for women in physics, and how can institutions like DESY address them?

There are two main barriers, I think. The first is that, in my opinion, society at large still discourages girls from going into maths and science.

Certainly in Germany, if you stopped a hundred people on the street, I think most of them would still say that girls aren’t naturally good at maths and science. Of course, there are always exceptions: you do find great teachers and supportive parents who go against this narrative. I wouldn’t be here today if I hadn’t received that kind of encouragement.

That’s why it’s so important to actively counter those messages. Girls need encouragement from an early age, they need to be strengthened and supported. On the encouragement side, DESY is quite active. We run many outreach activities for schoolchildren, including a dedicated school lab. Every year, more than 13,000 school pupils visit our campus. We also take part in Germany’s “Zukunftstag”, where girls are encouraged to explore careers traditionally considered male-dominated, and boys do the same for fields seen as female-dominated.

Looking ahead, we want to significantly strengthen our ties with industry

The second challenge comes later, at a different career stage, and it has to do with family responsibilities. Often, family work still falls more heavily on women than men in many partnerships. That imbalance can hold women back, particularly during the postdoc years, which tend to coincide with the time when many people are starting families. It’s a tough period, because you’re trying to advance your career.

Workplaces like DESY can play a role in making this easier. We offer good childcare options, flexibility with home–office arrangements, and even shared leadership positions, which help make it more manageable to balance work and family life. We also have mentoring programmes. One example is dynaMENT, where female PhD students and postdocs are mentored by more senior professionals. I’ve taken part in that myself, and I think it’s incredibly valuable.

Do you have any advice for early-career women physicists?

If I could offer one more piece of advice, it’s about building a strong professional network. That’s something I’ve found truly valuable. I’m fortunate to have a fantastic international network, both male and female colleagues, including many women in leadership positions. It’s so important to have people you can talk to, who understand your challenges, and who might be in similar situations. So if you’re a student, I’d really recommend investing in your network. That’s very important, I think.

What are your personal reflections on the next-generation colliders?

Our generation has a responsibility to understand the electroweak scale and the Higgs boson. These questions have been around for almost 90 years, since 1935 when Hideki Yukawa explored the idea that forces might be mediated by the exchange of massive particles. While we’ve made progress, a true understanding is still out of reach. That’s what the next generation of machines is aiming to tackle.

The problem, of course, is cost. All the proposed solutions are expensive, and it is very challenging to secure investments for such large-scale projects, even though the return on investment from big science is typically excellent: these projects drive innovation, build high-tech capability and create a highly skilled workforce.

Europe’s role is more vital than ever

From a scientific point of view, the FCC is the most comprehensive option. As a Higgs factory, it offers a broad and strong programme to analyse the Higgs and electroweak gauge bosons. But who knows if we’ll be able to afford it? And it’s not just about money. The timeline and the risks also matter. The FCC feasibility report was just published and is still under review by an expert committee. I’d rather not comment further until I’ve seen the full information. I’m part of the European Strategy Group and we’ll publish a new report by the end of the year. Until then, I want to understand all the details before forming an opinion.

It’s good to have other options too. The muon collider is not yet as technically ready as the FCC or linear collider, but it’s an exciting technology and could be the machine after next. Another could be using plasma-wakefield acceleration, which we’re very actively working on at DESY. It could enable us to build high-energy colliders on a much smaller scale. This is something we’ll need, as we can’t keep building ever-larger machines forever. Investing in accelerator R&D to develop these next-gen technologies is crucial.

Still, I really hope there will be an intermediate machine in the near future, a Higgs factory that lets us properly explore the Higgs boson. There are still many mysteries there. I like to compare it to an egg: you have to crack it open to see what’s inside. And that’s what we need to do with the Higgs.

One thing that is becoming clearer to me is the growing importance of Europe. With the current uncertainties in the US, which are already affecting health and climate research, we can’t assume fundamental research will remain unaffected. That’s why Europe’s role is more vital than ever.

I think we need to build more collaborations between European labs. Sharing expertise, especially through staff exchanges, could be particularly valuable in engineering, where we need a huge number of highly skilled professionals to deliver billion-euro projects. We’ve got one coming up ourselves, and the technical expertise for that will be critical.

I believe science has a key role to play in strengthening Europe, not just culturally, but economically too. It’s an area where we can and should come together.

Clean di-pions reveal vector mesons

LHCb figure 1

Heavy-ion collisions usually have very high multiplicities due to colour flow and multiple nucleon interactions. However, when the ions pass at a separation greater than about twice their radius, in so-called ultra-peripheral collisions (UPCs), electromagnetically induced interactions dominate. In these colour-neutral interactions the ions remain intact and a central system with few particles is produced, whose summed transverse momentum – being the Fourier conjugate of the transverse distance between the ions – is typically less than 100 MeV/c.
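An order-of-magnitude estimate (illustrative, using the uncertainty relation and a lead radius of roughly 7 fm) shows why the transverse momenta are so small:

```latex
p_T \;\lesssim\; \frac{\hbar c}{R_{\mathrm{Pb}}}
\;\approx\; \frac{197\ \mathrm{MeV\,fm}}{7\ \mathrm{fm}}
\;\approx\; 30\ \mathrm{MeV}/c,
```

well below the quoted 100 MeV/c scale.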

In the photoproduction of vector mesons, a photon, radiated from one of the ions, fluctuates into a virtual vector meson long before it reaches the target and then interacts with one or more nucleons in the other ion. The production of ρ mesons has been measured at the LHC by ALICE in PbPb and XeXe collisions, while J/ψ mesons have been measured in PbPb collisions by ALICE, CMS and LHCb. Now, LHCb has isolated a precisely measured, high-statistics sample of di-pions with backgrounds below 1% in which several vector mesons are seen.

Figure 1 shows the invariant-mass distribution of the pion pairs. The fit to the data requires contributions from the ρ meson, continuum ππ production, the ω meson and two higher-mass resonances at about 1.35 and 1.80 GeV, consistent with excited ρ mesons. The higher-mass structure was also discernible in previous measurements by STAR and ALICE. Since its discovery in 1961, the ρ meson has proved challenging to describe because of its broad width and because of interference effects, so more data in the di-pion channel – particularly data that are practically background-free down almost to the production threshold – are welcome. These data may also help with the hadronic corrections to the prediction of the muon g-2: the dip-and-bump structure at high masses seen by LHCb is qualitatively similar to that observed by BaBar in e+e− → π+π− scattering (CERN Courier March/April 2025 p21). From the invariant-mass spectrum, LHCb has measured the cross-sections for ρ, ω, ρ′ and ρ′′ production as a function of rapidity in photoproduction on lead nuclei.
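As an illustration of why such fits are delicate (a generic sketch, not the LHCb fit model: the masses, widths, couplings and phases below are rough illustrative values), interference between overlapping resonances produces dips and bumps rather than a simple sum of peaks:

```python
import numpy as np

def bw(m, m0, g0):
    """Simple relativistic Breit-Wigner amplitude with a constant width."""
    return m0 * g0 / (m0**2 - m**2 - 1j * m0 * g0)

m = np.linspace(0.3, 2.2, 1000)   # di-pion invariant mass (GeV)

# Rough resonance parameters and arbitrary relative phases (illustrative only).
a_rho   = bw(m, 0.775, 0.149)                               # rho(770)
a_omega = 0.10 * np.exp(1j * 1.5) * bw(m, 0.783, 0.0085)    # narrow omega admixture
a_rho1  = 0.20 * np.exp(1j * 2.5) * bw(m, 1.35, 0.30)       # rho-like state ~1.35 GeV
a_rho2  = 0.10 * np.exp(1j * 0.5) * bw(m, 1.80, 0.25)       # rho-like state ~1.80 GeV
a_cont  = 0.05 + 0j                                         # non-resonant continuum

# Coherent sum of the amplitudes.
intensity = np.abs(a_rho + a_omega + a_rho1 + a_rho2 + a_cont)**2
print(m[np.argmax(intensity)])    # peak position, close to the rho mass
```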

Naively, photoproduction on a nucleus should simply scale with the number of nucleons compared to photoproduction on the proton, and can be calculated in the impulse approximation, which takes into account only the nuclear form factor and neglects all other potential nuclear effects.

However, nuclear shadowing, caused by multiple interactions as the meson passes through the nucleus, leads to a suppression (CERN Courier January/February 2025 p31). In addition, there may be further non-linear QCD effects at play.

Elastic rescattering is usually described through a Glauber calculation that accounts for multiple elastic scatters; the GKZ model extends this using Gribov’s formalism to include inelastic scatters. The inset in figure 1 shows the measured differential cross-section for the ρ meson as a function of rapidity in LHCb data, compared to the GKZ prediction, to a prediction from the STARlight generator and to ALICE data at central rapidities. Additional suppression due to nuclear effects is observed beyond that predicted by GKZ.

European strategy update: the community speaks

Community input themes of the European Strategy process

The deadline for submitting inputs to the 2026 update of the European Strategy for Particle Physics (ESPP) passed on 31 March. A total of 263 submissions, ranging from individual to national perspectives, express the priorities of the high-energy physics community (see “Community inputs” figure). These inputs will be distilled by expert panels in preparation for an Open Symposium that will be held in Venice from 23 to 27 June (CERN Courier March/April 2025 p11).

Launched by the CERN Council in March 2024, the stated aim of the 2026 update to the ESPP is to develop a visionary and concrete plan that greatly advances human knowledge in fundamental physics, in particular through the realisation of the next flagship project at CERN. The community-wide process, which is due to submit recommendations to Council by the end of the year, is also expected to prioritise alternative options to be pursued if the preferred project turns out not to be feasible or competitive.

“We are heartened to see so many rich and varied contributions, in particular the national input and the various proposals for the next large-scale accelerator project at CERN,” says strategy secretary Karl Jakobs of the University of Freiburg, speaking on behalf of the European Strategy Group (ESG). “We thank everyone for their hard work and rigour.”

Two proposals for flagship colliders are at an advanced stage: a Future Circular Collider (FCC) and a Linear Collider Facility (LCF). As recommended in the 2020 strategy update, a feasibility study for the FCC was released on 31 March, describing a 91 km-circumference infrastructure that could host an electron–positron Higgs and electroweak factory followed by an energy-frontier hadron collider at a later stage. Inputs for an electron–positron LCF cover potential starting configurations based on Compact Linear Collider (CLIC) or International Linear Collider (ILC) technologies. It is proposed that the latter LCF could be upgraded using CLIC, Cool Copper Collider, plasma-wakefield or energy-recovery technologies and designs. Other proposals outline a muon collider and a possible plasma-wakefield collider, as well as potential “bridging” projects to a future flagship collider. Among the latter are LEP3 and LHeC, which would site an electron–positron and an electron–proton collider, respectively, in the existing LHC tunnel. For the LHeC, an additional energy-recovery linac would need to be added to CERN’s accelerator complex.

Future choices

In probing beyond the Standard Model and studying the Higgs boson and its electroweak domain more deeply, next-generation colliders will pick up where the High-Luminosity LHC (HL-LHC) leaves off. In a joint submission, the ATLAS and CMS collaborations presented physics projections which suggest that the HL-LHC will be able to: observe the H → µ+µ− and H → Zγ decays of the Higgs boson; observe Standard Model di-Higgs production; and measure the Higgs boson’s trilinear self-coupling with a precision better than 30%. The joint document also highlights the need for further progress in high-precision theoretical calculations aligned with the demands of the HL-LHC, and serves as important input to the discussion on the choice of a future collider at CERN.

Neutrinos and cosmic messengers, dark matter and the dark sector, strong interactions and flavour physics also attracted many inputs, allowing priorities in non-collider physics to complement collider programmes. Underpinning the community’s physics aspirations are numerous submissions in the categories of accelerator science and technology, detector instrumentation and computing. Progress in these technologies is vital for the realisation of a post-LHC collider, which was also reflected by the recommendation of the 2020 strategy update to define R&D roadmaps. The scientific and technical inputs will be reviewed by the Physics Preparatory Group (PPG), which will conduct comparative assessments of the scientific potential of various proposed projects against defined physics benchmarks.

We are heartened to see so many rich and varied contributions

Key to the ESPP 2026 update are 57 national and national-laboratory submissions, including some from outside Europe. Most identify the FCC as the preferred project to succeed the LHC. If the FCC is found to be unfeasible, many national communities propose that a linear collider at CERN should be pursued, while taking into account the global context: a 250 GeV linear collider may not be competitive if China decides to proceed with a Circular Electron Positron Collider at a comparable energy on the anticipated timescale, potentially motivating a higher energy electron–positron machine or a proton–proton collider instead.

Complex process

In its review, the ESG will take the physics reach of proposed colliders as well as other factors into account. This complex process will be undertaken by seven working groups, addressing: national inputs; diversity in European particle physics; project comparison; implementation of the strategy and deliverability of large projects; relations with other fields of physics; sustainability and environmental impact; public engagement, education, communication and social and career aspects for the next generation; and knowledge and technology transfer. “The ESG and the PPG have their work cut out and we look forward to further strong participation by the full community, in particular at the Open Symposium,” says Jakobs.

A briefing book prepared by the PPG based on the community input and discussions at the Open Symposium will be submitted to the ESG by the end of September for consideration during a five-day-long drafting session, which is scheduled to take place from 1 to 5 December. The CERN Council will then review the final ESG recommendations ahead of a special session to be held in Budapest in May 2026.

Machine learning in industry

Antoni Shtipliyski

In the past decade, machine learning has surged into every corner of industry, from travel and transport to healthcare and finance. For early-career researchers, who have spent their PhDs and postdocs coding, a job in machine learning may seem a natural next step.

“Scientists often study nature by attempting to model the world around us into mathematical models and computer code,” says Antoni Shtipliyski, engineering manager at Skyscanner. “But that’s only one part of the story if the aim is to apply these models to large-scale research questions or business problems. A completely orthogonal set of challenges revolves around how people collaborate to build and operate these systems. That’s where the real work begins.”

Used to large-scale experiments and collaborative problem solving, particle physicists are uniquely well-equipped to step into machine-learning roles. Shtipliyski worked on upgrades for the level-1 trigger system of the CMS experiment at CERN, before leaving to lead the machine-learning operations team in one of the biggest travel companies in the world.

Effective mindset

“At CERN, building an experimental detector is just the first step,” says Shtipliyski. “To be useful, it needs to be operated effectively over a long period of time. That’s exactly the mindset needed in industry.”

During his time as a physicist, Shtipliyski gained multiple skills that continue to help him at work today, but there were also other areas he had to develop to succeed in machine learning in industry. One critical gap in a physicist’s portfolio, he notes, is that many people interpret machine-learning careers as purely algorithmic development and model training.

“At Skyscanner, my team doesn’t build models directly,” he says. “We look after the platform used to push and serve machine-learning models to our users. We oversee the techno-social machine that delivers these models to travellers. That’s the part people underestimate, and where a lot of the challenges lie.”

An important step for physicists transitioning out of academia is to understand the entire lifecycle of a machine-learning project. This includes not only developing an algorithm, but deploying it, monitoring its performance, adapting it to changing conditions and ensuring that it serves business or user needs.
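A minimal sketch of that lifecycle in Python (a hypothetical example using scikit-learn; the thresholds and the drift check are illustrative assumptions, not a description of any company’s platform):

```python
import pickle
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Develop: train a model on historical data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_live, y_train, y_live = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 2. Deploy: serialise the model so a serving system can load it.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# 3. Monitor: track live performance and a simple measure of input drift.
live_accuracy = model.score(X_live, y_live)
feature_drift = np.abs(X_live.mean(axis=0) - X_train.mean(axis=0)).max()

# 4. Adapt: retrain when performance degrades or inputs drift too far
#    (thresholds chosen arbitrarily for illustration).
if live_accuracy < 0.8 or feature_drift > 0.5:
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_train, X_live]), np.hstack([y_train, y_live])
    )
```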

Learning to write and communicate yourself is incredibly powerful

“In practice, you often find new ways that machine-learning models surprise you,” says Shtipliyski. “So having flexibility and confidence that the evolved system still works is key. In physics we’re used to big experiments like CMS being designed 20 years before being built. By the time it’s operational, it’s adapted so much from the original spec. It’s no different with machine-learning systems.”

This ability to live with ambiguity and work through evolving systems is one of the strongest foundations physicists can bring. But large complex systems cannot be built alone, so companies will be looking for examples of soft skills: teamwork, collaboration, communication and leadership.

“Most people don’t emphasise these skills, but I found them to be among the most useful,” Shtipliyski says. “Learning to write and communicate yourself is incredibly powerful. Being able to clearly express what you’re doing and why you’re doing it, especially in high-trust environments, makes everything else easier. It’s something I also look for when I do hiring.”

Industry may not offer the same depth of exploration as academia, but it does offer something equally valuable: breadth, variety and a dynamic environment. Work evolves fast, deadlines come more readily and teams are constantly changing.

“In academia, things tend to move more slowly. You’re encouraged to go deep into one specific niche,” says Shtipliyski. “In industry, you often move faster and are sometimes more shallow. But if you can combine the depth of thought from academia with the breadth of experience from industry, that’s a winning combination.”

Applied skills

For physicists eyeing a career in machine learning, the best thing they can do is familiarise themselves with the tools and practices for building and deploying models. Show that you can take the skills developed in academia and apply them to other environments. This tells recruiters that you have a willingness to learn, and is a simple but effective way of demonstrating commitment to a project from start to finish, beyond your assigned work.

“People coming from physics or mathematics might want to spend more time on implementation,” says Shtipliyski. “Even if you follow a guided walkthrough online, or complete classes on Coursera, going through the whole process of implementing things from scratch teaches you a lot. This puts you in a position to reason about the big picture and shows employers your willingness to stretch yourself, to make trade-offs and to evaluate your work critically.”

A common misconception is that practising machine learning outside of academia is somehow less rigorous or less meaningful. But in many ways, it can be more demanding.

“Scientific development is often driven by arguments of beauty and robustness. In industry, there’s less patience for that,” he says. “You have to apply it to a real-world domain – finance, travel, healthcare. That domain shapes everything: your constraints, your models, even your ethics.”

Shtipliyski emphasises that the technical side of machine learning is only one half of the equation. The other half is organisational: helping teams work together, navigate constraints and build systems that evolve over time. Physicists would benefit from exploring different business domains to understand how machine learning is used in different contexts. For example, GDPR constraints make privacy a critical issue in healthcare and tech. Learning how government funding is distributed throughout each project, as well as understanding how to build a trusting relationship between the funding agencies and the team, is equally important.

“A lot of my day-to-day work is just passing information, helping people build a shared mental model,” he says. “Trust is earned by being vulnerable yourself, which allows others to be vulnerable in turn. Once that happens, you can solve almost any problem.”

Taking the lead

Particle physicists are used to working in high-stakes international teams, so this collaborative mindset is ingrained in their training. But many may not have had the opportunity to lead, manage or take responsibility for an entire project from start to finish.

“In CMS, I did not have a lot of say due to the complexity and scale of the project, but I was able to make meaningful contributions in the validation and running of the detector,” says Shtipliyski. “But what I did not get much exposure to was the end-to-end experience, and that’s something employers really want to see.”

This does not mean you need to be a project manager to gain leadership experience. Early-career researchers can up-skill by mentoring a newcomer, proactively improving the team’s workflow, or networking with other physicists and thinking outside the box.

You can be the dedicated expert in the room, even if you’re new. That feels really empowering

“Even if you just shadow an existing project, if you can talk confidently about what was done, why it was done and how it might be done differently – that’s huge.”

Many early-career researchers hesitate before leaving academia. They worry about making the “wrong” choice, or being labelled as a “finance person” or “tech person” as soon as they enter another industry. This is something Shtipliyski struggled to reckon with, before eventually realising that such labels do not define you.

“It was tough at CERN trying to anticipate what comes next,” he admits. “I thought that I could only have one first job. What if it’s the wrong one? But once a scientist, always a scientist. You carry your experiences with you.”

Shtipliyski quickly learnt that industry operates under a different set of rules, where everyone comes from a different background and levels of expertise vary from person to person. Having faced intense imposter syndrome at CERN – sharing spaces with world-leading experts – he found that industry offered a more level playing field.

“In academia, there’s a kind of ladder: the longer you stay, the better you get. In industry, it’s not like that,” says Shtipliyski. “You can be the dedicated expert in the room, even if you’re new. That feels really empowering.”

Industry rewards adaptability as much as expertise. For physicists stepping beyond academia, the challenge is not abandoning their training, but expanding it – learning to navigate ambiguity, communicate clearly and understand the full lifecycle of real-world systems. Harnessing a scientist’s natural curiosity, and demonstrating flexibility, allows the transition to become less about leaving science behind, and more about discovering new ways to apply it.

“You are the collection of your past experiences,” says Shtipliyski. “You have the freedom to shape the future.”

DESI hints at evolving dark energy

The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.

Cosmological constant

In ΛCDM, Λ refers to the cosmological constant – a parameter originally introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the expansion is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. It relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w = –1 and ρ is constant – a combination that has to date explained observations well. However, new results from the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
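The connection between w and the evolution of the dark-energy density follows from the standard scaling relation for a component with constant equation of state (a textbook result, not specific to DESI):

```latex
\rho(a) \;\propto\; a^{-3(1+w)},
```

where a is the cosmic scale factor; for w = –1 the density is constant, while any other value makes it evolve as the universe expands.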

These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to build a catalogue of distance measurements based on each object’s velocity-induced shift in wavelength, or redshift. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used – enough to provide strong hints of w changing with time.

The first studies of the expansion rate of the universe were based on redshift measurements of local objects such as supernovae. As these objects are relatively close, they probe the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), whose detailed structure carries imprints of the evolution of the early universe. The significantly smaller expansion rate inferred from the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe, focusing on the distances between galaxies to measure the expansion (see “3D map” figure).

Tension with ΛCDM

The 3D map produced by DESI can be used to study the evolution of the universe because it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB, but they also left imprints in the distribution of baryonic matter up to the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures observed today.

The regions that originally contained higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of ΛCDM parameters, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.
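Schematically, the fit exploits the baryon-acoustic-oscillation scale as a standard ruler (these are standard relations, not the DESI likelihood itself): the sound horizon at the drag epoch, r_d, appears in the galaxy distribution transverse to and along the line of sight as

```latex
\Delta\theta \;\simeq\; \frac{r_d}{D_M(z)},
\qquad
\Delta z \;\simeq\; \frac{r_d\,H(z)}{c},
```

so measuring these angular and redshift separations at many redshifts constrains the expansion history H(z) and hence w.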

Among other studies, the latest DESI data were combined with CMB observations and fitted to the ΛCDM model. This works relatively well, although it requires a lower matter-density parameter than found from CMB data alone. However, the resulting cosmological parameters give a poor match to the supernova measurements. Similarly, fitting the ΛCDM model using the supernova data results in poor agreement with both the DESI and CMB data, putting some strain on the ΛCDM model. Things do not get significantly better when some freedom is added to these analyses by allowing w to differ from –1.

The new data release provides significant evidence of a deviation from the ΛCDM model

An adaptation of the ΛCDM model that brings all three datasets into agreement requires w to evolve with redshift, or time. The implications for the acceleration of the universe are shown in the “Tension with ΛCDM” figure, which plots the deceleration parameter q of the expansion as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, the acceleration increases with time, as the redshift approaches 0. The DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than predicted by ΛCDM.
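For context (standard definitions with indicative parameter values, not DESI’s fitted numbers), the deceleration parameter is

```latex
q \;\equiv\; -\frac{\ddot{a}\,a}{\dot{a}^{2}}
\;=\; \tfrac{1}{2}\,\Omega_m \;+\; \tfrac{1}{2}\,(1+3w)\,\Omega_{\mathrm{DE}}
\quad\text{(flat universe)},
```

so with Ω_m ≈ 0.3, Ω_DE ≈ 0.7 and w = –1 one finds q ≈ –0.55 today, i.e. an accelerating expansion.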

Although this model matches the data well, a theoretical explanation is difficult. In particular, the data imply that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.
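For a time-varying equation of state the scaling quoted earlier generalises to (again a textbook relation, given for orientation):

```latex
\rho(a) \;\propto\; \exp\!\left[\,3\int_a^{1}\frac{1+w(a')}{a'}\,\mathrm{d}a'\right],
```

so whenever w < –1 the dark-energy density grows as the universe expands – the “phantom-like” behaviour that is hard to accommodate theoretically.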

Overall, the new data release provides significant evidence for a deviation from the ΛCDM model. The exact significance depends on the specific analysis and on which datasets are combined, but all such studies give similar results. As no 5σ discrepancy has yet been found, there is no reason to discard ΛCDM, though this could change with another two years of DESI data on the way, along with data from the European Euclid mission, the Vera C Rubin Observatory and the Nancy Grace Roman Space Telescope, each of which will provide new insights into the expansion at various redshifts.
