Seven colliders for CERN

Seven ambitious, diverse and technically complex colliders have been proposed as options for CERN’s next large-scale collider project: CLIC, FCC-ee, FCC-hh, LCF, LEP3, LHeC and a muon collider. The European Strategy Group tasked a working group drawn from across the field (WG2a) to compare these projects on the basis of their technical maturity, performance expectations, risk profiles, and schedule and cost uncertainties. This evaluation is based on documentation submitted for the 2026 update to the European Strategy for Particle Physics (CERN Courier May/June 2025 p8). With WG2a’s final report now published, clear-eyed comparisons can be made across the seven projects.

CLIC

The Compact Linear Collider (CLIC) is a staged linear collider that collides a polarised electron beam with an unpolarised positron beam at two interaction points (IPs) which share the luminosity (see figures and “Design parameters” table). It is based on a two-beam acceleration scheme where power from an intense 1 GHz drive beam is extracted and used to operate an X-band 12 GHz linac with accelerating gradients from 72 to 100 MV/m. The potential of two-beam acceleration to achieve high gradients enables a compact linear-collider footprint. Collision energies between 380 GeV and 1.5 TeV can be achieved with a total tunnel length of 12.1 or 29.4 km, respectively. The proof-of-concept work at the CLIC Test Facility 3 (CTF3) has demonstrated the principles successfully, but not yet at a scale representative of a full collider. A larger-scale demonstration with higher beam currents and more accelerating structures would be necessary to achieve full confidence in CLIC’s construction readiness.

The project has a well-developed design incorporating decades of effort, and detailed start-to-end (damping ring to IP) simulations indicate that CLIC’s design luminosity is achievable. CLIC requires tight fabrication and alignment tolerances, active stabilisation, and various feedback and beam-based correction concepts. Failure to achieve all of these tight specifications could translate into a luminosity reduction in practical operation. CLIC still requires a substantial preparation phase and territorial implementation studies, which introduce some uncertainty into its proposed timeline.

FCC-ee

The electron–positron Future Circular Collider (FCC-ee) is the proposed first stage of the integrated FCC programme. This double-ring collider, with a 90.7 km circumference, enables collision centre-of-mass energies up to 365 GeV and allows for four IPs.

FCC-ee stands out for its level of detail and engineering completeness. The FCC Feasibility Study, including a cost estimate, was recently completed and has undergone scrutiny by expert committees, CERN Council and its subordinate bodies (CERN Courier May/June 2025 p9). This preparation translates into a relatively high technical-readiness level (TRL) across major subsystems, with only a few lower-level/lower-cost elements requiring targeted R&D. The layout has been chosen after a detailed placement study considering territorial, geological and environmental constraints. Dialogue with the public and host-state authorities has begun.

Performance estimates for FCC-ee are considered robust: previous experience with machines such as LEP, PEP-II, DAΦNE and SuperKEKB has provided guidance for the design and bodes well for achieving the performance targets with confidence. In terms of readiness, FCC-ee is the only project that already possesses a complete risk-management framework integrated into its construction planning.

FCC-hh

The hadron version of the Future Circular Collider (FCC-hh) would provide proton–proton collisions up to a nominal energy of 85 TeV – the maximum achievable in the 90.7 km tunnel for the target dipole field of 14 T. As a second stage of the integrated FCC programme, it would occupy the tunnel after the removal of FCC-ee, and so could potentially start operation in the mid-2070s. FCC-hh’s cost uncertainty is currently dominated by its magnets. The baseline design uses superconducting Nb3Sn dipoles operating at 1.9 K, though high-temperature superconducting (HTS) magnets could reduce the electricity consumption or allow higher fields and beam energies for the same power consumption. Both technology approaches are active research directions of Europe’s high-field magnet programme.

The required Nb3Sn technology is progressing steadily, but still needs 15 to 20 years of R&D before industry-ready designs could be available. HTS cables satisfying the specifications required for the magnets of a high-luminosity collider, although extremely promising, are at an even earlier stage of development. If FCC-hh were to proceed as a standalone project, operations could possibly start around 2055 from a technical perspective. In that case the magnets would need to be based on Nb3Sn technology, as HTS accelerator-magnet technology is not expected to be available in that timeframe.

FCC-hh’s performance expectations draw strength from the LHC experience, though the achievable integrated luminosity would depend on the required “luminosity levelling” scenario that might be determined by pile-up control at the experiments. Luminosity levelling is a technique used in particle colliders such as the LHC to keep the instantaneous luminosity approximately constant at the maximum level compatible with detector readout, rather than letting it start very high and then decay rapidly.
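The principle can be illustrated with a toy calculation – a minimal sketch with placeholder numbers, not FCC-hh parameters – that clips an exponentially decaying luminosity curve at a pile-up-limited ceiling. Real levelling also slows the burn-off of the beams, extending the fill further than this simple clip suggests.

```python
import numpy as np

# Toy illustration of luminosity levelling (all numbers are placeholders).
# Without levelling, the luminosity starts very high and decays away;
# with levelling, it is held at a ceiling set by pile-up control until
# the decaying curve drops below that ceiling.
L_peak = 30.0    # potential initial luminosity, arbitrary units
L_level = 5.0    # ceiling acceptable to the experiments
tau = 6.0        # effective luminosity decay time, hours

t = np.linspace(0.0, 12.0, 1000)        # hours into the fill
L_free = L_peak * np.exp(-t / tau)      # unlevelled: free exponential decay
L_lev = np.minimum(L_free, L_level)     # levelled: clipped at the ceiling

# Integrated luminosity in each mode; the toy neglects the slower proton
# burn-off during levelling, which favours levelling even more in practice.
print(np.trapz(L_free, t), np.trapz(L_lev, t))
```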

LCF

The Linear Collider Facility (LCF) is a linear electron–positron collider, based on the design of the International Linear Collider (ILC), in a 33.5 km tunnel with two IPs sharing the pulses delivered by the collider and with double the repetition rate of the ILC. The first phase aims at a centre-of-mass energy of 250 GeV, though the tunnel is sized to accommodate an upgrade to 550 GeV. LCF’s main linacs incorporate 1.3 GHz bulk-Nb superconducting radiofrequency (SRF) cavities for acceleration, operated at an average gradient of 31.5 MV/m and a cavity quality factor twice that of the ILC design at the same accelerating gradient. The quality factor of an RF cavity is a measure of how efficiently the cavity stores electromagnetic energy compared with how much it loses per cycle. LCF can deliver polarised positron and electron beams. Its engineering definition is solid and its SRF technology is widely used in several operational facilities, most prominently the European XFEL. However, the specific performance targets exceed what has been routinely achieved in operation to date, and demonstrating this combination of high gradient and high quality factor remains a central R&D requirement.
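For orientation, the textbook relation Q = ωU/P (stored energy U, dissipated power P, angular frequency ω) implies that the stored energy would decay with time constant Q/ω if the RF drive were switched off. A minimal sketch, with an illustrative quality factor rather than an LCF specification:

```python
import math

# Quality factor: Q = omega * (energy stored) / (power dissipated),
# so the stored energy decays as exp(-omega * t / Q) once the drive stops.
f = 1.3e9    # 1.3 GHz SRF cavity frequency, as in the text
Q = 2.0e10   # illustrative quality factor for a bulk-Nb cavity

tau = Q / (2 * math.pi * f)   # energy decay time constant
print(f"energy decay time ~ {tau:.1f} s")   # ~2.4 s: minuscule wall losses
```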

Several lower-TRL components – such as the polarised positron source, beam dumps and certain RF systems – also require focused development. Final-focus performance, which is more critical in linear colliders than in circular colliders, relies on validation at KEK’s Accelerator Test Facility 2, which is being extended and upgraded. The overall schedule is credible, but it depends on securing the needed R&D funding and would require a preparation phase including detailed territorial implementation studies and geological investigations.

LEP3

The Large Electron Positron collider 3 (LEP3) proposal explores the reuse of the existing LEP/LHC tunnel for a new circular electron–positron (e⁺e⁻) collider. LEP3 has two IPs and the potential for collision energies ranging from 91 to 230 GeV; its luminosity performance and energy range are limited by synchrotron radiation emission, which is more severe than in FCC-ee due to its smaller radius and the limited space available for the SRF installation.

The LEP3 proposal is not yet based on a conceptual or technical design report. Its optics and performance estimates depend on extrapolations from FCC-ee and earlier preliminary studies, and the design has not undergone full simulation-based validation. The current design relies on HTS combined quadrupole and sextupole focusing magnets. Though they would be central to LEP3 achieving a competitive luminosity and power efficiency, these components currently have low TRL scores.

Although tunnel reuse simplifies territorial planning, logistics such as dismantling HL-LHC components introduce non-trivial uncertainties for LEP3. In the absence of a conceptual design report, timelines, costs and risks are subject to significant uncertainty.

LHeC

The Large Hadron–Electron Collider (LHeC) proposal incorporates a novel energy-recovery linac (ERL) coupled to the LHC. High-luminosity collisions take place between a 7 TeV proton beam from the HL-LHC and a high-intensity 50 GeV electron beam accelerated in the new ERL. The LHeC ERL would consist of two linacs based on bulk-Nb SRF 800 MHz cavities, connected by recirculation arcs, resulting in a total machine circumference equal to one third that of the LHC. After acceleration, the electron beam collides with the proton beam and is then decelerated in the same SRF cavities, “giving back” its energy to the RF system.

The LHeC’s performance depends critically on high-current, multi-pass energy recovery at multi-GeV energies, which has not yet been demonstrated. The PERLE (Powerful Energy Recovery Linac for Experiments) demonstrator under construction at IJCLab in Orsay will test critical elements of this technology. The main LHeC performance uncertainties relate to the efficiency of energy recovery and beam-loss control of the electron beam during the deceleration process after colliding with the proton beam. Schedule, cost and performance will depend on the outcomes demonstrated at PERLE.

Muon collider

Among the large-scale collider proposals submitted to the European Strategy for Particle Physics update, a muon collider offers a potentially energy-efficient path toward high-luminosity lepton collisions at a centre-of-mass energy of 10 TeV. The larger mass of the muons, as compared with electrons and positrons, reduces the amount of synchrotron radiation emitted in a circular collider of a given energy and radius. The muons are generated from the decays of pions produced by the collision of a high-power proton beam with a target. “Ionisation cooling” of the muon beams via energy loss in absorbers made of low-atomic-number materials and acceleration by means of high-gradient RF cavities immersed in strong magnetic fields is required to reduce the energy spread and divergence of this tertiary beam. Fast acceleration is then needed to extend the muons’ lifetimes in the laboratory frame, thereby reducing the fraction that decays before collision. To achieve this, novel rapid-cycling synchrotrons (RCSs) could be installed in the existing SPS and LHC tunnels.
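The payoff of fast acceleration follows from a one-line application of time dilation. A minimal sketch, assuming a beam energy of 5 TeV (half of the 10 TeV centre-of-mass energy quoted above):

```python
# Relativistic time dilation: a muon's lab-frame lifetime is gamma * tau0.
tau0 = 2.197e-6    # muon rest-frame lifetime, seconds
m_mu = 0.1057      # muon mass, GeV
E_beam = 5000.0    # assumed beam energy, GeV (10 TeV centre of mass)

gamma = E_beam / m_mu    # Lorentz factor, roughly 47,000
tau_lab = gamma * tau0   # about 0.1 s instead of 2.2 microseconds
print(f"gamma ~ {gamma:.0f}, lab-frame lifetime ~ {tau_lab * 1e3:.0f} ms")
```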

Neutrino-induced radiation, together with technologies such as high-field solenoids and radiofrequency cavities operated in multi-tesla magnetic fields, presents major challenges that require extensive R&D. Demonstrating muon cooling at the required level in all six dimensions of phase space is a necessary ingredient to validate the performance, schedule and cost estimates.

Design parameters

WG2a’s comparison, together with the analysis conducted by the other working groups of the European Strategy Group, notably that of WG2b, which is providing an assessment of the physics reach of the various proposals, provides vital input to the recommendations that the European particle-physics community will make for securing the future of the field. 

What can you do with 380 million Higgs bosons?

The Higgs boson is uniquely simple – the only Standard Model particle with no spin. Paradoxically, this allows its behaviour to be uniquely complex, notably due to the “scalar potential” built from the strength of its own field. Shaped like a Mexican hat, the Higgs potential has a local maximum of potential energy at zero field, and a ring of minima surrounding it.

In the past, the Higgs field settled into this ring, where it still dwells today. Since then, the field has been permanently “switched on” – a directionless field with a nonzero “vacuum expectation value” that is ubiquitous throughout the universe. Its interactions with a number of other fundamental particles give them mass. What remains unclear is how the Higgs field behaves once pushed from this familiar minimum. Where will it go next, how did it get there in the first place and might new physics modify this picture?

The LHC alone has shed experimental light on this physics. Further progress on this compelling frontier of fundamental science requires upgrades and new colliders. The next step along this path is the High-Luminosity LHC (HL-LHC), which is scheduled to begin operations in 2030. The HL-LHC is set to outperform the LHC by far, with a total dataset of 380 million Higgs bosons created inside the ATLAS and CMS experiments – a sample more than 10 times larger than any studied so far (see “A leap in technology” panel). We still need to unlock the full reach of the HL-LHC, but three scientific questions may serve to illustrate what can be studied with 380 million Higgs bosons.

What is the fate of the universe?

The stability of our universe hangs in a delicate balance. Quantum corrections could make the Higgs potential bend downward again at high values of the Higgs field, creating a lower-energy state beneath our own (see “The Higgs potential” panel). Through quantum tunnelling, tiny regions of space could spontaneously make the transition, releasing energy as the Higgs field settles into a new minimum of the Higgs potential. Bubbles of the new vacuum would expand at the speed of light, changing the vacuum state of the regions they encounter.

A second minimum?

Details matter. The Higgs potential is modified by the effect of virtual loops from all particles interacting with the Higgs field. Bosons push the Higgs potential upwards at high field values, and fermions pull it downwards. If the Standard Model remains valid up to high field values, perhaps as high as the Planck scale where quantum gravity is expected to become relevant, these corrections may determine the ultimate fate of the vacuum. As the most massive Standard Model particle yet discovered, the top quark makes a dominant negative contribution at high energies and field strengths. Together with a smaller effect from the mass of the Higgs boson itself, the top-quark mass defines three possible regimes. 

In the stable case, the Higgs potential remains above the current minimum up to high field values, and no deeper minimum is present.

If a second, lower minimum forms at high field values, but is shielded by a large energy barrier, the vacuum can be “metastable”. In that case, quantum tunnelling could in principle occur, but on timescales exceeding the age of the universe.

In the unstable regime, the barrier is low enough for decay to have already occurred.

Current observations place our universe safely within the metastable zone, far from any immediate change (see “A second minimum?” figure). Yet the precision of the latest LHC measurements, based on independent determinations of the top-quark mass (purple ellipses), leaves unresolved whether the universe is stable or metastable. Other uncertainties, such as that on the strength of nature’s strong coupling, also affect the distinction between the two regimes, shifting the boundary between stability and metastability (orange band).

The HL-LHC will be well placed to help resolve the question of the stability of the vacuum thanks to improvements in the measurements of the top quark and Higgs-boson masses (red ellipse). This will rely on combining the HL-LHC’s large dataset, the ingenuity of expected analysis improvements and theoretical progress in the fundamental interpretation of these measurements.

The Higgs potential

The Higgs boson is the only Standard Model particle with no spin – a quantum number that behaves as if fundamental particles were spinning, but which cannot correspond to a physical rotation without violating relativity theory.

This allows the Higgs field to experience a scalar potential – energy penalties that depend on the strength of the Higgs field itself. This is forbidden for fermions (spin ½) and massless bosons (spin 1) by Lorentz symmetry and gauge invariance.

In the Standard Model, the Higgs field is subject to the Higgs potential, shaped like a Mexican hat, with a maximum of potential energy at zero field, and a minimum at a ring in the complex plane of values of the Higgs field. Its polynomial form is restricted by gauge symmetry. Experimentally, it can be inferred by measuring properties of the Higgs boson such as its self-coupling λ3.
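For reference, the standard form referred to here can be written (with H the Higgs doublet) as

```latex
V(H) = -\mu^2\, H^\dagger H + \lambda \left(H^\dagger H\right)^2,
\qquad
v = \sqrt{\mu^2 / \lambda} \simeq 246~\mathrm{GeV},
\qquad
m_H^2 = 2\lambda v^2,
```

where the negative quadratic term creates the central maximum at zero field and the quartic term turns the potential back up, producing the ring of minima at the vacuum expectation value v.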

Two effects then modify the Mexican-hat shape in ways that are difficult to predict but have important consequences for particle physics and cosmology. These are due to the interactions of the Higgs field with virtual particles and real thermal excitations. Quantum fluctuations modify the energy penalty of exciting the Higgs field due to virtual loops from all Standard Model particles. Changes in the temperature of the universe also generate changes in the shape of the Higgs potential due to the interaction of the Higgs field with real thermal excitations in the hot early universe. Properties such as λ3 are also affected by these effects.

Davide De Biasio, associate editor

Why is there more matter than antimatter?

Constraining the Higgs potential

The Higgs potential wasn’t always a Mexican hat. If the early universe got hot enough, interactions between the Higgs field and a hot plasma of particles shaped the Higgs potential into a steep bowl with a minimum at zero field, yielding no vacuum expectation value. As the universe cooled, this potential drooped into its familiar Mexican-hat shape, with a central peak surrounded by a ring of minima, where the Higgs field sits today. But did the Higgs field pass through an intermediate stage, with a “bump” separating the inner minimum from the ring?

The answer depends on the strength of the Higgs self-coupling, λ3, which governs the trilinear coupling where three Higgs-boson lines meet at a single vertex in a Feynman diagram. But λ3 is not yet measured. The most recent joint ATLAS and CMS analysis excludes values outside of –0.71 to 6.1 times its expected value in the Standard Model with 95% confidence.
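In one common convention, the Standard Model fixes the trilinear coupling in terms of already-measured quantities, which is why the result is quoted as a ratio to the prediction:

```latex
\lambda_3^{\mathrm{SM}} = \frac{3\, m_H^2}{v}
\approx \frac{3 \times (125~\mathrm{GeV})^2}{246~\mathrm{GeV}}
\approx 190~\mathrm{GeV},
\qquad
-0.71 < \frac{\lambda_3}{\lambda_3^{\mathrm{SM}}} < 6.1
\quad (95\%~\mathrm{CL}).
```

Other normalisations of λ3 appear in the literature, but the ratio to the Standard Model value is convention-independent.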

In the Standard Model, the vacuum smoothly rolled from zero Higgs field to its new minimum in the outer ring. But if λ3 were at least 50% stronger than in the Standard Model, this smooth “crossover” phase transition may have been prevented by an intermediate bump. The vacuum would then have experienced a strong first-order phase transition (FOPT), like ice melting or water boiling at everyday pressures. As the universe cooled, regions of space would have tunnelled into the new vacuum, forming bubbles that expanded and merged. These bubble-wall collisions, combined with additional processes beyond the Standard Model that violate the combined conservation of charge and parity, could have contributed to the observed excess of matter over antimatter – one of the deepest mysteries of modern physics. In the early universe there appears to have been an excess of baryons over antibaryons of roughly one part in a billion, and it is this surplus that survives today after the annihilation of the rest into photons.

The most direct probe of λ3 comes from Higgs-boson pair production (HH). HH production happens most often through the fusion of gluons from the colliding protons into a top-quark loop, which emits either two Higgs bosons directly or a single Higgs boson that splits into two – the latter process providing the sensitivity to λ3.

HH production happens only about once for every thousand Higgs bosons produced at the LHC. Searches for this process are already underway, with analyses of the Run 2 dataset by the ATLAS and CMS collaborations showing that a signal 2.5 times larger than the Standard Model expectation is already excluded. This progress far exceeds early expectations, suggesting that the HL-LHC may finally bring λ3 within experimental reach, clarifying the shape of the Higgs potential near its current minimum (see “Constraining the Higgs potential” figure).

Measuring λ3 at the HL-LHC would shed light on whether the Higgs potential follows the Standard Model prediction (black line) or alternative shapes (dashed lines), which may arise from physics beyond the Standard Model (BSM). The corresponding sensitivity can be illustrated through two complementary approaches: one based on HH production, assuming no effects beyond λ3 and providing a largely model-independent view near the potential’s minimum (red bands); and an approach that incorporates higher-order effects, which extend the reach over a broader range of the Higgs field (blue bands).

Since the previous update of the European Strategy for Particle Physics, the projected sensitivity has vastly improved. The combined ATLAS and CMS results are now expected to yield a discovery significance exceeding 7σ, should HH production occur at the Standard Model rate. By the end of the HL-LHC programme, the two experiments are expected to determine λ3 with a 1σ uncertainty of about 30% – enough to exclude the considered BSM potentials at the 95% confidence level if the self-coupling matches the Standard Model prediction.

What lurks beyond the Standard Model?

Puzzles such as the origin of dark matter and the nature of neutrino masses suggest that new physics must lie beyond the Standard Model. With greatly expanded data sets at the HL-LHC, new phenomena may become detectable as resonant peaks from undiscovered particles or deviations in precision observables.

Spotting a new scalar

As an example, consider a BSM scenario that includes an additional scalar boson “S” that mixes with the Higgs boson but remains blind to other Standard Model fields (see “Spotting a new scalar” figure). S could induce observable differences in λ3 (horizontal axis) and the coupling of the Higgs boson to the Z boson, gHZZ (vertical axis). Both couplings are plotted as multiples of their expected Standard Model values. The figure explores scenarios where the gHZZ coupling deviates from its Standard Model value by as little as a tenth of a permille, and where the trilinear self-coupling may be between 0.5 and 2.5 times its Standard Model value. Such models could prove to be the underlying cause of deviations from the Standard Model, for example by contributing to the matter–antimatter asymmetry in the universe. Combinations of model parameters that could allow for a strong FOPT in the early universe are plotted as black dots.

This example analysis serves to illustrate the complementarity of precision measurements and direct searches at the HL-LHC. The parameter space can be narrowed by measuring the axis variables λ3 and gHZZ (blue and orange bands). Direct searches for S → HH and S → ZZ will be able to probe or exclude many of the remaining models (red and purple regions), leaving room for scenarios in which new physics is almost entirely decoupled from the Standard Model.

What’s next?

What once might have seemed like science fiction has become a milestone in our understanding of nature. When Ursula von der Leyen, president of the European Commission, last visited CERN, she reflected on recent progress in the field.

“When you designed a 27 km underground tunnel where particles would clash at almost the speed of light, many thought you were daydreaming. And when you started looking for the Higgs boson, the chances of success seemed incredibly low, but you always proved the sceptics wrong. Your story is one of progress against all odds.”

Today, at a pivotal moment for particle physics, we are redefining what we believe is possible. Plucked from the ATLAS and CMS collaborations’ inputs to the 2026 update to the European Strategy for Particle Physics (CERN Courier November/December 2025 p23), the analyses described in this article are just a snapshot of what will be possible at the HL-LHC. In close collaboration with the theory community, experimentalists will use the unmatched datasets and detector capabilities of the HL-LHC to allow the field to explore a rich landscape of anticipated phenomena, including many signatures yet to be imagined.

The future starts now, and it is for us to build.

A leap in technology

Tracking upgrades

The HL-LHC will deliver proton–proton collisions at an instantaneous luminosity at least five times higher than the LHC’s original design value. By the end of its lifetime, the HL-LHC is expected to accumulate an integrated dataset of around 3 ab⁻¹ of proton–proton collisions – about six times the data collected during the LHC era.

ATLAS and CMS are undergoing extensive upgrades to cope with the intense environment created by a “pileup” of up to 200 simultaneous proton–proton interactions per bunch crossing. For this, researchers are building ever more precise particle detectors and developing faster, more intelligent software.

The ATLAS and CMS collaborations will implement a full upgrade of their tracking systems, providing extended detector coverage and improved spatial resolution (see “Tracking upgrades” figure). New capabilities are being added to either or both experiments, such as precision timing layers outside the tracker, a more performant high-granularity forward calorimeter, new muon detectors designed to handle the increased particle flux, and modernised front- and back-end electronics across the calorimeter and muon systems, among other improvements.

Major advances are also being made in data readout, particle reconstruction and event selection. These include track reconstruction capabilities in the trigger and a significantly increased latency, allowing for more advanced decisions about which collisions to keep for offline analysis. Novel selection techniques are also emerging to handle very high event rates with minimal event content, along with AI-assisted methods for identifying anomalous events already in the first stages of the trigger chain.

Finally, detector advancements go hand-in-hand with innovation in algorithms. The reconstruction of physics objects is being revolutionised by higher detector granularity, precise timing, and the integration of machine learning and hardware accelerators such as modern GPUs. These developments will significantly enhance the identification of charged-particle tracks, interaction vertices, b-quark-initiated jets, tau leptons and other signatures – far surpassing the capabilities foreseen when the HL-LHC was first conceived.

Introducing the axion

In pursuit of the QCD axion

There is an overwhelming amount of evidence for the existence of dark matter in our universe. This type of matter is approximately five times more abundant than the matter that makes up everything we observe: ourselves, the Earth, the Milky Way, all galaxies, neutron stars, black holes and any other imaginable structure.

We call it dark because it has not yet been probed through electroweak or strong interactions. We know it exists because it experiences and exerts gravity. That gravity may be the only bridge between dark matter and our own “baryonic” matter is a scenario as plausible as it is intimidating, since gravitational interactions are too weak to produce detectable signals in laboratory-scale experiments, all of which are made of baryonic matter.

However, dark matter may interact with ordinary matter through non-gravitational forces as well, possibly mediated by new particles. Our optimism is rooted in the need for new physics. We also require new mechanisms to generate neutrino masses and the matter–antimatter asymmetry of the universe, and these new mechanisms may be intimately connected to the physics of dark matter. This view is reinforced by a surprising coincidence: the abundances of baryonic and dark matter are of the same order of magnitude, a fact that is difficult to explain without invoking a non-gravitational connection between the two sectors.

It may be that we have not yet detected dark matter simply because we are not looking in the right place. Like good sailors, the first question we ask is how far the boundaries of the territory to be explored extend. Cosmological and astrophysical observations allow dark-matter masses ranging from ultralight values of order 10⁻²² eV up to masses of the order of thousands of solar masses. The lower bound arises from the requirement that the dark-matter de Broglie wavelength not exceed the size of the smallest gravitationally bound structures, dwarf galaxies, such that quantum pressure does not suppress their formation (see “Leo P” image). The upper limit can be understood from the requirement that dark matter behave as a smooth, effectively collisionless medium on the scales of these small astrophysical structures. This leaves us with a range of possibilities spanning about 90 orders of magnitude – a truly overwhelming landscape. Given that our resources, and our own lifetimes, are finite, we guide our expedition by both theoretical motivation and the capabilities of our experiments to explore this vast territory.

The canonical dark-matter candidate where theoretical motivation and experimental capability coincide is the weakly interacting massive particle. “WIMPs” are among the most theoretically economical dark-matter candidates, as they naturally arise in theories with new physics at the electroweak scale and can achieve the observed relic abundance through weak-scale interactions. The latter requirement implies that the mass of thermal WIMPs must lie above the GeV scale – approximately a nucleon mass. This “Lee–Weinberg” bound arises because lighter particles would not have annihilated fast enough in the early universe, leaving behind far more dark matter than we observe today.

WIMPs can be probed using a wide range of experimental strategies. At high-energy colliders, searches rely on missing transverse energy, providing sensitivity to the production of dark-matter particles or to the mediators that connect the dark and visible sectors. Beam dump and fixed-target experiments offer complementary sensitivity to light mediators and portal states. Direct-detection experiments measure nuclear recoils of heavy and stable targets, such as noble liquids like xenon or argon, which are sensitive to energy depositions at the keV scale, allowing us to probe dark-matter masses in the light end of the typical WIMP range with extraordinary sensitivity.

Light dark matter

So far, no conclusive signal has been observed, and the simplest realisations of the WIMP paradigm are becoming increasingly constrained. However, dark matter could be connected to the Standard Model in alternative ways, for example through new force carriers, allowing its mass to fall below the Lee–Weinberg bound. This sub-GeV dark matter, also referred to as light dark matter, appears in highly motivated theoretical frameworks such as asymmetric dark matter, in which an asymmetry between dark-matter particles and antiparticles sets the relic abundance, analogously to the baryon asymmetry that determines the visible matter abundance. In some of the best motivated realisations of this scenario, the dark-matter candidate resides in a confining “hidden sector” (see, for example, “Soft clouds probe dark QCD”). A dark-baryon symmetry may guarantee the stability of such composite dark-matter states, with the baryonic and dark asymmetries being generated by related mechanisms.

Leo P

Dark matter could be even lighter and behave as a wave. This occurs when its mass is below the eV to 10 eV scale – comparable to the ionisation energy of hydrogen. In this case, its de Broglie wavelength exceeds the typical separation between particles, allowing it to be described as a coherent, classical field. In the ultralight dark-matter regime, the leading candidate is the axion. This particle is a prediction of theories beyond the Standard Model that provide a solution to the strong charge–parity (CP) problem.

In the Standard Model, there is no fundamental reason for CP to be conserved by strong interactions. In fact, two terms in the Lagrangian, of very different origin, contribute to an effective CP-violating angle, which would generically induce an electric dipole moment of hadrons, corresponding phenomenologically to a misalignment of their electromagnetic charge distributions. But remarkably – and this is at the heart of the puzzle – high-precision experiments measuring the neutron electric dipole moment show that this angle cannot be larger than 10⁻¹⁰ radians.

Why is this? To quote Murray Gell-Mann, what is not forbidden tends to occur. This unnaturally precise alignment in the strong sector strongly suggests the presence of a symmetry that forces this angle to vanish.

One of the most elegant and widely studied solutions, proposed by Roberto Peccei and Helen Quinn, consists of extending the Standard Model with a new global symmetry that appears at very high energies and is later broken as the universe cools. Whenever such a symmetry breaks, the theory predicts the appearance of one or more new, extremely light particles. If the symmetry is not perfect, but is slightly disturbed by other effects, this particle is no longer exactly massless and instead acquires a small mass controlled by the symmetry-breaking effects. A familiar example comes from ordinary nuclear physics: pions are light particles because the symmetry that would make them massless is slightly broken by the tiny masses of their constituent quarks.

In this framework, the new light particle is called the axion, independently proposed by Steven Weinberg and Frank Wilczek. The axion has remarkable properties: it naturally drives the unwanted CP-violating angle to zero, and its interactions with ordinary matter are not arbitrary but tightly controlled by the same underlying physics that gives it its tiny mass. Strong-interaction effects predict a narrow, well-defined “target band” relating how heavy the axion is to how strongly it interacts with matter, providing a clear roadmap for current experimental searches (the yellow band in the “In pursuit of the QCD axion” figure).

An excellent candidate

Axions also emerge as excellent dark-matter candidates. They can account for the observed cosmic dark matter through a purely dynamical mechanism in which the axion field begins to oscillate around the minimum of its potential in the early universe, and the resulting oscillations redshift as non-relativistic dark matter. Inflation is a little-understood rapid expansion of the early universe by more than 26 orders of magnitude in scale factor that cosmologists invoke to explain large-scale correlations in the cosmic microwave background and cosmic structure. If the Peccei–Quinn symmetry was broken after inflation, the axion field would take random initial values in different regions of space, leading to domains with uncorrelated phases and the formation of cosmic strings. Averaging over these regions removes the freedom to tune the initial angle and makes the axion relic density highly predictive. When the additional axions from cosmic strings and domain walls are included, this scenario points to a well-defined axion mass in the tens to a few hundreds of μeV range.

Cavity haloscope

There is now a wide array of ingenious experiments, the result of the work of large international collaborations and decades of technological development, that aim to probe the QCD-axion band in parameter space. Despite the many experimental proposals, so far only ADMX, CAPP and HAYSTAC have reached sensitivities close to this target (see “Cavity haloscope” image). These experiments, known as haloscopes, operate under the assumption that axions constitute the dark matter in our universe. In these setups, a high-quality-factor electromagnetic cavity is placed inside a strong magnetic field in which axions from the dark-matter halo of the Milky Way are expected to convert into photons. The resonant frequency of the cavity is tuned like a radio, scanning across axion masses. This technique allows experiments to probe couplings many orders of magnitude weaker than typical Standard Model interactions. However, scaling these resonant experiments to significantly different axion masses is challenging, as a cavity’s resonant frequency is tied to its size. Moving away from its optimal axion-mass range either forces the cavity volume to become very small, reducing the signal power, or requires geometries that are difficult to realise in a laboratory environment.

Other experimental approaches, such as helioscopes, focus on searching for axions produced in the Sun. These experiments mainly probe the higher-mass region of the QCD-axion band and also place strong constraints on axion-like particles (ALPs). ALPs are also light fields that arise from the breaking of an almost exact global symmetry, but unlike the QCD axion, the symmetry is not explicitly broken by strong-interaction effects, so their masses and couplings are not rigidly related. While such particles do not solve the strong CP problem, they can be viable dark-matter candidates that naturally arise in many extensions of the Standard Model, especially in theories with additional global symmetries and in quantum-gravity frameworks.

Among the proposed experimental efforts to observe post-inflation QCD axions, two stand out as especially promising: MADMAX and ALPHA. Both are haloscopes, designed to detect QCD axions in the galactic dark-matter halo. Neither is traditional. Each uses a novel detector concept to target higher axion masses – a regime that is especially well motivated if the Peccei–Quinn symmetry is broken after inflation (see “In pursuit of the post-inflation axion”).

We are living in an exciting era for dark-matter research. Experimental efforts continue and remain highly promising. A large and well-motivated region of parameter space is likely to become accessible in the near future, and upcoming experiments are projected to probe a significant fraction of the QCD axion parameter space over the coming decades. Clear communication, creativity, open-mindedness in exploring new ideas, and strong coordination and sharing of expertise across different physics communities will be more important than ever.

In pursuit of the post-inflation axion

High-mass haloscope

One hundred µeV. 25 GHz. 10 m. This is the mass, frequency and de Broglie wavelength of a typical post-inflation axion. Though well motivated as a potential explanation for both the nature of dark matter and the absence of CP violation in the strong interaction, such axions subvert the “particle gas” picture of dark matter familiar to many high-energy physicists, and pose distinct challenges for experimentalists.

Axions could occupy countless orders of magnitude in mass, but those that result from symmetry breaking after cosmic inflation are a particularly interesting target, as their mass is predicted to lie within a narrow window of just one or two orders of magnitude, up to and around 100 µeV (see “Introducing the axion”). Assuming a mass of 100 µeV and a local dark-matter density of 0.4 GeV/cm³ in the Milky Way’s dark-matter halo, a back-of-the-envelope calculation indicates that every cubic de Broglie wavelength should contain more than 10²¹ axions. Such a high occupation number means that axion dark matter would act like a classical field. Moving through the Earth at several hundreds of kilometres per second, the Milky Way’s axion halo would be nonrelativistic and phase coherent over domains metres in width and tens of microseconds in duration.
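That back-of-the-envelope estimate is easy to reproduce; the sketch below assumes a typical halo velocity of about 300 km/s (v/c ≈ 10⁻³):

```python
import math

# How many 100 ueV axions sit inside one cubic de Broglie wavelength?
hbar_c = 197.3e-9     # hbar * c in eV * m
m_a = 1e-4            # axion mass in eV (100 ueV)
rho = 0.4e9 * 1e6     # 0.4 GeV/cm^3 converted to eV/m^3
v_over_c = 1e-3       # assumed halo velocity, ~300 km/s

n = rho / m_a                                    # number density, per m^3
lam = 2 * math.pi * hbar_c / (m_a * v_over_c)    # de Broglie wavelength, ~12 m
print(f"lambda ~ {lam:.0f} m, axions per lambda^3 ~ {n * lam**3:.1e}")
```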

Axion haloscopes seek to detect this halo via faint electric-field oscillations. The same couplings that should allow axions to decay to pairs of photons on timescales many orders of magnitude longer than the age of the universe should allow them to “mix” with photons in a strong magnetic field. The magnetic field provides a virtual photon, and the axion oscillates into a real photon. For several decades, the primary detection strategy has been to look for this resonant conversion as an RF signal in a microwave cavity permeated by a magnetic field. The experiment is like a car radio. The cavity is tuned very slowly. At the frequency corresponding to the cosmic axion’s mass, a faint signal would be amplified.

The ADMX, CAPP and HAYSTAC experiments have led the search below 25 μeV. These searches are dauntingly difficult, requiring the whole experiment to be cooled down to around 100 mK. Quantum amplifiers must be able to read out signals as weak as 10⁻²⁴ W. The current generation of experiments can tune over about 10% of the resonant frequency, remaining stable at each small frequency step for 15 minutes before moving on to the next frequency. The steps are determined by the expected lineshape of the axion signal. Axion velocities in the Milky Way’s dark-matter halo should follow a thermal distribution set by the galaxy’s gravitational potential. This produces a spread of kinetic energies that broadens the corresponding photon frequency spectrum into a boosted-Maxwellian shape with a width about 10⁻⁶ of the frequency. For a mass around 100 μeV, the expected width is about 25 kHz.
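The quoted linewidth follows directly from the halo kinematics: virial velocities of order 10⁻³ c give kinetic energies, and hence fractional frequency spreads, of order v²/c² ≈ 10⁻⁶. A minimal check:

```python
# Central frequency from E = h * f, then apply the ~1e-6 fractional width.
m_a_eV = 1e-4                 # 100 ueV axion
h = 4.1357e-15                # Planck constant in eV * s
f0 = m_a_eV / h               # ~24 GHz central frequency
df = f0 * 1e-6                # ~24 kHz boosted-Maxwellian width
print(f"f0 ~ {f0 / 1e9:.1f} GHz, linewidth ~ {df / 1e3:.0f} kHz")
```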

The trouble is that the resonance frequency of a cavity is set by its diameter: the larger the cavity, the smaller the accessible frequency. Because the signal power scales with the cavity volume, it is increasingly difficult to achieve a good sensitivity at higher masses. For a 100 µeV axion, which oscillates into a 25 GHz photon, the cavity would have to be of order only a centimetre wide.
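A rough estimate for the lowest (TM010) mode of a cylindrical cavity, a standard haloscope geometry, makes the scaling concrete:

```python
import math

# TM010 mode of a cylindrical cavity: f = x01 * c / (2 * pi * R), where
# x01 ~ 2.405 is the first zero of the Bessel function J0.  The required
# radius therefore shrinks as 1/f.
c = 2.998e8    # speed of light, m/s
x01 = 2.405    # first zero of J0
f = 25e9       # target frequency for a 100 ueV axion, Hz

R = x01 * c / (2 * math.pi * f)
print(f"cavity radius ~ {R * 100:.2f} cm, diameter ~ {2 * R * 100:.1f} cm")
```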

Probing this parameter space calls for novel detector concepts that decouple the mass of the axion from the volume where axions convert into radio photons. This realisation has motivated a new generation of haloscopes built around electromagnetic structures that no longer rely on the resonant frequency of a closed cavity, but instead engineer large effective volumes matched to high axion masses.

Two complementary approaches – dielectric haloscopes and plasma haloscopes – exploit this idea in different ways. Each offers the possibility of discovering a post-inflation axion in the coming decade.

The MADMAX dielectric haloscope

A MADMAX prototype

Thanks to their electromagnetic coupling, a galactic halo of axions would drive a spatially uniform electric field oscillation parallel to an external magnetic field. For 100 µeV axions, it would oscillate at about 25 GHz. In such a field, a dielectric disc will emit photons perpendicular to its surfaces due to an electromagnetic boundary effect: the discontinuity in permittivity forces the axion-induced field to readjust, producing outgoing microwaves.

The MAgnetized Disc And Mirror Axion eXperiment (MADMAX) collaboration seeks to boost this signal through constructive interference. The trick is multiple discs, with tuneable spacing and a mirror to reflect the photons. As the axion halo would be a classical field, each disc should continuously emit radiation in both directions. For multiple dielectric discs, coherent radiation from all disc surfaces leads to constructive interference when the distance between the discs is about half the electromagnetic wavelength, potentially boosting axion-to-photon conversion in a broad frequency range. The experiment can be tuned for a given axion mass by controlling the spacing between the discs with micron-level precision. Arbitrarily many discs can be incorporated, thereby decoupling the volume where axions can convert into photons from the axion’s mass.
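The required spacing follows from the photon wavelength. For the 100 µeV benchmark used above:

```python
# Half-wavelength disc spacing for a ~25 GHz axion-induced signal.
c = 2.998e8          # speed of light, m/s
f = 25e9             # photon frequency for a 100 ueV axion, Hz
wavelength = c / f   # ~1.2 cm
print(f"disc spacing ~ {wavelength / 2 * 1e3:.0f} mm")   # ~6 mm
```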

The MADMAX collaboration has developed two indirect techniques to measure the “boost factor” of its dielectric haloscopes. In the first method, scanning a bead along the volume maps the three-dimensional induced electric field, from which the boost factor is then computed as the integral of the electric field over the sensitive volume. This method yielded 15% uncertainty for a prototype booster with a mirror and three 30 cm-diameter sapphire discs (see “A work in progress” figure). By studying the response of the prototype in the absence of an external magnetic field, the collaboration set the world’s best limits on dark-photon dark matter in the mass range from 78.62 to 83.95 μeV.

The boost factor can alternatively be obtained by modelling the booster’s response using physical properties extracted from reflectivity measurements and the behaviour of the power spectrum in the given frequency range. This method was applied to MADMAX prototypes inside the world’s largest warm-bore superconducting dipole magnet. Named after the Italian physicist who designed it in the 1970s, the Morpurgo magnet is normally used to test subdetectors of the ATLAS experiment using beams from CERN’s North Area. Since MADMAX requires no beam, a first axion search using the magnet’s warm-bore aperture took place during the 2024 winter shutdown of the LHC. The prototype booster included a 20 cm-diameter mirror and three sapphire discs separated by aluminium rings. Frequencies around 19 GHz were explored by adjusting the mirror position. No significant excess consistent with an axion signal was observed. Despite coming from a small prototype, these results surpass astrophysical bounds and constraints from the CERN Axion Solar Telescope (CAST), demonstrating the detection power of dielectric haloscopes.

As a next step, a prototype booster with a mirror and up to twenty 30 cm-diameter discs is expected to deliver a factor 10 to 100 improvement over the 2024 tests. The positions of its discs will be adjusted inside its stainless-steel cryostat using cryogenic piezo motors. The setup is currently being commissioned and is set for installation in the Morpurgo magnet during the third long shutdown of the LHC from mid-2026 to 2029. An important goal is to prove the broad-band scanning capacity of dielectric haloscopes at cryogenic temperatures and conditions close to those of the final MADMAX design. Operating at 4 K will enhance MADMAX’s sensitivity by reducing noise from thermal radiation. A prototype has already been successfully tested inside a custom-made glass fibre cryostat in the Morpurgo magnet in cooperation with CERN’s cryogenic laboratory.

The final baseline detector foresees a 9 T superconducting dipole magnet with a warm bore of about 1.3 m. A first design has been developed and important aspects of its technological feasibility have already been tested, such as quench protection and conductor performance. As a first step, an intermediate 4 T warm-bore magnet is being purchased. It should be available around 2030. Once constructed, the magnet will be installed at DESY’s axion platform inside the former HERA H1 iron yoke, where preparations for the required cryogenic infrastructure are underway.

With MADMAX’s prototype booster scaling towards its final size, and quantum detection techniques such as travelling-wave parametric amplifiers and single-photon detectors being developed, significant improvements in sensitivity are on the horizon for dielectric haloscopes. MADMAX is on a promising path to probing axion dark matter in the 40 to 400 µeV mass range at sensitivities sufficient to discover axion dark matter at the classic Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) and Kim–Shifman–Vainshtein–Zakharov (KSVZ) theory benchmarks.

The ALPHA plasma haloscope

Plasma tuning

In a plasma, photons acquire an effective mass determined by the plasma frequency, which depends on the density of charge carriers. If the plasma frequency is close to the axion’s Compton frequency, axion–photon mixing is resonantly enhanced. As the plasma could in principle be of any volume, the volume in which the axion field converts into photons is decoupled from the axion mass – but the plasma frequency of a real plasma cannot be tuned in practice, preventing a detector based on this effect from scanning a wide range of masses.

In 2019, Matthew Lawson, Alexander Millar, Matteo Pancaldi, Edoardo Vitagliano and Frank Wilczek proposed performing this experiment using a metamaterial plasma with a tunable electromagnetic dispersion which mimics that of a real plasma. In a plasma haloscope, this metamaterial is a lattice of thin metallic wires embedded in vacuum. By adjusting the wire spacing, the diameter of the wires and their arrangement, the resonant plasma frequency can be tuned over a wide range.
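A commonly quoted thin-wire estimate, due to Pendry and collaborators, shows how the wire spacing rather than the overall resonator size sets the plasma frequency; the spacing and wire radius below are illustrative assumptions, not ALPHA design values:

```python
import math

# Thin-wire metamaterial plasma frequency (Pendry et al. estimate):
#   omega_p^2 ~ 2 * pi * c^2 / (a^2 * ln(a / r)),
# with lattice spacing a and wire radius r.
c = 2.998e8    # speed of light, m/s
a = 6e-3       # wire spacing, 6 mm (illustrative)
r = 0.1e-3     # wire radius, 0.1 mm (illustrative)

omega_p = math.sqrt(2 * math.pi * c**2 / (a**2 * math.log(a / r)))
print(f"plasma frequency ~ {omega_p / (2 * math.pi) / 1e9:.0f} GHz")  # ~10 GHz
```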

The ALPHA collaboration was formed in 2021 to build a full-scale plasma haloscope capable of probing axion masses from 40 to 400 μeV, corresponding to axion frequencies from 10 to 100 GHz. While challenges related to detecting an extremely feeble signal remain, the simplicity of the cavity design, particularly in the magnet geometry and the tuning mechanism, offers flexibility.

ALPHA’s design can be pictured as a large-bore superconducting solenoid magnet, and a resonator housing an array of thin copper or superconducting wires stretched along the field direction. Photons are extracted through waveguides and fed into an ultra-low-noise microwave receiver chain, cooled by a dilution refrigerator to below 100 mK, employing quantum-sensing techniques developed in close collaboration with the HAYSTAC collaboration. Photons are amplified with Josephson parametric amplifiers – the same technique used to read out the qubits in quantum computers, and the topic of the 2025 Nobel Prize in Physics awarded to John Clarke, Michel Devoret and John Martinis. Tests at room temperature in 2022 and 2023 demonstrated that the response of the meta-plasma can be tuned across the 10 to 20 GHz range with a modest number of configuration changes, and that the quality factors exceed 10⁴ even before cooling down to cryogenic temperatures.

Two designs are being pursued for a tuning mechanism that allows precise adjustment of the plasma frequency with minimal mechanical intervention: a spiral design where a single rotating rod tunes a set of three spiral arms relative to another set of fixed spiral arms (see “Plasma tuning” figure); and a design with multiple spinners rotating groups of wires relative to a fixed grid of wires.

ALPHA’s development plan proceeds in two main stages. Phase I is currently being constructed at Yale University’s Wright Laboratory, and focuses on employing established technology to demonstrate the technique and search for axions with masses from 40 to 80 μeV. Phase I’s cavity, consisting of copper plasma resonators, will be immersed in a 9 T magnet, 17.5 cm in diameter and 50 cm tall. The expected conversion power in ALPHA’s frequency range is of order 10⁻²⁴ W – comparable to the thermal noise in a 50 Ω resistor cooled to 50 mK. The read-out chain therefore employs Josephson parametric amplifiers whose noise temperatures approach the standard quantum limit. The system is designed to scan continuously while maintaining sensitivity close to the KSVZ axion–photon coupling, a benchmark for well-motivated axion models. The data-acquisition strategy builds on techniques developed in ADMX and HAYSTAC: fast Fourier transforms of the time-stream, coherent stacking across overlapping frequency bins and real-time evaluation of excess-power statistics.
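To put that number in context, the thermal noise power available from a matched load scales as k_B T per unit bandwidth, so at 50 mK even a ~1 Hz analysis bin carries noise comparable to the expected signal:

```python
# Johnson noise from a matched resistor: available power = k_B * T * B.
k_B = 1.381e-23    # Boltzmann constant, J/K
T = 0.050          # 50 mK
print(f"kT ~ {k_B * T:.1e} W per Hz of bandwidth")   # ~6.9e-25 W/Hz
```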

Several improvements are being developed in parallel for Phase II. Quantum sensing techniques have the potential to boost the signal while reducing noise. Such techniques include HAYSTAC-style noise squeezing, using cavity entanglement and state swapping to enhance the signal, and single-photon detection. Dramatically increasing the quality factor of superconducting plasma resonators will also significantly boost the signal. Last but not least, magnets with a larger bore and higher field, such as the ones being deployed at the neutron scattering facilities at Oak Ridge National Laboratory, are expected to expand the experimental reach up to 200 μeV and push the sensitivity to below the axion–photon coupling of the DFSZ model, another classic theoretical benchmark.

Beginning in 2026, ALPHA Phase I will start taking its first physics data, initially searching for dark photons – a dark-matter candidate that interacts with plasma without requiring the presence of a magnetic field. After commissioning ALPHA’s magnet, a full axion search will commence during 2027 and 2028.

It is an exciting time for axion searches. New experiments are coming online, implementing new ideas to expand the accessible mass ranges. Groups in Italy, Japan and Korea are exploring alternative metamaterial geometries, including superconducting wire meshes and photonic crystals that replicate plasma behaviour at higher frequencies. European teams linked to the IAXO collaboration are considering hybrid systems that couple plasma-like resonators to strong dipole magnets. ALPHA will search for axions in the well-motivated region, first focusing between 40 and 80 μeV, and then between 80 and 200 μeV.

Intense efforts are underway. Discoveries may be just around the corner.

Chen-Ning Yang 1922–2025

Chen-Ning Yang

Chen-Ning Yang, a towering figure in science whose numerous insights shaped contemporary theoretical physics, passed away in Beijing on 18 October 2025 at the age of 103. Yang was one of the greatest physicists of the 20th century, whose profound contributions, often based on principles of symmetry, are central to our contemporary understanding of nature.

Yang was born in 1922 in China’s Anhui province, moving as a child to Tsinghua University in Beijing when his father was appointed professor of mathematics. Displaced by war, in 1938 he enrolled at the National Southwest Associated University in Kunming, where he earned his Master of Science in 1944, not fully removed from ongoing hostilities in the Second Sino–Japanese War. Yang wrote that his taste in physics was already formed from his education in Kunming.

He was awarded a fellowship for further graduate study in the US and enrolled in 1945 at the University of Chicago. He studied with Enrico Fermi and wrote his thesis on applications of group theory to nuclear physics in 1948 with Edward Teller as his advisor. In 1949, Yang joined the Institute for Advanced Study in Princeton, New Jersey, where he emerged as one of the world’s leading scientists. He wrote that he would probably have taken Fermi’s advice and returned to Chicago, but remained in Princeton to be nearer to Chih Li Tu, whom he married in 1951.

Landmark papers

His years in Princeton were extraordinarily productive, with many landmark papers in particle physics, including a famous analysis of particle decays into two photons, and in statistical mechanics, including the celebrated Lee–Yang circle theorem for the Ising model. Most significantly of all, Yang developed non-abelian gauge theories with Robert Mills in 1954. These have the property that once the gauge groups are identified, new gauge particles and their interactions are determined. Over the subsequent 30 years, a combination of theoretical advances and experimental discoveries identified the gauge particles of our world, establishing Yang–Mills theories as a cornerstone of modern physics, alongside Maxwell’s equations and Einstein’s theory of general relativity. A spontaneously broken Yang–Mills theory, incorporating the Higgs boson, and combined with a Maxwell field, describes the electromagnetic and weak interactions, while a fully unbroken theory, quantum chromodynamics, describes the strong interactions. None of this could have been foreseen in 1954, but as Yang later wrote, “we thought it was beautiful and should be published”.

Yang’s collaboration with Tsung-Dao Lee in 1956 on the groundbreaking possibility of parity non-conservation in weak interactions earned them the 1957 Nobel Prize in Physics, making them the first Nobel laureates of Chinese origin. The confirmation of parity non-conservation in the experiments of Chien-Shiung Wu and other groups led to further work, with Lee and Rudolf Oehme, on the possibility of charge-conjugation and time-reversal non-invariance, which were subsequently observed and are now recognised as relevant to the predominance of matter over antimatter in the universe. Around the time of the Nobel Prize, Yang, now famous, reunited at CERN with his father, who travelled from China. It was their first time together since Yang had left for his doctoral studies in Chicago.

In 1966, Yang accepted the position of Albert Einstein Professor at the new State University of New York at Stony Brook, to which he relocated with his family. In the same year, the Institute for Theoretical Physics, now the C.N. Yang Institute for Theoretical Physics, was founded, and he led it until his retirement from Stony Brook in 1999. At Stony Brook, he continued work in particle physics and broke new ground in the quantum structure of integrable models and the geometry of gauge field theories. He also profoundly shaped statistical physics: in 1967 he discovered the Yang–Baxter equation, a pivotal relation for one-dimensional quantum many-body problems that opened new directions for research in statistical physics, integrable models, quantum groups and related fields of physics and mathematics.

Building bridges

In 1971, his visit to China sparked a wave of visits there by other well-known scholars, earning him recognition as a pioneer in building bridges of academic exchange between China and the US. As a prominent public figure, he went on to support the restoration and strengthening of basic scientific research in China. He also helped inspire a renaissance of fruitful interplay between physics and mathematics, through his work on the geometry of gauge fields, relating gauge theories to the mathematical concept of fibre bundles, a realisation that grew out of conversations in the 1970s with the mathematician James Simons.

Starting in 1997, he served as honorary director of the newly established Center for Advanced Study at Tsinghua University, now its Institute for Advanced Study, and became a professor at Tsinghua in 1999. In 2003, he returned as a widower to his childhood home, the campus of Tsinghua University, also spending time at the Chinese University of Hong Kong. In his words, his “life can be said to form a circle”, one that came to include a second marriage, to Fan Weng. He took on the development of the Institute for Advanced Study as his new mission, pouring immense effort into advancing fundamental disciplines and cultivating talent at Tsinghua, and making contributions that greatly influenced the reform and development of Chinese higher education.

Yang was elected member or foreign member of more than 10 national and regional academies of sciences, received honorary doctorates from more than 20 prestigious universities worldwide, and was honoured with numerous awards.

In his collected papers, Yang wrote that “taste and style are so important in scientific research, as they are in literature, art and music.” With his own taste having served as his guide, Chen-Ning Yang leaves an opus of exceptional creativity and breadth, providing tools that have enabled generations of physicists to make new discoveries of their own.

European Strategy Group recommends FCC-ee

The European Strategy Group (ESG) has finalised its recommendations for the 2026 update to the European Strategy for Particle Physics. As required by the CERN Council, the recommendations include a preferred option for the next large-scale collider at CERN and a prioritised alternative option to be pursued if the preferred plan turns out not to be feasible or competitive.

“The electron–positron Future Circular Collider (FCC-ee) is recommended as the preferred option for the next flagship collider at CERN,” explains strategy secretary Karl Jakobs of the University of Freiburg. “A descoped FCC-ee is the preferred alternative option. Descoping scenarios include removing the top-quark run, constructing two rather than four interaction regions and experiments, and decreasing the RF-system power.”

The ESG drafted its recommendations in a dedicated meeting at Monte Verità in Ascona, Switzerland. From 1 to 5 December, 62 delegates from across the field built on community inputs and the work of the Physics Preparatory Group to elaborate a proposal for the update to the European Strategy for Particle Physics. The recommendations address a broad range of topics and goals related to research in high-energy physics in Europe and beyond (CERN Courier November/December 2025 p23).

Seven large-scale collider projects have been the subject of a comparative assessment: CLIC, FCC-ee, FCC-hh, LCF, LEP3, LHeC and a muon collider (see “Seven colliders for CERN”). Following community submissions to the strategy process in March 2025 and at the open symposium in Venice in June 2025, a consensus emerged that an electron–positron Higgs and electroweak factory is the optimal collider to follow the High-Luminosity LHC (HL-LHC), with FCC-ee the favoured machine of a strong majority of the community (CERN Courier September/October 2025 p24). The identification of a descoped FCC-ee as the preferred alternative option was a new development in Ascona.

“Descoping would reduce the construction cost of FCC-ee by approximately 15%,” says Jakobs. “Although this would have a significant impact on the breadth of the physics programme and the precision achieved, the descoped FCC-ee would still provide a very strong physics programme and a viable path towards high energies, compared to the alternative collider options. Should additional resources become available, these descoping scenarios would be reversible.”

“The other electron–positron collider options offer substantially reduced precision physics programmes and would not be competitive with a collider like the FCC-ee,” continues Jakobs. “Moreover, in themselves, they currently lack a viable path towards energies of 10 TeV.”

The FCC-ee would maintain European leadership in high-energy particle physics

In preparation for the Ascona meeting, working groups were set up to study national inputs, the physics and technology of the large-scale flagship collider projects, the implementation of the strategy, relations with other fields of physics, sustainability and environmental impact, public engagement, education and communication, as well as social and career aspects, and knowledge and technology transfer.

According to the ESG, the FCC-ee would deliver the world’s broadest high-precision particle-physics programme, with an outstanding discovery potential through the Higgs, electroweak, flavour and top-quark sectors, as well as advances in QCD. Its technical feasibility, scope and cost are defined by the FCC Feasibility Study (CERN Courier May/June 2025 p9). The FCC-ee would maintain European leadership in high-energy particle physics, says the ESG, as well as advancing technology and providing significant societal benefits.

“The FCC-ee or the descoped version would also pave the way towards a hadron collider reusing the tunnel and much of the infrastructure, providing direct discovery reach well beyond the 10 TeV parton energy scale, in line with the community’s ambition for exploration at the highest achievable energy,” concludes Jakobs. “The overwhelming endorsement of the FCC-ee by the particle-physics communities of CERN’s Member and Associate Member States further reinforces it as the preferred path.”

The recommendations of the ESG advise but do not constrain the CERN Council, which is expected to formally deliberate on the official update to the European Strategy for Particle Physics at a dedicated Council Session in Budapest in May 2026.

Two strikes for the light sterile neutrino

In the 1990s, the GALLEX and SAGE experiments studied solar electron neutrinos using large tanks of gallium. Every few days a neutrino would convert a neutron inside a gallium nucleus into a proton, and every few weeks the experimenters would count the resulting germanium atoms using radiochemical techniques. To control systematic uncertainties in these difficult experiments, they also exposed the detectors to well-understood radioactive sources of electron neutrinos. But both experiments detected 20% fewer electron neutrinos from these sources than expected.
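For reference, the radiochemical detection channel is the charged-current capture

\[
\nu_e + {}^{71}\mathrm{Ga} \;\to\; {}^{71}\mathrm{Ge} + e^-,
\]

with a threshold of about 233 keV, low enough to register the dominant low-energy solar neutrinos; the germanium atoms are extracted chemically and counted through their subsequent decays.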

Thus was born the gallium anomaly, which was carefully checked and confirmed by SAGE’s successor, the BEST experiment, as recently as 2022. The most tempting explanation is the existence of a new particle: a “sterile” neutrino flavour that experiences none of the Standard Model interactions. Neutrino oscillations would transform the missing 20% of electron neutrinos into undetectable sterile neutrinos. Such a particle would nevertheless have remained invisible to LEP’s famous measurement of the number of neutrino flavours, as it would not couple to the Z boson.
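In the minimal two-flavour sketch used in typical gallium-anomaly fits (an eV-scale mass splitting $\Delta m^2$ and mixing angle $\theta$ assumed for illustration), the electron-neutrino survival probability at baseline $L$ and energy $E$ is

\[
P(\nu_e \to \nu_e) = 1 - \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \;\longrightarrow\; 1 - \tfrac{1}{2}\sin^2 2\theta,
\]

where the arrow denotes averaging the fast oscillations over the source–detector geometry; a persistent 20% deficit then corresponds to $\sin^2 2\theta \approx 0.4$.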

Out the window

This interpretation has been in tension with neutrino-oscillation fits for some time, but a new measurement at the KATRIN experiment likely excludes a sterile-neutrino explanation of the gallium anomaly, says Patrick Huber (Virginia Tech). “There was a strong hint of that from solar neutrinos, but the KATRIN result really nails this window shut. That is not to say the gallium anomaly went away; the experimental evidence here is firm and stands at more than five sigma significance, even under the most conservative assumptions about nuclear cross sections and systematics. So this still requires an explanation, but due to KATRIN we now know for sure it can’t be a vanilla sterile neutrino.”

KATRIN’s main objective is to measure the mass of the electron neutrino (CERN Courier January/February 2020 p28). Though neutrino oscillations imply that the particle is massive, its mass has thus far proved to be below the sensitivity of experiments. The KATRIN experiment, based at the Karlsruhe Institute of Technology in Germany, seeks to remedy this with precise observations of the beta decay of tritium. The heavier the electron neutrino, the lower the maximum energy of the beta-decay electrons. Though KATRIN has not yet been able to uncover evidence for the tiny mass of the electron neutrino, the much larger mass of any sterile neutrino able to explain the gallium anomaly would have made itself felt in precise observations of the endpoint of the energy spectrum of beta-decay electrons thanks to mixing between the neutrino flavours.
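Schematically (a standard parametrisation rather than KATRIN’s full analysis model), an admixture $\sin^2\theta$ of a sterile state of mass $m_4$ splits the tritium spectrum into two branches:

\[
\frac{d\Gamma}{dE} = \cos^2\theta\,\frac{d\Gamma}{dE}\big(m_{\mathrm{light}}\big) + \sin^2\theta\,\frac{d\Gamma}{dE}\big(m_4\big),
\]

each branch vanishing at its own endpoint. The heavy branch switches off at an energy $m_4$ below the endpoint, producing the kink-like distortion described below.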

After the new KATRIN analysis, the best fit of the sterile neutrino from the gallium anomaly is excluded at 96.6% confidence

“A sterile neutrino would manifest itself as a model-independent kink-like distortion in the beta-decay spectrum, rather than as a deficit in the event rate,” explains lead analyst Thierry Lasserre of the Max-Planck-Institut für Kernphysik, in Heidelberg, Germany. “After the new KATRIN analysis, including 36 million electrons in the last 40 electron volts below the endpoint, the best fit of the sterile neutrino from the gallium anomaly is excluded at 96.6% confidence.”

Though heavy sterile neutrinos remain a well-motivated completion of the Standard Model of particle physics, with the potential to solve problems in cosmology, light sterile neutrinos struck out a second time in the same volume of Nature last month, thanks to a new measurement at the MicroBooNE experiment at Fermilab, near Chicago.

The MicroBooNE collaboration was following up on a persistent anomaly uncovered by their sister experiment, MiniBooNE, which was itself following up on the infamous LSND anomaly of 2001 (CERN Courier July/August 2020 p32). Both experiments had reported an excess of electron neutrinos in a beam of muon neutrinos generated using a particle accelerator. Here, the sterile-neutrino explanation would be more subtle: muon neutrinos would have to oscillate twice, once into sterile neutrinos and then into electron neutrinos. Using a bespoke liquid-argon time projection chamber, the MicroBooNE collaboration excludes the single-light-sterile-neutrino interpretation of the LSND and MiniBooNE anomalies at 95% confidence.

“The MicroBooNE result is just confirming what we knew from global fits for a long time,” clarifies Huber. “We cannot treat the appearance of electron neutrinos in a muon neutrino beam as a two-flavour problem if a sterile neutrino is involved – if we accept this simple fact of quantum mechanics then LSND and MiniBooNE’s excess of electron neutrinos cannot be due to mixing with a sterile neutrino since the corresponding disappearance of electron and muon neutrinos has not been observed.”
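The quantum mechanics Huber invokes can be made explicit in the 3+1 framework. In the short-baseline limit, with $\Delta_{41} = \Delta m^2_{41} L/4E$ (notation assumed for illustration), the appearance and disappearance probabilities are

\[
P(\nu_\mu \to \nu_e) \simeq 4\,|U_{e4}|^2 |U_{\mu 4}|^2 \sin^2\Delta_{41}, \qquad
P(\nu_\alpha \to \nu_\alpha) \simeq 1 - 4\,|U_{\alpha 4}|^2 \big(1 - |U_{\alpha 4}|^2\big) \sin^2\Delta_{41},
\]

so an appearance signal requires both $|U_{e4}|$ and $|U_{\mu 4}|$ to be non-zero, and therefore an accompanying disappearance of electron and muon neutrinos of the kind that has not been observed.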

One sterile-neutrino anomaly remains unmentioned: the reactor anomaly. But it has already evaporated into statistical insignificance thanks to new experiments and careful modelling of the flux of electron antineutrinos from nuclear reactors. The promise of experiments with reactor neutrinos is now exemplified by the rapid progress of the Jiangmen Underground Neutrino Observatory (JUNO) in China, which started data taking on 26 August last year (CERN Courier November/December 2025 p9).

Back to the standard paradigm

While the recent KATRIN and MicroBooNE analyses sought evidence for a hypothetical sterile neutrino beyond the standard scenario, JUNO operates within the standard three-flavour framework. Using just 59 days of data, the experiment independently exceeded the precision of previous global fits for two of the six parameters governing neutrino oscillations. These are the same mixing angle and mass splitting that govern the oscillations of solar electron neutrinos into other flavours – the very effect that GALLEX and SAGE were initially designed to study in the 1990s. As JUNO gathers data, it will resolve the fine-toothed comb that modulates this oscillation spectrum – the imprint of the smaller mass splitting between the three neutrinos. JUNO is designed to resolve these tiny oscillations, revealing a fundamental aspect of nature’s design: the hierarchy of the small and large mass splittings.
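The structure JUNO exploits is visible in the standard three-flavour survival probability for reactor antineutrinos, with $\Delta_{ij} = \Delta m^2_{ij} L/4E$:

\[
P(\bar\nu_e \to \bar\nu_e) = 1 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} - \sin^2 2\theta_{13}\big(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\big).
\]

The slowly varying $\theta_{12}$, $\Delta m^2_{21}$ term is the solar oscillation JUNO has already measured; the rapid $\Delta_{31}$ and $\Delta_{32}$ terms supply the fine comb, and their relative phase encodes the mass ordering.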

“The JUNO result is very exciting,” says Huber, “not so much because of its immediate impact, but because it marks the very successful start of an experiment that will deeply change neutrino physics.”

The JUNO result is exciting because it marks the successful start of an experiment that will deeply change neutrino physics

JUNO is the first of a trio of new-generation large-scale neutrino-oscillation experiments using controlled sources. Concluding a busy two-month period for neutrinos since the previous edition of CERN Courier was published, the launch of the nuSCOPE collaboration now dangles the promise of a valuable boost to the other two. One hundred physicists attended its kick-off workshop at CERN from 13 to 15 October 2025. The collaboration seeks to implement a concept first proposed 50 years ago by Bruno Pontecorvo: nuSCOPE will eliminate systematic uncertainties related to neutrino flux by measuring the energy and flavour of neutrinos both when they are created and when they interact with a target.

If approved, nuSCOPE will study neutrino–nucleus interactions with a level of accuracy comparable to that in electron–nucleus scattering, and control the sources of uncertainty projected to be dominant in the DUNE experiment under construction in the US and at the Hyper-Kamiokande experiment under construction in Japan. DUNE and Hyper-Kamiokande both plan to study the oscillations of accelerator-produced beams of muon neutrinos. Their most specialised design goal is to observe another fundamental aspect of physics: whether the weak interaction treats neutrinos and antineutrinos symmetrically.
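A convenient figure of merit for that question (a standard definition, not specific to either experiment) is the CP asymmetry

\[
\mathcal{A}_{CP} = \frac{P(\nu_\mu \to \nu_e) - P(\bar\nu_\mu \to \bar\nu_e)}{P(\nu_\mu \to \nu_e) + P(\bar\nu_\mu \to \bar\nu_e)},
\]

which, once matter effects are accounted for, is non-zero only if the CP-violating phase $\delta_{CP}$ of the three-flavour mixing matrix differs from 0 and $\pi$.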

With three ambitious and sharply divergent experimental concepts, DUNE, Hyper-Kamiokande and JUNO promise substantial progress in neutrino physics in the coming decade. But KATRIN and MicroBooNE now leave precious little merit for the once compelling phenomenology of the single light sterile neutrino.

Two strikes, and you’re out.

Private donors pledge support for FCC

For the first time in CERN’s history, private donors (individuals and philanthropic foundations) have agreed to support a CERN flagship research project. Recently, a group of friends of CERN, including the Breakthrough Prize Foundation, The Eric and Wendy Schmidt Fund for Strategic Innovation, and the entrepreneurs John Elkann and Xavier Niel, have pledged significant funds towards the construction of the Future Circular Collider (FCC), the potential successor of the Large Hadron Collider. These potential contributions, totalling some 860 million euros and corresponding to 1 billion US dollars, would represent a major private-sector investment in the advancement of research in fundamental physics.

“It’s the first time in history that private donors wish to partner with CERN to build an extraordinary research instrument that will allow humanity to take major steps forward in our understanding of fundamental physics and the universe. I am profoundly grateful to them for their generosity, vision and unwavering commitment to knowledge and exploration. Their support is essential to the prospective realisation of the FCC and to enabling future generations of scientists to push the frontiers of scientific discovery and technology,” said CERN Director-General Fabiola Gianotti.

Understanding the fundamental nature of our universe is the mission that unites humanity

“Understanding the fundamental nature of our universe is the mission that unites humanity,” said Pete Worden, chairman of the Breakthrough Prize Foundation. “We’re proud to support the creation of the most powerful scientific instrument in history, that can shed new light on the deepest questions humanity can ask.”

“The Future Circular Collider is an instrument that could push the boundaries of human knowledge and deepen our understanding of the fundamental laws of the universe,” said Eric Schmidt. “Beyond the science, the technologies emerging from this project could benefit society in profound ways, from medicine to computing to sustainable energy, while training a new generation of innovators and problem-solvers. Wendy and I are inspired by the ambition of this project and by what it could mean for the future of humanity.”

“CERN’s Member States are extremely grateful for the interest expressed by our donors in contributing to the funding of the Laboratory’s next flagship project. This once again demonstrates CERN’s relevance and positive impact on society, and the strong interest in CERN’s future that exists well beyond our own particle-physics community,” said the president of the CERN Council Costas Fountas.

The FCC has also been included among 11 proposed “Moonshot” projects in the draft Multiannual Financial Framework for the years 2028–2034, released by the European Commission in July.

Based on strong input from the international particle-physics community, the FCC has been recommended as the preferred option for the next flagship collider at CERN in the ongoing process to update the European Strategy for Particle Physics, which will be concluded by the CERN Council in May 2026 (see “European Strategy Group recommends FCC-ee”). A decision by the CERN Council on the construction of the FCC is expected around 2028.

First indirect evidence for primordial monsters

A monster star giving birth to a quasar

Cosmology has long predicted that the first generation of stars should differ strongly from those forming today. Born out of pristine gas of only hydrogen and helium, they could have reached masses between a thousand and ten thousand times that of the Sun, before collapsing after only a few million years. Such “primordial monsters” have been proposed as the seeds of the first quasars (see “Collapsing monster” image), but clear observations had until now been lacking.

An analysis of the galaxy GS 3073 using the James Webb Space Telescope (JWST) now carries an unexpectedly loud message from the first generation of stars: there is far too much nitrogen to be explained by known stellar populations. This mismatch suggests a different kind of stellar ancestor, one no longer present in our universe. It is the first indirect evidence for the long-sought primordial monsters, first proposed in the early 1960s by Fred Hoyle and William Fowler in the US, and independently by Yakov Zel’dovich and Igor Novikov in the Soviet Union, in attempts to explain the newly discovered quasars.

Black-hole powered

JWST’s near-infrared spectroscopy of GS 3073 reveals the highest nitrogen-to-oxygen ratio yet measured in the universe’s first billion years. Its dense central gas contains almost as many nitrogen atoms as oxygen atoms, while carbon and neon are comparatively modest. In addition, the galaxy has an active nucleus powered by a black hole that is already millions to hundreds of millions of times the mass of the Sun, despite the galaxy’s low metallicity.

Could a primordial monster explain GS 3073? The answer lies in how these huge stars mix and burn their fuel.

GS 3073 could offer the first chemical evidence for the largest stars the universe ever formed and tie them to the early production of massive black holes

Simulations reveal that after an initial phase of hydrogen burning in the core, these stars ignite helium, producing large amounts of carbon and oxygen. Because the stars are so luminous and extended, their interiors are strongly convective. Hot material rises, cool material sinks and chemical elements are constantly stirred. Freshly made carbon from the helium-burning core leaks outward into a surrounding shell where hydrogen is still burning. There, a sequence of reactions known as the CNO cycle converts hydrogen into helium while steadily turning carbon into nitrogen. Over time, this process loads the outer parts of the star with nitrogen, while also moderately enhancing oxygen and neon. The heaviest elements produced in the final burning stages remain trapped in the core and never reach the surface before the star collapses.
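The main branch of the cycle, written out, makes the nitrogen build-up easy to see:

\[
{}^{12}\mathrm{C}(p,\gamma)\,{}^{13}\mathrm{N}(\beta^+\nu)\,{}^{13}\mathrm{C}(p,\gamma)\,{}^{14}\mathrm{N}(p,\gamma)\,{}^{15}\mathrm{O}(\beta^+\nu)\,{}^{15}\mathrm{N}(p,\alpha)\,{}^{12}\mathrm{C}.
\]

Because the ${}^{14}\mathrm{N}(p,\gamma){}^{15}\mathrm{O}$ step is by far the slowest, the catalytic material piles up as ${}^{14}\mathrm{N}$ in equilibrium, which is how carbon dredged into the hydrogen-burning shell is steadily converted into nitrogen.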

Mass loss from such primordial stars is uncertain. Without metals, they cannot generate the strong line-driven winds familiar from massive stars today. Instead, mass may be lost through pulsations, eruptions or interactions in dense environments. But simulations allow a robust conclusion: supermassive primordial stars between roughly one thousand and ten thousand solar masses naturally produce gas with nitrogen-to-oxygen, carbon-to-oxygen and neon-to-oxygen ratios that match those measured in the dense regions of GS 3073. Stars significantly lighter or heavier than this range cannot reproduce the extreme nitrogen-to-oxygen ratio, even before carbon and neon are taken into account.

Under pressure

Radiation pressure could have supported these primordial monsters for no more than a few million years. As their cores contract and heat, photons become energetic enough to convert into electron–positron pairs, reducing the radiation pressure. For very massive stars, with helium cores of roughly 65 to 130 solar masses, this instability triggers a runaway thermonuclear explosion known as a pair-instability supernova. By contrast, supermassive stars are so dominated by gravity, owing to their much larger mass, that they collapse directly into black holes without undergoing a supernova explosion.
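The onset condition is simple to state: photon collisions can create pairs,

\[
\gamma\gamma \to e^+ e^- \quad \text{for} \quad \sqrt{s} \gtrsim 2 m_e c^2 \simeq 1.02\ \mathrm{MeV},
\]

a condition met by the high-energy tail of the thermal photon gas once core temperatures approach $k_B T \sim 100$ keV ($T \sim 10^9$ K). Converting photon energy into rest mass softens the equation of state, and it is this loss of pressure support that triggers the instability.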

This provides a natural path from supermassive primordial stars to the over-massive black hole now seen in GS 3073’s nucleus. In this scenario, one or a few such giants enrich the surrounding gas with nitrogen-rich material through mass loss during their lives, and leave behind black-hole seeds that later grow by accretion. If this picture is correct, GS 3073 offers the first chemical evidence for the largest stars the universe ever formed and ties them directly to the early production of massive black holes. Future JWST observations, together with next-generation ground-based telescopes, will search for more nitrogen-loud galaxies and map their chemical structures in greater detail.

Longest gamma-ray burst confounds astrophysicists

On 2 July 2025, NASA’s Fermi Gamma-ray Space Telescope observed a gamma-ray burst (GRB 250702B) with a record duration of seven hours. Intriguingly, high-resolution images from the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST) revealed that the burst emerged nearly 1900 light-years from the centre of its host galaxy, near the edge of its disc. But its most unusual feature is that it was seen in X-rays a full day before any gamma rays arrived.

The high-energy transient sky is filled with a cacophony of exotic explosions produced by stellar death. Short GRBs of less than two seconds are produced by the merging of compact objects such as black holes and neutron stars. Longer GRBs are produced by the death of massive stars, with “ultralong” GRBs most often hypothesised to originate in the collapse of massive blue supergiants, as they would allow for accretion onto their central black-hole engines over a period from tens of minutes to hours.

Peculiar observations

GRB 250702B lasted for at least 25,000 seconds (seven hours), surpassing the previous record-holder, GRB 111209A, by over 10,000 seconds. However, the duration alone was not enough to establish whether this event represented a different class of GRB or merely an extreme outlier. Two other observations immediately marked GRB 250702B as peculiar: the multiple gamma-ray episodes seen by Fermi and other high-energy satellites; and the soft X-rays from 0.5 to 4 keV seen by China’s Einstein Probe over a period extending a full day before gamma rays were detected.

No previous GRB is known to have been preceded by X-ray emission over such a period. Nor is it an expectation of standard GRB models, even those invoking a blue supergiant. Instead, these X-rays suggest a relativistic tidal disruption event (TDE) – the shredding of a star by a massive black hole, launching a jet that moves near the speed of light. All known relativistic TDE systems are produced by supermassive black holes weighing a million times the mass of our Sun, or more. Such black holes are found at the centre of their host galaxies, but the HST and JWST observations revealed that the transient had occurred near the edge of its host galaxy’s disc (see “Not from the nucleus” image).

This peripheral origin opens the door to a more exotic scenario involving an intermediate-mass black hole (IMBH) weighing hundreds to thousands of solar masses. IMBHs are a missing link in black-hole evolution between the stellar-mass black holes that gravitational-wave detectors frequently see merging and the supermassive black holes found at the centre of most galaxies. Alternative scenarios reduce the black-hole mass even further, and include a micro-TDE, where a star is shredded by a stellar-mass black hole, or a helium star being eaten by a stellar-mass black hole.

There is little consensus on the origin of GRB 250702B, beyond that it involved an accreting black hole

The rapid gamma-ray variability observed by Fermi and other high-energy satellites is an important clue. The time variability of relativistic jets is thought to be orders of magnitude slower than the characteristic scale set by a black hole’s Schwarzschild radius. While an intermediate-mass black hole of a few hundred solar masses is not incompatible, the observed variability is nearly 100 times faster than that seen in relativistic TDEs. By contrast, with characteristic physical scales smaller in proportion to the smaller masses of their black holes, micro-TDEs and helium-star black-hole mergers have no difficulty accommodating such short-timescale variability.
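A back-of-the-envelope scaling (a standard estimate, not drawn from the cited analyses) ties the minimum variability time to the light-crossing time of the Schwarzschild radius:

\[
t_{\mathrm{min}} \sim \frac{R_s}{c} = \frac{2GM}{c^3} \approx 10\ \mu\mathrm{s} \times \frac{M}{M_\odot},
\]

about ten seconds for a $10^6\,M_\odot$ supermassive black hole but only milliseconds for a few hundred solar masses, which is why rapid variability argues for a lighter central engine.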

The environment of the transient also provides crucial clues to its origin. JWST spectroscopy revealed that the light from the transient and its host galaxy was emitted 8 billion years ago, when the universe was less than half its present age. The galaxy is among the largest and most massive known at that epoch, and – unusually for galaxies hosting GRBs – a massive dust lane splits its disc in half. Ongoing star formation at the transient’s location suggests a stellar-mass progenitor, as opposed to an IMBH.

Despite numerous studies, there is little consensus on the origin of GRB 250702B, beyond that it involved an accreting black hole. Its exceptional duration and early X-ray emission initially suggested a supermassive black hole, but its rapid variability and location in its host galaxy instead point to a stellar-mass black hole, with a far rarer IMBH potentially splitting the difference. Given that such events are estimated to occur only once every 50 years, the wait for the next ultralong GRB may be long, but astrophysicists are optimistic that theoretical advances will disentangle the different progenitor scenarios and reveal the origin of this extraordinary transient.
