Photon detectors light up Bologna

The 7th International Workshop on New Photon-Detectors (PD2025) took place from 3 to 5 December 2025 at Bologna’s Palazzo d’Accursio, attracting more than 150 researchers working on the development and application of photon-detection technologies. The medieval city-hall library, with its transparent floor above archaeological remains spanning more than two millennia, provided a striking setting for three days of discussion on state-of-the-art detector technologies.

Photon detectors lie at the heart of modern experimental physics. Their ability to measure extremely faint light signals, down to single photons, makes them indispensable in areas ranging from high-energy and nuclear physics to astroparticle physics, astronomy, medical imaging and emerging quantum technologies. In recent years, rapid progress in devices such as silicon photomultipliers (SiPMs), avalanche photodiodes (APDs) and microchannel-plate photomultipliers (MCP-PMTs) has delivered improvements in timing resolution, radiation tolerance and large-scale integration. PD2025 provided a timely snapshot of this evolving field, combining technology-driven discussions with reports from experiments already exploiting these advances.

A significant fraction of the invited talks focused on the latest developments in SiPM technology, which has become the workhorse photodetector for many contemporary experiments. Alberto Gola (FBK) and Edoardo Charbon (EPFL) highlighted progress in custom SiPM and digital SPAD devices, respectively, emphasising improvements in photon-detection efficiency and sub-100 ps timing performance, as well as ongoing efforts to mitigate correlated noise and radiation-induced degradation. These technological developments were complemented by reports from large-scale experiments in high-energy and astroparticle physics – such as ALICE 3, CMS, DarkSide, DUNE, ePIC and JUNO-TAO – outlining the status of ongoing work and the anticipated role of SiPM-based systems in large-area calorimetry, precision timing and Cherenkov imaging in future detectors.

Equally prominent were contributions on vacuum photodetectors and on enabling technologies. Albert Lehmann (University of Erlangen-Nürnberg) reviewed the status and future prospects of microchannel-plate photomultiplier tubes (MCP-PMT), while Angelo Rivetti (INFN Torino) addressed the challenges of fast, low-power front-end electronics capable of handling the ever-increasing channel counts of modern detectors. Modelling of photon-detection devices was discussed by Werner Riegler (CERN), who introduced an analytic description of timing and efficiency in SPADs and SiPMs, clarifying their performance limits for single-photon and charged-particle detection.

Several contributions underlined the increasingly close relationship between academia and industry in photon-detector development, touching on technology transfer, production scalability and long-term reliability. These issues are becoming central as detectors transition from small-scale prototypes to systems comprising hundreds of thousands, or even millions, of channels – as in the planned use of digital SiPMs in physics experiments.

The next edition of the conference will take place in May 2027 in Beijing.

The top turns thirty

The 18th International Workshop on Top Quark Physics (TOP2025) brought the top-quark community to Seoul, South Korea, from 21 to 26 September 2025. Hosted at Hanyang University, the event offered 135 experimentalists and theorists a chance to exchange results, discuss open questions and explore the future of top-quark physics.

2025 marked the 30th anniversary of the top quark’s discovery by the DØ and CDF experiments at Fermilab. Three decades on, and despite ever-increasing experimental precision, the top quark’s properties remain only partially understood. While its mass is now known at the sub-GeV level and its production cross sections agree well with Standard Model predictions, questions persist about its electroweak couplings, its interactions with the Higgs boson, and the detailed structure of top–antitop production at high energies. Because of its large mass and correspondingly strong coupling to the electroweak sector, many in the community continue to view the top quark as a sensitive probe of physics beyond the Standard Model.

The conference opened with an inspiring keynote address by Juan Antonio Aguilar Saavedra (IFT Madrid), who explored the connections between top-quark physics and quantum science and technology. A notable example is the recent observation of quantum entanglement in top-quark pair production by the ATLAS and CMS experiments, which has opened a promising new line of research linking collider physics with concepts more familiar to quantum information researchers. Entanglement can be measured in the top–antitop system because top quarks decay before hadronisation takes place, allowing direct access to their spin correlations.

Top physics is currently enjoying a golden era. Last year, the CMS collaboration reported an excess near the top–antitop threshold (CERN Courier May/June 2025 p7), later confirmed by ATLAS with a significance of 7.7σ above the background predicted by perturbative quantum chromodynamics (CERN Courier September/October 2025 p9). This excess is consistent with expectations from non-relativistic quantum chromodynamics, an effective theory that describes the dynamics of heavy quark pairs near threshold and with simplified models involving a pseudoscalar “quasi-bound-state”, called toponium.

During a mini-workshop dedicated to toponium, Benjamin Fuks (LPTHE) presented an intriguing scenario in which the excess could be explained by two contributions: one from a top–antitop bound state and another from a beyond-the-Standard-Model signature, although the data are also compatible with Standard-Model-only components.

The next edition of the TOP conference will take place in Antalya, Turkey, from 5 to 9 October 2026.

Charm and beauty alike in fragmentation

LHCb figure 1

Proton–proton collisions at the LHC fling quarks and gluons out at massive energies. As they radiate and split into ever more partons, the strong force confines them into sprays of hadrons called jets. The total momentum of a jet, split among its components, approximates that of the initial quark or gluon, which cannot be accessed directly. By tracking how much of a jet’s momentum each hadron carries, the LHCb collaboration has now compared how charm, beauty and light quarks hadronise.

While the production and radiation of individual quarks and gluons can be treated perturbatively, their conversion into hadrons occurs in the non-perturbative regime and cannot be calculated from first principles. Instead, the transition is described using phenomenological probability distributions, called fragmentation functions, which encode how a quark of a given flavour produces specific hadrons. Measuring the content and structure of jets, as well as their kinematic properties, can help constrain these functions.
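
Schematically – as a textbook-level sketch rather than a formula taken from the LHCb analysis – fragmentation functions enter hadron-production cross sections through a convolution with the perturbative partonic cross sections,

\[
\mathrm{d}\sigma^{pp \to h + X} \;\simeq\; \sum_{i} \mathrm{d}\hat{\sigma}^{pp \to i + X} \otimes D_i^{h}(z, \mu^2),
\]

where D_i^h(z, μ²) gives the probability for parton i to yield hadron h carrying a momentum fraction z at factorisation scale μ. It is these functions that measurements of jet content and structure help to constrain.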

Previously, the LHCb collaboration measured observables sensitive to fragmentation functions in samples dominated by light-quark-initiated jets. The same measurements were recently carried out for charm- and beauty-quark-initiated jets, allowing a direct comparison of hadronisation across three different jet flavour categories at a single experiment. The light-quark sample was obtained by selecting jets produced nearly back-to-back with a Z boson. In such events, the single parton initiating the jet is typically a gluon or a light quark. In the forward kinematic region accessible to the LHCb detector, where one incoming parton often carries a large fraction of the proton momentum, the proportion of light-quark jets gets further enhanced. Samples of predominantly charm- and beauty-quark-initiated jets were instead obtained using a dedicated flavour-tagging algorithm, which makes use of LHCb’s excellent performance at heavy flavour identification and reconstruction.

The new measurements allow a direct comparison of hadronisation across three different jet flavour categories

A key observable for constraining fragmentation functions is the longitudinal momentum fraction z, defined as the share of jet momentum carried by a hadron along its axis. With respect to their light-quark analogues, heavy-quark-initiated jets appear suppressed at high z, consistent with the leading heavy-flavour hadron carrying most of the jet momentum (see figure 1).
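
In the convention commonly used for such jet studies (quoted here as an illustrative definition rather than from the LHCb paper itself), z is the projection of the hadron momentum onto the jet axis, normalised to the jet momentum:

\[
z \;=\; \frac{\vec{p}_{\mathrm{hadron}} \cdot \vec{p}_{\mathrm{jet}}}{|\vec{p}_{\mathrm{jet}}|^{2}} .
\]

A leading heavy-flavour hadron that takes most of the jet momentum therefore sits at high z, leaving relatively little momentum for the remaining charged hadrons.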

Previous measurements of the hadronisation of a heavy quark into a single heavy-flavour hadron showed that this hadron carries most of the parent quark’s momentum. The new LHCb analysis extends this picture to the full multi-hadron structure of heavy-quark-initiated jets and is consistent with single-hadron measurements: relatively few charged hadrons possess a large fraction of the jet momentum – a result compatible with the heavy-flavour hadron carrying most of it. This result demonstrates the complementarity of single- and multi-hadron measurements, which are both necessary to fully understand high-energy hadronisation.

The analysis also measured the transverse momentum of the hadron with respect to the jet axis, which is sensitive to transverse-momentum-dependent fragmentation functions. Experimental constraints on these functions remain limited, yet they are crucial in reconstructing a three-dimensional description of hadronisation.

Space radiobiology

Astronauts are exposed to elevated levels of cosmic radiation during spaceflight. As missions become longer and venture farther from Earth, understanding how this radiation affects the human body has become a pressing scientific challenge. This emerging field of space radiobiology has strong and perhaps unexpected links to the far better established discipline of radiobiology in medical physics, where physicists work closely with clinicians to design and optimise cancer treatments using ionising radiation. In both contexts, the central question is the same: how does radiation interact with living cells, and how can its harmful effects be predicted, mitigated and controlled?

Space Radiobiology is authored by Alessandro Bartoloni (INFN Roma) and Lidia Strigari (University Hospital of Bologna), whose combined expertise spans astroparticle physics, radiation transport and clinical radiobiology. The book explores a meeting point between two fields that have long followed separate paths but are now clearly converging around shared questions in radiation science.

At its core, the book argues for a closer integration of astroparticle physics and medical physics, demonstrating how both fields benefit from a common radiobiology perspective and a shared concern for radiation protection. At the heart of the volume is a thorough and well balanced discussion of space radiation and its implications for human spaceflight. The authors guide the reader through the complexity of the space-radiation environment – galactic cosmic rays, solar-particle events and their interactions – without losing clarity. These elements are consistently linked to real concerns for astronaut health, both for short missions and for the long-duration journeys that are becoming increasingly realistic. By connecting radiation sources, transport mechanisms and biological effects, the book builds a clear picture of where the risks lie and how they might be managed, making it especially relevant at a time when deep-space missions are moving from concept to planning.

What makes the book particularly engaging is that it never treats space research as an isolated niche. Instead, it repeatedly shows how ideas and tools developed for space can feed back into medical physics. From dosimetry and radiation monitoring to risk assessment, the authors highlight how methods refined for astroparticle experiments can be applied in clinical and research settings on Earth. Advances in detectors, modelling and data analysis developed for space missions are presented not as abstract achievements, but as practical contributions that can improve radiation therapy and diagnostic imaging.

From space to the hospital

This interdisciplinary spirit comes through especially well in the case study of the Alpha Magnetic Spectrometer group at INFN Roma Sapienza. Operating aboard the International Space Station, AMS was designed to study cosmic rays and search for signs of dark matter and antimatter. The book shows, however, that its high-precision measurements of charged-particle spectra, particle composition and energy deposition in low-Earth orbit have direct relevance for space radiobiology and radiation-protection research. In particular, AMS data helped characterise the flux, charge and energy distribution of galactic cosmic rays and solar energetic particles, key parameters for modelling dose, dose-rate and track-structure effects in biological tissue. These measurements inform risk assessments for astronaut exposure, improve shielding models, and support more realistic simulations of DNA damage and long-term health effects associated with chronic low-dose, high-energy radiation in space. Rather than serving as a standalone example, this case study acts as a concrete illustration of how cross-disciplinary collaboration actually works in practice: how shared technologies, experimental approaches and theoretical frameworks can produce insights that matter across fields.

Space Radiobiology

The sections on radiobiology strike a careful balance between accessibility and depth. Topics such as DNA damage, cellular responses and long-term health effects are explained clearly, without oversimplifying issues that are inherently complex (CERN Courier November/December 2025 p27). One of the book’s strongest messages is that space radiobiology, with its extreme and unconventional exposure conditions, offers a unique lens for understanding radiation effects that are also relevant to clinical and occupational environments on Earth.

By focusing on shared biological endpoints and common dosimetric challenges, the book shows how progress in one area can meaningfully inform the other. The discussion on developing common platforms for radiation measurement and monitoring reinforces this point, arguing that integrated approaches are not only efficient but scientifically necessary in increasingly complex radiation environments.

Space Radiobiology succeeds in bringing together different scientific communities around a common language and set of challenges. It will resonate with researchers in physics, space science, radiobiology and medical physics, as well as with graduate students looking for a broader, more connected view of radiation science. At a moment when deep-space exploration is becoming a tangible goal rather than a distant idea, the book offers a thoughtful and convincing picture of how lessons learned beyond Earth can shape safer and more effective uses of radiation here at home.

The flavour dependence of jet structures

ALICE figure 1

Partons produced in heavy-ion collisions at the LHC must push their way through a hot, dense quark–gluon plasma (QGP). In doing so, they experience medium-induced energy loss that depends on the parton’s mass. In a recent analysis, the ALICE collaboration compared the yields of charged particles associated with electrons from heavy-flavour hadron decays with the corresponding yields for light-hadron triggers. Both show a suppression of high-momentum particles emitted opposite to the tagged particle, with no significant difference between the two.

After a hard scattering, high-energy partons fragment into collimated sprays of hadrons known as jets. These are well described in proton–proton (pp) collisions, where their substructures provide stringent tests of perturbative QCD. In heavy-ion collisions, instead, they propagate through the QGP and emerge modified – a phenomenon known as jet quenching. Previous measurements (CERN Courier March/April 2025 p13) suggest that jets initiated by charm and beauty quarks lose less energy than those from light quarks and gluons, owing to their larger mass. This difference is commonly attributed to the dead-cone effect, which suppresses gluon emission by heavy quarks at small angles. Jet-quenching effects can be further characterised by measuring the transverse-momentum distribution of particles within jets, providing insight into the redistribution of the quenched energy.

To study this, the ALICE collaboration employs azimuthal-correlation measurements. This technique measures the angular correlation between a heavy-flavour hadron or its decay daughter (“trigger” particle) and other associated charged particles in the same event. The resulting distribution features two correlation peaks: a near-side peak from particles produced alongside the trigger and an away-side one from the recoiling jet, particularly sensitive to jet-medium interactions. Jet quenching is then quantified by the per-trigger nuclear modification factor IAA, which is the ratio of away-side charged-particle yield in heavy-ion collisions to pp collisions. Values of IAA deviating from unity indicate QGP-induced modifications of the jet.
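
Written out explicitly (a schematic definition consistent with the description above), the per-trigger nuclear modification factor compares the away-side associated yield per trigger in lead–lead and pp collisions,

\[
I_{AA} \;=\; \frac{\left(N_{\mathrm{assoc}}^{\mathrm{away}} / N_{\mathrm{trig}}\right)_{\mathrm{Pb\text{-}Pb}}}{\left(N_{\mathrm{assoc}}^{\mathrm{away}} / N_{\mathrm{trig}}\right)_{pp}} ,
\]

so values below unity signal suppression and values above unity signal enhancement.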

The ALICE collaboration now reports the measurements of jet-like structures in the heavy-flavour sector of lead-lead collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair. The analysis, based on LHC Run 2 data, uses electrons from semi-leptonic decays of charm and beauty hadrons as trigger particles. Electron identification relies on a combination of energy-loss measurements in the time-projection chamber, energy-momentum matching in the calorimeter and selection of shower shapes. Invariant-mass tagging techniques allowed for the subtraction of the large backgrounds from photon conversions and light-meson decays to electron-positron pairs.

The measurement is challenging due to the high multiplicity of lead-lead collisions and the need to extract jet-like correlations from large combinatorial and collective-motion backgrounds. A corresponding analysis of pp collisions at the same energy provides the reference needed to compare jet evolution in the presence of the QGP.

The away-side shows a suppression for associated particles with transverse momenta between 4 and 7 GeV/c (see figure 1), indicating relevant jet quenching with a 2.5σ significance. Conversely, a hint of an enhancement is observed below 2 GeV/c, possibly signalling the redistribution of lost energy into the medium and the subsequent formation of additional low-momentum particles.

These results are consistent with corresponding measurements using light-flavour triggers across all measured intervals. While this suggests that the QGP modifies jets consistently regardless of the initiating parton’s mass, important caveats remain. Variations in parton-to-hadron momentum scaling, as well as the fact that the heavy flavour is tagged via decay electrons, could introduce kinematic differences that complicate a direct comparison. Whether QCD predicts a deviation remains an open question for future modelling of mass-dependent parton-medium interactions.

LHC Run 3 will provide an order of magnitude more heavy-ion events. This increased luminosity will enable higher-precision analyses, offering a deeper understanding of how QGP modifies heavy- and light-flavour jets.

Erich Lohrmann 1931–2026

Erich Lohrmann

Erich Lohrmann, an experimental physicist who shaped the research programme at DESY, passed away on 10 January 2026 at the age of 94.

Lohrmann was born on 25 May 1931 in Esslingen am Neckar near Stuttgart. From 1950 to 1955 he studied at the Technische Hochschule Stuttgart (TH Stuttgart), where in his doctoral dissertation, completed in 1956, he investigated particle production by cosmic rays in nuclear emulsions and, together with Martin Teucher, observed the creation and annihilation of an antiproton shortly after its discovery at Berkeley. For the discovery of the antiproton, Owen Chamberlain and Emilio Segrè were awarded the Nobel Prize in Physics in 1959. From 1956 to 1961, Lohrmann continued his work on cosmic rays at TH Stuttgart and at the universities of Bern, Frankfurt and Chicago. In Chicago he also met Masatoshi Koshiba, with whom he shared a lifelong friendship.

Lohrmann joined DESY in 1961. He convinced the director, then Willibald Jentschke, that a liquid-hydrogen bubble chamber exposed to the photon beam from the 6 GeV electron synchrotron would be ideally suited to investigate hadronic reactions. Five million bubble-chamber photographs were analysed by a large collaboration, resulting in a rich scientific harvest that received great international recognition. To facilitate the measurement and analysis of millions of photographs, Erich worked on automated measurement methods and data analysis, and founded DESY’s IT group. In 1969, together with Peter Stähelin, he established the Institute of Informatics at Hamburg University, where he and members of the DESY IT group gave lectures on informatics and data analysis.

As research director from 1968 to 1972 and from 1979 to 1981, he played a key role in strategic decisions at DESY. He was one of the few scientists who encouraged Jentschke to build the electron–positron storage ring DORIS. In the years from 1966 to 1968, it was a risky decision to base DESY’s future on this technique, since the prevailing opinion among particle physicists was that it would only allow tests of the validity of QED. The discovery of the “new particles” in the November Revolution of 1974 (CERN Courier November/December 2024 p41) showed that this was the right decision, and it has shaped research at DESY to this day.

Lohrmann was the driving force behind the conception and realisation of the PLUTO detector at the DORIS storage ring. Against considerable opposition, he insisted on a superconducting coil, which laid the foundation for DESY’s expertise in superconducting technology. This was subsequently a crucial prerequisite for the construction of HERA. The experimental programme at DORIS, in which Koshiba’s group was also heavily involved, proved to be extremely successful. The strong Japanese–German collaboration was continued at the large electron–positron storage ring PETRA and later at HERA. The PETRA experiments produced a wealth of new results, the most important of which was the discovery of gluons. After his term as research director ended, Erich then played an influential role in the TASSO experiment.

In the HERA project, he strongly supported Björn Wiik’s forward-looking proposal to build an electron–proton collider with a superconducting proton storage ring 6 km in circumference. In the ZEUS experiment, Lohrmann played a central role in setting up the collaboration, designing the interaction region and in data analysis, to name just a few examples. His important contributions to the critical analysis of publications continued until very recently.

Erich also promoted research with synchrotron radiation through the conversion of DORIS into a high-brilliance radiation source. Later, PETRA was also converted into a synchrotron radiation source. Today, DESY is a world leader in photon science.

From 1976 to 1978, Lohrmann served as CERN director responsible for research. Until his retirement in 1996, he was a professor at the University of Hamburg. With his lectures on physics, statistics and methods of data analysis, he inspired numerous students and provided them with a solid education. Based on his teaching experience, he also authored three books, one of them, Statistical and Numerical Methods of Data Analysis, with his colleague Volker Blobel. Together with Paul Söding, he described the history of DESY in detail up to 2008 in the book Von schnellen Teilchen und hellem Licht. Even after his retirement, Lohrmann was frequently at DESY and remained active in research. One example is the GRAVI experiment, which investigates Newton’s law of gravitation in weak fields.

Despite his great scientific achievements, Erich remained modest. Thanks to his sober yet humorous Swabian manner, his expertise and his commitment to scientists, he enjoyed great trust and esteem.

With the passing of Erich Lohrmann, physics loses a scientist of great foresight and an inspiring teacher. His contributions to physics and his scientific legacy will continue to inspire us in the future.

Matts Roos 1931–2025

Matts Roos

Matts Roos, who promoted the international standardisation of high-energy-physics data and co-developed the popular MINUIT statistical-minimisation package, passed away on 25 November 2025 in his hometown of Helsinki at the age of 94.

Roos was born on 28 October 1931. He completed an MSc degree in technical physics at the Helsinki University of Technology in 1956 and began his career in Stockholm at AB Atomenergi, where he investigated materials for radiation safety. However, he had basic science in his genes, or at least on his mind. Encouraged by his uncle, Ragnar Granit, who in 1967 was awarded the Nobel Prize in Physiology or Medicine, Roos became a research assistant in theoretical physics at the University of Stockholm, from where, a few years later, he continued to the Nordic Institute for Theoretical Physics and the Niels Bohr Institute in Copenhagen. In 1967, he defended his doctoral thesis on CP non-invariance in neutral-kaon decays.

Together with Arthur H Rosenfeld from Berkeley, Roos laid the foundations of the Particle Data Group (PDG). Rosenfeld published the first tables of particle data in 1957 and in 1963 Roos published his own particle tables. In 1964 these two tables were merged into what is now known as the Review of Particle Physics. Sixty years later, this highly cited opus has swollen to 1400 pages.

A particularly significant phase in Roos’ career was the five years he spent at CERN in Geneva, although Finland was not yet a member of CERN in 1965. Victor Weisskopf, the Director-General of CERN, invited Roos, on the basis of his work with the PDG, to apply for a temporary position in the Theory Division, then led by Léon Van Hove. Motivated by his work validating the properties of the elementary particles known at the time, CERN also invited Roos to lecture on statistical methods. This course eventually crystallised into Statistical Methods in Experimental Physics, published in 1971 in collaboration with Fred James, Daniel Drijard, Bernard Sadoulet and William Eadie.

Roos’ international reputation is also based on another CERN-period achievement that greatly benefited the scientific community: the MINUIT software developed together with James. This is a versatile statistical tool that has been used in particle-physics research throughout the decades, with citations of the original publication still increasing today.

The years abroad brought the sociable and multilingual Roos a wide circle of friends and acquaintances among researchers and made him cosmopolitan. Roos returned to Finland in 1971 after the University of Helsinki appointed him as an associate professor in the field of elementary particle physics. From 1977 until his retirement, he served as a personal professor of particle physics. Later, Roos turned to cosmology in addition to elementary particles. He devoted himself to the field by writing a textbook, Introduction to Cosmology, which went through four editions between 1994 and 2015. Roos also served as a member of the International Neutrino Commission for decades. In 1996 he organised the 17th International Conference on Neutrino Physics and Astrophysics in Helsinki.

In his spare time, Roos began to pursue visual arts in the 1980s, developing over the years from an enthusiastic amateur to a professional painter. He stated that art provides a counterbalance to research work, because “science progresses logically and art illogically”. His interest in art must have been rooted in the family, as his father and brother were well-known photographers and filmmakers, and his sister was an architect.

Roos took an active part in public debate, supported colleagues behind the Iron Curtain with forbidden scientific literature and works by Solzhenitsyn, and established a think tank on the civil use of nuclear power. He also helped introduce Transcendental Meditation into Finland, after having experienced it himself during a congress in California in the early 1960s.

After returning to Finland, Matts Roos settled in Helsinki with his Swiss-born wife Jacqueline, whom he met while participating in a choral music society, and with the family’s three children. In the summers, the family enjoyed their cottage in the Sipoo archipelago, where many colleagues also were invited.

We shall keep the memory of our dear friend and colleague alive.

Seven colliders for CERN

Seven ambitious, diverse and technically complex colliders have been proposed as options for CERN’s next large-scale collider project: CLIC, FCC-ee, FCC-hh, LCF, LEP3, LHeC and a muon collider. The European Strategy Group tasked a working group drawn from across the field (WG2a) with comparing these projects on the basis of their technical maturity, performance expectations, risk profiles, and schedule and cost uncertainties. This evaluation is based on documentation submitted for the 2026 update to the European Strategy for Particle Physics (CERN Courier May/June 2025 p8). With WG2a’s final report now published, clear-eyed comparisons can be made across the seven projects.

CLIC

The Compact Linear Collider (CLIC) is a staged linear collider that collides a polarised electron beam with an unpolarised positron beam at two interaction points (IPs) which share the luminosity (see figures and “Design parameters” table). It is based on a two-beam acceleration scheme where power from an intense 1 GHz drive beam is extracted and used to operate an X-band 12 GHz linac with accelerating gradients from 72 to 100 MV/m. The potential of two-beam acceleration to achieve high gradients enables a compact linear-collider footprint. Collision energies between 380 GeV and 1.5 TeV can be achieved with a total tunnel length of 12.1 or 29.4 km, respectively. The proof-of-concept work at the CLIC Test Facility 3 (CTF3) has demonstrated the principles successfully, but not yet at a scale representative of a full collider. A larger-scale demonstration with higher beam currents and more accelerating structures would be necessary to achieve full confidence in CLIC’s construction readiness.
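
A rough consistency check (back-of-envelope only, assuming the quoted gradient applies to the active accelerating structures) connects the gradient to the footprint: reaching 1.5 TeV in the centre of mass requires two 750 GeV linacs, so at 100 MV/m the active length is about

\[
2 \times \frac{750\ \mathrm{GeV}}{100\ \mathrm{MV/m}} \;=\; 15\ \mathrm{km},
\]

which, once the drive-beam complex, the beam-delivery system and a realistic filling factor are included, is compatible with the quoted 29.4 km footprint.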

CLIC

The project has a well developed design incorporating decades of effort, and detailed start-to-end (damping ring to IP) simulations have been performed indicating that CLIC’s design luminosity is achievable. CLIC requires tight fabrication and alignment tolerances, active stabilisation, and various feedback and beam-based correction concepts. Failure to achieve all of its tight specifications could translate into a luminosity reduction in practical operation. CLIC still requires a substantial preparation phase and territorial implementation studies, which introduces some uncertainty on its proposed timeline.

FCC-ee

The electron–positron Future Circular Collider (FCC-ee) is the proposed first stage of the integrated FCC programme. This double-ring collider, with a 90.7 km circumference, enables collision centre-of-mass energies up to 365 GeV and allows for four IPs.

FCC-ee

FCC-ee stands out for its level of detail and engineering completeness. The FCC Feasibility Study, including a cost estimate, was recently completed and has undergone scrutiny by expert committees, CERN Council and its subordinate bodies (CERN Courier May/June 2025 p9). This preparation translates into a relatively high technical-readiness level (TRL) across major subsystems, with only a few lower-level/lower-cost elements requiring targeted R&D. The layout has been chosen after a detailed placement study considering territorial, geological and environmental constraints. Dialogue with the public and host-state authorities has begun.

Performance estimates for FCC-ee are considered robust: previous experience with machines such as LEP, PEP-II, DAΦNE and SuperKEKB has provided guidance for the design and bodes well for achieving the performance targets with confidence. In terms of readiness, FCC-ee is the only project that already possesses a complete risk-management framework integrated into its construction planning.

FCC-hh

The hadron version of the Future Circular Collider (FCC-hh) would provide proton–proton collisions up to a nominal energy of 85 TeV – the maximum achievable in the 90.7 km tunnel for the target dipole field of 14 T. As a second stage of the integrated FCC programme, it would occupy the tunnel after the removal of FCC-ee, and so could potentially start operation in the mid-2070s. FCC-hh’s cost uncertainty is currently dominated by its magnets. The baseline design uses superconducting Nb3Sn dipoles operating at 1.9 K, though high-temperature superconducting (HTS) magnets could reduce the electricity consumption or allow higher fields and beam energies for the same power consumption. Both technology approaches are active research directions of Europe’s high-field magnet programme.
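
The quoted energy follows from the usual bending relation (a back-of-envelope estimate; the effective bending radius used here is illustrative),

\[
E_{\mathrm{beam}}\,[\mathrm{TeV}] \;\approx\; 0.3 \, B\,[\mathrm{T}] \times \rho\,[\mathrm{km}] ,
\]

so 14 T dipoles with an effective bending radius of roughly 10 km inside the 90.7 km ring give about 42.5 TeV per beam, i.e. 85 TeV in the centre of mass.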

FCC-hh

The required Nb3Sn technology is progressing steadily, but still needs 15 to 20 years of R&D before industry-ready designs could be available. HTS cables satisfying the specifications required for the magnets of a high-luminosity collider, although extremely promising, are at an even earlier stage of development. If FCC-hh were to proceed as a standalone project, operations could possibly start around 2055 from a technical perspective. In that case the magnets would need to be based on Nb3Sn technology, as HTS accelerator-magnet technology is not expected to be available in that timeframe.

FCC-hh’s performance expectations draw strength from the LHC experience, though the achievable integrated luminosity would depend on the required “luminosity levelling” scenario that might be determined by pile-up control at the experiments. Luminosity levelling is a technique used in particle colliders such as the LHC to keep the instantaneous luminosity approximately constant at the maximum level compatible with detector readout, rather than letting it start very high and then decay rapidly.

LCF

The Linear Collider Facility (LCF) is a linear electron–positron collider, based on the design of the International Linear Collider (ILC), in a 33.5 km tunnel with two IPs sharing the pulses delivered by the collider and with double the repetition rate of the ILC. The first phase aims at a centre-of-mass energy of 250 GeV, though the tunnel is sized to accommodate an upgrade to 550 GeV. LCF’s main linacs incorporate 1.3 GHz bulk-Nb superconducting radiofrequency (SRF) cavities for acceleration, operated at an average gradient of 31.5 MV/m and a cavity quality factor twice that of the ILC design at the same accelerating gradient. The quality factor of an RF cavity is a measure of how efficiently the cavity stores electromagnetic energy compared with how much it loses per cycle. LCF can deliver polarised positron and electron beams. Its engineering definition is solid and its SRF technology is widely used in several operational facilities, most prominently at the European XFEL. However, the specific performance targets exceed what has been routinely achieved in operation to date, and demonstrating this combination of high gradient and high quality factor remains a central R&D requirement.
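
In formula form (the standard textbook definition rather than an LCF-specific number), the unloaded quality factor relates the energy U stored in the cavity to the power P_diss dissipated in its walls at angular frequency ω,

\[
Q_0 \;=\; \frac{\omega\,U}{P_{\mathrm{diss}}} ,
\]

so doubling Q0 at fixed gradient roughly halves the cryogenic load per cavity – which is what makes the target attractive, and also what makes it demanding to demonstrate.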

LCF

Several lower-TRL components – such as the polarised positron source, beam dumps and certain RF systems – also require focused development. Final-focus performance, which is more critical in linear colliders compared to circular colliders, relies on validation at KEK’s Accelerator Test Facility 2, which is being extended and upgraded. The overall schedule is credible but depends on securing the needed R&D funding and would require a preparation phase including detailed territorial implementation studies and geological investigations.

LEP3

The Large Electron Positron collider 3 (LEP3) proposal explores the reuse of the existing LEP/LHC tunnel for a new circular electron–positron (e+e–) collider. LEP3 has two IPs and the potential for collision energies ranging from 91 to 230 GeV; its luminosity performance and energy range are limited by synchrotron radiation emission, which is more severe than in FCC-ee due to its smaller radius and the limited space available for the SRF installation.
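
The limitation can be made quantitative with the standard expression for the energy lost per turn by an electron of energy E on bending radius ρ (the numbers below are illustrative, not taken from the LEP3 submission),

\[
U_0\,[\mathrm{GeV}] \;\approx\; 8.85 \times 10^{-5} \, \frac{E^4\,[\mathrm{GeV}^4]}{\rho\,[\mathrm{m}]} ,
\]

so a 115 GeV beam on the roughly 3 km bending radius of the LEP/LHC tunnel radiates about 5 GeV per turn, all of which must be restored by an RF system squeezed into the limited straight-section space.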

LEP3

The LEP3 proposal is not yet based on a conceptual or technical design report. Its optics and performance estimates depend on extrapolations from FCC-ee and earlier preliminary studies, and the design has not undergone full simulation-based validation. The current design relies on HTS combined quadrupole and sextupole focusing magnets. Though they would be central to LEP3 achieving a competitive luminosity and power efficiency, these components currently have low TRL scores.

Although tunnel reuse simplifies territorial planning, logistics such as dismantling HL-LHC components introduce non-trivial uncertainties for LEP3. In the absence of a conceptual design report, timelines, costs and risks are subject to significant uncertainty.

LHeC

The Large Hadron–Electron Collider (LHeC) proposal incorporates a novel energy-recovery linac (ERL) coupled to the LHC. High-luminosity collisions take place between a 7 TeV proton beam from the HL-LHC and a high-intensity 50 GeV electron beam accelerated in the new ERL. The LHeC ERL would consist of two linacs based on bulk-Nb SRF 800 MHz cavities, connected by recirculation arcs, resulting in a total machine circumference equal to one third that of the LHC. After acceleration, the electron beam collides with the proton beam and is subsequently decelerated in the same SRF cavities, “giving back” its energy to the RF system.

LHeC

The LHeC’s performance depends critically on high-current, multi-pass energy recovery at multi-GeV energies, which has not yet been demonstrated. The PERLE (Powerful Energy Recovery Linac for Experiments) demonstrator under construction at IJCLab in Orsay will test critical elements of this technology. The main LHeC performance uncertainties relate to the efficiency of energy recovery and to beam-loss control of the electron beam during deceleration after the collision with the proton beam. Schedule, cost and performance will depend on the outcomes demonstrated at PERLE.

Muon collider

Among the large-scale collider proposals submitted to the European Strategy for Particle Physics update, a muon collider offers a potentially energy-efficient path toward high-luminosity lepton collisions at a centre-of-mass energy of 10 TeV. The larger mass of the muons, as compared with electrons and positrons, reduces the amount of synchrotron radiation emitted in a circular collider of a given energy and radius. The muons are generated from the decays of pions produced by the collision of a high-power proton beam with a target. “Ionisation cooling” of the muon beams via energy loss in absorbers made of low-atomic-number materials and acceleration by means of high-gradient RF cavities immersed in strong magnetic fields is required to reduce the energy spread and divergence of this tertiary beam. Fast acceleration is then needed to extend the muons’ lifetimes in the laboratory frame, thereby reducing the fraction that decays before collision. To achieve this, novel rapid-cycling synchrotrons (RCSs) could be installed in the existing SPS and LHC tunnels.
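
A textbook time-dilation estimate (illustrative, not a design number) shows why fast acceleration pays off: the muon lifetime at rest is τ0 ≈ 2.2 μs, so at a beam energy of 5 TeV the Lorentz factor γ = E/mμc² ≈ 4.7 × 10⁴ stretches this to

\[
\tau_{\mathrm{lab}} \;=\; \gamma\,\tau_0 \;\approx\; 0.1\ \mathrm{s},
\]

corresponding to a decay length of roughly 30 000 km, or of order a thousand turns in an LHC-sized ring, before the beam intensity is lost to decays.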

Muon collider

Neutrino-induced radiation, high-field solenoids and the operation of radiofrequency cavities in multi-Tesla magnetic fields present major technological challenges that require extensive R&D. Demonstrating muon cooling at the required level in all six dimensions of phase space is a necessary ingredient to validate the performance, schedule and cost estimates.

Design parameters

WG2a’s comparison, together with the analysis conducted by the other working groups of the European Strategy Group, notably that of WG2b, which is providing an assessment of the physics reach of the various proposals, provides vital input to the recommendations that the European particle-physics community will make for securing the future of the field. 

What can you do with 380 million Higgs bosons?

The Higgs boson is uniquely simple – the only Standard Model particle with no spin. Paradoxically, this allows its behaviour to be uniquely complex, notably due to the “scalar potential” built from the strength of its own field. Shaped like a Mexican hat, the Higgs potential has a local maximum of potential energy at zero field, and a ring of minima surrounding it.

In the past, the Higgs field settled into this ring, where it still dwells today. Since then, the field has been permanently “switched on” – a directionless field with a nonzero “vacuum expectation value” that is ubiquitous throughout the universe. Its interactions with a number of other fundamental particles give them mass. What remains unclear is how the Higgs field behaves once pushed from this familiar minimum. Where will it go next, how did it get there in the first place and might new physics modify this picture?

The LHC alone has shed experimental light on this physics. Further progress on this compelling frontier of fundamental science requires upgrades and new colliders. The next step along this path is the High-Luminosity LHC (HL-LHC), which is scheduled to begin operations in 2030. The HL-LHC is set to outperform the LHC by far, with a total dataset of 380 million Higgs bosons created inside the ATLAS and CMS experiments – a sample more than 10 times larger than any studied so far (see “A leap in technology” panel). We still need to unlock the full reach of the HL-LHC, but three scientific questions may serve to illustrate what can be studied with 380 million Higgs bosons.

What is the fate of the universe?

The stability of our universe hangs in a delicate balance. Quantum corrections could make the Higgs potential bend downward again at high values of the Higgs field, creating a lower-energy state beneath our own (see “The Higgs potential” panel). Through quantum tunnelling, tiny regions of space could spontaneously make the transition, releasing energy as the Higgs field settles into a new minimum of the Higgs potential. Bubbles of the new vacuum would expand at the speed of light, changing the vacuum state of the regions they encounter.

A second minimum?

Details matter. The Higgs potential is modified by the effect of virtual loops from all particles interacting with the Higgs field. Bosons push the Higgs potential upwards at high field values, and fermions pull it downwards. If the Standard Model remains valid up to high field values, perhaps as high as the Planck scale where quantum gravity is expected to become relevant, these corrections may determine the ultimate fate of the vacuum. As the most massive Standard Model particle yet discovered, the top quark makes a dominant negative contribution at high energies and field strengths. Together with a smaller effect from the mass of the Higgs boson itself, the top-quark mass defines three possible regimes. 

In the stable case, the Higgs potential remains above the current minimum up to high field values, and no deeper minimum is present.

If a second, lower minimum forms at high field values, but is shielded by a large energy barrier, the vacuum can be “metastable”. In that case, quantum tunnelling could in principle occur, but on timescales exceeding the age of the universe.

In the unstable regime, the barrier is low enough for decay to have already occurred.

Current observations place our universe safely within the metastable zone, far from any immediate change (see “A second minimum?” figure). Yet the precision of the latest LHC measurements, based on independent determinations of the top-quark mass (purple ellipses), leaves unresolved whether the universe is stable or metastable. Other uncertainties, such as that on the strength of nature’s strong coupling, also affect the distinction between the two regimes, shifting the boundary between stability and metastability (orange band).

The HL-LHC will be well placed to help resolve the question of the stability of the vacuum thanks to improvements in the measurements of the top quark and Higgs-boson masses (red ellipse). This will rely on combining the HL-LHC’s large dataset, the ingenuity of expected analysis improvements and theoretical progress in the fundamental interpretation of these measurements.

The Higgs potential

The Higgs boson is the only Standard Model particle with no spin – a quantum number that behaves as if fundamental particles were spinning, but which cannot correspond to a physical rotation without violating relativity theory.

This allows the Higgs field to experience a scalar potential – energy penalties that depend on the strength of the Higgs field itself. This is forbidden for fermions (spin ½) and massless bosons (spin 1) by Lorentz symmetry and gauge invariance.

In the Standard Model, the Higgs field is subject to the Higgs potential, shaped like a Mexican hat, with a maximum of potential energy at zero field, and a minimum at a ring in the complex plane of values of the Higgs field. Its polynomial form is restricted by gauge symmetry. Experimentally, it can be inferred by measuring properties of the Higgs boson such as its self-coupling λ3.
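
For orientation (the standard tree-level textbook form, not a result from the strategy inputs), the Mexican-hat potential can be written as

\[
V(\Phi) \;=\; -\mu^{2}\,\Phi^{\dagger}\Phi \;+\; \lambda\,(\Phi^{\dagger}\Phi)^{2} ,
\]

whose ring of minima sits at the vacuum expectation value v = μ/√λ ≈ 246 GeV and whose curvature there fixes the Higgs-boson mass through m_H² = 2λv². Expanding around this minimum generates the trilinear term whose strength is the self-coupling λ3 referred to in the text.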

Two effects then modify the Mexican-hat shape in ways that are difficult to predict but have important consequences for particle physics and cosmology. These are due to the interactions of the Higgs field with virtual particles and real thermal excitations. Quantum fluctuations modify the energy penalty of exciting the Higgs field due to virtual loops from all Standard Model particles. Changes in the temperature of the universe also generate changes in the shape of the Higgs potential due to the interaction of the Higgs field with real thermal excitations in the hot early universe. Properties such as λ3 are also affected by these effects.


Why is there more matter than antimatter?

Constraining the Higgs potential

The Higgs potential wasn’t always a Mexican hat. If the early universe got hot enough, interactions between the Higgs field and a hot plasma of particles shaped the Higgs potential into a steep bowl with a minimum at zero field, yielding no vacuum expectation value. As the universe cooled, this potential drooped into its familiar Mexican-hat shape, with a central peak surrounded by a ring of minima, where the Higgs field sits today. But did the Higgs field pass through an intermediate stage, with a “bump” separating the inner minimum from the ring?

The answer depends on the strength of the Higgs self-coupling, λ3, which governs the trilinear coupling where three Higgs-boson lines meet at a single vertex in a Feynman diagram. But λ3 is not yet measured. The most recent joint ATLAS and CMS analysis excludes values outside of –0.71 to 6.1 times its expected value in the Standard Model with 95% confidence.

In the Standard Model, the vacuum smoothly rolled from zero Higgs field to its new minimum in the outer ring. But if λ3 were at least 50% stronger than in the Standard Model, this smooth “crossover” phase transition may have been prevented by an intermediate bump. The vacuum would then have experienced a strong first-order phase transition (FOPT), like ice melting or water boiling at everyday pressures. As the universe cooled, regions of space would have tunnelled into the new vacuum, forming bubbles that expanded and merged. These bubble-wall collisions, combined with additional processes beyond the Standard Model that violate the combined charge–parity (CP) symmetry, could have contributed to the observed excess of matter over antimatter – one of the deepest mysteries of modern physics. In the early universe there appears to have been an excess of baryons over antibaryons of roughly one part in a billion, leaving the surplus we observe today after the rest annihilated into photons.

The most direct probe of λ3 comes from Higgs-boson pair production (HH). HH production happens most often by the fusion of gluons from the colliding protons to create a top-quark loop that emits either two Higgs bosons or one Higgs boson splitting into two, yielding sensitivity to λ3.

HH production happens only once for every thousand Higgs bosons produced in the LHC. Searches for this process are already underway, with analyses of the Run 2 dataset by the ATLAS and CMS collaborations showing that a signal 2.5 times larger than the Standard Model expectation is already excluded. This progress far exceeds early expectations, suggesting that the HL-LHC may finally bring λ3 within experimental reach, clarifying the shape of the Higgs potential near its current minimum (see “Constraining the Higgs potential” figure).

Measuring λ3 at the HL-LHC would shed light on whether the Higgs potential follows the Standard Model prediction (black line) or alternative shapes (dashed lines), which may arise from physics beyond the Standard Model (BSM). The corresponding sensitivity can be illustrated through two complementary approaches: one based on HH production, assuming no effects beyond λ3 and providing a largely model-independent view near the potential’s minimum (red bands); and an approach that incorporates higher-order effects, which extend the reach over a broader range of the Higgs field (blue bands).

Since the previous update of the European Strategy for Particle Physics, the projected sensitivity has vastly improved. The combined ATLAS and CMS results are now expected to yield a discovery significance exceeding 7σ, should HH production occur at the Standard Model rate. By the end of the HL-LHC programme, the two experiments are expected to determine λ3 with a 1σ uncertainty of about 30% – enough to exclude the considered BSM potentials at the 95% confidence level if the self-coupling matches the Standard Model prediction.

What lurks beyond the Standard Model?

Puzzles such as the origin of dark matter and the nature of neutrino masses suggest that new physics must lie beyond the Standard Model. With greatly expanded data sets at the HL-LHC, new phenomena may become detectable as resonant peaks from undiscovered particles or deviations in precision observables.

Spotting a new scalar

As an example, consider a BSM scenario that includes an additional scalar boson “S” that mixes with the Higgs boson but remains blind to other Standard Model fields (see “Spotting a new scalar” figure). S could induce observable differences in λ3 (horizontal axis) and the coupling of the Higgs boson to the Z boson, gHZZ (vertical axis). Both couplings are plotted as multiples of their expected Standard Model values. The figure explores scenarios where gHZZ deviates from its Standard Model value by as little as a tenth of a permille, and where the trilinear self-coupling may lie between 0.5 and 2.5 times its Standard Model value. Such models could underlie deviations from the Standard Model and, for instance, contribute to the matter–antimatter asymmetry of the universe. Combinations of model parameters that could allow for a strong FOPT in the early universe are plotted as black dots.

This example analysis serves to illustrate the complementarity of precision measurements and direct searches at the HL-LHC. The parameter space can be narrowed by measuring the axis variables λ3 and gHZZ (blue and orange bands). Direct searches for S → HH and S → ZZ will be able to probe or exclude many of the remaining models (red and purple regions), leaving room for scenarios in which new physics is almost entirely decoupled from the Standard Model.

What’s next?

What once might have seemed like science fiction has become a milestone in our understanding of nature. When Ursula von der Leyen, president of the European Commission, last visited CERN, she reflected on recent progress in the field.

“When you designed a 27 km underground tunnel where particles would clash at almost the speed of light, many thought you were daydreaming. And when you started looking for the Higgs boson, the chances of success seemed incredibly low, but you always proved the sceptics wrong. Your story is one of progress against all odds.”

Today, at a pivotal moment for particle physics, we are redefining what we believe is possible. Plucked from the ATLAS and CMS collaborations’ inputs to the 2026 update to the European Strategy for Particle Physics (CERN Courier November/December 2025 p23), the analyses described in this article are just a snapshot of what will be possible at the HL-LHC. In close collaboration with the theory community, experimentalists will use the unmatched datasets and detector capabilities of the HL-LHC to explore a rich landscape of anticipated phenomena, including many signatures yet to be imagined.

The future starts now, and it is for us to build.

A leap in technology

Tracking upgrades

The HL-LHC will deliver proton–proton collisions at least five times more intensely than the LHC’s original design. By the end of its lifetime, the HL-LHC is expected to accumulate an integrated dataset of around 3 ab–1 of proton–proton collisions – about six times the data collected during the LHC era.
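
The headline Higgs count follows from a simple estimate (back-of-envelope, assuming a total Higgs production cross section of roughly 60 pb at the HL-LHC collision energy): with 3 ab⁻¹ per experiment,

\[
N_H \;\approx\; 2 \times 3\times10^{6}\ \mathrm{pb}^{-1} \times 60\ \mathrm{pb} \;\approx\; 3.6\times10^{8} ,
\]

in line with the 380 million Higgs bosons quoted for ATLAS and CMS combined.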

ATLAS and CMS are undergoing extensive upgrades to cope with the intense environment created by a “pileup” of up to 200 simultaneous proton–proton interactions per bunch crossing. For this, researchers are building ever more precise particle detectors and developing faster, more intelligent software.

The ATLAS and CMS collaborations will implement a full upgrade of their tracking systems, providing extended detector coverage and improved spatial resolution (see “Tracking upgrades” figure). New capabilities are being added to one or both experiments, such as precision timing layers outside the tracker, a more performant high-granularity forward calorimeter, new muon detectors designed to handle the increased particle flux, and modernised front- and back-end electronics across the calorimeter and muon systems, among other improvements.

Major advances are also being made in data readout, particle reconstruction and event selection. These include track reconstruction capabilities in the trigger and a significantly increased latency, allowing for more advanced decisions about which collisions to keep for offline analysis. Novel selection techniques are also emerging to handle very high event rates with minimal event content, along with AI-assisted methods for identifying anomalous events already in the first stages of the trigger chain.

Finally, detector advancements go hand-in-hand with innovation in algorithms. The reconstruction of physics objects is being revolutionised by higher detector granularity, precise timing, and the integration of machine learning and hardware accelerators such as modern GPUs. These developments will significantly enhance the identification of charged-particle tracks, interaction vertices, b-quark-initiated jets, tau leptons and other signatures – far surpassing the capabilities foreseen when the HL-LHC was first conceived.

Introducing the axion

In pursuit of the QCD axion

There is an overwhelming amount of evidence for the existence of dark matter in our universe. This type of matter is approximately five times more abundant than the matter that makes up everything we observe: ourselves, the Earth, the Milky Way, all galaxies, neutron stars, black holes and any other imaginable structure.

We call it dark because it has not yet been probed through electroweak or strong interactions. We know it exists because it experiences and exerts gravity. That gravity may be the only bridge between dark matter and our own “baryonic” matter is a scenario as plausible as it is intimidating, since gravitational interactions are too weak to produce detectable signals in laboratory-scale experiments, all of which are made of baryonic matter.

However, dark matter may interact with ordinary matter through non-gravitational forces as well, possibly mediated by new particles. Our optimism is rooted in the need for new physics. We also require new mechanisms to generate neutrino masses and the matter–antimatter asymmetry of the universe, and these new mechanisms may be intimately connected to the physics of dark matter. This view is reinforced by a surprising coincidence: the abundances of baryonic and dark matter are of the same order of magnitude, a fact that is difficult to explain without invoking a non-gravitational connection between the two sectors.

It may be that we have not yet detected dark matter simply because we are not looking in the right place. Like good sailors, we first ask how far the boundaries of the territory to be explored extend. Cosmological and astrophysical observations allow dark-matter masses ranging from ultralight values of order 10–22 eV up to masses of the order of thousands of solar masses. The lower bound arises from the requirement that the dark-matter de Broglie wavelength not exceed the size of the smallest gravitationally bound structures, dwarf galaxies, such that quantum pressure does not suppress their formation (see “Leo P” image). The upper limit can be understood from the requirement that dark matter behave as a smooth, effectively collision-less medium on the scales of these small astrophysical structures. This leaves us with a range of possibilities spanning about 90 orders of magnitude, a truly overwhelming landscape. Given that our resources, and our own lifetimes, are finite, we guide our expedition through this vast territory both by theoretical motivation and by the capabilities of our experiments.
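To make the lower bound concrete, here is a minimal sketch that estimates the de Broglie wavelength of an ultralight dark-matter particle. The inputs – a mass of 10–22 eV and a velocity of about 100 km/s, typical of motion inside a dwarf galaxy – are illustrative assumptions rather than values quoted above.

# Minimal sketch (Python): de Broglie wavelength of ultralight dark matter.
# The mass (1e-22 eV) and velocity (~100 km/s) are illustrative assumptions.
import math

HBAR_C_EV_M = 197.3269804e-9   # hbar*c in eV*m
C_M_PER_S = 2.99792458e8       # speed of light in m/s
PARSEC_M = 3.0856776e16        # one parsec in metres

def de_broglie_wavelength_pc(mass_eV, velocity_m_s):
    """lambda = 2*pi*hbar / (m*v), returned in parsecs."""
    momentum_eV = mass_eV * (velocity_m_s / C_M_PER_S)    # p*c in eV
    wavelength_m = 2 * math.pi * HBAR_C_EV_M / momentum_eV
    return wavelength_m / PARSEC_M

print(f"{de_broglie_wavelength_pc(1e-22, 1e5):,.0f} pc")  # roughly 1200 pc

The result is of order a kiloparsec, comparable to the size of a dwarf galaxy, so any lighter particle would have a wavelength too large to fit the smallest observed structures – precisely the origin of the lower bound.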


The canonical dark-matter candidate where theoretical motivation and experimental capability coincide is the weakly interacting massive particle. “WIMPs” are among the most theoretically economical dark-matter candidates, as they naturally arise in theories with new physics at the electroweak scale and can achieve the observed relic abundance through weak-scale interactions. The latter requirement implies that the mass of thermal WIMPs must lie above the GeV scale – approximately a nucleon mass. This “Lee–Weinberg” bound arises because lighter particles would not have annihilated fast enough in the early universe, leaving behind far more dark matter than we observe today.

WIMPs can be probed using a wide range of experimental strategies. At high-energy colliders, searches rely on missing transverse energy, providing sensitivity to the production of dark-matter particles or of the mediators that connect the dark and visible sectors. Beam-dump and fixed-target experiments offer complementary sensitivity to light mediators and portal states. Direct-detection experiments measure nuclear recoils in heavy, stable targets, such as the noble liquids xenon and argon, and are sensitive to energy depositions at the keV scale, allowing dark-matter masses down to the light end of the typical WIMP range to be probed with extraordinary sensitivity.

Light dark matter

So far, no conclusive signal has been observed, and the simplest realisations of the WIMP paradigm are becoming increasingly constrained. However, dark matter could be connected to the Standard Model in alternative ways, for example through new force carriers, allowing its mass to fall below the Lee–Weinberg bound. This sub-GeV dark matter, also referred to as light dark matter, appears in highly motivated theoretical frameworks such as asymmetric dark matter, in which an asymmetry between dark-matter particles and antiparticles sets the relic abundance, analogously to the baryon asymmetry that determines the visible matter abundance. In some of the best motivated realisations of this scenario, the dark-matter candidate resides in a confining “hidden sector” (see, for example, “Soft clouds probe dark QCD”). A dark-baryon symmetry may guarantee the stability of such composite dark-matter states, with the baryonic and dark asymmetries being generated by related mechanisms.

Leo P

Dark matter could be even lighter and behave as a wave. This occurs when its mass lies below roughly the eV to 10 eV scale, comparable to the ionisation energy of hydrogen. In this case, its de Broglie wavelength exceeds the typical separation between particles, allowing it to be described as a coherent, classical field. In the ultralight dark-matter regime, the leading candidate is the axion. This particle is a prediction of theories beyond the Standard Model that provide a solution to the strong charge–parity (CP) problem.

In the Standard Model, there is no fundamental reason for CP to be conserved by strong interactions. In fact, two terms in the Lagrangian, of very different origin, contribute to an effective CP-violating angle, which would generically induce an electric dipole moment of hadrons, corresponding phenomenologically to a misalignment of their electromagnetic charge distributions. But remarkably – and this is at the heart of the puzzle – high-precision experiments measuring the neutron electric dipole moment show that this angle cannot be larger than 10–10 radians.

Why is this? To paraphrase Murray Gell-Mann, what is not forbidden tends to occur. This unnaturally precise alignment in the strong sector strongly suggests the presence of a symmetry that forces this angle to vanish.

One of the most elegant and widely studied solutions, proposed by Roberto Peccei and Helen Quinn, consists of extending the Standard Model with a new global symmetry that appears at very high energies and is later broken as the universe cools. Whenever such a symmetry breaks, the theory predicts the appearance of one or more new, extremely light particles. If the symmetry is not perfect, but is slightly disturbed by other effects, this particle is no longer exactly massless and instead acquires a small mass controlled by the symmetry-breaking effects. A familiar example comes from ordinary nuclear physics: pions are light particles because the symmetry that would make them massless is slightly broken by the tiny masses of their constituent quarks.

In this framework, the new light particle is called the axion, independently proposed by Steven Weinberg and Frank Wilczek. The axion has remarkable properties: it naturally drives the unwanted CP-violating angle to zero, and its interactions with ordinary matter are not arbitrary but tightly controlled by the same underlying physics that gives it its tiny mass. Strong-interaction effects predict a narrow, well-defined “target band” relating how heavy the axion is to how strongly it interacts with matter, providing a clear roadmap for current experimental searches (the yellow band in the “In pursuit of the QCD axion” figure).
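As a rough illustration of that band, the short sketch below evaluates two standard benchmark relations – the axion mass m_a ≈ 5.7 μeV × (10^12 GeV/f_a) and the photon coupling g_aγ = (α/2πf_a)|E/N – 1.92|, with E/N = 0 and 8/3 for the commonly used KSVZ and DFSZ benchmark models – at a couple of example values of the Peccei–Quinn scale f_a. These relations and numbers are textbook benchmarks rather than values taken from this article.

# Rough sketch (Python) of the QCD-axion band: mass and photon coupling both
# follow from the Peccei-Quinn scale f_a. The benchmark relations and E/N
# values are standard textbook numbers, not taken from the article.
import math

ALPHA_EM = 1.0 / 137.036                      # fine-structure constant
M_REF_UEV = 5.70                              # m_a ~ 5.70 micro-eV at f_a = 1e12 GeV
E_OVER_N = {"KSVZ": 0.0, "DFSZ": 8.0 / 3.0}   # model-dependent anomaly ratio

def axion_mass_ueV(f_a_GeV):
    """Axion mass in micro-eV for a given decay constant f_a (GeV)."""
    return M_REF_UEV * (1e12 / f_a_GeV)

def photon_coupling_GeV_inv(f_a_GeV, model):
    """|g_agamma| = (alpha / (2*pi*f_a)) * |E/N - 1.92|, in GeV^-1."""
    return ALPHA_EM / (2 * math.pi * f_a_GeV) * abs(E_OVER_N[model] - 1.92)

for f_a in (1e12, 1e11):
    couplings = ", ".join(f"{m}: {photon_coupling_GeV_inv(f_a, m):.1e} GeV^-1"
                          for m in E_OVER_N)
    print(f"f_a = {f_a:.0e} GeV -> m_a = {axion_mass_ueV(f_a):5.1f} ueV ({couplings})")

Heavier axions (smaller f_a) therefore couple more strongly, which is what gives the band in the figure its characteristic diagonal shape.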

An excellent candidate

Axions also emerge as excellent dark-matter candidates. They can account for the observed cosmic dark matter through a purely dynamical mechanism in which the axion field begins to oscillate around the minimum of its potential in the early universe, and the resulting oscillations redshift as non-relativistic dark matter. Inflation is a little-understood rapid expansion of the early universe by more than 26 orders of magnitude in scale factor that cosmologists invoke to explain large-scale correlations in the cosmic microwave background and in cosmic structure. If the Peccei–Quinn symmetry was broken after inflation, the axion field would take random initial values in different regions of space, leading to domains with uncorrelated phases and the formation of cosmic strings. Averaging over these regions removes the freedom to tune the initial angle and makes the axion relic density highly predictive. When the additional axions from cosmic strings and domain walls are included, this scenario points to a well-defined axion mass in the range of tens to a few hundreds of μeV.

Cavity haloscope

There is now a wide array of ingenious experiments, the result of the work of large international collaborations and decades of technological development, that aim to probe the QCD-axion band in parameter space. Despite the many experimental proposals, so far only ADMX, CAPP and HAYSTAC have reached sensitivities close to this target (see “Cavity haloscope” image). These experiments, known as haloscopes, operate under the assumption that axions constitute the dark matter in our universe. In these setups, a high-quality-factor electromagnetic cavity is placed inside a strong magnetic field, in which axions from the dark-matter halo of the Milky Way are expected to convert into photons. The resonant frequency of the cavity is tuned like a radio, scanning across possible axion masses. This technique allows experiments to probe couplings many orders of magnitude weaker than typical Standard Model interactions. However, scaling these resonant experiments to significantly different axion masses is challenging, as a cavity’s resonant frequency is tied to its size. Moving away from its optimal axion-mass range either forces the cavity volume to become very small, reducing the signal power, or requires geometries that are difficult to realise in a laboratory environment.
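The size argument can be made concrete with a short estimate: an axion of mass m_a converts into a photon of frequency ν = m_a c²/h, and the lowest (TM010) mode of a cylindrical cavity resonates at that frequency only for a specific radius, R = 2.405 c/(2πν). The example masses in the sketch below are illustrative assumptions rather than values targeted by any particular experiment mentioned here.

# Quick estimate (Python): why the cavity must shrink at higher axion masses.
# The example masses are illustrative assumptions.
import math

EV_TO_HZ = 2.4179892e14    # frequency corresponding to 1 eV (E/h)
C = 2.99792458e8           # speed of light, m/s
J0_FIRST_ZERO = 2.404826   # first zero of Bessel J0, which sets the TM010 mode

def cavity_for_axion(mass_ueV):
    """Return (resonant frequency in GHz, matching cavity radius in cm)."""
    nu = mass_ueV * 1e-6 * EV_TO_HZ                  # photon frequency in Hz
    radius = J0_FIRST_ZERO * C / (2 * math.pi * nu)  # R = 2.405*c / (2*pi*nu)
    return nu / 1e9, radius * 100

for m in (2.5, 40.0, 100.0):                         # axion masses in micro-eV
    f_GHz, r_cm = cavity_for_axion(m)
    print(f"m_a = {m:5.1f} ueV -> {f_GHz:6.2f} GHz, radius ~ {r_cm:5.2f} cm")

Going from a few μeV to 100 μeV shrinks the matching radius from roughly 20 cm to under half a centimetre, and the usable cavity volume – and with it the signal power – falls even faster, which is the scaling problem that motivates the novel concepts described next.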

Other experimental approaches, such as helioscopes, focus on searching for axions produced in the Sun. These experiments mainly probe the higher-mass region of the QCD-axion band and also place strong constraints on axion-like particles (ALPs). ALPs are also light fields that arise from the breaking of an almost exact global symmetry, but unlike the QCD axion, the symmetry is not explicitly broken by strong-interaction effects, so their masses and couplings are not tied together by a fixed relation. While such particles do not solve the strong CP problem, they can be viable dark-matter candidates and naturally arise in many extensions of the Standard Model, especially in theories with additional global symmetries and in quantum-gravity frameworks.

Among the proposed experimental efforts to observe post-inflation QCD axions, two stand out as especially promising: MADMAX and ALPHA. Both are haloscopes, designed to detect QCD axions in the galactic dark-matter halo. Neither is traditional. Each uses a novel detector concept to target higher axion masses – a regime that is especially well motivated if the Peccei–Quinn symmetry is broken after inflation (see “In pursuit of the post-inflation axion”).

We are living in an exciting era for dark-matter research. Experimental efforts continue and remain highly promising. A large and well-motivated region of parameter space is likely to become accessible in the near future, and upcoming experiments are projected to probe a significant fraction of the QCD-axion parameter space over the coming decades. Clear communication, creativity, open-mindedness in exploring new ideas, and strong coordination and sharing of expertise across different physics communities will be more important than ever.
