Among the fundamental particles, tau leptons occupy a curious spot. They participate in the same sort of reactions as their lighter lepton cousins, electrons and muons, but their large mass means that they can also decay into a shower of pions and that they interact more strongly with the Higgs boson. In many new-physics theories, Higgs-like particles – beyond the Higgs boson of the Standard Model – are introduced to explain the mass hierarchy or to serve as possible portals to dark matter.
Because of their large mass, tau leptons are especially useful in searches for new physics. However, identifying taus is challenging, as in most cases they decay into a final state of one or more pions and an undetected neutrino. A crucial step in the identification of a tau lepton in the CMS experiment is the hadrons-plus-strips (HPS) algorithm. In the standard CMS reconstruction, a minimum momentum threshold of 20 GeV is imposed, such that the taus have enough momentum to make their decay products fall into narrow cones. However, this requirement reduces sensitivity to low-momentum taus. As a result, previous searches for a Higgs-like resonance φ decaying into two tau leptons required a φ-mass of more than 60 GeV.
The CMS experiment has now been able to extend the φ-mass range down to 20 GeV. To improve sensitivity to low-momentum tau decays, machine learning is used to determine a dynamic cone algorithm that expands the cone size as needed. The new algorithm, requiring one tau decaying into a muon and two neutrinos and one tau decaying into hadrons and a neutrino, is implemented in the CMS Scouting trigger system. Scouting extends CMS’s reach into previously inaccessible phase space by retaining only the most relevant information about the event, and thus facilitating much higher event rates.
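To illustrate the idea, the sketch below contrasts a standard shrinking signal cone, often parametrised as roughly 3 GeV/pT and clipped to a narrow range, with a hypothetical cone whose upper bound is relaxed at low momentum. The parametrisation, bounds and relaxed cap are illustrative assumptions; the actual machine-learned dynamic cone used by CMS is not reproduced here.

```python
# Illustrative sketch of a pT-dependent tau signal cone (not the CMS code).
# The 3 GeV/pT form and the [0.05, 0.10] clipping are assumptions quoted for
# illustration; the ML-based dynamic cone used by CMS differs in detail.

def signal_cone(pt_gev: float, dr_min: float = 0.05, dr_max: float = 0.10) -> float:
    """pT-dependent signal cone: dR = 3 GeV / pT, clipped to [dr_min, dr_max]."""
    return min(max(3.0 / pt_gev, dr_min), dr_max)

# Standard-style cone versus a hypothetical "dynamic" cone whose upper bound
# is relaxed so that soft taus keep their decay products inside the cone.
for pt in (15.0, 20.0, 40.0, 100.0):
    standard = signal_cone(pt)               # capped at dR = 0.10
    dynamic = signal_cone(pt, dr_max=0.25)   # illustrative relaxed cap
    print(f"pT = {pt:5.1f} GeV: standard dR = {standard:.3f}, dynamic dR = {dynamic:.3f}")
```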
The sensitivity of the new algorithm is so high that even the upsilon (Υ) meson, a bound state of the bottom quark and its antiquark, can be seen. Figure 1 shows the distribution of the mass of the visible decay products of tau (Mvis), in this case a muon from one tau lepton and either one or three pions from the other. A clear resonance structure is visible at Mvis = 6 GeV, in agreement with the expectation for the Υ meson. The peak is not at the actual mass of the Υ meson (9.46 GeV) due to the presence of neutrinos in the decay. While Υ→ττ decays have been observed at electron–positron colliders, this marks the first evidence at a hadron collider and serves as an important benchmark for the analysis.
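The visible mass is simply the invariant mass of the muon and pion four-momenta, with the neutrinos ignored. A minimal sketch with toy kinematics invented purely for illustration shows the calculation; because the neutrinos carry away energy, Mvis reconstructs below the mass of the parent resonance.

```python
import math

def four_vector(pt, eta, phi, mass):
    """Return (E, px, py, pz) for given transverse momentum, eta, phi and mass."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return e, px, py, pz

def invariant_mass(vectors):
    e, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Toy kinematics (purely illustrative, not CMS data): a muon from one tau and
# a charged pion from the other. The neutrinos are invisible, so the visible
# mass falls below the 9.46 GeV Upsilon mass.
mu = four_vector(pt=3.0, eta=0.0, phi=0.0, mass=0.105658)
pi = four_vector(pt=3.0, eta=0.0, phi=math.pi, mass=0.139570)
print(f"Mvis = {invariant_mass([mu, pi]):.2f} GeV")  # about 6 GeV for this toy event
```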
Given the high sensitivity of the new algorithm, CMS performed a search for a possible resonance in the range between 20 and 60 GeV using the data recorded in the years 2022 and 2023, and set competitive exclusion limits (see figure 2). For the 2024 and 2025 data taking, the algorithm was further improved, enhancing the sensitivity even more.
Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024.
Walter was born in Dortmund on 14 July 1942. He studied physics in Hamburg and Heidelberg, achieving his diploma on solid-state detectors in 1969 and his doctoral thesis on transfer reactions on samarium isotopes in 1973. He spent the years from 1973 to 1975 working on transfer reactions of rare-earth elements as a postdoc in Pittsburgh under Bernie Cohen, after which he continued his nuclear-physics experiments at the Jülich cyclotron.
With the decision to build the “Cooler Synchrotron” (COSY) at Forschungszentrum Jülich (FZJ), he terminated his work on transfer reactions, summarised it in a review article, and switched to the field of medium-energy physics. At the end of 1985 he undertook a research stay at CERN, contributing to the PS185 and the JETSET (PS202) experiments at the antiproton storage ring LEAR, while also collaborating with Swedish partners at the CELSIUS synchrotron in Uppsala. In 1986 he habilitated at Ruhr University Bochum, where he was granted an APL professorship in 1996.
With the experience gained at CERN, Oelert proposed the construction of the international COSY-11 experiment as spokesperson, leading the way on studies of threshold production with full acceptance for the reaction products. From first data in 1996, COSY-11 operated successfully for 11 years, producing important results in several meson-production channels.
At CERN, Walter proposed the production of antihydrogen in the interaction of the antiproton beam with a xenon cluster target – the last experiment before the shutdown of LEAR. The experiment was performed in 1995, resulting in the production of nine antihydrogen atoms. This result was an important factor in the decision by CERN management to build the Antiproton Decelerator (AD). In order to continue antihydrogen studies, he received substantial support from Jülich for a partnership in the new ATRAP experiment, which aimed to test CPT symmetry through antihydrogen spectroscopy.
Walter retired in 2008, but kept active in antiproton activities at the AD for more than 10 years, during which time he was affiliated with the Johannes Gutenberg University of Mainz. He was one of the main driving forces on the way to the extra-low-energy antiproton ring (ELENA), which was finally built within time and financial constraints, and drastically improved the performance of the antimatter experiments. He also received a number of honours, notably the Merentibus Medal of the Jagiellonian University of Kraków, and was elected as an external member of the Polish Academy of Arts and Sciences.
Walter’s personality – driven, competent, visionary, inspiring, open minded and caring – was the type of glue that made proactive, successful and happy collaborations.
Grigory Vladimirovich Domogatsky, spokesman of the Baikal Neutrino Telescope project, passed away on 17 December 2024 at the age of 83.
Born in Moscow in 1941, Domogatsky obtained his PhD in 1970 from Moscow Lomonosov University and then worked at the Moscow Lebedev Institute. There, he studied the processes of the interaction of low-energy neutrinos with matter and neutrino emission during the gravitational collapse of stars. His work was essential for defining the scientific programme of the Baksan Neutrino Observatory. Already at that time, he had put forward the idea of a network of underground detectors to register neutrinos from supernovae, a programme realised decades later by the current SuperNova Early Warning System, SNEWS. Together with his co-author Dmitry Nadyozhin, he showed that neutrinos released in star collapses are drivers in the formation of isotopes such as Li-7, Be-8 and B-11 in the supernova shell, and that these processes play an important role in cosmic nucleosynthesis.
In 1980 Domogatsky obtained his doctor of science (equivalent to the Western habilitation) and in the same year became the head of the newly founded Laboratory of Neutrino Astrophysics at High Energies at the Institute for Nuclear Research of the Russian Academy of Sciences, INR RAS. The central goal of this laboratory was, and is, the construction of an underwater neutrino telescope in Lake Baikal, a task to which he devoted all his life from that point on. He created a team of enthusiastic young experimentalists, starting site explorations in the following year and obtaining first physics results with test configurations later in the 1980s. At the end of the 1980s, the plan for a neutrino telescope comprising about 200 photomultipliers (NT200) was born, and realised together with German collaborators in the 1990s. The economic crisis following the breakdown of the Soviet Union would surely have ended the project if not for Domogatsky’s unshakable will and strong leadership. With the partial configuration of the project deployed in 1994, first neutrino candidates were identified in 1996: the proof of concept for underwater neutrino telescopes had been delivered.
He shaped the image of the INR RAS and the field of neutrino astronomy
NT200 was shut down a decade ago, by which time a new cubic-kilometre telescope in Lake Baikal was already under construction. This project was christened Baikal–GVD, with GVD standing for Gigaton Volume Detector, though these letters could equally well denote Domogatsky’s initials. Thus far it has reached about half of the size of the IceCube neutrino telescope at the South Pole.
Domogatsky was born to a family of artists and was surrounded by an artistic atmosphere whilst growing up. His grandfather was a famous sculptor, his father a painter, woodcrafter and book illustrator. His brother followed in his father’s footsteps, while Grigory himself married Svetlana, an art historian. He possessed an outstanding literary, historical and artistic education, and all who met him were struck by his knowledge, his old-fashioned noblesse and his intellectual charm.
Domogatsky was a corresponding member of the Russian Academy of Sciences and the recipient of many prestigious awards, most notably the Bruno Pontecorvo Prize and the Pavel Cherenkov Prize. With his leadership in the Baikal project, Grigory Domogatsky shaped the scientific image of the INR RAS and the field of neutrino astronomy. He will be remembered as a carefully weighing scientist, as a person of incredible stamina, and as the unforgettable father figure of the Baikal project.
Elena Accomando, a distinguished collider phenomenologist, passed away on 7 January 2025.
Elena received her laurea in physics from the Sapienza University of Rome in 1993, followed by a PhD from the University of Torino in 1997. Her early career included postdoctoral positions at Texas A&M University and the Paul Scherrer Institute, as well as a staff position at the University of Torino. In 2009 she joined the University of Southampton as a lecturer, earning promotions to associate professor in 2018 and professor in 2022.
Elena’s research focused on the theory and phenomenology of particle physics at colliders, searching for new forces and exotic supersymmetric particles at the Large Hadron Collider. She explored a wide range of Beyond the Standard Model (BSM) scenarios at current and future colliders. Her work included studies of new gauge bosons such as the Z′, extra-dimensional models, and CP-violating effects in BSM frameworks, as well as dark-matter scattering on nuclei and quantum corrections to vector-boson scattering. She was also one of the authors of “WPHACT”, a Monte Carlo event generator developed for four-fermion physics at electron–positron colliders, which remains a valuable tool for precision studies. Elena investigated novel signatures in decays of the Higgs boson, aiming to uncover deviations from Standard Model expectations, and was known for connecting theory with experimental applications, proposing phenomenological strategies that were both realistic and impactful. She was well known as a research collaborator at CERN and other international institutions.
She authored the WPHACT Monte Carlo event generator that remains a valuable tool for precision studies
Elena played an integral role in shaping the academic community at Southampton and was greatly admired as a teacher. Her remarkable professional achievements were paralleled by strength and optimism in the face of adversity. Despite her long illness, she remained a positive presence, planning ahead for her work and her family. Her colleagues and students remember her as a brilliant scientist, an inspiring mentor and a warm and compassionate person. She will also be missed by her longstanding colleagues from the CMS collaboration at Rutherford Appleton Laboratory.
Elena is survived by her devoted husband, Francesco, and their two daughters.
Shoroku Ohnuma, who made significant contributions to accelerator physics in the US and Japan, passed away on 4 February 2024, at the age of 95.
Born on 19 April 1928, in Akita Prefecture, Japan, Ohnuma graduated from the University of Tokyo’s Physics Department in 1950. After studying with Yoichiro Nambu at Osaka University, he came to the US as a Fulbright scholar in 1953, obtaining his doctorate from the University of Rochester in 1956. He maintained a lifelong friendship with neutrino astrophysicist Masatoshi Koshiba, who received his degree from Rochester in the same period. A photo published in the Japanese national newspaper Asahi Shimbun shows him with Koshiba, Richard Feynman and Nambu when the latter won the Nobel Prize in Physics – Ohnuma would often joke that he was the only one pictured who did not win a Nobel.
Ohnuma spent three years doing research at Yale University before returning to Japan to teach at Waseda University. In 1962 he returned to the US with his wife and infant daughter Keiko to work on linear accelerators at Yale. In 1970 he joined the Fermi National Accelerator Laboratory (FNAL), where he contributed significantly to the completion of the Tevatron before moving to the University of Houston in 1986, where he worked on the Superconducting Super Collider (SSC). While he claimed to have moved to Texas because his work at FNAL was done, he must have had high hopes for the SSC, which the first Bush administration slated to be built in Dallas in 1989. Young researchers who worked with him, including me, made up an energetic but inexperienced working team of accelerator researchers. With many FNAL-linked people such as Helen Edwards in the leadership of SSC, we frequently invited professor Ohnuma to Dallas to review the overall design. He was a mentor to me for more than 35 years after our work together at the Texas Accelerator Center in 1988.
Ohnuma reviewed accelerator designs and educated students and young researchers in the US and Japan
After Congress cancelled the SSC in 1993, Ohnuma continued his research at the University of Houston until 1999. Starting in the late 1990s, he visited the JHF (later J-PARC) accelerator group led by Yoshiharu Mori at the University of Tokyo’s Institute for Nuclear Study almost every year. As a member of JHF’s first International Advisory Committee, he reviewed the accelerator design and educated students and young researchers, whom he considered his grandchildren. Indeed, his guidance had grown gentler and more grandfatherly.
In 2000, in semi-retirement, Ohnuma settled at the University of Hawaii, where he continued to frequent the campus most weekdays until his death. Even after the loss of his wife in 2021, he continued walking every day, taking a bus to the university, doing volunteer work at a senior facility, and visiting the Buddhist temple every Sunday. His interest in Zen Buddhism had grown after retirement, and he resolved to copy the Heart Sutra a thousand times on rice paper, with the sumi brush and ink prepared from scratch. We were entertained by his panic at having nearly achieved his goal too soon before his death. The Heart Sutra is a foundational text in Zen Buddhism, chanted on every formal occasion. Undertaking to copy it 1000 times exemplified his considerable tenacity and dedication. Whatever he undertook in the way of study, he was unhurried and unworried, optimistic and cheerful, and persistent.
Particle-beam technology has wide applications in science and industry. Specifically, high-energy x-ray production is being investigated for FLASH radiotherapy, 14 MeV neutrons are being produced for fusion energy production, and compact electron accelerators are being built for medical-device sterilisation. In each instance it is critical to guarantee that the particle beam is delivered to the end user with the correct makeup, and also to ensure that secondary particles created from scattering interactions are shielded from technicians and sensitive equipment. There is no precise way to predict the random walk of any individual particle as it encounters materials and alloys of different shapes within a complicated apparatus. Monte Carlo methods simulate the random paths of many millions of independent particles, revealing the tendencies of these particles in aggregate. Assessing shielding effectiveness is particularly challenging computationally, as the very nature of shielding means that only a tiny fraction of simulated particles ever emerges from the shield.
A common technique for shielding calculations takes these random-walk simulations a step further by applying variance-reduction techniques. These introduce biases into the simulation in a controlled way to increase the number of particles emerging from the shielding, while keeping the result statistically unbiased. In some regions within the shielding, particles are split into independent “daughter” particles with independent pathways but some common history. Each daughter is assigned a statistical weight, so that the expected flux of particles is unchanged. In this way, it is possible to predict the behaviour of a one-in-a-million event without having to simulate one million particle trajectories. The performance of these techniques is shown in figure 2.
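A minimal sketch of the two weight-preserving moves described above – splitting particles that enter an important region and playing Russian roulette with those entering an unimportant one – is shown below. The importance ratio, population size and random seed are arbitrary illustrative choices, not any production implementation.

```python
import random

# Minimal sketch of weight-preserving variance reduction (splitting and
# Russian roulette). The geometry, importances and transport physics are
# invented for illustration; production Monte Carlo codes implement far
# more sophisticated schemes.

def split_or_roulette(weights, importance_ratio):
    """Adjust a population crossing into a region whose importance changed by
    `importance_ratio`, keeping the total statistical weight constant in
    expectation."""
    out = []
    for weight in weights:
        if importance_ratio >= 1.0:
            # Splitting: n copies, each carrying 1/n of the parent weight.
            n = int(importance_ratio)
            out.extend([weight / n] * n)
        else:
            # Russian roulette: survive with probability p, weight boosted by 1/p.
            p = importance_ratio
            if random.random() < p:
                out.append(weight / p)
    return out

random.seed(1)
population = [1.0] * 1000                  # unit-weight source particles
deep = split_or_roulette(population, 4.0)  # entering a more important (deep-shield) region
print(len(deep), sum(deep))                # 4000 particles, total weight still 1000.0
```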
These kinds of simulations take on new importance with the global race to develop fusion reactors for energy production. Materials will be exposed to conditions they have never seen before, mere feet from the fusion reactions that sustain stars. It is imperative to understand the neutron flux from fusion reactions, and how it affects critical components, if these facilities are to operate sustainably and meet our ever-growing energy needs. Monte Carlo simulation packages are capable of both distributed-memory (MPI) and shared-memory (OpenMP) parallel computation on the world’s largest supercomputers, engaging hundreds of thousands of cores at once. This enables simulations of billions of particle histories. Together with variance reduction, these powerful simulation tools enable precise estimation of particle fluxes in even the most deeply shielded regions.
RadiaSoft offers modelling of neutron radiation transport with parallel computation and variance-reduction capabilities running on Sirepo, its browser-based interface. Examples of fusion tokamak simulations can be seen above. RadiaSoft is also available for comprehensive consultation in x-ray production, radiation shielding and dose-delivery simulations across a wide range of applications.
Last June, the United Nations and UNESCO proclaimed 2025 the International Year of Quantum Science and Technology (IYQ): here is why it really matters.
Everything started a century ago, when scientists like Niels Bohr, Max Planck and Wolfgang Pauli, but also Albert Einstein, Erwin Schrödinger and many others, came up with ideas that would revolutionise our description of the subatomic world. This is when physics transitioned from being a deterministic discipline to a largely probabilistic one, at least at subatomic scales. Bold predictions of strange behaviours began to attract the attention of an ever-larger part of the scientific community, and continued to appear decade after decade. The most popular are particle entanglement, the superposition of states and the tunnelling effect. These are also some of the most impactful quantum effects, in terms of the technologies that emerged from them.
One hundred years on, the scientific community has become somewhat acclimatised to observing and measuring the probabilistic nature of particles and quanta. Lasers, MRI and even sliding doors would not exist without the pioneering studies of quantum mechanics. However, it is widely held that we are now on the cusp of a second quantum revolution.
“International years” are proclaimed to raise awareness, focus global attention, encourage cooperation and mobilise resources towards a certain topic or research domain. The International Year of Quantum also aims to reverse the approach taken with artificial intelligence (AI), a technology that arrived faster than any attempt to educate and prepare the public for its adoption. This has bred considerable scepticism towards AI, which many feel is too complex and leaves its users with a sense of lost control.
The second quantum revolution has begun and we are at the dawn of future powerful applications
The second quantum revolution has begun in recent years and, while we are rapidly moving from simply using the properties of the quantum world to controlling individual quantum systems, we are still at the dawn of future powerful applications. Some quantum sensors are already in use, and quantum cryptography is quite well understood. However, quantum bits still require further study, and the exploration of other quantum domains has barely begun.
Unlike with AI, there is still time to push for a more inclusive approach to the development of this new technology. During the international year, hundreds of events, workshops and initiatives will emphasise the role of global collaboration in the development of accessible quantum technologies. Through initiatives like the Quantum Technology Initiative (QTI) and the Open Quantum Institute (OQI), CERN is actively contributing not only to scientific research but also to promoting the advancement of its applications for the benefit of society.
The IYQ inaugural event was organised at UNESCO Headquarters in Paris in February 2025. At CERN, this year’s public event season is devoted to the quantum year, and will present talks, performances, a film festival and more. The full programme is available at visit.cern/events.
CERN’s Large Hadron Collider continues to deliver surprises. While searching for additional Higgs bosons, the CMS collaboration may have instead uncovered evidence for the smallest composite particle yet observed in nature – a “quasi-bound” hadron made up of the most massive and shortest-lived fundamental particle known to science and its antimatter counterpart. The findings, which do not yet constitute a discovery claim and could also be susceptible to other explanations, were reported this week at the Rencontres de Moriond conference in the Italian Alps.
Almost all of the Standard Model’s shortcomings motivate the search for additional Higgs bosons. Their properties are usually assumed to be simple. Much as the 125 GeV Higgs boson discovered in 2012 appears to interact with each fundamental fermion with a strength proportional to the fermion’s mass, theories postulating additional Higgs bosons generally expect them to couple more strongly to heavier quarks. This puts the singularly massive top quark at centre stage. If an additional Higgs boson has a mass greater than about 345 GeV and can therefore decay to a top quark–antiquark pair, this should dominate the way it decays inside detectors. Hunting for bumps in the invariant mass spectrum of top–antitop pairs is therefore often considered to be the key experimental signature of additional Higgs bosons above the top–antitop production threshold.
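This proportionality can be made explicit with the Standard Model Yukawa relation. Taking v ≈ 246 GeV and indicative quark masses (illustrative numbers, not taken from the analysis), the top quark’s coupling is close to unity, which is why heavy-Higgs searches focus on the top–antitop final state:

```latex
% SM Yukawa couplings (v \approx 246 GeV; quark masses are indicative values):
y_f = \frac{\sqrt{2}\,m_f}{v},\qquad
y_t \simeq \frac{\sqrt{2}\times 172.5~\mathrm{GeV}}{246~\mathrm{GeV}} \approx 0.99,\qquad
y_b \simeq \frac{\sqrt{2}\times 4.18~\mathrm{GeV}}{246~\mathrm{GeV}} \approx 0.024.
```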
The CMS experiment has observed just such a bump. Intriguingly, however, it is located at the lower limit of the search, right at the top-quark pair production threshold itself, leading CMS to also consider an alternative hypothesis long considered difficult to detect: a top–antitop quasi-bound state known as toponium (see “Threshold excess” figure).
The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC
“When we started the project, toponium was not even considered as a background to this search,” explains CMS physics coordinator Andreas Meyer (DESY). “In our analysis today we are only using a simplified model for toponium – just a generic spin-0 colour-singlet state with a pseudoscalar coupling to top quarks. The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC.”
Though other explanations can’t be ruled out, CMS finds the toponium hypothesis to be sufficient to explain the observed excess. The size of the excess is consistent with the latest theoretical estimate of the cross section to produce pseudoscalar toponium of around 6.4 pb.
“The cross section we obtain for our simplified hypothesis is 8.8 pb with an uncertainty of about 15%,” explains Meyer. “One can infer that this is significantly above five sigma.”
The smallest hadron
If confirmed, toponium would be the final example of quarkonium – a term for quark–antiquark states formed from heavy charm, bottom and perhaps top quarks. Charmonium (charm–anticharm) mesons were discovered at SLAC and Brookhaven National Laboratory in the November Revolution of 1974. Bottomonium (bottom–antibottom) mesons were discovered at Fermilab in 1977. These heavy quarks move relatively slowly compared to the speed of light, allowing the strong interaction to be modelled by a static potential as a function of the separation between them. When the quarks are far apart, the potential is proportional to their separation due to the self-interacting gluons forming an elongating flux tube, yielding a constant force of attraction. At close separations, the potential is due to the exchange of individual gluons and is Coulomb-like in form, inversely proportional to separation, leading to an inverse-square force of attraction. This is the domain where compact quarkonium states are formed, in a near-perfect QCD analogy to positronium, wherein an electron and a positron are bound by photon exchange. The Bohr radii of the ground states of charmonium and bottomonium are approximately 0.3 fm and 0.2 fm, and bottomonium is thought to be the smallest hadron yet discovered. Given its larger mass, toponium’s Bohr radius would be an order of magnitude smaller.
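The two regimes described above are often summarised by a Cornell-type potential; the coefficient quoted below is a typical fitted value, given only for orientation:

```latex
% Cornell-type static potential between a heavy quark and antiquark:
% one-gluon exchange at short distance, linear confinement at long distance.
V(r) \;\simeq\; -\frac{4}{3}\,\frac{\alpha_s}{r} \;+\; \kappa\, r,
\qquad \kappa \approx 0.9~\mathrm{GeV/fm}\ \text{(typical fitted string tension)}.
```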
For a long time it was thought that toponium bound states were unlikely to be detected in hadron–hadron collisions. The top quark is the most massive and the shortest-lived of the known fundamental particles. It decays into a bottom quark and a real W boson in the time it takes light to travel just 0.1 fm, leaving little time for a hadron to form. Toponium would be unique among quarkonia in that its decay would be triggered by the weak decay of one of its constituent quarks rather than the annihilation of its constituent quarks into photons or gluons. Toponium is expected to decay at twice the rate of the top quark itself, with a width of approximately 3 GeV.
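The quoted width follows from simple arithmetic, assuming the Standard Model top-quark width of roughly 1.4 GeV:

```latex
% Width of a state of two independently decaying top quarks,
% assuming the SM top-quark width \Gamma_t \approx 1.4 GeV:
\Gamma_{t\bar{t}} \;\approx\; 2\,\Gamma_t \;\approx\; 2 \times 1.4~\mathrm{GeV} \;\approx\; 2.8~\mathrm{GeV}.
```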
CMS first saw a 3.5 sigma excess in a 2019 search studying the mass range above 400 GeV, based on 35.9 fb−1 of proton–proton collisions at 13 TeV from 2016. Now armed with 138 fb–1 of collisions from 2016 to 2018, the collaboration extended the search down to the top–antitop production threshold at 345 GeV. Searches are complicated by the possibility that quantum interference between background and Higgs signal processes could generate an experimentally challenging peak–dip structure with a more or less pronounced bump.
“The signal reported by CMS, if confirmed, could be due either to a quasi-bound top–antitop meson, commonly called ‘toponium’, or possibly an elementary spin-zero boson such as appears in models with additional Higgs bosons, or conceivably even a combination of the two,” says theorist John Ellis of King’s College London. “The mass of the lowest-lying toponium state can be calculated quite accurately in QCD, and is expected to lie just below the nominal top–antitop threshold. However, this threshold is smeared out by the short lifetime of the top quark, as well as the mass resolution of an LHC detector, so toponium would appear spread out as a broad excess of events in the final states with leptons and jets that generally appear in top decays.”
Quantum numbers
An important task of the analysis is to investigate the quantum numbers of the signal. It could be a scalar particle, like the Higgs boson discovered in 2012, or a pseudoscalar particle – a different type of spin-0 object with odd rather than even parity. To measure its spin-parity, CMS studied the angular correlations of the top-quark-pair decay products, which retain information on the original quantum state. The decays bear all the experimental hallmarks of a pseudoscalar particle, consistent with toponium (see “Angular analysis” figure) or the pseudoscalar Higgs bosons common to many theories featuring extended Higgs sectors.
“The toponium state produced at the LHC would be a pseudoscalar boson, whose decays into these final states would have characteristic angular distributions, and the excess of events reported by CMS exhibits the angular correlations expected for such a pseudoscalar state,” explains Ellis. “Similar angular correlations would be expected in the decays of an elementary pseudoscalar boson, whereas scalar-boson decays would exhibit different angular correlations that are disfavoured by the CMS analysis.”
Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery
Two main challenges now stand in the way of definitively identifying the nature of the excess. The first is to improve the modelling of the creation of top-quark pairs at the LHC, including the creation of bound states at the threshold. The second challenge is to obtain consistency with the ATLAS experiment. “ATLAS had similar studies in the past but with a more conservative approach on the systematic uncertainties,” says ATLAS physics coordinator Fabio Cerutti (LBNL). “This included, for example, larger uncertainties related to parton showers and other top-modelling effects. To shed more light on the CMS observation, be it a new boson, a top quasi-bound state, or some limited understanding of the modelling of top–antitop production at threshold, further studies are needed on our side. We have several analysis teams working on that. We expect to have new results with improved modelling of the top-pair production at threshold and additional variables sensitive to both a new pseudoscalar boson and a top quasi-bound state very soon.”
Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery.
“Discovering toponium 50 years after the November Revolution would be an unanticipated and welcome golden anniversary present for its charmonium cousin that was discovered in 1974,” concludes Ellis. “The prospective observation and measurement of the vector state of toponium in e+e– collisions around 350 GeV have been studied in considerable theoretical detail, but there have been rather fewer studies of the observability of pseudoscalar toponium at the LHC. In addition to the angular correlations observed by CMS, the effective production cross section of the observed threshold effect is consistent with non-relativistic QCD calculations. More detailed calculations will be desirable for confirmation that another quarkonium family member has made its appearance, though the omens are promising.”
Just like particle physics, cosmology has its own standard model. It too is powerful in prediction, and brings with it new mysteries and profound implications. The first of these was the realisation in 1917 that a homogeneous and isotropic universe must be expanding. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he labelled his greatest blunder when Edwin Hubble provided observational proof of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.
On large scales the dominant motion of galaxies is the Hubble flow, the expansion of the fabric of space itself
The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.
Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.
Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing its end-to-end predictions, and the model now seems to be failing the most important: predicting the expansion rate of the universe.
One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).
At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
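A minimal sketch of this fit, using synthetic velocities and distances generated around an assumed H0 of 70 km/s/Mpc, illustrates the procedure:

```python
import numpy as np

# Minimal sketch: H0 is the gradient of a straight-line fit of recession
# velocity against distance for galaxies deep in the Hubble flow.
# The "data" below are synthetic, generated around an assumed H0 = 70 km/s/Mpc.
rng = np.random.default_rng(0)
true_h0 = 70.0                                  # km/s/Mpc (toy assumption)
distances = rng.uniform(100.0, 600.0, size=50)  # Mpc
velocities = true_h0 * distances + rng.normal(0.0, 500.0, size=50)  # km/s, with scatter

# Least-squares fit of v = H0 * d through the origin.
h0_fit = np.sum(distances * velocities) / np.sum(distances**2)
print(f"Fitted H0 = {h0_fit:.1f} km/s/Mpc")
```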
Receding supernovae
While high-precision spectroscopy allows recession velocity to be precisely measured using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities, such as cepheids and SN Ia, allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are bright enough to be observed deep in the Hubble flow, where precise measurements of H0 are possible. When cepheids are observable in the same galaxies, they can be used to calibrate the SN Ia.
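In practice the inverse-square law is applied through the distance modulus. A minimal sketch, assuming an illustrative SN Ia absolute magnitude of about −19.3:

```python
# Standard-candle distance from the inverse-square law, written as the
# distance modulus m - M = 5 log10(d / 10 pc). The absolute magnitude
# M ~ -19.3 for SN Ia is an illustrative calibration value.
def luminosity_distance_mpc(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    distance_pc = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return distance_pc / 1.0e6

print(f"{luminosity_distance_mpc(apparent_mag=16.7):.0f} Mpc")  # ~160 Mpc for this toy input
```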
At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distance is known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifting microwave maser photons.
The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.
The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.
First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.
In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated like sound waves until the era of recombination, when the universe had cooled sufficiently for CMB photons to escape the plasma when protons and electrons combined to form neutral atoms. This propagation of inflationary perturbations left a characteristic scale known as the sound horizon in both the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure). The sound horizon is the distance travelled by sound waves in the primordial plasma.
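Explicitly, the sound horizon is the comoving distance a sound wave can travel before recombination, with a sound speed set by the baryon-to-photon ratio (standard expressions, quoted here for orientation):

```latex
% Comoving sound horizon at recombination (redshift z_*): c_s is the sound
% speed of the photon-baryon plasma, H(z) the expansion rate.
r_s \;=\; \int_{z_*}^{\infty} \frac{c_s(z)}{H(z)}\,\mathrm{d}z,
\qquad
c_s(z) \;=\; \frac{c}{\sqrt{3\left(1+\frac{3\rho_b}{4\rho_\gamma}\right)}}.
```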
While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.
The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has dampened with a lack of new events after GW170817. More accurate measurements of the age of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases, and the SH0ES measurement favours less than 13.1 billion years at 2σ significance.
The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden start of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at distances smaller than that of cepheids.
Settling the debate
In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference in the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.
Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near-infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES, as well as the geometric anchor NGC4258.
So far, all observations suggest good agreement with the previous observations by HST. SH0ES used JWST observations to obtain up to a factor 2.5 reduction in the dispersion of the period-luminosity relation for cepheids with no indication of a bias in HST measurements. Most importantly, they were able to exclude the confusion of cepheids with other stars as being responsible for the Hubble tension at 8σ significance.
Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).
The new CCHP results suggest that cepheids may show a bias compared to JAGB and TRGB, though this conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 hosts calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC 4258 (see “The usual suspects” figure).
At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.
Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.
The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or the energy density in the pre-recombination universe, giving the sound waves less time to propagate.
The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.
Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.
The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.
In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.
Quantum fluctuations
The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.
This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.
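Schwinger’s one-loop result is easily reproduced numerically, taking α ≈ 1/137.036:

```latex
% One-loop QED correction (Schwinger, 1948), with \alpha \approx 1/137.036:
a \;=\; \frac{\alpha}{2\pi} \;\approx\; \frac{1}{2\pi\times 137.036} \;\approx\; 0.00116.
```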
As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
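The factor of around 43,000 quoted above is just the squared muon-to-electron mass ratio:

```latex
% Generic quadratic enhancement of heavy new-physics effects:
\left(\frac{m_\mu}{m_e}\right)^{2}
= \left(\frac{105.66~\mathrm{MeV}}{0.511~\mathrm{MeV}}\right)^{2}
\approx 4.3\times 10^{4}.
```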
This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.
g-2 and the Standard Model
Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.
The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.
Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.
Data or the lattice?
Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e+e– annihilation into hadrons and later hadronic tau–lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.
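The relation takes the form of a dispersion integral over the hadronic R-ratio; the standard leading-order expression is quoted below, with K̂(s) a smooth, order-one QED kernel and the threshold and normalisation as conventionally defined:

```latex
% Leading-order HVP contribution from e+e- data: R(s) is the ratio of the
% hadronic to the muonic annihilation cross-section and \hat{K}(s) a smooth,
% order-one QED kernel; s_thr is the hadronic threshold.
a_\mu^{\mathrm{HVP,\,LO}}
= \left(\frac{\alpha\,m_\mu}{3\pi}\right)^{2}
\int_{s_{\mathrm{thr}}}^{\infty}\mathrm{d}s\;\frac{R(s)\,\hat{K}(s)}{s^{2}}.
```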
A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.
In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e+e– annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice–QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.
After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).
In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.
This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.
News from the lattice
Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).
Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.
The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.
With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.
The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice–QCD computation of the longest distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e+e– datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.
These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of the annihilation of e+e– into hadrons is also evolving rapidly.
News from electron–positron annihilation
Many experiments have measured the cross-section for e+e– annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e+e–→ π+π– process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).
The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π– channel. Scrutiny by the Theory Initiative has identified no major problem.
CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e+e– annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ+μ– channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.
In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs τ–→ π–π0ν decay while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.
KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e+e– → π+π– cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e+e– → π+π–. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.
The future
While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, the push to reduce the uncertainty of the HVP contribution to a commensurate degree has shattered a 20-year consensus. This has triggered an intense collective effort that is still in progress.
The prospect of testing the limits of the SM through high-precision measurements generates considerable impetus
New analyses of e+e– data are underway at BaBar, Belle II, BESIII and KLOE, measurements are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of the radiative corrections.
At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.
There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.