
Two strikes for the light sterile neutrino

In the 1990s, the GALLEX and SAGE experiments studied solar electron neutrinos using large tanks of gallium. Every few days a neutrino would transform a neutron into a proton, and every few weeks the experimenters would count the resulting germanium atoms using radiochemical techniques. To control systematic uncertainties in these difficult experiments, they also exposed the detectors to well-understood radioactive sources of electron neutrinos. But both experiments reported 20% fewer electron neutrinos from radioactive decay than expected.

Thus was born the gallium anomaly, which was carefully checked and confirmed by SAGE’s successor, the BEST experiment, as recently as 2022. The most tempting explanation is the existence of a new particle: a “sterile” neutrino flavour that doesn’t interact via any Standard Model interaction. Neutrino oscillations would transform the missing 20% of electron neutrinos into undetectable sterile neutrinos. It would nevertheless have remained invisible to LEP’s famous measurement of the number of neutrino flavours as it would not couple to the Z boson.
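In the simplest “3+1” picture – one extra sterile state alongside the three active neutrinos – the deficit would arise from short-baseline electron-neutrino disappearance with a survival probability of the familiar two-flavour form (a schematic sketch, where θ14 and Δm²41 denote the additional mixing angle and mass splitting):

P(\nu_e \to \nu_e) \approx 1 - \sin^2(2\theta_{14})\,\sin^2\!\left(\frac{\Delta m^2_{41} L}{4E}\right)

For oscillations rapid enough to average out over the detectors, a 20% deficit corresponds to sin²(2θ14) of order 0.4.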

Out the window

This interpretation has been in tension with neutrino-oscillation fits for some time, but a new measurement at the KATRIN experiment likely excludes a sterile-neutrino explanation of the gallium anomaly, says Patrick Huber (Virginia Tech). “There was a strong hint of that from solar neutrinos, but the KATRIN result really nails this window shut. That is not to say the gallium anomaly went away; the experimental evidence here is firm and stands at more than five sigma significance, even under the most conservative assumptions about nuclear cross sections and systematics. So this still requires an explanation, but due to KATRIN we now know for sure it can’t be a vanilla sterile neutrino.”

KATRIN’s main objective is to measure the mass of the electron neutrino (CERN Courier January/February 2020 p28). Though neutrino oscillations imply that the particle is massive, its mass has thus far proved to be below the sensitivity of experiments. The KATRIN experiment, based at the Karlsruhe Institute of Technology in Germany, seeks to remedy this with precise observations of the beta decay of tritium. The heavier the electron neutrino, the lower the maximum energy of the beta-decay electrons. Though KATRIN has not yet been able to uncover evidence for the tiny mass of the electron neutrino, the much larger mass of any sterile neutrino able to explain the gallium anomaly would have made itself felt in precise observations of the endpoint of the energy spectrum of beta-decay electrons thanks to mixing between the neutrino flavours.

After the new KATRIN analysis, the best fit of the sterile neutrino from the gallium anomaly is excluded at 96.6% confidence

“A sterile neutrino would manifest itself as a model-independent kink-like distortion in the beta-decay spectrum, rather than as a deficit in the event rate,” explains lead analyst Thierry Lasserre of the Max-Planck-Institut für Kernphysik, in Heidelberg, Germany. “After the new KATRIN analysis, including 36 million electrons in the last 40 electron volts below the endpoint, the best fit of the sterile neutrino from the gallium anomaly is excluded at 96.6% confidence.”
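Schematically, a fourth, heavier mass state mixing with the electron neutrino would add a second branch to the tritium spectrum (a sketch, with |Ue4|² the assumed mixing, E the electron energy and m4 the sterile mass):

\frac{\mathrm{d}\Gamma}{\mathrm{d}E} \propto \left(1-|U_{e4}|^2\right)\frac{\mathrm{d}\Gamma}{\mathrm{d}E}\Big|_{m_\nu} + |U_{e4}|^2\,\frac{\mathrm{d}\Gamma}{\mathrm{d}E}\Big|_{m_4}

The second branch switches off an energy m4 below the endpoint, producing the kink-like distortion that KATRIN searches for.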

Though heavy sterile neutrinos remain a well-motivated completion of the Standard Model of particle physics with the potential to solve problems in cosmology, light sterile neutrinos struck out a second time in the same volume of Nature last month, thanks to a new measurement at the MicroBooNE experiment at Fermilab, near Chicago.

The MicroBooNE collaboration was following up on a persistent anomaly uncovered by their sister experiment, MiniBooNE, which was itself following up on the infamous LSND anomaly of 2001 (CERN Courier July/August 2020 p32). Both experiments had reported an excess of electron neutrinos in a beam of muon neutrinos generated using a particle accelerator. Here, the sterile-neutrino explanation would be more subtle: muon neutrinos would have to oscillate twice, once into sterile neutrinos and then into electron neutrinos. Using a bespoke liquid-argon time projection chamber, the MicroBooNE collaboration excludes the single-light-sterile-neutrino interpretation of the LSND and MiniBooNE anomalies at 95% confidence.

“The MicroBooNE result is just confirming what we knew from global fits for a long time,” clarifies Huber. “We cannot treat the appearance of electron neutrinos in a muon neutrino beam as a two-flavour problem if a sterile neutrino is involved – if we accept this simple fact of quantum mechanics then LSND and MiniBooNE’s excess of electron neutrinos cannot be due to mixing with a sterile neutrino since the corresponding disappearance of electron and muon neutrinos has not been observed.”
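In a 3+1 scenario, the short-baseline appearance probability is controlled by the product of two mixings (standard notation, quoted here as a sketch):

P(\nu_\mu \to \nu_e) \approx 4\,|U_{e4}|^2 |U_{\mu 4}|^2 \sin^2\!\left(\frac{\Delta m^2_{41} L}{4E}\right)

An appearance signal of the size reported by LSND and MiniBooNE therefore requires both |Ue4|² and |Uμ4|² to be sizeable – and hence electron- and muon-neutrino disappearance at levels that other experiments have not observed.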

One sterile-neutrino anomaly remains unmentioned, the reactor anomaly, but it has already evaporated into statistical insignificance thanks to new experiments and careful modelling of the flux of electron antineutrinos from nuclear reactors. The promise of experiments with reactor neutrinos is now exemplified by the rapid progress of the Jiangmen Underground Neutrino Observatory (JUNO) in China, which started data taking on 26 August last year (CERN Courier November/December 2025 p9).

Back to the standard paradigm

While the recent KATRIN and MicroBooNE analyses sought evidence for a hypothetical sterile neutrino beyond the standard scenario, JUNO operates within the standard three-flavour framework. Using just 59 days of data, the experiment independently exceeded the precision of previous global fits for two of the six parameters governing neutrino oscillations. These are the same mixing angle and mass splitting that govern the oscillations of solar electron neutrinos into other flavours – the very effect that GALLEX and SAGE were initially designed to study in the 1990s. As JUNO gathers data, it will resolve the fine-toothed-comb modulation superimposed on this oscillation spectrum – the effect of the smaller mass splitting among the three neutrino mass states. JUNO is designed to resolve these tiny oscillations, revealing a fundamental aspect of nature’s design: the hierarchy of the small and large mass splittings.
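JUNO measures the survival probability of reactor electron antineutrinos, in which the slow, solar-driven oscillation is modulated by the faster atmospheric-scale terms (the standard three-flavour expression, with Δij = Δm²ij L/4E):

P(\bar\nu_e \to \bar\nu_e) = 1 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right)

Whether |Δm²31| is larger or smaller than |Δm²32| shifts the phase of the fast modulation, which is how JUNO will determine the ordering.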

“The JUNO result is very exciting,” says Huber, “not so much because of its immediate impact, but because it marks the very successful start of an experiment that will deeply change neutrino physics.”

The JUNO result is exciting because it marks the successful start of an experiment that will deeply change neutrino physics

JUNO is the first of a new generation of three large-scale neutrino-oscillation experiments using controlled sources. Concluding a busy two-month period for neutrinos since the previous edition of CERN Courier was published, the launch of the nuSCOPE collaboration now dangles the promise of a valuable boost to the other two. One hundred physicists attended its kick-off workshop at CERN from 13 to 15 October 2025. The collaboration seeks to implement a concept first proposed 50 years ago by Bruno Pontecorvo: nuSCOPE will eliminate systematic uncertainties related to neutrino flux by measuring the energy and flavour of neutrinos as they are created as well as when they interact with a target.

If approved, nuSCOPE will study neutrino–nucleus interactions with a level of accuracy comparable to that in electron–nucleus scattering, and control the sources of uncertainty projected to be dominant in the DUNE experiment under construction in the US and at the Hyper-Kamiokande experiment under construction in Japan. DUNE and Hyper-Kamiokande both plan to study the oscillations of accelerator-produced beams of muon neutrinos. Their most specialised design goal is to observe another fundamental aspect of physics: whether the weak interaction treats neutrinos and antineutrinos symmetrically.

With three ambitious and sharply divergent experimental concepts, DUNE, Hyper-Kamiokande and JUNO promise substantial progress in neutrino physics in the coming decade. But KATRIN and MicroBooNE now leave precious little merit for the once compelling phenomenology of the single light sterile neutrino.

Two strikes, and you’re out.

First indirect evidence for primordial monsters

A monster star giving birth to a quasar

Cosmology has long predicted that the first generation of stars should differ strongly from those forming today. Born out of pristine gas of only hydrogen and helium, they could have reached masses between a thousand and ten thousand times that of the Sun, before collapsing after only a few million years. Such “primordial monsters” have been proposed as the seeds of the first quasars (see “Collapsing monster” image), but clear observations had until now been lacking.

An analysis of the galaxy GS 3073 using the James Webb Space Telescope (JWST) now carries an unexpectedly loud message from the first generation of stars: there is far too much nitrogen to be explained by known stellar populations. This mismatch suggests a different kind of stellar ancestor, one no longer present in our universe. It is the first indirect evidence for the long-sought primordial monsters, first proposed in the early 1960s by Fred Hoyle and William Fowler in the US, and independently by Yakov Zel’dovich and Igor Novikov in the Soviet Union, in attempts to explain the newly discovered quasars.

Black-hole powered

JWST’s near-infrared spectroscopy of GS 3073 reveals the highest nitrogen-to-oxygen ratio yet measured in surveys of the universe’s first billion years. Its dense central gas contains almost as many nitrogen atoms as oxygen, while carbon and neon are comparatively modest. In addition, the galaxy has an active nucleus powered by a black hole that is already millions to hundreds of millions of times the mass of the Sun, despite the galaxy’s low metallicity.

Could a primordial monster explain GS 3073? The answer lies in how these huge stars mix and burn their fuel.

GS 3073 could offer the first chemical evidence for the largest stars the universe ever formed and tie them to the early production of massive black holes

Simulations reveal that after an initial phase of hydrogen burning in the core, these stars ignite helium, producing large amounts of carbon and oxygen. Because the stars are so luminous and extended, their interiors are strongly convective. Hot material rises, cool material sinks and chemical elements are constantly stirred. Freshly made carbon from the helium-burning core leaks outward into a surrounding shell where hydrogen is still burning. There, a sequence of reactions known as the CNO cycle converts hydrogen into helium while steadily turning carbon into nitrogen. Over time, this process loads the outer parts of the star with nitrogen, while also moderately enhancing oxygen and neon. The heaviest elements produced in the final burning stages remain trapped in the core and never reach the surface before the star collapses.
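The nitrogen enrichment follows from the relative speeds of the reactions in the CN branch of the cycle (a simplified chain):

{}^{12}\mathrm{C}(p,\gamma)\,{}^{13}\mathrm{N}(\beta^{+}\nu)\,{}^{13}\mathrm{C}(p,\gamma)\,{}^{14}\mathrm{N}(p,\gamma)\,{}^{15}\mathrm{O}(\beta^{+}\nu)\,{}^{15}\mathrm{N}(p,\alpha)\,{}^{12}\mathrm{C}

Because the radiative capture on ¹⁴N is by far the slowest step, carbon fed into the hydrogen-burning shell accumulates as nitrogen once the cycle reaches equilibrium.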

Mass loss from such primordial stars is uncertain. Without metals, they cannot generate the strong line-driven winds familiar from massive stars today. Instead, mass may be lost through pulsations, eruptions or interactions in dense environments. But simulations allow a robust conclusion: supermassive primordial stars between roughly one thousand and ten thousand solar masses naturally produce gas with nitrogen-to-oxygen, carbon-to-oxygen and neon-to-oxygen ratios that match those measured in the dense regions of GS 3073. Stars significantly lighter or heavier than this range cannot reproduce the extreme nitrogen-to-oxygen ratio, even before carbon and neon are taken into account.

Under pressure

Radiation pressure could have supported these primordial monsters for no more than a few million years. As their cores contract and heat, photons become energetic enough to convert into electron–positron pairs, reducing the radiation pressure. In very massive stars of roughly 140 to 260 times the mass of the Sun, this pair instability triggers a runaway thermonuclear explosion that destroys the star in a supernova. By contrast, supermassive stars are so dominated by gravity, owing to their much larger mass, that they collapse directly into black holes, without undergoing a supernova explosion.

This provides a natural path from supermassive primordial stars to the over-massive black hole now seen in GS 3073’s nucleus. In this scenario, one or a few such giants enrich the surrounding gas with nitrogen-rich material through mass loss during their lives, and leave behind black-hole seeds that later grow by accretion. If this picture is correct, GS 3073 offers the first chemical evidence for the largest stars the universe ever formed and ties them directly to the early production of massive black holes. Future JWST observations, together with next-generation ground-based telescopes, will search for more nitrogen-loud galaxies and map their chemical structures in greater detail.

Longest gamma-ray burst confounds astrophysicists

On 2 July 2025, NASA’s Fermi Gamma-ray Space Telescope observed a gamma-ray burst (GRB 250702B) of a record seven hours in duration. Intriguingly, high-resolution images from the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST) revealed that the burst emerged nearly 1900 light-years from the centre of its host galaxy, near the edge of its disc. But its most unusual feature is that it was seen in X-rays a full day before any gamma rays arrived.

The high-energy transient sky is filled with a cacophony of exotic explosions produced by stellar death. Short GRBs of less than two seconds are produced by the merging of compact objects such as black holes and neutron stars. Longer GRBs are produced by the death of massive stars, with “ultralong” GRBs most often hypothesised to originate in the collapse of massive blue supergiants, as they would allow for accretion onto their central black-hole engines over a period from tens of minutes to hours.

Peculiar observations

GRB 250702B lasted for at least 25,000 seconds (7 hours), surpassing the previous record holder, GRB 111209A, by over 10,000 seconds. However, the duration alone was not enough to identify this event as a different class of GRB or as an extreme outlier. Two other observations immediately marked GRB 250702B as peculiar: the multiple gamma-ray episodes seen by Fermi and other high-energy satellites; and the soft X-rays from 0.5 to 4 keV seen by China’s Einstein Probe over a period extending a full day before gamma rays were detected.

No previous GRB is known to have been preceded by X-ray emission over such a period. Nor is it an expectation of standard GRB models, even those invoking a blue supergiant. Instead, these X-rays suggest a relativistic tidal disruption event (TDE) – the shredding of a star by a massive black hole, launching a jet that moves near the speed of light. All known relativistic TDE systems are produced by supermassive black holes weighing a million times the mass of our Sun, or more. Such black holes are found at the centre of their host galaxies, but the HST and JWST observations revealed that the transient had occurred near the edge of its host galaxy’s disc (see “Not from the nucleus” image).

This peripheral origin opens the door to a more exotic scenario involving an intermediate-mass black hole (IMBH) weighing hundreds to thousands of solar masses. IMBHs are a missing link in black-hole evolution between the stellar-mass black holes that gravitational-wave detectors frequently see merging and the supermassive black holes found at the centre of most galaxies. Alternative scenarios reduce the black-hole mass even further, and include a micro-TDE, where a star is shredded by a stellar-mass black hole, or a helium star being eaten by a stellar-mass black hole.

There is little consensus on the origin of GRB 250702B, beyond that it involved an accreting black hole

The rapid gamma-ray variability observed by Fermi and other high-energy satellites is an important clue. The variability timescale of a relativistic jet is thought to be orders of magnitude longer than the minimum set by the light-crossing time of the black hole’s Schwarzschild radius. While an intermediate-mass black hole of a few hundred solar masses is not incompatible, the observed variability is nearly 100 times faster than that seen in relativistic TDEs. By contrast, with characteristic physical scales smaller in proportion to the smaller masses of their black holes, micro-TDEs and helium-star black-hole mergers have no difficulty accommodating such short-timescale variability.
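A rough scaling makes the argument concrete: the shortest credible variability timescale is the light-crossing time of the Schwarzschild radius,

t_{\rm min} \sim \frac{2GM}{c^{3}} \approx 10\,\mu\mathrm{s} \times \frac{M}{M_\odot}

which is about 10 s for a million-solar-mass black hole but only of order a millisecond for one of a few hundred solar masses.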

The environment of the transient also provides crucial clues into its origin. JWST spectroscopy revealed that the light from the transient and its host galaxy was emitted 8 billion years ago, when the universe was just a teenager. The galaxy is among the largest and most massive known at that epoch, and – unusually for galaxies hosting GRBs – a massive dust lane splits its disc in half. Ongoing star formation at the transient’s location suggests a stellar-mass progenitor, as opposed to an IMBH.

Despite numerous studies, there is little consensus on the origin of GRB 250702B, beyond that it involved an accreting black hole. Its exceptional duration and early X-ray emission initially suggested a supermassive black hole, but its rapid variability and location in its host galaxy instead point to a stellar-mass black hole, with a far rarer IMBH potentially splitting the difference. Given that it is a notably rare once-every-50-years event, the wait for the next ultralong GRB may be long, but astrophysicists are optimistic that theoretical advances will disentangle the different progenitor scenarios and reveal the origin of this extraordinary transient.

From theories to signals

Over the past decade, many theoretical and experimental landscapes have shifted substantially. Traditional paradigms such as supersymmetry and extra dimensions – once the dominant drivers of LHC search strategies – have gradually given way to a more flexible, signature-oriented approach. The modern search programme is increasingly motivated by signals rather than full theories, providing an interesting backdrop for the return of the SEARCH conference series, which last took place in 2016. The larger and more ambitious 2025 edition attracted hundreds of participants to CERN from 20 to 24 October.

The workshop highlighted how much progress ATLAS and CMS have made in searches for long-lived particles, hidden-valley scenarios (see “Soft cloud” figure) and a host of other unconventional possibilities that now occupy centre stage. Although these ideas were once considered exotic, they have become natural extensions of models connected to cosmology, dark matter and electroweak symmetry breaking. Their experimental signatures are equally rich: displaced vertices, delayed showers, emerging jets or unusual track topologies that demand a rethinking of reconstruction strategies from the ground up.

Deep learning

The most transformative change since previous editions of SEARCH is the integration of AI-based algorithms into every layer of analysis. Deep-learning-driven b-tagging has dramatically increased sensitivity to final states involving heavy flavour, while machine learning is being embedded directly into hardware trigger systems to identify complex event features in real time. This is not technological novelty for its own sake: these tools directly expand the discovery reach of the experiments.

Novel ideas in reconstruction also stood out. Talks showcased how muon detectors can be repurposed as calorimeters to detect late-developing showers, and how tracking frameworks can be adapted to capture extremely displaced tracks that were once discarded as outliers. Such techniques illustrate a broader cultural shift: expanding the search frontier now often comes from reinterpreting detector capabilities in creative ways.

The most transformative change since previous editions of SEARCH is the integration of AI-based algorithms into every layer of analysis

Anomaly detection – the use of unsupervised or semi-supervised deep-learning models to identify data that deviate from learned patterns – was another major focus. These methods, used both offline and in level-one triggers, enable model-agnostic searches that do not rely on an explicit beyond-the-Standard-Model target. Participants noted that this is especially valuable for scenarios like quirks in dark-sector models, where realistic event-generation tools still do not exist. In these cases, anomaly detection may be the only feasible path to discovery.

The rising importance of precision was another theme threading through the discussions. The detailed understanding of detector performance achieved in recent years is unprecedented for a hadron collider. CMS’s muon calibration, which is crucial for its W-mass analysis, and ATLAS’s record-breaking jet-calibration accuracy exemplify the progress. This maturity opens the possibility that new physics could first appear as subtle deviations rather than as striking anomalies. As the era of the High-Luminosity LHC approaches, the upcoming additions of precision timing layers and advanced early-tracking capabilities will further strengthen this dimension of the search programme.

The workshop also provided a platform to explore connections between collider searches and other experimental efforts across particle physics. Strong first-order phase transitions, relevant to electroweak baryogenesis, motivated renewed interest in an additional scalar that would modify the Higgs potential. Such a particle could lie anywhere from the MeV scale up to hundreds of GeV – often below the mass ranges targeted by standard resonance searches. Alternative data-taking strategies such as data scouting and data parking offer new opportunities to probe this wide mass window systematically.

Complementarity with flavour physics at LHCb, long-lived particle searches at FASER, and precision experiments seeking electric dipole moments, axion-like particles and other ultralight states, was also highlighted. In a moment without an obvious theoretical favourite, this diversification of experimental approaches is a key strategic strength.

New directions in science are launched by new tools much more often than by new concepts

A recurring sentiment was that the LHC remains a formidable discovery machine, but the community must continue pushing its tools beyond their traditional boundaries. Many discussions at SEARCH 2025 echoed a famous remark by Freeman Dyson: “New directions in science are launched by new tools much more often than by new concepts.” The upcoming upgrades to ATLAS and CMS – precision timing, enhanced tracking earlier in the trigger chain and high-granularity readout – exemplify the kinds of new tools that can reshape the search landscape.

If SEARCH 2025 underscored the need to explore new signatures, technologies and experimental ideas, it also highlighted an equally important message: we must not lose sight of the physics questions that originally motivated the LHC programme. The hierarchy problem – the apparent fine-tuning needed to prevent quantum corrections from driving the Higgs mass up towards the Planck scale – remains unresolved, and supersymmetry continues to offer its most compelling and robust solution by cancelling those corrections with partner particles. With the dramatic advances in reconstruction, triggering and analysis techniques, and with the enormous increase in recorded data from Run 1 through Run 3, the time is ripe to revitalise the inclusive SUSY search programme. A comprehensive, modernised SUSY effort should be a defining element of the combined ATLAS and CMS legacy physics programme, ensuring that the field fully exploits the discovery potential of the LHC dataset accumulated so far.

Asteroid tests challenge nuclear-deflection models

Millions of asteroids orbit the Sun. Smaller fragments often brush the Earth’s atmosphere to light up the sky as meteors. Once every few centuries, a meteoroid has sufficient size to cause regional damage, most recently the Chelyabinsk explosion that injured more than a thousand people in 2013, and the Tunguska event that flattened thousands of square kilometres of Siberian forest in 1908. Asteroid impacts with global consequences are vastly rarer, especially compared to the frequency with which they appear in the movies. But popular portrayals do carry a grain of truth: in case of an impending collision with Earth, nuclear deflection would be a last-resort option, with fragmentation posing the principal risk. The most important uncertainty in such a mission would be the material properties of the asteroid – a question recently studied at CERN’s Super Proton Synchrotron (SPS), where experiments revealed that some asteroid materials may be stronger under extreme energy deposition than current models assume.

Planetary defence

“Planetary defence represents a scientific challenge,” says Karl-Georg Schlesinger, co-founder of OuSoCo, a start-up developing advanced material-response models used to benchmark large-scale nuclear deflection simulations. “The world must be able to execute a nuclear deflection mission with high confidence, yet cannot conduct a real-world test in advance. This places extraordinary demands on material and physics data.”

Accelerator facilities play a key role in understanding how asteroid material behaves under extreme conditions, providing controlled environments where impact-relevant pressures and shock conditions can be reproduced. To probe the material response directly, the team conducted experiments at CERN’s HiRadMat facility in 2024 and 2025, as part of the Fireball collaboration with the University of Oxford. A sample of the Campo del Cielo meteorite, a metal-rich iron-nickel body, was exposed to 27 successive short, intense pulses of the 440 GeV SPS proton beam, reproducing impact-relevant shock conditions that cannot be achieved with conventional laboratory techniques.

“The material became stronger, exhibiting an increase in yield strength, and displayed a self-stabilising damping behaviour,” explains Melanie Bochmann, co-founder and co-team lead alongside Schlesinger. “Our experiments indicate that – at least for metal-rich asteroid material – a larger device than previously thought can be used without catastrophically breaking the asteroid. This keeps open an emergency option for situations involving very large objects or very short warning times, where non-nuclear methods are insufficient and where current models might assume fragmentation would limit the usable device size.”

Throughout the experiments at the SPS, the team monitored each pulse using laser Doppler vibrometry alongside temperature sensors, capturing in real time how the meteorite softened, flexed and then unexpectedly re-strengthened without breaking. This represents the first experimental evidence that metal-rich asteroid material may behave far more robustly under extreme, sudden energy loading than predicted.

The experiments could also provide valuable insights into planetary formation processes

After the SPS campaign, initial post-irradiation measurements were performed at CERN. These revealed that magnesium inclusions had been activated to produce sodium-22, a radioactive isotope that decays to produce a positron, allowing diagnostics similar to those used in medical imaging. Following these initial measurements, the irradiated meteorite has been transferred to the ISIS Neutron and Muon Source at the Rutherford Appleton Laboratory in the UK, where neutron diffraction and positron annihilation lifetime spectroscopy measurements are planned.

“These analyses are intended to examine changes in the meteorite’s internal structure caused by the irradiation and to confirm, at a microscopic level, the increase in material strength by a factor of 2.5 indicated by the experimental results,” explains Bochmann.

Complementary information can be gathered by space missions. Since NASA’s NEAR Shoemaker spacecraft successfully landed on asteroid Eros in 2001, two Japanese missions and a further US mission have visited asteroids, collecting samples and providing evidence that some asteroids are loosely bound rocky aggregates. In the next mission, NASA and ESA plan to study Apophis, an asteroid a few hundred metres across that will safely pass closer to Earth than many satellites in geosynchronous orbit on 13 April 2029 – a close encounter expected only once every few thousand years.

The missions will observe how Apophis is twisted, stretched and squeezed by Earth’s gravity, providing a rare opportunity to observe asteroid-scale material response under natural tidal stresses. Bochmann and Schlesinger’s team now plan to study asteroids with a similar rocky composition.

Real-time data

“In our first experimental campaign, we focused on a metal-rich asteroid material because its more homogeneous structure is easier to control and model, and it met all the safety requirements of the experimental facility,” they explain. “This allowed us to collect, for the first time, non-destructive, real-time data on how such material responds to high-energy deposition.”

“As a next step, we plan to study more complex and rocky asteroid materials. One example is a class of meteorites called pallasites, which consist of a metal matrix similar to the meteorite material we have already studied, with up to centimetre-sized magnesium-rich crystals embedded inside. Because these objects are thought to originate from the core–mantle boundary of early planetesimals, such experiments could also provide valuable insights into planetary formation processes.”

How I learnt to stop worrying and love QCD predictions

To begin, could you explain what the muon’s magnetic moment is, and why it should be anomalous?

Particles react to magnetic fields like tiny bar magnets, depending on their mass, electric charge and spin – a sort of intrinsic angular momentum lacking a true classical analogue. These properties combine into the magnetic moment, along with a quantum-mechanical g-factor which sets the strength of the response. Dirac computed g to be precisely two for electrons, with a formula that applies equally to the other, then-unknown, leptons. We call any deviation from this value anomalous. The name stuck because the first measurements differed from Dirac’s prediction, which initially was not understood. The anomalous piece is a natural probe of new physics, as it arises entirely from quantum fluctuations that may involve as-yet unseen new particles.
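In symbols, for a lepton of charge q, mass m and spin S (a standard definition):

\vec{\mu} = g\,\frac{q}{2m}\,\vec{S}, \qquad a \equiv \frac{g-2}{2}

so the anomaly a vanishes exactly in Dirac’s tree-level theory and is generated entirely by quantum corrections.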

What ingredients from the Standard Model go into computing g–2?

Everything. All sectors, all particles, all Standard Model (SM) forces contribute. The dominant and best quantified contributions are due to QED, having been computed through fifth order in the fine structure constant α. We are talking about two independent calculations of more than 12,000 Feynman diagrams, accounting for more than 99.9% of the total SM prediction. Interestingly, two measurements of α disagree at more than 5σ, resulting in an uncertainty of about two parts per billion. While this discrepancy needs to be resolved, it is negligible for the muon g–2 observable. The electroweak contribution was computed at the two-loop level long ago, and updated with better measured input parameters and calculations of nonperturbative effects in quark loops. The resulting uncertainty is close to 40 times smaller than that of the g–2 experiment. Then, the overall uncertainty is determined by our knowledge of the hadronic corrections, which are by far the most difficult to constrain.

What sort of hadronic effects do you have in mind here? How are they calculated?

There are two distinct effects: hadronic vacuum polarisation (HVP) and hadronic light-by-light (HLbL). The former arises at second order in α, is the larger of the two, and the largest source of uncertainty. While interacting with an external magnetic field, the muon emits a virtual photon that can further split into a quark loop before recombining. The HLbL contribution arises at third order and is now known with sufficient precision. The challenge is that loop diagrams must be computed at all virtual energies, down to where the strong force (QCD) becomes non-perturbative and quarks hadronise. There are two ways to tackle this.

Instead of computing the hadronic bubble directly, the data-driven “dispersive” approach relates it to measurable quantities, for example the cross section for electron–positron annihilation into hadrons. About 75% of the total HVP comes from e⁺e⁻ → π⁺π⁻, so the measurement errors in this channel determine the overall uncertainty. The decays of tau leptons into hadrons can also be used as inputs. Since the process is mediated by a charged W boson, instead of a photon, it requires an isospin rotation from the charged to the neutral current. At low energies, this is another challenging non-perturbative problem. While there are phenomenological estimates of this effect, no complete theoretical calculation exists – which means that the uncertainties are not fully quantified. Differing opinions on how to assess them led to controversy over the inclusion of tau decays in the SM prediction of g–2. An alternative to data-driven methods is lattice QCD, which allows for ab initio calculations of the hadronic corrections.
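Schematically, the leading-order dispersive evaluation weights the measured hadronic cross section with a known, smooth QED kernel K(s):

a_\mu^{\rm HVP,\,LO} = \frac{1}{3}\left(\frac{\alpha}{\pi}\right)^{2}\int_{m_\pi^2}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s), \qquad R(s) = \frac{\sigma(e^{+}e^{-}\to\mathrm{hadrons})}{\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})}

The kernel strongly enhances the low-energy region, which is why the π⁺π⁻ channel dominates both the value and the uncertainty.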

What does “ab initio” mean, in this context?

It means that there are no simplifying assumptions in the QCD calculation. The approximations used in the lattice formulation of QCD come with adjustable parameters and can be described by effective field theories of QCD. For example, we discretise space and time: the distance separating nearest-neighbour points is given by the lattice spacing and the effective field theory guides the approach of the lattice theory to the continuum limit, enabling controlled extrapolations. To evaluate path integrals using Monte Carlo methods, which themselves introduce statistical errors, we also rotate to imaginary time. While not affecting the HVP, this limits the quantities we can compute.
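In practice, a lattice expectation value is estimated as a Euclidean path integral sampled by Monte Carlo (schematically):

\langle \mathcal{O} \rangle = \frac{1}{Z}\int \mathcal{D}U\;\mathcal{O}[U]\,e^{-S_E[U]}

The Wick rotation t → −iτ makes the weight real and positive, so gauge-field configurations U can be importance-sampled; the price is that quantities defined in real (Minkowski) time are not directly accessible.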

How do you ensure that the lattice predictions are unbiased?

Good question! Lattice calculations are complicated, and it is therefore important to have several results from independent groups for consolidating averages. An important cultural shift in the community is that numerical analyses are now routinely blinded to avoid confirmation bias, making agreements more meaningful. This shifts the focus from central values to systematic errors. For our 2025 White Paper (WP25), the main lattice inputs for HVP were obtained from blinded analyses.

How did you construct the SM prediction for your 2025 White Paper?

To summarise how the SM prediction in WP25 was obtained, sufficiently precise lattice results for HVP arrived just in time. Since measurements of the e⁺e⁻ → π⁺π⁻ channel are presently in disagreement with each other, the 2025 prediction solely relied on the lattice average for the HVP. In contrast, the 2020 White Paper (WP20) prediction employed the data-driven method, as the lattice-QCD results were not precise enough to weigh in.

With the experiment’s expected precision jump, it seemed vital for theory to follow suit

While the theory error in WP25 is larger than in WP20, it is a realistic assessment of present uncertainties, which we know how to improve. I stress that the combination of the SM theory error being four times larger than the experimental one and the remaining puzzles, particularly on the data-driven side, means that the question “Does the SM account for the experimental value of the muon’s anomalous magnetic moment?” has not yet been satisfactorily answered. Given the high level of activity, this will, however, happen soon.

Where are the tensions between lattice QCD, data-driven predictions and experimental measurements?

All g–2 experiments are beautifully consistent, and the lattice-based WP25 prediction differs from them by less than one standard deviation. At present, we don’t know if the data-driven method agrees with lattice QCD due to the differences in the e⁺e⁻ → π⁺π⁻ measurements. In particular, the 2023 CMD-3 results from the Budker Institute of Nuclear Physics are compatible with lattice results, but disagree with CMD-2, KLOE, BaBar, BESIII and SND, which formed the basis for WP20. All the experimental collaborations are now working on new analyses. BaBar is expected to release a new e⁺e⁻ → π⁺π⁻ result soon, and others, including Belle II, will follow. There is also ongoing work on radiative corrections and Monte Carlo generators, both of which are important in solving this puzzle. Once the dust settles, we will see whether the new data-driven evaluation agrees with the lattice average and the g–2 experiment. Either way, this may yield profound insights.

How did the Muon g–2 Theory Initiative come into being?

The first spark came when I received a visiting appointment from Fermilab, offering resources to organise meetings and workshops. At the time, my collaborators and I were gearing up to calculate the HVP in lattice QCD, and the Fermilab g–2 experiment was about to start. With the experiment’s expected precision jump, it seemed vital for theory to follow suit by bringing together communities working on different approaches to the SM contributions, with the goal of pooling our knowledge, reducing theoretical uncertainties and providing reliable predictions.

As Fermilab received my idea positively, I contacted the RBC collaboration and Christoph Lehner joined me with great enthusiasm to shape the effort. We recruited leaders in the experimental and theoretical communities to our Steering Committee. Its role is to coordinate efforts, organise workshops to bring the community together and provide the structure to map out scientific directions and decide on the next steps.

What were the main challenges you faced in coordinating such a complex collaboration?

With so many authors and such high stakes, disagreements naturally arise. In WP20, a consensus was emerging around the data-driven method. The challenge was to come up with a realistic and conservative error estimate, given the up to 3σ tensions between different data sets, including the two most precise measurements of e⁺e⁻ → π⁺π⁻ at the time.

Hadronic contribution

As we were finalising our WP20, the picture was unsettled by a new lattice calculation from the Budapest–Marseille–Wuppertal (BMW) collaboration, consistent with earlier lattice results but far more precise. While the value was famously in tension with data-driven methods, the preprint also presented a calculation of the “intermediate window” contribution to the HVP – about 30% of the total – which disagreed with a published RBC/UKQCD result and with data-driven evaluations (CERN Courier March/April 2025 p21). Since BMW was still updating their results and the paper wasn’t yet published, we described the result but excluded it from our SM prediction. Later, in 2023, further complications came from the CMD-3 measurement.

Consolidation between lattice results was first observed for the intermediate window contribution, in 2022 and 2023. This, in turn, revealed a tension with the corresponding data-driven evaluations. Results for the difficult-to-compute long-distance contributions arrived in late fall 2024, yielding consolidated lattice averages for the total HVP, where we had to sort out a few subtleties. This was intense – a lot of work in very little time.

On the data-driven side, we faced the aforementioned tensions between the e⁺e⁻ → π⁺π⁻ cross-section measurements. In light of these discrepancies, consensus was reached that we would not attempt a new data-driven average of HVP for WP25, leaving it for the next White Paper. Real conflict arose on the assessment of the quality of the uncertainty estimates for HVP contributions from tau decays and on whether to include them.

And how did you navigate these disagreements?

When the discussions around the assessment of tau-decay uncertainties stopped converging, we proposed a conflict-resolution procedure using the Steering Committee (SC) as the arbitration body, which all authors signed. If a conflict is brought to the SC for resolution, SC members first engage all parties involved to seek resolution. If none is found, the SC makes a recommendation and, if appropriate, the differing scientific viewpoints may be reflected in the document, followed by the recommendation. In the end, just having a conflict-resolution process in place was really helpful. While the SC negotiated a couple of presentation issues, the major disagreements were resolved without triggering the process.

The goal of WP25 was to wrap up a prediction before the announcement of the final Fermilab g–2 measurement. Adopting an internal conflict-resolution process was essential in getting our result out just in time, six days before the deadline.

Lattice QCD has really come of age

What other observables can benefit from advances in lattice QCD?

There are many, and their number is growing – lattice QCD has really come of age. Lattice QCD has been used for years to provide precise predictions of the hadronic parameters needed to describe weak processes, such as decay constants and form factors. A classic example, relevant to the LHC experiments, is the rare decay Bs → μ⁺μ⁻, where, thanks to lattice QCD calculations of the Bs-meson decay constant, the SM prediction is more precise than current experimental measurements. While precision continues to improve with refined methods, the lattice community is broadening the scope with new theoretical frameworks and improved computational methods, enabling calculations once out of reach – such as the (smeared) R-ratio, inclusive decay rates and PDFs.

There’s more g–2 physics over the horizon

Some have argued that the good agreement between lattice QCD and the final measurement of Fermilab’s muon g–2 experiment means that the g–2 anomaly has now been solved. However, this dramatically oversimplifies the situation: the magnetic moment of the muon remains an intriguing puzzle.

The extraordinary precision of 127 parts per billion (ppb) achieved at Fermilab deserves to be matched by an equally impressive theoretical prediction. At 530 ppb, theory is currently the limiting factor in any comparison. This is the longer-term goal that the Muon g–2 Theory Initiative is now working towards, with inputs from all possible sources (see “How I learnt to stop worrying and love QCD predictions”). In the near future, it will not be possible to reach this precision with lattice QCD alone. Other approaches are needed to make a competitive Standard Model prediction.

Tensions remain

Essentially, all of the uncertainty in g–2 arises from the hadronic vacuum polarisation (HVP) – a quantum correction whereby a radiated virtual photon briefly transforms into a hadronic state before being reabsorbed. Historically, HVP has been evaluated by applying a dispersion relation to cross sections for hadron production in electron–positron collisions, but this method was displaced by lattice-QCD calculations in the theory initiative’s most recent white paper. The lattice community must be congratulated for the level of agreement that has been reached between groups working independently (CERN Courier July/August 2025 p7). By contrast, data-driven predictions are at present inconsistent across the experiments in the low-energy region; even if results from the CMD-3 experiment are excluded as an outlier, tensions remain, suggesting that some systematic errors may not have been completely addressed (CERN Courier March/April 2025 p21). Could a novel experimental technique help resolve the confusion?

The MUonE collaboration proposes a completely independent approach based on a new experimental method. In MUonE, we will determine the running of the electromagnetic coupling, a fundamental quantity that is driven by the same kinds of quantum fluctuations as muon g–2. We will extract it from a precise measurement of the differential cross section for elastic scattering of muons from electrons as a function of the momentum transferred.

MUonE is a relatively inexpensive experiment that we can set up in the existing M2 beamline in CERN’s North Area, already home to the AMBER and NA64-µ experiments. Three years of running, given the parameters of the M2 beam and the expected performance of the MUonE detector, would reach a statistical precision of approximately 180 ppb, with a comparable level of systematic uncertainty.

MUonE will take advantage of silicon sensors that are already being developed for the CMS tracker upgrade. From the results, we will be able to use a dispersion relation to extract HVP’s contribution to g–2. Perhaps more importantly, however, as our method directly measures a function that is part of the lattice calculation, we can directly verify that method. The big challenge will be to keep the systematic uncertainties in the measurement small enough. However, MUonE does not suffer from the intrinsic problem that existing data-driven techniques have, which is that they must numerically integrate over the sharp peaks of hadron production by low-energy resonances. In contrast, the function derived from the space-like process that it will measure is smooth and well-behaved.
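In the space-like approach, the leading-order hadronic contribution is recovered from a smooth integral over the measured running of α (the master formula of the MUonE proposal, quoted here as a sketch):

a_\mu^{\rm HVP,\,LO} = \frac{\alpha}{\pi}\int_{0}^{1}\mathrm{d}x\,(1-x)\,\Delta\alpha_{\rm had}\!\left[t(x)\right], \qquad t(x) = -\frac{x^{2}m_\mu^{2}}{1-x} < 0

where Δαhad is the hadronic part of the running coupling at space-like momentum transfer t, free of the resonance peaks of the time-like region.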

Piecing the puzzle 

CERN was the origin of the first brilliant muon g–2 measurements starting back in the 1950s (CERN Courier September/October 2024 p53), and now the laboratory has an opportunity to put another important piece into the g–2 puzzle through the MUonE project. Another component of great importance in this domain will be the new g-2/EDM experiment planned for J-PARC, which will be performed under completely different conditions, and therefore with very different systematics from the Fermilab experiment.

Soft clouds probe dark QCD

CMS figure 1

Despite decades of searches, experiments have yet to find evidence for a new particle that could account for dark matter on its own. This has strengthened interest in richer “dark-sector” scenarios featuring multiple new states and interactions, potentially analogous to those of the Standard Model (SM). The CMS collaboration targeted one of the most distinctive possible signatures of a dark strong force in proton–proton collisions: a dense, nearly isotropic cloud of low-momentum particles known as a soft unclustered energy pattern (SUEP).

Searches in the LHC proton–proton collision data for events with many low-momentum particles are plagued by overwhelming backgrounds from pileup and soft QCD interactions. The CMS collaboration has recently overcome this challenge by using large-radius clusters of charged particle tracks and relying on quantities that characterise the expected isotropy of SUEP decays.
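A typical isotropy observable of this kind is the sphericity built from the momentum tensor of the tracks in a cluster (a sketch; the exact variables used in the analysis may differ):

S^{\alpha\beta} = \frac{\sum_i p_i^{\alpha} p_i^{\beta}}{\sum_i |\vec{p}_i|^{2}}, \qquad S = \tfrac{3}{2}\left(\lambda_2 + \lambda_3\right)

where λ1 ≥ λ2 ≥ λ3 are the eigenvalues of the tensor: S approaches 1 for a fully isotropic spray of particles and 0 for back-to-back, jet-like events.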

The 125 GeV Higgs boson serves in many theoretical models as a natural mediator between the SM and a hidden sector, and current experimental constraints still leave room for exotic decays. Motivated by this possibility, CMS focused on Higgs-boson production in association with a vector (W or Z) boson that decays into leptons. While these modes account for < 1% of Higgs bosons produced at the LHC, the leptons provide significant handles for triggering and background suppression.

Rather than relying on SM simulations, which face modelling and statistical challenges for such soft interactions, the background was extrapolated from events with low isotropy or relatively few charged-particle tracks per cluster, using a method that accounts for small correlations between the quantities used in the extrapolation. To validate the approach, an orthogonal sample of events with a high-momentum photon was studied, taking advantage of the Higgs boson’s minuscule coupling to photons and the similarity of background processes in W/Z + jet and photon + jet events that could mimic a SUEP signal.

The data in the search region, consisting of events with a W or Z boson candidate and many isotropically distributed charged particles, was found to be consistent with the SM expectation. Stringent limits were placed on the branching ratio of the 125 GeV Higgs boson decaying to a SUEP shower for a wide range of parameters (see figure 1).

This analysis complements a previous CMS search that primarily targeted much heavier mediators produced via gluon fusion, improving limits on the H → SUEP branching ratio by two orders of magnitude. It additionally provides model-agnostic limits and detailed reinterpretation recipes, maximising the usability of this data for testing alternative theoretical frameworks.

SUEP signatures are not unique to the benchmark scenarios under scrutiny. They naturally emerge in hidden-valley models, where mediators connect the SM to a new, otherwise isolated sector. If the hidden states interact through a “dark QCD”, proton–proton collisions would trigger a crowded cascade of dark partons rather than the familiar collimated showers.

Crucially, unlike in ordinary QCD – where the coupling quickly weakens at energies above confinement – the dark coupling could remain large well beyond its typically low confinement scale. This sustained strong coupling would drive frequent interactions and efficiently redistribute momentum, producing an almost isotropic radiation pattern. As the system cooled, it would then hadronise into numerous soft dark hadrons whose decays back to SM particles would retain this softness and isotropy – yielding the characteristic SUEP probed by CMS.

The beam–bottle debate at PSI

Free neutrons have a lifetime of about 880 seconds, yet a longstanding tension between two measurement techniques continues to puzzle the neutron-physics community. The most precise averages from beam experiments and magnetic-bottle traps yield 888.1 ± 2.0 s and 877.8 ± 0.3 s, respectively – roughly corresponding to a 5σ discrepancy.
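The size of the tension follows directly from the quoted numbers, assuming the two averages are uncorrelated:

\frac{888.1 - 877.8}{\sqrt{2.0^{2} + 0.3^{2}}} = \frac{10.3\ \mathrm{s}}{2.0\ \mathrm{s}} \approx 5.1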

On 13 September 2025, 40 representatives of all currently operating neutron-lifetime experiments came together at the Paul Scherrer Institute (PSI) to discuss the current status of the tension and the path forward. Geoffrey Greene (University of Tennessee) opened the workshop by reflecting on five decades of neutron-lifetime measurements from the 1960s to the present.

The beam method employs cold-neutron beams, with protons from neutron beta-decays collected in a magnetic trap and counted. The lifetime is then inferred from the ratio of proton counts to neutron flux. Fred Wietfeldt (Tulane University) highlighted the huge efforts undertaken at the National Institute of Standards and Technology (NIST) in Gaithersburg, most importantly on the absolute calibration of the neutron detector.
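In essence, the method exploits the fact that the decay rate equals the number of neutrons present divided by the lifetime (a schematic relation):

\tau_n \simeq \frac{\bar{N}_n}{\dot{N}_p}

where N̄n is the average number of neutrons in the fiducial volume, determined from the absolutely calibrated flux, and Ṅp is the rate of trapped decay protons – which is why the absolute calibration of the neutron detector is so critical.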

Susan Seestrom (Los Alamos National Laboratory) described today’s most precise experiment, the UCNτ experiment at Los Alamos National Laboratory, which uses the magnetic-bottle trap method. It confines ultracold neutrons (UCNs) via their magnetic and gravitational interaction and counts the surviving ones at different times. She also provided an outlook on its next phase, UCNτ+, with increased statistics goals. The τSPECT experiment at PSI’s UCN facility is also based on magnetic confinement of neutrons and has recently started data taking, but has distinct differences. As explained by Martin Fertl from Johannes Gutenberg-University Mainz, τSPECT uses a double-spin-flip method to increase the UCN filling of the purely magnetic trap, and a detector that moves in and out of the storage volume, first removing slightly higher-energy neutrons before storage and then measuring the surviving neutrons in situ after storage.

Kenji Mishima (University of Osaka) presented the neutron-lifetime experiment at J-PARC, based on a new principle: the detection of the charged decay products in an active time projection chamber, where the neutrons are captured on a small admixture of helium-3. This experiment’s systematics are entirely different from those of previous efforts and may offer a unique contribution to the field. Other studies largely excluded the possibility that the beam–bottle discrepancy could be explained by hypothetical exotic decay channels or other non-standard processes.

New results from LANL, NIST, J-PARC and PSI should clarify the currently puzzling situation in the coming years.

Budapest brims with heavy ions

The 25th Zimányi Winter School gathered 120 researchers in Budapest to discuss recent advances in medium- and high-energy nuclear physics. The programme focused on the properties of strongly-interacting matter produced in heavy-ion collisions – little bangs that recreate conditions a few microseconds after the Big Bang.

József Zimányi was a pioneer of Hungarian and international heavy-ion physics, playing a central role in establishing relativistic heavy-ion research in Hungary and contributing key developments to hydrodynamic descriptions of nuclear collisions. Much of the week’s programme revisited the problems that occupied his career, including how the hot, dense system created in a collision evolves and how it converts its energy into the observed hadrons.

Giuseppe Verde (INFN Catania) and Máté Csanád (ELTE) emphasised the role of femtoscopic methods, rooted in the Hanbury Brown–Twiss interferometry originally developed for stellar measurements, in understanding the system that emerges from heavy-ion collisions. Quantum entanglement in high-energy nuclear collisions – a subject closely connected to the 2025 Nobel Prize in Physics – was also explored in a dedicated invited lecture by Dmitri Kharzeev (Stony Brook University), who described his team’s approach and results, which suggest that the observed thermodynamic properties originate from quantum entanglement itself.

The NA61/SHINE collaboration reported ongoing studies of isospin-symmetry breaking, including a recent result where the charged-to-neutral kaon ratio in argon–scandium collisions deviates at 4.7σ from expectations based on approximate isospin symmetry (CERN Courier March/April 2025 p9). Further detailed studies are planned, with potential implications for improving the understanding of antimatter production.

Hydrodynamic modelling remains one of the most successful tools in heavy-ion physics. Tetsufumi Hirano (Sophia University, Japan), the first recipient of the Zimányi Medal, discussed how the collision system behaves like an expanding relativistic fluid, whose collective motion encodes its initial conditions and transport properties. Hydrodynamic approaches incorporating spin effects – and the resulting polarisation effects in heavy-ion collisions – were discussed by Wojciech Florkowski (Jagiellonian University) and Victor E Ambrus (West University of Timisoara).
