
George Smoot 1945–2025


George Smoot, who led the team that first measured tiny fluctuations in the cosmic microwave background (CMB) and began a revolution in cosmology, passed away in Paris on 18 September 2025.

George earned his undergraduate and doctoral degrees at the Massachusetts Institute of Technology (MIT), and then moved to Berkeley, where he held positions at Lawrence Berkeley National Laboratory (Berkeley Lab) and the Space Sciences Laboratory at the University of California, Berkeley (UC Berkeley). Though trained as a particle physicist, he switched to cosmology and developed research projects, including the use of differential microwave radiometers (DMRs) flown on U-2 spy planes to detect the dipole anisotropy of the CMB, a consequence of the motion of the Earth relative to the universe as a whole. He then devoted himself to the measurement of the CMB in detail, an undertaking that occupied him from his proposal of a satellite experiment using DMRs in 1974 to the results of the Cosmic Background Explorer (COBE) satellite in 1992. George subsequently continued research and teaching as a member of the faculty of the UC Berkeley physics department.

In 2006, the Nobel Prize committee recognised John Mather for leading a team that determined the CMB spectrum was a blackbody (arising from thermal equilibrium) to exquisite precision, and George for leading a team that detected temperature variations across the sky in the CMB at the level of one part in a hundred thousand. Those variations were signatures of the primordial density fluctuations that gave rise to galaxies, and so eventually to us. They have been called the DNA of cosmic structure and provide a remarkable window on the early universe and high-energy physics beyond our particle accelerators. The excitement caused by the COBE CMB results was dramatically expressed by Stephen Hawking, who declared them to be “the discovery of the century, if not all time.”

After the Nobel Prize, George intensified his efforts in science education and training young scientists. Indeed, on the day of the prize, George continued to teach his undergraduate introductory physics class.

George created new research institutes internationally to support young scientists. He used his prize money to found the Berkeley Center for Cosmological Physics, a joint effort between UC Berkeley and Berkeley Lab. He also started an annual Berkeley Lab summer workshop for high-school students and teachers, now in its 19th year. Later, he founded the Instituto Avanzado de Cosmología and the international Essential Cosmology for the Next Generation winter schools in Mexico, the Paris Centre for Cosmological Physics, the Institute for the Early Universe in South Korea at the world’s largest women’s university, and more. Many of the scientists trained at those institutes went on to become faculty in their home countries and internationally, and formed their own research groups.

His open online course “Gravity! From the Big Bang to Black Holes” taught nearly 100,000 students

George took special pride in the Oersted Medal awarded to him by the American Association of Physics Teachers in 2009 for “outstanding, widespread, and lasting impact” on the teaching of physics. His massive open online course “Gravity! From the Big Bang to Black Holes” with Pierre Binétruy taught nearly 100,000 students.

In his later years, George’s scientific interests spanned not only the CMB (in particular the Planck satellite), but new sensor technologies such as kinetic inductance detectors and ultrafast detectors that could open up new windows on astrophysical phenomena, gravitational waves and gravitational lensing, features in the inflationary primordial fluctuation spectrum, and dark-matter properties.

The primordial density fluctuations for which George was awarded the Nobel Prize lie at the heart of almost every aspect of cosmology. The revolution started by the COBE results led to the convergence of cosmology and particle physics, exemplified by the centrality of dark matter as a primary issue for both disciplines. George will be remembered for this, for the many students whose lives he touched and whose research he inspired, and for his advocacy of international science.

From theories to signals

Over the past decade, many theoretical and experimental landscapes have shifted substantially. Traditional paradigms such as supersymmetry and extra dimensions – once the dominant drivers of LHC search strategies – have gradually given way to a more flexible, signature-oriented approach. The modern search programme is increasingly motivated by signals rather than full theories, providing an interesting backdrop for the return of the SEARCH conference series, which last took place in 2016. The larger and more ambitious 2025 edition attracted hundreds of participants to CERN from 20 to 24 October.

The workshop highlighted how much progress ATLAS and CMS have made in searches for long-lived particles, hidden-valley scenarios (see “Soft cloud” figure) and a host of other unconventional possibilities that now occupy centre stage. Although these ideas were once considered exotic, they have become natural extensions of models connected to cosmology, dark matter and electroweak symmetry breaking. Their experimental signatures are equally rich: displaced vertices, delayed showers, emerging jets or unusual track topologies that demand a rethinking of reconstruction strategies from the ground up.

Deep learning

The most transformative change since previous editions of SEARCH is the integration of AI-based algorithms into every layer of analysis. Deep-learning-driven b-tagging has dramatically increased sensitivity to final states involving heavy flavour, while machine learning is being embedded directly into hardware trigger systems to identify complex event features in real time. This is not technological novelty for its own sake: these tools directly expand the discovery reach of the experiments.

Novel ideas in reconstruction also stood out. Talks showcased how muon detectors can be repurposed as calorimeters to detect late-developing showers, and how tracking frameworks can be adapted to capture extremely displaced tracks that were once discarded as outliers. Such techniques illustrate a broader cultural shift: expanding the search frontier now often comes from reinterpreting detector capabilities in creative ways.

The most transformative change since previous editions of SEARCH is the integration of AI-based algorithms into every layer of analysis

Anomaly detection – the use of unsupervised or semi-supervised deep-learning models to identify data that deviate from learned patterns – was another major focus. These methods, used both offline and in level-one triggers, enable model-agnostic searches that do not rely on an explicit beyond-the-Standard-Model target. Participants noted that this is especially valuable for scenarios like quirks in dark-sector models, where realistic event-generation tools still do not exist. In these cases, anomaly detection may be the only feasible path to discovery.
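
As a concrete illustration of the idea, the sketch below trains a small autoencoder on background-like events and flags events with a large reconstruction error as anomalous. The architecture, feature set and selection threshold are assumptions chosen for illustration; they do not correspond to any specific ATLAS or CMS implementation.

```python
# Minimal sketch of autoencoder-based anomaly detection (illustrative only).
# The network learns to reconstruct "ordinary" events; events with a large
# reconstruction error are treated as anomalous. Features, architecture and
# threshold below are assumptions, not those of any real trigger or analysis.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=16, latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, background, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(background), background)  # learn to reproduce background
        loss.backward()
        opt.step()
    return model

def anomaly_score(model, events):
    # Per-event mean squared reconstruction error: large values = "unusual" events.
    with torch.no_grad():
        return ((model(events) - events) ** 2).mean(dim=1)

# Toy usage: 10k background-like events described by 16 summary features each.
background = torch.randn(10_000, 16)
model = train(AutoEncoder(), background)
scores = anomaly_score(model, torch.randn(100, 16))
candidates = scores > scores.quantile(0.99)  # keep the most anomalous 1%
```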

The rising importance of precision was another theme threading through the discussions. The detailed understanding of detector performance achieved in recent years is unprecedented for a hadron collider. CMS’s muon calibration, which is crucial for its W-mass analysis, and ATLAS’s record-breaking jet-calibration accuracy exemplify the progress. This maturity opens the possibility that new physics could first appear as subtle deviations rather than as striking anomalies. As the era of the High-Luminosity LHC approaches, the upcoming additions of precision timing layers and advanced early-tracking capabilities will further strengthen this dimension of the search programme.

The workshop also provided a platform to explore connections between collider searches and other experimental efforts across particle physics. Strong first-order phase transitions, relevant to electroweak baryogenesis, motivated renewed interest in an additional scalar that would modify the Higgs potential. Such a particle could lie anywhere from the MeV scale up to hundreds of GeV – often below the mass ranges targeted by standard resonance searches. Alternative data-taking strategies such as data scouting and data parking offer new opportunities to probe this wide mass window systematically.

Complementarity with flavour physics at LHCb, long-lived particle searches at FASER, and precision experiments seeking electric dipole moments, axion-like particles and other ultralight states was also highlighted. In a moment without an obvious theoretical favourite, this diversification of experimental approaches is a key strategic strength.

New directions in science are launched by new tools much more often than by new concepts

A recurring sentiment was that the LHC remains a formidable discovery machine, but the community must continue pushing its tools beyond their traditional boundaries. Many discussions at SEARCH 2025 echoed a famous remark by Freeman Dyson: “New directions in science are launched by new tools much more often than by new concepts.” The upcoming upgrades to ATLAS and CMS – precision timing, enhanced tracking earlier in the trigger chain and high-granularity readout – exemplify the kinds of new tools that can reshape the search landscape.

If SEARCH 2025 underscored the need to explore new signatures, technologies and experimental ideas, it also highlighted an equally important message: we must not lose sight of the physics questions that originally motivated the LHC programme. The hierarchy problem – the apparent fine tuning of quantum corrections needed to keep the Higgs mass from rising to the Planck scale – remains unresolved, and supersymmetry continues to offer its most compelling and robust solution by stabilising the Higgs mass through partner particles. With the dramatic advances in reconstruction, triggering and analysis techniques, and with the enormous increase in recorded data from Run 1 through Run 3, the time is ripe to revitalise the inclusive SUSY search programme. A comprehensive, modernised SUSY effort should be a defining element of the combined ATLAS and CMS legacy physics programme, ensuring that the field fully exploits the discovery potential of the LHC dataset accumulated so far.

Trigger-level search for dijet resonances

ATLAS figure 1

The LHC’s increased collision energies have opened new territory for TeV-scale searches, but its vast datasets also provide unparalleled opportunities to thoroughly explore the electroweak scale. A new ATLAS result uses an unconventional trigger-level analysis (TLA) of the full Run 2 dataset to achieve record sensitivity to low-mass particles decaying into quarks or gluons. ATLAS employs a two-stage trigger system, with a fast hardware-based first-level trigger selecting about 100 kHz of events from the 40 MHz bunch-crossing rate, followed by a software high-level trigger (HLT) that performs detailed event reconstruction and further reduces the accepted event rate by about two orders of magnitude. By recording a much reduced event format at the trigger level, TLA preserves a substantially larger fraction of events than would normally be output by the HLT.

New particles that decay with a two-jet final state feature in many Standard Model (SM) extensions. For example, the properties of “dark mediators” that couple to both quarks and dark matter could explain the present abundance of dark matter by controlling how much of it remains after falling out of equilibrium with normal matter in the early universe. At the LHC, the coupling of dark mediators to quarks would enable both production and decay into quark–antiquark pairs, which would appear as a resonance in the dijet invariant-mass distribution.

Searching for dijet resonances at low mass is challenging. Dijet production from strong interactions is one of the LHC’s most abundant signatures. Beyond requiring a precise understanding of these enormous backgrounds and the detector response, the low-mass dijet rate far exceeds what ATLAS can record. Only the most energetic dijet events can be kept, limiting conventional dijet searches to masses above approximately 1 TeV.

To access the low-mass region, ATLAS used TLA to record multi-jet events throughout Run 2. By dropping the raw detector data from the readout, TLA events were made ~200 times smaller than standard events while retaining all the high-level jet and calorimeter-based variables reconstructed in real time by the HLT.

The size reduction allowed ATLAS to record TLA events at rates of up to 27 kHz, compared to an average of 1.2 kHz for the full detector readout. This rate was made possible by additional trigger bandwidth allocated to TLA towards the end of LHC fills, and by a more efficient use of this bandwidth for dijet events. In Run 2, this was aided by ATLAS’s L1Topo trigger processor, which applies simple topological selections – such as angular correlations between jets – already at the first trigger level. The new result uses 1 billion dijet events, or up to 75 times the data sample available to the equivalent conventional search, achieving unprecedented statistical precision.
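
As a rough illustration of the trade-off, the numbers quoted above imply a large gain in event statistics at a modest bandwidth cost. The short calculation below uses only those quoted figures and is illustrative, not an official ATLAS accounting.

```python
# Back-of-envelope comparison of trigger-level vs full-readout recording,
# using only the figures quoted above (illustrative, not official ATLAS numbers).
full_rate_hz = 1.2e3   # average full-readout recording rate
tla_rate_hz = 27e3     # peak trigger-level analysis (TLA) rate
size_ratio = 1 / 200   # TLA events are ~200 times smaller than standard events

events_gain = tla_rate_hz / full_rate_hz      # ~22x more events per second
bandwidth_ratio = events_gain * size_ratio    # fraction of the full-readout bandwidth

print(f"TLA records ~{events_gain:.0f}x more events "
      f"using ~{bandwidth_ratio:.0%} of the equivalent full-readout bandwidth")
```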

The new result achieves record sensitivity to low-mass particles decaying into quarks or gluons

This enormous dataset demands excellent control of systematic uncertainties. ATLAS developed a dedicated multi-step calibration for trigger-level jets, achieving a jet energy scale precision of 1 to 4%, comparable to calibrations using full detector readout. The overwhelming SM background was modelled using a data-driven fitting technique, reaching a relative precision better than one part in 10⁴.
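
Dijet searches commonly model the smoothly falling QCD background with a simple parametric fit to the invariant-mass spectrum. The sketch below uses one widely used generic parametrisation as an assumption for illustration; the exact functional form and fitting procedure used in the ATLAS analysis may differ.

```python
# A widely used smoothly falling dijet parametrisation (illustrative; the exact
# form used in the ATLAS trigger-level analysis may differ).
import numpy as np
from scipy.optimize import curve_fit

SQRT_S = 13_000.0  # GeV, Run-2 proton-proton centre-of-mass energy

def dijet_bkg(mjj, p0, p1, p2, p3):
    """f(x) = p0 * (1 - x)^p1 * x^-(p2 + p3*ln x), with x = mjj / sqrt(s)."""
    x = mjj / SQRT_S
    return p0 * (1 - x) ** p1 * x ** (-(p2 + p3 * np.log(x)))

# Toy usage: fit a binned dijet mass spectrum (mjj in GeV, counts per bin).
mjj = np.linspace(400, 1800, 50)
counts = dijet_bkg(mjj, 1e4, 10.0, 4.5, 0.1) * np.random.normal(1, 0.01, mjj.size)
popt, pcov = curve_fit(dijet_bkg, mjj, counts, p0=[1e4, 10.0, 4.5, 0.1])
residuals = (counts - dijet_bkg(mjj, *popt)) / dijet_bkg(mjj, *popt)  # bump hunt input
```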

The search has found the dijet invariant-mass distribution to be consistent with the background expectation. The analysis provides numerical results that can be used to constrain any of the numerous models of dijet resonances, as well as explicit constraints on a specific dark mediator model used as a common benchmark for many ATLAS and CMS searches. The result sets ATLAS’s most stringent exclusion limits to date on the potential coupling of such a mediator to quarks, across a broad range of mediator masses reaching as low as 375 GeV (see figure 1).

The dijet TLA during Run 2 has established a foundation for an expanded trigger-level physics programme. In Run 3, trigger-level jets incorporate tracking information, allowing flavour tagging and improving jet energy resolution and robustness against pile-up. ATLAS also records trigger-level photons and uses them in combination with partial detector readout at full granularity. These and other advances in TLA should enable future ATLAS searches to probe a wider variety of signatures at the electroweak scale.

Asteroid tests challenge nuclear-deflection models

Millions of asteroids orbit the Sun. Smaller fragments often brush the Earth’s atmosphere to light up the sky as meteors. Once every few centuries, a meteoroid is large enough to cause regional damage, most recently the Chelyabinsk explosion that injured thousands of people in 2013, and the Tunguska event that flattened thousands of square kilometres of Siberian forest in 1908. Asteroid impacts with global consequences are vastly rarer, especially compared to the frequency with which they appear in the movies. But popular portrayals do carry a grain of truth: in case of an impending collision with Earth, nuclear deflection would be a last-resort option, with fragmentation posing the principal risk. The most important uncertainty in such a mission would be the material properties of the asteroid – a question recently studied at CERN’s Super Proton Synchrotron (SPS), where experiments revealed that some asteroid materials may be stronger under extreme energy deposition than current models assume.

Planetary defence

“Planetary defence represents a scientific challenge,” says Karl-Georg Schlesinger, co-founder of OuSoCo, a start-up developing advanced material-response models used to benchmark large-scale nuclear deflection simulations. “The world must be able to execute a nuclear deflection mission with high confidence, yet cannot conduct a real-world test in advance. This places extraordinary demands on material and physics data.”

Accelerator facilities play a key role in understanding how asteroid material behaves under extreme conditions, providing controlled environments where impact-relevant pressures and shock conditions can be reproduced. To probe the material response directly, the team conducted experiments at CERN’s HiRadMat facility in 2024 and 2025, as part of the Fireball collaboration with the University of Oxford. A sample of the Campo del Cielo meteorite, a metal-rich iron-nickel body, was exposed to 27 successive short, intense pulses of the 440 GeV SPS proton beam, reproducing impact-relevant shock conditions that cannot be achieved with conventional laboratory techniques.

“The material became stronger, exhibiting an increase in yield strength, and displayed a self-stabilising damping behaviour,” explains Melanie Bochmann, co-founder and co-team lead alongside Schlesinger. “Our experiments indicate that – at least for metal-rich asteroid material – a larger device than previously thought can be used without catastrophically breaking the asteroid. This keeps open an emergency option for situations involving very large objects or very short warning times, where non-nuclear methods are insufficient and where current models might assume fragmentation would limit the usable device size.”

Throughout the experiments at the SPS, the team monitored each pulse using laser Doppler vibrometry alongside temperature sensors, capturing in real time how the meteorite softened, flexed and then unexpectedly re-strengthened without breaking. This represents the first experimental evidence that metal-rich asteroid material may behave far more robustly under extreme, sudden energy loading than predicted.

The experiments could also provide valuable insights into planetary formation processes

After the SPS campaign, initial post-irradiation measurements were performed at CERN. These revealed that magnesium inclusions had been activated to produce sodium-22, a radioactive isotope that decays to produce a positron, allowing diagnostics similar to those used in medical imaging. Following these initial measurements, the irradiated meteorite has been transferred to the ISIS Neutron and Muon Source at the Rutherford Appleton Laboratory in the UK, where neutron diffraction and positron annihilation lifetime spectroscopy measurements are planned.

“These analyses are intended to examine changes in the meteorite’s internal structure caused by the irradiation and to confirm, at a microscopic level, the increase in material strength by a factor of 2.5 indicated by the experimental results,” explains Bochmann.

Complementary information can be gathered by space missions. Since NASA’s NEAR Shoemaker spacecraft successfully landed on asteroid Eros in 2001, two Japanese missions and a further US mission have visited asteroids, collecting samples and providing evidence that some asteroids are loosely bound rocky aggregates. In the next mission, NASA and ESA plan to study Apophis, an asteroid several hundred metres across that will safely pass closer to Earth than many satellites in geosynchronous orbit on 13 April 2029 – a close encounter expected only once every few thousand years.

The missions will observe how Apophis is twisted, stretched and squeezed by Earth’s gravity, providing a rare opportunity to observe asteroid-scale material response under natural tidal stresses. Bochmann and Schlesinger’s team now plan to study asteroids with a similar rocky composition.

Real-time data

“In our first experimental campaign, we focused on a metal-rich asteroid material because its more homogeneous structure is easier to control and model, and it met all the safety requirements of the experimental facility,” they explain. “This allowed us to collect, for the first time, non-destructive, real-time data on how such material responds to high-energy deposition.”

“As a next step, we plan to study more complex and rocky asteroid materials. One example is a class of meteorites called pallasites, which consist of a metal matrix similar to the meteorite material we have already studied, with up to centimetre-sized magnesium-rich crystals embedded inside. Because these objects are thought to originate from the core–mantle boundary of early planetesimals, such experiments could also provide valuable insights into planetary formation processes.”

Rohini Godbole 1952–2024


Rohini Madhusudan Godbole, one of India’s most influential particle physicists, passed away in her hometown of Pune on 25 October 2024.

Rohini was born on 12 November 1952 to Madhusudan and Malati Godbole. Theirs was a cultured and highly educated family, and she grew up in an atmosphere of intellectual freedom and progressive ideas. Educated at the best schools and colleges in Pune, she joined the Indian Institute of Technology at Bombay, from which she graduated in 1972. She then moved to Stony Brook, where she completed her PhD in particle physics with Jack Smith in 1979. Returning to India, she worked temporarily at the Tata Institute of Fundamental Research before joining the faculty at the University of Bombay (now Mumbai). There she remained until 1997, when she moved to the Centre for High Energy Physics at the Indian Institute of Science at Bangalore (now Bengaluru). She worked there for the rest of her life, continuing after her formal retirement as an emeritus professor. It was only a few months before the end that she moved back to her hometown, to be with her family in her last days.

Rohini was a prolific researcher. She will probably be best remembered for pioneering the development, with Manuel Drees, of photon structure functions for use with photon beams at future colliders, but her contributions spanned vacuum polarisation, Higgs physics, top-quark physics with polarised beams, and physics beyond the Standard Model, especially low-energy supersymmetry. She authored a well-known textbook on the latter subject with Probir Roy and Drees.

Rohini was indefatigable in promoting the cause of women in science

Rohini’s broad understanding and warm character combined to make her the best-known face of elementary particle physics from India. She worked tirelessly to promote high-energy physics inside India, organising schools and workshops, and often represented the country in international forums, such as those overseeing India’s participation in the LHC and other large international collaborative experiments. Rohini was a dedicated teacher and mentor to a long series of graduate students and postdocs, and a universal elder sister or aunt for the entire community of younger particle physicists in India.

No description of Rohini can be complete without mentioning her indefatigable efforts to promote the cause of women in science. Having herself faced gender discrimination in her younger days, she was determined to ensure that young women scientists received proper opportunities and recognition. She authored two books highlighting the work of Indian women scientists, thereby setting up role models to inspire the younger generation. Even more than these books, however, her own presence and encouragement left a mark on two generations of particle physicists, in India and abroad.

Rohini’s signal contributions were recognised by many awards and distinctions. The government of India awarded her the coveted Padma Shri in 2019, and the government of France awarded her the Ordre National du Mérite in 2021, mentioning her important role in furthering scientific collaboration between India and France. But her true memorial lies in the unique place she holds in the hearts of thousands of students, collaborators, friends and acquaintances. She was an extraordinary person who carved out a niche all by herself, with her scientific talents, her indefatigable energy, her universal amiability and her indomitable will. Her loss is sorely felt.

How I learnt to stop worrying and love QCD predictions

To begin, could you explain what the muon’s magnetic moment is, and why it should be anomalous?

Particles react to magnetic fields like tiny bar magnets, depending on their mass, electric charge and spin – a sort of intrinsic angular momentum lacking a true classical analogue. These properties combine into the magnetic moment, along with a quantum-mechanical g-factor which sets the strength of the response. Dirac computed g to be precisely two for electrons, with a formula that applies equally to the other, then-unknown, leptons. We call any deviation from this value anomalous. The name stuck because the first measurements differed from Dirac’s prediction, which initially was not understood. The anomalous piece is a natural probe of new physics, as it arises entirely from quantum fluctuations that may involve as-yet unseen new particles.
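
For concreteness, the quantities described above can be written down explicitly; this is just the standard textbook definition implied by the answer, quoted for orientation.

```latex
% Magnetic moment of a lepton with charge q, mass m and spin S,
% and the definition of the anomalous part:
\vec{\mu} = g\,\frac{q}{2m}\,\vec{S},
\qquad
a \equiv \frac{g-2}{2},
% so Dirac's prediction g = 2 corresponds to a vanishing anomaly, a = 0.
```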

What ingredients from the Standard Model go into computing g–2?

Everything. All sectors, all particles, all Standard Model (SM) forces contribute. The dominant and best quantified contributions are due to QED, having been computed through fifth order in the fine structure constant α. We are talking about two independent calculations of more than 12,000 Feynman diagrams, accounting for more than 99.9% of the total SM prediction. Interestingly, two measurements of α disagree at more than 5σ, resulting in an uncertainty of about two parts per billion. While this discrepancy needs to be resolved, it is negligible for the muon g–2 observable. The electroweak contribution was computed at the two-loop level long ago, and updated with better measured input parameters and calculations of nonperturbative effects in quark loops. The resulting uncertainty is close to 40 times smaller than that of the g–2 experiment. Then, the overall uncertainty is determined by our knowledge of the hadronic corrections, which are by far the most difficult to constrain.
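
Schematically, the contributions listed in this answer add up as follows (a standard decomposition, written here only for orientation):

```latex
a_\mu^{\rm SM} \;=\; a_\mu^{\rm QED} \;+\; a_\mu^{\rm EW} \;+\; a_\mu^{\rm had},
% with the QED term known through fifth order in \alpha, the electroweak term
% through two loops, and the hadronic term -- split into vacuum-polarisation and
% light-by-light pieces in the next answer -- dominating the overall uncertainty.
```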

What sort of hadronic effects do you have in mind here? How are they calculated?

There are two distinct effects: hadronic vacuum polarisation (HVP) and hadronic light-by-light (HLbL) scattering. The former arises at second order in α; it is the larger of the two and the largest source of uncertainty. While interacting with an external magnetic field, the muon emits a virtual photon that can further split into a quark loop before recombining. The HLbL contribution arises at third order and is now known with sufficient precision. The challenge is that loop diagrams must be computed at all virtual energies, down to where the strong force (QCD) becomes non-perturbative and quarks hadronise. There are two ways to tackle this.

Instead of computing the hadronic bubble directly, the data-driven “dispersive” approach relates it to measurable quantities, for example the cross section for electron–positron annihilation into hadrons. About 75% of the total HVP comes from e⁺e⁻ → π⁺π⁻, so the measurement errors in this channel determine the overall uncertainty. The decays of tau leptons into hadrons can also be used as inputs. Since the process is mediated by a charged W boson, instead of a photon, it requires an isospin rotation from the charged to the neutral current. At low energies, this is another challenging non-perturbative problem. While there are phenomenological estimates of this effect, no complete theoretical calculation exists – which means that the uncertainties are not fully quantified. Differing opinions on how to assess them led to controversy over the inclusion of tau decays in the SM prediction of g–2. An alternative to data-driven methods is lattice QCD, which allows for ab initio calculations of the hadronic corrections.
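
In the dispersive approach described here, the leading-order HVP contribution is obtained from an integral over the measured hadronic cross section, schematically of the form below (a standard textbook expression, quoted for orientation):

```latex
a_\mu^{\rm HVP,\,LO} \;=\; \frac{\alpha^2}{3\pi^2}
  \int_{s_{\rm thr}}^{\infty} \frac{{\rm d}s}{s}\, K(s)\, R(s),
\qquad
R(s) = \frac{\sigma(e^+e^- \to {\rm hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)},
% where K(s) is a known, smoothly varying QED kernel and s_thr the hadronic
% threshold. The kernel and the 1/s factor weight low energies heavily, which is
% why the pi+pi- channel dominates both the value and the uncertainty.
```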

What does “ab initio” mean, in this context?

It means that there are no simplifying assumptions in the QCD calculation. The approximations used in the lattice formulation of QCD come with adjustable parameters and can be described by effective field theories of QCD. For example, we discretise space and time: the distance separating nearest-neighbour points is given by the lattice spacing and the effective field theory guides the approach of the lattice theory to the continuum limit, enabling controlled extrapolations. To evaluate path integrals using Monte Carlo methods, which themselves introduce statistical errors, we also rotate to imaginary time. While not affecting the HVP, this limits the quantities we can compute.
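
A minimal sketch of what a controlled continuum extrapolation means in practice, under the standard assumption that discretisation effects can be expanded in powers of the lattice spacing a:

```latex
O(a) \;=\; O_{\rm cont} \;+\; c_2\,a^2 \;+\; c_4\,a^4 \;+\; \dots
% For suitably improved lattice actions the leading artefacts start at O(a^2);
% the physical prediction O_cont is obtained by computing O(a) at several lattice
% spacings and extrapolating a -> 0, with the effective field theory fixing the
% form of the expansion.
```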

How do you ensure that the lattice predictions are unbiased?

Good question! Lattice calculations are complicated, and it is therefore important to have several results from independent groups for consolidating averages. An important cultural shift in the community is that numerical analyses are now routinely blinded to avoid confirmation bias, making agreements more meaningful. This shifts the focus from central values to systematic errors. For our 2025 White Paper (WP25), the main lattice inputs for HVP were obtained from blinded analyses.

How did you construct the SM prediction for your 2025 White Paper?

To summarise how the SM prediction in WP25 was obtained, sufficiently precise lattice results for HVP arrived just in time. Since measurements of the e⁺e⁻ → π⁺π⁻ channel are presently in disagreement with each other, the 2025 prediction solely relied on the lattice average for the HVP. In contrast, the 2020 White Paper (WP20) prediction employed the data-driven method, as the lattice-QCD results were not precise enough to weigh in.

With the experiment’s expected precision jump, it seemed vital for theory to follow suit

While the theory error in WP25 is larger than in WP20, it is a realistic assessment of present uncertainties, which we know how to improve. I stress that the combination of the SM theory error being four times larger than the experimental one and the remaining puzzles, particularly on the data-driven side, means that the question “Does the SM account for the experimental value of the muon’s anomalous magnetic moment?” has not yet been satisfactorily answered. Given the high level of activity, this will, however, happen soon.

Where are the tensions between lattice QCD, data-driven predictions and experimental measurements?

All g–2 experiments are beautifully consistent, and the lattice-based WP25 prediction differs from them by less than one standard deviation. At present, we don’t know if the data-driven method agrees with lattice QCD due to the differences in the e⁺e⁻ → π⁺π⁻ measurements. In particular, the 2023 CMD-3 results from the Budker Institute of Nuclear Physics are compatible with lattice results, but disagree with CMD-2, KLOE, BaBar, BESIII and SND, which formed the basis for WP20. All the experimental collaborations are now working on new analyses. BaBar is expected to release a new e⁺e⁻ → π⁺π⁻ result soon, and others, including Belle II, will follow. There is also ongoing work on radiative corrections and Monte Carlo generators, both of which are important in solving this puzzle. Once the dust settles, we will see whether the new data-driven evaluation agrees with the lattice average and the g–2 experiment. Either way, this may yield profound insights.

How did the Muon g–2 Theory Initiative come into being?

The first spark came when I received a visiting appointment from Fermilab, offering resources to organise meetings and workshops. At the time, my collaborators and I were gearing up to calculate the HVP in lattice QCD, and the Fermilab g–2 experiment was about to start. With the experiment’s expected precision jump, it seemed vital for theory to follow suit by bringing together communities working on different approaches to the SM contributions, with the goal of pooling our knowledge, reducing theoretical uncertainties and providing reliable predictions.

As Fermilab received my idea positively, I contacted the RBC collaboration and Christoph Lehner joined me with great enthusiasm to shape the effort. We recruited leaders in the experimental and theoretical communities to our Steering Committee. Its role is to coordinate efforts, organise workshops to bring the community together and provide the structure to map out scientific directions and decide on the next steps.

What were the main challenges you faced in coordinating such a complex collaboration?

With so many authors and such high stakes, disagreements naturally arise. In WP20, a consensus was emerging around the data-driven method. The challenge was to come up with a realistic and conservative error estimate, given the up to 3σ tensions between different data sets, including the two most precise measurements of e⁺e⁻ → π⁺π⁻ at the time.

Hadronic contribution

As we were finalising our WP20, the picture was unsettled by a new lattice calculation from the Budapest–Marseille–Wuppertal (BMW) collaboration, consistent with earlier lattice results but far more precise. While the value was famously in tension with data-driven methods, the preprint also presented a calculation of the “intermediate window” contribution to the HVP – about 30% of the total – which disagreed with a published RBC/UKQCD result and with data-driven evaluations (CERN Courier March/April 2025 p21). Since BMW was still updating their results and the paper wasn’t yet published, we described the result but excluded it from our SM prediction. Later, in 2023, further complications came from the CMD-3 measurement.

Consolidation between lattice results was first observed for the intermediate window contribution, in 2022 and 2023. This, in turn, revealed a tension with the corresponding data-driven evaluations. Results for the difficult-to-compute long-distance contributions arrived in late fall 2024, yielding consolidated lattice averages for the total HVP, where we had to sort out a few subtleties. This was intense – a lot of work in very little time.

On the data-driven side, we faced the aforementioned tensions between the e⁺e⁻ → π⁺π⁻ cross-section measurements. In light of these discrepancies, consensus was reached that we would not attempt a new data-driven average of HVP for WP25, leaving it for the next White Paper. Real conflict arose on the assessment of the quality of the uncertainty estimates for HVP contributions from tau decays and on whether to include them.

And how did you navigate these disagreements?

When the discussions around the assessment of tau-decay uncertainties failed to converge, we proposed a conflict-resolution procedure using the Steering Committee (SC) as the arbitration body, which all authors signed. If a conflict is brought to the SC for resolution, SC members first engage all parties involved to seek resolution. If none is found, the SC makes a recommendation and, if appropriate, the differing scientific viewpoints may be reflected in the document, followed by the recommendation. In the end, just having a conflict-resolution process in place was really helpful. While the SC negotiated a couple of presentation issues, the major disagreements were resolved without triggering the process.

The goal of WP25 was to wrap up a prediction before the announcement of the final Fermilab g–2 measurement. Adopting an internal conflict-resolution process was essential in getting our result out just in time, six days before the deadline.

Lattice QCD has really come of age

What other observables can benefit from advances in lattice QCD?

There are many, and their number is growing – lattice QCD has really come of age. Lattice QCD has been used for years to provide precise predictions of the hadronic parameters needed to describe weak processes, such as decay constants and form factors. A classic example, relevant to the LHC experiments, is the rare decay Bs → μ⁺μ⁻, where, thanks to lattice QCD calculations of the Bs-meson decay constant, the SM prediction is more precise than current experimental measurements. While precision continues to improve with refined methods, the lattice community is broadening the scope with new theoretical frameworks and improved computational methods, enabling calculations once out of reach – such as the (smeared) R-ratio, inclusive decay rates and PDFs.

There’s more g–2 physics over the horizon

Some have argued that the good agreement between lattice QCD and the final measurement of Fermilab’s muon g–2 experiment means that the g–2 anomaly has now been solved. However, this dramatically oversimplifies the situation: the magnetic moment of the muon remains an intriguing puzzle.

The extraordinary precision of 127 parts per billion (ppb) achieved at Fermilab deserves to be matched by an equally impressive theoretical prediction. At 530 ppb, theory is currently the limiting factor in any comparison. This is the longer-term goal that the Muon g–2 Theory Initiative is now working towards, with inputs from all possible sources (see “How I learnt to stop worrying and love QCD predictions“). In the near future, it will not be possible to reach this precision with lattice QCD alone. Other approaches are needed to make a competitive Standard Model prediction.

Tensions remain

Essentially all of the uncertainty in g–2 arises from the hadronic vacuum polarisation (HVP) – a quantum correction whereby a radiated virtual photon briefly transforms into a hadronic state before being reabsorbed. Historically, HVP has been evaluated by applying a dispersion relation to cross sections for hadron production in electron–positron collisions, but this method was displaced by lattice-QCD calculations in the theory initiative’s most recent white paper. The lattice community must be congratulated for the level of agreement that has been reached between groups working independently (CERN Courier July/August 2025 p7). By contrast, data-driven predictions are at present inconsistent across the experiments in the low-energy region; even if results from the CMD-3 experiment are excluded as an outlier, tensions remain, suggesting that some systematic errors may not have been completely addressed (CERN Courier March/April 2025 p21). Could a novel experimental technique help resolve the confusion?

The MUonE collaboration proposes a completely independent approach based on a new experimental method. In MUonE, we will determine the running of the electromagnetic coupling, a fundamental quantity that is driven by the same kinds of quantum fluctuations as muon g–2. We will extract it from a precise measurement of the differential cross section for elastic scattering of muons from electrons as a function of the momentum transferred.

MUonE is a relatively inexpensive experiment that we can set up in the existing M2 beamline in CERN’s North Area, already home to the AMBER and NA64-µ experiments. Three years of running, given the parameters of the M2 beam and the expected performance of the MUonE detector, would reach a statistical precision of approximately 180 ppb, with a comparable level of systematic uncertainty.

MUonE will take advantage of silicon sensors that are already being developed for the CMS tracker upgrade. From the results, we will be able to use a dispersion relation to extract HVP’s contribution to g–2. Perhaps more importantly, because our method directly measures a function that enters the lattice calculation, it offers an independent check of that approach. The big challenge will be to keep the systematic uncertainties in the measurement small enough. However, MUonE does not suffer from the intrinsic problem of existing data-driven techniques, which must numerically integrate over the sharp peaks of hadron production by low-energy resonances. In contrast, the function derived from the space-like process that MUonE will measure is smooth and well-behaved.
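
The space-like extraction alluded to here can be written, schematically, with a standard change of variables (quoted for orientation; the detailed analysis strategy is for the MUonE collaboration to define):

```latex
a_\mu^{\rm HVP,\,LO} \;=\; \frac{\alpha}{\pi} \int_0^1 {\rm d}x\,(1-x)\,
  \Delta\alpha_{\rm had}\!\bigl[t(x)\bigr],
\qquad
t(x) = -\,\frac{x^2 m_\mu^2}{1-x} \;<\; 0,
% where \Delta\alpha_had(t) is the hadronic contribution to the running of the
% electromagnetic coupling at space-like momentum transfer t -- the quantity
% extracted from the muon-electron elastic-scattering cross section. Unlike the
% time-like integrand, this one is smooth over the whole integration range.
```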

Piecing the puzzle 

CERN was the origin of the first brilliant muon g–2 measurements starting back in the 1950s (CERN Courier September/October 2024 p53), and now the laboratory has an opportunity to put another important piece into the g–2 puzzle through the MUonE project. Another component of great importance in this domain will be the new g-2/EDM experiment planned for J-PARC, which will be performed in completely different conditions, and therefore with very different systematics from those of the Fermilab experiment.

Soft clouds probe dark QCD

CMS figure 1

Despite decades of searches, experiments have yet to find evidence for a new particle that could account for dark matter on its own. This has strengthened interest in richer “dark-sector” scenarios featuring multiple new states and interactions, potentially analogous to those of the Standard Model (SM). The CMS collaboration targeted one of the most distinctive possible signatures of a dark strong force in proton–proton collisions: a dense, nearly isotropic cloud of low-momentum particles known as a soft unclustered energy pattern (SUEP).

Searches in the LHC proton–proton collision data for events with many low-momentum particles are plagued by overwhelming backgrounds from pileup and soft QCD interactions. The CMS collaboration has recently overcome this challenge by using large-radius clusters of charged particle tracks and relying on quantities that characterise the expected isotropy of SUEP decays.

The 125 GeV Higgs boson serves in many theoretical models as a natural mediator between the SM and a hidden sector, and current experimental constraints still leave room for exotic decays. Motivated by this possibility, CMS focused on Higgs-boson production in association with a vector (W or Z) boson that decays into leptons. While these modes account for < 1% of Higgs bosons produced at the LHC, the leptons provide significant handles for triggering and background suppression.

Rather than relying on SM simulations, which face modelling and statistical challenges for such soft interactions, the collaboration extrapolated the background from events with low isotropy or relatively few charged-particle tracks per cluster, using a method that accounts for small correlations between the quantities used in the extrapolation. To validate the approach, an orthogonal sample of events with a high-momentum photon was studied, taking advantage of the Higgs boson’s minuscule coupling to photons and the similarity of background processes in W/Z + jet and photon + jet events that could mimic a SUEP signal.
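
One simple way to quantify how isotropic a cluster of tracks is, used here purely as an illustration, is the sphericity built from the track momenta. The sketch below is written under that assumption; it is not necessarily the observable used in the CMS analysis.

```python
# Sphericity of charged-particle momenta as an isotropy measure (illustrative;
# not necessarily the exact observable used in the CMS analysis).
import numpy as np

def sphericity(momenta):
    """momenta: (N, 3) array of track momentum vectors (px, py, pz).
    Returns S in [0, 1]: S ~ 1 for an isotropic event, S ~ 0 for back-to-back jets."""
    norm = np.sum(momenta ** 2)                        # sum of |p|^2 over all tracks
    tensor = momenta.T @ momenta / norm                # 3x3 momentum tensor, unit trace
    eigenvalues = np.sort(np.linalg.eigvalsh(tensor))  # ascending, sums to 1
    return 1.5 * (eigenvalues[0] + eigenvalues[1])

# Toy comparison: an isotropic "cloud" of 100 soft tracks vs two collimated jets.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
jets = np.vstack([np.tile([0.0, 0.0, 50.0], (50, 1)),
                  np.tile([0.0, 0.0, -50.0], (50, 1))])
jets += rng.normal(scale=0.5, size=jets.shape)         # small spread around the jet axes
print(sphericity(cloud), sphericity(jets))             # close to 1 vs close to 0
```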

The data in the search region, consisting of events with a W or Z boson candidate and many isotropically distributed charged particles, was found to be consistent with the SM expectation. Stringent limits were placed on the branching ratio of the 125 GeV Higgs boson decaying to a SUEP shower for a wide range of parameters (see figure 1).

This analysis complements a previous CMS search that primarily targeted much heavier mediators produced via gluon fusion, improving limits on the H → SUEP branching ratio by two orders of magnitude. It additionally provides model-agnostic limits and detailed reinterpretation recipes, maximising the usability of this data for testing alternative theoretical frameworks.

SUEP signatures are not unique to the benchmark scenarios under scrutiny. They naturally emerge in hidden-valley models, where mediators connect the SM to a new, otherwise isolated sector. If the hidden states interact through a “dark QCD”, proton–proton collisions would trigger a crowded cascade of dark partons rather than the familiar collimated showers.

Crucially, unlike in ordinary QCD – where the coupling quickly weakens at energies above confinement – the dark coupling could remain large well beyond its typically low confinement scale. This sustained strong coupling would drive frequent interactions and efficiently redistribute momentum, producing an almost isotropic radiation pattern. As the system cooled, it would then hadronise into numerous soft dark hadrons whose decays back to SM particles would retain this softness and isotropy – yielding the characteristic SUEP probed by CMS.

The beam–bottle debate at PSI

Free neutrons have a lifetime of about 880 seconds, yet a longstanding tension between two measurement techniques continues to puzzle the neutron-physics community. The most precise averages from beam experiments and magnetic-bottle traps yield 888.1 ± 2.0 s and 877.8 ± 0.3 s, respectively – roughly corresponding to a 5σ discrepancy.
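
For orientation, the quoted significance follows directly from the two averages given above; the short check below uses only those numbers.

```python
# Quick check of the quoted beam-bottle discrepancy, using only the averages above.
beam, beam_err = 888.1, 2.0       # seconds (beam method)
bottle, bottle_err = 877.8, 0.3   # seconds (magnetic-bottle method)

difference = beam - bottle                            # ~10.3 s
combined_err = (beam_err**2 + bottle_err**2) ** 0.5   # ~2.0 s
print(f"{difference:.1f} s / {combined_err:.1f} s = "
      f"{difference / combined_err:.1f} sigma")       # roughly 5 sigma
```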

On 13 September 2025, 40 representatives of all currently operating neutron-lifetime experiments came together at the Paul Scherrer Institute (PSI) to discuss the current status of the tension and the path forward. Geoffrey Greene (University of Tennessee) opened the workshop by reflecting on five decades of neutron-lifetime measurements from the 1960s to the present.

The beam method employs cold-neutron beams, with protons from neutron beta-decays collected in a magnetic trap and counted. The lifetime is then inferred from the ratio of proton counts to neutron flux. Fred Wietfeldt (Tulane University) highlighted the huge efforts undertaken at the National Institute of Standards and Technology (NIST) in Gaithersburg, most importantly on the absolute calibration of the neutron detector.

Susan Seestrom (Los Alamos National Laboratory) described today’s most precise experiment, UCNτ, which uses the magnetic-bottle trap method. It confines ultracold neutrons (UCNs) via their magnetic and gravitational interaction and counts the surviving ones at different times. She also provided an outlook on its next phase, UCNτ+, with increased statistics goals. The τSPECT experiment at PSI’s UCN facility is also based on magnetic confinement of neutrons and has recently started data taking, but has distinct differences. As explained by Martin Fertl from Johannes Gutenberg University Mainz, τSPECT uses a double-spin-flip method to increase the UCN filling of the purely magnetic trap, together with a detector that moves in and out of the storage volume: it first removes slightly higher-energy neutrons before storage and then counts the surviving neutrons in situ afterwards.

Kenji Mishima (University of Osaka) presented the neutron-lifetime experiment at J-PARC, based on a new principle: the detection of the charged decay products in an active time-projection chamber, where the neutrons are captured on a small ³He admixture. This experiment’s systematics are entirely different from those of previous efforts and may offer a unique contribution to the field. Other studies largely excluded the possibility that the beam–bottle discrepancy could be explained by hypothetical exotic decay channels or other non-standard processes.

New results from LANL, NIST, J-PARC and PSI should clarify the currently puzzling situation in the coming years.

Vienna’s new hub for particle physics

On 7 November 2025, the Austrian Academy of Sciences inaugurated the Marietta Blau Institute for Particle Physics (MBI). The new centre brings together the former Stefan Meyer Institute for Subatomic Physics and the Institute of High Energy Physics (HEPHY), creating Austria’s largest hub for particle-physics research. In total, about 130 researchers with broad expertise across the discipline now work under the MBI umbrella.

Marietta Blau was one of the first women to study physics at the University of Vienna. As recalled by Brigitte Strohmaier (University of Vienna), who summarised her biography, Blau became best known for her work at the Institute for Radium Research between 1923 and 1938, where she developed the nuclear-emulsion technique for detecting charged particles with micrometre-scale precision.

Together with Hertha Wambacher, Blau exposed nuclear emulsions to cosmic rays at Victor Hess’s observatory near Innsbruck, producing photographic evidence of the interactions between high-energy particles and matter.

Staying in Scandinavia when Nazi Germany annexed Austria in 1938, Blau could not return to Vienna. She secured a position at the Polytechnic Institute of Mexico City on the recommendation of Albert Einstein, but found herself isolated from colleagues. From 1944 on, she worked in the US before returning to Vienna in 1960, where she supervised the evaluation of photographic plates from CERN.

Her method of nuclear emulsions was further advanced by Cecil Powell in Bristol, who was awarded the Nobel Prize in Physics in 1950 for discoveries regarding mesons made with this method. Blau herself was nominated for the prize on this and other occasions, but was never recognised for her groundbreaking research.

Joachim Kopp, chair of the Scientific Advisory Board of HEPHY, introduced the institute’s scientific outlook. He highlighted the breadth of MBI’s programme, which includes major contributions to CERN experiments such as CMS and ALICE at the LHC, and ASACUSA at the AD/ELENA facility, where antimatter is studied using low-energy antiprotons.

Groups at MBI are also involved in the Belle II experiment at KEK, as well as the dark-matter experiments CRESST and COSINUS at the LNGS underground lab. Neutrino physics, gravitational-wave studies at the Einstein Telescope, as well as tests of fundamental symmetries using ultra-cold hydrogen and deuterium beams, are also part of the research programme. The MBI also builds on the long tradition of detector development and construction for future experiments, complemented by a dedicated theory group.
