Millions of asteroids orbit the Sun. Smaller fragments often brush the Earth’s atmosphere to light up the sky as meteors. Once every few centuries, a meteoroid is large enough to cause regional damage, most recently the Chelyabinsk explosion that injured thousands of people in 2013, and the Tunguska event that flattened thousands of square kilometres of Siberian forest in 1908. Asteroid impacts with global consequences are vastly rarer, especially compared to the frequency with which they appear in the movies. But popular portrayals do carry a grain of truth: in the case of an impending collision with Earth, nuclear deflection would be a last-resort option, with fragmentation posing the principal risk. The most important uncertainty in such a mission would be the material properties of the asteroid – a question recently studied at CERN’s Super Proton Synchrotron (SPS), where experiments revealed that some asteroid materials may be stronger under extreme energy deposition than current models assume.
Planetary defence
“Planetary defence represents a scientific challenge,” says Karl-Georg Schlesinger, co-founder of OuSoCo, a start-up developing advanced material-response models used to benchmark large-scale nuclear deflection simulations. “The world must be able to execute a nuclear deflection mission with high confidence, yet cannot conduct a real-world test in advance. This places extraordinary demands on material and physics data.”
Accelerator facilities play a key role in understanding how asteroid material behaves under extreme conditions, providing controlled environments where impact-relevant pressures and shock conditions can be reproduced. To probe the material response directly, the team conducted experiments at CERN’s HiRadMat facility in 2024 and 2025 as part of the Fireball collaboration with the University of Oxford. A sample of the Campo del Cielo meteorite, a metal-rich iron-nickel body, was exposed to 27 successive short, intense pulses of the 440 GeV SPS proton beam, reproducing impact-relevant shock conditions that cannot be achieved with conventional laboratory techniques.
“The material became stronger, exhibiting an increase in yield strength, and displayed a self-stabilising damping behaviour,” explains Melanie Bochmann, co-founder and co-team lead alongside Schlesinger. “Our experiments indicate that – at least for metal-rich asteroid material – a larger device than previously thought can be used without catastrophically breaking the asteroid. This keeps open an emergency option for situations involving very large objects or very short warning times, where non-nuclear methods are insufficient and where current models might assume fragmentation would limit the usable device size.”
Throughout the experiments at the SPS, the team monitored each pulse using laser Doppler vibrometry alongside temperature sensors, capturing in real time how the meteorite softened, flexed and then unexpectedly re-strengthened without breaking. This represents the first experimental evidence that metal-rich asteroid material may behave far more robustly under extreme, sudden energy loading than predicted.
After the SPS campaign, initial post-irradiation measurements were performed at CERN. These revealed that magnesium inclusions had been activated to produce sodium-22, a radioactive isotope that decays to produce a positron, allowing diagnostics similar to those used in medical imaging. Following these initial measurements, the irradiated meteorite has been transferred to the ISIS Neutron and Muon Source at the Rutherford Appleton Laboratory in the UK, where neutron diffraction and positron annihilation lifetime spectroscopy measurements are planned.
“These analyses are intended to examine changes in the meteorite’s internal structure caused by the irradiation and to confirm, at a microscopic level, the increase in material strength by a factor of 2.5 indicated by the experimental results,” explains Bochmann.
Complementary information can be gathered by space missions. Since NASA’s NEAR Shoemaker spacecraft successfully landed on asteroid Eros in 2001, two Japanese missions and a further US mission have visited asteroids, collecting samples and providing evidence that some asteroids are loosely bound rocky aggregates. In the next missions, NASA and ESA plan to study Apophis, an asteroid several hundred metres across that will safely pass closer to Earth than satellites in geosynchronous orbit on 13 April 2029 – a close encounter expected only once every few thousand years.
The missions will observe how Apophis is twisted, stretched and squeezed by Earth’s gravity, providing a rare opportunity to observe asteroid-scale material response under natural tidal stresses. Bochmann and Schlesinger’s team now plan to study asteroids with a similar rocky composition.
Real-time data
“In our first experimental campaign, we focused on a metal-rich asteroid material because its more homogeneous structure is easier to control and model, and it met all the safety requirements of the experimental facility,” they explain. “This allowed us to collect, for the first time, non-destructive, real-time data on how such material responds to high-energy deposition.”
“As a next step, we plan to study more complex and rocky asteroid materials. One example is a class of meteorites called pallasites, which consist of a metal matrix similar to the meteorite material we have already studied, with up to centimetre-sized magnesium-rich crystals embedded inside. Because these objects are thought to originate from the core–mantle boundary of early planetesimals, such experiments could also provide valuable insights into planetary formation processes.”
Rohini Madhusudan Godbole, one of India’s most influential particle physicists, passed away in her hometown of Pune on 25 October 2024.
Rohini was born on 12 November 1952 to Madhusudan and Malati Godbole. Theirs was a cultured and highly educated family, and she grew up in an atmosphere of intellectual freedom and progressive ideas. Educated at the best schools and colleges in Pune, she joined the Indian Institute of Technology at Bombay, from which she graduated in 1972. She then moved to Stony Brook, where she completed her PhD in particle physics with Jack Smith in 1979. Returning to India, she worked temporarily at the Tata Institute of Fundamental Research before joining the faculty at the University of Bombay (now Mumbai). There she remained until 1997, when she moved to the Centre for High Energy Physics at the Indian Institute of Science at Bangalore (now Bengaluru). She worked there for the rest of her life, continuing after her formal retirement as an emeritus professor. It was only a few months before the end that she moved back to her hometown, to be with her family in her last days.
Rohini was a prolific researcher. She will probably be best remembered for pioneering the development, with Manuel Drees, of photon structure functions for use with photon beams at future colliders, but her contributions spanned vacuum polarisation, Higgs physics, top-quark physics with polarised beams, and physics beyond the Standard Model, especially low-energy supersymmetry. She authored a well-known textbook on the latter subject with Probir Roy and Drees.
Rohini’s broad understanding and warm character combined to make her the best-known face of elementary particle physics from India. She worked tirelessly to promote high-energy physics inside India, organising schools and workshops, and often represented the country in international forums, for example those monitoring India’s participation in the LHC and other large international collaborative experiments. Rohini was a dedicated teacher and mentor to a long series of graduate students and postdocs, and a universal elder sister or aunt for the entire community of younger particle physicists in India.
No description of Rohini can be complete without mentioning her indefatigable efforts to promote the cause of women in science. Having herself faced gender discrimination in her younger days, she was determined to ensure that young women scientists received proper opportunities and recognition. She authored two books highlighting the work of Indian women scientists, thereby setting up role models to inspire the younger generation. Even more than these books, however, her own presence and encouragement left a mark on two generations of particle physicists, in India and abroad.
Rohini’s signal contributions were recognised by many awards and distinctions. The government of India awarded her the coveted Padma Shri in 2019, and the government of France awarded her the Ordre National du Mérite in 2021, mentioning her important role in furthering scientific collaboration between India and France. But her true memorial lies in the unique place she holds in the hearts of thousands of students, collaborators, friends and acquaintances. She was an extraordinary person who carved out a niche all by herself, with her scientific talents, her indefatigable energy, her universal amiability and her indomitable will. Her loss is sorely felt.
To begin, could you explain what the muon’s magnetic moment is, and why it should be anomalous?
Particles react to magnetic fields like tiny bar magnets, depending on their mass, electric charge and spin – a sort of intrinsic angular momentum lacking a true classical analogue. These properties combine into the magnetic moment, along with a quantum-mechanical g-factor which sets the strength of the response. Dirac computed g to be precisely two for electrons, with a formula that applies equally to the other, then-unknown, leptons. We call any deviation from this value anomalous. The name stuck because the first measurements differed from Dirac’s prediction, which initially was not understood. The anomalous piece is a natural probe of new physics, as it arises entirely from quantum fluctuations that may involve as-yet unseen new particles.
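In standard textbook notation (not specific to this interview), these statements can be written as

\[
\vec{\mu} \;=\; g\,\frac{q}{2m}\,\vec{S}, \qquad a \;\equiv\; \frac{g-2}{2},
\]

so Dirac’s g = 2 corresponds to a = 0, and the anomaly a collects all the quantum corrections on top of it.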
What ingredients from the Standard Model go into computing g–2?
Everything. All sectors, all particles, all Standard Model (SM) forces contribute. The dominant and best-quantified contributions are due to QED, having been computed through fifth order in the fine structure constant α. We are talking about two independent calculations of more than 12,000 Feynman diagrams, accounting for more than 99.9% of the total SM prediction. Interestingly, two measurements of α disagree at more than 5σ, resulting in an uncertainty of about two parts per billion. While this discrepancy needs to be resolved, it is negligible for the muon g–2 observable. The electroweak contribution was computed at the two-loop level long ago, and has since been updated with better-measured input parameters and calculations of non-perturbative effects in quark loops. The resulting uncertainty is close to 40 times smaller than that of the g–2 experiment. The overall uncertainty is therefore determined by our knowledge of the hadronic corrections, which are by far the most difficult to constrain.
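A rough illustration of why QED dominates: the leading one-loop (Schwinger) term alone, a = α/2π, already reproduces the measured anomaly to better than half a per cent. A minimal numerical sketch (values approximate):

```python
import math

# Leading one-loop QED contribution to the lepton anomaly: Schwinger's a = alpha / (2*pi).
# Illustrative only -- the full SM prediction adds four more QED orders plus
# electroweak and hadronic contributions.
alpha = 1 / 137.035999            # fine-structure constant (approximate)
a_one_loop = alpha / (2 * math.pi)
print(f"a (one loop) ~ {a_one_loop:.6e}")  # ~1.16e-3, vs a measured a_mu of ~1.166e-3
```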
What sort of hadronic effects do you have in mind here? How are they calculated?
There are two distinct effects: hadronic vacuum polarisation (HVP) and hadronic light-by-light (HLbL). The former arises at second order in α, is the larger of the two and is the largest source of uncertainty. While interacting with an external magnetic field, the muon emits a virtual photon that can further split into a quark loop before recombining. The HLbL contribution arises at third order and is now known with sufficient precision. The challenge is that loop diagrams must be computed at all virtual energies, down to where the strong force (QCD) becomes non-perturbative and quarks hadronise. There are two ways to tackle this.
Instead of computing the hadronic bubble directly, the data-driven “dispersive” approach relates it to measurable quantities, for example the cross section for electron–positron annihilation into hadrons. About 75% of the total HVP comes from e+e–→ π+π–, so the measurement errors in this channel determine the overall uncertainty. The decays of tau leptons into hadrons can also be used as inputs. Since the process is mediated by a charged W boson, instead of a photon, it requires an isospin rotation from the charged to the neutral current. At low energies, this is another challenging non-perturbative problem. While there are phenomenological estimates of this effect, no complete theoretical calculation exists – which means that the uncertainties are not fully quantified. Differing opinions on how to assess them led to controversy over the inclusion of tau decays in the SM prediction of g–2. An alternative to data-driven methods is lattice QCD, which allows for ab initio calculations of the hadronic corrections.
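One common way of writing the leading-order dispersive relation (conventions vary between references) is

\[
a_\mu^{\mathrm{HVP,\,LO}} \;=\; \left(\frac{\alpha\, m_\mu}{3\pi}\right)^{2} \int_{s_{\mathrm{th}}}^{\infty} \mathrm{d}s\, \frac{\hat K(s)\, R(s)}{s^{2}}, \qquad R(s) \;=\; \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)},
\]

where \(\hat K(s)\) is a known, smooth kinematic kernel of order one; the 1/s² weighting is what makes the low-energy π+π– region dominate both the integral and its uncertainty.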
What does “ab initio” mean, in this context?
It means that there are no simplifying assumptions in the QCD calculation. The approximations used in the lattice formulation of QCD come with adjustable parameters and can be described by effective field theories of QCD. For example, we discretise space and time: the distance separating nearest-neighbour points is given by the lattice spacing and the effective field theory guides the approach of the lattice theory to the continuum limit, enabling controlled extrapolations. To evaluate path integrals using Monte Carlo methods, which themselves introduce statistical errors, we also rotate to imaginary time. While not affecting the HVP, this limits the quantities we can compute.
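As a toy illustration of the controlled continuum extrapolation described here (a generic sketch with invented numbers, not any collaboration’s actual analysis), results at several lattice spacings can be fitted to a constant plus O(a²) artefacts and extrapolated to a → 0:

```python
import numpy as np

# Toy continuum extrapolation: an observable measured at several lattice spacings a (fm),
# assumed to approach its continuum value as O(a) = O_cont + c * a**2.
# The numbers below are invented purely for illustration.
a   = np.array([0.12, 0.09, 0.06, 0.04])        # lattice spacings in fm
obs = np.array([1.043, 1.025, 1.011, 1.005])    # toy measurements of the observable

# Linear fit in a**2: obs ~ O_cont + c * a**2, then read off the a -> 0 value.
c, O_cont = np.polyfit(a**2, obs, 1)
print(f"continuum-limit estimate: {O_cont:.4f}  (discretisation slope c = {c:.2f})")
```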
How do you ensure that the lattice predictions are unbiased?
Good question! Lattice calculations are complicated, and it is therefore important to have several results from independent groups for consolidating averages. An important cultural shift in the community is that numerical analyses are now routinely blinded to avoid confirmation bias, making agreements more meaningful. This shifts the focus from central values to systematic errors. For our 2025 White Paper (WP25), the main lattice inputs for HVP were obtained from blinded analyses.
How did you construct the SM prediction for your 2025 White Paper?
In short, sufficiently precise lattice results for HVP arrived just in time. Since measurements of the e+e–→ π+π– channel are presently in disagreement with each other, the 2025 prediction solely relied on the lattice average for the HVP. In contrast, the 2020 White Paper (WP20) prediction employed the data-driven method, as the lattice-QCD results were not precise enough to weigh in.
While the theory error in WP25 is larger than in WP20, it is a realistic assessment of present uncertainties, which we know how to improve. I stress that the combination of the SM theory error being four times larger than the experimental one and the remaining puzzles, particularly on the data-driven side, means that the question “Does the SM account for the experimental value of the muon’s anomalous magnetic moment?” has not yet been satisfactorily answered. Given the high level of activity, this will, however, happen soon.
Where are the tensions between lattice QCD, data-driven predictions and experimental measurements?
All g–2 experiments are beautifully consistent, and the lattice-based WP25 prediction differs from them by less than one standard deviation. At present, we don’t know if the data-driven method agrees with lattice QCD due to the differences in the e+e–→ π+π– measurements. In particular, the 2023 CMD-3 results from the Budker Institute of Nuclear Physics are compatible with lattice results, but disagree with CMD-2, KLOE, BaBar, BESIII and SND, which formed the basis for WP20. All the experimental collaborations are now working on new analyses. BaBar is expected to release a new e+e–→ π+π– result soon, and others, including Belle II, will follow. There is also ongoing work on radiative corrections and Monte Carlo generators, both of which are important in solving this puzzle. Once the dust settles, we will see whether the new data-driven evaluation agrees with the lattice average and the g–2 experiment. Either way, this may yield profound insights.
How did the Muon g–2 Theory Initiative come into being?
The first spark came when I received a visiting appointment from Fermilab, offering resources to organise meetings and workshops. At the time, my collaborators and I were gearing up to calculate the HVP in lattice QCD, and the Fermilab g–2 experiment was about to start. With the experiment’s expected precision jump, it seemed vital for theory to follow suit by bringing together communities working on different approaches to the SM contributions, with the goal of pooling our knowledge, reducing theoretical uncertainties and providing reliable predictions.
As Fermilab received my idea positively, I contacted the RBC collaboration and Christoph Lehner joined me with great enthusiasm to shape the effort. We recruited leaders in the experimental and theoretical communities to our Steering Committee. Its role is to coordinate efforts, organise workshops to bring the community together and provide the structure to map out scientific directions and decide on the next steps.
What were the main challenges you faced in coordinating such a complex collaboration?
With so many authors and such high stakes, disagreements naturally arise. In WP20, a consensus was emerging around the data-driven method. The challenge was to come up with a realistic and conservative error estimate, given the up to 3σ tensions between different data sets, including the two most precise measurements of e+e–→ π+π– at the time.
As we were finalising our WP20, the picture was unsettled by a new lattice calculation from the Budapest–Marseille–Wuppertal (BMW) collaboration, consistent with earlier lattice results but far more precise. While the value was famously in tension with data-driven methods, the preprint also presented a calculation of the “intermediate window” contribution to the HVP – about 30% of the total – which disagreed with a published RBC/UKQCD result and with data-driven evaluations (CERN Courier March/April 2025 p21). Since BMW was still updating their results and the paper wasn’t yet published, we described the result but excluded it from our SM prediction. Later, in 2023, further complications came from the CMD-3 measurement.
Consolidation between lattice results was first observed for the intermediate window contribution, in 2022 and 2023. This, in turn, revealed a tension with the corresponding data-driven evaluations. Results for the difficult-to-compute long-distance contributions arrived in late fall 2024, yielding consolidated lattice averages for the total HVP, where we had to sort out a few subtleties. This was intense – a lot of work in very little time.
On the data-driven side, we faced the aforementioned tensions between the e+e–→ π+π– cross-section measurements. In light of these discrepancies, consensus was reached that we would not attempt a new data-driven average of HVP for WP25, leaving it for the next White Paper. Real conflict arose on the assessment of the quality of the uncertainty estimates for HVP contributions from tau decays and on whether to include them.
And how did you navigate these disagreements?
When the discussions around the assessment of tau-decay uncertainties failed to converge, we proposed a conflict-resolution procedure using the Steering Committee (SC) as the arbitration body, which all authors signed. If a conflict is brought to the SC for resolution, SC members first engage all parties involved to seek resolution. If none is found, the SC makes a recommendation and, if appropriate, the differing scientific viewpoints may be reflected in the document, followed by the recommendation. In the end, just having a conflict-resolution process in place was really helpful. While the SC negotiated a couple of presentation issues, the major disagreements were resolved without triggering the process.
The goal of WP25 was to wrap up a prediction before the announcement of the final Fermilab g–2 measurement. Adopting an internal conflict-resolution process was essential in getting our result out just in time, six days before the deadline.
What other observables can benefit from advances in lattice QCD?
There are many, and their number is growing – lattice QCD has really come of age. Lattice QCD has been used for years to provide precise predictions of the hadronic parameters needed to describe weak processes, such as decay constants and form factors. A classic example, relevant to the LHC experiments, is the rare decay Bs→ μ+μ–, where, thanks to lattice QCD calculations of the Bs-meson decay constant, the SM prediction is more precise than current experimental measurements. While precision continues to improve with refined methods, the lattice community is broadening the scope with new theoretical frameworks and improved computational methods, enabling calculations once out of reach – such as the (smeared) R-ratio, inclusive decay rates and PDFs.
Some have argued that the good agreement between lattice–QCD and the final measurement of Fermilab’s muon g–2 experiment means that the g–2 anomaly has now been solved. However, this dramatically oversimplifies the situation: the magnetic moment of the muon remains an intriguing puzzle.
The extraordinary precision of 127 parts per billion (ppb) achieved at Fermilab deserves to be matched by an equally impressive theoretical prediction. At 530 ppb, theory is currently the limiting factor in any comparison. This is the longer-term goal that the Muon g–2 Theory Initiative is now working towards, with inputs from all possible sources (see “How I learnt to stop worrying and love QCD predictions“). In the near future, it will not be possible to reach this precision with lattice QCD alone. Other approaches are needed to make a competitive Standard Model prediction.
Tensions remain
Essentially, all of the uncertainty in g–2 arises from the hadronic vacuum polarisation (HVP) – a quantum correction whereby a radiated virtual photon briefly transforms into a hadronic state before being reabsorbed. Historically, HVP has been evaluated by applying a dispersion relation to cross sections for hadron production in electron–positron collisions, but this method was displaced by lattice–QCD calculations in the theory initiative’s most recent white paper. The lattice community must be congratulated for the level of agreement that has been reached between groups working independently (CERN Courier July/August 2025 p7). By contrast, data-driven predictions are at present inconsistent across the experiments in the low-energy region; even if results from the CMD-3 experiment are excluded as an outlier, tensions remain, suggesting that some systematic errors may not have been completely addressed (CERN Courier March/April 2025 p21). Could a novel experimental technique help resolve the confusion?
The MUonE collaboration proposes a completely independent approach based on a new experimental method. In MUonE, we will determine the running of the electromagnetic coupling, a fundamental quantity that is driven by the same kinds of quantum fluctuations as muon g–2. We will extract it from a precise measurement of the differential cross section for elastic scattering of muons from electrons as a function of the momentum transferred.
MUonE is a relatively inexpensive experiment that we can set up in the existing M2 beamline in CERN’s North Area, already home to the AMBER and NA64-µ experiments. Three years of running, given the M2 beam parameters and the expected performance of the MUonE detector, would reach a statistical precision of approximately 180 ppb with a comparable level of systematic uncertainty.
MUonE will take advantage of silicon sensors that are already being developed for the CMS tracker upgrade. From the results, we will be able to use a dispersion relation to extract HVP’s contribution to g–2. Perhaps more importantly, however, as our method directly measures a function that is part of the lattice calculation, we can directly verify that method. The big challenge will be to keep the systematic uncertainties in the measurement small enough. However, MUonE does not suffer from the intrinsic problem that existing data-driven techniques have, which is that they must numerically integrate over the sharp peaks of hadron production by low-energy resonances. In contrast, the function derived from the space-like process that it will measure is smooth and well-behaved.
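In one common form (conventions differ between references), the space-like representation that MUonE exploits reads

\[
a_\mu^{\mathrm{HVP,\,LO}} \;=\; \frac{\alpha}{\pi} \int_{0}^{1} \mathrm{d}x\,(1-x)\,\Delta\alpha_{\mathrm{had}}\!\left[t(x)\right], \qquad t(x) \;=\; \frac{x^{2} m_\mu^{2}}{x-1} \;\le\; 0,
\]

where Δα_had(t) is the hadronic part of the running of the electromagnetic coupling at space-like momentum transfer t – precisely the quantity MUonE aims to measure – and the integrand is smooth, in contrast to the resonance-dominated time-like cross sections.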
Piecing the puzzle
CERN carried out the first brilliant muon g–2 measurements, starting back in the 1950s (CERN Courier September/October 2024 p53), and now the laboratory has an opportunity to put another important piece into the g–2 puzzle through the MUonE project. Another component of great importance in this domain will be the new g–2/EDM experiment planned for J-PARC, which will be performed in completely different conditions, and therefore with very different systematics, to the Fermilab experiment.
Despite decades of searches, experiments have yet to find evidence for a new particle that could account for dark matter on its own. This has strengthened interest in richer “dark-sector” scenarios featuring multiple new states and interactions, potentially analogous to those of the Standard Model (SM). The CMS collaboration targeted one of the most distinctive possible signatures of a dark strong force in proton–proton collisions: a dense, nearly isotropic cloud of low-momentum particles known as a soft unclustered energy pattern (SUEP).
Searches in the LHC proton–proton collision data for events with many low-momentum particles are plagued by overwhelming backgrounds from pileup and soft QCD interactions. The CMS collaboration has recently overcome this challenge by using large-radius clusters of charged particle tracks and relying on quantities that characterise the expected isotropy of SUEP decays.
The 125 GeV Higgs boson serves in many theoretical models as a natural mediator between the SM and a hidden sector, and current experimental constraints still leave room for exotic decays. Motivated by this possibility, CMS focused on Higgs-boson production in association with a vector (W or Z) boson that decays into leptons. While these modes account for < 1% of Higgs bosons produced at the LHC, the leptons provide significant handles for triggering and background suppression.
Rather than relying on SM simulations, which face modelling and statistical challenges for such soft interactions, the background was extrapolated from events with low isotropy or relatively few charged-particle tracks per cluster, using a method that accounts for small correlations between the quantities used in the extrapolation. To validate the approach, an orthogonal sample of events with a high-momentum photon was studied, taking advantage of the Higgs boson’s minuscule coupling to photons and the similarity of background processes in W/Z + jet and photon + jet events that could mimic a SUEP signal.
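The idea of extrapolating from control regions can be illustrated with a simple ABCD-style estimate. The sketch below, with invented numbers and a hypothetical correction factor, is a simplified stand-in for the more refined, correlation-corrected method actually used by CMS:

```python
# Generic ABCD-style background estimate, shown only to illustrate the idea of
# extrapolating from control regions defined by two discriminating variables
# (e.g. event isotropy and track multiplicity per cluster). This is NOT the CMS
# procedure, which uses a more refined method correcting for correlations.
def abcd_estimate(n_b, n_c, n_d, k_corr=1.0):
    """Predict the background in signal region A from control regions B, C, D.

    Regions: A = (high var1, high var2)   <- signal region
             B = (high var1, low  var2)
             C = (low  var1, high var2)
             D = (low  var1, low  var2)
    If var1 and var2 were uncorrelated for background, N_A = N_B * N_C / N_D.
    k_corr is a hypothetical transfer factor (taken from simulation or a
    validation region) absorbing residual correlations.
    """
    return k_corr * n_b * n_c / n_d

# Toy numbers (invented): predicts ~7.3 background events in the signal region.
print(abcd_estimate(n_b=120, n_c=55, n_d=950, k_corr=1.05))
```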
The data in the search region, consisting of events with a W or Z boson candidate and many isotropically distributed charged particles, were found to be consistent with the SM expectation. Stringent limits were placed on the branching ratio of the 125 GeV Higgs boson decaying to a SUEP shower for a wide range of parameters (see figure 1).
This analysis complements a previous CMS search that primarily targeted much heavier mediators produced via gluon fusion, improving limits on the H → SUEP branching ratio by two orders of magnitude. It additionally provides model-agnostic limits and detailed reinterpretation recipes, maximising the usability of this data for testing alternative theoretical frameworks.
SUEP signatures are not unique to the benchmark scenarios under scrutiny. They naturally emerge in hidden-valley models, where mediators connect the SM to a new, otherwise isolated sector. If the hidden states interact through a “dark QCD”, proton–proton collisions would trigger a crowded cascade of dark partons rather than the familiar collimated showers.
Crucially, unlike in ordinary QCD – where the coupling quickly weakens at energies above confinement – the dark coupling could remain large well beyond its typically low confinement scale. This sustained strong coupling would drive frequent interactions and efficiently redistribute momentum, producing an almost isotropic radiation pattern. As the system cooled, it would then hadronise into numerous soft dark hadrons whose decays back to SM particles would retain this softness and isotropy – yielding the characteristic SUEP probed by CMS.
Free neutrons have a lifetime of about 880 seconds, yet a longstanding tension between two measurement techniques continues to puzzle the neutron-physics community. The most precise averages from beam experiments and magnetic-bottle traps yield 888.1 ± 2.0 s and 877.8 ± 0.3 s, respectively – roughly corresponding to a 5σ discrepancy.
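The quoted significance follows from simple error propagation, assuming uncorrelated uncertainties:

```python
# Significance of the beam-vs-bottle neutron-lifetime discrepancy, using the
# averages quoted above (a simple, uncorrelated-error estimate for illustration).
tau_beam, err_beam = 888.1, 2.0      # seconds
tau_bottle, err_bottle = 877.8, 0.3  # seconds

delta = tau_beam - tau_bottle                   # 10.3 s
sigma = (err_beam**2 + err_bottle**2) ** 0.5    # ~2.0 s combined uncertainty
print(f"difference: {delta:.1f} s, significance: {delta / sigma:.1f} sigma")  # ~5.1 sigma
```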
On 13 September 2025, 40 representatives of all currently operating neutron-lifetime experiments came together at the Paul Scherrer Institute (PSI) to discuss the current status of the tension and the path forward. Geoffrey Greene (University of Tennessee) opened the workshop by reflecting on five decades of neutron-lifetime measurements from the 1960s to the present.
The beam method employs cold-neutron beams, with protons from neutron beta-decays collected in a magnetic trap and counted. The lifetime is then inferred from the ratio of proton counts to neutron flux. Fred Wietfeldt (Tulane University) highlighted the huge efforts undertaken at the National Institute of Standards and Technology (NIST) in Gaithersburg, most importantly on the absolute calibration of the neutron detector.
Susan Seestrom (Los Alamos National Laboratory) described today’s most precise experiment, UCNτ, which uses the magnetic-bottle trap method. It confines ultracold neutrons (UCNs) via their magnetic and gravitational interaction and counts the surviving ones at different times. She also provided an outlook on its next phase, UCNτ+, with increased statistics goals. The τSPECT experiment at PSI’s UCN facility is also based on magnetic confinement of neutrons and has recently started data taking, but with distinct differences. As explained by Martin Fertl from Johannes Gutenberg University Mainz, τSPECT uses a double-spin-flip method to increase the UCN filling of the purely magnetic trap, and a detector that moves in and out of the storage volume, first removing slightly higher-energy neutrons before storage and then counting the surviving neutrons in situ afterwards.
Kenji Mishima (University of Osaka) presented the neutron-lifetime experiment at J-PARC, based on a new principle: the detection of the charged decay products in an active time projection chamber, where the neutrons are captured on a small ³He admixture. This experiment’s systematics are entirely different from those of previous efforts and may offer a unique contribution to the field. Other studies largely excluded the possibility that the beam–bottle discrepancy could be explained by hypothetical exotic decay channels or other non-standard processes.
New results from LANL, NIST, J-PARC and PSI should clarify the currently puzzling situation in the coming years.
On 7 November 2025, the Austrian Academy of Sciences inaugurated the Marietta Blau Institute for Particle Physics (MBI). The new centre brings together the former Stefan Meyer Institute for Subatomic Physics and the Institute of High Energy Physics (HEPHY), creating Austria’s largest hub for particle-physics research. In total, about 130 researchers with broad expertise across the discipline now work under the MBI umbrella.
Marietta Blau was one of the first women to study physics at the University of Vienna. As recalled by Brigitte Strohmaier (University of Vienna), who summarised Blau’s biography, she became best known for her work at the Institute for Radium Research between 1923 and 1938, where she developed the nuclear-emulsion technique for detecting charged particles with micrometre-scale precision.
Together with Hertha Wambacher, Blau exposed nuclear emulsions to cosmic rays at Victor Hess’s observatory near Innsbruck, producing photographic evidence of the interactions between high-energy particles and matter.
Abroad in Scandinavia when Nazi Germany annexed Austria in 1938, Blau could not return to Vienna. She secured a position at the Polytechnic Institute of Mexico City on the recommendation of Albert Einstein, but found herself isolated from colleagues. From 1944 on, she worked in the US before returning to Vienna in 1960, where she supervised the evaluation of photographic plates from CERN.
Her method of nuclear emulsions was further advanced by Cecil Powell in Bristol, who was awarded the Nobel Prize in Physics in 1950 for discoveries regarding mesons made with this method. On this and other occasions, Marietta Blau was herself nominated, but she was never recognised for her groundbreaking research.
Joachim Kopp, chair of the Scientific Advisory Board of HEPHY, introduced the institute’s scientific outlook. He highlighted the breadth of MBI’s programme, which includes major contributions to CERN experiments such as CMS and ALICE at the LHC, and ASACUSA at the AD/ELENA facility, where antimatter is studied using low-energy antiprotons.
Groups at MBI are also involved in the Belle II experiment at KEK, as well as the dark-matter experiments CRESST and COSINUS at the LNGS underground lab. Neutrino physics, gravitational-wave studies at the Einstein Telescope, as well as tests of fundamental symmetries using ultra-cold hydrogen and deuterium beams, are also part of the research programme. The MBI also builds on the long tradition of detector development and construction for future experiments, complemented by a dedicated theory group.
The 25th Zimányi Winter School gathered 120 researchers in Budapest to discuss recent advances in medium- and high-energy nuclear physics. The programme focused on the properties of strongly-interacting matter produced in heavy-ion collisions – little bangs that recreate conditions a few microseconds after the Big Bang.
József Zimányi was a pioneer of Hungarian and international heavy-ion physics, playing a central role in establishing relativistic heavy-ion research in Hungary and contributing key developments to hydrodynamic descriptions of nuclear collisions. Much of the week’s programme revisited the problems that occupied his career, including how the hot, dense system created in a collision evolves and how it converts its energy into the observed hadrons.
Giuseppe Verde (INFN Catania) and Máté Csanád (ELTE) emphasised the role of femtoscopic methods, rooted in the Hanbury Brown–Twiss interferometry originally developed for stellar measurements, in understanding the system that emerges from heavy-ion collisions. Quantum entanglement in high-energy nuclear collisions – a subject closely connected to the 2025 Nobel Prize in Physics – was also explored in a dedicated invited lecture by Dmitri Kharzeev (Stony Brook University), who described his team’s approach and results, which suggest that the observed thermodynamic properties originate from quantum entanglement itself.
The NA61/SHINE collaboration reported ongoing studies of isospin-symmetry breaking, including a recent result where the charged-to-neutral kaon ratio in argon–scandium collisions deviates at 4.7σ from expectations based on approximate isospin symmetry (CERN Courier March/April 2025 p9). Further detailed studies are planned, with potential implications for improving the understanding of antimatter production.
Hydrodynamic modelling remains one of the most successful tools in heavy-ion physics. Tetsufumi Hirano (Sophia University, Japan), the first recipient of the Zimányi Medal, discussed how the collision system behaves like an expanding relativistic fluid, whose collective motion encodes its initial conditions and transport properties. Hydrodynamic approaches incorporating spin effects – and the resulting polarisation effects in heavy-ion collisions – were discussed by Wojciech Florkowski (Jagiellonian University) and Victor E Ambrus (West University of Timisoara).
The 15th edition of the Implications of LHCb Measurements and Future Prospects annual workshop took place at CERN from 4 to 7 November 2025, attracting more than 180 participants from the LHCb experiment and the theoretical physics community.
Peilian Li (UCAS) described how, thanks to an upgraded trigger that is fully software-based, the dataset gathered in 2025 alone already exceeds the total from Run 1 and Run 2 combined. The future of LHCb was discussed, with prospects for an upgrade targeting the high-luminosity phase of the LHC, where timing information will be introduced. Theorist Monika Blanke (KIT) concluded the workshop with a keynote on the status of B-decay anomalies, highlighting the importance of LHCb measurements in constraining new-physics models.
Much attention went to the long-standing discrepancies between data and theory on lepton-flavour-universality tests – such as the measurement of the R(D) and R(D*) ratios in semileptonic B-meson decays. Marzia Bordone (UZH) gave a theoretical overview of the determination of the form factors describing B → D* transitions, highlighting discrepancies in the determination of some form-factor shapes, both among different lattice-QCD determinations and within extractions from different experimental datasets.
A new combination of all LHCb measurements of the CKM angle γ, which quantifies a key CP-violating phase in b-hadron decays, yielded an overall value of (62.8 ± 2.6)°. The collaboration reported flagship electroweak precision measurements of the effective weak mixing angle and the W-boson mass, as well as the first dedicated measurement of the Z-boson mass at the LHC.
An exciting focus for 2026 will be the search for the double open-beauty tetraquark Tbb(bbud) – the first accessible exotic hadron expected to be stable against strong decay (CERN Courier November/December 2024 p34). Saša Prelovšek (UL) presented the first lattice-QCD calculation of the state’s electromagnetic form factors, allowing her to rule out an interpretation of the tetraquark as a loosely bound B–B* molecule.
The legacy Run 1+2 B → K*μ+μ– angular analysis, based on a dataset roughly twice as large as that used in previous analyses, was presented. Previously seen tensions were confirmed with much increased precision, and new observables were reported for the first time. Theorists Arianna Tinari (UZH), Giuseppe Gagliardi (INFN Rome3) and Nazila Mahmoudi (IP2I, CERN) reviewed the status of the non-local hadronic contributions that could affect this channel, discussing how different theoretical approaches can be employed to determine these contributions and how compatible the current results are with theoretical expectations.
Zhengchen Lian (THU, INFN Firenze) showed the characteristic “bowling-pin” deformation of neon nuclei, recently observed using the SMOG2 apparatus, which allows collisions of LHC protons with a variety of fixed-target light nuclei injected into the beampipe (CERN Courier November/December 2025 p8).
From 17 to 23 November, the second International Conference on Physics of the Two Infinities (P2I) gathered nearly 200 participants on the historic Hongo campus of the University of Tokyo. Organised by the ILANCE laboratory, a joint initiative by CNRS and the University of Tokyo, the P2I series aims to bridge the largest and smallest scales of the universe. In this spirit, the 2025 programme drew together results from cosmological surveys, particle colliders and neutrino detectors.
Two cosmological tensions will play a key role in the coming decades. One concerns how strongly matter clumps together to form structures such as galaxy clusters and filaments. The other involves the universe’s expansion rate, H0. In both cases, measurements based on early-universe data differ from those conducted in the local universe. The discrepancy on H0 has now reached about 6σ (CERN Courier March/April 2025 p28). Independent methods, such as strong lensing, lensed supernovae and gravitational-wave standard sirens, are essential to confirm or resolve this discrepancy. Several of these techniques are expected to reach 1% precision in the near future. More broadly, upcoming large-scale cosmological missions, including Euclid, DESI, LiteBIRD and the Legacy Survey of Space and Time (LSST) – which released its world-leading camera’s first images in June – are set to deliver important insights into inflation, dark energy and the cosmological effects of neutrino masses.
The dark universe featured prominently. Participants discussed an excess of gamma rays from the galactic centre detected by the Fermi telescope, which is consistent with the self-annihilation of weakly interacting massive particles (WIMPs) and may represent one of the strongest experimental hints for dark matter. Recent analyses on more than 40 million galaxies and quasars in DESI’s Data Release 2 show that fits to baryon acoustic oscillation distances deviate from the standard ΛCDM model at the 2.8 to 4.2σ level, with a dynamical dark energy providing a better match. Euclid, having identified approximately 26 million galaxies out to over 10.5 billion light-years, is poised to constrain the nature of dark matter by combining measurements of large-scale structure, gravitational-lensing statistics, small-scale substructure, dwarf-galaxy populations and stellar streams. Experiments such as XENONnT and PandaX-4T are instead pursuing a mature direct-detection programme.
Future colliders were a central topic at P2I. While new physics has long been expected to emerge near the TeV scale to stabilise the Higgs mass, the Standard Model remains in excellent agreement with current data, and precision flavour measurements constrain many possible new particles to lie at much higher energies. The LHC collaborations presented a flurry of new results and superb prospects for the LHC’s high-luminosity phase, alongside new results from Belle II and NA64. Looking ahead, a major future collider will be essential for probing the laws connecting particle physics with the earliest moments of the universe.
The conference hosted the first-ever public presentation of JUNO’s experimental results, only a few hours after their appearance on arXiv. Despite relying on only 59.1 days of data, the experiment has already demonstrated excellent detector performance and produced competitive measurements of solar-neutrino oscillations that are fully consistent with previous results. This level of precision is remarkable, after barely two months of data collection. Three major questions in neutrino physics remain unresolved: the ordering of neutrino masses, the value of the CP-violating parameter and the octant of the mixing angle θ23. The next generation of experiments, including JUNO, DUNE, Hyper-K and upgraded neutrino telescopes, are specifically designed to answer these questions. Meanwhile, DESI has reported a new, stringent upper limit of 0.064 eV on the sum of neutrino masses, within a flat ΛCDM framework. It is the tightest cosmological constraint to date.
New data from the JWST, Subaru and ALMA telescopes revealed an unexpectedly rich population of galaxies only 200–300 million years after the Big Bang. Many of these early systems appear to grow far more rapidly than predicted by the ΛCDM model, raising questions such as whether star formation efficiency was significantly higher in the early universe or whether we currently underestimate the growth of dark-matter halos (CERN Courier November/December 2025 p11). These data also highlighted a surprisingly abundant population of high-redshift active galactic nuclei, with important implications for black-hole seeding and early supermassive black-hole formation. A comprehensive review of the rapidly evolving field of supernova and transient astronomy was also presented. The mechanisms behind core-collapse supernovae remain only partially understood, and the thermonuclear explosions of white dwarfs continue to pose open questions. At the same time, observations keep identifying new transient classes, whose physical origins are still under investigation. Important insights into protostars, discs and planet formation were also discussed. Observations show that interstellar bubbles and molecular filaments shape the formation of stars and planets across a vast range of physical scales. More than 6000 exoplanets have today been detected, from hot Jupiters to super Earths and ocean planets, many without counterparts in our Solar System.
With more than 150 new gravitational-wave (GW) candidates now identified, including extreme ones with rapid spins and highly asymmetric component masses, GW astronomy offers outstanding opportunities to investigate gravity in the strong-field regime. Notably, the GW250114 event was shown to obey Hawking’s area law, which states that the total horizon area cannot decrease during a black-hole merger, providing strong confirmation of general relativity in the most nonlinear regime. Next-generation observatories such as the Einstein Telescope, Cosmic Explorer and LISA will allow detailed black-hole spectroscopy and impose tighter constraints on alternative theories of gravity.
Although the transition to multi-messenger astronomy began in the late 20th century, the first binary neutron-star merger, GW170817, remains its landmark event. An extraordinary global effort – more than 70 teams and 100 instruments pointed at the event for years – highlighted several historic firsts: the first gravitational-wave “standard siren” measurement of the Hubble constant, the first association between a neutron-star merger and a short gamma-ray burst, the first observed kilonovae confirming the astrophysical site of heavy-element production, and the first direct test comparing the speed of gravity and light. Very-high-energy gamma-ray astronomy (HESS, MAGIC and VERITAS) also reported impressive results, with more than 300 sources above 100 GeV observed, and bright prospects, as the Cherenkov Telescope Array Observatory (CTAO) is about to start operations.