Rare, unobserved decays of the Higgs boson are natural places to search for new physics. At the EPS-HEP conference, the ATLAS collaboration presented new, improved measurements of two highly suppressed Higgs decays: into a pair of muons, and into a Z boson accompanied by a photon. Producing a single event of either H → μμ or H → Zγ → (ee/μμ)γ at the LHC requires, on average, around 10 trillion proton–proton collisions. The H → μμ and H → Zγ signals appear as narrow resonances in the dimuon and Zγ invariant-mass spectra, atop backgrounds some three orders of magnitude larger.
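A rough back-of-the-envelope estimate illustrates this figure (the cross-sections and branching ratio used here are approximate reference values, not numbers from the analyses): with an inelastic proton–proton cross-section of about 80 mb, a total Higgs-production cross-section of about 60 pb at 13.6 TeV and a branching ratio B(H → μμ) ≈ 2.2 × 10⁻⁴,
\[
N_{pp} \;\sim\; \frac{\sigma_{\rm inel}}{\sigma_{pp\to H}\,\mathcal{B}(H\to\mu\mu)} \;\approx\; \frac{8\times10^{-2}\ \mathrm{b}}{(6\times10^{-11}\ \mathrm{b})\,(2.2\times10^{-4})} \;\approx\; 6\times10^{12},
\]
of the order of 10 trillion collisions per event; the product B(H → Zγ) × B(Z → ee/μμ) ≈ 10⁻⁴ gives a similar figure for the second channel.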
In the Standard Model, the Brout–Englert–Higgs mechanism gives mass to the muon through its Yukawa coupling to the Higgs field, which can be tested via the rare H → μμ decay. An indirect comparison with the well-known muon mass, determined to 22 parts per billion, provides a stringent test of the mechanism in the second fermion generation and a powerful probe of new physics. With a branching ratio of just 0.02%, and a large background dominated by Drell–Yan production of muon pairs through virtual photons or Z bosons, the inclusive signal-to-background ratio plunges to the level of one part in a thousand. To single out the decay signature, the ATLAS collaboration employed machine-learning techniques for background suppression and generated over five billion Drell–Yan Monte Carlo events at next-to-leading-order accuracy in QCD, all passed through the full detector simulation. This high-precision sample provides templates to refine the background model and minimise bias on the tiny H → μμ signal.
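The scale of the challenge – a narrow peak roughly a thousand times smaller than the background beneath it – can be illustrated with a deliberately simplified toy fit. The sketch below is not the ATLAS analysis: the exponential background shape, the Gaussian peak, the yields and the 3 GeV mass resolution are all invented for illustration.

# Toy illustration only (not the ATLAS analysis): fit a narrow Gaussian
# "H -> mumu"-like peak on a smoothly falling, Drell-Yan-like background
# in the dimuon invariant-mass spectrum.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

n_bkg, n_sig = 1_000_000, 1_000                        # signal/background ~ 1e-3
bkg = 110.0 + rng.exponential(scale=30.0, size=n_bkg)  # falling spectrum above 110 GeV
sig = rng.normal(loc=125.0, scale=3.0, size=n_sig)     # narrow peak at the Higgs mass
masses = np.concatenate([bkg, sig])

bins = np.linspace(110.0, 160.0, 101)                  # 0.5 GeV bins
counts, edges = np.histogram(masses, bins=bins)
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def model(m, nb, tau, ns, mh, res):
    """Expected events per bin: exponential background plus Gaussian signal."""
    bkg_pdf = np.exp(-(m - 110.0) / tau) / tau
    sig_pdf = np.exp(-0.5 * ((m - mh) / res) ** 2) / (res * np.sqrt(2.0 * np.pi))
    return width * (nb * bkg_pdf + ns * sig_pdf)

p0 = [n_bkg, 25.0, 500.0, 125.0, 3.0]                  # rough starting values
popt, pcov = curve_fit(model, centres, counts, p0=p0,
                       sigma=np.sqrt(np.maximum(counts, 1.0)),
                       absolute_sigma=True, maxfev=10000)
print(f"fitted signal yield: {popt[2]:.0f} +/- {np.sqrt(pcov[2, 2]):.0f}")

Even in this toy, the uncertainty on the fitted signal yield is dominated by how well the background shape is constrained, which is why a very large, high-precision simulated Drell–Yan sample matters so much for the real measurement.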
The Higgs boson can decay into a Z boson and a photon via loop diagrams involving W bosons and heavy charged fermions, such as the top quark. Detecting this rare process would complete the suite of established decays into electroweak boson pairs and offer a window on physics beyond the Standard Model. To reduce QCD background and improve sensitivity, the ATLAS analysis focused on Z bosons further decaying into electron or muon pairs, with an overall branching fraction of 7%. This additional selection reduces the event rate to about one in 10,000 Higgs decays, with an inclusive signal-to-background ratio at the per-mille level. The low momenta of final-state particles, combined with the high-luminosity conditions of LHC Run 3, pose additional challenges for signal extraction and the suppression of Z + jets backgrounds. To enhance signal significance, the ATLAS collaboration improved background modelling techniques, optimised event categorisation by Higgs production mode, and employed machine learning to boost sensitivity.
The two ATLAS searches are based on 165 fb–1 of LHC Run 3 proton–proton collision data collected between 2022 and 2024 at √s = 13.6 TeV, with a rigorous blinding procedure in place to prevent biases. Both channels show excesses at the Higgs-boson mass of 125.09 GeV, with observed (expected) significances of 2.8σ (1.8σ) for H → μμ and 1.4σ (1.5σ) for H → Zγ. These results are strengthened by combining them with 140 fb–1 of Run-2 data collected at √s = 13 TeV, updating the H → μμ and H → Zγ observed (expected) significances to 3.4σ (2.5σ) and 2.5σ (1.9σ), respectively (see figure 1). The measured signal strengths are consistent with the Standard Model within uncertainties.
These results mark the ATLAS collaboration’s first evidence for the H → μμ decay, following the earlier claim by CMS based on Run-2 data (see CERN Courier September/October 2020 p7). Meanwhile, the H → Zγ search achieves a 19% increase in expected significance with respect to the combined ATLAS–CMS Run-2 analysis, which first reported evidence for this process. As Run 3 data-taking continues, the LHC experiments are closing in on establishing these two rare Higgs decay channels. Both will remain statistically limited throughout the LHC’s lifetime, with ample room for discovery in the high-luminosity phase.
Axion-like particles (ALPs) are some of the most promising candidates for physics beyond the Standard Model. At the LHC, searches for ALPs that couple to gluons and photons have so far been limited to masses above 10 GeV due to trigger requirements that reduce low-energy sensitivity. In its first-ever analysis of a purely neutral final state, the LHCb collaboration has now extended this experimental reach and set new bounds on the ALP parameter space.
When a global symmetry is spontaneously broken, it gives rise to massless excitations called Goldstone bosons, which reflect the system’s freedom to transform continuously without changing its energy. ALPs are thought to arise via a similar mechanism, though they acquire a small mass because the symmetries they originate from are only approximate. Depending on the underlying theory, they could contribute to dark matter, solve the strong-CP problem, or mediate interactions with a hidden sector. Their coupling to known particles varies across models, leading to a range of potential experimental signatures. Among the most compelling are those involving gluons and photons.
Thanks to the magnitude of the strong coupling constant, even a small interaction with gluons can dominate the production and decay of ALPs. This makes searches at the LHC challenging, since low-energy jets in proton–proton collisions are often indistinguishable from the expected ALP decay signature. In this environment, a more effective approach is to focus on the photon channel and search for ALPs that are produced in proton–proton collisions – mostly via gluon–gluon fusion – and that decay into photon pairs. These processes have been investigated at the LHC, but previous searches were limited by trigger thresholds requiring photons with large momentum components transverse to the beam. This is particularly restrictive for low-mass ALPs, whose decay products are often too soft to pass these thresholds.
The new search, based on Run-2 data collected in 2018, overcomes this limitation by leveraging the LHCb detector’s flexible software-based trigger system, lower pile-up and forward geometry. The latter enhances sensitivity to decay products with a small momentum component transverse to the beam, making it well suited to probe resonances in the 4.9 to 19.4 GeV mass region. As the first LHCb analysis of a purely neutral final state, it required a new trigger and selection strategy, as well as a dedicated calibration procedure. Candidate photon pairs are identified from two high-energy calorimeter clusters, produced in isolation from the rest of the event, that are incompatible with originating from charged particles or neutral pions. ALP decays are then sought using maximum-likelihood fits that scan the photon-pair invariant-mass spectrum for peaks.
No photon-pair excess is observed over the background-only hypothesis, and upper limits are set on the ALP production cross-section times decay branching fraction. These results constrain the ALP decay rate and its coupling to photons, probing a region of parameter space that has so far remained unexplored (see figure 1). The investigated mass range is also of interest beyond ALP searches. Alongside the main analysis, the study targeted two-photon decays of B0(s) mesons and of the little-studied ηb meson, almost reaching the sensitivity required for the latter’s detection.
The upgraded LHCb detector, which began operations with Run 3 in 2022, is expected to deliver another boost in sensitivity. This will allow future analyses to benefit from the extended flexibility of its purely software trigger, significantly larger datasets and a wider energy coverage of the upgraded calorimeter.
In the late 1990s, observational evidence accumulated that the universe is currently undergoing an accelerating expansion. Its cause remains a major mystery for physics. The term “dark energy” was coined to describe the data; however, we have no idea what dark energy is. All we know is that it makes up about 70% of the energy density of the universe, and that it does not behave like regular matter – if it is indeed matter and not a modification of the laws of gravity on cosmological scales. If it is matter, then it must have a pressure close to p = –ρ, where ρ is its energy density. The cosmological constant in Einstein’s equations for spacetime acts precisely this way, and a cosmological constant has therefore long been regarded as the simplest explanation for the observations. It is the bedrock of the prevailing ΛCDM model of cosmology – a setup in which dark energy is time-independent. But recent observations by the Dark Energy Spectroscopic Instrument provide tantalising evidence that dark energy might be time-dependent, with its pressure slightly increasing over time (CERN Courier May/June 2025 p11). If upcoming data confirm these results, it would require a paradigm shift in cosmology, ruling out the ΛCDM model.
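A standard way to quantify such time dependence (a common parameterisation used in survey analyses, not necessarily the exact one behind the results above) writes the dark-energy equation of state in terms of the scale factor a:
\[
w(a) \;=\; \frac{p}{\rho} \;=\; w_0 + w_a\,(1-a),
\]
so that a cosmological constant corresponds to w_0 = –1 and w_a = 0 at all times, while any measured departure from that point signals dark energy that evolves.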
Mounting evidence
From the point of view of fundamental theory, there are at least four good reasons to believe that dark energy must be time-dependent and cannot be a cosmological constant.
The first piece of evidence is well known: if there is a cosmological constant induced by a particle-physics description of matter, then its value should be 120 orders of magnitude larger than observations indicate. This is the famous cosmological constant problem.
A second argument is the “infrared instability” of a spacetime induced by a cosmological constant. Alexander Polyakov (Princeton) has forcefully argued that inhomogeneities on very large length scales would gradually mask a preexisting cosmological constant, making it appear to vary over time.
Recently, other arguments have been put forward indicating that dark energy must be time-dependent. Since quantum matter generates a large cosmological constant when treated as an effective field theory, it should be expected that the cosmological constant problem can only be addressed in a quantum theory of all forces. The best candidate we have is superstring theory. There is mounting evidence that – at least in the regions of the theory under mathematical control – it is impossible to obtain a positive cosmological constant corresponding to the observed accelerating expansion. But one can obtain time-dependent dark energy, for example in quintessence toy models.
Recent observations provide tantalising evidence that dark energy might be time-dependent
The final reason is known as the trans-Planckian censorship conjecture. As the nature of dark energy remains a complete mystery, it is often treated as an effective field theory. This means that one expands all fields in Fourier modes and quantises each field as a harmonic oscillator. The modes one uses have wavelengths that increase in proportion to the scale factor of the expanding universe. This creates a theoretical headache at the highest energies. To avoid infinities, an “ultraviolet cutoff” is required at or below the Planck mass. This must be at a fixed physical wavelength. In order to maintain this cutoff in an expanding space, it is necessary to continuously create new modes at the cutoff scale as the wavelength of the previously present modes increases. This implies a violation of unitarity. If dark energy were a cosmological constant, then modes with wavelength equal to the cutoff scale at the present time would become classical at some time in the future, and the violation of unitarity would be visible in hypothetical future observations. To avoid this problem, we conclude that dark energy must be time-dependent.
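In one common formulation of the conjecture (a schematic statement rather than a derivation), an expansion of the scale factor from a_i to a_f should never stretch a Planck-length mode beyond the Hubble horizon:
\[
\frac{a_f}{a_i}\,\ell_{\rm Pl} \;<\; \frac{1}{H_f} \quad\Longleftrightarrow\quad \frac{a_f}{a_i} \;<\; \frac{M_{\rm Pl}}{H_f}.
\]
A strict cosmological constant, with exponential expansion continuing forever at fixed H, eventually violates this bound, whereas decaying dark energy can respect it.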
Because of its deep implications for fundamental physics, we are eagerly awaiting new observational results that will shine more light on the issue of the time-dependence of dark energy.
The 2025 European Physical Society Conference on High Energy Physics (EPS-HEP), held in Marseille from 7 to 11 July, took centre stage in this pivotal year for high-energy physics as the community prepares to make critical decisions on the next flagship collider at CERN to enable major leaps at the high-precision and high-energy frontiers. The meeting showcased the remarkable creativity and innovation in both experiment and theory, driving progress across all scales of fundamental physics. It also highlighted the growing interplay between particle, nuclear, astroparticle physics and cosmology.
Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries. This requires sustained investment from funding agencies, laboratories, universities and the broader community to support careers and recognise leadership in detectors, software and computing. Such support must extend across construction, commissioning and operation, and include strategic and basic R&D. The implementation of detector R&D (DRD) collaborations, as outlined in the 2021 ECFA roadmap, is an important step in this direction.
Physics thrives on precision, and a prime example this year came from the Muon g–2 collaboration at Fermilab, which released its final result combining all six data runs, achieving an impressive 127 parts-per-billion precision on the muon anomalous magnetic moment (CERN Courier July/August 2025 p7). The result agrees with the latest lattice-QCD predictions for the leading hadronic vacuum-polarisation term, albeit with a theoretical uncertainty about four times larger than the experimental one. Continued improvements to lattice QCD and to the traditional dispersion-relation method based on low-energy e+e– and τ data are expected in the coming years.
Runaway success
After the remarkable success of LHC Run 2, Run 3 has now surpassed it in delivered luminosity. Using the full available Run-2 and Run-3 datasets, ATLAS reported 3.4σ evidence for the rare Higgs decay to a muon pair, and a new result on the quantum-loop mediated decay into a Z boson and a photon, now more consistent with the Standard Model prediction than the earlier ATLAS and CMS Run-2 combination (see “Mapping rare Higgs-boson decays”). ATLAS also presented an updated study of Higgs pair production with decays into two b-quarks and two photons, whose sensitivity was increased beyond statistical gains thanks to improved reconstruction and analysis. CMS released a new Run-2 search for Higgs decays to charm quarks in events produced with a top-quark pair, reaching sensitivity comparable to the traditional weak-boson-associated production. Both collaborations also released new combinations of nearly all their Higgs analyses from Run 2, providing a wide set of measurements. While ATLAS sees overall agreement with predictions, CMS observes some non-significant tensions.
Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries
A highlight in top-quark physics this year was the observation by CMS of an excess in top-pair production near threshold, confirmed at the conference by ATLAS (see “ATLAS confirms top–antitop excess”). The strong interaction predicts quasi-bound-state effects from a highly compact, colour-singlet pseudoscalar top–antitop configuration arising from gluon exchange. Unlike bottomonium or charmonium, no proper bound state is formed, owing to the rapid weak decay of the top quark (see “Memories of quarkonia”). This “toponium” effect can be modelled using non-relativistic QCD. Both experiments observed a cross section about 100 times smaller than that of inclusive top-quark pair production. The subtle signal and complex threshold modelling make the analysis challenging, and warrant further theoretical and experimental investigation.
A major outcome of LHC Run 2 is the lack of compelling evidence for physics beyond the Standard Model. In Run 3, ATLAS and CMS continue their searches, aided by improved triggers, reconstruction and analysis techniques, as well as a dataset more than twice as large, enabling a more sensitive exploration of rare or suppressed signals. The experiments are also revisiting excesses seen in Run 2: for example, a CMS hint of a new resonance decaying into a Higgs boson and another scalar was not confirmed by a new ATLAS analysis including Run-3 data.
Hadron spectroscopy has seen a renaissance since Belle’s 2003 discovery of the exotic X(3872), with landmark advances at the LHC, particularly by LHCb. CMS recently reported three new four-charm-quark states decaying into J/ψ pairs between 6.6 and 7.1 GeV. Spin-parity analysis suggests they are tightly bound tetraquarks rather than loosely bound molecular states (CERN Courier November/December 2024 p33).
Rare observations
Flavour physics continues to test the Standard Model with high sensitivity. Belle II and LHCb reported new CP-violation measurements in the charm sector, confirming the expected small effects. LHCb observed, for the first time, CP violation in the baryon sector via Λb decays, a milestone in the history of CP violation. NA62 at CERN’s SPS achieved the first observation of the ultra-rare kaon decay K+ → π+νν with a branching ratio of 1.3 × 10–10, matching the Standard Model prediction. MEG-II at PSI set the most stringent limit to date on the lepton-flavour-violating decay μ → eγ, excluding branching fractions above 1.5 × 10–13. Both experiments continue data taking until 2026.
Heavy-ion collisions at the LHC provide a rich environment to study the quark–gluon plasma, a hot, dense state of deconfined quarks and gluons that forms a collective medium flowing as a relativistic fluid with an exceptionally low viscosity-to-entropy ratio. Flow in lead–lead collisions, quantified by the Fourier harmonics of azimuthal anisotropies in particle momenta, is well described by hydrodynamic models for light hadrons. Hadrons containing heavier charm and bottom quarks show weaker collectivity, likely due to longer thermalisation times, while baryons exhibit stronger flow than mesons due to quark coalescence. ALICE reported the first LHC measurement of charm–baryon flow, consistent with these effects.
Spin-parity analysis suggests the states are tightly bound tetraquarks
Neutrino physics has made major strides since oscillations were confirmed 27 years ago, with flavour mixing parameters now known to a few percent. Crucial questions still remain: are neutrinos their own antiparticles (Majorana fermions)? What is the mass ordering – normal or inverted? What is the absolute mass scale and how is it generated? Does CP violation occur? What are the properties of the right-handed neutrinos? These and other questions have wide-ranging implications for particle physics, astrophysics and cosmology.
Neutrinoless double-beta decay, if observed, would confirm that neutrinos are Majorana particles. Experiments using xenon and germanium are beginning to constrain the inverted mass ordering, which predicts higher decay rates. Recent combined data from the long-baseline experiments T2K and NOvA show no clear preference for either ordering, but exclude vanishing CP violation at over 3σ in the inverted scenario. The KM3NeT detector in the Mediterranean, with its ORCA and ARCA components, has delivered its first competitive oscillation results, and detected a striking ~220 PeV muon neutrino, possibly from a blazar (CERN Courier March/April 2025 p7). The next-generation large-scale neutrino experiments JUNO (China), Hyper-Kamiokande (Japan) and LBNF/DUNE (USA) are progressing in construction, with data-taking expected to begin in 2025, 2028 and 2031, respectively. LBNF/DUNE is best positioned to determine the neutrino mass ordering, while Hyper-Kamiokande will be the most sensitive to CP violation. All three will also search for proton decay, a possible messenger of grand unification.
There is compelling evidence for dark matter from gravitational effects across cosmic times and scales, as well as indications that it is of particle origin. Its possible forms span a vast mass range, up to the ~100 TeV unitarity limit for a thermal relic, and may involve a complex, structured “dark sector”. The wide complementarity among the search strategies gives the field a unifying character. Direct detection experiments looking for tiny, elastic nuclear recoils, such as XENONnT (Italy), LZ (USA) and PandaX-4T (China), have set world-leading constraints on weakly interacting massive particles. XENONnT and PandaX-4T have also reported first signals from boron-8 solar neutrinos, part of the so-called “neutrino fog” that will challenge future searches. Axions, introduced theoretically to suppress CP violation in strong interactions, could be viable dark-matter candidates. They would be produced in the early universe with enormous number density, behaving, on galactic scales, as a classical, nonrelativistic, coherently oscillating bosonic field, effectively equivalent to cold dark matter. Axions can be detected via their conversion into photons in strong magnetic fields. Experiments using microwave cavities have begun to probe the relevant μeV mass range of relic QCD axions, but the detection becomes harder at higher masses. New concepts, using dielectric disks or wire-based plasmonic resonance, are under development to overcome these challenges.
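Schematically (standard textbook expressions quoted for orientation, not results presented at the conference), the axion–photon interaction and the haloscope resonance condition read
\[
\mathcal{L}_{a\gamma\gamma} \;=\; -\tfrac{1}{4}\,g_{a\gamma\gamma}\,a\,F_{\mu\nu}\tilde{F}^{\mu\nu} \;=\; g_{a\gamma\gamma}\,a\,\mathbf{E}\cdot\mathbf{B}, \qquad h\,\nu_{\rm cavity} \;\simeq\; m_a c^2,
\]
so a 1 μeV axion converts into photons of roughly 0.24 GHz, and higher masses require higher-frequency – and therefore smaller-volume – resonators, which is the origin of the difficulty at larger masses mentioned above.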
Cosmological constraints
Cosmology featured prominently at EPS-HEP, driven by new results from the analysis of DESI DR2 baryon acoustic oscillation (BAO) data, which include 14 million redshifts. Like the cosmic microwave background (CMB), BAO also provides a “standard ruler” to trace the universe’s expansion history – much like supernovae (SNe) do as standard candles. Cosmological surveys are typically interpreted within the ΛCDM model, a six-parameter framework that remarkably accounts for 13.8 billion years of cosmic evolution, from inflation and structure formation to today’s energy content, despite offering no insight into the nature of dark matter, dark energy or the inflationary mechanism. Recent BAO data, when combined with CMB and SNe surveys, show a preference for a form of dark energy that weakens over time. Tensions also persist in the Hubble expansion rate derived from early-universe (CMB and BAO) and late-universe (SN type-Ia) measurements (CERN Courier March/April 2025 p28). However, anchoring SN Ia distances in redshift remains challenging, and further work is needed before drawing firm conclusions.
Cosmological fits also constrain the sum of neutrino masses. The latest CMB and BAO-based results within ΛCDM appear inconsistent with the lower limit implied by oscillation data for inverted mass ordering. However, firm conclusions are premature, as the result may reflect limitations in ΛCDM itself. Upcoming surveys from the Euclid satellite and the Vera C. Rubin Observatory (LSST) are expected to significantly improve cosmological constraints.
Cristinel Diaconu and Thomas Strebler, chairs of the local organising committee, together with all committee members and many volunteers, succeeded in delivering a flawlessly organised and engaging conference in the beautiful setting of the Palais du Pharo overlooking Marseille’s old port. They closed the event with a phrase from the memorial to British cyclist Tom Simpson: “There is no mountain too high.”
The nature of dark matter remains one of the greatest unresolved questions in modern physics. While ground-based experiments persist in their quest for direct detection, astrophysical observations and multi-messenger studies have emerged as powerful complementary tools for constraining its properties. Stars across the Milky Way and beyond – including neutron stars, white dwarfs, red giants and main-sequence stars – are increasingly recognised as natural laboratories for probing dark matter through its interactions with stellar interiors, notably via neutron-star cooling, asteroseismic diagnostics of solar oscillations and gravitational-wave emission.
The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity (ICDMS) was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July. The meeting brought together around 70 researchers from across astrophysics, cosmology, particle physics and gravitational theory, with the goal of fostering interdisciplinary dialogue on how observations of stellar systems, gravitational waves and cosmological data can be used to probe the nature of dark matter and shed light on the dark sector.
The first day centred on compact objects as natural laboratories for dark-matter physics. Giorgio Busoni (University of Adelaide) opened with a comprehensive overview of recent theoretical progress on dark-matter accumulation in neutron stars and white dwarfs, highlighting refinements in the treatment of relativistic effects, optical depth, Fermi degeneracy and light mediators – all of which have shaped the field in recent years. Melissa Diamond (Queen’s University) followed with a striking talk with a nod to Dr. Strangelove, exploring how accumulated dark matter might trigger thermonuclear instability in white dwarfs. Sandra Robles (Fermilab) shifted the perspective from neutron stars to white dwarfs, showing how they constrain dark-matter properties. One of the authors highlighted postmerger gravitational-wave observations as a tool to distinguish neutron stars from low-mass black holes, offering a promising avenue for probing exotic remnants potentially linked to dark matter. Axions featured prominently throughout the day, alongside extensive discussions of the different ways in which dark matter affects neutron stars and their mergers.
ICDMS continues to strengthen the interface between fundamental physics and astrophysical observations
On the second day, attention turned to the broader stellar population and planetary systems as indirect detectors. Isabelle John (University of Turin) questioned whether the anomalously long lifetimes of stars near the galactic centre might be explained by dark-matter accumulation. Other talks revisited stellar systems – white dwarfs, red giants and even speculative dark stars – with a focus on modelling dark-matter transport and its effects on stellar heat flow. Complementary detection strategies also took the stage, including neutrino emission, stochastic gravitational waves and gravitational lensing, all offering potential access to otherwise elusive energy scales and interaction strengths.
The final day shifted toward galactic structure and the increasingly close interplay between theory and observation. Lina Necib (MIT) shared stellar kinematics data used to map the Milky Way’s dark-matter distribution, while other speakers examined the reliability of stellar stream analyses and subtle anomalies in galactic rotation curves. The connection to terrestrial experiments grew stronger, with talks tying dark matter to underground detectors, atomic-precision tools and cosmological observables such as the Lyman-alpha forest and baryon acoustic oscillations. Early-career researchers contributed actively across all sessions, underscoring the field’s growing vitality and introducing a fresh influx of ideas that is expanding its scope.
The ICDMS series is now in its third edition. It began in 2018 at Instituto Superior Técnico, Portugal, and is poised to become an annual event. The next conference will take place at the University of Southampton, UK, in 2026, followed by the Massachusetts Institute of Technology in the US in 2027. With increasing participation and growing international interest, the ICDMS series continues to strengthen the interface between fundamental physics and astrophysical observations in the quest to understand the nature of dark matter.
Measurements at high-energy colliders such as the LHC, the Electron–Ion Collider (EIC) and the FCC will be performed at the highest luminosities. The analysis of the high-precision data taken there will require a significant increase in the accuracy of theoretical predictions. To achieve this, new mathematical and algorithmic technologies are needed. Developments in precision Standard Model calculations have been rapid since experts last met for Loopsummit-1 at Cadenabbia on the banks of Lake Como in 2021 (CERN Courier November/December 2021 p24). Loopsummit-2, held in the same location from 20 to 25 July this year, summarised this formidable body of work.
As higher experimental precision relies on new technologies, new theory results require better algorithms, both on the mathematical and the computer-algebraic side, and new techniques in quantum field theory. The central software package for perturbative calculations, FORM, now has a new major release, FORM 5. Progress has also been achieved in integration-by-parts reduction, which is of central importance for reducing the vast number of Feynman integrals in a calculation to a much smaller set of master integrals. New developments were also reported in analytic and numerical Feynman-diagram integration using Mellin–Barnes techniques, in new compact function classes such as Feynman–Fox integrals, and in modern summation technologies and methods to establish and solve gigantic recursions and differential equations of degree 4000 and order 100. The latest results on elliptic integrals and progress on the correct treatment of the γ5 problem in real dimensions were also presented. These technologies allow the calculation of processes up to five loops, and in the presence of more scales at two- and three-loop order. New results for single-scale quantities such as quark condensates and the ρ-parameter were also reported.
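As a flavour of what integration-by-parts reduction achieves (a textbook one-loop example, not one of the new results reported at the meeting), consider the massless bubble integral and the vanishing surface term
\[
I(a,b) = \int \frac{\mathrm{d}^D k}{(k^2)^a\,((k+p)^2)^b}, \qquad
\int \mathrm{d}^D k\; \frac{\partial}{\partial k^\mu}\!\left[\frac{k^\mu}{(k^2)^a\,((k+p)^2)^b}\right] = 0,
\]
which yields the relation
\[
(D - 2a - b)\,I(a,b) \;=\; b\left[\,I(a-1,b+1) - p^2\,I(a,b+1)\,\right].
\]
Since I(0,2) is scaleless and vanishes in dimensional regularisation, I(1,2) = –(D–3) I(1,1)/p², and every integral in this family collapses onto the single master integral I(1,1). Multi-loop, multi-scale problems generate millions of such relations, which is where dedicated reduction algorithms and packages such as FORM come in.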
In the loop
Measurements at future colliders will depend on precise knowledge of the parton distribution functions, the strong coupling constant αs(MZ) and the heavy-quark masses. Experience suggests that going from one loop order to the next in the massless and massive cases takes 15 years or more, as new technologies must be developed. By now, most of the space-like four-loop splitting functions governing scaling violations are known to good precision, and new results have also been obtained for the three-loop time-like splitting functions. The massive three-loop Wilson coefficients for deep-inelastic scattering are now complete, requiring far larger and different integral spaces compared with the massless case. Related to these are the Wilson coefficients for semi-inclusive deep-inelastic scattering at next-to-next-to-leading order (NNLO), which will be important for tagging individual flavours at the EIC. For αs(MZ) measurements in low-scale processes, the correct treatment of renormalon contributions is necessary. Collisions at high energies also allow the detailed study of scattering processes in the forward region of QCD. Other long-term projects concern NNLO corrections for jet production at e+e– and hadron colliders, and for related processes such as Higgs-boson and top-quark production, in some cases with a large number of partons in the final state. This also includes the use of effective Lagrangians.
Many more steps lie ahead if we are to match the precision of measurements at high-luminosity colliders
The complete calculation of difficult processes at NNLO and beyond always drives the development of term-reduction algorithms and analytic or numerical integration technologies. Many more steps lie ahead in the coming years if we are to match the precision of measurements at high-luminosity colliders. Some of these will doubtless be reported at Loopsummit-3 in summer 2027.
The 39th edition of the International Cosmic Ray Conference (ICRC), a key biennial conference in astroparticle physics, was held in Geneva from 15 to 24 July. Plenary talks covered solar, galactic and ultra-high-energy cosmic rays. A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves. Talks were informed by limits from the LHC and elsewhere on dark-matter particles and primordial black holes. This set of constraints has tightened very significantly over the past few years, allowing more meaningful and stringent tests.
Solar modelling
The Sun and its heliosphere, where the solar wind offers insights into magnetic reconnection, shock acceleration and diffusion, are now studied in situ thanks to the Solar Orbiter and Parker Solar Probe spacecraft. Long-term PAMELA and AMS data, spanning more than an 11-year solar cycle, allow precise modelling of the solar modulation of cosmic-ray fluxes below a few tens of GeV. AMS solar proton data show a 27-day periodicity up to 20 GV, caused by corotating interaction regions where fast solar wind overtakes slower wind, creating shocks. AMS has recorded 46 solar energetic particle (SEP) events, the most extreme reaching a few GV, from magnetic-reconnection flares or fast coronal mass ejections. While isotope data once suggested such extreme events occur every 1500 years, Kepler observations of Sun-like stars indicate they may happen every 100 years, releasing more than 10³⁴ erg, often during weak solar minima, and linked to intense X-ray flares.
The spectrum of galactic cosmic rays, studied with high-precision measurements from satellites (DAMPE) and ISS-based experiments (AMS-02, CALET, ISS-CREAM), is not a single power law but shows breaks and slope changes, signatures of diffusion or source effects. A hardening at about 500 GV, common to all primaries, and a softening at 10 TV are observed in the proton and He spectra by all experiments – and for the first time also in DAMPE’s O and C. As the hardening appears at the same rigidity (set by charge, not mass) in the primary spectra as in the secondary-to-primary ratios, these features are attributed to propagation in the galaxy rather than to source-related effects. This is supported by the secondary (Li, Be, B) spectra, whose breaks are about twice as strong as those of the primaries (He, C, O). A second hardening, at 150 TV, was reported by ISS-CREAM (p) and DAMPE (p + He) for the first time, broadly consistent – within large hadronic-model and statistical uncertainties – with indirect ground-based results from GRAPES and LHAASO.
A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves
Ratios of secondary over primary species versus rigidity R (momentum per unit charge) probe the ratio of the galactic halo size H to the energy-dependent diffusion coefficient D(R), and so measure the “grammage” of material through which cosmic rays propagate. Ratios of unstable to stable secondary isotopes probe the escape times of cosmic rays from the halo (∝ H²/D(R)), so from both measurements H and D(R) can be derived. The flattening suggested by the highest-energy point, at 10 to 12 GeV/nucleon, of the 10Be/9Be ratio as a function of energy hints at a halo possibly larger than previously believed, extending beyond 5 kpc, to be tested by HELIX. AMS-02 spectra of single elements will soon allow separation of the primary and secondary fractions for each nucleus, also based on spallation cross-sections. Anomalies remain, such as a flattening at ~7 TeV/nucleon in Li/C and B/C, possibly indicating reacceleration or source grammage. AMS-02’s 7Li/6Li ratio disagrees with pure secondary models, but cross-section uncertainties preclude firm conclusions on a possible primary Li component, which would be produced by a new population of sources.
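In the simplest diffusion-halo picture (schematic relations only, neglecting convection, reacceleration and energy losses), these statements can be summarised as
\[
\frac{\text{secondary}}{\text{primary}} \;\propto\; \frac{H}{D(R)}, \qquad
\tau_{\rm esc} \;\sim\; \frac{H^2}{D(R)}, \qquad
D(R) \;=\; D_0\left(\frac{R}{R_0}\right)^{\delta},
\]
so stable ratios such as B/C fix only the combination H/D, while the radioactive-clock ratio 10Be/9Be, sensitive to the escape time, breaks the degeneracy between the halo size and the diffusion coefficient.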
The muon puzzle
The dependence of ground-based cosmic-ray measurements on hadronic models was widely discussed by Boyd and Pierog, highlighting the need for more measurements at CERN, such as the recent proton–oxygen run being analysed by LHCf. The EPOS–LHC model, based on the core–corona approach, shows reduced muon discrepancies, producing more muons and predicting deeper shower maxima (+20 g/cm²) than earlier models, which implies a heavier inferred composition. This clarifies the muon puzzle raised by the Pierre Auger Observatory a few years ago – a larger muon content in atmospheric showers than in simulations. A fork-like structure remains in the knee region of the proton spectrum, where the new measurements presented by LHAASO are in agreement with IceTop/IceCube, and could lead to a higher content of protons beyond the knee than hinted at by KASCADE and the first results of GRAPES. Despite the higher proton fluxes, a dominance of He above the knee is observed, which requires a special kind of nearby source to be hypothesised.
Multi-messenger approaches
Gamma-ray and neutrino astrophysics were widely discussed at the conference, highlighting the relevance of multi-messenger approaches. LHAASO produced impressive results on UHE astrophysics, revealing a new class of pevatrons: microquasars alongside young massive clusters, pulsar wind nebulae (PWNe) and supernova remnants.
Microquasars are gamma-ray binaries containing a stellar-mass black hole that drives relativistic jets while accreting matter from their companion stars. Outstanding examples include Cyg X-3, a potential PeV microquasar, from which the flux of PeV photons is 5–10 times higher than in the rest of the Cygnus bubble.
Five other microquasars are observed beyond 100 TeV: SS 433, V4641 Sgr, GRS 1915+105, MAXI J1820+070 and Cygnus X-1. SS 433 is a microquasar with two gamma-ray-emitting jets nearly perpendicular to our line of sight, whose termination regions, about 40 pc from the black hole (BH), have been identified by HESS and LHAASO beyond 10 TeV. Due to the Klein–Nishina effect, the inverse-Compton flux above ~10 TeV is gradually suppressed, and an additional spectral component is needed to explain the flux around 100 TeV.
Gamma-ray and neutrino astrophysics were widely discussed at the conference
Beyond 100 TeV, LHAASO also identifies a source coincident with a giant molecular cloud; this component may be due to protons accelerated close to the BH or in the lobes. These results demonstrate the ability to resolve the morphology of extended galactic sources. Similarly, ALMA has discovered two hotspots, both at 0.28° (about 50 pc) from GRS 1915+105, in opposite directions from its BH. These may be interpreted as two lobes, or the extended nature of the LHAASO source may instead be due to the spatial distribution of the surrounding gas, if the emission from GRS 1915+105 is dominated by hadronic processes.
Further discussions addressed pulsar halos and PWNe as unique laboratories for studying the diffusion of electrons, and mysterious, as-yet-unidentified pevatrons such as MGRO J1908+06, coincident with an SNR (favoured) and a PSR. One of these sources may finally reveal an excess in KM3NeT or IceCube neutrinos, directly proving their nature as cosmic-ray accelerators.
The identification and subtraction of source fluxes in the galactic plane is also important for the measurement of the galactic-plane neutrino flux by IceCube. This currently assumes a fixed spectral index of E–2.7, while authors such as Grasso et al. presented a spectrum becoming as hard as E–2.4 closer to the galactic centre. Precise measurements of gamma-ray source fluxes and of the diffuse emission from galactic cosmic rays interacting with interstellar matter lead to better constraints on neutrino observations and on cosmic-ray fluxes around the knee.
Cosmogenic origins
KM3NeT presented a neutrino with an energy well beyond the diffuse cosmic neutrino flux measured by IceCube, which does not extend beyond 10 PeV (CERN Courier March/April 2025 p7). Its origin was widely discussed at the conference. The large error on its estimated energy – 220 PeV, with a 1σ confidence interval of 110 to 790 PeV – nevertheless makes it compatible with the flux observed by IceCube, for which a 30 TeV break was first hypothesised at this conference. If events of this kind are confirmed, they could have transient or dark-matter origins, but a cosmogenic origin is improbable given the IceCube and Pierre Auger limits on the cosmogenic neutrino flux.
Reconciling general relativity and quantum mechanics remains a central problem in fundamental physics. Though successful in their own domains, the two theories resist unification and offer incompatible views of space, time and matter. The field of quantum gravity, which has sought to resolve this tension for nearly a century, is still plagued by conceptual challenges, limited experimental guidance and a crowded landscape of competing approaches. Now in its third instalment, the “Quantum Gravity” conference series addresses this fragmentation by promoting open dialogue across communities. Organised under the auspices of the International Society for Quantum Gravity (ISQG), the 2025 edition took place from 21 to 25 July at Penn State University. The event gathered researchers working across a variety of frameworks – from random geometry and loop quantum gravity to string theory, holography and quantum information. At its core was the recognition that, regardless of specific research lines or affiliations, what matters is solving the puzzle.
One step to get there requires understanding the origin of dark energy, which drives the accelerated expansion of the universe and is typically modelled by a cosmological constant Λ. Yasaman K Yazdi (Dublin Institute for Advanced Studies) presented a case for causal set theory, reducing spacetime to a discrete collection of events, partially ordered to capture cause–effect relationships. In this context, like a quantum particle’s position and momentum, the cosmological constant and the spacetime volume are conjugate variables. This leads to the so-called “ever-present Λ” models, where fluctuations in the former scale as the inverse square root of the latter, decreasing over time but never vanishing. The intriguing agreement between the predicted size of these fluctuations and the observed amount of dark energy, while far from resolving quantum cosmology, stands as a compelling motivation for pursuing the approach.
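The expected size of these fluctuations can be stated compactly (a heuristic estimate in Planck units, following the causal-set literature rather than the talk itself): with V the spacetime four-volume to date,
\[
\Delta\Lambda \;\sim\; \frac{1}{\sqrt{V}} \;\sim\; H_0^{2} \;\approx\; 10^{-122}\ \text{in Planck units},
\]
which is indeed the order of magnitude of the observed dark-energy density.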
In the spirit of John Wheeler’s “it from bit” proposal, Jakub Mielczarek (Jagiellonian University) suggested that our universe may itself evolve by computing – or at least admit a description in terms of quantum information processing. In loop quantum gravity, space is built from granular graphs known as spin networks, which capture the quantum properties of geometry. Drawing on ideas from tensor networks and holography, Mielczarek proposed that these structures can be reinterpreted as quantum circuits, with their combinatorial patterns reflected in the logic of algorithms. This dictionary offers a natural route to simulating quantum geometry, and could help clarify quantum theories that, like general relativity, do not rely on a fixed background.
Quantum clues
What would a genuine quantum theory of spacetime achieve, though? According to Esteban Castro Ruiz (IQOQI), it may have to recognise that reference frames, which are idealised physical systems used to define spatio-temporal distances, must themselves be treated as quantum objects. In the framework of quantum reference frames, notions such as entanglement, localisation and superposition become observer-dependent. This leads to a perspective-neutral formulation of quantum mechanics, which may offer clues for describing physics when spacetime is not only dynamical, but quantum.
The conference’s inclusive vocation came through most clearly in the thematic discussion sessions, including one on the infamous black-hole information problem chaired by Steve Giddings (UC Santa Barbara). A straightforward reading of Stephen Hawking’s 1974 result suggests that black holes radiate, shrink and ultimately destroy information – a process that is incompatible with standard quantum mechanics. Any proposed resolution must face sharp trade-offs: allowing information to escape challenges locality, losing it breaks unitarity and storing it in long-lived remnants undermines theoretical control. Giddings described a mild violation of locality as the lesser evil, but the controversy is far from settled. Still, there is growing consensus that dissolving the paradox may require new physics to appear well before the Planck scale, where quantum-gravity effects are expected to dominate.
Once the domain of pure theory, quantum gravity has become eager to engage with experiment
Among the few points of near-universal agreement in the quantum-gravity community has long been the virtual impossibility of detecting a graviton, the hypothetical quantum of the gravitational field. According to Igor Pikovski (Stockholm University), things may be less bleak than once thought. While the probability of seeing graviton-induced atomic transitions is negligible due to the weakness of gravity, the situation is different for massive systems. By cooling a macroscopic object close to absolute zero, Pikovski suggested, the effect could be amplified enough to be detectable, with current interferometers simultaneously monitoring gravitational waves in the right frequency window. Such a signal would not amount to definitive proof of gravity’s quantisation, just as the photoelectric effect could not definitively establish the existence of photons, nor would it single out a specific ultraviolet model. However, it could constrain concrete predictions and put semiclassical theories under pressure.
Giulia Gubitosi (University of Naples Federico II) tackled phenomenology from a different angle, exploring possible deviations from special relativity in models where spacetime becomes non-commutative. There, coordinates are treated like quantum operators, leading to effects such as decoherence, modified particle speeds and soft departures from locality. Although such signals tend to be faint, they could be enhanced by high-energy astrophysical sources: observations of neutrinos associated with gamma-ray bursts are now starting to close in on these scenarios. Both talks reflected a broader cultural shift: quantum gravity, once the domain of pure theory, has become eager to engage with experiment.
Quantum Gravity 2025 offered a wide snapshot of a field still far from closure, yet increasingly shaped by common goals, the convergence of approaches and cross-pollination. As intended, no single framework took centre stage, with a dialogue-based format keeping focus on the central, pressing issue at hand: understanding the quantum nature of spacetime. With limited experimental guidance, open exchange remains key to clarifying assumptions and avoiding duplication of efforts. Building on previous editions, the meeting pointed toward a future where quantum-gravity researchers will recognise themselves as part of a single, coherent scientific community.
In June 2025, physicists met at Saariselkä, Finland to discuss recent progress in the field of ultra-peripheral collisions (UPCs). All the major LHC experiments measure UPCs – events where two colliding nuclei miss each other, but nevertheless interact via the mediation of photons that can propagate long distances. In a case of life imitating science, almost 100 delegates propagated to a distant location in one of the most popular hiking destinations in northern Lapland to experience 24-hour daylight and discuss UPCs in Finnish saunas.
UPC studies have expanded significantly since the first UPC workshop in Mexico in December 2023. The opportunity to study scattering processes in a clean photon–nucleus environment at collider energies has inspired experimentalists to examine both inclusive and exclusive scattering processes, and to look for signals of collectivity and even the formation of quark–gluon plasma (QGP) in this unique environment.
For many years, experimental activity in UPCs was mainly focused on exclusive processes and QED phenomena, including photon–photon scattering. This year, fresh inclusive particle-production measurements gained significant attention, as did various signatures of QGP-like behaviour observed by different experiments at RHIC and at the LHC. The importance of having complementary experiments perform similar measurements was also highlighted. In particular, the ATLAS experiment joined the ongoing activities to measure exclusive vector-meson photoproduction, finding a cross section that disagrees with the previous ALICE measurements by almost 50%. After long and detailed discussions, it was agreed that the different experimental groups need to work together closely to resolve this tension before the next UPC workshop.
Experimental and theoretical developments very effectively guide each other in the field of UPCs. This includes physics within and beyond the Standard Model (BSM), such as nuclear modifications to the partonic structure of protons and neutrons, gluon-saturation phenomena predicted by QCD (CERN Courier January/February 2025 p31), and precision tests for BSM physics in photon–photon collisions. The expanding activity in the field of UPCs, together with the construction of the Electron Ion Collider (EIC) at Brookhaven National Laboratory in the US, has also made it crucial to develop modern Monte Carlo event generators to the level where they can accurately describe various aspects of photon–photon and photon–nucleus scatterings.
As a photon collider, the LHC complements the EIC. While the centre-of-mass energy at the EIC will be lower, there is some overlap between the kinematic regions probed by these two very different collider projects thanks to the varying energy spectra of the photons. This allows the theoretical models needed for the EIC to be tested against UPC data, thereby reducing theoretical uncertainty on the predictions that guide the detector designs. This complementarity will enable precision studies of QCD phenomena and BSM physics in the 2030s.
For Heike Riel, IBM fellow and head of science and technology at IBM Research, successful careers in science are built not by choosing between academia and industry, but by moving fluidly between them. With a background in semiconductor physics and a leadership role in one of the world’s top industrial research labs, Riel learnt to harness the skills she picked up in academia, and now uses them to build real-world applications. Today, IBM collaborates with academia and industry partners on projects ranging from quantum computing and cybersecurity to developing semiconductor chips for AI hardware.
“I chose semiconductor physics because I wanted to build devices, use electronics and understand photonics,” says Riel, who spent her academic years training to be an applied physicist. “There’s fundamental science to explore, but also something that can be used as a product to benefit society. That combination was very motivating.”
Hands-on mindset
For experimental physicists, this hands-on mindset is crucial. But experiments also require infrastructure that can be difficult to access in purely academic settings. “To do experiments, you need cleanrooms, fabrication tools and measurement systems,” explains Riel. “These resources are expensive and not always available in university labs.” During her first industry job at Hewlett-Packard in Palo Alto, Riel realised just how much she could achieve if given the right resources and support. “I felt like I was then the limit, not the lab,” she recalls.
This experience led Riel to proactively combine academic and industrial research in her PhD with IBM, where cutting-edge experiments are carried out towards a clear, purpose-driven goal within a structured research framework, leaving lots of leeway for creativity. “We explore scientific questions, but always with an application in mind,” says Riel. “Whether we’re improving a product or solving a practical problem, we aim to create knowledge and turn it into impact.”
Shifting gears
According to Riel, once you understand the foundations of fundamental physics, and feel you have gleaned all the skills you can from it, it’s time to consider shifting gears and expanding your skill set with economics or business. In her role, understanding economic value and organisational dynamics is essential. But Riel advises against independently pursuing an MBA. “Studying economics or an MBA later is very doable,” she says. “In fact, your company might even financially support you. But going the other way – starting with economics and trying to pick up quantum physics later – is much harder.”
Riel sees university as a precious time to master complex subjects like quantum mechanics, relativity and statistical physics – topics that are difficult to revisit later in life. “It’s much easier to learn theoretical physics as a student than to go back to it later,” she says. “It builds something more important than just knowledge: it builds your tolerance for frustration, and your capacity for deep logical thinking. You become extremely analytical and much better at breaking down problems. That’s something every employer values.”
In demand
High-energy physicists are even in high demand in fields like consulting, says Riel. A high-achieving academic has a very good chance of being hired, as long as they present their job applications effectively. When scouring applications, recruiters look for specific keywords and transferable skills, so regardless of the depth or quality of your academic research, the way you present yourself really counts. Physics, Riel argues, teaches a kind of thinking that’s both analytical and resilient. An experimental physicist’s application can be tailored towards hands-on experience and tangible solutions to real-world problems; a theoretical physicist’s should demonstrate logical problem-solving and thinking outside the box. “The winning combination is having aspects of both,” says Riel.
On top of that, research in physics increases your “frustration tolerance”. Every physicist has faced failure at some point in their academic career, and their determination to persevere is what makes them resilient. Whether through constantly thinking on your feet or coming up with new solutions to the same problems, this resilience is what can make a physicist’s application stand out from the rest. “In physics, you face problems every day that don’t have easy answers, and you learn how to deal with that,” explains Riel. “That mindset is incredibly useful, whether you’re solving a semiconductor design problem or managing a business unit.”
Academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application
Riel champions the idea of the “T-shaped person”: someone with deep expertise in one area (the vertical stroke of the T) and broad knowledge across fields (the horizontal bar of the T). “You start by going deep – becoming the go-to person for something,” says Riel. This deep knowledge builds your credibility in your desired field: you become the expert. But after that, you need to broaden your scope and understanding.
That breadth can include moving between fields, working on interdisciplinary projects, or applying physics in new domains. “A T-shaped person brings something unique to every conversation,” adds Riel. “You’re able to connect dots that others might not even see, and that’s where a lot of innovation happens.”
Adding the bar on the T means that you can move fluidly between different fields, including through academia and industry. For this reason, Riel believes that the divide between academia and industry is less rigid than people assume, especially in large research organisations like IBM. “We sit in that middle ground,” she explains. “We publish papers. We work with universities on fundamental problems. But we also push toward real-world solutions, products and economic value.”
The difficult part is making the leap from academia to industry. “You need the confidence to make the decision, to choose between working in academia or industry,” says Riel. “At some point in your PhD, your first post-doc, or maybe even your second, you need to start applying your practical skills to industry.” Companies like IBM offer internships, PhDs, research opportunities and temporary contracts for physicists all the way from master’s students to high-level post-docs. These are ideal ways to get your foot in the door of a project, get work published, grow your network and garner some of those industry-focused practical skills, regardless of the stage you are at in your academic career. “You can learn from your colleagues about economy, business strategy and ethics on the job,” says Riel. “If your team can see you using your practical skills and engaging with the business, they will be eager to help you up-skill. This may mean supporting you through further study, whether it’s an online course, or later an MBA.”
Applied knowledge
Riel notes that academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application. “US funding is often tied to applications, and they are much stronger at converting research into tangible products, whereas in Europe there is still more of a divide between knowledge creation and the next step to turn this into products,” she says. “But personally, I find it most satisfying when I can apply what I learn to something meaningful.”
That applied focus is also cyclical, she says. “At IBM, projects to develop hardware often last five to seven years. Software development projects have a much faster turnaround. You start with an idea, you prove the concept, you innovate the path to solve the engineering challenges and eventually it becomes a product. And then you start again with something new.” This is different to most projects in academia, where a researcher contributes to a small part of a very long-term project. Regardless of the timeline of the project, the skills gained from academia are invaluable.
For early-career researchers, especially those in high-energy physics, Riel’s message is reassuring: “Your analytical training is more useful than you think. Whether you stay in academia, move to industry, or float between both, your skills are always relevant. Keep learning and embracing new technologies.”
The key, she says, is to stay flexible, curious and grounded in your foundations. “Build your depth, then your breadth. Don’t be afraid of crossing boundaries. That’s where the most exciting work happens.”