Axion-like particles (ALPs) are some of the most promising candidates for physics beyond the Standard Model. At the LHC, searches for ALPs that couple to gluons and photons have so far been limited to masses above 10 GeV due to trigger requirements that reduce low-energy sensitivity. In its first ever analysis on purely neutral final states, the LHCb collaboration has now extended this experimental reach and set new bounds on the ALP parameter space.
When a global symmetry is spontaneously broken, it gives rise to massless excitations called Goldstone bosons, which reflect the system’s freedom to transform continuously without changing its energy. ALPs are thought to arise via a similar mechanism, though they acquire a small mass because they originate from symmetries that are only approximate. Depending on the underlying theory, they could contribute to dark matter, solve the strong-CP problem, or mediate interactions with a hidden sector. Their coupling to known particles varies across models, leading to a range of potential experimental signatures. Among the most compelling are those involving gluons and photons.
Thanks to the magnitude of the strong coupling constant, even a small interaction with gluons can dominate the production and decay of ALPs. This makes searches at the LHC challenging since low-energy jets in proton–proton collisions are often indistinguishable from the expected ALP decay signature. In this environment, a more effective approach is to focus on the photon channel and search for ALPs that are produced in proton–proton collisions – mostly via gluon–gluon fusion – and that decay into photon pairs. These processes have been investigated at the LHC, but previous searches were limited by trigger thresholds requiring photons with large momentum components transverse to the beam. This is particularly restrictive for low-mass ALPs, whose decay products are often too soft to pass these thresholds.
The new search, based on Run-2 data collected in 2018, overcomes this limitation by leveraging the LHCb detector’s flexible software-based trigger system, lower pile-up and forward geometry. The latter enhances sensitivity to decay products with a small momentum component transverse to the beam, making it well suited to probe resonances in the 4.9 to 19.4 GeV mass region. This is the first LHCb analysis of a purely neutral final state, hence requiring a new trigger and selection strategy, as well as a dedicated calibration procedure. Candidate photon pairs are identified from two high-energy calorimeter clusters that are isolated from the rest of the event and incompatible with originating from charged particles or neutral pions. ALP decays are then sought using maximum-likelihood fits that scan the photon-pair invariant-mass spectrum for peaks.
No photon-pair excess is observed over the background-only hypothesis, and upper limits are set on the ALP production cross-section times the diphoton branching fraction. These results constrain the ALP decay rate and its coupling to photons, probing a region of parameter space that has so far remained unexplored (see figure 1). The investigated mass range is also of interest beyond ALP searches. Alongside the main analysis, the study targeted two-photon decays of B0(s) mesons and the little-studied ηb meson, almost reaching the sensitivity required for its detection.
The upgraded LHCb detector, which began operations with Run 3 in 2022, is expected to deliver another boost in sensitivity. This will allow future analyses to benefit from the extended flexibility of its purely software trigger, significantly larger datasets and a wider energy coverage of the upgraded calorimeter.
In the late 1990s, observational evidence accumulated that the universe is currently undergoing an accelerating expansion. Its cause remains a major mystery for physics. The term “dark energy” was coined to describe this cause, but we have no idea what dark energy is. All we know is that it makes up about 70% of the energy density of the universe, and that it does not behave like regular matter – if it is indeed matter and not a modification of the laws of gravity on cosmological scales. If it is matter, then it must have a pressure close to p = –ρ, where ρ is its energy density. The cosmological constant in Einstein’s equations for spacetime acts precisely this way, and a cosmological constant has therefore long been regarded as the simplest explanation for the observations. It is the bedrock of the prevailing ΛCDM model of cosmology – a setup where dark energy is time-independent. But recent observations by the Dark Energy Spectroscopic Instrument provide tantalising evidence that dark energy might be time-dependent, with its pressure slightly increasing over time (CERN Courier May/June 2025 p11). If upcoming data confirm these results, it would require a paradigm shift in cosmology, ruling out the ΛCDM model.
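For readers who prefer symbols, a brief aside using the standard parametrization adopted in such analyses (not notation from the original article): dark energy is characterised by its equation-of-state parameter

```latex
w \equiv \frac{p}{\rho}, \qquad
w_\Lambda = -1 \ \ \text{(cosmological constant)}, \qquad
w(a) = w_0 + w_a\,(1 - a) \ \ \text{(evolving dark energy)}
```

where a is the cosmic scale factor; a measured w_a significantly different from zero would signal exactly the time dependence discussed here.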
Mounting evidence
From the point of view of fundamental theory, there are at least four good reasons to believe that dark energy must be time-dependent and cannot be a cosmological constant.
The first piece of evidence is well known: if there is a cosmological constant induced by a particle-physics description of matter, then its value should be 120 orders of magnitude larger than observations indicate. This is the famous cosmological constant problem.
A second argument is the “infrared instability” of a spacetime induced by a cosmological constant. Alexander Polyakov (Princeton) has forcefully argued that inhomogeneities on very large length scales would gradually mask a preexisting cosmological constant, making it appear to vary over time.
Recently, other arguments have been put forward indicating that dark energy must be time-dependent. Since quantum matter generates a large cosmological constant when treated as an effective field theory, it should be expected that the cosmological constant problem can only be addressed in a quantum theory of all forces. The best candidate we have is superstring theory. There is mounting evidence that – at least in the regions of the theory under mathematical control – it is impossible to obtain a positive cosmological constant corresponding to the observed accelerating expansion. But one can obtain time-dependent dark energy, for example in quintessence toy models.
Recent observations provide tantalising evidence that dark energy might be time-dependent
The final reason is known as the trans-Planckian censorship conjecture. As the nature of dark energy remains a complete mystery, it is often treated as an effective field theory. This means that one expands all fields in Fourier modes and quantises each field as a harmonic oscillator. The modes one uses have wavelengths that stretch in proportion to the expansion of space. This creates a theoretical headache at the highest energies. To avoid infinities, an “ultraviolet cutoff” is required at or below the Planck mass. This must be at a fixed physical wavelength. In order to maintain this cutoff in an expanding space, it is necessary to continuously create new modes at the cutoff scale as the wavelengths of the previously present modes increase. This implies a violation of unitarity. If dark energy were a cosmological constant, then modes with wavelength equal to the cutoff scale at the present time would become classical at some time in the future, and the violation of unitarity would be visible in hypothetical future observations. To avoid this problem, we conclude that dark energy must be time-dependent.
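Stated compactly – a paraphrase of how the conjecture is usually written, added here only for orientation – trans-Planckian censorship demands that a mode of initially Planckian wavelength never be stretched beyond the Hubble horizon:

```latex
\frac{a_f}{a_i}\,\ell_{\mathrm{Pl}} \;<\; \frac{1}{H_f}
```

where a_i and a_f are the initial and final scale factors and H_f the final Hubble rate. A cosmological constant drives exponential expansion without end, so the left-hand side eventually outgrows any fixed Hubble radius; time-dependent, decaying dark energy can avoid this.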
Because of its deep implications for fundamental physics, we are eagerly awaiting new observational results that will shine more light on the issue of the time-dependence of dark energy.
The 2025 European Physical Society Conference on High Energy Physics (EPS-HEP), held in Marseille from 7 to 11 July, took centre stage in this pivotal year for high-energy physics as the community prepares to make critical decisions on the next flagship collider at CERN to enable major leaps at the high-precision and high-energy frontiers. The meeting showcased the remarkable creativity and innovation in both experiment and theory, driving progress across all scales of fundamental physics. It also highlighted the growing interplay between particle, nuclear, astroparticle physics and cosmology.
Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries. This requires sustained investment from funding agencies, laboratories, universities and the broader community to support careers and recognise leadership in detectors, software and computing. Such support must extend across construction, commissioning and operation, and include strategic and basic R&D. The implementation of detector R&D (DRD) collaborations, as outlined in the 2021 ECFA roadmap, is an important step in this direction.
Physics thrives on precision, and a prime example this year came from the Muon g–2 collaboration at Fermilab, which released its final result combining all six data runs, achieving an impressive 127 parts-per-billion precision on the muon anomalous magnetic moment (CERN Courier July/August 2025 p7). The result agrees with the latest lattice–QCD predictions for the leading hadronic-vacuum-polarisation term, albeit with a theoretical uncertainty four times larger than the experimental one. Continued improvements to lattice QCD and to the traditional dispersion-relation method based on low-energy e+e– and τ data are expected in the coming years.
Runaway success
After the remarkable success of LHC Run 2, Run 3 has now surpassed it in delivered luminosity. Using the full available Run-2 and Run-3 datasets, ATLAS reported 3.4σ evidence for the rare Higgs decay to a muon pair, and a new result on the quantum-loop-mediated decay into a Z boson and a photon, now more consistent with the Standard Model prediction than the earlier ATLAS and CMS Run-2 combination (see “Mapping rare Higgs-boson decays”). ATLAS also presented an updated study of Higgs pair production with decays into two b-quarks and two photons, whose sensitivity was increased beyond statistical gains thanks to improved reconstruction and analysis. CMS released a new Run-2 search for Higgs decays to charm quarks in events produced with a top-quark pair, reaching a sensitivity comparable to that of the traditional weak-boson-associated production channel. Both collaborations also released new combinations of nearly all their Higgs analyses from Run 2, providing a wide set of measurements. While ATLAS sees overall agreement with predictions, CMS observes some non-significant tensions.
Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries
A highlight in top-quark physics this year was the observation by CMS of an excess in top-pair production near threshold, confirmed at the conference by ATLAS (see “ATLAS confirms top–antitop excess”). The strong interaction predicts quasi-bound, colour-singlet pseudoscalar top–antitop effects near threshold, arising from gluon exchange. Unlike bottomonium or charmonium, no proper bound state is formed due to the rapid weak decay of the top quark (see “Memories of quarkonia”). This “toponium” effect can be modelled using non-relativistic QCD. Both experiments observed a cross section about 100 times smaller than that of inclusive top-quark pair production. The subtle signal and complex threshold modelling make the analysis challenging, and warrant further theoretical and experimental investigation.
A major outcome of LHC Run 2 is the lack of compelling evidence for physics beyond the Standard Model. In Run 3, ATLAS and CMS continue their searches, aided by improved triggers, reconstruction and analysis techniques, as well as a dataset more than twice as large, enabling a more sensitive exploration of rare or suppressed signals. The experiments are also revisiting excesses seen in Run 2: for example, a CMS hint of a new resonance decaying into a Higgs boson and another scalar was not confirmed by a new ATLAS analysis including Run-3 data.
Hadron spectroscopy has seen a renaissance since Belle’s 2003 discovery of the exotic X(3872), with landmark advances at the LHC, particularly by LHCb. CMS recently reported three new four-charm-quark states decaying into J/ψ pairs between 6.6 and 7.1 GeV. Spin-parity analysis suggests they are tightly bound tetraquarks rather than loosely bound molecular states (CERN Courier November/December 2024 p33).
Rare observations
Flavour physics continues to test the Standard Model with high sensitivity. Belle II and LHCb reported new CP-violation measurements in the charm sector, confirming the expected small effects. LHCb observed, for the first time, CP violation in the baryon sector via Λb decays, a milestone in the history of CP violation. NA62 at CERN’s SPS achieved the first observation of the ultra-rare kaon decay K+ → π+νν̄ with a branching ratio of 1.3 × 10⁻¹⁰, consistent with the Standard Model prediction. MEG-II at PSI set the most stringent limit to date on the lepton-flavour-violating decay μ → eγ, excluding branching fractions above 1.5 × 10⁻¹³. Both experiments continue data taking until 2026.
Heavy-ion collisions at the LHC provide a rich environment to study the quark–gluon plasma, a hot, dense state of deconfined quarks and gluons, forming a collective medium that flows as a relativistic fluid with an exceptionally low viscosity-to-entropy ratio. Flow in lead–lead collisions, quantified by Fourier harmonics of the azimuthal anisotropy of particle momenta, is well described by hydrodynamic models for light hadrons. Hadrons containing heavier charm and bottom quarks show weaker collectivity, likely due to longer thermalisation times, while baryons exhibit stronger flow than mesons due to quark coalescence. ALICE reported the first LHC measurement of charm–baryon flow, consistent with these effects.
Spin-parity analysis suggests the states are tightly bound tetraquarks
Neutrino physics has made major strides since oscillations were confirmed 27 years ago, with flavour mixing parameters now known to a few percent. Crucial questions still remain: are neutrinos their own antiparticles (Majorana fermions)? What is the mass ordering – normal or inverted? What is the absolute mass scale and how is it generated? Does CP violation occur? What are the properties of the right-handed neutrinos? These and other questions have wide-ranging implications for particle physics, astrophysics and cosmology.
Neutrinoless double-beta decay, if observed, would confirm that neutrinos are Majorana particles. Experiments using xenon and germanium are beginning to constrain the inverted mass ordering, which predicts higher decay rates. Recent combined data from the long-baseline experiments T2K and NOvA show no clear preference for either ordering, but exclude vanishing CP violation at over 3σ in the inverted scenario. The KM3NeT detector in the Mediterranean, with its ORCA and ARCA components, has delivered its first competitive oscillation results, and detected a striking ~220 PeV muon neutrino, possibly from a blazar (CERN Courier March/April 2025 p7). The next-generation large-scale neutrino experiments JUNO (China), Hyper-Kamiokande (Japan) and LBNF/DUNE (USA) are progressing in construction, with data-taking expected to begin in 2025, 2028 and 2031, respectively. LBNF/DUNE is best positioned to determine the neutrino mass ordering, while Hyper-Kamiokande will be the most sensitive to CP violation. All three will also search for proton decay, a possible messenger of grand unification.
There is compelling evidence for dark matter from gravitational effects across cosmic times and scales, as well as indications that it is of particle origin. Its possible forms span a vast mass range, up to the ~100 TeV unitarity limit for a thermal relic, and may involve a complex, structured “dark sector”. The wide complementarity among the search strategies gives the field a unifying character. Direct detection experiments looking for tiny, elastic nuclear recoils, such as XENONnT (Italy), LZ (USA) and PandaX-4T (China), have set world-leading constraints on weakly interacting massive particles. XENONnT and PandaX-4T have also reported first signals from boron-8 solar neutrinos, part of the so-called “neutrino fog” that will challenge future searches. Axions, introduced theoretically to suppress CP violation in strong interactions, could be viable dark-matter candidates. They would be produced in the early universe with enormous number density, behaving, on galactic scales, as a classical, nonrelativistic, coherently oscillating bosonic field, effectively equivalent to cold dark matter. Axions can be detected via their conversion into photons in strong magnetic fields. Experiments using microwave cavities have begun to probe the relevant μeV mass range of relic QCD axions, but the detection becomes harder at higher masses. New concepts, using dielectric disks or wire-based plasmonic resonance, are under development to overcome these challenges.
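The connection between axion mass and cavity frequency is a simple unit conversion (a numerical aside, not a result presented at the conference): an axion of mass m_a converting in a magnetic field produces a photon of frequency

```latex
\nu = \frac{m_a c^2}{h} \approx 0.24\ \mathrm{GHz}\times\frac{m_a}{1\ \mu\mathrm{eV}}
```

so relic axions of a few μeV fall in the GHz microwave band, while larger masses demand ever smaller resonators – one reason detection becomes harder as the mass increases.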
Cosmological constraints
Cosmology featured prominently at EPS-HEP, driven by new results from the analysis of DESI DR2 baryon acoustic oscillation (BAO) data, which include 14 million redshifts. Like the cosmic microwave background (CMB), BAO also provides a “standard ruler” to trace the universe’s expansion history – much like supernovae (SNe) do as standard candles. Cosmological surveys are typically interpreted within the ΛCDM model, a six-parameter framework that remarkably accounts for 13.8 billion years of cosmic evolution, from inflation and structure formation to today’s energy content, despite offering no insight into the nature of dark matter, dark energy or the inflationary mechanism. Recent BAO data, when combined with CMB and SNe surveys, show a preference for a form of dark energy that weakens over time. Tensions also persist in the Hubble expansion rate derived from early-universe (CMB and BAO) and late-universe (SN type-Ia) measurements (CERN Courier March/April 2025 p28). However, anchoring SN Ia distances in redshift remains challenging, and further work is needed before drawing firm conclusions.
Cosmological fits also constrain the sum of neutrino masses. The latest CMB and BAO-based results within ΛCDM appear inconsistent with the lower limit implied by oscillation data for inverted mass ordering. However, firm conclusions are premature, as the result may reflect limitations in ΛCDM itself. Upcoming surveys from the Euclid satellite and the Vera C. Rubin Observatory (LSST) are expected to significantly improve cosmological constraints.
Cristinel Diaconu and Thomas Strebler, chairs of the local organising committee, together with all committee members and many volunteers, succeeded in delivering a flawlessly organised and engaging conference in the beautiful setting of the Palais du Pharo overlooking Marseille’s old port. They closed the event with a phrase commemorating the British cyclist Tom Simpson: “There is no mountain too high.”
The nature of dark matter remains one of the greatest unresolved questions in modern physics. While ground-based experiments persist in their quest for direct detection, astrophysical observations and multi-messenger studies have emerged as powerful complementary tools for constraining its properties. Stars across the Milky Way and beyond – including neutron stars, white dwarfs, red giants and main-sequence stars – are increasingly recognised as natural laboratories for probing dark matter through its interactions with stellar interiors, notably via neutron-star cooling, asteroseismic diagnostics of solar oscillations and gravitational-wave emission.
The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity (ICDMS) was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July. The meeting brought together around 70 researchers from across astrophysics, cosmology, particle physics and gravitational theory. The goal was to foster interdisciplinary dialogue on how observations of stellar systems, gravitational waves and cosmological data can help shed light on the dark sector. The conference was specifically dedicated to exploring how astrophysical and cosmological systems can be used to probe the nature of dark matter.
The first day centred on compact objects as natural laboratories for dark-matter physics. Giorgio Busoni (University of Adelaide) opened with a comprehensive overview of recent theoretical progress on dark-matter accumulation in neutron stars and white dwarfs, highlighting refinements in the treatment of relativistic effects, optical depth, Fermi degeneracy and light mediators – all of which have shaped the field in recent years. Melissa Diamond (Queen’s University) followed with a striking talk with a nod to Dr. Strangelove, exploring how accumulated dark matter might trigger thermonuclear instability in white dwarfs. Sandra Robles (Fermilab) shifted the perspective from neutron stars to white dwarfs, showing how they constrain dark-matter properties. One of the authors highlighted postmerger gravitational-wave observations as a tool to distinguish neutron stars from low-mass black holes, offering a promising avenue for probing exotic remnants potentially linked to dark matter. Axions featured prominently throughout the day, alongside extensive discussions of the different ways in which dark matter affects neutron stars and their mergers.
ICDMS continues to strengthen the interface between fundamental physics and astrophysical observations
On the second day, attention turned to the broader stellar population and planetary systems as indirect detectors. Isabelle John (University of Turin) questioned whether the anomalously long lifetimes of stars near the galactic centre might be explained by dark-matter accumulation. Other talks revisited stellar systems – white dwarfs, red giants and even speculative dark stars – with a focus on modelling dark-matter transport and its effects on stellar heat flow. Complementary detection strategies also took the stage, including neutrino emission, stochastic gravitational waves and gravitational lensing, all offering potential access to otherwise elusive energy scales and interaction strengths.
The final day shifted toward galactic structure and the increasingly close interplay between theory and observation. Lina Necib (MIT) shared stellar kinematics data used to map the Milky Way’s dark-matter distribution, while other speakers examined the reliability of stellar stream analyses and subtle anomalies in galactic rotation curves. The connection to terrestrial experiments grew stronger, with talks tying dark matter to underground detectors, atomic-precision tools and cosmological observables such as the Lyman-alpha forest and baryon acoustic oscillations. Early-career researchers contributed actively across all sessions, underscoring the field’s growing vitality and introducing a fresh influx of ideas that is expanding its scope.
The ICDMS series is now in its third edition. It began in 2018 at Instituto Superior Técnico, Portugal, and is poised to become an annual event. The next conference will take place at the University of Southampton, UK, in 2026, followed by the Massachusetts Institute of Technology in the US in 2027. With increasing participation and growing international interest, the ICDMS series continues to strengthen the interface between fundamental physics and astrophysical observations in the quest to understand the nature of dark matter.
Measurements at high-energy colliders such as the LHC, the Electron–Ion Collider (EIC) and the FCC will be performed at the highest luminosities. The analysis of the high-precision data taken there will require a significant increase in the accuracy of theoretical predictions. To achieve this, new mathematical and algorithmic technologies are needed. Developments in precision Standard Model calculations have been rapid since experts last met for Loopsummit-1 at Cadenabbia on the banks of Lake Como in 2021 (CERN Courier November/December 2021 p24). Loopsummit-2, held in the same location from 20 to 25 July this year, summarised this formidable body of work.
As higher experimental precision relies on new technologies, new theory results require better algorithms, both from the mathematical and computer-algebraic side, and new techniques in quantum field theory. The central software package for perturbative calculations, FORM, now has a new major release, FORM 5. Progress has also been achieved in integration-by-parts reduction, which is of central importance for reducing the vast numbers of Feynman integrals appearing in a calculation to a much smaller set of master integrals. New developments were also reported in analytic and numerical Feynman-diagram integration using Mellin–Barnes techniques, new compact function classes such as Feynman–Fox integrals, and modern summation technologies and methods to establish and solve gigantic recursions and differential equations of degree 4000 and order 100. The latest results on elliptic integrals and progress on the correct treatment of the γ5-problem in real dimensions were also presented. These technologies allow the calculation of processes up to five loops and in the presence of more scales at two- and three-loop order. New results for single-scale quantities like quark condensates and the ρ-parameter were also reported.
In the loop
Measurements at future colliders will depend on the precise knowledge of parton distribution functions, the strong coupling constant αs(MZ) and the heavy-quark masses. Experience suggests that going from one loop order to the next in the massless and massive cases takes 15 years or more, as new technologies must be developed. By now, most of the space-like four-loop splitting functions governing scaling violations are known with good precision, as are new results for the three-loop time-like splitting functions. The massive three-loop Wilson coefficients for deep-inelastic scattering are now complete, requiring far larger and different integral spaces compared with the massless case. Related to these are the Wilson coefficients of semi-inclusive deep-inelastic scattering at next-to-next-to-leading order (NNLO), which will be important to tag individual flavours at the EIC. For αs(MZ) measurements from low-scale processes, the correct treatment of renormalon contributions is necessary. Collisions at high energies also allow the detailed study of scattering processes in the forward region of QCD. Other long-term projects concern NNLO corrections for jet production at e+e– and hadron colliders, and other related processes such as Higgs-boson and top-quark production, in some cases with a large number of partons in the final state. This also includes the use of effective Lagrangians.
Many more steps lie ahead if we are to match the precision of measurements at high-luminosity colliders
The complete calculation of difficult processes at NNLO and beyond always drives the development of term-reduction algorithms and analytic or numerical integration technologies. Many more steps lie ahead in the coming years if we are to match the precision of measurements at high-luminosity colliders. Some of these will doubtless be reported at Loopsummit-3 in summer 2027.
The 39th edition of the International Cosmic Ray Conference (ICRC), a key biennial conference in astroparticle physics, was held in Geneva from 15 to 24 July. Plenary talks covered solar, galactic and ultra-high-energy cosmic rays. A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves. Talks were informed by limits from the LHC and elsewhere on dark-matter particles and primordial black holes. This set of constraints has tightened very significantly over the past few years, allowing more meaningful and stringent tests.
Solar modelling
The Sun and its heliosphere, where the solar wind offers insights into magnetic reconnection, shock acceleration and diffusion, are now studied in situ thanks to the Solar Orbiter and Parker Solar Probe spacecraft. Long-term PAMELA and AMS data, spanning over an 11-year solar cycle, allow precise modelling of solar modulation of cosmic-ray fluxes below a few tens of GeV. AMS solar proton data show a 27-day periodicity up to 20 GV, caused by corotating interaction regions where fast solar wind overtakes slower wind, creating shocks. AMS has recorded 46 solar energetic particle (SEP) events, the most extreme reaching a few GV, from magnetic-reconnection flares or fast coronal mass ejections. While isotope data once suggested such extreme events occur every 1500 years, Kepler observations of Sun-like stars indicate they may happen every 100 years, releasing more than 10³⁴ erg, often during weak solar minima, and linked to intense X-ray flares.
The spectrum of galactic cosmic rays, studied with high-precision measurements from satellites (DAMPE) and ISS-based experiments (AMS-02, CALET, ISS-CREAM), is not a single power law but shows breaks and slope changes, signatures of diffusion or source effects. A hardening at about 500 GV, common to all primaries, and a softening at 10 TV are observed in the proton and He spectra by all experiments – and for the first time also in DAMPE’s O and C. As these breaks appear at the same rigidity (which depends on charge, not mass) in both the primary spectra and the secondary-to-primary ratios, they are attributed to propagation in the galaxy and not to source-related effects. This is supported by secondary (Li, Be, B) spectra, whose breaks are about twice as strong as those of the primaries (He, C, O). A second hardening, at 150 TV, was reported by ISS-CREAM (p) and DAMPE (p + He) for the first time, broadly consistent – within large hadronic-model and statistical uncertainties – with indirect ground-based results from GRAPES and LHAASO.
A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves
Ratios of secondary over primary species versus rigidity R (momentum per unit charge) probe the ratio of the galactic halo size H to the energy-dependent diffusion coefficient D(R), and so measure the “grammage” of material through which cosmic rays propagate. Ratios of unstable to stable secondary isotopes probe the escape times of cosmic rays from the halo, which scale as H²/D(R), so combining both measurements allows H and D(R) to be derived separately. The flattening hinted at by the highest-energy point, at 10 to 12 GeV/nucleon, of the ¹⁰Be/⁹Be ratio as a function of energy suggests a possibly larger halo than previously believed, beyond 5 kpc, to be tested by HELIX. AMS-02 spectra of single elements will soon allow the primary and secondary fractions of each nucleus to be separated, also based on spallation cross-sections. Anomalies remain, such as a flattening at ~7 TeV/nucleon in Li/C and B/C, possibly indicating reacceleration or source grammage. AMS-02’s ⁷Li/⁶Li ratio disagrees with pure secondary models, but cross-section uncertainties preclude firm conclusions on a possible primary Li component, which would be produced by a new population of sources.
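Schematically, in the simplified diffusion-halo picture behind these statements (a sketch, not the detailed propagation models used by the experiments):

```latex
\frac{\text{secondary}}{\text{primary}} \;\propto\; \frac{H}{D(R)},
\qquad
\frac{{}^{10}\mathrm{Be}}{{}^{9}\mathrm{Be}} \;\longleftrightarrow\; t_{\mathrm{esc}} \sim \frac{H^{2}}{D(R)}
```

so the ratio of the two observables isolates the halo size H, after which either one fixes the diffusion coefficient D(R).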
The muon puzzle
The dependence of ground-based cosmic-ray measurements on hadronic models was widely discussed by Boyd and Pierog, highlighting the need for more measurements at CERN, such as the recent proton–oxygen run being analysed by LHCf. The EPOS–LHC model, based on the core–corona approach, shows reduced muon discrepancies, producing more muons and deeper shower maxima (+20 g/cm²) than earlier models, corresponding to a heavier inferred composition. This clarifies the muon puzzle raised by the Pierre Auger Observatory a few years ago: a larger muon content in atmospheric showers than predicted by simulations. A fork-like structure remains in the knee region of the proton spectrum, where the new measurements presented by LHAASO are in agreement with IceTop/IceCube, and could point to a higher content of protons beyond the knee than hinted at by KASCADE and the first results of GRAPES. Despite the higher proton fluxes, a dominance of He above the knee is observed, which would require a special kind of nearby source to be hypothesised.
Multi-messenger approaches
Gamma-ray and neutrino astrophysics were widely discussed at the conference, highlighting the relevance of multi-messenger approaches. LHAASO produced impressive results on ultra-high-energy (UHE) astrophysics, revealing a new class of pevatrons: microquasars, alongside young massive clusters, pulsar wind nebulae (PWNe) and supernova remnants.
Microquasars are gamma-ray binaries containing a stellar-mass black hole that drives relativistic jets while accreting matter from a companion star. An outstanding example is Cyg X-3, a potential PeV microquasar, from which the flux of PeV photons is 5–10 times higher than in the rest of the Cygnus bubble.
Five other microquasars are observed beyond 100 TeV: SS 433, V4641 Sgr, GRS 1915+105, MAXI J1820+070 and Cygnus X-1. SS 433 is a microquasar with two gamma-ray-emitting jets nearly perpendicular to our line of sight, whose termination regions, about 40 pc from the black hole (BH), have been identified by HESS and LHAASO at energies beyond 10 TeV. Due to the Klein–Nishina effect, the inverse-Compton flux above ~10 TeV is gradually suppressed, and an additional spectral component is needed to explain the flux around 100 TeV.
Gamma-ray and neutrino astrophysics were widely discussed at the conference
Beyond 100 TeV, LHAASO also identifies a source coincident with a giant molecular cloud; this component may be due to protons accelerated close to the BH or in the lobes. These results demonstrate the ability to resolve the morphology of extended galactic sources. Similarly, ALMA has discovered two hotspots, both at 0.28° (about 50 pc) from GRS 1915+105 in opposite directions from its BH. These may be interpreted as two lobes, or the extended nature of the LHAASO source may instead be due to the spatial distribution of the surrounding gas, if the emission from GRS 1915+105 is dominated by hadronic processes.
Further discussions addressed pulsar halos and PWNe as unique laboratories for studying the diffusion of electrons, as well as mysterious as-yet-unidentified pevatrons such as MGRO J1908+06, which is coincident with both a supernova remnant (favoured) and a pulsar. One of these sources may finally reveal an excess in KM3NeT or IceCube neutrinos, proving their cosmic-ray-accelerator nature directly.
The identification and subtraction of source fluxes on the galactic plane is also important for the measurement of the galactic-plane neutrino flux by IceCube. This currently assumes a fixed spectral index of –2.7, while authors like Grasso et al. presented a spectrum becoming as hard as –2.4 closer to the galactic centre. Precise measurements of gamma-ray source fluxes and of the diffuse emission from galactic cosmic rays interacting with the interstellar medium lead to better constraints on neutrino observations and on cosmic-ray fluxes around the knee.
Cosmogenic origins
KM3NeT presented a neutrino of energy well beyond the diffuse cosmic neutrino flux of IceCube, which does not extend beyond 10 PeV (CERN Courier March/April 2025 p7). Its origin was widely discussed at the conference. The large error on its estimated energy – 220 PeV, within a 1σ confidence interval of 110 to 790 PeV – makes it nevertheless compatible with the flux observed by IceCube, for which a 30 TeV break was first hypothesised at this conference. If events of this kind are confirmed, they could have transient or dark-matter origins, but a cosmogenic origin is improbable due to the IceCube and Pierre Auger limits on the cosmogenic neutrino flux.
Reconciling general relativity and quantum mechanics remains a central problem in fundamental physics. Though successful in their own domains, the two theories resist unification and offer incompatible views of space, time and matter. The field of quantum gravity, which has sought to resolve this tension for nearly a century, is still plagued by conceptual challenges, limited experimental guidance and a crowded landscape of competing approaches. Now in its third instalment, the “Quantum Gravity” conference series addresses this fragmentation by promoting open dialogue across communities. Organised under the auspices of the International Society for Quantum Gravity (ISQG), the 2025 edition took place from 21 to 25 July at Penn State University. The event gathered researchers working across a variety of frameworks – from random geometry and loop quantum gravity to string theory, holography and quantum information. At its core was the recognition that, regardless of specific research lines or affiliations, what matters is solving the puzzle.
One step to get there requires understanding the origin of dark energy, which drives the accelerated expansion of the universe and is typically modelled by a cosmological constant Λ. Yasaman K Yazdi (Dublin Institute for Advanced Studies) presented a case for causal set theory, which reduces spacetime to a discrete collection of events, partially ordered to capture cause–effect relationships. In this context, like a quantum particle’s position and momentum, the cosmological constant and the spacetime volume are conjugate variables. This leads to the so-called “ever-present Λ” models, where fluctuations in the former scale as the inverse square root of the latter, decreasing over time but never vanishing. The intriguing agreement between the predicted size of these fluctuations and the observed amount of dark energy, while far from resolving quantum cosmology, stands as a compelling motivation for pursuing the approach.
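Heuristically – a compressed rendering of the causal-set argument, in Planck units – conjugacy of Λ and the spacetime four-volume V implies fluctuations

```latex
\Delta\Lambda \;\sim\; \frac{1}{\sqrt{V}}
```

which for today’s cosmic volume is roughly of order 10⁻¹²² in Planck units, the same order of magnitude as the observed dark-energy density, though fluctuating about zero rather than fixed.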
In the spirit of John Wheeler’s “it from bit” proposal, Jakub Mielczarek (Jagiellonian University) suggested that our universe may itself evolve by computing – or at least admit a description in terms of quantum information processing. In loop quantum gravity, space is built from granular graphs known as spin networks, which capture the quantum properties of geometry. Drawing on ideas from tensor networks and holography, Mielczarek proposed that these structures can be reinterpreted as quantum circuits, with their combinatorial patterns reflected in the logic of algorithms. This dictionary offers a natural route to simulating quantum geometry, and could help clarify quantum theories that, like general relativity, do not rely on a fixed background.
Quantum clues
What would a genuine quantum theory of spacetime achieve, though? According to Esteban Castro Ruiz (IQOQI), it may have to recognise that reference frames, which are idealised physical systems used to define spatio-temporal distances, must themselves be treated as quantum objects. In the framework of quantum reference frames, notions such as entanglement, localisation and superposition become observer-dependent. This leads to a perspective-neutral formulation of quantum mechanics, which may offer clues for describing physics when spacetime is not only dynamical, but quantum.
The conference’s inclusive vocation came through most clearly in the thematic discussion sessions, including one on the infamous black-hole information problem chaired by Steve Giddings (UC Santa Barbara). A straightforward reading of Stephen Hawking’s 1974 result suggests that black holes radiate, shrink and ultimately destroy information – a process that is incompatible with standard quantum mechanics. Any proposed resolution must face sharp trade-offs: allowing information to escape challenges locality, losing it breaks unitarity and storing it in long-lived remnants undermines theoretical control. Giddings described a mild violation of locality as the lesser evil, but the controversy is far from settled. Still, there is growing consensus that dissolving the paradox may require new physics to appear well before the Planck scale, where quantum-gravity effects are expected to dominate.
Once the domain of pure theory, quantum gravity has become eager to engage with experiment
Among the few points of near-universal agreement in the quantum-gravity community has long been the virtual impossibility of detecting a graviton, the hypothetical quantum of the gravitational field. According to Igor Pikovski (Stockholm University), things may be less bleak than once thought. While the probability of seeing graviton-induced atomic transitions is negligible due to the weakness of gravity, the situation is different for massive systems. By cooling a macroscopic object close to absolute zero, Pikovski suggested, the effect could be amplified enough to become detectable, provided current interferometers simultaneously monitor gravitational waves in the right frequency window. Such a signal would not amount to a definitive proof of gravity’s quantisation, just as the photoelectric effect could not definitively establish the existence of photons, nor would it single out a specific ultraviolet model. However, it could constrain concrete predictions and put semiclassical theories under pressure. Giulia Gubitosi (University of Naples Federico II) tackled phenomenology from a different angle, exploring possible deviations from special relativity in models where spacetime becomes non-commutative. There, coordinates are treated like quantum operators, leading to effects like decoherence, modified particle speeds and soft departures from locality. Although such signals tend to be faint, they could be enhanced by high-energy astrophysical sources: observations of neutrinos associated with gamma-ray bursts are now starting to close in on these scenarios. Both talks reflected a broader cultural shift: quantum gravity, once the domain of pure theory, has become eager to engage with experiment.
Quantum Gravity 2025 offered a wide snapshot of a field still far from closure, yet increasingly shaped by common goals, the convergence of approaches and cross-pollination. As intended, no single framework took centre stage, with a dialogue-based format keeping focus on the central, pressing issue at hand: understanding the quantum nature of spacetime. With limited experimental guidance, open exchange remains key to clarifying assumptions and avoiding duplication of efforts. Building on previous editions, the meeting pointed toward a future where quantum-gravity researchers will recognise themselves as part of a single, coherent scientific community.
In June 2025, physicists met at Saariselkä, Finland to discuss recent progress in the field of ultra-peripheral collisions (UPCs). All the major LHC experiments measure UPCs – events where two colliding nuclei miss each other, but nevertheless interact via the mediation of photons that can propagate long distances. In a case of life imitating science, almost 100 delegates propagated to a distant location in one of the most popular hiking destinations in northern Lapland to experience 24-hour daylight and discuss UPCs in Finnish saunas.
UPC studies have expanded significantly since the first UPC workshop in Mexico in December 2023. The opportunity to study scattering processes in a clean photon–nucleus environment at collider energies has inspired experimentalists to examine both inclusive and exclusive scattering processes, and to look for signals of collectivity and even the formation of quark–gluon plasma (QGP) in this unique environment.
For many years, experimental activity in UPCs was mainly focused on exclusive processes and QED phenomena, including photon–photon scattering. This year, fresh inclusive particle-production measurements gained significant attention, as did various signatures of QGP-like behaviour observed by different experiments at RHIC and at the LHC. The importance of having complementary experiments perform similar measurements was also highlighted. In particular, the ATLAS experiment joined the ongoing activities to measure exclusive vector-meson photoproduction, finding a cross section that disagrees with the previous ALICE measurements by almost 50%. After long and detailed discussions, it was agreed that different experimental groups need to work together closely to resolve this tension before the next UPC workshop.
Experimental and theoretical developments very effectively guide each other in the field of UPCs. This includes physics within and beyond the Standard Model (BSM), such as nuclear modifications to the partonic structure of protons and neutrons, gluon-saturation phenomena predicted by QCD (CERN Courier January/February 2025 p31), and precision tests for BSM physics in photon–photon collisions. The expanding activity in the field of UPCs, together with the construction of the Electron Ion Collider (EIC) at Brookhaven National Laboratory in the US, has also made it crucial to develop modern Monte Carlo event generators to the level where they can accurately describe various aspects of photon–photon and photon–nucleus scatterings.
As a photon collider, the LHC complements the EIC. While the centre-of-mass energy at the EIC will be lower, there is some overlap between the kinematic regions probed by these two very different collider projects thanks to the varying energy spectra of the photons. This allows the theoretical models needed for the EIC to be tested against UPC data, thereby reducing theoretical uncertainty on the predictions that guide the detector designs. This complementarity will enable precision studies of QCD phenomena and BSM physics in the 2030s.
In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation increases rapidly, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.
A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes with real and imaginary parts. This superposition is the core feature that distinguishes quantum bits from classical bits. While a classical bit is either |0> or |1>, a quantum bit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.
The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.
Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
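As a back-of-the-envelope check, here is a minimal sketch; the 16 bytes per amplitude simply counts two double-precision floats for the real and imaginary parts:

```python
# Memory needed to store the full state vector of an N-qubit system:
# one complex amplitude per basis state, 16 bytes per amplitude
# (two double-precision floats for the real and imaginary parts).
def state_vector_memory_bytes(n_qubits: int) -> int:
    return 2**n_qubits * 16

for n in (30, 40, 50):
    print(f"{n} qubits -> {state_vector_memory_bytes(n) / 1e15:.3g} PB")
# 50 qubits -> about 18 PB, beyond the memory of any existing supercomputer.
```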
Direct mimicry
Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.
In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.
A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).
The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).
Universal quantum computation
Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.
A philosophical dimension
The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. Lattice QCD has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).
In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.
This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.
By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes them flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>) / √2. The T gate applies a 45° phase rotation: |1> → e^(iπ/4)|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U_1 U_2 U_3 … U_N |0000…000>.
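The following minimal numpy sketch (not tied to any particular quantum-computing framework) writes down the Hadamard, T and CNOT gates as matrices and uses the first and last to build an entangled Bell state from |00>:

```python
import numpy as np

# Single-qubit gates as 2x2 unitary matrices.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard: |0> -> (|0> + |1>)/sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])    # T: 45-degree phase rotation on |1>

# CNOT on two qubits (first qubit controls, second is the target),
# written in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Prepare a Bell state: Hadamard on the first qubit of |00>, then CNOT.
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)   # ~[0.707, 0, 0, 0.707]: the entangled state (|00> + |11>)/sqrt(2)

# Gates are unitary, so they are reversible and preserve information.
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))
```

The final check illustrates the reversibility mentioned above: every gate is a unitary matrix, so no information is lost.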
To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.
Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).
Superconducting qubit arrays promise the greatest scalability. These tiny superconducting circuits act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).
The noisy intermediate-scale quantum era
Despite rapid progress, these technologies are at an early stage of development and face three main limitations.
The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.
The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.
Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.
This situation is known as the “noisy intermediate-scale quantum” era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.
Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.
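A toy illustration of why oscillating signs are so damaging (my own sketch, not a lattice calculation): sampling a rapidly oscillating phase shows the signal shrinking exponentially while the statistical noise does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of the sign problem: estimate <cos(theta)> by sampling phases
# theta ~ N(0, sigma). The exact answer, exp(-sigma^2 / 2), shrinks
# exponentially with sigma, while the sampling noise stays ~1/sqrt(n),
# so the relative error explodes for strongly oscillating integrands.
n = 100_000
for sigma in (1, 3, 5, 7):
    samples = np.cos(rng.normal(0.0, sigma, n))
    print(f"sigma={sigma}: exact signal {np.exp(-sigma**2 / 2):.1e}, "
          f"statistical noise {samples.std() / np.sqrt(n):.1e}")
```

By sigma of a few, the exact answer lies far below the sampling noise, so no feasible number of samples recovers it – the analogue of what happens in classical lattice calculations of real-time or finite-density physics.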
Quantum simulators do not suffer from the sign problem because they evolve naturally in real-time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.
Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.
Quantum information
One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.
Just as the lattice revolution needed decades of concerted community effort to reach its full potential, so will the quantum revolution
HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.
This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).
The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP.
One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.
The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.
The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.
Sense and sensibility
The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?
Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.
The idea of physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.
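As an example of such a model (a schematic sketch only): in the original Ghirardi–Rimini–Weber proposal each particle suffers, at a tiny rate λ, a spontaneous “hit” that multiplies its wavefunction by a Gaussian of width r_C and renormalises it,

$$ \psi(x) \;\longrightarrow\; \frac{e^{-(x-\bar{x})^{2}/2r_C^{2}}\,\psi(x)}{\big\|\,e^{-(x-\bar{x})^{2}/2r_C^{2}}\,\psi\,\big\|}\,, $$

with the hit centre x̄ distributed according to the Born rule. With the canonical parameter choices λ ≈ 10⁻¹⁶ s⁻¹ and r_C ≈ 10⁻⁷ m, single particles are essentially never affected, while macroscopic superpositions involving some 10²³ particles collapse almost instantly. Experiments constrain λ and r_C, for instance through the tiny spontaneous heating such hits would induce.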
The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.
The work of John Bell shows that the dynamics of any such theory must have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm, or pilot-wave, theory. It has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden-variable theories to quantum field theory appears cumbersome.
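For concreteness (a sketch, in standard notation): in pilot-wave theory the particles always have definite positions Q_k, “guided” by the wavefunction through

$$ \frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\mathrm{Im}\,\frac{\nabla_k\psi}{\psi}\bigg|_{Q_1,\dots,Q_N}\,, $$

while ψ itself obeys the ordinary Schrödinger equation. Because each particle’s velocity depends on the instantaneous positions of all the others, the dynamics is manifestly non-local, exactly as Bell’s theorem requires.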
Relativistic interpretations
Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds”), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.
If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.
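Schematically (a textbook sketch, not specific to any particular experiment): measuring a system prepared in a superposition with amplitudes c_i unfolds, under the unmodified Schrödinger equation, as

$$ \Big(\sum_i c_i\,|i\rangle\Big)\otimes|\text{ready}\rangle \;\longrightarrow\; \sum_i c_i\,|i\rangle\otimes|\text{apparatus reads } i\rangle\,, $$

with decoherence rapidly suppressing interference between the terms. Many worlds simply takes this final superposition at face value: each term is a branch, weighted by |c_i|².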
Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).
Observer relativity
The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system because their values are correlated with its properties.
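In the simplest schematic terms (an illustration of my own choosing): after the interaction, system and apparatus are jointly described by a correlated state of the form

$$ \rho_{SA} \;=\; \sum_i p_i\,|i\rangle\langle i|\otimes|A_i\rangle\langle A_i|\,, $$

in which the value of the pointer variable A_i is correlated with the property i of the system. In this minimal, physical sense the apparatus “has information” about the system; no conscious observer needs to be invoked.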
Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.
The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.
The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.
Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.
I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). As I have mentioned, only the physical-collapse hypothesis lends itself to empirical tests.
There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. In my own field of loop quantum gravity, the relational interpretation merges very naturally with general relativity, because spacetime regions themselves become quantum processes that affect one another.
Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.