Precise measurements of the Higgs self-coupling and its effects on the Higgs potential will play a key role in testing the validity of the Standard Model (SM). Some 150 physicists discussed the required experimental and theoretical manoeuvres on the serene island of Elba from 11 to 17 May at the Higgs Pairs 2025 workshop.
The conference mixed updates on theoretical developments in Higgs-boson pair production, searches for new physics in the scalar sector, and the most recent results from Run 2 and Run 3 of the LHC. Among the highlights was the first Run 3 analysis released by ATLAS on the search for di-Higgs production in the bbγγ final state – a particularly sensitive channel for probing the Higgs self-coupling. This result builds on earlier Run 2 analyses and demonstrates significantly improved sensitivity, now comparable to the full Run 2 combination of all channels. These gains were driven by the use of new b-tagging algorithms, improved mass resolution through updated analysis techniques, and the availability of nearly twice the dataset.
Complementing this, CMS presented the first search for ttHH production – a rare process that would provide additional sensitivity to the Higgs self-coupling and Higgs–top interactions. Alongside this, ATLAS presented the first experimental searches for triple-Higgs-boson production (HHH), one of the rarest processes predicted by the SM. Work on more traditional final states such as bbττ and bbbb is ongoing at both experiments, and continues to benefit from improved reconstruction techniques and larger datasets.
Beyond current data, the workshop featured discussions of the latest combined projection study by ATLAS and CMS, prepared as part of the input to the upcoming European Strategy Update. It extrapolates results of the Run 2 analyses to expected conditions of the High-Luminosity LHC (HL-LHC), estimating future sensitivities to the Higgs self-coupling and di-Higgs cross-section in scenarios with vastly higher luminosity and upgraded detectors. Under these assumptions, the combined sensitivity of ATLAS and CMS to di-Higgs production is projected to reach a significance of 7.6σ, firmly establishing the process.
These projections provide crucial input for analysis strategy planning and detector design for the next phase of operations at the HL-LHC. Beyond the HL-LHC, efforts are already underway to design experiments at future colliders that will enhance sensitivity to the production of Higgs pairs, and offer new insights into electroweak symmetry breaking.
In 2018 and 2019, the LHCb collaboration published surprising measurements of the Ξc0 and Ωc0 baryon lifetimes, which were inconsistent with previous results and overturned the established hierarchy between the two. A new analysis of their hadronic decays now confirms this observation, promising insights into the dynamics of baryons.
The Λc+, Ξc+, Ξc0 and Ωc0 baryons – each composed of one charm and two lighter up, down or strange quarks – are the only ground-state singly charmed baryons that decay predominantly via the weak interaction. The main contribution to this process comes from the charm quark transitioning into a strange quark, with the other constituents acting as passive spectators. Consequently, at leading order, their lifetimes should be the same. Differences arise from higher-order effects, such as W-boson exchange between the charm and spectator quarks and quantum interference between identical particles, known as “Pauli interference”. Charm hadron lifetimes are more sensitive to these effects than beauty hadrons because of the smaller charm quark mass compared to the bottom quark, making them a promising testing ground to study these effects.
Measurements of the Ξc0 and Ωc0 lifetimes prior to the start of the LHCb experiment resulted in the PDG averages shown in figure 1. The first LHCb analysis, using charm baryons produced in semi-leptonic decays of beauty baryons, was in tension with the established values, giving an Ωc0 lifetime four times larger than the previous average. The inconsistencies were later confirmed by another LHCb measurement, using an independent data set with charm baryons produced directly (prompt) in the pp collision (CERN Courier July/August 2021 p17). These results changed the ordering of the four singly charmed baryons when arranged according to their lifetimes, triggering a scientific discussion on how to treat higher-order effects in decay-rate calculations.
Using the full Run 1 and Run 2 datasets, LHCb has now measured the Ξc0 and Ωc0 lifetimes with a third independent data sample, based on fully reconstructed Ξb– → Ξc0(→ pK–K–π+)π– and Ωb– → Ωc0(→ pK–K–π+)π– decays. The selection of these hadronic decay chains exploits the long lifetime of the beauty baryons, such that the selection efficiency is almost independent of the charm-baryon decay time. To cancel out the small remaining acceptance effects, the measurement is normalised to the kinematically and topologically similar B– → D0(→ K+K–π+π–)π– channel, minimising the uncertainties with only a small additional correction from simulation.
The signal decays are separated from the remaining background by fits to the Ξc0 π– and Ωc0 π– invariant mass spectra, providing 8260 ± 100 Ξc0 and 355 ± 26 Ωc0 candidates. The decay time distributions are obtained with two independent methods: by determining the yield in each of a specific set of decay time intervals, and by employing a statistical technique that uses the covariance matrix from the fit to the mass spectra. The two methods give consistent results, confirming LHCb’s earlier measurements. Combining the three measurements from LHCb, while accounting for their correlated uncertainties, gives τ(Ξc0) = 150.7 ± 1.6 fs and τ(Ωc0) = 274.8 ± 10.5 fs. These new results will serve as experimental guidance on how to treat higher-order effects in weak baryon decays, particularly regarding the approach-dependent sign and magnitude of Pauli interference terms.
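As an illustration of the combination step, a covariance-weighted (BLUE-style) average can be sketched in a few lines; the central values, uncertainties and correlations below are placeholders, not the LHCb inputs.

```python
import numpy as np

# Illustrative only: three hypothetical lifetime measurements (fs) with
# made-up uncertainties and correlations, not the actual LHCb inputs.
values = np.array([151.0, 148.0, 152.0])          # central values
sigmas = np.array([3.0, 2.5, 2.2])                # total uncertainties
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.15],
                 [0.1, 0.15, 1.0]])               # assumed correlations

cov = np.outer(sigmas, sigmas) * corr             # covariance matrix
cinv = np.linalg.inv(cov)
ones = np.ones(len(values))

weights = cinv @ ones / (ones @ cinv @ ones)      # BLUE weights
combined = weights @ values
combined_sigma = np.sqrt(1.0 / (ones @ cinv @ ones))

print(f"combined = {combined:.1f} +/- {combined_sigma:.1f} fs")
```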
The discovery of the Higgs boson at the LHC in 2012 provided strong experimental support for the Brout–Englert–Higgs mechanism of spontaneous electroweak symmetry breaking (EWSB) as predicted by the Standard Model. EWSB explains how the W and Z bosons, the mediators of the weak interaction, acquire mass: their longitudinal polarisation states emerge from the Goldstone modes of the Higgs field, linking the mass generation of the vector bosons directly to the dynamics of EWSB.
Yet, its ultimate origins remain unknown and the Standard Model may only offer an effective low-energy description of a more fundamental theory. Exploring this possibility requires precise tests of how EWSB operates, and vector boson scattering (VBS) provides a particularly sensitive probe. In VBS, two electroweak gauge bosons scatter off one another. The cross section remains finite at high energies only because there is an exact cancellation between the pure gauge-boson interactions and the Higgs-boson mediated contributions, an effect analogous to the role of the Z boson propagator in WW production at electron–positron colliders. Deviations from the expected behaviour could signal new dynamics, such as anomalous couplings, strong interactions in the Higgs sector or new particles at higher energy scales.
This result lays the groundwork for future searches for new physics hidden within the electroweak sector
VBS interactions are among the rarest observed so far at the LHC, with cross sections as low as one femtobarn. To disentangle them from the background, researchers rely on the distinctive experimental signature of two high-energy jets in the forward detector regions produced by the initial quarks that radiate the bosons, with minimal hadronic activity between them. Using the full data set from Run 2 of the LHC at a centre-of-mass energy of 13 TeV, the CMS collaboration carried out a comprehensive set of VBS measurements across several production modes: WW (with both same and opposite charges), WZ and ZZ, studied in five final states where both bosons decay leptonically and in two semi-leptonic configurations where one boson decays into leptons and the other into quarks. To enhance sensitivity further, the data from all the measurements have now been combined in a single joint fit, with a complete treatment of uncertainty correlations and a careful handling of events selected by more than one analysis.
All modes, one analysis
To account for possible deviations from the expected predictions, each process is characterised by a signal strength parameter (μ), defined as the ratio of the measured production rate to the cross section predicted by the Standard Model. A value of μ near unity indicates consistency with the Standard Model, while significant deviations may suggest new physics. The results, summarised in figure 1, display good agreement with the Standard Model predictions: all measured signal strengths are consistent with unity within their respective uncertainties. A mild excess with respect to the leading-order theoretical predictions is observed across several channels, highlighting the need for more accurate modelling, in particular for the measurements that have reached a level of precision where systematic effects dominate. By presenting the first evidence for all charged VBS production modes from a single combined statistical analysis, this CMS result lays the groundwork for future searches for new physics hidden within the electroweak sector.
The 16th International Workshop on Hadron Physics (Hadrons 2025) welcomed 135 physicists to the Federal University of Rio Grande do Sul (UFRGS) in Porto Alegre, Brazil. Delayed by four months by a tragic flood that devastated the city, the triennial conference took place from 10 to 14 March and, despite the adversity, maintained its long tradition as a forum for collaboration among Brazilian and international researchers at all stages of their careers.
The workshop’s scientific programme included field theoretical approaches to QCD, the behaviour of hadronic and quark matter in astrophysical contexts, hadronic structure and decays, lattice QCD calculations, recent experimental developments in relativistic heavy-ion collisions, and the interplay of strong and electroweak forces within the Standard Model.
Fernanda Steffens (University of Bonn) explained how deep-inelastic-scattering experiments and theoretical developments are revealing the internal structure of the proton. Kenji Fukushima (University of Tokyo) addressed the theoretical framework and phase structure of strongly interacting matter, with particular emphasis on the QCD phase diagram and its relevance to heavy-ion collisions and neutron stars. Chun Shen (Wayne State University) presented a comprehensive overview of the state-of-the-art techniques used to extract the transport properties of quark–gluon plasma from heavy-ion collision data, emphasising the role of Bayesian inference and machine learning in constraining theoretical models. Li-Sheng Geng (Beihang University) explored exotic hadrons through the lens of hadronic molecules, highlighting symmetry multiplets such as pentaquarks, the formation of multi-hadron states and the role of femtoscopy in studying unstable particle interactions.
This edition of Hadrons was dedicated to the memory of two individuals who left a profound mark on the Brazilian hadronic-physics community: Yogiro Hama, a distinguished senior researcher and educator whose decades-long contributions were foundational to the development of the field in Brazil, and Kau Marquez, an early-career physicist whose passion for science remained steadfast despite her courageous battle with spinal muscular atrophy. Both were remembered with deep admiration and respect, not only for their scientific dedication but also for their personal strength and impact on the community.
Its mission is to cultivate a vibrant and inclusive scientific environment
Since its creation in 1988, the Hadrons workshop has played a central role in developing Brazil’s scientific capacity in particle and nuclear physics. Its structure facilitates close interaction between master’s and doctoral students, and senior researchers, thus enhancing both technical training and academic exchange. This model continues to strengthen the foundations of research and collaboration throughout the Brazilian scientific community.
This is the main event for the Brazilian particle- and nuclear-physics communities, reflecting a commitment to advancing research in this highly interactive field. By circulating the venue across multiple regions of Brazil, each edition further renews its mission to cultivate a vibrant and inclusive scientific environment. This edition was closed by a public lecture on QCD by Tereza Mendes (University of São Paulo), who engaged local students with the foundational questions of strong-interaction physics.
The next edition of the Hadrons series will take place in Bahia in 2028.
The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. The conference reviews recent experimental and theoretical developments in CP violation, rare decays, Cabibbo–Kobayashi–Maskawa matrix elements, heavy-quark decays, flavour phenomena in charged leptons and neutrinos, and the interplay between flavour physics and high-pT physics at the LHC.
The highlight of the conference was new results on the muon magnetic anomaly. The Muon g-2 experiment at Fermilab released its final measurement of aμ = (g-2)/2 on 3 June, while the conference was in progress, reaching a precision of 127 ppb. This uncertainty is more than four times smaller than that reported by the previous experiment. One week earlier, on 27 May, the Muon g-2 Theory Initiative published their second calculation of the same quantity, following that published in summer 2020. A major difference between the two calculations is that the earlier one used experimental data and the dispersion integral to evaluate the hadronic contribution to aμ, whereas the update uses a purely theoretical approach based on lattice QCD. The strong tension between the earlier calculation and experiment is no longer present: the new calculation is compatible with the experimental result. Thus, no new physics discovery can be claimed, though the reason for the difference between the two approaches must be understood (see “Fermilab’s final word on muon g-2”).
The MEG II collaboration presented an important update to their limit on the branching fraction for the lepton-flavour-violating decay μ → eγ. Their new upper bound of 1.5 × 10–13 is determined from data collected in 2021 and 2022. The experiment recorded additional data from 2023 to 2024 and expects to continue data taking for two more years. These data will be sensitive to a branching fraction four to five times smaller than the current limit.
LHCb, Belle II, BESIII and NA62 all discussed recent results in quark flavour physics. Highlights include the first measurement of CP violation in a baryon decay by LHCb and improved limits on CP violation in D-meson decay to two pions by Belle II. With more data, the latter measurements could potentially show that the observed CP violation in charm is from a non-Standard-Model source.
The Belle II collaboration now plans to collect a sample of between 5 and 10 ab–1 by the early 2030s before undergoing an upgrade to collect a 30 to 50 ab–1 sample by the early 2040s. LHCb plans to run to the end of the High-Luminosity LHC and collect 300 fb–1. LHCb recorded almost 10 fb–1 of data last year – more than in all its previous running, and now with a fully software-based trigger with much higher efficiency than the previous hardware-based first-level trigger. Future results from Belle II and the LHCb upgrade are eagerly anticipated.
The 24th FPCP conference will be held from 18 to 22 May 2026 in Bad Honnef, Germany.
Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.
But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.
The view from the lab
The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.
Valued measurements
The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions to QM, total and partial, of concepts drawn from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?
One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)
One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.
Perhaps this was viable in the 1930s, but today measurement devices rely on quantum principles
Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.
Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.
Practice makes perfect
We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, after measurement then the device is in a similarly-weighted superposition of readout values.
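A minimal numerical sketch of this von Neumann-style premeasurement, with an assumed two-state pointer, shows how the unitary interaction copies the superposition into system–device correlations rather than removing it.

```python
import numpy as np

# Toy von Neumann "premeasurement" (an illustrative assumption): a spin qubit
# interacts unitarily with a one-qubit pointer (a CNOT), so the pointer ends
# up correlated with the spin instead of collapsing it.
a, b = 0.6, 0.8                       # spin amplitudes (|a|^2 + |b|^2 = 1)
spin = np.array([a, b])               # a|up> + b|down>
pointer = np.array([1.0, 0.0])        # pointer in its "ready" state

cnot = np.array([[1, 0, 0, 0],        # |up, ready>   -> |up, ready>
                 [0, 1, 0, 0],        # |up, fired>   -> |up, fired>
                 [0, 0, 0, 1],        # |down, ready> -> |down, fired>
                 [0, 0, 1, 0]])       # |down, fired> -> |down, ready>

state_in = np.kron(spin, pointer)     # product state before the interaction
state_out = cnot @ state_in           # entangled: a|up, ready> + b|down, fired>

print(state_out.reshape(2, 2))        # rows: spin; columns: pointer readout
```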
In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.
Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
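The interferometer arithmetic is easy to check explicitly; the sketch below uses the standard idealised 50:50 beam-splitter matrix and a single relative phase between the arms.

```python
import numpy as np

# Idealised Mach-Zehnder: two 50:50 beam splitters with a relative phase phi
# between the arms (standard textbook model, not a detector simulation).
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter

def detector_probs(phi, block=None):
    psi = BS @ np.array([1.0, 0.0])              # photon enters one input port
    if block == "upper":
        psi[0] = 0.0                             # absorb the upper-arm amplitude
    if block == "lower":
        psi[1] = 0.0
    psi = np.array([psi[0] * np.exp(1j * phi), psi[1]])  # phase in one arm
    psi = BS @ psi                               # recombine at second splitter
    return np.abs(psi) ** 2                      # probabilities at detectors A, B

print(detector_probs(phi=0.0))        # both arms open: all photons reach B
print(detector_probs(phi=np.pi))      # both arms open: all photons reach A
print(detector_probs(phi=0.0, block="upper"))
# one arm blocked: [0.25, 0.25], i.e. 50:50 among detected photons
# (the other half are absorbed at the block)
```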
Unfeasible interference
But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.
And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
Emergence and scale
Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.
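A toy model (an assumption for illustration, not part of the argument above) makes the suppression quantitative: if each of N environment qubits picks up only a slight dependence on which branch the system is in, the off-diagonal element of the system’s reduced density matrix falls off exponentially with N.

```python
import numpy as np

# Toy decoherence model: a system qubit in (|0> + |1>)/sqrt(2) couples to N
# environment qubits. Each environment qubit ends up in |e0> if the system is
# |0> and in a slightly rotated |e1> if it is |1>. The surviving coherence of
# the reduced density matrix is the overlap <E0|E1>, which factorises as
# <e0|e1>**N and so decays exponentially with the size of the environment.
theta = 0.1                                  # small per-qubit rotation angle
overlap_per_qubit = np.cos(theta)            # <e0|e1> for one environment qubit

for n_env in [1, 10, 100, 1000]:
    coherence = overlap_per_qubit ** n_env   # total overlap <E0|E1>
    print(f"N = {n_env:4d}  |rho_01| = {0.5 * coherence:.2e}")
```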
On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).
Two views of quantum mechanics

Quantum phenomenon | Lab view | Decoherent view
Dynamics | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements | Cannot be treated internal to the formalism | Just one more dynamical interaction
Role of the observer | Conceptually central | Just one more physical system
But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.
Many worlds
At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.
(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)
Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)
Alternative views
Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli, see “Four ways to interpret quantum mechanics“, offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden variables interpretations modify the formalism, and so fall outside either category.)
The Everett interpretation is just the decoherent view taken fully seriously
Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.
Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.
The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.
Heavy-ion collisions usually have very high multiplicities due to colour flow and multiple nucleon interactions. However, when the ions are separated by more than about twice their radii in so-called ultra-peripheral collisions (UPC), electromagnetically induced interactions dominate. In these colour-neutral interactions, the ions remain intact and a central system with few particles is produced, whose summed transverse momentum, related by Fourier transform to the separation of the ions, is typically less than 100 MeV/c.
In the photoproduction of vector mesons, a photon, radiated from one of the ions, fluctuates into a virtual vector meson long before it reaches the target and then interacts with one or more nucleons in the other ion. The production of ρ mesons has been measured at the LHC by ALICE in PbPb and XeXe collisions, while J/ψ mesons have been measured in PbPb collisions by ALICE, CMS and LHCb. Now, LHCb has isolated a precisely measured, high-statistics sample of di-pions with backgrounds below 1% in which several vector mesons are seen.
Figure 1 shows the invariant mass distribution of the pions, and the fit to the data requires contributions from the ρ meson, continuum ππ, the ω meson and two higher mass resonances at about 1.35 and 1.80 GeV, consistent with excited ρ mesons. The higher structure was also discernible in previous measurements by STAR and ALICE. Since its discovery in 1961, the ρ meson has proved challenging to describe because of its broad width and because of interference effects. More data in the di-pion channel, particularly when practically background-free down almost to production threshold, are therefore welcome. These data may help with hadronic corrections to the prediction of muon g-2: the dip and bump structure at high masses seen by LHCb is qualitatively similar to that observed by BaBar in e+e– → π+π– scattering (CERN Courier March/April 2025 p21). From the invariant mass spectrum, LHCb has measured the cross-sections for ρ, ω, ρ′ and ρ′′ as a function of rapidity in photoproduction on lead nuclei.
Naively, photoproduction on a nucleus should simply be photoproduction on a proton scaled by the number of nucleons, and can be calculated in the impulse approximation, which takes into account only the nuclear form factor and neglects all other potential nuclear effects.
However, nuclear shadowing, caused by multiple interactions as the meson passes through the nucleus, leads to a suppression (CERN Courier January/February 2025 p31). In addition, there may be further non-linear QCD effects at play.
Elastic re-scattering is usually described through a Glauber calculation that takes account of multiple elastic scatters. This is extended in the GKZ model using Gribov’s formalism to include inelastic scatters. The inset in figure 1 shows the measured differential cross-section for the ρ meson as a function of rapidity for LHCb data compared to the GKZ prediction, to a prediction from the STARlight generator, and to ALICE data at central rapidities. Additional suppression due to nuclear effects is observed above that predicted by GKZ.
The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.
Cosmological constant
In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
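The role of w can be seen from the standard scaling of the dark-energy density with the scale factor a, ρ ∝ a^(−3(1+w)); the short sketch below (a textbook relation, not DESI analysis code) shows that w = −1 gives a constant density while w < −1 gives a density that grows as the universe expands.

```python
import numpy as np

# Standard scaling of a dark-energy density with constant equation of state w:
# rho(a) = rho_0 * a**(-3 * (1 + w)). For w = -1 the density is constant;
# for w < -1 ("phantom") it grows as the universe expands.
a = np.array([0.5, 1.0, 2.0])                # scale factor (a = 1 today)

for w in [-0.8, -1.0, -1.2]:
    rho = a ** (-3.0 * (1.0 + w))            # in units of today's density
    print(f"w = {w:+.1f}:  rho/rho_0 = {np.round(rho, 3)}")
```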
These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to perform detailed simultaneous spectroscopic measurements of a large number of objects in the sky, resulting in a catalogue of measurements of the distances of objects based on their velocity-induced shift in wavelength, or redshift. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.
The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).
The 3D map produced by DESI can be used to study the evolution of the universe as it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB; however, they also left imprints in the distribution of baryonic matter up until the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures that are observed today.
The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.
Among other studies, the latest data from DESI was combined with CMB observations and fitted to the ΛCDM model. This works relatively well, although it requires a lower matter-density parameter than found from CMB data alone. However, using the resulting cosmological parameters gives a poor match to the supernova data. Similarly, fitting the ΛCDM model using the supernova data results in poor agreement with both the DESI and CMB data, thereby putting some strain on the ΛCDM model. Things don’t get significantly better when adding some freedom to these analyses by allowing w to differ from –1.
The new data release provides significant evidence of a deviation from the ΛCDM model
An adaptation of the ΛCDM model that brings it into agreement with all three datasets requires w to evolve with redshift, or time. The implications for the acceleration of the universe based on these results are shown in the “Tension with ΛCDM” figure, which shows the deceleration parameter of the expansion of the universe, q, as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, acceleration increases with time, as redshift approaches 0. DESI data suggests that the acceleration of the universe started earlier, but is currently weaker than that predicted by ΛCDM.
Although this model matches the data well, a theoretical explanation is difficult. In particular, the data implies that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.
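For orientation, the deceleration parameter plotted in the figure can be computed from the Friedmann equations for a flat universe; the sketch below uses the CPL parametrisation w(z) = w0 + wa z/(1+z) with illustrative density parameters and (w0, wa) values, not the DESI best fit.

```python
import numpy as np

# Deceleration parameter q(z) for a flat universe with matter and dark energy
# whose equation of state follows the CPL form w(z) = w0 + wa * z / (1 + z).
# Density parameters and (w0, wa) below are illustrative placeholders.
omega_m, omega_de = 0.3, 0.7

def q_of_z(z, w0, wa):
    w = w0 + wa * z / (1.0 + z)
    # dark-energy density evolution for the CPL parametrisation
    f_de = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    e2 = omega_m * (1.0 + z) ** 3 + omega_de * f_de          # (H/H0)^2
    om_z = omega_m * (1.0 + z) ** 3 / e2                     # matter fraction
    ode_z = omega_de * f_de / e2                             # dark-energy fraction
    return 0.5 * (om_z + (1.0 + 3.0 * w) * ode_z)            # q < 0: accelerating

z = np.array([0.0, 0.5, 1.0, 2.0])
print("LambdaCDM       :", np.round(q_of_z(z, w0=-1.0, wa=0.0), 2))
print("evolving w (toy):", np.round(q_of_z(z, w0=-0.8, wa=-0.8), 2))
```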
Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and which data sets are combined, however, all such studies provide similar results. As no 5σ discrepancy is found yet, there is no reason to discard ΛCDM, though this could change with another two years of DESI data coming up, along with data from the European Euclid mission, Vera C Rubin Observatory, and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion for various redshift periods.
Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to the most exciting frontiers in modern cosmology and particle physics.
Giovannini is an esteemed scholar and household name in the fields of theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions on several aspects of the early universe. He also authored 2008’s A Primer on the Physics of the Cosmic Microwave Background – a book most cosmologists are very familiar with.
In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.
Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.
Gravitational rainbow
Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10–18 Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.
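The attohertz bound quoted above follows from a one-line estimate: a wavelength equal to the Hubble radius c/H0 corresponds to a frequency of order H0 itself (taking H0 ≈ 70 km/s/Mpc; the numbers below are approximate).

```python
# Order-of-magnitude check of the attohertz bound: a wavelength equal to the
# Hubble radius c/H0 corresponds to a frequency of roughly H0 itself.
H0_km_s_Mpc = 70.0
Mpc_in_km = 3.086e19                      # kilometres per megaparsec

H0_inv_seconds = H0_km_s_Mpc / Mpc_in_km  # H0 expressed in s^-1
print(f"f_min ~ c / (c/H0) = H0 ~ {H0_inv_seconds:.1e} Hz")   # ~2e-18 Hz
```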
The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.
The second part of the book is about the signals of the concordance paradigm, including discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before entering the technical interludes of the next section. Here, Giovannini emphasises that, because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter expands to include conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.
Phenomenological implications are discussed in the final section, starting with the low-frequency branch, which covers the lowest-frequency domain. Giovannini then examines the intermediate and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, with travelling waves transformed into standing waves. The penultimate chapter addresses the hot topic of the “quantumness” of relic gravitons, before the conclusion. The book finishes with five appendices covering a range of useful topics, from notation to related basics of general relativity and cosmic perturbations.
Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background and its unparalleled potential to unveil new physics, and an invaluable resource for those seeking to explore the unknown corners of particle physics and cosmology.
The 31st Quark Matter conference took place from 6 to 12 April at Goethe University in Frankfurt, Germany. This edition of the world’s flagship conference for ultra-relativistic heavy-ion physics was the best attended in the series’ history, with more than 1000 participants.
A host of experimental measurements and theoretical calculations targeted fundamental questions in many-body QCD. These included the search for a critical point along the QCD phase diagram, the extraction of the properties of the deconfined quark–gluon plasma (QGP) medium created in heavy-ion collisions, and the search for signatures of the formation of this deconfined medium in smaller collision systems.
Probing thermalisation
New results highlighted the ability of the strong force to thermalise the out-of-equilibrium QCD matter produced during the collisions. Thermalisation can be probed by taking advantage of spatial anisotropies in the initial collision geometry which, due to the rapid onset of strong interactions at early times, result in pressure gradients across the system. These pressure gradients in turn translate into a momentum-space anisotropy of produced particles in the bulk, which can be experimentally measured by taking a Fourier transform of the azimuthal distribution of final-state particles with respect to a reference event axis.
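In practice the anisotropy is quantified by the Fourier coefficients vn of the azimuthal distribution relative to the event axis; a minimal estimator on toy particles (with an assumed event-plane angle and input v2) looks like this.

```python
import numpy as np

# Toy estimate of the second-order flow coefficient v2: generate particles
# with an azimuthal distribution dN/dphi ~ 1 + 2*v2*cos(2*(phi - Psi)) by
# accept-reject, then recover v2 as the average of cos(2*(phi - Psi)).
# The event-plane angle Psi and the input v2 are arbitrary assumptions.
rng = np.random.default_rng(1)
v2_true, psi = 0.1, 0.3

phi = rng.uniform(0.0, 2.0 * np.pi, size=200_000)
weights = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi))
keep = rng.uniform(0.0, weights.max(), size=phi.size) < weights
phi = phi[keep]

v2_measured = np.mean(np.cos(2.0 * (phi - psi)))   # Fourier coefficient v2
print(f"input v2 = {v2_true}, measured v2 = {v2_measured:.3f}")
```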
An area of active experimental and theoretical interest is to quantify the degree to which heavy quarks, such as charm and beauty, participate in this collective behaviour, which informs on the diffusion properties of the medium. The ALICE collaboration presented the first measurement of the second-order coefficient of the momentum anisotropy of charm baryons in Pb–Pb collisions, showing significant collective behaviour and suggesting that charm quarks undergo some degree of thermalisation. This collective behaviour appears to be stronger in charm baryons than charm mesons, following similar observations for light flavour.
A host of measurements and calculations targeted fundamental questions in many-body QCD
Due to the nature of thermalisation and the long hydrodynamic phase of the medium in Pb–Pb collisions, signatures of the microscopic dynamics giving rise to the thermalisation are often washed out in bulk observables. However, local excitations of the hydrodynamic medium, caused by the propagation of a high-energy jet through the QGP, can offer a window into such dynamics. Due to coupling to the coloured medium, the jet loses energy to the QGP, which in turn re-excites the thermalised medium. These excited states quickly decay and dissipate, and the local perturbation can partially thermalise. This results in a correlated response of the medium in the direction of the propagating jet, the distribution of which allows measurement of the thermalisation properties of the medium in a more controlled manner.
In this direction, the CMS collaboration presented the first measurement of an event-wise two-point energy–energy correlator, for events containing a Z boson, in both pp and Pb–Pb collisions. The two-point correlator represents the energy-weighted cross section of the angle between particle pairs in the event and can separate out QCD effects at different scales, as these populate different regions in angular phase space. In particular, the correlated response of the medium is expected to appear at large angles in the correlator in Pb–Pb collisions.
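A bare-bones version of such a two-point correlator can be written down directly from that description; the binning in the pair angle and the toy event below are illustrative choices, not the full CMS definition.

```python
import numpy as np

# Sketch of an event-wise two-point energy-energy correlator: histogram the
# pair angle between particles, weighting each pair by the product of the two
# energies normalised to the squared total energy. The toy "event" is a random
# set of (energy, phi) pairs, purely for illustration.
rng = np.random.default_rng(0)
energy = rng.exponential(10.0, size=50)          # toy particle energies (GeV)
phi = rng.uniform(0.0, 2.0 * np.pi, size=50)     # toy azimuthal angles

bins = np.linspace(0.0, np.pi, 16)               # 15 bins in the pair angle
eec = np.zeros(len(bins) - 1)

for i in range(len(energy)):
    for j in range(i + 1, len(energy)):
        dphi = abs(phi[i] - phi[j])
        dphi = min(dphi, 2.0 * np.pi - dphi)     # fold into [0, pi]
        k = min(np.searchsorted(bins, dphi, side="right") - 1, len(eec) - 1)
        eec[k] += energy[i] * energy[j] / energy.sum() ** 2

print(np.round(eec, 4))                          # correlator vs pair angle
```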
The use of a colourless Z boson, which does not interact in the QGP, allows CMS to compare events with similar initial virtuality scales in pp and Pb–Pb collisions, without incurring biases due to energy loss in the QCD probes. The collaboration showed modifications in the two-point correlator at large angles, from pp to Pb–Pb collisions, alluding to a possible signature of the correlated response of the medium to the traversing jets. Such measurements can help guide models into capturing the relevant physical processes underpinning the diffusion of colour information in the medium.
Looking to the future
The next edition of this conference series will take place in 2027 in Jeju, South Korea, where the new results should include the latest measurements from the upgraded Run 3 detectors at the LHC and the newly commissioned sPHENIX detector at RHIC. New collision systems such as O–O at the LHC will help shed light on many of the properties of the QGP, including its thermalisation, by varying the lifetime of the pre-equilibrium and hydrodynamic phases in the collision evolution.