The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. The conference reviews recent experimental and theoretical developments in CP violation, rare decays, Cabibbo–Kobayashi–Maskawa matrix elements, heavy-quark decays, flavour phenomena in charged leptons and neutrinos, and the interplay between flavour physics and high-pT physics at the LHC.
The highlight of the conference was new results on the muon magnetic anomaly. The Muon g-2 experiment at Fermilab released its final measurement of aμ = (g−2)/2 on 3 June, while the conference was in progress, reaching a precision of 127 ppb on the published value. This uncertainty is more than four times smaller than that reported by the previous experiment. One week earlier, on 27 May, the Muon g-2 Theory Initiative published its second calculation of the same quantity, following the one published in summer 2020. A major difference between the two calculations is that the earlier one evaluated the hadronic contribution to aμ using experimental data and a dispersion integral, whereas the update uses a purely theoretical approach based on lattice QCD. The strong tension between the earlier calculation and experiment is no longer present: the new calculation is compatible with the experimental results. Thus, no new-physics discovery can be claimed, though the reason for the difference between the two approaches must be understood (see “Fermilab’s final word on muon g-2”).
The MEG II collaboration presented an important update to their limit on the branching fraction for the lepton-flavour-violating decay μ → eγ. Their new upper bound of 1.5 × 10⁻¹³ is determined from data collected in 2021 and 2022. The experiment recorded additional data from 2023 to 2024 and expects to continue data taking for two more years. The full dataset will be sensitive to a branching fraction four to five times smaller than the current limit.
LHCb, Belle II, BESIII and NA62 all discussed recent results in quark flavour physics. Highlights include the first measurement of CP violation in a baryon decay by LHCb and improved limits on CP violation in D-meson decay to two pions by Belle II. With more data, the latter measurements could potentially show that the observed CP violation in charm is from a non-Standard-Model source.
The Belle II collaboration now plans to collect a sample of 5 to 10 ab⁻¹ by the early 2030s before undergoing an upgrade to collect a 30 to 50 ab⁻¹ sample by the early 2040s. LHCb plans to run to the end of the High-Luminosity LHC and collect 300 fb⁻¹. LHCb recorded almost 10 fb⁻¹ of data last year, more than in all their previous running, and now operates with a fully software-based trigger with much higher efficiency than the previous hardware-based first-level trigger. Future results from Belle II and the upgraded LHCb are eagerly anticipated.
The 24th FPCP conference will be held from 18 to 22 May 2026 in Bad Honnef, Germany.
Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.
But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.
The view from the lab
The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.
Valued measurements
The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?
One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)
One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.
Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.
Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.
Practice makes perfect
We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, after measurement then the device is in a similarly-weighted superposition of readout values.
In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.
Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
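The arithmetic behind this example is short enough to run. The following is a minimal numerical sketch, assuming a standard 50/50 beamsplitter convention (the Hadamard-like matrix below is one common choice, not the only one): blocking either path gives 50/50 at the detectors, while leaving both open gives anything from 0% to 100% depending on the relative phase.

```python
import numpy as np

# 50/50 beamsplitter as a unitary matrix (a common convention)
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def mz_probs(phase, block=None):
    """Detection probabilities at detectors A and B of a Mach-Zehnder
    interferometer. 'block' = 0 or 1 removes that internal path, and the
    result is conditioned on the particle not being absorbed."""
    psi = B @ np.array([1.0, 0.0])                        # after first beamsplitter
    psi = np.array([psi[0], psi[1] * np.exp(1j * phase)])  # relative phase on path 1
    if block is not None:
        psi[block] = 0.0                                   # particle absorbed here
    p_survive = np.abs(psi) ** 2
    psi = B @ psi                                          # recombine at second beamsplitter
    probs = np.abs(psi) ** 2
    return probs / p_survive.sum()                         # condition on survival

# Either path blocked: 50/50, as classical reasoning predicts.
print(mz_probs(0.0, block=0))   # ~[0.5, 0.5]
print(mz_probs(0.0, block=1))   # ~[0.5, 0.5]
# Both paths open: detector A fires with probability anywhere from 1 to 0.
print(mz_probs(0.0))            # ~[1.0, 0.0]
print(mz_probs(np.pi))          # ~[0.0, 1.0]
```

The point of the sketch is that the two-path amplitudes add before being squared, so no assignment of classical path probabilities reproduces both the blocked and unblocked cases.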
Unfeasible interference
But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.
And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
Emergence and scale
Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.
On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).
Two views of quantum mechanics
Quantum phenomenon | Lab view | Decoherent view
Dynamics | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements | Cannot be treated internal to the formalism | Just one more dynamical interaction
Role of the observer | Conceptually central | Just one more physical system
But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.
Many worlds
At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.
(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)
Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)
Alternative views
Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli (see “Four ways to interpret quantum mechanics”), offers a way to do it, though in many ways it seems more a version of the lab view. The physical-collapse and hidden-variables interpretations modify the formalism, and so fall outside either category.)
Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.
Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.
The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.
Heavy-ion collisions usually have very high multiplicities due to colour flow and multiple nucleon interactions. However, when the ions are separated by more than about twice their radius, in so-called ultra-peripheral collisions (UPCs), electromagnetically induced interactions dominate. In these colour-neutral interactions the ions remain intact, and a central system with few particles is produced whose summed transverse momentum, being the Fourier transform of the distance between the ions, is typically less than 100 MeV/c.
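The 100 MeV/c scale can be checked with a rough uncertainty-principle estimate; the following is an order-of-magnitude sketch assuming a lead radius of about 7 fm, not the collaboration's calculation. Taking the transverse coherence scale of the photon emission to be the nuclear radius,

```latex
p_T \sim \frac{\hbar c}{R_{\mathrm{Pb}}}
    \approx \frac{197~\text{MeV fm}}{7~\text{fm}}
    \approx 28~\text{MeV}/c,
```

a few tens of MeV/c, comfortably below the quoted 100 MeV/c bound; larger ion separations give correspondingly smaller transverse momenta.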
In the photoproduction of vector mesons, a photon, radiated from one of the ions, fluctuates into a virtual vector meson long before it reaches the target and then interacts with one or more nucleons in the other ion. The production of ρ mesons has been measured at the LHC by ALICE in PbPb and XeXe collisions, while J/ψ mesons have been measured in PbPb collisions by ALICE, CMS and LHCb. Now, LHCb has isolated a precisely measured, high-statistics sample of di-pions with backgrounds below 1% in which several vector mesons are seen.
Figure 1 shows the invariant mass distribution of the pions; the fit to the data requires contributions from the ρ meson, continuum ππ, the ω meson and two higher-mass resonances at about 1.35 and 1.80 GeV, consistent with excited ρ mesons. The higher structure was also discernible in previous measurements by STAR and ALICE. Since its discovery in 1961, the ρ meson has proved challenging to describe because of its broad width and interference effects. More data in the di-pion channel, particularly when practically background-free almost down to the production threshold, are therefore welcome. These data may help with hadronic corrections to the prediction of muon g-2: the dip-and-bump structure at high masses seen by LHCb is qualitatively similar to that observed by BaBar in e⁺e⁻ → π⁺π⁻ annihilation (CERN Courier March/April 2025 p21). From the invariant mass spectrum, LHCb has measured the cross-sections for ρ, ω, ρ′ and ρ′′ production as a function of rapidity in photoproduction on lead nuclei.
Naively, photoproduction on a nucleus should simply scale with the number of nucleons relative to photoproduction on the proton, and can be calculated in the impulse approximation, which takes into account only the nuclear form factor and neglects all other potential nuclear effects.
However, nuclear shadowing, caused by multiple interactions as the meson passes through the nucleus, leads to a suppression (CERN Courier January/February 2025 p31). In addition, there may be further non-linear QCD effects at play.
Elastic re-scattering is usually described through a Glauber calculation that takes account of multiple elastic scatters. This is extended in the GKZ model using Gribov’s formalism to include inelastic scatters. The inset in figure 1 shows the measured differential cross-section for the ρ meson as a function of rapidity for LHCb data compared to the GKZ prediction, to a prediction for the STARlight generator, and to ALICE data at central rapidities. Additional suppression due to nuclear effects is observed above that predicted by GKZ.
The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.
Cosmological constant
In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
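The statement that w = −1 implies constant ρ follows in one step from the continuity equation of a Friedmann–Lemaître–Robertson–Walker universe (a standard result, not specific to the DESI analysis). For a fluid with equation of state p = wρ and scale factor a,

```latex
\dot{\rho} + 3H\,(1+w)\,\rho = 0
\quad\Longrightarrow\quad
\rho \propto a^{-3(1+w)},
```

so matter (w = 0) dilutes as a⁻³, radiation (w = 1/3) as a⁻⁴, and a cosmological constant (w = −1) keeps ρ exactly constant. Any measured deviation of w from −1 therefore implies a dark-energy density that changes as the universe expands.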
These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to perform detailed simultaneous spectroscopic measurements of a large number of objects in the sky, resulting in a catalogue of distances based on the shift in wavelength of each object's spectral features, or redshift. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of −1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.
The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).
The 3D map produced by DESI can be used to study the evolution of the universe, as it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB, but they also left imprints in the distribution of baryonic matter up to the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures observed today.
The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.
Among other studies, the latest data from DESI were combined with CMB observations and fitted to the ΛCDM model. This worked relatively well, although it required a lower matter-density parameter than found from CMB data alone. However, the resulting cosmological parameters give a poor match to the supernova data. Similarly, fitting the ΛCDM model using the supernova data results in poor agreement with both the DESI and CMB data, putting some strain on the ΛCDM model. Things do not improve significantly when adding some freedom to these analyses by allowing w to differ from −1.
An adaptation of the ΛCDM model that brings all three datasets into agreement requires w to evolve with redshift, and hence with time. The implications for the acceleration of the universe are shown in the “Tension with ΛCDM” figure, which plots the deceleration parameter q of the expansion as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, acceleration increases with time, i.e. as redshift approaches 0. The DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than that predicted by ΛCDM.
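For reference, the deceleration parameter used in such figures is defined in standard cosmology (not specific to the DESI analysis) as

```latex
q \equiv -\frac{\ddot{a}\,a}{\dot{a}^{2}}
  = \tfrac{1}{2}\sum_i \Omega_i\,(1+3w_i)
  \;\approx\; \tfrac{1}{2}\,\Omega_m - \Omega_\Lambda
  \quad (\text{for } w = -1),
```

so the universe accelerates (q < 0) once the dark-energy density exceeds half the matter density; a time-varying w(z) shifts when this crossover occurs, which is exactly what the fits to the DESI data probe.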
Although this model matches the data well, a theoretical explanation is difficult. In particular, the data implies that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.
Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and on which datasets are combined, but all such studies give similar results. As no 5σ discrepancy has yet been found, there is no reason to discard ΛCDM, though this could change with another two years of DESI data coming up, along with data from the European Euclid mission, the Vera C Rubin Observatory and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion over various redshift ranges.
Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to the most exciting frontiers in modern cosmology and particle physics.
Giovannini is an esteemed scholar and a leading figure in theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions of several aspects of the early universe. He is also the author of 2008’s A Primer on the Physics of the Cosmic Microwave Background, a book most cosmologists know well.
In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.
Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.
Gravitational rainbow
Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10⁻¹⁸ Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.
The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.
The second part of the book treats the signals of the concordance paradigm, including discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before entering the technical interludes of the next section. Here, Giovannini emphasises that because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter then develops conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.
Phenomenological implications are discussed in the final section, starting with the lowest-frequency branch of the spectrum. Giovannini then examines the intermediate- and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, with travelling waves transformed into standing waves. The penultimate chapter addresses the hot topic of the “quantumness” of relic gravitons, before the conclusion. The book finishes with five appendices of useful material, from notation to background topics in general relativity and cosmological perturbations.
Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background, and an invaluable resource for those interested in its unparalleled potential to explore the unknown corners of particle physics and cosmology.
The 31st Quark Matter conference took place from 6 to 12 April at Goethe University in Frankfurt, Germany. This edition of the world’s flagship conference for ultra-relativistic heavy-ion physics was the best attended in the series’ history, with more than 1000 participants.
A host of experimental measurements and theoretical calculations targeted fundamental questions in many-body QCD. These included the search for a critical point along the QCD phase diagram, the extraction of the properties of the deconfined quark–gluon plasma (QGP) medium created in heavy-ion collisions, and the search for signatures of the formation of this deconfined medium in smaller collision systems.
Probing thermalisation
New results highlighted the ability of the strong force to thermalise the out-of-equilibrium QCD matter produced during the collisions. Thermalisation can be probed by taking advantage of spatial anisotropies in the initial collision geometry which, due to the rapid onset of strong interactions at early times, result in pressure gradients across the system. These pressure gradients in turn translate into a momentum-space anisotropy of produced particles in the bulk, which can be experimentally measured by taking a Fourier transform of the azimuthal distribution of final-state particles with respect to a reference event axis.
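The decomposition described above is conventionally written as a Fourier series in the azimuthal angle φ of final-state particles, with the harmonic coefficients vₙ quantifying the momentum-space anisotropy (v₂, the "elliptic flow", is the dominant term in non-central collisions):

```latex
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n\left(\varphi - \Psi_n\right)\right],
\qquad
v_n = \left\langle \cos\!\left[n\left(\varphi - \Psi_n\right)\right] \right\rangle,
```

where Ψₙ is the nth-order reference event-plane angle and the average runs over produced particles.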
An area of active experimental and theoretical interest is to quantify the degree to which heavy quarks, such as charm and beauty, participate in this collective behaviour, which constrains the diffusion properties of the medium. The ALICE collaboration presented the first measurement of the second-order coefficient of the momentum anisotropy of charm baryons in Pb–Pb collisions, showing significant collective behaviour and suggesting that charm quarks undergo some degree of thermalisation. This collective behaviour appears to be stronger in charm baryons than in charm mesons, following similar observations for light flavour.
A host of measurements and calculations targeted fundamental questions in many-body QCD
Due to the nature of thermalisation and the long hydrodynamic phase of the medium in Pb–Pb collisions, signatures of the microscopic dynamics giving rise to the thermalisation are often washed out in bulk observables. However, local excitations of the hydrodynamic medium, caused by the propagation of a high-energy jet through the QGP, can offer a window into such dynamics. Due to coupling to the coloured medium, the jet loses energy to the QGP, which in turn re-excites the thermalised medium. These excited states quickly decay and dissipate, and the local perturbation can partially thermalise. This results in a correlated response of the medium in the direction of the propagating jet, the distribution of which allows measurement of the thermalisation properties of the medium in a more controlled manner.
In this direction, the CMS collaboration presented the first measurement of an event-wise two-point energy–energy correlator, for events containing a Z boson, in both pp and Pb–Pb collisions. The two-point correlator represents the energy-weighted cross section of the angle between particle pairs in the event and can separate out QCD effects at different scales, as these populate different regions in angular phase space. In particular, the correlated response of the medium is expected to appear at large angles in the correlator in Pb–Pb collisions.
The use of a colourless Z boson, which does not interact in the QGP, allows CMS to compare events with similar initial virtuality scales in pp and Pb–Pb collisions, without incurring biases due to energy loss in the QCD probes. The collaboration showed modifications in the two-point correlator at large angles, from pp to Pb–Pb collisions, alluding to a possible signature of the correlated response of the medium to the traversing jets. Such measurements can help guide models into capturing the relevant physical processes underpinning the diffusion of colour information in the medium.
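To make the observable concrete, an event-wise two-point energy–energy correlator can be sketched as an energy-weighted histogram of pair angular separations. This is only an illustration of the general definition, not the CMS implementation; the use of pT as the energy weight, the binning and the particle arrays are all assumptions:

```python
import numpy as np

def two_point_eec(pt, phi, eta, bins=20):
    """Illustrative event-wise two-point energy-energy correlator:
    histogram of pair angular separations dR, each pair weighted by
    pT_i * pT_j normalised to the squared scalar-pT sum of the event."""
    n = len(pt)
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)                  # wrap azimuth
    dR = np.sqrt(dphi**2 + (eta[:, None] - eta[None, :])**2)   # pair separation
    w = np.outer(pt, pt) / pt.sum()**2                         # energy weights
    iu = np.triu_indices(n, k=1)                               # distinct pairs only
    hist, edges = np.histogram(dR[iu], bins=bins, range=(0, 6), weights=w[iu])
    return hist, edges
```

Pairs at large dR would then expose the large-angle region where a correlated medium response is expected to modify the Pb–Pb correlator relative to pp.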
Looking to the future
The next edition of this conference series will take place in 2027 in Jeju, South Korea, where the new results will notably include the latest from the upgraded Run 3 detectors at the LHC and the newly commissioned sPHENIX detector at RHIC. New collision systems such as O–O at the LHC will help shed light on many properties of the QGP, including its thermalisation, by varying the lifetime of the pre-equilibrium and hydrodynamic phases in the collision evolution.
On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars, which bring together physicists, statisticians and scientists from related fields to discuss, develop and disseminate methods for statistical data analysis and machine learning.
The special symposium heard from the founder and primary organiser of the PhyStat series Louis Lyons (Imperial College London and University of Oxford), who together with Fred James and Yves Perrin initiated the movement with the “Workshop on Confidence Limits” in January 2000. According to Lyons, the series was to bring together physicists and statisticians, a philosophy that has been followed and extended throughout the 22 PhyStat workshops and conferences, as well as numerous seminars and “informal reviews”. Speakers called attention to recognition from the Royal Statistical Society’s pictorial timeline of statistics, starting with the use of averages by Hippias of Elis in 450 BC and culminating with the 2012 discovery of the Higgs boson with 5σ significance.
Lyons and Bob Cousins (UCLA) offered their views on the evolution of statistical practice in high-energy physics, starting in the 1960s bubble-chamber era, strongly influenced by the 1971 book Statistical Methods in Experimental Physics by W T Eadie et al., its 2006 second edition by symposium participant Fred James (CERN), as well as Statistics for Nuclear and Particle Physics (1985) by Louis Lyons – reportedly the most stolen book from the CERN library. Both Lyons and Cousins noted the interest of the PhyStat community not only in practical solutions to concrete problems but also in foundational questions in statistics, with the focus on frequentist methods setting high-energy physics somewhat apart from the Bayesian approach more widely used in astrophysics.
Giving his view of the PhyStat era, ATLAS physicist and director of the University of Wisconsin Data Science Institute Kyle Cranmer emphasised the enormous impact that PhyStat has had on the field, noting important milestones such as the ability to publish full likelihood models through the statistical package RooStats, the treatment of systematic uncertainties with profile-likelihood ratio analyses, methods for combining analyses, and the reuse of published analyses to place constraints on new physics models. Regarding the next 25 years, Cranmer predicted the increasing use of methods that have emerged from PhyStat, such as simulation-based inference, and pointed out that artificial intelligence (the elephant in the room) could drastically alter how we use statistics.
Statistician Mikael Kuusela (CMU) noted that PhyStat workshops have provided important two-way communication between the physics and statistics communities, citing simulation-based inference as an example where many key ideas were first developed in physics and later adopted by statisticians. In his view, the use of statistics in particle physics has emerged as “phystatistics”, a proper subfield with distinct problems and methods.
Another important feature of the PhyStat movement has been to encourage active participation and leadership by younger members of the community. With its 25th anniversary, the torch is now passed from Louis Lyons to Olaf Behnke (DESY), Lydia Brenner (NIKHEF) and a younger team, who will guide PhyStat into the next 25 years and beyond.
Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.
Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated in B-, D- and K-meson decays as well as in the modelling of collider events.
Comprehensive searches
Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.
The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.
Several fields promise qualitative progress in understanding nature in the coming years
The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or the nature of the Yukawa couplings of the BEH field to quarks and leptons that produce their bizarre hierarchy of masses. Gravity is also not a component of the Standard Model, and a unified theory escapes us.
At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and 2 results with beyond-expectation accuracies on Higgs-boson properties and electroweak precision measurements. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. The LHCb collaboration presented a new milestone in flavour physics at Moriond 2025: the first observation of CP violation in baryon decays. Its rebuilt Run 3 detector, with triggerless readout and a full software trigger, reported its first results at this conference.
Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.
In the Standard Model (SM), W and Z bosons acquire mass and longitudinal polarisation through electroweak (EW) symmetry breaking, where the Brout–Englert–Higgs mechanism transforms Goldstone bosons into their longitudinal components. One of the most powerful ways to probe this mechanism is through vector-boson scattering (VBS), a rare process represented in figure 1, where two vector bosons scatter off each other. At high (TeV-scale) energies, interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. Without the Higgs boson’s couplings to these polarisation states, their interaction rates would grow uncontrollably with energy, eventually violating unitarity, indicating a complete breakdown of the SM.
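The unitarity argument can be made concrete. Schematically (up to O(1) factors and angular dependence), the gauge-only tree-level amplitude for longitudinal W scattering grows with the centre-of-mass energy squared, s, while including Higgs-boson exchange cancels the growth and leaves a constant set by the Higgs mass:

```latex
\mathcal{A}_{\text{gauge}}\!\left(W_L W_L \to W_L W_L\right)
\;\xrightarrow{\;s \gg m_W^2\;}\; \frac{s}{v^2},
\qquad
\mathcal{A}_{\text{gauge}} + \mathcal{A}_{\text{Higgs}}
\;\sim\; -\frac{m_H^2}{v^2},
```

with v = 246 GeV the vacuum expectation value. Without the Higgs contribution, the amplitude would saturate the unitarity bound at roughly the TeV scale.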
Measuring the polarisation of same electric charge (same sign) W-boson pairs in VBS directly tests the predicted EW interactions at high energies through precision measurements. Furthermore, beyond-the-SM scenarios predict modifications to VBS, some affecting specific polarisation states, rendering such measurements valuable avenues for uncovering new physics.
Using the full proton–proton collision dataset from LHC Run 2 (2015–2018, 140 fb⁻¹ at 13 TeV), the ATLAS collaboration recently published the first evidence for longitudinally polarised W bosons in the electroweak production of same-sign W-boson pairs in final states including two same-sign leptons (electrons or muons) and missing transverse momentum, along with two jets (EW W±W±jj). This process is categorised by the polarisation states of the W bosons: fully longitudinal (WL±WL±jj), mixed (WL±WT±jj), and fully transverse (WT±WT±jj). Measuring the polarisation states is particularly challenging due to the rarity of the VBS events, the presence of two undetected neutrinos, and the absence of a single kinematic variable that efficiently distinguishes between polarisation states. To overcome this, deep neural networks (DNNs) were trained to exploit the complex correlations between event kinematic variables that characterise different polarisations. This approach enabled the separation of the fully longitudinal WL±WL±jj from the combined WT±W±jj (WL±WT±jj plus WT±WT±jj) processes as well as the combined WL±W±jj (WL±WL±jj plus WL±WT±jj) from the purely transverse WT±WT±jj contribution.
To measure the production of WL±WL±jj and WL±W±jj processes, a first DNN (inclusive DNN) was trained to distinguish EW W±W±jj events from background processes. Variables such as the invariant mass of the two highest-energy jets provide strong discrimination for this classification. In addition, two independent DNNs (signal DNNs) were trained to extract polarisation information, separating either WL±WL±jj from WT±W±jj or WL±W±jj from WT±WT±jj, respectively. Angular variables, such as the azimuthal angle difference between the leading leptons and the pseudorapidity difference between the leading and subleading jets, are particularly sensitive to the scattering angles of the W bosons, enhancing the separation power of the signal DNNs. Each DNN is trained using up to 20 kinematic variables, leveraging correlations among them to improve sensitivity.
The signal DNN distributions, within each inclusive DNN region, were used to extract the WL±WL±jj and WL±W±jj polarisation fractions through two independent maximum-likelihood fits. The excellent separation between the WL±W±jj and WT±WT±jj processes can be seen in figure 2 for the WL±W±jj fit, with better separation achieved at higher values of the signal DNN score, shown on the x-axis. An observed (expected) significance of 3.3 (4.0) standard deviations was obtained for WL±W±jj, providing the first evidence of same-sign WW production with at least one of the W bosons longitudinally polarised. No significant excess of events consistent with WL±WL±jj production was observed, leading to the most stringent 95% confidence-level upper limits to date on the WL±WL±jj cross section: 0.45 (0.70) fb observed (expected).
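The fit strategy can be illustrated with a toy binned maximum-likelihood extraction of a longitudinal signal strength from templates in a signal-DNN score distribution. This is a minimal sketch, not the ATLAS analysis: the per-bin yields below are invented for illustration, and a simple grid scan stands in for a full minimiser:

```python
import numpy as np

# Hypothetical per-bin yields in a signal-DNN score distribution
# (invented numbers, for illustration only).
sig_template = np.array([2., 4., 8., 16.])    # longitudinal-enriched template
bkg_template = np.array([30., 20., 10., 5.])  # transverse + background template
observed     = np.array([31., 22., 14., 10.]) # pseudo-data

def nll(mu):
    """Binned Poisson negative log-likelihood (constant terms dropped)
    for a signal-strength parameter mu scaling the signal template."""
    expected = mu * sig_template + bkg_template
    return np.sum(expected - observed * np.log(expected))

# Grid scan over mu in place of a full minimiser.
mu_grid = np.linspace(0.0, 2.0, 2001)
mu_hat = mu_grid[np.argmin([nll(m) for m in mu_grid])]
```

The fitted signal strength, multiplied by the template normalisation, would give the extracted polarisation fraction; in the real analysis this is done simultaneously across inclusive-DNN regions with systematic uncertainties as nuisance parameters.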
There is still much to understand about the electroweak sector of the Standard Model, and the measurement presented in this article remains limited by the size of the available data sample. The techniques developed in this analysis open new avenues for studying W- and Z-boson polarisation in VBS processes during the LHC Run 3 and beyond.
In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.
In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.
Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.
A much-needed update that captures the rapid developments of the past three decades
By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well calibrated introduction while still encouraging further study through more specialised texts.
Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.
Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.
Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.
Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.