In 1974, Kenneth G Wilson suggested modelling the continuous spacetime of quantum chromodynamics (QCD) with a discrete lattice – space and time would be represented as a grid of points, with quarks on the lattice points and gluons on the links between them. Lattice QCD has only grown in importance since, with international symposia on lattice field theory taking place annually since 1984. The conference has since developed into an important forum for established experts and early-career researchers alike to report recent progress, and the published proceedings provide a valuable resource. The 41st symposium, Lattice 2024, welcomed 500 participants to the University of Liverpool from 28 July to 3 August.
Hadronic contributions
One of the highest profile topics in lattice QCD is the evaluation of hadronic contributions to the magnetic moment of the muon. For many years, the experimental measurements from Brookhaven and Fermilab have appeared to be in tension with the Standard Model (SM), based on theoretical predictions that rely on data from e+e– annihilation to hadrons. Intense work on the lattice by multiple groups is now maturing rapidly and providing a valuable cross-check for data-driven SM calculations.
At the lowest order in quantum electrodynamics, the Dirac equation fixes the muon’s g-factor at exactly two (g = 2) – a contribution arising purely from the muon interacting with a single real external photon representing the magnetic field. At higher orders in QED, virtual Standard Model particles modify that value, leading to a so-called anomalous magnetic moment, g–2. The Schwinger term adds a virtual photon, which contributes about 0.0023 to g – a shift of roughly 0.1% in the magnetic moment. Individual virtual W, Z or Higgs bosons add a well-defined contribution a factor of a million or so smaller. The remaining relevant contributions are from hadronic vacuum polarisation (HVP) and hadronic light-by-light (HLBL) scattering. Both add hadronic contributions, integrated to all orders in the strong coupling constant, to interactions between the muon and the external magnetic field that also feature additional virtual photons. Though their contributions to g–2 are in the ballpark of the small electroweak contribution, they are more difficult to calculate, and they dominate the error budget for the SM prediction of the muon’s g–2.
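For reference, the Schwinger term is the celebrated one-loop QED result:

```latex
a_\mu \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} \approx 1.16\times 10^{-3},
\qquad g \approx 2.0023 .
```

Every further correction, the hadronic ones included, must be controlled well below this level to match the experimental precision on the anomaly.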
Christine Davies (University of Glasgow) gave a comprehensive survey of muon g–2 that stressed several high-level points: the small HLBL contribution looks to be settled, and is unlikely to be a key piece of the puzzle; recent tensions among the e+e– experiments for HVP have emerged and need to be better understood; and in the most contentious region, all eight recent lattice-QCD calculations agree with each other and with the very recent e+e– → hadrons experiment CMD-3 (2024 Phys. Rev. Lett. 132 231903), though not so much with earlier experiments. Thus, lattice QCD and CMD-3 suggest there is “almost certainly less new physics in muon g–2 than previously hoped, and perhaps none,” said Davies. We shall see: many groups are preparing results for the full HVP, targeting a new whitepaper from the Muon g–2 Theory Initiative by the end of this year, in anticipation of the final measurement from the Fermilab experiment sometime in 2025.
New directions
While the main focus of lattice calculations is the study of QCD, lattice methods have also been applied further afield. There is a small but active community investigating systems that could be relevant to physics beyond the Standard Model, including composite Higgs models, supersymmetry and dark matter. These studies often inspire formal “theoretical” developments that are of interest beyond the lattice community. Particularly exciting directions this year were developments on emergent phases and non-invertible symmetries, and their possible application to formulating chiral gauge theories, one of the outstanding theoretical issues in lattice gauge theories.
The lattice QCD community is one of the main users of high-performance computing resources, with its simulation efforts generating petabytes of Monte Carlo data. For more than 20 years, a community-wide effort, the International Lattice Data Grid (ILDG), has allowed this data to be shared. Since its inception, ILDG has implemented the FAIR principles – data should be findable, accessible, interoperable and reusable – almost in full, and the community is now discussing open science more broadly. Ed Bennett (Swansea) led a panel discussion that explored the benefits of ILDG embracing open science, such as higher credibility for published results and, not least, the means to fulfil the expectations of funding bodies. Sustainably maintaining the infrastructure and employing the necessary personnel will require national or even international community efforts, both to convince funding agencies to provide corresponding funding lines and to convince researchers of the benefits of open science.
The Kenneth G. Wilson Award for Excellence in Lattice Field Theory was awarded to Michael Wagman (Fermilab) for his lattice-QCD studies of noise reduction in nuclear systems, the structure of nuclei and transverse-momentum-dependent hadronic structure functions. Fifty years on from Wilson’s seminal paper, two of the field’s earliest contributors, John Kogut (US Department of Energy) and Jan Smit (University of Amsterdam), reminisced about the birth of the lattice in a special session chaired by Liverpool pioneer Chris Michael. Both speakers gave fascinating insights into a time when physics was extracted from a handful of small-volume gauge configurations, compared to hundreds of thousands today.
Lattice 2025 will take place at the Tata Institute of Fundamental Research in Mumbai, India, from 3 to 8 November 2025.
In the famous double-slit experiment, an interference pattern consisting of dark and bright bands emerges when a beam of light hits two narrow slits. The same effect has also been seen with particles such as electrons and protons, demonstrating the wave nature of propagating particles in quantum mechanics. Typically, experiments of this type produce interference patterns at the nanometre scale. In a recent study, the ALICE collaboration measured a similar interference pattern at the femtometre scale using ultra-peripheral collisions between lead nuclei at the LHC.
In ultra-peripheral collisions, two nuclei pass close to each other without colliding. With their impact parameter larger than the sum of their radii, one nucleus emits a photon that transforms into a virtual quark–antiquark pair. This pair interacts strongly with the other nucleus via the exchange of two gluons, resulting in the emission of a vector meson. Such vector-meson photoproduction is a well-established tool for probing the internal structure of colliding nuclei.
In vector-meson photoproduction involving symmetric systems, such as two lead nuclei, it is not possible to determine which of the nuclei emitted the photon and which emitted the two gluons. Crucially, however, due to the short range of the strong force between the virtual quark–antiquark pair and the nucleus, the vector mesons must have been produced within or close to one of the two well-separated nuclei. Because of this, and because of their relatively short lifetime, the vector mesons also decay close to one or other nucleus. Their decay products form a quantum-mechanically entangled state and generate an interference pattern akin to that of a double-slit interferometer, with the two nuclei playing the role of the slits.
In the photoproduction of the electrically neutral ρ0 vector meson, the interference pattern takes the form of a cos(2φ) modulation of the ρ0 yield, where φ is the angle between the two vectors formed by the sum and difference of the transverse momenta of the two oppositely charged pions into which the ρ0 decays. The strength of the modulation is expected to increase as the impact parameter decreases.
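Written out explicitly (with symbols introduced here purely for illustration), if pT,1 and pT,2 are the transverse momenta of the two pions, the angle and the modulation take the form

```latex
\vec{P}_T = \vec{p}_{T,1} + \vec{p}_{T,2}, \qquad
\vec{Q}_T = \vec{p}_{T,1} - \vec{p}_{T,2}, \qquad
\cos\varphi = \frac{\vec{P}_T \cdot \vec{Q}_T}{|\vec{P}_T|\,|\vec{Q}_T|}, \qquad
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + a_2 \cos(2\varphi),
```

where the amplitude a2 quantifies the strength of the modulation.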
Using a dataset of 57,000 ρ0 mesons produced in lead–lead collisions at an energy of 5.02 TeV per nucleon pair during Run 2 of the LHC, the ALICE team measured the cos(2φ) modulation of the ρ0 yield for different values of the impact parameter. The measurements showed that the strength of the modulation varies strongly with the impact parameter. Theoretical calculations indicate that this behaviour is indeed the result of a quantum interference effect at the femtometre scale.
In the ongoing Run 3 of the LHC and in the next run, Run 4, ALICE is expected to collect more than 15 million ρ0 mesons from lead–lead collisions. This enhanced dataset will allow a more detailed analysis of the interference effect, further testing the validity of quantum mechanics at femtometre scales.
In high-energy hadronic and heavy-ion collisions, strange quarks are dominantly produced from gluon fusion; in contrast to u and d quarks, they are not present in the colliding particles. Since strangeness is a conserved quantity in QCD, strange and anti-strange quarks must be produced in equal numbers, making strange hadrons a prime observable for studying the dynamics of these collisions. Various experimental results from high-multiplicity pp collisions at the LHC demonstrate striking similarities to Pb–Pb collision results. Notably, the fraction of hadrons carrying one or more strange quarks smoothly increases as a function of particle multiplicity in pp and p–Pb collisions to values consistent with those measured in peripheral Pb–Pb collisions. Multi-particle correlations in pp collisions also closely resemble those in Pb–Pb collisions.
Explaining such observations requires understanding the hadronisation mechanism, which governs how quarks and gluons rearrange into bound states (hadrons). Since no first-principles calculations of the hadronisation process are available, phenomenological models are used, based either on Lund string fragmentation (Pythia 8, HIJING) or on a statistical approach that assumes a gas of hadrons and their resonances (HRG) in thermal and chemical equilibrium. Despite their vastly different approaches, both classes of model successfully describe the enhanced production of strange hadrons. This similarity calls for new observables to decisively discriminate between the two approaches.
In a recently published study, the ALICE collaboration measured correlations between particles arising from the conservation of quantum numbers to further distinguish the two models. In the string fragmentation model, the quantum numbers are conserved locally through the creation of quark–antiquark pairs from the breaking of colour strings. This leads to a short-range rapidity correlation between strange and anti-strange hadrons. On the other hand, in the statistical hadronisation approach, quantum numbers are conserved globally over a finite volume, leading to long-range correlations between both strange–strange and strange–anti-strange hadron pairs. Quantum-number conservation leads to correlated particle production that is probed by measuring the yields of charged kaons (with one strange quark) and multistrange baryons (Ξ– and Ξ+) on an event-by-event basis. In ALICE, charged kaons are directly tracked in the detectors, while Ξ baryons are reconstructed via their weak decay to a charged pion and a Λ-baryon, which is itself identified via its weak decay into a proton and a charged pion.
Figure 1 shows the first measurement of the correlation between the “net number” of Ξ baryons and kaons, as a function of the charged-particle multiplicity at midrapidity in pp, p–Pb and Pb–Pb collisions, where the net number is the difference between particle and antiparticle multiplicities. The experimental results deviate from the uncorrelated baseline (dashed line), and string fragmentation models that mainly correlate strange hadrons with opposite strange quark content over a small rapidity range fail to describe both observables. At the same time, the measurements agree with the statistical hadronisation model description that includes opposite-sign and same-sign strangeness correlations over large rapidity intervals. The data indicate a weaker opposite-sign strangeness correlation than that predicted by string fragmentation, suggesting that the correlation volume for strangeness conservation extends to about three units of rapidity.
The present study will be extended using data recently collected during LHC Run 3. The larger data samples will enable similar measurements for the triply strange Ω baryon, as well as studies of higher cumulants.
Immediately after the Big Bang, all the particles we know about today were massless and moving at the speed of light. About a picosecond (10–12 s) later, the scalar Higgs field spontaneously broke the symmetry of the electroweak force, separating it into the electromagnetic and weak forces, and giving mass to fundamental particles. Without this process, the universe as we know it would not exist.
Since the Higgs boson’s discovery in 2012, measurements of its properties have refined our understanding of the particle and of the field associated with it, but it remains unknown how closely the field’s energy potential resembles the predicted Mexican-hat shape. Studying the Higgs potential can provide insights into the dynamics of the early universe and into the stability of the vacuum against possible future changes.
The Higgs boson’s self-coupling strength λ governs the cubic and quartic terms in the equation describing the potential. It can be probed using the pair production of Higgs bosons (HH), though this is experimentally challenging: the process is more than 1000 times less likely than the production of a single Higgs boson, partly due to destructive interference between the two leading-order diagrams in the dominant gluon–gluon fusion production mode.
The ATLAS collaboration recently compiled a series of results targeting HH decays to bbγγ, bbττ, bbbb, bbll plus missing transverse energy (ETmiss), and multilepton final states. Each analysis uses the full LHC Run 2 data set. A key parameter is the HH signal strength, μHH, the ratio of the measured HH production rate to the Standard Model (SM) prediction. This combination yields the strongest expected constraints to date on μHH, and an observed upper limit of 2.9 times the SM prediction (figure 1). The combination also sets the most stringent constraints to date on the strength of the Higgs boson’s self-coupling: –1.2 < κλ < 7.2, where κλ = λ/λSM is its value relative to the SM prediction.
Each analysis contributes in a complementary way to the global picture of HH interactions and faces its own set of unique challenges.
Despite its tiny branching fraction of just 0.26% of all HH decays, HH → bbγγ provides very good sensitivity to μHH thanks to the ATLAS detector’s excellent di-photon mass resolution. It also sets the best constraints on λ due to its sensitivity to HH events with low invariant mass.
The HH → bbττ analysis (7.3% of HH decays) exploits state-of-the-art hadronic-tau identification to control the complex mix of electroweak, multijet and top-quark backgrounds. It yields the strongest limits on μHH and the second-tightest constraints on λ.
HH → bbbb (34%) has good sensitivity to μHH thanks to ATLAS’s excellent b-jet identification, but controlling the multijet background presents a formidable challenge, which is tackled in a fully data-driven fashion.
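The quoted channel fractions follow directly from the single-Higgs branching ratios; as a rough cross-check, using approximate SM values BR(H → bb) ≈ 0.58, BR(H → ττ) ≈ 0.063 and BR(H → γγ) ≈ 0.0023,

```latex
\mathcal{B}(HH\to b\bar b\,\gamma\gamma) \approx 2\times 0.58\times 0.0023 \approx 0.27\%, \qquad
\mathcal{B}(HH\to b\bar b\,\tau\tau) \approx 2\times 0.58\times 0.063 \approx 7.3\%, \qquad
\mathcal{B}(HH\to b\bar b\,b\bar b) \approx 0.58^{2} \approx 34\%,
```

the factor of two accounting for the two ways of assigning the decays to the two Higgs bosons.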
The decays HH → bbWW and HH → bbττ in fully leptonic final states have very similar characteristics and are thus targeted in a single HH → bbll+ETmiss analysis. Contributions from the bbZZ decay mode, where one Z decays to charged light leptons and the other to neutrinos, are also considered.
Finally, the HH → multilepton analysis is designed to catch decay modes where the HH system cannot be fully reconstructed due to ambiguity in how the decay products should be assigned to the two Higgs bosons. The analysis uses nine signal regions with different multiplicities of light charged leptons, hadronic taus and photons. It is complementary to all the exclusive channels discussed above.
For the ongoing LHC Run 3, ATLAS designed new triggers to enhance sensitivity to the hadronic HH → bbττ and HH → bbbb channels. Improved b-jet identification algorithms will increase the efficiency in selecting HH signals and distinguishing them from background processes. With these and other improvements, our prospects have never looked brighter for homing in on the Higgs self-coupling.
At the International Conference on High-Energy Physics in Prague in July, the LHCb collaboration presented an updated measurement of the weak mixing angle using data collected by the experiment between 2016 and 2018. The measurement benefits from the unique forward coverage of the LHCb detector.
The success of electroweak theory in describing a wide range of measurements at different experiments is one of the crowning achievements of the Standard Model (SM) of particle physics. It explains electroweak phenomena using a small number of free parameters, allowing precise measurements of different quantities to be compared to each other. This facilitates powerful indirect searches for beyond-the-SM physics. Discrepancies between measurements might imply that new physics influences one process but not another, and global analyses of high-precision electroweak measurements are sensitive to the presence of new particles at multi-TeV scales. In 2022 the entire field was excited by a measurement by the CDF collaboration of a W-boson mass significantly larger than the value predicted within these global analyses, heightening interest in electroweak measurements.
The weak mixing angle is at the centre of electroweak physics. It describes the mixing of the U(1) and SU(2) fields, determines couplings of the Z boson, and can also be directly related to the ratio of the W and Z boson masses. Excitingly, the two most precise measurements to date, from LEP and SLD, are in significant tension. This raises the prospect of non-SM particles potentially influencing one of these measurements, since the weak mixing angle, as a fundamental parameter of nature, should otherwise be the same no matter how it is measured. There is therefore a major programme measuring the weak mixing angle at hadron colliders, with important contributions from CDF, D0, ATLAS, CMS and LHCb.
Since the weak mixing angle controls Z-boson couplings, it can be determined from measurements of the angular distributions of Z-boson decays. The LHCb collaboration measured around 860,000 Z-boson decays to two oppositely charged muons, determining the relative rate at which negatively charged muons are produced closer to the LHC beamline than positively charged muons as a function of the angular separation of the two muons. Corrections are then applied for detector effects. Comparison to theoretical predictions based on different values of the weak mixing angle allows the value best describing the data to be determined (figure 1).
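Schematically, the quantity extracted in each bin is a forward–backward asymmetry of the form

```latex
A_{FB} = \frac{N_F - N_B}{N_F + N_B},
```

where NF (NB) counts events in which the negative muon is produced closer to (further from) the beam direction than the positive muon; the exact definition used by LHCb is framed in terms of the dimuon decay angle, so this expression is illustrative only. Near the Z peak the asymmetry is small, and its size reflects the admixture of vector and axial-vector couplings, and hence the weak mixing angle.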
The unique angular coverage of the LHCb detector is well-suited for this measurement for two key reasons. First, the statistical sensitivity to the weak mixing angle is largest in the forward region close to the beamline that the LHCb detector covers. Second, the leading systematic uncertainties in measurements of the weak mixing angle at hadron colliders typically arise from existing knowledge of the proton’s internal structure. These uncertainties are also smallest in the forward region.
The value of the weak mixing angle measured by LHCb is consistent with previous measurements and with SM expectations (see “Weak mixing angle” figure). Notably, the precision of the LHCb measurement remains limited by the size of the data sample collected, such that further improvements are expected with the data currently being collected using the upgraded LHCb detector. In addition, while other experiments profile effects associated with the proton’s internal structure to reduce uncertainties, the unique forward acceptance means that this is not yet necessary at LHCb. This advantage will also be important for future measurements: the small theoretical uncertainty means that the forthcoming Upgrade 2 of the LHCb experiment is expected to achieve a precision more than a factor of two better than the most precise measurements to date.
This textbook for advanced undergraduate and graduate students, written by the experimental particle physicist Pascal Paganini of the École Polytechnique, aims to teach Standard Model calculations of quantities that are relevant for modern experimental research. Each chapter ends with a collection of unsolved problems to help the student practise the calculations discussed. The level is similar to that of the well-known textbook Quarks and Leptons by F Halzen and A D Martin (Wiley, 1984), but with a broader introduction and more up-to-date material. The notation is also similar, and shared with several other popular textbooks at the same level, making it easy for students to use the book alongside other resources.
Comprehensive
Fundamentals of Particle Physics starts with a general introduction that is around 50 pages long and includes information on detectors and statistics. It continues with a recap of relativistic kinematics, the quantum mechanics of angular momentum and spin, phase-space calculations for cross sections and decays, as well as symmetries. The main part of the book begins with a discussion of relativistic quantum mechanics, covering the equations of motion of spin 0, 1 and ½ particles along with a detailed description of Dirac spinors and their properties. It then addresses quantum electrodynamics (QED), including the QED Lagrangian, standard QED cross-section calculations and a section dedicated to magnetic moments (g–2). About 100 pages are devoted to hadronic physics: deep inelastic scattering, the parton model, parton-distribution functions and quantum chromodynamics (QCD). Calculations in perturbative QCD are discussed in some detail, and there is also an accessible section on non-perturbative QCD that can serve as a very nice introduction for beginning graduate students.
The book continues with weak interactions, covering the Fermi theory, W-boson exchange, the CKM matrix, neutrinos, neutrino mixing and CP violation. The following chapter presents the electroweak theory and introduces gauge-boson interactions. A dedicated chapter is reserved for the Higgs boson. This includes a nice section about the discovery of the particle and the measurements performed at the LHC, as well as some comments about the pre-history (LEP and Tevatron) and the future (HL-LHC and FCC). A clear discussion of naturalness and several other conceptual issues offers a light and useful read for students of any level. The final chapter goes through the Standard Model as a whole, including a very useful evaluation of its successes and weaknesses. In terms of beyond-Standard-Model physics, only dark matter and neutrino masses are covered.
Although this is not a quantum field-theory textbook, some of its elements are introduced – in particular second quantisation, the S-matrix, Dyson’s expansion and a few words about renormalisation. These are very useful in bridging the gap between practical calculations and their theoretical background, and also serve as a quick reference.
There are several useful appendices, most notably a 30-page introduction to group theory that can serve as a guide for a short standalone course in the subject or as a quick reference. The book also includes elements of the Lagrangian formalism, which could have been expanded a little to include a more detailed presentation of Noether’s theorem, perhaps in an additional appendix.
Overall the book achieves a good balance between calculations and more conceptual discussions. All students in the field can benefit from the sections on the Higgs-boson discovery and the Standard Model. Being concise, Fundamentals of Particle Physics can easily be used as a primary or secondary textbook for a particle-physics course that introduces students to Standard Model calculations using Feynman diagrams.
The Standard Model – an inconspicuous name for one of the great human inventions. It describes all known elementary particles and their interactions, except for gravity. About 19 free parameters tune its behaviour. To the best of our knowledge, they could in principle take any value, and no underlying theory yet conceived can predict their values. They include particle masses, interaction strengths, important technical numbers such as mixing angles and phases, and the vacuum strength of the Higgs field, which theorists believe has, alone among fundamental fields, permeated every cubic attometre of the universe since almost the beginning of time. Measuring these parameters is the most fundamental experimental task available to modern science.
The basic constituents of matter interact through forces which are mediated by virtual particles that ping back and forth, delivering momentum and quantum numbers. The gluon mediates the strong interaction, the photon mediates the electromagnetic interaction, and the W and Z bosons mediate the weak interaction. Although the electromagnetic and weak forces operate very differently to each other in everyday life, in the Standard Model they are two manifestations of the broken electroweak interaction – an interaction that broke when the Higgs field switched on throughout the universe, giving mass to matter particles, the W and Z bosons, and the Higgs boson itself, via the Brout–Englert–Higgs (BEH) mechanism. The electroweak theory has been extraordinarily successful in describing experimental results, but it remains mysterious – and the BEH mechanism is the origin of some of those free parameters. The best way to test the electroweak model is to over-constrain its free parameters using precision measurements and try to find a breaking point.
Ever since the late 1960s, when Steven Weinberg, Sheldon Glashow and Abdus Salam unified the electromagnetic and weak forces using the BEH mechanism, CERN has had an intimate experimental relationship with the electroweak theory. In 1973 the Z boson was indirectly discovered by observing “neutral current” events in the Gargamelle bubble chamber, using a neutrino beam from the Proton Synchrotron. The W boson was discovered in 1983 at the Super Proton Synchrotron collider, followed by the direct observation of the Z boson in the same machine soon after. The 1990s witnessed a decade of exquisite electroweak precision measurements at the Large Electron Positron (LEP) collider at CERN and the Stanford Linear Collider (SLC) at SLAC National Accelerator Laboratory in the US, before the crown jewel of the electroweak sector, the Higgs boson, was discovered by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC) in 2012 – a remarkable success that delivered the last to be observed, and arguably most mysterious, missing piece of the Standard Model.
What was not expected was that the ATLAS, CMS and LHCb experiments at the LHC would go on to make electroweak measurements that rival in precision those made at lepton colliders.
Discovery or precision?
Studying the electroweak interaction requires a supply of W and Z bosons. For that, you need a collider. Electrons and positrons are ideally suited for the task as they interact exclusively via the electroweak interaction. By precisely tuning the energy of electron–positron collisions, experiments at LEP and the SLC tested the electroweak sector with an unprecedented 0.1% accuracy at the energy scale of the Z-boson mass (mZ).
Hadron colliders like the LHC have different strengths and weaknesses. Equipped to copiously produce all known Standard Model particles – and perhaps also hypothetical new ones – they are the ultimate instruments for probing the high-energy frontier of our understanding of the microscopic world. The protons they collide are not elementary, but a haze of constituent quarks and gluons that bubble and fizz with quantum fluctuations. Each constituent “parton” carries an unpredictable fraction of the proton’s energy. This injects unavoidable uncertainty into studies of hadron collisions that physicists attempt to encode in probabilistic parton distribution functions. What’s more, when a pair of partons from the two opposing protons interact in an interesting way, the result is overlaid by numerous background particles originating from the remaining partons that were untouched by the original collision – a complexity that is exacerbated by the difficult-to-model strong force which governs the behaviour of quarks and gluons. As a result, hadron colliders have a reputation for being discovery machines with limited precision.
The LHC has collided protons at the energy frontier since 2010, delivering far more collisions than comparable previous machines such as the Tevatron at Fermilab in the US. This has enabled a comprehensive search and measurement programme. Following the discovery of the Higgs boson in 2012, measurements have so far verified its place in the electroweak sector of the Standard Model, although the relative precisions of many measurements are currently far lower than those achieved for the W and Z bosons at LEP. But in defiance of expectations, the capabilities of the LHC experiments and the ingenuity of analysts have also enabled many of the world’s most precise measurements of the electroweak interaction. Here, we highlight five.
1. Producing W and Z bosons
When two streams of objects meet, how many strike each other depends on their cross-sectional area. Though quarks and other partons are thought to be fundamental objects with zero extent, particle physicists borrow this logic for particle beams, and extend it by subdividing the metaphorical cross section according to the resulting interactions. The range of processes used to study W and Z bosons at the LHC spans a remarkable eight orders of magnitude in cross section.
The most common interaction is the production of single W and Z bosons through the annihilation of a quark and an antiquark in the colliding protons. Measurements with single W and Z boson events have now reached a precision well below 1% thanks to the excellent calibration of the detector performance. They are a prodigious tool for testing and improving the modelling of the underlying process, for example using parton distribution functions.
The second most common interaction is the simultaneous production of two bosons. Measurements of “diboson” processes now routinely reach a precision better than 5%. Since the start of LHC operation, the accelerator has run at several collision energies, allowing the experiments to map diboson cross sections as a function of energy. Measurements of the cross sections for creating WW, WZ and ZZ pairs exhibit remarkable agreement with state-of-the-art Standard Model predictions (see “Diboson production” figure).
The large amount of collected data at the LHC has recently allowed us to move the frontier to the observation of extremely infrequent “triboson” processes with three W or Z bosons, or photons, produced simultaneously – the first step towards confirming the existence of the quartic self-interaction between the electroweak bosons.
2. The weak mixing angle
The Higgs potential is famously thought to resemble a Mexican hat. The Higgs field that permeates space could in principle exist with a strength corresponding to any point on its surface. Theorists believe it settled somewhere in the brim a picosecond or so after the Big Bang, breaking the perfect symmetry of the hat’s apex, where its value was zero. This switched the Higgs field on throughout the universe – and the massless gauge bosons of the unified electroweak theory mixed to form the photon and W and Z boson mass eigenstates that mediate the broken electroweak interaction today. The weak mixing angle θW is the free parameter of the Standard Model which defines that mixing.
The θW angle can be studied using a beautifully simple interaction: the annihilation of a quark and its antiquark to create an electron and a positron or a muon and an antimuon. When the pair has an invariant mass in the vicinity of mZ, there is a small preference for the negatively charged lepton to be produced in the same direction as the initial quark. This arises due to quantum interference between the Z boson’s vector and axial-vector couplings, whose relative strengths depend on θW.
The unique challenge at a proton–proton collider like the LHC is that the initial directions of the quark and the antiquark can only be inferred using our limited knowledge of parton distribution functions. These systematic uncertainties currently dominate the total uncertainty, although they can be reduced somewhat by using information on lepton pairs produced away from the Z resonance. The CMS and LHCb collaborations have recently released new measurements consistent with the Standard Model prediction with a precision comparable to that of the LEP and SLC experiments (see “Weak mixing angle” figure).
Quantum physics effects play an interesting role here. In practice, it is not possible to experimentally isolate “tree level” properties like θW, which describe the simplest interactions that can be drawn on a Feynman diagram. Measurements are in fact sensitive to the effective weak mixing angle, which includes the effect of quantum interference from higher-order diagrams.
A crucial prediction of electroweak theory is that the masses of the W and Z bosons are, at leading order, related by the electroweak mixing angle: sin2θW = 1–m2W/m2Z, where mW and mZ are the masses of the W and Z bosons. This relationship is modified by quantum loops involving the Higgs boson, the top quark and possibly new particles. Measuring the parameters of the electroweak theory precisely, therefore, allows us to test for any gaps in our understanding of nature.
Surprisingly, combining this relationship with the mZ measurement from LEP and the CMS measurement of θW also allows a competitive measurement of mW. A measurement of sin2θW with a precision of 0.0003 translates into a prediction of mW with 15 MeV precision, which is comparable to the best direct measurements.
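A leading-order propagation of uncertainties shows where the 15 MeV comes from (neglecting higher-order electroweak corrections):

```latex
m_W = m_Z\sqrt{1-\sin^2\theta_W}
\;\;\Rightarrow\;\;
\delta m_W \simeq \frac{m_Z\,\delta(\sin^2\theta_W)}{2\sqrt{1-\sin^2\theta_W}}
\approx \frac{91.19\ \mathrm{GeV}\times 0.0003}{2\sqrt{1-0.223}} \approx 15\ \mathrm{MeV}.
```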
3. The mass and width of the W boson
Precisely measuring the mass of the W boson is of paramount importance to efforts to further constrain the relationships between the parameters of the electroweak theory, and probe possible beyond-the-Standard Model contributions. Particle lifetimes also offer a sensitive test of the electroweak theory. Because of their large masses and numerous decay channels, the W and Z bosons have mean lifetimes of less than 10–24 s. Though this is an impossibly brief time interval to measure directly, Heisenberg’s uncertainty principle smudges a particle’s observed mass by a certain “width” when it is produced in a collider. This width can be measured by fitting the mass distribution of many virtual particles. It is reciprocally related to the particle’s lifetime.
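In natural units the relation is simply Γ = ħ/τ; for a width of around 2 GeV, the order of the W and Z widths,

```latex
\tau = \frac{\hbar}{\Gamma} \approx \frac{6.6\times 10^{-25}\ \mathrm{GeV\,s}}{2\ \mathrm{GeV}} \approx 3\times 10^{-25}\ \mathrm{s},
```

comfortably below the 10–24 s quoted above.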
While lepton-collider measurements of the properties of the Z boson were extensive and achieved remarkable precision, the same is not quite true for the W boson. The mass of the Z boson was measured with a precision of 0.002%, but the mass of the W boson was measured with a precision of only 0.04% – a factor 20 worse. The reason is that while single Z bosons were copiously produced at LEP and SLC, W bosons could not be produced singly, due to charge conservation. W+W– pairs were produced, though only at low rates at LEP energies.
In contrast to LEP, hadron colliders produce large quantities of single W bosons through quark–antiquark annihilation. The LHC produces more single W bosons in a minute than all the W-boson pairs produced in the entire lifetime of LEP. Even when only considering decays to electrons or muons and their respective neutrinos – the most precise measurements – the LHC experiments have recorded billions of W-boson events.
But there are obstacles to overcome. The neutrino in the final state escapes undetected. Its transverse momentum with respect to the beam direction can only be measured indirectly, by measuring all other products of the collision – a major experimental challenge in an environment with not just one, but up to 60 simultaneous proton–proton collisions. Its longitudinal momentum cannot be measured at all. And as the W bosons are not produced at rest, extensive theoretical calculations and ancillary measurements are needed to model their momenta, incurring uncertainties from parton distribution functions.
Despite these challenges, the latest measurement of the W boson’s mass by the ATLAS collaboration achieved a precision of roughly 0.02% (see “Mass and width” figure, top). The LHCb collaboration also recently produced its first measurement of the W-boson mass using W bosons produced close to the beam line with a precision at the 0.04% level, dominated for now by the size of the data sample. Owing to the complementary detector coverage of the LHCb experiment with respect to the ATLAS and CMS experiments, several uncertainties are reduced when these measurements are combined.
The Tevatron experiments CDF and D0 also made precise W-boson measurements using proton–antiproton collisions at a lower centre-of-mass energy. The single most precise mass measurement, at the 0.01% level, comes from CDF. It is in stark disagreement with the Standard Model prediction and disagrees with the combination of other measurements.
A highly anticipated measurement by the CMS collaboration may soon weigh in decisively in favour of either the CDF measurement or the Standard Model. The CMS measurement will combine innovative analysis techniques using the Z boson with a 13 TeV data set larger than the 7 TeV data used by the recent ATLAS measurement, enabling larger validation samples and thereby greater power to reduce systematic uncertainties.
Measurements of the W boson’s width are not yet sufficiently precise to constrain the Standard Model significantly, though the strongest constraint so far comes from the ATLAS collaboration (see “Mass and width” figure, bottom). Further measurements are a promising avenue to test the Standard Model. If the W boson decays into any hitherto undiscovered particles, its lifetime should be shorter than predicted, and its width greater, potentially indicating the presence of new physics.
4. Couplings of the W boson to leptons
Within the Standard Model, the W and Z bosons have equal couplings to leptons of each of the three generations – a property known as lepton flavour universality (LFU). Any experimental deviation from LFU would indicate new physics.
As with the mass and width, the precision of lepton colliders was better for the Z boson than for the W boson. LEP confirmed LFU in leptonic Z-boson decays to about 0.3%. Comparing the three branching fractions of the W boson in the electron, muon and tau-lepton decay channels, the combination of the four LEP experiments reached a precision of only about 2%.
At the LHC, the large cross section for producing top quark–antiquark pairs that both decay into a W boson and a bottom quark offers a unique sample of W-boson pairs for high-precision studies of their decays. The resulting measurements are the most precise tests of LFU for all three possible comparisons of the coupling of the lepton flavours to the W boson (see “Couplings to leptons” figure).
Regarding the tau-lepton to muon ratio, the ATLAS collaboration observed 0.992 ± 0.013 decays to a tau for every decay to a muon. This result favours LFU and is twice as precise as the corresponding LEP result of 1.066 ± 0.025, which deviates from unity by 2.6 standard deviations. Because of the relatively long tau lifetime, ATLAS was able to separate muons produced in the decay of tau leptons from those produced promptly by resolving the tau decay length, which is of the order of 2 mm.
The best tau to electron measurement is provided by a simultaneous CMS measurement of all the leptonic and hadronic decay branching fractions of the W boson. The analysis splits the top quark–antiquark pair events based on the multiplicity and flavour of reconstructed leptons, the number of jets, and the number of jets identified as originating from the hadronisation of b quarks. All CMS ratios are consistent with the LFU hypothesis and reduce tension with the Standard Model prediction.
Regarding the muon to electron ratio, measurements have been performed by several LHC and Tevatron experiments. The observed results are consistent with LFU, with the most precise measurement from the ATLAS experiment boasting a precision better than 0.5%.
5. The invisible width of the Z boson
A groundbreaking measurement at LEP deduced how often a particle that cannot be directly observed decays to particles that cannot be detected. The particle in question is the Z boson. By scanning the energy of electron–positron collisions and measuring the breadth of the “lineshape” – the smudged bump in the interaction rate around the mass of the Z – LEP physicists precisely measured its width. As previously noted, a particle’s width is reciprocally related to its lifetime and therefore proportional to its total decay rate – something that can also be measured by directly accounting for the observed rate of decays to visible particles of all types. The difference between the two numbers is due to Z-boson decays to so-called invisible particles that cannot be reconstructed in the detector. A seminal measurement concluded that exactly three species of light neutrino couple to the Z boson.
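In outline, the bookkeeping works as follows (the numbers below are rounded LEP-era values, quoted only to illustrate the logic):

```latex
\Gamma_{\mathrm{inv}} = \Gamma_Z - \Gamma_{\mathrm{had}} - 3\,\Gamma_{\ell\ell}
\approx (2495 - 1744 - 3\times 84)\ \mathrm{MeV} \approx 499\ \mathrm{MeV},
\qquad
N_\nu = \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\nu\bar\nu}^{\mathrm{SM}}}
\approx \frac{499}{167} \approx 3.0 .
```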
The LEP experiments also measured the invisible width of the Z boson using an ingenious method that searched for solitary “recoils”. Here, the trick was to look for the rare occasion when the colliding electron or positron emitted a photon just before creating a virtual Z boson that decayed invisibly. Such events would yield nothing more than a single photon recoiling from an otherwise invisible Z-boson decay.
The ATLAS and CMS collaborations recently performed similar measurements, requiring the invisibly decaying Z boson to be produced alongside a highly energetic jet in place of a recoil photon. By taking the ratio with equivalent recoil decays to electrons and muons, they achieved remarkable uncertainties of around 2%, equivalent to LEP, despite the much more challenging environment (see “Invisible width” figure). The results are consistent with the Standard Model’s three generations of light neutrinos.
Future outlook
Building on these achievements, the LHC experiments are now readying themselves for an even more ambitious experimental programme, which is yet to begin. Following the ongoing run of the LHC, a high-luminosity upgrade (HL-LHC) is scheduled to operate throughout the 2030s, delivering a total integrated luminosity of 3 ab–1 to both ATLAS and CMS. The LHCb experiment also foresees a major upgrade to collect an integrated luminosity of more than 300 fb–1 by the end of LHC operations. A tenfold data set, upgraded detectors and experimental methods, and improvements to theoretical modelling will greatly extend both experimental precision and the reach of direct and indirect searches for new physics. Unprecedented energy scales will be probed and anomalies with respect to the Standard Model may become apparent.
Despite the significant challenges posed by systematic uncertainties, there are good prospects to further improve uncertainties in precision electroweak observables such as the mass of the W boson and the effective weak mixing angle, thanks to the larger angular acceptances of the new inner tracking devices currently under production by ATLAS and CMS. A possible programme of high-precision measurements in electron–proton collisions, the LHeC, could deliver crucial input to reduce uncertainties such as from parton distribution functions. The LHeC has been proposed to run concurrently with the HL-LHC by adding an electron beam to the LHC.
Beyond the HL-LHC programme, several proposals for future particle colliders have captured the imagination of the global particle-physics community – and not least the two phases of the Future Circular Collider (FCC) being studied at CERN. With a circumference three to four times greater than that of the LEP/LHC tunnel, electron–positron collisions could be delivered with very high luminosity and centre-of-mass energies from 90 to 365 GeV in the initial FCC-ee phase. The FCC-ee would facilitate an impressive leap in the precision of most electroweak observables. Projections estimate a factor of 10 improvement for Z-boson measurements and up to 100 for W-boson measurements. For the first time, the top quark could be produced in an environment where it is not colour-connected to initial hadrons, in some cases reducing uncertainties by a factor of 10 or more.
The LHC collaborations have made remarkable strides forward in probing the electroweak theory – a theory of great beauty and consequence for the universe. But its most fundamental workings are subtle and elusive. Our exploration is only just beginning.
The simplest possible interaction in nature is when three identical particle lines, with the same quantum numbers, meet at a single vertex. The Higgs boson is the only known elementary particle that can exhibit such behaviour. More importantly, the strength of the coupling between three or even four Higgs bosons will reveal the first picture of the shape of the Brout–Englert–Higgs potential, responsible for the evolution of the universe in its first moments as well as possibly its fate.
Since the discovery of the Higgs boson at the LHC in 2012, the ATLAS and CMS collaborations have measured its properties and interactions with increasing precision. This includes its couplings to the gauge bosons and to third-generation fermions, its production cross sections, mass and width. So far, the boson appears as the Standard Model (SM) says it should. But the picture is still fuzzy, and many more measurements are needed. After all, the Higgs boson may interact with new particles suggested by theories beyond the SM that aim to shed light on mysteries such as the nature of the electroweak phase transition.
Line of attack
“The Higgs self-coupling is the next big thing since the Higgs discovery, and di-Higgs production is our main line of attack,” says Jana Schaarschmidt of ATLAS. “The experiments are making tremendous progress towards measuring Higgs-boson pair production at the LHC – far more than was imagined would be possible 12 years ago – thanks to improvements in analysis techniques and machine learning in particular.”
The dominant process for di-Higgs production at the LHC, gluon–gluon fusion, proceeds via a box or triangle diagram, the latter offering access to the trilinear Higgs coupling constant λ (see figure). Destructive interference between the two processes makes di-Higgs production extremely rare, with a cross section at the LHC about 1000 times smaller than that for single-Higgs production. Many different decay channels are available to ATLAS and CMS; those chosen have a high probability to occur and can be cleanly distinguished from backgrounds. The most sensitive channels are those with one Higgs boson decaying to a b-quark pair and the other decaying to a pair of photons, τ leptons or b quarks.
During this year’s Rencontres de Moriond, ATLAS presented new results in the HH → bbbb and HH → multileptons channels and CMS in the HH → γγττ channel. In May, ATLAS released a combination of searches for HH production in five channels using the complete LHC Run 2 dataset. The combination provides the best expected sensitivities to HH production (excluding values more than 2.4 times the SM prediction) and to the Higgs boson self-coupling. A combination of HH searches published by CMS in 2022 obtains a similar sensitivity to the di-Higgs cross-section limits. “In late 2023 we put out a preliminary result combining single-Higgs and di-Higgs analyses to constrain the Higgs self-coupling, and further work on combining all the latest analyses is ongoing,” explains Nadjieh Jafari of CMS.
Considerable improvements are expected with the LHC Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets. Based on extrapolations of early subsets of its Run 2 analyses, ATLAS expects to detect SM di-Higgs production with a significance of 3.2σ (4.6σ) with (without) systematic uncertainties by the end of the HL-LHC era. With similar progress at CMS, a di-Higgs observation is expected to be possible at the HL-LHC even with current analysis techniques, along with improved knowledge of λ. ATLAS, for example, expects to be able to constrain λ to be between 0.5 and 1.6 times the SM expectation at the level of 1σ.
Testing the foundations
Physicists are also starting to place limits on possible new-physics contributions to HH production, which can originate either from loop corrections involving new particles or from non-standard couplings between the Higgs boson and other SM particles. Several theories beyond the SM, including two-Higgs-doublet and composite-Higgs models, also predict the existence of heavy scalar particles that can decay resonantly into a pair of Higgs bosons. “Large anomalous values of λ are already excluded, and the window of possible values continues to shrink towards the SM as the sensitivity grows,” says Schaarschmidt. “Furthermore, in recent di-Higgs analyses ATLAS and CMS have been able to establish a strong constraint on the coupling between two Higgs bosons and two vector bosons.”
For Christophe Grojean of the DESY theory group, the principal interest in di-Higgs production is to test the foundations of quantum field theory: “The basic principles of the SM are telling us that the way the Higgs boson interacts with itself is mostly dictated by its expectation value (linked to the Fermi constant, i.e. the muon and neutron lifetimes) and its mass. Verifying this prediction experimentally is therefore of prime importance.”
Thanks to its 13.6 TeV collisions, the LHC directly explores distance scales as short as 5 × 10–20 m. But the energy frontier can also be probed indirectly. By studying rare decays, distance scales as small as a zeptometre (10–21 m) can be resolved, probing the existence of new particles with masses as high as 100 TeV. Such particles are out of the reach of any high-energy collider that could be built in this century.
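The quoted distances follow from the usual conversion between energy and length, ħc ≈ 197 MeV fm; as a rough order-of-magnitude check,

```latex
\lambda \sim \frac{\hbar c}{E} \approx \frac{197\ \mathrm{MeV\,fm}}{10^{8}\ \mathrm{MeV}} \approx 2\times 10^{-21}\ \mathrm{m}
\quad \text{for } E = 100\ \mathrm{TeV},
```

i.e. of order a zeptometre.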
The key concept is the quantum fluctuation. Just because a collision doesn’t have enough energy to bring a new particle into existence does not mean that a very heavy new particle cannot make its presence felt. Thanks to Heisenberg’s uncertainty principle, new particles could be virtually exchanged between the other particles involved in the collisions, modifying the probabilities for the processes we observe in our detectors. The effect of massive new particles could be unmistakable, giving physicists a powerful tool for exploring more deeply into the unknown than accelerator technology and economic considerations allow direct searches to go.
The search for new particles and forces beyond those of the Standard Model is strongly motivated by the need to explain dark matter, the huge range of particle masses from the tiny neutrino to the massive top quark, and the asymmetry between matter and antimatter that is responsible for our very existence. As direct searches at the LHC have not yet provided any clue as to what these new particles and forces might be, indirect searches are growing in importance. Studying very rare processes could allow us to see imprints of new particles and forces acting at much shorter distance scales than it is possible to explore at current and future colliders.
Anticipating the November Revolution
The charm quark is a good example. The story of its direct discovery unfolded 50 years ago, in November 1974, when a team at SLAC and an MIT-led team at Brookhaven simultaneously discovered a charm–anticharm meson in particle collisions. But four years earlier, Sheldon Glashow, John Iliopoulos and Luciano Maiani had already predicted the existence of the charm quark thanks to the surprising suppression of the neutral kaon’s decay into two muons.
Neutral kaons are made up of a strange quark and a down antiquark, or vice versa. In the Standard Model, their decay to two muons can proceed most simply through the virtual exchange of two W bosons, one virtual up quark and a virtual neutrino. The trouble was that the rate for the neutral kaon decay to two muons predicted in this manner turned out to be many orders of magnitude larger than observed experimentally.
Glashow, Iliopoulos and Maiani (GIM) proposed a simple solution. With visionary insight, they hypothesised a new quark, the charm quark, which would totally cancel the contribution of the up quark to this decay if their masses were equal to each other. As the rate was non-vanishing and the charm quark had not yet been observed experimentally, they concluded that the mass of the charm quark must be significantly larger than that of the up quark.
Their hunch was correct. In early 1974, months before its direct discovery, Mary K Gaillard and Benjamin Lee predicted the charm quark’s mass by analysing another highly suppressed quantity, the mass difference in K0–K0 mixing.
As modifications to the GIM mechanism by new heavy particles are still a hot prospect for discovering new physics in the 2020s, the details merit a closer look. Years earlier, Nicola Cabibbo had correctly guessed that weak interactions act between up quarks and a mixture (d cos θ + s sin θ) of the down and strange quarks. We now know that charm quarks interact with the mixture (–d sin θ + s cos θ). This is just a rotation of the down and strange quarks through this Cabibbo angle. The minus sign causes the destructive interference observed in the GIM mechanism.
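In matrix form (two generations only), and writing the decay amplitude schematically in terms of a loop function f that depends on the mass of the internal quark,

```latex
\begin{pmatrix} d' \\ s' \end{pmatrix} =
\begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\qquad
\mathcal{A}(K^0_L \to \mu^+\mu^-) \propto \sin\theta_C\cos\theta_C\,\bigl[f(m_c) - f(m_u)\bigr],
```

so the up- and charm-quark contributions cancel exactly for mu = mc, and the small residual rate reflects the charm quark’s larger mass.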
With the discovery of a third generation of quarks, quark mixing is now described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix – a unitary three-dimensional rotation with complex phases that parameterise CP violation. Understanding its parameters may prove central to our ability to discover new physics this decade.
On to the 1980s
The story of indirect discoveries continued in the late 1980s, when the magnitude of B0d – B0d mixing implied the existence of a heavy top quark, which was confirmed in 1995, completing the third generation of quarks. The W, Z and Higgs bosons were also predicted well in advance of their discoveries. It’s only natural to expect that indirect searches for new physics will be successful at even shorter distance scales.
Rare weak decays of kaons and B mesons that are strongly suppressed by the GIM mechanism are expected to play a crucial role. Many channels of interest are predicted by the Standard Model to have branching ratios as low as 10–11, often being further suppressed by small elements of the CKM matrix. If the GIM mechanism is violated by new-physics contributions, these branching ratios – the fraction of times a particle decays that way – could be much larger.
Measuring suppressed branching ratios with respectable precision this decade is therefore an exciting prospect. Correlations between different branching ratios can be particularly sensitive to new physics and could provide the first hints of physics beyond the Standard Model. A good example is the search for the violation of lepton-flavour universality (CERN Courier May/June 2019 p33). Though hints of departures from muon–electron universality seem to be receding, hints that muon–tau universality may be violated still remain, and the measured branching ratios for B → K(K*)µ+µ– differ visibly from Standard Model predictions.
The first step in this indirect strategy is to search for discrepancies between theoretical predictions and experimental observables. The main challenge for experimentalists is the low branching ratios for the rare decays in question. However, there are very good prospects for measuring many of these highly suppressed branching ratios in the coming years.
Six channels for the 2020s
Six channels stand out today for their superb potential to observe new physics this decade. If their decay rates defy expectations, the nature of any new physics could be identified by studying the correlations between these six decays and others.
The first two channels are kaon decays: the measurement of K+→ π+νν by the NA62 collaboration at CERN (see “Needle in a haystack” image), and the measurement of KL→ π0νν by the KOTO collaboration at J-PARC in Japan. The branching ratios for these decays are predicted to be in the ballpark of 8 × 10⁻¹¹ and 3 × 10⁻¹¹, respectively.
The second two are measurements of B → Kνν and B → K*νν by the Belle II collaboration at KEK in Japan. Branching ratios for these decays are expected to be much higher, in the ballpark of 10⁻⁵.
The final two channels, which are only accessible at the LHC, are measurements of the dimuon decays Bs→ µ+µ– and Bd→ µ+µ– by the LHCb, CMS and ATLAS collaborations. Their branching ratios are about 4 × 10⁻⁹ and 10⁻¹⁰ in the Standard Model. Though the decays B → K(K*)µ+µ– are also promising, they are less theoretically clean than these six.
The main challenge for theorists is to control quantum-chromodynamics (QCD) effects, both below 10⁻¹⁶ m, where strong interactions weaken, and in the non-perturbative region at distance scales of about 10⁻¹⁵ m, where quarks are confined in hadrons and calculations become particularly tricky. While satisfactory precision has been achieved at short-distance scales over the past three decades, the situation for non-perturbative computations is expected to improve significantly in the coming years, thanks to lattice QCD and analytic approaches such as dual QCD and chiral perturbation theory for kaon decays, and heavy-quark effective field theory for B decays.
Another challenge is that Standard Model predictions for the branching ratios require values for four CKM parameters that are not predicted by the Standard Model, and which must be measured using kaon and B-meson decays. These are the magnitudes of the up–strange (Vus) and charm–bottom (Vcb) couplings and the CP-violating phases β and γ. The current precision on measurements of Vus and β is fully satisfactory, and the error on γ = (63.8 ± 3.5)° should be reduced to 1° by LHCb and Belle II in the coming years. The stumbling block is Vcb, where measurements currently disagree. Though experimental problems have not been excluded, the tension is thought to originate in QCD calculations. While measurements of exclusive decays to specific channels yield 39.21(62) × 10⁻³, inclusive measurements integrated over final states yield 41.96(50) × 10⁻³. This discrepancy makes the predicted branching ratios differ by 16% for the four B-meson decays, and by 25% and 35% for K+→ π+νν and KL→ π0νν, respectively. These discrepancies are a disaster for the theorists who, over many years of work, had succeeded in reducing the QCD uncertainties in these decays to the level of a few per cent.
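To see why a roughly 7% shift in Vcb is so damaging, a back-of-the-envelope scaling helps. The sketch below is illustrative only: the exponents are the approximate |Vcb|², |Vcb|^2.8 and |Vcb|^4 dependences often quoted for these modes, and the exact percentages in the text also fold in other CKM parameters.

```python
# Back-of-the-envelope: how the inclusive/exclusive V_cb gap propagates
# into rare-decay branching ratios. Exponents are approximate scalings,
# not the full Standard Model dependence.
vcb_excl = 39.21e-3   # exclusive determination
vcb_incl = 41.96e-3   # inclusive determination
ratio = vcb_incl / vcb_excl          # ~1.07, i.e. a ~7% shift in V_cb

scalings = {
    "B -> K(*) nu nu and B(s,d) -> mu mu  (~|Vcb|^2)": 2.0,
    "K+ -> pi+ nu nu                      (~|Vcb|^2.8)": 2.8,
    "KL -> pi0 nu nu                      (~|Vcb|^4)": 4.0,
}
for label, n in scalings.items():
    shift = ratio**n - 1.0               # amplified relative shift
    print(f"{label}: branching ratio shifts by ~{100 * shift:.0f}%")
```

This naive scaling gives shifts of roughly 15%, 21% and 31% – the same ballpark as the 16%, 25% and 35% quoted above – showing how a modest Vcb tension is amplified in the predictions.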
One solution is to replace the CKM dependence of the branching ratios with observables where QCD uncertainties are under good control, for example: the mass differences in B0s–B̄0s and B0d–B̄0d mixing (∆Ms and ∆Md); a parameter that measures CP violation in K0–K̄0 mixing (εK); and the CP asymmetry that yields the angle β. Fitting these observables to the experimental data removes the need to choose between inclusive and exclusive values for the charm–bottom coupling, and avoids the 3.5° uncertainty on γ, which in this strategy is reduced to 1.6°. The uncertainty on the predicted branching ratios is thereby reduced to 6% and 9% for B → Kνν and B → K*νν, to 5% for the two kaon decays, and to 4% for Bs→ µ+µ– and Bd→ µ+µ–.
So what is the current experimental situation for the six channels? The latest NA62 measurement of K+→ π+νν is 25% larger than the Standard Model prediction. Its 36% uncertainty signals full compatibility at present, and precludes any conclusions about the size of new physics contributing to this decay. Next year, when the full analysis has been completed, such conclusions may become possible. It is unfortunate that the HIKE proposal was not adopted (CERN Courier May/June 2024 p7), as NA62’s expected precision of 15% could have been reduced to 5%. This could turn out to be crucial for the discovery of new physics in this decay.
The present upper bound on KL→ π0νν from KOTO is still two orders of magnitude above the Standard Model prediction. This bound should be lowered by at least one order of magnitude in the coming years. As this decay is fully governed by CP violation, one may expect that new physics will impact it significantly more than CP-conserving decays such as K+→ π+νν.
Branching out from Belle
At present, the most interesting result concerns a 2023 update from Belle II to the measured branching ratio for B+→ K+νν (see “Interesting excess” image). The resulting central value from Belle II and BaBar is currently a factor of 2.6 above the Standard Model prediction. This has sparked many theoretical analyses around the world, but the experimental error of 30% once again does not allow for firm conclusions. Measurements of other charge and spin configurations of this decay are pending.
Finally, both dimuon B-meson decays are at present consistent with Standard Model predictions, but significant improvements in experimental precision could still reveal new physics at work, especially in the case of Bd.
It will take a few years to establish whether new-physics contributions are evident in these six branching ratios, but the fact that all are now predicted accurately means that we can expect to observe or exclude new physics in them before the end of the decade. This would be much harder if measurements of the Vcb coupling were involved.
So far, so good. But what if the observables that replaced Vcb and γ are themselves affected by new physics? How can they be trusted to make predictions against which rare decay rates can be tested?
Here comes some surprisingly good news: no new physics appears to be required to fit them simultaneously. Using the new basis of observables ΔMd, εK and ΔMs, the three constraints intersect at a single point in the Vcb–γ plane (see “No new physics” figure). This analysis favours the inclusive determination of Vcb and yields a value for γ that is consistent with the experimental world average and a factor of two more accurate. It’s important to stress, though, that non-perturbative four-flavour lattice-QCD calculations of ∆Ms and ∆Md by the HPQCD lattice collaboration played a key role here. It is crucial that another lattice-QCD collaboration repeat these calculations, as the three curves cross at different points in three-flavour calculations that exclude charm.
In this context, one realises the advantage of Vcb–γ plots over the usual unitarity-triangle plots, in which Vcb does not appear and 1° improvements in the determination of γ are difficult to appreciate. In the late 2020s, determining Vcb and γ from tree-level decays will be a central issue, and a combination of Vcb-independent and Vcb-dependent approaches will be needed to identify any concrete model of new physics.
We should therefore hope that the tension between inclusive and exclusive determinations of Vcb will soon be conclusively resolved. Forthcoming measurements of our six rare decays may then reveal new physics at the energy frontier (see “New physics” figure). With a 1° precision measurement of γ on the horizon, and many Vcb-independent ratios available, interesting years are ahead in the field of indirect searches for new physics.
In 1676 Antonie van Leeuwenhoek discovered a microuniverse populated by bacteria, which he called animalcula, or little animals. Let us hope that we will, in this decade, discover new animalcula on our flavour expedition to the zeptouniverse.
In a series of daring balloon flights in 1912, Victor Hess discovered radiation that intensified with altitude, implying extra-terrestrial origins. A century later, experiments with cosmic rays have reached low-Earth orbit, but physicists are still puzzled. Cosmic-ray spectra are difficult to explain using conventional models of galactic acceleration and propagation. Hypotheses for their sources range from supernova remnants, active galactic nuclei and pulsars to physics beyond the Standard Model. The study of cosmic rays in the 1940s and 1950s gave rise to particle physics as we know it. Could these cosmic messengers be about to unlock new secrets, potentially clarifying the nature of dark matter?
The cosmic-ray spectrum extends well into the EeV regime, far beyond what can be reached by particle colliders. For many decades, the spectrum was assumed to be broken into intervals, each following a power law, as Enrico Fermi had predicted. The junctures between intervals include: a steepening at about 3 × 10⁶ GeV known as the knee; a flattening at about 4 × 10⁹ GeV known as the ankle; and a further steepening at the supposed end of the spectrum somewhere above 10¹⁰ GeV (10 EeV).
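In equation form (the index values are indicative numbers added here for orientation, not taken from the text), each interval follows a falling power law in energy,

\[
\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\; E^{-\gamma},
\qquad \gamma \approx 2.7 \ \text{below the knee},
\qquad \gamma \approx 3.0\text{–}3.1 \ \text{between knee and ankle},
\]

with the knee, ankle and cutoff marking the energies at which the spectral index γ changes.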
While the cosmic-ray population at EeV energies may include contributions from extra-galactic cosmic rays, and the end of the spectrum may be determined by collisions with relic cosmic-microwave-background photons – the Greisen–Zatsepin–Kuzmin cutoff – the knee is still controversial as the relative abundance of protons and other nuclei is largely unknown. What’s more, recent direct measurements by space-borne instruments have discovered “spectral curvatures” below the knee. These significant deviations from a pure power law range from a few hundred GeV to a few tens of TeV. Intriguing anomalies in the spectra of cosmic-ray electrons and positrons have also been observed below the knee.
Electron origins
The Calorimetric Electron Telescope (CALET; see “Calorimetric telescope” figure) on board the International Space Station (ISS) provides the highest-energy direct measurements of the spectrum of cosmic-ray electrons and positrons. Its goal is to observe discrete sources of high-energy particle acceleration in the local region of our galaxy. Led by the Japan Aerospace Exploration Agency, with the participation of the Italian Space Agency and NASA, CALET was launched from the Tanegashima Space Center in August 2015, becoming the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During 2017 a third experiment, ISS-CREAM, joined AMS-02 and CALET, but its observation time ended prematurely.
As a result of radiative losses in space, high-energy cosmic-ray electrons are expected to originate just a few thousand light-years away, relatively close to Earth. CALET’s homogeneous calorimeter (fully active, with no absorbers) is optimised to reconstruct such particles (see “Energetic electron” figure). With the exception of the highest energies, anisotropies in their arrival direction are typically small due to deflections by turbulent interstellar magnetic fields.
Energy spectra also contain crucial information about where and how cosmic-ray electrons are accelerated, and they could carry signatures of dark matter. For example, a peak in the spectrum could be a sign of dark-matter decay, or of dark-matter annihilation into an electron–positron pair, with a detected electron or positron in the final state.
Direct measurements of the energy spectra of charged cosmic rays have recently achieved unprecedented precision thanks to long-term observations of electrons and positrons of cosmic origin, as well as of individual elements from hydrogen to nickel, and even beyond. Space-borne instruments such as CALET directly identify cosmic nuclei by measuring their electric charge. Ground-based experiments must do so indirectly by observing the showers they generate in the atmosphere, incurring large systematic uncertainties. Either way, hadronic cosmic rays can be assumed to be fully stripped of atomic electrons in their high-temperature regions of origin.
A rich phenomenology
The past decade has seen the discovery of unexpected features in the differential energy spectra of both leptonic and hadronic cosmic rays. The observation by PAMELA and AMS of an excess of positrons above 10 GeV has generated widespread interest and still calls for an unambiguous explanation (CERN Courier December 2016 p26). Possibilities include pair production in pulsars, in addition to the well known interactions with the interstellar gas, and the annihilation of dark matter into electron–positron pairs.
Regarding cosmic-ray nuclei, significant deviations of the fluxes from pure power-law spectra have been observed by several instruments in flight, including by CREAM on balloon launches from Antarctica, by PAMELA and DAMPE aboard satellites in low-Earth orbit, and by AMS-02 and CALET on the ISS. Direct measurements have also shown that the energy spectra of “primary” cosmic rays are different from those of “secondary” cosmic rays created by collisions of primaries with the interstellar medium. This rich phenomenology, which encodes information on cosmic-ray acceleration processes and the history of their propagation in the galaxy, is the subject of multiple theoretical models.
An unexpected discovery by PAMELA, which had been anticipated by CREAM and was later measured with greater precision by AMS-02, DAMPE and CALET, was the observation of a flattening of the differential energy spectra of protons and helium. Starting from energies of a few hundred GeV, the proton flux shows a smooth and progressive hardening (a flattening of the spectral slope) that continues up to around 10 TeV, above which a completely different regime is established. A turning point was the subsequent discovery by CALET and DAMPE of an unexpected softening of the proton and helium fluxes above about 10 TeV/Z, where the atomic number Z is one for protons and two for helium. The presence of a second break challenges the conventional “standard model” of cosmic-ray spectra and calls for a further extension of the observed energy range, currently limited to a few hundred TeV.
At present, only two experiments in low-Earth orbit have an energy reach beyond 100 TeV: CALET and DAMPE. They rely on a purely calorimetric measurement of the energy, while space-borne magnetic spectrometers are limited to a maximum magnetic “rigidity” – a particle’s momentum divided by its charge – of a few teravolts. Since the end of PAMELA’s operations in 2016, AMS-02 has been the only instrument in orbit able to discriminate the sign of the charge. This allows separate measurements of the high-energy spectra of positrons and antiprotons – an important input for dark-matter searches, which target final states containing antiparticles. AMS-02 is also now preparing for an upgrade: an additional silicon tracker layer will be deployed at the top of the instrument to enable a significant increase in its acceptance and energy reach (CERN Courier March/April 2024 p7).
Pioneering observations
CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers, enabling measurements of electrons up to 20 TeV and measurements of hadrons up to 1 PeV. As an all-calorimetric instrument with no magnetic field, its main science goal is to perform precision measurements of the detailed shape of the inclusive spectra of electrons and positrons.
Thanks to its advanced imaging calorimeter, CALET can measure the kinetic energy of incident particles well into the TeV region, maintaining excellent proton–electron discrimination throughout. CALET’s homogeneous calorimeter has a total thickness of 30 radiation lengths, allowing for full containment of electron showers. It is preceded by a high-granularity pre-shower detector with imaging capabilities that provide a redundant measurement of charge via multiple energy-loss measurements. The calibration of the two detector sections is the key to controlling the energy scale, motivating beam tests at CERN before launch.
A first important deviation from a scale-invariant power-law spectrum was found for electrons near 1 TeV. Here, CALET and DAMPE observed a significant flux reduction, as expected from the large radiative losses of electrons during their travel in space. CALET has now published a high-statistics update up to 7.5 TeV, reporting the presence of candidate electrons above the 1 TeV spectral break (see “Electron break” figure).
This unexplored region may hold some surprises. For example, the detection of even higher energy electrons, such as the 12 TeV candidate recently found by CALET, may indicate the contribution of young and nearby sources such as the Vela supernova remnant, which is known to host a pulsar (see “Pulsar home” image).
A second unexpected finding is the observation of a significant reduction in the proton flux around 10 TeV. This bump and dip were also observed by DAMPE and anticipated by CREAM, albeit with low statistics (see “Proton bump” figure). A precise measurement of the flux has allowed CALET to fit the spectrum with a double-broken power law: after a spectral hardening starting at a few hundred GeV, which is also observed by AMS-02 and PAMELA, and which progressively increases above 500 GeV, a steep softening takes place above 10 TeV.
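As an illustration of this kind of fit, the snippet below sketches a double (smoothly) broken power law of the sort described above; the functional form and the parameter values are illustrative assumptions, not CALET’s published parameterisation.

```python
import numpy as np

def double_broken_power_law(E, C, g0, E1, d1, E2, d2, s=5.0):
    """Flux versus energy: spectral index g0 below E1, hardening by d1
    above E1 and softening by d2 above E2; s sets the break smoothness."""
    return (C * E**g0
            * (1.0 + (E / E1)**s)**(d1 / s)
            * (1.0 + (E / E2)**s)**(d2 / s))

# Illustrative parameters loosely inspired by the text: index ~ -2.8 below
# ~500 GeV, hardening by ~ +0.2, then a softening by ~ -0.3 above ~10 TeV.
E = np.logspace(2, 5, 200)            # 100 GeV to 100 TeV
phi = double_broken_power_law(E, C=1.0, g0=-2.8, E1=500.0, d1=+0.2,
                              E2=1.0e4, d2=-0.3)
```

Fitting such a function to the measured flux yields the break energies and the changes of spectral index that quantify the “bump and dip”.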
A similar bump and dip have been observed in the helium flux. These spectral features may result from a single physical process that generates a bump in the cosmic-ray spectrum. Theoretical models include an anomalous diffusive regime near the acceleration sources, the dominance of one or more nearby supernova remnants, the gradual release of cosmic rays from the source, and the presence of additional sources.
CALET is also a powerful hunter of heavier cosmic rays. Measurements of the spectra of boron, carbon and oxygen ions have been extended in energy reach and precision, providing evidence of a progressive spectral hardening for most of the primary elements above a few hundred GeV per nucleon. The boron-to-carbon flux ratio is an important input for understanding cosmic-ray propagation. This is because diffusion through the interstellar medium causes an additional softening of the flux of secondary cosmic rays such as boron with respect to primary cosmic rays such as carbon (see “Break in B/C?” figure). The collaboration also recently published the first high-resolution flux measurement of nickel (Z = 28), revealing the element to have a very similar spectrum to iron, suggesting similar acceleration and propagation behaviour.
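A compact way to state this (a standard diffusion-model relation, with the index range given here for orientation rather than taken from the text): because the diffusion coefficient grows with energy, the secondary-to-primary ratio falls as a power law,

\[
\frac{\Phi_{\mathrm{B}}}{\Phi_{\mathrm{C}}}(E) \;\propto\; E^{-\delta},
\qquad D(E) \propto E^{\delta}, \quad \delta \approx 0.3\text{–}0.6,
\]

so a change of slope in B/C at high energy would signal a change in the propagation regime.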
CALET is also studying the spectra of sub-iron elements, which are poorly known above 10 GeV per nucleon, and ultra-heavy galactic cosmic rays such as zinc (Z = 30), which are quite rare. CALET studies abundances up to Z = 40 using a special trigger with a large acceptance, so far revealing an excellent match with previous measurements from ACE-CRIS (a satellite-based detector), SuperTIGER (a balloon-borne detector) and HEAO-3 (a satellite-based detector decommissioned in the 1980s). Ultra-heavy galactic cosmic rays provide insights into cosmic-ray production and acceleration in some of the most energetic processes in our galaxy, such as supernovae and binary-neutron-star mergers.
Gravitational-wave counterparts
In addition to charged particles, CALET can detect gamma rays with energies between 1 GeV and 10 TeV, and study the diffuse photon background as well as individual sources. To study electromagnetic transients related to complex phenomena such as gamma-ray bursts and neutron-star mergers, CALET is equipped with a dedicated burst monitor covering the energy range 7 keV to 20 MeV, which to date has detected more than 300 gamma-ray bursts, 10% of which are short bursts. The search for electromagnetic counterparts to gravitational waves proceeds around the clock by following alerts from LIGO, Virgo and KAGRA. No X-ray or gamma-ray counterparts to gravitational waves have been detected so far.
On the low-energy side of cosmic-ray spectra, CALET has contributed a thorough study of the effect of solar activity on galactic cosmic rays, revealing a dependence on the particle’s charge sign and on the polarity of the Sun’s magnetic field, due to the different paths taken by electrons and protons in the heliosphere. The instrument’s large-area charge detector has also proven to be ideal for space-weather studies of relativistic electron precipitation from the Van Allen belts in Earth’s magnetosphere.
The spectacular recent experimental advances in cosmic-ray research, and the powerful theoretical efforts that they are driving, are moving us closer to a solution to the century-old puzzle of cosmic rays. With more than four billion cosmic rays observed so far, and a planned extension of the mission to the nominal end of ISS operations in 2030, CALET is expected to continue its campaign of direct measurements in space, contributing sharper and perhaps unexpected pictures of their complex phenomenology.