PhyStat turns 25

Confidence intervals

On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars, which bring together physicists, statisticians and scientists from related fields to discuss, develop and disseminate methods for statistical data analysis and machine learning.

The special symposium heard from the founder and primary organiser of the PhyStat series, Louis Lyons (Imperial College London and University of Oxford), who, together with Fred James and Yves Perrin, initiated the movement with the “Workshop on Confidence Limits” in January 2000. According to Lyons, the aim of the series was to bring together physicists and statisticians, a philosophy that has been followed and extended throughout the 22 PhyStat workshops and conferences, as well as numerous seminars and “informal reviews”. Speakers also called attention to the recognition of particle physics in the Royal Statistical Society’s pictorial timeline of statistics, which starts with the use of averages by Hippias of Elis in 450 BC and culminates with the 2012 discovery of the Higgs boson at 5σ significance.

Lyons and Bob Cousins (UCLA) offered their views on the evolution of statistical practice in high-energy physics, starting in the 1960s bubble-chamber era, strongly influenced by the 1971 book Statistical Methods in Experimental Physics by W T Eadie et al., its 2006 second edition by symposium participant Fred James (CERN), as well as Statistics for Nuclear and Particle Physics (1985) by Louis Lyons – reportedly the most stolen book from the CERN library. Both Lyons and Cousins noted the interest of the PhyStat community not only in practical solutions to concrete problems but also in foundational questions in statistics, with the focus on frequentist methods setting high-energy physics somewhat apart from the Bayesian approach more widely used in astrophysics.

Giving his view of the PhyStat era, ATLAS physicist and director of the University of Wisconsin Data Science Institute Kyle Cranmer emphasised the enormous impact that PhyStat has had on the field, noting important milestones such as the ability to publish full likelihood models through the statistical package RooStats, the treatment of systematic uncertainties with profile-likelihood ratio analyses, methods for combining analyses, and the reuse of published analyses to place constraints on new physics models. Regarding the next 25 years, Cranmer predicted the increasing use of methods that have emerged from PhyStat, such as simulation-based inference, and pointed out that artificial intelligence (the elephant in the room) could drastically alter how we use statistics.

Statistician Mikael Kuusela (CMU) noted that PhyStat workshops have provided important two-way communication between the physics and statistics communities, citing simulation-based inference as an example where many key ideas were first developed in physics and later adopted by statisticians. In his view, the use of statistics in particle physics has emerged as “phystatistics”, a proper subfield with distinct problems and methods.

Another important feature of the PhyStat movement has been to encourage active participation and leadership by younger members of the community. With its 25th anniversary, the torch is now passed from Louis Lyons to Olaf Behnke (DESY), Lydia Brenner (NIKHEF) and a younger team, who will guide PhyStat into the next 25 years and beyond.

Planning for precision at Moriond

Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.

Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch-all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated both in B-, D- and K-meson decays and in the modelling of collider events.

Comprehensive searches

Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.

The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.

Several fields promise qualitative progress in understanding nature in the coming years

The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or the nature of the Yukawa interactions that couple the BEH field to quarks and leptons and give rise to their bizarre hierarchy of masses. Gravity is also not a component of the Standard Model, and a unified theory escapes us.

At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and 2 results with beyond-expectation accuracies on Higgs-boson properties and electroweak precision measurements. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. At Moriond 2025 the LHCb collaboration presented another milestone in flavour physics: the first observation of CP violation in baryon decays. The collaboration also reported the first results from its rebuilt Run 3 detector, which features triggerless readout and a full software trigger.

Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.

Pinpointing polarisation in vector-boson scattering

In the Standard Model (SM), W and Z bosons acquire mass and longitudinal polarisation through electroweak (EW) symmetry breaking, where the Brout–Englert–Higgs mechanism transforms Goldstone bosons into their longitudinal components. One of the most powerful ways to probe this mechanism is through vector-boson scattering (VBS), a rare process represented in figure 1, where two vector bosons scatter off each other. At high (TeV-scale) energies, interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. Without the Higgs boson’s couplings to these polarisation states, their interaction rates would grow uncontrollably with energy, eventually violating unitarity, indicating a complete breakdown of the SM.
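
Schematically, and as a textbook illustration rather than part of the measurement described below, the gauge contribution to the longitudinal scattering amplitude grows with the squared centre-of-mass energy s, and Higgs exchange cancels this growth:

  \mathcal{A}\left(W_L W_L \to W_L W_L\right) \;\sim\; \frac{s}{v^2} \;-\; \frac{s}{v^2}\,\frac{s}{s-m_H^2} \;\longrightarrow\; -\,\frac{m_H^2}{v^2} \quad (s \gg m_H^2),

where v ≈ 246 GeV is the vacuum expectation value of the BEH field. Without the second, Higgs-exchange term the amplitude grows without bound and unitarity is eventually lost.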

Measuring the polarisation of same electric charge (same sign) W-boson pairs in VBS directly tests the predicted EW interactions at high energies through precision measurements. Furthermore, beyond-the-SM scenarios predict modifications to VBS, some affecting specific polarisation states, rendering such measurements valuable avenues for uncovering new physics.

ATLAS figure 2

Using the full proton–proton collision dataset from LHC Run 2 (2015–2018, 140 fb–1 at 13 TeV), the ATLAS collaboration recently published the first evidence for longitudinally polarised W bosons in the electroweak production of same-sign W-boson pairs in final states including two same-sign leptons (electrons or muons) and missing transverse momentum, along with two jets (EW W±W±jj). This process is categorised by the polarisation states of the W bosons: fully longitudinal (WL±WL±jj), mixed (WL±WT±jj), and fully transverse (WT±WT±jj). Measuring the polarisation states is particularly challenging due to the rarity of the VBS events, the presence of two undetected neutrinos, and the absence of a single kinematic variable that efficiently distinguishes between polarisation states. To overcome this, deep neural networks (DNNs) were trained to exploit the complex correlations between event kinematic variables that characterise different polarisations. This approach enabled the separation of the fully longitudinal WL±WL±jj from the combined WT±W±jj (WL±WT±jj plus WT±WT±jj) processes as well as the combined WL±W±jj (WL±WL±jj plus WL±WT±jj) from the purely transverse WT±WT±jj contribution.

To measure the production of WL±WL±jj and WL±W±jj processes, a first DNN (inclusive DNN) was trained to distinguish EW W±W±jj events from background processes. Variables such as the invariant mass of the two highest-energy jets provide strong discrimination for this classification. In addition, two independent DNNs (signal DNNs) were trained to extract polarisation information, separating either WL±WL±jj from WT±W±jj or WL±W±jj from WT±WT±jj, respectively. Angular variables, such as the azimuthal angle difference between the leading leptons and the pseudorapidity difference between the leading and subleading jets, are particularly sensitive to the scattering angles of the W bosons, enhancing the separation power of the signal DNNs. Each DNN is trained using up to 20 kinematic variables, leveraging correlations among them to improve sensitivity.
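
To illustrate the kind of classifier involved, the sketch below trains a small fully connected network on three toy kinematic variables with scikit-learn. It is purely illustrative: the variable names, distributions and network size are hypothetical stand-ins, not the ATLAS implementation.

  # Toy sketch of a DNN polarisation classifier (illustrative only, not the ATLAS code).
  # Two synthetic Gaussian populations stand in for longitudinal- and transverse-enriched samples.
  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.neural_network import MLPClassifier
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler

  rng = np.random.default_rng(42)
  n = 20000

  def make_sample(shift, label):
      # Hypothetical kinematic variables: dijet mass, lepton azimuthal separation, jet rapidity gap.
      mjj = rng.normal(1000 + 200 * shift, 300, n)       # GeV
      dphi_ll = rng.normal(1.5 + 0.4 * shift, 0.6, n)    # rad
      deta_jj = rng.normal(4.0 + 0.5 * shift, 1.0, n)
      return np.column_stack([mjj, dphi_ll, deta_jj]), np.full(n, label)

  X_L, y_L = make_sample(+1, 1)   # stand-in for longitudinally polarised events
  X_T, y_T = make_sample(-1, 0)   # stand-in for transversely polarised events
  X = np.vstack([X_L, X_T])
  y = np.concatenate([y_L, y_T])
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

  # Small fully connected network; the real analysis uses up to ~20 variables per DNN.
  clf = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0))
  clf.fit(X_train, y_train)
  print("toy classification accuracy:", clf.score(X_test, y_test))
  scores = clf.predict_proba(X_test)[:, 1]   # the "DNN score" used downstream in the fit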

The signal DNN distributions, within each inclusive DNN region, were used to extract the WL±WL±jj and WL±W±jj polarisation fractions through two independent maximum-likelihood fits. The excellent separation between the WL±W±jj and WT±WT±jj processes can be seen in figure 2 for the WL±W±jj fit, with the separation improving at higher values of the signal-DNN score, shown on the x-axis. An observed (expected) significance of 3.3 (4.0) standard deviations was obtained for WL±W±jj, providing the first evidence of same-sign WW production with at least one of the W bosons longitudinally polarised. No significant excess of events consistent with WL±WL±jj production was observed, leading to the most stringent 95% confidence-level upper limits to date on the WL±WL±jj cross section: 0.45 (0.70) fb observed (expected).
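
Conceptually, the fraction extraction is a binned maximum-likelihood template fit to the DNN-score distribution. A minimal sketch with toy templates (not the ATLAS likelihood, which also includes background processes and systematic uncertainties) could look like:

  # Minimal binned maximum-likelihood fit of a signal fraction from a score distribution.
  # Toy templates and yields only; illustrative of the method, not the ATLAS measurement.
  import numpy as np
  from scipy.optimize import minimize_scalar
  from scipy.special import gammaln

  rng = np.random.default_rng(1)
  edges = np.linspace(0.0, 1.0, 11)

  # Toy "DNN score" shapes: signal peaks at high score, background at low score.
  sig_shape, _ = np.histogram(rng.beta(5, 2, 100_000), bins=edges)
  bkg_shape, _ = np.histogram(rng.beta(2, 5, 100_000), bins=edges)
  sig_shape = sig_shape / sig_shape.sum()
  bkg_shape = bkg_shape / bkg_shape.sum()

  n_total, true_fraction = 2000, 0.3
  data = rng.poisson(n_total * (true_fraction * sig_shape + (1 - true_fraction) * bkg_shape))

  def nll(f):
      # Poisson negative log-likelihood for signal fraction f.
      mu = np.clip(n_total * (f * sig_shape + (1 - f) * bkg_shape), 1e-9, None)
      return np.sum(mu - data * np.log(mu) + gammaln(data + 1))

  fit = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
  print(f"fitted signal fraction: {fit.x:.3f} (toy truth {true_fraction})")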

There is still much to understand about the electroweak sector of the Standard Model, and the measurement presented in this article remains limited by the size of the available data sample. The techniques developed in this analysis open new avenues for studying W- and Z-boson polarisation in VBS processes during the LHC Run 3 and beyond.

Particle Cosmology and Astrophysics

In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.

In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.

Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.

A much-needed update that captures the rapid developments of the past three decades

By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well-calibrated introduction while still encouraging further study through more specialised texts.

Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.

Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.

Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.

Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.

ALICE measures a rare Ω baryon

ALICE figure 1

Since the discovery of the electron and proton over 100 years ago, physicists have observed a “zoo” of different types of particles. While some of these particles have been fundamental, like neutrinos and muons, many are composite hadrons consisting of quarks bound together by the exchange of gluons. Studying the zoo of hadrons – their compositions, masses, lifetimes and decay modes – allows physicists to understand the details of the strong interaction, one of the fundamental forces of nature.

The Ω(2012) was discovered by the Belle collaboration in 2018. The ALICE collaboration recently reported the observation of a signal consistent with it, with a significance of 15σ, in proton–proton (pp) collisions at a centre-of-mass energy of 13 TeV. This is the first observation of the Ω(2012) by another experiment.

While the details of its internal structure are still up for debate, the Ω(2012) consists, at minimum, of three strange quarks bound together. It is a heavier, excited version of the ground-state Ω baryon discovered in 1964, which also contains three strange quarks. Multiple theoretical models predicted a spectrum of excited Ω baryons, with some calling for a state with a mass around 2 GeV. Following the discovery of the Ω(2012), theoretical work has attempted to describe its internal structure, with hypotheses including a simple three-quark baryon or a hadronic molecule.

Using a sample of a billion pp collisions, ALICE has measured the decay of Ω(2012) baryons to ΞK0S pairs. After travelling a few centimetres, these hadrons decay in turn, eventually producing a proton and four charged pions that are tracked by the ALICE detector.

ALICE’s measurements of the mass and width of the Ω(2012) are consistent with Belle’s, with superior precision on the mass. ALICE has also confirmed the rather narrow width of around 6 MeV, which indicates that the Ω(2012) is fairly long-lived for a particle that decays via the strong interaction. Belle and ALICE’s width measurements also lend support to the conclusion that the Ω(2012) has a spin-parity configuration of JP = 3/2.

ALICE also measured the number of Ω(2012) decays to ΞK0S pairs. By comparing this to the total Ω(2012) yield based on statistical thermal model calculations, ALICE has estimated the absolute branching ratio for the Ω(2012) → ΞK0 decay. A branching ratio is the probability of decay to a given mode. The ALICE results indicate that Ω(2012) undergoes two-body (ΞK) decays more than half the time, disfavouring models of the Ω(2012) structure that require large branching ratios for three-body decays.
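
Schematically, the extraction amounts to the ratio (notation introduced here purely for illustration):

  \mathcal{B}\bigl(\Omega(2012)\to\Xi K\bigr) \;\approx\; \frac{N_{\mathrm{corr}}(\text{measured decay channel})}{N_{\mathrm{tot}}^{\mathrm{thermal}}\bigl(\Omega(2012)\bigr)},

where N_corr is the efficiency-corrected yield measured in the ΞK0S channel and N_tot^thermal is the total Ω(2012) yield expected from the statistical thermal model.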

The present ALICE results will help to improve the theoretical description of the structure of excited baryons. They can also serve as baseline measurements in searches for modifications of Ω-baryon properties in nucleus–nucleus collisions. In the future, Ω(2012) baryons may also serve as new probes to study the strangeness enhancement effect observed in proton–proton and nucleus–nucleus collisions.

Tau leptons from light resonances

CMS figure 1

Among the fundamental particles, tau leptons occupy a curious spot. They participate in the same sort of reactions as their lighter lepton cousins, electrons and muons, but their large mass means that they can also decay into a shower of pions and they interact more strongly with the Higgs boson. In many new-physics theories, Higgs-like particles – beyond that of the Standard Model – are introduced in order to explain the mass hierarchy or as possible portals to dark matter.

Because of their large mass, tau leptons are especially useful in searches for new physics. However, identifying taus is challenging, as in most cases they decay into a final state of one or more pions and an undetected neutrino. A crucial step in the identification of a tau lepton in the CMS experiment is the hadrons-plus-strips (HPS) algorithm. In the standard CMS reconstruction, a minimum momentum threshold of 20 GeV is imposed, such that the taus have enough momentum to make their decay products fall into narrow cones. However, this requirement reduces sensitivity to low-momentum taus. As a result, previous searches for a Higgs-like resonance φ decaying into two tau leptons required a φ-mass of more than 60 GeV.

CMS figure 2

The CMS experiment has now been able to extend the φ-mass range down to 20 GeV. To improve sensitivity to low-momentum tau decays, machine learning is used to determine a dynamic cone algorithm that expands the cone size as needed. The new algorithm, requiring one tau decaying into a muon and two neutrinos and one tau decaying into hadrons and a neutrino, is implemented in the CMS Scouting trigger system. Scouting extends CMS’s reach into previously inaccessible phase space by retaining only the most relevant information about the event, and thus facilitating much higher event rates.
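
The idea of a momentum-dependent cone can be sketched with a simple fixed rule. The numbers below follow the widely used HPS choice of ΔR = 3 GeV/pT clamped to a fixed range; the new CMS algorithm instead learns how to expand the cone with machine learning, so this is only an illustration of the concept.

  # Toy illustration of a momentum-dependent tau signal cone.
  # Fixed rule in the spirit of the standard HPS cone (deltaR = 3 GeV / pT, clamped);
  # the new CMS approach determines a dynamic cone with machine learning instead.
  def tau_signal_cone(pt_gev: float, r_min: float = 0.05, r_max: float = 0.10) -> float:
      """Low-pT taus have more spread-out decay products, so they get a wider cone."""
      return min(max(3.0 / pt_gev, r_min), r_max)

  for pt in (20, 40, 100):
      print(f"pT = {pt:>3} GeV -> cone size deltaR = {tau_signal_cone(pt):.3f}")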

The sensitivity of the new algorithm is so high that even the upsilon (Υ) meson, a bound state of the bottom quark and its antiquark, can be seen. Figure 1 shows the distribution of the mass of the visible decay products of the tau pair (Mvis), in this case a muon from one tau lepton and either one or three pions from the other. A clear resonance structure is visible at Mvis = 6 GeV, in agreement with the expectation for the Υ meson. The peak is not at the actual mass of the Υ meson (9.46 GeV) due to the presence of neutrinos in the decay. While Υ → ττ decays have been observed at electron–positron colliders, this marks the first evidence at a hadron collider and serves as an important benchmark for the analysis.

Given the high sensitivity of the new algorithm, CMS performed a search for a possible resonance in the range between 20 and 60 GeV using the data recorded in the years 2022 and 2023, and set competitive exclusion limits (see figure 2). For the 2024 and 2025 data taking, the algorithm was further improved, enhancing the sensitivity even more.

CMS observes top–antitop excess

Threshold excess

CERN’s Large Hadron Collider continues to deliver surprises. While searching for additional Higgs bosons, the CMS collaboration may have instead uncovered evidence for the smallest composite particle yet observed in nature – a “quasi-bound” hadron made up of the most massive and shortest-lived fundamental particle known to science and its antimatter counterpart. The findings, which do not yet constitute a discovery claim and could also be susceptible to other explanations, were reported this week at the Rencontres de Moriond conference in the Italian Alps.

Almost all of the Standard Model’s shortcomings motivate the search for additional Higgs bosons. Their properties are usually assumed to be simple. Much as the 125 GeV Higgs boson discovered in 2012 appears to interact with each fundamental fermion with a strength proportional to the fermion’s mass, theories postulating additional Higgs bosons generally expect them to couple more strongly to heavier quarks. This puts the singularly massive top quark at centre stage. If an additional Higgs boson has a mass greater than about 345 GeV and can therefore decay to a top quark–antiquark pair, this should dominate the way it decays inside detectors. Hunting for bumps in the invariant mass spectrum of top–antitop pairs is therefore often considered to be the key experimental signature of additional Higgs bosons above the top–antitop production threshold.

The CMS experiment has observed just such a bump. Intriguingly, however, it is located at the lower limit of the search, right at the top-quark pair production threshold itself, leading CMS to also consider an alternative hypothesis long considered difficult to detect: a top–antitop quasi-bound state known as toponium (see “Threshold excess” figure).

The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC

“When we started the project, toponium was not even considered as a background to this search,” explains CMS physics coordinator Andreas Meyer (DESY). “In our analysis today we are only using a simplified model for toponium – just a generic spin-0 colour-singlet state with a pseudoscalar coupling to top quarks. The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC.”

Though other explanations can’t be ruled out, CMS finds the toponium hypothesis to be sufficient to explain the observed excess. The size of the excess is consistent with the latest theoretical estimate of the cross section to produce pseudoscalar toponium of around 6.4 pb.

“The cross section we obtain for our simplified hypothesis is 8.8 pb with an uncertainty of about 15%,” explains Meyer. “One can infer that this is significantly above five sigma.”

The smallest hadron

If confirmed, toponium would be the final example of quarkonium – a term for quark–antiquark states formed from heavy charm, bottom and perhaps top quarks. Charmonium (charm–anticharm) mesons were discovered at SLAC and Brookhaven National Laboratory in the November Revolution of 1974. Bottomonium (bottom–antibottom) mesons were discovered at Fermilab in 1977. These heavy quarks move relatively slowly compared to the speed of light, allowing the strong interaction to be modelled by a static potential as a function of the separation between them. When the quarks are far apart, the potential is proportional to their separation due to the self-interacting gluons forming an elongating flux tube, yielding a constant force of attraction. At close separations, the potential is due to the exchange of individual gluons and is Coulomb-like in form, and inversely proportional to separation, leading to an inverse-square force of attraction. This is the domain where compact quarkonium states are formed, in a near perfect QCD analogy to positronium, wherein an electron and a positron are bound by photon exchange. The Bohr radii of the ground states of charmonium and bottomonium are approximately 0.3 fm and 0.2 fm, and bottomonium is thought to be the smallest hadron yet discovered. Given its larger mass, toponium’s Bohr radius would be an order of magnitude smaller.
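
In the Coulombic (positronium-like) regime described above, the size of a quarkonium ground state is set by its Bohr radius; a rough estimate, not a precision statement, is

  a_{\mathrm{Bohr}} \;\approx\; \frac{2}{C_F\,\alpha_s\,m_Q}, \qquad C_F = \tfrac{4}{3},

so the radius scales as 1/(α_s m_Q). With m_t/m_b ≈ 40 and the slow running of α_s only partly compensating, the toponium ground state comes out roughly an order of magnitude more compact than bottomonium.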

Angular analysis

For a long time it was thought that toponium bound states were unlikely to be detected in hadron–hadron collisions. The top quark is the most massive and the shortest-lived of the known fundamental particles. It decays into a bottom quark and a real W boson in the time it takes light to travel just 0.1 fm, leaving little time for a hadron to form. Toponium would be unique among quarkonia in that its decay would be triggered by the weak decay of one of its constituent quarks rather than the annihilation of its constituent quarks into photons or gluons. Toponium is expected to decay at twice the rate of the top quark itself, with a width of approximately 3 GeV.
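
The quoted width simply reflects that either of the two constituent quarks can decay first, with the SM top-quark width of roughly 1.4 GeV:

  \Gamma_{\mathrm{toponium}} \;\approx\; 2\,\Gamma_t \;\approx\; 2 \times 1.4~\mathrm{GeV} \;\approx\; 3~\mathrm{GeV}.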

CMS first saw a 3.5 sigma excess in a 2019 search studying the mass range above 400 GeV, based on 35.9 fb−1 of proton–proton collisions at 13 TeV from 2016. Now armed with 138 fb–1 of collisions from 2016 to 2018, the collaboration extended the search down to the top–antitop production threshold at 345 GeV. Searches are complicated by the possibility that quantum interference between background and Higgs signal processes could generate an experimentally challenging peak–dip structure with a more or less pronounced bump.

“The signal reported by CMS, if confirmed, could be due either to a quasi-bound top–antitop meson, commonly called ‘toponium’, or possibly an elementary spin-zero boson such as appears in models with additional Higgs bosons, or conceivably even a combination of the two,” says theorist John Ellis of King’s College London. “The mass of the lowest-lying toponium state can be calculated quite accurately in QCD, and is expected to lie just below the nominal top–antitop threshold. However, this threshold is smeared out by the short lifetime of the top quark, as well as the mass resolution of an LHC detector, so toponium would appear spread out as a broad excess of events in the final states with leptons and jets that generally appear in top decays.”

Quantum numbers

An important task of the analysis is to investigate the quantum numbers of the signal. It could be a scalar particle, like the Higgs boson discovered in 2012, or a pseudoscalar particle – a different type of spin-0 object with odd rather than even parity. To measure its spin-parity, CMS studied the angular correlations of the top-quark-pair decay products, which retain information on the original quantum state. The decays bear all the experimental hallmarks of a pseudoscalar particle, consistent with toponium (see “Angular analysis” figure) or the pseudoscalar Higgs bosons common to many theories featuring extended Higgs sectors.

“The toponium state produced at the LHC would be a pseudoscalar boson, whose decays into these final states would have characteristic angular distributions, and the excess of events reported by CMS exhibits the angular correlations expected for such a pseudoscalar state,” explains Ellis. “Similar angular correlations would be expected in the decays of an elementary pseudoscalar boson, whereas scalar-boson decays would exhibit different angular correlations that are disfavoured by the CMS analysis.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery

Two main challenges now stand in the way of definitively identifying the nature of the excess. The first is to improve the modelling of the creation of top-quark pairs at the LHC, including the creation of bound states at the threshold. The second challenge is to obtain consistency with the ATLAS experiment. “ATLAS had similar studies in the past but with a more conservative approach on the systematic uncertainties,” says ATLAS physics coordinator Fabio Cerutti (LBNL). “This included, for example, larger uncertainties related to parton showers and other top-modelling effects. To shed more light on the CMS observation, be it a new boson, a top quasi-bound state, or some limited understanding of the modelling of top–antitop production at threshold, further studies are needed on our side. We have several analysis teams working on that. We expect to have new results with improved modelling of the top-pair production at threshold and additional variables sensitive to both a new pseudo-scalar boson or a top quasi-bounded state very soon.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery.

“Discovering toponium 50 years after the November Revolution would be an unanticipated and welcome golden anniversary present for its charmonium cousin that was discovered in 1974,” concludes Ellis. “The prospective observation and measurement of the vector state of toponium in e+e– collisions around 350 GeV have been studied in considerable theoretical detail, but there have been rather fewer studies of the observability of pseudoscalar toponium at the LHC. In addition to the angular correlations observed by CMS, the effective production cross section of the observed threshold effect is consistent with non-relativistic QCD calculations. More detailed calculations will be desirable for confirmation that another quarkonium family member has made its appearance, though the omens are promising.”

The Hubble tension

Just like particle physics, cosmology has its own standard model. It is also powerful in prediction, and brings new mysteries and profound implications. The first was the realisation in 1917 that a homogeneous and isotropic universe must be expanding. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he labelled his greatest blunder when Edwin Hubble provided observational proof of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.

On large scales the dominant motion of galaxies is the Hubble flow, the expansion of the fabric of space itself

The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.

Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.

The Hubble tension

Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing its end-to-end predictions, and the model now seems to be failing the most important: predicting the expansion rate of the universe.

One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).

At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
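
In practice, “the gradient is H0” amounts to a straight-line fit of recession velocity against distance. The toy sketch below uses simulated galaxies with an arbitrary input value of H0 and random peculiar velocities; real analyses use calibrated SN Ia distances and measured redshifts.

  # Toy Hubble-flow fit: recession velocity versus distance, gradient = H0.
  # Simulated data only: H0_true and the scatter are arbitrary illustrative choices.
  import numpy as np

  rng = np.random.default_rng(0)
  H0_true = 70.0                               # km/s/Mpc (toy input)
  d = rng.uniform(50, 600, 300)                # distances in Mpc, deep in the Hubble flow
  v = H0_true * d + rng.normal(0, 300, 300)    # recession velocity plus peculiar-velocity scatter

  # Least-squares gradient of a line through the origin: H0 = sum(v*d) / sum(d*d)
  H0_fit = np.sum(v * d) / np.sum(d * d)
  print(f"fitted H0 = {H0_fit:.1f} km/s/Mpc")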

Receding supernovae

While high-precision spectroscopy allows recession velocity to be precisely measured using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities such as cepheids and SN Ia allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are deep enough in the Hubble flow to allow precise measurements of H0. When cepheids are observable in the same galaxies, they can be used to calibrate them.
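
The standard-candle logic is just the inverse-square law: if the luminosity L of the candle is known, the flux F measured at Earth gives its distance,

  F = \frac{L}{4\pi d^2} \quad\Longrightarrow\quad d = \sqrt{\frac{L}{4\pi F}}.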

Distance ladder

At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distance is known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifting microwave maser photons.

The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.

Baryon acoustic oscillation

The sound horizon

The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.

First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.

In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated like sound waves until the era of recombination, when the universe had cooled sufficiently for CMB photons to escape the plasma when protons and electrons combined to form neutral atoms. This propagation of inflationary perturbations left a characteristic scale known as the sound horizon in both the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure). The sound horizon is the distance travelled by sound waves in the primordial plasma.
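
In standard notation, the (comoving) sound horizon is

  r_s = \int_{z_*}^{\infty} \frac{c_s(z)}{H(z)}\,\mathrm{d}z,

where z_* is the redshift of recombination, c_s the sound speed of the photon–baryon plasma and H(z) the expansion rate, so its value depends on the energy content of the pre-recombination universe.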

While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.

Sound horizon

The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has been dampened by the lack of new events since GW170817. More accurate measurements of the age of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases, and the SH0ES measurement favours less than 13.1 billion years at 2σ significance.
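
The conversion behind that last statement is straightforward (a rough guide; the precise ΛCDM age also depends on the matter and dark-energy densities):

  t_H = \frac{1}{H_0} \approx \frac{977.8~\mathrm{Gyr}}{H_0/(\mathrm{km\,s^{-1}\,Mpc^{-1}})} \approx 14.5~\mathrm{Gyr}~(H_0 = 67.4) \quad\text{or}\quad 13.4~\mathrm{Gyr}~(H_0 = 73.0),

and for a Planck-like matter density the ΛCDM age is about 0.95/H0, so a higher H0 directly implies a younger universe.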

The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden start of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at distances smaller than that of cepheids.

Settling the debate

In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference in the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.

James Webb Space Telescope

Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES and the geometric anchor NGC4258.

So far, all observations suggest good agreement with the previous observations by HST. SH0ES used JWST observations to obtain up to a factor 2.5 reduction in the dispersion of the period-luminosity relation for cepheids with no indication of a bias in HST measurements. Most importantly, they were able to exclude the confusion of cepheids with other stars as being responsible for the Hubble tension at 8σ significance.

Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).

Galaxies used to measure the Hubble constant

The new CCHP results suggest that cepheids may show a bias compared to JAGB and TRGB, though this conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the size of the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC 5468 (see “The usual suspects” figure).

At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.

Comparison of JWST and HST views

Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.

The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or the energy density in the pre-recombination universe, giving the sound waves less time to propagate.

The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.

Do muons wobble faster than expected?

Vacuum fluctuation

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

Quantum fluctuation

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.
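
Numerically, with α ≈ 1/137.036,

  a = \frac{\alpha}{2\pi} \approx \frac{1}{137.036 \times 2\pi} \approx 1.16 \times 10^{-3}.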

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
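
The enhancement factor is simply

  \left(\frac{m_\mu}{m_e}\right)^{2} \approx \left(\frac{105.66~\mathrm{MeV}}{0.511~\mathrm{MeV}}\right)^{2} \approx (207)^{2} \approx 4.3\times10^{4}.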

Precision

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e+e– annihilation into hadrons and, later, hadronic tau-lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.

Hadronic contributions

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e+e– annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice–QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

Landscape of muon g-2

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

Intermediate window

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice-QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice-QCD computation of the longest-distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach yields a total uncertainty of just 0.46%, with the collaboration showing that all e+e− datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more stark. However, the analysis of e+e− annihilation into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e+e− annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, comes from the e+e− → π+π− process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π− channel. Scrutiny by the Theory Initiative has identified no major problem.

Two-pion channel

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e+e− annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar innovated by calibrating the luminosity of the initial-state radiation using the μ+μ− channel and by using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs τ → ππ0ν decays but requires some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement, with ALEPH offering the best normalisation and Belle the best shape measurement.
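
The additional input enters through the conserved-vector-current (CVC) relation, which connects the isovector part of the e+e− → π+π− cross-section to the τ spectral function. Schematically (a sketch only; v−(s) denotes the two-pion spectral function extracted from the τ → ππ0ν mass spectrum, to which isospin-breaking corrections such as ρ–ω mixing, the pion mass and width differences and a short-distance electroweak factor must be applied):

\sigma^{I=1}_{e^+e^- \to \pi^+\pi^-}(s) = \frac{4\pi\alpha^2}{s}\, v_-(s)

The quality of these corrections largely determines the systematic uncertainty of the τ-based approach.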

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e+e− → π+π− cross-sections. BaBar and the τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and the τ data also agree above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e+e− → π+π−. That study points to a possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainty of the HVP contribution to a commensurate degree, theorists and experimentalists have shattered a 20-year consensus. This has triggered an intense collective effort that is still in progress.

New analyses of e+e− data are underway at BaBar, Belle II, BESIII and KLOE, measurements are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract the HVP contribution by analysing how muons scatter off electrons – a very challenging endeavour given the exceptional accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of the radiative corrections.
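
For context, MUonE aims to measure the hadronic running of the electromagnetic coupling, Δα_had(t), at spacelike momentum transfer t in elastic μe scattering, from which the leading-order HVP contribution follows via a one-dimensional integral. A commonly quoted form (a sketch, not taken from the project’s own documentation):

a_\mu^{\rm HVP,LO} = \frac{\alpha}{\pi} \int_0^1 {\rm d}x\, (1-x)\, \Delta\alpha_{\rm had}\big(t(x)\big), \qquad t(x) = -\,\frac{x^2 m_\mu^2}{1-x}

so the accuracy with which Δα_had can be extracted from the shape of the scattering distribution translates directly into the precision of the resulting HVP determination.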

At the same time, lattice-QCD calculations have made enormous progress in the last five years and now provide a very competitive alternative. The involvement of several groups using somewhat independent techniques allows detailed cross-checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

The beauty of falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological-constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity: there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: asking a basic question, “does the graviton have to be massless?”, revealed a new structure. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something its originators did not even remotely have in mind. Here, de Rham offers invaluable insights into both the uncovering of a new theoretical structure and what happens next, as the results are challenged and built upon by others in the community.
