The Hubble tension

Just like particle physics, cosmology has its own standard model. It too is powerful in its predictions, and brings new mysteries and profound implications. The first emerged in 1917, with the realisation that a homogeneous and isotropic universe governed by general relativity cannot remain static. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he reportedly called his greatest blunder after Edwin Hubble provided observational evidence of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.

On large scales the dominant motion of galaxies is the Hubble flow, the expansion of the fabric of space itself

The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.

Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.

The Hubble tension

Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing ΛCDM’s end-to-end predictions, and the model now seems to be failing the most important of them: predicting the expansion rate of the universe.

One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).

At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
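To make the fit concrete, here is a minimal sketch in Python using purely illustrative distances and velocities (a real analysis must also account for peculiar velocities, calibration uncertainties and their covariances):

import numpy as np

# Illustrative, made-up data: distances in Mpc and recession velocities in km/s
distances = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
velocities = np.array([6800.0, 13500.0, 20400.0, 27000.0, 34100.0])

# Hubble's law v = H0 * d: a one-parameter least-squares fit through the origin
H0 = np.sum(distances * velocities) / np.sum(distances ** 2)
print(f"H0 = {H0:.1f} km/s/Mpc")  # about 68 km/s/Mpc for these illustrative numbers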

Receding supernovae

While high-precision spectroscopy allows recession velocities to be measured precisely using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities, such as cepheids and SN Ia, allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are bright enough to be observed deep within the Hubble flow, allowing precise measurements of H0. When cepheids are observable in the same galaxies, they can be used to calibrate the SN Ia luminosities.
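The inverse-square law behind the standard-candle method amounts to a one-line calculation; the luminosity and flux below are purely illustrative values, not real measurements:

import math

L_candle = 3.8e36  # assumed intrinsic luminosity of a standard candle, in watts (illustrative)
F_obs = 2.0e-14    # measured flux at Earth, in watts per square metre (illustrative)

# Inverse-square law: F = L / (4 pi d^2), so d = sqrt(L / (4 pi F))
d_metres = math.sqrt(L_candle / (4 * math.pi * F_obs))
print(f"inferred distance = {d_metres / 3.086e22:.0f} Mpc")  # 1 Mpc = 3.086e22 m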

Distance ladder

At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distances are known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifted microwave maser photons.

The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.

Baryon acoustic oscillation

The sound horizon

The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.

First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.

In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated through it like sound waves until the era of recombination, when the universe had cooled sufficiently for protons and electrons to combine into neutral atoms, allowing the CMB photons to escape the plasma. This propagation of inflationary perturbations left a characteristic scale – the sound horizon, the distance travelled by the sound waves in the primordial plasma – imprinted both in the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure).

While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.
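For orientation, the standard ruler is the comoving sound horizon at the drag epoch, for which the usual textbook expression is quoted below; here c_s is the sound speed of the photon–baryon plasma, H(z) the expansion rate, and ρ_b and ρ_γ the baryon and photon densities – which is why, under ΛCDM, only the baryon and dark-matter densities enter the computation:

r_s = \int_{z_\mathrm{drag}}^{\infty} \frac{c_s(z)}{H(z)}\,\mathrm{d}z, \qquad c_s(z) = \frac{c}{\sqrt{3\left(1 + \frac{3\rho_b(z)}{4\rho_\gamma(z)}\right)}}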

Sound horizon

The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has been dampened by the lack of new events since GW170817. More accurate measurements of the ages of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases, and the SH0ES measurement favours an age of less than 13.1 billion years at 2σ significance.
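The link between H0 and the age of the universe follows from the standard flat matter-plus-Λ solution; a minimal sketch, assuming an illustrative matter density Ωm ≈ 0.315 and neglecting radiation:

import math

def age_gyr(H0, omega_m=0.315):
    """Age of a flat matter + Lambda universe in Gyr (radiation neglected)."""
    omega_l = 1.0 - omega_m
    hubble_time_gyr = 978.0 / H0  # for H0 in km/s/Mpc
    return (2.0 / 3.0) * hubble_time_gyr / math.sqrt(omega_l) * math.asinh(math.sqrt(omega_l / omega_m))

print(round(age_gyr(67.4), 1))  # ~13.8 Gyr for a Planck-like H0
print(round(age_gyr(73.0), 1))  # ~12.7 Gyr for a SH0ES-like H0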

The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden onset of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at smaller distances than cepheids can reach.

Settling the debate

In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference between the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.

James Webb Space Telescope

Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near-infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES, as well as the geometric anchor NGC4258.

So far, all JWST observations suggest good agreement with the earlier HST measurements. SH0ES used JWST observations to obtain up to a factor 2.5 reduction in the dispersion of the period–luminosity relation for cepheids, with no indication of a bias in the HST measurements. Most importantly, they were able to exclude the confusion of cepheids with other stars as being responsible for the Hubble tension at 8σ significance.

Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population-based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).

Galaxies used to measure the Hubble constant

The new CCHP results suggest that cepheids may show a bias compared to the JAGB and TRGB, though this conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 hosts calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC 4258 (see “The usual suspects” figure).

At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.

Comparison of JWST and HST views

Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.

The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or by increasing the energy density in the pre-recombination universe, giving the sound waves less time to propagate.

The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.

Do muons wobble faster than expected?

Vacuum fluctuation

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

Quantum fluctuation

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
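The Schwinger correction quoted above and the factor of around 43,000 are both easy to reproduce; a quick sketch using standard values of the fine-structure constant and the lepton masses:

import math

alpha = 1 / 137.035999          # fine-structure constant
m_mu, m_e = 105.658, 0.510999   # muon and electron masses in MeV

print(f"Schwinger term alpha/(2*pi) = {alpha / (2 * math.pi):.5f}")  # ~0.00116
print(f"enhancement factor (m_mu/m_e)^2 = {(m_mu / m_e) ** 2:.0f}")  # ~43,000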

Precision

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e⁺e⁻ annihilation into hadrons and, later, of hadronic tau-lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.

Hadronic contributions

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e⁺e⁻ annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice-QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

Landscape of muon g-2

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

Intermediate window

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice-QCD computation of the longest-distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e⁺e⁻ datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of e⁺e⁻ annihilation into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e⁺e⁻ annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e⁺e⁻ → π⁺π⁻ process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π channel. Scrutiny by the Theory Initiative has identified no major problem.

Two-pion channel

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e⁺e⁻ annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ⁺μ⁻ channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs τ⁻ → π⁻π⁰ν decays while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e⁺e⁻ → π⁺π⁻ cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e⁺e⁻ → π⁺π⁻. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainties of the HVP contribution to a commensurate degree, theorists and experimentalists have shattered a 20-year consensus. This has triggered an intense collective effort that is still in progress.

The prospect of testing the limits of the SM through high-precision measurements generates considerable impetus

New analyses of e⁺e⁻ annihilation are underway at BaBar, Belle II, BESIII and KLOE, measurements are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of the radiative corrections.

At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

Educational accelerator open to the public

What better way to communicate accelerator physics to the public than a functioning particle accelerator? Since January, visitors to CERN’s Science Gateway have been able to witness a beam of protons being accelerated and focused before their very eyes. Its designers believe it to be the first working proton accelerator to be exhibited in a museum.

“ELISA gives people who visit CERN a chance to really see how the LHC works,” says Science Gateway’s project leader Patrick Geeraert. “This gives visitors a unique experience: they can actually see a proton beam in real time. It then means they can begin to conceptualise the experiments we do at CERN.”

The model accelerator is inspired by a component of LINAC 4 – the first stage in the chain of accelerators used to prepare beams of protons for experiments at the LHC. Hydrogen is injected into a low-pressure chamber and ionised; a one-metre-long RF cavity then accelerates the protons to 2 MeV, after which they pass through a thin vacuum-sealed window into the open air. In dim light, the protons ionise air molecules along their path, producing visible light that allows members of the public to follow the beam’s progress (see “Accelerating education” figure).
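For a sense of scale, a 2 MeV proton is still far from the speed of light; a minimal relativistic-kinematics sketch:

import math

m_p = 938.272  # proton rest energy in MeV
T = 2.0        # kinetic energy in MeV after the RF cavity

gamma = 1.0 + T / m_p
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
print(f"v/c = {beta:.3f}, i.e. roughly {beta * 3.0e5:.0f} km/s")  # ~0.065 c, about 20,000 km/s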

ELISA – the Experimental Linac for Surface Analysis – will also be used to analyse the composition of cultural artefacts, geological samples and objects brought in by members of the public. This is an established application of low-energy proton accelerators: for example, a particle accelerator is hidden 15 m below the famous glass pyramids of the Louvre in Paris, though it is almost 40 m long and not freely accessible to the public.

“The proton-beam technique is very effective because it has higher sensitivity and lower backgrounds than electron beams,” explains applied physicist and lead designer Serge Mathot. “You can also perform the analysis in the ambient air, instead of in a vacuum, making it more flexible and better suited to fragile objects.”

For ELISA’s first experiment, researchers from the Australian Nuclear Science and Technology Organisation and from Oxford’s Ashmolean Museum have proposed a joint research project to optimise ELISA’s analysis of paint samples designed to mimic ancient cave art. The ultimate goal is to work towards a portable accelerator that can be taken to regions of the world that don’t have access to proton beams.

Game on for physicists

Raphael Granier de Cassagnac and Exographer

“Confucius famously may or may not have said: ‘When I hear, I forget. When I see, I remember. When I do, I understand.’ And computer-game mechanics can be inspired directly by science. Study it well, and you can invent game mechanics that allow you to engage with and learn about your own reality in a way you can’t when simply watching films or reading books.”

So says Raphael Granier de Cassagnac, a research director at France’s Centre national de la recherche scientifique and member of the CMS collaboration at the LHC. Granier de Cassagnac is also the creative director of Exographer, a science-fiction computer game that draws on concepts from particle physics and is available on Steam, Switch, PlayStation 5 and Xbox.

“To some extent, it’s not too different from working at a place like CMS, which is also a super complicated object,” explains Granier de Cassagnac. Developing a game often requires graphic artists, sound designers, programmers and science advisors. To keep a detector like CMS running, you need engineers, computer scientists, accelerator physicists and funding agencies. And that’s to name just a few. Even if you are not the primary game designer or principal investigator, understanding the fundamentals is crucial to keep the project running efficiently.

Root skills

Most physicists already have some familiarity with structured programming and data handling, which eases the transition into game development. Just as tools like ROOT and Geant4 serve as libraries for analysing particle collisions, game engines such as Unreal, Unity or Godot provide a foundation for building games, whose prebuilt functionality lets developers concentrate on refining the game mechanics.

“Physicists are trained to have an analytical mind, which helps when it comes to organising a game’s software,” explains Granier de Cassagnac. “The engine is merely one big library, and you never have to code anything super complicated, you just need to know how to use the building blocks you have and code in smaller sections to optimise the engine itself.”

While coding is an essential skill for game production, it is not enough to create a compelling game. Game design demands storytelling, character development and world-building. Structure, coherence and the ability to guide an audience through complex information are also required.

“Some games are character-driven, others focus more on the adventure or world-building,” says Granier de Cassagnac. “I’ve always enjoyed reading science fiction and playing role-playing games like Dungeons and Dragons, so writing for me came naturally.”

Entrepreneurship and collaboration are also key skills, as it is increasingly rare for developers to create games independently. Universities and startup incubators can provide valuable support through funding and mentorship. Incubators can help connect entrepreneurs with industry experts, and bridge the gap between scientific research and commercial viability.

“Managing a creative studio and a company, as well as selling the game, was entirely new for me,” recalls Granier de Cassagnac. “While working at CMS, we always had long deadlines and low pressure. Physicists are usually not prepared for the speed of the industry at all. Specialised offices in most universities can help with valorisation – taking scientific research and putting it on the market. You cannot forget that your academic institutions are still part of your support network.”

Though challenging to break into, opportunity abounds for those willing to upskill

The industry is fiercely competitive, with more games being released than players can consume, but a well-crafted game with a unique vision can still break through. A common mistake made by first-time developers is releasing their game too early. No matter how innovative the concept or engaging the mechanics, a game riddled with bugs frustrates players and damages its reputation. Even with strong marketing, a rushed release can lead to negative reviews and refunds – sometimes sinking a project entirely.

“In this industry, time is money and money is time,” explains Granier de Cassagnac. But though challenging to break into, opportunity abounds for those willing to upskill, with the gaming industry worth almost $200 billion a year and reaching more than three billion players worldwide by Granier de Cassagnac’s estimation. The most important aspects for making a successful game are originality, creativity, marketing and knowing the engine, he says.

“Learning must always be part of the process; without it we cannot improve,” adds Granier de Cassagnac, referring to his own upskilling for the company’s next project, which will be even more ambitious in its scientific coverage. “In the next game we want to explore the world as we know it, from the Big Bang to the rise of technology. We want to tell the story of humankind.”

The beauty of falling

The Beauty of Falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at gravitational-wave observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological-constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity; there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: by starting with a basic question, “does the graviton have to be massless?”, a new structure was revealed. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something that its originators did not have even remotely in mind. Here, de Rham offers invaluable insights both into uncovering a new theoretical structure and into what happens next, as the results are challenged and built upon by others in the community.

CMS peers inside heavy-quark jets

CMS figure 1

Ever since quarks and gluons were discovered, scientists have been gathering clues about their nature and behaviour. When quarks and gluons – collectively called partons – are produced at particle colliders, they shower to form jets: sprays of composite particles called hadrons. The study of jets has been indispensable to understanding quantum chromodynamics (QCD) and to describing the final state using parton-shower models. Recently, particular focus has been placed on jet substructure, which provides further input on the modelling of parton showers.

Jets initiated by heavy charm quarks (c-jets) or bottom quarks (b-jets) provide insight into the role of the quark mass as an additional energy scale in QCD calculations. Heavy-flavour jets are not only used to test QCD predictions, they are also a key part of the study of other particles, such as the top quark and the Higgs boson. Understanding the internal structure of heavy-quark jets is thus crucial both for the identification of these heavier objects and for the interpretation of QCD properties. One such property is the presence of a “dead cone” around the heavy quark, where collinear gluon emissions are suppressed in the direction of motion of the quark.

CMS has shed light on the role of the quark mass in the parton shower with two new results focusing on c- and b-jets, respectively. Heavy-flavour hadrons in these jets are typically long-lived, and decay at a small but measurable distance from the primary interaction vertex. In c-jets, the D0 meson is reconstructed in the K∓π± decay channel by combining pairs of charged hadrons that do not appear to come from the primary interaction vertex. In the case of b-jets, a novel technique is employed. Instead of reconstructing the b hadron in a given decay channel, its charged decay daughters are identified using a multivariate analysis. In both cases, the decay daughters are replaced by the mother hadron in the jet constituents.

CMS has shed light on the role of the quark mass in the parton shower

Jets are reconstructed by clustering particles in a pairwise manner, leading to a clustering tree that mimics the parton-shower process. Substructure techniques are then employed to decompose the jet into two subjets, which correspond to the heavy quark and a gluon emitted from it. Two such algorithms are soft drop and late-kT. They select the first and last emission in the jet clustering tree, respectively, capturing different aspects of the QCD shower. Looking at the angle between the two subjets (see figure 1), denoted Rg for soft drop and θ for late-kT, demonstrates the dead-cone effect, as the small-angle emissions of b-jets (left) and c-jets (right) are suppressed compared to the inclusive-jet case. The effect is captured better by the late-kT algorithm than by soft drop in the case of c-jets.
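For orientation, soft drop declusters the jet and keeps the first splitting into subjets 1 and 2 that passes the standard grooming condition, written below in the usual notation (momentum fraction z, subjet separation ΔR12, jet radius R0, and grooming parameters z_cut and β); the Rg quoted above is the ΔR12 of the selected splitting:

z = \frac{\min(p_{\mathrm{T},1},\,p_{\mathrm{T},2})}{p_{\mathrm{T},1} + p_{\mathrm{T},2}} \;>\; z_{\mathrm{cut}} \left( \frac{\Delta R_{12}}{R_0} \right)^{\beta}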

These measurements serve to refine the tuning of Monte Carlo event generators relating to the heavy-quark mass and strong coupling. Identifying the onset of the dead cone in the vacuum also opens up possibilities for substructure studies in heavy-ion collisions, where emissions induced by the strongly interacting quark–gluon plasma can be isolated.

Salam’s dream visits the Himalayas

After winning the Nobel Prize in Physics in 1979, Abdus Salam wanted to bring world-class physics research opportunities to South Asia. This was the beginning of the BCSPIN programme, encompassing Bangladesh, China, Sri Lanka, Pakistan, India and Nepal. The goal was to provide scientists in South and Southeast Asia with new opportunities to learn from leading experts about developments in particle physics, astroparticle physics and cosmology. Together with Jogesh Pati, Yu Lu and Qaisar Shafi, Salam initiated the programme in 1989. This first edition was hosted by Nepal. Vietnam joined in 2009 and BCSPIN became BCVSPIN. Over the years, the conference has been held as far afield as Mexico.

The most recent edition attracted more than 100 participants to the historic Hotel Shanker in Kathmandu, Nepal, from 9 to 13 December 2024. The conference aimed to facilitate interactions between researchers from BCVSPIN countries and the broader international community, covering topics such as collider physics, cosmology, gravitational waves, dark matter, neutrino physics, particle astrophysics, physics beyond the Standard Model and machine learning. Participants ranged from renowned professors from across the globe to aspiring students.

Speaking of aspiring students, the main event was preceded by the BCVSPIN-2024 Masterclass in Particle Physics and Workshop in Machine Learning, hosted at Tribhuvan University from 4 to 6 December. The workshop provided 34 undergraduate and graduate students from around Nepal with a comprehensive introduction to particle physics, high-energy physics (HEP) experiments and machine learning. In addition to lectures, the workshop engaged students in hands-on sessions, allowing them to experience real research by exploring core concepts and applying machine-learning techniques to data from the ATLAS experiment. The students’ enthusiasm was palpable as they delved into the intricacies of particle physics and machine learning. The interactive sessions were particularly engaging, with students eagerly participating in discussions and practical exercises. Highlights included a special talk on artificial intelligence (AI) and a career development session focused on crafting CVs, applications and research statements. These sessions ensured participants were equipped with both academic insights and practical guidance. The impact on students was profound, as they gained valuable skills and networking opportunities, preparing them for future careers in HEP.

The BCVSPIN conference officially started the following Monday. In the spirit of BCVSPIN, the first plenary session featured an insightful talk on the status and prospects of HEP in Nepal, providing valuable insights for both locals and newcomers to the initiative. Then, the latest and near-future physics highlights of experiments such as ATLAS, ALICE and CMS, as well as Belle, DUNE and IceCube, were showcased. From physics performance such as ATLAS nailing b-tagging with graph neural networks, to the most elaborate measurement yet of the W boson mass by CMS, not to mention ProtoDUNE’s runs exceeding expectations, the audience was offered comprehensive reviews of recent breakthroughs on the experimental side. The younger physicists keen to continue or start hardware efforts surely appreciated the overview and schedule of the different upgrade programmes. The theory talks covered, among others, dark-matter models, our dear friend the neutrino and the interactions between the two. A special talk on AI invited the audience to reflect on what AI really is and how – in the midst of the ongoing revolution – it impacts the field of physics and physicists themselves. Overviews of long-term future endeavours such as the Electron–Ion Collider and the Future Circular Collider concluded the programme.

BCVSPIN offers younger scientists precious connections with physicists from the international community

A special highlight of the conference was a public lecture “Oscillating Neutrinos” by the 2015 Nobel Laureate Takaaki Kajita. The event was held near the historical landmark of Patan Durbar Square, in the packed auditorium of the Rato Bangala School. This centre of excellence is known for its innovative teaching methods and quality instruction. More than half the room was filled with excited students from schools and universities, eager to listen to the keynote speaker. After a very pedagogical introduction explaining the “problem of solar neutrinos”, Kajita shared his insights on the discovery of neutrino oscillations and its implications for our understanding of the universe. His presentation included historical photographs of the experiments in Kamioka, Japan, as well as his participation at BCVSPIN in 1994. After encouraging the students to become scientists and answering as many questions as time allowed, he was swept up in a crowd of passionate Nepali youth, thrilled to be in the presence of such a renowned physicist.

The BCVSPIN initiative has changed the landscape of HEP in South and Southeast Asia. With participation made affordable for students, it is a stepping stone for the younger generation of scientists, offering them precious connections with physicists from the international community.

CDF addresses W-mass doubt

The CDF II experiment

It’s tough to be a lone dissenting voice, but the CDF collaboration is sticking to its guns. Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model (SM) prediction. All other measurements are statistically compatible with the SM, though slightly higher, including the most recent by the CMS collaboration at the LHC, which almost matched CDF’s stated precision of 9.4 MeV (CERN Courier November/December 2024 p7).

With CMS’s measurement came fresh scrutiny for the CDF collaboration, which had established one of the most interesting anomalies in fundamental science – a higher-than-expected W mass might reveal the presence of undiscovered heavy virtual particles. Particular scrutiny focused on the quoted momentum resolution of the CDF detector, which the collaboration claims exceeds the precision of any other collider detector by more than a factor of two. A new analysis by CDF verifies the stated accuracy of 25 parts per million by constraining possible biases using a large sample of cosmic-ray muons.

“The publication lays out the ‘warts and all’ of the tracking aspect and explains why the CDF measurement should be taken seriously despite being in disagreement with both the SM and silicon-tracker-based LHC measurements,” says spokesperson David Toback of Texas A&M University. “The paper should be seen as required reading for anyone who truly wants to understand, without bias, the path forward for these incredibly difficult analyses.”

The 2022 W-mass measurement exclusively used information from CDF’s drift chamber – a descendant of the multiwire proportional chamber invented at CERN by Georges Charpak in 1968 – and discarded information from its inner silicon vertex detector as it offered only marginal improvements to momentum resolution. The new analysis by CDF collaborator Ashutosh Kotwal of Duke University studies possible geometrical defects in the experiment’s drift chamber that could introduce unsuspected biases in the measured momenta of the electrons and muons emitted in the decays of W bosons.

“Silicon trackers have replaced wire-based technology in many parts of modern particle detectors, but the drift chamber continues to hold its own as the technology of choice when high accuracy is required over large tracking volumes for extended time periods in harsh collider environments,” opines Kotwal. “The new analysis demonstrates the efficiency and stability of the CDF drift chamber and its insensitivity to radiation damage.”

The CDF II detector operated at Fermilab’s Tevatron collider from 1999 to 2011. Its cylindrical drift chamber was coaxial with the colliding proton and antiproton beams, and immersed in an axial 1.4 T magnetic field. Track parameters were obtained from helical fits to the hits recorded in the chamber.

Boost for compact fast radio bursts

Fast radio bursts (FRBs) are short but powerful bursts of radio waves that are believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered by Duncan Lorimer and his student David Narkevic in 2007 while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, located both within and beyond the Milky Way. These bursts usually last only a few milliseconds but can release enormous amounts of energy – an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years. The exact mechanism underlying their creation, however, remains a mystery.

Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).

The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary 20 m-wide and 100 m-long semi-cylindrical paraboloidal reflectors with a focal length of 5 m (see “Right on CHIME” figure). The 256 dual-polarisation feeds suspended along each axis give it a field of view of more than 200 square degrees. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for the detection of FRBs. The telescope receives radio waves in the frequency range of 400 to 800 MHz.

Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to better than 30,000 km. This emission size contradicted expectations from far-away models. It is more consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.

Additionally, FRB 20221022A’s detection paper notes a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. Its authors interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing by an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing the possible sources and mechanisms that power FRBs. Moreover, they highlight the potential of the scintillation technique to explore the emission mechanisms of FRBs and understand their environments.

The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.

Charm jets lose less energy

ALICE figure 1

Collisions between lead ions at the LHC generate the hottest and densest system ever created in the laboratory. Under these extreme conditions, quarks and gluons are no longer confined inside hadrons but instead form a quark–gluon plasma (QGP). Being heavier than the more abundantly produced light quarks, charm quarks play a special role in probing the plasma, since they are created in the collision before the plasma is formed and interact with it as they traverse the collision zone. Charm jets, which are clusters of particles originating from charm quarks, have been investigated for the first time by the ALICE collaboration in Pb–Pb collisions at the LHC, using D0 mesons (which contain a charm quark) as tags.

The primary interest lies in measuring the extent of energy loss experienced by different types of particles as they traverse the plasma, referred to as “in-medium energy loss”. This energy loss specifically depends on the particle type and particle mass, varying between quarks and gluons. Due to their larger mass, charm quarks at low transverse momentum do not reach the speed of light and lose substantially less energy than light quarks through both collisional and radiative processes, as gluon radiation by massive quarks is suppressed: the so-called “dead-cone effect”. Additionally, gluons, which carry a larger colour charge than quarks, experience greater energy loss in the QGP as quantified by the Casimir factors CA = 3 for gluons and CF = 4/3 for quarks. This makes the charm quark an ideal probe for studying the QGP properties. ALICE is well suited to study the in-medium energy loss of charm quarks, which is dependent on the mass of the charm quark and its colour charge.

The production yield of charm jets tagged with fully reconstructed D0 mesons (D0 → K⁻π⁺) in central Pb–Pb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2 was measured by ALICE. The results are reported in terms of the nuclear modification factor (RAA), which is the ratio of the particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled by the number of binary nucleon–nucleon collisions. A measured nuclear modification factor of unity would indicate the absence of final-state effects.
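Schematically, the nuclear modification factor used here takes the standard form below, with ⟨Ncoll⟩ the average number of binary nucleon–nucleon collisions and dN/dpT the transverse-momentum spectra in the two collision systems:

R_{\mathrm{AA}} = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\, \frac{\mathrm{d}N_{\mathrm{PbPb}}/\mathrm{d}p_{\mathrm{T}}}{\mathrm{d}N_{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}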

The results, shown in figure 1, show a clear suppression (RAA < 1) for both charm jets and inclusive jets (that mainly originate from light quarks and gluons) due to energy loss. Importantly, the charm jets exhibit less suppression than the inclusive jets within the transverse momentum range of 20 to 50 GeV, which is consistent with mass and colour-charge dependence.

The measured results are compared with theoretical model calculations that include mass effects in the in-medium energy loss. Among the different models, LIDO incorporates both the dead-cone effect and the colour-charge effects, which are essential for describing the energy-loss mechanisms. Consequently, it shows reasonable agreement with experimental data, reproducing the observed hierarchy between charm jets and inclusive jets.

The present finding provides a hint of the flavour-dependent energy loss in the QGP, suggesting that charm jets lose less energy than inclusive jets. This highlights the quark-mass and colour-charge dependence of the in-medium energy-loss mechanisms.
