
Using top quarks to probe nature’s secrets

CMS figure 1

Despite its exceptional success, we know that the standard model (SM) is incomplete. To date, the LHC has not yet found clear indications of physics beyond the SM (BSM), which might mean that the BSM energy scale is above what can be directly probed at the LHC. An alternative way to probe BSM physics is through searches for off-shell effects, which can be done using the effective field theory (EFT) framework. By treating the SM Lagrangian as the lowest-order term in a perturbative expansion, EFT allows us to include higher-dimension operators in the Lagrangian, while respecting the experimentally verified SM symmetries.

Operators

The CMS collaboration recently performed a search for BSM physics using EFT, analysing data containing top quarks with additional final-state leptons. The top quark is of particular interest because of its large mass, which results in a Higgs–Yukawa coupling of order unity; many BSM models link the top quark’s large mass to large couplings to new physics. In the context of top-quark EFT, there are 59 independent operators at dimension six, controlled by so-called Wilson coefficients, 26 of which produce final-state leptons. These coefficients enter the model as corrections to the SM matrix element, with a first term corresponding to the interference between the SM and BSM contributions, and a second term reflecting pure BSM effects.
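Schematically, for operators O_i suppressed by a new-physics scale Λ, the prediction for any observable is a quadratic polynomial in the Wilson coefficients c_i:

$$ |\mathcal{M}|^{2} = |\mathcal{M}_{\mathrm{SM}}|^{2} + \sum_{i}\frac{c_{i}}{\Lambda^{2}}\,2\,\mathrm{Re}\big(\mathcal{M}_{\mathrm{SM}}^{*}\mathcal{M}_{i}\big) + \sum_{i,j}\frac{c_{i}c_{j}}{\Lambda^{4}}\,\mathcal{M}_{i}^{*}\mathcal{M}_{j} $$

The linear term is the SM–BSM interference and the quadratic term the pure BSM contribution, so each predicted yield can be parametrised once as a function of all coefficients and then fitted.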

The analysis was performed on the Run 2 proton–proton collision sample, corresponding to an integrated luminosity of 138 fb⁻¹. It obtained limits on those 26 dimension-six coefficients, with signals simulated at detector level to leading-order precision (plus an additional parton when possible), exploiting six final-state signatures with different numbers of top quarks and leptons: ttH, ttℓν, ttℓℓ, tℓℓq, tHq and tttt. The analysis splits the data into 43 discrete categories, based primarily on lepton multiplicity, total lepton charge, and total jet or b-quark jet multiplicities. The events are analysed as differential distributions in the kinematics of the final-state leptons and jets.

CMS figure 2

A statistical analysis is performed using a profiled likelihood to extract the 68% and 95% confidence intervals for all 26 Wilson coefficients by varying one of them while profiling the other 25. All the coefficients are compatible with zero (i.e. in agreement with the SM) at the 95% confidence level. For many of them, these results are the most competitive to date, even when compared to analyses that fit only one or two coefficients. Figure 1 shows how the 95% confidence intervals (2σ limit) translate into upper limits on the energy scale of the probed BSM interaction. 
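The mechanics of such a scan can be illustrated with a toy likelihood (a schematic sketch, not the CMS statistical model; the function and all numbers are invented). One coefficient c is scanned while a second, correlated parameter is re-fitted at each point:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D profile-likelihood scan: c is the scanned Wilson coefficient,
# theta stands in for the 25 profiled coefficients and nuisance parameters.
def nll(c, theta):
    # invented negative log-likelihood with a c-theta correlation
    return 0.5 * (c - 0.3) ** 2 + 0.5 * (theta - 0.1) ** 2 + 0.4 * c * theta

scan = np.linspace(-2.0, 2.0, 81)
profile = np.array(
    [minimize(lambda th, c=c: nll(c, th[0]), x0=[0.0]).fun for c in scan]
)
q = 2.0 * (profile - profile.min())   # profile-likelihood-ratio test statistic
inside = scan[q < 3.84]               # ~95% CL for one parameter (chi2, 1 dof)
print(f"95% CL interval: [{inside.min():.2f}, {inside.max():.2f}]")
```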

The CMS collaboration will continue to refine these measurements by expanding upon the final-state observables and leveraging the Run 3 data sample. With the HL-LHC quickly approaching, the future of BSM physics searches is full of potential.

GBAR joins the anticlub

The GBAR experiment at CERN has joined the select club of experiments that have succeeded in synthesising antihydrogen atoms. Located at the Antiproton Decelerator (AD), GBAR aims to test Einstein’s equivalence principle by measuring the acceleration of an antihydrogen atom in Earth’s gravitational field and comparing it with that of normal hydrogen. 

Producing and slowing down an antiatom enough to see it in free fall is no mean feat. To achieve this, the AD’s 5.3 MeV antiprotons are decelerated and cooled in the ELENA ring and a packet of a few million 100 keV antiprotons is sent to GBAR every two minutes. A pulsed drift tube further decelerates the packet to an adjustable energy of a few keV. In parallel, a linear particle accelerator sends 9 MeV electrons onto a tungsten target, producing positrons, which are accumulated in a series of electromagnetic traps. Just before the antiproton packet arrives, the positrons are sent to a layer of nanoporous silica, from which about one in five positrons emerges as a positronium atom. When the antiproton packet crosses the resulting cloud of positronium atoms, a charge exchange can take place, with the positronium giving up its positron to the antiproton, forming antihydrogen.

At the end of 2022, during an operation that lasted several days, the GBAR collaboration detected some 20 antihydrogen atoms produced in this way, validating the “in-flight” production method for the first time. The collaboration will now improve the production of antihydrogen atoms to enable precision measurements, for example, of its spectroscopic properties.

The first antihydrogen atoms were produced at CERN’s LEAR facility in 1995, but at an energy too high for any measurement to be made. Following this early success, CERN’s Antiproton Accumulator (used for the discovery of the W and Z bosons in 1983) was repurposed as a decelerator, becoming the AD, which is unique worldwide in providing low-energy antiprotons to antimatter experiments. After the demonstration of storing antiprotons by the ATRAP and ATHENA experiments, ALPHA, a successor of ATHENA, was the first experiment to merge trapped antiprotons and positrons and to trap the resulting antihydrogen atoms. Since then, ATRAP and ASACUSA have also achieved these two milestones, and AEgIS has produced pulses of antiatoms. GBAR now joins this elite club, having produced 6 keV antihydrogen atoms in-flight.

GBAR is also not alone in its aim of testing Einstein’s equivalence principle with atomic antimatter. ALPHA and AEgIS are also working towards this goal using complementary approaches.

Time dilation finally observed in quasars

A quasar in the very early universe

Within astronomy and cosmology, the idea that the universe is continuously expanding is a cornerstone of the standard cosmological model. For example, when measuring the distance of astronomical objects one often uses their redshift, which is induced by their velocity with respect to us due to the expansion. The expansion itself has, however, never been directly measured: no measurement exists that shows the redshift of a single object increasing with time. Although such a measurement is not far beyond the current capabilities of astrophysics, it is unlikely to be performed soon. Rather, evidence for the expansion is based on correlations within populations of astrophysical objects. However, not all studies agree with this standard assumption.

One population study that supports the standard model concerns type Ia supernovae, specifically the observed correlation between their duration and distance. Such a correlation is predicted to result from the time dilation induced by the higher recession velocity of more distant objects. Supporting this picture, gamma-ray bursts occurring at larger distances appear, on average, to last longer than those that occur nearby. However, similar studies of quasars had thus far not shown any dependence of their variability timescale on distance, thereby contradicting special relativity and leading to an array of alternative hypotheses.

Detailed studies

Quasars are active galaxies containing a supermassive black hole surrounded by a relativistic accretion disk. Due to their brightness they can be observed with redshifts up to about z = 8, and special relativity implies that their variability should appear stretched by a factor (1 + z), i.e. nine times slower at z = 8, than that of nearby quasars. As previous studies did not observe such time dilation, alternative theories were proposed, including some that cast doubt on the extragalactic nature of quasars. A new, detailed study now removes the need for such theories.

These results do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model

In order to observe time dilation one requires a standard clock. Supernovae are ideal for this purpose because these explosions are all nearly identical, allowing their duration to be used to measure time dilation. For quasars the issue is more complicated, as the variability of their brightness appears almost random. However, the variability can be modelled using a so-called damped random walk (DRW), a random process combined with an exponential damping component. This model does not allow the brightness of a quasar to be predicted, but it contains a characteristic timescale in the exponent that should correlate with redshift due to time dilation.
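A minimal simulation illustrates why the DRW timescale acts as a clock (a toy Ornstein–Uhlenbeck model with invented parameters, not the authors’ pipeline): the timescale recovered from a simulated light curve stretches by exactly the dilation factor (1 + z):

```python
import numpy as np

# Toy damped-random-walk light curve: dx = -x/tau dt + sigma dW
def drw(tau, n=100_000, dt=1.0, sigma=0.1, seed=1):
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = np.zeros(n)
    for i in range(1, n):                 # Euler-Maruyama step
        x[i] = x[i - 1] * (1.0 - dt / tau) + noise[i]
    return x

def timescale(x, max_lag=5000):
    """Estimate tau from the 1/e point of the autocorrelation function."""
    x = x - x.mean()
    c0 = x @ x / len(x)
    for lag in range(1, max_lag):
        if x[:-lag] @ x[lag:] / (len(x) - lag) < c0 / np.e:
            return lag
    return max_lag

tau_rest = 200.0                          # rest-frame timescale, arbitrary units
for z in (0, 3):
    lc = drw(tau_rest * (1 + z))          # time dilation stretches the clock
    print(f"z = {z}: recovered timescale ~ {timescale(lc)}"
          f" (expect {tau_rest * (1 + z):.0f})")
```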

This idea has now been tested by Geraint Lewis and Brendon Brewer of the universities of Sydney and Auckland, respectively. The pair studied 190 quasars with redshifts up to z = 4, observed over a 20-year period by the Sloan Digital Sky Survey and Pan-STARRS1, and applied a Bayesian analysis to look for a correlation between the DRW parameters and redshift. The data were found to best match a universe where the DRW timescale scales as (1 + z)^n with n = 1.28 ± 0.29, making it compatible with n = 1, the value expected from standard physics. This contradicts previous measurements, which the authors attribute to the smaller quasar samples used in earlier studies: the complex nature of quasars and the large variability across their population require long observations of a large, uniform sample to make the time dilation effect visible.

These new results, which were made possible due to the large amounts of data becoming available from large observatories, do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model.

Electrical perturbation uproots Run 3 operations

Crack in LHC bellows

At around 1 a.m. on 17 July, the LHC beams were dumped after only nine minutes in collision due to a radiofrequency interlock caused by an electrical perturbation. Approximately 300 milliseconds after the beams were cleanly dumped, several superconducting magnets lost their superconducting state, or quenched. Among them were the inner-triplet magnets located to the left of Point 8, which focus the beams for the LHCb experiment. While occasional quenches of some LHC magnets are to be expected, the large forces resulting from this particular event breached the helium enclosure into the insulation vacuum, rapidly degrading that vacuum and prompting a series of interventions with implications for the 2023 Run 3 schedule.

The leak occurred between the LHC’s cryogenic circuit, which contains the liquid helium, and the insulation vacuum that separates the cold magnet from the warm outer vessel (the cryostat) – a crucial barrier for preventing heat transfer from the surrounding LHC tunnel to the interior of the cryostat. As a result of the leak, the insulation vacuum filled with helium gas, cooling down the cryostat and causing condensation to form and freeze on the outside. 

By 24 July the CERN teams had traced the leak to a crack in one of the more than 2500 bellows that compensate for thermal expansion and contraction on the cryogenic distribution lines. The crack, measuring just 1.6 mm in length, is thought to have been caused by a sudden pressure rise when the magnet quench protection system (QPS) kicked in. Following the electrical perturbation, the QPS had dutifully triggered the quench heaters (which are designed to bring the whole magnet out of the superconducting state in a controlled and homogeneous manner) of the magnets concerned, generating a heat wave according to expectations.

It is the first time that such a breach event has occurred; the teamwork between many working groups, including safety, accelerator operations, vacuum, cryogenics, magnets, survey, beam instrumentation, machine protection and electrical quality assurance, as well as material and mechanical engineering, made a quick assessment and action plan possible. On 25 July the affected bellows was removed. A new bellows was installed on 28 July, the affected modules were closed, and the insulation vacuum was pumped.

The electrical perturbation turned out to be caused by an uprooted tree falling on power lines in the nearby Swiss municipality of Morges. In early August, as the Courier went to press, the repairs were finished and the implications for Run 3 physics were being assessed. The choice is between preparing the machine for a short proton–proton run to recover some of the missed time, or sticking to the heavy-ion run planned for the end of the year, given that 2022 saw no full heavy-ion run. The favoured scenario, to stick with the heavy-ion run, was presented to the LHC machine committee on 26 July.

The W boson’s midlife crisis

The discovery of the W boson at CERN in 1983 may well be considered the birth of precision electroweak physics. Measurements of the W boson’s couplings and mass have become ever more precise, progressively weaving in knowledge of other particle properties through quantum corrections. Just over a decade ago, the combination of several Standard Model (SM) parameters with measurements of the W-boson mass led to a prediction of a relatively low Higgs-boson mass, of order 100 GeV, prior to its discovery. The discovery of the Higgs boson in 2012 with a mass of about 125 GeV was hailed as a triumph of the SM. Last year, however, an unexpectedly high value of the W-boson mass measured by the CDF experiment threw a spanner in the works. One might say the 40-year-old W boson encountered a midlife crisis.

The mass of the W boson, mW, is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson. The mass of each fermion is determined by the strength of its interaction with the Brout–Englert–Higgs field, but this strength is currently only known to an accuracy of approximately 10% at best; future measurements from the High-Luminosity LHC and a future e⁺e⁻ collider are required to achieve percent-level accuracy. Meanwhile, mW is predicted with an accuracy better than 0.01%. At tree level, this mass depends only on the mass of the Z boson and the weak and electromagnetic couplings. The first measurements of mW by the UA1 and UA2 experiments at the SppS collider at CERN were in remarkable agreement with this prediction, within the large uncertainties. Further measurements at the Tevatron at Fermilab and the Large Electron–Positron collider (LEP) at CERN achieved sufficient precision to probe the presence of higher-order electroweak corrections, such as from a loop containing top and bottom quarks.
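The tree-level relation is the textbook on-shell formula, quoted here for orientation:

$$ m_W^2\left(1 - \frac{m_W^2}{m_Z^2}\right) = \frac{\pi\alpha}{\sqrt{2}\,G_F}, $$

so that mW follows from the Z-boson mass, the fine-structure constant α and the Fermi constant G_F; the loop corrections discussed below shift this value at the sub-percent level, which is exactly what the measurements probe.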

Increasing sophistication

Measurements of mW at the four LEP experiments were performed in collisions producing two W bosons. Hadron colliders, by contrast, can produce a single W-boson resonance, simplifying the measurement when utilising the decay to an electron or muon and an associated neutrino. However, this simplification is countered by the complication of the breakup of the hadrons, along with multiple simultaneous hadron–hadron interactions. Measurements at the Tevatron and LHC have required increasing sophistication to model the production and decay of the W boson, as well as the final-state lepton’s interactions in the detectors. The average time between the available datasets and the resulting published measurement has increased from two years for the first CDF measurement in 1991 to more than 10 years for the most recent CDF measurement announced last year (CERN Courier May/June 2022 p9). The latter benefitted from a factor of four more W bosons than the previous measurement, but suffered from a higher number of additional simultaneous interactions. The challenge of modelling these interactions while also increasing the measurement precision required many years of detailed study. The end result, mW = 80433.5 ± 9.4 MeV, differs from the SM prediction of mW = 80357 ± 6 MeV by approximately seven standard deviations (see “Out of order” figure).

CDF measurement of the W mass

The SM calculation of mW includes corrections from single loops involving fermions or the Higgs boson, as well as from two-loop processes that also include gluons. The splitting of the W boson into a top- and bottom-quark loop produces the largest correction to the mass: for every 1 GeV increase in top-quark mass the predicted W mass increases by a little over 6 MeV. Measurements of the top-quark mass at the Tevatron and LHC have reached a precision of a few hundred MeV, thus contributing an uncertainty on mW of only a couple of MeV. The calculated mW depends only logarithmically on the Higgs-boson mass mH, and given the accuracy of the LHC mH measurements, it contributes negligibly to the uncertainty on mW. The tree-level dependences of mW on the Z-boson mass and on the electromagnetic coupling strength each contribute an additional couple of MeV to the uncertainty. The robust prediction of the SM allows an incisive test through mW measurements, and it would appear to fail in the face of the recent CDF measurement.

Since the release of the CDF result last year, physicists have held extensive and detailed discussions, with a recurring focus on the measurement’s compatibility with the SM prediction and with the measurements of other experiments. Further discussions and workshops have reviewed the suite of Tevatron and LHC measurements, hypothesising effects that could have led to a bias in one or more of the results. These potential effects are subtle, as fundamentally the W-boson signature is striking and simple: a single charged lepton, an electron or muon, with no observable particle balancing its momentum. Any source of bias would have to lie in a higher-order theoretical or experimental effect, and the analysts have studied and quantified these in great detail.

Progress

In the spring of this year ATLAS contributed an update to the story. The collaboration re-analysed its 2011 data, applying a comprehensive statistical fit based on a profile likelihood as well as the latest global knowledge of parton distribution functions (PDFs), which describe the momentum distributions of quarks and gluons inside the proton. The preliminary result (mW = 80360 ± 16 MeV) reduces both the uncertainty and the central value of its previous result, published in 2017, further increasing the tension between the ATLAS result and that of CDF.

Meanwhile, the Tevatron+LHC W-mass combination working group has carried out a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, and provided a combined mass value using the latest published measurement from each experiment and from LEP. These studies, due to be presented at the European Physical Society High-Energy Physics conference in Hamburg in late August, give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no significant issues have been identified in the measurement procedures and results, the studies shed significant light on their details and differences.

LHC versus Tevatron

Two important aspects of the Tevatron and LHC measurements are the modelling of the momentum distribution of each parton in the colliding hadrons, and the angular distribution of the W boson’s decay products. The higher energy of the LHC increases the importance of the momentum distributions of gluons and of quarks from the second generation, though these can be constrained using the large samples of W and Z bosons. In addition, the combination of results from centrally produced W bosons at ATLAS with more forward W-boson production at LHCb reduces uncertainties from the PDFs. At the Tevatron, proton–antiproton collisions produced a large majority of W bosons via the valence up and down (anti)quarks inside the (anti)proton, and these are also constrained by measurements at the Tevatron. For the W-boson decay, the calculation is common to the LHC and the Tevatron, and precise measurements of the decay distributions by ATLAS are able to distinguish several calculations used in the experiments.

W-mass measuring

In any combination of measurements, the primary focus is on the uncertainty correlations. In the case of mW, many uncertainties are constrained in situ and are therefore uncorrelated. The most significant source of correlated uncertainty is the PDFs. In order to evaluate these correlations, the combination working group generated large samples of events and produced simplified models of the CDF, DØ and ATLAS detectors. Several sets of PDFs were studied to determine their compatibility with broader W- and Z-boson measurements at hadron colliders. For each of these sets the correlations and combined mW values were determined, opening a panorama view of the impact of PDFs on the measurement (see “Measuring up” figure).

The mass of the W boson is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson

The first conclusion from this study is that the compatibility of all PDF sets with W- and Z-boson measurements is generally low: the most compatible PDF set, CT18 from the CTEQ collaboration, gives a probability of only 1.5% that the suite of measurements is consistent with the predictions. Using this PDF set for the W-boson mass combination gives an even lower compatibility of 0.5%. When the CDF result is removed, the compatibility of the combined mW value is good (91%), and when comparing this “N−1” combined value to the CDF value for the CT18 set, the difference is 3.6σ. The results are considered unlikely to be compatible, though the possibility cannot be excluded in the absence of an identified bias. If the CDF measurement is removed, the combination yields a mass of mW = 80369.2 ± 13.3 MeV for the CT18 set, while including all measurements results in a mass of mW = 80394.6 ± 11.5 MeV. The former value is consistent with the SM prediction, while the latter is 2.6σ higher.
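The role of the correlations can be illustrated with a toy two-measurement combination in the BLUE (best linear unbiased estimator) scheme; the uncertainty breakdown below is an invented placeholder, not the working group’s inputs:

```python
import numpy as np

# Toy BLUE combination of two mW measurements with a shared (PDF-like) uncertainty.
m    = np.array([80360.0, 80433.5])   # MeV, illustrative central values
stat = np.array([16.0, 9.4])          # MeV, uncorrelated uncertainties
pdf  = np.array([5.0, 5.0])           # MeV, fully correlated part (assumed)

cov = np.diag(stat**2) + np.outer(pdf, pdf)           # covariance with correlation
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()                                          # BLUE weights minimise the variance
m_hat = w @ m
sigma = np.sqrt(w @ cov @ w)
chi2 = (m - m_hat) @ np.linalg.solve(cov, m - m_hat)  # compatibility, 1 degree of freedom
print(f"combined mW = {m_hat:.1f} +/- {sigma:.1f} MeV, chi2/ndf = {chi2:.2f}/1")
```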

Two scenarios

The results of the preliminary combination clearly separate two possible scenarios. In the first, the mW measurements are unbiased and differ due to large fluctuations and the PDF dependence of the W- and Z-boson data. In the second, a bias in one or more of the measurements produces the low compatibility of the measured values. Future measurements will clarify the likelihood of the first scenario, while further studies could identify effect(s) that point to the second scenario. In either case the next milestone will take time due to the exquisite precision that has now been reached, and to the challenges in maintaining analysis teams for the long timescales required to produce a measurement. The W boson’s midlife crisis continues, but with time and effort the golden years will come. We can all look forward to that.

Gravitational waves: a golden era

An array of pulsars

The existence of dark matter in the universe is one of the most important puzzles in fundamental physics. It is inferred solely by means of its gravitational effects, such as on stellar motions in galaxies or on the expansion history of the universe. Meanwhile, non-gravitational interactions between dark matter and the known particles described by the Standard Model have not been detected, despite strenuous and advanced experimental efforts.

Such a situation suggests that new particles and fields, possibly similar to those of the Standard Model, may have been similarly present across the entire cosmological history of our universe, but with only very tiny interactions with visible matter. This intriguing idea is often referred to as the paradigm of dark sectors and is made even more compelling by the lack of new particles seen at the LHC and laboratory experiments so far.

Dark universe

Cosmological observations, above all those of the cosmic microwave background (CMB), currently represent the main tool to test such a paradigm. The primary example is that of dark radiation, i.e. putative new dark particles that, unlike dark matter, behave as relativistic species at the energy scales probed by the CMB. The most recent data collected by the Planck satellite constrain such dark particles to contribute at most around 30% of the energy of a single neutrino species at the recombination epoch (when atoms formed and the universe became transparent, around 380,000 years after the Big Bang).

While such observations represent a significant advance, the early universe was characterised by temperatures in the MeV range and above (enabling nucleosynthesis), possibly as large as 10¹⁶ GeV. Some of these temperatures correspond to energy scales that cannot be probed via the CMB, nor directly with current or prospective particle colliders. Even if new particles had significant interactions with SM particles at such high temperatures, any electromagnetic radiation in the hot universe was continuously scattered off matter (electrons), making it impossible for any light from such early epochs to reach our detectors today. The question then arises: is there another channel to probe the existence of dark sectors in the early universe?

We are entering a golden era of GW observations across the frequency spectrum

For more than a century, a different signature of gravitational interactions has been known to be possible: waves, analogous to those of the electromagnetic field, carrying fluctuations of gravitational fields. The experimental effort to detect gravitational waves (GWs) had a first amazing success in 2015, when waves generated by the merger of two black holes were first detected by the LIGO and Virgo interferometers in the US and Italy.

Now, the GW community is on the cusp of another incredible milestone: the detection of a GW background, generated by all sources of GWs across the history of our universe. Recently, based on more than a decade of observations, several networks of radio telescopes called pulsar timing arrays (PTAs) – NANOGrav in North America, EPTA in Europe, PPTA in Australia and CPTA in China – produced tentative evidence for such a stochastic GW background based on the influence of GWs on pulsars (see “Hints of low-frequency gravitational waves found” and “Clocking gravity” image). Together with next-generation interferometer-based GW detectors such as LISA and the Einstein Telescope, and new theoretical ideas from particle physics, the observations suggest that we are entering an exciting new era of observational cosmology that connects the smallest and largest scales. 

Particle physics and the GW background

Once produced, GWs interact only very weakly with any other component of the universe, even at the high temperatures present at the earliest times. Therefore, whereas photons can tell us about the state of the universe at recombination, the GW background is potentially a direct probe of high-energy processes in the very early universe. Unlike GWs that reach Earth from the locations of binary systems of compact objects, the GW background is expected to be mostly isotropic in the sky, very much like the CMB. Furthermore, rather than being a transient signal, it should persist in the sensitivity bands of GW detectors, similar to a noise component but with peculiarities that are expected to make a detection possible. 

Colliding spherical pressure waves

As early as 1918, Einstein quantified the power emitted in GWs by a generic source. Whereas electromagnetic radiation is sourced by the dipole moment of a charge distribution, the power emitted in GWs is proportional to the square of the third time derivative of the quadrupole moment of the source’s mass-energy distribution. The two essential conditions for a source to emit GWs are therefore that it should be sufficiently far from spherical symmetry and that its distribution should change sufficiently quickly with time.
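In compact form, Einstein’s quadrupole formula for the radiated power reads (with Q_ij the traceless mass quadrupole moment and the brackets denoting a time average):

$$ P_{\mathrm{GW}} = \frac{G}{5c^5}\,\big\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \big\rangle, $$

which vanishes for static or spherically symmetric sources, encoding the two conditions above.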

What possible particle-physics sources would satisfy these conditions? One of the most thoroughly studied phenomena as a source of GWs is the occurrence of a phase transition, typically associated with the breaking of a fundamental symmetry. Specifically, only those phase transitions that proceed via the nucleation, expansion and collision of cosmic bubbles (analogous to the phase transition of liquid water to vapour) can generate a significant amount of GWs (see “Ringing out” image). Inside any such bubble the universe is already in the broken-symmetry phase, whereas beyond the bubble walls the symmetry is still unbroken. Eventually, the state of lowest energy inside the bubbles prevails via their rapid expansion and collisions, which fill up the universe. Even though such bubbles may initially be highly spherical, once they collide the energy distribution is far from being so, while their rapid expansion provides a time variation.  

The occurrence of two phase transitions is in fact predicted by the Standard Model (SM): one related to the spontaneous breaking of the electroweak SU(2) × U(1) symmetry, the other associated with colour confinement and thus the formation of hadronic states. However, dedicated analytical and numerical studies in the 1990s and 2000s concluded that the SM phase transitions are not expected to be of first order in the early universe. Rather, they are expected to proceed smoothly, without any violent release of energy to source GWs. 

Sensitivity of current and future GW observatories

This leads to a striking conclusion: a detection of the GW background would provide evidence for physics beyond the SM – that is, if its origin can be attributed to processes occurring in the early universe. This caveat is crucial, since astrophysical processes in the late universe also contribute to a stochastic GW background. 

In order to claim a particle-physics interpretation for any stochastic GW background, it is thus necessary to appropriately account for astrophysical sources and characterise the expected (spectral) shape of the GW signal from early-universe sources of interest. These tasks are being undertaken by a diverse community of cosmologists, particle physicists and astrophysicists at research institutions all around the world, including in the cosmology group in the CERN TH department.

Precise probing

For particle physicists and cosmologists, it is customary to express the strength of a given stochastic GW signal in terms of the fraction of the energy (density) of the universe today carried by those GWs. The CMB already constrains this “relic abundance” to be less than roughly 10% of ordinary radiation, or about one millionth of that of the dominant component of the universe today, dark energy. Remarkably, current GW detectors are already able to probe stochastic GWs that carry only one billionth of the energy density of the universe.

Generally, the stochastic GW signal from a given source extends over a broad frequency range. The spectrum from many early-universe sources typically peaks at a frequency linked to the expansion rate at the time the source was active, redshifted to today. Under standard assumptions, the early universe was dominated by radiation and the peak frequency of the GW signal increases linearly with the temperature. For instance, the GW frequency range in which LIGO/Virgo/KAGRA are most sensitive (10–100 Hz) corresponds to sources that were active when the universe was as hot as 10⁸ GeV – six orders of magnitude higher than the LHC. The other currently operating GW observatories, PTAs, are sensitive to GWs of much smaller frequencies, around 10⁻⁹–10⁻⁷ Hz, which correspond to temperatures around 10 MeV to 1 GeV (see “Broadband” figure). These are the temperatures at which the QCD phase transition occurred. While, as mentioned above, a signal from the latter is not expected, dark sectors may be active at those temperatures and source a GW signal. In the near (and long-term) future, it is conceivable that new GW observatories will allow us to probe the stochastic GW background across the entire range of frequencies from nHz to 100 Hz.
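A commonly used order-of-magnitude estimate, for a source operating around the Hubble rate at temperature T during radiation domination (g_* counts the relativistic degrees of freedom), is

$$ f_0 \sim 1.6\times10^{-7}\,\mathrm{Hz}\,\left(\frac{T}{1\,\mathrm{GeV}}\right)\left(\frac{g_*}{100}\right)^{1/6}, $$

which indeed maps T ≈ 10⁸ GeV to the LIGO band and T ≈ 10 MeV–1 GeV to the PTA band.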

Laser-interferometer GW detectors on Earth and in space

Together with bubble collisions, another source of peaked GW spectra due to symmetry breaking in the early universe is the annihilation of topological defects, such as domain walls separating different regions of the universe (in this case the corresponding symmetry is discrete). Violent (so-called resonant) decays of new particles, as predicted by some early-universe scenarios, may also contribute strongly to the GW background (albeit possibly only at very large frequencies, beyond the sensitivity reach of current and forecast detectors). Yet another discoverable phenomenon is the collapse of large energy-density fluctuations in the early universe, as predicted in scenarios where the dark matter is made of primordial black holes.

On the other hand, particle-physics sources can also be characterised by very broad GW spectra without large peaks. The most important such source is the inflationary mechanism: during this putative phase of exponential expansion of the universe, GWs would be produced from quantum fluctuations of space–time, stretched by inflation and continuously re-entering the Hubble horizon (i.e. the causally connected part of the universe at any given time) throughout the cosmological evolution. The amount of such primordial GWs is expected to be small. Nonetheless, a broad class of inflationary models predicts GWs with frequencies and amplitudes such that they could be discovered by future measurements of the CMB. In fact, it is precisely via these measurements that Planck and BICEP/Keck Array have been able to strongly constrain the simplest models of inflation. The GWs that can be discovered via the CMB would have very small frequencies (around 10⁻¹⁷ Hz, corresponding to ~eV temperatures). The full spectrum would nonetheless extend to large frequencies, but with such a small amplitude that detection by GW observatories would be unfeasible (except perhaps for the futuristic Big Bang Observer, a proposed successor to the Laser Interferometer Space Antenna, LISA, currently being prepared by the European Space Agency).

Feeling blue

Certain classes of inflationary models could also lead to “blue-tilted” (i.e. rising with frequency) spectra, which may then be observable at GW observatories. For instance, this can occur in models where the inflaton is a so-called axion field (a generalisation of the predicted Peccei–Quinn axion in QCD). Such scenarios naturally produce gauge fields during inflation, which can themselves act as sources of GWs, with possible peculiar properties such as circular polarisation and non-gaussianities. A final phenomenon that would generate a very broad GW spectrum, unrelated to inflation, is the existence of cosmic strings. These one-dimensional defects can originate, for instance, from the breaking of a global (or gauge) rotation symmetry and persist through cosmological history, analogous to cracks that appear in an ice crystal after a phase transition from water.

Astrophysical contributions to the stochastic GW background are certainly expected from binary black-hole systems. At the frequencies relevant for LIGO/Virgo/KAGRA, such a background would be due to black holes with masses of tens of solar masses, whereas in the PTA sensitivity range the background is sourced by binaries of supermassive black holes (with masses up to millions of solar masses), such as those believed to exist at the centres of galaxies. The current PTA indications of a stochastic GW background require detailed analyses to understand whether the signal is due to a particle-physics or an astrophysical source. A smoking gun for the latter origin would be the observation of significant anisotropies in the signal, as it would come from regions where more binary black holes are clustered.

Polarised microwave emission from the CMB

We are entering a golden era of GW observations across the frequency spectrum, and thus in exploring particle physics beyond the reach of colliders and astrophysical phenomena at unprecedented energies. The first direct detection of GWs by LIGO in September 2015 was one of the greatest scientific achievements of the 21st century. The first generation of laser interferometric detectors (GEO600, LIGO, Virgo and TAMA) did not detect any signal and only constrained the gravitational-wave emission from several sources. The second generation (Advanced LIGO and Advanced Virgo) made the first direct detection and has observed almost 100 GW signals to date. The underground Kamioka Gravitational Wave Detector (KAGRA) in Japan joined the LIGO–Virgo observations in 2020. As of 2021, the LIGO–Virgo–KAGRA collaboration is working to establish the International Gravitational Wave Network, to facilitate coordination among ground-based GW observatories across the globe. In the near future, LIGO India (IndIGO) will also join the network of terrestrial detectors.

Despite being sensitive to changes in arm length of the order of 10⁻¹⁸ m, the LIGO, Virgo and KAGRA detectors are not sensitive enough for precise astronomical studies of GW sources. This has motivated a new generation of detectors. The Einstein Telescope (ET) is a design concept for a European third-generation underground GW detector, which would be 10 times more sensitive than the current advanced instruments (see “Joined-up thinking in vacuum science”). On Earth, however, gravitational waves with frequencies lower than 1 Hz are inaccessible due to terrestrial gravity-gradient noise and limitations on the size of the device. Space-based detectors, on the other hand, can access frequencies as low as 10⁻⁴ Hz. Several space-based GW observatories are proposed that would ultimately form a network of laser interferometers in space. They include LISA (planned to launch around 2035), the Deci-hertz Interferometer Gravitational Wave Observatory (DECIGO) led by the Japan Aerospace Exploration Agency, and two Chinese detectors, TianQin and Taiji (see “In synch” figure).

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders

A new kid on the block, atom interferometry, offers a complementary approach to laser interferometry for the detection of GWs. Two atom interferometers coherently manipulated by the same light field can be used as a differential phase meter tracking the distance traversed by the light field. Several terrestrial cold-atom experiments are under preparation, such as MIGA, ZAIGA and MAGIS, or being proposed, such as ELGAR and AION. These experiments will provide measurements in the mid-frequency range between 10⁻² and 1 Hz. Moreover, a space-based cold-atom GW detector called the Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) is expected to probe GWs in a much broader frequency range (10⁻⁷–10 Hz) than LISA.

Astrometry provides yet another powerful way to explore GWs that is not accessible to other probes, i.e. ultra-low frequencies of 10 nHz or less. Here, the passage of a GW over the Earth–star system induces a deflection in the apparent position of a star, which makes it possible to turn astrometric data into a nHz GW observatory. Finally, CMB missions have a key role to play in searching for possible imprints on the polarisation of CMB photons caused by a stochastic background of primordial GWs (see “Acoustic imprints” image). The wavelength of such primordial GWs can be as large as the size of our horizon today, associated with frequencies as low as 10⁻¹⁷ Hz. Whereas current CMB missions allow upper bounds on GWs, future missions such as the ground-based CMB-S4 (CERN Courier March/April 2022 p34) and space-based LiteBIRD observatories will improve this measurement to either detect primordial GWs or place yet stronger upper bounds on their existence.

Outlook 

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders, as well as for understanding astrophysical phenomena in extreme regimes. Several projects are planned and proposed to detect GWs across more than 20 decades of frequency. Such a wealth of data will provide a great opportunity to explore the universe in new ways during the next decades and open a wide window on possible physics beyond the SM.

Hints of low-frequency gravitational waves found

Since their direct discovery in 2015 by the LIGO and Virgo detectors, gravitational waves (GWs) have opened a new view on extreme cosmic events such as the merging of black holes. These events typically generate gravitational waves with frequencies of a few tens to a few thousand hertz, within reach of ground-based detectors. But the universe is also expected to be pervaded by low-frequency GWs in the nHz range, produced by the superposition of astrophysical sources and possibly by high-energy processes at the very earliest times (see “Gravitational waves: a golden era”). 

Announced in late June, news that pulsar timing arrays (PTAs), which infer the presence of GWs via detailed measurements of the radio emission from pulsars, had seen the first evidence for such a stochastic GW background was therefore met with delight by particle physicists and cosmologists alike. “For me it feels that the first gravitational wave observed by LIGO is like seeing a star for the first time, and now it’s like seeing the cosmic microwave background for the first time,” says CERN theorist Valerie Domcke.

Clocking signals

Whereas the laser interferometers LIGO and Virgo detect relative length changes in two perpendicular arms, PTAs clock the highly periodic signals from millisecond pulsars (rapidly rotating neutron stars), some of which are in Earth’s line of sight. A passing GW perturbs spacetime and induces a small delay in the observed arrival time of the pulses. By observing a large sample of pulsars over a long period and correlating the signals, PTAs effectively turn the galaxy into a low-frequency GW observatory. The challenge is to pick out the characteristic signature of this stochastic background, which is expected to induce “red noise” (meaning there should be greater power at lower fluctuation frequencies) in the differences between the measured arrival times of the pulsars and the timing-model predictions. 

The smoking gun of a nHz GW detection is a measurement of the so-called Hellings–Downs (HD) curve predicted by general relativity, which gives the arrival-time correlation of pairs of pulsars as a function of their angular separation; the correlation varies with angle because the quadrupolar nature of GWs introduces directionally dependent changes.
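The HD function itself has a simple closed form. As a quick illustration (a toy evaluation, not a PTA analysis), the snippet below computes the expected correlation for a few separations, using the standard expression for distinct pulsar pairs:

```python
import numpy as np

# Hellings-Downs correlation for a pair of distinct pulsars separated by an
# angle theta; normalised so the correlation tends to 0.5 at zero separation.
def hellings_downs(theta_rad):
    x = (1.0 - np.cos(theta_rad)) / 2.0
    x = np.clip(x, 1e-12, None)        # avoid log(0) at zero separation
    return 0.5 - x / 4.0 + 1.5 * x * np.log(x)

for deg in (10, 49, 82, 120, 180):     # the curve dips negative near ~82 degrees
    corr = hellings_downs(np.radians(deg))
    print(f"{deg:3d} deg: expected correlation = {corr:+.3f}")
```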

Following its first hints of these elusive correlations in 2020, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has released the results of its 15-year dataset. Based on observations of 68 millisecond pulsars distributed over half the galaxy (21 more than in the last release) by the Arecibo Observatory, the Green Bank Telescope and the Very Large Array, the team finds 4σ evidence for HD correlations in both frequentist and Bayesian analyses.

We are opening a new window in the GW universe, where we can observe unique sources and phenomena

A similar signal is seen by the independent European PTA, and the results are also supported by data from the Parkes PTA and others. “Once the partner collaborations of the International Pulsar Timing Array (which includes NANOGrav, the European, Parkes and Indian PTAs) combine these newest datasets, this may put us over the 5σ threshold,” says NANOGrav spokesperson Stephen Taylor. “We expect that it will take us about a year to 18 months to finalise.”

It will take longer to decipher the precise origin of the low-frequency PTA signals. If the background is anisotropic, astrophysical sources such as supermassive black-hole binaries would be the likely origin and one could therefore learn about their environment, population and how galaxies merge. Phase transitions or other cosmological sources tend to lead to an isotropic background. Since the shape of the GW spectrum encodes information about the source, with more data it should become possible to disentangle the signatures of the two potential sources. PTAs and current, as well as next-generation, GW detectors such as LISA and the Einstein Telescope complement each other as they cover different frequency ranges. For instance, LISA could detect the same supermassive black-hole binaries as PTAs but at different times during and after their merger.

“We are opening a new window in the gravitational-wave universe in the nanohertz regime, where we can observe unique sources and phenomena,” says European PTA collaborator Caterina Tiburzi of the Cagliari Observatory in Sardinia.

Muon g-2 update sets up showdown with theory

Muon g-2 measurement

On 10 August, the Muon g-2 collaboration at Fermilab presented its latest measurement of the anomalous magnetic moment of the muon aμ. Combining data from Run 1 to Run 3, the collaboration found aμ = 116 592 055 (24) × 10⁻¹¹, representing a factor-of-two improvement on the precision of its initial 2021 result. The experimental world average for aμ now stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in 2020. However, calculations based on a different theoretical approach (lattice QCD) and a recent analysis of e⁺e⁻ data that feeds into the prediction are in tension with the 2020 calculation, and more work is needed before the discrepancy is understood.

The anomalous magnetic moment of the muon, aμ = (g − 2)/2, where g is the muon’s g-factor, is the difference between the observed value of the muon’s magnetic moment and the Dirac prediction (g = 2), arising from the contributions of virtual particles. This makes aμ, one of the most precisely calculated and measured quantities in physics, an ideal testbed for physics beyond the SM. To measure it, a muon beam is sent into a superconducting storage ring reused from the former g-2 experiment at Brookhaven National Laboratory. Initially aligned, the muon spin axes precess as the muons interact with the magnetic field. Detectors located along the ring’s inner circumference allow the precession rate, and thus aμ, to be determined. Many improvements to the setup have been made since the first run, including better running conditions, more stable beams and an improved knowledge of the magnetic field.
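Schematically (up to sign conventions), the quantity extracted from the precession data is the anomalous precession frequency, the difference between the spin-precession and cyclotron frequencies; at the “magic” muon momentum (γ ≈ 29.3) the electric-field focusing terms cancel and

$$ \omega_a = \omega_s - \omega_c = a_\mu\,\frac{eB}{m_\mu}, $$

so that aμ follows directly from measurements of ωa and of the magnetic field B.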

The new result is based on data taken in 2019 and 2020, and has four times the statistics of the 2021 result. The collaboration also decreased the systematic uncertainty beyond its initial goal. Currently, about 25% of the total data (Run 1–Run 6) has been analysed. The collaboration plans to publish its final results in 2025, targeting a precision of 0.14 ppm compared to the current 0.2 ppm. “We have moved the accuracy bar of this experiment one step further and now we are waiting for the theory to complete the calculations and cross-checks necessary to match the experimental accuracy,” explains collaboration co-spokesperson Graziano Venanzoni of INFN Pisa and the University of Liverpool. “A huge experimental and theoretical effort is going on, which makes us confident that theory prediction will be in time for the final experimental result from FNAL in a few years from now.”

The theoretical picture is foggy. The SM prediction for the anomalous magnetic moment receives contributions from the electromagnetic, electroweak and strong interactions. While the former two can be computed to high precision in perturbation theory, it is only possible to compute the latter analytically in certain kinematic regimes. Contributions from hadronic vacuum polarisation and hadronic light-by-light scattering dominate the overall theoretical uncertainty on aμ at 83% and 17%, respectively.

To date, the experimental results are confronted with two theory predictions: one by the Muon g-2 Theory Initiative based on the data-driven “R-ratio” method, which relies on hadronic cross-section measurements, and one by the Budapest–Marseille–Wuppertal (BMW) collaboration based on simulations of lattice QCD and QED. The latter significantly reduces the discrepancy between the theoretical and measured values. Adding a further puzzle, a recently published hadronic cross-section measurement by the CMD-3 collaboration, which is in tension with all other experiments, narrows the gap between the Muon g-2 Theory Initiative and BMW predictions (see p19).

“This new result by the Fermilab Muon g-2 experiment is a true milestone in the precision study of the Standard Model,” says lattice gauge theorist Andreas Jüttner of CERN and the University of Southampton. “This is really exciting – we are now faced with getting to the roots of various tensions between experimental and theoretical findings.”

Counting half-lives to a nuclear clock

The observation at CERN’s ISOLDE facility of a long-sought decay of the thorium-229 nucleus marks a key step towards a clock that could outperform today’s most precise atomic timekeepers. Publishing the results in Nature, an international team has used ISOLDE’s unique facilities to measure, for the first time, the radiative decay of the metastable state of thorium-229m, opening a path to direct laser-manipulation of a nuclear state to build a new generation of nuclear clocks. 

Today’s best atomic clocks, based on periodic transitions between two electronic states of an atom such as caesium or aluminium held in an optical lattice, achieve a relative systematic frequency uncertainty below 1 × 10⁻¹⁸, meaning they won’t lose or gain a second over about 30 billion years. Nuclear clocks would exploit the periodic transition between two states in the vastly smaller atomic nucleus, which couple less strongly to electromagnetic fields and hence are less vulnerable to external perturbations. In addition to offering a more precise timepiece, nuclear clocks could test the constancy of fundamental parameters such as the fine-structure or strong-coupling constants, and enable searches for ultralight dark matter (CERN Courier September/October 2022 p32).

Higher precision

In 2003 Ekkehard Peik and Christian Tamm of Physikalisch-Technische Bundesanstalt in Germany proposed a nuclear clock based on the transition between the ground state of the thorium-229 nucleus and its first, higher-energy state. The advantage of the 229mTh isomer compared to almost all other nuclear species is its unusually low excitation level (~8 eV), which in principle allows direct laser manipulation. Despite much effort, researchers have not succeeded until now in observing the radiative decay – which is the inverse process of direct laser excitation – of 229mTh to its ground state. This allows, among other things, the isomer’s energy to be determined to higher precision.

In a novel technique based on vacuum-ultraviolet spectroscopy, lead author Sandro Kraemer of KU Leuven and co-workers used ISOLDE to generate an isomeric beam with atomic mass number A = 229, following the decay chain 229Fr → 229Ra → 229Ac → 229Th/229mTh. A fraction of 229Ac decays to the metastable, excited state of 229Th, the isomer 229mTh. To achieve this, the team incorporated the produced 229Ac into six separate crystals of calcium fluoride and magnesium fluoride of different thicknesses. They measured the radiation emitted when the isomer relaxes to its ground state using an ultraviolet spectrometer, determining the wavelength of the observed light to be 148.7 nm. This corresponds to an energy of 8.338 ± 0.024 eV – seven times more precise than the previous best measurements.
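The two quoted numbers are consistent via the standard photon energy–wavelength conversion:

$$ E = \frac{hc}{\lambda} \approx \frac{1239.84\ \mathrm{eV\,nm}}{148.7\ \mathrm{nm}} \approx 8.34\ \mathrm{eV}. $$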

Our study marks a crucial step in the development of lasers that would make such a clock tick

“ISOLDE is currently one of only two facilities in the world that can produce actinium-229 isotopes in sufficient amounts and purity,” says Kraemer. “By incorporating these isotopes in calcium fluoride or magnesium fluoride crystals, we produced many more isomeric thorium-229 nuclei and increased our chances of observing their radiative decay.”

The team’s novel approach of producing thorium-229 nuclei also made it possible to determine the lifetime of the isomer in the magnesium fluoride crystal, which helps to predict the precision of a thorium-229 nuclear clock based on this solid-state system. The result (16.1 ± 2.5 min) indicates that a clock precision which is competitive with that of today’s most precise atomic clocks is attainable, while also being four orders of magnitude more sensitive to a number of effects beyond the Standard Model.

“Solid-state systems such as magnesium fluoride crystals are one of two possible settings in which to build a future thorium-229 nuclear clock,” says the team’s spokesperson, Piet Van Duppen of KU Leuven. “Our study marks a crucial step in this direction, and it will ease the development of lasers with which to drive the periodic transition that would make such a clock tick.”

Probing for periodic signals

ATLAS figure 1

New physics may come at us in unexpected ways that are completely hidden to conventional search methods. One unique example is the narrowly spaced, semi-periodic spectrum of heavy gravitons predicted by the clockwork gravity model. Similar to models with extra dimensions, the clockwork model addresses the hierarchy problem between the weak and Planck scales not by stabilising the weak scale (as in supersymmetry, for example), but by bringing the fundamental higher-dimensional Planck scale down to accessible energies. The mass spectrum of the resulting graviton tower is described by two parameters: k, a mass parameter that determines the onset of the tower, and M5, the five-dimensional reduced Planck mass that controls the overall cross-section of the tower’s spectrum.

At the LHC, these gravitons would be observed via their decay into two light Standard Model particles. However, conventional bump/tail hunts are largely insensitive to this type of signal, particularly when its cross section is small. A recent ATLAS analysis approaches the problem from a completely new angle by exploiting the underlying approximate periodicity feature of the two-particle invariant mass spectrum.

Graviton decays with dielectron or diphoton final states are an ideal testbed for this search due to the excellent energy resolution of the ATLAS detector. After convolving the mass spectrum of the graviton tower with the ATLAS detector resolution for these final states, the spectrum resembles a wave packet (like the representation of a free particle propagating in space as a superposition of plane waves within a finite momentum range). This suggests that a transformation exploiting the periodic nature of the signal may be helpful.

ATLAS figure 2

Figure 1 shows how a particularly faint clockwork signal would emerge in ATLAS for the diphoton final state. It is compared with the data and the background-only fit obtained from an earlier (full Run 2) ATLAS search for resonances with the same final state. As an illustration, the signal shape is given without realistic statistical fluctuations. The tiny “bumps” or the shape’s integral over the falling background cannot be detected with conventional bump/tail-hunting methods. Instead, for the first time, a continuous wavelet transformation is applied to the mass distribution. The problem is therefore transformed to the “scalogram” space, i.e. the mass versus scale (or inverse frequency) space, as shown in figure 2 (left). The large red area at high scales (low frequencies) represents the falling shape of the background, while the signal from figure 1 now appears as a clear, distinct local “blob” above mγγ = k and at low scales (high frequencies).
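The idea can be sketched with a simple numpy-only wavelet transform of a toy falling spectrum plus a damped oscillation (all shapes and numbers below are invented for illustration; the actual ATLAS transform, binning and background fit differ):

```python
import numpy as np

# Minimal continuous wavelet transform: convolve the fit residuals with scaled
# Morlet wavelets to build a (mass x scale) scalogram, in which a semi-periodic
# signal shows up as a localised blob at low scales.
def morlet(t, w0=6.0):
    return np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2)

def cwt(x, scales, dt=1.0):
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + dt, dt)        # wavelet support ~ 4 scales
        out[i] = np.abs(np.convolve(x, morlet(t / s) / np.sqrt(s), mode="same"))
    return out

rng = np.random.default_rng(0)
m = np.linspace(500, 3000, 2500)                     # toy invariant-mass axis (GeV)
background = 1e4 * (m / 500.0) ** -4                 # smoothly falling spectrum
k = 1500.0                                           # toy onset of the tower
tower = 20 * np.exp(-(m - k) / 400) * np.cos(2 * np.pi * (m - k) / 60) * (m > k)
data = rng.poisson(background + tower)               # pseudo-data with fluctuations
residual = data - background                         # background fit is exact here
scalogram = cwt(residual, np.geomspace(5, 500, 40))  # mass-versus-scale map
print(scalogram.shape)                               # signal "blob" sits near m = k
```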

The strongest exclusion contours to date are placed in the clockwork parameter space

With realistic statistical fluctuations and uncertainties, these distinct “blobs” may partially wash out, as shown in figure 2 (right). To counteract this effect, the analysis uses multiple background-only and background-plus-signal scalograms to train a binary convolutional neural-network classifier. This network is very powerful in distinguishing between scalograms belonging to the two classes, but it is also model-specific. Therefore, another search for possible periodic signals is performed independently of the clockwork-model hypothesis. This is done in an “anomaly detection” mode using an autoencoder neural network, sketched below. Since the autoencoder is trained on multiple background-only scalograms (unlabelled data) to learn the features of the background (unsupervised learning), it can predict the compatibility of a given scalogram with the background-only hypothesis. A statistical test based on the two networks’ scores is derived to check the data’s compatibility with the background-only and background-plus-signal hypotheses.
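A minimal sketch of the unsupervised part, with randomly generated stand-in scalograms and an invented architecture (PyTorch is assumed; this is not the ATLAS network):

```python
import torch
import torch.nn as nn

# Autoencoder trained only on background-like scalograms: it learns to reconstruct
# background features, so a large reconstruction error flags a scalogram as
# anomalous, i.e. poorly compatible with the background-only hypothesis.
n_pix = 64 * 32                          # flattened (mass x scale) pixels, invented size
model = nn.Sequential(
    nn.Linear(n_pix, 128), nn.ReLU(),
    nn.Linear(128, 16), nn.ReLU(),       # bottleneck forces a compressed description
    nn.Linear(16, 128), nn.ReLU(),
    nn.Linear(128, n_pix),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

background = torch.randn(1000, n_pix)    # stand-in for background-only scalograms
for epoch in range(20):                  # unsupervised: the target equals the input
    opt.zero_grad()
    loss = loss_fn(model(background), background)
    loss.backward()
    opt.step()

def anomaly_score(scalogram: torch.Tensor) -> float:
    """Reconstruction error used as a background-compatibility test statistic."""
    with torch.no_grad():
        return loss_fn(model(scalogram), scalogram).item()
```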

Applying these novel procedures to the dielectron and diphoton full Run 2 data, ATLAS sees no significant deviation from the background-only hypothesis in either the clockwork-model search or in the model-independent one. The strongest exclusion contours to date are placed in the clockwork parameter space, pushing the sensitivity to beyond 11 TeV in M5. Despite the large systematic uncertainties in the background model, these do not exhibit any periodic structure in the mass space and their impact is naturally reduced when transforming to the scalogram space. The sensitivity of this analysis is therefore mostly limited by statistics and is expected to improve with the full Run 3 dataset.
