
The W boson’s midlife crisis

The discovery of the W boson at CERN in 1983 can well be considered the birth of precision electroweak physics. Measurements of the W boson’s couplings and mass have become ever more precise, progressively weaving in knowledge of other particle properties through quantum corrections. Just over a decade ago, the combination of several Standard Model (SM) parameters with measurements of the W-boson mass led to a prediction of a relatively low Higgs-boson mass, of order 100 GeV, prior to its discovery. The discovery of the Higgs boson in 2012 with a mass of about 125 GeV was hailed as a triumph of the SM. Last year, however, an unexpectedly high value of the W-boson mass measured by the CDF experiment threw a spanner into the works. One might say the 40-year-old W boson encountered a midlife crisis.

The mass of the W boson, mW, is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson. The mass of each fermion is determined by the strength of its interaction with the Brout–Englert–Higgs field, but this strength is currently only known to an accuracy of approximately 10% at best; future measurements from the High-Luminosity LHC and a future e⁺e⁻ collider are required to achieve percent-level accuracy. Meanwhile, mW is predicted with an accuracy better than 0.01%. At tree level, this mass depends only on the mass of the Z boson and the weak and electromagnetic couplings. The first measurements of mW by the UA1 and UA2 experiments at the Spp̄S collider at CERN were in remarkable agreement with this prediction, within the large uncertainties. Further measurements at the Tevatron at Fermilab and the Large Electron Positron collider (LEP) at CERN achieved sufficient precision to probe the presence of higher-order electroweak corrections, such as from a loop containing top and bottom quarks.
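The tree-level relation can be made concrete with a short numerical sketch. This is only an illustration, using textbook input values for the couplings and the Z mass (none of which are quoted in the article), solving mW²(1 − mW²/mZ²) = πα/(√2 GF):

```python
import math

# Tree-level electroweak relation: mW^2 * (1 - mW^2/mZ^2) = pi*alpha / (sqrt(2)*GF)
alpha = 1 / 137.036     # fine-structure constant at zero momentum (textbook value)
GF = 1.1663787e-5       # Fermi constant in GeV^-2 (textbook value)
mZ = 91.1876            # Z-boson mass in GeV (textbook value)

A = math.pi * alpha / (math.sqrt(2) * GF)             # GeV^2
# Quadratic in u = mW^2; the physical root is the larger one (mW close to mZ)
u = (mZ**2 + math.sqrt(mZ**4 - 4 * A * mZ**2)) / 2
print(f"tree-level mW: {math.sqrt(u):.2f} GeV")       # ~80.9 GeV
```

With the zero-momentum value of α this comes out near 80.9 GeV; it is the quantum corrections, notably the top–bottom loop, that pull the prediction towards the measured value of about 80.4 GeV.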

Increasing sophistication

Measurements of mW at the four LEP experiments were performed in collisions producing two W bosons. Hadron colliders, by contrast, can produce a single W-boson resonance, simplifying the measurement when utilising the decay to an electron or muon and an associated neutrino. However, this simplification is countered by the complication of the breakup of the hadrons, along with multiple simultaneous hadron–hadron interactions. Measurements at the Tevatron and LHC have required increasing sophistication to model the production and decay of the W boson, as well as the final-state lepton’s interactions in the detectors. The average time between the available datasets and the resulting published measurement has increased from two years for the first CDF measurement in 1991 to more than 10 years for the most recent CDF measurement announced last year (CERN Courier May/June 2022 p9). The latter benefitted from a factor of four more W bosons than the previous measurement, but suffered from a higher number of additional simultaneous interactions. The challenge of modelling these interactions while also increasing the measurement precision required many years of detailed study. The end result, mW = 80433.5 ± 9.4 MeV, differs from the SM prediction of mW = 80357 ± 6 MeV by approximately seven standard deviations (see “Out of order” figure).

CDF measurement of the W mass

The SM calculation of mW includes corrections from single loops involving fermions or the Higgs boson, as well as from two-loop processes that also include gluons. The splitting of the W boson into a top- and bottom-quark loop produces the largest correction to the mass: for every 1 GeV increase in top-quark mass the predicted W mass increases by a little over 6 MeV. Measurements of the top-quark mass at the Tevatron and LHC have reached a precision of a few hundred MeV, thus contributing an uncertainty on mW of only a couple of MeV. The calculated mW depends only logarithmically on the Higgs-boson mass mH, and given the accuracy of the LHC mH measurements, it contributes negligibly to the uncertainty on mW. The tree-level dependences of mW on the Z-boson mass and on the electromagnetic coupling strength each contribute an additional couple of MeV to the uncertainty. The robust prediction of the SM allows an incisive test through mW measurements, and it would appear to fail in the face of the recent CDF measurement.
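The quoted sensitivity translates into a one-line error propagation; the 0.3 GeV top-mass uncertainty below is an illustrative stand-in for the "few hundred MeV" precision mentioned above:

```python
# Linear error propagation for the sensitivity quoted in the text:
# ~6 MeV shift in the predicted mW per 1 GeV shift in the top-quark mass.
dmW_dmt = 6.0       # MeV per GeV, approximate slope from the article
sigma_mt = 0.3      # GeV; illustrative stand-in for "a few hundred MeV"
print(f"mW uncertainty from the top mass: {dmW_dmt * sigma_mt:.1f} MeV")  # 1.8 MeV
```

This reproduces the "couple of MeV" contribution stated in the text.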

Since the release of the CDF result last year, physicists have held extensive and detailed discussions, with a recurring focus on the measurement’s compatibility with the SM prediction and with the measurements of other experiments. Further discussions and workshops have reviewed the suite of Tevatron and LHC measurements, hypothesising effects that could have led to a bias in one or more of the results. These potential effects are subtle, as fundamentally the W-boson signature is strikingly unique and simple: a single charged electron or muon with no observable particle balancing its momentum. Any source of bias would have to lie in a higher-order theoretical or experimental effect, and the analysts have studied and quantified these in great detail.

Progress

In the spring of this year ATLAS contributed an update to the story. The collaboration re-analysed its data from 2011 to apply a comprehensive statistical fit using a profile likelihood, as well as the latest global knowledge of parton distribution functions (PDFs) – which describe the momentum distribution functions of quarks and gluons inside the proton. The preliminary result (mW = 80360 ± 16 MeV) lowers both the uncertainty and the central value relative to its previous result, published in 2017, further increasing the tension between the ATLAS result and that of CDF.

Meanwhile, the Tevatron+LHC W-mass combination working group has carried out a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, and provided a combined mass value using the latest published measurement from each experiment and from LEP. These studies, due to be presented at the European Physical Society High-Energy Physics conference in Hamburg in late August, give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no significant issues have been identified in the measurement procedures and results, the studies shed significant light on their details and differences.

LHC versus Tevatron

Two important aspects of the Tevatron and LHC measurements are the modelling of the momentum distribution of each parton in the colliding hadrons, and the angular distribution of the W boson’s decay products. The higher energy of the LHC increases the importance of the momentum distributions of gluons and of quarks from the second generation, though these can be constrained using the large samples of W and Z bosons. In addition, the combination of results from centrally produced W bosons at ATLAS with more forward W-boson production at LHCb reduces uncertainties from the PDFs. At the Tevatron, proton–antiproton collisions produced a large majority of W bosons via the valence up and down (anti)quarks inside the (anti)proton, and these are also constrained by measurements at the Tevatron. For the W-boson decay, the calculation is common to the LHC and the Tevatron, and precise measurements of the decay distributions by ATLAS are able to distinguish several calculations used in the experiments.

W-mass measuring

In any combination of measurements, the primary focus is on the uncertainty correlations. In the case of mW, many uncertainties are constrained in situ and are therefore uncorrelated. The most significant source of correlated uncertainty is the PDFs. In order to evaluate these correlations, the combination working group generated large samples of events and produced simplified models of the CDF, DØ and ATLAS detectors. Several sets of PDFs were studied to determine their compatibility with broader W- and Z-boson measurements at hadron colliders. For each of these sets the correlations and combined mW values were determined, opening a panoramic view of the impact of PDFs on the measurement (see “Measuring up” figure).

The mass of the W boson is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson

The first conclusion from this study is that the compatibility of all PDF sets with W- and Z-boson measurements is generally low: the most compatible PDF set, CT18 from the CTEQ collaboration, gives a probability of only 1.5% that the suite of measurements is consistent with the predictions. Using this PDF set for the W-boson mass combination gives an even lower compatibility of 0.5%. When the CDF result is removed, the compatibility of the combined mW value is good (91%), and when comparing this “N-1” combined value to the CDF value for the CT18 set, the difference is 3.6σ. The results are considered unlikely to be compatible, though the possibility cannot be excluded in the absence of an identified bias. If the CDF measurement is removed, the combination yields a mass of mW = 80369.2 ± 13.3 MeV for the CT18 set, while including all measurements results in a mass of mW = 80394.6 ± 11.5 MeV. The former value is consistent with the SM prediction, while the latter value is 2.6σ higher.
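As a rough cross-check of the quoted numbers, one can combine the values as if they were uncorrelated. This is only a sketch: the working group's combination accounts for PDF-induced correlations that these few lines deliberately ignore.

```python
import math

def combine(measurements):
    """Uncorrelated inverse-variance weighted average -> (value, error)."""
    weights = [1 / sigma**2 for _, sigma in measurements]
    mean = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, 1 / math.sqrt(sum(weights))

def tension(a, sa, b, sb):
    """Difference in units of the quadrature-summed uncertainty."""
    return abs(a - b) / math.hypot(sa, sb)

# Values quoted in the article, in MeV (CT18 PDF set)
cdf = (80433.5, 9.4)
rest = (80369.2, 13.3)   # "N-1" combination without CDF

print(f"naive CDF-vs-rest tension: {tension(*cdf, *rest):.1f} sigma")
print(f"naive all-in average: {combine([cdf, rest])[0]:.1f} MeV")
```

The naive numbers (roughly 3.9σ and 80412 MeV) overshoot the published 3.6σ and 80394.6 ± 11.5 MeV precisely because the correlations are ignored here.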

Two scenarios

The results of the preliminary combination clearly separate two possible scenarios. In the first, the mW measurements are unbiased and differ due to large fluctuations and the PDF dependence of the W- and Z-boson data. In the second, a bias in one or more of the measurements produces the low compatibility of the measured values. Future measurements will clarify the likelihood of the first scenario, while further studies could identify effect(s) that point to the second scenario. In either case the next milestone will take time due to the exquisite precision that has now been reached, and to the challenges in maintaining analysis teams for the long timescales required to produce a measurement. The W boson’s midlife crisis continues, but with time and effort the golden years will come. We can all look forward to that.

Gravitational waves: a golden era

An array of pulsars

The existence of dark matter in the universe is one of the most important puzzles in fundamental physics. It is inferred solely by means of its gravitational effects, such as on stellar motions in galaxies or on the expansion history of the universe. Meanwhile, non-gravitational interactions between dark matter and the known particles described by the Standard Model have not been detected, despite strenuous and advanced experimental efforts.

Such a situation suggests that new particles and fields, possibly similar to those of the Standard Model, may have been similarly present across the entire cosmological history of our universe, but with only very tiny interactions with visible matter. This intriguing idea is often referred to as the paradigm of dark sectors and is made even more compelling by the lack of new particles seen at the LHC and laboratory experiments so far.

Dark universe

Cosmological observations, above all those of the cosmic microwave background (CMB), currently represent the main tool to test such a paradigm. The primary example is that of dark radiation, i.e. putative new dark particles that, unlike dark matter, behave as relativistic species at the energy scales probed by the CMB. The most recent data collected by the Planck satellite constrain such dark particles to make up at most around 30% of the energy of a single neutrino species at the recombination epoch (when atoms formed and the universe became transparent, around 380,000 years after the Big Bang).

While such observations represent a significant advance, the early universe was characterised by temperatures in the MeV range and above (enabling nucleosynthesis), possibly as large as 10¹⁶ GeV. Some of these temperatures correspond to energy scales that cannot be probed via the CMB, nor directly with current or prospective particle colliders. Even if new particles had significant interactions with SM particles at such high temperatures, any electromagnetic radiation in the hot universe was continuously scattered off matter (electrons), making it impossible for any light from such early epochs to reach our detectors today. The question then arises: is there another channel to probe the existence of dark sectors in the early universe?

We are entering a golden era of GW observations across the frequency spectrum

For more than a century, a different signature of gravitational interactions has been known to be possible: waves, analogous to those of the electromagnetic field, carrying fluctuations of gravitational fields. The experimental effort to detect gravitational waves (GWs) had a first amazing success in 2015, when waves generated by the merger of two black holes were first detected by the LIGO and Virgo interferometers in the US and Italy.

Now, the GW community is on the cusp of another incredible milestone: the detection of a GW background, generated by all sources of GWs across the history of our universe. Recently, based on more than a decade of observations, several networks of radio telescopes called pulsar timing arrays (PTAs) – NANOGrav in North America, EPTA in Europe, PPTA in Australia and CPTA in China – produced tentative evidence for such a stochastic GW background based on the influence of GWs on pulsars (see “Hints of low-frequency gravitational waves found” and “Clocking gravity” image). Together with next-generation interferometer-based GW detectors such as LISA and the Einstein Telescope, and new theoretical ideas from particle physics, the observations suggest that we are entering an exciting new era of observational cosmology that connects the smallest and largest scales. 

Particle physics and the GW background

Once produced, GWs interact only very weakly with any other component of the universe, even at the high temperatures present at the earliest times. Therefore, whereas photons can tell us about the state of the universe at recombination, the GW background is potentially a direct probe of high-energy processes in the very early universe. Unlike GWs that reach Earth from the locations of binary systems of compact objects, the GW background is expected to be mostly isotropic in the sky, very much like the CMB. Furthermore, rather than being a transient signal, it should persist in the sensitivity bands of GW detectors, similar to a noise component but with peculiarities that are expected to make a detection possible. 

Colliding spherical pressure waves

As early as 1918, Einstein quantified the power emitted in GWs by a generic source. Whereas electromagnetic radiation is sourced by the time variation of the dipole moment of a charge distribution, the power emitted in GWs is proportional to the square of the third time derivative of the quadrupole moment of the mass–energy distribution of the source. Therefore, the two essential conditions for a source to emit GWs are that it should be sufficiently far from spherical symmetry and that its distribution should change sufficiently quickly with time.

What possible particle-physics sources would satisfy these conditions? One of the most thoroughly studied phenomena as a source of GWs is the occurrence of a phase transition, typically associated with the breaking of a fundamental symmetry. Specifically, only those phase transitions that proceed via the nucleation, expansion and collision of cosmic bubbles (analogous to the phase transition of liquid water to vapour) can generate a significant amount of GWs (see “Ringing out” image). Inside any such bubble the universe is already in the broken-symmetry phase, whereas beyond the bubble walls the symmetry is still unbroken. Eventually, the state of lowest energy inside the bubbles prevails via their rapid expansion and collisions, which fill up the universe. Even though such bubbles may initially be highly spherical, once they collide the energy distribution is far from being so, while their rapid expansion provides a time variation.  

The occurrence of two phase transitions is in fact predicted by the Standard Model (SM): one related to the spontaneous breaking of the electroweak SU(2) × U(1) symmetry, the other associated with colour confinement and thus the formation of hadronic states. However, dedicated analytical and numerical studies in the 1990s and 2000s concluded that the SM phase transitions are not expected to be of first order in the early universe. Rather, they are expected to proceed smoothly, without any violent release of energy to source GWs. 

Sensitivity of current and future GW observatories

This leads to a striking conclusion: a detection of the GW background would provide evidence for physics beyond the SM – that is, if its origin can be attributed to processes occurring in the early universe. This caveat is crucial, since astrophysical processes in the late universe also contribute to a stochastic GW background. 

In order to claim a particle-physics interpretation for any stochastic GW background, it is thus necessary to appropriately account for astrophysical sources and characterise the expected (spectral) shape of the GW signal from early-universe sources of interest. These tasks are being undertaken by a diverse community of cosmologists, particle physicists and astrophysicists at research institutions all around the world, including in the cosmology group in the CERN TH department.

Precise probing

For particle physicists and cosmologists, it is customary to express the strength of a given stochastic GW signal in terms of the fraction of the energy (density) of the universe today carried by those GWs. The CMB already constrains this “relic abundance” to be less than roughly 10% of ordinary radiation, or about one millionth of that of the dominant component of the universe today, dark energy. Remarkably, current GW detectors are already able to probe stochastic GWs that produce only one billionth of the energy density of the universe.

Generally, the stochastic GW signal from a given source extends over a broad frequency range. The spectrum from many early-universe sources typically peaks at a frequency linked to the expansion rate at the time the source was active, redshifted to today. Under standard assumptions, the early universe was dominated by radiation and the peak frequency of the GW signal increases linearly with the temperature. For instance, the GW frequency range in which LIGO/Virgo/KAGRA are most sensitive (10–100 Hz) corresponds to sources that were active when the universe was as hot as 10⁸ GeV – six orders of magnitude higher than the LHC. The other currently operating GW observatories, PTAs, are sensitive to GWs of much smaller frequencies, around 10⁻⁹–10⁻⁷ Hz, which correspond to temperatures around 10 MeV to 1 GeV (see “Broadband” figure). These are the temperatures at which the QCD phase transition occurred. While, as mentioned above, a signal from the latter is not expected, dark sectors may be active at those temperatures and source a GW signal. In the near (and long-term) future, it is conceivable that new GW observatories will allow us to probe the stochastic GW background across the entire range of frequencies from nHz to 100 Hz.
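The linear temperature-to-frequency mapping implied by these ranges can be sketched in a few lines. The 10⁻⁷ Hz-per-GeV coefficient is an assumed order-of-magnitude normalisation consistent with the ranges in the text; the exact prefactor depends on the number of relativistic species and on the source dynamics.

```python
def peak_frequency_hz(T_GeV):
    """Order-of-magnitude peak frequency today for a horizon-scale GW source
    active at temperature T (in GeV) during radiation domination.
    The 1e-7 Hz/GeV coefficient is an assumed normalisation, not a precise
    prediction; exact factors depend on g* and the source."""
    return 1e-7 * T_GeV

print(f"T ~ 0.2 GeV (QCD scale): {peak_frequency_hz(0.2):.0e} Hz")  # PTA band
print(f"T ~ 1e8 GeV:             {peak_frequency_hz(1e8):.0e} Hz")  # LIGO band
```

A source at the QCD scale lands in the PTA band, while one at 10⁸ GeV lands in the LIGO/Virgo/KAGRA band, as quoted in the text.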

Laser-interferometer GW detectors on Earth and in space

Together with bubble collisions, another source of peaked GW spectra due to symmetry breaking in the early universe is the annihilation of topological defects, such as domain walls separating different regions of the universe (in this case the corresponding symmetry is a discrete symmetry). Violent (so-called resonant) decays of new particles, as predicted by some early-universe scenarios, may also strongly contribute to the GW background (albeit possibly only at very large frequencies, beyond the sensitivity reach of current and forecasted detectors). Yet another discoverable phenomenon is the collapse of large energy (density) fluctuations in the early universe, as predicted to occur in scenarios where the dark matter is made of primordial black holes.

On the other hand, particle-physics sources can also be characterised by very broad GW spectra without large peaks. The most important such source is the inflationary mechanism: during this putative phase of exponential expansion of the universe, GWs would be produced from quantum fluctuations of space–time, stretched by inflation and continuously re-entering the Hubble horizon (i.e. the causally connected part of the universe at any given time) throughout the cosmological evolution. The amount of such primordial GWs is expected to be small. Nonetheless, a broad class of inflationary models predicts GWs with frequencies and amplitudes such that they can be discovered by future measurements of the CMB. In fact, it is precisely via these measurements that Planck and BICEP/Keck Array have been able to strongly constrain the simplest models of inflation. The GWs that can be discovered via the CMB would have very small frequencies (around 10⁻¹⁷ Hz, corresponding to ~eV temperatures). The full spectrum would nonetheless extend to large frequencies, only with such a small amplitude that detection by GW observatories would be unfeasible (except perhaps for the futuristic Big Bang Observer – a proposed successor to the Laser Interferometer Space Antenna, LISA, currently being prepared by the European Space Agency).

Feeling blue

Certain classes of inflationary models could also lead to “blue-tilted” (i.e. rising with frequency) spectra, which may then be observable at GW observatories. For instance, this can occur in models where the inflaton is a so-called axion field (a generalisation of the predicted Peccei–Quinn axion in QCD). Such scenarios naturally produce gauge fields during inflation, which can themselves act as sources of GWs, with possible peculiar properties such as circular polarisation and non-Gaussianities. A final phenomenon that would generate a very broad GW spectrum, unrelated to inflation, is the existence of cosmic strings. These one-dimensional defects can originate, for instance, from the breaking of a global (or gauge) rotation symmetry and persist through cosmological history, analogous to cracks that appear in ice after the phase transition from water.

Astrophysical contributions to the stochastic GW background are certainly expected from binary black-hole systems. At the frequencies relevant for LIGO/Virgo/KAGRA, such a background would be due to black holes with masses of tens of solar masses, whereas in the PTA sensitivity range the background is sourced by binaries of supermassive black holes (with masses up to millions of solar masses), such as those that are believed to exist at the centres of galaxies. The current PTA indications of a stochastic GW background require detailed analyses to understand whether the signal is due to a particle physics or an astrophysics source. A smoking gun for the latter origin would be the observation of significant anisotropies in the signal, as it would come from regions where more binary black holes are clustered.

Polarised microwave emission from the CMB

We are entering a golden era of GW observations across the frequency spectrum, and thus in exploring particle physics beyond the reach of colliders and astrophysical phenomena at unprecedented energies. The first direct detection of GWs by LIGO in September 2015 was one of the greatest scientific achievements of the 21st century. The first generation of laser interferometric detectors (GEO600, LIGO, Virgo and TAMA) did not detect any signal and only constrained the gravitational-wave emission from several sources. The second generation (Advanced LIGO and Advanced Virgo) made the first direct detection and has observed almost 100 GW signals to date. The underground Kamioka Gravitational Wave Detector (KAGRA) in Japan joined the LIGO–Virgo observations in 2020. As of 2021, the LIGO–Virgo–KAGRA collaboration is working to establish the International Gravitational Wave Network, to facilitate coordination among ground-based GW observatories across the globe. In the near future, LIGO India (IndIGO) will also join the network of terrestrial detectors.

Despite being sensitive to changes in the arm length of the order of 10⁻¹⁸ m, the LIGO, Virgo and KAGRA detectors are not sensitive enough for precise astronomical studies of GW sources. This has motivated the new generation of detectors. The Einstein Telescope (ET) is a proposed design concept for a European third-generation GW detector underground, which will be 10 times more sensitive than the current advanced instruments (see “Joined-up thinking in vacuum science”). On Earth, however, gravitational waves with frequencies lower than 1 Hz are inaccessible due to terrestrial gravity gradient noise and limitations to the size of the device. Space-based detectors, on the other hand, can access frequencies as low as 10⁻⁴ Hz. Several space-based GW observatories are proposed that will ultimately form a network of laser interferometers in space. They include LISA (planned to launch around 2035), the Deci-hertz Interferometer Gravitational Wave Observatory (DECIGO) led by the Japan Aerospace Exploration Agency and two Chinese detectors, TianQin and Taiji (see “In synch” figure).

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders

A new kid on the block, atom interferometry, offers a complementary approach to laser interferometry for the detection of GWs. Two atom interferometers coherently manipulated by the same light field can be used as a differential phase meter tracking the distance traversed by the light field. Several terrestrial cold-atom experiments are under preparation, such as MIGA, ZAIGA and MAGIS, or being proposed, such as ELGAR and AION. These experiments will provide measurements in the mid-frequency range of 10⁻²–1 Hz. Moreover, a space-based cold-atom GW detector called the Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) is expected to probe GWs in a much broader frequency range (10⁻⁷–10 Hz) compared to LISA.

Astrometry provides yet another powerful way to explore GWs that is not accessible to other probes, i.e. ultra-low frequencies of 10 nHz or less. Here, the passage of a GW over the Earth–star system induces a deflection in the apparent position of a star, which makes it possible to turn astrometric data into a nHz GW observatory. Finally, CMB missions have a key role to play in searching for possible imprints on the polarisation of CMB photons caused by a stochastic background of primordial GWs (see “Acoustic imprints” image). The wavelength of such primordial GWs can be as large as the size of our horizon today, associated with frequencies as low as 10⁻¹⁷ Hz. Whereas current CMB missions allow upper bounds on GWs, future missions such as the ground-based CMB-S4 (CERN Courier March/April 2022 p34) and space-based LiteBIRD observatories will improve this measurement to either detect primordial GWs or place yet stronger upper bounds on their existence.

Outlook 

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders, as well as for understanding astrophysical phenomena in extreme regimes. Several projects are planned and proposed to detect GWs across more than 20 decades of frequency. Such a wealth of data will provide a great opportunity to explore the universe in new ways during the next decades and open a wide window on possible physics beyond the SM.

Hints of low-frequency gravitational waves found

Since their direct discovery in 2015 by the LIGO and Virgo detectors, gravitational waves (GWs) have opened a new view on extreme cosmic events such as the merging of black holes. These events typically generate gravitational waves with frequencies of a few tens to a few thousand hertz, within reach of ground-based detectors. But the universe is also expected to be pervaded by low-frequency GWs in the nHz range, produced by the superposition of astrophysical sources and possibly by high-energy processes at the very earliest times (see “Gravitational waves: a golden era”). 

Announced in late June, news that pulsar timing arrays (PTAs), which infer the presence of GWs via detailed measurements of the radio emission from pulsars, had seen the first evidence for such a stochastic GW background was therefore met with delight by particle physicists and cosmologists alike. “For me it feels that the first gravitational wave observed by LIGO is like seeing a star for the first time, and now it’s like seeing the cosmic microwave background for the first time,” says CERN theorist Valerie Domcke.

Clocking signals

Whereas the laser interferometers LIGO and Virgo detect relative length changes in two perpendicular arms, PTAs clock the highly periodic signals from millisecond pulsars (rapidly rotating neutron stars), some of which are in Earth’s line of sight. A passing GW perturbs spacetime and induces a small delay in the observed arrival time of the pulses. By observing a large sample of pulsars over a long period and correlating the signals, PTAs effectively turn the galaxy into a low-frequency GW observatory. The challenge is to pick out the characteristic signature of this stochastic background, which is expected to induce “red noise” (meaning there should be greater power at lower fluctuation frequencies) in the differences between the measured arrival times of the pulsars and the timing-model predictions. 

The smoking gun of a nHz GW detection is a measurement of the so-called Hellings–Downs (HD) curve based on general relativity. This curve predicts the arrival-time correlations as a function of angular separation for pairs of pulsars, which vary because the quadrupolar nature of GWs introduces directionally dependent changes. 
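In a common normalisation the HD curve has a simple closed form, which a few lines of code can evaluate; a minimal sketch (the overall normalisation convention is an assumption):

```python
import math

def hellings_downs(theta):
    """Expected timing-residual correlation for two pulsars separated by
    angle theta (radians), in the common normalisation where coincident
    pulsars correlate at 0.5."""
    x = (1 - math.cos(theta)) / 2
    if x == 0:
        return 0.5
    return 1.5 * x * math.log(x) - x / 4 + 0.5

print(round(hellings_downs(math.pi), 3))       # antipodal pairs: 0.25
print(round(hellings_downs(math.pi / 2), 3))   # 90 degrees: -0.145
```

The dip into anti-correlation at intermediate separations, recovering towards 180°, is the quadrupolar fingerprint that PTAs search for.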

Following its first hints of these elusive correlations in 2020, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has released the results of its 15-year dataset. Based on observations of 68 millisecond pulsars distributed over half the galaxy (21 more than in the last release) by the Arecibo Observatory, the Green Bank Telescope and the Very Large Array, the team finds 4σ evidence for HD correlations in both frequentist and Bayesian analyses.

We are opening a new window in the GW universe, where we can observe unique sources and phenomena

A similar signal is seen by the independent European PTA, and the results are also supported by data from the Parkes PTA and others. “Once the partner collaborations of the International Pulsar Timing Array (which includes NANOGrav, the European, Parkes and Indian PTAs) combine these newest datasets, this may put us over the 5σ threshold,” says NANOGrav spokesperson Stephen Taylor. “We expect that it will take us about a year to 18 months to finalise.”

It will take longer to decipher the precise origin of the low-frequency PTA signals. If the background is anisotropic, astrophysical sources such as supermassive black-hole binaries would be the likely origin and one could therefore learn about their environment, population and how galaxies merge. Phase transitions or other cosmological sources tend to lead to an isotropic background. Since the shape of the GW spectrum encodes information about the source, with more data it should become possible to disentangle the signatures of the two potential sources. PTAs and current, as well as next-generation, GW detectors such as LISA and the Einstein Telescope complement each other as they cover different frequency ranges. For instance, LISA could detect the same supermassive black-hole binaries as PTAs but at different times during and after their merger.

“We are opening a new window in the gravitational-wave universe in the nanohertz regime, where we can observe unique sources and phenomena,” says European PTA collaborator Caterina Tiburzi of the Cagliari Observatory in Sardinia.

Muon g-2 update sets up showdown with theory

Muon g-2 measurement

On 10 August, the Muon g-2 collaboration at Fermilab presented its latest measurement of the anomalous magnetic moment of the muon aμ. Combining data from Run 1 to Run 3, the collaboration found aμ = 116 592 055 (24) × 10–11, representing a factor-of-two improvement on the precision of its initial 2021 result. The experimental world average for aμ now stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in 2020. However, calculations based on a different theoretical approach (lattice QCD) and a recent analysis of e+e data that feeds into the prediction are in tension with the 2020 calculation, and more work is needed before the discrepancy is understood.

The anomalous magnetic moment of the muon, aμ = (g-2)/2, where g is the muon's g-factor, quantifies the deviation of the muon's magnetic moment from the Dirac prediction (g = 2) caused by the contributions of virtual particles. This makes aμ, one of the most precisely calculated and measured quantities in physics, an ideal testbed for physics beyond the SM. To measure it, a muon beam is sent into a superconducting storage ring reused from the former g-2 experiment at Brookhaven National Laboratory. Initially aligned, the muon spin axes precess as the muons circulate in the magnetic field. Detectors located along the ring's inner circumference allow the precession rate, and thus aμ, to be determined. Many improvements to the setup have been made since the first run, including better running conditions, more stable beams and improved knowledge of the magnetic field.
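In the simplified picture (neglecting beam-dynamics corrections such as the electric-field and pitch terms), the spin precesses relative to the momentum at the anomalous frequency ωa = aμ(e/mμ)B. A back-of-the-envelope sketch, assuming the nominal 1.45 T storage-ring field:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
M_MU = 1.883531627e-28      # muon mass, kg
A_MU = 1.16592055e-3        # measured anomaly (this result)
B_FIELD = 1.45              # nominal storage-ring field, T (assumption)

# anomalous precession frequency: omega_a = a_mu * (e / m_mu) * B
omega_a = A_MU * (E_CHARGE / M_MU) * B_FIELD  # rad/s
f_a = omega_a / (2.0 * math.pi)               # Hz
print(f"f_a ~ {f_a / 1e3:.0f} kHz")           # a few hundred kHz
```

The experiment measures this frequency (together with the field, mapped via proton NMR) and extracts aμ from their ratio; the tiny size of aμ relative to g makes the anomalous frequency far slower, and thus easier to measure precisely, than the full spin precession.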

The new result is based on data taken in 2019 and 2020, with four times the statistics of the 2021 result. The collaboration also reduced the systematic uncertainty below its initial goal. So far, about 25% of the total expected data (Run 1–Run 6) has been analysed. The collaboration plans to publish its final results in 2025, targeting a precision of 0.14 ppm compared to the current 0.2 ppm. "We have moved the accuracy bar of this experiment one step further and now we are waiting for the theory to complete the calculations and cross-checks necessary to match the experimental accuracy," explains collaboration co-spokesperson Graziano Venanzoni of INFN Pisa and the University of Liverpool. "A huge experimental and theoretical effort is going on, which makes us confident that theory prediction will be in time for the final experimental result from FNAL in a few years from now."

The theoretical picture is foggy. The SM prediction for the anomalous magnetic moment receives contributions from the electromagnetic, electroweak and strong interactions. While the former two can be computed to high precision in perturbation theory, it is only possible to compute the latter analytically in certain kinematic regimes. Contributions from hadronic vacuum polarisation and hadronic light-by-light scattering dominate the overall theoretical uncertainty on aμ at 83% and 17%, respectively.

To date, the experimental results are confronted with two theory predictions: one by the Muon g-2 Theory Initiative based on the data-driven "R-ratio" method, which relies on hadronic cross-section measurements, and one by the Budapest–Marseille–Wuppertal (BMW) collaboration based on simulations of lattice QCD and QED. The latter significantly reduces the discrepancy between the theoretical and measured values. Adding a further puzzle, a recently published hadronic cross-section measurement by the CMD-3 collaboration, which contrasts with all other experiments, narrows the gap between the Muon g-2 Theory Initiative and BMW predictions (see p19).

“This new result by the Fermilab Muon g-2 experiment is a true milestone in the precision study of the Standard Model,” says lattice gauge theorist Andreas Jüttner of CERN and the University of Southampton. “This is really exciting – we are now faced with getting to the roots of various tensions between experimental and theoretical findings.”

Counting half-lives to a nuclear clock

The observation at CERN's ISOLDE facility of a long-sought decay of the thorium-229 nucleus marks a key step towards a clock that could outperform today's most precise atomic timekeepers. Publishing the results in Nature, an international team has used ISOLDE's unique facilities to measure, for the first time, the radiative decay of the metastable state of thorium-229, opening a path to the direct laser manipulation of a nuclear state and thus to a new generation of nuclear clocks.

Today's best atomic clocks, based on periodic transitions between two electronic states of trapped atoms or ions, achieve a relative systematic frequency uncertainty below 1 × 10–18, meaning they won't lose or gain a second over about 30 billion years. Nuclear clocks would exploit the periodic transition between two states of the vastly smaller atomic nucleus, which couples less strongly to electromagnetic fields and is hence less vulnerable to external perturbations. In addition to offering a more precise timepiece, nuclear clocks could test the constancy of fundamental parameters such as the fine-structure and strong-coupling constants, and enable searches for ultralight dark matter (CERN Courier September/October 2022 p32).

Higher precision

In 2003 Ekkehard Peik and Christian Tamm of the Physikalisch-Technische Bundesanstalt in Germany proposed a nuclear clock based on the transition between the ground state of the thorium-229 nucleus and its first, higher-energy state. The advantage of the 229mTh isomer over almost all other nuclear species is its unusually low excitation energy (~8 eV), which in principle allows direct laser manipulation. Despite much effort, researchers had until now not succeeded in observing the radiative decay of 229mTh to its ground state, the inverse process of direct laser excitation. Observing this decay allows, among other things, the isomer's energy to be determined to higher precision.

In a novel technique based on vacuum-ultraviolet spectroscopy, lead author Sandro Kraemer of KU Leuven and co-workers used ISOLDE to generate an isomeric beam with atomic mass number A = 229, following the decay chain 229Fr → 229Ra → 229Ac → 229Th/229mTh. A fraction of 229Ac decays to the metastable, excited state of 229Th, the isomer 229mTh. To achieve this, the team incorporated the produced 229Ac into six separate crystals of calcium fluoride and magnesium fluoride of different thicknesses. They measured the radiation emitted when the isomer relaxes to its ground state using an ultraviolet spectrometer, determining the wavelength of the observed light to be 148.7 nm. This corresponds to an energy of 8.338 ± 0.024 eV – seven times more precise than the previous best measurements.
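The quoted energy follows directly from the measured wavelength via E = hc/λ; a quick consistency check (using hc ≈ 1239.84 eV nm):

```python
HC_EV_NM = 1239.841984  # h*c in eV*nm (CODATA)
wavelength_nm = 148.7   # measured vacuum-ultraviolet wavelength
energy_ev = HC_EV_NM / wavelength_nm
print(f"E = {energy_ev:.3f} eV")  # consistent with the quoted 8.338 eV
```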

Our study marks a crucial step in the development of lasers that would make such a clock tick

“ISOLDE is currently one of only two facilities in the world that can produce actinium-229 isotopes in sufficient amounts and purity,” says Kraemer. “By incorporating these isotopes in calcium fluoride or magnesium fluoride crystals, we produced many more isomeric thorium-229 nuclei and increased our chances of observing their radiative decay.”

The team's novel approach to producing thorium-229 nuclei also made it possible to determine the lifetime of the isomer in the magnesium fluoride crystal, which helps to predict the precision of a thorium-229 nuclear clock based on this solid-state system. The result (16.1 ± 2.5 min) indicates that a clock precision competitive with that of today's most precise atomic clocks is attainable, while also being four orders of magnitude more sensitive to a number of effects beyond the Standard Model.

“Solid-state systems such as magnesium fluoride crystals are one of two possible settings in which to build a future thorium-229 nuclear clock,” says the team’s spokesperson, Piet Van Duppen of KU Leuven. “Our study marks a crucial step in this direction, and it will ease the development of lasers with which to drive the periodic transition that would make such a clock tick.”

Probing for periodic signals

ATLAS figure 1

New physics may come at us in unexpected ways that are completely hidden from conventional search methods. One unique example is the narrowly spaced, semi-periodic spectrum of heavy gravitons predicted by the clockwork gravity model. Similar to models with extra dimensions, the clockwork model addresses the hierarchy problem between the weak and Planck scales, not by stabilising the weak scale (as in supersymmetry, for example) but by bringing the fundamental higher-dimensional Planck scale down to accessible energies. The mass spectrum of the resulting graviton tower in the clockwork model is described by two parameters: k, a mass parameter that determines the onset of the tower, and M5, the five-dimensional reduced Planck mass that controls the overall cross-section of the tower's spectrum.

At the LHC, these gravitons would be observed via their decay into two light Standard Model particles. However, conventional bump/tail hunts are largely insensitive to this type of signal, particularly when its cross section is small. A recent ATLAS analysis approaches the problem from a completely new angle by exploiting the underlying approximate periodicity feature of the two-particle invariant mass spectrum.

Graviton decays to dielectron or diphoton final states are an ideal testbed for this search due to the excellent energy resolution of the ATLAS detector. After convolving the mass spectrum of the graviton tower with the ATLAS detector resolution for these final states, it resembles a wave packet, like the representation of a free particle propagating in space as a superposition of plane waves within a finite momentum range. This suggests that a transformation exploiting the periodic nature of the signal may be helpful.

ATLAS figure 2

Figure 1 shows how a particularly faint clockwork signal would emerge in ATLAS in the diphoton final state. It is compared with the data and the background-only fit obtained from an earlier (full Run 2) ATLAS search for resonances in the same final state. As an illustration, the signal shape is shown without realistic statistical fluctuations. Neither the tiny "bumps" nor their integral over the falling background can be detected with conventional bump/tail-hunting methods. Instead, for the first time, a continuous wavelet transformation is applied to the mass distribution. The problem is thereby transformed to the "scalogram" space, i.e. mass versus scale (or inverse frequency), as shown in figure 2 (left). The large red area at high scales (low frequencies) represents the falling shape of the background, while the signal from figure 1 now appears as a clear, distinct local "blob" above mγγ = k and at low scales (high frequencies).
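To illustrate the idea (this is not the ATLAS implementation), a continuous wavelet transform of a steeply falling spectrum with a faint periodic ripple concentrates the ripple into a localised low-scale region of the scalogram, while the smooth background populates the high scales. A self-contained sketch using a Ricker wavelet:

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet of scale a."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(signal, widths):
    """Continuous wavelet transform: one convolution per scale."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        wavelet = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

# Toy invariant-mass spectrum: falling background plus a faint ripple
# that switches on above a threshold (mimicking the tower onset at k)
x = np.linspace(0.0, 1.0, 512)
background = np.exp(-5.0 * x)
ripple = 0.01 * np.sin(2.0 * np.pi * 40.0 * x) * (x > 0.3)
scalogram = cwt(background + ripple, widths=np.arange(1, 64))
# at small scales, coefficients are sizeable only where the ripple lives
```

Because the zero-mean wavelet barely responds to the smooth background at small scales, even a ripple at the percent level of the background stands out in the scalogram, which is exactly the leverage the ATLAS analysis exploits.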

The strongest exclusion contours to date are placed in the clockwork parameter space

With realistic statistical fluctuations and uncertainties, these distinct "blobs" may partially wash out, as shown in figure 2 (right). To counteract this effect, the analysis uses multiple background-only and background-plus-signal scalograms to train a binary convolutional neural-network classifier. This network is very powerful at distinguishing between the two classes of scalograms, but it is also model-specific. Therefore, another search for possible periodic signals is performed independently of the clockwork-model hypothesis. This is done in an "anomaly detection" mode using an autoencoder neural network. Since the autoencoder is trained on multiple background-only scalograms (unlabelled data) to learn the features of the background (unsupervised learning), it can predict the compatibility of a given scalogram with the background-only hypothesis. A statistical test based on the two networks' scores is derived to check the compatibility of the data with the background-only and background-plus-signal hypotheses.

Applying these novel procedures to the dielectron and diphoton full Run 2 data, ATLAS sees no significant deviation from the background-only hypothesis in either the clockwork-model search or the model-independent one. The strongest exclusion contours to date are placed in the clockwork parameter space, pushing the sensitivity beyond 11 TeV in M5. Although the systematic uncertainties in the background model are large, they do not exhibit any periodic structure in mass, and their impact is naturally reduced by the transformation to scalogram space. The sensitivity of this analysis is therefore mostly limited by statistics and is expected to improve with the full Run 3 dataset.

Inclusive photon production at forward rapidities

ALICE figure 1

The primary goal of high-energy heavy-ion physics is the study of a new state of nuclear matter: quark–gluon plasma, a thermalised system of quarks and gluons. Proton–proton (pp) and proton–nucleus (pA) collisions provide the baseline for the interpretation of results from heavy-ion collisions; pA collisions also help researchers understand the effects of cold nuclear matter on the production of final-state particles.

Global observables, such as the number of produced particles (particle multiplicity) and their distribution in pseudorapidity (η), provide key information about particle-production mechanisms in these collisions. The total multiplicity is mostly determined by soft interactions, i.e. processes with small momentum transfer, which cannot be calculated using perturbative techniques and are instead modelled using non-perturbative phenomenological descriptions. For example, the distribution of the number of produced particles can be used to disentangle relative contributions to particle production from hard and soft processes using a two-component model.

ALICE has recently completed the measurement of the multiplicity and pseudorapidity density distributions of inclusive photons at forward rapidity, spanning the range η = 2.3 to 3.9, by using the photon multiplicity detector (PMD) in pp, pPb and Pbp collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair using LHC Run 1 and 2 data. Since photons mostly originate from decays of neutral pions, this result complements existing measurements of charged-particle production. A comparative study of charged particles and inclusive photons can reveal possible similarities and differences in the underlying production mechanisms for charged and neutral particles.

The PMD uses the preshower technique, where a three-radiation-length-thick lead converter is sandwiched between two planes comprising an array of 184,320 gas-filled proportional counters. Photons are distinguished from hadrons in the PMD’s preshower plane by applying suitable thresholds on the number of detector cells and the energy deposited in reconstructed clusters.

The measured distributions are corrected for instrumental effects using a Bayesian unfolding method. This is the first time that the dependence of inclusive photon production on the number of nucleons participating in the pPb collision, and its scaling behaviour, have been studied at the LHC.
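Iterative Bayesian (D'Agostini) unfolding, of the kind referenced here, repeatedly applies Bayes' theorem with the current truth estimate as prior. A minimal sketch under simplified assumptions (a known response matrix, no regularisation or uncertainty propagation):

```python
import numpy as np

def bayesian_unfold(response, measured, iterations=20):
    """Iterative Bayesian unfolding.
    response[i, j] = P(observed bin i | true bin j); the column sums
    give the detection efficiency of each true bin."""
    n_true = response.shape[1]
    efficiency = response.sum(axis=0)
    truth = np.full(n_true, measured.sum() / n_true)  # flat starting prior
    for _ in range(iterations):
        folded = response @ truth                       # expected observed spectrum
        posterior = response * truth / folded[:, None]  # P(true j | observed i)
        truth = (posterior.T @ measured) / efficiency   # updated truth estimate
    return truth

# toy closure test: smear a known truth spectrum, then unfold it back
response = np.array([[0.8, 0.2],
                     [0.2, 0.8]])
truth_in = np.array([100.0, 50.0])
measured = response @ truth_in          # smeared spectrum [90, 60]
unfolded = bayesian_unfold(response, measured)
```

Each iteration pulls the estimate towards the maximum-likelihood solution; in practice the iteration count is limited (or the result regularised) to avoid amplifying statistical fluctuations.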

Figure 1 (left) compares the pseudorapidity density distribution of inclusive photons in minimum bias pp, pPb and Pbp collisions measured at forward rapidity to that of charged particles at midrapidity. The pseudorapidity distribution of inclusive photons at forward rapidity smoothly matches that of charged particles at midrapidity, indicating that the production mechanisms for charged and neutral pions are similar. Figure 1 (right) shows the pseudorapidity density distribution of inclusive photons in pPb collisions for different multiplicity classes as estimated using the energy deposited in the zero-degree calorimeter (ZNA) at beam rapidity. The multiplicity in the most central collisions reaches values twice as large as those in minimum bias events. The data and model agree within one sigma of the measurement uncertainties.

These results of inclusive photon production in pp, pPb and Pbp collisions provide valuable input for the development of theoretical models and Monte Carlo event generators, and help to establish the baseline measurements for the interpretation of PbPb collision data.

Charm production in proton–lead collisions

LHCb figure 1

A crucial missing piece in our understanding of quantum chromodynamics (QCD) is a complete description of hadronisation in hard-scattering processes with a large momentum transfer, which the LHCb collaboration has now investigated in proton–lead (pPb) collisions. While perturbative QCD describes reasonably well the transverse-momentum (pT) dependence of heavy-quark production in proton–proton (pp) collisions, the situation is different in heavy-ion collisions due to the formation of quark–gluon plasma (QGP), which affects the behaviour of particles traversing the medium. In particular, hadronisation can be modified, changing the relative abundances of hadrons compared to pp collisions. Several models predict enhanced strange-quark production in the QGP; an abundance of strange baryons is therefore seen as a signature of QGP formation.

The role that QGP may play in pPb collisions is currently unclear. Some models predict the formation of "QGP droplets", which could induce the same behaviour as in PbPb collisions, albeit less pronounced. In addition, "cold nuclear matter" (CNM) effects present in pPb interactions can mimic the behaviour caused by QGP, but via different mechanisms. For all these reasons, a strangeness enhancement in pPb collisions would strongly indicate the formation of a deconfined medium in small systems, providing crucial information about QGP properties and formation once the CNM effects are under control.

The LHCb collaboration recently analysed pPb data for QGP effects with the twofold purpose of searching for strangeness enhancement and gaining a precise understanding of the CNM effects. The search was performed by measuring the production ratio of the strange baryon Ξ+c, which had never before been observed in pPb collisions, to the strangeless baryon Λ+c. Using an earlier pPb sample, LHCb has also studied the production ratios of the D+s, D+ and D0 mesons, the first being measured for the first time down to zero pT in the forward region, precisely addressing CNM effects. All measurements are performed differentially in pT and the rapidity of the produced particle, and compared to the latest theory predictions. The Ξ+c cross-section, measured for the first time in pPb collisions, gives strong indications about the factorisation scale μ0 of the theory model. This result allows the absolute scale of the theoretical computations to be set in terms of strangeness production, a trend confirmed with even higher precision by comparing the measurement to the Λ+c production cross-section evaluated in the same decay mode. Moreover, the ratio is roughly constant as a function of pT and behaves in the same way at positive (pPb) and negative (Pbp) rapidities (see figure 1). The measurement is consistent with models incorporating initial-state effects due to gluon shadowing in nuclei, suggesting that QGP formation and the resulting strangeness enhancement have little or no effect on Ξ+c production in pPb collisions.

This interpretation is confirmed by the measurement of the D+s, D+ and D0 cross-sections and corresponding ratios in different rapidity regions. While the ratios show little enhancement within the statistical uncertainty, a large forward–backward production asymmetry is observed. This strongly indicates CNM effects and provides detailed constraints on models of nuclear parton distribution functions and hadron production over a very wide range of Bjorken-x (10–5 to 10–2). A strong suppression is observed for the D mesons, giving insight into the nature of the CNM effects involved. An explanation via additional final-state effects is challenged by the Ξ+c data, which are well described by models that do not include them. The production ratios of Ξ+c, D+s, D+ and D0 measured as a function of pT in pPb collisions confirm these findings. All these studies will profit from the increased statistics in pPb collisions expected from future LHC runs.

A novel search for inelastic dark matter

CMS figure 1

As dark matter (DM) search experiments increasingly constrain minimal models, more complex ones have gained importance, featuring a rich “dark sector” with additional particle states and often involving forces that cannot be directly felt by Standard Model (SM) particles. Nevertheless, the SM and dark sector are typically connected by a “portal” that can be experimentally probed.

The CMS collaboration recently presented the first dedicated collider search for inelastic dark matter (IDM) using the LHC Run 2 dataset. In IDM models, a small Majorana mass component is combined with a Dirac fermion field corresponding to the DM and added to the SM Lagrangian, resulting in two new DM mass eigenstates with a predominantly off-diagonal (inelastic) coupling and a small mass splitting. In addition, a dark photon (a gauge boson similar to the ordinary photon) serves as the portal to the SM. This means that at the LHC, the lighter (χ1) and heavier (χ2) DM states are simultaneously produced via a dark photon (A′). While the lighter state is stable and escapes the detector, the heavier one can travel a macroscopic distance before decaying to the lighter one and a pair of muons, which are produced away from the collision point.

This process can be probed by exploiting a striking signature: a pair of almost collinear, low-momentum and displaced muons from the χ2 decay; significant missing transverse momentum (MET) from the χ1; and an initial-state radiation jet that can be used for trigger purposes. The MET-dimuon system recoils against the high-momentum jet, so that the muons and MET are also almost collinear. This unique topology presents challenges, including the reconstruction of the displaced muons. This problem was addressed by using a dedicated reconstruction algorithm, which remains efficient even for muons produced several metres away from the collision point (figure 1, left).

The first dedicated collider search for IDM using the full dataset collected during LHC Run 2

After applying event-selection criteria targeting the expected IDM signal, the number of events is compared to the data-driven background prediction: no excess is observed. Upper limits are set on the product of the pp → A′χ2χ1 production cross-section and the branching fraction of the χ2 → χ1μ+μ decay; they are shown in figure 1 (right) for a scenario with a 10% mass splitting between the χ1 and χ2 states. The y variable is roughly proportional to the interaction strength between the SM and the dark sector. Depending on the mass, values of y above 10–9 to 10–7 are excluded for masses between 3 and 80 GeV, assuming that the fine-structure constant has the same value in the dark sector as in the SM.

CMS physicists are looking forward to probing more complex and well-motivated DM models with novel and creative uses of the existing detector.

DAMPE confirms cosmic-ray complexity

Energy spectra measured by DAMPE

The exact origin of the high-energy cosmic rays that bombard Earth remains one of the most important open questions in astrophysics. Since their discovery more than a century ago, a multitude of potential sources, both galactic and extragalactic, have been proposed. Supernova remnants and pulsars are examples of proposed galactic sources, theorised to be responsible for cosmic rays with energies below the PeV range, while blazars and gamma-ray bursts are two of many potential sources of the cosmic-ray flux at higher energies.

The origin of astrophysical photons can be identified from their arrival direction, but for cosmic rays this is not as straightforward because galactic and extragalactic magnetic fields deflect them en route. To identify the origin of cosmic rays, researchers therefore rely almost fully on information embedded in their energy spectra. Assuming only acceleration within the shock regions of extreme astrophysical objects, the galactic cosmic-ray spectrum should follow a simple, single power law with an index between –2.7 and –2.6. However, thanks to measurements by a range of dedicated instruments including AMS, ATIC, CALET, CREAM and HAWC, we know the spectrum to be more complex. Furthermore, different types of cosmic rays, such as protons and the nuclei of helium or oxygen, have been shown to exhibit different spectral features, with breaks at different energies.

New measurements by the space-based Chinese–European Dark Matter Particle Explorer (DAMPE) provide detailed insights into the various spectral breaks in the combined proton and helium spectrum. Clear hints of spectral breaks had already been seen by various balloon- and space-based experiments at low energies (below about 1 TeV), and by ground-based air-shower detectors at high energies (above a few TeV). However, in the region where space-based measurements start to suffer from a lack of statistics, ground-based instruments suffer from low sensitivity, resulting in relatively large uncertainties. Furthermore, the completely different ways in which space- and ground-based instruments measure the energy (directly in the former, via air-shower reconstruction in the latter) made measurements that clearly connect the two regimes important. DAMPE has now produced detailed spectra in the 46 GeV to 316 TeV energy range, thereby filling most of the gap. The results confirm both a spectral hardening around 100 GeV and a subsequent spectral softening around 10 TeV, which connects well with a second spectral bump previously observed by ARGO-YBJ+WFCT at an energy of several hundred TeV (see figure).
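Spectral features like these are commonly described with smoothly broken power laws. A sketch with illustrative parameters (not DAMPE fit values), showing an index hardening from –2.7 to –2.6 at a 100 GeV break:

```python
import numpy as np

def broken_power_law(E, phi0, gamma1, gamma2, E_break, s=5.0):
    """Smoothly broken power law: spectral index gamma1 below E_break,
    gamma2 above it; s controls the sharpness of the transition."""
    return (phi0 * E ** (-gamma1)
            * (1.0 + (E / E_break) ** s) ** ((gamma1 - gamma2) / s))

E = np.logspace(1.0, 6.0, 300)  # 10 GeV to 1 PeV, illustrative range
flux = broken_power_law(E, phi0=1.0, gamma1=2.7, gamma2=2.6, E_break=100.0)
slope = np.gradient(np.log(flux), np.log(E))  # local spectral index
```

Fitting the break positions and the index changes on either side of each break is what lets experiments such as DAMPE discriminate between source populations and propagation effects.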

The complex spectral features of high-energy cosmic rays can be explained in various ways. One possibility is the presence of different types of cosmic-ray sources in our galaxy: one population produces cosmic rays with energies up to the PeV scale, for example, while a second only reaches tens of TeV. A second possibility is that the spectral features result from a nearby single source whose cosmic rays we observe directly, before they are diffused by the galactic magnetic field. Examples of such a nearby source could be the Geminga pulsar or the young Vela supernova remnant.

In the near future, novel data and analysis methods will likely allow researchers to distinguish between these two theories. One important source of such data is the LHAASO experiment in China, which is currently taking detailed measurements of cosmic rays in the 100 TeV to EeV range. Furthermore, thanks to ever-increasing statistics, the anisotropy of cosmic-ray arrival directions will also become a means of comparing different models, in particular for identifying nearby sources. The important link between direct and indirect measurements presented in this work thereby paves the way to connecting the large amounts of upcoming data to theories on the origins of cosmic rays.
