Neutral pion (π0) and eta-meson (η) production cross sections at midrapidity have recently been measured up to unprecedentedly high transverse momenta (pT) in proton–proton (pp) and proton–lead (p–Pb) collisions at √sNN = 8 and 8.16 TeV, respectively. The mesons were reconstructed in the two-photon decay channel, from a pT of 0.5 GeV up to 200 GeV for π0 mesons and from 1 GeV up to 50 GeV for η mesons. The high momentum reach of the π0 measurement was achieved by identifying two-photon showers reconstructed as a single energy deposit in the ALICE electromagnetic calorimeter.
In pp collisions, measurements of identified hadron spectra are used to constrain perturbative predictions from quantum chromodynamics (QCD). At large momentum transfer (Q2), calculations in perturbative QCD (pQCD) rely on the factorisation of computable short-distance parton scattering processes, such as quark–quark, quark–gluon and gluon–gluon scattering, from long-distance properties of QCD that need experimental input. These properties are modelled by parton distribution functions (PDFs), which describe the fractional-momentum (x) distributions of quarks and gluons within the proton, and fragmentation functions, which describe the fractional-momentum distributions of hadrons of a given species produced by a fragmenting quark or gluon.
In p–Pb collisions, nuclear effects are expected to significantly affect particle production, in particular at small parton fractional momentum x, compared to pp collisions. Modification at low pT (~1 GeV), usually attributed to nuclear shadowing (CERN Courier March/April 2021 p19), can be parameterised by nuclear parton distribution functions (nPDFs). However, since high parton densities are reached at the LHC, the Colour Glass Condensate (CGC) framework is also applicable at low pT (x values as small as ~5 × 10–4), which predicts strong particle suppression due to saturation of the parton phase space in nuclei. Above momenta of about 10 GeV/c, measurements in p–Pb collisions can also be sensitive to the energy loss of the outgoing partons in nuclear matter.
The nuclear modification factor (RpPb), shown in the lower panel of the figure, was measured as the ratio of the cross sections in p–Pb and pp collisions, normalised by the atomic mass number. Below 10 GeV, RpPb is found to be smaller than unity, while above 10 GeV it is consistent with unity. The measurement is described by calculations over the full transverse-momentum range and provides further constraints on the nPDF parameterisations below about 5 GeV. The direct comparison of the neutral-pion cross section in pp collisions at 8 TeV with pQCD calculations, shown in the upper panel of the figure, reveals differences in the low to intermediate pT range, which, however, cancel in RpPb, since similar differences are also present for the p–Pb cross section. High-precision measurements using the large dataset from pp collisions at 13 TeV are ongoing and will provide further constraints on pQCD calculations.
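The definition of the nuclear modification factor can be sketched in a few lines. The spectra below are illustrative placeholder values chosen to mimic the trend described above, not ALICE data; only the formula RpPb = (dσ_pPb/dpT) / (A · dσ_pp/dpT), with A = 208 for lead, is taken from the text.

```python
# Sketch of the nuclear modification factor:
# R_pPb(pT) = (dsigma_pPb/dpT) / (A * dsigma_pp/dpT), with A = 208 for lead.
# The spectra below are placeholder numbers for illustration, not ALICE data.

A = 208  # mass number of lead

pt = [2.0, 5.0, 20.0]                    # GeV/c (illustrative bins)
dsigma_pp = [1.2e-3, 4.0e-5, 1.0e-7]     # pp cross section per bin (arb. units)
dsigma_ppb = [0.18, 8.0e-3, 2.1e-5]      # p-Pb cross section per bin (arb. units)

r_ppb = [s_ppb / (A * s_pp) for s_ppb, s_pp in zip(dsigma_ppb, dsigma_pp)]
for x, r in zip(pt, r_ppb):
    print(f"pT = {x:5.1f} GeV/c  ->  R_pPb = {r:.2f}")
```

With these toy inputs the ratio sits below unity at low pT and approaches unity above 10 GeV, reproducing the qualitative behaviour of the measurement.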
The XIX International Workshop on Neutrino Telescopes (NeuTel) attracted 1000 physicists online from 18 to 26 February, under the organisation of INFN Sezione di Padova and the Department of Physics and Astronomy of the University of Padova.
The opening session featured presentations by Sheldon Lee Glashow, on the past and future of neutrino science, Carlo Rubbia, on searches for neutrino anomalies, and Barry Barish, on the present and future of gravitational-wave detection. This session was a propitious moment for IceCube principal investigator Francis Halzen to give a “heads-up” on the first observation, in the South-Pole detector, of a so-called Glashow resonance – the interaction of an electron antineutrino with an atomic electron to produce a real W boson, as the eponymous theorist predicted back in 1960. According to Glashow’s calculations, the energy at which the resonance occurs depends on the mass of the W boson, which was discovered in 1983 by Rubbia and his team.
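Glashow’s resonance condition follows from simple kinematics: for an antineutrino striking an electron at rest, the centre-of-mass energy equals the W mass when E = mW²/(2me), which comes out near 6.3 PeV. A two-line check (using the PDG value of the W mass):

```python
# Energy of the Glashow resonance: s = 2 * m_e * E_nu = m_W^2
# => E_nu = m_W^2 / (2 * m_e), for an antineutrino hitting an electron at rest.

m_W = 80.38      # W-boson mass in GeV
m_e = 0.000511   # electron mass in GeV

E_res = m_W**2 / (2 * m_e)               # in GeV
print(f"E_res = {E_res / 1e6:.2f} PeV")  # about 6.3 PeV
```

This also makes concrete why the resonance calibrates the absolute energy scale of neutrino telescopes: the resonance energy is pinned to the precisely known W mass.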
The first edition of NeuTel saw the birth of the idea of instrumenting a large volume of Antarctic ice to capture high-energy neutrinos – a “Deo volente” (God willing) detector, as Halzen and collaborators then dubbed it. Thirty-three years later, as the detection of a Glashow resonance demonstrates, it is possible to precisely calibrate the absolute energy scale of these gigantic instruments for cosmic particles, and we have achieved several independent proofs of the existence of high-energy cosmic neutrinos, including first confirmations by ANTARES and Baikal-GVD.
Astrophysical models describing the connections between cosmic neutrinos, photons and cosmic rays were discussed in depth, with special emphasis on blazars, starburst galaxies and tidal-disruption events. Perspectives for future global multi-messenger observations and campaigns, including gravitational waves and networks of neutrino instruments over a broad range of energies, were illustrated, anticipating core-collapse supernovae as the most promising sources. The future of astroparticle physics relies upon very large infrastructures and collaborative efforts on a planetary scale. Next-generation neutrino telescopes might follow different strategic developments. Extremely large volumes, equipped with cosmic-ray-background veto techniques and complementary radio-sensitive installations, might be the key to achieving high statistics and high-precision measurements over a large energy range, given limited sky coverage. Alternatively, a network of intermediate-scale installations, like KM3NeT, distributed over the planet and based on existing or future infrastructures, might be better suited for population studies of transient phenomena. Efforts are currently being undertaken along both paths, with a newborn project, P-ONE, exploiting existing deep-underwater Canadian infrastructures for science to operate strings of photomultipliers.
T2K and NOvA did not update last summer’s leptonic–CP–violation results. The tension between their measurements produces counter-intuitive best-fit values when a combination is attempted, as discussed by Antonio Marrone of the University of Bari. The most striking example is the neutrino mass hierarchy: each experiment’s own fit favours a normal hierarchy, but their combination, owing to a tension in the value of the CP phase, favours an inverted hierarchy.
The founder of the Borexino experiment, Gianpaolo Bellini, discussed the results of the experiment, including the latest exciting measurements of the CNO cycle in the Sun. The DUNE, Hyper-Kamiokande and JUNO collaborations presented progress towards the realisation of their projects, and speakers discussed their potential in new-physics searches, astrophysics investigations and neutrino-oscillation sensitivities. The latest results of the reactor-neutrino experiment Neutrino-4, which about one year ago claimed 3.2σ evidence for an oscillation anomaly that could be induced by sterile neutrinos, were discussed in a dedicated session. Both ICARUS and KATRIN presented their sensitivities to this signal in two completely different setups.
Marc Kamionkowski (Johns Hopkins University) and Silvia Galli (Institut d’Astrophysique de Paris) both provided an update on the “Hubble tension”: an approximately 4σ difference in the Hubble constant when determined from angular temperature fluctuations in the cosmic microwave background (probing the expansion rate when the universe was approximately 380,000 years old) and observing the recession velocity of supernovae (which provides its current value). This Hubble tension could hint at new physics modifying the thermal history of our universe, such as massive neutrinos that influence the early-time measurement of the Hubble parameter.
Microseconds after the Big Bang, quarks and gluons roamed freely. As the universe expanded, this quark–gluon plasma (QGP) cooled. When the temperature dropped to roughly a hundred thousand times that in the core of the Sun, hadrons formed. Today, this phase transition is reproduced in the heart of detectors at the LHC when lead ions careen into each other at high energy.
The experimental quest for the QGP started in the 1980s using fixed-target collisions at the Alternating Gradient Synchrotron at Brookhaven National Laboratory (BNL) and the Super Proton Synchrotron at CERN. This side of the millennium, collider experiments have provided a big jump in energy, first at the Relativistic Heavy Ion Collider (RHIC) at BNL, and now at the LHC. Both facilities allow a thorough investigation of the QGP at different points on the still-mysterious phase diagram of quantum chromodynamics.
Among the most striking features of the QGP formed at the LHC is the development of “collective” phenomena, as spatial anisotropies are transformed by pressure gradients into momentum anisotropies. The ALICE experiment is designed to study the collective behaviour of the torrent of particles created in the hadronisation of QGP droplets. Following detailed studies of the “flow” of the abundant light hadrons that are produced, ALICE has recently demonstrated, alongside certain competitive measurements by CMS and ATLAS, the flow of heavy-flavour (HF) hadrons – particles that probe the entire lifetime of a droplet of QGP.
A perfect fluid
The QGP created in lead–ion collisions at the LHC is made up of thousands of quarks and gluons – far too many quantum fields to keep track of in a simulation. In the early 2000s, however, measurements at RHIC revealed that the QGP has a simplifying property: it is a near perfect fluid, with a very low viscosity, as indicated by observations of the highest collective flows allowable in viscous hydrodynamic simulations. More precisely, its shear viscosity-to-entropy-density ratio – the generalisation of the non-relativistic kinematic viscosity – appears to be only a little above the conjectured quantum limit of 1/4π derived using holographic gravity (AdS/CFT) duality. As the QGP is a near-perfect fluid, its expansion can be modelled using a few local quantities such as energy density, velocity and temperature.
In noncentral heavy-ion collisions, the overlap region between the two incoming nuclei has an almond shape, which naturally imprints a spatial anisotropy to the initial state of the system: the QGP is less elongated along the symmetry plane that connects the centres of the colliding nuclei. As the system evolves, interactions push the QGP more strongly along the shorter symmetry-plane axis than along the longer one (see “Noncentral collision” figure). This is called elliptic flow.
Density fluctuations in the initial state may also lead to other anisotropic flows in the velocity field of the QGP. Triangular flow, for example, pushes the system along three axes. In general, this collective motion is decomposed as 1 + 2 ∑ vn cos(n(ϕ–Ψn)), where vn are harmonic coefficients, ϕ is the azimuthal angle of the final-state particles in the plane transverse to the beam, and Ψn are the orientations of the symmetry planes. v1, which is expected to be negligible at mid-rapidity, is “directed flow” towards a single maximum, while v2 and v3 signal elliptic and triangular flow. The LHC’s impressive luminosity has allowed ALICE to measure significant values for the flow of light-flavour hadrons up to v9 (see “Light-flavour flow” figure).
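The harmonic decomposition above can be made concrete with a toy Monte Carlo: sample azimuthal angles from dN/dϕ ∝ 1 + 2 v2 cos(2(ϕ–Ψ2)) and recover the elliptic-flow coefficient as the event average of cos(2(ϕ–Ψ2)). The values of v2, Ψ2 and the sample size below are arbitrary choices for illustration.

```python
import numpy as np

# Toy illustration of anisotropic flow: sample azimuthal angles from
# dN/dphi ∝ 1 + 2*v2*cos(2*(phi - Psi2)) and recover v2 as <cos(2(phi - Psi2))>.
rng = np.random.default_rng(0)
v2_true, psi2 = 0.10, 0.3        # arbitrary flow coefficient and symmetry plane

# Rejection sampling of phi in [0, 2*pi)
n = 200_000
phi = rng.uniform(0, 2 * np.pi, 4 * n)
pdf = 1 + 2 * v2_true * np.cos(2 * (phi - psi2))
keep = rng.uniform(0, 1 + 2 * v2_true, phi.size) < pdf
phi = phi[keep][:n]

v2_est = np.mean(np.cos(2 * (phi - psi2)))
print(f"estimated v2 = {v2_est:.3f}")   # close to the input value of 0.10
```

Real analyses cannot use the true symmetry plane and instead estimate vn from multi-particle correlations, but the underlying Fourier decomposition is the same.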
The importance of being heavy
The bulk of the QGP is composed of thermally produced gluons and light quarks. By contrast, thermal HF production is negligible as the typical temperature of the system created in heavy-ion collisions is a few hundred MeV – significantly below the mass of a charm or beauty quark–antiquark pair. HF quarks are instead created in quark–antiquark pairs in early hard-scattering processes on shorter timescales than the QGP formation time, and experience the whole evolution of the system.
Heavy quarks are therefore powerful probes of properties of the QGP. As they traverse the medium, they interact with its constituents, gaining or losing energy depending on their momenta. High-momentum HF quarks lose energy via both elastic (collisional) and inelastic (gluon radiation) processes. Low-momentum HF quarks are swept along with the flow of the medium, partially thermalising with it via multiple interactions. The thermalisation time increases with the particle’s mass, and so a higher degree of thermalisation is expected for charm than for beauty. Subsequent hadronisation brings additional complexity: as colour-charged quarks arrange themselves in colour-neutral hadrons, extra contributions to their flow arise from the influence of the surrounding medium when they coalesce with nearby light quarks.
In the past two years, the ALICE collaboration has measured the elliptic and triangular flow coefficients of HF hadrons with open and hidden charm and beauty. The results are currently unique in both scope and transverse-momentum coverage, and depend on the simultaneous reconstruction of thousands of particles in the ALICE detectors (see “ALICE in action” panel). In each case, these HF flows should be compared to the flow of the abundant light-particle species such as charged pions. Within the hydrodynamic description, particles originating from the thermally expanding medium at relatively low transverse momenta typically exhibit flow coefficients that increase with transverse momentum. Faster particles also interact with the medium, but might not reach thermal equilibrium. For these particles, an azimuthal anisotropy develops due to the shorter length of medium they traverse along the symmetry plane, but it is not as large, and anisotropy coefficients are expected to fall with increasing transverse momentum. When thermal equilibrium is achieved, it imprints the same velocity field to all particles: the result is a mass hierarchy wherein heavier particles exhibit lower flow coefficients for a given transverse momentum.
The geometrical overlap between the two colliding nuclei varies from head-on collisions that produce a huge number of particles, sending several thousand hadrons flying to ALICE’s detectors (“0% centrality”, as a percentile of the hadronic cross section) to peripheral collisions where the two nuclei barely overlap (“100% centrality”). Since the initial geometry is not directly experimentally accessible, centrality is estimated using either the total particle multiplicity or the energy deposited in the detectors.
Among the cloud of particles are a handful of open and hidden heavy-flavour hadrons that are reconstructed from their decay products using tracking, particle-identification and decay-vertex reconstruction. Charm mesons are reconstructed through hadronic decay channels using the central barrel detectors. Open beauty hadrons are also reconstructed in the central barrel using their semileptonic decay to an electron as a proxy. Compelling evidence of heavy-quark energy loss in deconfined strongly interacting matter is provided by the suppression of high-pT open heavy-flavour hadron yields in central nucleus–nucleus collisions relative to proton–proton collisions (after scaling by the average number of binary nucleon–nucleon collisions).
A small fraction of the initially created heavy-quark pairs will bind together to form charmonium (cc̄) or bottomonium (bb̄) states that are reconstructed in the forward muon spectrometer using their decay channel to two muons. Charmonium states were among the first proposed probes of the deconfinement of the QGP. The potential between the heavy quark and antiquark pair is partially screened by the high density of colour charges in the QGP, leading to a suppression of the production of charmonium states. Interestingly, however, ALICE observes less suppression of the J/ψ in lead–lead collisions than is seen at the lower collision energies of RHIC, despite the increased density of colour charges at higher collision energies. This effect may be understood as due to J/ψ regeneration as the copiously produced charm quarks and antiquarks recombine. By contrast, bottomonia are not expected to have a large regeneration contribution due to the larger mass and thus lower production cross section of the beauty quark.
D mesons are the lightest and most abundant hadrons formed from a heavy quark, and are key to understanding the dynamics of charm quarks in the collision. A substantial anisotropy is observed for D mesons in non-central collisions (see “Elliptic flow” figure). As expected, the measured pT dependence is similar to that for light particles, suggesting that D mesons are strongly affected by the surrounding medium, participating in the collective motion of the QGP and reaching a high degree of thermalisation. J/ψ mesons, which do not contain light-flavour quarks, also exhibit significant positive elliptic flow with a similar pT shape. Open beauty hadrons, whose mass is dominated by the b quark, are also seen to flow, and in the low to intermediate pT region, below 4 GeV, an apparent mass hierarchy is seen: the lighter the particle, the greater the elliptic flow, as expected in a hydrodynamical description of QGP evolution. Above 6 GeV, the elliptic flows of the three particles converge, perhaps as a result of energy loss as energetic partons move through the QGP. In contrast to the other particles, ϒ mesons do not show any significant elliptic flow. This is not surprising as the transverse momentum of peak elliptic flow is expected to scale with the mass of the particle according to the hydrodynamic description of the evolution of the QGP – for ϒ mesons that should be beyond 10 GeV, where the uncertainties are currently large.
Theoretical descriptions of elliptic flow are also making progress. Models of HF flow need to include a realistic hydrodynamic expansion of the QGP, the interaction of the heavy quarks with the medium via collisional and radiative processes, and the hadronisation of heavy quarks via both fragmentation and coalescence. For example, the “TAMU” model describes the measurements of the D mesons and electrons from beauty-hadron decays reasonably well, but shows some tension with the measurement of J/ψ at intermediate and high transverse momenta, perhaps indicating that a mechanism related to parton energy loss is not included.
Triangular flow
Triangular flow is observed for D and J/ψ mesons in central collisions, demonstrating that energy-density fluctuations in the initial state have a measurable effect on the heavy quark sector (see “Triangular flow” figure). These measurements of a triangular flow of open- and hidden- charm mesons pose new challenges to models describing HF interactions in the QGP: models now need to account not only for the properties of the medium and the transport of the HF quarks through it, but also for fluctuations in the initial conditions of the heavy-ion collisions.
In the coming years, measurements of HF flow will continue to strongly constrain models of the QGP. It is now clear that charm quarks take part in the collective motion of the medium and partially thermalise. More data is needed to make firm conclusions about open and hidden beauty hadrons. All four LHC experiments will study how heavy quarks diffuse in a colour-deconfined and hydrodynamically expanding medium with the greater luminosities set to be delivered in LHC Run 3 and Run 4. Currently ongoing upgrades to ALICE will extend its unique advantages in track reconstruction at low momenta, and upgrades to LHCb will allow this asymmetric experiment to study non-central collisions in Run 3. In the next long shutdown of the LHC, upgrades to CMS and ATLAS will then extend their already impressive flow measurements to be competitive with ALICE in the crucial low transverse momentum domain, inching us closer to understanding both the early universe and the phase diagram of quantum chromodynamics.
The electroweak session of the Rencontres de Moriond convened more than 200 participants virtually from 22 to 27 March in a new format, with pre-recorded plenary talks and group-chat channels that went online in advance of live discussion sessions. The following week, the QCD and high-energy interactions session took place with a more conventional virtual organisation.
The highlight of both conferences was the new LHCb result on RK based on the full Run 1 and Run 2 data, and corresponding to an integrated luminosity of 9 fb–1, which led to the claim of the first evidence for lepton-flavour-universality (LFU) violation from a single measurement. RK is the ratio of the branching fractions for the decays B+→ K+ μ+ μ– and B+→ K+ e+ e–. LHCb measured this ratio to be 3.1σ below unity, despite the fact that the two branching fractions are expected to be equal by virtue of the well-established property of lepton universality (see New data strengthens RK flavour anomaly). Coupled with previously reported anomalies of angular variables and the RK*, RD and RD* branching-fraction ratios by several experiments, it further reinforces the indications that LFU may be violated in the B sector. Global fits and possible theoretical interpretations with new particles were also discussed.
Important contributions
Results from Belle II and BES III were reported. Some of the highlights were a first measurement of the B+→ K+νν̄ decay and the most stringent limits to date for masses of axions between 0.2 and 1 GeV from Belle II, based on the first data they collected, and searches for LFU violation in the charm sector from BES III, which so far yield null results. Belle II is expected to give important contributions to the LFU studies soon and to accumulate an integrated luminosity of 50 ab–1 10 years from now.
ATLAS and CMS presented tens of new results each on Standard Model (SM) measurements and searches for new phenomena in the two conferences. Highlights included the CMS measurement of the W leptonic and hadronic branching fractions, with an accuracy better than that achieved at LEP for the branching fractions to the electron and muon, and the updated ATLAS evidence of the four-top-production process at 4.7σ (with 2.6σ expected). ATLAS and CMS have not yet found any indications of new physics but continue to perform many searches, expanding the scope to as-yet unexplored areas, and many improved limits on new-physics scenarios were reported for the first time at both conference sessions.
Several results and prospects of electroweak precision measurements were presented and discussed, including a new measurement of the fine-structure constant with a precision of 80 parts per trillion, and a measurement at PSI of the electric dipole moment of the neutron, found to be consistent with zero with an uncertainty of 1.1 × 10–26 e∙cm. Theoretical predictions of (g–2)μ were discussed, including the recent lattice calculation from the Budapest–Marseille–Wuppertal group of the hadronic–vacuum–polarisation contribution, which, if used in comparison with the experimental measurement, would bring the tension with the (g–2)μ prediction to within 2σ.
In the neutrino session, the most relevant new results of last year were discussed. KATRIN reported updated upper limits on the neutrino mass, obtained from the direct measurement of the endpoint of the electron spectrum of tritium β decay, while T2K showed the most recent results concerning CP violation in the neutrino sector, obtained from the simultaneous measurement of νμ and ν̄μ disappearance, and νe and ν̄e appearance. The measurement disfavours at 90% CL the CP-conserving values 0 and π of the CP-violating parameter of the neutrino mixing matrix, δCP, and all values between 0 and π.
The quest for dark matter is in full swing and is expanding on all fronts. XENON1T updated delegates on an intriguing small excess in the low-energy part of the electron-recoil spectrum, from 1 to 7 keV, which could be interpreted as originating from new particles but that is also consistent with an increased background from tritium contamination. Upcoming new data from the upgraded XENONnT detector are expected to be able to disentangle the different possibilities, should the excess be confirmed. The Axion Dark Matter eXperiment (ADMX) is by far the most sensitive experiment to detect axions in the explored range around 2 μeV. ADMX showed near-future prospects and the plans for upgrading the detector to scan a much wider mass range, up to 20 μeV, in the next few years. The search for dark matter also continues at accelerators, where it could be directly produced or be detected in the decays of SM particles such as the Higgs boson.
ATLAS and CMS also presented new results at the Moriond QCD and high-energy-interactions conference. Highlights of the new results are: the ATLAS full Run-2 search for double-Higgs-boson production in the bbγγ channel, which yielded the tightest constraints to date on the Higgs-boson self-coupling, and the measurement of the top-quark mass by CMS in the single-top-production channel that for the first time reached an accuracy of less than 1 GeV, now becoming relevant to future top-mass combinations. Several recent heavy-ion results were also presented by the LHC experiments, and by STAR and PHENIX at RHIC, in the dedicated heavy-ion session. One highlight was a result from ALICE on the measurement of the Λc+ transverse-momentum spectrum and the Λc+ /D0 ratio in pp and p–Pb collisions, showing discrepancies with perturbative QCD predictions.
The above is only a snapshot of the many interesting results presented at this year’s Rencontres de Moriond, representing the hard work and dedication of countless physicists, many at the early-career stage. As ever, the SM stands strong, though intriguing results provoked lively debate during many virtual discussions.
It has been almost a century since Dirac formulated his famous equation, and 75 years since the first QED calculations by Schwinger, Tomonaga and Feynman were used to explain the small deviations in hydrogen’s hyperfine structure. These calculations also predicted that the fermion g-factor should deviate from Dirac’s prediction of exactly two, making the anomaly a = (g–2)/2 non-zero and thus “anomalous”. The result is famously engraved on Schwinger’s tombstone, standing as a monument to the importance of this result and a marker of things to come.
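Schwinger’s celebrated one-loop result, the expression on his tombstone, is a = α/2π. A two-line check reproduces the leading contribution to the anomaly (the CODATA value of the fine-structure constant is used):

```python
import math

# Schwinger's one-loop QED result for the anomalous magnetic moment:
# a = alpha / (2 * pi)
alpha = 1 / 137.035999084          # fine-structure constant (CODATA)
a_schwinger = alpha / (2 * math.pi)
print(f"a = {a_schwinger:.9f}")    # ~0.001161410
```

This single term already accounts for the bulk of the roughly 0.1% deviation of g from two; the remaining corrections, through five loops in QED plus hadronic and electroweak pieces, shift it only in the further decimal places.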
In January 1957 Garwin and collaborators at Columbia published the first measurements of g for the recently discovered muon, accurate to 5%, followed two months later by Cassels and collaborators at Liverpool with uncertainties of less than 1%. Leon Lederman is credited with initiating the CERN campaign of g–2 experiments from 1959 to 1979, starting with a borrowed 83 × 52 × 10 cm magnet from Liverpool and ending with a dedicated storage ring and a precision of better than 10 ppm.
Why was CERN so interested in the muon? In a 1981 review, Combley, Farley and Picasso commented that the CERN results for aμ had a higher sensitivity to new physics by “a modification to the photon propagator or new couplings” by a factor (mμ/me)2. Revealing a deeper interest, they also admitted “… this activity has brought us no nearer to the understanding of the muon mass [200 times that of the electron].”
With the end of the CERN muon programme, focus turned to Brookhaven and the E821 experiment, which took up the challenge of measuring aμ 20 times more precisely, providing sensitivity to virtual particles with masses beyond the reach of the colliders at the time. In 2004 the E821 collaboration delivered on its promise, reporting results accurate to about 0.6 ppm. At the time this showed a 2–3σ discrepancy with respect to the Standard Model (SM) – tantalising, but far from conclusive.
Spectacular progress
The theoretical calculation of g–2 made spectacular progress in step with experiment. Almost eclipsed by the epic 2012 achievement of calculating the QED contributions to five loops from 12,672 Feynman diagrams, huge advances in calculating the hadronic-vacuum-polarisation contributions to aμ have been made. A reappraisal of the E821 data using this information suggested at least a 3.5σ discrepancy with the SM. It was this that provided the impetus for Lee Roberts and colleagues to build the improved muon g–2 experiments at Fermilab, the first results from which are described in this issue, and at J-PARC. Full results from the Fermilab experiment alone should reduce the aμ uncertainties by at least another factor of three – down to a level that really challenges what we know about the SM.
Of course, the interpretation of the new results relies on the choice of theory baseline. For example, one could choose, as the Fermilab experiment has, to use the consensus “International Theory Initiative” expectation for aμ. One could also take into account the new results provided by LHCb’s recent RK measurement, which hint that muons might behave differently than electrons. There will inevitably be speculation over the coming months about the right approach. Whatever one’s choice, muon g–2 is a clear demonstration that theory and experiment must progress hand in hand.
Perhaps the most important lesson is the continued cross-fertilisation and impetus to the physics delivered both at CERN and at Fermilab by recent results. The g–2 experiment, an international collaboration between dozens of labs and universities in seven countries, has benefited from students who cut their teeth on LHC experiments. Likewise, students who have worked at the precision frontier at Fermilab are now armed with the expertise of making blinded ppm measurements and are keen to see how they can make new measurements at CERN, for example at the proposed MUonE experiment, or at other muon experiments due to come online this decade.
“It remains to be seen whether or not future refinement of the [SM] will call for the discerning scrutiny of further measurements of even greater precision,” concluded Combley, Farley and Picasso in their 1981 review – a wise comment that is now being addressed.
A fermion’s spin tends to twist to align with a magnetic field – an effect that becomes dramatically macroscopic when electron spins twist together in a ferromagnet. Microscopically, the tiny magnetic moment of a fermion interacts with the external magnetic field through absorption of photons that comprise the field. Quantifying this picture, the Dirac equation predicts the fermion g-factor – the magnetic moment in units of e/2m, the Bohr magneton in the electron’s case – to be precisely two. But virtual lines and loops add an additional 0.1% or so to this value, giving rise to an “anomalous” contribution known as “g–2” to the particle’s magnetic moment, caused by quantum fluctuations. Calculated to tenth order in quantum electrodynamics (QED), and verified experimentally to about two parts in 1010, the electron’s magnetic moment is one of the most precisely known numbers in the physical sciences. While also measured precisely, the magnetic moment of the muon, however, is in tension with the Standard Model.
Tricky comparison
The anomalous magnetic moment of the muon was first measured at CERN in 1959, and prior to 2021, was most recently measured by the E821 experiment at Brookhaven National Laboratory (BNL) 16 years ago. The comparison between theory and data is much trickier than for electrons. Being short-lived, muons are less suited to experiments with Penning traps, whereby stable charged particles are confined using static electric and magnetic fields, and the trapped particles are then cooled to allow precise measurements of their properties. Instead, experiments infer how quickly muon spins precess in a storage ring – a situation similar to the wobbling of a spinning top, where information on the muon’s advancing spin is encoded in the direction of the electron that is emitted when it decays. Theoretical calculations are also more challenging, as hadronic contributions are no longer so heavily suppressed when they emerge as virtual particles from the more massive muon.
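The storage-ring technique can be put in numbers: the spin advances relative to the momentum at the anomalous precession frequency ωa = aμ eB/mμ. The sketch below uses the nominal 1.45 T field of the BNL/Fermilab ring and an approximate value of aμ; the resulting frequency comes out near 230 kHz, the quantity the experiments actually extract from the decay-electron time spectrum.

```python
import math

# Anomalous spin-precession frequency in a storage ring:
# omega_a = a_mu * e * B / m_mu, the rate at which the muon spin advances
# relative to its momentum (electric-field effects cancel at the "magic"
# momentum, so only the magnetic term is kept here).
a_mu = 1.16592e-3            # muon anomaly (approximate)
e = 1.602176634e-19          # elementary charge, C
m_mu = 1.883531627e-28       # muon mass, kg
B = 1.45                     # storage-ring field, T (BNL/Fermilab value)

omega_a = a_mu * e * B / m_mu          # rad/s
f_a = omega_a / (2 * math.pi)          # Hz
print(f"f_a = {f_a / 1e3:.0f} kHz")    # roughly 229 kHz
```

Inverting the same relation – measuring ωa and B very precisely – is what turns a frequency measurement into a ppm-level determination of aμ.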
All told, our knowledge of the anomalous magnetic moment of the muon is currently three orders of magnitude less precise than for electrons. And while everything tallies up, more or less, for the electron, BNL’s longstanding measurement of the magnetic moment of the muon is 3.7σ greater than the Standard Model prediction (see panel “Rising to the moment”). The possibility that the discrepancy could be due to virtual contributions from as-yet-undiscovered particles demands ever more precise theoretical calculations. This need is now more pressing than ever, given the increased precision of the experimental value expected in the next few years from the Muon g–2 collaboration at Fermilab in the US and other experiments such as the Muon g–2/EDM collaboration at J-PARC in Japan. Hotly anticipated results from the first data run at Fermilab’s E989 experiment were released on 7 April. The new result is completely consistent with the BNL value but with a slightly smaller error, leading to a slightly larger discrepancy of 4.2σ with the Standard Model when the measurements are combined (see Fermilab strengthens muon g-2 anomaly).
Hadronic vacuum polarisation
The value of the muon anomaly, aμ, is an important test of the Standard Model because currently it is known very precisely – to roughly 0.5 parts per million (ppm) – in both experiment and theory. QED dominates the value of aμ, but due to the non-perturbative nature of QCD it is the strong interaction that contributes most to the error. The theoretical uncertainty on the anomalous magnetic moment of the muon is currently dominated by so-called hadronic vacuum polarisation (HVP) diagrams. In HVP, a virtual photon briefly explodes into a “hadronic blob”, before being reabsorbed, while the magnetic-field photon is simultaneously absorbed by the muon. While of order α² in QED, HVP is all orders in QCD, making for very difficult calculations.
In the Standard Model, the magnetic moment of the muon is computed order-by-order in powers of α for QED (each virtual photon represents a factor of α), and to all orders in αs for QCD.
At the lowest order in QED, the Dirac term (pictured left) accounts for precisely two Bohr magnetons and arises purely from the muon (μ) and the real external photon (γ) representing the magnetic field.
At higher orders in QED, virtual Standard Model particles, depicted by lines forming loops, contribute to a fractional increase of aμ with respect to that value: the so-called anomalous magnetic moment of the muon. It is defined to be aμ = (g–2)/2, where g is the gyromagnetic ratio of the muon – the number of Bohr magnetons, e/2m, which make up the muon’s magnetic moment. According to the Dirac equation, g = 2, but radiative corrections increase its value.
The biggest contribution is from the Schwinger term (pictured left, O(α)) and higher-order QED diagrams.
aμQED = (116 584 718.931 ± 0.104) × 10–11
Electroweak lines (pictured left) also make a well-defined contribution. These diagrams are suppressed by the heavy masses of the Higgs, W and Z bosons.
aμEW = (153.6 ± 1.0) × 10–11
The biggest QCD contribution is due to hadronic vacuum polarisation (HVP) diagrams. These are computed from leading order (pictured left, O(α²)), with one “hadronic blob” at all orders in αs (shaded), up to next-to-next-to-leading order (NNLO, O(α⁴), with three hadronic blobs) in the HVP.
Hadronic light-by-light scattering (HLbL, pictured left at O(α³) and all orders in αs (shaded)) makes a smaller contribution but with a larger fractional uncertainty.
Neglecting lattice–QCD calculations for the HVP in favour of those based on e+e– data and phenomenology, the total anomalous magnetic moment is given by
aμSM = (116 591 810 ± 43) × 10–11
This is somewhat below the combined value from the E821 experiment at BNL in 2004 and the E989 experiment at Fermilab in 2021.
aμexp = (116 592 061 ± 41) × 10–11
The discrepancy has roughly 4.2σ significance:
aμexp – aμSM = (251 ± 59) × 10–11.
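The quoted significance follows directly from the numbers above: subtracting the central values and adding the experimental and theoretical uncertainties in quadrature (a simple Gaussian estimate, ignoring correlations) reproduces the 4.2σ figure:

```python
import math

# Values in units of 1e-11, as quoted in the text
a_exp, err_exp = 116_592_061, 41   # combined BNL + Fermilab measurement
a_sm,  err_sm  = 116_591_810, 43   # Standard Model (Theory Initiative)

diff = a_exp - a_sm                # 251
err = math.hypot(err_exp, err_sm)  # quadrature sum, ~59
print(diff, round(err), round(diff / err, 1))  # 251 59 4.2
```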
Historically, and into the present, HVP is calculated using a dispersion relation and experimental data for the cross section for e+e–→ hadrons. This idea was born of necessity almost 60 years ago, before QCD was even on the scene, let alone calculable. The key realisation is that the imaginary part of the vacuum polarisation is directly related to the hadronic cross section via the optical theorem of wave-scattering theory; a dispersion relation then relates the imaginary part to the real part. The cross section is determined over a relatively wide range of energies, in both exclusive and inclusive channels. The dominant contribution – about three quarters – comes from the e+e–→ π+π– channel, which peaks at the rho meson mass, 775 MeV. Though the integral converges rapidly with increasing energy, data are needed over a relatively broad region to obtain the necessary precision. Above the τ mass, QCD perturbation theory hones the calculation.
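Schematically, the leading-order dispersive evaluation takes the standard textbook form below, where K(s) is a known, slowly falling QED kernel (roughly ∝ 1/s at large s, which is why the low-energy region around the rho meson dominates and the integral converges rapidly); threshold and normalisation conventions are sketched here, not taken from the article:

```latex
a_\mu^{\mathrm{HVP,\,LO}}
  = \frac{\alpha^{2}}{3\pi^{2}}
    \int_{s_{\mathrm{thr}}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s),
\qquad
R(s) = \frac{\sigma(e^{+}e^{-}\to \mathrm{hadrons})}
            {\sigma(e^{+}e^{-}\to \mu^{+}\mu^{-})}
```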
Several groups have computed the HVP contribution in this way, and recently a consensus value has been produced as part of the worldwide Muon g–2 Theory Initiative. The error stands at about 0.58% and is the dominant part of the theory error. It is worth noting that a significant part of the error arises from a tension between the most precise measurements, by the BaBar and KLOE experiments, around the rho–meson peak. New measurements, including those from experiments at Novosibirsk in Russia and from Japan’s Belle II experiment, may help resolve the inconsistency in the current data and reduce the error by a factor of two or so.
The alternative approach, of calculating the HVP contribution from first principles using lattice QCD, is not yet at the same level of precision, but is getting there. Consistency between the two approaches will be crucial for any claim of new physics.
Lattice QCD
Kenneth Wilson formulated lattice gauge theory in 1974 as a means to rid quantum field theories of their notorious infinities – a process known as regulating the theory – while maintaining exact gauge invariance, but without using perturbation theory. Lattice QCD calculations involve numerically evaluating the extremely high-dimensional path integrals of QCD. Because of confinement, a perturbative treatment including physical hadronic states is not possible, so the complete integral, regulated properly in a discrete, finite volume, is done numerically by Monte Carlo integration.
Lattice QCD has made significant improvements over the last several years, both in methodology and invested computing time. Recently developed methods (which rely on low-lying eigenmodes of the Dirac operator to speed up calculations) have been especially important for muon–anomaly calculations. By allowing state-of-the-art calculations using physical masses, they remove a significant systematic: the so-called chiral extrapolation for the light quarks. The remaining systematic errors arise from the finite volume and non-zero lattice spacing employed in the simulations. These are handled by doing multiple simulations and extrapolating to the infinite-volume and zero-lattice-spacing limits.
The HVP contribution can readily be computed using lattice QCD in Euclidean space with space-like four-momenta in the photon loop, thus yielding the real part of the HVP directly. The dispersive result is currently more precise (see “Off the mark” figure), but further improvements will depend on consistent new e+e– scattering datasets.
Rapid progress in the last few years has resulted in first lattice results with sub-percent uncertainty, closing in on the precision of the dispersive approach. Since these lattice calculations are very involved and still maturing, it will be crucial to monitor the emerging picture once several precise results with different systematic approaches are available. It will be particularly important to aim for statistics-dominated errors to make it more straightforward to quantitatively interpret the resulting agreement with the no-new-physics scenario or the dispersive results. In the shorter term, it will also be crucial to cross-check between different lattice and dispersive results using additional observables, for example based on the vector–vector correlators.
With improved lattice calculations in the pipeline from a number of groups, the tension between lattice QCD and phenomenological calculations may well be resolved before the Fermilab and J-PARC experiments announce their final results. Interestingly, there is a new lattice result with sub-percent precision (BMW 2020) that is in agreement both with the no-new-physics point within 1.3σ, and with the dispersive-data-driven result within 2.1σ. Barring a significant re-evaluation of the phenomenological calculation, however, HVP does not appear to be the source of the discrepancy with experiments.
The next most likely Standard Model process to explain the muon anomaly is hadronic light-by-light scattering. Though it occurs less frequently since it includes an extra virtual photon compared to the HVP contribution, it is much less well known, with comparable uncertainties to HVP.
Hadronic light-by-light scattering
In hadronic light-by-light scattering (HLbL), the magnetic field interacts not with the muon, but with a hadronic “blob”, which is connected to the muon by three virtual photons. (The interaction of the four photons via the hadronic blob gives HLbL its name.) A miscalculation of the HLbL contribution has often been proposed as the source of the apparently anomalous measurement of the muon anomaly by BNL’s E821 collaboration.
Since the so-called Glasgow consensus (the fruit of a 2009 workshop) first established a value more than 10 years ago, significant progress has been made on the analytic computation of the HLbL scattering contribution. In particular, a dispersive analysis of the most important hadronic channels has been carried out, including the leading pion–pole, sub-leading pion loop and rescattering diagrams including heavier pseudoscalars. These calculations are analogous in spirit to the dispersive HVP calculations, but are more complicated, and the experimental measurements are more difficult because form factors with one or two virtual photons are required.
The project to calculate the HLbL contribution using lattice QCD began more than 10 years ago, and many improvements to the method have been made to reduce both statistical and systematic errors since then. Last year we published, with colleagues Norman Christ, Taku Izubuchi and Masashi Hayakawa, the first ever lattice–QCD calculation of the HLbL contribution with all errors controlled, finding aμHLbL, lattice = (78.7 ± 30.6 (stat) ± 17.7 (sys)) × 10–11. The calculation was not easy: it took four years and a billion core-hours on the Mira supercomputer at the Argonne Leadership Computing Facility.
Our lattice HLbL calculations are quite consistent with the analytic and data-driven result, which is approximately a factor of two more precise. Combining the results leads to aμHLbL = (90 ± 17) × 10–11, which means the very difficult HLbL contribution cannot explain the Standard Model discrepancy with experiment. To make such a strong conclusion, however, it is necessary to have consistent results from at least two completely different methods of calculating this challenging non-perturbative quantity.
New physics?
If current theory calculations of the muon anomaly hold up, and the new experiments reduce its uncertainty by the hoped-for factor of four, then a new-physics explanation will become impossible to ignore. The idea would be to add particles and interactions that have not yet been observed but may soon be discovered at the LHC or in future experiments. New particles would be expected to contribute to the anomaly through Feynman diagrams similar to the Standard Model topologies (see “Rising to the moment” panel).
Calculations of the anomalous magnetic moment of the muon are not finished
The most commonly considered new-physics explanation is supersymmetry, but the increasingly stringent lower limits placed on the masses of super-partners by the LHC experiments make it increasingly difficult to explain the muon anomaly. Other theories could do the job too. One popular idea that could also explain persistent anomalies in the b-quark sector is heavy scalar leptoquarks, which mediate a new interaction allowing leptons and quarks to change into each other. Another option involves scenarios whereby the Standard Model Higgs boson is accompanied by a heavier Higgs-like boson.
The calculations of the anomalous magnetic moment of the muon are not finished. As a systematically improvable method, we expect more precise lattice determinations of the hadronic contributions in the near future. Increasingly powerful algorithms and hardware resources will further improve precision on the lattice side, and new experimental measurements and analysis methods will do the same for dispersive studies of the HVP and HLbL contributions.
To confidently discover new physics requires that these two independent approaches to the Standard Model value agree. With the first new results on the experimental value of the muon anomaly in almost two decades showing perfect agreement with the old value, we anxiously await more precise measurements in the near future. Our hope is that the clash of theory and experiment will be the beginning of an exciting new chapter of particle physics, heralding new discoveries at current and future particle colliders.
Hotly anticipated results from the first run of the muon g-2 experiment at Fermilab were announced today, increasing the tension between measurements and theoretical calculations. The last time this ultra-precise measurement was performed, in a sequence of results at Brookhaven National Laboratory in the late 1990s and early 2000s, it disagreed with the Standard Model (SM) by 3.7σ. After almost eight years of work rebuilding the Brookhaven experiment at Fermilab and analysing its first data, the muon’s anomalous magnetic moment has been measured to be 116 592 040(54) × 10–11. The result is in agreement with the Brookhaven measurement and is 3.3σ greater than the SM prediction: 116 591 810(43) × 10–11. Combined with the Brookhaven result, the world-average value for the anomalous magnetic moment of the muon is 116 592 061(41) × 10–11, representing a 4.2σ departure from the SM.
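The world average quoted above can be reproduced with a standard inverse-variance-weighted combination of the two measurements, treating their errors as uncorrelated. Only the Fermilab and combined numbers appear in the text; the BNL E821 input value of 116 592 089(63) × 10–11 is an assumption here, taken from that experiment’s final report:

```python
# Inverse-variance weighted average; values in units of 1e-11.
measurements = [(116_592_089, 63),   # BNL E821 final value (assumed input)
                (116_592_040, 54)]   # Fermilab E989, Run 1

weights = [1 / err**2 for _, err in measurements]
mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
err = sum(weights) ** -0.5
print(round(mean), round(err))  # 116592061 41
```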
“Today is an extraordinary day, long awaited not only by us but by the whole international physics community,” says Graziano Venanzoni of the INFN, who is co-spokesperson of the Fermilab muon g-2 collaboration. “A large amount of credit goes to our young researchers who, with their talent, ideas and enthusiasm, have allowed us to achieve this incredible result.”
Today is an extraordinary day, long awaited not only by us but by the whole international physics community
Graziano Venanzoni
The Fermilab result was unblinded during a Zoom meeting on 25 February in the presence of around 200 collaborators from around the world. “We were all very excited to finally know our result and the meeting was very emotional,” says Venanzoni. The analysis took almost three years from data taking to the release of the result and the collaboration decided to unblind only when all the steps of the analysis were completed and there were no outstanding questions. Venanzoni adds that no further analysis was completed after the unblinding and the results are unchanged.
The previous Brookhaven measurement left physicists pondering whether the presence of unknown particles in loops could be affecting the muon’s behaviour. It was clear that further measurements were needed, but it turned out to be much cheaper to move the apparatus to Fermilab than to build a new, more precise experiment at Brookhaven. So in the summer of 2013, the experiment’s 14-m diameter, 1.45 T superconducting magnet was transported from Long Island to the suburbs of Chicago. The Fermilab team reassembled the magnet and spent a year “shimming” its field, making it three times more uniform than the field it produced at Brookhaven. Along with a new beamline to deliver a purer muon beam, Fermilab’s muon g-2 reincarnation required entirely new instrumentation, along with new detectors and a control room.
When a muon travels through the strong external magnetic field of a storage ring, the direction of its magnetic moment precesses at a rate that depends on the field strength and the muon’s g-factor. The Dirac equation predicts that all fermions have a g-factor equal to two. But higher-order loops add an “anomalous” moment, aμ = (g-2)/2, which can be calculated extremely precisely. At Fermilab, muons with an energy of about 3.1 GeV are vertically focused in the storage ring via quadrupoles, and their precession frequency is determined from decays to electrons using 24 electromagnetic calorimeters located along the ring’s inner circumference. The intense polarised muon beam suppresses the pion contamination that challenged the Brookhaven measurement, while new calibration systems and simulations allow better control of systematic uncertainties.
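The anomalous precession frequency being measured can be estimated from the numbers in the text via ωa = aμ eB/mμ. A back-of-the-envelope sketch with CODATA constants and the 1.45 T field quoted above lands near the ~229 kHz characteristic of the experiment (the real analysis also corrects for the electric fields of the focusing quadrupoles and for beam dynamics):

```python
import math

e    = 1.602176634e-19   # elementary charge, C (CODATA)
m_mu = 1.883531627e-28   # muon mass, kg (CODATA)
B    = 1.45              # storage-ring field, T (from the text)
a_mu = 116_592_061e-11   # anomalous magnetic moment, world average

omega_a = a_mu * e * B / m_mu   # anomalous precession frequency, rad/s
f_a = omega_a / (2 * math.pi)   # in Hz
print(round(f_a / 1e3))         # ~229 kHz
```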
It is so gratifying to finally be resolving this mystery
Chris Polly
The Fermilab muon g-2 collaboration took its first dataset in 2018, with over eight billion muon decays resulting in an overall uncertainty approximately 15% better than Brookhaven’s. Data analysis on the second and third runs is already under way, while a fourth run is ongoing and a fifth is planned. The collaboration is targeting a final precision of around 0.14 ppm – four times better than the previous measurement.
“After the 20 years that have passed since the Brookhaven experiment ended, it is so gratifying to finally be resolving this mystery,” said Fermilab’s Chris Polly, a co-spokesperson for the current experiment and a graduate student on the Brookhaven experiment. “So far we have analysed less than 6% of the data that the experiment will eventually collect. Although these first results are telling us that there is an intriguing difference with the Standard Model, we will learn much more in the next couple of years.”
Theory baseline
Developments in the theory community are equally vital. The Fermilab muon g-2 collaboration takes as its theory baseline the value for aμ obtained last year by the Muon g-2 Theory Initiative. Uncertainties in the calculation are dominated by hadronic contributions, in particular a term called the hadronic vacuum polarization (HVP). The Theory Initiative incorporates the HVP value obtained by well-established “dispersive methods”, which combine fundamental properties of quantum field theory with experimental measurements of low-energy hadronic processes. An alternative approach gaining traction is to calculate the HVP contribution using lattice QCD. In a paper published in Nature today, one group reports lattice calculations of HVP which, if included in the theory result, would significantly reduce the discrepancy between the experimental and theoretical values for aμ. The result is in 2σ tension with the value obtained from the dispersive approach, and is currently dominated by systematic uncertainties stemming from approximations used in the lattice calculations, say Muon g-2 Theory Initiative members.
“This being the first lattice result at sub-percent precision, it is premature to draw firm conclusions from this comparison,” reads a statement from the Muon g-2 Theory Initiative steering committee. “Indeed, given the complexity of the computations, independent results from different lattice groups with commensurate uncertainties are needed to test and check the lattice calculations against each other. Being entirely based on Standard Model theory, once the lattice results are well tested and precise enough, they will play an important role in understanding how new physics enters into the discrepancy.”
Despite the strong indirect evidence for the existence of dark matter, a plethora of direct searches have not resulted in a positive detection. The exception is the famous set of results from the DAMA/NaI experiment at the Gran Sasso underground laboratory in Italy, first reported in the late 1990s, which show a modulating signal compatible with the Earth moving through a region containing weakly interacting massive particles (WIMPs). These results were backed up more recently by measurements from the follow-up DAMA/LIBRA detector. Combining the data in 2018, the evidence reported for a dark-matter signal is as high as 13σ. Now, the Annual modulation with NaI Scintillators (ANAIS) collaboration, which aims to directly reproduce the DAMA results using the same detector concept, has published the results from its first three years of operation. The results, which were presented today at the Rencontres de Moriond, show a clear contradiction with DAMA, indicating that we are still no closer to finding dark matter.
The DAMA results are based on searches for an annual modulation in the interaction rate of WIMPs in a detector comprising NaI crystals. First theoretically proposed in 1986 by Andrzej Drukier, Katherine Freese and David Spergel, this modulation results from the changing velocity of the Earth with respect to the dark-matter halo of the galaxy. On 2 June, the velocities of the Earth and the Sun are aligned with respect to the galaxy, whereas half a year later they are oppositely aligned, resulting in a lower expected WIMP interaction rate in a detector on Earth. Although this method has advantages compared to more direct detection methods, it requires that other potential sources of such a seasonal modulation be ruled out. Despite the significant modulation with the correct phase observed by DAMA, its results were not immediately accepted as a clear signal of dark matter due to the remaining possibility of instrumental effects, seasonal background modulation or artifacts from the analysis.
Over the years the significance of the DAMA results has continued to increase while other dark-matter searches, in particular with the XENON1T and LUX experiments, found no evidence of WIMPs capable of explaining the DAMA results. The fact that only the final analysis products from DAMA have been made public has also hampered attempts to prove or disprove alternative origins of the modulation. To overcome this, the ANAIS collaboration set out to reproduce the data with an independent detector intentionally similar to DAMA, consisting of NaI(Tl) scintillators read out by photomultipliers, placed in the Canfranc Underground Laboratory deep beneath the Pyrenees in northern Spain. This approach allows ANAIS to rule out instrument-induced effects while producing data in a controlled way and studying it in detail with different analysis procedures.
The ANAIS results agree with the first results published by the COSINE-100 collaboration
The first three years of ANAIS data have now been unblinded, and the results were posted on arXiv on 1 March. None of the analysis methods used show any signs of a modulation, with a statistical analysis ruling out the DAMA results at 99% confidence. The results therefore narrow down the possible causes of the modulation observed by DAMA to either differences in the detector compared to ANAIS, or in the analysis method. One specific issue raised by the ANAIS collaboration regards the background-subtraction method. In the DAMA results the mean background rate for each year is subtracted from the raw data for that full year. If the background is not constant during the year, however, this subtraction produces an artificial saw-tooth shape which, given the limited statistics, can be fitted with a sinusoid. This effect was already pointed out in a publication by a group from INFN in 2020, which showed how a slowly increasing background is capable of producing the exact modulation observed by DAMA. The ANAIS collaboration describes their background in detail, shows that it is indeed not constant, and provides suggestions for a more robust handling of the background.
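The background-subtraction artifact is easy to demonstrate numerically: subtracting each year's mean from a steadily rising background leaves a yearly saw-tooth, and the best-fit one-year sinusoid to a saw-tooth of slope k has amplitude k/π per year. This is a toy illustration of the effect, not the ANAIS analysis itself:

```python
import math

N = 1000                                       # samples per year
t = [i / N for i in range(3 * N)]              # three years of "data", in years
resid = [x for x in t]                         # rising background, slope 1 per year

for year in range(3):                          # DAMA-style per-year mean subtraction
    m = sum(resid[year * N:(year + 1) * N]) / N
    for i in range(year * N, (year + 1) * N):
        resid[i] -= m                          # leaves a saw-tooth of period 1 year

# Least-squares amplitude of a 1-year-period sinusoid in the residual
A = 2 * sum(r * math.sin(2 * math.pi * x) for r, x in zip(resid, t)) / len(t)
B = 2 * sum(r * math.cos(2 * math.pi * x) for r, x in zip(resid, t)) / len(t)
amp = math.hypot(A, B)
print(round(amp, 3))                           # 0.318, i.e. ~ 1/pi
```

A flat background would give amplitude zero; any slow drift leaks into a spurious annual modulation.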
The ANAIS results also agree with the first results published by the COSINE-100 collaboration in 2019 which, again using a NaI-based detector, found no evidence of a yearly modulation. Thanks to the continued experimental efforts of these two groups, and with the ANAIS collaboration planning to make their data public to allow independent analyses, the more than 20-year-old DAMA anomaly looks likely to be settled in the next few years.
The principle that the charged leptons have identical electroweak interaction strengths is a distinctive feature of the Standard Model (SM). However, this lepton-flavour universality (LFU) is an accidental symmetry in the SM, which may not hold in theories beyond the SM. The LHCb collaboration has used a number of rare decays mediated by flavour-changing neutral currents, where the SM contribution is suppressed, to test for deviations from LFU. During the past few years, these and other measurements, together with results from B-factories, hint at possible departures from the SM.
In a new measurement of an LFU-sensitive parameter “RK” with increased precision and statistical power, reported today at the Rencontres de Moriond, LHCb has strengthened the significance of the flavour anomalies. The value RK probes the ratio of B-meson decays to muons and electrons: RK = BR(B+→K+μ+μ–)/BR(B+→K+e+e–). Testing LFU in such b→sℓ+ℓ– transitions has the advantage that not only are SM contributions suppressed, but the theoretical predictions are very precise. Therefore, any significant deviation of RK from unity would imply physics beyond the SM.
The experimental challenge lies in the fact that, while electrons and muons interact via the electroweak force in the same way, the small electron mass means it interacts with detector material much more strongly than muons do. For example, electrons radiate a significant number of bremsstrahlung photons when traversing the LHCb detector, which degrades reconstruction efficiency and signal resolution compared to muons. The key to controlling this effect is to use the decays J/ψ→e+e– and J/ψ→μ+μ–, which are known to have the same decay probability and can be used to calibrate and test electron reconstruction efficiencies. High-precision tests with the J/ψ are compatible with LFU, which provides a powerful cross-check on the experimental analysis.
Previous LHCb measurements of RK and RK* (which probes B0→K*ℓ+ℓ– decays), in 2019 and 2017 respectively, provided hints of deviations from unity. The latest analysis of RK, which uses the full dataset collected by the experiment in Run 1 and Run 2 of the LHC, represents a substantial improvement in precision over the previous measurement thanks to a doubling of the dataset (see figure). The measured value, RK = 0.846 +0.042/–0.039 (stat.) +0.013/–0.012 (syst.), lies about three standard deviations below the SM prediction. This is the first time that a departure from LFU at this level has been seen in any individual B-meson decay.
Although it is too early to conclude anything definitive at this stage, this deviation is consistent with a pattern of anomalies which have manifested themselves in b→s ℓ+ℓ– and similar processes over the course of the past decade. In particular, the strengthening RK anomaly may be considered alongside hints from other measurements of these transitions, including angular asymmetries and decay rates.
The LHCb experiment is well placed to clarify the potential existence of new-physics effects in these decays. Updates on a suite of b→s ℓ+ℓ– related measurements with the full Run 1 and Run 2 dataset are underway. A major upgrade to the detector during the ongoing second long shutdown of the LHC will offer a step change in precision in Run 3 and beyond.
The TOTEM collaboration at the LHC, together with the DØ collaboration at the former Tevatron collider at Fermilab, has announced the discovery of the odderon – an elusive three-gluon state predicted almost 50 years ago. The result was presented in a “discovery talk” on Friday 5 March during the LHC Forward Physics meeting at CERN, and follows the joint publication of a CERN/Fermilab preprint by TOTEM and DØ reporting the observation in December 2020.
This result probes the deepest features of quantum chromodynamics
Simone Giani
“This result probes the deepest features of quantum chromodynamics, notably that gluons interact between themselves and that an odd number of gluons are able to be ‘colourless’, thus shielding the strong interaction,” says TOTEM spokesperson Simone Giani of CERN. “A notable feature of this work is that the results are produced by joining the LHC and Tevatron data at different energies.”
States comprising two, three or more gluons are usually called “glueballs”, and are peculiar objects made only of the carriers of the strong force. The advent of quantum chromodynamics (QCD) led theorists to predict the existence of the odderon in 1973. Proving its existence has been a major experimental challenge, however, requiring detailed measurements of protons as they glance off one another in high-energy collisions.
While most high-energy collisions cause protons to break into their constituent quarks and gluons, roughly 25% are elastic collisions where the protons remain intact but emerge on slightly different paths (deviating by around a millimetre over a distance of 200 m at the LHC). TOTEM measures these small deviations in proton–proton (pp) scattering using two detectors located 220 m on either side of the CMS experiment, while DØ employed a similar setup at the Tevatron proton–antiproton (pp̄) collider.
Pomerons and odderons
At low energies, differences in pp vs pp̄ scattering are due to the exchange of different virtual mesons. At multi-TeV energies, on the other hand, proton interactions are expected to be mediated purely by gluons. In particular, elastic scattering at low-momentum transfer and high energies has long been explained by the exchange of a pomeron – a colour-neutral virtual glueball made up of an even number of gluons.
However, in 2018 TOTEM reported measurements at high energies that could not easily be explained by this traditional picture. Instead, a further QCD object seemed to be at play, supporting models in which a three-gluon compound, or one containing higher odd numbers of gluons, was being exchanged. The discrepancy came to light via measurements of a parameter called ρ, which represents the ratio of the real and imaginary parts of the forward elastic-scattering amplitude when there is minimal gluon exchange between the colliding protons and thus almost no deviation in their trajectories. The results were sufficient to claim evidence for the odderon, although not yet its definitive observation.
The new work is based on a model-independent analysis of data at medium-range momentum transfer. The TOTEM and DØ teams compared LHC pp data (recorded at collision energies of 2.76, 7, 8 and 13 TeV and extrapolated to 1.96 TeV) with Tevatron pp̄ data measured at 1.96 TeV. The odderon would be expected to contribute with different signs to pp and pp̄ scattering. Supporting this picture, the two data sets disagree at the 3.4σ level, providing evidence for the t-channel exchange of a colourless, C-odd gluonic compound.
“When combined with the ρ and total cross-section result at 13 TeV, the significance is in the range 5.2–5.7σ and thus constitutes the first experimental observation of the odderon,” said Christophe Royon of the University of Kansas, who presented the results on behalf of DØ and TOTEM last week. “This is a major discovery by CERN/Fermilab.”
In addition to the new TOTEM-DØ model-independent study, several theoretical papers based on data from the ISR, SPS, Tevatron and LHC, and model-dependent inputs, provide additional evidence supporting the conclusion that the odderon exists.