Beauty baryons are a subject of great interest at the LHC, offering unique insights into the nature of the strong interaction and the mechanisms by which hadrons are formed. While the ground states Λb0, Σb±, Ξb–, Ξb0 and Ωb– were observed at the Tevatron at Fermilab and the Spp̄S collider at CERN, the LHC’s higher energy and orders-of-magnitude larger integrated luminosity have allowed the discovery of more than a dozen excited beauty baryon states among the 59 new hadrons observed at the LHC so far (see LHCb observes four new tetraquarks).
Many hadrons with one c or b quark are quite similar: interchanging heavy-quark flavours does not significantly change the physics predicted by effective models that assume “heavy-quark symmetry”. The well-established charm baryons and their excitations therefore provide excellent input for theories modelling the less well understood spectrum of beauty baryons. A number of the lightest excited b baryons, such as the Λb(5912)0, Λb(5920)0 and several excited Ξb and Ωb– states, have been observed and are consistent with their charm partners. By contrast, heavier excitations, such as the Λb(6072)0 and the Ξb(6227) isodoublet (particles that differ only by an up or down quark), cannot yet be readily associated with charm partners.
New particles
The first new particle observed by the CMS experiment, in 2012, was the beauty-strange baryon Ξb(5945)0 (CERN Courier June 2012 p6). It is consistent with being the beauty partner of the Ξc(2645)+ with spin-parity 3/2+, while the Ξb(5955)– and Ξb(5935)– states observed by LHCb are its isospin partner and the beauty partner of the Ξc′0, respectively. The charm sector also suggests the existence of prominent heavier isodoublets, called Ξb**: the lightest orbital Ξb excitations, with orbital angular momentum between a light diquark (a pairing of an s quark with either a d or a u quark) and a heavy b quark. The isodoublet with spin-parity 1/2– decays into Ξb′ π± and the one with 3/2– into Ξb* π±.
The CMS collaboration has now observed such a baryon, Ξb(6100)–, via the decay sequence Ξb(6100)–→Ξb(5945)0π–→Ξb– π+ π–. The new state’s measured mass is 6100.3 ± 0.6 MeV, and the upper limit on its natural width is 1.9 MeV at 95% confidence level. The Ξb– ground state was reconstructed in two channels: J/ψ Ξ– and J/ψ Λ K–. The latter channel also includes partially reconstructed J/ψ Σ0 K– (where the photon from the Σ0→Λ γ decay is too soft to be reconstructed).
If the Ξb(6100)– baryon were only 13 MeV heavier, it would be above the Λb0 K– mass threshold
The observation of this baryon and the measurement of its properties are useful for distinguishing between different theoretical models predicting the excited beauty baryon states. It is curious to note that if the Ξb(6100)– baryon were only 13 MeV heavier, a tiny 0.2% change, it would be above the Λb0 K– mass threshold and could decay to this final state. The Ξb(6100)– might also shed light on the nature of previous discoveries: if it is the 3/2– member of the lightest orbital excitation isodoublet, then the Ξb(6227) isodoublet recently found by the LHCb collaboration could be the 3/2– orbital excitation of Ξb′ or Ξb* baryons.
Neutral pion (π0) and eta-meson (η) production cross sections at midrapidity have recently been measured up to unprecedentedly high transverse momenta (pT) in proton–proton (pp) and proton–lead (p–Pb) collisions at √sNN = 8 and 8.16 TeV, respectively. The mesons were reconstructed in the two-photon decay channel for pT from 0.5 and 1 GeV up to 200 and 50 GeV for π0 and η mesons, respectively. The high momentum reach for the π0 measurement was achieved by identifying two-photon showers reconstructed as a single energy deposit in the ALICE electromagnetic calorimeter.
In pp collisions, measurements of identified hadron spectra are used to constrain perturbative predictions from quantum chromodynamics (QCD). At large momentum transfer (Q2), perturbative QCD (pQCD) calculations rely on the factorisation of computable short-distance parton scattering processes, such as quark–quark, quark–gluon and gluon–gluon scattering, from long-distance properties of QCD that require experimental input. These properties are modelled by parton distribution functions (PDFs), which describe the fractional-momentum (x) distributions of quarks and gluons within the proton, and fragmentation functions, which describe the fractional-momentum distributions of hadrons of a given species produced in the fragmentation of quarks or gluons.
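Schematically (a leading-order sketch; the precise convolution structure and scale choices differ between calculations), the factorised cross section for inclusive hadron production combines these ingredients as

```latex
\frac{\mathrm{d}\sigma^{pp\to hX}}{\mathrm{d}p_{\mathrm{T}}}
  \;\simeq\; \sum_{a,b,c}
  f_a(x_a,Q^2)\otimes f_b(x_b,Q^2)\otimes
  \frac{\mathrm{d}\hat{\sigma}^{ab\to cX}}{\mathrm{d}p_{\mathrm{T}}}\otimes
  D_{c\to h}(z,Q^2),
```

where the f are the PDFs, dσ̂ is the short-distance partonic cross section and D is the fragmentation function of parton c into hadron h.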
In p–Pb collisions, nuclear effects are expected to significantly modify particle production compared to pp collisions, in particular at small parton fractional momentum x. Modifications at low pT (~1 GeV), usually attributed to nuclear shadowing (CERN Courier March/April 2021 p19), can be parameterised by nuclear parton distribution functions (nPDFs). However, since high parton densities are reached at the LHC, the Colour Glass Condensate (CGC) framework, which predicts strong particle suppression due to saturation of the parton phase space in nuclei, is also applicable at low pT (x values as small as ~5 × 10–4). Above momenta of about 10 GeV/c, measurements in p–Pb collisions can also be sensitive to the energy loss of the outgoing partons in nuclear matter.
The nuclear modification factor (RpPb), shown in the lower panel of the figure, was measured as the ratio of the cross sections in p–Pb and pp collisions normalised by the atomic mass number. Below 10 GeV, RpPb is found to be smaller than unity, while above 10 GeV it is consistent with unity. The measurement is described by calculations over the full transverse-momentum range and provides further constraints on the nPDF parameterisations for pT below about 5 GeV. The direct comparison of the neutral-pion cross section in pp collisions at 8 TeV with pQCD calculations, shown in the upper panel of the figure, reveals differences in the low to intermediate pT range, which, however, cancel in RpPb, since similar differences are also present for the p–Pb cross section. High-precision measurements using the large dataset from pp collisions at 13 TeV are ongoing and will provide further constraints on pQCD calculations.
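For reference, the quantity plotted corresponds to the standard minimum-bias definition (with A = 208 for lead):

```latex
R_{p\mathrm{Pb}}(p_{\mathrm{T}}) \;=\;
  \frac{1}{A}\,
  \frac{\mathrm{d}\sigma_{p\mathrm{Pb}}/\mathrm{d}p_{\mathrm{T}}}
       {\mathrm{d}\sigma_{pp}/\mathrm{d}p_{\mathrm{T}}},
```

so that RpPb = 1 corresponds to p–Pb production behaving as an incoherent superposition of nucleon–nucleon collisions.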
The XIX International Workshop on Neutrino Telescopes (NeuTel) attracted 1000 physicists online from 18 to 26 February, under the organisation of INFN Sezione di Padova and the Department of Physics and Astronomy of the University of Padova.
The opening session featured presentations by Sheldon Lee Glashow, on the past and future of neutrino science, Carlo Rubbia, on searches for neutrino anomalies, and Barry Barish, on the present and future of gravitational-wave detection. This session was a propitious moment for IceCube principal investigator Francis Halzen to give a “heads-up” on the first observation, in the South-Pole detector, of a so-called Glashow resonance – the interaction of an electron antineutrino with an atomic electron to produce a real W boson, as the eponymous theorist predicted back in 1960. According to Glashow’s calculations, the energy at which the resonance occurs depends on the mass of the W boson, which was discovered in 1983 by Rubbia and his team.
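The resonance condition follows from simple kinematics: the centre-of-mass energy of the incoming antineutrino and the essentially static atomic electron must equal the W mass, giving

```latex
E_{\bar{\nu}_e}^{\mathrm{res}} \;\simeq\; \frac{M_W^2}{2m_e}
  \;\approx\; \frac{(80.4\ \mathrm{GeV})^2}{2\times 0.511\ \mathrm{MeV}}
  \;\approx\; 6.3\ \mathrm{PeV},
```

an energy far beyond the reach of accelerators, and hence accessible only to cubic-kilometre-scale detectors exposed to the astrophysical neutrino flux.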
The first edition of NeuTel saw the birth of the idea of instrumenting a large volume of Antarctic ice
The first edition of NeuTel saw the birth of the idea of instrumenting a large volume of Antarctic ice to capture high-energy neutrinos – a “Deo volente” (God willing) detector, as Halzen and collaborators then dubbed it. Thirty-three years later, as the detection of a Glashow resonance demonstrates, it is possible to precisely calibrate the absolute energy scale of these gigantic instruments for cosmic particles, and we have achieved several independent proofs of the existence of high-energy cosmic neutrinos, including first confirmations by ANTARES and Baikal-GVD.
Astrophysical models describing the connections between cosmic neutrinos, photons and cosmic rays were discussed in depth, with special emphasis on blazars, starburst galaxies and tidal-disruption events. Perspectives for future global multi-messenger observations and campaigns, including gravitational waves and networks of neutrino instruments over a broad range of energies, were illustrated, anticipating core-collapse supernovae as the most promising sources. The future of astroparticle physics relies upon very large infrastructures and collaborative efforts on a planetary scale. Next-generation neutrino telescopes might follow different strategic developments. Extremely large volumes, equipped with cosmic-ray-background veto techniques and complementary radio-sensitive installations, might be the key to achieving high statistics and high-precision measurements over a large energy range, given limited sky coverage. Alternatively, a network of intermediate-scale installations, like KM3NeT, distributed over the planet and based on existing or future infrastructures, might be better suited for population studies of transient phenomena. Efforts are currently being undertaken along both paths, with a newborn project, P-ONE, exploiting existing deep-underwater Canadian infrastructures for science to operate strings of photomultipliers.
T2K and NOvA did not update last summer’s leptonic-CP-violation results. The tension between their measurements produces counter-intuitive fit values when they are combined, as discussed by Antonio Marrone of the University of Bari. The most striking example is the neutrino mass hierarchy: both experiments individually favour a normal hierarchy, but their combination, owing to a tension in the value of the CP phase, favours an inverted hierarchy.
The founder of the Borexino experiment, Gianpaolo Bellini, discussed the results of the experiment together with the latest exciting measurements of the CNO cycle in the Sun. DUNE, Hyper-K, and JUNO presented progress towards the realisation of these leading projects, and speakers discussed their potential in many aspects of new-physics searches, astrophysics investigations and neutrino–oscillation sensitivities. The latest results of the reactor–neutrino experiment Neutrino-4, which about one year ago claimed 3.2σ evidence for an oscillation anomaly that could be induced by sterile neutrinos, were discussed in a dedicated session. Both ICARUS and KATRIN presented their sensitivities to this signal in two completely different setups.
Marc Kamionkowski (Johns Hopkins University) and Silvia Galli (Institut d’Astrophysique de Paris) both provided an update on the “Hubble tension”: an approximately 4σ difference in the Hubble constant when determined from angular temperature fluctuations in the cosmic microwave background (probing the expansion rate when the universe was approximately 380,000 years old) and from the recession velocities of supernovae (which provide its current value). This Hubble tension could hint at new physics modifying the thermal history of our universe, such as massive neutrinos that influence the early-time measurement of the Hubble parameter.
Alex Chao, one of the leading practitioners in the field, has written an introductory textbook on accelerator physics. It is a lucid and insightful presentation of the principles behind the workings of modern accelerators, touching on a multitude of aspects, from elegant mathematical concepts and fundamental electromagnetism to charged-particle optics and the stability of charged particle beams. At the same time, numerous practical examples illustrate key concepts employed in the most advanced machines currently in operation, from high-energy colliders to free-electron lasers.
The author is careful to keep the text rigorous, yet not to overload it with formal derivations, and exhibits a keen sense for finding simple, convincing arguments to introduce the basic physics. A large number of homework problems (most of them with solutions) facilitate the stated aim to stimulate thinking. The variety of these is the fruit of extensive teaching experience. The book assumes only a basic understanding of special relativity and electromagnetism, while readers with advanced language skills will benefit from occasional remarks in Chinese, mainly philosophical in nature (translated in most cases). The present reviewer could not help wondering about the missed punchlines.
The discussion on “symplecticity” and Liouville’s theorem lets physics ideas stand out against the background of mathematics
Beginners and advanced students alike will find pleasure in striking derivations of basic properties of simple physical systems by dimensional analysis. Students will also find the presentation on the use of phase-space (coordinate-momentum space) concepts in classical mechanics capable of clearing the fog in their heads. In particular, an insightful presentation of transverse and longitudinal phase-space manipulation techniques provides modern-day examples of advanced designs. Furthermore, an important discussion on “symplecticity” and Liouville’s theorem – ideas that yield powerful constraints on the evolution of dynamical systems – lets physics ideas stand out against the background of formal mathematics. The discussion should help students avoid imagining typical unphysical ideas such as beams focused to infinitesimally small dimensions: the infamous “death rays” first dreamt up in the 1920s and 1930s. The treatment of the stability criteria for linear and non-linear systems, in the latter case introducing the notion of dynamical aperture (the stable region of phase space in a circular accelerator), serves as a concrete illustration of these deep and beautiful concepts of classical mechanics.
The physics of synchrotron radiation and its detailed effects on beam dynamics of charged-particle beams provide the essentials for understanding the properties of lepton and future very-high-energy hadron colliders. Lectures on Accelerator Physics also describes the necessary fundamentals of accelerator-based synchrotron light sources, reaching as far as the physics principles of free-electron lasers and diffraction-limited storage rings.
A chapter on collective instability introduces some of the most important effects related to the stability of beams as multi-particle systems. A number of essential effects, including head–tail instability and the Landau damping mechanism, which play a crucial role in the operation of present and future particle accelerators and colliders, are explained with great elegance. The beginner, armed with the insights gained from these lectures, is well advised to turn to Chao’s classic 1993 text Physics of Collective Beam Instabilities in High Energy Accelerators for a more in-depth treatment of these phenomena.
This book is a veritable “All you wanted to know about accelerator physics but were afraid to ask”. It is a compilation of ideas, and can be used as a less dry companion to yet another classic compilation, in this case of formulas: the Handbook of Accelerator Physics and Engineering, edited by Chao and Maury Tigner.
The ATLAS, CMS and LHCb collaborations perform precise measurements of Standard Model (SM) processes and direct searches for physics beyond the Standard Model (BSM) in a vast variety of channels. Despite the multitude of BSM scenarios tested this way by the experiments, it still constitutes only a small subset of the possible theories and parameter combinations to which the experiments are sensitive. The (re)interpretation of the LHC results in order to fully understand their implications for new physics has become a very active field, with close theory–experiment interaction and with new computational tools and related infrastructure being developed.
From 15 to 19 February, almost 300 theorists and experimental physicists gathered for a week-long online workshop to discuss the latest developments. The topics covered ranged from advances in public software packages for reinterpretation to the provision of detailed analysis information by the experiments, from phenomenological studies to global fits, and from long-term preservation to public data.
Open likelihoods
One of the leading questions throughout the workshop was that of public likelihoods. The statistical model of an experimental analysis provides its complete mathematical description; it is essential information for determining the compatibility of the observations with theoretical predictions. In his keynote talk “Open science needs open likelihoods’’, Harrison Prosper (Florida State University) explained why it is in our scientific interest to make the publication of full likelihoods routine and straightforward. The ATLAS collaboration has recently made an important step in this direction by releasing full likelihoods in a JSON format, which provides background estimates, changes under systematic variations, and observed data counts at the same fidelity as used in the experiment, as presented by Eric Schanet (LMU Munich). Matthew Feickert (University of Illinois) and colleagues gave a detailed tutorial on how to use these likelihoods with the pyhf python package. Two public reinterpretation tools, MadAnalysis5 presented by Jack Araz (IPPP Durham) and SModelS presented by Andre Lessa (UFABC Santo Andre) can already make use of pyhf and JSON likelihoods, and others are to follow. An alternative approach to the plain-text JSON serialisation is to encode the experimental likelihood functions in deep neural networks, as discussed by Andrea Coccaro (INFN Genova) who presented the DNNLikelihood framework. Several more contributions from CMS, LHCb and from theorists addressed the question of how to present and use likelihood information, and this will certainly stay an active topic at future workshops.
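As an illustration of the workflow these JSON likelihoods enable, the minimal sketch below (the file name is a placeholder; a published workspace can be downloaded, for example, from an analysis’s HEPData record) loads a workspace with pyhf and performs a hypothesis test for a signal strength of one:

```python
import json

import pyhf

# Load a published JSON workspace (file name is a placeholder)
with open("analysis_workspace.json") as f:
    spec = json.load(f)

workspace = pyhf.Workspace(spec)
model = workspace.model()      # full statistical model, including systematic variations
data = workspace.data(model)   # observed counts plus auxiliary measurements

# Asymptotic CLs hypothesis test for signal strength mu = 1
cls_obs, cls_exp = pyhf.infer.hypotest(
    1.0, data, model, test_stat="qtilde", return_expected=True
)
print(f"Observed CLs = {float(cls_obs):.3f}, expected CLs = {float(cls_exp):.3f}")
```

The same workspace can be patched with a new signal model, which is essentially what reinterpretation tools do when recasting an analysis.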
The question of making research data findable, accessible, interoperable and reusable is a burning one throughout modern science
A novelty for the Reinterpretation workshop was that the discussion was extended to experiences and best practices beyond the LHC, to see how experiments in other fields address the need for publicly released data and reusable results. This included presentations on dark-matter direct detection, the high-intensity frontier, and neutrino oscillation experiments. Supporting Prosper’s call for data reusability 40 years into the future – “for science 2061” – Eligio Lisi (INFN Bari) pointed out the challenges met in reinterpreting the 1998 Super-Kamiokande data, initially published in terms of the then-sufficient two-flavour neutrino-oscillation paradigm, in terms of contemporary three-neutrino descriptions, and beyond. On the astrophysics side, the LIGO and Virgo collaborations actively pursue an open-science programme. Here, Agata Trovato (APC Paris) presented the Gravitational Wave Open Science Center, giving details on the available data, on their format and on the tools to access them. An open-data policy also exists at the LHC, spearheaded by the CMS collaboration, and Edgar Carrera Jarrin (USF Quito) shared experiences from the first CMS open-data workshop.
The question of making research data findable, accessible, interoperable and reusable (“FAIR” in short) is a burning one throughout modern science. In a keynote talk, the head of the GO FAIR Foundation, Barend Mons, explained the FAIR Guiding Principles together with the technical and social aspects of FAIR data management and data reuse, using the example of COVID-19 disease modelling. There is much to be learned here for our field.
The wrap-up session revolved around the question of how to implement the recommendations of the Reinterpretation workshop in a more systematic way. An important aspect here is the proper recognition, within the collaborations as well as the community at large, of the additional work required to this end. More rigorous citation of HEPData entries by theorists may help in this regard. Moreover, a “Reinterpretation: Auxiliary Material Presentation” (RAMP) seminar series will be launched to give more visibility and explicit recognition to the efforts of preparing and providing extensive material for reinterpretation. The first RAMP meetings took place on 9 and 23 April.
Microseconds after the Big Bang, quarks and gluons roamed freely. As the universe expanded, this quark–gluon plasma (QGP) cooled. When the temperature dropped to roughly a hundred thousand times that in the core of the Sun, hadrons formed. Today, this phase transition is reproduced in the heart of detectors at the LHC when lead ions careen into each other at high energy.
Heavy quarks are powerful probes of properties of the QGP
The experimental quest for the QGP started in the 1980s using fixed-target collisions at the Alternating Gradient Synchrotron at Brookhaven National Laboratory (BNL) and the Super Proton Synchrotron at CERN. This side of the millennium, collider experiments have provided a big jump in energy, first at the Relativistic Heavy Ion Collider (RHIC) at BNL, and now at the LHC. Both facilities allow a thorough investigation of the QGP at different points on the still-mysterious phase diagram of quantum chromodynamics.
Among the most striking features of the QGP formed at the LHC is the development of “collective” phenomena, as spatial anisotropies are transformed by pressure gradients into momentum anisotropies. The ALICE experiment is designed to study the collective behaviour of the torrent of particles created in the hadronisation of QGP droplets. Following detailed studies of the “flow” of the abundant light hadrons that are produced, ALICE has recently demonstrated, alongside certain competitive measurements by CMS and ATLAS, the flow of heavy-flavour (HF) hadrons – particles that probe the entire lifetime of a droplet of QGP.
A perfect fluid
The QGP created in lead–ion collisions at the LHC is made up of thousands of quarks and gluons – far too many quantum fields to keep track of in a simulation. In the early 2000s, however, measurements at RHIC revealed that the QGP has a simplifying property: it is a near perfect fluid, with a very low viscosity, as indicated by observations of the highest collective flows allowable in viscous hydrodynamic simulations. More precisely, its shear viscosity-to-entropy ratio – the generalisation of the non-relativistic kinematic viscosity – appears to be only a little above the conjectured quantum limit of 1/4π derived using holographic gravity (AdS/CFT) duality. As the QGP is a near-perfect fluid, its expansion can be modelled using a few local quantities such as energy density, velocity and temperature.
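With the physical constants restored, the conjectured bound referred to here (often called the KSS bound) reads

```latex
\frac{\eta}{s} \;\gtrsim\; \frac{\hbar}{4\pi k_{\mathrm{B}}}
  \;\approx\; 0.08\,\frac{\hbar}{k_{\mathrm{B}}},
```

and hydrodynamic fits to flow data place the QGP only a few times above this limit, lower than for any ordinary fluid.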
In noncentral heavy-ion collisions, the overlap region between the two incoming nuclei has an almond shape, which naturally imprints a spatial anisotropy to the initial state of the system: the QGP is less elongated along the symmetry plane that connects the centres of the colliding nuclei. As the system evolves, interactions push the QGP more strongly along the shorter symmetry-plane axis than along the longer one (see “Noncentral collision” figure). This is called elliptic flow.
Density fluctuations in the initial state may also lead to other anisotropic flows in the velocity field of the QGP. Triangular flow, for example, pushes the system along three axes. In general, this collective motion is decomposed as 1 + 2 ∑ vn cos(n(ϕ–Ψn)), where vn are the harmonic coefficients, ϕ is the azimuthal angle of a final-state particle’s transverse momentum (pT), and Ψn are the orientations of the symmetry planes. v1, which is expected to be negligible at mid-rapidity, is “directed flow” towards a single maximum, while v2 and v3 signal elliptic and triangular flows. The LHC’s impressive luminosity has allowed ALICE to measure significant values for the flow of light-flavour hadrons up to v9 (see “Light-flavour flow” figure).
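A toy Monte Carlo makes this decomposition concrete. The sketch below (illustrative coefficients and symmetry-plane angles, not ALICE values) samples azimuthal angles from the Fourier form quoted above and recovers v2 and v3 as the average of cos(n(ϕ–Ψn)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative flow coefficients and symmetry-plane angles (not ALICE values)
v_true = {2: 0.10, 3: 0.03}
psi = {2: 0.4, 3: 1.1}

def dn_dphi(phi):
    """Azimuthal distribution 1 + 2*sum_n v_n*cos(n*(phi - Psi_n))."""
    return 1 + 2 * sum(v * np.cos(n * (phi - psi[n])) for n, v in v_true.items())

# Accept-reject sampling of particle azimuthal angles
envelope = 1 + 2 * sum(v_true.values())            # upper bound of dn_dphi
phi = rng.uniform(0, 2 * np.pi, 2_000_000)
phi = phi[rng.uniform(0, envelope, phi.size) < dn_dphi(phi)]

# With known symmetry planes, v_n = <cos(n*(phi - Psi_n))> over the sample
for n in (2, 3):
    v_rec = np.mean(np.cos(n * (phi - psi[n])))
    print(f"v{n}: input {v_true[n]:.3f}, reconstructed {v_rec:.3f}")
```

In real events the symmetry planes are not known a priori and must themselves be estimated from the data, or eliminated altogether using multi-particle correlations, which is where much of the experimental subtlety lies.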
The importance of being heavy
The bulk of the QGP is composed of thermally produced gluons and light quarks. By contrast, thermal HF production is negligible as the typical temperature of the system created in heavy-ion collisions is a few hundred MeV – significantly below the mass of a charm or beauty quark–antiquark pair. HF quarks are instead created in quark–antiquark pairs in early hard-scattering processes on shorter timescales than the QGP formation time, and experience the whole evolution of the system.
Heavy quarks are therefore powerful probes of properties of the QGP. As they traverse the medium, they interact with its constituents, gaining or losing energy depending on their momenta. High-momentum HF quarks lose energy via both elastic (collisional) and inelastic (gluon radiation) processes. Low-momentum HF quarks are swept along with the flow of the medium, partially thermalising with it via multiple interactions. The thermalisation time grows with the particle’s mass, and so a higher degree of thermalisation is expected for charm than for beauty. Subsequent hadronisation brings additional complexity: as colour-charged quarks arrange themselves into colour-neutral hadrons, extra contributions to their flow arise from the influence of the surrounding medium when they coalesce with nearby light quarks.
In the past two years, the ALICE collaboration has measured the elliptic and triangular flow coefficients of HF hadrons with open and hidden charm and beauty. The results are currently unique in both scope and transverse-momentum coverage, and depend on the simultaneous reconstruction of thousands of particles in the ALICE detectors (see “ALICE in action” panel). In each case, these HF flows should be compared to the flow of abundant light-particle species such as charged pions. Within the hydrodynamic description, particles originating from the thermally expanding medium at relatively low transverse momenta typically exhibit flow coefficients that increase with transverse momentum. Faster particles also interact with the medium, but might not reach thermal equilibrium. For these particles, an azimuthal anisotropy develops due to the shorter length of medium they traverse along the symmetry plane, but it is not as large, and the anisotropy coefficients are expected to fall with increasing transverse momentum. When thermal equilibrium is achieved, it imprints the same velocity field on all particles: the result is a mass hierarchy wherein heavier particles exhibit lower flow coefficients at a given transverse momentum.
The geometrical overlap between the two colliding nuclei varies from head-on collisions that produce a huge number of particles, sending several thousand hadrons flying to ALICE’s detectors (“0% centrality”, as a percentile of the hadronic cross section) to peripheral collisions where the two nuclei barely overlap (“100% centrality”). Since the initial geometry is not directly experimentally accessible, centrality is estimated using either the total particle multiplicity or the energy deposited in the detectors.
Among the cloud of particles are a handful of open and hidden heavy-flavour hadrons that are reconstructed from their decay products using tracking, particle-identification and decay-vertex reconstruction. Charm mesons are reconstructed through hadronic decay channels using the central barrel detectors. Open beauty hadrons are also reconstructed in the central barrel using their semileptonic decay to an electron as a proxy. Compelling evidence of heavy-quark energy loss in a deconfined strongly interacting matter is provided by the suppression of high-pT open heavy-flavour hadron yields in central nucleus–nucleus collisions relative to proton–proton collisions (after scaling by the average number of binary nucleon–nucleon collisions).
A small fraction of the initially created heavy-quark pairs will bind together to form charmonium (cc̄) or bottomonium (bb̄) states that are reconstructed in the forward muon spectrometer using their decay channel to two muons. Charmonium states were among the first proposed probes of the deconfinement of the QGP. The potential between the heavy quark and antiquark is partially screened by the high density of colour charges in the QGP, leading to a suppression of the production of charmonium states. Interestingly, however, ALICE observes less suppression of the J/ψ in lead–lead collisions than is seen at the lower collision energies of RHIC, despite the increased density of colour charges at higher collision energies. This effect may be understood as due to J/ψ regeneration as the copiously produced charm quarks and antiquarks recombine. By contrast, bottomonia are not expected to have a large regeneration contribution due to the larger mass and thus lower production cross section of the beauty quark.
D mesons are the lightest and most abundant hadrons formed from a heavy quark, and are key to understanding the dynamics of charm quarks in the collision. A substantial anisotropy is observed for D mesons in non-central collisions (see “Elliptic flow” figure). As expected, the measured pT dependence is similar to that for light particles, suggesting that D mesons are strongly affected by the surrounding medium, participating in the collective motion of the QGP and reaching a high degree of thermalisation. J/ψ mesons, which do not contain light-flavour quarks, also exhibit significant positive elliptic flow with a similar pT shape. Open beauty hadrons, whose mass is dominated by the b quark, are also seen to flow, and in the low to intermediate pT region, below 4 GeV, an apparent mass hierarchy is seen: the lighter the particle, the greater the elliptic flow, as expected in a hydrodynamical description of QGP evolution. Above 6 GeV, the elliptic flows of the three particles converge, perhaps as a result of energy loss as energetic partons move through the QGP. In contrast to the other particles, ϒ mesons do not show any significant elliptic flow. This is not surprising as the transverse momentum of peak elliptic flow is expected to scale with the mass of the particle according to the hydrodynamic description of the evolution of the QGP – for ϒ mesons that should be beyond 10 GeV, where the uncertainties are currently large.
Theoretical descriptions of elliptic flow are also making progress. Models of HF flow need to include a realistic hydrodynamic expansion of the QGP, the interaction of the heavy quarks with the medium via collisional and radiative processes, and the hadronisation of heavy quarks via both fragmentation and coalescence. For example, the “TAMU” model describes the measurements of the D mesons and electrons from beauty-hadron decays reasonably well, but shows some tension with the measurement of J/ψ at intermediate and high transverse momenta, perhaps indicating that a mechanism related to parton energy loss is not included.
Triangular flow
Triangular flow is observed for D and J/ψ mesons in central collisions, demonstrating that energy-density fluctuations in the initial state have a measurable effect on the heavy-quark sector (see “Triangular flow” figure). These measurements of the triangular flow of open- and hidden-charm mesons pose new challenges to models describing HF interactions in the QGP: models now need to account not only for the properties of the medium and the transport of the HF quarks through it, but also for fluctuations in the initial conditions of the heavy-ion collisions.
In the coming years, measurements of HF flow will continue to strongly constrain models of the QGP. It is now clear that charm quarks take part in the collective motion of the medium and partially thermalise. More data is needed to make firm conclusions about open and hidden beauty hadrons. All four LHC experiments will study how heavy quarks diffuse in a colour-deconfined and hydrodynamically expanding medium with the greater luminosities set to be delivered in LHC Run 3 and Run 4. Currently ongoing upgrades to ALICE will extend its unique advantages in track reconstruction at low momenta, and upgrades to LHCb will allow this asymmetric experiment to study non-central collisions in Run 3. In the next long shutdown of the LHC, upgrades to CMS and ATLAS will then extend their already impressive flow measurements to be competitive with ALICE in the crucial low transverse momentum domain, inching us closer to understanding both the early universe and the phase diagram of quantum chromodynamics.
The CMS collaboration, in partnership with the Geneva-based Sharing Knowledge Foundation, has launched a fundraising initiative to support the Lebanese scientific community during an especially difficult period. Lebanon signed an international cooperation agreement with CERN in 2016, which triggered a strong development of the country’s contributions to CERN projects, particularly to the CMS experiment through the affiliation of four of its top universities. Yet the country is dealing with an unprecedented economic crisis, food shortages, Syrian refugees and the COVID-19 pandemic, all in the aftermath of the Beirut port explosion in August 2020.
“Even the most resilient higher-education institutions in Lebanon are struggling to survive,” says CMS collaborator Martin Gastal of CERN, who initiated the fundraising activity in March. “Despite these challenges, the Lebanese scientific community has reaffirmed its commitment to CERN and CMS, but it needs support.”
One project, High-Performance Computing for Lebanon (HPC4L), which was initiated to build Lebanon’s research capacity while contributing as a Tier-2 centre to the analysis of CMS data, is particularly at risk. HPC4L was due to benefit from servers donated by CERN to Lebanon, and from the transfer of CERN and CMS knowledge and expertise to train a dedicated support team that will run a high-performance computing facility there. But the hardware could not be shipped from CERN because of a lack of funding. CMS and the Sharing Knowledge Foundation are therefore fundraising to cover the shipping costs of the donated hardware, to purchase hardware to allow its installation, and to support Lebanese experts while they are trained at CERN by the CMS offline computing team.
“At this pivotal moment, every effort to help Lebanon counts,” says Gastal. “CMS is reaching out for donations to support this initiative, to help both the Lebanese research community and the country itself.”
The electroweak session of the Rencontres de Moriond convened more than 200 participants virtually from 22 to 27 March in a new format, with pre-recorded plenary talks and group-chat channels that went online in advance of live discussion sessions. The following week, the QCD and high-energy interactions session took place with a more conventional virtual organisation.
The highlight of both conferences was the new LHCb result on RK based on the full Run 1 and Run 2 data, and corresponding to an integrated luminosity of 9 fb–1, which led to the claim of the first evidence for lepton-flavour-universality (LFU) violation from a single measurement. RK is the ratio of the branching fractions for the decays B+→ K+ μ+ μ– and B+→ K+ e+ e–. LHCb measured this ratio to be 3.1σ below unity, despite the fact that the two branching fractions are expected to be equal by virtue of the well-established property of lepton universality (see New data strengthens RK flavour anomaly). Coupled with previously reported anomalies of angular variables and the RK*, RD and RD* branching-fraction ratios by several experiments, it further reinforces the indications that LFU may be violated in the B sector. Global fits and possible theoretical interpretations with new particles were also discussed.
Important contributions
Results from Belle II and BES III were reported. Some of the highlights were a first measurement of the B+→ K+ν ν̄ decay and the most stringent limits to date on axions with masses between 0.2 and 1 GeV from Belle II, based on the first data it collected, and searches for LFU violation in the charm sector from BES III, which for the moment give negative results. Belle II is expected to make important contributions to LFU studies soon, and to accumulate an integrated luminosity of 50 ab–1 over the next 10 years.
ATLAS and CMS each presented tens of new results on Standard Model (SM) measurements and searches for new phenomena at the two conferences. Highlights included the CMS measurement of the W leptonic and hadronic branching fractions, which for the electron and muon channels is more precise than the LEP measurements, and the updated ATLAS evidence for the four-top-production process at 4.7σ (with 2.6σ expected). ATLAS and CMS have not yet found any indications of new physics but continue to perform many searches, expanding the scope to as-yet unexplored areas, and many improved limits on new-physics scenarios were reported for the first time at both conference sessions.
Several results and prospects of electroweak precision measurements were presented and discussed, including a new measurement of the fine-structure constant with a precision of 80 parts per trillion, and a measurement at PSI of the neutron electric dipole moment, found to be consistent with zero with an uncertainty of 1.1 × 10–26 e∙cm. Theoretical predictions of (g–2)μ were discussed, including the recent lattice calculation of the hadronic-vacuum-polarisation contribution by the Budapest–Marseille–Wuppertal group, which, if used, would reduce the tension between the (g–2)μ prediction and the experimental measurement to within 2σ.
In the neutrino session, the most relevant new results of the past year were discussed. KATRIN reported updated upper limits on the neutrino mass, obtained from the direct measurement of the endpoint of the electron spectrum of tritium β decay, while T2K showed the most recent results concerning CP violation in the neutrino sector, obtained from the simultaneous measurement of νμ and ν̄μ disappearance, and νe and ν̄e appearance. The measurement disfavours at 90% CL the CP-conserving values 0 and π of the CP-violating parameter of the neutrino mixing matrix, δCP, and all values between 0 and π.
The quest for dark matter is in full swing and is expanding on all fronts. XENON1T updated delegates on an intriguing small excess in the low-energy part of the electron-recoil spectrum, from 1 to 7 keV, which could be interpreted as originating from new particles but that is also consistent with an increased background from tritium contamination. Upcoming new data from the upgraded XENONnT detector are expected to be able to disentangle the different possibilities, should the excess be confirmed. The Axion Dark Matter eXperiment (ADMX) is by far the most sensitive experiment to detect axions in the explored range around 2 μeV. ADMX showed near-future prospects and the plans for upgrading the detector to scan a much wider mass range, up to 20 μeV, in the next few years. The search for dark matter also continues at accelerators, where it could be directly produced or be detected in the decays of SM particles such as the Higgs boson.
The quest for dark matter is in full swing and is expanding on all fronts
ATLAS and CMS also presented new results at the Moriond QCD and high-energy-interactions conference. Highlights of the new results are: the ATLAS full Run-2 search for double-Higgs-boson production in the bbγγ channel, which yielded the tightest constraints to date on the Higgs-boson self-coupling, and the measurement of the top-quark mass by CMS in the single-top-production channel that for the first time reached an accuracy of less than 1 GeV, now becoming relevant to future top-mass combinations. Several recent heavy-ion results were also presented by the LHC experiments, and by STAR and PHENIX at RHIC, in the dedicated heavy-ion session. One highlight was a result from ALICE on the measurement of the Λc+ transverse-momentum spectrum and the Λc+ /D0 ratio in pp and p–Pb collisions, showing discrepancies with perturbative QCD predictions.
The above is only a snapshot of the many interesting results presented at this year’s Rencontres de Moriond, representing the hard work and dedication of countless physicists, many at the early-career stage. As ever, the SM stands strong, though intriguing results provoked lively debate during many virtual discussions.
It has been almost a century since Dirac formulated his famous equation, and 75 years since the first QED calculations by Schwinger, Tomonaga and Feynman were used to explain the small deviations in hydrogen’s hyperfine structure. These calculations also predicted that the deviation from Dirac’s prediction, a = (g–2)/2, where g is the gyromagnetic ratio expressing the magnetic moment in units of e/2me, should be non-zero and thus “anomalous”. The result is famously engraved on Schwinger’s tombstone, standing as a monument to the importance of this result and a marker of things to come.
In January 1957 Garwin and collaborators at Columbia published the first measurements of g for the muon, accurate to 5%, followed two months later by Cassels and collaborators at Liverpool with uncertainties of less than 1%. Leon Lederman is credited with initiating the CERN campaign of g–2 experiments from 1959 to 1979, starting with a borrowed 83 × 52 × 10 cm magnet from Liverpool and ending with a dedicated storage ring and a precision of better than 10 ppm.
Why was CERN so interested in the muon? In a 1981 review, Combley, Farley and Picasso noted that the CERN results for aμ had a sensitivity to new physics (“a modification to the photon propagator or new couplings”) enhanced by a factor (mμ/me)2 relative to the electron. Revealing a deeper interest, they also admitted “… this activity has brought us no nearer to the understanding of the muon mass [200 times that of the electron].”
With the end of the CERN muon programme, focus turned to Brookhaven and the E821 experiment, which took up the challenge of measuring aμ 20 times more precisely, providing sensitivity to virtual particles with masses beyond the reach of the colliders at the time. In 2004 the E821 collaboration delivered on its promise, reporting results accurate to about 0.6 ppm. At the time this showed a 2–3σ discrepancy with respect to the Standard Model (SM) – tantalising, but far from conclusive.
Spectacular progress
The theoretical calculation of g–2 made spectacular progress in step with experiment. Almost eclipsed by the epic 2012 achievement of calculating the QED contributions to five loops from 12,672 Feynman diagrams, huge advances in calculating the hadronic vacuum polarisation contributions to aμ have been made. A reappraisal of the E821 data using this information suggested at least a 3.5σ discrepancy with the SM. It was this that provided the impetus to Lee Roberts and colleagues to build the improved muon g–2 experiments at Fermilab, the first results from which are described in this issue, and at J-PARC. Full results from the Fermilab experiment alone should reduce the aμ uncertainties by at least another factor of three – down to a level that really challenges what we know about the SM.
Muon g–2 is a clear demonstration that theory and experiment must progress hand in hand
Of course, the interpretation of the new results relies on the choice of theory baseline. For example, one could choose, as the Fermilab experiment has, to use the consensus “International Theory Initiative” expectation for aμ. One could also take into account the new results provided by LHCb’s recent RK measurement, which hint that muons might behave differently than electrons. There will inevitably be speculation over the coming months about the right approach. Whatever one’s choice, muon g–2 is a clear demonstration that theory and experiment must progress hand in hand.
Perhaps the most important lesson is the continued cross-fertilisation and impetus to the physics delivered both at CERN and at Fermilab by recent results. The g–2 experiment, an international collaboration between dozens of labs and universities in seven countries, has benefited from students who cut their teeth on LHC experiments. Likewise, students who have worked at the precision frontier at Fermilab are now armed with the expertise of making blinded ppm measurements and are keen to see how they can make new measurements at CERN, for example at the proposed MUonE experiment, or at other muon experiments due to come online this decade.
“It remains to be seen whether or not future refinement of the [SM] will call for the discerning scrutiny of further measurements of even greater precision,” concluded Combley, Farley and Picasso in their 1981 review – a wise comment that is now being addressed.
A fermion’s spin tends to twist to align with a magnetic field – an effect that becomes dramatically macroscopic when electron spins twist together in a ferromagnet. Microscopically, the tiny magnetic moment of a fermion interacts with the external magnetic field through absorption of photons that comprise the field. Quantifying this picture, the Dirac equation predicts fermion magnetic moments to be precisely two in units of Bohr magnetons, e/2m. But virtual lines and loops add an additional 0.1% or so to this value, giving rise to an “anomalous” contribution to the particle’s magnetic moment, known as “g–2”, caused by quantum fluctuations. Calculated to tenth order in quantum electrodynamics (QED), and verified experimentally to about two parts in 1010, the electron’s magnetic moment is one of the most precisely known numbers in the physical sciences. The magnetic moment of the muon, though also measured precisely, is in tension with the Standard Model.
Tricky comparison
The anomalous magnetic moment of the muon was first measured at CERN in 1959, and prior to 2021, was most recently measured by the E821 experiment at Brookhaven National Laboratory (BNL) 16 years ago. The comparison between theory and data is much trickier than for electrons. Being short-lived, muons are less suited to experiments with Penning traps, whereby stable charged particles are confined using static electric and magnetic fields, and the trapped particles are then cooled to allow precise measurements of their properties. Instead, experiments infer how quickly muon spins precess in a storage ring – a situation similar to the wobbling of a spinning top, where information on the muon’s advancing spin is encoded in the direction of the electron that is emitted when it decays. Theoretical calculations are also more challenging, as hadronic contributions are no longer so heavily suppressed when they emerge as virtual particles from the more massive muon.
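The quantity extracted from the spin precession is the anomalous precession frequency, the difference between the spin and cyclotron frequencies. In a uniform vertical field B, and at the “magic” muon momentum of about 3.1 GeV/c (chosen so that the electrostatic focusing fields do not contribute), it reduces, in natural units, to

```latex
\omega_a \;=\; \omega_s - \omega_c \;=\; a_\mu\,\frac{eB}{m_\mu},
```

so that measuring ωa together with the magnetic field yields aμ directly.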
All told, our knowledge of the anomalous magnetic moment of the muon is currently three orders of magnitude less precise than for electrons. And while everything tallies up, more or less, for the electron, BNL’s longstanding measurement of the magnetic moment of the muon is 3.7σ greater than the Standard Model prediction (see panel “Rising to the moment”). The possibility that the discrepancy could be due to virtual contributions from as-yet-undiscovered particles demands ever more precise theoretical calculations. This need is now more pressing than ever, given the increased precision of the experimental value expected in the next few years from the Muon g–2 collaboration at Fermilab in the US and other experiments such as the Muon g–2/EDM collaboration at J-PARC in Japan. Hotly anticipated results from the first data run at Fermilab’s E989 experiment were released on 7 April. The new result is completely consistent with the BNL value but with a slightly smaller error, leading to a slightly larger discrepancy of 4.2σ with the Standard Model when the measurements are combined (see Fermilab strengthens muon g-2 anomaly).
Hadronic vacuum polarisation
The value of the muon anomaly, aμ, is an important test of the Standard Model because it is currently known very precisely – to roughly 0.5 parts per million (ppm) – in both experiment and theory. QED dominates the value of aμ, but due to the non-perturbative nature of QCD it is the strong interaction that contributes most to the error. The theoretical uncertainty on the anomalous magnetic moment of the muon is currently dominated by so-called hadronic vacuum polarisation (HVP) diagrams. In HVP, a virtual photon briefly explodes into a “hadronic blob”, before being reabsorbed, while the magnetic-field photon is simultaneously absorbed by the muon. While of order α2 in QED, the contribution involves all orders in QCD, making for very difficult calculations.
In the Standard Model, the magnetic moment of the muon is computed order-by-order in powers of α for QED (each virtual photon represents a factor of α), and to all orders in αs for QCD.
At the lowest order in QED, the Dirac term (pictured left) accounts for precisely two Bohr magnetons and arises purely from the muon (μ) and the real external photon (γ) representing the magnetic field.
At higher orders in QED, virtual Standard Model particles, depicted by lines forming loops, contribute to a fractional increase of aμ with respect to that value: the so-called anomalous magnetic moment of the muon. It is defined to be aμ = (g–2)/2, where g is the gyromagnetic ratio of the muon – the number of Bohr magnetons, e/2m, which make up the muon’s magnetic moment. According to the Dirac equation, g = 2, but radiative corrections increase its value.
The biggest contribution is from the Schwinger term (pictured left, O(α)) and higher-order QED diagrams.
aμQED = (116 584 718.931 ± 0.104) × 10–11
Electroweak lines (pictured left) also make a well-defined contribution. These diagrams are suppressed by the heavy masses of the Higgs, W and Z bosons.
aμEW = (153.6 ± 1.0) × 10–11
The biggest QCD contribution is due to hadronic vacuum polarisation (HVP) diagrams. These are computed from leading order (pictured left, O(α2)), with one “hadronic blob” at all orders in αs (shaded), up to next-to-next-to-leading order (NNLO, O(α4), with three hadronic blobs) in the HVP.
Hadronic light-by-light scattering (HLbL, pictured left at O(α3) and all orders in αs (shaded)) makes a smaller contribution, but with a larger fractional uncertainty.
Neglecting lattice–QCD calculations for the HVP in favour of those based on e+e– data and phenomenology, the total anomalous magnetic moment is given by

aμSM = (116 591 810 ± 43) × 10–11
This is somewhat below the combined value from the E821 experiment at BNL in 2004 and the E989 experiment at Fermilab in 2021.
aμexp = (116 592 061 ± 41) × 10–11
The discrepancy has roughly 4.2σ significance:
aμexp– aμSM = (251 ± 59) × 10–11.
Historically, and into the present, HVP is calculated using a dispersion relation and experimental data for the cross section for e+e–→ hadrons. This idea was born of necessity almost 60 years ago, before QCD was even on the scene, let alone calculable. The key realisation is that the imaginary part of the vacuum polarisation is directly related to the hadronic cross section via the optical theorem of wave-scattering theory; a dispersion relation then relates the imaginary part to the real part. The cross section is determined over a relatively wide range of energies, in both exclusive and inclusive channels. The dominant contribution – about three quarters – comes from the e+e–→ π+π– channel, which peaks at the rho meson mass, 775 MeV. Though the integral converges rapidly with increasing energy, data are needed over a relatively broad region to obtain the necessary precision. Above the τ mass, QCD perturbation theory hones the calculation.
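In its standard form, the leading-order HVP contribution is obtained from the measured cross section through the ratio R(s) = σ(e+e–→ hadrons)/σ(e+e–→ μ+μ–):

```latex
a_\mu^{\mathrm{HVP,\,LO}} \;=\;
  \frac{\alpha^{2}}{3\pi^{2}}
  \int_{m_{\pi^0}^{2}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s),
```

where K(s) is a known, smoothly varying kernel that strongly weights the low-energy region, which is why the π+π– channel around the rho peak dominates both the integral and its uncertainty.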
Several groups have computed the HVP contribution in this way, and recently a consensus value has been produced as part of the worldwide Muon g–2 Theory Initiative. The error stands at about 0.58% and is the dominant part of the theory error. It is worth noting that a significant part of the error arises from a tension between the most precise measurements, by the BaBar and KLOE experiments, around the rho-meson peak. New measurements, including those from experiments at Novosibirsk in Russia and from Japan’s Belle II experiment, may help resolve the inconsistency in the current data and reduce the error by a factor of two or so.
The alternative approach, of calculating the HVP contribution from first principles using lattice QCD, is not yet at the same level of precision, but is getting there. Consistency between the two approaches will be crucial for any claim of new physics.
Lattice QCD
Kenneth Wilson formulated lattice gauge theory in 1974 as a means to rid quantum field theories of their notorious infinities – a process known as regulating the theory – while maintaining exact gauge invariance, but without using perturbation theory. Lattice QCD calculations involve numerically evaluating the extremely high-dimensional path integrals of QCD. Because of confinement, a perturbative treatment including physical hadronic states is not possible, so the complete integral, regulated properly in a discrete, finite volume, is done numerically by Monte Carlo integration.
Lattice QCD has made significant improvements over the last several years, both in methodology and invested computing time. Recently developed methods (which rely on low-lying eigenmodes of the Dirac operator to speed up calculations) have been especially important for muon–anomaly calculations. By allowing state-of-the-art calculations using physical masses, they remove a significant systematic: the so-called chiral extrapolation for the light quarks. The remaining systematic errors arise from the finite volume and non-zero lattice spacing employed in the simulations. These are handled by doing multiple simulations and extrapolating to the infinite-volume and zero-lattice-spacing limits.
The HVP contribution can readily be computed using lattice QCD in Euclidean space with space-like four-momenta in the photon loop, thus yielding the real part of the HVP directly. The dispersive result is currently more precise (see “Off the mark” figure), but further improvements will depend on consistent new e+e– scattering datasets.
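One widely used lattice formulation (a sketch in the “time-momentum representation”; normalisation conventions differ between groups) expresses the same quantity as a weighted integral of the Euclidean vector-current correlator:

```latex
a_\mu^{\mathrm{HVP,\,LO}} \;=\;
  \left(\frac{\alpha}{\pi}\right)^{2}
  \int_{0}^{\infty} \mathrm{d}t\; \tilde{K}(t)\, G(t),
\qquad
G(t) \;=\; -\frac{1}{3}\sum_{i}\int \mathrm{d}^{3}x\,
  \langle\, j_i(\vec{x},t)\, j_i(0)\,\rangle,
```

where the kernel K̃(t) is known analytically. The long-distance tail of G(t), dominated by two-pion states, drives both the statistical noise and the finite-volume corrections, and is where much of the remaining effort is concentrated.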
Rapid progress in the last few years has resulted in first lattice results with sub-percent uncertainty, closing in on the precision of the dispersive approach. Since these lattice calculations are very involved and still maturing, it will be crucial to monitor the emerging picture once several precise results with different systematic approaches are available. It will be particularly important to aim for statistics-dominated errors to make it more straightforward to quantitatively interpret the resulting agreement with the no-new-physics scenario or the dispersive results. In the shorter term, it will also be crucial to cross-check between different lattice and dispersive results using additional observables, for example based on the vector–vector correlators.
With improved lattice calculations in the pipeline from a number of groups, the tension between lattice QCD and phenomenological calculations may well be resolved before the Fermilab and J-PARC experiments announce their final results. Interestingly, there is a new lattice result with sub-percent precision (BMW 2020) that is in agreement both with the no-new-physics point within 1.3σ, and with the dispersive-data-driven result within 2.1σ. Barring a significant re-evaluation of the phenomenological calculation, however, HVP does not appear to be the source of the discrepancy with experiments.
The next most likely Standard Model process to explain the muon anomaly is hadronic light-by-light scattering. Though it occurs less frequently since it includes an extra virtual photon compared to the HVP contribution, it is much less well known, with comparable uncertainties to HVP.
Hadronic light-by-light scattering
In hadronic light-by-light scattering (HLbL), the magnetic field interacts not with the muon, but with a hadronic “blob”, which is connected to the muon by three virtual photons. (The interaction of the four photons via the hadronic blob gives HLbL its name.) A miscalculation of the HLbL contribution has often been proposed as the source of the apparently anomalous measurement of the muon anomaly by BNL’s E821 collaboration.
Since the so-called Glasgow consensus (the fruit of a 2009 workshop) first established a value more than 10 years ago, significant progress has been made on the analytic computation of the HLbL scattering contribution. In particular, a dispersive analysis of the most important hadronic channels has been carried out, including the leading pion–pole, sub-leading pion loop and rescattering diagrams including heavier pseudoscalars. These calculations are analogous in spirit to the dispersive HVP calculations, but are more complicated, and the experimental measurements are more difficult because form factors with one or two virtual photons are required.
The project to calculate the HLbL contribution using lattice QCD began more than 10 years ago, and many improvements to the method have been made to reduce both statistical and systematic errors since then. Last year we published, with colleagues Norman Christ, Taku Izubuchi and Masashi Hayakawa, the first ever lattice–QCD calculation of the HLbL contribution with all errors controlled, finding aμHLbL, lattice = (78.7 ± 30.6 (stat) ± 17.7 (sys)) × 10–11. The calculation was not easy: it took four years and a billion core-hours on the Mira supercomputer at the Argonne Leadership Computing Facility.
Our lattice HLbL calculations are quite consistent with the analytic and data-driven result, which is approximately a factor of two more precise. Combining the results leads to aμHLbL = (90 ± 17) × 10–11, which means the very difficult HLbL contribution cannot explain the Standard Model discrepancy with experiment. To make such a strong conclusion, however, it is necessary to have consistent results from at least two completely different methods of calculating this challenging non-perturbative quantity.
New physics?
If current theory calculations of the muon anomaly hold up, and the new experiments reduce its uncertainty by the hoped-for factor of four, then a new-physics explanation will become impossible to ignore. The idea would be to add particles and interactions that have not yet been observed but may soon be discovered at the LHC or in future experiments. New particles would be expected to contribute to the anomaly through Feynman diagrams similar to the Standard Model topologies (see “Rising to the moment” panel).
Calculations of the anomalous magnetic moment of the muon are not finished
The most commonly considered new-physics explanation is supersymmetry, but the increasingly stringent lower limits placed on the masses of super-partners by the LHC experiments make it increasingly difficult to explain the muon anomaly. Other theories could do the job too. One popular idea that could also explain persistent anomalies in the b-quark sector is heavy scalar leptoquarks, which mediate a new interaction allowing leptons and quarks to change into each other. Another option involves scenarios whereby the Standard Model Higgs boson is accompanied by a heavier Higgs-like boson.
The calculations of the anomalous magnetic moment of the muon are not finished. As a systematically improvable method, we expect more precise lattice determinations of the hadronic contributions in the near future. Increasingly powerful algorithms and hardware resources will further improve precision on the lattice side, and new experimental measurements and analysis methods will do the same for dispersive studies of the HVP and HLbL contributions.
To confidently discover new physics requires that these two independent approaches to the Standard Model value agree. With the first new results on the experimental value of the muon anomaly in almost two decades showing perfect agreement with the old value, we anxiously await more precise measurements in the near future. Our hope is that the clash of theory and experiment will be the beginning of an exciting new chapter of particle physics, heralding new discoveries at current and future particle colliders.