

Going with the flow

Microseconds after the Big Bang, quarks and gluons roamed freely. As the universe expanded, this quark–gluon plasma (QGP) cooled. When the temperature dropped to roughly a hundred thousand times that in the core of the Sun, hadrons formed. Today, this phase transition is reproduced in the heart of detectors at the LHC when lead ions careen into each other at high energy.

Heavy quarks are powerful probes of properties of the QGP

The experimental quest for the QGP started in the 1980s using fixed-target collisions at the Alternating Gradient Synchrotron at Brookhaven National Laboratory (BNL) and the Super Proton Synchrotron at CERN. This side of the millennium, collider experiments have provided a big jump in energy, first at the Relativistic Heavy Ion Collider (RHIC) at BNL, and now at the LHC. Both facilities allow a thorough investigation of the QGP at different points on the still-mysterious phase diagram of quantum chromodynamics.

Three droplets of quark–gluon plasma

Among the most striking features of the QGP formed at the LHC is the development of “collective” phenomena, as spatial anisotropies are transformed by pressure gradients into momentum anisotropies. The ALICE experiment is designed to study the collective behaviour of the torrent of particles created in the hadronisation of QGP droplets. Following detailed studies of the “flow” of the abundant light hadrons that are produced, ALICE has recently demonstrated, alongside competitive measurements by CMS and ATLAS, the flow of heavy-flavour (HF) hadrons – particles that probe the entire lifetime of a droplet of QGP.

A perfect fluid

The QGP created in lead–ion collisions at the LHC is made up of thousands of quarks and gluons – far too many quantum fields to keep track of in a simulation. In the early 2000s, however, measurements at RHIC revealed that the QGP has a simplifying property: it is a near-perfect fluid with a very low viscosity, as indicated by collective flows close to the largest allowed in viscous hydrodynamic simulations. More precisely, its shear-viscosity-to-entropy-density ratio – the relativistic generalisation of the kinematic viscosity – appears to be only a little above the conjectured quantum limit of 1/4π derived using holographic gravity (AdS/CFT) duality. As the QGP is a near-perfect fluid, its expansion can be modelled using a few local quantities such as energy density, velocity and temperature.
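In full units, the conjectured bound can be written in one line. The inequality below is the standard statement of the Kovtun–Son–Starinets result; the conversion to SI units is added here for reference:

```latex
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B} \;\approx\; 6.1 \times 10^{-13}\ \mathrm{K\,s}
```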

Evolving energy density of the QGP

In noncentral heavy-ion collisions, the overlap region between the two incoming nuclei has an almond shape, which naturally imprints a spatial anisotropy on the initial state of the system: the QGP is less elongated along the symmetry plane that connects the centres of the colliding nuclei. As the system evolves, interactions push the QGP more strongly along the shorter symmetry-plane axis than along the longer one (see “Noncentral collision” figure). This is called elliptic flow.

Density fluctuations in the initial state may also lead to other anisotropic flows in the velocity field of the QGP. Triangular flow, for example, pushes the system along three axes. In general, this collective motion is decomposed as a Fourier series, dN/dϕ ∝ 1 + 2 Σn vn cos(n(ϕ − Ψn)), where vn are harmonic coefficients, ϕ is the azimuthal angle of the final-state particles in the plane transverse to the beam, in which their transverse momentum (pT) is measured, and Ψn are the orientations of the symmetry planes. v1, which is expected to be negligible at mid-rapidity, is “directed flow” towards a single maximum, while v2 and v3 signal elliptic and triangular flow. The LHC’s impressive luminosity has allowed ALICE to measure significant values for the flow of light-flavour hadrons up to v9 (see “Light-flavour flow” figure).
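The decomposition above can be made concrete in a few lines of code. This is a pedagogical sketch only – real analyses use two-particle correlations or cumulants and correct for event-plane resolution – and the toy event, sampled here with an assumed v2 of 0.1, is not ALICE data:

```python
import numpy as np

def flow_coefficients(phi, n_max=3):
    """Estimate symmetry-plane angles Psi_n and flow coefficients v_n
    from the azimuthal angles phi (radians) of one event's particles."""
    results = {}
    for n in range(1, n_max + 1):
        q_n = np.exp(1j * n * phi).sum()            # flow vector Q_n
        psi_n = np.angle(q_n) / n                   # symmetry-plane estimate
        v_n = np.mean(np.cos(n * (phi - psi_n)))    # <cos n(phi - Psi_n)>
        results[n] = (v_n, psi_n)
    return results

# Toy event: accept-reject sampling from dN/dphi ∝ 1 + 2 v2 cos(2(phi - Psi2))
rng = np.random.default_rng(1)
v2_true, psi2_true = 0.10, 0.3
phi = rng.uniform(-np.pi, np.pi, 200_000)
pdf = 1 + 2 * v2_true * np.cos(2 * (phi - psi2_true))
phi = phi[rng.uniform(0, 1 + 2 * v2_true, phi.size) < pdf]

v2, psi2 = flow_coefficients(phi)[2]
print(f"v2 = {v2:.3f} (true {v2_true}), Psi2 = {psi2:.2f} (true {psi2_true})")
```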

The importance of being heavy

The bulk of the QGP is composed of thermally produced gluons and light quarks. By contrast, thermal HF production is negligible as the typical temperature of the system created in heavy-ion collisions is a few hundred MeV – significantly below the mass of a charm or beauty quark–antiquark pair. HF quarks are instead created in quark–antiquark pairs in early hard-scattering processes on shorter timescales than the QGP formation time, and experience the whole evolution of the system. 

Flow anisotropy coefficients

Heavy quarks are therefore powerful probes of properties of the QGP. As they traverse the medium, they interact with its constituents, gaining or losing energy depending on their momenta. High-momentum HF quarks lose energy via both elastic (collisional) and inelastic (gluon radiation) processes. Low-momentum HF quarks are swept along with the flow of the medium, partially thermalising with it via multiple interactions. The thermalisation time grows with the particle’s mass, and so a higher degree of thermalisation is expected for charm than for beauty. Subsequent hadronisation brings additional complexity: as colour-charged quarks arrange themselves in colour-neutral hadrons, extra contributions to their flow arise from the influence of the surrounding medium when they coalesce with nearby light quarks.

In the past two years, the ALICE collaboration has measured the elliptic and triangular flow coefficients of HF hadrons with open and hidden charm and beauty. The results are currently unique in both scope and transverse-momentum coverage, and depend on the simultaneous reconstruction of thousands of particles in the ALICE detectors (see “ALICE in action” panel). In each case, these HF flows should be compared to the flow of the abundant light-particle species such as charged pions. Within the hydrodynamic description, particles originating from the thermally expanding medium at relatively low transverse momenta typically exhibit flow coefficients that increase with transverse momentum. Faster particles also interact with the medium, but might not reach thermal equilibrium. For these particles, an azimuthal anisotropy develops due to the shorter length of medium they traverse along the symmetry plane, but it is not as large, and anisotropy coefficients are expected to fall with increasing transverse momentum. When thermal equilibrium is achieved, it imprints the same velocity field on all particles: the result is a mass hierarchy wherein heavier particles exhibit lower flow coefficients for a given transverse momentum.

ALICE in action

Lead–ion collisions

The geometrical overlap between the two colliding nuclei varies from head-on collisions that produce a huge number of particles, sending several thousand hadrons flying into ALICE’s detectors (“0% centrality”, as a percentile of the hadronic cross section), to peripheral collisions where the two nuclei barely overlap (“100% centrality”). Since the initial geometry is not directly experimentally accessible, centrality is estimated using either the total particle multiplicity or the energy deposited in the detectors.
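The percentile logic can be sketched in a few lines. This is an illustration of the idea only: the reference distribution below is synthetic, and experiments calibrate the centrality scale against Glauber-model fits of the measured multiplicity or forward-energy distributions:

```python
import numpy as np

def centrality_percentile(multiplicity, minbias_reference):
    """Map an event's multiplicity to a centrality percentile:
    0% = most central (highest multiplicity), 100% = most peripheral."""
    reference = np.sort(np.asarray(minbias_reference))
    below = np.searchsorted(reference, multiplicity)   # events with lower multiplicity
    return 100.0 * (1.0 - below / reference.size)      # fraction with higher multiplicity

# Synthetic minimum-bias reference sample (hypothetical numbers)
rng = np.random.default_rng(0)
reference = rng.gamma(shape=0.5, scale=4000.0, size=1_000_000)

print(f"{centrality_percentile(12_000, reference):.0f}%")  # high multiplicity: central
print(f"{centrality_percentile(50, reference):.0f}%")      # low multiplicity: peripheral
```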

Among the cloud of particles are a handful of open and hidden heavy-flavour hadrons that are reconstructed from their decay products using tracking, particle identification and decay-vertex reconstruction. Charm mesons are reconstructed through hadronic decay channels using the central barrel detectors. Open beauty hadrons are also reconstructed in the central barrel using their semileptonic decay to an electron as a proxy. Compelling evidence of heavy-quark energy loss in deconfined, strongly interacting matter is provided by the suppression of high-pT open heavy-flavour hadron yields in central nucleus–nucleus collisions relative to proton–proton collisions (after scaling by the average number of binary nucleon–nucleon collisions).

A small fraction of the initially created heavy-quark pairs will bind together to form charmonium (cc̄) or bottomonium (bb̄) states that are reconstructed in the forward muon spectrometer using their decay channel to two muons. Charmonium states were among the first proposed probes of the deconfinement of the QGP. The potential between the heavy quark and antiquark is partially screened by the high density of colour charges in the QGP, leading to a suppression of the production of charmonium states. Interestingly, however, ALICE observes less suppression of the J/ψ in lead–lead collisions than is seen at the lower collision energies of RHIC, despite the increased density of colour charges at higher collision energies. This effect may be understood as due to J/ψ regeneration as the copiously produced charm quarks and antiquarks recombine. By contrast, bottomonia are not expected to have a large regeneration contribution due to the larger mass and thus lower production cross section of the beauty quark.

D mesons are the lightest and most abundant hadrons formed from a heavy quark, and are key to understanding the dynamics of charm quarks in the collision. A substantial anisotropy is observed for D mesons in non-central collisions (see “Elliptic flow” figure). As expected, the measured pT dependence is similar to that for light particles, suggesting that D mesons are strongly affected by the surrounding medium, participating in the collective motion of the QGP and reaching a high degree of thermalisation. J/ψ mesons, which do not contain light-flavour quarks, also exhibit significant positive elliptic flow with a similar pT shape. Open beauty hadrons, whose mass is dominated by the b quark, are also seen to flow, and in the low to intermediate pT region, below 4 GeV, an apparent mass hierarchy is seen: the lighter the particle, the greater the elliptic flow, as expected in a hydrodynamical description of QGP evolution. Above 6 GeV, the elliptic flows of the three particles converge, perhaps as a result of energy loss as energetic partons move through the QGP. In contrast to the other particles, ϒ mesons do not show any significant elliptic flow. This is not surprising as the transverse momentum of peak elliptic flow is expected to scale with the mass of the particle according to the hydrodynamic description of the evolution of the QGP – for ϒ mesons that should be beyond 10 GeV, where the uncertainties are currently large.

Differential elliptic-flow coefficients

Theoretical descriptions of elliptic flow are also making progress. Models of HF flow need to include a realistic hydrodynamic expansion of the QGP, the interaction of the heavy quarks with the medium via collisional and radiative processes, and the hadronisation of heavy quarks via both fragmentation and coalescence. For example, the “TAMU” model describes the measurements of the D mesons and electrons from beauty-hadron decays reasonably well, but shows some tension with the measurement of J/ψ at intermediate and high transverse momenta, perhaps indicating that a mechanism related to parton energy loss is not included. 

Triangular flow

Triangular flow is observed for D and J/ψ mesons in central collisions, demonstrating that energy-density fluctuations in the initial state have a measurable effect on the heavy-quark sector (see “Triangular flow” figure). These measurements of the triangular flow of open- and hidden-charm mesons pose new challenges to models describing HF interactions in the QGP: models now need to account not only for the properties of the medium and the transport of the HF quarks through it, but also for fluctuations in the initial conditions of the heavy-ion collisions.

Differential triangular-flow coefficients

In the coming years, measurements of HF flow will continue to strongly constrain models of the QGP. It is now clear that charm quarks take part in the collective motion of the medium and partially thermalise. More data are needed to draw firm conclusions about open and hidden beauty hadrons. All four LHC experiments will study how heavy quarks diffuse in a colour-deconfined and hydrodynamically expanding medium with the greater luminosities set to be delivered in LHC Run 3 and Run 4. Ongoing upgrades to ALICE will extend its unique advantages in track reconstruction at low momenta, and upgrades to LHCb will allow this asymmetric experiment to study non-central collisions in Run 3. In the next long shutdown of the LHC, upgrades to CMS and ATLAS will then extend their already impressive flow measurements to be competitive with ALICE in the crucial low-transverse-momentum domain, inching us closer to understanding both the early universe and the phase diagram of quantum chromodynamics.

CMS seeks support for Lebanese colleagues

Lebanese scientists at CERN

The CMS collaboration, in partnership with the Geneva-based Sharing Knowledge Foundation, has launched a fundraising initiative to support the Lebanese scientific community during an especially difficult period. Lebanon signed an international cooperation agreement with CERN in 2016, which triggered a strong development of the country’s contributions to CERN projects, particularly to the CMS experiment through the affiliation of four of its top universities. Yet the country is dealing with an unprecedented economic crisis, food shortages, the strain of hosting large numbers of Syrian refugees and the COVID-19 pandemic, all in the aftermath of the Beirut port explosion in August 2020.

“Even the most resilient higher-education institutions in Lebanon are struggling to survive,” says CMS collaborator Martin Gastal of CERN, who initiated the fundraising activity in March. “Despite these challenges, the Lebanese scientific community has reaffirmed its commitment to CERN and CMS, but it needs support.”

One project, High-Performance Computing for Lebanon (HPC4L), which was initiated to build Lebanon’s research capacity while contributing as a Tier-2 centre to the analysis of CMS data, is particularly at risk. HPC4L was due to benefit from servers donated by CERN to Lebanon, and from the transfer of CERN and CMS knowledge and expertise to train a dedicated support team that will run a high-performance computing facility there. But the hardware could not be shipped from CERN because of a lack of available funding. CMS and the Sharing Knowledge Foundation are therefore fundraising to cover the shipping costs of the donated hardware, to purchase hardware to allow its installation, and to support Lebanese experts while they are trained at CERN by the CMS offline computing team.

“At this pivotal moment, every effort to help Lebanon counts,” says Gastal. “CMS is reaching out for donations to support this initiative, to help both the Lebanese research community and the country itself.”

More information, including how to get involved, can be found at: cern.ch/fundraiser-lebanon. 

Anomalies intrigue at Moriond

LHCb

The electroweak session of the Rencontres de Moriond convened more than 200 participants virtually from 22 to 27 March in a new format, with pre-recorded plenary talks and group-chat channels that went online in advance of live discussion sessions. The following week, the QCD and high-energy interactions session took place with a more conventional virtual organisation.

The highlight of both conferences was the new LHCb result on RK based on the full Run 1 and Run 2 data, corresponding to an integrated luminosity of 9 fb⁻¹, which led to the claim of the first evidence for lepton-flavour-universality (LFU) violation from a single measurement. RK is the ratio of the branching fractions for the decays B+→ K+μ+μ− and B+→ K+e+e−. LHCb measured this ratio to be 3.1σ below unity, despite the fact that the two branching fractions are expected to be equal by virtue of lepton universality (see New data strengthens RK flavour anomaly). Coupled with anomalies previously reported by several experiments in angular variables and in the RK*, RD and RD* branching-fraction ratios, it further reinforces the indications that LFU may be violated in the B sector. Global fits and possible theoretical interpretations with new particles were also discussed.

Important contributions

Results from Belle II and BESIII were reported. Highlights included a first measurement of the B+→ K+νν̄ decay and the most stringent limits to date on axions with masses between 0.2 and 1 GeV from Belle II, based on its first collected data, as well as searches for LFU violation in the charm sector from BESIII, which have so far yielded negative results. Belle II is expected to make important contributions to LFU studies soon and to accumulate an integrated luminosity of 50 ab⁻¹ over the next ten years.

ATLAS and CMS each presented tens of new results on Standard Model (SM) measurements and searches for new phenomena at the two conferences. Highlights included the CMS measurement of the W leptonic and hadronic branching fractions, which for the electron and muon channels surpasses the precision achieved at LEP, and the updated ATLAS evidence for the four-top-production process at 4.7σ (with 2.6σ expected). ATLAS and CMS have not yet found any indications of new physics but continue to perform many searches, expanding the scope to as-yet unexplored areas, and many improved limits on new-physics scenarios were reported for the first time at both conference sessions.

Several results and prospects of electroweak precision measurements were presented and discussed, including a new measurement of the fine-structure constant with a precision of 80 parts per trillion, and a measurement at PSI of the electric dipole moment of the neutron, consistent with zero with an uncertainty of 1.1 × 10⁻²⁶ e∙cm. Theoretical predictions of (g–2)μ were discussed, including the recent lattice calculation from the Budapest–Marseille–Wuppertal group of the hadronic-vacuum-polarisation contribution, which, if used in comparison with the experimental measurement, would bring the tension with the (g–2)μ prediction to within 2σ.

In the neutrino session, the most relevant results of the past year were discussed. KATRIN reported updated upper limits on the neutrino mass, obtained from the direct measurement of the endpoint of the electron spectrum of tritium β decay, while T2K showed the most recent results concerning CP violation in the neutrino sector, obtained from the simultaneous measurement of νμ and ν̄μ disappearance, and νe and ν̄e appearance. The measurement disfavours at 90% CL the CP-conserving values 0 and π of the CP-violating parameter of the neutrino mixing matrix, δCP, as well as all values between 0 and π.

The quest for dark matter is in full swing and is expanding on all fronts. XENON1T updated delegates on an intriguing small excess in the low-energy part of the electron-recoil spectrum, from 1 to 7 keV, which could be interpreted as originating from new particles but is also consistent with an increased background from tritium contamination. Upcoming data from the upgraded XENONnT detector are expected to be able to disentangle the different possibilities, should the excess be confirmed. The Axion Dark Matter eXperiment (ADMX) is by far the most sensitive axion search in the explored mass range around 2 μeV. ADMX presented near-future prospects and plans for upgrading the detector to scan a much wider mass range, up to 20 μeV, in the next few years. The search for dark matter also continues at accelerators, where it could be directly produced or be detected in the decays of SM particles such as the Higgs boson.

The quest for dark matter is in full swing and is expanding on all fronts

ATLAS and CMS also presented new results at the Moriond QCD and high-energy-interactions conference. Highlights included the ATLAS full Run-2 search for double-Higgs-boson production in the bbγγ channel, which yielded the tightest constraints to date on the Higgs-boson self-coupling, and the measurement of the top-quark mass by CMS in the single-top-production channel, which for the first time reached a precision of better than 1 GeV, making it relevant to future top-mass combinations. Several recent heavy-ion results were also presented by the LHC experiments, and by STAR and PHENIX at RHIC, in the dedicated heavy-ion session. One highlight was a result from ALICE on the measurement of the Λc+ transverse-momentum spectrum and the Λc+/D0 ratio in pp and p–Pb collisions, showing discrepancies with perturbative-QCD predictions.

The above is only a snapshot of the many interesting results presented at this year’s Rencontres de Moriond, representing the hard work and dedication of countless physicists, many at the early-career stage. As ever, the SM stands strong, though intriguing results provoked lively debate during many virtual discussions.

Muon g–2: the promise of a generation

CERN g-2 storage ring

It has been almost a century since Dirac formulated his famous equation, and 75 years since the first QED calculations by Schwinger, Tomonaga and Feynman were used to explain the small deviations observed in hydrogen’s hyperfine structure. These calculations also predicted that a = (g–2)/2 – the fractional deviation of the gyromagnetic factor g from Dirac’s value of two – should be non-zero and thus “anomalous”. The result is famously engraved on Schwinger’s tombstone, standing as a monument to its importance and a marker of things to come.

In January 1957 Garwin and collaborators at Columbia published the first measurements of g for the recently discovered muon, accurate to 5%, followed two months later by Cassels and collaborators at Liverpool with uncertainties of less than 1%. Leon Lederman is credited with initiating the CERN campaign of g–2 experiments from 1959 to 1979, starting with a borrowed 83 × 52 × 10 cm magnet from Liverpool and ending with a dedicated storage ring and a precision of better than 10 ppm.

Why was CERN so interested in the muon? In a 1981 review, Combley, Farley and Picasso commented that the CERN results for aμ had a higher sensitivity to new physics – “a modification to the photon propagator or new couplings” – than the electron’s, by a factor (mμ/me)². Revealing a deeper interest, they also admitted “… this activity has brought us no nearer to the understanding of the muon mass [200 times that of the electron].”

With the end of the CERN muon programme, focus turned to Brookhaven and the E821 experiment, which took up the challenge of measuring aμ 20 times more precisely, providing sensitivity to virtual particles with masses beyond the reach of the colliders at the time. In 2004 the E821 collaboration delivered on its promise, reporting results accurate to about 0.6 ppm. At the time this showed a 2–3σ discrepancy with respect to the Standard Model (SM) – tantalising, but far from conclusive.

Spectacular progress
The theoretical calculation of g–2 has made spectacular progress in step with experiment. Though almost eclipsed by the epic 2012 achievement of calculating the QED contributions to five loops from 12,672 Feynman diagrams, huge advances have been made in calculating the hadronic-vacuum-polarisation contributions to aμ. A reappraisal of the E821 data using this information suggested at least a 3.5σ discrepancy with the SM. It was this that provided the impetus for Lee Roberts and colleagues to build the improved muon g–2 experiments at Fermilab – the first results from which are described in this issue – and at J-PARC. Full results from the Fermilab experiment alone should reduce the aμ uncertainties by at least another factor of three – down to a level that really challenges what we know about the SM.

Muon g–2 is a clear demonstration that theory and experiment must progress hand in hand

Of course, the interpretation of the new results relies on the choice of theory baseline. For example, one could choose, as the Fermilab experiment has, to use the consensus “International Theory Initiative” expectation for aμ. One could also take into account the new results provided by LHCb’s recent RK measurement, which hint that muons might behave differently than electrons. There will inevitably be speculation over the coming months about the right approach. Whatever one’s choice, muon g–2 is a clear demonstration that theory and experiment must progress hand in hand.

Perhaps the most important lesson is the continued cross-fertilisation between CERN and Fermilab, and the impetus that recent results give to the physics delivered at both laboratories. The g–2 experiment, an international collaboration between dozens of labs and universities in seven countries, has benefited from students who cut their teeth on LHC experiments. Likewise, students who have worked at the precision frontier at Fermilab are now armed with the expertise of making blinded ppm measurements and are keen to see how they can make new measurements at CERN, for example at the proposed MUonE experiment, or at other muon experiments due to come online this decade.

“It remains to be seen whether or not future refinement of the [SM] will call for the discerning scrutiny of further measurements of even greater precision,” concluded Combley, Farley and Picasso in their 1981 review – a wise comment that is now being addressed.

An anomalous moment for the muon

Hadronic light-by-light computation

A fermion’s spin tends to twist to align with a magnetic field – an effect that becomes dramatically macroscopic when electron spins twist together in a ferromagnet. Microscopically, the tiny magnetic moment of a fermion interacts with the external magnetic field through absorption of photons that comprise the field. Quantifying this picture, the Dirac equation predicts fermion magnetic moments to be precisely two in units of Bohr magnetons, e/2m. But virtual lines and loops add an extra 0.1% or so to this value, giving rise to an “anomalous” contribution to the particle’s magnetic moment, known as “g–2”, caused by quantum fluctuations. Calculated to tenth order in quantum electrodynamics (QED), and verified experimentally to about two parts in 10¹⁰, the electron’s magnetic moment is one of the most precisely known numbers in the physical sciences. The magnetic moment of the muon, while also measured precisely, is in tension with the Standard Model.

Tricky comparison

The anomalous magnetic moment of the muon was first measured at CERN in 1959, and prior to 2021, was most recently measured by the E821 experiment at Brookhaven National Laboratory (BNL) 16 years ago. The comparison between theory and data is much trickier than for electrons. Being short-lived, muons are less suited to experiments with Penning traps, whereby stable charged particles are confined using static electric and magnetic fields, and the trapped particles are then cooled to allow precise measurements of their properties. Instead, experiments infer how quickly muon spins precess in a storage ring – a situation similar to the wobbling of a spinning top, where information on the muon’s advancing spin is encoded in the direction of the electron that is emitted when it decays. Theoretical calculations are also more challenging, as hadronic contributions are no longer so heavily suppressed when they emerge as virtual particles from the more massive muon.

All told, our knowledge of the anomalous magnetic moment of the muon is currently three orders of magnitude less precise than for electrons. And while everything tallies up, more or less, for the electron, BNL’s longstanding measurement of the magnetic moment of the muon is 3.7σ greater than the Standard Model prediction (see panel “Rising to the moment”). The possibility that the discrepancy could be due to virtual contributions from as-yet-undiscovered particles demands ever more precise theoretical calculations. This need is now more pressing than ever, given the increased precision of the experimental value expected in the next few years from the Muon g–2 collaboration at Fermilab in the US and other experiments such as the Muon g–2/EDM collaboration at J-PARC in Japan. Hotly anticipated results from the first data run at Fermilab’s E989 experiment were released on 7 April. The new result is completely consistent with the BNL value but with a slightly smaller error, leading to a slightly larger discrepancy of 4.2σ with the Standard Model when the measurements are combined (see Fermilab strengthens muon g-2 anomaly).

Hadronic vacuum polarisation

The value of the muon anomaly, aμ, is an important test of the Standard Model because it is currently known very precisely – to roughly 0.5 parts per million (ppm) – in both experiment and theory. QED dominates the value of aμ, but due to the non-perturbative nature of QCD it is the strong interaction that contributes most to the error. The theoretical uncertainty on the anomalous magnetic moment of the muon is currently dominated by so-called hadronic-vacuum-polarisation (HVP) diagrams. In HVP, a virtual photon briefly explodes into a “hadronic blob”, before being reabsorbed, while the magnetic-field photon is simultaneously absorbed by the muon. While of order α² in QED, the contribution is of all orders in QCD, making for very difficult calculations.

Rising to the moment

Artist

In the Standard Model, the magnetic moment of the muon is computed order-by-order in powers of α for QED (each virtual photon represents a factor of α), and to all orders in αs for QCD.

At the lowest order in QED, the Dirac term (pictured left) accounts for precisely two Bohr magnetons and arises purely from the muon (μ) and the real external photon (γ) representing the magnetic field.

At higher orders in QED, virtual Standard Model particles, depicted by lines forming loops, contribute to a fractional increase of aμ with respect to that value: the so-called anomalous magnetic moment of the muon. It is defined to be aμ = (g–2)/2, where g is the gyromagnetic ratio of the muon – the number of Bohr magnetons, e/2m, which make up the muon’s magnetic moment. According to the Dirac equation, g = 2, but radiative corrections increase its value.

The biggest contribution is from the Schwinger term (pictured left, O(α)) and higher-order QED diagrams.

aμQED = (116 584 718.931 ± 0.104) × 10⁻¹¹

Electroweak lines (pictured left) also make a well-defined contribution. These diagrams are suppressed by the heavy masses of the Higgs, W and Z bosons.

aμEW = (153.6 ± 1.0) × 10⁻¹¹

The biggest QCD contribution is due to hadronic vacuum polarisation (HVP) diagrams. These are computed from leading order (pictured left, O(α²)), with one “hadronic blob” at all orders in αs (shaded), up to next-to-next-to-leading order (NNLO, O(α⁴), with three hadronic blobs) in the HVP.

Hadronic light-by-light scattering (HLbL, pictured left at O(α³) and all orders in αs (shaded)) makes a smaller contribution but with a larger fractional uncertainty.

Neglecting lattice-QCD calculations for the HVP in favour of those based on e+e− data and phenomenology, the total anomalous magnetic moment is given by

aμSM = aμQED + aμEW + aμHVP + aμHLbL = (116 591 810 ± 43) × 10⁻¹¹.

This is somewhat below the combined value from the E821 experiment at BNL in 2004 and the E989 experiment at Fermilab in 2021.

aμexp = (116 592 061 ± 41) × 10⁻¹¹

The discrepancy has roughly 4.2σ significance:

aμexp − aμSM = (251 ± 59) × 10⁻¹¹.
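As a quick consistency check, the numbers quoted in this panel can be combined in a few lines. The hadronic split implied by the totals is inferred here, not quoted from the panel, and the quadrature combination assumes independent uncertainties:

```python
import math

# All values in units of 1e-11, as quoted above.
a_qed = 116_584_718.931
a_ew = 153.6
a_sm = 116_591_810.0    # total Standard Model prediction (+/- 43)
a_exp = 116_592_061.0   # BNL + Fermilab average (+/- 41)

# Hadronic piece (HVP + HLbL) implied by the quoted totals:
print(a_sm - a_qed - a_ew)                      # ~6937.5, dominated by HVP

# Discrepancy and its significance:
delta, sigma = a_exp - a_sm, math.hypot(43, 41)
print(f"{delta:.0f} ± {sigma:.0f} -> {delta / sigma:.1f}σ")  # 251 ± 59 -> 4.2σ
```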

Historically, and into the present, HVP is calculated using a dispersion relation and experimental data for the cross section for e+e− → hadrons. This idea was born of necessity almost 60 years ago, before QCD was even on the scene, let alone calculable. The key realisation is that the imaginary part of the vacuum polarisation is directly related to the hadronic cross section via the optical theorem of wave-scattering theory; a dispersion relation then relates the imaginary part to the real part. The cross section is determined over a relatively wide range of energies, in both exclusive and inclusive channels. The dominant contribution – about three quarters – comes from the e+e− → π+π− channel, which peaks at the rho-meson mass, 775 MeV. Though the integral converges rapidly with increasing energy, data are needed over a relatively broad region to obtain the necessary precision. Above the τ mass, QCD perturbation theory hones the calculation.
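The structure of that dispersion integral is easy to sketch numerically. The kernel below is the standard leading-order one; the R-ratio, however, is a toy single-resonance stand-in with an arbitrarily chosen peak height, so the output only illustrates the order of magnitude, not a real evaluation:

```python
import numpy as np
from scipy.integrate import quad

ALPHA, M_MU = 1 / 137.035999, 0.105658  # fine-structure constant; muon mass (GeV)

def kernel(s):
    """Standard QED kernel K(s) for the leading-order HVP dispersion integral."""
    f = lambda x: x**2 * (1 - x) / (x**2 + (1 - x) * s / M_MU**2)
    return quad(f, 0.0, 1.0)[0]

def r_toy(s):
    """Toy hadronic R-ratio: a single rho-like Breit-Wigner (arbitrary height)."""
    m, g, peak = 0.775, 0.149, 10.0
    return peak * (m * g) ** 2 / ((s - m**2) ** 2 + (m * g) ** 2)

def a_mu_hvp_lo(r_ratio, s_min=4 * 0.1396**2, s_max=4.0):
    """a_mu^{HVP,LO} = (alpha^2 / 3 pi^2) * integral ds/s K(s) R(s)."""
    integrand = lambda s: kernel(s) * r_ratio(s) / s
    return ALPHA**2 / (3 * np.pi**2) * quad(integrand, s_min, s_max, limit=200)[0]

print(f"toy a_mu^HVP ~ {a_mu_hvp_lo(r_toy):.1e}")  # ~7e-8 with this toy choice
```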

Several groups have computed the HVP contribution in this way, and recently a consensus value has been produced as part of the worldwide Muon g–2 Theory Initiative. The error stands at about 0.58% and is the dominant part of the theory error. It is worth noting that a significant part of the error arises from a tension between the most precise measurements, by the BaBar and KLOE experiments, around the rho-meson peak. New measurements, including those from experiments at Novosibirsk in Russia and from Japan’s Belle II experiment, may help resolve the inconsistency in the current data and reduce the error by a factor of two or so.

The alternative approach, of calculating the HVP contribution from first principles using lattice QCD, is not yet at the same level of precision, but is getting there. Consistency between the two approaches will be crucial for any claim of new physics.

Lattice QCD

Kenneth Wilson formulated lattice gauge theory in 1974 as a means to rid quantum field theories of their notorious infinities – a process known as regulating the theory – while maintaining exact gauge invariance, but without using perturbation theory. Lattice-QCD calculations involve the numerical evaluation of extremely high-dimensional path integrals. Because of confinement, a perturbative treatment including physical hadronic states is not possible, so the complete integral, regulated properly in a discrete, finite volume, is done numerically by Monte Carlo integration.
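Full lattice QCD is far beyond a code snippet, but the core strategy – discretise a Euclidean path integral, then sample it by Metropolis Monte Carlo – can be shown on the simplest possible system, a quantum harmonic oscillator on a one-dimensional time lattice (a standard pedagogical toy, not lattice QCD itself):

```python
import numpy as np

N, a = 100, 0.5        # lattice sites and spacing (mass = frequency = 1)
rng = np.random.default_rng(42)
x = np.zeros(N)

def delta_action(x, i, new):
    """Change in the discretised Euclidean action from updating site i."""
    ip, im = (i + 1) % N, (i - 1) % N   # periodic boundary conditions
    def local_s(xi):
        return ((x[ip] - xi) ** 2 + (xi - x[im]) ** 2) / (2 * a) + a * xi**2 / 2
    return local_s(new) - local_s(x[i])

samples = []
for sweep in range(10_000):
    for i in range(N):                  # Metropolis update, site by site
        new = x[i] + rng.uniform(-1, 1)
        if rng.random() < np.exp(-delta_action(x, i, new)):
            x[i] = new
    if sweep >= 1_000:                  # discard thermalisation sweeps
        samples.append(np.mean(x**2))

# Continuum ground-state expectation <x^2> = 0.5; the lattice value
# (~0.49 here) approaches it as the spacing a goes to zero.
print(f"<x^2> = {np.mean(samples):.3f}")
```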

Lattice QCD has made significant improvements over the last several years, both in methodology and invested computing time. Recently developed methods (which rely on low-lying eigenmodes of the Dirac operator to speed up calculations) have been especially important for muon–anomaly calculations. By allowing state-of-the-art calculations using physical masses, they remove a significant systematic: the so-called chiral extrapolation for the light quarks. The remaining systematic errors arise from the finite volume and non-zero lattice spacing employed in the simulations. These are handled by doing multiple simulations and extrapolating to the infinite-volume and zero-lattice-spacing limits. 

The HVP contribution can readily be computed using lattice QCD in Euclidean space with space-like four-momenta in the photon loop, thus yielding the real part of the HVP directly. The dispersive result is currently more precise (see “Off the mark” figure), but further improvements will depend on consistent new e+e− scattering datasets.

Hadronic vacuum-polarisation contribution

Rapid progress in the last few years has resulted in first lattice results with sub-percent uncertainty, closing in on the precision of the dispersive approach. Since these lattice calculations are very involved and still maturing, it will be crucial to monitor the emerging picture once several precise results with different systematic approaches are available. It will be particularly important to aim for statistics-dominated errors to make it more straightforward to quantitatively interpret the resulting agreement with the no-new-physics scenario or the dispersive results. In the shorter term, it will also be crucial to cross-check between different lattice and dispersive results using additional observables, for example based on the vector–vector correlators.

With improved lattice calculations in the pipeline from a number of groups, the tension between lattice QCD and phenomenological calculations may well be resolved before the Fermilab and J-PARC experiments announce their final results. Interestingly, there is a new lattice result with sub-percent precision (BMW 2020) that is in agreement both with the no-new-physics point within 1.3σ, and with the dispersive-data-driven result within 2.1σ. Barring a significant re-evaluation of the phenomenological calculation, however, HVP does not appear to be the source of the discrepancy with experiments. 

The next most likely Standard Model process to explain the muon anomaly is hadronic light-by-light scattering. Though it occurs less frequently since it includes an extra virtual photon compared to the HVP contribution, it is much less well known, with comparable uncertainties to HVP.

Hadronic light-by-light scattering

In hadronic light-by-light scattering (HLbL), the magnetic field interacts not with the muon, but with a hadronic “blob”, which is connected to the muon by three virtual photons. (The interaction of the four photons via the hadronic blob gives HLbL its name.) A miscalculation of the HLbL contribution has often been proposed as the source of the apparently anomalous measurement of the muon anomaly by BNL’s E821 collaboration.

Since the so-called Glasgow consensus (the fruit of a 2009 workshop) first established a value more than 10 years ago, significant progress has been made on the analytic computation of the HLbL scattering contribution. In particular, a dispersive analysis of the most important hadronic channels has been carried out, comprising the leading pion-pole, the sub-leading pion-loop and rescattering diagrams, as well as heavier pseudoscalars. These calculations are analogous in spirit to the dispersive HVP calculations, but are more complicated, and the experimental measurements are more difficult because form factors with one or two virtual photons are required.

The project to calculate the HLbL contribution using lattice QCD began more than 10 years ago, and many improvements to the method have been made since then to reduce both statistical and systematic errors. Last year we published, with colleagues Norman Christ, Taku Izubuchi and Masashi Hayakawa, the first ever lattice-QCD calculation of the HLbL contribution with all errors controlled, finding aμHLbL, lattice = (78.7 ± 30.6 (stat) ± 17.7 (sys)) × 10⁻¹¹. The calculation was not easy: it took four years and a billion core-hours on the Mira supercomputer at the Argonne Leadership Computing Facility.

Our lattice HLbL calculations are quite consistent with the analytic and data-driven result, which is approximately a factor of two more precise. Combining the results leads to aμHLbL = (90 ± 17) × 10⁻¹¹, which means the very difficult HLbL contribution cannot explain the Standard Model discrepancy with experiment. To make such a strong conclusion, however, it is necessary to have consistent results from at least two completely different methods of calculating this challenging non-perturbative quantity.

New physics?

If current theory calculations of the muon anomaly hold up, and the new experiments reduce its uncertainty by the hoped-for factor of four, then a new-physics explanation will become impossible to ignore. The idea would be to add particles and interactions that have not yet been observed but may soon be discovered at the LHC or in future experiments. New particles would be expected to contribute to the anomaly through Feynman diagrams similar to the Standard Model topologies (see “Rising to the moment” panel).

Calculations of the anomalous magnetic moment of the muon are not finished

The most commonly considered new-physics explanation is supersymmetry, but the increasingly stringent lower limits placed on the masses of super-partners by the LHC experiments make it ever more difficult to explain the muon anomaly this way. Other theories could do the job too. One popular idea that could also explain persistent anomalies in the b-quark sector is heavy scalar leptoquarks, which mediate a new interaction allowing leptons and quarks to change into each other. Another option involves scenarios whereby the Standard Model Higgs boson is accompanied by a heavier Higgs-like boson.

The calculations of the anomalous magnetic moment of the muon are not finished. As a systematically improvable method, we expect more precise lattice determinations of the hadronic contributions in the near future. Increasingly powerful algorithms and hardware resources will further improve precision on the lattice side, and new experimental measurements and analysis methods will do the same for dispersive studies of the HVP and HLbL contributions.

To confidently discover new physics requires that these two independent approaches to the Standard Model value agree. With the first new results on the experimental value of the muon anomaly in almost two decades showing perfect agreement with the old value, we anxiously await more precise measurements in the near future. Our hope is that the clash of theory and experiment will be the beginning of an exciting new chapter of particle physics, heralding new discoveries at current and future particle colliders. 

Calculating the curiosity windfall

Magnet R&D

Recent decades have seen an emphasis on the market and social value of fundamental science. Increasingly, researchers must demonstrate the benefits of their work beyond the generation of pure scientific knowledge, and the cultural benefits of peaceful and open international collaboration.

This timely collection of short essays by leading scientific managers and policymakers, which emerged from a workshop held during Future Circular Collider (FCC) Week 2019, brings the interconnectedness of fundamental science and economics into focus. Its 18 contributions range from procurement to knowledge transfer, and from global-impact assessments to case studies from CERN, SKA, the ESS and ESA, with a foreword by former CERN Director-General Rolf Heuer. As such, it constitutes an important contribution to the literature and a guide for future projects such as a post-LHC collider.

As the number and size of research infrastructures (RIs) have grown over the years, writes CERN’s head of industry, procurement and knowledge transfer, Thierry Lagrange, the will to push the frontier of knowledge has required significant additional public spending linked to the development and upgrade of high-tech instruments, along with increased maintenance costs. The socioeconomic returns to society are clear, he says. But these benefits are not generated automatically: they require a thriving ecosystem that transfers knowledge and technologies to society, aided by entities such as CERN’s knowledge-transfer group and business-incubation centres.

RIs need to be closely integrated into the European landscape, with plans put in place for international governance structures

Multi-billion public investments in RIs are justified given their crucial and multifaceted role in society, asserts EIROforum liaison officer at the European Commission, Margarida Ribeiro. She argues that new RIs need to be closely integrated into the European landscape, with plans put in place for international governance structures, adequate long-term funding, closer engagement with industry, and methodologies for assessing RI impact. All contributors acknowledge the importance of this latter point. While physicists would no doubt prefer to go back to the pre-Cold War days of doing science for science’s sake, argues ESS director John Womersley, without the ability to articulate the socioeconomic justifications of fundamental science as a driver of prosperity, jobs, innovation, startups and as solutions to challenges such as climate change and the environment, it is only going to become more difficult for projects to get funding.

A future collider is a case in point. Johannes Gutleber of CERN and the FCC study describes several recent studies seeking to quantify the socioeconomic value of the LHC and its proposed successor, the FCC, with training and industrial innovation emerging as the most important generators of impact. The rising interest in the type of RI benefits that emerge and how they can be maximised and redistributed to society, he writes, is giving rise to a new field of interdisciplinary research, bringing together economists, social scientists, historians and philosophers of science, and policymakers.

Nowhere is this better illustrated than in the ongoing programme led by economists at the University of Milan, described in two chapters by Massimo Florio and Andrea Bastianin. A recent social cost–benefit analysis of the HL-LHC, for example, conservatively estimates that every €1 of costs returns €1.2 to society, while a similar study concerning the FCC estimates the benefit/cost ratio to be even higher, at 1.8. Florio argues that CERN and big science more generally are ideal testing grounds for theoretical and empirical economic models, while demonstrating the positive net impact that large colliders have for society. His 2019 book Investing in Science: Social Cost-Benefit Analysis of Research Infrastructures (MIT Press) explores this point in depth (CERN Courier September 2018 p51), and is another must-read in this growing interdisciplinary area. Completing the series of essays on impact evaluation, Philip Amison of the UK’s Science and Technology Facilities Council reviews the findings of a report published last year capturing the benefits of CERN membership.

The final part of the volume focuses on the question “Who benefits from such large public investments in science?”, and addresses the contribution of big science to social justice and inequalities. Carsten Welsch of the University of Liverpool/Cockcroft Institute argues that fundamental science should not be considered as a distant activity, illustrating the point convincingly via the approximately 50,000 particle accelerators currently used in industry, medical treatments and research worldwide.

The grand ideas and open questions in particle physics and cosmology already inspire many young people to enter STEM subjects, while technological spin-offs such as medical treatments, big-data handling, and radio-frequency technology are also often communicated. Less well known are the significant but harder-to-quantify economic benefits of big science. This volume is therefore essential reading, not just for government ministers and policymakers, but for physicists and others working in curiosity-driven research who need to convey the immense benefits of their work beyond pure knowledge.

Fermilab strengthens muon g-2 anomaly

Hotly anticipated results from the first run of the muon g-2 experiment at Fermilab were announced today, increasing the tension between measurements and theoretical calculations. The last time this ultra-precise measurement was performed, in a sequence of results at Brookhaven National Laboratory in the late 1990s and early 2000s, it disagreed with the Standard Model (SM) by 3.7σ. After almost eight years of work rebuilding the Brookhaven experiment at Fermilab and analysing its first data, the muon’s anomalous magnetic moment has been measured to be 116 592 040(54) × 10⁻¹¹. The result is in agreement with the Brookhaven measurement and is 3.3σ greater than the SM prediction: 116 591 810(43) × 10⁻¹¹. Combined with the Brookhaven result, the world-average value for the anomalous magnetic moment of the muon is 116 592 061(41) × 10⁻¹¹, representing a 4.2σ departure from the SM.
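The world average quoted above is a standard inverse-variance combination. In the sketch below, the Fermilab value is taken from this article, while the Brookhaven input, 116 592 089(63) × 10⁻¹¹, is the published final E821 value (not quoted above):

```python
import math

# Inverse-variance weighting of the two measurements (units of 1e-11)
measurements = [(116_592_089, 63),   # BNL E821, final published value
                (116_592_040, 54)]   # Fermilab muon g-2, Run 1

weights = [1 / sigma**2 for _, sigma in measurements]
mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
sigma = math.sqrt(1 / sum(weights))
print(f"world average = {mean:.0f} ± {sigma:.0f}")   # ~116 592 061 ± 41
```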

“Today is an extraordinary day, long awaited not only by us but by the whole international physics community,” says Graziano Venanzoni of the INFN, who is co-spokesperson of the Fermilab muon g-2 collaboration. “A large amount of credit goes to our young researchers who, with their talent, ideas and enthusiasm, have allowed us to achieve this incredible result.”

Today is an extraordinary day, long awaited not only by us but by the whole international physics community

Graziano Venanzoni

The Fermilab result was unblinded during a Zoom meeting on 25 February in the presence of around 200 collaborators from around the world. “We were all very excited to finally know our result and the meeting was very emotional,” says Venanzoni. The analysis took almost three years from data taking to the release of the result and the collaboration decided to unblind only when all the steps of the analysis were completed and there were no outstanding questions. Venanzoni adds that no further analysis was completed after the unblinding and the results are unchanged.

The previous Brookhaven measurement left physicists pondering whether the presence of unknown particles in loops could be affecting the muon’s behaviour. It was clear that further measurements were needed, but it turned out to be much cheaper to move the apparatus to Fermilab than to build a new, more precise experiment at Brookhaven. So in the summer of 2013, the experiment’s 14 m-diameter, 1.45 T superconducting magnet was transported from Long Island to the suburbs of Chicago. The Fermilab team reassembled the magnet and spent a year “shimming” its field, making it three times more uniform than the field it produced at Brookhaven. Along with a new beamline to deliver a purer muon beam, Fermilab’s muon g-2 reincarnation required entirely new instrumentation, detectors and a control room.

When a muon travels through the strong magnetic field of a storage ring, the direction of its magnetic moment precesses at a rate that depends on the field strength and on its g-factor. The Dirac equation predicts that all fermions have a g-factor equal to two, but higher-order loops add an “anomalous” moment, aμ = (g-2)/2, which can be calculated extremely precisely. At Fermilab, muons with an energy of about 3.1 GeV are vertically focused in the storage ring via quadrupoles, and their precession frequency is determined from decays to electrons using 24 electromagnetic calorimeters located along the ring’s inner circumference. The intense polarised muon beam suppresses the pion contamination that challenged the Brookhaven measurement, while new calibration systems and simulations allow better control of systematic uncertainties.
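The basic precession arithmetic is compact enough to sketch. The snippet below is back-of-the-envelope only: it ignores the electric-field and beam-dynamics corrections that the experiment treats in detail, and assumes the 1.45 T field and the aμ value quoted in this article:

```python
import math

e, m_mu, c = 1.602176634e-19, 1.883531627e-28, 2.99792458e8  # SI units
a_mu = 116_592_040e-11   # anomalous moment reported above
b_field = 1.45           # storage-ring field (tesla)

# Anomalous precession: the spin advances relative to the momentum
# at angular frequency omega_a = a e B / m
omega_a = a_mu * e * b_field / m_mu
print(f"f_a ~ {omega_a / (2 * math.pi) / 1e3:.0f} kHz")   # ~229 kHz

# "Magic" momentum, at which the electric-field focusing term cancels:
gamma = math.sqrt(1 + 1 / a_mu)                           # ~29.3
p_magic = gamma * m_mu * c * math.sqrt(1 - 1 / gamma**2)  # kg m/s
print(f"p_magic ~ {p_magic * c / e / 1e9:.2f} GeV/c")     # ~3.09 GeV/c, as above
```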

It is so gratifying to finally be resolving this mystery

Chris Polly

The Fermilab muon g-2 collaboration took its first dataset in 2018, with over eight billion muon decays resulting in an overall uncertainty approximately 15% better than Brookhaven’s. Data analysis on the second and third runs is already under way, while a fourth run is ongoing and a fifth is planned. The collaboration is targeting a final precision of around 0.14 ppm – four times better than the previous measurement.

“After the 20 years that have passed since the Brookhaven experiment ended, it is so gratifying to finally be resolving this mystery,” said Fermilab’s Chris Polly, a co-spokesperson for the current experiment and a graduate student on the Brookhaven experiment. “So far we have analysed less than 6% of the data that the experiment will eventually collect. Although these first results are telling us that there is an intriguing difference with the Standard Model, we will learn much more in the next couple of years.”

Theory baseline
Developments in the theory community are equally vital. The Fermilab muon g-2 collaboration takes as its theory baseline the value for aμ obtained last year by the Muon g-2 Theory Initiative. Uncertainties in the calculation are dominated by hadronic contributions, in particular a term called the hadronic vacuum polarisation (HVP). The Theory Initiative incorporates the HVP value obtained by well-established “dispersive methods”, which combine fundamental properties of quantum field theory with experimental measurements of low-energy hadronic processes. An alternative approach gaining traction is to calculate the HVP contribution using lattice QCD. In a paper published in Nature today, one group reports lattice calculations of HVP which, if included in the theory result, would significantly reduce the discrepancy between the experimental and theoretical values for aμ. The result is in 2σ tension with the value obtained from the dispersive approach, and is currently dominated by systematic uncertainties stemming from approximations used in the lattice calculations, say Muon g-2 Theory Initiative members.

“This being the first lattice result at sub-percent precision, it is premature to draw firm conclusions from this comparison,” reads a statement from the Muon g-2 Theory Initiative steering committee. “Indeed, given the complexity of the computations, independent results from different lattice groups with commensurate uncertainties are needed to test and check the lattice calculations against each other. Being entirely based on Standard Model theory, once the lattice results are well tested and precise enough, they will play an important role in understanding how new physics enters into the discrepancy.”

High-power linac shows promise for accelerator-driven reactors

Physicists at the Institute of Modern Physics (IMP) in Lanzhou, China, have achieved a significant milestone towards an accelerator-driven sub-critical system – a proposed technology for sustainable fission energy. In February, the institute’s prototype front-end linac for the China Accelerator Driven Subcritical System (C-ADS) reached its design goal with the successful commissioning of a 10 mA, 205 kW continuous-wave (CW) proton beam at an energy of 20 MeV. The result breaks the world record for a high-power CW superconducting linac, says Yuan He, director of IMP’s Linac Center: “This result consists of ten years of hard work by IMP scientists, and brings the realisation of an actual ADS facility one step closer to the world.”

The ADS concept, which was proposed by Carlo Rubbia at CERN in the 1990s, offers a potential technology for nuclear-waste transmutation and the development of safe, sustainable nuclear power. The idea is to sustain fission reactions in a subcritical reactor core with neutrons generated by directing a high-energy proton or electron beam, which can be switched on or off at will, at a heavy-metal spallation target. Such a system could run on non-fissile thorium fuel, which is more abundant than uranium and produces less waste. The challenge is to design an accelerator with the required beam power and long-term reliability, for which a superconducting proton linac is a promising candidate.

CAFe is the world’s first CW superconducting proton linac stepping into the hundred-kilowatt level

Yuan He

In 2011, a team at IMP launched a programme to build a superconducting proton linac (CAFe) with an unprecedented 10 mA beam current. It was upgraded in 2018 by replacing the radio-frequency quadrupole and a cryomodule, but the team faced difficulties in reaching the design goals. Challenges including beam-loss control and detection, heavy beam loading and rapid fault recovery were finally overcome in early 2021, enabling the 38 m-long facility to achieve its design performance at the start of the Chinese new year. CAFe’s beam availability during long-term, high-power operation – 12 hours at 174 kW/10 mA and 108 hours at 126 kW/7.3 mA – was measured to be 93–96%, indicating high reliability.

The full C-ADS project is expected to be completed this decade. A similar project called MYRRHA is under way at SCK CEN in Belgium, the front-end linac for which recently entered construction. Other ADS projects are under study in Japan, India and other countries.

“CAFe is the world’s first CW superconducting proton linac stepping into the hundred-kilowatt level,” says He. “The successful operation of the 10 mA beam meets the beam-intensity requirement for an experimental ADS demo facility – a breakthrough for ADS linac development and an outstanding achievement in the accelerator field.”

ANAIS challenges DAMA dark-matter claim

ANAIS shows no modulation

Despite the strong indirect evidence for the existence of dark matter, a plethora of direct searches have not resulted in a positive detection. The exception is the famous results from the DAMA/NaI experiment at the Gran Sasso underground laboratory in Italy, first reported in the late 1990s, which show a modulating signal compatible with the Earth moving through a region containing weakly interacting massive particles (WIMPs). These results were backed up more recently with measurements from the follow-up DAMA/LIBRA detector; when the data were combined in 2018, the reported evidence for a dark-matter signal was as high as 13σ. Now, the Annual modulation with NaI Scintillators (ANAIS) collaboration, which aims to directly reproduce the DAMA results using the same detector concept, has published the results from its first three years of operations. The results, which were presented today at the Rencontres de Moriond, show a clear contradiction with DAMA, indicating that we are still no closer to finding dark matter.

The DAMA results are based on searches for an annual modulation in the interaction rate of WIMPs in a detector comprising NaI crystals. First proposed theoretically in 1986 by Andrzej Drukier, Katherine Freese and David Spergel, this modulation results from the changing velocity of the Earth with respect to the dark-matter halo of the galaxy. On 2 June, the velocities of the Earth and the Sun are aligned with respect to the galaxy, whereas half a year later they are oppositely aligned, resulting in a lower WIMP interaction rate in a detector on Earth. Although this method has advantages compared to more direct detection methods, it requires that other potential sources of such a seasonal modulation be ruled out. Despite the significant modulation with the correct phase observed by DAMA, its results were not immediately accepted as a clear signal of dark matter due to the remaining possibility of instrumental effects, seasonal background modulation or artifacts from the analysis.
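The expected signature is a small cosine on top of a constant rate, peaking around 2 June. The amplitudes below are placeholders chosen only to illustrate the shape (DAMA's observed modulation is at the percent level of the rate):

```python
import numpy as np

def expected_rate(t_days, s0=1.0, s_mod=0.02, period=365.25, t_peak=153):
    """Annually modulated WIMP rate: constant term plus a cosine that
    peaks at day ~153 (2 June) and dips half a year later."""
    return s0 + s_mod * np.cos(2 * np.pi * (t_days - t_peak) / period)

t = np.arange(3 * 365)          # three years of daily bins
rate = expected_rate(t)
print(rate.max(), rate.min())   # extremes near 2 June and 2 December
```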

Over the years the significance of the DAMA results has continued to increase, while other dark-matter searches, in particular with the XENON1T and LUX experiments, found no evidence of WIMPs capable of explaining the DAMA results. The fact that only the final analysis products from DAMA have been made public has also hampered attempts to prove or disprove alternative origins of the modulation. To overcome this, the ANAIS collaboration set out to reproduce the data with an independent detector intentionally similar to DAMA, consisting of NaI(Tl) scintillators read out by photomultipliers, placed in the Canfranc Underground Laboratory deep beneath the Pyrenees in northern Spain. In this way ANAIS can rule out any instrument-induced effects while producing data in a controlled way and studying them in detail with different analysis procedures.

The ANAIS results agree with the first results published by the COSINE-100 collaboration

ANAIS and DAMA signals

The first three years of ANAIS data have now been unblinded, and the results were posted on arXiv on 1 March. None of the analysis methods used show any signs of a modulation, with a statistical analysis ruling out the DAMA results at 99% confidence. The results therefore narrow down the possible causes of the modulation observed by DAMA to either differences in the detector compared to ANAIS, or in the analysis method. One specific issue raised by the ANAIS collaboration concerns the background-subtraction method. In the DAMA analysis the mean background rate for each year is subtracted from the raw data for that full year. If the background is not constant during the year, however, this produces an artificial saw-tooth residual which, with limited statistics, can be fitted with a sinusoid. This effect was already pointed out in a 2020 publication by a group from INFN, which showed how a slowly increasing background is capable of producing the exact modulation observed by DAMA. The ANAIS collaboration describes its background in detail, shows that it is indeed not constant, and provides suggestions for a more robust handling of the background.
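The artifact is easy to reproduce. In this minimal sketch, a slowly varying background with no modulation at all acquires a repeating one-year saw-tooth pattern purely from yearly mean subtraction; the decay constant and rates are arbitrary illustration values:

```python
import numpy as np

t = np.arange(6 * 365)                         # six years, daily bins
background = 2.0 * np.exp(-t / (8 * 365))      # slowly decaying, no modulation

residual = background.copy()
for year in range(6):
    window = slice(year * 365, (year + 1) * 365)
    residual[window] -= background[window].mean()   # yearly mean subtraction

# Within each year the residual drifts from positive to negative, then jumps
# back up at the year boundary: a one-year periodicity that a cosine fit
# can mistake for an annual modulation.
print(residual[:365].max(), residual[:365].min())
```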

The ANAIS results also agree with the first results published by the COSINE-100 collaboration in 2019 which, again using a NaI-based detector, found no evidence of a yearly modulation. Thanks to the continued experimental efforts of these two groups, and with the ANAIS collaboration planning to make its data public to allow independent analyses, the more than 20-year-old DAMA anomaly looks likely to be settled in the next few years.

New data strengthens RK flavour anomaly

RK 2021

The principle that the charged leptons have identical electroweak interaction strengths is a distinctive feature of the Standard Model (SM). However, this lepton-flavour universality (LFU) is an accidental symmetry in the SM, which may not hold in theories beyond the SM. The LHCb collaboration has used a number of rare decays mediated by flavour-changing neutral currents, where the SM contribution is suppressed, to test for deviations from LFU. During the past few years, these and other measurements, together with results from B-factories, hint at possible departures from the SM.

In a new measurement of the LFU-sensitive ratio RK with increased precision and statistical power, reported today at the Rencontres de Moriond, LHCb has strengthened the significance of the flavour anomalies. RK probes the ratio of B-meson decays to muons and electrons: RK = BR(B+→K+μ+μ−)/BR(B+→K+e+e−). Testing LFU in such b→sℓ+ℓ− transitions has the advantage that not only are SM contributions suppressed, but the theoretical predictions are very precise. Therefore, any significant deviation of RK from unity would imply physics beyond the SM.

The experimental challenge lies in the fact that, while electrons and muons interact via the electroweak force in the same way, the small electron mass means that electrons interact with the detector material much more than muons do. For example, electrons radiate a significant number of bremsstrahlung photons when traversing the LHCb detector, which degrades reconstruction efficiency and signal resolution compared to muons. The key to controlling this effect is to use the decays J/ψ→e+e− and J/ψ→μ+μ−, which are known to have the same decay probability and can be used to calibrate and test the electron reconstruction efficiencies. High-precision tests with the J/ψ are compatible with LFU, which provides a powerful cross-check on the experimental analysis.
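This calibration strategy is often expressed as a double ratio, with the rare-mode yields normalised to the resonant J/ψ modes so that electron/muon efficiency differences largely cancel. The sketch below illustrates the idea with entirely hypothetical yields; it is not the LHCb analysis itself:

```python
def r_k_double_ratio(n_kmumu, n_kee, n_jpsi_mumu, n_jpsi_ee):
    """RK from raw yields via the double ratio: each rare mode is divided
    by the corresponding B+ -> J/psi(-> l+l-) K+ yield, so lepton-dependent
    reconstruction efficiencies largely cancel between numerator and
    denominator (the J/psi modes are known to respect LFU)."""
    return (n_kmumu / n_jpsi_mumu) / (n_kee / n_jpsi_ee)

# Hypothetical yields, for illustration only
print(r_k_double_ratio(n_kmumu=1_000, n_kee=620,
                       n_jpsi_mumu=500_000, n_jpsi_ee=360_000))
```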

Previous LHCb measurements of RK and RK* (which probes B0→K*0ℓ+ℓ− decays), made in 2019 and 2017 respectively, provided hints of deviations from unity. The latest analysis of RK, which uses the full dataset collected by the experiment in Run 1 and Run 2 of the LHC, represents a substantial improvement in precision on the previous measurement (see figure) thanks to the doubling of the dataset. The RK ratio is measured to be three standard deviations below the SM prediction, with a value of RK = 0.846 +0.042/−0.039 (stat.) +0.013/−0.012 (syst.). This is the first time that a departure from LFU above this level has been seen in any individual B-meson decay.

Although it is too early to conclude anything definitive at this stage, this deviation is consistent with a pattern of anomalies that have manifested themselves in b→sℓ+ℓ− and similar processes over the course of the past decade. In particular, the strengthening RK anomaly may be considered alongside hints from other measurements of these transitions, including angular asymmetries and decay rates.

The LHCb experiment is well placed to clarify the potential existence of new-physics effects in these decays. Updates on a suite of b→sℓ+ℓ−-related measurements with the full Run 1 and Run 2 dataset are under way. A major upgrade to the detector during the ongoing second long shutdown of the LHC will offer a step change in precision in Run 3 and beyond.
