
LHCb tests lepton universality in new channels

Measurements of the ratios of muon to electron decays

At a seminar at CERN today, the LHCb collaboration presented new tests of lepton universality in rare B-meson decays. While limited in statistical sensitivity, the results fit into an intriguing pattern of recent measurements in the flavour sector, says the collaboration.

Since 2013, several measurements have hinted at deviations from lepton-flavour universality (LFU), a tenet of the Standard Model (SM) which treats charged leptons, ℓ, as identical apart from their masses. The measurements concern decay processes involving the transition between a bottom and a strange quark, b→sℓ⁺ℓ⁻, which are strongly suppressed by the SM because they involve quantum corrections at the one-loop level (leading to branching fractions of one part in 10⁶ or less). A powerful way to probe LFU is therefore to measure the ratio of B-meson decays to muons and electrons, for which the SM prediction, close to unity, is theoretically very clean.
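Schematically, for a hadron H in the final state and a given range of dilepton invariant mass squared, the observable is the ratio of branching fractions

R_H = \frac{\mathcal{B}(B \to H\,\mu^+\mu^-)}{\mathcal{B}(B \to H\,e^+e^-)} \simeq 1 \quad \text{(SM)},

in which the hadronic uncertainties largely cancel between numerator and denominator, leaving a prediction very close to unity.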

In March this year, an LHCb measurement of RK = BR(B⁺→K⁺μ⁺μ⁻)/BR(B⁺→K⁺e⁺e⁻) based on the full LHC Run 1 and 2 dataset showed a 3.1σ difference from the SM prediction. This followed departures at the level of 2.2–2.5σ in the ratio RK*0 (which probes B⁰→K*⁰ℓ⁺ℓ⁻ decays) reported by LHCb in 2017. The collaboration has also seen slight deficits in the ratio RpK, and departures from theory in measurements of the angular distribution of final-state particles and of branching fractions in neutral B-meson decays. None of the results is individually significant enough to constitute evidence of new physics. But taken together, say theorists, they point to a coherent pattern.

We are seeing a similar deficit of rare muon decays to rare electron decays that we have seen in other LFU tests

Harry Cliff

The latest LHCb analysis measured the ratio of muons to electrons in the isospin-partner B decays B⁰→KS⁰ℓ⁺ℓ⁻ and B⁺→K*⁺ℓ⁺ℓ⁻. As well as being a first at the LHC, it’s the first single-experiment observation of these decays, and the most precise measurement yet of their branching ratios. Because these decays are difficult to reconstruct, owing to the presence of a long-lived KS⁰ in the final state, the sensitivity of the results is lower than for previous “RK” analyses. LHCb found R(KS⁰) = 0.66 +0.20/−0.15 (stat) +0.02/−0.04 (syst) and R(K*⁺) = 0.70 +0.18/−0.13 (stat) +0.03/−0.04 (syst), which are consistent with the SM at the level of 1.5 and 1.4σ, respectively.

“What is interesting is that we are seeing a similar deficit of rare muon decays to rare electron decays that we have seen in other LFU tests,” said Harry Cliff of the University of Cambridge, who presented the result on behalf of LHCb (in parallel with a presentation at Rencontres de Blois by Cambridge PhD student John Smeaton). “With many other LFU tests in progress using Run 1 and 2 data, there will be more to come on this puzzle soon. Then we have Run 3, where we expect to really zoom in on the measurements and obtain a detailed understanding.”

The experimental and theoretical status of the flavour anomalies in b→sℓ⁺ℓ⁻ and semi-leptonic B-decays will be the focus of the Flavour Anomaly Workshop at CERN on Wednesday 20 October, at which ATLAS and CMS activities will also be discussed, along with perspectives from theorists.

Gauge–boson polarisation observed in WZ production

Figure 1

At the collision energies of the LHC, diboson processes have relatively high production cross sections and often produce relatively clean final states with two or more charged leptons. Consequently, multilepton final states resulting from diboson processes are powerful signatures to study the properties of the electroweak sector of the Standard Model. In particular, WZ production is sensitive to the strength of the triple gauge coupling that characterises the WWZ vertex, which derives from the non-Abelian nature of the electroweak sector. Additionally, as the Higgs mechanism is responsible for the appearance of longitudinally polarised gauge bosons, studying W and Z boson polarisation indirectly probes the validity of the Higgs mechanism.

The results include the first observation at any experiment of longitudinally polarised W bosons in diboson production

A recent result from the CMS collaboration uses the full power of the data taken during Run 2 of the LHC to learn as much as possible from WZ production in the decay channels involving three charged leptons (electrons or muons). The results include the first observation at any experiment of longitudinally polarised W bosons in diboson production.

Reconstruction and event selection were optimised to reduce contributions from processes with “non-isolated” electrons and muons produced in hadron decays – traditionally one of the primary sources of experimental uncertainty in such measurements. The total production cross section for WZ production was measured with a simultaneous fit to the signal-enriched region and three different control regions. This elaborate fitting scheme paid off, as the final result has a relative uncertainty of 4%, down from the 6% obtained in past iterations of the measurement. The results are all consistent with state-of-the-art theoretical predictions (figure 1, left).

A highlight of the analysis is the study of the polarisation of both the W and the Z bosons in the helicity frame, using missing transverse energy as a proxy for the transverse momentum of the neutrino in the W decay. This choice, coupled with the precisely measured four-momenta of the three leptons and the requirement that the W boson be on-shell, allows both the W and Z momenta to be fully reconstructed. The angle between the W (Z) boson and the (negatively) charged lepton originating from its decay is then computed. The resulting distributions are fitted to extract the polarisation fractions fR, fL and f0, which correspond to the proportion of bosons in the right-handed, left-handed and longitudinally polarised states in WZ production.
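For orientation, these fractions enter through the standard helicity-frame parametrisation of the decay-angle distribution, written here for the W boson (the Z-boson case takes an analogous form with different lepton couplings, and sign conventions depend on the boson charge):

\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta} = \frac{3}{8}\,f_{\mathrm{L}}\,(1 \mp \cos\theta)^2 + \frac{3}{8}\,f_{\mathrm{R}}\,(1 \pm \cos\theta)^2 + \frac{3}{4}\,f_0\,\sin^2\theta,

with f_L + f_R + f_0 = 1, so a fit to the measured cosθ distribution determines the three fractions directly.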

The measured polarisation fractions are consistent within 1σ with the Standard Model predictions (figure 1, right), in accordance with our knowledge of the electroweak spontaneous symmetry breaking mechanism. The significance for the presence of longitudinally polarised vector bosons is measured to be 5.6σ for the W boson and well beyond 5σ for Z-boson production. These new studies pave the way for future measurements of doubly polarised diboson cross sections, including the challenging doubly longitudinal polarisation mode in WW, WZ or ZZ production.

BICEP crunches primordial gravitational waves 

The BICEP/Keck collaboration has published the strongest constraints to date on primordial gravitational waves, ruling out parameter space for models of inflation in the early universe (Phys. Rev. Lett. 127 151301, 2021). A conjectured rapid expansion of the universe during the first fraction of a second of its existence, inflation was first proposed in the early 1980s to explain the surprising uniformity of the universe over scales which should not otherwise have been connected, and may have left an imprint in the polarisation of the cosmic-microwave background (CMB). Despite a high-profile false detection of gravitational-wave-induced “B-modes” by BICEP in 2014, which was soon explained as a mis-modelling of the galactic-dust foreground, the search for primordial gravitational waves remains one of the most promising avenues to study particle physics at extremely high energies, as inflation is thought to require a particle-physics explanation such as the scalar “inflaton” field proposed by Alan Guth.

Certain ‘standard’ types of inflation are now clearly disfavoured

Kai Schmitz

In its latest publication, the BICEP/Keck collaboration has managed to significantly improve the upper bound on the strength of gravitational waves produced during the epoch of inflation. “This is important for theorists because it further constrains the allowed range of viable models of inflation, and certain ‘standard’ types of models are now clearly disfavoured,” explains CERN theorist Kai Schmitz. “It’s also a great experimental achievement because it demonstrates that the sources of systematic uncertainties such as dust emission in our Milky Way are under good control. That’s a good sign for future observations.”

The BICEP/Keck collaboration searches for the imprint of gravitational waves in the polarisation pattern of the CMB, emitted 380,000 years after the Big Bang. Telescopes at the South Pole receive incoming CMB photons and focus them through plastic lenses onto detectors in the focal plane, which are cooled to 300 mK, explains principal investigator Clem Pryke of the University of Minnesota. As the telescopes scan the sky, they record the tiny changes in temperature caused by the intensity of the incoming microwaves. The detectors are arranged in pairs, with each half sensitive to one of two orthogonal linear polarisation components. The telescopes take their best data during the six-month-long Antarctic night, during which intrepid “winter-overs” maintain the detectors and upload data via satellite to the US for further analysis.

“The big change since 2014 was to make measurements in multiple frequency bands to allow the removal of the galactic foreground,” says Pryke. “Back then we had data only at 150 GHz and were relying on models and projections of the galactic foreground – models which turned out to be optimistic as far as the dust is concerned. Now we have super-deep maps at 95, 150 and 220 GHz allowing us to accurately remove the dust component.”
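The principle behind that multi-frequency cleaning can be sketched in a few lines of code. In this toy model (not the collaboration’s actual pipeline), each sky pixel is the sum of a frequency-independent CMB term and a dust term with an assumed frequency scaling, and a per-pixel linear fit separates the two; the scaling factors and noise levels below are purely illustrative.

import numpy as np

# Toy model: the map at frequency nu is m_nu = cmb + s_nu * dust + noise,
# where s_nu is an assumed dust-scaling factor relative to 150 GHz.
# All numbers are illustrative, not BICEP/Keck values.
rng = np.random.default_rng(0)
npix = 1000
cmb = rng.normal(0.0, 1.0, npix)            # CMB signal (arbitrary units)
dust = rng.normal(0.0, 0.5, npix)           # galactic-dust template
scalings = {95: 0.4, 150: 1.0, 220: 3.0}    # illustrative dust scaling per band
maps = {nu: cmb + s * dust + rng.normal(0.0, 0.05, npix)
        for nu, s in scalings.items()}

# Per-pixel least-squares fit for the two components (CMB amplitude, dust amplitude).
A = np.array([[1.0, s] for s in scalings.values()])   # design matrix, shape (3, 2)
y = np.vstack([maps[nu] for nu in scalings])           # stacked maps, shape (3, npix)
(cmb_est, dust_est), *_ = np.linalg.lstsq(A, y, rcond=None)
print("rms of recovered-CMB residual:", np.std(cmb_est - cmb))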

The current analysis uses data recorded by BICEP2, the Keck Array and BICEP3 up to 2018. Since then, the collaboration has installed a new, more capable telescope platform called the BICEP Array, designed to increase sensitivity to primordial gravitational waves by a factor of three, in collaboration with a large-aperture telescope at the South Pole called SPT3G. With 21 telescopes at the South Pole and in the Chilean Atacama desert, the proposed CMB Stage-4 project plans to improve sensitivity by a further factor of six in the 2030s.

MicroBooNE homes in on the sterile neutrino

MicroBooNE

Excitement is building in the search for sterile neutrinos – long-predicted particles which would constitute physics beyond the Standard Model. Although impervious to the electromagnetic, weak and strong interactions, such a fourth “right-handed” neutrino flavour could reveal itself by altering the rate of standard-neutrino oscillations – tantalising hints of which were reported by Fermilab’s MiniBooNE experiment in 2007. In a preprint published last week, sibling experiment MicroBooNE strongly disfavours a mundane explanation for such hints, with further scrutiny by the collaboration expected to be announced later this month.

“If the MiniBooNE effect is indeed a sterile neutrino, this of course would be a major discovery which would revolutionise particle physics, opening up a whole new sector to explore,” says MicroBooNE co-spokesperson Justin Evans of the University of Manchester.

The story of the sterile neutrino began in the 1990s, when the LSND experiment at Los Alamos reported seeing 88±23 (3.8σ) more electron antineutrinos than expected in a beam of accelerator-generated muon antineutrinos. This apparent short-baseline oscillation from muon to electron antineutrinos was incompatible with the oscillation rates established by Super-Kamiokande in 1998 and SNO in 2002, and would have to occur via an unknown intermediate neutrino flavour with a mass of about one electronvolt. This hypothesised neutrino was dubbed sterile, as it would have to be insensitive to all interactions but gravity for it to have remained undiscovered this long.
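For a rough sense of why this points to a mass scale of about 1 eV, the standard two-flavour formula for the appearance probability is

P(\bar\nu_\mu \to \bar\nu_e) = \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),

so an oscillation signal over LSND’s baseline of roughly 30 m with antineutrino energies of a few tens of MeV requires a mass-squared splitting of order 1 eV², far larger than the solar and atmospheric splittings.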

The photon hypothesis

The plot thickened in 2007 when the MiniBooNE experiment at Fermilab tried to reproduce the LSND anomaly. The team also saw an excess of electron-like signals, though not quite at the energy corresponding to the LSND effect. The significance of the MiniBooNE anomaly grew to 4.5σ by the time the experiment finished running in November 2018. But a mundane possible explanation poured cold water on hopes for new physics: as a mineral-oil Cherenkov detector, MiniBooNE could not differentiate electrons from photons, and one particularly tricky-to-model background process might be contributing more photons than expected.

Many of us suspected that there could be something wrong with predictions for this background

Joachim Kopp

“High-energy single photons can be produced when a neutrino scatters on a nucleon via a neutral-current interaction and excites the nucleon to a Δ(1232) resonance,” explains CERN theorist Joachim Kopp. “Most of the time, the resonance decays to a pion and a nucleon, but there is a rare decay mode to a nucleon and a photon. The rate for this mode is very hard to predict, and many of us suspected that there could be something wrong with predictions for this background.”

Enter MicroBooNE, a liquid-argon time-projection-chamber sibling experiment to MiniBooNE that is capable of studying neutrino interactions in photographic detail and of differentiating the two signals. Having detected its first neutrino interactions in 2015, the MicroBooNE team has now set a limit on the neutral-current Δ→Nγ process that is more than a factor of 50 better than existing constraints, explains Evans. “With this MicroBooNE result, we reject a Δ→Nγ model of the low-energy excess at 94.8% confidence, a strong indication that we must look elsewhere for the source of the excess.”

The electron hypothesis

Now that MicroBooNE has strongly disfavoured a leading-photon model for the MiniBooNE anomaly, attention shifts to the electron hypothesis – which would hint at the existence of a sterile neutrino, or something more exotic, if proven. And we don’t have long to wait. The MicroBooNE collaboration plans to release its search for an electron-like low-energy excess on 27 October, with results from three independent analyses looking at a range of inclusive and exclusive channels.

Beyond that, there is more to come, says Evans. “Our current round of results use only the first half of the total MicroBooNE data-set, and this is a programme that is only just beginning, with ICARUS and SBND within Fermilab’s short-baseline programme now coming online to turn this into a multi-baseline exploration of the richness of neutrino physics with unparalleled detail.”

The global picture is complex. In 2019, for example, the MINOS+ experiment failed to confirm the MiniBooNE signal (CERN Courier March/April 2019 p7). Were the sterile neutrino to exist, it should also have significant cosmological consequences which remain unobserved. But the anomalies are accumulating, says Kopp.

“LSND and MiniBooNE are quite consistent, and the short-baseline reactor experiments require parameters in the same region of parameter space, though these results are very much in flux and it’s not clear which ones are trustworthy, so it’s hard to make precise statements. The good news is that there’s realistic hope of resolving these puzzles over the next few years. ”

Breaking records at EPS-HEP

EPS-HEP 2021 conference poster

In this year’s unusual Olympic summer, high-energy physicists pushed back the frontiers of knowledge and broke many records. The first one is surely the number of registrants to the EPS-HEP conference, hosted online from 26 to 30 July by the University of Hamburg and DESY: nearly 2000 participants scrutinised more than 600 talks and 280 posters. After 18 months of the COVID pandemic, the community showed a strong desire to meet and discuss physics with international colleagues. 

200 trillion b-quarks, 40 billion electroweak bosons, 300 million top quarks and 10 million Higgs bosons

The conference offered the opportunity to hear about analyses using the full LHC Run-2 data set, which is the richest hadron-collision data sample ever recorded. The results are breathtaking. As my CERN colleague Michelangelo Mangano explained recently to summer students, “The LHC works and is more powerful than expected, the experiments work and are more precise than expected, and the Standard Model works beautifully and is more reliable than expected.” About 3000 papers have been published by the LHC collaborations in the past decade. They have established the LHC as a truly multi-messenger endeavour, not so much because of the multitude of elementary particles produced – 200 trillion b-quarks, 40 billion electroweak bosons, 300 million top quarks and 10 million Higgs bosons – but because of the diversity of scientifically independent experiments that historically would have required different detectors and facilities, built and operated by different communities. “Data first” should always remain the leitmotif of the natural sciences. 

Paula Alvarez Cartelle (Cambridge) reminded us that the LHC has revealed new states of matter, with LHCb confirming that four or even five quarks can assemble themselves into new long-lived bound states, stabilised by the presence of two charm quarks. For theorists, these new quark-molecules provide valuable input data to tune their lattice simulations and to refine their understanding of the non-perturbative dynamics of strong interactions.

Theoretical tours de force

While Run 1 was a time for inclusive measurements, a multitude of differential measurements were performed during Run 2. Paolo Azzurri (INFN Pisa) reviewed the transverse momentum distribution of the jets produced in association with electroweak gauge bosons. These offer a way to test quantum chromodynamics and electroweak predictions at the highest achievable precision through higher-order computations, resummation and matching to parton showers. The work is fuelled by remarkable theoretical tours de force reported by Jonas Lindert (Sussex) and Lorenzo Tancredi (Oxford), which build on advanced techniques, including inspiring new mathematical developments in algebraic geometry and finite-field arithmetic. We experienced a historic moment: the LHC has definitively become a precision machine, achieving measurements that reach and even surpass LEP’s precision. This new situation has also shifted the emphasis towards precision measurements, model-independent interpretations and Standard Model (SM) compatibility checks, and away from model-dependent searches for new physics. Effective-field-theory analyses are therefore gaining popularity, explained Veronica Sanz (Valencia and Sussex).

We know for certain that the SM is not the ultimate theory of nature. How and when the first cracks will be revealed is the big question that motivates future collider design studies. The enduring and compelling “B anomalies” reported by LHCb could well be the revolutionary surprise that challenges our current understanding of the structure of matter. The ratios of the decay widths of B mesons, either through charged or neutral currents, b→cℓν and b→sℓ⁺ℓ⁻, could finally reveal that the electron, muon and tau lepton differ by more than just their masses.

The statistical significance of the lepton flavour anomalies is growing, reported Franz Muheim (Edinburgh and CERN), creating “cautious” excitement and stimulating the creativity of theorists like Ana Teixeira (Clermont-Ferrand), who builds new physics models with leptoquarks and heavy vectors with different couplings to the three families of leptons, to accommodate the apparent lepton-flavour-universality violations. Belle II should soon bring additional input to the debate, said Carsten Niebuhr (DESY).

Long-awaited results

The other excitement of the year came from the long-awaited results from the muon g-2 experiment at Fermilab, presented by Alex Keshavarzi (Manchester). The spin precession frequency of a sample of 10 billion muons was measured with a precision of a few hundred parts per billion, confirming the deviation from the SM prediction observed nearly 20 years ago by the E821 experiment at Brookhaven. With the current statistics, the deviation now amounts to 4.2σ. With a factor-of-20 increase in the dataset foreseen in the next run, the measurement will soon become systematics limited. Gilberto Colangelo (Bern) also discussed new and improved lattice computations of the hadronic vacuum polarisation, significantly reducing the discrepancy between the theoretical prediction and the experimental measurement. The jury is still out – and the final word might come from the g-2/EDM experiment at J-PARC.
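For reference, the measured quantity is the muon’s anomalous magnetic moment a_μ, extracted from the anomalous spin-precession frequency in the storage ring’s magnetic field,

a_\mu \equiv \frac{g_\mu - 2}{2}, \qquad \omega_a \simeq a_\mu\,\frac{eB}{m_\mu},

where the second relation neglects small beam-dynamics corrections; improving the precision on ω_a and on the field B therefore translates directly into a sharper value of a_μ.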

Accelerator-based experiments might not be the place to prove the SM wrong. Astrophysical and cosmological observations have already taught us that SM matter only constitutes around 5% of the stuff that the universe is made of. The traditional idea that the gap in the energy budget of the universe is filled by new TeV-scale particles that stabilise the electroweak scale under radiative corrections is fading away. And a huge range of possible dark-matter scales opens up a rich and reinvigorated experimental programme that can profit from original techniques exploiting electron and nuclear recoils caused by the scattering of dark-matter particles. A front-runner in the new dark-matter landscape is the QCD axion originally introduced to explain why strong interactions do not distinguish matter from antimatter. Babette Döbrich (CERN) discussed the challenges inherent in capturing an axion, and described the many new experiments around the globe designed to overcome them.

Progress could also come directly from theory

Progress could also come directly from theory. Juan Maldacena (IAS Princeton) recalled the remarkable breakthroughs on the black-hole information problem. The Higgs discovery in 2012 established the non-trivial vacuum structure of space–time. We are now on our way to understanding the quantum mechanics of this space–time.

Like at the Olympics, where breaking records requires a lot of work and effort by the athletes, their teams and society, the quest to understand nature relies on the enthusiasm and the determination of physicists and their funding agencies. What we have learnt so far has allowed us to formulate precise and profound questions. We now need to create opportunities to answer them and to move ahead.

One should not underestimate how quickly the landscape of physics can change, whether through confirmation of the B anomalies or the discovery of a dark-matter particle. Let’s see what will be awaiting us at the next EPS-HEP conference in 2023 in Hamburg – in person this time!

Bs decays remain anomalous

Figure 1

The LHCb experiment recently presented new results on the b → sμμ decay of a Bs meson to a φ meson and a dimuon pair, reinforcing an anomaly last reported in 2015 with improved statistics and theory calculations. Such decays of b hadrons via b → s quark transitions are strongly suppressed in the Standard Model (SM) and therefore constitute sensitive probes for hypothetical new particles. In recent years, several measurements of rare semileptonic b → sℓℓ decays have shown tensions with SM predictions. Anomalies have been spotted in measurements of branching fractions, angular analyses and tests of lepton flavour universality (LFU), leading to cautious excitement that new physics might be at play.

Calculating the Standard Model prediction is more challenging than for lepton-flavour universality

At the SM@LHC conference in April, LHCb presented the most precise determination to date of the branching fraction for this decay, using data collected during LHC Run 1 and Run 2 (figure 1). The branching fraction is measured as a function of the dimuon invariant mass squared (q²) and found to lie below the SM prediction at the level of 3.6 standard deviations in the low-q² region. This deficit of muons is consistent with the pattern seen in LFU tests of b → sℓℓ transitions. However, calculating the SM prediction for the Bs→ φμμ branching fraction is more challenging than for LFU tests, as it involves the calculation of non-perturbative hadronic effects.
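Explicitly, the quantity compared with theory is the differential branching fraction in bins of q²,

\frac{\mathrm{d}\mathcal{B}(B_s^0 \to \phi\,\mu^+\mu^-)}{\mathrm{d}q^2}, \qquad q^2 = m^2(\mu^+\mu^-),

with the low-q² region lying below the charmonium resonances.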

Calculations based on light-cone sum rules are most precise at low q², while lattice-QCD calculations do better at high q². A combination is expected to give the best precision over the full q² range. If lattice-QCD calculations are not used in the comparison, increased theory errors reduce the tension to 1.8 standard deviations in the low-q² region. The previous LHCb measurement from 2015, based exclusively on Run-1 data (grey data points), was approximately three standard deviations below the best theoretical predictions available at the time. Since then, theoretical calculations have generally become more precise with regard to form factors, but more conservatively evaluated with regard to non-local hadronic effects.

Figure 2

Angular information

The angular distribution of the Bs→ φμμ decay products offers complementary information. At the international FPCP conference in June, LHCb presented a measurement of the angular distribution of these decays in different q² regions using data collected during LHC Run 1 and Run 2. Figure 2 shows the longitudinal polarisation fraction FL – one of several variables sensitive to anomalous b → sμμ couplings. The results are consistent with SM predictions at the level of two standard deviations, but may also hint at the same pattern of unexpected behaviour seen in angular analyses of other b → sμμ decays and in branching-fraction measurements.

For both analyses, LHC Run 3 will be crucial to better understanding the anomalous behaviour seen so far in Bs→ φμμ decays.

Partnership yields big wins for the EIC

The EIC in outline

The international nuclear-physics community will be front-and-centre as a unique research facility called the Electron–Ion Collider (EIC) moves from concept to reality through the 2020s – the latest progression in the line of large-scale accelerator programmes designed to probe the fundamental forces and particles that underpin the structure of matter. 

Decades of research in particle and nuclear physics have shown that protons and neutrons, once thought to be elementary, have a rich, dynamically complex internal structure of quarks, anti-quarks and gluons, the understanding of which is fundamental to the nature of matter as we experience it. By colliding high-energy beams of electrons with high-energy beams of protons and heavy ions, the EIC is designed to explore this hidden subatomic landscape with the resolving power to image its behaviour directly. Put another way: the EIC will provide the world’s most powerful microscope for studying the “glue” that binds the building blocks of matter.

Luminous performance

When the EIC comes online in the early 2030s, the facility will perform precision “nuclear femtography” by zeroing in on the substructure of quarks and gluons in a manner comparable to the seminal studies of the proton using electron–proton collisions at DESY’s HERA accelerator in Germany between 1992 and 2007 (see “Nuclear femtography to delve deep into nuclear matter” panel). However, the EIC will produce a luminosity (collision rate) 100 times greater than the highest achieved by HERA and, for the first time in such a collider, will provide spin-polarised beams of both protons and electrons, as well as high-energy collisions of electrons with heavy ions. All of which will require unprecedented performance in terms of the power, intensity and spatial precision of the colliding beams, with the EIC expected to provide not only transformational advances in nuclear science, but also transferable technology innovations to shape the next generation of particle accelerators and detectors.

The US Department of Energy (DOE) formally initiated the EIC project in December 2019 with the approval of a “mission need”. That was followed in June of this year with the next “critical decision” to proceed with funding for engineering and design prior to construction (with the estimated cost of the build about $2 billion). The new facility will be sited at Brookhaven National Laboratory (BNL) in Long Island, New York, utilising components and infrastructure from BNL’s Relativistic Heavy Ion Collider (RHIC), including the polarised proton and ion-beam capability and the 3.8 km underground tunnel. Construction will be carried out as a partnership between BNL and Thomas Jefferson National Accelerator Facility (JLab) in Newport News, Virginia, home of the Continuous Electron Beam Accelerator Facility (CEBAF), which has pioneered many of the enabling technologies needed for the EIC’s new electron rings. 

Beyond the BNL–JLab partnership, the EIC is very much a global research endeavour. While the facility is not scheduled to become operational until early in the next decade, an international community of scientists is already hard at work within the EIC User Group. Formed in 2016, the group now has around 1300 members – representing 265 universities and laboratories from 35 countries – engaged collectively on detector R&D, design and simulation as well as initial planning for the EIC’s experimental programme. 

A cutting-edge accelerator facility

Being the latest addition to the line of particle colliders, the EIC represents a fundamental link in the chain of continuous R&D, knowledge transfer and innovation underpinning all manner of accelerator-related technologies and applications – from advanced particle therapy systems for the treatment of cancer to ion implantation in semiconductor manufacturing. 

The images “The EIC in outline” and “Going underground” show the planned layout of the EIC, where the primary beams circulate inside the existing RHIC tunnel to enable the collisions of high-energy (5–18 GeV) electrons (and possibly positrons) with high-energy ion beams of up to 275 GeV/nucleon. One thing is certain: the operating parameters of the EIC, with luminosities of up to 10³⁴ cm⁻² s⁻¹ and up to 85% beam polarisation, will push the design of the facility beyond the limits set by previous accelerator projects in a number of core technology areas.

The EIC

For starters, the EIC will require significant advances in the field of superconducting radiofrequency (SRF) systems operating under high current conditions, including control of higher-order modes, beam RF stability and crab cavities. A major challenge is the achievement of strong cooling of intense proton and light-ion beams to manage emittance growth owing to intrabeam scattering. Such a capability will require unprecedented control of low-energy electron-beam quality with the help of ultrasensitive and precise photon detection technologies – innovations that will likely yield transferable benefits for other areas of research reliant on electron-beam technology (e.g. free-electron lasers). 

The EIC design for strong cooling of the ion beams specifies a superconducting energy-recovery linac with a virtual beam power of 15 MW, an order-of-magnitude increase versus existing machines. With this environmentally friendly new technology, the rapidly cycling beam of low-energy electrons (150 MeV) is accelerated within the linac and passes through a cooling channel where it co-propagates with the ions. The cooling electron beam is then returned to the linac, timed to see the decelerating phase of the RF field, and the beam power is thus recovered for the next accelerating cycle – i.e. beam power is literally recycled after each cooling pass.
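The “virtual” beam power is simply the beam energy multiplied by the average current that the RF system would otherwise have to supply continuously; with the numbers quoted above it corresponds to an average electron current of

I = \frac{P}{E} = \frac{15\ \mathrm{MW}}{150\ \mathrm{MV}} = 100\ \mathrm{mA},

which illustrates why recovering the beam energy, rather than supplying it continuously, is essential at this intensity.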

The EIC will also require complex operating schemes. A case in point: fresh, highly polarised electron bunches will need to be frequently injected into the electron storage ring without disturbing the collision operation of previously injected bunches. Further complexity comes in maximising the luminosity and polarisation over a large range of centre-of-mass energies and for the entire spectrum of ion beams. With a control system that can monitor hundreds of beam parameters in real-time, and with hundreds of points where the guiding magnetic fields can be tuned on the fly, there is a vast array of “knobs-to-be-turned” to optimise overall performance. Inevitably, this is a facility that will benefit from the use of artificial intelligence and machine-learning technologies to maximise its scientific output. 

Prototype bunched-beam polarised electron source

At the same time, the EIC and CERN’s High-Luminosity LHC user communities are working in tandem to realise more capable technologies for particle detection as well as innovative electronics for large-scale data read-out and processing. Exploiting advances in chip technology, with feature sizes as small as 65 nm, multipixel silicon sensors are in the works for charged-particle tracking, offering single-point spatial resolution better than 5 µm, very low mass and on-chip, individual-pixel readout. These R&D efforts open the way to compact arrays of thin solid-state detectors with broad angular coverage to replace large-volume gaseous detectors. 

Coupled with leading-edge computing capabilities, such detectors will allow experiments to stream data continuously, rather than selecting small samples of collisions for readout. Taken together, these innovations will yield no shortage of downstream commercial opportunities, feeding into next-generation medical imaging systems, for example, as well as enhancing industrial R&D capacity at synchrotron light-source facilities.

The BNL–JLab partnership

As the lead project partners, BNL and JLab have a deep and long-standing interest in the EIC programme and its wider scientific mission. In 2019, BNL and JLab each submitted their own preconceptual designs to DOE for a future high-energy and high-luminosity polarised EIC based around existing accelerator infrastructure and facilities. In January 2020, DOE subsequently selected BNL as the preferred site for the EIC, after which the two labs immediately committed to a full partnership between their respective teams (and other collaborators) in the construction and operation of the facility. 

Nuclear femtography to delve deep into nuclear matter

Internal quark and gluon substructure of the proton

Nuclear matter is inherently complex because the interactions and structures therein are inextricably mixed up: its constituent quarks are bound by gluons that also bind themselves. Consequently, the observed properties of nucleons and nuclei, such as their mass and spin, emerge from a dynamical system governed by quantum chromodynamics (QCD). The quark masses, generated via the Higgs mechanism, only account for a tiny fraction of the mass of a proton, leaving fundamental questions about the role of gluons in the structure of nucleons and nuclei still unanswered. 

The underlying nonlinear dynamics of the gluon’s self-interaction is key to understanding QCD and fundamental features of the strong interactions such as dynamical chiral symmetry-breaking and confinement. Yet despite the central role of gluons, and the many successes in our understanding of QCD, the properties and dynamics of gluons remain largely unexplored. 

If that’s the back-story, the future is there to be written by the EIC, a unique machine that will enable physicists to shed light on the many open questions in modern nuclear physics. 

Back to basics

At the fundamental level, the way in which a nucleon or nucleus reveals itself in an experiment depends on the kinematic regime being probed. A dynamic structure of quarks and gluons is revealed when probing nucleons and nuclei at higher energies, or with higher resolutions. Here, the nucleon transforms from a few-body system, with its structure dominated by three valence quarks, to a regime where it is increasingly dominated by gluons generated through gluon radiation, as discovered at the former HERA electron–proton collider at DESY. Eventually, the gluon density becomes so large that the gluon radiation is balanced by gluon recombination, leading to nonlinear features of the strong interaction.

The LHC and RHIC have shown that neutrons and protons bound inside nuclei already exhibit the collective behaviour that reveals QCD substructure under extreme conditions, as initially seen with high-energy heavy-ion collisions. This has triggered widespread interest in the study of the strong force in the context of condensed-matter physics, and the understanding that the formation and evolution of the extreme phase of QCD matter is dominated by the properties of gluons at high density.

The subnuclear genetic code

The EIC will enable researchers to go far beyond the present one-dimensional picture of nuclei and nucleons, where the composite nucleon appears as a bunch of fast-moving (anti-)quarks and gluons whose transverse momenta or spatial extent are not resolved. Specifically, by correlating the information of the quark and gluon longitudinal momentum component with their transverse momentum and spatial distribution inside the nucleon, the EIC will enable nuclear femtography. 

Such femtographic images will provide, for the first time, insight into the QCD dynamics inside hadrons, such as the interplay between sea quarks and gluons. The ultimate goal is to experimentally reconstruct and constrain the so-called Wigner functions – the quantities that encode the complete tomographic information and constitute a QCD “genetic map” of nucleons and nuclei.

Adapted from “Electron–ion collider on the horizon” by Elke-Caroline Aschenauer, BNL, and Rolf Ent, JLab.

The construction project is led by a joint BNL–JLab management team that integrates the scientific, engineering and management capabilities of JLab into the BNL design effort. JLab, for its part, leads on the design and construction of SRF and cryogenics systems, the energy-recovery linac and several of the electron injector and storage-ring subsystems within the EIC accelerator complex. 

More broadly, BNL and JLab are gearing up to work with US and international partners to meet the technical challenges of the EIC in a cost-effective, environmentally responsible manner. The goal: to deliver a leading-edge research facility that will build upon the current CEBAF and RHIC user base to ensure engagement – at scale – from the US and international nuclear-physics communities. 

As such, the labs are jointly hosting the EIC experiments in the spirit of a DOE user facility for fundamental research, while the BNL–JLab management team coordinates the engagement of other US and international laboratories into a multi-institutional partnership for EIC construction. Work is also under way with prospective partners to define appropriate governance and operating structures to enhance the engagement of the user community with the EIC experimental programme. 

With international collaboration hard-wired into the EIC’s working model, the EIC User Group has been in the vanguard of a global effort to develop the science goals for the facility – as well as the experimental programme to realise those goals. Most importantly, the group has carried out intensive studies over the past two years to document the measurements required to deliver EIC’s physics objectives and the resulting detector requirements. This work also included an exposition of evolving detector concepts and a detailed compendium of candidate technologies for the EIC experimental programme.

Cornerstone collaborations 

The resulting Yellow Report, released in March 2021, provides the basis for the ongoing discussion of the most effective implementation of detectors, including the potential for complementary detectors in the two possible collision points as a means of maximising the scientific output of the EIC facility (see “Detectors deconstructed”). Operationally, the report also provides the cornerstone on which EIC detector proposals are currently being developed by three international “proto-collaborations”, with significant components of the detector instrumentation being sourced from non-US partners. 

The EIC represents a fundamental link in the chain of continuous R&D and knowledge transfer

Along every coordinate, it’s clear that the EIC project profits enormously from its synergies with accelerator and detector R&D efforts worldwide. To reinforce those benefits, a three-day international workshop was held in October 2020, focusing on EIC partnership opportunities across R&D and construction of accelerator components. This first Accelerator Partnership Workshop, hosted by the Cockcroft Institute in the UK, attracted more than 250 online participants from 26 countries for a broad overview of EIC and related accelerator-technology projects. A follow-up workshop, scheduled for October 2021 and hosted by the TRIUMF Laboratory in Canada, will focus primarily on areas where advanced “scope of work” discussions are already under way between the EIC project and potential partners.

Nurturing talent 

While discussion and collaboration between the BNL and JLab communities were prioritised from the start of the EIC planning process, a related goal is to get early-career scientists engaged in the EIC physics programme. To this end, two centres were created independently: the Center for Frontiers in Nuclear Science (CFNS) at Stony Brook University, New York, and the Electron-Ion Collider Center (EIC2) at JLab.

The CFNS, established jointly by BNL and Stony Brook University in 2017, was funded by a generous donation from the Simons Foundation (a not-for-profit organisation that supports basic science) and a grant from the State of New York. As a focal point for EIC scientific discourse, the CFNS mentors early-career researchers seeking long-term opportunities in nuclear science while simultaneously supporting the formation of the EIC’s experimental collaborations. 

Conceptual general-purpose detector

Core CFNS activities include EIC science workshops, short ad-hoc meetings (proposed and organised by members of the EIC User Group), alongside a robust postdoctoral fellow programme to guide young scientists in EIC-related theory and experimental disciplines. An annual summer school series on high-energy QCD also kicked off in 2019, with most of the presentations and resources from the wide-ranging CFNS events programme available online to participants around the world. 

In a separate development, the CFNS recently initiated a dedicated programme for under-represented minorities (URMs). The Edward Bouchet Initiative provides a broad portfolio of support to URM students at BNL, including grants to pursue masters or doctoral degrees at Stony Brook on EIC-related research. 

Meanwhile, the EIC2 was established at JLab with funding from the State of Virginia to involve outstanding JLab students and postdocs in EIC physics. Recognising that there are many complementary overlaps between JLab’s current physics programme and the physics of the future EIC, the EIC2 provides financial support to three PhD students and three postdocs each year to expand their current research to include the physics that will become possible once the new collider comes online. 

Beyond their primary research projects, this year’s cohort of six EIC2 fellows worked together to organise and establish the first EIC User Group Early Career workshop. The event, designed specifically to highlight the research of young scientists, was attended by more than 100 delegates and is expected to become an annual part of the EIC User Group meeting.

The future, it seems, is bright, with CFNS and EIC2 playing their part in ensuring that a diverse cadre of next-generation scientists and research leaders is in place to maximise the impact of EIC science over the decades to come.

Strongly unbalanced photon pairs

Figure 1

Most processes resulting from proton–proton collisions at the LHC are affected by the strong force – a difficult-to-model part of the Standard Model involving non-perturbative effects. This can be problematic when measuring rare processes not mediated by strong interactions, such as those involving the Higgs boson, and when searching for new particles or interactions. To ensure such processes are not obscured, precise knowledge of the more dominant strong-interaction effects, including those caused by the initial-state partons, is a prerequisite to LHC physics analyses.

The electromagnetic production of a photon pair is the dominant background to the H → γγ decay channel – a process that is instrumental to the study of the Higgs boson. Despite its electromagnetic nature, diphoton production is affected by surprisingly large strong-interaction effects. Thanks to precise ATLAS measurements of diphoton processes using the full Run-2 dataset, the collaboration is able to probe these effects and scrutinise state-of-the-art theoretical calculations.

Measurements studying strong interactions typically employ final states that include jets produced from the showering and hadronisation of quarks and gluons. However, the latest ATLAS analysis instead uses photons, which can be very precisely measured by the detector. Although photons do not carry a colour charge, they interact with quarks as the latter carry electric charge. As a result, strong-interaction effects on the quarks can alter the characteristics of the measured photons. The conservation of momentum allows us to quantify this effect: the LHC’s proton beams collide head-on, so the net momentum transverse to the beam axis must be zero for the final-state particles. Any signs to the contrary indicate additional activity in the event with equivalent but opposite transverse momentum, usually arising from quarks and gluons radiated from the initial-state partons. Therefore, by measuring the transverse momentum of photon pairs, and related observables, the strong interaction may be indirectly probed.
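The balance argument is simple enough to sketch in code (a toy illustration, not the ATLAS analysis): from each photon’s transverse momentum and azimuthal angle, the pair’s transverse momentum follows from vector addition, and it vanishes only if nothing else recoils against the photons.

import math

def diphoton_pt(pt1, phi1, pt2, phi2):
    """Transverse momentum of the photon pair, from each photon's pT and azimuth (radians)."""
    px = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    py = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    return math.hypot(px, py)

# Perfectly back-to-back photons of equal pT balance exactly:
print(diphoton_pt(60.0, 0.0, 60.0, math.pi))   # ~0 GeV
# Additional radiation in the event leaves the pair with non-zero pT:
print(diphoton_pt(65.0, 0.0, 60.0, 2.9))       # of order 15 GeV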

A surprising role of the strong interaction in electromagnetic diphoton production is revealed

Comparing the measured values to predictions reveals the surprising role of the strong interaction in electromagnetic diphoton production. In a simple picture without the strong interaction, the momentum of each photon should perfectly balance in the transverse plane. However, this simplistic expectation does not match the measurements (see figure 1). Measuring the differential cross-section as a function of the transverse momentum of the photon pair, ATLAS finds that most of the measured photon pairs (black points) have low but non-zero transverse momenta, with a peak at approximately 10 GeV, followed by a smoothly falling distribution towards higher values.

When the calculations are extended to encompass next-to-next-to-leading-order corrections in the strong coupling constant (purple line), the impact of the strong interaction becomes manifest. The measured values at high transverse momenta are well described by these predictions, including the bump observed at 70 GeV, which is another manifestation of higher-order strong-interaction effects. Monte Carlo event generators like Sherpa (red line), which combine similar calculations with approximate simulations of arbitrarily many quark and gluon emissions – especially relevant at low energies – properly describe the entire measured distribution.

The results of this analysis, which also include measurements of other distributions such as angular variables between the two photons, don’t just probe the strong interaction directly – they also provide a benchmark for this important background process.

Emergence

A murmuration of starlings

Particle physics is at its heart a reductionistic endeavour that tries to reduce reality to its most basic building blocks. This view of nature is most evident in the search for a theory of everything – an idea that is nowadays more common in popularisations of physics than among physicists themselves. If discovered, all physical phenomena would follow from the application of its fundamental laws.

A complementary perspective to reductionism is that of emergence. Emergence says that new and different kinds of phenomena arise in large and complex systems, and that these phenomena may be impossible, or at least very hard, to derive from the laws that govern their basic constituents. It deals with properties of a macroscopic system that have no meaning at the level of its microscopic building blocks. Good examples are the wetness of water and the superconductivity of an alloy. These concepts don’t exist at the level of individual atoms or molecules, and are very difficult to derive from the microscopic laws. 

As physicists continue to search for cracks in the Standard Model (SM) and Einstein’s general theory of relativity, could these natural laws in fact be emergent from a deeper reality? Emergence is not limited to the world of the very small: by its very nature it skips across orders of magnitude in scale. It is even evident, often mesmerisingly so, at scales much larger than atoms or elementary particles, for example in the murmurations of a flock of birds – a phenomenon that is impossible to describe by following the motion of an individual bird. Another striking example may be intelligence. The mechanism by which artificial intelligence is beginning to emerge from the complexity of underlying computing codes shows similarities with emergent phenomena in physics. One can argue that intelligence, whether it occurs naturally, as in humans, or artificially, should also be viewed as an emergent phenomenon.

Data compression

Renormalisable quantum field theory, the foundation of the SM, works extraordinarily well. The same is true of general relativity. How can our best theories of nature be so successful, while at the same time being merely emergent? Perhaps these theories are so successful precisely because they are emergent. 

As a warm up, let’s consider the laws of thermodynamics, which emerge from the microscopic motion of many molecules. These laws are not fundamental but are derived by statistical averaging – a huge data compression in which the individual motions of the microscopic particles are compressed into just a few macroscopic quantities such as temperature. As a result, the laws of thermodynamics are universal and independent of the details of the microscopic theory. This is true of all the most successful emergent theories; they describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different. For instance, two physical systems that undergo a second-order phase transition, while being very different microscopically, often obey exactly the same scaling laws, and are at the critical point described by the same emergent theory. In other words, an emergent theory can often be derived from a large universality class of many underlying microscopic theories. 

Successful emergent theories describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different

Entropy is a key concept here. Suppose that we try to store the microscopic data associated with the motion of some particles on a computer. If we need N bits to store all that information, we have 2^N possible microscopic states. The entropy equals the logarithm of this number, and essentially counts the number of bits of information. Entropy is therefore a measure of the total amount of data that has been compressed. In deriving the laws of thermodynamics, we throw away a large amount of microscopic data, but we at least keep count of how much information has been removed in the data-compression procedure.
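In symbols: with W = 2^N equally likely microstates, the entropy measured in bits is

S = \log_2 W = \log_2 2^N = N,

which in conventional thermodynamic units is Boltzmann’s relation S = k_B ln W.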

Emergent quantum field theory

One of the great theoretical-physics paradigm shifts of the 20th century occurred when Kenneth Wilson explained the emergence of quantum field theory through the application of the renormalisation group. As with thermodynamics, renormalisation compresses microscopic data into a few relevant parameters – in this case, the fields and interactions of the emergent quantum field theory. Wilson demonstrated that quantum field theories appear naturally as an effective long-distance and low-energy description of systems whose microscopic definition is given in terms of a quantum system living on a discretised spacetime. As a concrete example, consider quantum spins on a lattice. Here, renormalisation amounts to replacing the lattice by a coarser lattice with fewer points, and redefining the spins to be the average of the original spins. One then rescales the coarser lattice so that the distance between lattice points takes the old value, and repeats this step many times. A key insight was that, for quantum statistical systems that are close to a phase transition, you can take a continuum limit in which the expectation values of the spins turn into the local quantum fields on the continuum spacetime.
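A minimal numerical sketch of one such coarse-graining step is shown below. Where the text describes averaging the spins, this version additionally projects the block average back onto ±1 (a common “majority rule” variant), and it ignores the equally important flow of the couplings.

import numpy as np

def block_spin(spins, b=2):
    """One coarse-graining step: replace each b-by-b block of +-1 spins by the
    sign of its average (majority rule), shrinking the lattice by a factor b."""
    n = spins.shape[0]
    assert n % b == 0, "lattice size must be divisible by the block size"
    blocks = spins.reshape(n // b, b, n // b, b).mean(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)      # ties broken towards +1

rng = np.random.default_rng(1)
lattice = rng.choice([-1, 1], size=(8, 8))   # a random 8x8 spin configuration
coarse = block_spin(lattice)                 # 4x4 coarse-grained configuration
print(lattice.shape, "->", coarse.shape)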

This procedure is analogous to the compression algorithms used in machine learning. Each renormalisation step creates a new layer, and the algorithm that is applied between two layers amounts to a form of data compression. The goal is similar: you only keep the information that is required to describe the long-distance and low-energy behaviour of the system in the most efficient way.

A neural network

So quantum field theory can be seen as an effective emergent description of one of a large universality class of many possible underlying microscopic theories. But what about the SM specifically, and its possible supersymmetric extensions? Gauge fields are central ingredients of the SM and its extensions. Could gauge symmetries and their associated forces emerge from a microscopic description in which there are no gauge fields? Similar questions can also be asked about the gravitational force. Could the curvature of spacetime be explained from an emergent perspective?

String theory seems to indicate that this is indeed possible, at least theoretically. While initially formulated in terms of vibrating strings moving in space and time, it became clear in the 1990s that string theory also contains many more extended objects, known as “branes”. By studying the interplay between branes and strings, an even more microscopic theoretical description was found in which the coordinates of space and time themselves start to dissolve: instead of being described by real numbers, our familiar (x, y, z) coordinates are replaced by non-commuting matrices. At low energies, these matrices begin to commute, and give rise to the normal spacetime with which we are familiar. In these theoretical models it was found that both gauge forces and gravitational forces appear at low energies, while not existing at the microscopic level.

While these models show that it is theoretically possible for gauge forces to emerge, there is at present no emergent theory of the SM. Such a theory seems to be well beyond us. Gravity, however, being universal, has been more amenable to emergence.

Emergent gravity

In the early 1970s, a group of physicists became interested in the question: what happens to the entropy of a thermodynamic system that is dropped into a black hole? The surprising conclusion was that black holes have a temperature and an entropy, and behave exactly like thermodynamic systems. In particular, they obey the first law of thermodynamics: when the mass of a black hole increases, its (Bekenstein–Hawking) entropy also increases.
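In formulas (standard results, quoted here for orientation): a black hole of mass M radiates at the Hawking temperature and obeys a first law relating changes in its energy to changes in its horizon entropy,

T_{\mathrm{H}} = \frac{\hbar c^3}{8\pi G M k_{\mathrm{B}}}, \qquad \mathrm{d}E = T_{\mathrm{H}}\,\mathrm{d}S_{\mathrm{BH}},

so feeding mass-energy into the black hole necessarily increases its Bekenstein–Hawking entropy.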

The correspondence between the gravitational laws and the laws of thermodynamics does not only hold near black holes. You can artificially create a gravitational field by accelerating. For an observer who keeps accelerating, even empty space develops a horizon, from behind which light rays can never catch up with them. These horizons also carry a temperature and entropy, and obey the same thermodynamic laws as black-hole horizons.

It was shown by Stephen Hawking that the thermal radiation emitted from a black hole originates from pair creation near the black-hole horizon. The properties of the pair of particles, such as spin and charge, are undetermined due to quantum uncertainty, but if one particle has spin up (or positive charge), then the other particle must have spin down (or negative charge). This means that the particles are quantum entangled. Quantum entangled pairs can also be found in flat space by considering accelerated observers. 

Crucially, even the vacuum can be entangled. By separating spacetime into two parts, you can ask how much entanglement there is between the two sides. The answer to this was found in the last decade, through the work of many theorists, and turns out to be rather surprising. If you consider two regions of space that are separated by a two-dimensional surface, the amount of quantum entanglement between the two sides turns out to be precisely given by the Bekenstein–Hawking entropy formula: it is equal to a quarter of the area of the surface measured in Planck units. 
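Stated as a formula, for a dividing surface of area A the entanglement entropy takes the Bekenstein–Hawking form

S = \frac{A}{4\,\ell_{\mathrm{P}}^2}, \qquad \ell_{\mathrm{P}}^2 = \frac{G\hbar}{c^3},

i.e. one quarter of the area in units of the Planck length squared.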

Holographic renormalisation


The AdS/CFT correspondence incorporates a principle called “holography”: the gravitational physics inside a region of space emerges from a microscopic description that, just like a hologram, lives on a space with one less dimension and thus can be viewed as living on the boundary of the spacetime region. The extra dimension of space emerges together with the gravitational force through a process called “holographic renormalisation”. One successively adds new layers of spacetime. Each layer is obtained from the previous layer through “coarse-graining”, in a similar way to both renormalisation in quantum field theory and data-compression algorithms in machine learning.

Unfortunately, our universe is not described by a negatively curved spacetime. It is much closer to a so-called de Sitter spacetime, which has a positive curvature. The main difference between de Sitter space and the negatively curved anti-de Sitter space is that de Sitter space does not have a boundary. Instead, it has a cosmological horizon whose size is determined by the rate of the Hubble expansion. One proposed explanation for this qualitative difference is that, unlike for negatively curved spacetimes, the microscopic quantum state of our universe is not unique, but secretly carries a lot of quantum information. The amount of this quantum information can once again be counted by an entropy: the Bekenstein–Hawking entropy associated with the cosmological horizon. 

This raises an interesting prospect: if the microscopic quantum data of our universe may be thought of as many entangled qubits, could our current theories of spacetime, particles and forces emerge via data compression? Space, for example, could emerge by forgetting the precise way in which all the individual qubits are entangled, but only preserving the information about the amount of quantum entanglement present in the microscopic quantum state. This compressed information would then be stored in the form of the areas of certain surfaces inside the emergent curved spacetime. 

In this description, gravity would follow for free, expressed in the curvature of this emergent spacetime. What is not immediately clear is why the curved spacetime would obey the Einstein equations. As Einstein showed, the amount of curvature in spacetime is determined by the amount of energy (or mass) that is present. It can be shown that his equations are precisely equivalent to an application of the first law of thermodynamics. The presence of mass or energy changes the amount of entanglement, and hence the area of the surfaces in spacetime. This change in area can be computed and precisely leads to the same spacetime curvature that follows from the Einstein equations. 
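A minimal sketch of this logic, along the lines of the derivation by Ted Jacobson discussed in the next paragraph: demanding that the first law (the Clausius relation) holds across every local acceleration horizon, with the Unruh temperature and an entropy given by a quarter of the horizon area, is equivalent to imposing Einstein's field equations:

$$\delta Q = T\,\delta S, \qquad T = \frac{\hbar a}{2\pi c k_B}, \qquad \delta S = \frac{k_B\,\delta A}{4\,l_P^2} \;\;\Longrightarrow\;\; R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}.$$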

The idea that gravity emerges from quantum entanglement goes back to the 1990s, when it was first proposed by Ted Jacobson. Not long afterwards, Juan Maldacena discovered that general relativity can be derived from an underlying microscopic quantum theory without a gravitational force. His description only works for infinite spacetimes with negative curvature, called anti-de Sitter (AdS) space, as opposed to the positive curvature we measure. The microscopic description then takes the form of a scale-invariant quantum field theory, a so-called conformal field theory (CFT), that lives on the boundary of the AdS space (see “Holographic renormalisation” panel). It is in this context that the connection between vacuum entanglement and the Bekenstein–Hawking entropy, and the derivation of the Einstein equations from entanglement, are best understood. I also contributed to these developments, in a 2010 paper that emphasised the role of entropy and information in the emergence of the gravitational force. Over the last decade much progress has been made in our understanding of these connections, in particular the deep connection between gravity and quantum entanglement. Quantum information has taken centre stage in the most recent theoretical developments.

Emergent intelligence

But what about viewing the even more complex problem of human intelligence as an emergent phenomenon? Since scientific knowledge is condensed and stored in our current theories of nature, the process of theory formation can itself be viewed as a very efficient form of data compression: it keeps only the information needed to make predictions about reproducible events. Our theories allow us to make predictions with the smallest possible number of free parameters.

The same principles apply in machine learning. The way an artificial-intelligence machine is able to predict whether an image represents a dog or a cat is by compressing the microscopic data stored in individual pixels in the most efficient way. This decision cannot be made at the level of individual pixels. Only after the data has been compressed and reduced to its essence does it become clear what the picture represents. In this sense, the dog/cat-ness of a picture is an emergent property. This is even true for the way humans process the data collected by our senses. It seems easy to tell whether we are seeing or hearing a dog or a cat, but underneath, and hidden from our conscious mind, our brains perform a very complicated task that turns all the neural data coming from our eyes and ears into a signal that is compressed into a single outcome: it is a dog or a cat.
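As an illustrative sketch only (the architecture, sizes and threshold below are hypothetical choices, not taken from any particular system), a tiny image classifier makes this compression explicit: roughly twelve thousand pixel values are coarse-grained layer by layer into a handful of numbers, and finally into a single dog-versus-cat score.

```python
# Illustrative sketch: classification as data compression.
# A 64x64 RGB image (3*64*64 = 12288 numbers) is reduced step by step
# to 32 numbers and finally to one score. All choices here are hypothetical.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # coarse-grain to 32 numbers per image
        )
        self.head = nn.Linear(32, 1)   # 32 numbers -> 1 number

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))   # score between 0 and 1

model = TinyClassifier()
image = torch.rand(1, 3, 64, 64)       # stand-in for a real photograph
score = model(image)                   # the compressed, "emergent" verdict
print("dog" if score.item() > 0.5 else "cat")
```

The point is not this particular network but the shape of the computation: no single pixel decides the outcome; only the compressed representation does.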

Emergence is often summarised with the slogan “the whole is more than the sum of its parts”

Can intelligence, whether artificial or human, be explained from a reductionist point of view? Or is it an emergent concept that only appears when we consider a complex system built out of many basic constituents? There are arguments in favour of both sides. As human beings, our brains are hard-wired to observe, learn, analyse and solve problems. To achieve these goals the brain takes the large amount of complex data received via our senses and reduces it to a very small set of information that is most relevant for our purposes. This capacity for efficient data compression may indeed be a good definition for intelligence, when it is linked to making decisions towards reaching a certain goal. Intelligence defined in this way is exhibited in humans, but can also be achieved artificially.

Artificially intelligent computers beat us at problem solving, pattern recognition and sometimes even at what appears to be “generating new ideas”. A striking example is DeepMind’s AlphaZero, whose chess rating far exceeds that of any human player. Given only the rules of chess, and after just four hours of training by playing against itself, AlphaZero was able to beat the strongest conventional “brute force” chess program by coming up with smarter ideas and showing a deeper understanding of the game. Top grandmasters now use its ideas in their own games at the highest level.

In its basic material design, an artificial-intelligence machine looks like an ordinary computer, which suggests that a reductionist account should be possible. On the other hand, it is practically impossible to explain all aspects of human intelligence by starting at the microscopic level of the neurons in our brain, let alone in terms of the elementary particles that make up those neurons. Furthermore, the intellectual capability of humans is closely connected to the sense of consciousness, which most scientists would agree does not allow for a simple reductionist explanation.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts”, or, as condensed-matter theorist Phil Anderson put it, “more is different”. It counters the reductionist point of view, reminding us that the laws we consider fundamental today may in fact emerge from a deeper underlying reality. Even if that deeper layer remains inaccessible to experiment, emergence is an essential tool for theorists of the mind and of the laws of physics alike.

COMPASS points to triangle singularity

COMPASS

The COMPASS experiment at CERN has reported the first direct evidence for a long-hypothesised interplay between hadron decays which can masquerade as a resonance. The analysis, which was published last week in Physical Review Letters, suggests that the “a1(1420)” signal observed by the collaboration in 2015 is not a new exotic hadron after all, but the first sighting of a so-called triangle singularity.

“Triangle singularities are a mechanism for generating a bump in the decay spectrum that does not correspond to a resonance,” explains analyst Mikhail Mikhasenko of the ORIGINS Cluster in Munich. “One gets a peak that has all features of a new hadron, but whose true nature is a virtual loop with known particles.” 

“This is a prime example of an aphorism which is commonly attributed to Dick Dalitz,” agrees fellow analyst Bernhard Ketzer, of the University of Bonn: “Not every bump is a resonance, and not every resonance is a bump!”

Triangle singularities take their name from the triangle that appears in a Feynman diagram when a secondary decay product fuses with a primary decay product. If the particle masses line up such that the process can proceed as a cascade of on-mass-shell hadron decays, the matrix element is enhanced by a so-called logarithmic singularity, which can easily be mistaken for a resonance. But the effect is usually rather small: it took a record 50 million π⁻p→π⁻π⁺π⁻p events, and painstaking work by the COMPASS collaboration, to make certain that the a1(1420) signal, which makes up less than 1% of the three-pion sample, wasn’t an artefact of the analysis procedure.
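Schematically, and for a generic scalar triangle loop rather than the specific COMPASS amplitude, the enhancement comes from the integral over the three internal propagators; the logarithmic singularity arises for kinematics at which all three internal particles can be on their mass shells simultaneously, the condition first analysed by Landau (see below):

$$\mathcal{A}_{\triangle} \;\propto\; \int\!\frac{d^4k}{(2\pi)^4}\; \frac{1}{\big(k^2 - m_1^2\big)\big((k+p_1)^2 - m_2^2\big)\big((k-p_2)^2 - m_3^2\big)}.$$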

Hadron experiments are reaching the precision needed to see one of the most peculiar multi-body features of QCD

Mikhail Mikhasenko

“The correspondence of this small signal with a triangle singularity is noteworthy because it shows that hadron experiments are finally reaching the precision and statistics needed to see one of the most peculiar features of the multi-body non-perturbative regime of quantum chromodynamics,” says Mikhasenko.

Triangle singularities were dreamt up independently by Lev Landau and Richard Cutkosky in 1959. After five decades of calculations and speculations, physicists at the Institute for High-Energy Physics in Beijing in 2012 used a triangle singularity to explain why intermediate f0(980) mesons in J/ψ meson decays at the BESIII experiment at the Beijing Electron–Positron Collider II were unusually long-lived. In 2019, the LHCb collaboration ruled out triangle singularities as the origin of the pentaquark states they discovered that year. The new COMPASS analysis is the first time that a “bump” in a decay spectrum has been convincingly explained as more likely due to a triangle singularity than a resonance.

Triangle singularity

COMPASS collides a secondary beam of charged pions from CERN’s Super Proton Synchrotron with a hydrogen target in the laboratory’s North Area. In this analysis, gluons emitted by protons in the target excite the incident pions, producing the final state of three charged pions which is observed by the COMPASS spectrometer. Intermediate resonances display a variety of angular momentum, spin and parity configurations. In 2015, the collaboration observed a small but unmistakable “P-wave” (L=1) component of the f0(980)π system with a peak at 1420 MeV and JPC=1++. Dubbed a1(1420), the apparent resonance was suspected to be exotic, as it was narrower, and hence more stable, than the ground-state meson with the same quantum numbers, a1(1260). It was also surprisingly light, with a mass just above the K*K threshold of 1.39 GeV. A tempting interpretation was that a1(1420) might be a dsūs̄ tetraquark, and thus the first exotic hadronic state with no charm quarks, and a charged cousin of the famous exotic X(3872) at the D*D threshold to boot, explains Mikhasenko.

According to the new COMPASS analysis, however, the bump at 1420 MeV can more simply be explained by a triangle singularity, whereby an a1(1260) decays to a K*K pair, and the kaon from the resulting K*→Kπ decay annihilates with the initial anti-kaon to create a light unflavoured f0(980) meson which decays to a pair of charged pions. Crucially, the mass of f0(980) is just above the KK threshold, and the roughly 300 MeV width of the conventional a1(1260) meson is wide enough for the particle to be said to decay to K*K on-mass-shell.
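For orientation, the thresholds quoted in this and the previous paragraph follow from the approximate meson masses (rounded Particle Data Group values):

$$m_{K^*(892)} + m_K \approx 892\ \text{MeV} + 494\ \text{MeV} \approx 1.39\ \text{GeV}, \qquad 2\,m_K \approx 987\ \text{MeV},$$

so the a1(1420) peak sits just above the K*K threshold, while the f0(980), with a mass of roughly 990 MeV, sits essentially at the KK threshold.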

A new resonance is not required. That is phenomenologically significant.

Ian Aitchison

“The COMPASS collaboration have obviously done a very thorough job, being in possession of a complete partial-wave analysis,” says Ian Aitchison, emeritus professor at the University of Oxford, who in 1964 was among the first to propose that triangle graphs with an unstable internal line (in this case the K*) could lead to observable effects. This enables the whole process to occur nearly on-shell for all particles, which in turn means that the singularities of the amplitude will be near the physical region, and hence observable, explains Aitchison. “This is not unambiguous evidence for the observation of a triangle singularity, but the paper shows pretty convincingly that it is sufficient to explain the data, and that a new resonance is not required. That is phenomenologically significant.”

The collaboration now plans further studies of this new phenomenon, including its interference with the direct decay of the a1(1260). Meanwhile, observation by Belle II of the a1(1420) phenomenon in decays of the tau lepton to three pions should confirm our understanding and provide an even cleaner signal, says Mikhasenko.
