
Thermonuclear explosions fuel cosmic rays

The RS Ophiuchi outburst

Normally, RS Ophiuchi is a faint astronomical object at a distance of about 5000 light years from Earth. Once every 15 years or so, however, it brightens dramatically to the point that it becomes visible to the naked eye, only to fade again within several days. Classified as a recurrent nova, the object is not a single star but a binary system consisting of a white dwarf and a red giant. Owing to its proximity to the red giant, the white dwarf slowly accretes matter from its companion, building up a thin, atmosphere-like layer on its surface. Over time this layer becomes denser and heats up until it reaches a critical temperature of around 20 million K, at which point a thermonuclear explosion is initiated that rapidly spreads across the dwarf’s surface, blowing away the remaining material. This process, which for RS Ophiuchi recurs every 9 to 26 years, makes the object visible at optical wavelengths. It has also been theorised to be capable of producing cosmic rays.

Bipolar shape

The first recorded explosion of RS Ophiuchi took place in 1898, although it was only identified in archival optical images after Williamina Fleming discovered the object in 1905. A more recent explosion, in 2006, was observed in detail by Hubble, while the latest occurred in August 2021. Hubble’s 2006 images show a shock wave propagating away from the object. The shock, initially radially symmetric, is distorted by the gas present in the orbital plane of the binary system. This gas slows the shock down in the orbital plane, leading to a final bipolar shape capable of accelerating electrons and hadrons to high energies. These accelerated charged particles can reach Earth in the form of cosmic rays, but the influence of magnetic fields makes it impossible to trace them directly back to their source. The high-energy gamma rays produced by some of these cosmic rays, on the other hand, do point directly back to the source. Gamma rays formed in this way during the 2021 explosion have recently been used by the H.E.S.S. collaboration to test cosmic-ray acceleration models.

After the initial detection of the brightening of the source at optical wavelengths, the ground-based H.E.S.S. facility in Namibia pointed its five telescopes, which are sensitive to the Cherenkov light emitted as TeV gamma rays induce showers in the atmosphere, at the source. In parallel, the space-based Fermi–LAT telescope, which directly detects gamma rays in the ~100 MeV to ~500 GeV energy range, observed the target for several weeks. The time evolution of the emission measured by the two telescopes shows the Fermi–LAT energy flux peaking about one day after the peak in optical brightness. For H.E.S.S., which covered the 250 GeV to 2.5 TeV energy range, the peak occurred three days after the optical peak, indicating a significant hardening of the emission spectrum with time.

Hadronic origin

These results match what would be expected from a hadronic origin of the gamma rays. The shock wave produced by the thermonuclear explosion accelerates charged particles each time they traverse the shock. Magnetic fields, which are partly induced by the accelerated hadrons themselves, trap the charged particles in the region, allowing them to traverse the shock many times. Some of the hadrons collide with gas in the surrounding medium, producing showers in which neutral pions are created; these in turn produce the gamma rays detected on Earth. The maximum energy of these gamma rays is about an order of magnitude lower than that of the hadrons that induced the showers. This implies that one day after the explosion hadrons had been accelerated up to 1 TeV, producing the photons detected by Fermi–LAT, while it took an additional two days for the source to accelerate hadrons up to the 10 TeV required to produce the emission visible to H.E.S.S. These timescales, as well as the measured energies, match the theoretical predictions for sources with the same size and energy as RS Ophiuchi.
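
The energy bookkeeping in the paragraph above can be checked with a short calculation. This is only a toy estimate: it assumes nothing beyond the roughly factor-of-ten relation between hadron and photon energies quoted in the text.

```python
# Toy estimate: in hadronic (pion-decay) gamma-ray production, each photon
# carries roughly a tenth of the parent hadron's energy -- the
# order-of-magnitude factor quoted in the text.
GAMMA_TO_HADRON = 10

def required_hadron_energy_tev(gamma_energy_tev):
    """Hadron energy (in TeV) needed to produce photons of the given energy."""
    return GAMMA_TO_HADRON * gamma_energy_tev

# Photons near the top of Fermi-LAT's range (~0.1 TeV), seen one day
# after the explosion, imply hadrons of about 1 TeV:
print(required_hadron_energy_tev(0.1))   # -> 1.0
# Photons near the top of H.E.S.S.'s range (~1 TeV), seen two days later,
# imply hadrons of about 10 TeV:
print(required_hadron_energy_tev(1.0))   # -> 10.0
```

The two-day gap between the Fermi–LAT and H.E.S.S. peaks thus directly tracks the time the shock needed to push the hadron population up by an order of magnitude in energy.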

The results show clear agreement with the theoretical predictions of hadronic production of gamma rays by recurrent novae

The results, published in Science by the H.E.S.S. collaboration, show clear agreement with the theoretical predictions of hadronic production of gamma rays by recurrent novae. The alternative theory of a leptonic origin of the gamma rays is more difficult to accommodate, owing to the relatively large fraction of the shock energy that would need to be converted into electron acceleration. The measurements provide an almost direct way to test models of the origin of cosmic rays and thereby add several important pieces to the puzzle of cosmic-ray origins.

The search for new physics: take three

An ATLAS mono-jet event

Aside from the discovery of the Higgs boson, the absence of additional elementary-particle discoveries is the LHC’s main result so far. For many physicists, it is also the more surprising one. Such further discoveries are suggested by the properties of the Higgs boson, which are now established experimentally to a large extent. The Higgs boson’s low mass, despite its susceptibility to quantum corrections from heavy particles that should push it orders of magnitude higher, and its hierarchy of coupling strengths to fermions present extreme, “unnatural” values that so far lack an explanation. Searches for new physics at the TeV energy scale therefore remain strongly motivated, irrespective of the no-show so far.

Naturalness has triggered the development of many new-physics models, but the large extent of their parameter space allows them to evade exclusion again and again. Whereas the discoveries of past decades, including that of the Higgs boson, were guided by precise quantitative predictions, the search for physics beyond the Standard Model (SM) lacks such guidance and simply requires more perseverance.

LHC Run 3 will bring long-awaited new insights to the question of naturalness with respect to Higgs physics, as well as to many other SM puzzles such as the nature of dark matter or the cosmological matter–antimatter asymmetry. With considerably more data and a slightly higher centre-of-mass energy than at Run 2, in addition to new triggers and improved event reconstruction and physics-analysis techniques, a significant increase in sensitivity compared to the current results will be achieved. Searches for new phenomena with Run 3 data will also benefit from a much improved definition of the physics targets, thanks to information gathered during Run 2 and the various anomalies observed at lower energies.

The story so far

During the past 12 years, a broad search programme has emerged at the LHC in parallel with precision measurements (see “Pushing the precision frontier”). Initially, the most favoured new-physics scenario was supersymmetry (SUSY), a new fermion–boson symmetry that gives rise to supersymmetric partners of SM particles and naturally leads to a light Higgs boson close to the masses of the W and Z bosons. SUSY is expected to produce events containing jets and missing transverse energy (MET), the study of which at Run 2 placed exclusion limits on gluino masses as high as 2.3 TeV. More challenging searches for stop quarks, with background processes up to a million times more frequent than the predicted signal, were also performed thanks to the excellent performance of the ATLAS and CMS detectors. Yet, no signs of stops have been found up to a mass of 1.3 TeV, excluding a sizeable fraction of the SUSY parameter space suggested by naturalness arguments. Further SUSY searches were performed, including those for only weakly interacting SUSY particles (“electroweakinos”), where the Run 2 data allowed the experiments to surpass the sensitivity achieved by LEP in some scenarios. Half a century since SUSY was first proposed, ATLAS and CMS have demonstrated that the simplest models containing TeV-scale sparticle masses are not realised in nature (see “Stop quarks and electroweakinos” figure).

Stop quarks and electroweakinos

In fact, a large number of new-physics searches during LHC Run 1 and Run 2 targeted models other than SUSY, many of which also address the question of naturalness. Signs of extra spatial dimensions have been searched for in “mono-jet” events containing a single energetic jet and large MET, which could be caused by excited gravitons propagating in a higher-dimensional space. Searches for vector-like quarks, as suggested by models with a composite Higgs boson, covered numerous complex final states with decays into all of the heavier known elementary particles. In these and other searches, the Higgs boson has entered the experimental toolkit, for example via the identification of high-momentum Higgs-boson decays reconstructed as large-radius jets.

The Higgs sector itself has been the subject of new-physics searches. These target additional Higgs bosons that would arise from an extended Higgs sector and exotic decays of the known Higgs boson, for instance into weakly interacting massive particles (WIMPs), which are candidates for dark matter. Improvements in both theoretical and data-driven background determinations have also allowed searches for Higgs-boson decays into invisible particles, with the Run 2 dataset setting an upper limit of 10% on their rate.

Searches for dark matter also continued to be performed in traditional channels, for example via the mono-jet signature. To increase the accuracy of this search using the full Run 2 statistics, theorists contributed differential background predictions that go beyond the next-to-leading order in perturbation theory to achieve an unprecedented background uncertainty of only 3% at MET values above 1 TeV. The resulting constraints on WIMP dark matter are complementary to those achieved with ultrasensitive detectors deep underground as well as astroparticle experiments. The absence of dark-matter signals in such established search channels led to the development of new models that predict a number of relevant but previously unexplored signatures.

LHC Run 3 will allow searches to go significantly beyond the sensitivity achieved with the Run 2 data 

In several respects, searches for new physics at the LHC experiments have gone well beyond what was foreseen at the time of their design. “Scouting” data streams were introduced to store compact event records suitable for di-jet and di-muon resonance searches, allowing recording rates to be increased by up to two orders of magnitude within the available bandwidth. Consequently, the mass reach of these searches was extended down to values that were previously inaccessible due to the high background rates at low masses. Long-lived-particle searches also opened a new frontier, motivating proposals for new LHC detectors.

Overall, LHC Run 1 and Run 2 led to an enormous diversification of new-physics searches at the energy frontier by ATLAS and CMS, with complementary searches conducted by LHCb targeting lower invariant masses. The absence of new-physics signals despite the exploration of a multitude of signatures with unforeseen precision is a strong experimental result that feeds back to the phenomenology community to shape this programme further. While the analysis of Run 2 data is still ongoing, the experience gained so far in terms of experimental techniques and investigated signatures puts the experimental collaborations in a better position to search for new physics at Run 3.

Experimental improvements

LHC Run 3 will allow searches to go significantly beyond the sensitivity achieved with the Run 2 data. ATLAS and CMS are expected to collect datasets with an integrated luminosity of up to 300 fb⁻¹, adding to the 140 fb⁻¹ collected in Run 2. Taking into account the additional, smaller benefit provided by the increase in the centre-of-mass energy from 13 to 13.6 TeV, new-physics search sensitivities will generally increase by a factor of two in terms of cross sections. Additional gains in sensitivity will result from the exploration of new territory in several respects.
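
The quoted factor of two can be sketched with back-of-envelope arithmetic. The luminosities are from the text; the assumption that cross-section sensitivity scales as 1/√L (as for background-limited searches) is ours.

```python
import math

# Back-of-envelope check of the "factor of two" sensitivity gain.
run2_lumi = 140.0              # fb^-1 collected in Run 2
combined_lumi = 140.0 + 300.0  # fb^-1 after adding the expected Run 3 data

# Assumption: background-limited searches, cross-section sensitivity ~ 1/sqrt(L).
gain = math.sqrt(combined_lumi / run2_lumi)
print(f"gain from statistics alone: {gain:.2f}")  # ~1.77
# The modest rise in centre-of-mass energy from 13 to 13.6 TeV boosts signal
# cross sections (especially for heavy states), bringing the overall
# improvement to roughly a factor of two.
```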

Already at the level of data acquisition, significant improvements will increase the sensitivity of searches. The CMS high-level trigger system has been reinforced with graphics processing units, increasing the recording rate of the data-scouting stream from 9 to 30 kHz. ATLAS has extended the same technique to more final states, including photons and b-jets. These developments extend the sensitivity to hadronic resonances with low masses and weak coupling strengths, a domain that has never been probed before.

Mass exclusions for spin-1 leptoquarks

The particularly challenging searches for new long-lived particles will also benefit from experimental advances. ATLAS has improved the reconstruction of displaced tracks, reducing the number of fake tracks by a factor of 20 at similar efficiency compared to the current data analysis. New, dedicated triggers have been developed by ATLAS and CMS to identify electrons, muons and tau-leptons displaced from the primary interaction vertex. These trigger developments will allow signal-candidate events to be collected at unprecedented rates, for example to test exotic Higgs-boson decays into long-lived particles with branching ratios far below the current experimental limits.

Likewise, ongoing developments in machine learning will contribute to the Run 3 search programme. While Run 1 physics analyses used generic, simple algorithms to distinguish between hypotheses, Run 2 introduced more powerful deep-learning approaches. For Run 3 their development continues, with a multitude of different algorithms tailored to the needs of event reconstruction and physics analysis to further increase the reach of new-physics searches.

New signatures

The Run 3 data will also be scrutinised for final states that either have been proposed more recently or require a particularly large dataset. Examples of the latter are searches for electroweakinos, whose production cross-section at the LHC is at least two orders of magnitude smaller than that of strongly interacting SUSY particles. First results based on Run 2 data surpassed the sensitivity of the LEP experiments, including tests of unconventional “R-parity-violating” scenarios in which electroweakinos can decay into SM particles only. This results in complicated final states containing electrons, muons and many jets but relatively low MET. Here, the challenging background determination could only be achieved thanks to machine-learning techniques, which lay the groundwork for further searches for particularly rare and challenging SUSY signals at Run 3.

If R-parity is not a symmetry, SUSY does not provide a WIMP dark-matter candidate. Among alternative explanations of the nature of this substance, models with bound-state dark matter are gaining increasing attention. In this new approach, strong interactions similar to quantum chromodynamics determine the particle spectrum in a dark sector that includes stable dark-matter candidate particles such as dark pions. At the LHC, coupling between such dark-sector particles and known ones would result in “semi-visible” jets comprising both types of particle (traditional dark-matter searches at the LHC have avoided such events to reduce background contributions). With the Run 2 data, CMS has already provided the very first collider constraints on these dark sectors, and more results from both ATLAS and CMS will follow in this and other proposed dark-sector scenarios.

Multiple deviations from the SM observed at lower energies are starting to shape the search programme at the energy frontier. The long-standing anomaly in the magnetic moment of the muon has recently reached a significance of 4.2σ, motivating increased efforts to search for possible causes. One is the pair production of a supersymmetric partner of the muon: models fit the low-energy data if the mass of this “smuon” is below 1 TeV and hence within reach of the LHC. Another is vector-like leptons, which are suggested by consistent non-SUSY extensions of the SM and can be sought in final states containing a large number of leptons.

Multiple deviations from the SM observed at lower energies are starting to shape the search programme at the energy frontier

Moreover, the anomalies in B-meson decays consistently reported by BaBar, Belle and LHCb (see “A flavour of Run 3 physics”) have a strong and growing impact on the Run 3 search programme. Explanations for these anomalies require new particles with TeV-scale masses to fit the size of the observed effects, and a hierarchy of fermion couplings to fit the deviations from lepton-flavour universality. Intriguingly, these two requirements happen to coincide with the two peculiarities of the Higgs boson noted earlier: its low mass and its hierarchy of coupling strengths. Particular attention is now given to leptoquark searches investigating several production and decay modes. ATLAS and CMS have already started to probe leptoquark models suggested by the B-meson anomalies using Run 2 data (see “Leptoquarks” figure). While the analysis of key channels is ongoing, Run 3 will allow the experiments to probe a large fraction of the relevant parameter space. Furthermore, consistent models of leptoquarks include additional new particles, namely colour-charged and colour-neutral bosons, vector-like quarks and vector-like leptons. These predict a variety of new-physics signatures that will further shape the Run 3 search programme.

In summary, searches for new physics at Run 3 will bring significant gains in sensitivity beyond the benefit provided by the increased amount of data. In particular, potential explanations of the anomalies observed at lower energies will be tested. Assuming that these anomalies point to new physics, the relevant searches with Run 3 data have a good chance of finding the first deviations from the SM at the TeV energy scale. Such an outcome would be of the utmost importance for particle physics, strengthening the case for the proposed Future Circular Collider at CERN.

A flavour of Run 3 physics

Particle debris

The famous “November revolution” in particle physics in winter 1974 was sparked by the discovery of the charm quark by two independent groups, at Brookhaven and SLAC. It signalled the existence of a second generation of fermions, and was therefore a milestone in establishing the Standard Model (SM). Less widely known is that, four years earlier, the Glashow–Iliopoulos–Maiani (GIM) mechanism had postulated the existence of the charm quark to explain the smallness of the K⁰ → μ⁺μ⁻ branching fraction. In addition, in the summer of 1974, the puzzling smallness of the mass difference between neutral kaons, apparent from kaon mixing, led Gaillard and Lee to conclude, correctly, that the charm mass should be below 1.5 GeV.

Many historical discoveries in particle physics have followed this pattern: a measurement in flavour physics generated a theoretical breakthrough, which in turn led to a direct discovery. The 1977 discovery of the beauty quark at Fermilab confirmed the Cabibbo–Kobayashi–Maskawa (CKM) mechanism postulating the existence of three generations of fermions, which had been put forward following the experimental discovery of CP violation in the kaon system in 1964. In 1987, hints of a surprisingly large value for the top-quark mass were inferred from the first measurement of B⁰-meson oscillations at the ARGUS experiment, and confirmed in 1995 by the discovery of the top quark at the Tevatron.

This critical role of the flavour sector in particle physics is by no means accidental. Since new particles can contribute virtually via loops or box diagrams, precision measurements in flavour physics in tandem with precise theoretical predictions can provide sensitive probes to indirectly search for new particles or interactions at high energy scales. Could the historical role of flavour measurements in elucidating new-particle discoveries be about to repeat itself at the LHC? 

The flavour promise

Diagrams showing mixing and flavour transitions

Following the Higgs-boson discovery in 2012, the next target at the LHC was clear: to search for an indisputable sign of an eagerly awaited mass peak as the signature of a new particle beyond the SM. So far, however, it seems nature might have something else in store. To unearth the new physics that is strongly motivated to exist – to explain phenomena such as the arbitrary mass hierarchy of elementary particles, the matter–antimatter imbalance in the universe and the origin of the CKM matrix – we should also consider the historically successful route through flavour physics.

Flavour processes are governed by loop diagrams such as “box” and “penguin” diagrams (see “Virtual production” figure), in which new heavy particles can contribute virtually and alter our expectations. The key word here is “virtually”. This peculiarity of quantum physics allows us to probe new physics at very high energy scales, even if the collision energy is not sufficient to produce new particles directly. Any significant discrepancy between flavour measurements and theoretical calculations would provide us with a valuable lead towards hidden new physics. 

Cooking up a storm

On the experimental side, the main ingredient required is a large sample of beauty and charm hadrons. This makes the LHC, and the LHCb experiment in particular, the ideal place to carefully test the flavour structure of the SM. Not only does the LHC have a record energy reach, it also combines a large production cross-section for beauty and charm hadrons with a very high instantaneous luminosity. There is one catch, however. Due to the nature of quantum chromodynamics, a large number of hadrons are produced in proton–proton collisions, saturating the different sub-detectors (see “Asymmetric complexity” image). Flavour measurements require a full understanding of this complex event environment, a much more challenging task than at e⁺e⁻ colliders, where only a small number of particles is produced in each collision.

Constraints on effective-field-theory coefficients relevant to flavour anomalies

Since the inauguration of the LHC, its four main experiments have discovered more than 50 new hadronic states. Most follow the expected pattern of the original quark model, while some are new forms of matter, such as the doubly heavy “tetraquark” Tcc⁺ or bound states of five quarks, the so-called pentaquarks, discovered by LHCb. Since the early planning of the LHC, the mission of the flavour community has been to better understand the behaviour of beauty and charm quarks. Indeed, in 2019 LHCb became the first single experiment to observe the mixing and CP violation of neutral charm mesons. Similarly for beauty decays, the first observations of time-integrated and time-dependent CP-violating Bs decays were made at the LHC. The unique properties and structure of the CKM matrix connect seemingly unrelated flavour observables, most of which are accessible through B decays. Accurate flavour measurements thus allow the CKM matrix to be probed and precise theory predictions to be scrutinised simultaneously.

Unturned stones 

Today, the LHC dominates the flavour sector, with an important parallel programme ongoing at Belle II in Japan. Between them, the LHC experiments have made the most precise measurement of matter–antimatter oscillations in the neutral B system, measured CP violation in B mesons, discovered rare B decays and determined CKM elements such as Vtb. So far no measurement has yielded a significant disagreement with SM expectations. However, some interesting hints have emerged, and a couple of stones have not yet been turned.

Since the inauguration of the LHC, its four main experiments have discovered more than 50 new hadronic states

A promising opportunity to probe physics beyond the SM arises through b → sℓℓ and b → cℓν transitions in various hadron decays. The latter proceed through tree-level transitions in processes that are abundant and well understood: the decay is mediated by a charged W boson that changes the b quark into a c quark, emitting a lepton and an antineutrino. In b → sℓℓ processes, the quark flavour changes through the emission of a Z boson or a photon. This flavour-changing neutral-current process occurs through a higher-order penguin diagram, and underlies a class of suppressed and thus rare hadron decays. The SM makes a slew of precise predictions for flavour observables for both types of transition. However, new-physics models include yet-unobserved particles that can potentially contribute virtually.

A number of flavour observables are particularly well predicted within the SM. Well-known examples are the lepton-flavour-universality observables R(K), which compares the decay rates of b → sℓℓ decays containing muons to those containing electrons, and R(D), which compares b → cℓν decay rates with muons and tau leptons in the final state. The theoretical precision for these ratios reaches an impressive relative uncertainty of about 1%. Other measurable flavour quantities in these two transitions, such as absolute decay rates or angular observables, are more challenging due to the limited knowledge of gluon exchange between hadrons in the initial and final states.
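
The structure of such a ratio can be sketched schematically. This is purely illustrative: real measurements integrate decay rates over a dilepton-mass window and correct for detection efficiencies, and the rates below are hypothetical numbers chosen only to show the idea.

```python
# Schematic illustration of a lepton-flavour-universality ratio such as R(K).
def lfu_ratio(rate_mu, rate_e):
    """R(K)-style ratio: muon-mode decay rate over electron-mode decay rate."""
    return rate_mu / rate_e

# In the SM, electroweak couplings are lepton-flavour universal, so the ratio
# is predicted to be ~1 with roughly 1% theoretical uncertainty (the precision
# quoted in the text).
sm_prediction = 1.0
# Hypothetical branching fractions, for illustration only:
toy_measurement = lfu_ratio(rate_mu=0.85e-7, rate_e=1.0e-7)
print(toy_measurement)  # a value significantly below 1 would hint at new physics
```

The power of such ratios is that hadronic uncertainties largely cancel between numerator and denominator, which is why the theoretical prediction can reach the ~1% level.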

Precisions expected on ratios

Intriguingly, all b → sℓℓ flavour observables measured by LHCb, the B factories Belle and BaBar, and also ATLAS and CMS collectively point in a similar direction away from SM predictions. This has led to speculation that new heavy particles are changing the rate of B-meson decays to different lepton flavours, violating the SM principle of lepton-flavour universality. The contributions of such particles are quantified “effectively” – similar to the way Fermi described weak decays in terms of a single coupling constant instead of the underlying W-boson propagator. New particles that contribute to B decays can affect many different types of couplings, depending on their spin or handedness. Remarkably, the current flavour anomalies seem to affect only one or two effective couplings (left-handed vector and axial couplings, known as C9 and C10), and these can be visualised in a single two-dimensional plane of new-physics contributions (see “New couplings” figure). Data from b → cℓν transitions also exhibit hints of anomalous lepton-flavour non-universality.

The picture that seems to be emerging could be explained by models that involve leptoquarks or Z′ bosons (CERN Courier May/June 2019 p33). The flavour anomalies measured at the LHC disagree with the SM at the level of 2–3.5σ, which is insufficient to confirm the presence of new physics. To address these and other unanswered questions in the flavour sector, the available data sample needs to be expanded.

Luminous future  

The LHCb experiment will operate at Run 3 at an increased instantaneous luminosity and with an improved data-acquisition system. Together, these will enable a 10-fold increase in sample size. The price to pay for the increased luminosity is the daunting number of overlapping collisions in a single proton-bunch crossing, which makes sifting through billions of collisions to identify interesting topologies a challenge. Novel technologies such as graphics processing units have been incorporated in LHCb’s trigger system to speed up the processing of busy hadronic events, while new detectors have been built to reconstruct charged-particle tracks, find the vertex position and identify the particle species using state-of-the-art readout electronics (CERN Courier May/June 2022 p38). The LHCb upgrades completed during LS2 will also serve the experiment in Run 4, beginning in 2029 with the start of the ambitious High-Luminosity LHC (HL-LHC) project.

CKM unitarity triangle

During the next few years of Run 3, the LHCb experiment is expected to collect an integrated luminosity of 20–25 fb⁻¹ (compared to 6 fb⁻¹ in Run 2). This will enable significant improvements in the precision of CP-violation observables and rare-B-decay measurements. The expectation is to improve the precision on possible CP violation in Bs⁰–B̄s⁰ mixing to 10⁻³, on CP violation in the interference between mixing and decay in Bs⁰ → J/ψϕ decays to about 14 mrad, and on the CKM angle γ to 1.5°. Further probes of possible lepton-flavour non-universality are another key target. The ratios of electroweak penguin processes involving b → sℓℓ transitions, R(K) and R(K*), are expected to be determined with a precision of two to three per cent, and the ratio of semileptonic b → cℓν processes, R(D*), to a precision below one per cent (see “Anomaly squeeze” figure).
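
The projected improvements follow roughly from the size of the data sample. The luminosities below are from the text; the assumption that the uncertainties are purely statistics-dominated (precision scaling as 1/√L) is our simplification.

```python
import math

# Rough check of how the projected precisions scale with the data sample.
run2_lumi = 6.0    # fb^-1 collected by LHCb in Run 2
run3_lumi = 22.5   # fb^-1, midpoint of the projected 20-25 fb^-1 for Run 3

# Assumption: statistics-dominated uncertainties shrink as 1/sqrt(L).
improvement = math.sqrt(run3_lumi / run2_lumi)
print(f"statistical uncertainties shrink by ~{improvement:.1f}x")  # ~1.9x
# Systematic uncertainties do not shrink this way, and will increasingly
# limit the most precise of these measurements.
```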

The flavour sector delivered a great harvest in the first 10 years of LHC operations

The flavour programme in the era of the HL-LHC is even richer and more diverse. Many directions are being pursued, including precision measurements targeting CP violation and mixing in charm and beauty, and measurements of CP-conserving quantities such as the magnitudes of the CKM elements Vub, Vcb and Vtb. The end goal is to study every possible constraint and so scrutinise the overall CKM picture within the SM (see “Triangulating” figure). Regarding the anomalous b → sℓℓ and b → cℓν transitions, the long-term projections for the LHC experiments are clear: if new phenomena are found, their detailed characteristics will be established. The large data sample at the end of the HL-LHC will also allow tests of lepton-flavour violation in b → sℓℓ transitions involving tau leptons. The power of such indirect searches is their ability to elucidate the energy scale at which new particles might be present, which could point the way for the next generation of colliders.

The flavour sector delivered a great harvest in the first 10 years of LHC operations: new particles and new forms of matter were discovered, new behaviour of matter was established, stringent constraints on the CKM matrix were set and intriguing flavour anomalies have appeared. That success is only the beginning. The higher luminosity phase of the LHC beginning with Run 3 will undoubtedly generate further knowledge of particle physics, and might unveil deeper layers of nature beyond the SM.

Pushing the precision frontier

A candidate vector-boson-scattering event at CMS

Confronted with multiple questions about how nature works at the smallest scales, we exploit precise measurements of the Standard Model (SM) to seek possible answers. Those answers could further confirm the SM or give hints of new phenomena. As a hadron collider, the LHC was primarily built as a discovery machine. After more than a decade of operation, however, it has surpassed expectations. Alongside the discovery of the Higgs boson and a broad programme of direct searches for new phenomena, ultra-precise measurements of a wide range of parameters have been carried out. These include particle masses, the width of the Z boson and the production cross-sections of various SM processes spanning 10 orders of magnitude (see “Cross sections” figure); the latter are connected to a multitude of measurements including differential distributions and particle properties.

An example that is unique to the LHC is the measurement of the Higgs-boson mass, which was determined to a precision of 0.12% by CMS in 2019. Also of vital importance are the strengths of the Higgs-boson couplings to other known particles (see “Coupling strengths” figure). According to the SM, these couplings must be proportional to a particle’s mass. Each coupling in this plot is extracted from various measurements of Higgs-boson production and decay channels, and all follow the SM expectation nicely. Besides the remarkable agreement with the SM, the plot shows the result for the Higgs-boson decay to muons, which is challenging to measure because of the muon’s small mass.

Production cross section of various SM processes

The LHC-experiment collaborations are currently concluding their Run 2 measurements using proton-collision data recorded at 13 TeV while getting ready for the Run 3 startup. Among several notable achievements with the Run 2 data is the measurement of a fundamental parameter of the SM, the mass of the W boson, determined with a precision of 0.02% by ATLAS and of 0.04% in the forward region by LHCb (see “W mass” figure). Precision measurements of the W-boson mass are crucial for testing the consistency of the SM, as radiative corrections connect it with the masses of the top quark and the Higgs boson. A future combination of the LHCb result with similar measurements from ATLAS and CMS can reduce the significant uncertainty from parton distribution functions on this parameter. Although particle masses are crucial elements of the SM, it is not always possible to determine them directly. In the case of quarks, except for the heaviest top quark, their immediate hadronisation makes the properties of a bare quark inaccessible. The QCD “dead cone” – an angular region of suppressed gluon emission surrounding a heavy quark, with an opening angle proportional to the quark’s mass divided by its energy – was observed for the first time by ALICE in charmed jets, and may ultimately offer a way to access the heavy-quark mass directly.
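The W-mass consistency test mentioned above rests on the fact that, once radiative corrections are included, the SM predicts the W-boson mass from other precisely measured quantities:

```latex
m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right) = \frac{\pi \alpha}{\sqrt{2}\, G_F}\, \frac{1}{1 - \Delta r},
```

where $\alpha$ is the fine-structure constant, $G_F$ the Fermi constant and $\Delta r$ collects the loop corrections, which grow quadratically with the top-quark mass and logarithmically with the Higgs-boson mass. A sufficiently precise direct measurement of $m_W$ therefore over-constrains the theory.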

The coupling structure of the SM, especially between heavy particles, is another key aspect that is being pinned down by ATLAS and CMS. In 2017 the experiments marked an important milestone in this regard with the observation of WW scattering – a first step in a diverse programme of measurements of vector boson scattering (VBS), in which vector bosons emitted from each of the incoming quarks interact with one another (see “Critical physics” image). As VBS processes are sensitive to the self-interaction of four gauge bosons as well as to the exchange of a virtual Higgs boson, they remain a central part of the LHC physics programme during Run 3 and beyond, where the additional data will become a decisive factor.

Run 3 preparations 

The LHC is about to start a new endeavour at an unprecedented energy (13.6 TeV as opposed to 13 TeV) and with an instantaneous luminosity on average 1.5 times higher than in Run 2. In addition to higher statistics, the larger energy reach of Run 3 provides a unique opportunity to study unexplored territories in the kinematic phase space of particles. Prime targets are regions where the discovery of possible new phenomena is mainly awaiting additional data, and those where the insufficient size of the data sample is the main limiting factor on the precision.

Measurements of the mass of the W boson compared to the SM prediction

A major challenge ahead is the increased number of additional interactions within the same or nearby bunch crossings, called pileup. The large rate of interactions puts strain on different parts of the detectors as well as their trigger systems. Relying on cutting-edge technologies, experiments at the LHC have performed extensive upgrades in several subsystems, hardware and software to cope with the associated complexities and exploit the full potential of the data. In some cases, this has involved the installation of new detectors or an entire renewal, or extension, of existing subdetectors. Examples are the New Small Wheel (NSW) muon detector in ATLAS and the muon gas electron multiplier (GEM) detectors in CMS. These gas-based detectors, which are designed with the High-Luminosity LHC (HL-LHC) in mind and will be partially operational during Run 3, are installed in the endcap area of the experiments where a significant increase is expected in the particle flux. The improved muon momentum resolution they bring also plays a critical role in the trigger systems by keeping the rate low.

It is now proven that with advanced analysis strategies we can surpass the expectations from projection studies

In the ALICE experiment, among other important upgrades, the inner tracking system has undergone a complete renewal, with new silicon-based detectors providing enhanced low-momentum vertexing and tracking capabilities. At LHCb, in addition to new front-end electronics for higher-rate triggering and readout, the ring-imaging Cherenkov detector has been upgraded to deal with the large-pileup environment, while a brand new vertex locator and tracking system will allow the real-time reconstruction of charged particles. In parallel to the hardware, the LHC experiments have accomplished a substantial upgrade in software and computing, including the implementation of fast readout systems and the use of state-of-the-art graphics processing units.

Physics ahead 

The series of upgrades undertaken during Long Shutdown 2 will enable the experiments to pursue a rich physics programme during Run 3 and to get ready for Run 4 at the HL-LHC. The preparation also involves Monte Carlo event generation at the new centre-of-mass energy, full simulation of collision events in the new detectors, and designing new methods with modern tools to identify particles and analyse the data. The additional data of Run 3, together with innovative analysis techniques, will result in reduced uncertainties and therefore push the precision frontier forward. The experience from Run 2 is of great value in this regard. 

Predicted vs measured values of the coupling strengths between the Higgs boson and other SM particles

It is now proven that with advanced analysis strategies which make maximal use of the available data, we can surpass the expectations from projection studies. An example is the Higgs-boson decay to muons. Whereas early Run 3 projections suggested an uncertainty of about 20% with 300 fb⁻¹ of LHC data, in 2020 the CMS experiment achieved such precision using Run 2 data alone. In the latest projections, a further improvement of 30–35% is expected thanks to the advanced analysis strategies developed during Run 2. The projected uncertainties in Higgs-boson couplings to other SM particles, including vector bosons and third-generation leptons, are also expected to be reduced. The Higgs-boson interaction with the heaviest known particle, the top quark, is of particular interest as it may give insights into the existence and energy scale of new physics above 100 GeV. Besides the famous ttH process, simultaneous production of four top quarks is also very sensitive to the top quark’s Yukawa interaction with the Higgs boson. Exhibiting the heaviest SM final state, “four-top” is one of the rarest but most important processes. Following evidence reported by ATLAS in 2021, Run 3 data may fully establish its observation.

Among rare processes that may shed light on electroweak symmetry breaking, one can point to the VBS production of longitudinally polarised W bosons. The longitudinal polarisation is a result of electroweak symmetry breaking through which vector bosons acquire mass from their interaction with the Brout–Englert–Higgs field. Given that the analysis of Run 2 data has reached the expected significance (about 1σ) of the HL-LHC with the same luminosity, we look forward to Run 3 to test the SM with more data and further channels.

Run 3 excitement

The excitement about LHC Run 3 is not restricted to rare phenomena and new discoveries. Well-established processes such as top-quark, W- and Z-boson production are pivotal for a firm understanding of the SM. The upcoming data will provide us with gigantic statistics that translate to a significantly higher precision on the measured properties of these particles in addition to various fundamental parameters of the SM. The latter include the mass of the top quark, the precise determination of which is a critical factor in the stability of the vacuum. Early Run 3 projection studies predicted an uncertainty of 1.5 GeV on the top-quark mass. This has already been achieved in Run 2 using tt differential cross-section measurements, and will be further reduced with the upcoming Run 3 data.

The upcoming data will provide us with gigantic statistics that translate to significantly higher precision

Such levels of precision also provide invaluable feedback to the theory community, whose tremendous efforts in modelling and state-of-the-art calculations and simulations are the basis of our measurements. Thanks to the increasing sophistication and precision of SM calculations, any statistically significant deviation from theory can be an unambiguous sign of new physics. Therefore, precision measurements in Run 3 can act as a gateway to new discoveries. These include measurements of properties such as vector-boson polarisation, which are sensitive to new physics by construction, inclusive cross sections of VBS and other rare processes, and differential distributions where new phenomena can appear in the tails.

In October 2021, stable proton beams were circulated and collided at a centre-of-mass energy of 900 GeV in the LHC for the first time since 2018. While preparing for the start-up in May this year, the experiments made use of these data for a special period of commissioning to ensure their readiness to collect data in Run 3. The successful outcome of the commissioning brought further enthusiasm and motivation to the LHC-experiment collaborations, who very much look forward to executing their far-reaching Run 3 physics plans.


Heavy-ion physics: past, present and future

Tracks from a lead–lead collision

Ultra-relativistic collisions between heavy nuclei probe the high-temperature and high-density limit of the phase diagram of nuclear matter. These collisions create a new state of matter, known as the quark–gluon plasma (QGP), in which quarks and gluons are no longer confined in hadrons but instead behave quasi-freely over a relatively large volume. By creating and studying this novel state of matter, which last existed in the microseconds after the Big Bang, we gain a deeper understanding of the strong nuclear force and quantum chromodynamics (QCD).

Nearly 50 years ago, the first relativistic heavy-ion collision experiments were performed at the Bevatron at Berkeley, reaching energies of 1 to 2 GeV per nucleon. Since then, heavier ions have been collided at higher energies at Brookhaven’s AGS, CERN’s SPS and Brookhaven’s RHIC facilities. Since 2010, heavy-ion physics has entered the TeV regime with lead–lead (PbPb) collisions at 2.76 and 5.02 TeV per nucleon pair at the LHC. While the ALICE detector is designed specifically to focus on such collisions, all four large LHC experiments have active heavy-ion physics programmes and are contributing to our understanding of extreme QCD matter.

In a heavy-ion collision, the initial energy deposited by the colliding nuclei undergoes a fast equilibration, within roughly 10⁻²⁴ s, to form the QGP. The resulting deconfined and thermalised medium expands and cools over the next few 10⁻²⁴ s, before the quarks and gluons recombine to form a hadron gas. It is the goal of heavy-ion experiments at the LHC to use the detected final-state hadrons to reconstruct the properties and dynamical behaviour of the system throughout its evolution. So far, the LHC experiments have delivered a series of results that are sensitive to various aspects of the heavy-ion collision system, with Run 3 set to push our understanding much further.

Properties and dynamics

The initial energy-density distribution and subsequent expansion of the heavy-ion collision system is largely determined by the geometrical overlap of the colliding nuclei. Collisions can range from head-on “central” collisions, where the nuclear overlap is large, to glancing “peripheral” collisions where the overlap region is smaller and roughly almond-shaped. Since the interaction region in non-central events is not rotationally symmetric, anisotropic pressure gradients build up. These preferentially boost particles along the minor axis of the ellipsoidal overlap region, resulting in an observable anisotropy in the distribution of final-state hadrons. The distribution of the particles in the azimuthal angle can be described well by a Fourier cosine series, where the largest term is the second harmonic, characterised by the parameter v2, due to the ellipsoidal shape of the nuclear overlap region. Fluctuations in the positions of the individual constituent nucleons lead to significant higher-order terms. It was discovered that these Fourier coefficients, vn, are best described by models where the QGP dynamics obeys hydrodynamic equations, and thus behaves as a liquid exhibiting what we call “collective flow”.
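Written out, the azimuthal distribution of final-state particles takes the form

```latex
\frac{dN}{d\varphi} \propto 1 + 2 \sum_{n=1}^{\infty} v_n \cos\!\big[ n(\varphi - \Psi_n) \big],
```

where $\Psi_n$ is the nth-harmonic symmetry-plane angle; $v_2$ is commonly called elliptic flow, and $v_3$, driven by nucleon-position fluctuations, triangular flow.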

A global Bayesian fit to measurements of centrality dependence

Remarkably, in order to fit hydrodynamic models to experimental data it is necessary for the medium’s viscosity to be very low, corresponding to a shear-viscosity to entropy-density ratio of the order of η/s ≈ 0.1. With a shear viscosity that is orders of magnitude smaller than other materials, the QGP is known as the “perfect” liquid. Measurements of the higher-order harmonics, as well as their event-by-event fluctuations and correlations, provide even greater sensitivity to medium properties and the initial-state dynamics. Precision measurements of the vn harmonics, charged-particle density, mean transverse momentum pT, and mean-pT fluctuations by ALICE have been used to extract the shear and bulk viscosity of the system as a function of temperature (see “Flow coefficients” figure).
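As an illustration of how such flow harmonics are extracted, the sketch below estimates v2 from particle azimuthal angles using the standard two-particle Q-cumulant relation ⟨|Q₂|²⟩ = M + M(M−1)v₂². The toy event generation and all numbers are illustrative, not the experiments' actual analysis chain:

```python
import cmath
import math
import random

def sample_event(m, v2, psi, rng):
    """Draw m azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi))
    by rejection sampling (psi plays the role of the symmetry plane)."""
    phis = []
    while len(phis) < m:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        # accept with probability f(phi) / f_max
        if rng.random() * (1.0 + 2.0 * v2) <= 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - psi)):
            phis.append(phi)
    return phis

def v2_two_particle(events):
    """Estimate v2{2} from per-event second-harmonic Q-vectors:
    <|Q2|^2> = M + M(M-1)*v2^2 for independently emitted particles."""
    num, den = 0.0, 0.0
    for phis in events:
        m = len(phis)
        q2 = sum(cmath.exp(2j * phi) for phi in phis)
        num += abs(q2) ** 2 - m      # subtract self-correlation pairs
        den += m * (m - 1)
    return math.sqrt(max(num / den, 0.0))

rng = random.Random(42)
events = [sample_event(500, 0.10, rng.uniform(0.0, 2.0 * math.pi), rng)
          for _ in range(200)]
print(round(v2_two_particle(events), 2))  # recovers a value near the input v2 = 0.10
```

Real analyses additionally suppress "non-flow" correlations (from jets, resonance decays, etc.) using pseudorapidity gaps or higher-order cumulants.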

While the QGP created in heavy-ion collisions is too small and short-lived to be examined with conventional probes, its properties can be investigated using the products of hard (high momentum-transfer, q²) scatterings that occur in the early stages of the collision and then propagate through the medium as it evolves. The production rates of these internally generated hard probes can be calculated in perturbative QCD and thus are considered calibrated probes of the QGP medium. The high-momentum quarks and gluons produced in these hard scatterings traverse the medium and fragment into collimated jets of hadrons. While these jets appear as a small signal on top of a large, fluctuating background, advances in re-clustering algorithms as well as the higher production rates of jets at the LHC have made it possible to study jets with high precision across a wide range of energies.

The increase in the LHC luminosity will allow us to perform measurements that were previously inaccessible

Compared to jets in proton–proton (pp) collisions, jets in nucleus–nucleus (AA) collisions appear significantly suppressed or “quenched” due to their interactions with the medium (see “Jet quenching” figure). This is in contrast to electroweak probes, which interact only minimally with the coloured QGP medium. When the presence of hard scattering is identified by a high-pT jet, photon or Z boson, the recoiling jet measured in the opposite direction is often reconstructed with a significantly lower energy, indicating that some of its energy has been transferred and absorbed by the medium. Recent, detailed jet-structure studies show that jets in heavy-ion collisions are softer (they fragment into lower-pT hadrons) and broader than their counterparts in pp collisions, due to their interactions with the surrounding coloured QGP medium.

Another class of hard probes are heavy-flavour hadrons, since even heavy quarks (charm and beauty) with low pT are produced in high-q² processes. Similar to jets, which mainly come from the fragmentation of light quarks and gluons, heavy hadrons are also suppressed in heavy-ion collisions relative to pp collisions. Recent precision measurements at the LHC of the yield of D mesons (containing charm quarks) as well as non-prompt D and J/ψ mesons (from the decays of hadrons containing beauty quarks), compared to the yields in pp collisions, demonstrate a mass-dependent suppression. This observation is consistent with the “dead cone” effect, which predicts that quarks with larger masses will be less significantly suppressed than those with smaller masses. The suppression of quarkonia (quark–antiquark bound states) depends on the binding energy, with loosely bound states such as the Υ(3S) and ψ(2S) more likely to become dissociated in the hot and dense medium than the tightly bound Υ(1S) and J/ψ(1S) states. However, it was discovered at the LHC that final-state J/ψ are actually less suppressed than in lower-energy AA collisions at RHIC. This was attributed to the larger number of charm quarks being produced at LHC energies, which enhances the probability that charm and anti-charm quarks can recombine to form J/ψ states within the QGP. These dual effects of suppression and recombination are considered a signature of the production of a deconfined, thermalised medium in heavy-ion collisions.

Freeze out

As the QGP expands and cools, it undergoes a phase transition into a hadron gas in which quarks and gluons become confined into hadrons. At chemical freeze-out, inelastic collisions cease and the thermochemical properties of the system become fixed. Comparing ALICE measurements of the inclusive yields of multiple hadron species with a model of statistical hadronisation shows excellent agreement over nine orders of magnitude in mass, from pions to anti-⁴He nuclei (see “Statistical production” figure). This indicates that the bulk chemistry of the QGP freeze-out can be described by purely statistical particle production from a system in thermal equilibrium with a common temperature (155 MeV) and volume (~5000 fm³).
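In such statistical-hadronisation fits, the equilibrium density of each hadron species i is fixed (in the Boltzmann approximation, and neglecting the small chemical potentials relevant at LHC energies) by nothing more than its mass mᵢ, its degeneracy gᵢ and the common temperature T:

```latex
n_i = \frac{g_i}{2\pi^2}\, m_i^2\, T\, K_2\!\left(\frac{m_i}{T}\right),
```

where $K_2$ is a modified Bessel function of the second kind (natural units). The exponential suppression contained in $K_2$ for $m_i \gg T$ is what spreads the measured yields over nine orders of magnitude from pions to anti-⁴He.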

Suppression of the number of reconstructed jets

One of the first surprising results to come from the LHC was the discovery of azimuthal correlations between particles over large distances in pseudorapidity in small collision systems, pp and pPb. Such long-range correlations had previously been observed in heavy-ion collisions, where they are traditionally attributed to anisotropic flow (parameterised by vn coefficients). However, the presence of collective behaviour in small systems, where a QGP was not expected to be formed, raised many questions about our understanding of both large and small nuclear collisions.

A second surprising observation was made in the measurement of the ratios of strange and multistrange hadrons (e.g. K⁰S, Λ, Ξ and Ω) with respect to pions, as a function of the number of particles produced in the collision (multiplicity). The enhancement of strangeness production in AA compared to pp collisions was historically predicted as a signature of the formation of a QGP, although it is now understood as being due to the suppression of strangeness in small systems. However, measurements by ALICE showed a smooth increase in the strangeness enhancement with multiplicity across all collision systems: pp, pPb, XeXe and PbPb – opening further questions about the presence of a thermalised medium in both small and large systems.

In contrast, the suppression of hard probes, which has long been viewed as a complementary effect to anisotropic flow, has not been observed in pp or pPb collisions within current experimental uncertainties. In order to gain a more complete understanding of QCD from the soft to the hard scales, and from small to large systems, we must expand our experimental programmes.

To Run 3 and beyond

All four large experiments at the LHC have undergone significant upgrades during Long Shutdown 2 to extend their reach and allow the collection of heavy-ion data at higher luminosities. The increase in luminosity by a factor of 10 in Runs 3 and 4 at the LHC will allow us to make precision measurements of soft and hard probes of the QGP. Rare probes such as heavy-flavour hadrons will become accessible with high statistical precision, and we will be able to explore the charm and beauty sector at a level commensurate with that of the strangeness studies in Runs 1 and 2. Jet measurements will become significantly more precise as we further explore the medium-induced modification of well-calibrated probes such as γ– and Z-tagged jets. 

Our understanding of collective behaviour and the medium evolution will be enhanced by studies of the correlations and fluctuations of flow coefficients, which provide additional and complementary information above and beyond what we learn from vn alone. Measurements that were severely statistically limited in Runs 1 and 2, such as those of virtual photons produced as thermal radiation, will be performed with unprecedented precision in Runs 3 and 4. The higher-order fluctuations of identified particles, which are expected to be sensitive to critical behaviour around the phase transition, will also come within reach in Runs 3 and 4 and make it possible to map out the phase diagram of QCD matter in great detail.

Thermal-model fits

Furthermore, studies of small systems will continue to shed light on the development of QGP-like signals from pp to AA collisions. In particular, oxygen nuclei will be collided at the LHC, which will allow us to investigate collective effects in collisions with a geometry similar to PbPb collisions but with multiplicities of the order of those in pp and pPb collisions. High-precision and multi-differential jet measurements in pp, pPb and OO collisions will finally allow us to resolve open questions about the relationship between jet quenching and collective behaviour, and whether such effects are observed across all nuclear collision systems. Through these experimental measurements, we will make major progress in our understanding of nuclear matter from small to large collision systems, towards our ultimate goal of a unified description of QCD phenomenology from the microscopic level to the emergent bulk properties of the QGP.

While the heavy-ion physics programme in Runs 3 and 4 will provide deep insights into the rich field of QCD phenomenology, open questions will remain that can only be addressed with further advancements in detector performance and with the significant increase in heavy-ion luminosity anticipated in Run 5 (expected in 2035–2038). This extension of the LHC heavy-ion programme through the 2030s has been supported by the 2020 update of the European strategy for particle physics, and the LHC-experiment collaborations are exploring the potential for novel measurements in light- and heavy-ion collision systems based on their planned detector upgrades. In particular, ALICE is proposing to build a new dedicated heavy-ion experiment, ALICE 3, based on a large-acceptance ultra-light (low material budget) silicon tracking system surrounded by multiple layers of particle identification technology. The increase in the LHC luminosity coupled with state-of-the-art detector upgrades will allow us to dramatically extend our experimental reach and perform measurements that were previously inaccessible. The goals of the future heavy-ion programme at the LHC – from measuring electromagnetic radiation from the QGP and exotic heavy-flavour hadrons to beyond-the-Standard-Model searches for axions – will provide unprecedented insight into the fundamental constituents and forces of nature. 

Accessing the precursor stage of QGP formation

ALICE figure 1

The primary goal of the ultrarelativistic heavy-ion collision programme at the LHC is to study the properties of the quark–gluon plasma (QGP), a state of strongly interacting matter in which quarks and gluons are deconfined over large distances compared to the typical size of a hadron. The rapid expansion of the QGP under large pressure gradients is imprinted in the momentum distributions of final-state particles. The azimuthal-anisotropy flow coefficients vn and the mean transverse momentum pT of particles, which are described by hydrodynamic models, have been extensively measured by experiments at the LHC and at the RHIC collider. These observables are also used as experimental inputs to global Bayesian analyses that provide information on both the initial stages of the heavy-ion collision, before QGP formation, and on key transport coefficients of the QGP itself, such as the shear and bulk viscosities. However, due to the limited constraints on the initial conditions, uncertainties remain in the QGP’s transport coefficients.

The ALICE collaboration recently reported correlations between vn and pT in terms of the modified Pearson coefficient ρ. The measurements were performed in lead–lead (PbPb) and xenon–xenon (XeXe) collisions at centre-of-mass energies per nucleon–nucleon collision of 5.02 and 5.44 TeV, respectively. As the correlations between vn and pT are predicted to be mainly driven by the shape and size of the initial profile of the energy distribution in the transverse plane, these studies provide a new approach to characterise the initial state. 

The measurements show a positive correlation between vn and pT in both PbPb and XeXe collisions (figure 1). These measurements are compared to hydrodynamic calculations using the initial-state models IP-Glasma (based on the colour-glass-condensate effective theory with gluon saturation) and Trento, a parameterised model with nucleons as the relevant degrees of freedom. The centrality dependence of ρ is better described by IP-Glasma than by Trento. In particular, the positive measured values of ρ suggest an effective nucleon width of the order of 0.3–0.5 fm, which is significantly smaller than what has been extracted in all Bayesian analyses using Trento initial conditions. The Pearson correlation measurements can now be included in Bayesian analyses to better constrain the initial state in nuclear collisions, thus impacting the resulting QGP parameters. As a bonus, the measurements in XeXe collisions are sensitive to the quadrupole deformation parameter β2 of the ¹²⁹Xe nucleus, potentially opening a new window for studying nuclear structure with ultrarelativistic heavy-ion collisions.
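At its core, ρ is a Pearson correlation between event-by-event values of vn² and the mean pT (ALICE uses a version modified to isolate dynamical fluctuations). The toy sketch below, with purely illustrative numbers, shows how a shared initial-state fluctuation induces a positive plain Pearson coefficient:

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Toy events: one shared fluctuation (the initial transverse size) pushes
# both v2^2 and the event-mean pT in the same direction, giving rho > 0.
rng = random.Random(7)
v2_sq, mean_pt = [], []
for _ in range(1000):
    size = rng.gauss(0.0, 1.0)   # illustrative initial-state size fluctuation
    v2_sq.append(0.010 + 0.002 * size + rng.gauss(0.0, 0.001))
    mean_pt.append(0.680 + 0.010 * size + rng.gauss(0.0, 0.005))

print(round(pearson(v2_sq, mean_pt), 2))  # positive, as observed in the data
```

In the real measurement both quantities also carry statistical noise from the finite event multiplicity, which the modified coefficient is designed to divide out.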

Higgs-boson charm coupling weaker than bottom

ATLAS figure 1

Within the Standard Model (SM), the Higgs boson is predicted to interact with (or couple to) quarks with a strength proportional to their mass. By measuring these interaction strengths, physicists can test this prediction and gain insight into possible physics beyond the SM, where such couplings can be modified. In a new analysis exploiting the full Run-2 dataset, the ATLAS collaboration experimentally excludes new-physics scenarios which predict that decays of the Higgs boson to a pair of charm quarks (H → cc) are as frequent as those to bottom quarks (H → bb). 

The search for H → cc is hampered by abundant background processes. In order to identify charm-quark signatures, a new multivariate classification method was developed to identify charm hadrons within jets, while simultaneously reducing the probability of misidentifying jets originating from a bottom quark. To maximise the sensitivity to the signal, events with one or two charm-tagged jets were selected. Background processes were further suppressed by selecting Higgs-boson events produced together with a weak boson, VH(cc), where the weak boson (V = W or Z) decays to 0, 1 or 2 electrons or muons. In total, 44 regions were fitted simultaneously to measure the H → cc process.

In the SM, the H → cc process accounts for only 3% of all Higgs-boson decays. The ATLAS analysis found no significant signal of this process in the data, setting an upper limit on the rate of the VH(cc) process at 26 times the SM rate at 95% confidence level. This limit constrains the Higgs-to-charm coupling strength to less than 8.5 times the predicted SM value. The analysis strategy is validated by measuring events with two vector bosons that contain the decay of a W boson to one charm quark, VW(cq), or the decay of a Z boson to two charm quarks, VZ(cc), whose rates are found to agree with the predictions. The combined dijet-mass distribution, after subtraction of the backgrounds, is shown in figure 1.

ATLAS figure 2

Since H → cc and H → bb decays lead to very similar signatures in the ATLAS detector, a combined analysis of both processes is key to a common interpretation. The multivariate classification method is used to identify jets as originating from a bottom quark, a charm quark or lighter quarks. Since a fraction of the H → bb events passes the selection criteria of the H → cc analysis and vice versa, the individual analyses are designed to ensure that no collision events are counted twice. This orthogonality between the analyses enabled a simultaneous measurement of the two processes for the first time.

Within the SM, the ratio of the couplings of bottom and charm quarks to the Higgs boson is given by their mass ratio: mb/mc = 4.578 ± 0.008, obtained from lattice-QCD calculations. With its novel combination of H → cc and H → bb decays, the ATLAS analysis excludes the hypothesis that the Higgs-boson interaction with charm quarks is stronger than or equal to the interaction with bottom quarks at 95% confidence level (figure 2). For the first time, this measurement establishes that the Higgs-boson coupling is smaller for charm quarks than for bottom quarks.
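The SM benchmark being tested can be stated in one line: since each Yukawa coupling is $y_q = \sqrt{2}\, m_q / v$, the ratio is fixed by the quark masses alone,

```latex
\frac{y_b}{y_c} = \frac{m_b}{m_c} = 4.578 \pm 0.008,
```

so any observation of a charm coupling at or above the bottom coupling would have directly contradicted the SM mass–coupling hierarchy.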

LHCb constrains cosmic antimatter production

LHCb figure 1

During their 10-million-year-long journey through the Milky Way, high-energy cosmic rays can collide with particles in the interstellar medium, the ultra-rarefied gas filling our galaxy and mostly composed of hydrogen and helium. Such rare encounters are believed to produce most of the small number of antiprotons, about one per 10,000 protons, that are observed in high-energy cosmic rays. But this cosmic antimatter could also originate from unconventional sources, such as dark-matter annihilation, motivating detailed investigations of antiparticles in space. This effort is currently led by the AMS-02 experiment on the International Space Station, which has reported results with unprecedented accuracy.

The interpretation of these precise cosmic antiproton data calls for a better understanding of the antiproton production mechanism in proton-gas collisions. Here, experiments at accelerators come to the rescue. The LHCb experiment has the unique capability of injecting gas into the vacuum of the LHC accelerator. By injecting helium, cosmic collisions are replicated in the detector and their products can be studied in detail. LHCb already provided a first key input into the understanding of cosmic antimatter by measuring the amount of antiprotons produced at the proton–helium collision vertex itself. In a new study, this measurement has been extended by including the significant fraction (about one third) of antiprotons resulting from the decays of antihyperons such as the anti-Λ, which contain a strange antiquark also produced in the collisions.

These antiprotons are displaced from the collision point in the detector, as the antihyperons can fly several metres through the detector before decaying. Different antihyperon states and decay chains are possible, all contributing to the cosmic antiproton flux. To count them, the LHCb team exploited two key features of its detector: the ability to distinguish antiprotons from other charged particles via two ring-imaging Cherenkov (RICH) detectors, and the outstanding resolution of the LHCb vertex locator. Thanks to the latter, when checking the compatibility of the identified antiproton tracks with the collision vertex, three classes of antiprotons can be clearly resolved (figure 1): “prompt” particles originating from the proton–helium collision vertex; detached particles from anti-Λ decays; and more separated particles produced in secondary collisions with the detector material.

The majority of the detached antiprotons are expected to originate from anti-Λ particles produced at the collision point and decaying to an antiproton and a positive pion. A second study was thus performed to fully reconstruct these decays by identifying the decay vertex. The results of this complementary approach show that about 75% of the observed detached antiprotons originate from anti-Λ decays, in good agreement with theoretical predictions.

These new results provide an important input for modelling the expected antiproton flux from cosmic collisions. No smoking gun for an exotic source of cosmic antimatter has emerged yet, and the accuracy of this quest would profit from further accelerator inputs. The LHCb collaboration therefore plans to expand its “space mission” with the new gas target SMOG2, which could also enable collisions between protons and hydrogen or deuterium targets, further strengthening the ties between the particle and astroparticle physics communities.

Science diversity at the intensity and precision frontiers

The EHN1 experimental hall

While all eyes focus on the LHC restart, a diverse landscape of fixed-target experiments at CERN has already begun data-taking. Driven by beams from smaller accelerators in the LHC chain, they span a large range of research programmes at the precision and intensity frontiers, complementary to the LHC experiments. Several new experiments join existing ones in the new run period, in addition to a suite of test-beam and R&D facilities.

At the North Area, which is served by proton and ion beams from the Super Proton Synchrotron (SPS), new physics programmes have been underway since the return of beams last year. Experiments in the North Area, which celebrated its 40th anniversary in 2019, are located at different secondary beamlines and span QCD, electroweak physics and QED, as well as dark-matter searches. “During Long Shutdown 2, a major overhaul of the North Area started and will continue during the next 10 years to provide the best possible beam and infrastructure for our users,” says Yacine Kadi, leader of the North Area consolidation project. “The most critical part of the project is to prepare for the future physics programme.”

The first phase of the AMBER facility at the M2 beamline is an evolution of COMPASS, which has operated since 2002 and focuses on the study of the gluon contribution to the nucleon spin structure. By measuring the proton charge radius via muon–proton elastic scattering, AMBER aims to clarify the long-standing proton–radius puzzle, offering a complementary approach to previous electron–proton scattering and spectroscopy measurements. A new data-acquisition system will enable the collaboration to measure the antiproton production cross-section to improve the sensitivity of searches for cosmic antiparticles from possible dark-matter annihilation. A third AMBER programme will concentrate on measurements of kaon, pion and proton structure via Drell–Yan processes using heavy targets.

A second North Area experiment specialising in hadron physics is NA61/SHINE, which underwent a major overhaul during Long Shutdown 2 (LS2), including the re-use of the vertex detector from the ALICE experiment. Building on its predecessor NA49, the 17 m-long NA61/SHINE facility, situated at the H2 beamline, focuses on three main areas: strong interactions, cosmic rays and cross-section measurements for neutrino physics. The collaboration continues its study of the energy dependence of hadron production in heavy-ion collisions, in which NA49 found irregularities. It also aims to observe the critical point at which the phase transition from a quark–gluon plasma to a hadron gas takes place, the threshold energy for which is accessible only at the SPS, rather than at the higher-energy LHC or RHIC experiments. By measuring hadron production from pion–carbon interactions, meanwhile, the team will study the properties of high-energy cosmic rays from cascades of charged particles. Finally, using kaons and pions produced from a target replicating that of the T2K experiment in Japan, NA61/SHINE will help to determine the neutrino flux composition at the future DUNE and Hyper-Kamiokande experiments for precise measurements of neutrino mixing angles and the CP-violating phase.

New physics

Situated at the same H2 beamline, the new NA65 “DsTau” experiment will study the production of Ds mesons. This is important because Ds decays are the main source of ντs in a neutrino beam, and are therefore relevant for neutrino-oscillation studies. After a successful pilot run in 2018, a measurement campaign began in 2021 to determine the ντ-production flux.

The newly renovated East Area

At the K12 secondary beamline, NA62 continues its measurement of the ultra-rare charged kaon decay to a charged pion, a neutrino and an antineutrino, which is very sensitive to possible physics beyond the Standard Model. The collaboration aims to increase its sensitivity to a level (10%) approaching theoretical uncertainties, thanks to further data and experimental improvements to the more than 200 m-long facility. One is the installation during LS2 of a muon veto hodoscope that helps to determine whether a muon is coming from a kaon decay or from other interactions. Since 2021, NA62 also operates as a beam-dump experiment, where its primary focus is to search for feebly-interacting particles. Here, the ability to determine whether muons come from the target absorber is even more important since they make up most of the background.

Dark interactions

Searching for new physics is the focus of NA64 at the H4 beamline, which studies the interaction between an electron beam and an active target to look for a hypothetical dark-photon mediator connecting the SM with a possible dark sector. With at least five times more data expected this year, and up to 10 times more data during the period of LHC Run 3, it could be possible to determine whether the dark mediator, should it exist, is either an elastic scalar or a Majorana particle. Adding further impetus to this programme is an unexpected 17 MeV peak reported in e+e− internal pair production by the ATOMKI experiment and, more significantly, the tension between the measured and predicted values of the anomalous magnetic moment of the muon (g-2)μ, for which possible explanations include models that invoke a dark mediator. During a planned muon run at the M2 beamline, the collaboration aims to cover the relevant parameter space for the (g-2)μ anomaly.

NA63 also receives electrons from the H4 beamline and uses a high-energy electron beam to study the behaviour of scattered electrons in a strong electromagnetic field. In particular, the experiment tests QED at higher orders, which have a gravitational analogue in extreme astroparticle physics phenomena such as black-hole inspirals and magnetars. The NA63 team will continue its measurements in June.

Besides driving the broad North Area physics programme, the SPS serves protons to AWAKE – a proof-of-principle experiment investigating the use of plasma wakefields driven by a proton bunch to accelerate charged particles. Following successful results from its first run, the collaboration aims to further develop methods to modulate the proton bunches to demonstrate scalable plasma-wakefield technology, and to prepare for the installation of a second plasma cell and an electron-beam system using the whole CNGS tunnel at the beginning of LS3 in 2026.

Located on the main CERN site, receiving beams from the Proton Synchrotron (PS), the East Area underwent a complete refurbishment during LS2, leading to a 90% reduction in its energy consumption. Its main experiment is CLOUD, which simulates the impact of particulates on cloud formation. This year, the collaboration will test a new detector component called FLOTUS, a 70 litre quartz chamber extending the simulation from a period of minutes to a maximum of 10 days. The PS also feeds the n_TOF facility, which last year marked 20 years of service to neutron science and its applications. A new third-generation spallation target installed and commissioned in 2021 will enable new n_TOF measurements relevant for nuclear astrophysics. 

Different dimensions

Taking CERN science into an altogether different dimension, the PS also links to the Antimatter Factory via the Antiproton Decelerator (AD) and ELENA rings, where several experiments are poised to test CPT invariance and antimatter gravitational interactions at increased levels of precision (see “Antimatter galore at ELENA” panel). Even closer to the proton beam source is the PS Booster, which serves the ISOLDE facility. ISOLDE covers a diverse programme across the physics of exotic nuclei and includes MEDICIS (devoted to the production of novel radioisotopes for medical research), ISOLTRAP (comprising four ion traps for high-precision mass measurements of ions) and COLLAPS and CRIS, which focus on laser spectroscopy. Its post-accelerators REX/HIE-ISOLDE increase the beam energy up to 10 MeV/u, making ISOLDE the only facility in the world that provides radioactive ion-beam acceleration in this energy range.

Antimatter galore at ELENA

Experiments in the AD hall

Served directly by the Antiproton Decelerator (AD) for the past two decades, experiments at the CERN Antimatter Factory are now connected to the new ELENA ring, which decelerates 5.3 MeV antiprotons from the AD to 100 keV to allow a 100-fold increase in the number of trapped antiprotons. Six experiments involving around 350 researchers use ELENA’s antiprotons for a range of unique measurements, from precise tests of CPT invariance to novel studies of antimatter’s gravitational interactions. 

The ALPHA experiment focuses on antihydrogen-spectroscopy measurements, recently reaching an accuracy of two parts per trillion in the transition from the ground state to the first excited state. By clocking the free-fall of antiatoms released from a trap, it is also planning to measure the gravitational mass of antihydrogen. ALPHA’s recent demonstration of laser-cooled antihydrogen has opened a new realm of precision on antihydrogen’s internal structure and gravitational interactions to be explored in upcoming runs.

ASACUSA specialises in spectroscopic measurements of antiprotonic helium, recently finding surprising behaviour. The experiment is also gearing up to perform hyperfine-splitting spectroscopy in antihydrogen using atomic-beam methods complementary to ALPHA’s trapping techniques.

GBAR and AEgIS target direct measurements of the Earth’s gravitational acceleration on antihydrogen. GBAR, which is injected directly with antiprotons from ELENA to maximise antihydrogen production, is developing a method to measure the free-fall of antihydrogen atoms that are chilled via sympathetic laser cooling and then, after neutralisation, released from a trap. AEgIS, having established pulsed formation of antihydrogen in 2018, is following a different approach based on measuring the vertical drop of a pulsed cold beam of antihydrogen atoms travelling horizontally through a device called a Moiré deflectometer.

BASE uses advanced Penning traps to compare matter and antimatter with extreme precision, recently finding the charge-to-mass ratios of protons and antiprotons to be identical within 16 parts per trillion. The data also allowed the collaboration to perform the first differential test of the weak equivalence principle using antiprotons, reaching the 3% level, with experiment improvements soon expected to increase the sensitivities of both measurements. The BASE team is also working on an improved measurement of the antiproton magnetic moment, the implementation of a transportable antiproton trap called BASE-STEP and improved searches for millicharged particles.

The newest AD experiment, PUMA, which is preparing for first commissioning later this year, aims to transport trapped antiprotons collected at ELENA to ISOLDE where, from next year, they will be annihilated on exotic nuclei to study neutron densities at the surface of nuclei. 

“Thanks to the beam provided by ELENA and the major upgrades of the experiments, we hope to see big progress in ultra-precise tests of CPT invariance, first and long-awaited antihydrogen-based studies of gravity, as well as the development of new technologies such as transportable antimatter traps,” says Stefan Ulmer, head of the AD user committee. 

Stable and highly customisable beams at the North and East areas also facilitate important detector R&D and test-beam activities. These include the recently approved Water-Cherenkov Test Experiment, which will help to develop detector techniques for long-baseline neutrino experiments, and new detector components for the LHC experiments and proposed future colliders. The CERN Neutrino Platform is dedicated to the development of detector technologies for neutrino experiments across the world. Upcoming activities include ongoing contributions to the future DUNE experiment in the US, in particular the two huge DUNE cryostats and R&D for “vertical drift” liquid-argon detection technology. In the East Area, the mixed-field irradiation (CHARM) and proton-irradiation (IRRAD) facilities provide key input to detector R&D and electronics tests, similar to the services provided by the SPS-driven GIF irradiation facility and HiRadMat.

With the many physics opportunities mapped out by Physics Beyond Colliders and the consolidation of our facilities, we are looking into a bright future

Johannes Bernhard

Fixed-target experiments in the North and East areas, along with experiments at ISOLDE and the AD, demonstrate the importance of diverse physics studies at CERN, when the best path to discover new physics is unclear. Some of these experiments emerged within the Physics Beyond Colliders initiative and there are many more on the horizon, such as KLEVER and the SPS Beam Dump Facility. “With the many physics opportunities mapped out by Physics Beyond Colliders and the consolidation of our facilities, we are looking into a bright future,” says Johannes Bernhard, head of the liaison to experiments section in the beams department. “We are always aiming to serve our users with the highest beam quality and performance possible.”

CDF sets W mass against the Standard Model

The CDF detector

Ever since the W boson was discovered at CERN’s Spp̄S four decades ago, successive collider experiments have pinned down its mass at increasing levels of precision. Unlike the fermion masses, the W mass is a clear prediction of the Standard Model (SM). At lowest order in electroweak theory, it depends solely on the mass of the Z boson and the value of the weak mixing angle. But higher-order corrections introduce an additional dependence on the gauge-boson couplings and the masses of other SM particles, in particular the heavy top quark and Higgs boson. With the precision of electroweak calculations now exceeding that of direct measurements, better knowledge of the measured W mass provides a vital test of the SM’s consistency.

The immediate reaction was silence

Chris Hays

A new measurement by the CDF collaboration based on data from the former Tevatron collider at Fermilab throws a curveball into this picture. Published today in Science, the CDF W-mass measurement – the most precise to date – stands 7σ from the SM prediction, upsetting decades of steady convergence between experiment and theory.

“I would say the immediate reaction was silence,” says Chris Hays, one of the CDF analysis leads, of the moment the measurement was unblinded on 19 November 2020. “Then there was some discussion to ensure the unblinding worked, i.e. that the value was correct, and to decide what would be the next steps.”

Long slog
CDF physicists have been measuring the mass of the W boson for more than 30 years via its decays to a lepton and a neutrino. In 2012, shortly after the Tevatron shut down, CDF published a W mass of 80,387 ± 12 (stat) ± 15 (syst) MeV based on 2.2 fb⁻¹ of data, which significantly exceeded the precision of all previous measurements at that time combined. After 10 years of careful analysis and scrutiny of the full Tevatron dataset (8.8 fb⁻¹, corresponding to about 4.2 million W-boson candidates), and taking into account an improved understanding of the detector and advances in the theoretical and experimental understanding of the W’s interactions with other particles, the new CDF result is twice as precise: 80,433.5 ± 6.4 (stat) ± 6.9 (syst) MeV.

In addition to the four-fold increase in statistics, the measurement benefits from a better understanding of systematic uncertainties. One significant change concerns the proton/antiproton parton distribution functions (PDFs), where the addition of LHC data to the PDF fits has reduced the uncertainty from 10 MeV to 3.9 MeV while also slightly raising the central value of the 2012 result.

LHCb-FIGURE-2022-003

“The 2012 and 2022 CDF values are in agreement at the level of two sigma accounting for the fact that approximately 25% of the events are in common, so the internal tension is not so significant,” explains CDF collaborator Mark Lancaster, who was an internal reviewer for the result. “But the tension with other results — particularly ATLAS at 80,370 ± 19 MeV and the SM at 80,357 ± 6 MeV — is significant. Many people from the LHC, Tevatron and theory community are presently working together to combine the results from the Tevatron, LHC and LEP and understand the correlations between them, e.g. in the PDFs and some of the higher order QCD and QED effects.”
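As a back-of-the-envelope check (a sketch only: the published comparisons account for correlations between uncertainties, which a simple quadrature sum ignores), the headline significance can be reproduced from the quoted numbers:

```python
from math import hypot

# Quoted values in MeV: CDF 2022 measurement and the SM expectation
m_cdf, stat, syst = 80433.5, 6.4, 6.9
m_sm, sm_err = 80357.0, 6.0

# Combine statistical and systematic uncertainties in quadrature
total_cdf = hypot(stat, syst)

# Naive significance of the CDF-SM difference, ignoring correlations
sigma = abs(m_cdf - m_sm) / hypot(total_cdf, sm_err)

print(f"CDF total uncertainty: {total_cdf:.1f} MeV")  # ~9.4 MeV
print(f"Tension with the SM:   {sigma:.1f} sigma")    # ~6.9, i.e. the quoted 7 sigma
```

The same quadrature sum applied to the 2012 result (12 MeV stat, 15 MeV syst) gives its quoted ~19 MeV total uncertainty, roughly twice that of the new measurement.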

It’s now up to theorists and other experiments to follow up on the CDF result, comments CDF co-spokesperson David Toback. “If the difference between the experimental and expected value is due to some kind of new particle or subatomic interaction, which is one of the possibilities, there’s a good chance it’s something that could be discovered in future experiments,” he says.

Cross checks
Results from the LHC experiments are crucial to enable a deeper understanding. One of the challenges in measuring the W mass in high-rate proton–proton collisions at the LHC is event “pile-up”, which makes it hard to reconstruct the missing transverse energy from neutrinos. The higher collision energy at the LHC also means W bosons are produced with larger transverse momenta with respect to the beam axis, which needs to be modelled properly in order to measure the W-boson mass precisely.

It takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily

Florencia Canelli

The ATLAS collaboration published the first high-precision measurement of the W mass at the LHC in 2018 based on data collected at a centre-of-mass energy of 7 TeV, and is currently working on new measurements. In September, based on 2016 data, LHCb published its first measurement of the W mass: 80,354 ± 32 MeV, and estimates that an uncertainty of 20 MeV or less is achievable with existing data. CMS is also proceeding with analyses that should soon see its first public result. “It’s an important measurement of our physics programme,” says CMS physics co-coordinator Florencia Canelli. “As the CDF result shows, precision physics can be a challenging and lengthy process: it takes a very long time to understand all aspects of the data to the level of precision required for a competitive W-mass measurement, and it takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily.”

The CDF result reiterates the central importance of precision measurements in the search for new physics, write Claudio Campagnari (UC Santa Barbara) and Martijn Mulders (CERN) in a Perspective article accompanying the CDF paper. They point to the increased precision that will be available at the High-Luminosity LHC and the capabilities of future facilities such as the proposed Future Circular Collider, the e+e− mode of which “would offer the best prospects for an improved W-boson mass measurement, with a projected sensitivity of 7 ppm”. Such a measurement would also demand the SM electroweak calculations be performed at higher orders, a challenge firmly in the sights of the theory community.

Following the 2012 discovery of the Higgs boson, whose measured mass fixed the last free input to the electroweak fit, it is not easy to tweak the SM parameters without ruining the excellent agreement with numerous measurements. Furthermore, unlike calculations such as that of the muon anomalous magnetic moment, which relies on significant input from QCD, the prediction of the W mass relies mostly on “cleaner” electroweak computations. Surveying possible new physics that could push the W mass to higher values than expected, the CDF paper points to hypotheses that offer a deeper understanding of the Higgs field, from which the SM particles get their masses. These include supersymmetry and Higgs-boson compositeness, both of which include a potential source of dark matter.

“Supersymmetry could make a significant change to the SM prediction of the W mass, although it seems difficult to explain as big an effect as seen experimentally,” says theorist John Ellis. “But one prediction I can make with confidence is a tsunami of arXiv papers in the weeks ahead.”
