Topics

Heavy-ion physics: past, present and future

Tracks from a lead–lead collision

Ultra-relativistic collisions between heavy nuclei probe the high-temperature and high-density limit of the phase diagram of nuclear matter. These collisions create a new state of matter, known as the quark–gluon plasma (QGP), in which quarks and gluons are no longer confined in hadrons but instead behave quasi-freely over a relatively large volume. By creating and studying this novel state of matter, which last existed in the microseconds after the Big Bang, we gain a deeper understanding of the strong nuclear force and quantum chromodynamics (QCD).

Nearly 50 years ago, the first relativistic heavy-ion collision experiments were performed at the Bevatron at Berkeley, reaching energies of 1 to 2 GeV. Since then, progressively heavier ions have been collided at higher energies at Brookhaven’s AGS, CERN’s SPS and Brookhaven’s RHIC facilities. Since 2010, heavy-ion physics has entered the TeV regime with lead–lead (PbPb) collisions at 2.76 and 5.02 TeV at the LHC. While the ALICE detector is designed specifically for such collisions, all four large LHC experiments have active heavy-ion physics programmes and are contributing to our understanding of extreme QCD matter.

In a heavy-ion collision, the initial energy deposited by the colliding nuclei undergoes a fast equilibration, within roughly 10⁻²⁴ s, to form the QGP. The resulting deconfined and thermalised medium expands and cools over the next few 10⁻²⁴ s, before the quarks and gluons recombine to form a hadron gas. The goal of heavy-ion experiments at the LHC is to use the detected final-state hadrons to reconstruct the properties and dynamical behaviour of the system throughout its evolution. So far, the LHC experiments have delivered a series of results that are sensitive to various aspects of the heavy-ion collision system, with Run 3 set to push our understanding much further.

Properties and dynamics

The initial energy-density distribution and subsequent expansion of the heavy-ion collision system is largely determined by the geometrical overlap of the colliding nuclei. Collisions can range from head-on “central” collisions, where the nuclear overlap is large, to glancing “peripheral” collisions where the overlap region is smaller and roughly almond-shaped. Since the interaction region in non-central events is not rotationally symmetric, anisotropic pressure gradients build up. These preferentially boost particles along the minor axis of the ellipsoidal overlap region, resulting in an observable anisotropy in the distribution of final-state hadrons. The distribution of the particles in the azimuthal angle can be described well by a Fourier cosine series, where the largest term is the second harmonic, characterised by the parameter v2, due to the ellipsoidal shape of the nuclear overlap region. Fluctuations in the positions of the individual constituent nucleons lead to significant higher-order terms. It was discovered that these Fourier coefficients, vn, are best described by models where the QGP dynamics obeys hydrodynamic equations, and thus behaves as a liquid exhibiting what we call “collective flow”.
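The Fourier description above can be made explicit. In the standard notation (with Ψn the orientation of the n-th harmonic symmetry plane), the single-particle azimuthal distribution and the flow coefficients read:

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[\,n\,(\varphi - \Psi_n)\,\right],
\qquad
v_n \;=\; \left\langle \cos\!\left[\,n\,(\varphi - \Psi_n)\,\right] \right\rangle
```

so v2 quantifies the elliptic modulation generated by the almond-shaped overlap, while the odd and higher harmonics trace event-by-event fluctuations in the nucleon positions.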

A global Bayesian fit to measurements of centrality dependence

Remarkably, in order to fit hydrodynamic models to experimental data, the medium’s viscosity must be very low, corresponding to a shear-viscosity to entropy-density ratio of the order of η/s ≈ 0.1. With a shear viscosity that is orders of magnitude smaller than that of ordinary materials, the QGP has become known as the “perfect” liquid. Measurements of the higher-order harmonics, as well as their event-by-event fluctuations and correlations, provide even greater sensitivity to the medium properties and the initial-state dynamics. Precision measurements of the vn harmonics, charged-particle density, mean transverse momentum pT, and mean-pT fluctuations by ALICE have been used to extract the shear and bulk viscosity of the system as a function of temperature (see “Flow coefficients” figure).

While the QGP created in heavy-ion collisions is too small and short-lived to be examined with conventional probes, its properties can be investigated using the products of hard (high momentum-transfer, q²) scatterings that occur in the early stages of the collision and then propagate through the medium as it evolves. The production rates of these internally generated hard probes can be calculated in perturbative QCD and thus are considered calibrated probes of the QGP medium. The high-momentum quarks and gluons produced in these hard scatterings traverse the medium and fragment into collimated jets of hadrons. While these jets appear as a small signal on top of a large, fluctuating background, advances in re-clustering algorithms as well as the higher production rates of jets at the LHC have made it possible to study jets with high precision across a wide range of energies.

The increase in the LHC luminosity will allow us to perform measurements that were previously inaccessible

Compared to jets in proton–proton (pp) collisions, jets in nucleus–nucleus (AA) collisions appear significantly suppressed or “quenched” due to their interactions with the medium (see “Jet quenching” figure). This is in contrast to electroweak probes, which interact only minimally with the coloured QGP medium. When the presence of hard scattering is identified by a high-pT jet, photon or Z boson, the recoiling jet measured in the opposite direction is often reconstructed with a significantly lower energy, indicating that some of its energy has been transferred and absorbed by the medium. Recent, detailed jet-structure studies show that jets in heavy-ion collisions are softer (they fragment into lower-pT hadrons) and broader than their counterparts in pp collisions, due to their interactions with the surrounding coloured QGP medium.

Another class of hard probes are heavy-flavour hadrons, since even heavy quarks (charm and beauty) with low pT are produced in high-q² processes. Like jets, which mainly come from the fragmentation of light quarks and gluons, heavy hadrons are also suppressed in heavy-ion collisions relative to pp collisions. Recent precision measurements at the LHC of the yield of D mesons (containing charm quarks), as well as of non-prompt D and J/ψ mesons (from the decays of hadrons containing beauty quarks), compared to the yields in pp collisions, demonstrate a mass-dependent suppression. This observation is consistent with the “dead cone” effect, which predicts that heavier quarks radiate less in the medium, lose less energy and are therefore less suppressed than lighter ones. The suppression of quarkonia (quark–antiquark bound states) depends on the binding energy: loosely bound states such as the Υ(3S) and ψ(2S) are more likely to be dissociated in the hot and dense medium than the tightly bound Υ(1S) and J/ψ states. However, it was discovered at the LHC that final-state J/ψ are actually less suppressed than in lower-energy AA collisions at RHIC. This was attributed to the larger number of charm quarks produced at LHC energies, which enhances the probability that charm and anti-charm quarks recombine to form J/ψ states within the QGP. These dual effects of suppression and recombination are considered a signature of the production of a deconfined, thermalised medium in heavy-ion collisions.

Freeze out

As the QGP expands and cools, it undergoes a phase transition into a hadron gas in which quarks and gluons become confined into hadrons. At chemical freeze-out, inelastic collisions cease and the thermochemical properties of the system become fixed. Comparing ALICE measurements of the inclusive yields of multiple hadron species with a model of statistical hadronisation shows excellent agreement over nine orders of magnitude in abundance, from pions to anti-⁴He nuclei (see “Statistical production” figure). This indicates that the bulk chemistry of the QGP freeze-out can be described by purely statistical particle production from a system in thermal equilibrium with a common temperature (155 MeV) and volume (~5000 fm³).
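As a rough illustration of why purely statistical production spans so many orders of magnitude: at a common freeze-out temperature the primary yields scale, to first approximation, like a Boltzmann factor exp(−m/T). The sketch below uses only this factor; the refinements of the real statistical-hadronisation fit (spin degeneracy, quantum statistics, resonance feed-down, phase-space prefactors) are deliberately ignored.

```python
import math

T = 155.0  # chemical freeze-out temperature in MeV, as quoted above

# Particle masses in MeV (PDG values, rounded)
masses = {"pi+": 139.6, "proton": 938.3, "deuteron": 1875.6, "4He": 3727.4}

def boltzmann_factor(m, T=T):
    """Crude relative primary yield ~ exp(-m/T); ignores spin degeneracy,
    quantum statistics, feed-down and phase-space prefactors."""
    return math.exp(-m / T)

ratio = boltzmann_factor(masses["4He"]) / boltzmann_factor(masses["pi+"])
print(f"4He / pion yield ratio ~ {ratio:.1e}")
```

With T = 155 MeV the bare Boltzmann factor already suppresses anti-⁴He relative to pions by roughly ten orders of magnitude, of the same order as the measured yield hierarchy.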

Suppression of the number of reconstructed jets

One of the first surprising results to come from the LHC was the discovery of azimuthal correlations between particles over large distances in pseudorapidity in small collision systems, pp and pPb. Such long-range correlations are also observed in heavy-ion collisions, where they are traditionally attributed to anisotropic flow (parameterised by the vn coefficients). However, the presence of collective behaviour in small systems, where a QGP was not expected to be formed, raised many questions about our understanding of both large and small nuclear collisions.

A second surprising observation was made in the measurement of the ratios of strange and multistrange hadrons (e.g. K0S, Λ, Ξ and Ω) with respect to pions, as a function of the number of particles produced in the collision (multiplicity). The enhancement of strangeness production in AA compared to pp collisions was historically predicted as a signature of the formation of a QGP, although it is now understood as being due to the suppression of strangeness in small systems. However, measurements by ALICE showed a smooth increase in the strangeness enhancement with multiplicity across all collision systems: pp, pPb, XeXe and PbPb – opening further questions about the presence of a thermalised medium in both small and large systems.

In contrast, the suppression of hard probes, which has long been viewed as a complementary effect to anisotropic flow, has not been observed in pp or pPb collisions within current experimental uncertainties. In order to gain a more complete understanding of QCD from the soft to the hard scales, and from small to large systems, we must expand our experimental programmes.

To Run 3 and beyond

All four large experiments at the LHC have undergone significant upgrades during Long Shutdown 2 to extend their reach and allow the collection of heavy-ion data at higher luminosities. The increase in luminosity by a factor of 10 in Runs 3 and 4 at the LHC will allow us to make precision measurements of soft and hard probes of the QGP. Rare probes such as heavy-flavour hadrons will become accessible with high statistical precision, and we will be able to explore the charm and beauty sector at a level commensurate with that of the strangeness studies in Runs 1 and 2. Jet measurements will become significantly more precise as we further explore the medium-induced modification of well-calibrated probes such as γ- and Z-tagged jets.

Our understanding of collective behaviour and the medium evolution will be enhanced by studies of the correlations and fluctuations of flow coefficients, which provide additional and complementary information beyond what we learn from vn alone. Measurements that were severely statistically limited in Runs 1 and 2, such as those of virtual photons produced as thermal radiation, will be performed with unprecedented precision in Runs 3 and 4. The higher-order fluctuations of identified particles, which are expected to be sensitive to critical behaviour around the phase transition, will also come within reach in Runs 3 and 4 and make it possible to map out the phase diagram of QCD matter in great detail.

Thermal-model fits

Furthermore, studies of small systems will continue to shed light on the development of QGP-like signals from pp to AA collisions. In particular, oxygen nuclei will be collided at the LHC, which will allow us to investigate collective effects in collisions with a geometry similar to PbPb collisions but with multiplicities of the order of those in pp and pPb collisions. High-precision and multi-differential jet measurements in pp, pPb and OO collisions will finally allow us to resolve open questions about the relationship between jet quenching and collective behaviour, and whether such effects are observed across all nuclear collision systems. Through these experimental measurements, we will make major progress in our understanding of nuclear matter from small to large collision systems, towards our ultimate goal of a unified description of QCD phenomenology from the microscopic level to the emergent bulk properties of the QGP.

While the heavy-ion physics programme in Runs 3 and 4 will provide deep insights into the rich field of QCD phenomenology, open questions will remain that can only be addressed with further advancements in detector performance and with the significant increase in heavy-ion luminosity anticipated in Run 5 (expected in 2035–2038). This extension of the LHC heavy-ion programme through the 2030s has been supported by the 2020 update of the European strategy for particle physics, and the LHC-experiment collaborations are exploring the potential for novel measurements in light- and heavy-ion collision systems based on their planned detector upgrades. In particular, ALICE is proposing to build a new dedicated heavy-ion experiment, ALICE 3, based on a large-acceptance ultra-light (low material budget) silicon tracking system surrounded by multiple layers of particle identification technology. The increase in the LHC luminosity coupled with state-of-the-art detector upgrades will allow us to dramatically extend our experimental reach and perform measurements that were previously inaccessible. The goals of the future heavy-ion programme at the LHC – from measuring electromagnetic radiation from the QGP and exotic heavy-flavour hadrons to beyond-the-Standard-Model searches for axions – will provide unprecedented insight into the fundamental constituents and forces of nature. 

Accessing the precursor stage of QGP formation

ALICE figure 1

The primary goal of the ultrarelativistic heavy-ion collision programme at the LHC is to study the properties of the quark–gluon plasma (QGP), a state of strongly interacting matter in which quarks and gluons are deconfined over large distances compared to the typical size of a hadron. The rapid expansion of the QGP under large pressure gradients is imprinted in the momentum distributions of final-state particles. The azimuthal-anisotropy flow coefficients vn and the mean transverse momentum pT of particles, which are described by hydrodynamic models, have been extensively measured by experiments at the LHC and at the RHIC collider. These observables are also used as experimental inputs to global Bayesian analyses that provide information on both the initial stages of the heavy-ion collision, before QGP formation, and on key transport coefficients of the QGP itself, such as the shear and bulk viscosities. However, due to the limited constraints on the initial conditions, uncertainties remain in the QGP’s transport coefficients.

The ALICE collaboration recently reported correlations between vn and pT in terms of the modified Pearson coefficient ρ. The measurements were performed in lead–lead (PbPb) and xenon–xenon (XeXe) collisions at centre-of-mass energies per nucleon–nucleon collision of 5.02 and 5.44 TeV, respectively. As the correlations between vn and pT are predicted to be mainly driven by the shape and size of the initial profile of the energy distribution in the transverse plane, these studies provide a new approach to characterise the initial state. 

The measurements show a positive correlation between vn and pT in both PbPb and XeXe collisions (figure 1). These measurements are compared to hydrodynamic calculations using the initial-state models IP-Glasma (based on the colour-glass-condensate effective theory with gluon saturation) and Trento, a parameterised model with nucleons as the relevant degrees of freedom. The centrality dependence of ρ is better described by IP-Glasma than by Trento. In particular, the positive measured values of ρ suggest an effective nucleon width of the order of 0.3–0.5 fm, which is significantly smaller than what has been extracted in all Bayesian analyses using Trento initial conditions. The Pearson correlation measurements can now be included in Bayesian analyses to better constrain the initial state in nuclear collisions, thus impacting the resulting QGP parameters. As a bonus, the measurements in XeXe collisions are sensitive to the quadrupole deformation parameter β2 of the ¹²⁹Xe nucleus, potentially opening a new window for studying nuclear structure with ultrarelativistic heavy-ion collisions.
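To make the observable concrete, the sketch below computes a plain Pearson coefficient between event-wise v2² and mean pT on a toy dataset in which the two are correlated by construction. All numbers are hypothetical, and the actual ALICE analysis uses a modified estimator that removes the contribution of finite-multiplicity statistical fluctuations; this only illustrates the correlation being quoted.

```python
import numpy as np

rng = np.random.default_rng(42)
n_events = 20000

# Toy model (all numbers hypothetical): event-by-event v2 fluctuates, and
# events with larger v2 are given a slightly larger mean pT, mimicking the
# initial-state size/shape correlation discussed in the text.
v2 = rng.normal(0.06, 0.015, n_events)                     # event-wise elliptic flow
mean_pt = 0.68 + 0.5 * v2 + rng.normal(0, 0.02, n_events)  # event-wise [pT] in GeV/c

# Plain Pearson coefficient between v2^2 and [pT]
rho = np.corrcoef(v2**2, mean_pt)[0, 1]
print(f"rho(v2^2, [pT]) = {rho:.2f}")  # positive, as in the data
```

A positive ρ emerges whenever more elliptic events are also hotter/smaller on average, which is exactly the initial-state information the measurement is sensitive to.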

Higgs-boson charm coupling weaker than bottom

ATLAS figure 1

Within the Standard Model (SM), the Higgs boson is predicted to interact with (or couple to) quarks with a strength proportional to their mass. By measuring these interaction strengths, physicists can test this prediction and gain insight into possible physics beyond the SM, where such couplings can be modified. In a new analysis exploiting the full Run-2 dataset, the ATLAS collaboration experimentally excludes new-physics scenarios which predict that decays of the Higgs boson to a pair of charm quarks (H → cc) are as frequent as those to bottom quarks (H → bb). 

The search for H → cc is hampered by abundant background processes. In order to identify charm-quark signatures, a new multivariate classification method was developed to identify charm hadrons within jets, while simultaneously reducing the probability of misidentifying jets originating from a bottom quark. To maximise the sensitivity to the signal, events with one or two charm-tagged jets were selected. Background processes were further suppressed by selecting Higgs-boson events produced together with a weak boson, VH(cc), where the weak boson (V = W or Z) decays to 0, 1 or 2 electrons or muons. In total, 44 regions were fitted simultaneously to measure the H → cc process.

In the SM, the H → cc process accounts for only 3% of all Higgs-boson decays. The ATLAS analysis found no significant signal of this process in the data, setting an upper limit on the rate of the VH(cc) process of 26 times the SM rate at 95% confidence level. This limit constrains the Higgs-to-charm coupling strength to less than 8.5 times the predicted SM value. The analysis strategy is validated by measuring events with two vector bosons that contain the decay of a W boson to one charm quark, VW(cq), or the decay of a Z boson to two charm quarks, VZ(cc), whose rates are found to agree with the predictions. The combined dijet-mass distribution, after subtraction of the backgrounds, is shown in figure 1.

ATLAS figure 2

Since H → cc and H → bb decays lead to very similar signatures in the ATLAS detector, a combined analysis of both processes is key to a common interpretation. The multivariate classification method is used to identify jets as originating from a bottom quark, a charm quark or lighter quarks. Since a fraction of the H → bb events passes the selection criteria of the H → cc analysis and vice versa, the individual analyses are designed to ensure that no collision events are counted twice. This orthogonality between the analyses enabled a simultaneous measurement of the two processes for the first time.

Within the SM, the ratio of the couplings of bottom and charm quarks to the Higgs boson is given by their mass ratio: mb/mc = 4.578 ± 0.008, obtained from lattice-QCD calculations. With its novel combination of H → cc and H → bb decays, the ATLAS analysis excludes the hypothesis that the Higgs-boson interaction with charm quarks is stronger than or equal to the interaction with bottom quarks at 95% confidence level (figure 2). For the first time, this measurement establishes that the Higgs-boson coupling is smaller for charm quarks than for bottom quarks.

VELO’s voyage into the unknown

Marvellous modules

The first 10 years of the LHC have cemented the Standard Model (SM) as the correct theory of known fundamental particle interactions. But unexplained phenomena such as the cosmological matter–antimatter asymmetry, neutrino masses and dark matter strongly suggest the existence of new physics beyond the current direct reach of the LHC. As a dedicated heavy-flavour physics experiment, LHCb is ideally placed to allow physicists to look beyond this horizon. 

Measurements of the subtle effects that new particles can have on SM processes are fully complementary to searches for the direct production of new particles in high-energy collisions. As-yet unknown particles could contribute to the mixing and decay of beauty and charm hadrons, for example, leading to departures from the SM in decay rates, CP-violating asymmetries and other measurements. Rare processes for which the SM contribution occurs through loop diagrams are particularly promising for potential discoveries. Several anomalies recently reported by LHCb in such processes suggest that the cherished SM principle of lepton-flavour universality is under strain, leading to speculation that the discovery of new physics may not be far off.

Unique precision

In addition to precise theoretical predictions, flavour-physics measurements demand vast datasets and specialised detector and data-processing technology. To this end, the LHCb collaboration is soon to start taking data with an almost entirely new detector that will allow at least 50 fb⁻¹ of data to be accumulated during Run 3 and Run 4, compared to 10 fb⁻¹ from Run 1 and Run 2. This will enable many observables, in particular the flavour anomalies, to be measured with a precision unattainable at competing experiments.

To allow LHCb to run at an instantaneous luminosity 10 times higher than during Run 2, much of the detector system and its readout electronics have been replaced, while a flexible full-software trigger system running at 40 MHz allows the experiment to maintain or even improve trigger efficiencies despite the larger interaction rate. During Long Shutdown 2, upgraded ring-imaging Cherenkov detectors and a brand new “SciFi” (scintillating fibre) tracker have been installed. A major part of LHCb’s metamorphosis – in process at the time of writing – is the installation of a new Vertex Locator (VELO) at the heart of the experiment. 

The VELO encircles the LHCb interaction point, where it contributes to triggering, tracking and vertexing. Its principal task is to pick out short-lived charm and beauty hadrons from the multitude of other particles produced by the colliding proton beams. Thanks to its close position to the interaction point and high granularity, the VELO can measure the decay time of B mesons with a precision of about 50 fs. 

Microcooling

The original VELO was based on silicon-strip detectors. Its upgraded version employs silicon pixel detectors to cope with the increased occupancies at higher luminosities and to stream complete events at 40 MHz, with an expected torrent of up to 3 Tb/s flowing from the VELO at full luminosity. A total of 52 silicon pixel detector modules, each with a sensitive surface of about 25 cm², are mounted in two detector halves located on either side of the LHC beams and perpendicular to the beam direction (see “Marvellous modules” image). An important feature of the LHCb VELO is that it moves. During injection of LHC protons, the detectors are parked at a safe distance of 3 cm from the beams. But once stable beams are declared, the two halves are moved inward such that the detector sensors effectively enclose the beam. At that point the sensitive elements will be as close as 5.1 mm to the beams (compared to 8.2 mm previously), which is much closer than any of the other large LHC detectors and vital for the identification and reconstruction of charm- and beauty-hadron decays.

The VELO’s close proximity to the interaction point requires a high radiation tolerance. This led the collaboration to opt for silicon-hybrid pixel detectors, which consist of a 200 μm-thick “p-on-n” pixel sensor bump-bonded to a 200 μm-thick readout chip with binary pixel readout. The CERN/Nikhef-designed “VeloPix” ASIC stems from the Medipix family and was specially developed for LHCb. It is capable of handling up to 900 million hits per second per chip, while withstanding the intense radiation environment. The data are routed through the vacuum via low-mass flex cables engineered by the University of Santiago de Compostela, then make the jump to atmosphere through a high-speed vacuum interface designed by Moscow State University engineers, which is connected to an optical board developed by the University of Glasgow. The data are then carried by optical fibres with the rest of the LHCb data to the event builder, trigger farm and disk buffers contained in modular containers in the LHCb experimental area.

The VELO modules were constructed at two production sites: Nikhef and the University of Manchester, where all the building blocks were delivered from the many institutes involved and assembled together over a period of about 1.5 years. After an extensive quality-assurance programme to assess the mechanical, electrical and thermal performance of each module, they were shipped in batches to the University of Liverpool to be mounted into the VELO halves. Finally, after population with modules, each half of the VELO detector was transported to CERN for installation in the LHCb experiment. The first half was installed on 2 March, and the second is being assembled.

Microchannel cooling

Keeping the VELO cool to prevent thermal runaway and minimise the effects of radiation damage was a major design challenge. The active elements in a VELO module consist of 12 front-end ASICs (VeloPix) and two control ASICs (GBTX), with a nominal power consumption of about 1.56 kW for each VELO half. The large radiation dose experienced by the silicon sensors is distributed highly non-uniformly and concentrated in the region closest to the beams, with a peak dose 60% higher than that experienced by the other LHC tracking detectors. Since the sensors are bump-bonded to the VeloPix chips, they are in direct contact with the ASICs, which are the main source of heat. The detector is also operated under vacuum, making heat removal especially difficult. These challenging requirements led LHCb to adopt microchannel cooling with evaporative CO2 as the coolant (see “Microcooling” image). 

Keeping the VELO cool to prevent thermal runaway and minimise the effects of radiation damage was a major design challenge

The circulation of coolant in microscopic channels embedded within a silicon wafer is an emerging technology, first implemented at CERN by the NA62 experiment. The VELO upgrade combines this with the use of bi-phase (liquid-to-gas) CO2, as used by LHCb in previous runs, in a single innovative system. The LHCb microchannel cooling plates were produced at CERN in collaboration with the University of Oxford. The bare plates were fabricated by CEA-Leti (Grenoble, France) by atomically bonding two silicon wafers together, one with 120 × 200 μm trenches etched into it, for an overall thickness of 500 μm. This approach allows the design of a channel pattern that ensures a very homogeneous flow directly under the heat sources. The coolant is circulated inside the channels through exit and entry slits that are etched directly into the silicon after the bonding step. The cooling is so effective that it is possible to sustain an overhang of 5 mm closest to the beam, thus reducing the amount of material before the first measured points on each track. The use of microchannels to cool electronics is being investigated both for future LHCb upgrades and several other future detectors.

Module assembly and support

The microchannel plate serves as the core of the mechanical support for all the active components. The silicon sensors, already bump-bonded to their ASICs to form a tile, are precisely positioned with respect to the base and glued to the microchannel plate with a precision of 30 μm. The thickness of the glue layer is around 80 μm to produce low thermal gradients across the sensor. The front-end ASICs are then wire-bonded to custom-designed kapton–copper circuit boards, which are also attached to the microchannel substrate. The ASICs’ placement requires a precision of about 100 μm, such that the length and shape of the 420 wire-bonds are consistent along the tile. High-voltage, ultra-high-speed data links and all electrical services are designed and attached in such a way as to produce a precise and lightweight detector (a VELO module weighs only 300 g) and therefore minimise the material in the LHCb acceptance.

Every step in the assembly of a module was followed by checks to ensure that the quality met the requirements. These included: metrology to assess the placement and attachment precision of the active components; mechanical tests to verify the effects of the thermal stress induced by temperature gradients; characterisation of the current-voltage behaviour of the silicon sensors; thermal performance measurements; and electrical tests to check the response of the pixel matrix. The results were then uploaded to a database, both to keep a record of all the measurements carried out and to run tests that assign a grade for each module. This allowed for continuous cross-checks between the two assembly sites. To quantify the effectiveness of the cooling design, the change in temperature on each ASIC as a function of the power consumption was measured. The LHCb modules have demonstrated thermal-figure-of-merit values as low as 2–3 K cm² W⁻¹. This performance surpasses what is possible with, for example, mono-phase microchannel cooling or integrated-pipe solutions.
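For a sense of what a thermal figure of merit (TFM) of 2–3 K cm² W⁻¹ implies in practice, the sketch below applies the defining relation ΔT = TFM × P/A using the power figures quoted earlier. Spreading the heat uniformly over the module surface is a simplification made here for illustration; the real power map is dominated by the ASIC footprint.

```python
def temperature_rise(tfm_K_cm2_per_W, power_W, area_cm2):
    """Sensor temperature rise above the coolant: dT = TFM * (P / A)."""
    return tfm_K_cm2_per_W * power_W / area_cm2

# Illustrative numbers from the text: ~1.56 kW per VELO half shared by
# 26 modules, spread (as a simplification) over each module's ~25 cm2
# active surface, with the upper quoted TFM of 3 K cm2/W.
power_per_module = 1560.0 / 26  # W per module
dT = temperature_rise(3.0, power_per_module, 25.0)
print(f"expected rise ~ {dT:.0f} K above the CO2 coolant")
```

A rise of only a few kelvin above the evaporating CO2 is what keeps the irradiated sensors safely below the thermal-runaway regime.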

RF boxes

The delicate VELO modules are mounted onto two precision-machined bases, each housed within a hood (one for each side) that provides isolation from the atmosphere. The complex monolithic hoods were machined from one-tonne billets of aluminium to provide the vacuum tightness and the mechanical performance required. The hood and base system is also articulated to allow the detector to be retracted during injection and to be centred accurately around the collision point during stable beams. Pipes and cables for the electrical and cooling services are designed to absorb the approximately 3 cm motion of each VELO half without transferring any force to the modules, to be radiation tolerant, and to survive flexing thousands of times. 

Following the completion of each detector half, performance measurements of each module were compared with those taken at the production sites. Further tests ensured there are no leaks in the high-pressure cooling system or the vacuum volumes, in addition to safety checks that guarantee the long-term performance of the detector. A final set of measurements checks the alignment of the detector along the beam direction, which is extremely difficult once the VELO is installed. Before installation, the detectors are cooled close to their –30°C operating temperature and the position of the tips of the modules measured with a precision of 5 µm. Once complete, each half-tonne detector half is packed for transport into a frame designed to damp-out and monitor vibrations during its 1400 km journey by road from Liverpool to CERN.

RF boxes

One of the most intriguing technological challenges of the VELO upgrade was the design and manufacture of the RF boxes that separate the two detector halves from the primary beam vacuum, shielding the sensitive detectors from RF radiation generated by the beams and guiding the beam mirror currents to minimise wakefields. The sides of the boxes facing the beams need to be as thin as possible to minimise the impact of particle scattering, yet at the same time they must be vacuum-tight. A further challenge was to design the structures such that they do not touch the silicon sensors even under pressure differences. Whereas the RF boxes of LHCb’s previous VELO were made from 300 μm-thick hot-pressed deformed sheets of aluminium foils welded together, the more complicated layout of the new VELO required them to be machined from solid blocks of forged aluminium with a small grain size. This highly specialised procedure was developed and carried out at Nikhef using a precision five-axis milling machine (see “RF boxes” image).

The VELO upgrade reflects the dedication and work of more than 150 people at 13 institutes over many years

In early prototypes, micro-enclosures in the material led to small vacuum leaks when thin layers were machined. A 3D forging technique, performed by block manufacturer Loire Industrie (France), reduced the porosity of the casts sufficiently to eliminate this problem. To form the very thin sides of a box, the inside of the block was milled first. It was then positioned on an aluminium mould, and the 1 mm space between box and mould was filled with heated liquid wax, which forms a strong and stable bond at room temperature. The remaining material was then machined away until a sturdy flange and a box with a wall about 250 μm thick remained – just over 1% of the mass of the original 325 kg block. To further minimise the thickness in the region closest to the beams, a procedure was developed at CERN to remove more material with a chemical agent, leaving a final wall thickness between 150 and 200 μm. The final step was the application of a Torlon coating on the inside, for electrical insulation from the sensors, and a non-evaporable getter coating on the outside, to improve the beam vacuum. The two boxes were installed in the vacuum tank in spring 2021, in advance of the insertion of the VELO modules.

Let collisions commence 

LHCb’s original VELO played a pivotal role in the experiment’s flavour-physics programme. This includes the 2019 discovery of CP violation in the charm sector, numerous matter–antimatter asymmetry measurements and rare-decay searches, and the recent hints of lepton non-universality in B decays. The upgraded VELO detector – in conjunction with the new software trigger, the RICH and SciFi detectors, and other upgrades – will extend LHCb’s capabilities to search for physics beyond the SM. It will remain in place for the start of High-Luminosity LHC operations in Run 4, contributing to the full exploitation of the LHC’s physics potential.

Proposed 15 years ago, with a technical design report published in 2013 and full approval the following year, the VELO upgrade reflects the dedication and work of more than 150 people at 13 institutes over many years. The device is now in final construction: one half is installed and undergoing commissioning in LHCb, while the other is being assembled and will be delivered to CERN for installation during a dedicated machine stop in May. The assembly and installation have been made considerably more challenging by COVID-19-related travel and working restrictions, with final efforts taking place around the clock to meet the tight LHC schedule. Everyone in the LHCb collaboration is therefore looking forward to seeing the first data from the new detectors and to continuing the success of the LHC’s world-leading flavour-physics programme.

LHCb constrains cosmic antimatter production

LHCb figure 1

During their 10 million-year-long journey through the Milky Way, high-energy cosmic rays can collide with particles in the interstellar medium, the ultra-rarefied gas filling our galaxy and mostly composed of hydrogen and helium. Such rare encounters are believed to produce most of the small number of antiprotons, about one per 10,000 protons, that are observed in high-energy cosmic rays. But this cosmic antimatter could also originate from unconventional sources, such as dark-matter annihilation, motivating detailed investigations of antiparticles in space. This effort is currently led by the AMS-02 experiment on the International Space Station, which has reported results with unprecedented accuracy.

The interpretation of these precise cosmic antiproton data calls for a better understanding of the antiproton production mechanism in proton–gas collisions. Here, experiments at accelerators come to the rescue. The LHCb experiment has the unique capability of injecting gas into the vacuum of the LHC accelerator. By injecting helium, cosmic collisions are replicated in the detector and their products can be studied in detail. LHCb already provided a first key input into the understanding of cosmic antimatter by measuring the number of antiprotons produced at the proton–helium collision vertex itself. In a new study, this measurement has been extended to include the significant fraction (about one third) of antiprotons resulting from the decays of antihyperons such as the Λ̄, which contain a strange antiquark also produced in the collisions.

These antiprotons are displaced from the collision point, as the antihyperons can fly several metres through the detector before decaying. Different antihyperon states and decay chains are possible, all contributing to the cosmic antiproton flux. To count them, the LHCb team exploited two key features of its detector: the ability to distinguish antiprotons from other charged particles via two ring-imaging Cherenkov (RICH) detectors, and the outstanding resolution of the LHCb vertex locator. Thanks to the latter, when checking the compatibility of the identified antiproton tracks with the collision vertex, three classes of antiprotons can be clearly resolved (figure 1): “prompt” particles originating from the proton–helium collision vertex; detached particles from Λ̄ decays; and more separated particles produced in secondary collisions with the detector material.

The majority of the detached antiprotons are expected to originate from Λ̄ particles produced at the collision point that decay to an antiproton and a positive pion. A second study was thus performed to fully reconstruct these decays by identifying the decay vertex. The results of this complementary approach show that about 75% of the observed detached antiprotons originate from Λ̄ decays, in good agreement with theoretical predictions.

These new results provide an important input for modelling the expected antiproton flux from cosmic collisions. No smoking gun for an exotic source of cosmic antimatter has emerged yet, and the accuracy of this quest will profit from further accelerator inputs. The LHCb collaboration therefore plans to expand its “space mission” with the new gas target SMOG2, which could also enable collisions between protons and hydrogen or deuterium targets, further strengthening the ties between the particle and astroparticle physics communities.

Science diversity at the intensity and precision frontiers

The EHN1 experimental hall

While all eyes focus on the LHC restart, a diverse landscape of fixed-target experiments at CERN has already begun data-taking. Driven by beams from smaller accelerators in the LHC chain, these experiments span a large range of research programmes at the precision and intensity frontiers, complementary to the LHC experiments. Several new experiments join existing ones in the new run period, in addition to a suite of test-beam and R&D facilities.

At the North Area, which is served by proton and ion beams from the Super Proton Synchrotron (SPS), new physics programmes have been underway since the return of beams last year. Experiments in the North Area, which celebrated its 40th anniversary in 2019, are located at different secondary beamlines and span QCD, electroweak physics and QED, as well as dark-matter searches. “During Long Shutdown 2, a major overhaul of the North Area started and will continue during the next 10 years to provide the best possible beam and infrastructure for our users,” says Yacine Kadi, leader of the North Area consolidation project. “The most critical part of the project is to prepare for the future physics programme.”

The first phase of the AMBER facility at the M2 beamline is an evolution of COMPASS, which has operated since 2002 and focuses on the study of the gluon contribution to the nucleon spin structure. By measuring the proton charge radius via muon–proton elastic scattering, AMBER aims to clarify the long-standing proton-radius puzzle, offering a complementary approach to previous electron–proton scattering and spectroscopy measurements. A new data-acquisition system will enable the collaboration to measure the antiproton production cross-section to improve the sensitivity of searches for cosmic antiparticles from possible dark-matter annihilation. A third AMBER programme will concentrate on measurements of the kaon, pion and proton charge radii via Drell–Yan processes using heavy targets.

A second North Area experiment specialising in hadron physics is NA61/SHINE, which underwent a major overhaul during Long Shutdown 2 (LS2), including the re-use of the vertex detector from the ALICE experiment. Building on its predecessor NA49, the 17 m-long NA61/SHINE facility, situated at the H2 beamline, focuses on three main areas: strong interactions, cosmic rays and cross-section measurements for neutrino physics. The collaboration continues its study of the energy dependence of hadron production in heavy-ion collisions, in which NA49 found irregularities. It also aims to observe the critical point at which the phase transition from a quark–gluon plasma to a hadron gas takes place, the threshold energy for which is accessible only at the SPS rather than at the higher-energy LHC or RHIC experiments. By measuring hadron production from pion–carbon interactions, meanwhile, the team will study the properties of high-energy cosmic rays from cascades of charged particles. Finally, using kaons and pions produced from a target replicating that of the T2K experiment in Japan, NA61/SHINE will help to determine the neutrino flux composition at the future DUNE and Hyper-Kamiokande experiments for precise measurements of neutrino mixing angles and the CP-violating phase.

New physics

Situated at the same H2 beamline, the new NA65 “DsTau” experiment will study the production of Ds mesons. This is important because Ds decays are the main source of ντs in a neutrino beam, and are therefore relevant for neutrino-oscillation studies. After a successful pilot run in 2018, a measurement campaign began in 2021 to determine the ντ-production flux.

The newly renovated East Area

At the K12 secondary beamline, NA62 continues its measurement of the ultra-rare charged-kaon decay to a charged pion, a neutrino and an antineutrino, which is very sensitive to possible physics beyond the Standard Model. The collaboration aims to increase its sensitivity to a level (10%) approaching theoretical uncertainties, thanks to further data and experimental improvements to the more than 200 m-long facility. One improvement is the installation during LS2 of a muon-veto hodoscope that helps to determine whether a muon is coming from a kaon decay or from other interactions. Since 2021, NA62 has also operated as a beam-dump experiment, with a primary focus on searching for feebly interacting particles. Here, the ability to determine whether muons come from the target absorber is even more important, since they make up most of the background.

Dark interactions

Searching for new physics is the focus of NA64 at the H4 beamline, which studies the interaction between an electron beam and an active target to look for a hypothetical dark-photon mediator connecting the SM with a possible dark sector. With at least five times more data expected this year, and up to 10 times more during LHC Run 3, it could be possible to determine whether the dark mediator, should it exist, is an elastic scalar or a Majorana particle. Adding further impetus to this programme is an unexpected 17 MeV peak reported in e+e– internal pair production by the ATOMKI experiment and, more significantly, the tension between the measured and predicted values of the anomalous magnetic moment of the muon, (g–2)μ, for which possible explanations include models that invoke a dark mediator. During a planned muon run at the M2 beamline, the collaboration aims to cover the relevant parameter space for the (g–2)μ anomaly.

NA63 also receives electrons from the H4 beamline and uses a high-energy electron beam to study the behaviour of scattered electrons in a strong electromagnetic field. In particular, the experiment tests QED at higher orders, which have a gravitational analogue in extreme astroparticle physics phenomena such as black-hole inspirals and magnetars. The NA63 team will continue its measurements in June.

Besides driving the broad North Area physics programme, the SPS serves protons to AWAKE – a proof-of-principle experiment investigating the use of plasma wakefields driven by a proton bunch to accelerate charged particles. Following successful results from its first run, the collaboration aims to further develop methods to modulate the proton bunches to demonstrate scalable plasma-wakefield technology, and to prepare for the installation of a second plasma cell and an electron-beam system using the whole CNGS tunnel at the beginning of LS3 in 2026.

Located on the main CERN site, receiving beams from the Proton Synchrotron (PS), the East Area underwent a complete refurbishment during LS2, leading to a 90% reduction in its energy consumption. Its main experiment is CLOUD, which simulates the impact of particulates on cloud formation. This year, the collaboration will test a new detector component called FLOTUS, a 70 litre quartz chamber extending the simulation from a period of minutes to a maximum of 10 days. The PS also feeds the n_TOF facility, which last year marked 20 years of service to neutron science and its applications. A new third-generation spallation target installed and commissioned in 2021 will enable new n_TOF measurements relevant for nuclear astrophysics. 

Different dimensions

Taking CERN science into an altogether different dimension, the PS also links to the Antimatter Factory via the Antiproton Decelerator (AD) and ELENA rings, where several experiments are poised to test CPT invariance and antimatter gravitational interactions at increased levels of precision (see “Antimatter galore at ELENA” panel). Even closer to the proton beam source is the PS Booster, which serves the ISOLDE facility. ISOLDE covers a diverse programme across the physics of exotic nuclei, and includes MEDICIS (devoted to the production of novel radioisotopes for medical research), ISOLTRAP (comprising four ion traps for precision mass measurements of exotic ions) and COLLAPS and CRIS, which focus on laser spectroscopy. Its post-accelerators REX/HIE-ISOLDE increase the beam energy up to 10 MeV/u, making ISOLDE the only facility in the world that provides radioactive ion-beam acceleration in this energy range.

Antimatter galore at ELENA

Experiments in the AD hall

Served directly by the Antiproton Decelerator (AD) for the past two decades, experiments at the CERN Antimatter Factory are now connected to the new ELENA ring, which decelerates 5.3 MeV antiprotons from the AD to 100 keV to allow a 100-fold increase in the number of trapped antiprotons. Six experiments involving around 350 researchers use ELENA’s antiprotons for a range of unique measurements, from precise tests of CPT invariance to novel studies of antimatter’s gravitational interactions. 

The ALPHA experiment focuses on antihydrogen-spectroscopy measurements, recently reaching an accuracy of two parts per trillion in the transition from the ground state to the first excited state. By clocking the free fall of antiatoms released from a trap, it is also planning to measure the gravitational mass of antihydrogen. ALPHA’s recent demonstration of laser-cooled antihydrogen has opened a new realm of precision on antihydrogen’s internal structure and gravitational interactions, to be explored in upcoming runs.

ASACUSA specialises in spectroscopic measurements of antiprotonic helium, recently finding surprising behaviour. The experiment is also gearing up to perform hyperfine-splitting spectroscopy in antihydrogen using atomic-beam methods complementary to ALPHA’s trapping techniques.

GBAR and AEgIS target direct measurements of the Earth’s gravitational acceleration on antihydrogen. GBAR is developing a scheme in which antihydrogen ions are produced in a trap injected directly with antiprotons from ELENA, maximising antihydrogen production; the ions are sympathetically laser-cooled, then neutralised and released for a free-fall measurement. AEgIS, having established pulsed formation of antihydrogen in 2018, is following a different approach based on measuring the vertical drop of a pulsed cold beam of antihydrogen atoms travelling horizontally through a device called a Moiré deflectometer.

BASE uses advanced Penning traps to compare matter and antimatter with extreme precision, recently finding the charge-to-mass ratios of protons and antiprotons to be identical within 16 parts per trillion. The data also allowed the collaboration to perform the first differential test of the weak equivalence principle using antiprotons, reaching the 3% level, with experiment improvements soon expected to increase the sensitivities of both measurements. The BASE team is also working on an improved measurement of the antiproton magnetic moment, the implementation of a transportable antiproton trap called BASE-STEP and improved searches for millicharged particles.

The newest AD experiment, PUMA, which is preparing for first commissioning later this year, aims to transport trapped antiprotons collected at ELENA to ISOLDE where, from next year, they will be annihilated on exotic nuclei to study neutron densities at the surface of nuclei. 

“Thanks to the beam provided by ELENA and the major upgrades of the experiments, we hope to see big progress in ultra-precise tests of CPT invariance, first and long-awaited antihydrogen-based studies of gravity, as well as the development of new technologies such as transportable antimatter traps,” says Stefan Ulmer, head of the AD user committee. 

Stable and highly customisable beams at the North and East areas also facilitate important detector R&D and test-beam activities. These include the recently approved Water-Cherenkov Test Experiment, which will help to develop detector techniques for long-baseline neutrino experiments, and new detector components for the LHC experiments and proposed future colliders. The CERN Neutrino Platform is dedicated to the development of detector technologies for neutrino experiments across the world; upcoming activities include ongoing contributions to the future DUNE experiment in the US, in particular the two huge DUNE cryostats and R&D for “vertical drift” liquid-argon detection technology. In the East Area, the mixed-field irradiation (CHARM) and proton-irradiation (IRRAD) facilities provide key input to detector R&D and electronics tests, similar to the services provided by the SPS-driven GIF irradiation facility and HiRadMat.

With the many physics opportunities mapped out by Physics Beyond Colliders and the consolidation of our facilities, we are looking into a bright future

Johannes Bernhard

Fixed-target experiments in the North and East areas, along with experiments at ISOLDE and the AD, demonstrate the importance of diverse physics studies at CERN, when the best path to discover new physics is unclear. Some of these experiments emerged within the Physics Beyond Colliders initiative and there are many more on the horizon, such as KLEVER and the SPS Beam Dump Facility. “With the many physics opportunities mapped out by Physics Beyond Colliders and the consolidation of our facilities, we are looking into a bright future,” says Johannes Bernhard, head of the liaison to experiments section in the beams department. “We are always aiming to serve our users with the highest beam quality and performance possible.”

Compact XFELs for all

A prototype of the CLIC X-band structure

Originally considered a troublesome byproduct of particle accelerators designed to explore fundamental physics, synchrotron radiation is now an indispensable research tool across a wide spectrum of science and technology. The latest generation of synchrotron-radiation sources are X-ray free electron lasers (XFELs) driven by linacs. With sub-picosecond pulse lengths and wavelengths down to the hard X-ray range, these facilities offer unprecedented brilliance, exceeding that of third-generation synchrotrons based on storage rings by many orders of magnitude. However, the high costs and complexity of XFELs have meant that there are only a few such facilities currently in operation worldwide, including the European XFEL at DESY and LCLS-II at SLAC.

CompactLight, an EU-funded project involving 23 international laboratories and academic institutions, three private companies and five third parties, aims to use emerging and innovative accelerator technologies from particle physics to make XFELs more affordable, compact, power-efficient and performant. In the early stages of the project, a dedicated workshop was held at CERN to survey the X-ray characteristics needed by the many user communities. This formed the basis for a design based on the latest concepts for bright electron photo-injectors, high-gradient X-band radio-frequency structures developed in the framework of the Compact Linear Collider (CLIC), and innovative superconducting short-period undulators. After four years of work, the CompactLight team has completed a conceptual design report describing the proposed facility in detail.

The 360-page report sets out an X-ray facility covering 0.25–16 keV, with two separate beamlines offering soft and hard X-ray sources with pulse-repetition rates of up to 1 kHz and 100 Hz, respectively. It includes a baseline facility layout and two main upgrades, the most advanced option allowing the soft and hard X-ray beamlines to operate simultaneously. The report also presents preliminary evaluations of a very compact soft X-ray FEL and of an X-ray source based on inverse Compton scattering, considered an affordable solution for university campuses, small labs and hospitals.
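For orientation (a back-of-the-envelope conversion, not taken from the report), the quoted photon-energy range maps onto wavelengths from the few-nanometre soft X-ray regime down to sub-ångström hard X-rays, via E = hc/λ:

```python
# Photon energy-wavelength conversion: lambda[nm] = hc / E ≈ 1.23984 / E[keV].
# The 0.25-16 keV range is the one quoted for the CompactLight design above.
HC_KEV_NM = 1.23984  # hc in keV·nm

def wavelength_nm(energy_kev: float) -> float:
    """Photon wavelength in nanometres for a given photon energy in keV."""
    return HC_KEV_NM / energy_kev

print(f"soft X-ray end (0.25 keV): {wavelength_nm(0.25):.2f} nm")  # ~4.96 nm
print(f"hard X-ray end (16 keV):   {wavelength_nm(16):.4f} nm")    # ~0.0775 nm
```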

CompactLight is the most significant current effort to enable greater diffusion of XFEL facilities, says the team, which plans to continue its activities beyond the end of its Horizon 2020 contract, improving the partnership and maintaining its leadership in compact acceleration and light production. “Compared to existing facilities, for the same operating wavelengths, the technical solutions adopted ensure that the CompactLight facility can operate with a lower electron beam energy and will have a significantly more compact footprint,” explains project coordinator Gerardo D’Auria. “All these enhancements make the proposed facility more attractive and more affordable to build and operate.”

• Based on an article in Accelerating News, 4 March.

Closing in on open questions

moriond

Around 140 physicists convened for one of the first in-person international particle-physics conferences in the COVID-19 era. The Moriond conference on electroweak interactions and unified theories, which took place from 12 to 19 March on the Alpine slopes of La Thuile in Italy, was a wonderful chance to meet friends and colleagues, to have spontaneous exchanges, to listen to talks and to prolong discussions over dinner.

The LHC experiments presented a suite of impressive results based on increasingly creative and sophisticated analyses, including first observations of rare Standard Model (SM) processes and the most recent insights in the search for new physics. ATLAS reported the first observation of the production of a single top quark in association with a photon, a rare process that is sensitive to the existence of new particles. CMS observed for the first time the electroweak production of a pair of opposite-sign W bosons, which is crucial for investigating the mechanism of electroweak symmetry breaking. The millions of Higgs bosons produced so far at the LHC have enabled detailed measurements and opened a new window on rare phenomena, such as the rate of Higgs-boson decays to a charm quark–antiquark pair. CMS presented the world’s most stringent constraint on the coupling between the Higgs boson and the charm quark, improving its previous measurement by more than a factor of five, while ATLAS measurements demonstrated that this coupling is weaker than that between the Higgs boson and the bottom quark. On the theory side, various new signatures for extended Higgs sectors were proposed.

The LHC experiments presented a suite of impressive results based on increasingly creative and sophisticated analyses

Of special interest is the search for heavy resonances decaying to high-mass dijets. CMS reported the observation of a spectacular event with four high transverse-momentum jets with a combined invariant mass of 8 TeV. CMS now has two such events, exceeding the SM prediction with a local significance of 3.9σ, or 1.6σ when taking into account the full range of parameter space searched. Moderate excesses with global significances of 2–2.5σ were observed in other channels, for example in a search by ATLAS for long-lived, heavy charged particles and in a search by CMS for new resonances decaying into tau-lepton pairs. Data from Run 3 and future High-Luminosity LHC runs will show whether these excesses are statistical fluctuations of the SM expectation or signals of new physics.

Flavour anomalies

The persistent set of tensions between predictions and measurements in semi-leptonic b → s ℓ+ℓ– decays (ℓ = e, μ) was much discussed. LHCb has used various decay modes mediated by strongly suppressed flavour-changing neutral currents to search for deviations from lepton flavour universality (LFU). Other measurements of these transitions, including angular distributions and decay rates (for which the predictions are affected by troublesome hadronic corrections), as well as analyses of charged-current b → c τ ν decays from BaBar, Belle and LHCb, also show a consistent pattern of deviations from LFU. While none is individually significant enough to constitute clear evidence of new physics, they represent an intriguing pattern that can be explained by the same new-physics models. Theoretical talks on this subject proposed additional observables (based on baryon decays or leptons at high transverse momenta) to gain more information on operators beyond the SM that would contribute to the anomalies. Updates from LHCb on several b → s ℓ+ℓ–-related measurements with the full Run 1 and Run 2 datasets are eagerly awaited, while Belle II also has the potential to provide essential independent checks. The integrated SuperKEKB luminosity has now reached a third of the full Belle dataset, and Belle II presented several impressive new results. These include measurements of b → s ℓ+ℓ– branching fractions with a precision limited by the sample size, and precise measurements of charmed-particle lifetimes, including the individual world-best D and Λc+ lifetimes, proving the excellent tracking and vertexing capabilities of the detector.

The other remarkable deviation from the SM prediction is the anomalous magnetic moment of the muon, (g–2)μ, for which the SM prediction and the recent Fermilab measurement stand 4.2σ apart – or less, depending on whether the hadronic vacuum-polarisation contribution to (g–2)μ is calculated using traditional “dispersive” methods or a recent lattice-QCD calculation. The jury is still out on the theory side, but the ongoing analysis of Run 2 and Run 3 data at Fermilab will soon reduce the statistical uncertainty by more than a factor of two.

The hottest issues in neutrinos – in particular their masses and mixing – were also reviewed. The current leading long-baseline experiments – NOvA in the US and T2K in Japan – have helped to refine our understanding of oscillations, but the neutrino mass hierarchy and CP-violating phase remain to be determined. A great experimental effort is also being devoted to the search for neutrinoless double-beta decay (NDBD) which, if found, would prove that neutrinos are Majorana particles, with far-reaching implications in cosmology and particle physics. The GERDA experiment at Gran Sasso presented its final result, placing a lower limit on the NDBD half-life of 1.8 × 10²⁶ years.

While tensions between solar-neutrino bounds and the reactor antineutrino anomaly are mostly resolved, the gallium anomaly remains

Another very important question is the possible existence of “sterile” neutrinos that do not participate in weak interactions, for which theoretical motivations were presented together with the robust experimental programme. The search for sterile neutrinos is motivated by a series of tensions in short-baseline experiments using neutrinos from accelerators (LSND, Mini-BooNE), nuclear reactors (the “reactor antineutrino anomaly”) and radioactive sources (the “gallium anomaly”), which cannot be accounted for by the standard three-neutrino framework. In particular, MicroBooNE has neither confirmed nor excluded the electron-like low-energy excess observed by MiniBooNE. While tensions between solar-neutrino bounds and the reactor antineutrino anomaly are mostly resolved, the gallium anomaly remains.

Dark matter and cosmology

The status of dark-matter searches, both at the LHC and via direct astrophysical searches, was comprehensively reviewed. The ongoing run of the 5.9-tonne XENONnT experiment, for example, should elucidate the 3.3σ excess observed by XENON1T in low-energy electron-recoil events. Searches for axions, which provide a dark-matter candidate as well as a solution to the strong-CP problem, cover different mass ranges depending on the axion coupling strength. The parameter space is wide, and Moriond participants heard how a discovery could happen at any moment thanks to experiments such as ADMX. The status of the Hubble tension was also reviewed.

The many theory talks described various beyond-the-SM proposals – including extra scalars, fermions and gauge symmetries – aimed at explaining LFU violation, (g–2)μ, the hierarchy among Yukawa couplings, neutrino masses and dark matter. Overall, the broad spectrum of informative presentations brilliantly covered the present status of open questions in phenomenological high-energy physics and shone a light on the many rich paths that demand further exploration.

CDF sets W mass against the Standard Model

CDF_detector

Ever since the W boson was discovered at CERN’s SppS four decades ago, successive collider experiments have pinned down its mass at increasing levels of precision. Unlike the fermion masses, the W mass is a clear prediction of the Standard Model (SM). At lowest order in electroweak theory, it depends solely on the mass of the Z boson and the value of the weak mixing angle. But higher-order corrections introduce an additional dependence on the gauge-boson couplings and the masses of other SM particles, in particular the heavy top quark and Higgs boson. With the precision of electroweak calculations now exceeding that of direct measurements, better knowledge of the measured W mass provides a vital test of the SM’s consistency.

The immediate reaction was silence

Chris Hays

A new measurement by the CDF collaboration based on data from the former Tevatron collider at Fermilab throws a curveball into this picture. Published today in Science, the CDF W-mass measurement – the most precise to date – stands 7σ from the SM prediction, upsetting decades of steady convergence between experiment and theory.

“I would say the immediate reaction was silence,” says Chris Hays, one of the CDF analysis leads, of the moment the measurement was unblinded on 19 November 2020. “Then there was some discussion to ensure the unblinding worked, i.e. that the value was correct, and to decide what would be the next steps.”

Long slog
CDF physicists have been measuring the mass of the W boson for more than 30 years via its decays to a lepton and a neutrino. In 2012, shortly after the Tevatron shut down, CDF published a W mass of 80,387 ± 12 (stat) ± 15 (syst) MeV based on 2.2 fb⁻¹ of data, which significantly exceeded the combined precision of all previous measurements at that time. After 10 years of careful analysis and scrutiny of the full Tevatron dataset (8.8 fb⁻¹, corresponding to about 4.2 million W-boson candidates), and taking into account an improved understanding of the detector and advances in the theoretical and experimental understanding of the W’s interactions with other particles, the new CDF result is twice as precise: 80,433.5 ± 6.4 (stat) ± 6.9 (syst) MeV.
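As a simple cross-check (ours, not part of the CDF analysis), adding the quoted statistical and systematic uncertainties in quadrature confirms the factor-of-two improvement in precision:

```python
import math

def total_uncertainty(stat: float, syst: float) -> float:
    """Combine independent statistical and systematic uncertainties in quadrature (MeV)."""
    return math.hypot(stat, syst)

unc_2012 = total_uncertainty(12, 15)    # 80,387 ± 12 (stat) ± 15 (syst) MeV
unc_2022 = total_uncertainty(6.4, 6.9)  # 80,433.5 ± 6.4 (stat) ± 6.9 (syst) MeV

print(f"2012 total: ±{unc_2012:.1f} MeV")            # ±19.2 MeV
print(f"2022 total: ±{unc_2022:.1f} MeV")            # ±9.4 MeV
print(f"improvement: {unc_2012 / unc_2022:.1f}x")    # ~2.0x
```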

In addition to the four-fold increase in statistics, the measurement benefits from a better understanding of systematic uncertainties. One significant change concerns the proton/antiproton parton distribution functions (PDFs), where the addition of LHC data to the PDF fits has reduced the uncertainty from 10 MeV to 3.9 MeV while also slightly raising the central value of the 2012 result.

LHCb-FIGURE-2022-003

“The 2012 and 2022 CDF values are in agreement at the level of two sigma, accounting for the fact that approximately 25% of the events are in common, so the internal tension is not so significant,” explains CDF collaborator Mark Lancaster, who was an internal reviewer for the result. “But the tension with other results, particularly ATLAS at 80,370 ± 19 MeV and the SM prediction of 80,357 ± 6 MeV, is significant. Many people from the LHC, Tevatron and theory communities are presently working together to combine the results from the Tevatron, LHC and LEP, and to understand the correlations between them, e.g. in the PDFs and in some of the higher-order QCD and QED effects.”
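The quoted ~7σ tension with the SM can be reproduced from the numbers in the text, under the simplifying assumption that the experimental and theoretical uncertainties are independent and Gaussian:

```python
import math

# Central values and uncertainties in MeV, taken from the text
m_cdf = 80433.5
u_cdf = math.sqrt(6.4**2 + 6.9**2)  # stat and syst in quadrature, ~9.4 MeV
m_sm, u_sm = 80357.0, 6.0           # SM prediction and its uncertainty

# Significance of the difference, assuming independent Gaussian errors
sigma = (m_cdf - m_sm) / math.sqrt(u_cdf**2 + u_sm**2)
print(f"tension: {sigma:.1f} sigma")  # ~6.9 sigma
```

The same arithmetic applied to the ATLAS value (80,370 ± 19 MeV) gives a much milder tension of roughly 3σ, which is why a careful combination accounting for correlations is now such a priority.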

It’s now up to theorists and other experiments to follow up on the CDF result, comments CDF co-spokesperson David Toback. “If the difference between the experimental and expected value is due to some kind of new particle or subatomic interaction, which is one of the possibilities, there’s a good chance it’s something that could be discovered in future experiments,” he says.

Cross checks
Results from the LHC experiments are crucial to enable a deeper understanding. One of the challenges in measuring the W mass in high-rate proton–proton collisions at the LHC is event “pile-up”, which makes it hard to reconstruct the missing transverse energy from neutrinos. The higher collision energy at the LHC also means W bosons are produced with larger transverse momenta with respect to the beam axis, which must be modelled accurately in order to measure the W-boson mass precisely.

It takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily

Florencia Canelli

The ATLAS collaboration published the first high-precision measurement of the W mass at the LHC in 2018, based on data collected at a centre-of-mass energy of 7 TeV, and is currently working on new measurements. In September, LHCb published its first measurement of the W mass, 80,354 ± 32 MeV, based on 2016 data, and estimates that an uncertainty of 20 MeV or less is achievable with existing data. CMS is also proceeding with analyses that should soon yield its first public result. “It’s an important measurement of our physics programme,” says CMS physics co-coordinator Florencia Canelli. “As the CDF result shows, precision physics can be a challenging and lengthy process: it takes a very long time to understand all aspects of the data to the level of precision required for a competitive W-mass measurement, and it takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily.”

The CDF result reiterates the central importance of precision measurements in the search for new physics, write Claudio Campagnari (UC Santa Barbara) and Martijn Mulders (CERN) in a Perspective article accompanying the CDF paper. They point to the increased precision that will be available at the High-Luminosity LHC and to the capabilities of future facilities such as the proposed Future Circular Collider, the e+e– mode of which “would offer the best prospects for an improved W-boson mass measurement, with a projected sensitivity of 7 ppm”. Such a measurement would also demand that the SM electroweak calculations be performed at higher orders, a challenge firmly in the sights of the theory community.
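To put the projected 7 ppm sensitivity in perspective, a quick conversion to absolute units (assuming a W mass of roughly 80.4 GeV) shows how far such a measurement would surpass the current state of the art:

```python
ppm = 7e-6
m_w = 80400.0  # MeV, rough current world value used for illustration

proj = ppm * m_w  # projected FCC-ee absolute uncertainty
print(f"projected uncertainty: {proj:.2f} MeV")  # ~0.56 MeV

cdf_total = 9.4  # MeV, total uncertainty of the new CDF result
print(f"improvement over CDF: {cdf_total / proj:.0f}x")  # ~17x
```

A sub-MeV W mass would put correspondingly stringent demands on the theory side, motivating the higher-order electroweak calculations mentioned above.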

Following the 2012 discovery of the Higgs boson, it is not easy to tweak the SM parameters without ruining the excellent agreement with numerous measurements. Furthermore, unlike calculations such as that of the muon anomalous magnetic moment, which relies on significant input from QCD, the prediction of the W mass relies mostly on “cleaner” electroweak computations. Surveying possible new physics that could push the W mass to higher values than expected, the CDF paper points to hypotheses that offer a deeper understanding of the Higgs field, from which the SM particles get their masses. These include supersymmetry and Higgs-boson compositeness, both of which include a potential source of dark matter.

“Supersymmetry could make a significant change to the SM prediction of the W mass, although it seems difficult to explain as big an effect as seen experimentally,” says theorist John Ellis. “But one prediction I can make with confidence is a tsunami of arXiv papers in the weeks ahead.”

Toward a diffraction-limited storage-ring-based X-ray source



Multi-bend achromat (MBA) lattices have initiated a fourth generation for storage-ring light sources with orders of magnitude increase in brightness and transverse coherence. A few MBA rings have been built, and many others are in design or construction worldwide, including upgrades of APS and ALS in the US.

The HMBA (hybrid MBA) lattice, developed for the successful ESRF–EBS upgrade, has proven very effective in addressing the nonlinear-dynamics challenges associated with pushing the emittance toward the diffraction limit. This seminar will describe the evolution of HMBA ring designs. The new designs break the strict lattice periodicity of traditional circular light sources, inserting dedicated sections for efficient injection and for additional emittance damping.

Techniques developed for high-energy-physics rings to mitigate the nonlinear-dynamics challenges of breaking periodicity at collision points were applied in the HMBA designs for the injection and damping sections, and were also used to optimise the nonlinear dynamics of the individual HMBA cells. The resulting HMBA lattice can deliver the long-sought diffraction-limited source while maintaining the temporal and transverse stability of third-generation light sources, thanks to the long lifetime and traditional off-axis injection enabled by the nonlinear-dynamics optimisation, thus improving upon the performance of rings now under construction.


Pantaleo Raimondi, professor at the Stanford Linear Accelerator Center, research technical manager, SLAC National Accelerator Laboratory and previously director, Accelerator and Source Division, ESRF.

 

 


 
