Topics

Spotlight on FCC physics

Ten years after the discovery of a Standard Model-like Higgs boson at the LHC, particle physicists face profound questions lying at the intersection of particle physics, cosmology and astrophysics. A visionary new research infrastructure at CERN, the proposed Future Circular Collider (FCC), would create opportunities to either answer them or refine our present understanding. The latest activities towards the ambitious FCC physics programme were the focus of the 5th FCC Physics Workshop, co-organised with the University of Liverpool as an online event from 7 to 11 February. It was the largest such workshop to date, with more than 650 registrants, and welcomed a community that was broad both geographically and thematically, including members of other “Higgs factory” and future projects.

The overall FCC programme – comprising an electron-positron Higgs and electroweak factory (FCC-ee) as a first stage followed by a high-energy proton-proton collider (FCC-hh) – combines the two key strategies of high-energy physics. FCC-ee offers a unique set of precision measurements to be confronted with testable predictions and opens the possibility for exploration at the intensity frontier, while FCC-hh would enable further precision and the continuation of open exploration at the energy frontier. The February workshop saw advances in our understanding of the physics potential of FCC-ee, and discussions of the possibilities provided at FCC-hh and at a possible FCC-eh facility.

The overall FCC programme combines the two key strategies of high-energy physics: precision measurements at the intensity frontier and the open exploration at the energy frontier

The proposed R&D efforts for the FCC align with the requests of the 2020 update of the European strategy for particle physics and the recently published accelerator and detector R&D roadmaps established by the Laboratory Directors Group and ECFA. Key activities of the FCC feasibility study, including the development of a regional implementation scenario in collaboration with the CERN host states, were presented.

Over the past several months, a new baseline scenario for a 91 km-circumference layout has been established, balancing the optimisation of the machine performance, physics output and territorial constraints. In addition, work is ongoing to develop a sustainable operational model for FCC taking into account human and financial resources and striving to minimise its environmental impact. Ongoing testing and prototyping work on key FCC-ee technologies will demonstrate the technical feasibility of this machine, while parallel R&D developments on high-field magnets pave the way to FCC-hh.

Physics programme
A central element of the overall FCC physics programme is the precise study of the Higgs sector. FCC-ee would provide model-independent measurements of the Higgs width and its couplings to Standard Model particles, in many cases with sub-percent precision and qualitatively different to the measurements possible at the LHC and HL-LHC. The FCC-hh stage has unique capabilities for measuring the Higgs-boson self-interactions, profiting from previous measurements at FCC-ee. The full FCC programme thus allows the reconstruction of the Higgs potential, which could give unique insights into some of the most fundamental puzzles in modern cosmology, including the breaking of electroweak symmetry and the evolution of the universe in the first picoseconds after the Big Bang.

Presentations and discussions throughout the week showed the impressive breadth of the FCC programme, extending far beyond the Higgs factory alone. The large integrated luminosity to be accumulated by FCC-ee at the Z-pole enables high-precision electroweak measurements and an ambitious flavour-physics programme. While the latter is still in the early phase of development, it is clear that the numbers of B mesons and tau-lepton pairs produced at FCC-ee significantly surpass those at Belle II, making FCC-ee the flavour factory of the 2040s. Ongoing studies are also revealing its potential for studying interactions and decays of heavy-flavour hadrons and tau leptons, which may provide access to new phenomena including lepton-flavour universality-violating processes. Similarly, the capabilities of FCC-ee to study beyond-the-Standard Model signatures such as heavy neutral leptons have come into further focus. Interleaved presentations on FCC-ee, FCC-hh and FCC-eh physics also further intensified the connections between the lepton- and hadron-collider communities.

The impressive potential of the full FCC programme is also inspiring theoretical work. This ranges from overarching studies on our understanding of naturalness, to concrete strategies to improve the precision of calculations to match the precision of the experimental programme.

The physics thrusts of the FCC-ee programme inform an evaluation of the run plan, which will be influenced by technical considerations on the accelerator side as well as by physics needs and the overall attractiveness and timeliness of the different energy stages (ranging from the Z pole at 91 GeV to the tt̄ threshold at 365 GeV). In particular, the possibility for a direct measurement of the electron Yukawa coupling by extensive operation at the Higgs pole (125 GeV) poses unrivalled challenges, which will be further explored within the FCC feasibility study. The main challenge here is to reduce the spread in the centre-of-mass energy by a factor of around ten while maintaining the high luminosity, requiring a monochromatisation scheme long theorised but never applied in practice.
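
To see roughly where that factor of ten comes from: resonant s-channel e⁺e⁻ → H production is only effective if the collision-energy spread is comparable to the Higgs width, Γ_H ≈ 4.1 MeV in the SM. Assuming a natural collision-energy spread of order 50 MeV at √s = 125 GeV (an illustrative figure; the actual value depends on the final machine parameters),

```latex
\frac{\delta_{\sqrt{s}}^{\text{natural}}}{\Gamma_H}
  \;\approx\; \frac{\mathcal{O}(50~\text{MeV})}{4.1~\text{MeV}}
  \;\approx\; 10 ,
```

which is the level of monochromatisation the scheme must deliver.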

Isometric view of the CLD detector concept

Detectors status and plan
Designing detectors to meet the requirements of the FCC-ee physics programme calls for a strong R&D effort. Concrete detector concepts for FCC-ee were discussed, helping to establish a coherent set of requirements to fully benefit from the statistics and the broad variety of physics channels available.

The primary experimental challenge at FCC-ee is how to deal with the extremely high instantaneous luminosities. Conditions are the most demanding at the Z pole, with the luminosity surpassing 10³⁶ cm⁻²s⁻¹ and the rate of physics events exceeding 100 kHz. Since collisions are continuous, it is not possible to employ “power pulsing” of the front-end electronics as has been developed for detector concepts at linear colliders. Instead, there is a focus on the development of fast, low-power detector components and electronics, and on efficient and lightweight solutions for powering and cooling. With the enormous data samples expected at FCC-ee, statistical uncertainties will in general be tiny (about a factor of 500 smaller than at LEP). The experimental challenge will be to reduce systematic uncertainties towards the same level.
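
A back-of-the-envelope check of that factor of 500, assuming roughly 2 × 10⁷ Z bosons per LEP experiment (an illustrative round number) against the 5 × 10¹² Z bosons expected at FCC-ee: statistical uncertainties scale as the inverse square root of the sample size,

```latex
\frac{\sigma_\text{stat}^\text{LEP}}{\sigma_\text{stat}^\text{FCC-ee}}
  \;\sim\; \sqrt{\frac{N_Z^\text{FCC-ee}}{N_Z^\text{LEP}}}
  \;\approx\; \sqrt{\frac{5\times 10^{12}}{2\times 10^{7}}}
  \;=\; 500 .
```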

The mind-boggling integrated luminosities delivered by FCC-ee would allow Standard Model particles – in particular the W, Z and Higgs bosons and the top quark, but also the b and c quarks and the tau lepton – to be studied with unprecedented precision. The expected number of Z bosons produced (5 × 10¹²) is more than five orders of magnitude larger than the number collected at LEP, and more than three orders of magnitude larger than that envisioned at a linear collider. The high-precision measurements and the observation of rare processes made possible by these large data samples will open opportunities for new-physics discoveries, including the direct observation of very weakly coupled particles such as heavy neutral leptons, which are promising candidates to explain the baryon asymmetry of the universe.

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders.

The detectors that will be located at two (possibly four) FCC-ee interaction points must be designed to fully profit from the extraordinary statistics. Detector concepts under study feature: a 2 T solenoidal magnetic field (limited in strength to avoid blow-up of the low-emittance beams crossing at 30 mrad); a small-pitch, thin-layer vertex detector providing an excellent impact-parameter resolution for lifetime measurements; a highly transparent tracking system providing a superior momentum resolution; a finely segmented calorimeter system with excellent energy resolution for electrons and photons, isolated hadrons and jets; and a muon system. To fully exploit the heavy-flavour possibilities, at least one of the detector systems will need efficient particle-identification capabilities allowing π/K separation over a wide momentum range, for which there are ongoing R&D efforts on compact, light RICH detectors.

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders. The CLIC-inspired CLD concept – featuring a silicon-pixel vertex detector and a silicon tracker followed by a 3D-imaging, highly granular calorimeter system (a silicon-tungsten ECAL and a scintillator-steel HCAL) surrounded by a superconducting solenoid and muon chambers interleaved with a steel return yoke – is being adapted to the FCC-ee experimental environment. Further engineering effort is needed to make it compatible with the continuous-beam operation at FCC-ee. Detector optimisation studies are being facilitated by the robust existing software framework which has been recently integrated into the FCC study.

FCC Curved silicon

The IDEA (Innovative Detector for Electron-positron Accelerators) concept, specifically developed for a circular electron-positron collider, brings in alternative technological solutions. It includes a five-layer vertex detector surrounded by a drift chamber, enclosed in a single-layer silicon “wrapper”. The distinctive element of the He-based drift chamber is its high transparency. Indeed, the material budget of the full tracking system, including the vertex detector and the wrapper, amounts to only about 5% (10%) of a radiation length in the barrel (forward) direction. The drift chamber promises superior particle-identification capabilities via the use of a cluster-counting technique that is currently under test-beam study. In the baseline design, a thin low-mass solenoid is placed inside a monolithic, 2 m-deep, dual-readout fibre calorimeter. An alternative (more expensive) design also features a finely segmented crystal ECAL placed immediately inside the solenoid, providing an excellent energy resolution for electrons and photons.

FCC feedthrough test setup

Recently, work has started on a third FCC-ee detector concept comprising: a silicon vertex detector; a light tracker (drift chamber or full-silicon device); a thin, low-mass solenoid; a highly granular noble-liquid ECAL; a scintillator-iron HCAL; and a muon system. The current baseline ECAL design is based on lead/steel absorbers with liquid argon as the active medium, while a more compact variant based on tungsten absorbers and liquid krypton is also under consideration. The concept design is currently being implemented inside the FCC software framework.

All detector concepts continue to evolve, and there is ample room for further innovative concepts and ideas.

Closing remarks
Circular colliders reach higher luminosities than linear machines because the same particle bunches are used over many turns, while detectors can be installed at several interaction points. The FCC-ee programme greatly benefits from the possibility of having four interaction points, allowing more data to be collected, greater robustness against systematic effects and better physics coverage – especially for very rare processes that could offer hints as to where new physics could lie. In addition, the same tunnel can be used for an energy-frontier hadron collider at a later stage.

The FCC feasibility study will be submitted by 2025, informing the next update of the European strategy for particle physics. Such a machine could start operation at CERN within a few years after the full exploitation of the HL-LHC in around 2040. CERN, together with its international partners, therefore has the opportunity to lead the way for a post-LHC research infrastructure that will provide a multi-decade research programme exploring some of the most fundamental questions in physics. The geographical distribution of participants in the 5th FCC physics workshop testifies to the global attractiveness of the project. In addition, the ongoing physics and engineering efforts, the cooperation with the host states, the support from the European physics community and the global cooperation to tackle the open challenges of this endeavour, are reassuring for the next steps of the FCC feasibility study.

Graph neural networks boost di-Higgs search

Figure 1

Two fundamental characteristics of the Higgs boson (H) that have yet to be measured precisely are its self-coupling λ, which indicates how strongly it interacts with itself, and its quartic coupling to the vector bosons, which mediate the weak force. These couplings can be directly accessed at the LHC by studying the production of Higgs-boson pairs, which is an extremely rare process occurring about 1000 times less frequently than single-H production. However, several new-physics models predict a significant enhancement in the HH production rate compared to the Standard Model (SM) prediction, especially when the H pairs are very energetic, or boosted. Recently, the CMS collaboration developed a new strategy employing graph neural networks to search for boosted HH production in the four-bottom-quark final state, which is one of the most sensitive modes currently under examination.

H pairs are produced primarily via gluon and vector-boson fusion. The former production mode is sensitive to the self-coupling, while the latter probes the quartic coupling involving a pair of weak vector bosons and two Higgs bosons. The extracted modifiers of the coupling-strength parameters, κλ and κ₂V, quantify their strengths relative to the SM expectation.

This latest CMS search targets both production modes and selects two Higgs bosons with a high Lorentz boost. When each Higgs boson decays to a pair of bottom quarks, the two quarks are reconstructed as a single large-radius jet. The main challenge is thus to identify the specific H jet while rejecting the background from light-flavour quarks and gluons. Graph neural networks, such as the ParticleNet algorithm, have been shown to distinguish successfully between real H jets and background jets. Using measured properties of the particles and secondary vertices within the jet cone, this algorithm treats each jet as an unordered set of its constituents, considers potential correlations between them, and assigns each jet a probability to originate from a Higgs-boson decay. At an H-jet selection efficiency of 60%, ParticleNet rejects background jets twice as efficiently as the previous best algorithm (known as DeepAK8). A modified version of this algorithm is also used to improve the H-jet mass resolution by nearly 40%.
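
As an illustration of the point-cloud idea – treating the jet as an unordered set of constituents – the sketch below implements a minimal permutation-invariant tagger in PyTorch. It is a Deep-Sets-style toy, not the CMS implementation: ParticleNet itself builds dynamic k-nearest-neighbour graphs in feature space and applies edge convolutions, and the class name, feature count and layer widths here are arbitrary choices for the example.

```python
# Minimal permutation-invariant jet tagger, in the spirit of the
# point-cloud treatment described above (NOT the ParticleNet
# architecture, which uses dynamic graph edge convolutions).
import torch
import torch.nn as nn

class SetJetTagger(nn.Module):
    def __init__(self, n_features=7, hidden=64):
        super().__init__()
        # per-constituent embedding, shared across all constituents
        self.phi = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # jet-level classifier acting on the pooled representation
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: H jet vs background
        )

    def forward(self, constituents, mask):
        # constituents: (batch, n_max, n_features); mask flags real
        # (non-padded) entries so jets of any size can be batched
        h = self.phi(constituents) * mask.unsqueeze(-1)
        pooled = h.sum(dim=1)  # sum-pooling => order-independent output
        return self.rho(pooled).squeeze(-1)

# toy usage: a batch of 4 jets with up to 50 constituents each
model = SetJetTagger()
x = torch.randn(4, 50, 7)                   # fake constituent features
mask = (torch.rand(4, 50) > 0.3).float()    # some entries are padding
prob_higgs = torch.sigmoid(model(x, mask))  # per-jet H-decay probability
```

Sum-pooling over the embedded constituents is what makes the output independent of their ordering, the minimal property any such set-based jet classifier must have.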

Using the full LHC Run-2 dataset, the new result excludes an HH production rate larger than 9 times the SM cross-section at 95% confidence level, versus an expected limit of 5. This represents an improvement by a factor of 30 compared to the previous best result for boosted HH production. The analysis yields a strong constraint on the HH production rate and κλ, and the most stringent constraint on κ₂V to date, assuming all other H couplings to be at their SM values (see figure 1). For the first time, and with the assumption that the other couplings are consistent with the SM, the result excludes the κ₂V = 0 scenario at over five standard deviations, confirming the existence of a quartic coupling between two vector bosons and two Higgs bosons. This search paves the way for a more extensive use of advanced machine-learning techniques, the exploration of the boosted HH production regime, and further investigation into the potentially anomalous character of the Higgs boson in Run 3 and beyond.

Extending the reach on Higgs’ self-coupling

Figure 1

The discovery of the Higgs boson and the comprehensive measurements of its properties provide a strong indication that the mechanism of electroweak symmetry breaking (EWSB) is compatible with the one predicted by Brout, Englert and Higgs (BEH) in 1964. But there remain unprobed features of EWSB, chiefly whether the form of the BEH potential follows the predicted “Mexican hat” shape. One of the parameters that determines the form of the BEH potential is the Higgs boson’s trilinear self-coupling, λ. Experimentally, this fundamental parameter can be measured via Higgs-boson pair (HH) production, where a single virtual Higgs boson splits into two Higgs bosons. However, such a measurement is very challenging as the Standard Model (SM) HH production cross-section is more than 1000 times lower than that of single Higgs-boson production.
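
To see why limits on the HH rate translate into a two-sided allowed interval for the self-coupling, note that at leading order the gluon-fusion HH amplitude is the sum of a “triangle” contribution proportional to κλ (the virtual-Higgs splitting described above) and a κλ-independent “box” contribution, which interfere destructively in the SM. The rate is therefore a quadratic function of the coupling modifier; schematically (the coefficients cᵢ are fixed by simulation and are not quoted in the text):

```latex
\frac{\sigma_{HH}(\kappa_\lambda)}{\sigma_{HH}^{\mathrm{SM}}}
  = c_0 + c_1\,\kappa_\lambda + c_2\,\kappa_\lambda^{2},
\qquad \kappa_\lambda \equiv \lambda / \lambda_{\mathrm{SM}} .
```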

Beyond the SM (BSM) physics with modified or new Higgs-boson couplings could lead to significantly enhanced HH production. Some BSM scenarios predict new heavy particles that may lead to resonant HH production, in contrast to the non-resonant production induced by the triple-Higgs-boson coupling. New ATLAS results set tight constraints on both the non-resonant and resonant scenarios, showing that the reach of the current and future LHC datasets can be pushed significantly further.

The ATLAS collaboration recently released results of searches for HH production in three final states – bbγγ, bbττ and 4b (where one Higgs boson decays into two b-quarks and the other into two photons, two tau-leptons or two b-quarks) and their combination, exploiting the full LHC Run-2 dataset. The first two analyses target both resonant and non-resonant HH production, while the 4b analysis targets only resonant HH production. These three channels are the most sensitive final states in each scenario. The three decay modes of the second Higgs boson provide good sensitivity in different kinematic regions, so that the analyses are highly complementary. The HH → bbγγ process has the lowest branching ratio but high efficiency to trigger and reconstruct photons, as well as an excellent diphoton mass resolution, leading to the best sensitivity at low HH invariant masses. The HH → 4b final state has the highest branching ratio but suffers from the requirement to impose high transverse momentum b-jet trigger thresholds, the ambiguity in the Higgs boson reconstruction and the large multijet background. However, it provides the best sensitivity at high HH invariant masses. Finally, the HH → bbττ decay has a moderate branching ratio as well as a moderate background contamination, giving the best sensitivity in the intermediate HH mass range. 

BSM physics with new Higgs-boson couplings could lead to significantly enhanced HH production

With the latest analyses, a remarkably stringent observed (expected) upper limit of 3.1 (3.1) times the SM prediction on non-resonant HH production was obtained at 95% confidence level (CL). The Higgs-boson trilinear self-coupling modifier κλ, defined as the coupling strength in units of its SM value, is observed (expected) to be constrained between –1.0 and 6.6 (–1.2 and 7.2) at 95% CL (see figure 1). These are the world’s tightest constraints obtained on this process. The observed (expected) exclusion limits at 95% CL on the resonant HH production cross-section range between 1.1 and 595 fb (1.2 and 392 fb) for resonance masses between 250 and 5000 GeV.

The sensitivity of the current analyses is still limited by statistical uncertainties and is expected to improve significantly with the future luminosity increase during LHC Run 3 and the HL-LHC programme. Compared with previous results based on a partial Run-2 dataset, the limits have improved by more than a factor of three. A factor of two was expected from the larger dataset; the remaining improvement arises from better object reconstruction and identification techniques, and new analysis methods.

These latest results inspire confidence that the observation of the SM HH production and a precise measurement of the Higgs-boson trilinear self-coupling may be possible at the HL-LHC.

Crab cavities enter next phase

The imminent start of LHC Run 3 following a vast programme of works completed during Long Shutdown 2 marks a milestone for the CERN accelerator complex. When stable proton beams return to the LHC this year (see LHC Run 3: the final countdown), they will collide at a higher energy (13.6 compared to 13 TeV) and with more intense bunches (up to 1.8 × 10¹¹ protons per bunch compared to 1.3–1.4 × 10¹¹) than in Run 2. Physicists working on the LHC experiments can therefore look forward to a rich harvest of results during the next three years. After Run 3, the statistical gain in running the accelerator without a significant luminosity increase beyond its design and ultimate values will become marginal. Therefore, to maintain scientific progress and to exploit its full capacity, the LHC is undergoing upgrades that will allow a decisive increase of its luminosity during Run 4, expected to begin in 2029, and beyond.

Several technologies are being developed for this High-Luminosity LHC (HL-LHC) upgrade. One is new, large-aperture quadrupole magnets based on a niobium-tin superconductor. These will be installed on either side of the ATLAS and CMS experiments, providing the space required for smaller beam-spot sizes at the interaction points and shielding against the higher radiation levels when operating at increased luminosities. The other key technology, necessary to take advantage of the smaller beam-spot size at the interaction points, is a series of superconducting radio-frequency (RF) “crab” cavities that enlarge the overlap area of the incoming bunches and thus increase the probability of collisions. Never used before at a hadron collider, a total of 16 compact crab cavities will be installed on either side of each of ATLAS and CMS once Run 3 ends and Long Shutdown 3 begins.

The crab-cavity test facility

At a collider such as the LHC, the two counter-circulating beams must be physically separated by a small angle, known as the crossing angle, so that bunches collide at only a single location in the common interaction region (where the two beams share the same beam pipe). The bunches at the HL-LHC will be 10 cm long and only 7 μm wide at the collision points, resembling long, thin wires. As a result, even a very small angle between the bunches implies an immediate loss in luminosity. With the use of powerful superconducting crab cavities, the tilt of the bunches at the collision point can be precisely controlled to make it optimal for the experiments and fully exploit the scientific potential of the HL-LHC.
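
A minimal estimate of why such thin, long bunches cannot tolerate a crossing angle, using the standard Piwinski-angle reduction factor with the bunch dimensions quoted above; the full crossing angle of 500 μrad is an assumed, illustrative value rather than an official machine parameter:

```python
# Back-of-the-envelope geometric luminosity loss from a crossing angle
# (Gaussian bunches, hourglass effect ignored):
#   F = 1 / sqrt(1 + phi**2),  phi = (sigma_z / sigma_x) * (theta_c / 2)
import math

sigma_z = 10e-2    # bunch length: 10 cm (from the text)
sigma_x = 7e-6     # transverse size at the IP: 7 um (from the text)
theta_c = 500e-6   # full crossing angle in rad (assumed, illustrative)

phi = (sigma_z / sigma_x) * (theta_c / 2)   # Piwinski angle ~ 3.6
F = 1.0 / math.sqrt(1.0 + phi**2)
print(f"phi = {phi:.2f}, geometric reduction F = {F:.2f}")  # F ~ 0.27
# i.e. roughly 70% of the head-on luminosity would be lost without
# crabbing -- the deficit the crab cavities are designed to recover
```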

Radical concepts 

The tight space constraints from the relatively small separation of the two beams outside the common interaction region require a radically new RF concept for particle deflection, employing a novel shape and significantly smaller cavities than those used in other accelerators. Designs for such devices began around 10 years ago, with CERN settling on two types: double quarter wave (DQW) and RF-dipole (RFD). The former will be fitted around CMS, where bunches are separated vertically, and the latter around ATLAS, where bunches will be separated horizontally, requiring crab cavities uniquely designed for each plane. It is also planned to swap the crossing-angle planes and crab-cavity installations at a later stage during the HL-LHC operation.

The RF-dipole cavity

In 2017, two prototype DQW-type cavities were built and assembled at CERN into a special cryomodule and tested at 2 K, validating the mechanical, cryogenic and RF functioning. The module was then installed in the Super Proton Synchrotron (SPS) for beam tests, with the world’s first “crabbing” of a proton beam demonstrated on 31 May 2018. In parallel, the fabrication of two prototype RFD-type cavities from high-purity niobium was underway at CERN. Following the integration of the devices into a titanium helium tank at the beginning of 2021, and successful tests at 2 K reaching voltages well beyond the nominal value of 3.4 MV, the cavities were equipped with specially designed RF couplers, which are necessary for beam operations. The two cavities are now being integrated into a cryomodule at Daresbury Laboratory in the UK as a joint effort between CERN and the UK’s Science and Technology Facilities Council (STFC). The cryomodule will be installed in a 15 m-long straight section (LSS6) of the SPS in 2023 for its first test with proton beams. This location in the SPS is equipped with a special by-pass and other services, which were put in place in 2017–2018 to test and operate the DQW-type module. 

The manufacturing challenge 

Due to the complex shape and micrometric tolerances required for the HL-LHC crab cavities, a detailed study was performed to realise the final shape through forming, machining, welding and brazing operations on the high-purity niobium sheets and associated materials (see “Fine machining” images). To ensure a uniform removal of material along the cavities’ complex shape, a rotational buffer chemical polishing (BCP) facility was built at CERN for surface etching of the HL-LHC crab cavities. For the RFD and DQW, the rotational setup etches approximately 250 μm off the internal RF surface to remove the surface layer damaged during the forming process. Ultrasound measurements were performed to follow the evolution of the cavity-wall thickness during the BCP steps, showing remarkable uniformity (see “Chemical etching” images).

The chemical-etching setup

Preparation of the RFD cavities involved a similar process to that for the DQW modules. Following chemical etching and a very high-temperature bake at 650 °C in a vacuum furnace, the cavities are rinsed in ultra-pure water at high pressure (100 bar) for approximately seven hours. This process has proven to be a key step in the HL-LHC crab-cavity preparation to enable extremely high fields and suppress electron-field emitters, which can limit the performance. The cavity is then closed with its RF ancillaries in an ISO4 cleanroom environment to preserve the ultra-clean RF surface, and installed into a special vertical cryostat to cool the cavity surface to its 2 K operating temperature (see “Clean and cool” image, top). Both RFD cavities reached performances well above the nominal target of 3.4 MV: RFD1 exceeded the nominal voltage by more than 50%, and RFD2 by more than a factor of two (7 MV) – a world-record deflecting field in this frequency range. These performances were reproducible after the assembly and welding of the helium tank owing to the careful preparation of the RF surface throughout the different steps of assembly and preparation.

RF dipole cavity and cold magnetic shield

The helium tank provides a volume around the cavity surface that is maintained at 2 K with superfluid helium (see “Clean and cool” image, bottom). Because the assembly deforms appreciably during cool-down from ambient temperature, the vessel is made of titanium, whose thermal behaviour closely matches that of the niobium cavity. A magnetic shield between the cavity and the helium tank suppresses stray fields in the operating environment and further preserves cavity performance. Following the tests with helium tanks, the cavities were equipped with higher-order-mode couplers and field antennae to undergo a final test at 2 K before being assembled into a two-cavity string inside the cryomodule.

The crab cavities require many ancillary components to allow them to function. This overall system is known as a cryomodule (see “Cryomodule” image, top) and ensures that the operational environment is correct, including the temperature, stability, vacuum conditions and RF frequency of the cavities. Technical challenges arise due to the need to assemble the cavity string in an ISO4 cleanroom, the space constraints of the LHC (leading to the rectangular compact shape), and the requirement of fully welded joints (where typically “O” rings would be used for the insulation vacuum).

Design components

The outer vacuum chamber (OVC) of the cryomodule provides an insulation vacuum to prevent heat leaking to the environment as well as providing interfaces to any external connections. Manufactured by ALCA Technology in Italy, the OVC uses a rectangular design in which the cavity string is mounted to a top-plate that is lowered into the rest of the OVC, and includes four large windows to allow access for repair in situ if required (see “Cryomodule” image, bottom). Since the first DQW prototype module, several cryomodule interfaces including cryogenic and vacuum components have been updated to be fully compatible with the final installation in the HL-LHC.

The HL-LHC crab-cavity programme has developed into a mature project supported by a large number of collaborating institutions around the world

Since superconducting RF cavities can have a higher surface resistance if cooled below their transition temperature in the presence of a magnetic field, they need to be shielded from Earth’s magnetic field and stray fields in the surrounding environment. This is achieved using a warm magnetic shield mounted in the OVC, and a cold magnetic shield mounted inside the liquid-helium vessel. Both shields, which are made from special nickel-iron alloys, are manufactured by Magnetic Shields Ltd in the UK.

Status and outlook

The RFD crab-cavity pre-series cryomodule will be assembled this year at Daresbury lab, where the infrastructure on site has been upgraded, including an extension to the ISO4 cleanroom area and the introduction of an ISO6 preparation area. A bespoke five-tonne crane has also been installed and commissioned to allow the precise lowering of the delicate cavity string into the outer vacuum vessel.

RF dipole cryomodule and outer vacuum vessel

Parallel activities are taking place elsewhere. The HL-LHC crab-cavity programme has developed into a mature project supported by a large number of collaborating institutions around the world. In the US, the Department of Energy is supporting the HL-LHC Accelerator Upgrade Project to coordinate the efforts and leverage the expertise of a group of US laboratories and universities (FNAL, BNL, JLAB, SLAC, ODU) to deliver the series RFD cavities for the HL-LHC. In 2021, two RFD prototype cavities were built by the US collaboration and exceeded the two most important functional project requirements for crab cavities – deflecting voltage and quality factor. After this successful demonstration, the fabrication of the pre-series cavities was launched.

Crab cavities were first implemented in an accelerator in 2006, at the KEKB electron–positron collider in Japan, where they helped the collider reach record luminosities. A different “crab-waist” scheme is currently employed at KEKB’s successor, SuperKEKB, helping to reach even higher luminosities. The development of ultra-compact, very-high-field cavities for a high-energy hadron collider such as the HL-LHC is even more challenging, and will be essential to maximise the scientific output of this flagship facility beyond the 2030s. 

Beyond the HL-LHC, the compact crab-cavity concepts have been adopted by future facilities, including the proton–proton stage of the proposed Future Circular Collider; the Electron–Ion Collider under construction at Brookhaven; bunch compression in synchrotron X-ray sources to produce shorter pulses; and ultrafast particle separators in proton linacs to separate bunches of secondary particles for different experiments. The full implementation of this technology at the HL-LHC is therefore keenly awaited. 

Form follows function in QCD

Hadron form factors

In the 1970s, the study of low-energy (few GeV) hadron–hadron collisions in bubble chambers was all the rage. It seemed that we understood very little. We had the SU(3) of flavour, Regge theory and the S-matrix to describe hadronic processes, but no overarching theory. Of course, theorists were already working on perturbative QCD, and this started to gain traction when experimental results from the Big European Bubble Chamber at CERN showed signs of scaling violations and enabled an early measurement of the QCD scale, ΛQCD. We have been living with the predictions of perturbative QCD ever since, at increasingly higher orders. But there have always been non-perturbative inputs, such as the parton distribution functions.

Hadron Form Factors: From Basic Phenomenology to QCD Sum Rules takes us back to low-energy hadron physics and shows us how much more we know about it today. In particular, it explores the formalism for heavy-flavour decays, which is particularly relevant at a time when it seems that the only anomalies we observe with respect to the Standard Model appear in various B-meson decays. It also explores the connections between space-like and time-like processes in terms of QCD sum rules connecting perturbative and non-perturbative behaviour.

The book takes us back to low-energy hadron physics and shows us how much more we know about it today

The general introduction reminds us of the formalism of form factors in the atomic case. This is generalised to mesons and baryons in chapters 2 and 3, after the introduction of QCD in chapter 1, with an emphasis on quark and gluon electroweak currents and their generalisation to effective currents. Hadron spectroscopy is reviewed from a modern perspective and heavy-quark effective theory is introduced. In chapter 2, the formalism for the pion form factor, which is related to the pion decay constant, is introduced via e–π scattering. Due emphasis is placed on how one may measure these quantities. I also appreciated the explanation of how a pseudoscalar particle such as the pion can decay via the axial vector current – a question often raised by smart undergraduates. (Clue: the axial vector current is not conserved.) Next, the πe3 decay is considered and generalised to K-, D- and B-meson semileptonic decays. Chapter 3 covers the baryon form factors and their decay constants, and chapter 4 considers hadronic radiative transitions. Chapter 5 relates the pion form factor in the space-like region to its counterpart in the time-like region in e⁺e⁻ → π⁺π⁻, where one has to consider resonances and widths. Relationships are developed whereby one can see that, by measuring pion and kaon form factors in e⁺e⁻ scattering, one can predict the widths of decays such as τ → ππν and τ → KKν. In chapter 6, non-local hadronic matrix elements are introduced to extend the formalism to deal with decays such as π → γγ and B → Kμ⁺μ⁻.

The book shifts gears in chapters 7–10. Here, QCD is used to calculate hadronic matrix elements. Chapter 7 covers the calculation of the form factors in the infinite momentum frame, whereby the asymptotic form factor can be expressed in terms of the pion decay constant and a pion distribution amplitude describing the momentum distribution between two valence partons in the pion. In chapter 8, the QCD sum rules are introduced. The two-point correlation of quark current operators can be calculated in perturbative QCD at large space-like momenta, and the result is expressed in terms of perturbative contributions and the QCD vacuum condensates. This can then be related through the sum rule to the hadronic degrees of freedom in the time-like region. Such sum rules are used both to extract condensate densities and quark masses from accurate hadronic data, and to predict hadronic decay constants and masses from QCD calculations. The connection is made to parton–hadron duality and to the operator product expansion. Some illustrative examples of the technique, such as the calculation of the strange-quark mass and the pion decay constant, are also given. Chapter 9 concerns the light-cone expansion and light-cone dominance, which is then used to explain the role of light-cone sum rules in chapter 10. The use of these sum rules in calculating hadron form factors is illustrated with the pion form factor and also with the heavy-to-light form factors necessary for B → π, B → K, D → π, D → K and B → D decays.
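
Schematically, the sum-rule machinery described here rests on a dispersion relation: the correlator Π(q²), calculable at large space-like q² via the operator product expansion (perturbative terms plus vacuum condensates), equals an integral over the hadronic spectral density in the time-like region,

```latex
\Pi(q^2) = \frac{1}{\pi} \int_{s_{\min}}^{\infty}
  \frac{\operatorname{Im}\,\Pi(s)}{s - q^2 - i\epsilon}\,\mathrm{d}s ,
```

so accurate hadronic data constrain the QCD side and vice versa. (Subtraction terms and the Borel transformation used in practical applications are omitted in this schematic form.)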

Overall, this book is not an easy read, but there are many useful insights. This is essentially a textbook, and a valuable reference work that belongs in the libraries of particle-physics institutes around the world.

Your Adventures at CERN: Play the Hero Among Particles and a Particular Dinosaur!


Billed as a bizarre adventure filled with brain-tickling facts about particles and science wonders, Your Adventures at CERN invites young audiences to experience a visit to CERN in different guises.

The reader can choose one of three characters, each with a different story: a tourist, a student and a researcher. The stories are intertwined, and the reader’s choices change the journey through the book rather than following a linear chronology. The stories are filled with puzzles, mazes, quizzes and many other games that challenge the reader. Engaging physics references and explanations, as well as the solutions to the quizzes, are given at the back of the book.

Author Letizia Diamante, a biochemist turned science communicator who previously worked in the CERN press office, portrays the CERN experience in an engaging and understandable way. The adventures are illustrated with funny jokes and charismatic characters, such as “Schrödy”, a hungry cat that guides the reader through the adventures in exchange for food. Detailed hand-drawn illustrations by Claudia Flandoli are included, together with photographs of CERN facilities that take the reader directly into the heart of the lab. Moreover, the book includes several historical facts about particle physics and other topics, such as the city of Geneva and the extinct dinosaurs from the Jurassic period, which is named after the nearby Jura mountains on the border between France and Switzerland. A particle-physics glossary and extra information, such as fun cooking recipes, are also included at the end.

Although targeted mainly at children, this book is also suitable for teenagers and adults looking for a soft introduction to high-energy physics and CERN, offering a refreshing addition to the more mainstream popular particle-physics literature.

Fear of a Black Universe: an outsider’s guide to the future of physics


Stephon Alexander is a professor of theoretical physics at Brown University, specialising in cosmology, particle physics and quantum gravity. He is also a self-professed outsider, as the subtitle of his latest book Fear of a Black Universe suggests. His first book, The Jazz of Physics, was published in 2016. Fear of a Black Universe is a rallying cry for anyone who feels like a misfit because their identity or outside-the-box thinking doesn’t mesh with cultural norms. By interweaving historical anecdotes and personal experiences, Alexander shows how outsiders drive innovation by making connections and asking questions insiders might dismiss as trivial.

Alexander is Black and internalised his outsider sense early in his career. As a postdoc in the early 2000s, he found that his attempts to engage with other postdocs in his group were rebuffed. He eventually learned from his friend Brian Keating, who is white, the reason why: “They feel that they had to work so hard to get to the top and you got in easily, through affirmative action”. Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating: “I’ve come to realise that when you fit in, you might have to worry about maintaining your place in the proverbial club… so I eventually became comfortable being the outsider. And since I was never an insider, I didn’t have to worry that colleagues might laugh at me for my unlikely approach.”

Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating

Alexander argues that true breakthroughs come from “deviants”. He draws parallels between outsiders in physics and graffiti artists, who were considered vandals until the art world recognised their talent and contributions. Alexander recounts his own “deviance” in a humorous and sometimes self-deprecating manner. He recalls a talk he gave at a conference about his first independent paper, which involved reinterpreting the universe as a three-dimensional membrane orbiting a five-dimensional black hole. During the talk he was often interrupted, eventually prompting a well-respected Indian physicist to stand up and shout “Let him finish! No one ever died from theorising.”

Alexander took these words to heart, and asks his readers to do the same during the speculative discussions in the second part of his book. Here, Alexander intersperses mainstream physics with some of his self-described “strange” ideas, acknowledging that some readers might write him off as an “oddball crank”. He explores the intersection of physics with philosophy, biology, consciousness, and searches for extraterrestrial life. Some sections – such as the chapter on alien quantum computers generating the effect of dark energy – feel more like science fiction than science. But Alexander reassures readers that, while many of his ideas are strange, so are many experimentally verified tenets of physics. “In fact, the likelihood that any one of us will create a new paradigm because we have violated the norms… is very slim,” he observes.

Science-wise, this book is not for the faint-hearted. While many other public-facing physics books slowly wade readers into early-20th-century physics and touch on more abstract concepts only in the final chapters, part I of Fear of a Black Universe dives directly into relativity, quantum mechanics and emergence. Part II then launches into a much deeper discussion about supersymmetry, baryogenesis, quantum gravity and quantum computing. But the strength of Alexander’s new work isn’t in its retellings of Einstein’s thought experiments or even its deconstruction of today’s cosmological enigmas. More than anything, this book makes a case for cultivating diversity in science that goes beyond “gesticulations of identity politics”.

Fear of a Black Universe is both mind-bending and refreshing. It approaches physics with a childlike curiosity and allows the reader to playfully contemplate questions many have but few discuss for fear of sounding like a crank. This book will be enjoyable for scientists and science enthusiasts who can set cultural norms aside and just enjoy the ride.

Exploring the CMB like never before

To address the major questions in cosmology, the cosmic microwave background (CMB) remains the single most important phenomenon that can be observed. Not this author’s words, but those of the recent US National Academies of Sciences, Engineering, and Medicine report Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Astro2020), which recommended that the US pursue a next-generation ground-based CMB experiment, CMB-S4, to enter operation in around 2030. 

The CMB comprises the photons created in the Big Bang. These photons have therefore experienced the entire history of the universe. Everything that has happened has left an imprint on them in the form of anisotropies in their temperature and polarisation with characteristic amplitudes and angular scales. The early universe was hot enough to be completely ionised, which meant that the CMB photons constantly scattered off free electrons. During this period the primary CMB anisotropies were imprinted, tracing the overall geometry of the universe, the fraction of the energy density in baryons, the number of light-relic particles and the nature of inflation. After about 375,000 years of expansion the universe cooled enough for neutral hydrogen atoms to be stable. With the free electrons rapidly swept up by protons, the CMB photons simply free-streamed in whatever direction they were last moving in. When we observe the CMB today we therefore see a snapshot of this so-called last-scattering surface.

The continued evolution of the universe had two main effects on the CMB photons. First, its ongoing expansion stretched their wavelengths to peak at microwave frequencies today. Second, the growth of structure eventually formed galaxy clusters that change the direction, energy and polarisation of the CMB photons that pass through them, both from gravitational lensing by their mass and from inverse Compton scattering by the hot gas that makes up the intra-cluster medium. These secondary anisotropies therefore constrain all of the parameters that this history depends on, from the moment the first stars formed to the number of light-relic particles and the masses of neutrinos.

The temperature anisotropies of the CMB

As noted by the Astro2020 report, the history of CMB research is that of continuously improving ground and balloon experiments, punctuated by comprehensive measurements from the major satellite missions COBE, WMAP and Planck. The increasing temperature and polarisation sensitivity and angular resolution of these satellites is evidenced in the depth and resolution of the maps they produced (see “Relic radiation” image). However, such maps are just our view of the CMB – one particular realisation of a random process. To derive the underlying cosmology that gave rise to them, we need to measure the amplitude of the anisotropies on various angular scales (see “Power spectra” figure). Following the serendipitous discovery of the CMB in 1965, the first measurements of the temperature anisotropy were made by COBE in 1992. The first peak in the temperature power spectrum was measured by the BOOMERanG and MAXIMA balloons in 2000, followed by the E-mode polarisation of the CMB by the DASI experiment in 2002, and the B-mode polarisation by the South Pole Telescope and POLARBEAR experiments in 2015.
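
As a concrete illustration of the map-to-spectrum step described above, the following sketch uses healpy (the Python interface to HEALPix); the input is a synthetic map drawn from an assumed flat toy spectrum rather than real satellite data, and the resolution is an arbitrary choice.

```python
# Sketch of the map -> angular-power-spectrum step: generate one
# random realisation of the sky from a toy spectrum, then measure
# its power spectrum back, as one would for a real temperature map.
import numpy as np
import healpy as hp

nside = 256                # HEALPix resolution parameter (illustrative)
lmax = 3 * nside - 1

cl_in = np.ones(lmax + 1)  # flat toy C_ell, purely illustrative
cl_in[:2] = 0.0            # remove monopole and dipole power

t_map = hp.synfast(cl_in, nside)       # one realisation of the sky
cl_out = hp.anafast(t_map, lmax=lmax)  # measured power spectrum

# cl_out scatters around cl_in with relative cosmic variance
# sqrt(2 / (2*l + 1)): a single sky realisation fundamentally limits
# how well the largest angular scales can ever be measured
```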

CMB-S4, a joint effort supported by the US Department of Energy (DOE) and the National Science Foundation (NSF), will help write the next chapter in this fascinating adventure. Planned to comprise 21 telescopes at the South Pole and in the Chilean Atacama Desert instrumented with more than 500,000 cryogenically-cooled superconducting detectors, it will exceed the capabilities of earlier generations of experiments by more than an order of magnitude and deliver transformative discoveries in fundamental physics, cosmology, astrophysics and astronomy.

The CMB-S4 challenge 

Three major challenges must be addressed to study the CMB at such levels of precision. Firstly, the signals are extraordinarily faint, requiring massive datasets to reduce the statistical uncertainties. Secondly, we have to contend with systematic effects both from imperfect instruments and from the environment, which must be controlled to exquisite precision if they are not to swamp the signals. Finally, the signals are obscured by other sources of microwave emission, especially galactic synchrotron and dust emission. Unlike the CMB, these sources do not have a black-body spectrum, so it is possible to distinguish between CMB and non-CMB sources if observations are made at enough microwave frequencies to break the degeneracy.
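
A toy numerical illustration of how multi-frequency coverage breaks that degeneracy, in the spirit of an internal linear combination (ILC): because the black-body CMB enters every map (in thermodynamic-temperature units) with the same unit coefficient, a minimum-variance combination constrained to preserve it suppresses foregrounds that scale differently with frequency. All amplitudes and scalings below are invented for the example.

```python
# Toy ILC-style component separation with three synthetic
# frequency maps containing CMB, one foreground and noise.
import numpy as np

rng = np.random.default_rng(0)
npix = 100_000
cmb = rng.normal(0.0, 100.0, npix)   # CMB fluctuations, uK
dust = rng.normal(0.0, 50.0, npix)   # one foreground template, uK

dust_scaling = np.array([0.3, 1.0, 3.0])    # assumed per-band scaling
maps = np.array([cmb + a * dust + rng.normal(0.0, 5.0, npix)
                 for a in dust_scaling])    # three frequency maps

# minimum-variance weights subject to sum(w) = 1, which preserves the
# frequency-independent CMB while suppressing everything else
C = np.cov(maps)
e = np.ones(len(dust_scaling))
w = np.linalg.solve(C, e)
w /= e @ w

cleaned = w @ maps
print("residual rms:", np.std(cleaned - cmb))  # ~noise level, dust suppressed
```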

Power spectra of the CMB

This third challenge actually proves to be an astrophysical blessing as well as a cosmological curse: CMB observations are also excellent legacy surveys of the millimetre-wave sky, which can be used for a host of other science goals. These range from cataloguing galaxy clusters, to studying the Milky Way, to detecting spatial and temporal transients such as gamma-ray bursts via their afterglows.

Coming together

In 2013 the US CMB community came together in the Snowmass planning process, which informs the deliberations of the decadal Particle Physics Project Prioritization Panel (P5). We realised that achieving the sensitivity needed to make the next leap in CMB science would require an experiment of such magnitude (and therefore cost) that it could only be accomplished as a community-wide endeavour, and that we would therefore need to transition from multiple competing experiments to a single collaborative one. By analogy with the US dark-energy programme, this was designated a “Stage 4” experiment, and hence became known as CMB-S4. 

In 2014 a P5 report made the critical recommendation that the DOE should support CMB science as a core piece of its programme. The following year a National Academies report identified CMB science as one of three strategic priorities for the NSF Office of Polar Programs. In 2017 the DOE, NSF and NASA established a task force to develop a conceptual design for CMB-S4, and in 2019 the DOE took “Critical Decision 0”, identifying the mission need and initiating the CMB-S4 construction project. In 2020 Berkeley Lab was appointed the lead laboratory for the project, with Argonne, Fermilab and SLAC all playing key roles. Finally, late last year, the long-awaited Astro2020 report unconditionally recommended CMB-S4 as a joint NSF and DOE project with an estimated cost of $650 million. With these recommendations in place, the CMB-S4 construction project could begin.

CMB-S4 constraints

From the outset, CMB-S4 was intended to be the first sub-orbital CMB experiment designed to reach specific critical scientific thresholds, rather than simply to maximise the science return under a particular cost cap. Furthermore, as a community-wide collaboration, CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies – including operating at the site best suited to each science goal. One third of the major questions and discovery areas identified across the six Astro2020 science panels depend on CMB observations.

The critical degrees of freedom in the design of any observation are the sky area, frequency coverage, frequency-dependent depth and angular resolution, and observing cadence. Having reviewed the requirements across the gamut of CMB science, four driving science goals have been identified for CMB-S4. 

For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds

The first is to test models of inflation via the primordial gravitational waves they naturally generate. Such gravitational waves are the only known source of a primordial B-mode polarisation signal. The size of these primordial B-modes is quantified by the ratio of the primordial tensor (gravitational-wave) power to the scalar (density-perturbation) power – the tensor-to-scalar ratio, designated r. For the largest and most popular classes of inflationary models, CMB-S4 will make a 5σ detection of r, while failure to make such a measurement will put an upper limit of r ≤ 0.001 at 95% confidence, setting a rigorous constraint on alternative models (see “Constraining inflation” figure). The large-scale B-mode polarisation signal encoding r is the faintest of all the CMB signals, requiring both the deepest measurement and the widest low-resolution frequency coverage of any CMB-S4 science case.
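
For reference, r compares the amplitudes of the primordial tensor and scalar power spectra at a conventional pivot scale (the choice of pivot varies between analyses; the value below is one common convention):

```latex
r \;\equiv\; \left.\frac{A_t}{A_s}\right|_{k_*},
\qquad k_* = 0.05~\mathrm{Mpc}^{-1} .
```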

The second goal concerns the dark universe. Dark matter and dark energy make up 95% of the universe’s mass-energy content, and their particular form and composition impact the growth of structure and thus the small-scale CMB anisotropies. The collective influence of the three known light-relic particles (the Standard Model neutrinos) has already been observed in CMB data, but many new light species, such as axion-like particles and sterile neutrinos, are predicted by extensions of the Standard Model. CMB-S4’s goal, and the most challenging measurement in this arena, is to detect any additional light-relic species with freeze-out temperatures up to the QCD phase-transition scale. This corresponds to constraining the uncertainty on the number of light-relic species N_eff to ≤ 0.06 at 95% confidence (see “Light relics” figure). Precise measurements of the small-scale temperature and E-mode polarisation signals that encode this signal require the largest sky area of any CMB-S4 science case. In addition, since the sum of the masses of the neutrinos impacts the degree of lensing of the E-mode polarisation into small-scale B-modes, CMB-S4 will be able to constrain this sum around a fiducial value of 58 meV with a 1σ uncertainty ≤ 24 meV (in conjunction with baryon acoustic oscillation measurements) and ≤ 14 meV with better measurements of the optical depth to reionisation.

Current and anticipated CMB-S4 constraints

The third science goal is to understand the formation and evolution of galaxy clusters, and in particular to probe the early period of galaxy formation at redshifts z > 2. This is enabled by the Sunyaev–Zel’dovich (SZ) effect, whereby CMB photons are up-scattered by the hot, moving gas in the intra-cluster medium. This shifts the CMB photons’ frequency spectrum, resulting in a decrement at frequencies below 217 GHz and an increment at frequencies above, therefore allowing clusters to be identified by matching up the corresponding cold and hot spots. A key feature of the SZ effect is its redshift independence, allowing us to generate complete, flux-limited catalogues of clusters to the survey sensitivity. The small-scale temperature signals needed for such a catalogue require the highest angular resolution and the widest high-resolution frequency coverage of all the CMB-S4 science cases.
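
The 217 GHz crossover follows from the thermal SZ spectral function; in the non-relativistic limit, the fractional temperature change along a line of sight with Compton parameter y is

```latex
\frac{\Delta T}{T_{\mathrm{CMB}}}
  = y \left[ x \coth\!\left(\frac{x}{2}\right) - 4 \right],
\qquad x \equiv \frac{h\nu}{k_B T_{\mathrm{CMB}}} ,
```

which is negative (a decrement) below the null at x ≈ 3.83, i.e. ν ≈ 217 GHz for T_CMB = 2.725 K, and positive (an increment) above it.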

Finally, CMB-S4 aims to explore the mm-wave transient sky, in particular the rate of gamma-ray bursts to help constrain their mechanisms (a few hours to days after the initial event, gamma-ray bursts are observable at longer wavelengths). CMB-S4 will be so sensitive that even its daily maps will be deep enough to detect mm-wave transient phenomena – either spatial from nearby objects moving across our field, or temporal from distant objects exploding in our field. This is the only science goal that places constraints on the survey cadence, specifically on the lag between repeated observations of the same point on the sky. Given its large field of view, CMB-S4 will be an excellent tool for serendipitous discovery of transients but less useful for follow-up observations. The plan is therefore to issue daily alerts for other teams to follow up with targeted observations.

Survey design

While it would be possible to meet all of the CMB-S4 science goals with a single survey, the result – requiring the sensitivity of the inflation survey across the area of the light-relic survey – would be prohibitively expensive. Instead, the requirements have been decoupled into an ultra-deep, small-area survey to meet the inflation goal and a deep, wide-area survey to meet the light-relic goal, the union of these providing a two-tier “wedding cake” survey for the cluster and gamma-ray-burst goals.

Having set the survey requirements, the task was to identify sites at which these observations can most efficiently be made, taking into account the associated cost, schedule and risk. Water vapour is a significant source of noise at microwave frequencies, so the first requirement on any site is that it be high and dry. A handful of locations meet this requirement, and two of them – the South Pole and the high Chilean Atacama Desert – have both exceptional atmospheric conditions and long-standing US CMB programmes. Their positions on Earth also make them ideally suited to CMB-S4’s two-survey strategy: the polar location enables us to observe a small patch of sky continuously, minimising the time needed to reach the required observation depth, and the more equatorial Chilean location enables observations over a large sky area.

CMB-S4 observatory telescopes

Finally, we know that instrumental systematics will be the limiting factor in resolving the extraordinarily faint large-scale B-mode signal. To date, the experiments that have shown the best control of such systematics have used relatively small-aperture (~0.5 m) telescopes. However, the secondary lensing of the much brighter E-mode signal to B-modes, while enabling us to measure the neutrino-mass sum, also obscures the primordial B-mode signal coming from inflation. We therefore need a detailed measurement of this medium- to small-scale lensing signal in order to be able to remove it at the necessary precision. This requires larger, higher-resolution telescopes. The ultra-deep field is therefore itself composed of coincident low- and high-resolution surveys.

A key feature of CMB-S4 is that all of the technologies are already well-proven by the ongoing Stage 3 experiments. These include CMB-S4’s “founding four” experiments, the Atacama Cosmology Telescope (ACT) and POLARBEAR/Simons Array (PB/SA) in Chile, and BICEP/Keck (BK) and the South Pole Telescope (SPT) at the South Pole, which have merged pairwise into the Simons and South Pole Observatories (SO and SPO). The ACT, PB/SA, BK and SPT are all single-aperture, single-site experiments, while SO and SPO are dual-aperture, single-site observatories. CMB-S4 is therefore the first experiment able to take advantage of both apertures and both sites.

The key difference with CMB-S4 is that it will deploy these technologies on an unprecedented scale. As a result, the primary challenges for CMB-S4 are engineering ones, both in fabricating detector and readout modules in huge numbers and in deploying them in cryostats on telescopes with unprecedented systematics control. The observatory will comprise: 18 small-aperture refractors collectively fielding about 150,000 detectors across eight frequencies for measuring large angular scales; one large-aperture reflector with about 130,000 detectors across seven frequencies for measuring medium-to-small angular scales in the ultra-deep survey from the South Pole; and two large-aperture reflectors collectively fielding about 275,000 detectors across six frequencies for measuring medium-to-small angular scales in the wide-deep survey from Chile (see “Looking up” image). The final configuration maximises the use of available atmospheric windows to control for microwave foregrounds (particularly synchrotron and dust emission at low and high frequencies, respectively), and to meet the frequency-dependent depth and angular-resolution requirements of the surveys. 
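As a rough bookkeeping aid – the numbers below are only the approximate totals quoted above, and the structure and names are ours – the fielded detector counts can be tallied as follows:

```python
# Approximate CMB-S4 instrument configuration, using only the round
# numbers quoted in the text; the dictionary layout is illustrative.
config = {
    "small-aperture refractors (large scales)":          {"telescopes": 18, "detectors": 150_000, "bands": 8},
    "large-aperture reflector (South Pole, ultra-deep)": {"telescopes": 1,  "detectors": 130_000, "bands": 7},
    "large-aperture reflectors (Chile, wide-deep)":      {"telescopes": 2,  "detectors": 275_000, "bands": 6},
}

total = sum(v["detectors"] for v in config.values())
print(f"Total fielded detectors: ~{total:,}")   # ~555,000
```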

CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies

Covering the frequency range 20–280 GHz, the detectors employ dichroic pixels at all but one frequency (to maximise the use of the available focal plane) using superconducting transition-edge sensors, which have become the standard in the field. A major effort is already underway to scale up the production and reduce the fabrication variance of the detectors, taking advantage of the DOE national laboratories and industrial partners. Reading out such large numbers of detectors with limited power is a significant challenge, leading CMB-S4 to adopt the conservative but well-proven time-domain multiplexing approach. The detector and readout systems will be assembled into modules that will be cryogenically cooled to 100 mK to reduce instrument noise. Each large-aperture telescope will carry an 85-tube cryostat with a single wafer per optics tube, while each small-aperture telescope will carry a single optics tube with 12 wafers, with three such telescopes sharing a common mount.
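Combining this cryostat layout with the detector totals above gives a rough sense of the module scale (an order-of-magnitude estimate derived only from the quoted numbers, not an official specification):

```python
# Implied wafer counts from the cryostat layout described above.
sat_wafers = 18 * 12        # 18 small telescopes x 12 wafers each = 216
lat_sp_wafers = 1 * 85      # one 85-tube cryostat, 1 wafer per tube
lat_chile_wafers = 2 * 85   # two Chilean large telescopes = 170 wafers

print(f"SAT:         ~{150_000 / sat_wafers:.0f} detectors per wafer")        # ~694
print(f"LAT (pole):  ~{130_000 / lat_sp_wafers:.0f} detectors per wafer")     # ~1529
print(f"LAT (Chile): ~{275_000 / lat_chile_wafers:.0f} detectors per wafer")  # ~1618
```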

Prototyping of the detector and readout fabrication lines, and the build-up of module assembly and testing capabilities, are expected to begin in earnest this year. At the same time, the telescope designs will be refined and the data-acquisition and data-management subsystems developed. The current schedule sees a staggered commissioning of the telescopes in 2028–2030, with operations running for seven years thereafter.

Shifting paradigms

CMB-S4 represents a paradigm shift for sub-orbital CMB experiments. For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds in fundamental physics, cosmology, astrophysics and astronomy, rather than by its cost cap. CMB-S4 will span the entire range of CMB science in a single experiment, take advantage of the best of all worlds in the design of its observations and instrumentation, and make the results available to the entire CMB community. As an extremely sensitive, two-tiered, multi-wavelength, mm-wave survey, it will also play a key role in multi-messenger astrophysics and transient science. Taken together, these measurements will constitute a giant leap in our study of the history of the universe.

Charm baryons constrain hadronisation

Figure 1

Understanding the mechanisms of hadron formation is one of the most interesting open questions in particle physics. Hadronisation is a non-perturbative process that is not calculable in quantum chromodynamics and is typically described with phenomenological models, such as the Lund string model. Ultrarelativistic nuclear collisions, in which a high-density plasma of deconfined quarks and gluons, the quark–gluon plasma (QGP), is created, provide an ideal setup to test the limits of this description. Under these conditions, hadrons may be formed via the combination of deconfined quarks that are close in phase space. This process can lead, for example, to increased production of baryons with respect to mesons at momenta up to 10 GeV/c. The ALICE and CMS experiments at the LHC, and PHENIX and STAR at RHIC, have indeed observed substantial modifications of the event hadro-chemistry in heavy-ion collisions compared to proton–proton and e+e− collisions. In particular, the total abundances of light and strange hadrons were found to follow, quite remarkably, the “thermal” expectations for a deconfined medium close to equilibrium.

Measurements of heavy-flavour hadron production play a unique role in such studies. Heavy quarks are mostly produced in hard scatterings during the early stages of the collision, well before the QGP is formed, and their thermal production is negligible since their masses are much larger than the typical QGP temperature. Because their production and in-medium propagation are under much better theoretical control than for light quarks, heavy quarks provide unique constraints on the QGP properties and on the nature of the hadronisation mechanisms. Heavy-flavour measurements in heavy-ion collisions also test whether the transverse-momentum (pT)-integrated yields of charm hadrons are consistent with statistical models, in which charm quarks are expected to reach almost complete thermalisation in the QGP despite being initially very far from equilibrium.
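To see why thermal charm production is negligible, one can compare the quark mass to the QGP temperature: the equilibrium yield is Boltzmann-suppressed by roughly exp(–m/T). A back-of-the-envelope estimate (the numerical values below are illustrative, not taken from the analysis):

```python
import math

T = 0.16                                     # typical QGP temperature in GeV (~160 MeV)
masses = {"strange": 0.095, "charm": 1.27}   # approximate quark masses in GeV

# Relative Boltzmann suppression exp(-m/T) of thermal production
for quark, m in masses.items():
    print(f"{quark:8s} exp(-m/T) ~ {math.exp(-m / T):.1e}")
# strange  ~ 5.5e-01  -> readily produced thermally in the QGP
# charm    ~ 3.6e-04  -> thermal production effectively negligible
```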

ALICE has recently taken a significant step towards a quantitative understanding of hadron formation from a QGP

The ALICE experiment has recently taken a significant step towards a quantitative understanding of hadron formation from a QGP by performing the first measurement of the charm baryon-to-meson ratio Λc+/D0 in central (head-on) Pb–Pb collisions at √sNN = 5.02 TeV. By exploiting its unique tracking and particle-identification capabilities, and using machine-learning techniques, ALICE has measured the ratio down to very low pT (below 1 GeV/c), where hadronisation via the combination of quarks is expected to dominate (figure 1, left). The measured Λc+/D0 ratio in central Pb–Pb collisions is found to be larger than in pp collisions at pT of 4–8 GeV/c (figure 1, right). The pT-integrated ratio, on the other hand, is compatible with the pp result within one standard deviation.
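ALICE's published analysis uses its own selection framework and variables, but the general idea behind such machine-learning selections – training a classifier on decay-topology and particle-identification features to separate signal candidates from combinatorial background – can be sketched as follows (the features, toy data and classifier choice are illustrative assumptions, not the collaboration's implementation):

```python
# Schematic candidate-selection classifier; not ALICE code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Toy stand-ins for typical features: decay length, impact parameter, PID score.
signal     = rng.normal(loc=0.5, size=(n, 3))   # signal-like topology
background = rng.normal(loc=0.0, size=(n, 3))   # combinatorial background
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Candidates passing a score threshold are kept for the yield extraction.
scores = clf.predict_proba(X_te)[:, 1]
print(f"fraction selected at score > 0.5: {(scores > 0.5).mean():.2f}")
```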

A comparison with theoretical calculations confirms the discriminating power of this measurement. The experimental data are well described by transport models that include the combination of quarks from the deconfined medium (TAMU and Catania). Given the current uncertainties, a conclusive answer on the agreement with statistical-hadronisation models (SHMc) cannot yet be reached. This motivates future high-precision and more differential measurements with the upgraded ALICE detector during the upcoming LHC Run 3 Pb–Pb runs. Thanks to the increased rate capability of the new readout systems of the time projection chamber and the new inner tracking system, ALICE will increase its acquisition rate in Pb–Pb collisions by up to a factor of about 50 and will benefit from much better tracking resolution (by a factor of 3–6 for low-pT tracks). High-accuracy measurements performed in Runs 3 and 4 will therefore provide significant discriminating power on theoretical calculations and strong constraints on the mechanisms underlying the hadronisation of charm quarks from the QGP.

Precision Z-boson production measurements

Figure 1

The precise determination of the Z-boson parameters at e+e− colliders was crucial for the establishment of the electroweak theory of the Standard Model. Today, the Z boson has become an essential object of experimental study at the LHC. In particular, measurements of the Z boson’s production and decay properties in high-energy proton–proton collisions provide insights into the parton distribution functions (PDFs) of the proton and an implicit test of quantum chromodynamics (QCD).

Recently, using a sample of Z → μ+μ− events, the LHCb collaboration reported the most precise measurement to date of the Z-boson production cross section in the forward region at a centre-of-mass energy of 13 TeV (see figure 1). The collaboration also reported the first measurements of the angular coefficients in Z → μ+μ− decays in the forward region, which encode key information about the QCD dynamics underlying Z-boson production. In addition to improving knowledge of the proton PDFs, these two analyses contribute to the study of spin-momentum correlations in the proton, complementing ATLAS and CMS measurements in the central region.
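For context, the angular coefficients are defined by the standard decomposition of the dilepton angular distribution in the Collins–Soper frame (the form below is the textbook expansion, quoted for orientation rather than taken from the LHCb paper):

$$\frac{d\sigma}{d\Omega} \propto (1+\cos^2\theta) + \frac{A_0}{2}(1-3\cos^2\theta) + A_1\sin 2\theta\cos\phi + \frac{A_2}{2}\sin^2\theta\cos 2\phi$$
$$\;+\; A_3\sin\theta\cos\phi + A_4\cos\theta + A_5\sin^2\theta\sin 2\phi + A_6\sin 2\theta\sin\phi + A_7\sin\theta\sin\phi,$$

where θ and φ are the polar and azimuthal angles of the muon in the Collins–Soper frame. Each coefficient isolates a different spin-momentum correlation in the production process.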

In addition to the up and down valence quarks, a proton comprises a sea of quark–antiquark pairs primarily produced via gluon splitting. Given the similar masses of the up and down quarks, one would expect the nucleon sea to be flavour-symmetric. However, in the early 1990s, the New Muon Collaboration at CERN found that this symmetry is violated. Later, the ratio of down antiquarks to up antiquarks in the proton was directly measured by the NA51 experiment at CERN and the NuSea/E866 experiment at Fermilab, revealing a significant asymmetry in the sea-quark PDFs. Recently, the SeaQuest/E906 experiment at Fermilab reported a new result on this ratio, showing a different trend at larger Bjorken-x (x > 0.2) compared to the previous results and raising the tension with the NuSea measurement.
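These fixed-target extractions compare Drell–Yan production on hydrogen and deuterium targets. At leading order, at large xF, and neglecting nuclear corrections – a textbook-level simplification, not the experiments' full analysis – the cross-section ratio reduces to

$$\frac{\sigma_{\rm DY}^{pd}}{2\,\sigma_{\rm DY}^{pp}} \approx \frac{1}{2}\left[1 + \frac{\bar d(x)}{\bar u(x)}\right],$$

so any departure of the measured ratio from 1/2 directly signals a flavour asymmetry of the light-quark sea.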

With a detector instrumented in the forward region, LHCb is ideally placed to study decays of highly boosted Z bosons produced by interactions between one parton at large x and another at small x. Considering that both the NuSea and SeaQuest results have large contributions from nuclear effects, the current LHCb measurement of the Z production cross section, based on a data sample of 5.1 fb−1, provides important complementary constraints in the large-x region.

The measurement of the angular coefficient A2 in Z → μ+μ− decays is sensitive to the transverse-momentum-dependent (TMD) PDFs, as it is proportional to a convolution of the so-called Boer–Mulders functions of the two initial partons. A measurement of A2 can thus provide stringent constraints on the non-perturbative partonic spin-momentum correlations within unpolarised protons. By comparing the measured A2 in different dimuon mass ranges, the LHCb measurement provides an important input for the determination of the proton TMD PDFs, which are crucial to properly describe the production of electroweak bosons at the LHC. Together with the production cross section, these results from LHCb reinforce the importance of a forward detector to complement other measurements at the LHC.
