Pentaquarks, bound states of five quarks predicted in the first formulation of the quark model in 1964, have had a troubled history. Following disputed claims of the discovery of light-flavour species over 20 years ago, pentaquarks with hidden charm are now well-established members of the hadronic spectrum. The breakthrough was achieved by the LHCb experiment in 2015 with the observation of Pc+ states in the J/ψ p system.
The Pc+ quark content (uudcc̄) implies that decays to two open-charm hadrons, such as Λc+D0 or Λc+D*0, are possible. The rates of such decays are important for understanding the nature of the Pc+ states, as different models predict rates that differ by orders of magnitude. Distinguishing between the proposed mechanisms by which pentaquarks, and excited hadrons in general, are produced and bound allows a better understanding of the dynamics of the strong interaction in the non-perturbative regime.
A new analysis by LHCb of the open-charm hadrons in Λb decays was presented at the International Conference on Meson-Nucleon Physics and the Structure of the Nucleon, held in Mainz in October. It concerns the first observation and measurement of the branching fractions of Λb0→ Λc+D(*)0 K– and Λb0→ Λc+ Ds*– decays using proton–proton collision data collected during LHC Run 2.
All branching fractions are measured relative to the known Λb0→ Λc+Ds– decay mode, which is reconstructed with the same set of six final-state hadrons: p K–π+ and K+K–π–. Many systematic uncertainties in the measured ratios therefore cancel out, making the precision on the relative branching fraction of Λb0→ Λc+D0 K– statistically limited. For Λb0→ Λc+D*0 K– and Λb0→ Λc+Ds*– the resulting branching-fraction measurements are systematically limited. This is because either a photon or a neutral pion is not reconstructed, so their shape in the invariant-mass spectrum of the reconstructed particles is more difficult to describe and more affected by backgrounds (see figure 1, where the components with a missing photon for which a branching fraction is calculated are shown in orange and those with a missing neutral pion in green).
The partially reconstructed Λb0→ Λc+ Ds*– decay cannot be used directly to search for pentaquarks, but it is an important input to model calculations. In addition, as a two-body decay, it is a powerful test of factorisation assumptions in heavy-quark effective theory.
In the Λb0→ Λc+D(*)0 K– decay, the production process of the Pc+ pentaquarks is the same as in the discovery channel, Λb0→ J/ψ p K–. A comparison between the measured branching fractions and observed signal yields can thus be used to estimate the expected sensitivity for observing Pc+ signals in the open-charm channels. In particular, the rate of a Λb0 decay to Λc+D0 K– is about six times greater than to J/ψ p K–; however, more than 60 times as much data would be needed to match the currently available Λb0→ J/ψ p K– signal yield.
A factor of about 24 in this calculation comes from the ratio of the branching fractions of the J/ψ and the open-charm hadrons, given their reconstructed decay modes. The rest is from reconstruction and selection inefficiencies, which favour the four-prong μ+μ– p K– final state over the fully hadronic six-body one. With the upgraded Run 3 detector and its triggerless detector readout, a large part of the inefficiency for fully hadronic final states is recoverable, making pentaquark searches in double open-charm final states more favourable than in Run 2.
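These factors can be combined in a short back-of-the-envelope check. The sketch below (Python, using only the rounded numbers quoted above) infers the implied relative reconstruction and selection efficiency of the six-body final state; the result is illustrative, not a number from the LHCb analysis.

```python
# Back-of-envelope comparison of Lb0 -> Lc+ D0 K- with the discovery
# channel Lb0 -> J/psi p K-, using the rounded factors quoted in the text.
rate_ratio = 6.0    # decay-rate ratio, Lc+ D0 K- vs J/psi p K-
bf_penalty = 24.0   # branching-fraction factor favouring the J/psi mode
data_factor = 60.0  # extra data needed to match the J/psi p K- yield

# Relative yield per unit luminosity: rate_ratio / bf_penalty * eff_ratio.
# Needing data_factor times more data means this equals 1 / data_factor.
eff_ratio = bf_penalty / (rate_ratio * data_factor)
print(f"six-body vs four-prong efficiency ~ 1/{1/eff_ratio:.0f}")  # ~1/15
```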
The 13th annual “Implications of LHCb measurements and future prospects” workshop, held at CERN on 25–27 October 2023, drew substantial interest with 231 participants. This collaborative event between LHCb and the theoretical community showcased the mutual enthusiasm for LHCb’s physics advances. The workshop featured five streams highlighting the latest experimental and theoretical developments in mixing and CP violation, heavy ions and fixed-target results, flavour-changing charged currents, QCD spectroscopy and exotics, and flavour-changing neutral currents.
The opening talk by Monica Pepe Altarelli underscored LHCb’s diverse physics programme, solidifying its role as a highly versatile forward detector. While celebrating successes, her talk candidly addressed setbacks, notably the new results in tests of lepton-flavour universality. LHCb detector and computing upgrades for Run 3 include a fully software-based trigger using graphics processing units. The collaboration is also working towards an Upgrade II programme for Long Shutdown 4 (2033–2034) that would position LHCb as a potentially unique global flavour facility.
On mixing and CP violation, the October workshop unveiled intriguing insights in both the beauty and charm sectors. In the beauty sector, notable highlights encompass measurements of the mixing parameter ΔΓs and of CP-violating phases such as ϕs,d, ϕssss and γ. CP asymmetries were further scrutinised in B → DD decays, accounting for SU(3) breaking and re-scattering effects. In the charm sector, CP asymmetries estimated by taking final-state interactions into account were found to be small compared to the values measured in D0→ π–π+ and D0→ K–K+ decays. Novel measurements of CP violation in three-body charm-hadron decays were also presented.
Unique capabilities
On the theoretical front, discussions delved into the current status of bottom-baryon lifetimes. Recent lattice predictions on the εK parameter were also showcased, offering refined constraints on the unitarity triangle. The LHCb experiment’s unique capabilities were discussed in the heavy ions and fixed-target session. Operating in fixed-target mode, LHCb collected data pertaining to proton–ion and lead–ion interactions during LHC Run 2 using the SMOG system. Key highlights included measurements impacting theoretical models of charm hadronisation, global analyses of nuclear parton density functions, and the identification of helium nuclei and deuterons. The first Run 3 data with the SMOG2 upgrade showed promising results in proton–argon and proton–hydrogen collisions, opening a path to measurements with implications for heavy-ion physics and astrophysics.
The session on flavour-changing charged currents unveiled a recent measurement of the longitudinal polarisation of D*− mesons in B0→ D*−τ–ντ decays, aligning with Standard Model (SM) expectations. Discussions delved into lepton-flavour-universality tests that showed a 3.3σ tension with predictions in the combined R(D(*)) measurement. Noteworthy were new lattice-QCD predictions for charged-current decays, especially R(D(*)), showcasing disparities in the SM prediction across different lattice groups. Updates on the CKM matrix elements |Vub| and |Vcb| led to a reduced tension between inclusive and exclusive determinations. The session also discussed the impact of high-energy constraints on Wilson coefficients for charged-current decays, and Bayesian inference of form-factor parameters regulated by unitarity and analyticity.

The QCD spectroscopy and exotics session also featured important findings, including the discovery of novel baryon states, notably Ξb(6087)0 and Ξb(6095)0. Pentaquark exploration involved diverse charm–hadron combinations, alongside precision measurements of the Ωc0 mass and first observations of b-hadron decays with potential exotic-state contributions. Charmonia-associated production provided fresh insights for testing QCD predictions, and an approach based on effective field theory (EFT) interpreting pentaquarks as hadronic molecules was presented. A new model-independent Born–Oppenheimer EFT framework for the interpretation of doubly heavy tetraquarks, utilising lattice-QCD predictions, was introduced. The decays of charm tetraquarks and the interpretation of newly discovered hadron states at the LHC were also discussed.
During the flavour-changing neutral-current session a new analysis of B0→ K*0μ+μ– decays was presented, showing consistency with SM expectations. Stringent limits on branching fractions of rare charm decays and precise differential branching fraction measurements of b-baryon decays were also highlighted. Challenges in SM predictions for b → sℓℓ and rare charm decays were discussed, underscoring the imperative for a deeper comprehension of underlying hadronic processes, particularly leveraging LHCb data. Global analyses of b → dℓℓ and b → sℓℓ decays were presented, alongside future prospects for these decays in Run 3 and beyond. The session also explored strategies to enhance sensitivity to new physics in B±→ π±μ+μ– decays.
The keynote talk, delivered by Svjetlana Fajfer, offered a comprehensive summary and highlighted existing anomalies that demand further consideration. Tackling these challenges necessitates precise measurements at both low and high energies, with the collaborative efforts of LHCb, Belle II, CMS and ATLAS. Additionally, advancements in lattice QCD and other novel theoretical approaches are needed for precise theoretical predictions in tandem with experimental efforts.
When lead ions collide head-on at the LHC they deposit most of their kinetic energy in the collision zone, forming new matter at extremely high temperatures and energy densities. The hot and dense zone quickly expands and cools down, leading to the production of approximately equal numbers of particles and antiparticles at mid-rapidity. However, in reality the balance between matter and antimatter can be slightly distorted.
The collision starts with matter only, i.e. protons and neutrons from the incoming beams. As the incoming lead nuclei penetrate each other, most of their quantum numbers are carried away by particles travelling close to the beam direction. Due to strong interactions among the quarks and gluons, however, a fraction of the quantum numbers of the colliding ions is transported to mid-rapidity rather than following the beam remnants. This leads to an imbalance inherited from the initial state, which contains more baryons than antibaryons.
This matter–antimatter imbalance can be quantified by determining two global system properties: the chemical potentials associated with the electric charge and baryon number (denoted μQ and μB, respectively). In a thermodynamic description, the chemical potentials determine the net electric-charge and baryon-number densities of the system. Thus, μB measures the imbalance between matter and antimatter, with a vanishing value indicating a perfect balance.
In a new, high-precision measurement, the ALICE collaboration reports the most precise characterisation so far of the imbalance between matter and antimatter in collisions between lead nuclei at a centre-of-mass energy per nucleon pair of 5.02 TeV. The study was carried out by measuring the antiparticle-to-particle yield ratios of light-flavour hadrons, which make up the bulk of the particles produced in heavy-ion collisions. The measurement, using the ALICE central-barrel detectors, included identified charged pions, protons and multi-strange Ω– baryons, in addition to light nuclei: 3He, the triton and the hypertriton (a bound state of a proton, a neutron and a Λ baryon). The larger baryon content of these light nuclei makes them more sensitive to baryon-asymmetry effects.
The medium created in lead–lead collisions at the LHC is nearly electrically neutral and baryon-number-free at mid-rapidity
The analysis reveals that in head-on lead–ion collisions, for every 1000 produced protons, approximately 986 ± 6 antiprotons are produced. The chemical potentials extracted from the experimental data are μQ = -0.18 ± 0.90 MeV and μB = 0.71 ± 0.45 MeV. These values are compatible with zero, showing that the medium created in lead–lead collisions at the LHC is nearly electrically neutral and baryon-number-free at mid-rapidity. This observation holds for the full centrality range, from collisions where the incoming ions peripherally interact with each other up to the most violent head-on processes, indicating that quantum-number transport at the LHC is independent of the size of the system formed.
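The connection between the measured antiproton-to-proton ratio and μB can be illustrated with a one-line thermal-model estimate. A minimal sketch, assuming a typical chemical freeze-out temperature of about 155 MeV and neglecting feed-down corrections and the small μQ term (the full ALICE fit uses all measured species):

```python
import math

# Statistical-hadronisation relation: pbar/p ~ exp(-2*muB/T) at freeze-out.
T_ch = 155.0            # MeV, assumed freeze-out temperature
ratio = 986.0 / 1000.0  # antiproton-to-proton yield ratio quoted above

muB = -0.5 * T_ch * math.log(ratio)
print(f"muB ~ {muB:.1f} MeV")  # ~1.1 MeV, in the ballpark of 0.71 +/- 0.45 MeV
```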
The values of μB are shown in figure 1 as a function of the centre-of-mass energy of the colliding nuclei, along with lower-energy measurements at other facilities. The recent ALICE result is indicated by the red solid circle, along with a phenomenological parametrisation of μB. The decreasing trend of μB observed as a function of increasing collision energy indicates that different net-baryon-number density conditions can be explored by varying the beam energy, reaching almost vanishing net-baryon content at the LHC. The inset gives the μB values extracted at two LHC energies. It shows that the new ALICE result is almost one order of magnitude more precise than the previous estimate (violet), thanks to a more refined study of systematic uncertainties.
The present study characterises the vanishing baryon asymmetry at the LHC with improved precision, placing stringent limits on models describing baryon-number transport. Using the data samples collected in LHC Run 3, these studies will be extended to the strangeness sector, enabling a full characterisation of quantum-number transport at the LHC.
Climate models are missing an important source of aerosol particles in polar and marine regions, according to new results from the CLOUD experiment at CERN. Atmospheric aerosol particles exert a strong net cooling effect on the climate by making clouds brighter and more extensive, thereby reflecting more sunlight back out to space. However, how aerosol particles form in the atmosphere remains poorly understood, especially in polar and marine regions.
The CLOUD experiment, located in CERN’s East Area, maintains ultra-low contaminant levels and precisely controls all experimental parameters affecting aerosol formation and growth under realistic atmospheric conditions. During the past 15 years, the collaboration has uncovered new processes through which aerosol particles form from mixtures of vapours and grow to sizes where they can seed cloud droplets. A beam from the Proton Synchrotron simulates, in the CLOUD chamber, the ionisation from galactic cosmic rays at any altitude in the troposphere.
Globally, the main vapour driving particle formation is thought to be sulphuric acid, stabilised by ammonia. However, ammonia is frequently lacking in polar and marine regions, and models generally underpredict the observed particle-formation rates. The latest CLOUD study challenges this view by showing that iodine oxoacids can take over the role of ammonia and act synergistically with sulphuric acid to greatly enhance particle-formation rates.
“Our results show that climate models need to include iodine oxoacids along with sulphuric acid and other vapours,” says CLOUD spokesperson Jasper Kirkby. “This is particularly important in polar regions, which are highly sensitive to small changes in aerosol particles and clouds. Here, increased aerosol and clouds actually have a warming effect by absorbing infrared radiation otherwise lost to space, and then re-radiating it back down to the surface.”
The new findings build on earlier CLOUD studies which showed that iodine oxoacids rapidly form particles even in the complete absence of sulphuric acid. At iodine oxoacid concentrations typical of marine and polar regions (between 0.1 and 5 times those of sulphuric acid), the CLOUD data show that the formation rates of sulphuric-acid particles are between 10 and 10,000 times faster than previous estimates.
“Global marine iodine emissions have tripled in the past 70 years due to thinning sea ice and rising ozone concentrations, and this trend is likely to continue,” adds Kirkby. “The resultant increase of marine aerosol particles and clouds, suggested by our findings, will have created a positive feedback that accelerates the loss of sea ice in polar regions, while simultaneously introducing a cooling effect at lower latitudes. The next generation of climate models will need to take iodine vapours and their synergy with sulphuric acid into account.”
Consisting only of an electron and a positron, positronium (Ps) offers unique exploration of a purely leptonic matter–antimatter system. Traditionally, experiments have relied on formation processes that produce clouds of Ps with a large velocity distribution, limiting the precision of spectroscopic studies due to the large Doppler broadening of the Ps transition lines. Now, after almost 10 years of effort, the AEgIS collaboration at CERN’s Antiproton Decelerator has experimentally demonstrated laser-cooling of Ps for the first time, opening new possibilities for antimatter research.
“This is a breakthrough for the antimatter community that has been awaited for almost 30 years, and which has both a broad physics and technological impact,” says AEgIS physics coordinator Benjamin Rienacker of the University of Liverpool. “Precise Ps spectroscopy experiments could reach the sensitivity to probe the gravitational interaction in a two-body system (with 50% on-shell antimatter mass and made of point-like particles) in a cleaner way than with antihydrogen. Cold ensembles of Ps could also enable Bose–Einstein condensation of an antimatter compound system that provides a path to a coherent gamma-ray source, while allowing precise measurements of the positron mass and fine structure constant, among other applications.”
Laser cooling, which was applied to antihydrogen atoms for the first time by the ALPHA experiment in 2021 (CERN Courier May/June 2021 p9), slows atoms gradually over many cycles of photon absorption and emission. This is normally done using a narrowband laser, which emits light within a small frequency range. By contrast, the AEgIS team uses a pulsed alexandrite-based laser with high intensity, large bandwidth and long pulse duration to meet the cooling requirements. The system enabled the AEgIS team to decrease the temperature of the Ps atoms from 380 K to 170 K, corresponding to a decrease in the transverse component of the Ps velocity from 54 to 37 km s–1.
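The quoted velocities are consistent with simple one-dimensional thermal motion, v = sqrt(kB·T/m), with the positronium mass equal to twice the electron mass. A quick check (Python; the choice of this simple relation is ours, as an illustration):

```python
import math

kB = 1.380649e-23         # Boltzmann constant, J/K
m_ps = 2 * 9.1093837e-31  # positronium mass = 2 electron masses, kg

for T in (380.0, 170.0):
    v = math.sqrt(kB * T / m_ps) / 1e3  # one velocity component, km/s
    print(f"T = {T:.0f} K  ->  v ~ {v:.0f} km/s")
# prints ~54 km/s and ~36 km/s, matching the reported 54 -> 37 km/s
```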
The feat presents a major technical challenge since, unlike antihydrogen, Ps is unstable and annihilates with a lifetime of only 142 ns. The use of a large-bandwidth laser has the advantage of cooling a large fraction of the Ps cloud while increasing the effective lifetime, leaving a larger sample of Ps available for further experimentation after cooling.
“Our results can be further improved, starting from a cryogenic Ps source, which we also know how to build in AEgIS, to reach our dream temperature of 10 K or lower,” says AEgIS spokesperson Ruggero Caravita of INFN-TIFPA. “Other ideas are to add a second cooling stage with a narrower spectral bandwidth set to a detuning level closer to resonance, or by coherent laser cooling.”
Supernova remnants (SNRs) are excellent candidates for the production of galactic cosmic rays. Still, as we approach the “knee” region in the cosmic-ray spectrum (in the few-PeV regime), other astrophysical sources may contribute. A recent study by the High Energy Stereoscopic System (H.E.S.S.) observatory in Namibia sheds light on one such source, called SS 433, a microquasar located nearly 18,000 light-years away. It is a binary system formed by a compact object, such as a neutron star or a stellar-mass black hole, and a companion star, where the former is continuously accreting matter from the latter and emitting relativistic jets perpendicular to the accretion plane.
The jets of SS 433 are oriented perpendicular to our line of sight and constantly distort the SNR shell (called W50, or the Manatee Nebula) that was created during the formation of the compact object. Radio observations reveal the precessing motion of the jets up to 0.3 light-years from the black hole, beyond which they fade from view. At approximately 81 light-years from the black hole, they reappear as collimated large-scale structures in the X- and gamma-ray bands, termed “outer jets”. These jets are a fascinating probe into particle-acceleration sites, as interactions between jets and their environments can lead to the acceleration of particles that produce gamma rays.
Excellent resolution
The H.E.S.S. collaboration collected and analysed more than 200 hours of data from SS 433 to investigate the acceleration and propagation of electrons in its outer jets. As an imaging atmospheric-Cherenkov telescope array, H.E.S.S. offers excellent energy and angular resolution. The gamma-ray image showed two emission regions along the outer jets, which overlap with previously observed X-ray sources. To study the energy dependence of the emission, the data were split into three energy ranges; the comparison indicates that the highest-energy emission is concentrated closer to the central source, i.e. at the base of the outer jets. A proposed explanation is that electrons are accelerated to TeV energies, generate high-energy gamma rays via inverse Compton scattering, and subsequently lose energy as they propagate outwards, generating the observed X-rays.
Monte Carlo simulations modelling the morphology of the gamma-ray emission revealed a significant deceleration of the outer jets at their bases, indicating a possible shock region. A lower limit on the cut-off energy of the electrons injected into this region implies acceleration to energies above 200 TeV at 68% confidence level. Protons and heavier nuclei can also be accelerated in these regions and reach much higher energies, as they suffer weaker energy losses and carry more total energy than electrons.
These jets are a fascinating probe into particle-acceleration sites
SS 433 is, unfortunately, ruled out as a contributor to the cosmic-ray flux observed at Earth. Taking the age of the system to be 30,000 years and proton energies of 1 PeV, the distance traversed by a diffusing cosmic-ray particle is much smaller than even the lowest estimates of the distance to SS 433. Closing that gap would require a significantly larger galactic diffusion coefficient, or an age some 40 times greater, both incompatible with other measurements and with the highest estimates of the nebula’s age. While proton acceleration does occur in the outer jets of SS 433, these particles do not contribute to the cosmic-ray flux measured on Earth.
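The scale of this argument can be illustrated with a standard diffusive-propagation estimate. The sketch below uses generic, assumed values for the galactic diffusion coefficient and its energy scaling (they are not taken from the H.E.S.S. paper), so it reproduces only the order of magnitude:

```python
import math

D0 = 3e28          # cm^2/s at 1 GeV; assumed, commonly used normalisation
delta = 0.5        # assumed rigidity scaling, D ~ E^delta
E_GeV = 1e6        # 1 PeV proton
t = 3e4 * 3.156e7  # 30,000 years in seconds

D = D0 * E_GeV ** delta
r_ly = math.sqrt(6 * D * t) / 9.461e17  # 3D diffusion: <r^2> = 6 D t
print(f"diffusion distance ~ {r_ly:,.0f} light-years")
# of order 10^4 light-years, i.e. short of the ~18,000 light-years to SS 433
```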
This study, by revealing the energy-dependent morphology of a galactic microquasar and constraining jet velocities at large distances, firmly establishes shocks in microquasar jets as potent particle-acceleration sites and offers valuable insights for future modelling of these astrophysical structures. It opens up exciting possibilities in the search for galactic cosmic-ray sources at PeV energies and extragalactic ones at EeV energies.
Results from the LHC so far have transformed the particle-physics landscape. The discovery of the Higgs boson with a mass of 125 GeV – in agreement with the prediction from earlier precision measurements at LEP and other colliders – has completed the long-predicted matrix of particles and interactions of the Standard Model (SM) and cleared the decks for a new phase of exploration. On the other hand, the lack of evidence for an anticipated supporting cast of particles beyond the SM (BSM) gives no clear guidance as to what form this exploration may take. For the first time since the Fermi theory almost a century ago, particle physicists are voyaging into completely uncharted territory, where our only compass is the certitude that the SM in isolation cannot account for all observations. This absence of theoretical guidance calls for a powerful experimental programme to push the frontiers of the unknown as far as possible.
The absence of LHC signals for new phenomena in the TeV range requires physicists to think differently about the open questions in fundamental physics. These include the abundance of matter over antimatter, the nature of dark matter, the quark and lepton flavour puzzle in general, and the non-zero value of neutrino masses in particular. Solutions could lie at even higher energies, at the price of either an unnatural value of the electroweak scale or an ingenious but still elusive structure. Radically new physics scenarios have been devised, often involving light and very weakly coupled structures. Neither the mass scale (from meV to ZeV) of this new physics nor the strength of its couplings (from 1 to 10⁻¹² or less) to the SM is known, calling for a versatile exploration tool.
By providing considerable advances in sensitivity, precision and, eventually, energy far above the TeV scale, the integrated Future Circular Collider (FCC) programme is the perfect vehicle with which to navigate this new landscape. Its first stage FCC-ee, an e+e– collider operating at centre-of-mass energies ranging from below the Z pole (90 GeV) to beyond the top-quark pair-production threshold (365 GeV), would map the properties of the Higgs and electroweak gauge bosons and the top quark with precisions that are orders of magnitude better than today, acquiring sensitivity to the processes that led to the formation of the Brout–Englert–Higgs field a fraction of a nanosecond after the Big Bang. A comprehensive campaign of precision electroweak, QCD, flavour, tau, Higgs and top-quark measurements sensitive to tiny deviations from the predicted SM behaviour would probe energy scales far beyond the direct kinematic reach, while a subsequent pp collider (FCC-hh) would improve – by about an order of magnitude – the direct discovery reach for new particles. Both machines are strongly motivated in their own rights. Together, they offer the furthest physics reach of all proposed future colliders, and put the fundamental scalar sector of the universe centre-stage.
A scalar odyssey
The power of FCC-ee to probe the Higgs boson and other SM particles at much higher resolution would allow physicists to peer further into the cloud of quantum fluctuations surrounding them. The combination of results from previous lepton and hadron colliders at CERN and elsewhere has shown that electroweak symmetry breaking is consistent with its SM parameterisation, but its origin (and the origin of the Higgs boson itself) demands a deeper explanation. The FCC is uniquely placed to address this mystery via a combination of per-mil-level Higgs-boson and parts-per-million gauge-boson measurements, along with direct high-energy exploration, to comprehensively probe symmetry-based explanations for an electroweak hierarchy. In particular, measurements of the Higgs boson’s self-coupling at the FCC would test whether the electroweak phase transition was first- or second-order, revealing whether it could have played a role in setting the out-of-equilibrium condition necessary for creating the matter–antimatter asymmetry.
While the Brout–Englert–Higgs mechanism nicely explains the pattern of gauge-boson masses, the peculiar structure of quark and lepton masses (as well as the quark mixing angles) is ad hoc within the SM and could be the low-energy imprint of some new dynamics. The FCC will probe such potential new symmetries and forces, in particular via detailed studies of b and τ decays and of b → τ transitions, and significantly extend knowledge of flavour physics. A deeper understanding of approximate conservation laws such as baryon- and lepton-number conservation (or the absence thereof in the case of Majorana neutrinos) would test the limits of lepton-flavour universality and violation, for example, and could reveal new selection rules governing the fundamental laws. Measuring the first- and second-generation Yukawa couplings will also be crucial to complete our understanding, with a potential FCC-ee run at the s-channel Higgs resonance offering the best sensitivity to the electron Yukawa coupling. Stepping back, the FCC would sharpen understanding of the SM as a low-energy effective field theory approximation of a deeper, richer theory by extending the reach of direct and indirect exploration by about one order of magnitude.
The unprecedented statistics from FCC-ee also make it uniquely sensitive to exploring weakly coupled dark sectors and other candidates for new physics beyond the SM (such as heavy axions, dark photons and long-lived particles). Decades of searches across different experiments have pushed the mass of the initially favoured dark-matter candidate (weakly interacting massive particles, WIMPs) progressively beyond the reach of the highest-energy e+e– colliders. As a consequence, hidden sectors consisting of new particles that interact almost imperceptibly with the SM are rapidly gaining popularity as an alternative that could hold the answer not only to this problem but to a variety of others, such as the origin of neutrino masses. If dark matter is a doublet or a triplet WIMP, FCC-hh would cover the entire parameter space up to the upper mass limit for a thermal relic. The FCC could also host a range of complementary detector facilities to extend its capabilities for neutrino physics, long-lived particles and forward physics.
For the first time since the Fermi theory almost a century ago, particle physicists are voyaging into completely uncharted territory
Completing this brief, high-level summary of the FCC physics reach are the origins of exotic astrophysical and cosmological signals, such as stochastic gravitational waves from cosmological phase transitions or astrophysical signatures of high-energy gamma rays. These phenomena, which include a modified electroweak phase transition, confining new physics in a dark sector, or annihilating TeV-scale WIMPs, could arise due to new physics which is directly accessible only to an energy-frontier facility.
Precision rules
Back in 2011, the original incarnation of a circular e+e– collider to follow the LHC (dubbed LEP3) was to create a high-luminosity Higgs factory operating at 240 GeV in the LEP/LHC tunnel, providing similar precision to a linear collider running at the same centre-of-mass energy for a much smaller price tag. Choosing to build a larger 80–100 km version not only allows the tunnel and infrastructure to be reused for a 100 TeV hadron collider, but extends the FCC-ee scientific reach significantly beyond the study of the Higgs boson alone. The unparalleled control of the centre-of-mass energy via resonant depolarisation and the unrivalled luminosity of an FCC-ee with four interaction points would produce around 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs (offering ppm precision on the Z and W masses and widths), 2 × 10⁶ Higgs bosons and 2 × 10⁶ top-quark pairs (impossible to produce with e+e– collisions in the LEP/LHC tunnel) in as little as 16 years.
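The statistical power of such a sample is easy to illustrate. A rough back-of-the-envelope sketch (ours, not an official FCC projection): the statistical limit on the Z mass from a lineshape scan scales roughly as ΓZ/√N, which is why beam-energy calibration systematics, rather than statistics, set the ppm-level precision:

```python
import math

Gamma_Z = 2.4955e9  # Z width in eV
m_Z = 91.1876e9     # Z mass in eV
N_Z = 6e12          # Z bosons expected at FCC-ee

sigma_stat = Gamma_Z / math.sqrt(N_Z)
print(f"statistical limit ~ {sigma_stat / 1e3:.0f} keV, "
      f"or {sigma_stat / m_Z * 1e9:.0f} ppb of mZ")
# ~1 keV (~11 ppb): far below ppm, so systematics dominate the final precision
```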
From the Fermi interaction to the discovery of the W and Z, and from electroweak measurements to the discovery of the top quark and the Higgs boson, greater precision has operated as a route to discoveries. Any deviation from the SM predictions, interpreted as the manifestation of new contact interactions, will point to a new energy scale that will be explored directly in a later stage. One of the findings of the FCC feasibility study is the richness of the FCC-ee Z-pole run, which promises comprehensive measurements of the Z lineshape and many electroweak observables with a 50-fold increase in precision, as well as direct and uniquely precise determinations of the electromagnetic and strong coupling constants. The comparison between these data and commensurately precise SM predictions would severely constrain the existence of new physics via virtual loops or mixing, corresponding to a factor-of-seven increase in energy scale – a jump similar to that from the LHC to FCC-hh. The Z-pole run also enables otherwise unreachable flavour (b, τ) physics, studies of QCD and hadronisation, searches for rare or forbidden decays, and exploration of the dark sector.
After the Z-pole run, the W boson provides a further precision tool at FCC-ee. Its mass is among the most precisely measured parameters that can also be calculated within the SM, and is thus of utmost importance. In the planned WW-threshold run, current knowledge can be improved by more than an order of magnitude, testing the SM as well as a plethora of new-physics models at a higher quantum level. Together, the very-high-luminosity Z and W runs will determine the gauge-boson sector with the sharpest precision ever.
Going to its highest energy, FCC-ee would explore physics associated with the heaviest known particle, the top quark, whose mass plays a fundamental role in the prediction of SM processes and for the cosmological fate of the vacuum. An improvement in precision by more than an order of magnitude will go hand in hand with a significant improvement in the strong coupling constant, and is crucial for precision exploration beyond the SM.
High-energy synergies
A later FCC-hh stage would complement and substantially extend the FCC-ee physics reach in nearly all areas. Compared to the LHC, it would increase the energy for direct exploration by a factor of seven, with the potential to observe new particles with masses up to 40 TeV (see “Direct exploration” figure). The day FCC-hh directly finds a signal for beyond-SM physics, the precision measurements from FCC-ee will be essential to pinpoint its microscopic origin. Indirectly, FCC-hh will be sensitive to energies of around 100 TeV, for example in the tails of Drell–Yan distributions. The large production of SM particles, including the Higgs boson, at large transverse momentum allows measurements to be performed in kinematic regions with optimal signal-to-background ratio and reduced experimental systematic uncertainties, testing the existence of effective contact interactions in ways that are complementary to what is accessible at lepton colliders. Dedicated FCC-hh experiments, for instance with forward detectors, would enrich further the new-physics opportunities and hunt for long-lived and millicharged particles.
Further increasing the synergies between FCC-ee and FCC-hh is the importance of operating four detectors (instead of two, as in the conceptual design study), which has led to an optimised ring layout with a new four-fold periodicity. With four interaction points, FCC-ee provides a net gain in integrated luminosity for a given physics outcome. It also allows for a range of detector solutions to cover all physics opportunities, strengthens the robustness of systematic-uncertainty estimates and discovery claims, and brings within reach several key physics targets that are tantalisingly close, but missed, with only two detectors. The latter include the first 5σ observation of the Higgs-boson self-coupling, and access to the Higgs-boson coupling to electrons – one of FCC-ee’s toughest physics challenges.
No physics case for FCC would be complete without a thorough assessment of the corresponding detector challenges. A key deliverable of the feasibility study is a complete set of specifications ensuring that calorimeters, tracking and vertex detectors, muon detectors, luminometers and particle-identification devices meet the physics requirements. In the context of a Higgs factory operating at the ZH production threshold and above, these requirements have already been studied extensively for proposed linear colliders. However, the different experimental environment and the huge statistics of FCC-ee demand that they are revisited. The exquisite statistical uncertainties anticipated on key electroweak measurements at the Z peak and at the WW threshold call for a superb control of the systematic uncertainties, which will put considerable demands on the acceptance, construction quality and stability of the detectors. In addition, the specific discovery potential for very weakly coupled particles must be kept in mind.
The software and computing demands of the FCC are an integral element of the feasibility study. From the outset, the driving consideration has been to develop a single software “ecosystem” adaptable to any future collider and usable by any future experiment, based on the best software available. Some tools, such as flavour tagging, significantly exceed the performance of algorithms previously used for linear-collider studies, but much work is still needed to bring the software to the level required by FCC-ee. This includes more accurate simulations of beam-related quantities, the machine–detector interface and the detectors themselves. In addition, various reconstruction and analysis tools for use by all collaborators need to be developed and implemented, reaping the benefits of the LHC experience and past linear-collider studies, and the computing resources needed for regular simulated-data production have to be evaluated.
Powerful plan
The alignment of stars – from the initial concept in 2011/2012 of a 100 km-class electron–positron collider in the same tunnel as a future 100 TeV proton–proton collider, to the 2020 update of the European strategy for particle physics endorsing the FCC feasibility study as a top priority for CERN and its international partners – provides the global high-energy physics community with its most powerful exploration tool. FCC-ee offers ideal conditions (luminosity, centre-of-mass energy calibration, multiple experiments and possibly monochromatisation) for the study of the four heaviest particles of the SM, with a flurry of opportunities for precision measurements, searches for rare or forbidden processes, and the possible discovery of feebly coupled particles. It is also the perfect springboard for a 100 TeV hadron collider, for which it provides a large part of the infrastructure. Strongly motivated in their own rights, together these two machines offer a uniquely powerful long-term plan for 21st-century particle physics.
The CMS collaboration has reported the first observation of γγ → ττ in pp collisions. The results set a new benchmark for the tau lepton’s magnetic moment, surpassing previous constraints and paving the way for studies probing new physics.
For the tau lepton’s less massive cousins, measurements of magnetic moments offer exceptional sensitivity to beyond-the-Standard-Model (BSM) physics. In quantum electrodynamics (QED), quantum effects modify the Dirac equation, which predicts a gyromagnetic factor g precisely equal to two. The first-order correction, an effect of only α/2π, was calculated by Julian Schwinger in 1948. Taking into account higher orders too, the electron anomalous magnetic moment, a = (g–2)/2, is one of the most precisely measured quantities in physics and is in remarkable agreement with QED predictions. The g–2 of the muon has also been measured with high precision and shows a persistent discrepancy with certain theoretical predictions. By contrast, however, the tau lepton’s g–2 suffers from a lack of precision, given that its short lifetime makes direct measurements very challenging. If new-physics effects scale with the squared lepton mass, deviations from QED predictions in this measurement would be about 280 times larger than in the muon g–2 measurement.
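The factor of 280 follows directly from the lepton masses. A one-line check, using the known mass values:

```python
m_tau, m_mu = 1776.86, 105.658  # lepton masses in MeV

# New physics coupling like m_lepton^2 is enhanced by this ratio in tau g-2:
print(f"(m_tau/m_mu)^2 ~ {(m_tau / m_mu) ** 2:.0f}")  # ~283, i.e. "about 280"
```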
Experimental insights on g–2 can be indirectly obtained by measuring the exclusive production of tau–lepton pairs created in photon–photon collisions. As charged particles pass each other at relativistic velocities in the LHC beampipe, they generate intense electromagnetic fields, leading to photon–photon collisions. The production of tau lepton pairs in photon collisions was first observed by the ATLAS and CMS collaborations in Pb–Pb runs. The CMS collaboration has now observed the same process in proton–proton (pp) data. When photon collisions occur in pp runs, the protons can remain intact. As a result, final-state particles can be produced exclusively, with no other particles coming from the same production vertex.
Tau–lepton tracks were isolated within just a millimetre around the interaction vertex
Separating these low-particle-multiplicity events from ordinary pp collisions is extremely challenging, as events “pile up” within the same bunch crossing. Thanks to the precise tracking capabilities of the CMS detector, tau–lepton tracks were isolated within just a millimetre around the interaction vertex. Figure 1 shows the resulting excess of γγ → ττ events rising above the estimated backgrounds when few additional tracks were observed within the selected 1 mm window.
This process was used to constrain aτ using an effective-field-theory approach. BSM physics affecting g–2 would modify the expected number of γγ → ττ events, with the effect increasing with the di-tau invariant mass. Compared to Pb–Pb collisions, the pp data sample provides a more precise g–2 value because of the larger number of events and the higher invariant masses probed, thanks to the higher energy of the photons. Using the invariant-mass distributions collected in pp collisions during the full LHC Run 2, the CMS collaboration has not observed any statistically significant deviation from the Standard Model. The tightest constraint ever on aτ was set, as shown in figure 2. The uncertainty is only three times larger than the value of Schwinger’s correction.
Magnetic monopoles are hypothetical particles that possess a magnetic charge. In 1864 James Clerk Maxwell assumed that magnetic monopoles didn’t exist because no one had ever observed one. Hence, he did not incorporate the concept of magnetic charges in his unified theory of electricity and magnetism, despite their being fully consistent with classical electrodynamics. Interest in magnetic monopoles intensified in 1931 when Dirac showed that quantum mechanics can accommodate magnetic charges g allowed by the quantisation condition g = Ne/(2α) = NgD, where e is the elementary electric charge, α is the fine-structure constant, gD is the fundamental magnetic charge and N is an integer. Grand unified theories predict very massive magnetic monopoles, but several recent extensions of the Standard Model feature monopoles in a mass range accessible at the LHC. Scientists have explored cosmic rays, particle collisions, polar volcanic rocks and lunar materials in their quest for magnetic monopoles, yet no experiment has found conclusive evidence thus far.
Signature strategy
The ATLAS collaboration recently reported the results of a search for magnetic monopoles using the full LHC Run 2 dataset recorded in 2015–2018. Magnetic-charge conservation dictates that magnetic monopoles are stable and would be created in pairs of oppositely charged particles. Point-like magnetic monopoles could be produced in proton–proton collisions via two mechanisms: Drell–Yan, in which a virtual photon from the collision creates a magnetic-monopole pair; or photon fusion, whereby two virtual photons, one emitted by each colliding proton, interact to create a magnetic-monopole pair. Dirac’s quantisation condition implies that a 1gD monopole would ionise matter in a similar way to a high-electric-charge object (HECO) of charge 68.5e. Hence, magnetic monopoles and HECOs are expected to be highly ionising. In contrast to the behaviour of electrically charged particles, however, the Lorentz force on a monopole in the solenoidal magnetic field encompassing the ATLAS inner tracking detector would cause it to be accelerated in the direction of the field rather than in the orthogonal plane – a trajectory that precludes the application of the usual track-reconstruction methods. The ATLAS detection strategy therefore relies on characterising the highly ionising signature of magnetic monopoles and HECOs in the electromagnetic calorimeter and in the transition radiation tracker.
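The 68.5e equivalence follows directly from Dirac’s quantisation condition: for N = 1, gD = e/(2α). A one-line check:

```python
alpha = 1 / 137.035999  # fine-structure constant

# Dirac quantisation: g = N * e / (2 * alpha), so gD / e = 1 / (2 * alpha)
print(f"gD / e ~ {1 / (2 * alpha):.1f}")  # ~68.5, as quoted above
```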
This is the first ATLAS analysis to consider the photon-fusion production mechanism
The ATLAS search considered magnetic monopoles of magnetic charge 1gD and 2gD, and HECOs of 20e, 40e, 60e, 80e and 100e of both spin-0 and spin-½ in the mass range 0.2–4 TeV. ATLAS is not sensitive to higher charge monopoles or HECOs because they stop before the calorimeter due to their higher ionisation. Since particles in the considered mass range are too heavy to produce significant electromagnetic showers in the calorimeter, their narrow high-energy deposits are readily distinguished from the broader lower-energy ones of electrons and photons. Events with multiple high-energy deposits in the transition radiation tracker aligned with a narrow high-energy deposit in the calorimeter are therefore characteristic of magnetic monopoles and HECOs.
Random combinations of rare processes, such as superpositions of high-energy electrons, could potentially mimic such a signature. Since such rare processes cannot be easily simulated, the background in the signal region is estimated to be 0.15 ± 0.04 (stat) ± 0.05 (syst) events, extrapolated from the lower-ionisation event yields in the data.
With no magnetic monopole or HECO candidate observed in the analysed ATLAS data, upper cross-section limits and lower mass limits on these particles were set at 95% confidence level. The Drell–Yan cross-section limits are approximately a factor of three better than those from the previous search using the 2015–2016 Run 2 data.
This is the first ATLAS analysis to consider the photon-fusion production mechanism, the results of which are shown in figure 1 (left) for spin-½ monopoles. ATLAS is also currently the most sensitive experiment to magnetic monopoles in the charge range 1–2gD, as shown in figure 1 (right), and to HECOs in the charge range 20–100e. The collaboration is further refining search techniques and developing new strategies to search for magnetic monopoles and HECOs in both Run 2 and Run 3 data.
The very high energy densities reached in heavy-ion collisions at the LHC result in the production of an extremely hot form of matter, known as the quark–gluon plasma (QGP), consisting of freely roaming quarks and gluons. This medium undergoes a dynamic evolution before eventually transitioning to a collection of hadrons. The details of this temporal evolution and phase transition are, however, very challenging to calculate from first principles using quantum chromodynamics. The experimental study of the final-state hadrons produced in heavy-ion collisions therefore provides important insights into the nature of these processes. In particular, measurements of the pseudorapidity (η) distributions of charged hadrons help in understanding the initial energy density of the produced QGP and how this energy is transported throughout the event. These measurements involve different classes of collisions, sorted according to the degree of overlap between the two colliding nuclei; collisions with the largest overlap have the highest energy densities.
In 2022 the LHC entered Run 3, with higher collision energies and integrated luminosities than previous running periods. The CMS collaboration has now reported the first measurement using Run 3 heavy-ion data. Charged hadrons produced in lead–lead collisions at the record nucleon–nucleon centre-of-mass collision energy of 5.36 TeV were reconstructed by exploiting the pixel layers of the silicon tracker. At mid-rapidity and in the 5% most central collisions (which have the largest overlap between the two colliding nuclei), 2032 ± 91 charged hadrons are produced per unit of pseudorapidity. The data-to-theory comparisons show that models can successfully predict either the total charged-hadron multiplicity or the shape of its η distribution, but struggle to simultaneously describe both aspects.
Previous measurements have shown that the mid-rapidity yields of charged hadrons in proton–proton and heavy-ion collisions are comparable when scaled by the average number of nucleons participating in the collisions,〈Npart〉. Figure 1 shows measurements of this quantity in several collision systems as a function of collision energy. It was previously observed that central nucleus–nucleus collisions exhibit a power-law scaling, as illustrated by the blue dashed curve; the new CMS result agrees with this trend. In addition, the measurement is about two times larger than the values measured in proton–proton collisions at similar energies, indicating that heavy-ion collisions are more efficient at converting initial-state energy into final-state hadrons at mid-rapidity.
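For the new CMS point, the per-participant-pair yield can be estimated directly from the numbers above. A short sketch, assuming〈Npart〉≈ 384 for the 5% most central collisions (an illustrative value typical of this centrality class, not taken from the CMS paper):

```python
dNch_deta = 2032  # charged hadrons per unit eta, 5% most central Pb-Pb
n_part = 384      # assumed average number of participating nucleons

per_pair = dNch_deta / (n_part / 2)
print(f"dNch/deta per participant pair ~ {per_pair:.1f}")
# ~10.6, roughly twice the pp value at similar energies, as stated above
```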
This measurement opens a new chapter in the CMS heavy-ion programme. At the end of 2023 the LHC delivered an integrated luminosity of around 2 nb–1 to CMS, and more data will be collected in the coming years, enabling more precise analyses of the QGP features.