Consisting only of an electron and a positron, positronium (Ps) offers a uniquely clean window into a purely leptonic matter–antimatter system. Traditionally, experiments have relied on formation processes that produce clouds of Ps with a broad velocity distribution, limiting the precision of spectroscopic studies through large Doppler broadening of the Ps transition lines. Now, after almost 10 years of effort, the AEgIS collaboration at CERN’s Antiproton Decelerator has demonstrated laser cooling of Ps for the first time, opening new possibilities for antimatter research.
“This is a breakthrough for the antimatter community that has been awaited for almost 30 years, and which has both a broad physics and technological impact,” says AEgIS physics coordinator Benjamin Rienacker of the University of Liverpool. “Precise Ps spectroscopy experiments could reach the sensitivity to probe the gravitational interaction in a two-body system (with 50% on-shell antimatter mass and made of point-like particles) in a cleaner way than with antihydrogen. Cold ensembles of Ps could also enable Bose–Einstein condensation of an antimatter compound system that provides a path to a coherent gamma-ray source, while allowing precise measurements of the positron mass and fine structure constant, among other applications.”
Laser cooling, which was applied to antihydrogen atoms for the first time by the ALPHA experiment in 2021 (CERN Courier May/June 2021 p9), slows atoms gradually over many cycles of photon absorption and emission. This is normally done using a narrowband laser, which emits light within a small frequency range. By contrast, the AEgIS team uses a pulsed alexandrite-based laser with high intensity, large bandwidth and long pulse duration to meet the cooling requirements. The system enabled the AEgIS team to decrease the temperature of the Ps atoms from 380 K to 170 K, corresponding to a decrease in the transverse component of the Ps velocity from 54 to 37 km s⁻¹.
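As a back-of-the-envelope check (not from the article), the quoted temperatures and velocities are consistent with the one-dimensional rms velocity of a thermal gas, v = √(kBT/m), using the Ps mass m = 2mₑ:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_E = 9.1093837e-31   # electron mass, kg
M_PS = 2 * M_E        # positronium mass: one electron plus one positron

def rms_velocity_1d(temperature_k: float) -> float:
    """One-dimensional rms velocity (m/s) of a thermal gas of positronium."""
    return math.sqrt(K_B * temperature_k / M_PS)

# Temperatures quoted by AEgIS before and after laser cooling
for t in (380, 170):
    print(f"T = {t} K  ->  v_rms ≈ {rms_velocity_1d(t) / 1e3:.0f} km/s")
```

The results land close to the 54 and 37 km s⁻¹ quoted in the text, confirming the velocities correspond to a simple thermal interpretation of the Ps cloud.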
The feat presents a major technical challenge since, unlike antihydrogen, Ps is unstable and annihilates with a lifetime of only 142 ns. The use of a large-bandwidth laser has the advantage of cooling a large fraction of the Ps cloud while increasing the effective lifetime, leaving a larger number of Ps atoms available for further experimentation after cooling.
“Our results can be further improved, starting from a cryogenic Ps source, which we also know how to build in AEgIS, to reach our dream temperature of 10 K or lower,” says AEgIS spokesperson Ruggero Caravita of INFN-TIFPA. “Other ideas are to add a second cooling stage with a narrower spectral bandwidth set to a detuning level closer to resonance, or by coherent laser cooling.”
Supernova remnants (SNRs) are excellent candidates for the production of galactic cosmic rays. Still, as we approach the “knee” region in the cosmic-ray spectrum (in the few-PeV regime), other astrophysical sources may contribute. A recent study by the High Energy Stereoscopic System (H.E.S.S.) observatory in Namibia sheds light on one such source, called SS 433, a microquasar located nearly 18,000 light-years away. It is a binary system formed by a compact object, such as a neutron star or a stellar-mass black hole, and a companion star, where the former is continuously accreting matter from the latter and emitting relativistic jets perpendicular to the accretion plane.
The jets of SS 433 are oriented perpendicular to our line of sight and constantly distort the SNR shell (called W50, or the Manatee Nebula) that was created during the black-hole formation. Radio observations reveal the precessing motion of the jets out to 0.3 light-years from the black hole, beyond which the jets fade from view. At approximately 81 light-years from the black hole, they reappear as collimated large-scale structures in the X-ray and gamma-ray bands, termed “outer jets”. These jets are a fascinating probe of particle-acceleration sites, as interactions between the jets and their environments can accelerate particles that produce gamma rays.
Excellent resolution
The H.E.S.S. collaboration collected and analysed more than 200 hours of data from SS 433 to investigate the acceleration and propagation of electrons in its outer jets. As an imaging atmospheric Cherenkov telescope array, H.E.S.S. offers excellent energy and angular resolution. The gamma-ray image showed two emission regions along the outer jets, which overlap with previously observed X-ray sources. To study the energy dependence of the emission, the full energy range was split into three bands, revealing that the highest-energy emission is concentrated closer to the central source, i.e. at the base of the outer jets. A proposed explanation for the observations is that electrons are accelerated to TeV energies, generate high-energy gamma rays via inverse Compton scattering, and subsequently lose energy as they propagate outwards, generating the observed X-rays.
Monte Carlo simulations modelled the morphology of the gamma-ray emission and revealed a significant deceleration of the outer jets at their bases, indicating a possible shock region. A lower limit on the cut-off energy of the electrons injected into this region shows that they are accelerated to energies above 200 TeV at 68% confidence level. Protons and heavier nuclei could also be accelerated in these regions, reaching much higher energies still, since they suffer weaker energy losses and carry a higher total energy than electrons.
SS 433 is, unfortunately, ruled out as a contributor to the cosmic-ray flux observed on Earth. Taking the age of the system to be 30,000 years and proton energies of 1 PeV, the distance diffused by a cosmic-ray particle is much smaller than even the lowest estimates of the distance to SS 433. Even assuming a significantly larger galactic diffusion coefficient or an age 40 times greater, the scenario remains incompatible with other measurements and with the highest estimates of the nebula’s age. While proton acceleration does occur in the outer jets of SS 433, these particles do not contribute to the cosmic-ray flux measured on Earth.
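The scale of this argument can be sketched with a toy diffusion estimate. The parameterisation D = D₀(E/GeV)^δ and the numbers below are generic illustrative assumptions, not values from the H.E.S.S. study:

```python
import math

LIGHT_YEAR_CM = 9.461e17  # cm per light-year
YEAR_S = 3.156e7          # seconds per year

def diffusion_distance_ly(d0_cm2_s: float, delta: float,
                          energy_gev: float, age_yr: float) -> float:
    """Typical diffusion distance sqrt(2*D*t), with D = D0 * (E/GeV)**delta."""
    d = d0_cm2_s * energy_gev ** delta
    t = age_yr * YEAR_S
    return math.sqrt(2 * d * t) / LIGHT_YEAR_CM

# 1 PeV protons from a 30,000-year-old system, with a commonly assumed
# galactic diffusion normalisation (illustrative only)
dist = diffusion_distance_ly(1e28, 0.5, 1e6, 3e4)
print(f"typical diffusion distance ~ {dist:.0f} light-years")
```

Under these assumptions the particles diffuse only a few thousand light-years, well short of the roughly 18,000 light-years to SS 433, which is the essence of the exclusion argument.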
This study, by revealing the energy-dependent morphology of a galactic microquasar and constraining jet velocities at large distances, firmly establishes shocks in microquasar jets as potent particle-acceleration sites and offers valuable insights for future modelling of these astrophysical structures. It opens up exciting possibilities in the search for galactic cosmic-ray sources at PeV energies and extragalactic ones at EeV energies.
Results from the LHC so far have transformed the particle-physics landscape. The discovery of the Higgs boson with a mass of 125 GeV – in agreement with the prediction from earlier precision measurements at LEP and other colliders – has completed the long-predicted matrix of particles and interactions of the Standard Model (SM) and cleared the decks for a new phase of exploration. On the other hand, the lack of evidence for an anticipated supporting cast of particles beyond the SM (BSM) gives no clear guidance as to what form this exploration may take. For the first time since the Fermi theory almost a century ago, particle physicists are voyaging into completely uncharted territory, where our only compass is the certitude that the SM in isolation cannot account for all observations. This absence of theoretical guidance calls for a powerful experimental programme to push the frontiers of the unknown as far as possible.
The absence of LHC signals for new phenomena in the TeV range requires physicists to think differently about the open questions in fundamental physics. These include the abundance of matter over antimatter, the nature of dark matter, the quark and lepton flavour puzzle in general, and the non-zero nature of neutrino masses in particular. Solutions could lie at even higher energies, at the price of either an unnatural value of the electroweak scale or an ingenious but still elusive structure. Radically new physics scenarios have been devised, often involving light and very weakly coupled structures. Neither the mass scale (from meV to ZeV) of this new physics nor the intensity of its couplings (from 1 to 10⁻¹² or less) to the SM is known, calling for a versatile exploration tool.
By providing considerable advances in sensitivity, precision and, eventually, energy far above the TeV scale, the integrated Future Circular Collider (FCC) programme is the perfect vehicle with which to navigate this new landscape. Its first stage FCC-ee, an e+e– collider operating at centre-of-mass energies ranging from below the Z pole (90 GeV) to beyond the top-quark pair-production threshold (365 GeV), would map the properties of the Higgs and electroweak gauge bosons and the top quark with precisions that are orders of magnitude better than today, acquiring sensitivity to the processes that led to the formation of the Brout–Englert–Higgs field a fraction of a nanosecond after the Big Bang. A comprehensive campaign of precision electroweak, QCD, flavour, tau, Higgs and top-quark measurements sensitive to tiny deviations from the predicted SM behaviour would probe energy scales far beyond the direct kinematic reach, while a subsequent pp collider (FCC-hh) would improve – by about an order of magnitude – the direct discovery reach for new particles. Both machines are strongly motivated in their own rights. Together, they offer the furthest physics reach of all proposed future colliders, and put the fundamental scalar sector of the universe centre-stage.
A scalar odyssey
The power of FCC-ee to probe the Higgs boson and other SM particles at much higher resolution would allow physicists to peer further into the cloud of quantum fluctuations surrounding them. The combination of results from previous lepton and hadron colliders at CERN and elsewhere has shown that electroweak symmetry breaking is consistent with its SM parameterisation, but its origin (and the origin of the Higgs boson itself) demands a deeper explanation. The FCC is uniquely placed to address this mystery via a combination of per-mil-level Higgs-boson and parts-per-million gauge-boson measurements, along with direct high-energy exploration, to comprehensively probe symmetry-based explanations for an electroweak hierarchy. In particular, measurements of the Higgs boson’s self-coupling at the FCC would test whether the electroweak phase transition was first- or second-order, revealing whether it could have played a role in setting the out-of-equilibrium condition necessary for creating the matter–antimatter asymmetry.
While the Brout–Englert–Higgs mechanism nicely explains the pattern of gauge-boson masses, the peculiar structure of quark and lepton masses (as well as the quark mixing angles) is ad hoc within the SM and could be the low-energy imprint of some new dynamics. The FCC will probe such potential new symmetries and forces, in particular via detailed studies of b and τ decays and of b → τ transitions, and significantly extend knowledge of flavour physics. A deeper understanding of approximate conservation laws such as baryon- and lepton-number conservation (or the absence thereof in the case of Majorana neutrinos) would test the limits of lepton-flavour universality and violation, for example, and could reveal new selection rules governing the fundamental laws. Measuring the first- and second-generation Yukawa couplings will also be crucial to complete our understanding, with a potential FCC-ee run at the s-channel Higgs resonance offering the best sensitivity to the electron Yukawa coupling. Stepping back, the FCC would sharpen understanding of the SM as a low-energy effective field theory approximation of a deeper, richer theory by extending the reach of direct and indirect exploration by about one order of magnitude.
The unprecedented statistics of FCC-ee also make it uniquely sensitive to weakly coupled dark sectors and other candidates for physics beyond the SM, such as heavy axions, dark photons and long-lived particles. Decades of searches across different experiments have pushed the mass of the initially favoured dark-matter candidate (weakly interacting massive particles, WIMPs) progressively beyond the reach of the highest-energy e+e– colliders. As a consequence, hidden sectors consisting of new particles that interact almost imperceptibly with the SM are rapidly gaining popularity as an alternative that could hold the answer not only to this problem but to a variety of others, such as the origin of neutrino masses. If dark matter is a doublet or a triplet WIMP, FCC-hh would cover the entire parameter space up to the upper mass limit for a thermal relic. The FCC could also host a range of complementary detector facilities to extend its capabilities for neutrino physics, long-lived particles and forward physics.
Completing this brief, high-level summary of the FCC physics reach is its ability to probe the origins of exotic astrophysical and cosmological signals, such as stochastic gravitational waves from cosmological phase transitions or astrophysical signatures of high-energy gamma rays. These phenomena, which include a modified electroweak phase transition, confining new physics in a dark sector, or annihilating TeV-scale WIMPs, could arise from new physics that is directly accessible only to an energy-frontier facility.
Precision rules
Back in 2011, the original incarnation of a circular e+e– collider to follow the LHC (dubbed LEP3) was a high-luminosity Higgs factory operating at 240 GeV in the LEP/LHC tunnel, offering similar precision to a linear collider running at the same centre-of-mass energy for a much smaller price tag. Choosing to build a larger 80–100 km version not only allows the tunnel and infrastructure to be reused for a 100 TeV hadron collider, but extends the FCC-ee scientific reach significantly beyond the study of the Higgs boson alone. The unparalleled control of the centre-of-mass energy via resonant depolarisation and the unrivalled luminosity of an FCC-ee with four interaction points would produce around 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs (offering ppm precision on the Z and W masses and widths), 2 × 10⁶ Higgs bosons and 2 × 10⁶ top-quark pairs (impossible to produce with e+e– collisions in the LEP/LHC tunnel) in as little as 16 years.
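To see why such event counts push the limiting uncertainties from statistics to systematics, a naive statistical floor on a resonance-mass fit scales as Γ/√N. The Z-pole inputs below are illustrative PDG-level values, not FCC projections:

```python
import math

def mass_stat_floor_mev(width_gev: float, n_events: float) -> float:
    """Naive statistical floor on a resonance-mass fit: sigma ~ Gamma / sqrt(N), in MeV."""
    return width_gev / math.sqrt(n_events) * 1e3

# Z lineshape: Gamma_Z ~ 2.5 GeV, ~6e12 recorded Z bosons at FCC-ee
sigma_mev = mass_stat_floor_mev(2.5, 6e12)
print(f"statistical floor ~ {sigma_mev * 1e3:.1f} keV on m_Z")
```

With a statistical floor of order 1 keV, the achievable precision is instead set by systematic effects, which is why the resonant-depolarisation control of the centre-of-mass energy mentioned above is so central to the FCC-ee programme.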
From the Fermi interaction to the discovery of the W and Z, and from electroweak measurements to the discovery of the top quark and the Higgs boson, greater precision has operated as a route to discoveries. Any deviation from the SM predictions, interpreted as the manifestation of new contact interactions, will point to a new energy scale that will be explored directly in a later stage. One of the findings of the FCC feasibility study is the richness of the FCC-ee Z-pole run, which promises comprehensive measurements of the Z lineshape and many electroweak observables with a 50-fold increase in precision, as well as direct and uniquely precise determinations of the electromagnetic and strong coupling constants. The comparison between these data and commensurately precise SM predictions would severely constrain the existence of new physics via virtual loops or mixing, corresponding to a factor-of-seven increase in energy scale – a jump similar to that from the LHC to FCC-hh. The Z-pole run also enables otherwise unreachable flavour (b, τ) physics, studies of QCD and hadronisation, searches for rare or forbidden decays, and exploration of the dark sector.
After the Z-pole run, the W boson provides a further precision tool at FCC-ee. Its mass is one of the most precisely measured parameters that can be calculated in the SM and is thus of utmost importance. In the planned WW-threshold run, current knowledge can be improved by more than an order of magnitude to test the SM as well as a plethora of new-physics models at a higher quantum level. Together, the very-high-luminosity Z and W runs will determine the gauge-boson sector with the sharpest precision ever.
Going to its highest energy, FCC-ee would explore physics associated with the heaviest known particle, the top quark, whose mass plays a fundamental role in the prediction of SM processes and for the cosmological fate of the vacuum. An improvement in precision by more than an order of magnitude will go hand in hand with a significant improvement in the strong coupling constant, and is crucial for precision exploration beyond the SM.
High-energy synergies
A later FCC-hh stage would complement and substantially extend the FCC-ee physics reach in nearly all areas. Compared to the LHC, it would increase the energy for direct exploration by a factor of seven, with the potential to observe new particles with masses up to 40 TeV (see “Direct exploration” figure). Should FCC-hh directly find a signal of beyond-SM physics, the precision measurements from FCC-ee will be essential to pinpoint its microscopic origin. Indirectly, FCC-hh will be sensitive to energies of around 100 TeV, for example in the tails of Drell–Yan distributions. The large production of SM particles, including the Higgs boson, at large transverse momentum allows measurements to be performed in kinematic regions with optimal signal-to-background ratio and reduced experimental systematic uncertainties, testing the existence of effective contact interactions in ways that are complementary to what is accessible at lepton colliders. Dedicated FCC-hh experiments, for instance with forward detectors, would further enrich the new-physics opportunities and hunt for long-lived and millicharged particles.
Further increasing the synergies between FCC-ee and FCC-hh is the importance of operating four detectors (instead of two as in the conceptual design study), which has led to an optimised ring layout with a new four-fold periodicity. With four interaction points, FCC-ee provides a net gain in integrated luminosity for a given physics outcome. It also allows for a range of detector solutions to cover all physics opportunities, strengthens the robustness of systematic-uncertainty estimates and discovery claims, and opens several key physics targets that are tantalisingly close (but missed) with only two detectors. The latter include the first 5σ observation of the Higgs-boson self-coupling, and the opportunity to access the Higgs-boson coupling to electrons – one of FCC-ee’s toughest physics challenges.
No physics case for FCC would be complete without a thorough assessment of the corresponding detector challenges. A key deliverable of the feasibility study is a complete set of specifications ensuring that calorimeters, tracking and vertex detectors, muon detectors, luminometers and particle-identification devices meet the physics requirements. In the context of a Higgs factory operating at the ZH production threshold and above, these requirements have already been studied extensively for proposed linear colliders. However, the different experimental environment and the huge statistics of FCC-ee demand that they are revisited. The exquisite statistical uncertainties anticipated on key electroweak measurements at the Z peak and at the WW threshold call for a superb control of the systematic uncertainties, which will put considerable demands on the acceptance, construction quality and stability of the detectors. In addition, the specific discovery potential for very weakly coupled particles must be kept in mind.
The software and computing demands of FCC are an integral element of the feasibility study. From the outset, the driving consideration has been to develop a single software “ecosystem” adaptable to any future collider and usable by any future experiment, based on the best software available. Some tools, such as flavour tagging, significantly exceed the performance of algorithms previously used for linear-collider studies, but there is still much work needed to bring the software to the level required by the FCC-ee. This includes the need for more accurate simulations of beam-related quantities, the machine-detector interface and the detectors themselves. In addition, various reconstruction and analysis tools for use by all collaborators need to be developed and implemented, reaping the benefits from the LHC experience and past linear-collider studies, and computing resources for regular simulated data production need to be evaluated.
Powerful plan
The alignment of stars – from the initial concept in 2011/2012 of a 100 km-class electron–positron collider in the same tunnel as a future 100 TeV proton–proton collider, to the 2020 update of the European strategy for particle physics endorsing the FCC feasibility study as a top priority for CERN and its international partners – provides the global high-energy physics community with its most powerful exploration tool. FCC-ee offers ideal conditions (luminosity, centre-of-mass energy calibration, multiple experiments and possibly monochromatisation) for the study of the four heaviest particles of the SM, with a flurry of opportunities for precision measurements, searches for rare or forbidden processes, and the possible discovery of feebly coupled particles. It is also the perfect springboard for a 100 TeV hadron collider, for which it provides a great part of the infrastructure. Strongly motivated in their own rights, together these two machines offer a uniquely powerful long-term plan for 21st-century particle physics.
The CMS collaboration has reported the first observation of γγ → ττ in pp collisions. The results set a new benchmark for the tau lepton’s magnetic moment, surpassing previous constraints and paving the way for studies probing new physics.
For the tau lepton’s less massive cousins, measurements of magnetic moments offer exceptional sensitivity to beyond-the-Standard-Model (BSM) physics. In quantum electrodynamics (QED), quantum effects modify the Dirac equation, which predicts a gyromagnetic factor g precisely equal to two. The first-order correction, an effect of only α/2π, was calculated by Julian Schwinger in 1948. Taking into account higher orders too, the electron anomalous magnetic moment, a = (g–2)/2, is one of the most precisely measured quantities in physics and is in remarkable agreement with QED predictions. The g–2 of the muon has also been measured with high precision and shows a persistent discrepancy with certain theoretical predictions. By contrast, however, the tau lepton’s g–2 suffers from a lack of precision, given that its short lifetime makes direct measurements very challenging. If new-physics effects scale with the squared lepton mass, deviations from QED predictions in this measurement would be about 280 times larger than in the muon g–2 measurement.
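Both numbers quoted above are easy to verify. This short script (an illustration, not part of the CMS analysis) reproduces Schwinger’s first-order correction and the mass-squared enhancement factor:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant
M_MU_MEV = 105.658      # muon mass, MeV
M_TAU_MEV = 1776.86     # tau mass, MeV

# Schwinger's 1948 first-order QED correction to the anomalous magnetic moment
a_schwinger = ALPHA / (2 * math.pi)
print(f"a (first order) = {a_schwinger:.7f}")

# If new-physics effects scale with the squared lepton mass,
# the tau enhancement relative to the muon is:
enhancement = (M_TAU_MEV / M_MU_MEV) ** 2
print(f"(m_tau / m_mu)^2 ≈ {enhancement:.0f}")
```

The first print gives a ≈ 0.0011614, and the mass-squared ratio comes out near 283, the "about 280 times larger" sensitivity quoted in the text.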
Experimental insights on g–2 can be indirectly obtained by measuring the exclusive production of tau–lepton pairs created in photon–photon collisions. As charged particles pass each other at relativistic velocities in the LHC beampipe, they generate intense electromagnetic fields, leading to photon–photon collisions. The production of tau lepton pairs in photon collisions was first observed by the ATLAS and CMS collaborations in Pb–Pb runs. The CMS collaboration has now observed the same process in proton–proton (pp) data. When photon collisions occur in pp runs, the protons can remain intact. As a result, final-state particles can be produced exclusively, with no other particles coming from the same production vertex.
Separating these low-multiplicity events from ordinary pp collisions is extremely challenging, as events “pile up” within the same bunch crossing. Thanks to the precise tracking capabilities of the CMS detector, tau–lepton tracks were isolated within just a millimetre around the interaction vertex. Figure 1 shows the resulting excess of γγ → ττ events rising above the estimated backgrounds when few additional tracks were observed within the selected 1 mm window.
This process was used to constrain aτ using an effective-field-theory approach. BSM physics affecting g–2 would modify the expected number of γγ → ττ events, with the effect increasing with the di-tau invariant mass. Compared to Pb–Pb collisions, the pp data sample provides a more precise g–2 value because of the larger number of events and the higher invariant masses probed, thanks to the higher energy of the photons. Using the invariant-mass distributions collected in pp collisions during the full LHC Run 2, the CMS collaboration has not observed any statistically significant deviation from the Standard Model. The tightest constraint ever on aτ was set, as shown in figure 2. The uncertainty is only three times larger than the value of Schwinger’s correction.
Magnetic monopoles are hypothetical particles that possess a magnetic charge. In 1864 James Clerk Maxwell assumed that magnetic monopoles didn’t exist because no one had ever observed one. Hence, he did not incorporate the concept of magnetic charges in his unified theory of electricity and magnetism, despite their being fully consistent with classical electrodynamics. Interest in magnetic monopoles intensified in 1931 when Dirac showed that quantum mechanics can accommodate magnetic charges, g, allowed by the quantisation condition g = Ne/(2α) = NgD, where e is the elementary electric charge, α is the fine structure constant, gD is the fundamental magnetic charge and N is an integer. Grand unified theories predict very massive magnetic monopoles, but several recent extensions of the Standard Model feature monopoles in a mass range accessible at the LHC. Scientists have explored cosmic rays, particle collisions, polar volcanic rocks and lunar materials in their quest for magnetic monopoles, yet no experiment has found conclusive evidence thus far.
Signature strategy
The ATLAS collaboration recently reported the results of the search for magnetic monopoles using the full LHC Run 2 dataset recorded in 2015–2018. Magnetic charge conservation dictates that magnetic monopoles are stable and would be created in pairs of oppositely charged particles. Point-like magnetic monopoles could be produced in proton–proton collisions via two mechanisms: Drell–Yan, in which a virtual photon from the collision creates a magnetic monopole pair; or photon-fusion, whereby two virtual photons, one emitted by each colliding proton, interact to create a magnetic monopole pair. Dirac’s quantisation condition implies that a 1gD monopole would ionise matter similarly to a high-electric-charge object (HECO) of charge 68.5e. Hence, magnetic monopoles and HECOs are expected to be highly ionising. In contrast to the behaviour of electrically charged particles, however, the Lorentz force on a monopole in the solenoidal magnetic field encompassing the ATLAS inner tracking detector would cause it to be accelerated in the direction of the field rather than in the orthogonal plane – a trajectory that precludes the application of usual track-reconstruction methods. The ATLAS detection strategy therefore relies on characterising the highly ionising signature of magnetic monopoles and HECOs in the electromagnetic calorimeter and in the transition radiation tracker.
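The 68.5e figure follows directly from Dirac’s quantisation condition; a minimal check:

```python
ALPHA = 1 / 137.035999  # fine-structure constant

def monopole_charge_in_e(n: int) -> float:
    """Dirac quantisation: g = N*e/(2*alpha), expressed in units of e."""
    return n / (2 * ALPHA)

# A 1gD monopole ionises roughly like an electric charge of ~68.5e;
# a 2gD monopole like ~137e
print(f"1 gD ≈ {monopole_charge_in_e(1):.1f} e")
print(f"2 gD ≈ {monopole_charge_in_e(2):.1f} e")
```

Since ionisation losses grow with the square of the effective charge, these values make clear why monopoles and HECOs stand out as highly ionising, and why the higher charges stop before reaching the calorimeter.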
The ATLAS search considered magnetic monopoles of magnetic charge 1gD and 2gD, and HECOs of 20e, 40e, 60e, 80e and 100e of both spin-0 and spin-½ in the mass range 0.2–4 TeV. ATLAS is not sensitive to higher charge monopoles or HECOs because they stop before the calorimeter due to their higher ionisation. Since particles in the considered mass range are too heavy to produce significant electromagnetic showers in the calorimeter, their narrow high-energy deposits are readily distinguished from the broader lower-energy ones of electrons and photons. Events with multiple high-energy deposits in the transition radiation tracker aligned with a narrow high-energy deposit in the calorimeter are therefore characteristic of magnetic monopoles and HECOs.
Random combinations of rare processes, such as superpositions of high-energy electrons, could potentially mimic such a signature. Since such rare processes cannot be easily simulated, the background in the signal region is estimated to be 0.15 ± 0.04 (stat) ± 0.05 (syst) events through extrapolation from the lower ionisation event yields in the data.
With no magnetic monopole or HECO candidate observed in the analysed ATLAS data, upper cross-section limits and lower mass limits on these particles were set at 95% confidence level. The Drell–Yan cross-section limits are approximately a factor of three better than those from the previous search using the 2015–2016 Run 2 data.
This is the first ATLAS analysis to consider the photon-fusion production mechanism, the results of which are shown in figure 1 (left) for spin-½ monopoles. ATLAS is also currently the most sensitive experiment to magnetic monopoles in the charge range 1–2gD, as shown in figure 1 (right), and to HECOs in the charge range of 20–100e. The collaboration is further refining search techniques and developing new strategies to search for magnetic monopoles and HECOs in both Run 2 and Run 3 data.
The very high energy densities reached in heavy-ion collisions at the LHC result in the production of an extremely hot form of matter, known as the quark–gluon plasma (QGP), consisting of freely roaming quarks and gluons. This medium undergoes a dynamic evolution before eventually transitioning to a collection of hadrons. But the details of this temporal evolution and phase transition are very challenging to calculate from first principles using quantum chromodynamics. The experimental study of the final-state hadrons produced in heavy-ion collisions therefore provides important insights into the nature of these processes. In particular, measurements of the pseudorapidity (η) distributions of charged hadrons help in understanding the initial energy density of the produced QGP and how this energy is transported throughout the event. These measurements involve different classes of collisions, sorted according to the degree of overlap between the two colliding nuclei; collisions with the largest overlap have the highest energy densities.
In 2022 the LHC entered Run 3, with higher collision energies and integrated luminosities than previous running periods. The CMS collaboration has now reported the first measurement using Run 3 heavy-ion data. Charged hadrons produced in lead–lead collisions at the record nucleon–nucleon centre-of-mass collision energy of 5.36 TeV were reconstructed by exploiting the pixel layers of the silicon tracker. At mid-rapidity and in the 5% most central collisions (which have the largest overlap between the two colliding nuclei), 2032 ± 91 charged hadrons are produced per unit of pseudorapidity. The data-to-theory comparisons show that models can successfully predict either the total charged-hadron multiplicity or the shape of its η distribution, but struggle to simultaneously describe both aspects.
Previous measurements have shown that the mid-rapidity yields of charged hadrons in proton–proton and heavy-ion collisions are comparable when scaled by the average number of nucleons participating in the collisions,〈Npart〉. Figure 1 shows measurements of this quantity in several collision systems as a function of collision energy. It was previously observed that central nucleus–nucleus collisions exhibit a power-law scaling, as illustrated by the blue dashed curve; the new CMS result agrees with this trend. In addition, the measurement is about two times larger than the values measured in proton–proton collisions at similar energies, indicating that heavy-ion collisions are more efficient at converting initial-state energy into final-state hadrons at mid-rapidity.
This measurement opens a new chapter in the CMS heavy-ion programme. At the end of 2023 the LHC delivered an integrated luminosity of around 2 nb⁻¹ to CMS, and more data will be collected in the coming years, enabling more precise analyses of the QGP features.
Collisions between lead ions at the LHC produce the hottest system ever created in the laboratory, with temperatures exceeding those in stellar interiors by about a factor of 10⁵. At such temperatures, nucleons no longer exist and quark–gluon plasma (QGP) is formed. Yet a precise measurement of the initial temperature of the QGP created in these collisions remains challenging. Information about the early stage of the collision gets washed out because the system's constituents continue to interact as it evolves. As a result, deriving the initial temperature from the hadronic final state requires a model-dependent extrapolation of system properties (such as energy density) by more than an order of magnitude.
In contrast, electromagnetic radiation in the form of real and virtual photons escapes the strongly interacting system. Moreover, virtual photons – emerging in the final state as electron–positron pairs (dielectrons) – carry mass, which allows early and late emission stages to be separated.
Radiation from the late hadronic phase dominates the thermal dielectron spectrum at invariant masses below 1 GeV. The yield and spectral shape in this mass window reflect the in-medium properties of vector mesons, mainly the ρ, and can be connected to the restoration of chiral symmetry in hot and dense matter. In the intermediate-mass region (IMR) between about 1 and 3 GeV, thermal radiation is expected to originate predominantly from the QGP, and an estimate of the initial QGP temperature can be derived from the slope of the exponential spectrum. This makes dielectrons a unique tool to study the properties of the system at its hottest and densest stage.
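A common parametrisation of the thermal yield, shown here as an illustrative sketch rather than the exact fit function of the analysis, makes the role of the slope explicit:

```latex
\frac{\mathrm{d}N_{ee}}{\mathrm{d}M_{ee}} \;\propto\; M_{ee}^{3/2}\,
\exp\!\left(-\frac{M_{ee}}{T}\right)
```

Fitting the near-exponential fall-off of the mass spectrum in the IMR returns the inverse-slope parameter T, which acts as an effective temperature of the emitting medium.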
At the LHC, this measurement is challenging because the expected thermal dielectron yield in the IMR is outshone by a physical background that is about 10 times larger, mainly from semileptonic decays of correlated pairs of charm or beauty hadrons (from cc̄ or bb̄ production). In ALICE, the electron and positron candidates are selected in the central barrel using complementary information provided by the inner tracking system (ITS), time projection chamber and time-of-flight measurements. Figure 1 (left) shows the dielectron invariant-mass spectrum in central lead–lead (Pb–Pb) collisions. The measured distribution is compared with a “cocktail” of all known contributions from hadronic decays. At masses below 0.5 GeV, an enhancement of the dielectron yield over the cocktail expectation is observed, which is consistent with calculations that include thermal radiation from the hadronic phase and an in-medium modification of the ρ meson. Between 0.5 GeV and the ρ mass (0.77 GeV) a small discrepancy between the data and calculations is observed.
In the IMR, however, systematic uncertainties on the cocktail contributions from charm and beauty prevent any conclusion being drawn about thermal radiation from QGP. To overcome this limitation, a new approach to separate the heavy-flavour contribution experimentally has been employed for the first time at the LHC. This approach exploits the high-precision vertexing capabilities of the ITS to measure the displaced vertices of heavy-quark pairs. Figure 1 (right) shows the dielectron distribution in the IMR compared to template distributions from Monte Carlo simulations. The best fit includes templates from heavy-quark pairs and an additional prompt dielectron contribution, presumably from thermal radiation. This is the first experimental hint of thermal radiation from the QGP in Pb–Pb collisions at the LHC, albeit with a significance of 1σ.
Ongoing measurements with the upgraded ALICE detector will provide an unprecedented improvement in precision, paving the way for a detailed study of thermal radiation from hot QGP.
In the Standard Model (SM), CP violation originates from a single complex phase in the 3 × 3 Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. The unitarity condition of the CKM matrix (Vud V*ub + Vcd V*cb + Vtd V*tb = 0, where Vij are the CKM matrix elements) can be represented as a triangle in the complex plane, with an area proportional to the amount of CP violation in the quark sector. One angle of this triangle, γ = arg(−Vud V*ub / Vcd V*cb), is of particular interest as it can be probed both indirectly under the assumption of unitarity and in tree-level processes that make no such assumption. Its most sensitive direct experimental determination is currently given by a combination of LHCb measurements of B+, B0 and B0s decays to final states containing a D(s) meson and one or more light mesons. Decay-time-dependent analyses of tree-level B0s → D∓s K± and B0 → D∓ π± decays are sensitive to the angle γ through CP violation in the interference between mixing and decay amplitudes. Thus, comparing the value of γ obtained from tree-level processes with indirect measurements of γ and other unitarity-triangle parameters in loop-level processes provides an important consistency check of the SM.
Measurements using neutral B0 and B0s mesons are particularly powerful because they resolve ambiguities that other measurements cannot. Due to the interference between B0(s)–B̄0(s) mixing and decay amplitudes, the physical CP-violating parameters in these decays are functions of a combination of γ and the relevant mixing phase, namely γ + 2β in the B0 system, where β = arg(−Vcd V*cb / Vtd V*tb), and γ − 2βs in the B0s system, where βs = arg(−Vts V*tb / Vcs V*cb). Measurements of these physical quantities can therefore be interpreted in terms of the angles γ and β(s), and γ can be derived using independent determinations of the other parameter as input.
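In the standard CKM notation, the unitarity relation and the three angles involved read:

```latex
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0, \qquad
\gamma = \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right), \qquad
\beta = \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right), \qquad
\beta_s = \arg\!\left(-\frac{V_{ts}V_{tb}^{*}}{V_{cs}V_{cb}^{*}}\right)
```

These are the conventional definitions as tabulated, for example, by the Particle Data Group.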
The LHCb collaboration recently presented a new measurement of B0s → D∓s K± decays collected during Run 2. This is a challenging analysis, as it requires a decay-time-dependent fit to extract the CP-violating observables, expressed as amplitudes of the four different decay paths from B0s and B̄0s to the D∓s K± final states. Previously, LHCb measured γ in this decay using the Run 1 dataset, obtaining γ = (128 +17 −22)°. The B0s–B̄0s oscillation frequency ∆ms must be precisely constrained in order to determine the phase differences between the amplitudes. In the Run 2 measurement, the established uncertainty on ∆ms would have been a limiting systematic uncertainty, which motivated the recent LHCb measurement of ∆ms using flavour-specific B0s → D−s π+ decays from the same dataset. Combined with Run 1 measurements of ∆ms, this has led to the most precise contribution to the world average and has greatly improved the precision on γ in the B0s → D∓s K± analysis. Indeed, for the first time the four amplitudes are resolved with sufficient precision to show the decay rates separately (see figure 1).
The angle γ is determined using inputs from other LHCb measurements of the CP-violating weak phase –2βs, along with measurements of the decay width and decay-width difference. The final result, γ = 74 ± 11°, is compatible with the SM and is the most precise determination of γ using B0s meson decays to date.
The anomalous magnetic moment of the muon has long exhibited an intriguing tension between experiment and theory. The latest measurement from Fermilab is around 5σ higher than the official Standard Model prediction, but newer calculations based on lattice QCD reduce the gap significantly. Confusion surrounds how best to determine the leading quantum correction to the muon’s magnetic moment: a process called hadronic vacuum polarisation (HVP), whereby a virtual photon briefly transforms into a hadronic blob before being reabsorbed.
While theorists are working hard to resolve this tension, the MUonE project aims to provide an independent determination of HVP using an intense muon beam from the CERN Super Proton Synchrotron. Whereas HVP is traditionally determined via hadron-production cross sections in e+e− data, or via theory-based estimates from recent lattice calculations, MUonE would make a very precise measurement of the shape of the differential cross section of μ+e− → μ+e− scattering. This will enable a direct measurement of the hadronic contribution to the running of the electromagnetic coupling constant α, which governs the HVP process.
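Schematically, the method rests on the fact that the elastic cross section depends on the coupling evaluated at the space-like momentum transfer t. This is a sketch of the underlying relation, not the full analysis formula:

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}t} \;=\; \frac{\mathrm{d}\sigma_{0}}{\mathrm{d}t}
\left|\frac{\alpha(t)}{\alpha(0)}\right|^{2}, \qquad
\alpha(t) = \frac{\alpha(0)}{1 - \Delta\alpha_{\mathrm{lep}}(t) - \Delta\alpha_{\mathrm{had}}(t)}
```

Here dσ₀/dt is the cross section with a fixed coupling and the leptonic running Δα_lep(t) is calculable to high precision, so the measured shape of the distribution isolates the hadronic piece Δα_had(t).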
MUonE was first proposed in 2017 as part of the Physics Beyond Colliders initiative, and a test run was performed in 2018 to validate the basic detector concept. Following a decision by CERN in 2019 to carry out a three-week-long pilot run to validate the experimental idea, the MUonE team collected data at the M2 beamline from 21 August to 10 September 2023, using a 160 GeV/c muon beam fired at atomic electrons in a fixed target located at CERN’s North Area. The main purpose of the run was to verify the system’s engineering and to attempt to measure the leptonic corrections to the running of α, for which an analysis is in progress.
The full experiment would comprise 40 stations, each consisting of a 1.5 cm-thick beryllium target followed by a tracking system that measures the scattering angles with high precision; further downstream lie an electromagnetic calorimeter and a muon detector. During the 2023 run, two MUonE stations followed by a calorimeter were installed, and a further tracking station without a target was placed upstream of the apparatus to track the incoming muons. The next step is to install further detector stations in stages.
“The original schedule has been delayed, partly due to the COVID pandemic, and the final measurement is expected to be performed after Long Shutdown 3,” explains MUonE collaboration board chair Clara Matteuzzi (INFN Milano Bicocca). “A first stage with a scaled detector, comprising a few stations followed by a calorimeter and a muon identifier, which could provide a very first measurement of HVP with low accuracy and a demonstration of the whole concept before the full final run, is under consideration.”
The overall goal of the experiment is to gather around 3.5 × 10¹² elastic scattering events with an electron energy larger than 1 GeV, during three years of data-taking at the M2 beamline. This would allow the team to achieve a statistical error of 0.3% and thus make MUonE competitive with the latest HVP results computed by other means. The challenge, however, is to keep the systematic error at the level of the statistical one.
“This successful test run gives MUonE confidence that the final goal can be reached, and we are very much looking forward to submitting the proposal for the full run,” adds Matteuzzi.
While most large galaxies, including our own, are believed to contain a supermassive black hole (SMBH) at their centre, much remains unknown about the origin of these extreme objects. The seeds of SMBHs are thought to have existed as early as 200 million years after the Big Bang, after which they accreted mass for more than 13 billion years to become black holes with masses of up to tens of billions of solar masses. But what were the seeds of these massive black holes? Some theories state that they formed from the collapse of the first generation of stars, which would make them tens to hundreds of solar masses, while others attribute their origin to the collapse of massive gas clouds, which could produce seeds with masses of 10⁴–10⁵ solar masses.
The recent joint detection of an SMBH dating from 500 million years after the Big Bang by the James Webb Space Telescope (JWST) and the Chandra X-ray Observatory provides new insights into this debate. The JWST, sensitive to highly redshifted emission from the early universe, observed a gravitationally lensed area to provide images of some of the oldest galaxies. One such galaxy, called UHZ1, has a redshift corresponding to 13.2 billion years ago, or 500 million years after the Big Bang. Apart from its age, the observations allow an estimate of its stellar mass, while the SMBH expected to be at its centre remains hidden at these wavelengths. This is where Chandra, which is sensitive in the 0.2 to 10 keV energy range, came in.
Observations by Chandra of the area of the cluster lens, Abell 2744, which magnifies UHZ1, show an excess at energies of 2–7 keV. The measured emission spectrum and luminosity correspond to those of an accreting black hole with a mass of 10⁷–10⁸ solar masses, which is about half of the total mass of the galaxy. This can be compared to our own galaxy, where the SMBH is estimated to make up only 0.1% of the total mass.
Such a mass can be explained by a seed black hole of 10⁴–10⁵ solar masses accreting matter for 300 million years. A lighter seed is more difficult to explain, however, because such a source would have to accrete matter continuously at twice its Eddington limit (the point at which the radiation pressure generated by accretion balances the object's gravitational pull on the surrounding matter). Although super-Eddington accretion is possible, since the limit assumes, for example, spherical emission of the radiation, which is not necessarily correct, the sustained accretion rates required for light seeds are difficult to explain.
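As a rough back-of-the-envelope check of the heavy-seed scenario (not a calculation from the study itself), one can estimate Eddington-limited growth using the Salpeter e-folding time; the radiative efficiency ε = 0.1 below is an assumed, conventional value:

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_sal), where the
# Salpeter e-folding time is t_sal ≈ 450 Myr * eps / (1 - eps).
eps = 0.1                            # assumed radiative efficiency (conventional choice)
t_sal = 450.0 * eps / (1.0 - eps)    # e-folding time in Myr, here 50 Myr
t = 300.0                            # accretion time from the article, in Myr

growth = math.exp(t / t_sal)         # mass growth factor over 300 Myr
m_seed = 1e5                         # heavy-seed mass in solar masses
m_final = m_seed * growth

print(f"growth factor: {growth:.0f}")      # a few hundred
print(f"final mass: {m_final:.1e} Msun")   # of order 10^7 solar masses
```

With these numbers a 10⁵ solar-mass seed comfortably reaches the 10⁷–10⁸ solar-mass range inferred for UHZ1, whereas a stellar-mass seed would need many more e-foldings than Eddington-limited accretion allows in 300 million years.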
The measurements of a single early galaxy already provide strong hints regarding the source of SMBHs. As JWST continues to observe the early universe, more such sources will likely be revealed. This will allow us to better understand the masses of the seeds, as well as how they grew over a period of 13 billion years.