As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.
The ATLAS collaboration recently published a new measurement of the B0 lifetime using B0→ J/ψK*0 decays from its entire Run-2 dataset recorded at 13 TeV. The result improves the precision of the previous world-leading measurements by the CMS and LHCb collaborations by a factor of two.
Studies of b-hadron lifetimes probe our understanding of the weak interaction. The lifetimes of b-hadrons can be systematically computed within the heavy-quark expansion (HQE) framework, where b-hadron observables are expressed as a perturbative expansion in inverse powers of the b-quark mass.
ATLAS measures the “effective” B0 lifetime, which represents the average decay time incorporating effects from mixing and CP contributions, as τ(B0) = 1.5053 ± 0.0012 (stat.) ± 0.0035 (syst.) ps. The result is consistent with previous measurements published by ATLAS and other experiments, as summarised in figure 1. It also aligns with theoretical predictions from HQE and lattice QCD, as well as with the experimental world average.
The analysis benefitted from the large Run-2 dataset and a refined trigger selection, enabling the collection of an extensive sample of 2.5 million B0→ J/ψK*0 decays. Events with a J/ψ meson decaying into two muons with sufficient transverse momentum are cleanly identified in the ATLAS Muon Spectrometer by the first-level hardware trigger. In the next-level software trigger, exploiting the full detector information, these muons are then combined with two tracks measured by the Inner Detector, ensuring they originate from the same vertex.
The B0-meson lifetime is determined through a two-dimensional unbinned maximum-likelihood fit, utilising the measured B0-candidate mass and decay time, and accounting for both signal and background components. The limited hadronic particle-identification capability of ATLAS requires careful modelling of the significant backgrounds from other processes that produce J/ψ mesons. The sensitivity of the fit is increased by estimating the uncertainty of the decay-time measurement provided by the ATLAS tracking and vertexing algorithms on a per-candidate basis. The resulting lifetime measurement is limited by systematic uncertainties, with the largest contributions arising from the correlation between B0 mass and lifetime, and ambiguities in modelling the mass distribution.
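As a rough illustration of how the per-candidate decay-time uncertainty enters such a fit, the minimal sketch below (a hypothetical toy in Python, not the ATLAS code: the mass dimension, backgrounds and selection effects are all omitted) maximises an unbinned likelihood in which each candidate's decay-time PDF is an exponential convolved with a Gaussian of that candidate's estimated resolution.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erfc

def decay_time_pdf(t, sigma, tau):
    """Exponential decay convolved with a Gaussian resolution of per-candidate width sigma."""
    z = sigma / tau
    return 0.5 / tau * np.exp(0.5 * z**2 - t / tau) * erfc((z - t / sigma) / np.sqrt(2))

def nll(tau, t, sigma):
    """Unbinned negative log-likelihood summed over all candidates."""
    return -np.sum(np.log(decay_time_pdf(t, sigma, tau)))

# Hypothetical toy sample: true lifetime 1.505 ps, per-candidate resolutions of 0.02-0.10 ps
rng = np.random.default_rng(seed=1)
n = 100_000
sigma = rng.uniform(0.02, 0.10, n)                      # estimated decay-time uncertainties (ps)
t = rng.exponential(1.505, n) + rng.normal(0.0, sigma)  # smeared decay times (ps)

fit = minimize_scalar(nll, bounds=(1.0, 2.0), args=(t, sigma), method="bounded")
print(f"fitted lifetime: {fit.x:.4f} ps")
```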
ATLAS combined its measurement with the average decay width (Γs) of the light and heavy Bs-meson mass eigenstates, also measured by ATLAS, to determine the ratio of decay widths as Γd/Γs = 0.9905 ± 0.0022 (stat.) ± 0.0036 (syst.) ± 0.0057 (ext.). The result is consistent with unity and provides a stringent test of QCD predictions, which also support a value near unity.
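Schematically, and neglecting the small corrections that distinguish the effective lifetime from the inverse of the B0 decay width, the combination amounts to
\[
\Gamma_d \simeq \frac{1}{\tau(B^0)}, \qquad \frac{\Gamma_d}{\Gamma_s} \simeq \frac{1}{\tau(B^0)\,\Gamma_s},
\]
so the uncertainties on both inputs propagate directly into the quoted ratio.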
In the autumn of 2023, Wojciech Brylinski was analysing data from the NA61/SHINE collaboration at CERN for his thesis when he noticed an unexpected anomaly – a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. Instead of being produced in roughly equal numbers, charged kaons appeared 18.4% more often than neutral ones. This suggested that the “isospin symmetry” between up (u) and down (d) quarks might be broken by more than can be accounted for by the differences in their electric charges and masses – a discrepancy that existing theoretical models would struggle to explain, since known sources of isospin asymmetry predict deviations of only a few percent.
“When Wojciech got started, we thought it would be a trivial verification of the symmetry,” says Marek Gaździcki of Jan Kochanowski University of Kielce, spokesperson of NA61/SHINE at the time of the discovery. “We expected it to be closely obeyed – though we had previously measured discrepancies at NA49, they had large uncertainties and were not significant.”
Isospin symmetry is one facet of flavour symmetry, whereby the strong interaction treats all quark flavours identically, except for kinematic differences arising from their different masses. Strong interactions should therefore generate nearly equal yields of charged K+ (us̄) and K– (ūs), and neutral K0 (ds̄) and K̄0 (d̄s), given the similar masses of the two lightest quarks. NA61/SHINE’s data contradict the hypothesis of equal yields with 4.7σ significance.
“I see two options to interpret the results,” says Francesco Giacosa, a theoretical physicist at Jan Kochanowski University working with NA61/SHINE. “First, we substantially underestimate the role of electromagnetic interactions in creating quark–antiquark pairs. Second, strong interactions do not obey flavour symmetry – if so, this would falsify QCD.” Isospin is not a symmetry of the electromagnetic interaction as up and down quarks have different electric charges.
While the experiment routinely measures particle yields in nuclear collisions, finding a discrepancy in isospin symmetry was not something researchers were actively looking for. NA61/SHINE’s primary focus is studying the phase diagram of high-energy nuclear collisions using a range of ion beams. This includes looking at the onset of deconfinement, the formation of a quark-gluon plasma fireball, and the search for the hypothesised QCD critical point where the transition between hadronic matter and quark–gluon plasma changes from a smooth crossover to a first-order phase transition. Data is also shared with neutrino and cosmic-ray experiments to help refine their models.
The collaboration is now planning additional studies using different projectiles, targets and collision energies to determine whether this effect is unique to certain heavy-ion collisions or a more general feature of high-energy interactions. They have also put out a call to theorists to help explain what might have caused such an unexpectedly large asymmetry.
“The observation of the rather large isospin violation stands in sharp contrast to its validity in a wide range of physical systems,” says Rob Pisarski, a theoretical physicist from Brookhaven National Laboratory. “Any explanation must be special to heavy-ion systems at moderate energy. NA61/SHINE’s discrepancy is clearly significant, and shows that QCD still has the power to surprise our naive expectations.”
On 13 February 2023, strings of photodetectors anchored to the seabed off the coast of Sicily detected the most energetic neutrino ever observed, smashing previous records. Embargoed until the publication of a paper in Nature last month, the observation, the KM3NeT collaboration believes, may have originated in a novel cosmic accelerator, or may even be the first detection of a “cosmogenic” neutrino.
“This event certainly comes as a surprise,” says KM3NeT spokesperson Paul de Jong (Nikhef). “Our measurement converted into a flux exceeds the limits set by IceCube and the Pierre Auger Observatory. If it is a statistical fluctuation, it would correspond to an upward fluctuation at the 2.2σ level. That is unlikely, but not impossible.” With an estimated energy of a remarkable 220 PeV, the neutrino observed by KM3NeT surpasses IceCube’s record by almost a factor of 30.
The existence of ultra-high-energy cosmic neutrinos has been theorised since the 1960s, when astrophysicists began to conceive ways that extreme astrophysical environments could generate particles with very high energies. At about the same time, Arno Penzias and Robert Wilson discovered “cosmic microwave background” (CMB) photons emitted in the era of recombination, when the primordial plasma cooled down and the universe became electrically neutral. Cosmogenic neutrinos were soon hypothesised to result from ultra-high-energy cosmic rays interacting with the CMB. They are expected to have energies above 100 PeV (10¹⁷ eV); however, their abundance is uncertain, as it depends on cosmic rays, whose sources are still cloaked in intrigue (CERN Courier July/August 2024 p24).
A window to extreme events
But how might they be detected? In this regard, neutrinos present a dichotomy: though outnumbered in the cosmos only by photons, they are notoriously elusive. However, it is precisely their weakly interacting nature that makes them ideal for investigating the most extreme regions of the universe. Cosmic neutrinos travel vast cosmic distances without being scattered or absorbed, providing a direct window into their origins, and enabling scientists to study phenomena such as black-hole jets and neutron-star mergers. Such extreme astrophysical sources test the limits of the Standard Model at energy scales many times higher than is possible in terrestrial particle accelerators.
Because they are so weakly interacting, studying cosmic neutrinos requires giant detectors. Today, three large-scale neutrino telescopes are in operation: IceCube, in Antarctica; KM3NeT, under construction deep in the Mediterranean Sea; and Baikal–GVD, under construction in Lake Baikal in southern Siberia. So far, IceCube, whose construction was completed over 10 years ago, has enabled significant advancements in cosmic-neutrino physics, including the first observation of the Glashow resonance, wherein a 6 PeV electron antineutrino interacts with an electron in the ice sheet to form an on-shell W boson, and the discovery of neutrinos emitted by “active galaxies” powered by a supermassive black hole accreting matter. The previous record-holder for the highest recorded neutrino energy, IceCube has also searched for cosmogenic neutrinos but has not yet observed neutrino candidates above 10 PeV.
Its new northern-hemisphere colleague, KM3NeT, consists of two subdetectors: ORCA, designed to study neutrino properties, and ARCA, which made this detection, designed to detect high-energy cosmic neutrinos and find their astronomical counterparts. Its deep-sea arrays of optical sensors detect Cherenkov light emitted by charged particles created when a neutrino interacts with a quark or electron in the water. At the time of the 2023 event, ARCA comprised 21 vertical detection units, each around 700 m in length. Its location 3.5 km deep under the sea reduces background noise, and its sparse layout over one cubic kilometre optimises the detector for neutrinos of higher energies.
The event that KM3NeT observed in 2023 is thought to be a single muon created by the charged-current interaction of an ultra-high-energy muon neutrino. The muon then crossed horizontally through the entire ARCA detector, emitting Cherenkov light that was picked up by a third of its active sensors. “If it entered the sea as a muon, it would have travelled some 300 km water-equivalent in water or rock, which is impossible,” explains de Jong. “It is most likely the result of a muon neutrino interacting with sea water some distance from the detector.”
The best estimate for the neutrino energy of 220 PeV hides substantial uncertainties, given the unknown interaction point and the need to correct for an undetected hadronic shower. The collaboration expects the true value to lie between 110 and 790 PeV with 68% confidence. “The neutrino energy spectrum is steeply falling, so there is a tug-of-war between two effects,” explains de Jong. “Low-energy neutrinos must give a relatively large fraction of their energy to the muon and interact close to the detector, but they are numerous; high-energy neutrinos can interact further away, and give a smaller fraction of their energy to the muon, but they are rare.”
More data is needed to understand the sources of ultra-high-energy neutrinos such as the one observed by KM3NeT, whose construction has continued in the two years since this remarkable early detection. So far, 33 of 230 ARCA detection units and 24 of 115 ORCA detection units have been installed. Once construction is complete, likely by the end of the decade, KM3NeT will be similar in size to IceCube.
“Once KM3NeT and Baikal–GVD are fully constructed, we will have three large-scale neutrino telescopes of about the same size in operation around the world,” adds Mauricio Bustamante, theoretical astroparticle physicist at the Niels Bohr Institute of the University of Copenhagen. “This expanded network will monitor the full sky with nearly equal sensitivity in any direction, improving the chances of detecting new neutrino sources, including faint ones in new regions of the sky.”
Quarks contribute less than 1% to the mass of protons and neutrons. This provokes an astonishing question: where does the other 99% of the mass of the visible universe come from? The answer lies in the gluon, and how it interacts with itself to bind quarks together inside hadrons.
Much remains to be understood about gluon dynamics. At present, the chief experimental challenge is to observe the onset of gluon saturation – a dynamic equilibrium between gluon splitting and recombination predicted by QCD. The experimental key looks likely to be a rare but intriguing type of LHC interaction known as an ultraperipheral collision (UPC), and the breakthrough may come as soon as the next experimental run.
Gluon saturation is expected to end the rapid growth in gluon density measured at the HERA electron–proton collider at DESY in the 1990s and 2000s. HERA observed this growth as the energy of interactions increased and as the fraction of the proton’s momentum borne by the gluons (Bjorken x) decreased.
So gluons become more numerous in hadrons as the momentum fraction they carry decreases – but to what end?
Nonlinear effects are expected to arise due to processes like gluon recombination, wherein two gluons combine to become one. When gluon recombination becomes a significant factor in QCD dynamics, gluon saturation sets in – an emergent phenomenon whose energy scale is a critical parameter to determine experimentally. At this scale, gluons begin to act like classical fields and gluon density plateaus. A dilute partonic picture transitions to a dense, saturated state. For recombination to take precedence over splitting, gluon momenta must be very small, corresponding to low values of Bjorken x. The saturation scale should also be directly proportional to the colour-charge density, making heavy nuclei like lead ideal for studying nonlinear QCD phenomena.
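A frequently used parametrisation, of the type inspired by fits to HERA data, makes both dependences explicit; the growth exponent and the nuclear enhancement quoted here are indicative values rather than results from the measurements discussed in this article:
\[
Q_s^2(x, A) \;\simeq\; Q_0^2 \left(\frac{x_0}{x}\right)^{\lambda} A^{1/3}, \qquad \lambda \approx 0.3,
\]
where Q_0 and x_0 are reference scales and A is the mass number of the target.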
But despite strong theoretical reasoning and tantalising experimental hints, direct evidence for gluon saturation remains elusive.
Since the conclusion of the HERA programme, the quest to explore gluon saturation has shifted focus to the LHC. But with no point-like electron to probe the hadronic target, LHC physicists had to find a new point-like probe: light itself. UPCs at the LHC exploit the flux of quasi-real high-energy photons generated by ultra-relativistic particles. For heavy ions like lead, this flux of photons is enhanced by the square of the nuclear charge, enabling studies of photon-proton (γp) and photon-nucleus interactions at centre-of-mass energies reaching the TeV scale.
Keeping it clean
What really sets UPCs apart is their clean environment. UPCs occur at large impact parameters well outside the range of the strong nuclear force, allowing the nuclei to remain intact. Unlike hadronic collisions, which can produce thousands of particles, UPCs often involve only a few final-state particles, for example a single J/ψ, providing an ideal laboratory for gluon saturation. J/ψ are produced when a cc̄ pair created by two or more gluons from one nucleus is brought on-shell by interacting with a quasi-real photon from the other nucleus (see “Sensitivity to saturation” figure).
Gluon saturation models predict deviations in the γp → J/ψp cross section from the power-law behaviour observed at HERA. The LHC experiments are placing a significant focus on investigating the energy dependence of this process to identify potential signatures of saturation, with ALICE and LHCb extending studies to higher γp centre-of-mass energies (Wγp) and lower Bjorken x than HERA. The results so far reveal that the cross-section continues to increase with energy, consistent with the power-law trend (see “Approaching the plateau?” figure).
The symmetric nature of pp collisions introduces significant challenges. In pp collisions, either proton can act as the photon source, leading to an intrinsic ambiguity in identifying the photon emitter. In proton–lead (pPb) collisions, the lead nucleus overwhelmingly dominates photon emission, eliminating this ambiguity. This makes pPb collisions an ideal environment for precise studies of the photoproduction of J/ψ by protons.
During LHC Run 1, the ALICE experiment probed Wγp up to 706 GeV in pPb collisions, more than doubling HERA’s maximum reach of 300 GeV. This translates to probing Bjorken-x values as low as 10⁻⁵, significantly beyond the regime explored at HERA. LHCb took a different approach. The collaboration inferred the behaviour of pp collisions at high energies (the “W+ solutions”) by assuming knowledge of their energy dependence at low energies (the “W– solutions”), allowing LHCb to probe Bjorken-x values as small as 10⁻⁶ and Wγp up to 2 TeV.
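For exclusive J/ψ photoproduction, the Bjorken x being probed is tied to the γp centre-of-mass energy by the approximate relation below, which is how the quoted energy reaches translate into x values:
\[
x \;\approx\; \frac{M_{J/\psi}^{2}}{W_{\gamma p}^{2}}: \qquad W_{\gamma p} = 706~\mathrm{GeV} \;\Rightarrow\; x \approx 2\times10^{-5}, \qquad W_{\gamma p} = 2~\mathrm{TeV} \;\Rightarrow\; x \approx 2\times10^{-6}.
\]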
There is not yet any theoretical consensus on whether LHC data align with gluon-saturation predictions, and the measurements remain statistically limited, leaving room for further exploration. Theoretical challenges include incomplete next-to-leading-order calculations and the reliance of some models on fits to HERA data. Progress will depend on robust and model-independent calculations and high-quality UPC data from pPb collisions in LHC Run 3 and Run 4.
Some models predict a slowing increase in the γp → J/ψp cross section with energy at small Bjorken x. If these models are correct, gluon saturation will likely be discovered in LHC Run 4, when pPb data should show clearly whether the cross-section departs from the power law observed so far.
Gluonic hotspots
If a UPC photon interacts with the collective colour field of a nucleus – coherent scattering – it probes its overall distribution of gluons. If a UPC photon interacts with individual nucleons or smaller sub-nucleonic structures – incoherent scattering – it can probe smaller-scale gluon fluctuations.
These fluctuations, known as gluonic hotspots, are theorised to become more numerous and overlap in the regime of gluon saturation (see “Onset of saturation” figure). Now being probed with unprecedented precision at the LHC, they are central to understanding the high-energy regime of QCD.
Gluonic hotspots are used to model the internal transverse structure of colliding protons or nuclei (see “Hotspot snapshots” figure). The saturation scale is inherently impact-parameter dependent, with the colour-charge density highest at the core of the proton or nucleus and diminishing toward the periphery, though subject to fluctuations. Researchers are increasingly interested in exploring how these fluctuations depend on the impact parameter of collisions to better characterise the spatial dynamics of colour charge. Future analyses will pinpoint contributions from localised hotspots where saturation effects are most likely to be observed.
The energy dependence of incoherent or dissociative photoproduction promises a clear signature for gluon saturation, independent of the coherent power-law method described above. As saturation sets in, all gluon configurations in the target converge to similar densities, causing the variance of the gluon field to decrease, and with it the dissociative cross section. Detecting a peak and a decline in the incoherent cross-section as a function of energy would represent a clear signature of gluon saturation.
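In the Good–Walker picture this statement is quantitative: the coherent cross-section probes the average configuration of the target’s gluon field, while the incoherent (dissociative) one probes its event-by-event fluctuations,
\[
\frac{\mathrm{d}\sigma_{\mathrm{coh}}}{\mathrm{d}t} \;\propto\; \bigl|\langle A\rangle\bigr|^{2}, \qquad
\frac{\mathrm{d}\sigma_{\mathrm{incoh}}}{\mathrm{d}t} \;\propto\; \bigl\langle |A|^{2}\bigr\rangle - \bigl|\langle A\rangle\bigr|^{2},
\]
so a saturated target, whose configurations all look alike, yields a shrinking dissociative cross-section.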
The ALICE collaboration has taken significant steps in exploring this quantum terrain, demonstrating the possibility of studying different geometrical configurations of quantum fluctuations in processes where protons or lead nuclei dissociate. The results highlight a striking correlation between momentum transfer, which is inversely proportional to the impact parameter, and the size of the target structure. The observation that sub-nucleonic structures impart the greatest momentum transfer is compelling evidence for gluonic quantum fluctuations at the sub-nucleon level.
Into the shadows
In 1982 the European Muon Collaboration observed an intriguing phenomenon: nuclei appeared to contain fewer gluons than expected based on the contributions from their individual protons and neutrons. This effect, known as nuclear shadowing, was observed in experiments conducted at CERN at moderate values of Bjorken x. It is now known to occur because the interaction of a probe with one gluon reduces the likelihood of the probe interacting with other gluons within the nucleus – the gluons hiding behind them, in their shadow, so to speak. At smaller values of Bjorken x, saturation further suppresses the number of gluons contributing to the interaction.
The relationship between gluon saturation and nuclear shadowing is poorly understood, and separating their effects remains an open challenge. The situation is further complicated by an experimental reliance on lead–lead (PbPb) collisions, which, like pp collisions, suffer from ambiguity in identifying the interacting nucleus, unless the interaction is accompanied by an ejected neutron.
The ALICE, CMS and LHCb experiments have extensively studied nuclear shadowing via the exclusive production of vector mesons such as J/ψ in ultraperipheral PbPb collisions. Results span photon–nucleus collision energies from 10 to 1000 GeV. The onset of nuclear shadowing, or another nonlinear QCD phenomenon like saturation, is clearly visible as a function of energy and Bjorken x (see “Nuclear shadowing” figure).
Multidimensional maps
While both saturation-based and gluon shadowing models describe the data reasonably well at high energies, neither framework captures the observed trends across the entire kinematic range. Future efforts must go beyond energy dependence by being differential in momentum transfer and studying a range of vector mesons with complementary sensitivities to the saturation scale.
Soon to be constructed at Brookhaven National Laboratory, the Electron-Ion Collider (EIC) promises to transform our understanding of gluonic matter. Designed specifically for QCD research, the EIC will probe gluon saturation and shadowing in unprecedented detail, using a broad array of reactions, collision species and energy levels. By providing a multidimensional map of gluonic behaviour, the EIC will address fundamental questions such as the origin of mass and nuclear spin.
Before then, a tenfold increase in PbPb statistics in LHC Runs 3 and 4 will allow a transformative leap in low Bjorken-x physics. Though not originally designed for such studies, the LHC’s unparalleled energy reach and diverse range of colliding systems offer unique opportunities to explore gluon dynamics at the highest energies.
Enhanced capabilities
Surpassing the gains from increased luminosity alone, ALICE’s new triggerless detector readout mode will offer a vast improvement over previous runs, which were constrained by dedicated triggers and bandwidth limitations. Subdetector upgrades will also play an important role. The muon forward tracker has already enhanced ALICE’s capabilities, and the high-granularity forward calorimeter set to be installed in time for Run 4 is specifically designed to improve sensitivity to small Bjorken-x physics (see “Saturation specific” figure).
Ultraperipheral-collision physics at the LHC is far more than a technical exploration of QCD. Gluons govern the structure of all visible matter. Saturation, hotspots and shadowing shed light on the origin of 99% of the mass of the visible universe.
The 14th Higgs Hunting workshop took place from 23 to 25 September 2024 at Orsay’s IJCLab and Paris’s Laboratoire Astroparticule et Cosmologie. More than 100 participants joined lively discussions to decipher the latest developments in theory and results from the ATLAS and CMS experiments.
The portrait of the Higgs boson painted by experimental data is becoming more and more precise. Many new Run 2 and first Run 3 results have developed the picture this year. Highlights included the latest di-Higgs combinations, with cross-section upper limits reaching down to 2.5 times the Standard Model (SM) expectation. A few excesses seen in various analyses were also discussed. The CMS collaboration reported a brand-new excess of top–antitop events near the top–antitop production threshold, with a local significance of more than 5σ above the background described by perturbative quantum chromodynamics (QCD) alone, which could be due to a pseudoscalar top–antitop bound state. A new W-boson mass measurement by the CMS collaboration – a subject deeply connected to electroweak symmetry breaking – was also presented, reporting a value consistent with the SM prediction with a precision of 9.9 MeV (CERN Courier November/December 2024 p7).
Parton shower event generators were in the spotlight. Historical talks by Torbjörn Sjöstrand (Lund University) and Bryan Webber (University of Cambridge) described the evolution of the PYTHIA and HERWIG generators, the crucial role they played in the discovery of the Higgs boson, and the role they now play in the LHC’s physics programme. Differences in the modelling of the parton–shower systematics by the ATLAS and CMS collaborations led to lively discussions!
The vision talk was given by Lance Dixon (SLAC) about the reconstruction of scattering amplitudes directly from their analytic properties, as a complementary approach to Lagrangians and Feynman diagrams. Oliver Bruning (CERN) conveyed the message that the HL-LHC accelerator project is well on track, and Patricia McBride (Fermilab) reached a similar conclusion regarding the ATLAS and CMS Phase-2 upgrades, enjoining new and young people to join the effort to ensure the upgrades are ready and commissioned for the start of Run 4.
The next Higgs Hunting workshop will be held in Orsay and Paris from 15 to 17 July 2025, following EPS-HEP in Marseille from 7 to 11 July.
Thirty years ago, physicists from Harvard University set out to build a portable antiproton trap. They tested it on electrons, transporting them 5000 km from Nebraska to Massachusetts, but it was never used to transport antimatter. Now, a spin-off project of the Baryon Antibaryon Symmetry Experiment (BASE) at CERN has tested their own antiproton trap, this time using protons. The ultimate goal is to deliver antiprotons to labs beyond CERN’s reach.
“For studying the fundamental properties of protons and antiprotons, you need to take extremely precise measurements – as precise as you can possibly make it,” explains principal investigator Christian Smorra. “This level of precision is extremely difficult to achieve in the antimatter factory, and can only be reached when the accelerator is shut down. This is why we need to relocate the measurements – so we can get rid of these problems and measure anytime.”
The team has made considerable strides in miniaturising their apparatus. BASE-STEP is far and away the most compact design for an antiproton trap yet built, measuring just 2 metres in length, 1.58 metres in height and 0.87 metres across. Weighing in at 1 tonne, however, the device still makes transportation a complex operation. On 24 October, 70 protons were introduced into the trap, which was then lifted onto a truck using two overhead cranes. The protons made a round trip through CERN’s main site before returning home to the antimatter factory. All 70 protons were safely transported and the experiment with these particles continued seamlessly, successfully demonstrating the trap’s performance.
Antimatter needs to be handled carefully, to avoid it annihilating with the walls of the trap. This is hard to achieve in the controlled environment of a laboratory, let alone on a moving truck. Just like in the BASE laboratory, BASE–STEP uses a Penning trap with two electrode stacks inside a single solenoid. The magnetic field confines charged particles radially, and the electric fields trap them axially. The first electrode stack collects antiprotons from CERN’s antimatter factory and serves as an “airlock” by protecting antiprotons from annihilation with the molecules of external gases. The second is used for long-term storage. While in transit, non-destructive image-current detection monitors the particles and makes sure they have not hit the walls of the trap.
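For orientation, the textbook relations for an ideal Penning trap (generic formulas, not BASE-STEP’s specific parameters) show how the axial electrostatic confinement and the radial magnetic confinement combine, with the free-space cyclotron frequency ω_c = qB/m recovered through the invariance theorem:
\[
\omega_z = \sqrt{\frac{qV_0}{m d^{2}}}, \qquad
\omega_\pm = \frac{\omega_c}{2} \pm \sqrt{\frac{\omega_c^{2}}{4} - \frac{\omega_z^{2}}{2}}, \qquad
\omega_c^{2} = \omega_+^{2} + \omega_-^{2} + \omega_z^{2},
\]
where V_0 is the trapping potential and d a characteristic trap dimension.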
“We originally wanted a system that you can put in the back of your car,” says Smorra. “Next, we want to try using permanent magnets instead of a superconducting solenoid. This would make the trap even smaller and save CHF 300,000. With this technology, there will be so much more potential for future experiments at CERN and beyond.”
With or without a superconducting magnet, continuous cooling is essential to prevent heat from degrading the trap’s ultra-high vacuum. Penning traps conventionally require two separate cooling systems – one for the trap and one for the superconducting magnet. BASE-STEP combines the cooling systems into one, as the Harvard team proposed in 1993. Ultimately, the transport system will have a cryocooler attached to a mobile power generator, with a liquid-helium buffer tank as a backup. Should the power generator be interrupted, the back-up cooling system provides a grace period of four hours to fix it and save the precious cargo of antiprotons. But such a scenario carries no safety risk given the minuscule amount of antimatter being transported. “The worst that can happen is the antiprotons annihilate, and you have to go back to the antimatter factory to refill the trap,” explains Smorra.
With the proton trial-run a success, the team are confident they will be able to use this apparatus to deliver antiprotons to precision laboratories in Europe. Next summer, BASE-STEP will load up the trap with 1000 antiprotons and hit the road. Their first stop is scheduled to be Heinrich Heine University in Germany.
“We can use the same apparatus for the antiproton transport,” says Smorra. “All we need to do is switch the polarity of the electrodes.”
Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.
The masses of sparticles have very important implications both for the physics of our universe and for how they could potentially be produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted into the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to decay in cascades through lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and the LSPs is small. This means the low production rates of massive sparticles are not accompanied by high-momentum decay products in the detector: most of their mass escapes in the form of invisible particles, significantly complicating observation.
This new CMS result turns this difficulty on its head, using a kinematic observable, R_ISR, which is directly sensitive to the mass of the LSPs rather than to the mass difference between the parent sparticles and the LSPs. The result is even better discrimination between SUSY signals and SM backgrounds when sparticle spectra are more compressed.
This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum-kick, with this fraction telling us about the LSP masses through the R_ISR observable.
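The intuition can be made explicit. In a schematic form of the observable (the full analysis uses the complete reconstructed event), R_ISR compares the missing transverse momentum projected onto the ISR direction with the ISR transverse momentum, and in the fully compressed limit it approaches the LSP-to-parent mass ratio:
\[
R_{\mathrm{ISR}} \;\equiv\; \frac{\bigl|\vec{p}_{\mathrm{T}}^{\;\mathrm{miss}} \cdot \hat{p}_{\mathrm{T}}^{\;\mathrm{ISR}}\bigr|}{p_{\mathrm{T}}^{\mathrm{ISR}}} \;\longrightarrow\; \frac{m_{\mathrm{LSP}}}{m_{\mathrm{parent}}}.
\]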
Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with R_ISR and other selected discriminating variables, are used to categorise recorded events, and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of R_ISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in R_ISR, spread over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.
In the absence of significant deviations of the observed data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of top squarks, electroweakinos and sleptons. As can be seen in the results for top squarks (figure 1), the analysis achieves excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space in which it could be hiding.
Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations have been developing a number of science missions ranging from optical astronomy to the detection of cosmic rays with PeV energies.
For optical astronomy, the space station will be accompanied by the Xuntian telescope, whose name translates as “survey the heavens”. Xuntian is currently planned for launch in mid-2025 to fly alongside Tiangong, thereby allowing for regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing the observation of many objects at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies, where large statistics for astronomical objects are typically required to study their evolution.
Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime, which contains emission lines from neutral hydrogen and other elements. While difficult to measure, this emission allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons expected in the standard “ΛCDM” cosmological model have been accounted for.
At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV energy range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material is highly dependent on its temperature, allowing the detection of minuscule temperature increases resulting from X-rays being absorbed by the material. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been emitted by the Milky Way during an active stage of its central black hole. Its high-energy resolution will allow Doppler shifts of the order of several eV to be measured, requiring the TES detectors to operate at 50 mK. Achieving such temperatures demands a cooling system of 640 W – a power level that is difficult to achieve on a satellite, but relatively easy to acquire on a space station. As such, DIXE will be one of the first detectors using this new technology when it launches in 2025, leading the way for missions such as the European ATHENA mission that plans to use it starting in 2037.
POLAR-2 was accepted as an international payload on the China space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor, Tiangong-2. Like its earlier incarnation, POLAR-2 measures the polarisation of high-energy X-rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in the sky, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has resulted in the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. Also now foreseen for this second payload is an additional wide-field-of-view X-ray polarimeter. The international team developing the three instruments, which are scheduled to be launched in 2027, is led by the Institute of High Energy Physics in Beijing.
For studying the universe using even higher-energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths deep and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers which, in combination with the deep calorimeter, will provide a better angular resolution and a geometrical acceptance 30 times larger than that of the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays on the European side regarding participation, combined with a required launch date of 2027, mean that it is currently foreseen to be a fully Chinese mission.
Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.
Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV from Run 3 of the Large Hadron Collider.
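Quantitatively, Dirac’s condition relates any electric charge e to any magnetic charge g (written here in Gaussian units), and its smallest allowed value defines the Dirac charge referred to below:
\[
e\,g = \frac{n\,\hbar c}{2}, \;\; n \in \mathbb{Z} \quad\Longrightarrow\quad g_{\mathrm{D}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\,e.
\]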
The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions by the strong electromagnetic fields (∝ Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes like monopole production to dominate the visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, which would leave a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).
The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the strong magnetic fields of ultraperipheral collisions. This approach offers a more robust theoretical framework for the search.
The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.
Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.
No significant monopole signal is observed beyond the expected background, with the latter being estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search will open new possibilities to study other highly ionising particles that may emerge from beyond-Standard Model physics.
In high-energy collisions at the LHC, prompt photons are those that do not originate from particle decays and are instead directly produced by the hard scattering of quarks and gluons (partons). Due to their early production, they provide a clean method to probe the partons inside the colliding nucleons, and in particular the fraction of the momentum of the nucleon carried by each parton (Bjorken x). The distribution of each parton in Bjorken x is known as its parton distribution function (PDF).
Theoretical models of particle production rely on the precise knowledge of PDFs, which are derived from vast amounts of experimental data. The high centre-of-mass energies (√s) at the LHC probe very small values of the momentum fraction, Bjorken x. At “midrapidity”, when a parton scatters with a large angle with respect to the beam axis, and a prompt photon is produced in the final state, a useful approximation to Bjorken x is provided by the dimensionless variable xT = 2pT/√s, where pT is the transverse momentum of the prompt photon.
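As an illustrative number (using the √s = 13 TeV collision energy of the measurement discussed below), a prompt photon with pT = 10 GeV/c corresponds to
\[
x_{\mathrm{T}} = \frac{2\,p_{\mathrm{T}}}{\sqrt{s}} = \frac{2 \times 10~\mathrm{GeV}}{13\,000~\mathrm{GeV}} \;\approx\; 1.5\times10^{-3}.
\]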
Prompt photons can also be produced by next-to-leading order processes such as parton fragmentation or bremsstrahlung. A clean separation of the different prompt photon sources is difficult experimentally, but fragmentation can be suppressed by selecting “isolated photons”. For a photon to be considered isolated, the sum of the transverse energies or transverse momenta of the particles produced in a cone around the photon must be smaller than some threshold – a selection that can be done both in the experimental measurement and theoretical calculations. An isolation requirement also helps to reduce the background of decay photons, since hadrons that can decay to photons are often produced in jet fragmentation.
The ALICE collaboration now reports the measurement of the differential cross-section for isolated photons in proton–proton collisions at √s = 13 TeV at midrapidity. The photon measurement is performed with the electromagnetic calorimeter, and isolated photons are selected by combining this information with data from the central inner tracking system and time-projection chamber, requiring that the summed pT of the charged particles in a cone of angular radius 0.4 radians centred on the photon candidate be smaller than 1.5 GeV/c. The isolated-photon cross-sections are obtained within the transverse-momentum range from 7 to 200 GeV/c, corresponding to 1.1 × 10⁻³ < xT < 30.8 × 10⁻³.
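As a simplified sketch of this isolation requirement (hypothetical inputs and helper names, written in Python; not the ALICE reconstruction code), the charged-particle pT is summed inside a cone of radius 0.4 around the photon candidate and compared with the 1.5 GeV/c threshold.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two directions in (eta, phi), with phi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(photon, tracks, cone=0.4, threshold=1.5):
    """Photon is isolated if the summed track pT (GeV/c) inside the cone stays below the threshold."""
    cone_pt = sum(trk["pt"] for trk in tracks
                  if delta_r(photon["eta"], photon["phi"], trk["eta"], trk["phi"]) < cone)
    return cone_pt < threshold

# Hypothetical example: one soft track inside the cone, one hard track well outside it
photon = {"eta": 0.1, "phi": 1.0}
tracks = [{"pt": 0.8, "eta": 0.2, "phi": 1.1},
          {"pt": 5.0, "eta": 0.1, "phi": 2.9}]
print(is_isolated(photon, tracks))  # True: only 0.8 GeV/c falls inside the cone
```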
Figure 1 shows the new ALICE results alongside those from ATLAS, CMS and prior measurements in proton–proton and proton–antiproton collisions at lower values of √s. The figure spans more than 15 orders of magnitude on the y-axis, representing the cross-section, over a wide range of xT. The present measurement probes the smallest Bjorken x with isolated photons at midrapidity to date. The experimental data points show agreement between all the measurements when scaled with the collision energy to the power n = 4.5. Such a scaling is designed to cancel the predicted 1/pTⁿ dependence of partonic 2 → 2 scattering cross-sections in perturbative QCD and reveal insights into the gluon PDF (see “The other 99%”).
This measurement will help to constrain the gluon PDF and will play a crucial role in exploring medium-induced modifications of hard probes in nucleus–nucleus collisions.