The main goal of this annual workshop is to review the status of the PBC studies continued or launched after the update of the European Strategy for Particle Physics, with a focus on the programmes under consideration for the start of operation after the next LHC long shutdown, LS3. The workshop is also open to presentations of new ideas of potential interest for CERN, after submission following the guidelines given on the PBC home page.
We are pleased to announce the Higgs 2022 Conference, which will take place on site.
The conference will focus on new experimental and theoretical results on the Higgs boson.
The latest measurements of Higgs-boson properties and recent theoretical developments in the Higgs sector, both within the Standard Model and in physics beyond the Standard Model, will be presented and discussed at the conference.
Contributions will be organised in several parallel and plenary sessions.
During the conference, the tenth anniversary of the Higgs-boson discovery will be celebrated with social events open to the general public.
The conference is planned to be held in a hybrid format with substantial in-person participation, in compliance with the relevant COVID-19 regulations at the time of the meeting.
Complementing previous results by Belle, BaBar and LHCb, the LHCb collaboration has reported a new test of lepton-flavour universality in b → cℓνℓ decays. At a seminar at CERN on Tuesday 18 October, the collaboration announced the first simultaneous measurement at a hadron collider of the ratios of branching fractions R(D*) = BR(B→D*τ–ντ)/BR(B→D*μ–νμ) and R(D) = BR(B–→D0τ–ντ)/BR(B–→D0μ–νμ). Based on Run 1 data recorded at centre-of-mass energies of 7 and 8 TeV, they found R(D*) = 0.281 ± 0.018 (stat.) ± 0.024 (syst.) and R(D) = 0.441 ± 0.060 (stat.) ± 0.066 (syst.). The values, which are consistent with the Standard Model (SM) expectation within 1.9 σ, add further information to the pattern of “flavour anomalies” reported in recent years.
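As a quick cross-check of the quoted numbers, the statistical and systematic uncertainties can be combined in quadrature (a common simplification; the collaboration's full result treats the correlation between the two ratios properly):

```python
from math import hypot

# Combine statistical and systematic uncertainties in quadrature
# for the two LHCb ratios quoted above.
rdstar, rdstar_unc = 0.281, hypot(0.018, 0.024)  # R(D*) total unc. → 0.030
rd, rd_unc = 0.441, hypot(0.060, 0.066)          # R(D)  total unc. → 0.089

print(f"R(D*) = {rdstar:.3f} ± {rdstar_unc:.3f} (total)")
print(f"R(D)  = {rd:.3f} ± {rd_unc:.3f} (total)")
```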
Lepton-flavour universality holds that, aside from effects due to their different masses, all leptons couple identically to the electroweak interaction. As such, the rates of B-meson decays to different lepton flavours are expected to be the same once known mass effects are accounted for. Global fits of R(D(*)) measurements, which probe b → c quark transitions, show that the measured ratios tend to be larger than the SM predictions, with a combined tension of about 3.2 σ. The ratios of muonic to electronic B-meson decays, R(K), which probe b → s quark transitions, are also under scrutiny as tests of this basic principle of the SM.
To reconstruct b → cτ–ντ decays, LHCb used the leptonic τ–→μ–νν decay to identify the visible decay products D(*) and µ–. “We use the measurement of the B flight direction to constrain the kinematics of the unreconstructed particles, and with an approximation reconstruct the rest-frame kinematic quantities,” says LHCb’s Greg Ciezarek, who presented the results. “The challenge is then to understand the modelling of the various background processes which also produce the same visible decay products but have additional missing particles and different distributions in the rest-frame quantities. We use control samples selected based on these missing particles to constrain the modelling of background processes and validate our level of understanding.”
The respective SM predictions for the ratios R(D) and R(D*) are very clean because uncertainties induced by the CKM-matrix element Vcb and the hadronic matrix elements largely cancel. The new values of R(D) and R(D*) are compatible both with the current world average compiled by the HFLAV collaboration and with the SM prediction (at the level of 2.2σ and 2.3σ, respectively). The combined LHCb result provides improved sensitivity to a possible lepton-universality-breaking process.
“Rare B-meson decays and ratios such as R(K) and R(D(*)) are powerful probes to search for beyond the Standard Model particles, which are not directly detectable at the LHC,” says Ben Allanach, theorist at the University of Cambridge.
Marking 10 years since the discovery of the Higgs boson, a two-day workshop held at the University of Birmingham on 30 June and 1 July brought together ATLAS and CMS physicists who were involved in the discovery and subsequent characterisation of the Higgs boson. Around 75 physicists, in addition to members of the public who attended a colloquium, celebrated this momentous discovery together with PhD students, early-career researchers and members of IOP’s history of physics group. In an informal atmosphere, participants recalled and gave insights into what had taken place, spiced with personal stories that placed the human dimension of science under the spotlight.
The story of the Higgs-boson search was traced from the times of LEP and the Tevatron. Participants were reminded of the uncertainty and excitement during the final days of LEP: the hints of an excess of events at around 115 GeV and the ensuing controversy surrounding the decision to either stop the machine or extend its data-taking further. For the Tevatron, the focus was more on the relentless race against time until the LHC could provide an overwhelming dataset. It was considered plausible that the Tevatron could observe the Higgs boson first, leading CERN to delay a scheduled break in LHC data-taking following its 2011 run.
The timeline of the design, construction and commissioning of the LHC experiments was presented, with a particular focus on the excellent performance achieved by ATLAS and CMS since the beginning of Run 1. The parallel role of theory and the collaboration among theorists and experimentalists was also discussed. Speakers from the experiments involved in the Higgs-discovery analyses provided personal perspectives on the events leading up to the 4 July 2012 announcement.
With his unique perspective, former CERN Director-General Chris Llewellyn-Smith described the early discussions and approval of the LHC project during a well-attended public symposium. He recalled his discussions with former UK prime minister Margaret Thatcher, the role of the ill-fated US Superconducting Super Collider and the “byzantine politics” that led to the LHC’s approval in 1994. Most importantly, he emphasised that the LHC was not inevitable: scientists had to fight to secure funding and bring it to reality. Former ATLAS spokesperson David Charlton reflected on the preparation of the experiments, the LHC startup in 2008 and subsequent magnet problems that delayed the physics runs until 2010, noting the excellent performance of the machine and detectors that enabled the discovery to be made much earlier than expected.
The workshop would not have been complete without a discussion on what happened after the discovery. Precision measurements of the Higgs-boson couplings, observation of new decay and production modes, as well as the search for Higgs-boson pair-production were described, always with a focus on the challenges that needed to be overcome. The workshop closed with a look to the future, both in terms of experimental prospects of the High-Luminosity LHC and theory.
“I am an opportunist, in one way an extremely successful one. Weinberg and I were working along similar lines with similar attitudes. I wish you well for your celebrations and regret that I can’t be with you in person.”
Peter Higgs, winner of the 2013 Nobel Prize in Physics.
“It was an overwhelming time for us. It took time to understand what had happened. I especially remember the excitement among the young researchers.”
Rolf Heuer, former CERN Director-General.
“It took 14 years to build the LHC. At one point we had 1000 dipoles, each costing a million Swiss francs, stored on the surface, throughout rain and snow.”
Lyn Evans, former LHC project director.
“The first two years of measuring Standard Model physics were essential to give us confidence in the readiness of the two experiments to search for new physics.”
Peter Jenni, founding ATLAS spokesperson.
“A key question for CMS was: can tracking be done in a congested environment with just a few points, albeit precise ones? It was a huge achievement requiring more than 200 m² of active silicon.”
Michel Della Negra, founding CMS spokesperson.
“I remember on 4 July 2012 a magnificent presentation of a historical discovery. I would also like to celebrate the life of Robert Brout, a great physicist and important man.”
François Englert, winner of the 2013 Nobel Prize in Physics.
“The gist of the theory behind the Higgs boson would easily compete with the most far-fetched conspiracy theory, yet it seems nature chose it.”
Eliezer Rabinovici, president of the CERN Council.
“The structure of the vacuum is intimately connected to how the Higgs boson interacts with itself. To probe this phenomenon at the LHC we can study the production of Higgs-boson pairs.”
André David, CMS experimentalist (CERN).
“Collaboration between experiment and theory is even more necessary now to find any hints for BSM physics.”
“Precision Higgs physics is a telescope to high-scale physics, so I’m looking forward to the next 10 years of discovery.”
Sally Dawson, theorist (BNL).
“Theory accuracy will be even more important to make the best of the HL-LHC data, especially in the case in which no evidence of new physics will show up… This is also crucial for the Monte Carlo tools used in the analyses.”
Massimiliano Grazzini, theorist (University of Zurich).
“After 10 years we’ve measured the five main production and five major decay mechanisms of the Higgs boson.”
Kerstin Tackmann, ATLAS experimentalist (DESY).
“What we know so far – Mass: known to 0.11%. Width: closing in on the SM value of 3.2 +2.5 −1.7 MeV (plus evidence of off-shell Higgs production). Spin 0: spin 1 and 2 excluded at 99.9% CL. CP structure: in accordance with the SM CP-even hypothesis.”
Marco Delmastro, ATLAS experimentalist (CNRS/IN2P3 LAPP).
“We have learned much about the 125 GeV Higgs boson since its discovery. The LHC Run 3 starts tomorrow: ready for the next decade of Higgs-boson exploration!”
Adinda de Wit, CMS experimentalist (University of Zurich).
“The Higgs boson is linked to profound structural problems in the Standard Model. It is therefore an extraordinary discovery tool that calls for a broad experimental programme at the LHC and beyond.”
Fabiola Gianotti, CERN Director-General.
“Elusive non-resonant pairs of Higgs bosons are the prime experimental signature of the Higgs-boson self-coupling. We are all eager to analyse Run 3 data to further probe HH events!”
Arnaud Ferrari, ATLAS experimentalist (Uppsala University).
“New physics can affect differently the different fermion generations. We have to precisely measure the couplings if we want to understand the Higgs boson’s nature.”
Andrea Marini, CMS experimentalist (CERN).
“From its potential invisible, forbidden, and exotic decays to the possible existence of scalar siblings, the Higgs boson plays a fundamental role in searches for physics beyond the Standard Model.”
“An incredible collaborative effort has brought us this far. But there is much more to come, especially during Long Shutdown 3, with HL-LHC paving the way from Run 3 to ultimate performance. Interesting times ahead to say the least!”
Mike Lamont, CERN director for accelerators and technology.
“The hard work and creativity in reconstruction and analysis techniques are already evident since the last round of projections. Imagine what we can do in the next 20 years!”
Elizabeth Brost, ATLAS experimentalist (BNL).
“The Higgs is the first really new elementary particle we’ve seen. We need to study it to death!”
Understanding hadronic final states is key to a successful physics programme at the LHC. The quarks and gluons flying out from proton–proton collisions instantly hadronise into sprays of particles called jets. Each jet has a unique composition, which makes flavour identification and energy calibration challenging. While the performance of jet-classification schemes has improved with the fast-paced evolution of machine-learning algorithms, another, more subtle, revolution is ongoing in precision jet-energy corrections.
CMS physicists have taken advantage of the data collected during LHC Run 2 to observe jets in many different final states and systematically understand their differences in detail. The main differences originate from the varying fractions of gluons making up the jets and the different amounts of final-state radiation (FSR) in the events, causing an imbalance between the leading jet and its companions. The gluon uncertainty was constrained by splitting the Z+jet sample by flavour, using a combination of quark–gluon likelihood and b/c-quark tagging, while FSR was constrained by combining the missing-ET projection fraction (MPF) and direct balance (DB) methods. The MPF and DB methods have been well established at the LHC since Run 1: while in the DB method the jet response is evaluated by comparing the reconstructed jet momentum directly to the momentum of the reference object, the MPF method considers the response of the whole hadronic activity in the event recoiling against the reference object. Figure 1 shows the agreement achieved with the Run 2 data after carefully accounting for these biases for samples with different jet-flavour compositions.
Precise jet-energy corrections are critical for some of the recent high-profile measurements by CMS, such as an intriguing double dijet excess at high mass, a recent exceptionally accurate top-quark mass measurement, and the most precise extraction of the strong coupling constant at hadron colliders using inclusive jets.
The expected increase of pileup in Run 3 and at the High-Luminosity LHC will pose additional challenges in the derivation of precise jet-energy corrections, but CMS physicists are well prepared: CMS will adopt the next-generation particle-flow algorithm (PUPPI, for PileUp Per Particle Id) as the default reconstruction algorithm to tackle pileup effects within jets at the single-particle level.
Jets can be used to address some of the most intriguing puzzles of the Standard Model (SM), in particular: is the SM vacuum metastable, or do some new particles and fields stabilise it? The top-quark mass and strong-coupling-constant measurements address the former question via their interplay with the Higgs-boson mass, while dijet-resonance searches tackle the latter.
Underlying these studies are the jet-energy corrections and the awareness that each jet flavour is unique.
Photon-induced reactions are regularly studied in ultra-peripheral nucleus–nucleus collisions (UPCs) at the LHC. In these collisions, the accelerated ions, which carry a strong electromagnetic field, pass by each other with an impact parameter (the distance between their centres) larger than the sum of their nuclear radii. Hadronic interactions between nuclei are therefore strongly suppressed. At LHC energies, the photoproduction of charmonium (a bound state of charm and anti-charm quarks) in UPCs is sensitive to the gluon distributions in nuclei over a wide range of low Bjorken-x values. In particular, in coherent interactions, the photon emitted by one of the nuclei couples to the other nucleus as a whole, leaving it intact, while a J/ψ meson is emitted with a characteristic low transverse momentum (pT) of about 60 MeV, roughly of the order of the inverse of the nuclear radius.
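The ~60 MeV scale can be compared with a simple uncertainty-principle estimate, pT ~ ħc/R_A (a back-of-the-envelope sketch; the radius parametrisation R_A ≈ 1.2 A^(1/3) fm is an assumed textbook value):

```python
# Order-of-magnitude estimate of the coherent-photoproduction pT scale.
HBARC_MEV_FM = 197.327            # ħc in MeV·fm
A_PB = 208                        # mass number of lead

r_pb = 1.2 * A_PB ** (1 / 3)      # assumed nuclear radius, ≈ 7.1 fm
pt_scale = HBARC_MEV_FM / r_pb    # ≈ 28 MeV, same order as the measured ~60 MeV
print(f"R_Pb ≈ {r_pb:.1f} fm → pT ~ ħc/R ≈ {pt_scale:.0f} MeV")
```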
Surprisingly, in 2016 ALICE measured an unexpectedly large yield of J/ψ mesons at very low pT in peripheral, not ultra-peripheral, PbPb collisions at a centre-of-mass energy of 2.76 TeV. The excess with respect to expectations from hadronic J/ψ-meson production was interpreted as the first indication of coherent photoproduction of J/ψ mesons in PbPb collisions with nuclear overlap. This effect comes with many theoretical challenges. For instance, how can the coherence condition survive in the photon–nucleus interaction if the latter is broken up during the hadronic collision? Do only the non-interacting spectator nucleons participate in the coherent process? Can the photoproduced J/ψ meson be affected by interactions with the formed and fast-expanding quark–gluon plasma (QGP) created in nucleus–nucleus collisions? Recent theoretical developments on the subject are based on calculations for UPCs in which the J/ψ meson photoproduction-cross section is computed as the product of an effective photon flux and an effective photonuclear cross section for the process γPb → J/ψPb, with both terms usually modified to account for the nuclear overlap.
The ALICE experiment has recently measured the coherently photoproduced J/ψ mesons in PbPb collisions at a centre-of-mass energy of 5.02 TeV, using the full Run 2 data sample. The measurement is performed at forward rapidity (2.5 < y < 4) in the dimuon decay channel. For the first time, a significant (> 5σ) coherently photoproduced J/ψ-meson signal is observed even in semi-central PbPb collisions. In figure 1, the coherently photoproduced J/ψ cross section is shown as a function of the mean number of nucleons participating in the hadronic interaction (<Npart>). In this representation, the most central head-on PbPb collisions correspond to large <Npart> values close to 400. The photoproduced J/ψ cross section does not exhibit a strong dependence on collision centrality (i.e. on the amount of nuclear overlap) within the current experimental precision. A UPC-like model (the red line in figure 1) reproduces the semi-central to central PbPb data if a modified photon flux and photonuclear cross section to account for the nuclear overlap are included.
To clarify the theory behind this experimental observation of coherent J/ψ photoproduction, the upcoming Run 3 data will be crucial in several aspects. ALICE expects to collect a much larger data sample, enabling a statistically significant signal to be measured in the most central collisions. At midrapidity, the larger data sample and the excellent momentum resolution of the detector will allow for pT-differential cross-section measurements, which will shed light on the role of spectator nucleons in the coherence condition. By extending the coherently photoproduced J/ψ cross-section measurement towards the most central PbPb collisions, ALICE will study the possible interaction of these charmonia with the QGP. Photoproduced J/ψ mesons could therefore turn out to be a completely new probe of charmonium dissociation in the QGP.
The top quark – the heaviest known elementary particle – differs from the other quarks by its much larger mass and a lifetime that is shorter than the time needed to form hadronic bound states. Within the Standard Model (SM), the top quark decays almost exclusively into a W boson and a b quark, and the dominant production mechanism in proton–proton (pp) collisions is top-quark pair (tt) production.
Measurements of tt production at various pp centre-of-mass energies at the LHC probe different values of Bjorken-x, the fraction of the proton’s longitudinal momentum carried by the parton participating in the initial interaction. In particular, the fraction of tt events produced through quark–antiquark annihilation increases from 11% at 13 TeV to 25% at 5.02 TeV. A measurement of the tt production cross-section thus places additional constraints on the proton’s parton distribution functions (PDFs), which describe the probabilities of finding quarks and gluons at particular x values.
In November 2017, the ATLAS experiment recorded a week of pp-collision data at a centre-of-mass energy of 5.02 TeV. Although the main motivation of this 5.02 TeV dataset is to provide a proton reference sample for the ATLAS heavy-ion physics programme, it also provides a unique opportunity to study top-quark production at a previously unexplored energy in ATLAS. The majority of the data was recorded with a mean number of two inelastic pp collisions per bunch crossing, compared to roughly 35 collisions during the 13 TeV runs. Owing to the much lower pileup, the ATLAS calorimeter cluster noise thresholds were adjusted accordingly and a dedicated jet-energy-scale calibration was performed.
Now, the ATLAS collaboration has released its measurement of the tt production cross-section at 5.02 TeV in two final states. Events in the dilepton channel were selected by requiring opposite-charge pairs of leptons, resulting in a small, high-purity sample. Events in the single-lepton final states were separated into subsamples with different signal-to-background ratios, and a multivariate technique was used to further separate signal from background events. The two measurements were combined, taking the correlated systematic uncertainties into account.
The measured cross section in the dilepton channel (65.7 ± 4.9 pb) corresponds to a relative uncertainty of 7.5%, of which 6.8% is statistical. The single-lepton measurement (68.2 ± 3.1 pb), on the other hand, has a 4.5% uncertainty that is primarily systematic. This measurement is slightly more precise than the single-lepton measurement at 13 TeV, despite the much smaller (almost a factor of 500!) integrated luminosity. The combination of the two measurements gives 67.5 ± 2.6 pb, corresponding to an uncertainty of just 3.9%.
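The quoted relative uncertainties follow directly from the numbers above:

```python
# Reproduce the quoted relative uncertainties of the tt cross-section results.
measurements = {
    "dilepton":      (65.7, 4.9),  # cross section and total uncertainty, pb
    "single-lepton": (68.2, 3.1),
    "combined":      (67.5, 2.6),
}
for name, (xsec, unc) in measurements.items():
    print(f"{name}: {xsec} ± {unc} pb → {100 * unc / xsec:.1f}%")
```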
The new ATLAS result is consistent with the SM prediction and with a measurement by the CMS collaboration, though with a total uncertainty reduced by almost a factor of two. It thus improves our understanding of the top-quark production at different centre-of-mass energies and allows an important test of the compatibility with predictions from different PDF sets (see figure 1). The result also provides a new measurement of high-x proton structure and shows a 5% reduction in the gluon PDF uncertainty in the region around x = 0.1, which is relevant for Higgs-boson production. Moreover, the measurement paves the way for the study of top-quark production in collisions involving heavy ions.
For the past 60 years, the second has been defined in terms of atomic transitions between two hyperfine states of caesium-133. Such transitions, which correspond to radiation in the microwave regime, enable state-of-the-art atomic clocks to keep time at the level of one second in more than 300 million years. A newer breed of optical clocks developed since the 2000s exploits frequencies that are about 10⁵ times higher. While still under development, optical clocks based on aluminium ions are already reaching accuracies of about one second in 33 billion years, corresponding to a relative systematic frequency uncertainty below 1 × 10⁻¹⁸.
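The quoted accuracies translate directly into fractional frequency uncertainties (a rough conversion, assuming an average year of 365.25 days):

```python
# Convert "one second lost in N years" into a fractional frequency uncertainty.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ≈ 3.156e7 s

def fractional_uncertainty(years):
    return 1.0 / (years * SECONDS_PER_YEAR)

print(f"Cs clock  (3e8 yr):    {fractional_uncertainty(300e6):.1e}")
print(f"Al+ clock (3.3e10 yr): {fractional_uncertainty(33e9):.1e}")  # below 1e-18
```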
To further reduce these uncertainties, in 2003 Ekkehard Peik and Christian Tamm of Physikalisch-Technische Bundesanstalt in Germany proposed the use of a nuclear instead of atomic transition for time measurements. Due to the small nuclear moments (corresponding to the vastly different dimensions of atoms and nuclei), and thus the very weak coupling to perturbing electromagnetic fields, a “nuclear clock” is less vulnerable to external perturbations. In addition to enabling a more accurate timepiece, this offers the potential for nuclear clocks to be used as quantum sensors to test fundamental physics.
A clock typically consists of an oscillator and a frequency-counting device. In a nuclear clock (see “Nuclear clock schematic” figure), the oscillator is provided by the frequency of a transition between two nuclear states (in contrast to a transition between two states in the electronic shell in the case of an atomic clock). For the frequency-counting device, a narrow-band laser resonantly excites the nuclear-clock transition, while the corresponding oscillations of the laser light are counted using a frequency comb. This device (the invention of which was recognised by the 2005 Nobel Prize in Physics) is a laser source whose spectrum consists of a series of discrete, equally spaced frequency lines. After a certain number of oscillations, given by the frequency of the nuclear transition, one second has elapsed.
The need for direct laser excitation strongly constrains applicable nuclear-clock transitions: their energy has to be low enough to be accessible with existing laser technology, while simultaneously exhibiting a narrow linewidth. As the linewidth is determined by the lifetime of the excited nuclear state, the latter has to be long enough to allow for highly stable clock operation. So far, only the metastable (isomeric) first excited state of 229Th, denoted 229mTh, qualifies as a candidate for a nuclear clock, due to its exceptionally low excitation energy.
The existence of the isomeric state was conjectured in 1976 from gamma-ray spectroscopy of 229Th, and its excitation energy has only recently been determined to be 8.19 ± 0.12 eV (corresponding to a vacuum-ultraviolet wavelength of 151.4 ± 2.2 nm). Not only is it the lowest nuclear excitation among the roughly 184,000 excited states of the 3300 or so known nuclides, but its expected lifetime is of the order of 1000 s, resulting in an extremely narrow relative linewidth (ΔE/E ≈ 10⁻²⁰) for its ground-state transition (see “Unique transition” figure). Besides high resilience against external perturbations, this represents another attractive property for a thorium nuclear clock.
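The quoted relative linewidth follows from the energy–time uncertainty relation, Γ ≈ ħ/τ (a sketch using the ~1000 s lifetime estimate given above):

```python
# Natural linewidth of the 229mTh clock transition from Γ ≈ ħ/τ.
HBAR_EV_S = 6.582e-16           # ħ in eV·s
tau = 1000.0                    # expected isomer lifetime, s
energy = 8.19                   # transition energy, eV

gamma = HBAR_EV_S / tau         # ≈ 6.6e-19 eV
rel_linewidth = gamma / energy  # ≈ 8e-20, i.e. of order 1e-20
print(f"Γ ≈ {gamma:.1e} eV, ΔE/E ≈ {rel_linewidth:.0e}")
```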
Networks of ultra-precise synchronised nuclear clocks could enable a search for ultralight dark matter
Achieving optical control of the nuclear transition via direct laser excitation would open a broad range of applications. A nuclear clock’s sensitivity to the gravitational redshift, which causes a clock’s relative frequency to change depending on its absolute height, could enable more accurate global positioning systems and high-sensitivity detection of fluctuations of Earth’s gravitational potential induced by seismic or tectonic activities. Furthermore, while the few-eV thorium transition emerges from a fortunate near-degeneracy of the two lowest nuclear-energy levels in 229Th, the Coulomb and strong-force contributions to these energies differ at the MeV level. This makes the nuclear-level structure of 229Th uniquely sensitive to variations of fundamental constants and to ultralight dark matter. Many theories predict variations of the fine-structure constant, for example, but at tiny yearly rates. The high sensitivity provided by the thorium isomer could allow such variations to be identified. Moreover, networks of ultra-precise synchronised clocks could enable a search for ultralight dark-matter signals.
Two different approaches have been proposed to realise a nuclear clock: one based on trapped ions and another using doped solid-state crystals. The first approach starts from individually trapped Th ions, which promises an unprecedented suppression of systematic clock-frequency shifts and leads to an expected relative clock accuracy of about 1 × 10⁻¹⁹. The other approach relies on embedding 229Th atoms in a vacuum-ultraviolet (VUV) transparent crystal such as CaF2. This has the advantage of a large concentration (> 10¹⁵ cm⁻³) of Th nuclei in the crystal, leading to a considerably higher signal-to-noise ratio and thus greater clock stability.
A precise characterisation of the thorium isomer’s properties is a prerequisite for any kind of nuclear clock. In 2016 the present authors and colleagues made the first direct identification of 229mTh by detecting electrons emitted in its dominant decay mode, internal conversion (IC), whereby a nuclear excited state decays by the direct emission of one of its atomic electrons (see “Isomeric signal” figure). This brought the long-term objective of a nuclear clock into the focus of international research.
Currently, experimental access to 229mTh is possible only via radioactive decays of heavier isotopes or by X-ray pumping from higher-lying rotational nuclear levels, as shown by Takahiko Masuda and co-workers in 2019. The former, based on the alpha decay of 233U (2% branching ratio), is the most commonly used approach. Very recently, however, a promising new experiment exploiting the β– decay of 229Ac, led by a team at KU Leuven, was performed at CERN’s ISOLDE facility. Here, 229Ac is produced online and mass-separated before being implanted into a large-bandgap VUV-transparent crystal. In both population schemes, either photons or conversion electrons emitted during the isomeric decay are detected.
In the IC-based approach, a positively charged 229mTh ion beam is generated from alpha-decay daughter products recoiling off a 233U source placed inside a buffer-gas stopping cell. The decay products are thermalised, guided by electric fields towards an exit nozzle and extracted into a longitudinally 15-fold segmented radiofrequency quadrupole (RFQ), which acts as an ion guide, phase-space cooler and, optionally, a beam buncher; a quadrupole mass separator then purifies the beam. In charged thorium isomers, the otherwise dominant IC decay branch is energetically forbidden, leading to a prolongation of the lifetime by up to nine orders of magnitude.
Operating the segmented RFQ as a linear Paul trap to generate sharp ion pulses enables the half-life of the thorium isomer to be determined. In work performed by the present authors in 2017, pulsed ions from the RFQ were collected and neutralised on a metal surface, triggering their IC decay. Since the long ionic lifetime was inaccessible due to the limited ion-storage time imposed by the trap’s vacuum conditions, the drastically reduced lifetime of neutral isomers was targeted. Time-resolved detection of the low-energy conversion electrons determined the lifetime to be 7 ± 1 μs.
Recently, considerable progress has been made in determining the 229mTh excitation energy – a milestone en route to a nuclear clock. In general, experimental approaches to determine the excitation energy fall into three categories: indirect measurements via gamma-ray spectroscopy of energetically low-lying rotational transitions in 229Th; direct spectroscopy of fluorescence photons emitted in radiative decays; and via electrons emitted in the IC decay of neutral 229mTh. The first approach led to the conjecture of the isomer’s existence and finally, in 2007, to the long-accepted value of 7.6 ± 0.5 eV. The second approach tries to measure the energy of photons emitted directly in the ground-state decay of the thorium isomer.
The first direct measurement of the thorium isomer’s excitation energy was reported by the present authors and co-workers in 2019. Using a compact magnetic-bottle spectrometer equipped with a repulsive electrostatic potential, followed by a microchannel-plate detector, the kinetic energy of the IC electrons emitted after an in-flight neutralisation of Th ions emitted from a 233U source could be determined. The experiment provided a value for the excitation energy of the nuclear-clock transition of 8.28 ± 0.17 eV. At around the same time in Japan, Masuda and co-workers used synchrotron radiation to achieve the first population of the isomer via resonant X-ray pumping into the second excited nuclear state of 229Th at 29.19 keV, which decays predominantly into 229mTh. By combining their measurement with earlier published gamma-spectroscopic data, the team could constrain the isomeric excitation energy to the range 2.5–8.9 eV. More recently, led by teams at Heidelberg and Vienna, the excited isomers were implanted into the absorber of a custom-built cryogenic magnetic micro-calorimeter and the isomeric energy was measured by detecting the temperature-induced change of the magnetisation using SQUIDs. This produced a value of 8.10 ± 0.17 eV for the clock-transition energy, resulting in a world average of 8.19 ± 0.12 eV.
Besides precise knowledge of the excitation energy, another prerequisite for a nuclear clock is the possibility to monitor the nuclear excitation on short timescales. Peik and Tamm proposed a method to do this in 2003 based on the “double resonance” principle, which requires knowledge of the hyperfine structure of the thorium isomer. Therefore, in 2018, two different laser beams were collinearly superimposed on the 229Th ion beam, initiating a two-step excitation in the atomic shell of 229Th. By varying both laser frequencies, resonant excitations of hyperfine components of both the 229Th ground state and the 229mTh isomer could be identified, and thus the hyperfine-splitting signature of both states could be established by detecting their de-excitation (see “Hyperfine splitting” figure). The observation of the 229mTh hyperfine structure in 2018 will not only allow a non-destructive verification of the nuclear excitation in the future, but also enabled the isomer’s magnetic dipole and electric quadrupole moments, and its mean-square charge radius, to be determined.
Roadmap towards a nuclear clock
So far, the identification and characterisation of the thorium isomer have largely been driven by nuclear physics, where techniques such as gamma spectroscopy, conversion-electron spectroscopy and radioactive decays offer a description in units of electron volts. Now the challenge is to refine our knowledge of the isomeric excitation energy with laser-spectroscopic precision to enable optical control of the nuclear-clock transition. This requires bridging a gap of about 12 orders of magnitude in the precision of the 229mTh excitation energy, from around 0.1 eV to the sub-kHz regime. In a first step, existing broad-band laser technology can be used to localise the nuclear resonance with an accuracy of about 1 GHz. In a second step, VUV frequency-comb spectroscopy, presently under development, is envisaged to improve the accuracy into the (sub-)kHz range.
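To see the size of this gap in frequency units, the energies can be converted via ν = E/h. A quick sketch, using the CODATA value of Planck’s constant and the numbers quoted in the text:

```python
H_EV_S = 4.135667696e-15  # Planck constant in eV*s (CODATA)

def ev_to_hz(energy_ev):
    """Convert a photon energy in eV to a frequency in Hz via nu = E/h."""
    return energy_ev / H_EV_S

# The ~8.19 eV clock transition corresponds to VUV light of ~2 PHz
nu = ev_to_hz(8.19)
# The current ~0.1 eV energy uncertainty, expressed as a frequency
d_nu = ev_to_hz(0.1)

print(f"transition: {nu:.3e} Hz, uncertainty: {d_nu:.3e} Hz")
```

Narrowing the resonance search from tens of THz down to the (sub-)kHz regime is what the two-step strategy above is designed to achieve.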
Another practical challenge when designing a high-precision ion-trap-based nuclear clock is the generation of thermally decoupled, ultra-cold 229Th ions via laser cooling. 229Th3+ is particularly suited due to its electronic level structure, with only one valence electron. Due to the high chemical reactivity of thorium, a cryogenic Paul trap is the ideal environment for laser cooling, since almost all residual gas atoms will freeze out at 4 K, increasing the trapping time into the region of a few hours. This will form the basis for direct laser excitation of 229mTh and will also enable a measurement of the not-yet-determined lifetime of the isomer in 229Th ions. For the alternative development of a compact solid-state nuclear clock, it will be necessary to suppress the 229mTh decay via internal conversion in a large-band-gap, VUV-transparent crystal and to detect the γ decay of the excited nuclear state. Proof-of-principle studies of this approach are currently ongoing at ISOLDE.
Many of the recent breakthroughs in understanding the 229Th clock transition emerged from the European Union project “nuClock”, which terminated in 2019. A subsequent project, ThoriumNuclearClock (ThNC), aims to demonstrate at least one nuclear clock by 2026. Laser-spectroscopy activities on the thorium isomer are also ongoing in the US, for example at JILA, NIST and UCLA.
In view of the substantial progress in recent years and ongoing worldwide efforts, both experimental and theoretical, the road is paved towards the first nuclear clock. It will complement today’s highly precise optical atomic clocks, and in some areas nuclear clocks might, in the long run, even replace them. Moreover, beyond its superb timekeeping capabilities, a nuclear clock is a unique type of quantum sensor that allows fundamental physics tests, from the variation of fundamental constants to searches for dark matter.
Colliding particles at high energies is a tried and tested route to uncover the secrets of the universe. In a collider, charged particles are packed in bunches, accelerated and smashed into each other to create new forms of matter. Whether accelerating elementary electrons or composite hadrons, past and existing colliders all deal with matter constituents. Colliding force-carrying particles such as photons is more ambitious, but can be done, even at the Large Hadron Collider (LHC).
The LHC, as its name implies, collides hadrons (protons or ions) into one another. In most cases of interest, projectile protons break up in the collision and a large number of energetic particles are produced. Occasionally, however, protons interact through a different mechanism, whereby they remain intact and exchange photons that fuse to create new particles (see “Photon fusion” figure). Photon–photon fusion has a unique signature: the particles originating from this kind of interaction are produced exclusively, i.e. they are the only ones in the final state along with the protons, which often do not disintegrate. Despite this clear imprint, when the LHC operates at nominal instantaneous luminosities, with a few dozen proton–proton interactions in a single bunch crossing, the exclusive fingerprint is contaminated by extra particles from different interactions. This makes the identification of photon–photon fusion challenging.
Protons that survive the collision, having lost a small fraction of their momentum, leave the interaction point still packed within the proton bunch, but gradually drift away as they travel further along the beamline. During LHC Run 2, the CMS collaboration installed a set of forward proton detectors, the Precision Proton Spectrometer (PPS), at a distance of about 200 m from the interaction point on both sides of the CMS apparatus. The PPS detectors can get as close to the beam as a few millimetres and detect protons that have lost between 2% and 15% of their initial kinetic energy (see “Precision Proton Spectrometer up close” panel). They are the CMS detectors located the farthest from the interaction point and the closest to the beam pipe, opening the door to a new physics domain, represented by central-exclusive-production processes in standard LHC running conditions.
Testing the Standard Model
Central exclusive production (CEP) processes at the LHC allow novel tests of the Standard Model (SM) and searches for new phenomena by potentially granting access to some of the rarest SM reactions so far unexplored. The identification of such exclusive processes relies on the correlation between the proton momentum loss measured by PPS and the kinematics of the central system, allowing the mass and rapidity of the central system in the interaction to be inferred very accurately (see “Tagging exclusive events” and “Exclusive identification” figures). Furthermore, the rules for exclusive photon–photon interactions only allow states with certain quantum numbers (in particular, spin and parity) to be produced.
Precision Proton Spectrometer up close
PPS was born in 2014 as a joint project between the CMS and TOTEM collaborations (CERN Courier April 2017 p23), and in 2018 became a subsystem of CMS following an MoU between CERN, CMS and TOTEM. For the specialised PPS setup to work as designed, its detectors must be located within a few millimetres of the LHC proton beam. The Roman Pots technique – moveable steel “pockets” enclosing the detectors under moderate vacuum conditions with a thin wall facing the beam – is perfectly suited for this task. This technique has been successfully exploited by the TOTEM and ATLAS collaborations at the LHC and was used in the past by experiments at the ISR, the SPS, the Tevatron and HERA. The challenge for PPS is the requirement that the detectors operate continuously during standard LHC running conditions, as opposed to dedicated special runs with a very low interaction rate.
The PPS design for LHC Run 2 incorporated tracking and timing detectors on both sides of CMS. The tracking detector comprises two stations located 10 m apart, capable of reconstructing the position and angle of the incoming proton. Precise timing is needed to associate the production vertex of two protons to the primary interaction vertex reconstructed by the CMS tracker. The first tracking stations of the proton spectrometer were equipped with silicon-strip trackers from TOTEM – a precise and reliable system used since the start of the LHC. In parallel, a suitable detector technology for efficient operation during standard LHC runs was developed, and in 2017 half of the tracking stations (one per side) were replaced by new silicon pixel trackers designed to cope with the higher hit rate. The x, y coordinates provided by the pixels resolve multiple proton tracks in the same bunch crossing, while the “3D” technology used for sensor fabrication greatly enhances resistance against radiation damage. The transition from strips was completed in 2018, when the fully pixel-based tracker was employed.
In parallel, the timing system was set up. It is based on diamond pad sensors initially developed for a new TOTEM detector. The signal collection is segmented in relatively large pads, read out individually by custom, high-speed electronics. Each plane contributes to the time measurement of the proton hit with a resolution of about 100 ps. The design of the detector evolved during Run 2 with different geometries and set-ups, improving the performance in terms of efficiency and overall time resolution.
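As a back-of-envelope illustration of why the per-plane resolution matters: independent time measurements average down as 1/√N, and the z coordinate of the common vertex follows from the arrival-time difference of the two protons as z = cΔt/2. The sketch below is a rough estimate only; the choice of four planes per arm is an assumption for illustration, not the actual PPS configuration:

```python
import math

SIGMA_PLANE_PS = 100.0   # per-plane timing resolution quoted in the text
C_MM_PER_PS = 0.2998     # speed of light in mm/ps

def arm_resolution_ps(n_planes, sigma_plane=SIGMA_PLANE_PS):
    # N independent plane measurements average down as 1/sqrt(N)
    return sigma_plane / math.sqrt(n_planes)

def vertex_z_resolution_mm(n_planes):
    # z = c * (t1 - t2) / 2, with t1, t2 from the two independent arms
    sigma_dt = math.sqrt(2) * arm_resolution_ps(n_planes)
    return C_MM_PER_PS * sigma_dt / 2

for n in (1, 4):  # single plane vs an assumed 4-plane arm
    print(f"{n} plane(s): sigma_z = {vertex_z_resolution_mm(n):.1f} mm")
```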
The most common and cleanest process in photon–photon collisions is the exclusive production of a pair of leptons. Theoretical calculations of such processes date back almost a century to the well-known Breit–Wheeler process. The first result obtained by PPS after commissioning in 2016 was the measurement of (semi-)exclusive production of e+e– and μ+μ– pairs using about 10 fb–1 of CMS data: 20 candidate events were identified with a di-lepton mass greater than 110 GeV. This process is now used as a “standard candle” to calibrate PPS and validate its performance. The cross section of this process has been measured by the ATLAS collaboration with their forward proton spectrometer, AFP (CERN Courier September/October 2020 p15).
An interesting process to study is the exclusive production of W-boson pairs. In the SM, electroweak gauge bosons are allowed to interact with each other through point-like triple and quartic couplings. Most extensions of the SM modify the strength of these couplings. At the LHC, electroweak self-couplings are probed via gauge-boson scattering, and specifically photon–photon scattering. A notable advantage of exclusive processes is the excellent mass resolution obtained from PPS, allowing the study of self-couplings at different scales with very high precision.
During Run 2, PPS reconstructed intact protons that lost as little as 2% of their kinetic energy, which for proton–proton collisions at 13 TeV translates into sensitivity to central mass values above 260 GeV. In the production of electroweak boson pairs, WW or ZZ, the quartic self-coupling mainly contributes to the high invariant-mass tail of the di-boson system. The analysis searched for anomalously large values of the quartic gauge coupling, and the results provide the first constraint on γγZZ in an exclusive channel and a competitive constraint on γγWW compared to other vector-boson-scattering searches.
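The quoted mass acceptance follows from simple kinematics: if the two protons lose momentum fractions ξ1 and ξ2, the invariant mass of the centrally produced system is M = √(s ξ1 ξ2). A minimal sketch using the acceptance limits quoted in the text:

```python
import math

SQRT_S = 13000.0  # GeV, centre-of-mass energy of Run 2 proton-proton collisions

def central_mass(xi1, xi2, sqrt_s=SQRT_S):
    """Invariant mass of the central system for fractional
    proton momentum losses xi1 and xi2: M = sqrt(s * xi1 * xi2)."""
    return sqrt_s * math.sqrt(xi1 * xi2)

# Both protons at the edges of the PPS acceptance quoted in the text
print(central_mass(0.02, 0.02))  # lower limit, about 260 GeV
print(central_mass(0.15, 0.15))  # upper reach, about 1950 GeV
```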
Many SM processes proceeding via photon fusion have a relatively low cross section. For example, the predicted cross section for CEP of top quark–antiquark pairs is of the order of 0.1 fb. A search for this process was performed early this year using about 30 fb–1 of CMS data recorded in 2017, with protons tagged by PPS. While the sensitivity of the analysis is not sufficient to test the SM prediction, it can probe possible enhancements due to additional contributions from new physics. Also, the analysis established tools with which to search for exclusive production processes in a multi-jet environment using machine-learning techniques.
The SM provides very accurate predictions for processes occurring at the LHC. Yet, it cannot explain the origin of several observations such as the existence of dark matter, the matter–antimatter asymmetry in the universe and neutrino masses. So far, the LHC experiments have been unable to provide answers to those questions, but the search is ongoing. Since physics with PPS mostly targets photon collisions, the only assumption is that the new physics is coupled to the electroweak sector, opening a plethora of opportunities for new searches.
Photon–photon scattering has already been observed in heavy-ion collisions by the LHC experiments, for example by ATLAS (CERN Courier December 2016 p9). But new physics would be expected to enter at higher di-photon masses, which is where PPS comes into play. Recently, a search for di-photon exclusive events was performed using about 100 fb–1 of CMS data at a di-photon mass greater than 350 GeV, where SM contributions are negligible. In the absence of an unexpected signal, a new best limit was set on anomalous four-photon coupling parameters. In addition, a limit on the coupling of axion-like particles to photons was set in the mass region 500–2000 GeV. These are the most restrictive limits to date.
A new, interesting possibility to look for unknown particles is represented by the “missing mass” technique. The exclusivity of CEP makes it possible, in two-particle final states, to infer the four-momentum of one particle if the other is measured. This is done by exploiting the fact that, if the protons are measured and the beam energy is known, the kinematics of the centrally produced final state can be determined: no direct measurements of the second particle are required, allowing us to “see the unseen”. This technique was demonstrated for the first time at the LHC this year, using around 40 and 2 fb–1 of Run 2 data in a search for pp → pZXp and pp → pγXp, respectively, where X represents a neutral, integer-spin particle with an unspecified decay mode. In the absence of an observed signal, the analysis sets the first upper limits for the production of an unspecified particle in the mass range 600–1600 GeV.
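The reconstruction can be sketched numerically. In the toy example below (all kinematic values hypothetical; proton transverse momenta and detector resolution are neglected), an event pp → pZXp is built with an X of mass 1000 GeV, and the reconstruction then recovers that mass using only the proton momentum losses and the measured Z four-momentum:

```python
import math

SQRT_S = 13000.0          # GeV, LHC Run 2 centre-of-mass energy
E_BEAM = SQRT_S / 2

def invariant_mass(p):
    e, px, py, pz = p
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# --- Build a toy exclusive event pp -> p Z X p (hypothetical values) ---
M_Z, M_X = 91.19, 1000.0
PT = 120.0  # Z and X back to back in the transverse plane
z_boson = (math.sqrt(M_Z**2 + PT**2),  PT, 0.0, 0.0)
x_state = (math.sqrt(M_X**2 + PT**2), -PT, 0.0, 0.0)
central = tuple(a + b for a, b in zip(z_boson, x_state))

# Fractional momentum losses the two tagged protons would show in PPS
xi1 = (central[0] + central[3]) / SQRT_S
xi2 = (central[0] - central[3]) / SQRT_S

# --- Reconstruction: use only xi1, xi2 and the measured Z ---
e_cen  = E_BEAM * (xi1 + xi2)
pz_cen = E_BEAM * (xi1 - xi2)
x_reco = (e_cen - z_boson[0], -z_boson[1], -z_boson[2], pz_cen - z_boson[3])

print(round(invariant_mass(x_reco), 3))  # recovers the 1000 GeV mass of X
```

No direct measurement of X is needed: energy-momentum conservation between the beams, the tagged protons and the Z fixes its four-momentum completely.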
Looking forward with PPS
For LHC Run 3, which began in earnest on 5 July, the PPS team has implemented several upgrades to maximise the physics output from the expected increase in integrated luminosity. The mechanics and readout electronics of the pixel tracker have been redesigned to allow remote shifting of the sensors in several small steps, which better distributes the radiation damage caused by the highly non-uniform irradiation. All timing stations are now equipped with “double diamond” sensors, and from 2023 an additional, second station will be added to each PPS arm. This will improve the resolution of the measured arrival time of protons, which is crucial for reconstructing the z coordinate of a possible common vertex, by at least a factor of two. Finally, a new software trigger has been developed that requires the presence of tagged protons in both PPS arms, thus allowing the use of lower energy thresholds for the selection of events with two particle jets in CMS.
The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2, despite only a doubling of the integrated luminosity. This significant increase is due to the upgrade of the detectors, especially of the timing stations, thus placing PPS in the spotlight of the Run 3 research programme. Timing detectors also play a crucial role in the planning for the high-luminosity LHC (HL-LHC) phase. The CMS collaboration has released an expression of interest to pursue studies of CEP at the HL-LHC with the ambitious plan of installing near-beam proton spectrometers at 196, 220, 234, and 420 m from the interaction point. This would extend the accessible mass range to the region between 50 GeV and 2.7 TeV. The main challenge here is to mitigate high “pileup” effects using the timing information, for which new detector technologies, including synergies with the future CMS timing detectors, are being considered.
PPS significantly extends the LHC physics programme, and is a tribute to the ingenuity of the CMS collaboration in the ongoing search for new physics.