Photon-induced reactions are regularly studied in ultra-peripheral nucleus–nucleus collisions (UPCs) at the LHC. In these collisions, the accelerated ions, which carry a strong electromagnetic field, pass by each other with an impact parameter (the distance between their centres) larger than the sum of their nuclear radii. Hadronic interactions between the nuclei are therefore strongly suppressed. At LHC energies, the photoproduction of charmonium (a bound state of charm and anti-charm quarks) in UPCs is sensitive to the gluon distributions in nuclei over a wide range of low Bjorken-x values. In particular, in coherent interactions, the photon emitted by one of the nuclei couples to the other nucleus as a whole, leaving it intact, while a J/ψ meson is emitted with a characteristic low transverse momentum (pT) of about 60 MeV, roughly of the order of the inverse of the nuclear radius.
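As a rough numerical cross-check of that pT scale, one can evaluate ħc/R for lead (a back-of-the-envelope sketch; the radius parametrisation R ≈ 1.2 fm × A^(1/3) is an assumed textbook value, not taken from the measurement):

```python
# Order-of-magnitude estimate of the coherent-photoproduction pT scale for Pb.
HBARC_MEV_FM = 197.327          # hbar*c in MeV*fm
R_PB_FM = 1.2 * 208 ** (1 / 3)  # assumed nuclear-radius parametrisation, ~7.1 fm

pt_scale = HBARC_MEV_FM / R_PB_FM  # coherence scale ~ hbar*c / R, in MeV
print(f"R_Pb ~ {R_PB_FM:.1f} fm  ->  pT scale ~ {pt_scale:.0f} MeV")
```

This gives a few tens of MeV, the same order of magnitude as the quoted ~60 MeV; the precise value of the measured spectrum depends on the nuclear form factor rather than on this simple estimate.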
In 2016, ALICE measured an unexpectedly large yield of J/ψ mesons at very low pT in peripheral, rather than ultra-peripheral, PbPb collisions at a centre-of-mass energy of 2.76 TeV. The excess with respect to expectations from hadronic J/ψ-meson production was interpreted as the first indication of coherent photoproduction of J/ψ mesons in PbPb collisions with nuclear overlap. This effect raises many theoretical challenges. For instance, how can the coherence condition survive in the photon–nucleus interaction if the target nucleus is broken up during the hadronic collision? Do only the non-interacting spectator nucleons participate in the coherent process? Can the photoproduced J/ψ meson be affected by interactions with the fast-expanding quark–gluon plasma (QGP) created in nucleus–nucleus collisions? Recent theoretical developments on the subject are based on calculations for UPCs, in which the J/ψ-meson photoproduction cross section is computed as the product of an effective photon flux and an effective photonuclear cross section for the process γPb → J/ψPb, with both terms usually modified to account for the nuclear overlap.
The ALICE experiment has recently measured coherently photoproduced J/ψ mesons in PbPb collisions at a centre-of-mass energy of 5.02 TeV, using the full Run 2 data sample. The measurement is performed at forward rapidity (2.5 < y < 4) in the dimuon decay channel. For the first time, a significant (> 5σ) coherently photoproduced J/ψ-meson signal is observed even in semi-central PbPb collisions. In figure 1, the coherently photoproduced J/ψ cross section is shown as a function of the mean number of nucleons participating in the hadronic interaction (<Npart>). In this representation, the most central, head-on PbPb collisions correspond to large <Npart> values close to 400. The photoproduced J/ψ cross section does not exhibit a strong dependence on collision centrality (i.e. on the amount of nuclear overlap) within the current experimental precision. A UPC-like model (the red line in figure 1) reproduces the semi-central to central PbPb data when the photon flux and the photonuclear cross section are modified to account for the nuclear overlap.
To clarify the theory behind this experimental observation of coherent J/ψ photoproduction, the upcoming Run 3 data will be crucial in several respects. ALICE expects to collect a much larger data sample, thereby measuring a statistically significant signal in the most central collisions. At midrapidity, the larger data sample and the excellent momentum resolution of the detector will allow pT-differential cross-section measurements, which will shed light on the role of spectator nucleons in the coherence condition. By extending the coherently photoproduced J/ψ cross-section measurement towards the most central PbPb collisions, ALICE will study the possible interaction of these charmonia with the QGP. Photoproduced J/ψ mesons could therefore turn out to be a completely new probe of charmonium dissociation in the QGP.
The top quark – the heaviest known elementary particle – differs from the other quarks by its much larger mass and a lifetime that is shorter than the time needed to form hadronic bound states. Within the Standard Model (SM), the top quark decays almost exclusively into a W boson and a b quark, and the dominant production mechanism in proton–proton (pp) collisions is top-quark pair (tt) production.
Measurements of tt production at various pp centre-of-mass energies at the LHC probe different values of Bjorken-x, the fraction of the proton’s longitudinal momentum carried by the parton participating in the initial interaction. In particular, the fraction of tt events produced through quark–antiquark annihilation increases from 11% at 13 TeV to 25% at 5.02 TeV. A measurement of the tt production cross-section thus places additional constraints on the proton’s parton distribution functions (PDFs), which describe the probabilities of finding quarks and gluons at particular x values.
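For a rough sense of the x values involved: for a tt pair produced centrally at its mass threshold, the two incoming partons each carry x ≈ m_tt/√s. This is a leading-order, central-rapidity approximation, sketched below with an assumed top mass of 172.5 GeV:

```python
M_TTBAR_GEV = 2 * 172.5  # tt pair at its mass threshold (assuming m_top = 172.5 GeV)

# Leading-order estimate: for central production at threshold, x1 = x2 = m_tt / sqrt(s).
x_at = {sqrt_s: M_TTBAR_GEV / sqrt_s for sqrt_s in (5020.0, 13000.0)}

for sqrt_s, x in sorted(x_at.items()):
    print(f"sqrt(s) = {sqrt_s / 1000:.2f} TeV  ->  x ~ {x:.3f}")
```

The lower collision energy probes x values a factor of roughly 2.6 higher, i.e. closer to the high-x region around x ≈ 0.1.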
In November 2017, the ATLAS experiment recorded a week of pp-collision data at a centre-of-mass energy of 5.02 TeV. Although the main motivation for this 5.02 TeV dataset is to provide a proton reference sample for the ATLAS heavy-ion physics programme, it also offers a unique opportunity to study top-quark production at a previously unexplored energy in ATLAS. The majority of the data was recorded with a mean of two inelastic pp collisions per bunch crossing, compared to roughly 35 collisions during the 13 TeV runs. To exploit these much lower pileup conditions, the ATLAS calorimeter cluster noise thresholds were adjusted and a dedicated jet-energy-scale calibration was performed.
Now, the ATLAS collaboration has released its measurement of the tt production cross-section at 5.02 TeV in two final states. Events in the dilepton channel were selected by requiring opposite-charge pairs of leptons, resulting in a small, high-purity sample. Events in the single-lepton final states were separated into subsamples with different signal-to-background ratios, and a multivariate technique was used to further separate signal from background events. The two measurements were combined, taking the correlated systematic uncertainties into account.
The measured cross section in the dilepton channel (65.7 ± 4.9 pb) corresponds to a relative uncertainty of 7.5%, of which 6.8% is statistical. The single-lepton measurement (68.2 ± 3.1 pb), on the other hand, has a 4.5% uncertainty that is primarily systematic. This measurement is slightly more precise than the single-lepton measurement at 13 TeV, despite the much smaller (almost a factor of 500!) integrated luminosity. The combination of the two measurements gives 67.5 ± 2.6 pb, corresponding to an uncertainty of just 3.9%.
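As a plausibility check, a naive inverse-variance average of the two channel results (which ignores the correlated systematic uncertainties that the real combination accounts for) already reproduces the published numbers:

```python
# Naive (uncorrelated) inverse-variance combination of the two channels.
measurements = [(65.7, 4.9), (68.2, 3.1)]  # (cross section in pb, total uncertainty in pb)

weights = [1.0 / sigma**2 for _, sigma in measurements]
combined = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
combined_err = sum(weights) ** -0.5

print(f"combined: {combined:.1f} +/- {combined_err:.1f} pb")
```

That this simple average lands on 67.5 ± 2.6 pb reflects the fact that the statistical components dominate; the actual combination treats the shared systematics properly.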
The new ATLAS result is consistent with the SM prediction and with a measurement by the CMS collaboration, though with a total uncertainty reduced by almost a factor of two. It thus improves our understanding of top-quark production at different centre-of-mass energies and allows an important test of the compatibility with predictions from different PDF sets (see figure 1). The result also provides a new measurement of high-x proton structure and shows a 5% reduction in the gluon PDF uncertainty in the region around x = 0.1, which is relevant for Higgs-boson production. Moreover, the measurement paves the way for the study of top-quark production in collisions involving heavy ions.
For the past 60 years, the second has been defined in terms of atomic transitions between two hyperfine states of caesium-133. Such transitions, which correspond to radiation in the microwave regime, enable state-of-the-art atomic clocks to keep time at the level of one second in more than 300 million years. A newer breed of optical clocks developed since the 2000s exploits frequencies that are about 10⁵ times higher. While still under development, optical clocks based on aluminium ions are already reaching accuracies of about one second in 33 billion years, corresponding to a relative systematic frequency uncertainty below 1 × 10⁻¹⁸.
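The correspondence between "one second lost in N years" and a fractional frequency uncertainty is simple arithmetic, sketched below (the year length is the only assumed input):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

def fractional_uncertainty(years_per_second_lost):
    """One second of accumulated drift over the given number of years."""
    return 1.0 / (years_per_second_lost * SECONDS_PER_YEAR)

caesium = fractional_uncertainty(300e6)   # microwave caesium clocks
aluminium = fractional_uncertainty(33e9)  # optical aluminium-ion clocks
print(f"Cs: {caesium:.1e}   Al+: {aluminium:.1e}")
```

The aluminium-ion figure comes out just below 1 × 10⁻¹⁸, consistent with the quoted uncertainty.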
To further reduce these uncertainties, in 2003 Ekkehard Peik and Christian Tamm of the Physikalisch-Technische Bundesanstalt in Germany proposed the use of a nuclear, rather than atomic, transition for time measurements. Due to the small nuclear moments (a consequence of the vastly different dimensions of atoms and nuclei), and thus the very weak coupling to perturbing electromagnetic fields, a “nuclear clock” is less vulnerable to external perturbations. In addition to enabling a more accurate timepiece, this offers the potential for nuclear clocks to be used as quantum sensors to test fundamental physics.
Clockwork
A clock typically consists of an oscillator and a frequency-counting device. In a nuclear clock (see “Nuclear clock schematic” figure), the oscillator is provided by the frequency of a transition between two nuclear states (in contrast to a transition between two states in the electronic shell in the case of an atomic clock). For the frequency-counting device, a narrow-band laser resonantly excites the nuclear-clock transition, while the corresponding oscillations of the laser light are counted using a frequency comb. This device (the invention of which was recognised by the 2005 Nobel Prize in Physics) is a laser source whose spectrum consists of a series of discrete, equally spaced frequency lines. After a certain number of oscillations, given by the frequency of the nuclear transition, one second has elapsed.
The need for direct laser excitation strongly constrains applicable nuclear-clock transitions: their energy has to be low enough to be accessible with existing laser technology, while simultaneously exhibiting a narrow linewidth. As the linewidth is determined by the lifetime of the excited nuclear state, the latter has to be long enough to allow for highly stable clock operation. So far, only the metastable (isomeric) first excited state of 229Th, denoted 229mTh, qualifies as a candidate for a nuclear clock, due to its exceptionally low excitation energy.
The existence of the isomeric state was conjectured in 1976 from gamma-ray spectroscopy of 229Th, and its excitation energy has only recently been determined to be 8.19 ± 0.12 eV (corresponding to a vacuum-ultraviolet wavelength of 151.4 ± 2.2 nm). Not only is it the lowest nuclear excitation among the roughly 184,000 excited states of the 3300 or so known nuclides, its expected lifetime is of the order of 1000 s, resulting in an extremely narrow relative linewidth (ΔE/E ~ 10⁻²⁰) for its ground-state transition (see “Unique transition” figure). Besides high resilience against external perturbations, this represents another attractive property for a thorium nuclear clock.
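The quoted wavelength follows from λ = hc/E; a quick check, including first-order propagation of the ±0.12 eV uncertainty:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

E, dE = 8.19, 0.12                  # isomer excitation energy and uncertainty (eV)
wavelength = HC_EV_NM / E           # nm
d_wavelength = wavelength * dE / E  # first-order error propagation

print(f"lambda = {wavelength:.1f} +/- {d_wavelength:.1f} nm")
```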
Achieving optical control of the nuclear transition via direct laser excitation would open a broad range of applications. A nuclear clock’s sensitivity to the gravitational redshift, which causes a clock’s relative frequency to change depending on its height in the gravitational potential, could enable more accurate global positioning systems and high-sensitivity detection of fluctuations in Earth’s gravitational potential induced by seismic or tectonic activity. Furthermore, while the few-eV thorium transition emerges from a fortunate near-degeneracy of the two lowest nuclear-energy levels in 229Th, the Coulomb and strong-force contributions to these energies differ at the MeV level. This makes the nuclear-level structure of 229Th uniquely sensitive to variations of fundamental constants and to ultralight dark matter. Many theories predict variations of the fine-structure constant, for example, but at tiny yearly rates. The high sensitivity provided by the thorium isomer could allow such variations to be identified. Moreover, networks of ultra-precise synchronised clocks could enable a search for ultralight dark-matter signals.
Two different approaches have been proposed to realise a nuclear clock: one based on trapped ions and another using doped solid-state crystals. The first starts from individually trapped Th ions, which promise an unprecedented suppression of systematic clock-frequency shifts, leading to an expected relative clock accuracy of about 1 × 10⁻¹⁹. The other approach relies on embedding 229Th atoms in a vacuum-ultraviolet (VUV) transparent crystal such as CaF2. This has the advantage of a large concentration (> 10¹⁵ cm⁻³) of Th nuclei in the crystal, leading to a considerably higher signal-to-noise ratio and thus greater clock stability.
Precise characterisation
A precise characterisation of the thorium isomer’s properties is a prerequisite for any kind of nuclear clock. In 2016 the present authors and colleagues made the first direct identification of 229mTh by detecting electrons emitted in its dominant decay mode, internal conversion (IC), whereby a nuclear excited state decays by the direct emission of one of its atomic electrons (see “Isomeric signal” figure). This brought the long-term objective of a nuclear clock into the focus of international research.
Currently, experimental access to 229mTh is possible only via radioactive decays of heavier isotopes or by X-ray pumping via higher-lying rotational nuclear levels, as shown by Takahiko Masuda and co-workers in 2019. The former, based on the alpha decay of 233U (2% branching ratio), is the most commonly used approach. Very recently, however, a promising new experiment exploiting the β⁻ decay of 229Ac was performed at CERN’s ISOLDE facility, led by a team at KU Leuven. Here, 229Ac is produced and mass-separated online before being implanted into a large-bandgap VUV-transparent crystal. In both population schemes, either photons or conversion electrons emitted during the isomeric decay are detected.
In the IC-based approach, a positively charged 229mTh ion beam is generated from alpha-decay daughter products recoiling off a 233U source placed inside a buffer-gas stopping cell. The decay products are thermalised, guided by electric fields towards an exit nozzle and extracted into a longitudinally 15-fold segmented radiofrequency quadrupole (RFQ), which acts as an ion guide, phase-space cooler and, optionally, a beam buncher; a downstream quadrupole mass separator then purifies the beam. In charged thorium ions, the otherwise dominant IC decay branch of the isomer is energetically forbidden, prolonging its lifetime by up to nine orders of magnitude.
Operating the segmented RFQ as a linear Paul trap to generate sharp ion pulses enables the half-life of the thorium isomer to be determined. In work performed by the present authors in 2017, pulsed ions from the RFQ were collected and neutralised on a metal surface, triggering their IC decay. Since the long ionic lifetime was inaccessible due to the limited ion-storage time imposed by the trap’s vacuum conditions, the drastically reduced lifetime of neutral isomers was targeted. Time-resolved detection of the low-energy conversion electrons determined the lifetime to be 7 ± 1 μs.
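Extracting a lifetime from time-resolved electron counts boils down to fitting an exponential, i.e. a straight line through log(counts) versus time. A toy illustration with noiseless synthetic data (all numbers invented for illustration, not taken from the experiment):

```python
import math

# Toy time-resolved IC-electron data: counts(t) = N0 * exp(-t / tau), tau = 7 us.
TAU_TRUE_US, N0 = 7.0, 1000.0
times = [2.0 * i for i in range(10)]  # detection times in microseconds
counts = [N0 * math.exp(-t / TAU_TRUE_US) for t in times]

# Least-squares straight line through (t, log counts); the slope is -1/tau.
n = len(times)
mean_t = sum(times) / n
logs = [math.log(c) for c in counts]
mean_log = sum(logs) / n
slope = sum((t - mean_t) * (y - mean_log) for t, y in zip(times, logs)) / \
        sum((t - mean_t) ** 2 for t in times)
tau_fit = -1.0 / slope
print(f"fitted lifetime: {tau_fit:.1f} us")
```

Real data of course carry Poisson fluctuations and background, so the actual analysis uses a proper statistical fit rather than this noiseless sketch.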
Excitation energy
Recently, considerable progress has been made in determining the 229mTh excitation energy – a milestone en route to a nuclear clock. In general, experimental approaches to determine the excitation energy fall into three categories: indirect measurements via gamma-ray spectroscopy of energetically low-lying rotational transitions in 229Th; direct spectroscopy of fluorescence photons emitted in radiative decays; and via electrons emitted in the IC decay of neutral 229mTh. The first approach led to the conjecture of the isomer’s existence and finally, in 2007, to the long-accepted value of 7.6 ± 0.5 eV. The second approach tries to measure the energy of photons emitted directly in the ground-state decay of the thorium isomer.
The first direct measurement of the thorium isomer’s excitation energy was reported by the present authors and co-workers in 2019. Using a compact magnetic-bottle spectrometer equipped with a repulsive electrostatic potential, followed by a microchannel-plate detector, the kinetic energy of the IC electrons emitted after in-flight neutralisation of Th ions extracted from a 233U source could be determined. The experiment provided a value for the excitation energy of the nuclear-clock transition of 8.28 ± 0.17 eV. At around the same time in Japan, Masuda and co-workers used synchrotron radiation to achieve the first population of the isomer via resonant X-ray pumping into the second excited nuclear state of 229Th at 29.19 keV, which decays predominantly into 229mTh. By combining their measurement with earlier published gamma-spectroscopic data, the team could constrain the isomeric excitation energy to the range 2.5–8.9 eV. More recently, led by teams at Heidelberg and Vienna, the excited isomers were implanted into the absorber of a custom-built cryogenic magnetic micro-calorimeter and the isomeric energy was measured by detecting the temperature-induced change of the magnetisation using SQUIDs. This produced a value of 8.10 ± 0.17 eV for the clock-transition energy, resulting in a world average of 8.19 ± 0.12 eV.
Besides precise knowledge of the excitation energy, another prerequisite for a nuclear clock is the ability to monitor the nuclear excitation on short timescales. Peik and Tamm proposed a method to do this in 2003 based on the “double resonance” principle, which requires knowledge of the hyperfine structure of the thorium isomer. Therefore, in 2018, two different laser beams were collinearly superimposed on the 229Th ion beam, initiating a two-step excitation in the atomic shell of 229Th. By varying both laser frequencies, resonant excitations of hyperfine components of both the 229Th ground state and the 229mTh isomer could be identified, and thus the hyperfine-splitting signature of both states could be established by detecting their de-excitation (see “Hyperfine splitting” figure). This 2018 observation of the 229mTh hyperfine structure will in future allow non-destructive verification of the nuclear excitation; it also enabled the isomer’s magnetic dipole and electric quadrupole moments, and its mean-square charge radius, to be determined.
Roadmap towards a nuclear clock
So far, the identification and characterisation of the thorium isomer has largely been driven by nuclear physics, where techniques such as gamma spectroscopy, conversion-electron spectroscopy and radioactive decays offer a description in units of electron volts. Now the challenge is to refine our knowledge of the isomeric excitation energy with laser-spectroscopic precision to enable optical control of the nuclear-clock transition. This requires bridging a gap of about 12 orders of magnitude in the precision of the 229mTh excitation energy, from around 0.1 eV to the sub-kHz regime. In a first step, existing broad-band laser technology can be used to localise the nuclear resonance with an accuracy of about 1 GHz. In a second step, using VUV frequency-comb spectroscopy presently under development, it is envisaged to improve the accuracy into the (sub-)kHz range.
Another practical challenge when designing a high-precision ion-trap-based nuclear clock is the generation of thermally decoupled, ultra-cold 229Th ions via laser cooling. 229Th3+ is particularly suited due to its electronic level structure, with only one valence electron. Given the high chemical reactivity of thorium, a cryogenic Paul trap is the ideal environment for laser cooling, since almost all residual gas atoms freeze out at 4 K, increasing the trapping time to a few hours. This will form the basis for direct laser excitation of 229mTh and will also enable a measurement of the as yet experimentally undetermined lifetime of the isomer in 229Th ions. For the alternative development of a compact solid-state nuclear clock, it will be necessary to suppress the IC decay of 229mTh in a large-bandgap, VUV-transparent crystal and to detect the γ decay of the excited nuclear state. Proof-of-principle studies of this approach are currently ongoing at ISOLDE.
Many of the recent breakthroughs in understanding the 229Th clock transition emerged from the European Union project “nuClock”, which terminated in 2019. A subsequent project, ThoriumNuclearClock (ThNC), aims to demonstrate at least one nuclear clock by 2026. Laser-spectroscopy activities on the thorium isomer are also ongoing in the US, for example at JILA, NIST and UCLA.
In view of the considerable progress of recent years and ongoing worldwide efforts, both experimental and theoretical, the road is paved towards the first nuclear clock. It will complement highly precise optical atomic clocks, and in some areas nuclear clocks might, in the long run, even replace them. Moreover, beyond its superb timekeeping capabilities, a nuclear clock is a unique type of quantum sensor allowing fundamental physics tests, from the variation of fundamental constants to searches for dark matter.
Colliding particles at high energies is a tried and tested route to uncover the secrets of the universe. In a collider, charged particles are packed in bunches, accelerated and smashed into each other to create new forms of matter. Whether accelerating elementary electrons or composite hadrons, past and existing colliders all deal with matter constituents. Colliding force-carrying particles such as photons is more ambitious, but can be done, even at the Large Hadron Collider (LHC).
The LHC, as its name implies, collides hadrons (protons or ions) into one another. In most cases of interest, projectile protons break up in the collision and a large number of energetic particles are produced. Occasionally, however, protons interact through a different mechanism, whereby they remain intact and exchange photons that fuse to create new particles (see “Photon fusion” figure). Photon–photon fusion has a unique signature: the particles originating from this kind of interaction are produced exclusively, i.e. they are the only ones in the final state along with the protons, which often do not disintegrate. Despite this clear imprint, when the LHC operates at nominal instantaneous luminosities, with a few dozen proton–proton interactions in a single bunch crossing, the exclusive fingerprint is contaminated by extra particles from different interactions. This makes the identification of photon–photon fusion challenging.
Protons that survive the collision, having lost a small fraction of their momentum, leave the interaction point still packed within the proton bunch, but gradually drift away as they travel further along the beamline. During LHC Run 2, the CMS collaboration installed a set of forward proton detectors, the Precision Proton Spectrometer (PPS), at a distance of about 200 m from the interaction point on both sides of the CMS apparatus. The PPS detectors can get as close to the beam as a few millimetres and detect protons that have lost between 2% and 15% of their initial kinetic energy (see “Precision Proton Spectrometer up close” panel). They are the CMS detectors located the farthest from the interaction point and the closest to the beam pipe, opening the door to a new physics domain, represented by central-exclusive-production processes in standard LHC running conditions.
Testing the Standard Model
Central exclusive production (CEP) processes at the LHC allow novel tests of the Standard Model (SM) and searches for new phenomena by potentially granting access to some of the rarest SM reactions so far unexplored. The identification of such exclusive processes relies on the correlation between the proton momentum loss measured by PPS and the kinematics of the central system, allowing the mass and rapidity of the central system in the interaction to be inferred very accurately (see “Tagging exclusive events” and “Exclusive identification” figures). Furthermore, the rules for exclusive photon–photon interactions only allow states with certain quantum numbers (in particular, spin and parity) to be produced.
PPS was born in 2014 as a joint project between the CMS and TOTEM collaborations (CERN Courier April 2017 p23), and in 2018 became a subsystem of CMS following an MoU between CERN, CMS and TOTEM. For the specialised PPS setup to work as designed, its detectors must be located within a few millimetres of the LHC proton beam. The Roman Pots technique – moveable steel “pockets” enclosing the detectors under moderate vacuum conditions with a thin wall facing the beam – is perfectly suited for this task. This technique has been successfully exploited by the TOTEM and ATLAS collaborations at the LHC and was used in the past by experiments at the ISR, the SPS, the Tevatron and HERA. The challenge for PPS is the requirement that the detectors operate continuously during standard LHC running conditions, as opposed to dedicated special runs with a very low interaction rate.
The PPS design for LHC Run 2 incorporated tracking and timing detectors on both sides of CMS. The tracking detector comprises two stations located 10 m apart, capable of reconstructing the position and angle of the incoming proton. Precise timing is needed to associate the production vertex of two protons to the primary interaction vertex reconstructed by the CMS tracker. The first tracking stations of the proton spectrometer were equipped with silicon-strip trackers from TOTEM – a precise and reliable system used since the start of the LHC. In parallel, a suitable detector technology for efficient operation during standard LHC runs was developed, and in 2017 half of the tracking stations (one per side) were replaced by new silicon pixel trackers designed to cope with the higher hit rate. The x, y coordinates provided by the pixels resolve multiple proton tracks in the same bunch crossing, while the “3D” technology used for sensor fabrication greatly enhances resistance against radiation damage. The transition from strips was completed in 2018, when the fully pixel-based tracker was employed.
In parallel, the timing system was set up. It is based on diamond pad sensors initially developed for a new TOTEM detector. The signal collection is segmented in relatively large pads, read out individually by custom, high-speed electronics. Each plane contributes to the time measurement of the proton hit with a resolution of about 100 ps. The design of the detector evolved during Run 2 with different geometries and set-ups, improving the performance in terms of efficiency and overall time resolution.
The most common and cleanest process in photon–photon collisions is the exclusive production of a pair of leptons. Theoretical calculations of such processes date back almost a century to the well-known Breit–Wheeler process. The first result obtained by PPS after commissioning in 2016 was the measurement of (semi-)exclusive production of e⁺e⁻ and μ⁺μ⁻ pairs using about 10 fb⁻¹ of CMS data: 20 candidate events were identified with a di-lepton mass greater than 110 GeV. This process is now used as a “standard candle” to calibrate PPS and validate its performance. The cross section of this process has been measured by the ATLAS collaboration with their forward proton spectrometer, AFP (CERN Courier September/October 2020 p15).
An interesting process to study is the exclusive production of W-boson pairs. In the SM, electroweak gauge bosons are allowed to interact with each other through point-like triple and quartic couplings. Most extensions of the SM modify the strength of these couplings. At the LHC, electroweak self-couplings are probed via gauge-boson scattering, and specifically photon–photon scattering. A notable advantage of exclusive processes is the excellent mass resolution obtained from PPS, allowing the study of self-couplings at different scales with very high precision.
During Run 2, PPS reconstructed intact protons that lost as little as 2% of their kinetic energy, which for proton–proton collisions at 13 TeV translates into sensitivity to central mass values above 260 GeV. In the production of electroweak boson pairs, WW or ZZ, the quartic self-coupling mainly contributes to the high invariant-mass tail of the di-boson system. The analysis searched for anomalously large values of the quartic gauge coupling, and the results provide the first constraint on γγZZ in an exclusive channel and a competitive constraint on γγWW compared to other vector-boson-scattering searches.
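The 260 GeV acceptance limit follows from the standard exclusive kinematics, in which a central system produced from protons with fractional momentum losses ξ1 and ξ2 has mass M = √(ξ1ξ2 s) and rapidity y = ½ ln(ξ1/ξ2). A minimal sketch of those relations:

```python
import math

SQRT_S_GEV = 13000.0  # Run 2 proton-proton centre-of-mass energy

def central_mass(xi1, xi2):
    """Invariant mass of the centrally produced system, M = sqrt(xi1 * xi2 * s)."""
    return math.sqrt(xi1 * xi2) * SQRT_S_GEV

def central_rapidity(xi1, xi2):
    """Rapidity of the central system, y = 0.5 * ln(xi1 / xi2)."""
    return 0.5 * math.log(xi1 / xi2)

# Both protons at the lower edge of the PPS acceptance (2% momentum loss):
m_min = central_mass(0.02, 0.02)
print(f"minimum double-tag mass: {m_min:.0f} GeV at y = {central_rapidity(0.02, 0.02):.1f}")
```

Asymmetric momentum losses give the same mass reach at non-zero rapidity, which is how PPS covers a range of central rapidities.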
Many SM processes proceeding via photon fusion have a relatively low cross section. For example, the predicted cross section for CEP of top quark–antiquark pairs is of the order of 0.1 fb. A search for this process was performed early this year using about 30 fb⁻¹ of CMS data recorded in 2017, with protons tagged by PPS. While the sensitivity of the analysis is not sufficient to test the SM prediction, it can probe possible enhancements due to additional contributions from new physics. The analysis also established tools with which to search for exclusive production processes in a multi-jet environment using machine-learning techniques.
Uncharted domains
The SM provides very accurate predictions for processes occurring at the LHC. Yet, it cannot explain the origin of several observations such as the existence of dark matter, the matter–antimatter asymmetry in the universe and neutrino masses. So far, the LHC experiments have been unable to provide answers to those questions, but the search is ongoing. Since physics with PPS mostly targets photon collisions, the only assumption is that the new physics is coupled to the electroweak sector, opening a plethora of opportunities for new searches.
Photon–photon scattering has already been observed in heavy-ion collisions by the LHC experiments, for example by ATLAS (CERN Courier December 2016 p9). But new physics would be expected to enter at higher di-photon masses, which is where PPS comes into play. Recently, a search for di-photon exclusive events was performed using about 100 fb⁻¹ of CMS data at di-photon masses greater than 350 GeV, where SM contributions are negligible. In the absence of an unexpected signal, a new best limit was set on anomalous four-photon coupling parameters. In addition, a limit on the coupling of axion-like particles to photons was set in the mass region 500–2000 GeV. These are the most restrictive limits to date.
A new, interesting possibility to look for unknown particles is represented by the “missing mass” technique. The exclusivity of CEP makes it possible, in two-particle final states, to infer the four-momentum of one particle if the other is measured. This is done by exploiting the fact that, if the protons are measured and the beam energy is known, the kinematics of the centrally produced final state can be determined: no direct measurements of the second particle are required, allowing us to “see the unseen”. This technique was demonstrated for the first time at the LHC this year, using around 40 and 2 fb⁻¹ of Run 2 data in searches for pp → pZXp and pp → pγXp, respectively, where X represents a neutral, integer-spin particle with an unspecified decay mode. In the absence of an observed signal, the analysis sets the first upper limits for the production of an unspecified particle in the mass range 600–1600 GeV.
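The missing-mass bookkeeping itself is a few lines of four-vector arithmetic: subtract the two tagged protons and the measured Z (or γ) from the known initial state, and take the invariant mass of what remains. A schematic sketch, neglecting proton masses and proton transverse momentum, with all momenta invented for illustration:

```python
import math

SQRT_S = 13000.0       # GeV
BEAM_E = SQRT_S / 2.0  # energy per beam

def sub(a, b):
    """Component-wise difference of two four-vectors (E, px, py, pz)."""
    return tuple(x - y for x, y in zip(a, b))

def inv_mass(p):
    E, px, py, pz = p
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Invented event: a Z boson measured centrally, plus an unseen particle X
# of mass ~600 GeV that we recover from the missing four-momentum.
z_meas = (140.06, 30.0, 20.0, 100.0)
x_true = (603.16, -30.0, -20.0, -50.0)

# Fractional momentum losses the two tagged protons would show for this event:
E_c = z_meas[0] + x_true[0]
pz_c = z_meas[3] + x_true[3]
xi1, xi2 = (E_c + pz_c) / SQRT_S, (E_c - pz_c) / SQRT_S

initial = (SQRT_S, 0.0, 0.0, 0.0)  # sum of the two beam four-momenta
proton1 = (BEAM_E * (1 - xi1), 0.0, 0.0, BEAM_E * (1 - xi1))
proton2 = (BEAM_E * (1 - xi2), 0.0, 0.0, -BEAM_E * (1 - xi2))

missing = sub(sub(sub(initial, proton1), proton2), z_meas)
print(f"missing mass: {inv_mass(missing):.0f} GeV")
```

In the real analyses the proton kinematics come from PPS, and the missing-mass resolution is dominated by the precision of the measured momentum losses.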
For LHC Run 3, which began in earnest on 5 July, the PPS team has implemented several upgrades to maximise the physics output from the expected increase in integrated luminosity. The mechanics and readout electronics of the pixel tracker have been redesigned to allow remote shifting of the sensors in several small steps, which better distributes the radiation damage caused by the highly non-uniform irradiation. All timing stations are now equipped with “double diamond” sensors, and from 2023 an additional, second station will be added to each PPS arm. This will improve the resolution of the measured arrival time of protons, which is crucial for reconstructing the z coordinate of a possible common vertex, by at least a factor of two. Finally, a new software trigger has been developed that requires the presence of tagged protons in both PPS arms, thus allowing the use of lower energy thresholds for the selection of events with two particle jets in CMS.
The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2, despite only a doubling of the integrated luminosity. This significant increase is due to the upgrade of the detectors, especially of the timing stations, thus placing PPS in the spotlight of the Run 3 research programme. Timing detectors also play a crucial role in the planning for the high-luminosity LHC (HL-LHC) phase. The CMS collaboration has released an expression of interest to pursue studies of CEP at the HL-LHC with the ambitious plan of installing near-beam proton spectrometers at 196, 220, 234, and 420 m from the interaction point. This would extend the accessible mass range to the region between 50 GeV and 2.7 TeV. The main challenge here is to mitigate high “pileup” effects using the timing information, for which new detector technologies, including synergies with the future CMS timing detectors, are being considered.
PPS significantly extends the LHC physics programme, and is a tribute to the ingenuity of the CMS collaboration in the ongoing search for new physics.
With so many new hadronic states being discovered at the LHC (67 and counting, with the vast majority seen by LHCb), it can be difficult to keep track of what’s what. While most are variations of known mesons and baryons, LHCb is uncovering an increasing number of exotic hadrons, namely tetraquarks and pentaquarks. A case in point is its recent discovery, announced at CERN on 5 July, of a new strange pentaquark (with quark content cc̄uds) and a new tetraquark pair: one constituting the first doubly charged open-charm tetraquark (cs̄ud̄) and the other its neutral isospin partner (cs̄dū). The situation has prompted the LHCb collaboration to introduce a new naming scheme. “We’re creating ‘particle zoo 2.0’,” says Niels Tuning, LHCb physics coordinator. “We’re witnessing a period of discovery similar to the 1950s, when a ‘zoo’ of hadrons ultimately led to the quark model of conventional hadrons in the 1960s.”
While the quark model allows the existence of multiquark states beyond two- and three-quark mesons and baryons, the traditional naming scheme for hadrons doesn’t make much allowance for what these particles should be called. When the first tetraquark candidate was discovered at the Belle experiment in 2003, it was denoted by “X” because it didn’t seem to be a conventional charmonium state. Shortly afterwards, a similarly mysterious but different state turned up at BaBar and was denoted “Y”. Subsequent exotic states seen at Belle and BESIII were dubbed “Z”, and more recently tetraquarks discovered at LHCb were labelled “T”.
Complicating matters further, the subscripts added to differentiate between the various states lack consistency. For example, the first known tetraquark states contained both charm and anticharm quarks, so a subscript “c” was added. But the recent discoveries of tetraquarks and pentaquarks containing a single strange quark require an extra subscript “s”. On top of all of that, explains LHCb’s Tim Gershon, who initiated the new naming scheme, tetraquarks discovered by LHCb in 2020 contain a single charm quark. “We couldn’t assign the subscript ‘c’ because we’ve always used that to denote states containing charm and anticharm, so we didn’t know what symbols to use,” he explains. “Things were starting to become a bit confusing, so we thought it was time to bring some kind of logic to the naming scheme. We have done this over an extended period, not only within LHCb but also involving other experiments and theorists in this field.”
Helpfully, the new proposal labels all tetraquarks “T” and all pentaquarks “P”, with a set of rules regarding the necessary subscripts and superscripts. In this scheme, the two different spin states of the open-charm tetraquarks discovered by LHCb in 2020 become Tcs0(2900)0 and Tcs1(2900)0 instead of X0(2900)0 and X1(2900)0, for example, while the latest pentaquark is denoted PΛψs(4338)0. The collaboration hopes that the new scheme, which can be extended to six- or seven-quark hadrons, will make it easier for experts to communicate while also helping newcomers to the field.
Importantly, it could make it easier to spot patterns that might have been missed before, perhaps shedding light on the central question of whether exotic hadrons are compact tightly bound multi-quark states or more loosely bound molecular-like states. The new LHCb scheme might even help researchers predict new exotic hadrons, just as the multiplets arising from the quark model made it possible to predict new mesons and baryons such as the Ω–.
“Before this new scheme it was almost like a Tower of Babel situation where it was difficult to communicate,” says Gershon. “We have created a document that people can use as a kind of dictionary, in the hope that it will help the field to progress more rapidly.”
The keenly awaited first science-grade images from the James Webb Space Telescope were released on 12 July – and they did not disappoint. Thanks to Webb’s unprecedented 6.5 m mirror, together with its four main instruments (NIRCam, NIRSpec, NIRISS and MIRI), the $10 billion observatory marks a new dawn for observational astrophysics.
The past six months since Webb’s launch from French Guiana have been devoted to commissioning, including alignment and calibration of the mirrors and bringing temperatures to cryogenic levels to minimise noise from heat radiated from the equipment (CERN Courier March/April 2022 p7). Unlike the Hubble Space Telescope, Webb does not look at ultraviolet or visible light but is primarily sensitive to near- and mid-infrared wavelengths. This enables it to observe the farthest galaxies and stars, from as early as a few hundred million years after the Big Bang.
Wealth of information
Pictured here are some of Webb’s early-release images. The first deep-field image (top) covers the same area of the sky as a grain of sand held at arm’s length, and is swarming with galaxies. At the centre is a cluster called SMACS 0723, whose combined mass is so high that its gravitational field bends the light of objects that lie behind it (resulting in arc-like features), revealing galaxies that existed when the universe was less than a billion years old. The image was taken using NIRCam and is a combination of images at different wavelengths. The spectrographs, NIRSpec and NIRISS, will provide a wealth of information on the composition of stars, galaxies and their clusters, offering a rare peek into the earliest stages of their formation and evolution.
Stephan’s Quintet (bottom left) is a visual grouping of five galaxies that was first discovered in 1877 and remains one of the most studied compact galaxy groups. The actual grouping involves only four galaxies, which are predicted to eventually merge. The non-member, NGC 7320, which lies about 40 million light years from Earth rather than 290 million for the actual group, is seen on the left, with vast regions of active star formation in its numerous spiral arms.
A third stunning image, the Southern Ring nebula (bottom right), shows a dying star. With its reservoirs of light elements already exhausted, it starts using up any available heavier elements to sustain itself – a complex and violent process that results in large amounts of material being ejected from the star at intervals, visible as shells.
These images are just a taste, yet not all Webb data will be so visually spectacular. By extending Hubble’s observations of distant supernovae and other standard candles, for example, the telescope should enable the local rate of expansion to be determined more precisely, possibly shedding light on the nature of dark energy. By measuring the motion and gravitational lensing of early objects, it will also survey the distribution of dark matter, and might even hint at what it’s made of. Using transmission spectroscopy, Webb will also reveal exoplanets in unprecedented detail, learn about their chemical compositions and search for signatures of habitability.
As the LHCb experiment prepares for data taking with an upgraded detector for LHC Run 3, the rich harvest of results using data collected in Run 1 and Run 2 of the LHC continues.
A fascinating area of study is the quantum-mechanical oscillation of neutral mesons between their particle and antiparticle states, implying a coupled system of two mesons with different lifetimes. The phenomenology of the Bs system is particularly interesting as it provides a sensitive probe to physics beyond the Standard Model. A Bs meson oscillates with a frequency of about 3 × 1012 Hz, or on average about nine times during its lifetime, τ. In addition, a sizeable difference between the decay widths of the heavy (ΓH) and light (ΓL) mass eigenstates is expected. Measuring the lifetime of a CP-even Bs-decay mode determines τL = 1/ΓL.
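One way to square the two quoted numbers: with τ ≈ 1.5 ps and f ≈ 3 × 10^12 Hz, a Bs meson completes f·τ ≈ 4.5 full oscillation cycles per lifetime, and counting each particle-to-antiparticle transition separately (two per cycle) gives the quoted “about nine times”. A back-of-envelope check with these rounded, illustrative values (not precision PDG inputs):

```python
# Back-of-envelope check of the quoted Bs oscillation numbers.
freq = 3e12    # oscillation frequency, Hz (rounded)
tau = 1.5e-12  # Bs lifetime, s (rounded)

cycles = freq * tau  # full particle -> antiparticle -> particle cycles
flips = 2 * cycles   # flavour transitions: two per full cycle
print(cycles, flips)  # 4.5 cycles, ~9 flavour changes per lifetime
```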
LHCb has recently released a new and precise measurement of this parameter, making use of Bs→ J/ψη decays selected from 5.7 fb–1 of Run 2 data. The study improves the previous Run 1 precision by a factor of two. Due to the combinatorial background, the reconstruction of the η meson via its two-photon decay mode is a particular challenge for this analysis. Despite this, and even with the modest energy resolution of the calorimeter leading to a relatively broad mass peak overlapping partially with the signal from the B0→ J/ψη decay, a competitive accuracy has been achieved. By exploiting machine-learning techniques to reduce the background, together with the well-understood response of the LHCb detector, the Bs→ J/ψη decay is clearly observed (figure 1), and τL is extracted from a two-dimensional fit to the mass and decay-time distributions.
The analysis finds τL = 1.445 ± 0.016 (stat) ± 0.008 (syst) ps, which is the most precise measurement of this quantity. Combined with the LHCb Run 1 study of this and the Bs→ Ds+ Ds– decay modes, the average becomes τL = 1.437 ± 0.014 ps, which agrees well both with the Standard Model expectation (τL = 1.422 ± 0.013 ps) and the value inferred from measurements of Γs and ΔΓs in Bs→ J/ψφ decays. Further improvements in the knowledge of τL are expected from other CP-even Bs decays to final states containing η or η′ mesons, from the Bs→ Ds+ Ds– dataset collected during Run 2, and from the upcoming Run 3.
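Combinations of independent measurements like this are typically inverse-variance weighted averages. A minimal sketch of that procedure, with illustrative inputs rather than the actual Run 1 and Run 2 numbers:

```python
import math

def combine(values, errors):
    """Inverse-variance weighted average of independent measurements,
    returning the combined value and its uncertainty."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Illustrative: two equally precise measurements average with error / sqrt(2)
m, e = combine([1.44, 1.44], [0.02, 0.02])
print(m, e)
```

Note that a real combination must also account for correlated systematic uncertainties between the two datasets, which this sketch ignores.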
Despite two decades of extensive studies, the production of antinuclei in heavy-ion collisions is not yet fully understood. It is usually described by two conceptually different approaches: the statistical hadronisation model (SHM) and coalescence models. In the SHM, antinuclei are emitted from a locally thermally equilibrated source, while in coalescence models they are formed from the binding of constituent antinucleons that are close to each other in momentum and position phase space. Both approaches predict very similar production yields of, for example, antideuterons (bound states of an antiproton and an antineutron). This calls for new experimental observables that can discriminate between the production models.
Measuring higher moments of the antinuclei multiplicity distribution, as well as their correlation with the antinucleons produced in the collision, has recently been proposed as a sensitive probe of antinucleosynthesis processes in heavy-ion collisions. The first measurement of the variance-to-mean ratio of the antideuteron multiplicity distribution is compared to the predictions of the SHM and coalescence models (figure 1). The coalescence model fails to describe the observed ratio, while the measurement is consistent with the statistical baseline (a Poissonian distribution) as well as with the SHM in the presence of baryon-number conservation. However, this observable proves insensitive to the size of the correlation volume used in the SHM to conserve baryon number.
The Pearson correlation coefficient between the numbers of produced antideuterons and antiprotons constrains the correlation volume more effectively. The small negative correlation reflects the fact that fewer antiprotons are observed in events containing at least one antideuteron than in an average event (figure 1). The coalescence model does not reproduce the measurement, whereas the SHM can be fitted to it to extract the correlation volume. The obtained correlation volume is 1.6 times the volume of the fireball per unit of rapidity, smaller than the volumes that describe proton yields and a similar measurement of net-proton number fluctuations. These findings point to a later formation of the correlation among protons and deuterons compared to that among antiprotons and protons.
Overall, these results present a severe challenge to the current understanding of antinuclei production in heavy-ion collisions at LHC energies. With LHC Run 3 data it will be possible to extend these measurements to heavier antinuclei and to higher-order correlation coefficients and moments of the antinuclei multiplicity distribution, which are even more sensitive to the details of the nucleosynthesis process in heavy-ion collisions.
NuFACT 2022 is the 23rd in a series of yearly international workshops that started in 1999, previously known as the International Workshop on Neutrino Factories. The change of name to the International Workshop on Neutrinos from Accelerators reflects the fact that the programme has, over the years, grown to include all current and future accelerator- and reactor-based neutrino projects, as well as muon projects, rather than the Neutrino Factory project alone.
The main goal of the workshop is to review the progress of current and future facilities able to improve on measurements of neutral- and charged-lepton flavor violation, as well as searches for new phenomena beyond the capabilities of presently planned experiments. The workshop is both interdisciplinary and interregional in that experimenters, theorists, and accelerator physicists from all over the world share expertise with the common goal of reviewing the results of currently operating experiments and designing the next generation of experiments. To allow for worldwide participation we plan to broadcast the plenary sessions and at least some selected parallel sessions. Plenary sessions will be mostly held in the mornings in Utah, which translates into convenient times for international participants from the Americas and the Europe/Africa region.
Before and during the conference we will also hold several mini-workshops and panel discussions. Currently we plan a mini-workshop on Multi-Messenger Tomography of Earth, a workshop on Muon Colliders, and a panel discussion on the Snowmass exercise.
We are currently envisioning a fully in-person event. Plenary and selected parallel sessions will be streamed for world-wide participation. NuFact will include some dedicated hybrid events with opportunities for remote participants to give presentations and to discuss with the in-person participants.
The NuFact 2022 workshop is divided into seven Working Groups covering the following topics:
Neutrino Oscillation Physics (Working Group 1),
Neutrino Scattering Physics (Working Group 2),
Accelerator Physics (Working Group 3),
Muon Physics (Working Group 4),
Neutrinos Beyond PMNS (Working Group 5),
Detectors (Working Group 6), and
Inclusion, Diversity, Equity, Education & Outreach (Working Group 7).
Amplitudes 2022, the 14th in a series of annual meetings, brings together a community of researchers in this fast-growing field of theoretical physics who are interested in both formal and practical aspects of scattering amplitudes, and in a wide range of applications from pure mathematics to collider and gravitational-wave physics.
Karol Kampf (Charles University), Jaroslav Trnka (UC Davis)
Christoph Bartsch, Jiří Novotný, Petr Vaško (Charles University Prague)
Klaus Bering, Michal Pazderka (Masaryk University Brno)
Constantinos Skordis, Renann Lipinski Jusinskas, Will Emond (Academy of Sciences Prague), Taro Brown (UC Davis)
Confirmed speakers
Agnese Bissi, Alessandra Buonanno, Alex Edison, Alexander Zhiboedov, Alfredo Guevara, Amit Sever, Andrew McLeod, Andrzej Pokraka, Callum Jones, Chia-Hsien Shen, Congkao Wen, Dalimil Mazac, David Kosower, Dimitry Chicherin, Fabrizio Caola, Gabriele Travaglini, Guilherme Pimentel, Gustav Mogull, Henriette Elvang, Hofie Hannesdottir, Johannes Henn, Julio Parra-Martinez, Lance Dixon, Livia Ferro, Lucia Cordova, Marc Spradlin, Mariana Carrillo-Gonzalez, Matthias Wilhelm, Michèle Levi, Monica Pate, Natalie Paquette, Nathaniel Craig, Nima Arkani-Hamed, Sabrina Pasterski, Sebastian Mizera, Shruti Paranjape, Song He, Tim Adamo, Xi Yin, Zohar Komargodski, Zvi Bern
Sponsors
This conference receives funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Novel structures in scattering amplitudes, grant agreement No 725110). We also acknowledge the support of Charles University, the Czech Science Foundation, the Czech Ministry of Education, the Czech Academy of Sciences and Masaryk University.