
Snowmass back at KITP

The Snowmass Theory Frontier conference

Between 23 and 25 February, the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara, California, hosted the Theory Frontier conference of the US Particle Physics Community Planning Exercise, “Snowmass 2021”. Organised by the Division of Particles and Fields of the American Physical Society (APS DPF), Snowmass aims to identify and document a scientific vision for the future of particle physics in the US and abroad. The event brought together theorists from the entire spectrum of high-energy physics, fostering dialogue and revealing common threads, to sketch a decadal vision for high-energy theory in advance of the main Snowmass Community Summer Study in Seattle from 17 to 26 July.

It was also one of the first large in-person meetings for the US particle physics community since the start of the COVID-19 pandemic.

The conference began in earnest with Juan Maldacena’s (IAS) vision for formal theory in the coming decade. Highlighting promising directions in quantum field theory and quantum gravity, he surveyed recent developments in “bootstrap” techniques for conformal field theories, amplitudes and cosmology; implications of quantum information for understanding quantum field theories; new dualities in supersymmetric and non-supersymmetric field theories; progress on the black-hole information problem; and constraints on effective field theories from consistent coupling to quantum gravity. Following talks by Eva Silverstein (U. Stanford) on quantum gravity and cosmology and Xi Dong (UC Santa Barbara) on geometry and entanglement, David Gross (KITP) brought the morning to a close by recalling the role of string theory in the quest for unification and emphasising its renewed promise in understanding QCD.

Clay Cordova (Chicago), David Simmons-Duffin (Caltech), Shu-Heng Shao (IAS) and Ibrahima Bah (Johns Hopkins) followed with a comprehensive overview of recent progress in quantum field theory. Cordova’s summary of supersymmetric field theory touched on the classification of superconformal field theories, improved understanding of maximally supersymmetric theories in diverse dimensions, and connections between supersymmetric and non-supersymmetric dynamics. Simmons-Duffin made a heroic attempt to convey the essentials of the conformal bootstrap in a 15-minute talk, while Shao surveyed generalised global symmetries and Bah detailed geometric techniques guiding the classification of superconformal field theories.

The first afternoon began with Raman Sundrum’s (Maryland) vision for particle phenomenology, in which he surveyed the pressing questions motivating physics beyond the Standard Model, some promising theoretical mechanisms for answering them, and the experimental opportunities that follow. Tim Tait (UC Irvine) followed with an overview of dark-matter models and motivation, drawing a contrast between the more top-down perspective on dark matter prevalent during the previous Snowmass process in 2013 (also hosted by KITP) and the much broader bottom-up perspective governing today’s thinking. Devin Walker (Dartmouth) and Gilly Elor (Mainz) brought the first day’s physics talks to a close with bosonic dark matter and new ideas in baryogenesis.

The final session of the first day was devoted to issues of equity and inclusion in the high-energy theory community, with DPF early-career member Julia Gonski (Columbia) making a persuasive case for giving a voice to early-career physicists in the years between Snowmass processes. Connecting from Cambridge, Howard Georgi (Harvard) delivered a compelling speech on the essential value of diversity in physics, recalling Ann Nelson’s legacy and reminding the packed auditorium that “progress will not happen at all unless the good people who think that there is nothing they can do actually wake up and start doing.” This was followed by a panel discussion moderated by Devin Walker (Dartmouth) and featuring Georgi, Bah, Masha Baryakhtar (Washington) and Tien-Tien Yu (Oregon) in dialogue about their experiences.

Developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest

The second and third days of the conference spanned the entire spectrum of activity within high-energy theory, beginning with quantum information science in talks by Tom Hartman (Cornell), Raphael Bousso (Berkeley), Hank Lamm (Fermilab) and Yoni Kahn (Illinois). Marius Wiesemann (MPI), Felix Kling (DESY) and Ian Moult (Yale) discussed simulations for collider physics, and Michael Wagman (Fermilab), Huey-Wen Lin (Michigan State) and Thomas Blum (Connecticut) emphasised recent progress in lattice gauge theory. Recent developments in precision theory were covered by Bernhard Mistlberger (CTP), Emanuele Mereghetti (LANL) and Dave Soper (Oregon), and the status of scattering-amplitudes applications by Nima Arkani-Hamed (IAS), Mikhail Solon (Caltech) and Henriette Elvang (Michigan). Masha Baryakhtar (Washington), Nicholas Rodd (CERN) and Daniel Green (UC San Diego) reviewed astroparticle and cosmology theory, followed by an overview of effective-field-theory approaches in cosmology and gravity by Mehrdad Mirbabayi (ICTP) and Walter Goldberger (Yale); Isabel Garcia Garcia (KITP) discussed alternative approaches to effective field theories in gravitation. Recent findings in neutrino theory were covered by Alex Friedland (SLAC), Mu-Chun Chen (UC Irvine) and Zahra Tabrizi (Northwestern). Lance Dixon (SLAC), Jesse Thaler (MIT) and Neal Weiner (New York) bridged these themes with talks on amplitudes and collider physics, machine learning for particle theory and the cosmological implications of dark-sector models. Connections with the many other “frontiers” in the Snowmass process were underlined by Laura Reina (Florida State), Lian-Tao Wang (Chicago), Pedro Machado (Fermilab), Flip Tanedo (UC Riverside), Steve Gottlieb (Indiana) and Alexey Petrov (Wayne State).

The rich and broad programme of the Snowmass Theory Conference demonstrates the vibrancy of high-energy theory at this interesting juncture for the field, following the discovery of the final missing piece of the Standard Model, the Higgs boson, in 2012. Subsequent developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest. The many thematic threads and opportunities covered in the conference bode well for the final Snowmass discussions with the whole community in Seattle this summer.

Gravitational-wave astronomy turns to AI

New frontiers in gravitational-wave (GW) astronomy were discussed in the charming and culturally vibrant region of Oaxaca, Mexico, from 14 to 19 November. Some 37 participants attended the hybrid Banff International Research Station for Mathematical Innovation and Discovery (BIRS) workshop “Detection and Analysis of Gravitational Waves in the Era of Multi-Messenger Astronomy: From Mathematical Modelling to Machine Learning”. Topics ranged from numerical relativity to observational astrophysics and computer science, including the latest applications of machine-learning algorithms to the analysis of GW data.

GW observations are a new way to explore the universe’s deepest mysteries. They allow researchers to test gravity in extreme conditions, to get important clues on the mathematical structure and possible extensions of general relativity, and to understand the origin of matter and the evolution of the universe. As more GW observations with increased detector sensitivities spur astrophysical and theoretical investigations, the analysis and interpretation of GW data face new challenges that require close collaboration across the GW community. The Oaxaca workshop focused on a topic that is currently receiving a lot of attention: the development of efficient machine-learning (ML) methods and numerical-analysis algorithms for the detection and analysis of GWs. The programme gave participants an overview of new-physics phenomena that could be probed by current or next-generation GW detectors, as well as the data-analysis tools being developed to search for astrophysical signals in noisy data.

Since the first detection in 2015, the LIGO and Virgo detectors have reached an unprecedented GW sensitivity. They have observed signals from binary black-hole mergers and a handful of signals from binary neutron-star and mixed black hole-neutron star systems. In discussing the role that numerical relativity plays in unveiling the GW sky, Pablo Laguna and Deirdre Shoemaker (U. Texas) showed how it can help in understanding the physical signatures of GW events, for example by distinguishing black hole-neutron star binaries from binary black-hole mergers. On the observational side, several talks focused on possible signatures of new physics in future detections. Adam Coogan (U. de Montréal and Mila) and Gianfranco Bertone (U. of Amsterdam, and chair of EuCAPT) discussed dark-matter halos around black holes: distinctive GW signatures of intermediate mass-ratio inspirals embedded in such halos could help to determine whether dark matter is made of a cold, collisionless particle. Primordial black holes were also discussed as dark-matter candidates.

Bernard Mueller (U. Monash) and Pablo Cerdá-Durán (U. de Valencia) described GW emission from core-collapse supernovae. The range of current detectors is limited to the Milky Way, where the rate of supernovae is about one per century. However, if and when a galactic supernova happens, its GW signature will be within reach of existing detectors. Lorena Magaña Zertuche (U. of Mississippi) talked about the physics of black-hole ringdown – the process whereby gravitational waves are emitted in the aftermath of a binary black-hole merger – which is crucial for understanding astrophysical black holes and testing general relativity. Finally, Leïla Haegel (U. de Paris) described how the detection of GW dispersion would indicate the breaking of Lorentz symmetry: if a GW propagates according to a modified dispersion relation, its frequency modes will propagate at different speeds, changing the phase evolution of the signals with respect to general relativity.
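To make the dispersion argument concrete, such tests are often phrased in terms of a phenomenological parametrisation of the graviton dispersion relation (the form below is the one commonly used in LIGO–Virgo tests of general relativity, quoted here as an illustration rather than taken from the talk itself):

\[ E^2 = p^2c^2 + A_\alpha\,(pc)^\alpha , \]

where general relativity corresponds to \(A_\alpha = 0\), and the case \(\alpha = 0\) with \(A_0 = m_g^2c^4 > 0\) describes a massive graviton. Any non-zero \(A_\alpha\) makes the group velocity frequency-dependent, so different frequency components of a chirp accumulate relative delays over cosmological distances, distorting the observed phase evolution.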

Machine learning
Applications of different flavours of ML algorithms to GW astronomy, ranging from the detection of GWs to their characterisation in detector simulations, were the focus of the rest of the workshop.

ML has seen huge development in recent years and is increasingly used in many fields of science. In GW astronomy, a variety of supervised, unsupervised and reinforcement-learning algorithms, such as deep neural networks, genetic programming and support vector machines, have been developed. They have been applied successfully to detector-noise characterisation, signal processing, data analysis for signal detection and the reduction of the non-astrophysical background of GW searches. These algorithms must handle large data sets, model theoretical waveforms to high accuracy and perform searches at the limit of instrument sensitivity. The next step for a successful use of ML in GW science will be the integration of ML techniques with the more traditional numerical-analysis methods that have been developed for the modelling, real-time detection and analysis of signals.
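As a flavour of how such pipelines look in practice, the sketch below trains a small one-dimensional convolutional network to separate noise-only data segments from segments containing a weak chirp-like signal. It is a minimal, self-contained illustration: the toy waveform, network and all parameter choices are invented for this example and do not represent any collaboration’s production code.

```python
import numpy as np
import torch
import torch.nn as nn

def toy_chirp(n=1024, fs=1024.0):
    """Toy chirp with linearly increasing frequency, loosely mimicking an inspiral."""
    t = np.arange(n) / fs
    f = 30.0 + 200.0 * t                     # arbitrary frequency evolution
    return np.sin(2 * np.pi * f * t) * np.hanning(n)

def make_batch(batch=64, n=1024, amp=0.8):
    """Half the segments are white noise, half contain a weak chirp."""
    x = np.random.randn(batch, 1, n).astype(np.float32)
    y = np.zeros(batch, dtype=np.float32)
    x[batch // 2:, 0, :] += amp * toy_chirp(n).astype(np.float32)
    y[batch // 2:] = 1.0
    return torch.from_numpy(x), torch.from_numpy(y)

# Deliberately small 1D CNN: conv layers extract local time-frequency
# structure; global pooling makes the output independent of segment length.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):                      # short illustrative training loop
    x, y = make_batch()
    opt.zero_grad()
    loss = loss_fn(model(x).squeeze(1), y)
    loss.backward()
    opt.step()
```

Real searches face far harder versions of each step here: the noise is coloured and non-stationary, the signals are relativistic waveforms rather than toy chirps, and the networks must be calibrated against the traditional matched-filter pipelines they complement.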

The BIRS workshop provided a broad overview of the latest advances in this field, as well as the open questions that must be addressed to apply robust ML techniques to a wide range of problems. These include reliable background estimation, modelling gravitational waveforms in regions of parameter space not covered by full numerical-relativity simulations, and determining the populations of GW sources and their properties. Although ML for GW astronomy is in its infancy, there is no doubt that it will play an increasingly important role in the detection and characterisation of GWs, leading to new discoveries.

Dijet excess intrigues at CMS

The Standard Model (SM) has been extremely successful in describing the behaviour of elementary particles. Nevertheless, conundrums such as the nature of dark matter and the cosmological matter-antimatter asymmetry strongly suggest that the theory is incomplete. Hence, the SM is widely viewed as the effective low-energy limit of a more fundamental underlying theory, which must modify it to describe particles and their interactions at higher energies.

A powerful way to discover new particles expected from physics beyond the SM is to search for high-mass dijet or multi-jet resonances, as these are expected to have large production cross-sections at hadron colliders. These searches look for a pair of jets originating from a pair of quarks or gluons, coming from the decay of a new particle “X” and appearing as a narrow bump in the invariant dijet-mass distribution. Since the energy scale of new physics is most likely high, it is natural to expect these new particles to be massive.
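For reference, such a bump would appear in the dijet invariant mass built from the two jets’ four-momenta which, for jets that are light compared with their momenta, reduces to the familiar collider-coordinates form:

\[ m_{jj}^{2} = (E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2 \simeq 2\,p_{\mathrm{T}1}\,p_{\mathrm{T}2}\left[\cosh(\eta_1-\eta_2)-\cos(\phi_1-\phi_2)\right], \]

where \(p_{\mathrm{T}}\), \(\eta\) and \(\phi\) denote each jet’s transverse momentum, pseudorapidity and azimuthal angle. A particle X decaying to two partons produces an excess of events localised near \(m_{jj}=m_X\) on top of the smoothly falling QCD continuum.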

Figure 1

CMS and ATLAS have performed a suite of single-dijet-resonance searches. The next step is to look for new identical-mass particles “X” that are produced in pairs, with (resonant mode) or without (non-resonant mode) a new intermediate heavier particle “Y” being produced and decaying to pairs of X. Such processes would yield two dijet resonances and four jets in the final state: the dijet mass would correspond to particle X and the four-jet mass to particle Y.

The CMS experiment was also motivated to search for Y → XX → 4-jet events by a candidate event recorded in 2017 and reported in a previous CMS search for dijet resonances (figure 1). This spectacular event has four high-transverse-momentum jets, forming two dijet pairs, each with an invariant mass of 1.9 TeV, and a four-jet invariant mass of 8 TeV.

Figure 2

In a new search optimised for this specific Y → XX → 4-jet topology, presented on 14 March at the Rencontres de Moriond, the CMS collaboration has found another very similar event. Such events could originate from quantum-chromodynamics processes, but these are expected to be extremely rare (figure 2). The two candidate events are clearly visible at high masses, distinct from all the rest. Also shown (magenta) is a simulation of a possible new-physics signal – a diquark decaying to vector-like quarks – with a four-jet mass of 8.4 TeV and a dijet mass of 2.1 TeV, which describes these two candidates very nicely.

The hypothesis that these events originate from the SM at the observed X and Y masses is disfavoured with a local significance of 3.9σ. Taking into account the full range of possible X and Y mass values, the compatibility of the observation with the SM expectation leads to a global significance of 1.6σ.
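The gap between the local and global figures is the look-elsewhere effect: scanning many X and Y mass hypotheses multiplies the chance that a fluctuation appears somewhere. The snippet below is only a back-of-the-envelope illustration of that logic using the quoted numbers – the collaboration derives its global significance from dedicated pseudo-experiments, not from this naive trials-factor estimate.

```python
from scipy.stats import norm

# One-sided tail probabilities corresponding to the quoted significances
p_local = norm.sf(3.9)    # local p-value for 3.9 sigma, ~4.8e-5
p_global = norm.sf(1.6)   # global p-value for 1.6 sigma, ~5.5e-2

# Naive approximation: p_global ~ N_trials * p_local, so the implied
# effective number of independent mass hypotheses scanned is
n_trials = p_global / p_local
print(f"p_local = {p_local:.2e}, p_global = {p_global:.2e}")
print(f"implied trials factor ~ {n_trials:.0f}")   # of order 1000
```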

The upcoming LHC Run 3 and future High-Luminosity LHC runs will be crucial in telling us whether these events are statistical fluctuations of the SM expectation, or the first signs of yet another groundbreaking discovery at the LHC.

Spotlight on FCC physics

Ten years after the discovery of a Standard Model-like Higgs boson at the LHC, particle physicists face profound questions lying at the intersection of particle physics, cosmology and astrophysics. A visionary new research infrastructure at CERN, the proposed Future Circular Collider (FCC), would create opportunities to either answer them or refine our present understanding. The latest activities towards the ambitious FCC physics programme were the focus of the 5th FCC Physics Workshop, co-organised with the University of Liverpool as an online event from 7 to 11 February. It was the largest such workshop to date, with more than 650 registrants, and welcomed a geographically and thematically broad community, including members of other “Higgs factory” and future-collider projects.

The overall FCC programme – comprising an electron-positron Higgs and electroweak factory (FCC-ee) as a first stage followed by a high-energy proton-proton collider (FCC-hh) – combines the two key strategies of high-energy physics. FCC-ee offers a unique set of precision measurements to be confronted with testable predictions and opens the possibility for exploration at the intensity frontier, while FCC-hh would enable further precision and the continuation of open exploration at the energy frontier. The February workshop saw advances in our understanding of the physics potential of FCC-ee, and discussions of the possibilities provided at FCC-hh and at a possible FCC-eh facility.

The overall FCC programme combines the two key strategies of high-energy physics: precision measurements at the intensity frontier and the open exploration at the energy frontier

The proposed R&D efforts for the FCC align with the requests of the 2020 update of the European strategy for particle physics and the recently published accelerator and detector R&D roadmaps established by the Laboratory Directors Group and ECFA. Key activities of the FCC feasibility study, including the development of a regional implementation scenario in collaboration with the CERN host states, were presented.

Over the past several months, a new baseline scenario for a 91 km-circumference layout has been established, balancing the optimisation of the machine performance, physics output and territorial constraints. In addition, work is ongoing to develop a sustainable operational model for FCC taking into account human and financial resources and striving to minimise its environmental impact. Ongoing testing and prototyping work on key FCC-ee technologies will demonstrate the technical feasibility of this machine, while parallel R&D developments on high-field magnets pave the way to FCC-hh.

Physics programme
A central element of the overall FCC physics programme is the precise study of the Higgs sector. FCC-ee would provide model-independent measurements of the Higgs width and its coupling to Standard Model particles, in many cases with sub-percent precision and qualitatively different to the measurements possible at the LHC and HL-LHC. The FCC-hh stage has unique capabilities for measuring the Higgs-boson self-interactions, profiting from previous measurements at FCC-ee. The full FCC programme thus allows the reconstruction of the Higgs potential, which could give unique insights into some of the most fundamental puzzles in modern cosmology, including the breaking of electroweak symmetry and the evolution of the universe in the first picoseconds after the Big Bang.

Presentations and discussions throughout the week showed the impressive breadth of the FCC programme, extending far beyond the Higgs factory alone. The large integrated luminosity to be accumulated by FCC-ee at the Z pole enables high-precision electroweak measurements and an ambitious flavour-physics programme. While the latter is still in an early phase of development, it is clear that the number of B mesons and tau-lepton pairs produced at FCC-ee significantly surpasses those at Belle II, making FCC-ee the flavour factory of the 2040s. Ongoing studies are also revealing its potential for studying interactions and decays of heavy-flavour hadrons and tau leptons, which may provide access to new phenomena including lepton-flavour universality-violating processes. Similarly, the capabilities of FCC-ee to study beyond-the-Standard Model signatures such as heavy neutral leptons have come into sharper focus. Interleaved presentations on FCC-ee, FCC-hh and FCC-eh physics also further strengthened the connections between the lepton- and hadron-collider communities.

The impressive potential of the full FCC programme is also inspiring theoretical work. This ranges from overarching studies on our understanding of naturalness, to concrete strategies to improve the precision of calculations to match the precision of the experimental programme.

The physics thrusts of the FCC-ee programme inform an evaluation of the run plan, which will be influenced by technical considerations on the accelerator side as well as by physics needs and the overall attractiveness and timeliness of the different energy stages (ranging from the Z pole at 91 GeV to the tt̄ threshold at 365 GeV). In particular, the possibility of a direct measurement of the electron Yukawa coupling through extensive operation at the Higgs pole (125 GeV) raises unrivalled challenges, which will be further explored within the FCC feasibility study. The main challenge here is to reduce the spread in the centre-of-mass energy by a factor of around ten while maintaining the high luminosity, requiring a monochromatisation scheme long theorised but never applied in practice.

The CLD detector concept (isometric view)

Detector status and plans
Designing detectors to meet the physics requirements of FCC-ee physics calls for a strong R&D programme. Concrete detector concepts for FCC-ee were discussed, helping to establish a coherent set of requirements to fully benefit from the statistics and the broad variety of physics channels available.

The primary experimental challenge at FCC-ee is how to deal with the extremely high instantaneous luminosities. Conditions are most demanding at the Z pole, with the luminosity surpassing 10³⁶ cm⁻²s⁻¹ and the rate of physics events exceeding 100 kHz. Since collisions are continuous, it is not possible to employ the “power pulsing” of the front-end electronics that has been developed for detector concepts at linear colliders. Instead, there is a focus on the development of fast, low-power detector components and electronics, and on efficient and lightweight solutions for powering and cooling. With the enormous data samples expected at FCC-ee, statistical uncertainties will in general be tiny (about a factor of 500 smaller than at LEP). The experimental challenge will be to minimise systematic effects towards the same level.
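For orientation, the factor of 500 is simply the 1/√N scaling of statistical uncertainties applied to the samples involved, taking the 5 × 10¹² Z bosons expected at FCC-ee (quoted below) and the roughly 2 × 10⁷ Z decays recorded at LEP – the LEP figure being the commonly quoted order of magnitude, assumed here for illustration:

\[ \frac{\sigma_{\mathrm{stat}}^{\mathrm{LEP}}}{\sigma_{\mathrm{stat}}^{\mathrm{FCC\text{-}ee}}} \sim \sqrt{\frac{N_Z^{\mathrm{FCC\text{-}ee}}}{N_Z^{\mathrm{LEP}}}} \approx \sqrt{\frac{5\times10^{12}}{2\times10^{7}}} \approx 500. \]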

The mind-boggling integrated luminosities delivered by FCC-ee would allow Standard Model particles – in particular the W, Z and Higgs bosons and the top quark, but also the b and c quarks and the tau lepton – to be studied with unprecedented precision. The expected number of Z bosons produced (5 × 10¹²) is more than five orders of magnitude larger than the number collected at LEP, and more than three orders of magnitude larger than that envisioned at a linear collider. The high-precision measurements and the observation of rare processes made possible by these large data samples will open opportunities for new-physics discoveries, including the direct observation of very weakly coupled particles such as heavy neutral leptons, which are promising candidates to explain the baryon asymmetry of the universe.

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders.

The detectors that will be located at two (possibly four) FCC-ee interaction points must be designed to fully profit from the extraordinary statistics. Detector concepts under study feature: a 2 T solenoidal magnetic field (limited in strength to avoid blow-up of the low-emittance beams crossing at 30 mrad); a small-pitch, thin-layers vertex detector providing an excellent impact-parameter resolution for lifetime measurements; a highly transparent tracking system providing a superior momentum resolution; a finely segmented calorimeter system with excellent energy resolution for electrons and photons, isolated hadrons and jets; and a muon system. To fully exploit the heavy-flavour possibilities, at least one of the detector systems will need efficient particle-identification capabilities allowing π/K separation over a wide momentum range, for which there are ongoing R&D efforts on compact, light RICH detectors.

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders. The CLIC-inspired CLD concept – featuring a silicon-pixel vertex detector and a silicon tracker followed by a 3D-imaging, highly granular calorimeter system (a silicon-tungsten ECAL and a scintillator-steel HCAL) surrounded by a superconducting solenoid and muon chambers interleaved with a steel return yoke – is being adapted to the FCC-ee experimental environment. Further engineering effort is needed to make it compatible with the continuous-beam operation at FCC-ee. Detector optimisation studies are being facilitated by the robust existing software framework which has been recently integrated into the FCC study.

FCC curved silicon

The IDEA (Innovative Detector for Electron-positron Accelerators) concept, specifically developed for a circular electron-positron collider, brings in alternative technological solutions. It includes a five-layer vertex detector surrounded by a drift chamber, enclosed in a single-layer silicon “wrapper”. The distinctive element of the He-based drift chamber is its high transparency: the material budget of the full tracking system, including the vertex detector and the wrapper, amounts to only about 5% (10%) of a radiation length in the barrel (forward) direction. The drift chamber promises superior particle-identification capabilities via the use of a cluster-counting technique that is currently under test-beam study. In the baseline design, a thin low-mass solenoid is placed inside a monolithic, 2 m-deep, dual-readout fibre calorimeter. An alternative (more expensive) design also features a finely segmented crystal ECAL placed immediately inside the solenoid, providing an excellent energy resolution for electrons and photons.

The feedthrough test setup

Recently, work has started on a third FCC-ee detector concept comprising: a silicon vertex detector; a light tracker (drift chamber or full-silicon device); a thin, low-mass solenoid; a highly granular noble-liquid ECAL; a scintillator-iron HCAL; and a muon system. The current baseline ECAL design is based on lead/steel absorbers with liquid argon as the active medium, but a more compact variant based on tungsten and liquid krypton is also being considered. The concept is currently being implemented in the FCC software framework.

All detector concepts are still evolving, and there is ample room for further innovative concepts and ideas.

Closing remarks
Circular colliders reach higher luminosities than linear machines because the same particle bunches are used over many turns, while detectors can be installed at several interaction points. The FCC-ee programme greatly benefits from the possibility of having four interaction points to allow the collection of more data, systematic robustness and better physics coverage — especially for very rare processes that could offer hints as to where new physics could lie. In addition, the same tunnel can be used for an energy-frontier hadron collider at a later stage.

The FCC feasibility study will be submitted by 2025, informing the next update of the European strategy for particle physics. Such a machine could start operation at CERN within a few years after the full exploitation of the HL-LHC in around 2040. CERN, together with its international partners, therefore has the opportunity to lead the way for a post-LHC research infrastructure that will provide a multi-decade research programme exploring some of the most fundamental questions in physics. The geographical distribution of participants in the 5th FCC physics workshop testifies to the global attractiveness of the project. In addition, the ongoing physics and engineering efforts, the cooperation with the host states, the support from the European physics community and the global cooperation to tackle the open challenges of this endeavour, are reassuring for the next steps of the FCC feasibility study.

Graph neural networks boost di-Higgs search

Figure 1

Two fundamental characteristics of the Higgs boson (H) that have yet to be measured precisely are its self-coupling λ, which indicates how strongly it interacts with itself, and its quartic coupling to the vector bosons, which mediate the weak force. These couplings can be directly accessed at the LHC by studying the production of Higgs-boson pairs, which is an extremely rare process occurring about 1000 times less frequently than single-H production. However, several new-physics models predict a significant enhancement in the HH production rate compared to the Standard Model (SM) prediction, especially when the H pairs are very energetic, or boosted. Recently, the CMS collaboration developed a new strategy employing graph neural networks to search for boosted HH production in the four-bottom-quark final state, which is one of the most sensitive modes currently under examination.

H pairs are produced primarily via gluon and vector-boson fusion. The former production mode is sensitive to the self-coupling, while the latter probes the quartic coupling involving a pair of weak vector bosons and two Higgs bosons. The extracted modifiers of the coupling-strength parameters, κλ and κ2V, quantify their strengths relative to the SM expectation.

This latest CMS search targets both production modes and selects two Higgs bosons with a high Lorentz boost. When each Higgs boson decays to a pair of bottom quarks, the two quarks are reconstructed as a single large-radius jet. The main challenge is thus to identify the specific H jet while rejecting the background from light-flavour quarks and gluons. Graph neural networks, such as the ParticleNet algorithm, have been shown to distinguish successfully between real H jets and background jets. Using measured properties of the particles and secondary vertices within the jet cone, this algorithm treats each jet as an unordered set of its constituents, considers potential correlations between them, and assigns each jet a probability to originate from a Higgs-boson decay. At an H-jet selection efficiency of 60%, ParticleNet rejects background jets twice as efficiently as the previous best algorithm (known as DeepAK8). A modified version of this algorithm is also used to improve the H-jet mass resolution by nearly 40%.
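The sketch below illustrates the core idea of such architectures in a few dozen lines of PyTorch: a jet is treated as an unordered point cloud, and features are learned from each particle together with its nearest neighbours before a permutation-invariant pooling. It is a toy EdgeConv-style block – the kind of building block ParticleNet is based on – not the CMS implementation, and every dimension and name is invented for illustration.

```python
import torch
import torch.nn as nn

def knn_indices(x, k):
    """Indices of the k nearest neighbours of each particle.
    x: (batch, n_particles, n_features) point cloud."""
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self

class EdgeConvBlock(nn.Module):
    """EdgeConv-style layer: learn a function of (particle, neighbour - particle)
    pairs, then average over neighbours - permutation invariant by construction."""
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim), nn.ReLU(),
        )

    def forward(self, x):                                     # x: (B, N, F)
        idx = knn_indices(x, self.k)                          # (B, N, k)
        B, N, F = x.shape
        neigh = torch.gather(                                 # neighbour features
            x.unsqueeze(1).expand(B, N, N, F), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, F))        # (B, N, k, F)
        centre = x.unsqueeze(2).expand(B, N, self.k, F)
        edge = torch.cat([centre, neigh - centre], dim=-1)    # edge features
        return self.mlp(edge).mean(dim=2)                     # aggregate over k

class ToyTagger(nn.Module):
    """Toy jet tagger: two EdgeConv blocks, then a global average over particles."""
    def __init__(self, n_features=4):
        super().__init__()
        self.conv1 = EdgeConvBlock(n_features, 32)
        self.conv2 = EdgeConvBlock(32, 64)
        self.head = nn.Linear(64, 1)                          # logit: H jet vs background

    def forward(self, x):
        h = self.conv2(self.conv1(x))
        return self.head(h.mean(dim=1))                       # order-independent pooling

jets = torch.randn(16, 30, 4)   # 16 jets, 30 constituents, 4 features each
print(ToyTagger()(jets).shape)  # torch.Size([16, 1])
```

Because the per-edge network and the averages are symmetric under any reordering of the constituents, the output depends only on the set of particles, mirroring the “unordered set” property described above.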

Using the full LHC Run-2 dataset, the new result excludes an HH production rate larger than 9 times the SM cross-section at 95% confidence level, versus an expected limit of 5. This represents an improvement by a factor of 30 compared to the previous best result for boosted HH production. The analysis yields a strong constraint on the HH production rate and κλ, and the most stringent constraint on κ2V to date, assuming all other H couplings to be at their SM values (see figure 1). For the first time, and with the assumption that the other couplings are consistent with the SM, the result excludes the κ2V = 0 scenario at over five standard deviations, confirming the existence of a quartic coupling between two vector bosons and two Higgs bosons. This search paves the way for a more extensive use of advanced machine-learning techniques, the exploration of the boosted HH production regime, and further investigation into the potentially anomalous character of the Higgs boson in Run 3 and beyond.

Extending the reach on Higgs’ self-coupling

Figure 1

The discovery of the Higgs boson and the comprehensive measurements of its properties provide a strong indication that the mechanism of electroweak symmetry breaking (EWSB) is compatible with the one predicted by Brout, Englert and Higgs (BEH) in 1964. But there remain unprobed features of EWSB, chiefly whether the form of the BEH potential follows the predicted “Mexican hat” shape. One of the parameters that determines the form of the BEH potential is the Higgs boson’s trilinear self-coupling, λ. Experimentally, this fundamental parameter can be measured via Higgs-boson pair (HH) production, where a single virtual Higgs boson splits into two Higgs bosons. However, such a measurement is very challenging as the Standard Model (SM) HH production cross-section is more than 1000 times lower than that of single Higgs-boson production.
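To see where λ enters, one can expand the BEH potential about its minimum (the Higgs vacuum expectation value v ≈ 246 GeV); the cubic term below is exactly what HH production through a virtual Higgs boson probes, and in the SM its strength is fixed by the measured Higgs mass:

\[ V(h) = \tfrac{1}{2} m_H^2 h^2 + \lambda_3\, v\, h^3 + \tfrac{1}{4}\lambda_4\, h^4 , \qquad \lambda_3^{\mathrm{SM}} = \lambda_4^{\mathrm{SM}} = \frac{m_H^2}{2v^2} \approx 0.13 , \]

so a measurement of the trilinear coupling, usually reported as the ratio \(\kappa_\lambda = \lambda_3/\lambda_3^{\mathrm{SM}}\), directly tests whether the potential has the predicted shape.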

Beyond-the-SM (BSM) physics with modified or new Higgs-boson couplings could lead to significantly enhanced HH production. Some BSM scenarios predict new heavy particles that may lead to resonant HH production, in contrast to the non-resonant production that proceeds via the triple-Higgs-boson coupling. New ATLAS results set tight constraints on both the non-resonant and resonant scenarios, showing that the boundaries of what can be achieved with the current and future LHC datasets can be pushed significantly.

The ATLAS collaboration recently released results of searches for HH production in three final states – bbγγ, bbττ and 4b (where one Higgs boson decays into two b quarks and the other into two photons, two tau leptons or two b quarks) – and their combination, exploiting the full LHC Run-2 dataset. The first two analyses target both resonant and non-resonant HH production, while the 4b analysis targets only resonant HH production. These three channels are the most sensitive final states in each scenario. The three decay modes of the second Higgs boson provide good sensitivity in different kinematic regions, so the analyses are highly complementary. The HH → bbγγ process has the lowest branching ratio, but a high efficiency to trigger on and reconstruct photons, as well as an excellent diphoton mass resolution, leading to the best sensitivity at low HH invariant masses. The HH → 4b final state has the highest branching ratio, but suffers from high transverse-momentum b-jet trigger thresholds, ambiguity in the Higgs-boson reconstruction and a large multijet background; however, it provides the best sensitivity at high HH invariant masses. Finally, the HH → bbττ decay has a moderate branching ratio and moderate background contamination, giving the best sensitivity in the intermediate HH mass range.

BSM physics with new Higgs-boson couplings could lead to significantly enhanced HH production

With the latest analyses, a remarkably stringent observed (expected) upper limit of 3.1 (3.1) times the SM prediction on non-resonant HH production was obtained at 95% confidence level (CL). The Higgs-boson trilinear self-coupling modifier κλ – the coupling strength in units of the SM value – is observed (expected) to be constrained between –1.0 and 6.6 (–1.2 and 7.2) at 95% CL (see figure 1). These are the world’s tightest constraints on this process. The observed (expected) exclusion limits at 95% CL on the resonant HH production cross-section range between 1.1 and 595 fb (1.2 and 392 fb) for resonance masses between 250 and 5000 GeV.

The sensitivity of the current analyses is still limited by statistical uncertainties and is expected to improve significantly with the larger datasets of LHC Run 3 and the HL-LHC programme. Compared with previous partial Run-2 results, the limits improve by more than a factor of three. A factor of two was expected from the larger dataset; the remaining improvement arises from better object-reconstruction and identification techniques and new analysis methods.

These latest results inspire confidence that the observation of the SM HH production and a precise measurement of the Higgs-boson trilinear self-coupling may be possible at the HL-LHC.

Crab cavities enter next phase

The imminent start of LHC Run 3 following a vast programme of works completed during Long Shutdown 2 marks a milestone for the CERN accelerator complex. When stable proton beams return to the LHC this year (see LHC Run 3: the final countdown), they will collide at higher energies (13.6 compared to 13 TeV) and with higher luminosities (containing up to 1.8 × 10¹¹ protons per bunch compared to 1.3–1.4 × 10¹¹) than in Run 2. Physicists working on the LHC experiments can therefore look forward to a rich harvest of results during the next three years. After Run 3, the statistical gain in running the accelerator without a significant luminosity increase beyond its design and ultimate values will become marginal. Therefore, to maintain scientific progress and to exploit its full capacity, the LHC is undergoing upgrades that will allow a decisive increase of its luminosity during Run 4, expected to begin in 2029, and beyond.

Several technologies are being developed for this High-Luminosity LHC (HL-LHC) upgrade. One is new, large-aperture quadrupole magnets based on a niobium-tin superconductor. These will be installed on either side of the ATLAS and CMS experiments, providing the space required for smaller beam-spot sizes at the interaction points and shielding against the higher radiation levels when operating at increased luminosities. The other key technology, necessary to take advantage of the smaller beam-spot size at the interaction points, is a series of superconducting radio-frequency (RF) “crab” cavities that enlarge the overlap area of the incoming bunches and thus increase the probability of collisions. Never used before at a hadron collider, a total of 16 compact crab cavities will be installed on either side of each of ATLAS and CMS once Run 3 ends and Long Shutdown 3 begins.

The crab-cavity test facility

At a collider such as the LHC, it is imperative that the two counter-circulating beams are physically separated by an angle, known as the crossing angle, such that bunches collide in only one location over the common interaction region (where the two beams share the same beam pipe). The bunches at the HL-LHC will be 10 cm long and only 7 μm wide at the collision points, resembling long, thin wires. As a result, even a very small angle between the bunches implies an immediate loss in luminosity. With the use of powerful superconducting crab cavities, the tilt of the bunches at the collision point can be precisely controlled to make it optimal for the experiments and fully exploit the scientific potential of the HL-LHC.
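The scale of the loss can be estimated with the standard geometric reduction factor for collisions at a crossing angle θc, with bunch length σz and transverse beam size σx* at the interaction point. Inserting the bunch dimensions quoted above and a full crossing angle of order 500 μrad (an assumed, commonly quoted HL-LHC value, not a number from this article):

\[ F = \left[1+\left(\frac{\theta_c\,\sigma_z}{2\,\sigma_x^*}\right)^{2}\right]^{-1/2} \approx \left[1+\left(\frac{500\,\mu\mathrm{rad}\times 10\,\mathrm{cm}}{2\times 7\,\mu\mathrm{m}}\right)^{2}\right]^{-1/2} \approx 0.3 , \]

i.e. without crabbing roughly 70% of the head-on luminosity would be lost; rotating the bunches so that they overlap head-on at the collision point recovers it.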

Radical concepts 

The tight space constraints from the relatively small separation of the two beams outside the common interaction region require a radically new RF concept for particle deflection, employing a novel shape and significantly smaller cavities than those used in other accelerators. Designs for such devices began around 10 years ago, with CERN settling on two types: double quarter wave (DQW) and RF-dipole (RFD). The former will be fitted around CMS, where bunches are separated vertically, and the latter around ATLAS, where bunches will be separated horizontally, requiring crab cavities uniquely designed for each plane. It is also planned to swap the crossing-angle planes and crab-cavity installations at a later stage during HL-LHC operation.

The RF-dipole cavity

In 2017, two prototype DQW-type cavities were built and assembled at CERN into a special cryomodule and tested at 2 K, validating the mechanical, cryogenic and RF functioning. The module was then installed in the Super Proton Synchrotron (SPS) for beam tests, with the world’s first “crabbing” of a proton beam demonstrated on 31 May 2018. In parallel, the fabrication of two prototype RFD-type cavities from high-purity niobium was underway at CERN. Following the integration of the devices into a titanium helium tank at the beginning of 2021, and successful tests at 2 K reaching voltages well beyond the nominal value of 3.4 MV, the cavities were equipped with specially designed RF couplers, which are necessary for beam operations. The two cavities are now being integrated into a cryomodule at Daresbury Laboratory in the UK as a joint effort between CERN and the UK’s Science and Technology Facilities Council (STFC). The cryomodule will be installed in a 15 m-long straight section (LSS6) of the SPS in 2023 for its first test with proton beams. This location in the SPS is equipped with a special by-pass and other services, which were put in place in 2017–2018 to test and operate the DQW-type module. 

The manufacturing challenge 

Due to the complex shape and micrometric tolerances required for the HL-LHC crab cavities, a detailed study was performed to realise the final shape through forming, machining, welding and brazing operations on the high-purity niobium sheets and associated materials (see “Fine machining” images). To ensure a uniform removal of material along the cavities’ complex shape, a rotational buffer chemical polishing (BCP) facility was built at CERN for surface etching of the HL-LHC crab cavities. For the RFD and DQW, the rotational setup etches approximately 250 μm off the internal RF surface to remove the layer damaged during the forming process. Ultrasound measurements were performed to follow the evolution of the cavity-wall thickness during the BCP steps, showing remarkable uniformity (see “Chemical etching” images).

The chemical-etching setup

Preparation of the RFD cavities involved a similar process to that used for the DQW modules. Following chemical etching and a very high-temperature bake at 650 °C in a vacuum furnace, the cavities are rinsed in ultra-pure water at high pressure (100 bar) for approximately seven hours. This process has proven to be a key step in the HL-LHC crab-cavity preparation, enabling extremely high fields and suppressing electron-field emitters, which can limit performance. The cavity is then closed with its RF ancillaries in an ISO4 cleanroom environment to preserve the ultra-clean RF surface, and installed in a special vertical cryostat to cool the cavity surface to its 2 K operating temperature (see “Clean and cool” image, top). Both RFD cavities reached performances well above the nominal target of 3.4 MV: RFD1 exceeded the nominal voltage by more than 50%, and RFD2 reached more than twice the nominal value (7 MV) – a world-record deflecting field in this frequency range. These performances were reproducible after the assembly and welding of the helium tank, owing to the careful preparation of the RF surface throughout the different steps of assembly and preparation.

RF dipole cavity and cold magnetic shield

The helium tank provides a volume around the cavity surface that is maintained at 2 K with superfluid helium (see “Clean and cool” image, bottom). Due to sizeable deformations during the cool-down from ambient temperature, a titanium vessel, which has a thermal behaviour close to that of the niobium cavity, is used. A magnetic shield between the cavity and the helium tank suppresses stray fields in the operating environment and further preserves cavity performance. Following the tests with helium tanks, the cavities were equipped with higher-order-mode couplers and field antennae to undergo a final test at 2 K before being assembled into a two-cavity string for the cryomodule.

The crab cavities require many ancillary components to allow them to function. This overall system is known as a cryomodule (see “Cryomodule” image, top) and ensures that the operational environment is correct, including the temperature, stability, vacuum conditions and RF frequency of the cavities. Technical challenges arise due to the need to assemble the cavity string in an ISO4 cleanroom, the space constraints of the LHC (leading to the rectangular compact shape), and the requirement of fully welded joints (where typically “O” rings would be used for the insulation vacuum).

Design components

The outer vacuum chamber (OVC) of the cryomodule provides an insulation vacuum to prevent heat leaking to the environment as well as providing interfaces to any external connections. Manufactured by ALCA Technology in Italy, the OVC used a rectangular design where the cavity string is mounted to a top-plate that is lowered into the rest of the OVC, and includes four large windows to allow access for repair in situ if required (see “Cryomodule” image, bottom). Since the first DQW prototype module, several cryomodule interfaces including cryogenic and vacuum components were updated to be fully compatible with the final installation in the HL-LHC. 

The HL-LHC crab-cavity programme has developed into a mature project supported by a large number of collaborating institutions around the world

Since superconducting RF cavities can have a higher surface resistance if cooled below their transition temperature in the presence of a magnetic field, they need to be shielded from Earth’s magnetic field and stray fields in the surrounding environment. This is achieved using a warm magnetic shield mounted in the OVC, and a cold magnetic shield mounted inside the liquid-helium vessel. Both shields, which are made from special nickel-iron alloys, are manufactured by Magnetic Shields Ltd in the UK.

Status and outlook

The RFD crab-cavity pre-series cryomodule will be assembled this year at Daresbury lab, where the infrastructure on site has been upgraded, including an extension to the ISO4 cleanroom area and the introduction of an ISO6 preparation area. A bespoke five-tonne crane has also been installed and commissioned to allow the precise lowering of the delicate cavity string into the outer vacuum vessel.

RF dipole cryomodule and outer vacuum vessel

Parallel activities are taking place elsewhere. The HL-LHC crab-cavity programme has developed into a mature project supported by a large number of collaborating institutions around the world. In the US, the Department of Energy is supporting the HL-LHC Accelerator Upgrade Project to coordinate the efforts and leverage the expertise of a group of US laboratories and universities (FNAL, BNL, JLAB, SLAC, ODU) to deliver the series RFD cavities for the HL-LHC. In 2021, two RFD prototype cavities were built by the US collaboration and exceeded the two most important functional project requirements for crab cavities – deflecting voltage and quality factor. After this successful demonstration, the fabrication of the pre-series cavities was launched.

Crab cavities were first implemented in an accelerator in 2006, at the KEKB electron–positron collider in Japan, where they helped the collider reach record luminosities. A different “crab-waist” scheme is currently employed at KEKB’s successor, SuperKEKB, helping to reach even higher luminosities. The development of ultra-compact, very-high-field cavities for a high-energy hadron collider such as the HL-LHC is even more challenging, and will be essential to maximise the scientific output of this flagship facility beyond the 2030s. 

Beyond the HL-LHC, the compact crab-cavity concepts have been adopted by future facilities, including the proton–proton stage of the proposed Future Circular Collider; the Electron–Ion Collider under construction at Brookhaven; bunch compression in synchrotron X-ray sources to produce shorter pulses; and ultrafast particle separators in proton linacs to separate bunches of secondary particles for different experiments. The full implementation of this technology at the HL-LHC is therefore keenly awaited. 

Form follows function in QCD

Hadron form factors

In the 1970s, the study of low-energy (few GeV) hadron–hadron collisions in bubble chambers was all the rage. It seemed that we understood very little. We had the SU(3) of flavour, Regge theory and the S-matrix to describe hadronic processes, but no overarching theory. Of course, theorists were already working on perturbative QCD, and this started to gain traction when experimental results from the Big European Bubble Chamber at CERN showed signs of scaling violations and provided an early measurement of the QCD scale, ΛQCD. We have been living with the predictions of perturbative QCD ever since, at increasingly higher orders. But there have always been non-perturbative inputs, such as the parton distribution functions.

Hadron Form Factors: From Basic Phenomenology to QCD Sum Rules takes us back to low-energy hadron physics and shows us how much more we know about it today. In particular, it explores the formalism for heavy-flavour decays, which is particularly relevant at a time when it seems that the only anomalies we observe with respect to the Standard Model appear in various B-meson decays. It also explores the connections between space-like and time-like processes in terms of QCD sum rules connecting perturbative and non-perturbative behaviour.

The book takes us back to low-energy hadron physics and shows us how much more we know about it today

The general introduction reminds us of the formalism of form factors in the atomic case. This is generalised to mesons and baryons in chapters 2 and 3, after the introduction of QCD in chapter 1, with an emphasis on quark and gluon electroweak currents and their generalisation to effective currents. Hadron spectroscopy is reviewed from a modern perspective and heavy-quark effective theory is introduced. In chapter 2, the formalism for the pion form factor, which is related to the pion decay constant, is introduced via e–π scattering. Due emphasis is placed on how one may measure these quantities. I also appreciated the explanation of how a pseudoscalar particle such as the pion can decay via the axial vector current – a question often raised by smart undergraduates. (Clue: the axial vector current is not conserved.) Next, the πe3 decay is considered and generalised to K-, D- and B-meson semileptonic decays. Chapter 3 covers the baryon form factors and their decay constants, and chapter 4 considers hadronic radiative transitions. Chapter 5 relates the pion form factor in the space-like region to its counterpart in the time-like region in e⁺e⁻ → π⁺π⁻, where one has to consider resonances and widths. Relationships are developed whereby one can see that, by measuring pion and kaon form factors in e⁺e⁻ scattering, one can predict the widths of decays such as τ → ππν and τ → KKν. In chapter 6, non-local hadronic matrix elements are introduced to extend the formalism to deal with decays such as π → γγ and B → Kμμ.

The book shifts gears in chapters 7–10. Here, QCD is used to calculate hadronic matrix elements. Chapter 7 covers the calculation of form factors in the infinite-momentum frame, whereby the asymptotic form factor can be expressed in terms of the pion decay constant and a pion distribution amplitude describing the momentum distribution between two valence partons in the pion. In chapter 8, the QCD sum rules are introduced. The two-point correlation of quark current operators can be calculated in perturbative QCD at large space-like momenta, and the result is expressed in terms of perturbative contributions and the QCD vacuum condensates. This can then be related through the sum rule to the hadronic degrees of freedom in the time-like region. Such sum rules are used both to gain information on condensate densities and quark masses from accurate hadronic data, and to obtain hadronic decay constants and masses from QCD calculations. The connection is made to parton–hadron duality and to the operator product expansion. Some illustrative examples of the technique, such as the calculation of the strange-quark mass and the pion decay constant, are also given. Chapter 9 concerns the light-cone expansion and light-cone dominance, which is then used to explain the role of light-cone sum rules in chapter 10. The use of these sum rules in calculating hadron form factors is illustrated with the pion form factor, and also with the heavy-to-light form factors necessary for B → π, B → K, D → π, D → K and B → D decays.

Overall, this book is not an easy read, but there are many useful insights. This is essentially a textbook, and a valuable reference work that belongs in the libraries of particle-physics institutes around the world.

Your Adventures at CERN: Play the Hero Among Particles and a Particular Dinosaur!

Your adventures at CERN

Billed as a bizarre adventure filled with brain-tickling facts about particles and science wonders, Your Adventures at CERN invites young audiences to experience a visit to CERN in different guises.

The reader can choose one of three characters, each with a different story: a tourist, a student and a researcher. The stories are intertwined, and the choices the reader makes throughout the book change the journey, rather than following a linear chronology. The stories are filled with puzzles, mazes, quizzes and many other games that challenge the reader. Engaging physics references and explanations, as well as the solutions to the quizzes, are given at the back of the book.

Author Letizia Diamante, a biochemist turned science communicator who previously worked in the CERN press office, portrays the CERN experience in an engaging and understandable way. The adventures are illustrated with funny jokes and charismatic characters, such as “Schrödy”, a hungry cat that guides the reader through the adventures in exchange for food. Detailed hand-drawn illustrations by Claudia Flandoli are included, together with photographs of CERN facilities that take the reader directly into the heart of the lab. Moreover, the book includes several historical facts about particle physics and other topics, such as the city of Geneva and the extinct dinosaurs from the Jurassic era, which is named after the nearby Jura mountains on the border between France and Switzerland. A particle-physics glossary and extra information, such as fun cooking recipes, are also included at the end.

Although targeted mainly at children, this book is also suitable for teenagers and adults looking for a soft introduction to high-energy physics and CERN, offering a refreshing addition to the more mainstream popular particle-physics literature.

Fear of a Black Universe: an outsider’s guide to the future of physics

Fear of a Black Universe

Stephon Alexander is a professor of theoretical physics at Brown University, specialising in cosmology, particle physics and quantum gravity. He is also a self-professed outsider, as the subtitle of his latest book Fear of a Black Universe suggests. His first book, The Jazz of Physics, was published in 2016. Fear of a Black Universe is a rallying cry for anyone who feels like a misfit because their identity or outside-the-box thinking doesn’t mesh with cultural norms. By interweaving historical anecdotes and personal experiences, Alexander shows how outsiders drive innovation by making connections and asking questions insiders might dismiss as trivial.

Alexander is Black and internalised his outsider sense early in his career. As a postdoc in the early 2000s, he found that his attempts to engage with other postdocs in his group were rebuffed. He eventually learned from his friend Brian Keating, who is white, the reason why: “They feel that they had to work so hard to get to the top and you got in easily, through affirmative action”. Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating: “I’ve come to realise that when you fit in, you might have to worry about maintaining your place in the proverbial club… so I eventually became comfortable being the outsider. And since I was never an insider, I didn’t have to worry that colleagues might laugh at me for my unlikely approach.”

Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating

Alexander argues that true breakthroughs come from “deviants”. He draws parallels between outsiders in physics and graffiti artists, who were considered vandals until the art world recognised their talent and contributions. Alexander recounts his own “deviance” in a humorous and sometimes self-deprecating manner. He recalls a talk he gave at a conference about his first independent paper, which involved reinterpreting the universe as a three-dimensional membrane orbiting a five-dimensional black hole. During the talk he was often interrupted, eventually prompting a well-respected Indian physicist to stand up and shout “Let him finish! No one ever died from theorising.”

Alexander took these words to heart, and asks his readers to do the same during the speculative discussions in the second part of his book. Here, Alexander intersperses mainstream physics with some of his self-described “strange” ideas, acknowledging that some readers might write him off as an “oddball crank”. He explores the intersection of physics with philosophy, biology, consciousness and searches for extraterrestrial life. Some sections – such as the chapter on alien quantum computers generating the effect of dark energy – feel more like science fiction than science. But Alexander reassures readers that, while many of his ideas are strange, so are many experimentally verified tenets of physics. “In fact, the likelihood that any one of us will create a new paradigm because we have violated the norms… is very slim,” he observes.

Science-wise, this book is not for the faint-hearted. While many other public-facing physics books wade readers slowly into early-20th-century physics and touch on more abstract concepts only in the final chapters, part I of Fear of a Black Universe dives directly into relativity, quantum mechanics and emergence. Part II then launches into a much deeper discussion of supersymmetry, baryogenesis, quantum gravity and quantum computing. But the strength of Alexander’s new work isn’t in its retellings of Einstein’s thought experiments or even its deconstruction of today’s cosmological enigmas. More than anything, this book makes a case for cultivating diversity in science that goes beyond “gesticulations of identity politics”.

Fear of a Black Universe is both mind-bending and refreshing. It approaches physics with a childlike curiosity and allows the reader to playfully contemplate questions many have but few discuss for fear of sounding like a crank. This book will be enjoyable for scientists and science enthusiasts who can set cultural norms aside and just enjoy the ride.
