Cornering compressed SUSY

CMS figure 1

Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The masses of SUSY sparticles have very important implications for both the physics of our universe and how they could be potentially produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted to the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to continue to decay to lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and LSPs is small. This means the low rates of production of massive sparticles are not accompanied by high-momentum decay products in the detector. Most of their mass ends up escaping in the form of invisible particles, significantly complicating observation.

This new CMS result turns this difficulty on its head, using a kinematic observable RISR, which is directly sensitive to the mass of LSPs as opposed to the mass difference between parent sparticles and LSPs. The result is even better discrimination between SUSY and SM backgrounds when sparticle spectra are more compressed.

This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum-kick, with this fraction telling us about the LSP masses through the RISR observable.
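The momentum-sharing logic can be sketched numerically. The definition below is a simplified, transverse-plane illustration (an assumption for this sketch, not the full analysis definition, which uses a recursive decomposition of the event): RISR is taken as the projection of the missing transverse momentum onto the direction opposite the ISR system, divided by the ISR momentum. In the fully compressed limit the invisible LSPs absorb essentially the whole kick and the ratio tends to one.

```python
import math

def r_isr(p_miss, p_isr):
    """Simplified R_ISR: missing transverse momentum projected onto the
    direction opposite the ISR system, divided by the ISR momentum."""
    mag = math.hypot(p_isr[0], p_isr[1])
    # the sparticle system recoils opposite to the ISR jets
    u = (-p_isr[0] / mag, -p_isr[1] / mag)
    return (p_miss[0] * u[0] + p_miss[1] * u[1]) / mag

# Fully compressed spectrum: the invisible LSPs balance the ISR jet exactly.
print(r_isr((-300.0, 0.0), (300.0, 0.0)))   # -> 1.0

# Less compressed: visible decay products carry half of the recoil.
print(r_isr((-150.0, 0.0), (300.0, 0.0)))   # -> 0.5
```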

Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with RISR and other selected discriminating variables, are used to categorise recorded events, and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of RISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in RISR, spread over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.

In the absence of significant deviations of the observed data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of top squarks (stops), electroweakinos and sleptons. As can be seen in the results for stops (figure 1), the analysis achieves excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space in which it could be hiding.

Chinese space station gears up for astrophysics

Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations have been developing a number of science missions, ranging from optical astronomy to the detection of cosmic rays with PeV energies.

For optical astronomy, the space station will be accompanied by the Xuntian telescope, whose name translates as “survey the heavens”. Xuntian is currently planned to be launched in mid-2025 to fly alongside Tiangong, thereby allowing for regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing the observation of many objects at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies, where large statistics for astronomical objects are typically required to study their evolution.

Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime that contains emission lines from neutral hydrogen and other elements. While difficult to measure, this allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons in the standard “ΛCDM” cosmological model can be accounted for.

At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material depends strongly on its temperature, allowing the detection of the minuscule temperature increases produced when X-rays are absorbed. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been emitted by the Milky Way during an active stage of its central black hole. Its high energy resolution will allow Doppler shifts of the order of several eV to be measured, requiring the TES detectors to operate at 50 mK. Achieving such temperatures demands a cooling system drawing 640 W – a power level that is difficult to achieve on a satellite, but relatively easy to acquire on a space station. As such, DIXE will be one of the first detectors to use this new technology when it launches in 2025, paving the way for missions such as ESA’s ATHENA, which plans to use it from 2037.
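The chain from absorbed photon to measurable signal can be made concrete with a toy calculation. All parameter values below are illustrative assumptions chosen for this sketch, not DIXE design numbers: an absorbed photon of energy E warms the absorber by ΔT = E/C, and the steepness of the transition, characterised by α = (T/R)·dR/dT, converts that into a fractional resistance change.

```python
E_X = 6000 * 1.602e-19   # a 6 keV X-ray photon, converted to joules (assumed energy)
C = 1e-11                # absorber heat capacity at 50 mK, J/K (illustrative assumption)
T0 = 0.050               # TES operating temperature, K (50 mK, as in the text)
ALPHA = 100.0            # transition steepness alpha = (T/R) dR/dT (illustrative)

dT = E_X / C               # temperature rise from a single absorbed photon
frac_dR = ALPHA * dT / T0  # resulting fractional resistance change dR/R

print(f"dT = {dT * 1e6:.0f} microkelvin, dR/R = {frac_dR:.0%}")
```

Even with these rough numbers, a microkelvin-scale temperature rise yields a resistance change of several percent, which is why the transition edge must be held at a stable 50 mK.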

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up

POLAR-2 was accepted as an international payload on the China space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor Tiangong-2. Like its earlier incarnation, POLAR-2 measures the polarisation of high-energy X rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in the sky, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has resulted in the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. Also now foreseen for this second payload is an additional wide-field-of-view X-ray polarimeter. The international team developing the three instruments, which are scheduled to be launched in 2027, is led by the Institute of High Energy Physics in Beijing.

For studying the universe using even higher energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths long and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers, which, in combination with the deep calorimeter, will provide a better angular resolution and a geometrical acceptance 30 times larger than the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays on the European side regarding participation, combined with a launch requirement of 2027, mean that it is currently foreseen to be a fully Chinese mission.

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.

Taking the lead in the monopole hunt

ATLAS figure 1

Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV from Run 3 of the Large Hadron Collider.
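Dirac's argument can be stated in one line. In Gaussian units, the quantisation condition linking an electric charge q to a magnetic charge g reads

```latex
q \, g = \frac{n \hbar c}{2}, \qquad n \in \mathbb{Z},
```

so that the existence of even a single monopole of charge g would force every electric charge to be an integer multiple of ħc/2g.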

The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions by the strong electromagnetic fields (∝ Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes such as monopole-pair production to stand out with visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, which would leave a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).

The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the strong magnetic fields of ultraperipheral collisions. This approach offers a more robust theoretical framework for the search.

ATLAS figure 2

The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.

Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.

No significant monopole signal is observed beyond the expected background, with the latter being estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search will open new possibilities to study other highly ionising particles that may emerge from beyond-Standard Model physics.

Isolating photons at low Bjorken x

ALICE figure 1

In high-energy collisions at the LHC, prompt photons are those that do not originate from particle decays and are instead directly produced by the hard scattering of quarks and gluons (partons). Due to their early production, they provide a clean method to probe the partons inside the colliding nucleons, and in particular the fraction of the momentum of the nucleon carried by each parton (Bjorken x). The distribution of each parton in Bjorken x is known as its parton distribution function (PDF).

Theoretical models of particle production rely on precise knowledge of PDFs, which are derived from vast amounts of experimental data. The high centre-of-mass energies (√s) at the LHC probe very small values of the momentum fraction, Bjorken x. At “midrapidity”, where a parton scatters at a large angle with respect to the beam axis and a prompt photon is produced in the final state, a useful approximation to Bjorken x is provided by the dimensionless variable xT = 2pT/√s, where pT is the transverse momentum of the prompt photon.

Prompt photons can also be produced by next-to-leading order processes such as parton fragmentation or bremsstrahlung. A clean separation of the different prompt photon sources is difficult experimentally, but fragmentation can be suppressed by selecting “isolated photons”. For a photon to be considered isolated, the sum of the transverse energies or transverse momenta of the particles produced in a cone around the photon must be smaller than some threshold – a selection that can be done both in the experimental measurement and theoretical calculations. An isolation requirement also helps to reduce the background of decay photons, since hadrons that can decay to photons are often produced in jet fragmentation.

The ALICE collaboration now reports the measurement of the differential cross-section for isolated photons in proton–proton collisions at √s = 13 TeV at midrapidity. The photon measurement is performed by the electromagnetic calorimeter, and the isolated photons are selected by combining with the data from the central inner tracking system and time-projection chamber, requiring that the summed pT of the charged particles in a cone of angular radius 0.4 radians centred on the photon candidate be smaller than 1.5 GeV/c. The isolated photon cross-sections are obtained within the transverse momentum range from 7 to 200 GeV/c, corresponding to 1.1 × 10⁻³ < xT < 30.8 × 10⁻³.
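The mapping between the measured photon pT range and the quoted Bjorken-x proxy follows directly from the definition xT = 2pT/√s:

```python
SQRT_S = 13000.0  # pp collision energy in GeV (13 TeV)

def x_t(pt_gev):
    """x_T = 2*pT/sqrt(s): the midrapidity proxy for Bjorken x."""
    return 2.0 * pt_gev / SQRT_S

# End points of the measured photon pT range, 7 to 200 GeV/c:
print(f"{x_t(7):.1e}")    # -> 1.1e-03
print(f"{x_t(200):.1e}")  # -> 3.1e-02
```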

Figure 1 shows the new ALICE results alongside those from ATLAS, CMS and prior measurements in proton–proton and proton–antiproton collisions at lower values of √s. The figure spans more than 15 orders of magnitude on the y-axis, representing the cross-section, over a wide range of xT. The present measurement probes the smallest Bjorken x with isolated photons at midrapidity to date. The data points from all the measurements agree when scaled with the collision energy to the power n = 4.5. Such a scaling is designed to cancel the predicted 1/(pT)^n dependence of partonic 2 → 2 scattering cross-sections in perturbative QCD and reveal insights into the gluon PDF (see “The other 99%”).

This measurement will help to constrain the gluon PDF and will play a crucial role in exploring medium-induced modifications of hard probes in nucleus–nucleus collisions.

R(D) ratios in line at LHCb

LHCb figure 1

The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation in the framework of the Standard Model (SM). The b → cτντ transition has the potential to reveal new particles or forces that interact primarily with third-generation particles, which are subject to less stringent experimental constraints at present. As a tree-level SM process mediated by W-boson exchange, its amplitude is large, resulting in large branching fractions and significant data samples to analyse.

The observable under scrutiny is the ratio of decay rates between the signal mode involving τ and ντ leptons from the third generation of fermions and the normalisation mode containing μ and νμ leptons from the second generation. Within the SM, this lepton flavour universality (LFU) ratio deviates from unity only due to the different mass of the charged leptons – but new contributions could change the value of the ratios. A longstanding tension exists between the SM prediction and the experimental measurements, requiring further input to clarify the source of the discrepancy.

The LHCb collaboration analysed four decay modes: B0 → D(*)+ℓ−ν̄ℓ, with ℓ representing τ or μ. Each is selected using the same visible final state of one muon and light hadrons from the decay of the charm meson. In the normalisation mode, the muon originates directly from the B-hadron decay, while in the signal mode, it arises from the decay of the τ lepton. The four contributions are analysed simultaneously, yielding two LFU ratios between taus and muons – one using the ground state of the D+ meson and the other the excited state D*+.
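Written out, the two ratios extracted from the simultaneous fit take the standard LFU form

```latex
R(D^{(*)+}) = \frac{\mathcal{B}(B^0 \to D^{(*)+}\tau^-\bar{\nu}_\tau)}
                   {\mathcal{B}(B^0 \to D^{(*)+}\mu^-\bar{\nu}_\mu)} ,
```

which the SM predicts to differ from unity only through the difference between the τ and μ masses.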

The control of the background contributions is particularly complicated in this analysis, as the final state is not fully reconstructible, limiting the resolution on some of the discriminating variables. Instead, a three-dimensional template fit separates the signal and normalisation modes from the background using three discriminating variables: the momentum transferred to the lepton pair (q2); the energy of the muon in the rest frame of the B meson (Eμ*); and the invariant mass missing from the visible system. Each contribution is modelled using a template histogram derived either from simulation or from selected control samples in data.
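The third fit variable is the squared missing mass, the invariant mass of all unreconstructed particles,

```latex
m_{\mathrm{miss}}^2 = \left( p_B - p_{\mathrm{vis}} \right)^2 ,
```

where p_B and p_vis are the four-momenta of the B meson and of the visible final-state particles.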

This constitutes the world’s second most precise measurement of R(D)

To prevent the simulated data sample size from becoming a limiting factor in the precision of the measurement, a fast tracker-only simulation technique was exploited for the first time in LHCb. Another novel aspect of this work is the use of the HAMMER software tool during the minimisation of the likelihood fit, which enables a fast but exact variation of a template as a function of the decay-model parameters. This is important because it allows the form factors of both the signal and normalisation channels to float in the fit: the constraints derived from predictions based on precise lattice calculations can have larger uncertainties than those obtained from the fit itself.

The fit projection over one of the discriminating variables is shown in figure 1, illustrating the complexity of the analysed data sample but nonetheless showcasing LHCb’s ability to distinguish the signal modes (red and orange) from the normalisation modes (two shades of blue) and background contributions.

The measured LFU ratios are in good agreement with the current world average and the predictions of the SM: R(D+) = 0.249 ± 0.043 (stat.) ± 0.047 (syst.) and R(D*+) = 0.402 ± 0.081(stat.) ± 0.085 (syst.). Under isospin symmetry assumptions, this constitutes the world’s second most precise measurement of R(D), following a 2019 measurement by the Belle collaboration. This analysis complements other ongoing efforts at LHCb and other experiments to test LFU across different decay channels. The precision of the measurements reported here is primarily limited by the size of the signal and control samples, so more precise measurements are expected with future LHCb datasets.

Dark matter: evidence, theory and constraints

Dark Matter: Evidence, Theory and Constraints

Cold non-baryonic dark matter appears to make up 85% of the matter and 25% of the energy in our universe. However, we don’t yet know what it is. As the opening lines of many research proposals state, “The nature of dark matter is one of the major open questions in physics.”

The evidence for dark matter comes from astronomical and cosmological observations. Theoretical particle physics provides us with various well-motivated candidates, such as weakly interacting massive particles (WIMPs), axions and primordial black holes. Each has different experimental and observational signatures, and a wide range of searches are taking place. Dark-matter research spans a very broad range of topics and methods, making it a challenging field to enter and master. Dark Matter: Evidence, Theory and Constraints by David Marsh, David Ellis and Viraf Mehta, the latest addition to the Princeton Series in Astrophysics, clearly presents the relevant essentials of all of these areas.

The book starts with a brief history of dark matter and some warm-up calculations involving units. Part one outlines the evidence for dark matter, on scales ranging from individual galaxies to the entire universe. It compactly summarises the essential background material, including cosmological perturbation theory.

Part two focuses on theories of dark matter. After an overview of the Standard Model of particle physics, it covers three candidates with very different motivations, properties and phenomenology: WIMPs, axions and primordial black holes. Part three then covers both direct and indirect searches for these candidates. I particularly like the schematic illustrations of experiments; they should be helpful for theorists who want to (and should!) understand the essentials of experimental searches.

The main content finishes with a brief overview of other dark-matter candidates. Some of these arguably merit more extensive coverage, in particular sterile neutrinos. The book ends with extensive recommendations for further reading, including textbooks, review papers and key research papers.

Dark-matter research spans a broad range of topics and methods, making it a challenging field to master

The one thing I would argue with is the claim in the introduction that dark matter has already been discovered. I agree with the authors that the evidence for dark matter is strong and currently cannot all be explained by modified gravity theories. However, given that all of the evidence for dark matter comes from its gravitational effects, I’m open to the possibility that our understanding of gravity is incorrect or incomplete. The authors are also more positive than I am about the prospects for dark-matter detection in the near future, claiming that we will soon know which dark-matter candidates exist “in the real pantheon of nature”. Optimism is a good thing, but this is a promise that dark-matter researchers (myself included…) have now been making for several decades.

The conversational writing style is engaging and easy to read. The annotation of equations with explanatory text is novel and helpful, and the inclusion of numerous diagrams – simple and illustrative where possible and complex when called for – aids understanding. The attention to detail is impressive. I reviewed a draft copy for the publishers, and all of my comments and suggestions have been addressed in detail.

This book will be extremely useful to newcomers to the field, and I recommend it strongly to PhD students and undergraduate research students. It is particularly well suited as a companion to a lecture course, with numerous quizzes, problems and online materials, including numerical calculations and plots using Jupyter notebooks. It will also be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The B’s Ke+es

The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, when three- and four-sigma anomalies abounded: the inclusive/exclusive puzzles; the illuminatingly named P5′ observable; and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ− and Bs → φe+e−, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long- and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ− and B → K(*)e+e− were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to prospects for applying lattice methods to D0–D̄0 mixing: such studies are currently limited to the SU(3)-flavour symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle gamma. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements, ranging from CP violation to searches for very rare decays and form-factor studies. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states, with a big open discussion on the underlying structure of the 20 or so such states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration showed its most recent publicly available measurement of the weak-mixing angle, in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb has put an emphasis on combined QCD + QED and effective-field-theory calculations, and on the interplay between EW precision observables and new-physics effects in couplings to the third generation. A study of hypothetical dark photons decaying to electrons, probing phase space inaccessible to any other experiment, showed LHCb to be a unique environment for direct searches for long-lived, low-mass particles.

Attendees left the workshop with a fresh perspective

Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics following LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.

From spinors to supersymmetry

From Spinors to Supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors of this volume, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in the phenomenology of particle physics, with many original contributions in the topics that are covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras are very familiar with.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material that one finds in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion on experimental results, providing a succinct discussion of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unification theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM physics scenarios. This gentle and very pedagogical approach continues into chapter eight, before a more demanding discussion of the supersymmetry algebra in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion on beyond-MSSM scenarios is given in chapter 16, including NMSSM, seesaw, GUTs and R-parity violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four includes basic Feynman diagram calculations in the SM and MSSM using two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds with tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation is very practical and useful for those who want to see how to perform simple calculations in the SM or MSSM using two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.

A valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry

The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to commonly used identity lists and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: two-component spinor formalism is discussed in all its detail in a very pedagogical and clear way. Parts two and three are a significant enhancement of the well known A Supersymmetry Primer (arXiv:hep-ph/9709356), which is very popular among beginners to supersymmetry and written by Stephen Martin, one of the authors of this volume. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions, you will find a detailed exposition of the two-component formalism for spinors, SM calculations using this formalism and a detailed discussion of how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, which nevertheless contains much that will be of interest to advanced physics students and particle-physics researchers in both theory and experiment. The size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in extensions of the SM, especially if they include supersymmetry.

First signs of antihyperhelium-4

Heavy-ion collisions at the LHC create suitable conditions for the production of atomic nuclei and exotic hypernuclei, as well as their antimatter counterparts, antinuclei and antihypernuclei. Measurements of these forms of matter are important for understanding the formation of hadrons from the quark–gluon plasma and studying the matter–antimatter asymmetry seen in the present-day universe.

Hypernuclei are exotic nuclei formed by a mix of protons, neutrons and hyperons, the latter being unstable particles containing one or more strange quarks. More than 70 years since their discovery in cosmic rays, hypernuclei remain a source of fascination for physicists due to their rarity in nature and the challenge of creating and studying them in the laboratory.

In heavy-ion collisions, hypernuclei are created in significant quantities, but only the lightest hypernucleus, hypertriton, and its antimatter partner, antihypertriton, have been observed. Hypertriton is composed of a proton, a neutron and a lambda hyperon containing one strange quark. Antihypertriton is made up of an antiproton, an antineutron and an antilambda.

Following hot on the heels of the observation of antihyperhydrogen-4 (a bound state of an antiproton, two antineutrons and an antilambda) earlier this year by the STAR collaboration at the Relativistic Heavy Ion Collider (RHIC), the ALICE collaboration at the LHC has now seen the first ever evidence for antihyperhelium-4, which is composed of two antiprotons, an antineutron and an antilambda. The result has a significance of 3.5 standard deviations. If confirmed, antihyperhelium-4 would be the heaviest antimatter hypernucleus yet seen at the LHC.

Hypernuclei remain a source of fascination due to their rarity in nature and the challenge of creating and studying them in the lab

The ALICE measurement is based on lead–lead collision data taken in 2018 at a centre-of-mass energy of 5.02 TeV for each colliding pair of nucleons, be they protons or neutrons. Using a machine-learning technique that outperforms conventional hypernuclei search techniques, the ALICE researchers looked at the data for signals of hyperhydrogen-4, hyperhelium-4 and their antimatter partners. Candidates for (anti)hyperhydrogen-4 were identified by looking for the (anti)helium-4 nucleus and the charged pion into which it decays, whereas candidates for (anti)hyperhelium-4 were identified via its decay into an (anti)helium-3 nucleus, an (anti)proton and a charged pion.

In addition to finding evidence of antihyperhelium-4 with a significance of 3.5 standard deviations, and evidence of antihyperhydrogen-4 with a significance of 4.5 standard deviations, the ALICE team measured the production yields and masses of both hypernuclei.
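
The quoted significances can be translated into one-sided Gaussian tail probabilities, the usual way "evidence" thresholds are stated. A minimal sketch of that conversion, using only the standard library:

```python
from math import erfc, sqrt

def p_value(z: float) -> float:
    """One-sided Gaussian tail probability for a significance of z sigma."""
    return 0.5 * erfc(z / sqrt(2))

# 3.5 sigma (antihyperhelium-4) and 4.5 sigma (antihyperhydrogen-4)
print(f"{p_value(3.5):.2e}")  # roughly 2.3e-04
print(f"{p_value(4.5):.2e}")  # roughly 3.4e-06
```

This is why 3.5 standard deviations is reported as "evidence" rather than observation: the chance of a background fluctuation at least that large is a few in ten thousand, well short of the five-sigma discovery convention.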

For both hypernuclei, the measured masses are compatible with the current world-average values. The measured production yields were compared with predictions from the statistical hadronisation model, which provides a good description of the formation of hadrons and nuclei in heavy-ion collisions. This comparison shows that the model’s predictions agree closely with the data if both excited hypernuclear states and ground states are included in the predictions. The results confirm that the statistical hadronisation model can also provide a good description of the production of hypernuclei modelled to be compact objects with sizes of around 2 femtometres.

The researchers also determined the antiparticle-to-particle yield ratios for both hypernuclei and found that they agree with unity within the experimental uncertainties. This agreement is consistent with ALICE’s observation of the equal production of matter and antimatter at LHC energies and adds to the ongoing research into the matter–antimatter imbalance in the universe.

Inside pentaquarks and tetraquarks

Strange pentaquarks

Breakthroughs are like London buses. You wait a long time, and three turn up at once. In 1963 and 1964, Murray Gell-Mann, André Petermann and George Zweig independently developed the concept of quarks (q) and antiquarks (q̄) as the fundamental constituents of the observed bestiary of mesons (qq̄) and baryons (qqq).

But other states were allowed too. Additional qq̄ pairs could be added at will, to create tetraquarks (qqq̄q̄), pentaquarks (qqqqq̄) and other states besides. In the 1970s, Robert L Jaffe carried out the first explicit calculations of multiquark states, based on the framework of the MIT bag model. Under the auspices of the new theory of quantum chromodynamics (QCD), this computationally simplified model ignored gluon interactions and considered quarks to be free, though confined in a bag with a steep potential at its boundary. These and other early theoretical efforts triggered many experimental searches, but no clear-cut results.

New regimes

Evidence for such states took nearly two decades to emerge. The essential precursors were the discovery of the charm quark (c) at SLAC and BNL in the November Revolution of 1974, some 50 years ago (p41), and the discovery of the bottom quark (b) at Fermilab three years later. The masses and lifetimes of these heavy quarks allowed experiments to probe new regimes in parameter space where otherwise inexplicable bumps in energy spectra could be resolved (see “Heavy breakthroughs” panel).

Heavy breakthroughs

Double hidden charm

With the benefit of hindsight, it is clear why early experimental efforts did not find irrefutable evidence for multiquark states. For a multiquark state to be clearly identifiable, it is not enough to form a multiquark colour-singlet (a mixture of colourless red–green–blue, red–antired, green–antigreen and blue–antiblue components). Such a state also needs to be narrow and long-lived enough to stand out on top of the experimental background, and has to have distinct decay modes that cannot be explained by the decay of a conventional hadron. Multiquark states containing only light quarks (up, down and strange) typically have many open decay channels, with a large phase space, so they tend to be wide and short-lived. Moreover, they share these decay channels with excited states of conventional hadrons and mix with them, so they are extremely difficult to pin down.

Multiquark states with at least one heavy quark are very different. Once quarks are “dressed” by gluons, they acquire effective masses of the order of several hundred MeV, with all quarks coupling in the same way to gluons. For light quarks, the bare quark masses are negligible compared to the effective mass, and can be neglected to zeroth order. But for heavy quarks (c or b), the ratio of the bare quark mass to the effective mass dramatically affects the dynamics and the experimental situation, creating narrow multiquark states that stand out. These states were not seen in the early searches simply because the relevant production cross sections are very small and particle identification requires very high spatial resolution. These features became accessible only with the advent of the huge luminosity and the superb spatial resolution provided by vertex detectors in bottom and charm factories such as BaBar, Belle, BESIII and LHCb.

The attraction between two heavy quarks scales like αs²mq, where αs is the strong coupling constant and mq is the mass of the quarks. This is because the Coulomb-like part of the QCD potential dominates, scaling as –αs/r as a function of distance r and yielding an analogue of the Bohr radius ~1/(αsmq). Thus the interaction grows approximately linearly with the heavy-quark mass. In at least one case (discussed below), the highly anticipated but as yet undiscovered bbūd̄ tetraquark Tbb is expected to result in a state with a mass that is below the two-meson threshold, and therefore stable under strong interactions.
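
The hydrogen-like scaling above can be made concrete with a small numerical sketch. The effective coupling and colour factor below are illustrative assumptions (the text quotes no values); the point is only that the binding estimate grows linearly with the quark mass:

```python
# Bohr-like estimate of QQ binding from the Coulombic part of the QCD
# potential: E ~ (C * alpha_s)^2 * mu, with mu = m_q/2 the reduced mass.
ALPHA_S = 0.35      # assumed effective coupling at heavy-quark scales
C = 2.0 / 3.0       # assumed colour factor for a quark pair (illustrative)

def binding_estimate(m_q_gev: float) -> float:
    """Rough ground-state binding scale in GeV for two quarks of mass m_q."""
    mu = m_q_gev / 2.0
    return (C * ALPHA_S) ** 2 * mu

# Constituent-style masses (assumptions): charm ~1.5 GeV, bottom ~4.8 GeV
for name, m in [("cc", 1.5), ("bb", 4.8)]:
    print(name, round(binding_estimate(m), 3), "GeV")
```

The absolute numbers should not be taken seriously, but the ratio reproduces the linear growth with mq that makes the bb system so much more deeply bound than cc.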

Exclusively heavy states are also possible. In 2020 and in 2024, respectively, LHCb and CMS discovered the exotic states Tcc̄cc̄(6900) and Tcc̄cc̄(6600), which both decay into two J/ψ particles, implying a quark content (cc̄cc̄). J/ψ does not couple to light quarks, so these states are unlikely to be hadronic molecules bound by light-meson exchange. Though they are too heavy to be the ground state of a (cc̄cc̄) compact tetraquark, they might perhaps be its excitations. Measuring their spin and parity would be very helpful in distinguishing between the various alternatives that have been proposed.

The first unambiguously exotic hadron, the X(3872) (dubbed χc1(3872) in the LHCb collaboration’s new taxonomy; see “What’s in a name?” panel), was discovered at the Belle experiment at KEK in Japan in 2003. Subsequently confirmed by many other experiments, its nature is still controversial. (More of that later.) Since then, there has been a rapidly growing body of experimental evidence for the existence of exotic multiquark hadrons. New states have been discovered at Belle, at the BaBar experiment at SLAC in the US, at the BESIII experiment at IHEP in China, and at the CMS and LHCb experiments at CERN (see “A bestiary of exotic hadrons“). In all cases with robust evidence, the exotic new states contain at least one heavy charm or bottom quark. The majority include two.

The key theoretical question is how the quarks are organised inside these multiquark states. Are they hadronic molecules, with two heavy hadrons bound by the exchange of light mesons? Or are they compact objects with all quarks located within a single confinement volume?

Compact candidate

The compact and molecular interpretations each provide a natural explanation for part of the data, but neither explains all. Both kinds of structures appear in nature, and certain states may be superpositions of compact and molecular states.

In the molecular case the deuteron is a good mental image. (As a bound state of a proton and a neutron, it is technically a molecular hexaquark.) In the compact interpretation, the diquark – an entangled pair of quarks with well-defined spin, colour and flavour quantum numbers – may play a crucial role. Diquarks have curious properties, whereby, for example, a strongly correlated red–green pair of quarks can behave like a blue antiquark, opening up intriguing possibilities for the interpretation of qqq̄q̄ and qqqqq̄ states.

Compact states

A clearcut example of a compact structure is the Tbb tetraquark with quark content bbūd̄. Tbb has not yet been observed experimentally, but its existence is supported by robust theoretical evidence from several complementary approaches. As for any ground-state hadron, its mass is given to a good approximation by the sum of its constituent quark masses and their (negative) binding energy. The constituent masses implied here are effective masses that also include the quarks’ kinetic energies. The binding energy is negative as it was released when the compact state formed.

In the case of Tbb, the binding energy is expected to be so large that its mass is below all two-meson decay channels: it can only decay weakly, and must be stable with respect to the strong interaction. No such exotic hadron has yet been discovered, making Tbb a highly prized target for experimentalists. Such a large binding energy cannot be generated by meson exchange and must be due to colour forces between the very heavy b quarks. Tbb is an isoscalar with JP = 1+. Its charmed analogue, Tcc = (ccūd̄), also known as Tcc(3875)+, was observed by LHCb in 2021 to be a whisker away from stability, with a very small binding energy and width less than 1 MeV (CERN Courier September/October 2021 p7). The big difference between the binding energies of Tbb and Tcc, which makes the former stable and the latter unstable, is due to the substantially greater mass of the b quark than the c quark, as discussed in the panel above. An intermediate case, Tbc = (bcūd̄), is very likely also below threshold for strong decay and therefore stable. It is also easier to produce and detect than Tbb and therefore extremely tempting experimentally.

Molecular pentaquarks

At the other extreme, we have states that are most probably pure hadronic molecules. The most conspicuous examples are the Pc(4312), Pc(4440) and Pc(4457) pentaquarks discovered by LHCb in 2019, and labelled according to the convention adopted by the Particle Data Group as Pcc̄(4312)+, Pcc̄(4440)+ and Pcc̄(4457)+. All three have quark content (cc̄uud) and decay into J/ψp, with an energy release of order 300 MeV. Yet, despite having such a large phase space, all three have anomalously narrow widths less than about 10 MeV. Put more simply, the pentaquarks decay remarkably slowly, given how much energy stands to be released.

But why should long life count against the pentaquarks being tightly bound and compact? In a compact (cc̄uud) state there is nothing to prevent the charm quark from binding with the anticharm quark, hadronising as J/ψ and leaving behind a (uud) proton. It would decay immediately with a large width.

Anomalously narrow

On the other hand, hadronic molecules such as ΣcD̄ and ΣcD̄* automatically provide a decay-suppression mechanism. Hadronic molecules are typically large, so the c quark inside the Σc baryon is typically far from the c̄ quark inside the D̄ or D̄* meson. Because of this, the formation of J/ψ = (cc̄) has a low probability, resulting in a long lifetime and a narrow width. (Unstable particles decay randomly within fixed half-lives. According to Heisenberg’s uncertainty principle, this uncertainty on their lifetime yields a reciprocal uncertainty on their energy, which may be directly observed as the width of the peak in the spectrum of their measured masses when they are created in particle collisions. Long-lived particles exhibit sharply spiked peaks, and short-lived particles exhibit broad peaks. Though the lifetimes of strongly interacting particles are usually not measurable directly, they may be inferred from these “widths”, which are measured in units of energy.)
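
The width-to-lifetime conversion described in the parenthesis is a one-liner, τ = ħ/Γ. A minimal sketch, with ħ in MeV·s:

```python
HBAR_MEV_S = 6.582e-22  # reduced Planck constant in MeV * s

def lifetime_s(width_mev: float) -> float:
    """Mean lifetime implied by a resonance width, tau = hbar / Gamma."""
    return HBAR_MEV_S / width_mev

# A ~10 MeV-wide pentaquark versus a typical ~100 MeV-wide light resonance
print(f"{lifetime_s(10):.1e} s")   # ~6.6e-23 s
print(f"{lifetime_s(100):.1e} s")  # ten times shorter
```

Even a "long-lived" 10 MeV-wide state survives only ~10⁻²² s, which is why such lifetimes are inferred from peak widths rather than measured directly.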

Additional evidence in favour of their molecular nature comes from the mass of Pc(4312) being just below the ΣcD̄ production threshold, and the masses of Pc(4440) and Pc(4457) being just below the ΣcD̄* production threshold. This is perfectly natural. Hadronic molecules are weakly bound, so they typically only form an S-wave bound state, with no orbital angular momentum. So ΣcD̄, which combines a spin-1/2 baryon and a spin-0 negative-parity meson, can only form a single state with JP = 1/2–. By contrast, ΣcD̄*, which combines a spin-1/2 baryon and a spin-1 negative-parity meson, can form two closely spaced states with JP = 1/2– and 3/2–, with a small splitting coming from a spin–spin interaction.
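
The state counting here is just angular-momentum addition with L = 0: the allowed total spins run from |j1 − j2| to j1 + j2. A minimal sketch:

```python
from fractions import Fraction as F

def s_wave_j_values(j1, j2):
    """Total spins J allowed for an S-wave (L = 0) pair with spins j1, j2."""
    jmin, jmax = abs(j1 - j2), j1 + j2
    return [jmin + i for i in range(int(jmax - jmin) + 1)]

# Sigma_c (spin 1/2) with a spin-0 D-bar: a single state
print(s_wave_j_values(F(1, 2), F(0)))   # [Fraction(1, 2)]
# Sigma_c (spin 1/2) with a spin-1 D-bar*: two states
print(s_wave_j_values(F(1, 2), F(1)))   # [Fraction(1, 2), Fraction(3, 2)]
```

With both mesons having negative parity and no orbital excitation, each J comes with negative parity, which is the origin of the predicted 1/2 and 1/2, 3/2 pattern.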

An example of a possible mixture of a compact state and a hadronic molecule is provided by the X(3872) meson

The robust prediction of the JP quantum numbers makes it very straightforward in principle to kill this physical picture, if one were to measure JP values different from these. Conversely, measuring the predicted values of JP would provide a strong confirmation (see “The 23 exotic hadrons discovered at the LHC” table).

These predictions have already received substantial indirect support from the strange-pentaquark sector. The spin-parity of the Pcc̄s(4338), which also has a narrow width below 10 MeV, has been determined by LHCb to be 1/2–, exactly as expected for a ΞcD̄ molecule (see “Strange pentaquark” figure).

The mysterious X(3872)

An example of a possible mixture of a compact state and a hadronic molecule is provided by the already mentioned X(3872) meson. Its mass is so close to the sum of the masses of a D0 meson and a D̄*0 meson that no difference has yet been established with statistical significance, but it is known to be less than about 1 MeV. It can decay to J/ψπ+π– with a branching ratio of (3.5 ± 0.9)%, releasing almost 500 MeV of energy. Yet its width is only of order 1 MeV. This is an even more striking case of relative stability in the face of naively expected instability than for the pentaquarks. At first sight, then, it is tempting to identify X(3872) as a clearcut D0D̄*0 hadronic molecule.

Particle precision

The situation is not that simple, however. If X(3872) is just a weakly bound hadronic molecule, it is expected to be very large, of the scale of a few fermi (10–15 m). So it should be very difficult to produce it in hard reactions, requiring a large momentum transfer. Yet this is not the case. A possible resolution might come from X(3872) being a mixture of a D0D̄*0 molecular state and χc1(2P), a conventional radial excitation of P-wave charmonium, which is much more compact and is expected to have a similar mass and the same JPC = 1++ quantum numbers. Additional evidence in favour of such a mixing comes from comparing the rates of the radiative decays X(3872) → J/ψγ and X(3872) → ψ(2S)γ.

The question associated with exotic mesons and baryons can be posed crisply: is an observed state a molecule, a compact multiquark system or something in between? We have given examples of each. Definitive compact-multiquark behaviour can be confirmed if a state’s flavour-SU(3) partners are identified. This is because compact states are bound by colour forces, which are only weakly sensitive to flavour-SU(3) rotations. (Such rotations exchange up, down and strange quarks, and to a good approximation the strong force treats these light flavours equally at the energies of charmed and beautiful exotic hadrons.) For example, if X(3872) should in fact prove to be a compact tetraquark, it should have charged isospin partners that have not yet been observed.

On the experimental front, the sensitivities of LHCb, Belle II, BESIII, CMS and ATLAS have continued to reap great benefits for hadron spectroscopy. Together with the proposed super τ-charm factory in China, they are virtually guaranteed to discover additional exotic hadrons, expanding our understanding of QCD in its strongly interacting regime.
