The new particles

Sam Ting in November 1974

Anyone in touch with the world of high-energy physics will be well aware of the ferment created by the news from Brookhaven and Stanford, followed by Frascati and DESY, of the existence of new particles. But new particles have been unearthed in profusion by high-energy accelerators during the past 20 years. Why the excitement over the new discoveries?

A brief answer is that the particles have been found in a mass region where they were completely unexpected with stability properties which, at this stage of the game, are completely inexplicable. In this article we will first describe the discoveries and then discuss some of the speculations as to what the discoveries might mean.

We begin at the Brookhaven National Laboratory where, since the Spring of this year, an MIT/Brookhaven team has been looking at collisions between two protons which yield (amongst other things) an electron and a positron. A series of experiments on the production of electron–positron pairs in particle collisions has been going on for about eight years in groups led by Sam Ting, mainly at the DESY synchrotron in Hamburg. The aim is to study some of the electromagnetic features of particles, where energy is manifest in the form of a photon which materialises into an electron–positron pair. The experiments are not easy to do because the probability that the collisions will yield such a pair is very low. The detection system has to be capable of picking out such an event from a million or more other types of event.

Beryllium bombardment

It was with long experience of such problems behind them that the MIT/Brookhaven team led by Ting, J J Aubert, U J Becker and P J Biggs brought into action a detection system with a double-arm spectrometer in a slow ejected proton beam at the Brookhaven 33 GeV synchrotron. They used beams of 28.5 GeV bombarding a beryllium target. The two spectrometer arms splay out at 15° on either side of the incident beam direction and have magnets, Cherenkov counters, multiwire proportional chambers, scintillation counters and lead-glass counters. With this array, it is possible to identify electrons and positrons coming from the same source and to measure their energy.

From about August, the realisation that they were on to something important began slowly to grow. The spectrometer was totting up an unusually large number of events where the combined energies of the electron and positron were equal to 3.1 GeV.

The detection system of the experiment at Brookhaven that spotted the new particle

This is the classic way of spotting a resonance. An unstable particle, which breaks up too quickly to be seen itself, is identified by adding up the energies of more stable particles which emerge from its decay. Looking at many interactions, if energies repeatedly add up to the same figure (as opposed to the other possible figures all around it), they indicate that the measured particles are coming from the break up of an unseen particle whose mass is equal to the measured sum.
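The arithmetic behind this "adding up" is the relativistic invariant mass: from the measured energies and momenta of the two leptons, m² = (E₁+E₂)² − |p⃗₁+p⃗₂|². A minimal sketch (the momenta below are invented illustrative values, with lepton masses neglected):

```python
import math

def invariant_mass(e1, p1, e2, p2):
    """Invariant mass of a two-particle system from measured energies (GeV)
    and momentum three-vectors (GeV): m^2 = (E1+E2)^2 - |p1+p2|^2."""
    e_tot = e1 + e2
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(e_tot**2 - (px**2 + py**2 + pz**2))

# Example: a heavy particle at rest decays into a back-to-back e+ e- pair,
# each lepton carrying half the parent mass as energy:
m_pair = invariant_mass(1.55, (1.55, 0.0, 0.0),
                        1.55, (-1.55, 0.0, 0.0))
```

Pairs from unrelated sources scatter across the mass spectrum; pairs from a common parent pile up at a single value, which is the peak the spectrometer recorded.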

The team went through extraordinary contortions to check their apparatus to be sure that nothing was biasing their results. The particle decaying into the electron and positron they were measuring was a difficult one to swallow. The energy region had been scoured before, even if not so thoroughly, without anything being seen. Also the resonance was looking “narrow” – this means that the energy sums were coming out at 3.1 GeV with great precision rather than, for example, spanning from 2.9 to 3.3 GeV. The width is a measure of the stability of the particle (from Heisenberg’s Uncertainty Principle, which requires only that the product of the average lifetime and the width be a constant). A narrow width means that the particle lives a long time. No other particle of such a heavy mass (over three times the mass of the proton) has anything like that stability.
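The width–lifetime connection can be made quantitative: the uncertainty principle gives τ ≈ ħ/Γ, so the narrower the resonance, the longer it lives. A short sketch with round illustrative widths (assumed values, not the measured ones):

```python
# Heisenberg relation: tau * Gamma ≈ ħ. A narrow width means a long-lived state.
HBAR_MEV_S = 6.582e-22  # ħ in MeV·s

def lifetime_from_width(width_mev: float) -> float:
    """Mean lifetime (s) of a resonance with the given total width (MeV)."""
    return HBAR_MEV_S / width_mev

# A typical broad hadronic resonance (width ~ 150 MeV):
tau_broad = lifetime_from_width(150.0)   # of order 1e-23 s

# A resonance narrower than the detector resolution (assume ~ 0.1 MeV):
tau_narrow = lifetime_from_width(0.1)    # over a thousand times longer-lived
```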

By the end of October, the team had about 500 events from a 3.1 GeV particle. They were keen to extend their search to the maximum mass their detection system could pin down (about 5.5 GeV) but were prodded into print mid-November by dramatic news from the other coast of America. They baptised the particle J, which is a letter close to the Chinese symbol for “ting”. From then on, the experiment has had top priority. Sam Ting said that the Director of the Laboratory, George Vineyard, asked him how much time on the machine he would need – which is not the way such conversations usually go.

The apparition of the particle at the Stanford Linear Accelerator Center on 10 November was nothing short of shattering. Burt Richter described it as “the most exciting and frantic week-end in particle physics I have ever been through”. It followed an upgrading of the electron–positron storage ring SPEAR during the late Summer.

Until June, SPEAR was operating with beams of energy up to 2.5 GeV so that the total energy in the collision was up to a peak of 5 GeV. The ring was shut down during the late summer to install a new RF system and new power supplies so as to reach about 4.5 GeV per beam. It was switched on again in September and within two days beams were orbiting the storage ring again. Only three of the four new RF cavities were in action so the beams could only be taken to 3.8 GeV. Within two weeks the luminosity had climbed to 5 × 10³⁰ cm⁻² s⁻¹ (the luminosity dictates the number of interactions the physicists can see) and time began to be allocated to experimental teams to bring their detection systems into trim.
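The luminosity fixes how often a process of a given cross-section occurs: rate = L × σ. A quick sketch using the luminosity quoted above and illustrative cross-sections in nanobarns:

```python
# Interaction rate = luminosity × cross-section: cm^-2 s^-1 × cm^2 = s^-1.
NB_TO_CM2 = 1e-33  # 1 nanobarn = 1e-33 cm^2

def event_rate(luminosity_cm2_s: float, cross_section_nb: float) -> float:
    """Events per second for a given luminosity and cross-section."""
    return luminosity_cm2_s * cross_section_nb * NB_TO_CM2

L_SPEAR = 5e30  # cm^-2 s^-1, the luminosity quoted in the text

rate_off_peak = event_rate(L_SPEAR, 20.0)    # 0.1 events/s at 20 nb
rate_on_peak = event_rate(L_SPEAR, 2000.0)   # 10 events/s at 2000 nb
```

A hundredfold jump in cross-section means a hundredfold jump in the number of events pouring into the detector, which is why the signal was so unmistakable.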

It was the Berkeley/Stanford team led by Richter, M Perl, W Chinowsky, G Goldhaber and G H Trilling who went into action during the week-end 9–10 November to check back on some “funny” readings they had seen in June. They were using a detection system consisting of a large solenoid magnet, wire chambers, scintillation counters and shower counters, almost completely surrounding one of the two intersection regions where the electrons and positrons are brought into head-on collision.

Put through its paces

During the first series of measurements with SPEAR, when it was put through its energy paces, the cross-section (the probability of an interaction between an electron and positron occurring) was a little high at 1.6 GeV beam energy (3.2 GeV collision energy) compared with the neighbouring beam energies. The June exercise, which gave the funny readings, was a look over this energy region again. Cross-sections were measured with electrons and positrons at 1.5, 1.55, 1.6 and 1.65 GeV. Again 1.6 GeV was a little high, but 1.55 GeV was even more peculiar. In eight runs, six measurements agreed with the 1.5 GeV data while two were higher (one of them five times higher). So, obviously, a gremlin had crept into the apparatus. During the transformation from SPEAR I to SPEAR II, the gremlin was looked for but not found. It was then that the suspicion grew that between 3.1 and 3.2 GeV collision energies there could lie a resonance.

During the night of 9–10 November the hunt began, changing the beam energies in 0.5 MeV steps. By 11.00 a.m. Sunday morning the new particle had been unequivocally found. A set of cross-section measurements around 3.1 GeV showed that the probability of interaction jumped by a factor of 10 from 20 to 200 nanobarns. In a state of euphoria, the champagne was cracked open and the team began celebrating an important discovery. Gerson Goldhaber retired in search of peace and quiet to write the findings for immediate publication.

The detection system at the SPEAR storage ring at Stanford

While he was away, it was decided to polish up the data by going slowly over the resonance again. The beams were nudged from 1.55 to 1.57 GeV and everything went crazy. The interaction probability soared higher; from around 20 nanobarns the cross-section jumped to 2000 nanobarns and the detector was flooded with events producing hadrons. Pief Panofsky, the Director of SLAC, arrived and paced around invoking the Deity in utter amazement at what was being seen. Gerson Goldhaber then emerged with his paper proudly announcing the 200 nanobarn resonance and had to start again, writing 10 times more proudly.

Within hours of the SPEAR measurements, the telephone wires across the Atlantic were humming as information, enquiries and rumours were exchanged. As soon as it became clear what had happened, the European laboratories looked to see how they could contribute to the excitement. The obvious candidates, to be in on the act quickly, were the electron–positron storage rings at Frascati and DESY.

From 13 November, the experimental teams on the ADONE storage ring (from Frascati and the INFN sections of the universities of Naples, Padua, Pisa and Rome) began to search in the same energy region. They have detection systems for three experiments known as gamma–gamma (wide solid angle detector with high efficiency for detecting neutral particles), MEA (solenoidal magnetic spectrometer with wide gap spark chambers and shower detectors) and baryon–antibaryon (coaxial hodoscopes of scintillators covering a wide solid angle). The ADONE operators were able to jack the beam energy up a little above its normal peak of 1.5 GeV and on 15 November the new particle was seen in all three detection systems. The data confirmed the mass and the high stability. The experiments are continuing using the complementary abilities of the detectors to gather as much information as possible on the nature of the particle.

At DESY, the DORIS storage ring was brought into action with the PLUTO and DASP detection systems described later in this issue on page 427. During the week-end of 23–24 November, a clear signal at about 3.1 GeV total energy was seen in both detectors, with PLUTO measuring events with many emerging hadrons and DASP measuring two emerging particles. The angular distribution of elastic electron–positron scattering was measured at 3.1 GeV, and around it, and a distinct change was seen. The detectors are now concentrating on measuring branching ratios – the relative rate at which the particle decays in different ways.

Excitation times

In the meantime, SPEAR II had struck again. On 21 November, another particle was seen at 3.7 GeV. Like the first it is a very narrow resonance indicating the same high stability. The Berkeley/Stanford team have called the particles psi (3105) and psi (3695).

No-one had written the recipe for these particles and that is part of what all the excitement is about. At this stage, we can only speculate about what they might mean.  First of all, for the past year, something has been expected in the hadron–lepton relationship. The leptons are particles, like the electron, which we believe do not feel the strong force. Their interactions, such as are initiated in an electron–positron storage ring, can produce hadrons (or strong force particles) via their common electromagnetic features. On the basis of the theory that hadrons are built up of quarks (a theory that has a growing weight of experimental support – see CERN Courier October 1974 pp331–333), it is possible to calculate relative rates at which the electron–positron interaction will yield hadrons and the rate should decrease as the energy goes higher. The results from the Cambridge bypass and SPEAR about a year ago showed hadrons being produced much more profusely than these predictions.

What seems to be the inverse of this observation is seen at the CERN Intersecting Storage Rings and the 400 GeV synchrotron at Fermilab. In interactions between hadrons, such as proton–proton collisions, leptons are seen coming off at much higher relative rates than could be predicted. Are the new particles behind this hadron–lepton mystery? And if so, how?

Signs of a revolution

Other speculations are that the particles have new properties to add to the familiar ones like charge, spin, parity… As the complexity of particle behaviour has been uncovered, names have had to be selected to describe different aspects. These names are linked, in the mathematical description of what is going on, to quantum numbers. When particles interact, the quantum numbers are generally conserved – the properties of the particles going into the interaction are carried away, in some perhaps very different combination, by the particles which emerge. If there are new properties, they also will influence what interactions can take place.

To explain what might be happening, we can consider the property called “strangeness”. This was assigned to particles like the neutral kaon and lambda to explain why they were always produced in pairs – the strangeness quantum number is then conserved, the kaon carrying +1, the lambda carrying –1. It is because the kaon has strangeness that it is a very stable particle. It will not readily break up into other particles which do not have this property.

They baptised the particle J, which is a letter close to the Chinese symbol for “ting”

Two new properties have recently been invoked by the theorists – colour and charm. Colour is a suggested property of quarks which makes sense of the statistics used to calculate the consequences of their existence. This gives us nine basic quarks – three coloured varieties of each of the three familiar ones. Charm is a suggested property which makes sense of some observations concerning neutral current interactions (discussed below).

It is the remarkable stability of the new particles which makes it so attractive to invoke colour or charm. From the measured width of the resonances they seem to live for about 10⁻²⁰ seconds and do not decay rapidly like all the other resonances in their mass range. Perhaps they carry a new quantum number?

Unfortunately, even if the new particles are coloured, since they are formed electromagnetically they should be able to decay the same way and the sums do not give their high stability. In addition, the sums say that there is not enough energy around for them to be built up of charmed constituents. The answer may lie in new properties but not in a way that we can easily calculate.

Yet another possibility is that we are, at last, seeing the intermediate boson. This particle was proposed many years ago as an intermediary of the weak force. Just as the strong force is communicated between hadrons by passing mesons around and the electromagnetic force is communicated between charged particles by passing photons around, it is thought that the weak force could also act via the exchange of a particle rather than “at a point”.

Perhaps the new particles carry a new quantum number?

When it was believed that the weak interactions always involved a change of electric charge between the lepton going into the interaction and the lepton going out, the intermediate boson (often referred to as the W particle) was always envisaged as a charged particle. The CERN discovery of neutral currents in 1973 revealed that a charge change between the leptons need not take place; there could also be a neutral version of the intermediate boson (often referred to as the Z particle). The Z particle can also be treated in the theory which has had encouraging success in uniting the interpretations of the weak and electromagnetic forces.

This work has taken the Z mass into the 70 GeV region and its appearance around 3 GeV would damage some of the beautiful features of the unification theories. A strong clue could come from looking for asymmetries in the decays of the new particles because, if they are of the Z variety, parity violation should occur.

1974 has been one of the most fascinating years ever experienced in high-energy physics. Still reeling from the neutral current discovery, the year began with the SPEAR hadron production mystery, continued with new high-energy information from Fermilab and the CERN ISR, including the high lepton production rate, and finished with the discovery of the new particles. And all this against a background of feverish theoretical activity trying to keep pace with what the new accelerators and storage rings have been uncovering.

Cornering the Higgs couplings to quarks

One of nature’s greatest mysteries lies in the masses of the elementary fermions. The second and third generations of quarks and charged leptons are progressively heavier than the first, which forms ordinary matter, but the overall pattern and vast mass differences remain empirical and unexplained. In the Standard Model (SM), charged fermions acquire mass through interactions with the Higgs field. Consequently, their interaction strength with the Higgs boson, a ripple of the Higgs field, is proportional to the fermion’s mass. Precise measurements of these interaction strengths could offer insights into the mass-generation mechanism and potentially uncover new physics to explain this mystery.
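The mass–coupling proportionality can be made concrete with the tree-level Yukawa relation y_f = √2·m_f/v, where v ≈ 246 GeV is the Higgs vacuum expectation value. A minimal sketch, using rough illustrative quark masses (the numerical values are assumptions for illustration, not inputs to the analyses):

```python
import math

V_HIGGS = 246.0  # Higgs-field vacuum expectation value in GeV

def yukawa(mass_gev: float) -> float:
    """Tree-level SM Yukawa coupling: y_f = sqrt(2) * m_f / v."""
    return math.sqrt(2.0) * mass_gev / V_HIGGS

# Rough quark masses in GeV (illustrative values)
masses = {"charm": 1.27, "bottom": 4.18, "top": 172.5}
couplings = {q: yukawa(m) for q, m in masses.items()}
# The top coupling comes out near 1; charm is more than 100x smaller,
# which is why the three measurements differ so much in difficulty.
```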

The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks (charm, bottom and top), based on the analysis of data collected during LHC Run 2 (2015–2018). The analyses refine two studies: Higgs-boson decays to charm- and bottom-quark pairs (H → cc and H → bb) in events where the Higgs boson is produced together with a weak boson V (W or Z); and, since the Higgs boson is too light to decay into a top-quark pair, the interaction with top quarks is probed in Higgs production in association with a top-quark pair (ttH) in events with H → bb decays. Sensitivity to H → cc and H → bb in VH production is increased by a factor of three and by 15%, respectively. Sensitivity to ttH, H → bb production is doubled.

Innovative analysis techniques were crucial to these improvements, several involving machine learning, such as state-of-the-art transformers in the extremely challenging ttH(bb) analysis. Both analyses utilised an upgraded algorithm for identifying particle jets from bottom and charm quarks. A bespoke implementation allowed, for the first time, analysis of VH events coherently for both H → cc and H → bb decays. The enhanced classification of the signal from various background processes allowed a tripling of the number of selected ttH, H → bb events, and was the single largest improvement to the sensitivity to VH, H → cc. Both analyses also improved their methods for estimating background processes, including new theoretical predictions and a refined assessment of the related uncertainties – a key component in boosting the ttH, H → bb sensitivity.

ATLAS figure 2

Due to these improvements, ATLAS measured the ttH, H → bb cross-section with a precision of 24%, better than any single measurement before. The signal strength relative to the SM prediction is found to be 0.81 ± 0.21, consistent with the SM expectation of unity. It does not confirm previous results from ATLAS and CMS that left room for a lower-than-expected ttH cross section, dispelling speculations of new physics in this process. The compatibility between new and previous ATLAS results is estimated to be 21%.

In the new analysis VH, H → bb production was measured with a record precision of 18%; WH, H → bb production was observed for the first time with a significance of 5.3σ. Because H → cc decays are suppressed by a factor of 20 relative to H → bb decays, given the difference in quark masses, and are more difficult to identify, no significant sign of this process was found in the data. However, an upper limit on potential enhancements of the VH, H → cc rate of 11.3 times the SM prediction was placed at the 95% confidence level, allowing ATLAS to constrain the Higgs-charm coupling to less than 4.2 times the SM value, the strongest direct constraint to date.
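The factor-of-20 suppression quoted above follows from the m_q² scaling of the Higgs partial widths to quark pairs (same colour factor, similar phase space), with the quark masses run to the Higgs scale. A quick check, assuming illustrative running-mass values near m_H (these numbers are rough assumptions, not the analysis inputs):

```python
# Partial widths Gamma(H -> qq) scale as m_q^2; the relevant masses are the
# running (MS-bar) masses evaluated near the Higgs mass scale.
m_c_at_mH = 0.62  # GeV, illustrative running charm mass
m_b_at_mH = 2.79  # GeV, illustrative running bottom mass

# Ratio of H -> bb to H -> cc rates, i.e. the suppression of the charm mode:
suppression = (m_b_at_mH / m_c_at_mH) ** 2  # comes out near the quoted 20
```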

The ttH and VH cross-sections were measured (double-)differentially with increased reach, granularity, and precision (figures 1 and 2). Notably, in the high transverse-momentum regime, where potential new physics effects are not yet excluded, the measurements were extended and the precision nearly doubled. However, neither analysis shows significant deviations from Standard Model predictions.

The significant new dataset from the ongoing Run 3 of the LHC, coupled with further advanced techniques like transformer-based jet identification, promises even more rigorous tests soon, and amplifies the excitement for the High-Luminosity LHC, where further precision will push the boundaries of our understanding of the Higgs boson – and perhaps yield clues to the mystery of the fermion masses.

An intricate web of interconnected strings

Strings 2024 participants

Since its inception in the mid-1980s, the Strings conference has sought to summarise the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. As one of the most anticipated gatherings in theoretical physics, the conference serves as a platform for exchanging knowledge, fostering new collaborations and pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature. The most recent edition, Strings 2024, attracted about 400 in-person participants to CERN in June, with several hundred more scientists following on-line.

One way to view string theory is as a model of fundamental interactions that provides a unification of particle physics with gravity. While generic features of the Standard Model and gravity arise naturally in string theory, it has lacked concrete experimental predictions so far. In recent years, the strategy has shifted from concrete model building to more systematically understanding the universal features that models of particle physics must satisfy when coupled to quantum gravity.

Into the swamp

Remarkably, there are very subtle consistency conditions that are invisible in ordinary particle physics, as they involve indirect arguments such as whether black holes can evaporate in a consistent manner. This has led to the notion of the “Swampland”, which encompasses the set of otherwise well-behaved quantum field theories that fail these subtle quantum-gravity consistency conditions. This may lead to concrete implications for particle physics and cosmology.

An important question addressed during the conference was whether these low-energy consistency conditions always point back to string theory as the only consistent “UV completion” (fundamental realisation at distance scales shorter than can be probed at low energies) of quantum gravity, as suggested by numerous investigations. Whether there is any other possible UV completion involving a version of quantum gravity unrelated to string theory remains an important open question, so it is no surprise that significant research efforts are focused in this direction.

Attempts at explicit model construction were also discussed, together with a joint discussion on cosmology, particle physics and their connections to string theory. Among other topics, recent progress on realising accelerating cosmologies in string theory was reported, as well as a stringy model for dark energy.

A different viewpoint, shared by many researchers, is to employ string theory rather as a framework or tool to study quantum gravity, without any special emphasis on its unification with particle physics. It has long been known that there is a fundamental tension when trying to combine gravity with quantum mechanics, which many regard as one of the most important open conceptual problems in theoretical physics. This becomes most evident when one zooms in on quantum black holes. It was in this context that the holographic nature of quantum gravity was discovered – the idea that all the information contained within a volume of space can be described by data on its boundary, suggesting that the universe’s fundamental degrees of freedom can be thought of as living on a holographic screen. This may not only hold the key for understanding the decay of black holes via Hawking radiation, but can also teach us important lessons about quantum cosmology.

Strings serves as a platform for pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature

Thousands of papers have been written on this subject over the last few decades, and holographic quantum gravity continues to be one of string theory’s most active subfields. Recent breakthroughs include exact or approximate solutions of quantum gravity in low-dimensional toy models in anti-de Sitter space, the extension to de Sitter space, an improved understanding of the nature of black-hole microstates and the precise way they decay, the discovery of connections between emergent geometry and quantum information theory, and the development of powerful tools for investigating these phenomena, such as bootstrap methods.

Other developments that were reviewed include the use of novel kinds of generalised symmetries and string field theory. Strings 2024 also gave a voice to more tangentially related areas such as scattering amplitudes, non-perturbative quantum field theory, particle phenomenology and cosmology. Many of these topics are interconnected with the core areas mentioned in this article and with each other, both technically and conceptually. It is this intricate web of highly non-trivial interconnections between subfields that generates meaning beyond the sum of its parts, and forms the unifying umbrella called string theory.

The conference concluded with a novel “future vision” session, which considered 100 crowd-sourced open questions in string theory that might plausibly be answered in the next 10 years. These 100 questions provide a glimpse of where string theory may head in the near future.

Look to the Higgs self-coupling

What are the microscopic origins of the Higgs boson? As long as we lack the short-wavelength probes needed to study its structure directly, our best tool to confront this question is to measure its interactions.

Let’s consider two with starkly contrasting experimental prospects. The coupling of the Higgs boson to two Z bosons (HZZ) has been measured with a precision of around 5%, which should improve to around 1.3% by the end of High-Luminosity LHC (HL-LHC) operations. The Higgs boson’s self-coupling (HHH) has so far only been measured with a precision of the order of several hundred percent, improving to around the 50% level by the end of HL-LHC operations – though it’s now rumoured that this latter estimate may be too pessimistic.

Good motives

As HZZ can be measured much more precisely than HHH, is it the more promising window beyond the Standard Model (SM)? An agnostic might say that both measurements are equally valuable, while a “top down” theorist might seek to judge which theories are well motivated, and ask how they modify the two couplings. In supersymmetry and minimal composite Higgs models, for example, modifications to HZZ and HHH are typically of a similar magnitude. But “well motivated” is a slippery notion and I don’t entirely trust it.

Fortunately there is a happy compromise between these perspectives, using the tool of choice of the informed agnostic: effective field theory. It’s really the same physical principle as trying to look within an object when your microscope operates on wavelengths greater than its physical extent. Just as the microscopic structure of an atom is imprinted, at low energies, in its multipolar (dipole, quadrupole and so forth) interactions with photons, so too would the microscopic structure of the Higgs boson leave its trace in modifications to its SM interactions.

All possible coupling modifications from microscopic new physics can be captured by effective field theory and organised into classes of “UV-completion”. UV-completions are the concrete microscopic scenarios that could exist. (Here, ultraviolet light is a metaphor for the short-wavelength probes needed to study the Higgs boson’s microscopic origins in detail.) Scenarios with similar patterns are said to live in the same universality class. Families of universality classes can be identified from the bottom up. A powerful tool for this is naïve dimensional analysis (NDA).

Matthew McCullough

One particularly sharp arrow in the NDA quiver is ℏ counting, which establishes how many couplings and/or ℏs must be present in the EFT modification of an interaction. Couplings tell you the number of fundamental interactions involved. ℏs establish the need for quantum effects. For instance, NDA tells us that the coefficient of the Fermi interaction must have two couplings, which the electroweak theory duly supplies – a W boson transforms a neutron into a proton, and then decays into an electron and a neutrino.
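The Fermi example can be checked numerically: matching the four-fermion interaction onto tree-level W exchange gives G_F/√2 = g²/(8M_W²), i.e. exactly the two powers of the weak coupling that NDA demands. A quick sketch (the input values are round illustrative numbers):

```python
import math

# Tree-level matching of the Fermi interaction onto W-boson exchange:
#   G_F / sqrt(2) = g^2 / (8 * M_W^2)
M_W = 80.4       # W mass in GeV (round value)
V_HIGGS = 246.0  # Higgs vacuum expectation value in GeV

g = 2.0 * M_W / V_HIGGS  # weak coupling from M_W = g*v/2, ~0.65

G_F = math.sqrt(2.0) * g**2 / (8.0 * M_W**2)  # ~1.2e-5 GeV^-2
```

The result lands close to the measured Fermi constant of about 1.17 × 10⁻⁵ GeV⁻², showing the two-coupling structure of the low-energy coefficient.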

For our purposes, NDA tells us that modifications to HZZ must necessarily involve one more ℏ or two fewer couplings than any underlying EFT interaction that modifies HHH. In the case of one more ℏ, modifications to HZZ could potentially be an entire quantum loop factor smaller than modifications to HHH. In the case of two fewer couplings, modifications to HHH could be as large as a factor g² greater than for HZZ, where g is a generic coupling. Either way, it is theoretically possible that the BSM modifications could be up to a couple of orders of magnitude greater for HHH than for HZZ. (Naively, a loop factor counts as 1/16π², about 0.006, and in the most strongly interacting scenarios g² can rise to about 16π².)
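The two NDA scenarios translate into the same numerical headroom; the arithmetic, purely for illustration:

```python
import math

loop_factor = 1.0 / (16.0 * math.pi**2)  # ~0.006: cost of one quantum loop
g2_strong = 16.0 * math.pi**2            # ~158: g^2 at strongest coupling

# Maximum HHH/HZZ ratio of coupling modifications in each scenario:
max_ratio_loop = 1.0 / loop_factor  # one extra ħ suppressing HZZ
max_ratio_couplings = g2_strong     # two extra couplings enhancing HHH
# Both land around 160 - "a couple of orders of magnitude".
```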

Why does this contrast so strongly with supersymmetry and the minimal composite Higgs? They are simply in universality classes where modifications to HZZ and HHH are comparable in magnitude. But there are more universality classes in heaven and Earth than are dreamt of in our well-motivated scenarios.

Faced with the theoretical possibility of a large hierarchy in coupling modifications, it behoves the effective theorist to provide an existence proof of a concrete UV-completion where this happens, or we may have revealed a universality class of measure zero. But such an example exists: the custodial quadruplet model. I often say it’s a model that only a mother could love, but it could exist in nature, and gives rise to coupling modifications a full loop factor of about 200 greater for HHH than HZZ.

When confronted with theories beyond the SM, all Higgs couplings are not born equal: UV-completions matter. Though HZZ measurements are arguably the most powerful general probe, future measurements of HHH will explore new territory that is inaccessible to other coupling measurements. This territory is largely uncharted, exotic and beyond the best guesses of theorists. Not bad circumstances for the start of any adventure.

Electroweak SUSY after LHC Run 2

ATLAS figure 1

Supersymmetry (SUSY) provides elegant solutions to many of the problems of the Standard Model (SM) by introducing new boson/fermion partners for each SM fermion/boson, and by extending the Higgs sector. If SUSY is realised in nature at the TeV scale, it would accommodate a light Higgs boson without excessive fine-tuning. It could furthermore provide a viable dark-matter candidate, and be a key ingredient to the unification of the electroweak and strong forces at high energy. The SUSY partners of the SM bosons can mix to form what are called charginos and neutralinos, collectively referred to as electroweakinos.

Electroweakinos would be produced only through the electroweak interaction, so their production cross-sections in proton–proton collisions are orders of magnitude smaller than those of strongly produced squarks and gluinos (the supersymmetric partners of quarks and gluons). Therefore, while extensive searches using the Run 1 (7–8 TeV) and Run 2 (13 TeV) LHC datasets have turned up null results, the corresponding chargino/neutralino exclusion limits remain substantially weaker than those for strongly interacting SUSY particles.

The ATLAS collaboration has recently released a comprehensive analysis of the electroweak SUSY landscape based on its Run 2 searches. Each individual search targeted specific chargino/neutralino production mechanisms and subsequent decay modes. The analyses were originally interpreted in so-called “simplified models”, where only one production mechanism is considered, and only one possible decay. However, if SUSY is realised in nature, its particles will have many possible production and decay modes, with rates depending on the SUSY parameters. The new ATLAS analysis brings these pieces together by reinterpreting 10 searches in the phenomenological Minimal Supersymmetric Standard Model (pMSSM), which includes a range of SUSY particles, production mechanisms and decay modes governed by 19 SUSY parameters. The results provide a global picture of ATLAS’s sensitivity to electroweak SUSY and, importantly, reveal the gaps that remain to be explored.

ATLAS figure 2

The 19-dimensional pMSSM parameter space was randomly sampled to produce a set of 20,000 SUSY model points. The 10 selected ATLAS searches were then applied to each model point to determine whether it is excluded at the 95% confidence level. This involved simulating datasets for each SUSY model and re-running the corresponding analyses and statistical fits. An extensive suite of reinterpretation tools was employed to achieve this, including preserved likelihoods and RECAST – a framework for preserving analysis workflows and re-applying them to new signal models.
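The scan logic – sample model points, evaluate each search on each point, record which points are excluded – can be sketched in a few lines. The toy below is a deliberately simplified stand-in: the two parameters, the signal-yield formula and the significance threshold are all invented for illustration, whereas the real analysis scans 19 parameters and re-runs full ATLAS analyses via RECAST and preserved likelihoods.

```python
import random

def toy_exclusion_scan(n_points=1000, seed=42):
    """Toy stand-in for the pMSSM scan: sample model points in a
    hypothetical 2-parameter slice and flag each as excluded if a
    mock signal yield is incompatible with the background-only
    hypothesis at roughly 95% confidence."""
    rng = random.Random(seed)
    excluded = 0
    for _ in range(n_points):
        m_c1 = rng.uniform(100, 1200)   # hypothetical chargino mass (GeV)
        dm = rng.uniform(1, 300)        # hypothetical mass splitting (GeV)
        # mock signal yield: falls with mass, suppressed for compressed
        # spectra (small dm), mimicking hard-to-reconstruct soft decays
        s = 5e5 * m_c1 ** -2 * min(dm / 50.0, 1.0)
        b = 10.0                        # mock background expectation
        if s / b ** 0.5 > 1.64:         # one-sided ~95% CL proxy
            excluded += 1
    return excluded / n_points

print(f"excluded fraction of toy points: {toy_exclusion_scan():.2f}")
```

Even in this cartoon, the compressed-spectrum region (small dm) survives the scan, echoing the gaps discussed below.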

The results show that, while electroweakino masses have been excluded up to 1 TeV in simplified models, the coverage with regard to the pMSSM is not exhaustive. Numerous scenarios remain viable, including mass regions nominally covered by previous searches (inside the dashed line in figure 1). The pMSSM models may evade detection due to smaller production cross-sections and decay probabilities compared to simplified models. Scenarios with small mass-splittings between the lightest and next-to-lightest neutralino can reproduce the dark-matter relic density, but are particularly elusive at the LHC. The decays in these models produce challenging event features with low-momentum particles that are difficult to reconstruct and separate from SM events.

Beyond ATLAS, experiments such as LZ aim to detect relic dark-matter particles through their scattering off target nuclei. This provides a probe complementary to ATLAS searches for dark matter produced in LHC collisions. Figure 2 shows the LZ sensitivity to the pMSSM models considered by ATLAS, compared to the sensitivity of the ATLAS SUSY searches. ATLAS is particularly sensitive to the region where the dark-matter candidate has a mass around half the Z/Higgs-boson mass, causing enhanced dark-matter annihilation that could have reduced the otherwise overabundant dark-matter relic density to the observed value.

The new ATLAS results demonstrate the breadth and depth of its search programme for supersymmetry, while uncovering its gaps. Supersymmetry may still be hiding in the data, and several scenarios have been identified that will be targeted with the incoming Run 3 data.

A pevatron at the galactic centre

Best-fit HAWC spectral energy distribution

The measured all-particle energy spectrum for cosmic rays (CRs) is famously described by a steeply falling power law. The spectrum is almost featureless from energies of around 30 GeV to 3 PeV, where a break (also known as the “knee”) is encountered, after which the spectrum becomes steeper. It is believed that CRs with energies below the knee have galactic origins. This is supported by the observation of diffuse gamma rays from the galactic disk in the GeV range (a predominant mechanism for the production of gamma rays is via the decay of neutral pions created when relativistic protons interact with the ambient gas). The knee could be explained by either the maximum energy that galactic sources can accelerate CR particles to, or the escape of CR particles from the galaxy if they are energetic enough to overcome the confinement of galactic magnetic fields. Both scenarios, however, assume the presence of astrophysical sources within the galaxy that could accelerate CR particles up to PeV energies. For decades, scientists have therefore been on the hunt for such sources, reasonably called “pevatrons”.

Recently, researchers at the High-Altitude Water Cherenkov (HAWC) observatory in Mexico reported the observation of ultra-high energy (> 100 TeV) gamma rays from the central region of the galaxy. Using nearly seven years of data, the team found that a point source, HAWC J1746-2856, with a simple power-law spectrum and no signs of a cutoff from 6 to 114 TeV best describes the observed gamma-ray flux. A total of 98 events were observed at energies above 100 TeV.

To analyse the spatial distribution of the observed gamma rays, the researchers plotted a significance map of the galactic centre. On this map, they also plotted the point-like supernova remnant SNR G0.9+0.1 and an unidentified extended source HESS J1745-303, both located 1° away from the galactic centre. While supernova remnants have long been a favoured candidate for galactic pevatrons, HAWC did not observe any excess at either of these source positions. There are, however, two other interesting point sources in this region: Sgr A* (HESS J1745-290), the supermassive black hole in the galactic centre; and HESS J1746-285, an unidentified source that is spatially coincident with the galactic radio arc. Imaging atmospheric Cherenkov telescopes such as HESS, VERITAS and MAGIC have measured the gamma-ray emissions from these sources up to an energy of about 20 TeV, but HAWC has an angular resolution about six times larger at such energies and therefore cannot resolve them.

To eliminate contamination of the flux by these sources, the authors assumed that their spectra extend across the full HAWC energy range and estimated the corresponding event counts by convolving the best-fit models reported by HESS with the instrument-response functions of HAWC. The resulting HAWC spectral energy distribution, after subtracting these sources (see figure), is compatible with the diffuse-emission data points from HESS while maintaining a power-law behaviour, with no sign of a cutoff, extending up to at least 114 TeV. This is the first detection of gamma rays at energies > 100 TeV from the galactic centre, thereby providing convincing evidence of the presence of a pevatron.
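Characterising such a spectrum as a cutoff-free power law amounts to a straight-line fit in log–log space. The sketch below recovers an invented spectral index from noiseless mock points spanning the 6–114 TeV range; the index and normalisation are placeholders, and this is not the forward-folded likelihood fit that HAWC actually performs:

```python
import math

def fit_power_law(energies_TeV, fluxes):
    """Least-squares straight line in log-log space; returns the
    spectral index gamma of dN/dE ~ E^-gamma."""
    xs = [math.log(e) for e in energies_TeV]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# mock data points with an invented index of 2.9 and arbitrary normalisation
E = [6.0, 12.0, 25.0, 50.0, 114.0]
flux = [2.0e-12 * e ** -2.9 for e in E]
print(fit_power_law(E, flux))  # recovers 2.9 on noiseless points
```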


Furthermore, the diffuse emission is spatially correlated with the morphology of the central molecular zone (CMZ) – a region in the innermost 500 pc of the galaxy consisting of enormous molecular clouds corresponding to around 60 million solar masses. Such a correlation supports a hadronic scenario for the origin of cosmic rays, where gamma rays are produced via the interaction of relativistic protons with the ambient gas. In the leptonic scenario, electrons with energies above 100 TeV produce gamma rays via inverse Compton scattering, but such electrons suffer severe radiative losses; for a magnetic field strength of 100 μG, the maximum distance that such electrons can traverse is much smaller than the CMZ. On the other hand, in the hadronic case the escape time for protons is orders of magnitude shorter than the cooling time (via π0 decay). The stronger magnetic field could confine them for a longer period but, as the authors argue, the escape time is also much smaller than the age of the galaxy, thereby pointing to a young source that is quasi-continuously injecting and accelerating protons into the CMZ.

The study also computes the energy density of cosmic-ray protons with energies above 100 TeV to be 8.1 × 10⁻³ eV/cm³. This is higher than the local value of 1 × 10⁻³ eV/cm³ measured by the Alpha Magnetic Spectrometer in 2015, indicating the presence of newly accelerated protons in the energy range 0.1–1 PeV. The capabilities of this study did not extend to identifying the source, but with better modelling of the CMZ in the future, and the improved performance of upcoming observatories such as CTAO and SWGO, candidate sites in the galactic centre are expected to be probed with much higher resolution.
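The headline comparison is simple arithmetic – the inferred energy density in the CMZ exceeds the local measurement by roughly a factor of eight:

```python
rho_cmz = 8.1e-3    # eV/cm^3, protons above 100 TeV in the CMZ (this study)
rho_local = 1.0e-3  # eV/cm^3, local value measured by AMS (2015)
excess = rho_cmz / rho_local
print(f"CMZ-to-local energy-density ratio: {excess:.1f}")  # 8.1
```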

Lattice calculations start to clarify muon g-2

In 1974, Kenneth G Wilson suggested modelling the continuous spacetime of quantum chromodynamics (QCD) with a discrete lattice – space and time would be represented as a grid of points, with quarks on the lattice points and gluons on the links between them. Lattice QCD has only grown in importance since, with international symposia on lattice field theory taking place annually since 1984. The conference has developed into an important forum for established experts and early-career researchers alike to report recent progress, and its published proceedings provide a valuable resource. The 41st symposium, Lattice 2024, welcomed 500 participants to the University of Liverpool from 28 July to 3 August.

Hadronic contributions

One of the highest profile topics in lattice QCD is the evaluation of hadronic contributions to the magnetic moment of the muon. For many years, the experimental measurements from Brookhaven and Fermilab have appeared to be in tension with the Standard Model (SM), based on theoretical predictions that rely on data from e+e− annihilation to hadrons. Intense work on the lattice by multiple groups is now maturing rapidly and providing a valuable cross-check for data-driven SM calculations.

At lowest order in quantum electrodynamics (QED), the Dirac equation accounts for precisely two Bohr magnetons in the muon’s magnetic moment (g = 2) – a contribution arising purely from the muon interacting with a single real external photon representing the magnetic field. At higher orders, virtual Standard Model particles modify that value, leading to the so-called anomalous magnetic moment g–2. The Schwinger term adds a virtual photon and a contribution to g–2 of approximately 0.2%. Adding individual virtual W, Z or Higgs bosons gives a well defined contribution a factor of a million or so smaller. The remaining relevant contributions are from hadronic vacuum polarisation (HVP) and hadronic light-by-light (HLBL) scattering. Both add hadronic contributions, integrated to all orders in the strong coupling constant, to the interaction between the muon and the external magnetic field via additional virtual photons. Though their contributions to g–2 are in the ballpark of the small electroweak contribution, they are far more difficult to calculate and dominate the error budget for the SM prediction of the muon’s g–2.
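For concreteness, the Schwinger term can be evaluated in one line: the leading QED anomaly is a = α/2π, so g = 2(1 + α/2π). This is standard textbook QED, using an approximate value of the fine-structure constant:

```python
import math

alpha = 1 / 137.035999          # fine-structure constant (approximate)
a_mu = alpha / (2 * math.pi)    # Schwinger's one-loop QED result
g = 2 * (1 + a_mu)              # the anomaly shifts g above the Dirac value
print(f"a = {a_mu:.6f}, g = {g:.6f}")
```

The one-loop anomaly is about 0.00116, so g − 2 ≈ 0.0023 at this order; all the higher-order SM effects discussed in the text are corrections to this.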

Christine Davies (University of Glasgow) gave a comprehensive survey of muon g–2 that stressed several high-level points: the small HLBL contribution looks to be settled, and is unlikely to be a key piece of the puzzle; recent tensions among the e+e− experiments for HVP have emerged and need to be better understood; and in the most contentious region, all eight recent lattice-QCD calculations agree with each other and with the very recent e+e− → hadrons experiment CMD-3 (2024 Phys. Rev. Lett. 132 231903), though not with earlier experiments. Thus, lattice QCD and CMD-3 suggest there is “almost certainly less new physics in muon g–2 than previously hoped, and perhaps none,” said Davies. We shall see: many groups are preparing results for the full HVP, targeting a new whitepaper from the Muon g–2 Theory Initiative by the end of this year, in anticipation of the final measurement from the Fermilab experiment sometime in 2025.

New directions

While the main focus of lattice calculations is the study of QCD, lattice methods have been applied well beyond it. There is a small but active community investigating systems that could be relevant to physics beyond the Standard Model, including composite Higgs models, supersymmetry and dark matter. These studies often inspire formal theoretical developments that are of interest beyond the lattice community. Particularly exciting directions this year were developments on emergent phases and non-invertible symmetries, and their possible application to formulating chiral gauge theories – one of the outstanding theoretical issues in lattice gauge theory.


The lattice QCD community is one of the main users of high-performance computing resources, with its simulation efforts generating petabytes of Monte Carlo data. For more than 20 years, a community-wide effort, the international lattice data grid (ILDG), has allowed this data to be shared. Since its inception, ILDG has implemented the FAIR principles – that data should be findable, accessible, interoperable and reusable – almost in full. The community is now discussing open science more broadly. Ed Bennett (Swansea) led a panel discussion that explored the benefits of ILDG embracing open science, such as higher credibility for published results and, not least, the means to fulfil the expectations of funding bodies. Sustainably maintaining the infrastructure and employing the required personnel calls for national or even international community efforts, both to convince funding agencies to provide corresponding funding lines and to convince researchers of the benefits of open science.

The Kenneth G. Wilson Award for Excellence in Lattice Field Theory was awarded to Michael Wagman (Fermilab) for his lattice-QCD studies of noise reduction in nuclear systems, the structure of nuclei and transverse-momentum-dependent hadronic structure functions. Fifty years on from Wilson’s seminal paper, two of the field’s earliest contributors, John Kogut (US Department of Energy) and Jan Smit (University of Amsterdam), reminisced about the birth of the lattice in a special session chaired by Liverpool pioneer Chris Michael. Both speakers gave fascinating insights into a time when physics was extracted from a handful of small-volume gauge configurations, compared to hundreds of thousands today.

Lattice 2025 will take place at the Tata Institute of Fundamental Research in Mumbai, India, from 3 to 8 November 2025.

ALICE does the double slit

In the famous double-slit experiment, an interference pattern consisting of dark and bright bands emerges when a beam of light hits two narrow slits. The same effect has also been seen with particles such as electrons and protons, demonstrating the wave nature of propagating particles in quantum mechanics. Typically, experiments of this type produce interference patterns at the nanometre scale. In a recent study, the ALICE collaboration measured a similar interference pattern at the femtometre scale using ultra-peripheral collisions between lead nuclei at the LHC.

In ultra-peripheral collisions, two nuclei pass close to each other without colliding: their impact parameter is larger than the sum of their radii. One nucleus emits a photon that transforms into a virtual quark–antiquark pair; this pair then interacts strongly with the other nucleus via the exchange of two gluons, resulting in the emission of a vector meson. Such vector-meson photoproduction is a well-established tool for probing the internal structure of colliding nuclei.

In vector-meson photoproduction involving symmetric systems, such as two lead nuclei, it is not possible to determine which of the nuclei emitted the photon and which emitted the two gluons. Crucially, however, due to the short range of the strong force between the virtual quark–antiquark pair and the nucleus, the vector mesons must have been produced within or close to one of the two well-separated nuclei. Because of this and their relatively short lifetime, the vector mesons decay quite rapidly into other particles. These decay products form a quantum-mechanically entangled state and generate an interference pattern akin to that of a double-slit interferometer.

In the photoproduction of the electrically neutral ρ0 vector meson, the interference pattern takes the form of a cos(2φ) modulation of the ρ0 yield, where φ is the angle between the two vectors formed by the sum and difference of the transverse momenta of the two oppositely charged pions into which the ρ0 decays. The strength of the modulation is expected to increase as the impact parameter decreases.
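The angle φ defined above can be computed directly from the two pions' transverse-momentum vectors. The sketch below shows only this kinematic definition – the pion momenta are invented, and none of ALICE's event selection or acceptance corrections are included:

```python
import math

def phi_angle(p1, p2):
    """Angle phi between the sum and the difference of the two pions'
    transverse momenta (px, py) - the angle in which the cos(2*phi)
    modulation of the rho0 yield appears. Toy kinematics only."""
    sx, sy = p1[0] + p2[0], p1[1] + p2[1]
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    cos_phi = (sx * dx + sy * dy) / (math.hypot(sx, sy) * math.hypot(dx, dy))
    return math.acos(max(-1.0, min(1.0, cos_phi)))  # clamp for float safety

# invented momenta (GeV) for two nearly back-to-back decay pions
print(phi_angle((0.30, 0.05), (-0.25, 0.02)))
```

Filling a histogram of φ over many events and fitting a 1 + A·cos(2φ) shape would then quantify the modulation strength.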

Using a dataset of 57,000 ρ0 mesons produced in lead–lead collisions at an energy of 5.02 TeV per nucleon pair during Run 2 of the LHC, the ALICE team measured the cos(2φ) modulation of the ρ0 yield for different values of the impact parameter. The measurements showed that the strength of the modulation varies strongly with the impact parameter. Theoretical calculations indicate that this behaviour is indeed the result of a quantum interference effect at the femtometre scale.

In the ongoing Run 3 of the LHC and in the next run, Run 4, ALICE is expected to collect more than 15 million ρ0 mesons from lead–lead collisions. This enhanced dataset will allow a more detailed analysis of the interference effect, further testing the validity of quantum mechanics at femtometre scales.

Strange correlations benchmark hadronisation

ALICE figure 1

In high-energy hadronic and heavy-ion collisions, strange quarks are dominantly produced via gluon fusion; in contrast to u and d quarks, they are not present in the colliding particles. Since strangeness is conserved in QCD, strange and anti-strange quarks must be produced in equal numbers, making strange hadrons a prime observable for studying the dynamics of these collisions. Various experimental results from high-multiplicity pp collisions at the LHC show striking similarities to Pb–Pb results. Notably, the fraction of hadrons carrying one or more strange quarks increases smoothly as a function of particle multiplicity in pp and p–Pb collisions, reaching values consistent with those measured in peripheral Pb–Pb collisions. Multi-particle correlations in pp collisions also closely resemble those in Pb–Pb collisions.

Explaining such observations requires understanding the hadronisation mechanism, which governs how quarks and gluons rearrange into bound states (hadrons). Since no first-principles calculation of hadronisation is available, phenomenological models are used, based either on Lund string fragmentation (Pythia 8, HIJING) or on a statistical approach assuming a gas of hadrons and their resonances (HRG) in thermal and chemical equilibrium. Despite their vastly different approaches, both classes of model successfully describe the enhanced production of strange hadrons. This similarity calls for new observables to decisively discriminate between the two approaches.


In a recently published study, the ALICE collaboration measured correlations between particles arising from the conservation of quantum numbers to further distinguish the two models. In the string fragmentation model, quantum numbers are conserved locally through the creation of quark–antiquark pairs from the breaking of colour strings. This leads to a short-range rapidity correlation between strange and anti-strange hadrons. In the statistical hadronisation approach, by contrast, quantum numbers are conserved globally over a finite volume, leading to long-range correlations between both strange–strange and strange–anti-strange hadron pairs. Quantum-number conservation leads to correlated particle production, which is probed by measuring the yields of charged kaons (with one strange quark) and multistrange baryons (Ξ− and Ξ+) on an event-by-event basis. In ALICE, charged kaons are directly tracked in the detectors, while Ξ baryons are reconstructed via their weak decay to a charged pion and a Λ baryon, which is itself identified via its weak decay into a proton and a charged pion.

Figure 1 shows the first measurement of the correlation between the “net number” of Ξ baryons and kaons, as a function of the charged-particle multiplicity at midrapidity in pp, p–Pb and Pb–Pb collisions, where the net number is the difference between particle and antiparticle multiplicities. The experimental results deviate from the uncorrelated baseline (dashed line), and string fragmentation models that mainly correlate strange hadrons with opposite strange quark content over a small rapidity range fail to describe both observables. At the same time, the measurements agree with the statistical hadronisation model description that includes opposite-sign and same-sign strangeness correlations over large rapidity intervals. The data indicate a weaker opposite-sign strangeness correlation than that predicted by string fragmentation, suggesting that the correlation volume for strangeness conservation extends to about three units of rapidity.
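A toy Monte Carlo makes the underlying idea concrete: if every event produces equal numbers of s and anti-s quarks, and each quark ends up in either a kaon or a Ξ (treated here as singly strange for simplicity), strangeness conservation anticorrelates the event-by-event net numbers, with the strength diluted by detector acceptance. All probabilities below are invented for illustration and are not ALICE values:

```python
import random

def net_correlation(n_events=50000, eps=0.6, seed=7):
    """Toy model: each event makes n s quarks and n anti-s quarks; each
    hadronises into a kaon (80%) or a Xi (20%, treated as singly strange
    for simplicity), and each hadron is detected with probability eps.
    Returns the event-by-event correlation coefficient between the net
    kaon and net Xi numbers. All probabilities are invented."""
    rng = random.Random(seed)
    ks, xis = [], []
    for _ in range(n_events):
        n = rng.randint(1, 8)
        nk = nxi = 0
        for _ in range(n):              # s quarks -> K- or Xi-
            if rng.random() < 0.8:
                if rng.random() < eps:
                    nk -= 1
            elif rng.random() < eps:
                nxi -= 1
        for _ in range(n):              # anti-s quarks -> K+ or Xi+
            if rng.random() < 0.8:
                if rng.random() < eps:
                    nk += 1
            elif rng.random() < eps:
                nxi += 1
        ks.append(nk)
        xis.append(nxi)
    mk, mx = sum(ks) / n_events, sum(xis) / n_events
    cov = sum((a - mk) * (b - mx) for a, b in zip(ks, xis)) / n_events
    vk = sum((a - mk) ** 2 for a in ks) / n_events
    vx = sum((b - mx) ** 2 for b in xis) / n_events
    return cov / (vk * vx) ** 0.5

print(net_correlation())  # negative: conservation anticorrelates the nets
```

With perfect acceptance (eps = 1) the correlation would be exactly −1; limited acceptance and, in the real measurement, a finite correlation volume dilute it, which is what the comparison in figure 1 exploits.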

The present study will be extended using the recently collected data during LHC Run 3. The larger data samples will enable similar measurements for the triply strange Ω baryon, as well as the study of higher cumulants.

Exploring the Higgs potential at ATLAS

ATLAS figure 1

Immediately after the Big Bang, all the particles we know about today were massless and moving at the speed of light. About 10⁻¹² seconds later, the scalar Higgs field spontaneously broke the symmetry of the electroweak force, separating it into the electromagnetic and weak forces, and giving mass to fundamental particles. Without this process, the universe as we know it would not exist.

Since its discovery in 2012, measurements of the Higgs boson – the particle associated with the new field – have refined our understanding of its properties, but it remains unknown how closely the field’s energy potential resembles the predicted Mexican hat shape. Studying the Higgs potential can provide insights into the dynamics of the early universe, and the stability of the vacuum with respect to potential future changes.

The Higgs boson’s self-coupling strength λ governs the cubic and quartic terms in the equation describing the potential. It can be probed via the pair production of Higgs bosons (HH), though this is experimentally challenging: HH production is more than 1000 times less likely than the production of a single Higgs boson, partly due to destructive interference between the two leading-order diagrams in the dominant gluon–gluon fusion production mode.

The ATLAS collaboration recently compiled a series of results targeting HH decays to bbγγ, bbττ, bbbb, bbll plus missing transverse energy (ETmiss), and multilepton final states. Each analysis uses the full LHC Run 2 dataset. A key parameter is the HH signal strength, μHH, defined as the measured HH production rate divided by the Standard Model (SM) prediction. This combination yields the strongest expected constraints to date on μHH, and an observed upper limit of 2.9 times the SM prediction (figure 1). It also sets the most stringent constraints to date on the Higgs boson’s self-coupling, –1.2 < κλ < 7.2, where κλ = λ/λSM is its value relative to the SM prediction.
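The shape of the κλ constraint follows from the fact that the gluon–gluon fusion HH cross section is a quadratic function of κλ – schematically |κλ·A_triangle + A_box|², with destructive interference near the SM point. The coefficients in the sketch below are invented placeholders, normalised so the ratio is 1 at κλ = 1; they are not the NLO values used by ATLAS:

```python
def hh_xsec_ratio(kl, a=0.30, b=-1.30, c=2.00):
    """Toy quadratic parametrisation sigma(HH)/sigma_SM = a*kl^2 + b*kl + c,
    reflecting the |kl*A_triangle + A_box|^2 interference structure.
    Coefficients are invented placeholders, chosen so the ratio equals 1
    at the SM point kl = 1 and dips below 1 nearby."""
    return a * kl * kl + b * kl + c

print(hh_xsec_ratio(0.0), hh_xsec_ratio(2.2), hh_xsec_ratio(7.2))
```

Because the rate grows for κλ far from 1 in either direction, an upper limit on μHH translates into a two-sided allowed interval such as –1.2 < κλ < 7.2.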

Each analysis contributes in a complementary way to the global picture of HH interactions and faces its own set of unique challenges.

Despite its tiny branching fraction of just 0.26% of all HH decays, HH → bbγγ provides very good sensitivity to μHH thanks to the ATLAS detector’s excellent di-photon mass resolution. It also sets the best constraints on λ due to its sensitivity to HH events with low invariant mass.

The HH → bbττ analysis (7.3% of HH decays) exploits state-of-the-art hadronic–tau identification to control the complex mix of electroweak, multijet and top-quark backgrounds. It yields the strongest limits on μHH and the second tightest constraints on λ.

HH → bbbb (34%) has good sensitivity to μHH thanks to ATLAS’s excellent b-jet identification, but controlling the multijet background presents a formidable challenge, which is tackled in a fully data-driven fashion.


The decays HH → bbWW and HH → bbττ in fully leptonic final states have very similar characteristics and are thus targeted in a single HH → bbll+ETmiss analysis. Contributions from the bbZZ decay mode, where one Z decays to charged light leptons and the other to neutrinos, are also considered.

Finally, the HH → multilepton analysis is designed to catch decay modes where the HH system cannot be fully reconstructed due to ambiguity in how the decay products should be assigned to the two Higgs bosons. The analysis uses nine signal regions with different multiplicities of light charged leptons, hadronic taus and photons. It is complementary to all the exclusive channels discussed above.

For the ongoing LHC Run 3, ATLAS designed new triggers to enhance sensitivity to the hadronic HH → bbττ and HH → bbbb channels. Improved b-jet identification algorithms will increase the efficiency in selecting HH signals and distinguishing them from background processes. With these and other improvements, our prospects have never looked brighter for homing in on the Higgs self-coupling.
