The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.
With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs-boson and electroweak diboson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.
AI algorithms
A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example is provided by the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.
The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of the LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant, extracted from the transverse-momentum differential cross-section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0 μ+μ– angular distributions, which had previously shown discrepancies at the level of 3σ. Taking long-distance contributions into account reduces the tension to 2.1σ.
Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress towards higher luminosities. Possible causes have been identified and are under investigation. Meanwhile, with the substantial dataset collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights into the long-standing puzzle of inclusive versus exclusive |Vcb| and |Vub| determinations (CERN Courier July/August 2024 p30), with new exclusive |Vcb| measurements that significantly reduce the previous 3σ tension.
ATLAS and CMS continued their systematic search for new phenomena, leaving no stone unturned at the energy frontier, with 20 new results presented at the conference. The absence of any significant deviation so far in this landmark LHC programme puts further pressure on the naturalness paradigm.
A highlight of the conference was the overall progress in neutrino physics. The accelerator-based experiments NOvA and T2K presented a first combined measurement of the neutrino mass splitting, mixing and CP-violation parameters. The neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long-baseline experiments DUNE and Hyper-Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium-baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.
The KATRIN experiment presented a new upper limit on the effective electron–antineutrino mass of 0.45 eV, well en route towards its ultimate sensitivity of 0.2 eV. The neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted-ordering hypothesis or discover this long-sought process. Progress on the reactor antineutrino anomaly was also reported, with recent fission data suggesting that the predicted fluxes are overestimated, thus weakening the significance of the antineutrino deficits.
Neutrinos were also a highlight for direct-detection dark-matter experiments, as the XENON collaboration announced the observation of nuclear-recoil events from coherent elastic scattering of ⁸B solar neutrinos on nuclei, signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter over an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontiers, direct searches at low masses to cover relic-abundance-motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.
Neutrinos also made the headlines in multi-messenger astrophysics with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, with beautiful neutrino, gamma-ray and gravitational-wave results.
The conference saw new bridges being built across fields. The birth of collider-neutrino physics, with beautiful results from FASERν and SND@LHC, fills the gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on helium-3 production that complement the AMS results; astrophysical helium-3 could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.
The highlight in observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which provide splendid new measurements of baryon acoustic oscillations. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI data also set an upper limit on the sum of neutrino masses of 0.072 eV, in tension with the inverted ordering of neutrino masses; this limit is, however, dependent on the cosmological model.
In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.
“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.
In a game of snakes and ladders, players move methodically up the board, occasionally encountering opportunities to climb a ladder. The NA62 experiment at CERN is one such opportunity. Searching for ultra-rare decays at colliders and fixed-target experiments like NA62 can offer a glimpse at energy scales an order of magnitude higher than is directly accessible when creating particles in a frontier machine.
The trick is to study hadron decays that are highly suppressed by the GIM mechanism (see “Charming clues for existence”). Should massive particles beyond the Standard Model (SM) exist at the right energy scale, they could disrupt the delicate cancellations expected in the SM by making brief virtual appearances according to the limits imposed by Heisenberg’s uncertainty principle. In a recent featured article, Andrzej Buras (Technical University of Munich) identified the six most promising rare decays where new physics might be discovered before the end of the decade (CERN Courier July/August 2024 p30). Among them is K+ → π+νν, the ultra-rare decay sought by NA62. In the SM, fewer than one K+ in 10 billion decays this way, requiring the team to exercise meticulous attention to detail in excluding backgrounds. The collaboration has now announced that it has observed the process with 5σ significance.
“This observation is the culmination of a project that started more than a decade ago,” says spokesperson Giuseppe Ruggiero of INFN and the University of Florence. “Looking for effects in nature that have probabilities of happening of the order of 10⁻¹¹ is both fascinating and challenging. After rigorous and painstaking work, we have finally seen the process NA62 was designed and built to observe.”
In the NA62 experiment, kaons are produced by directing a high-intensity proton beam from CERN’s Super Proton Synchrotron onto a stationary beryllium target. Almost a billion secondary particles are produced each second. Of these, about 6% are positively charged kaons, which are tagged and matched with positively charged pions from the decay K+ → π+νν, the neutrinos escaping undetected. Upgrades to NA62 during Long Shutdown 2 increased the experiment’s signal efficiency while maintaining its sample purity, allowing the collaboration to double the expected signal of their previous measurement using new data collected in 2021 and 2022. A total of 51 events pass the stringent selection criteria, over an expected background of 18 (+3/−2), definitively establishing the existence of this decay for the first time.
NA62 measures the branching ratio for K+ → π+νν to be (13.0 +3.3/−2.9) × 10⁻¹¹ – the most precise measurement to date and about 50% higher than the SM prediction, though compatible with it within 1.7σ at the current level of precision. NA62’s full dataset will be required to test the validity of the SM in this decay. Data taking is ongoing.
The LHCb collaboration has undertaken a new study of B → DD decays using data from LHC Run 2. In the case of B0 → D+D– decays, the analysis excludes CP symmetry with a significance greater than six standard deviations – a first in the analysis of a single decay mode.
The study of differences between matter and antimatter (CP violation) is a core aspect of the physics programme at LHCb. Measurements of CP violation in decays of neutral B0 mesons play a crucial role in the search for physics beyond the Standard Model thanks to the ability of the B0 meson to oscillate into its antiparticle, the B̄0 meson. As experimental precision increases, improved control over the magnitude of hadronic effects becomes important; this is a major challenge in most decay modes. In this measurement, a neutral B meson decays to two charm D mesons – an interesting topology that offers a method to control these higher-order hadronic contributions within the Standard Model via the concept of U-spin symmetry.
In the new analysis, B0→ D+D– and Bs0→ Ds+Ds– are studied simultaneously. U-spin symmetry exchanges the spectator down quarks in the first decay with strange quarks to form the second decay. A joint analysis therefore strongly constrains uncertainties related to hadronic matrix elements by relating CP-violation and branching-fraction measurements in the two decay channels.
In both decays, the same final state is accessible to both the matter and antimatter states of the B0 or Bs0 meson, enabling interference between two decay paths: the direct decay of the meson to the final state, and a decay after the meson has oscillated into its antiparticle counterpart. The time-dependent decay rate of each flavour (matter or antimatter) of the meson depends on CP-violating effects and is parameterised in terms of the fundamental properties of the B mesons and the CP-violating weak phases β and βs, for B0 and Bs0 decays respectively. The expected values of the β(s) phases are determined by the tree-level and exchange Feynman diagrams contributing to these decays, which in turn depend on specific elements of the Cabibbo–Kobayashi–Maskawa quark-mixing matrix. This matrix encodes our best understanding of CP-violating effects within the Standard Model, and testing its expected properties is a crucial closure test of this theoretical framework.
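For orientation, a minimal sketch of the standard parameterisation (sign conventions vary between analyses): the flavour-tagged, time-dependent CP asymmetry for a decay to a CP eigenstate f can be written as

$$ A_{CP}(t) \;=\; \frac{\Gamma(\bar B^0(t)\to f) - \Gamma(B^0(t)\to f)}{\Gamma(\bar B^0(t)\to f) + \Gamma(B^0(t)\to f)} \;=\; S_f \sin(\Delta m\, t) - C_f \cos(\Delta m\, t), $$

where Δm is the B0–B̄0 oscillation frequency. If the tree-level amplitude dominates B0 → D+D–, then S_f ≈ −sin 2β and C_f ≈ 0, which is why the measured asymmetry gives access to the weak phase β.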
The analysis uses flavour tagging to identify the matter or antimatter flavour of the neutral B meson at its production and thus allows the determination of the decay path – a key task in time-dependent measurements of CP violation. The flavour-tagging algorithms exploit the fact that b and b̄ quarks are almost exclusively produced in pairs in pp collisions. When the b quark forms a B meson (and similarly for its antimatter equivalent), additional particles are produced in the fragmentation process of the pp collision. From the charges and species of these particles, the flavour of the signal B meson at production can be inferred. This information is combined with the reconstructed position of the decay vertex of the meson, allowing the flavour-tagged decay-time distribution of each analysed flavour to be measured.
Figure 1 shows the asymmetry between the decay-time distributions of the B0 and B̄0 mesons for the B0 → D+D– decay mode. Alongside the Bs0 → Ds+Ds– data, these results represent the most precise single measurements of the CP-violation parameters in their respective channels. Results from the two decay modes are used in combination with other B → DD measurements to precisely determine Standard Model parameters.
According to the cosmological standard model, the first generation of nuclei was produced during the cooling of the hot mixture of quarks and gluons created shortly after the Big Bang. Relativistic heavy-ion collisions create a quark–gluon plasma (QGP) on a small scale, producing a “little bang”. In such collisions, the nucleosynthesis mechanism at play differs from that of the Big Bang due to the rapid cooling of the fireball. Recently, the nucleosynthesis mechanism in heavy-ion collisions has been investigated by the ALICE collaboration via the measurement of hypertriton production.
The hypertriton, which consists of a proton, a neutron and a Λ hyperon, can be considered to be a loosely bound deuteron–Λ molecule (see “Inside pentaquarks and tetraquarks”). In this picture, the energy required to separate the Λ from the deuteron (BΛ) is about 100 keV, significantly lower than the binding energy of ordinary nuclei. This makes hypertriton production a sensitive probe of the properties of the fireball.
In heavy-ion collisions, the formation of nuclei can be explained by two main classes of models. The statistical hadronisation model (SHM) assumes that particles are produced from a system in thermal equilibrium. In this model, the production rate of nuclei depends only on their mass, quantum numbers and the temperature and volume of the system. On the other hand, in coalescence models, nuclei are formed from nucleons that are close together in phase space. In these models, the production rate of nuclei is also sensitive to their nuclear structure and size.
For an ordinary nucleus like the deuteron, coalescence and SHM predict similar production rates in all colliding systems, but for a loosely bound molecule such as the hypertriton, the predictions of the two models differ significantly. In order to identify the mechanism of nuclear production, the ALICE collaboration used the ratio between the production rates of hypertriton and helium-3 – also known as a yield ratio – as an observable.
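A minimal illustration of why this ratio discriminates between the two pictures (a Boltzmann approximation, not the full modelling used by ALICE): in the SHM the yield of a nucleus of mass m, spin J and source volume V is roughly

$$ \frac{dN}{dy} \;\propto\; (2J+1)\, V \left(\frac{mT}{2\pi}\right)^{3/2} e^{-m/T}, $$

with a chemical freeze-out temperature T ≈ 155 MeV, so the hypertriton-to-³He ratio is essentially fixed by the small mass difference between the two nuclei. In a coalescence picture the hypertriton is further penalised by its very extended wavefunction, which must fit within the emitting source, so the ratio is expected to drop for small collision systems.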
ALICE measured hypertriton production as a function of charged-particle multiplicity density in Pb–Pb collisions collected at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. Figure 1 shows the yield ratio of hypertriton to ³He across different multiplicity intervals. The data points (red) exhibit a clear deviation from the SHM (dashed orange line), but are well described by the coalescence model (blue band), supporting the conclusion that hypertriton formation at the LHC is driven by the coalescence mechanism.
The ongoing LHC Run 3 is expected to improve the precision of these measurements across all collision systems, allowing us to probe the internal structure of hypertriton and even heavier hypernuclei, whose properties remain largely unknown. This will provide insights into the interactions between ordinary nucleons and hyperons, which are essential for understanding the internal composition of neutron stars.
Anyone in touch with the world of high-energy physics will be well aware of the ferment created by the news from Brookhaven and Stanford, followed by Frascati and DESY, of the existence of new particles. But new particles have been unearthed in profusion by high-energy accelerators during the past 20 years. Why the excitement over the new discoveries?
A brief answer is that the particles have been found in a mass region where they were completely unexpected with stability properties which, at this stage of the game, are completely inexplicable. In this article we will first describe the discoveries and then discuss some of the speculations as to what the discoveries might mean.
We begin at the Brookhaven National Laboratory where, since the Spring of this year, an MIT/Brookhaven team have been looking at collisions between two protons which yielded (amongst other things) an electron and a positron. A series of experiments on the production of electron–positron pairs in particle collisions has been going on for about eight years in groups led by Sam Ting, mainly at the DESY synchrotron in Hamburg. The aim is to study some of the electromagnetic features of particles where energy is manifest in the form of a photon which materialises in an electron–positron pair. The experiments are not easy to do because the probability that the collisions will yield such a pair is very low. The detection system has to be capable of picking out an event from a million or more other types of event.
Beryllium bombardment
It was with long experience of such problems behind them that the MIT/Brookhaven team led by Ting, J J Aubert, U J Becker and P J Biggs brought into action a detection system with a double arm spectrometer in a slow ejected proton beam at the Brookhaven 33 GeV synchrotron. They used beams of 28.5 GeV bombarding a beryllium target. The two spectrometer arms span out at 15° either side of the incident beam direction and have magnets, Cherenkov counters, multiwire proportional chambers, scintillation counters and lead glass counters. With this array, it is possible to identify electrons and positrons coming from the same source and to measure their energy.
From about August, the realisation that they were on to something important began slowly to grow. The spectrometer was totting up an unusually large number of events where the combined energies of the electron and positron were equal to 3.1 GeV.
This is the classic way of spotting a resonance. An unstable particle, which breaks up too quickly to be seen itself, is identified by adding up the energies of more stable particles which emerge from its decay. Looking at many interactions, if energies repeatedly add up to the same figure (as opposed to the other possible figures all around it), they indicate that the measured particles are coming from the break up of an unseen particle whose mass is equal to the measured sum.
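For orientation (in natural units, and assuming the pair comes from a single parent particle), the quantity actually histogrammed is the invariant mass of the electron–positron pair,

$$ m_{e^+e^-} \;=\; \sqrt{(E_{e^+} + E_{e^-})^2 - |\vec p_{e^+} + \vec p_{e^-}|^2}, $$

which peaks at the parent’s mass however the parent is moving; the simpler “sum of energies” description above holds when the parent is produced essentially at rest.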
The team went through extraordinary contortions to check their apparatus to be sure that nothing was biasing their results. The particle decaying into the electron and positron they were measuring was a difficult one to swallow. The energy region had been scoured before, even if not so thoroughly, without anything being seen. Also the resonance was looking “narrow” – this means that the energy sums were coming out at 3.1 GeV with great precision rather than, for example, spanning from 2.9 to 3.3 GeV. The width is a measure of the stability of the particle (from Heisenberg’s Uncertainty Principle, which requires only that the product of the average lifetime and the width be a constant). A narrow width means that the particle lives a long time. No other particle of such a heavy mass (over three times the mass of the proton) has anything like that stability.
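A worked example of that uncertainty-principle statement (illustrative numbers, not the 1974 measurement): the lifetime is τ ≈ ℏ/Γ with ℏ ≈ 6.6 × 10⁻²² MeV s, so a width of order Γ ≈ 0.1 MeV corresponds to

$$ \tau \;\approx\; \frac{6.6\times10^{-22}\ \mathrm{MeV\,s}}{0.1\ \mathrm{MeV}} \;\approx\; 7\times10^{-21}\ \mathrm{s}, $$

of order 10⁻²⁰ s – roughly a thousand times longer-lived than a typical strongly decaying resonance of comparable mass, whose width is measured in tens or hundreds of MeV.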
By the end of October, the team had about 500 events from a 3.1 GeV particle. They were keen to extend their search to the maximum mass their detection system could pin down (about 5.5 GeV) but were prodded into print mid-November by dramatic news from the other coast of America. They baptised the particle J, which is a letter close to the Chinese symbol for “ting”. From then on, the experiment has had top priority. Sam Ting said that the Director of the Laboratory, George Vineyard, asked him how much time on the machine he would need – which is not the way such conversations usually go.
The apparition of the particle at the Stanford Linear Accelerator Center on 10 November was nothing short of shattering. Burt Richter described it as “the most exciting and frantic week-end in particle physics I have ever been through”. It followed an upgrading of the electron–positron storage ring SPEAR during the late Summer.
Until June, SPEAR was operating with beams of energy up to 2.5 GeV so that the total energy in the collision was up to a peak of 5 GeV. The ring was shut down during the late summer to install a new RF system and new power supplies so as to reach about 4.5 GeV per beam. It was switched on again in September and within two days beams were orbiting the storage ring again. Only three of the four new RF cavities were in action so the beams could only be taken to 3.8 GeV. Within two weeks the luminosity had climbed to 5 × 10³⁰ cm⁻² s⁻¹ (the luminosity dictates the number of interactions the physicists can see) and time began to be allocated to experimental teams to bring their detection systems into trim.
It was the Berkeley/Stanford team led by Richter, M Perl, W Chinowsky, G Goldhaber and G H Trilling who went into action during the week-end 9–10 November to check back on some “funny” readings they had seen in June. They were using a detection system consisting of a large solenoid magnet, wire chambers, scintillation counters and shower counters, almost completely surrounding one of the two intersection regions where the electrons and positrons are brought into head-on collision.
Put through its paces
During the first series of measurements with SPEAR, when it went through its energy paces, the cross-section (or probability of an interaction between an electron and positron occurring) was a little high at 1.6 GeV beam energy (3.2 GeV collision energy) compared with the neighbouring beam energies. The June exercise, which gave the funny readings, was a look over this energy region again. Cross-sections were measured with electrons and positrons at 1.5, 1.55, 1.6 and 1.65 GeV. Again 1.6 GeV was a little high but 1.55 GeV was even more peculiar. In eight runs, six measurements agreed with the 1.5 GeV data while two were higher (one of them five times higher). So, obviously, a gremlin had crept into the apparatus. During the transformation from SPEAR I to SPEAR II the gremlin was looked for, but not found. It was then that the suspicion grew that between 3.1 and 3.2 GeV collision energies could lie a resonance.
During the night of 9–10 November the hunt began, changing the beam energies in 0.5 MeV steps. By 11.00 a.m. on the Sunday morning the new particle had been unequivocally found. A set of cross-section measurements around 3.1 GeV showed that the probability of interaction jumped by a factor of 10, from 20 to 200 nanobarns. In a state of euphoria, the team cracked open the champagne and began celebrating an important discovery. Gerson Goldhaber retired in search of peace and quiet to write up the findings for immediate publication.
While he was away, it was decided to polish up the data by going slowly over the resonance again. The beams were nudged from 1.55 to 1.57 and everything went crazy. The interaction probability soared higher; from around 20 nanobarns the cross-section jumped to 2000 nanobarns and the detector was flooded with events producing hadrons. Pief Panofsky, the Director of SLAC, arrived and paced around invoking the Deity in utter amazement at what was being seen. Gerson Goldhaber then emerged with his paper proudly announcing the 200 nanobarn resonance and had to start again, writing 10 times more proudly.
Within hours of the SPEAR measurements, the telephone wires across the Atlantic were humming as information enquiries and rumours were exchanged. As soon as it became clear what had happened, the European Laboratories looked to see how they could contribute to the excitement. The obvious candidates, to be in on the act quickly, were the electron–positron storage rings at Frascati and DESY.
From 13 November, the experimental teams on the ADONE storage ring (from Frascati and the INFN sections of the universities of Naples, Padua, Pisa and Rome) began to search in the same energy region. They have detection systems for three experiments known as gamma–gamma (wide solid angle detector with high efficiency for detecting neutral particles), MEA (solenoidal magnetic spectrometer with wide gap spark chambers and shower detectors) and baryon–antibaryon (coaxial hodoscopes of scintillators covering a wide solid angle). The ADONE operators were able to jack the beam energy up a little above its normal peak of 1.5 GeV and on 15 November the new particle was seen in all three detection systems. The data confirmed the mass and the high stability. The experiments are continuing using the complementary abilities of the detectors to gather as much information as possible on the nature of the particle.
At DESY, the DORIS storage ring was brought into action with the PLUTO and DASP detection systems described later in this issue on page 427. During the week-end of 23–24 November, a clear signal at about 3.1 GeV total energy was seen in both detectors, with PLUTO measuring events with many emerging hadrons and DASP measuring two emerging particles. The angular distribution of elastic electron–positron scattering was measured at 3.1 GeV, and around it, and a distinct change was seen. The detectors are now concentrating on measuring branching ratios – the relative rate at which the particle decays in different ways.
Excitation times
In the meantime, SPEAR II had struck again. On 21 November, another particle was seen at 3.7 GeV. Like the first it is a very narrow resonance indicating the same high stability. The Berkeley/Stanford team have called the particles psi (3105) and psi (3695).
No-one had written the recipe for these particles and that is part of what all the excitement is about. At this stage, we can only speculate about what they might mean. First of all, for the past year, something has been expected in the hadron–lepton relationship. The leptons are particles, like the electron, which we believe do not feel the strong force. Their interactions, such as are initiated in an electron–positron storage ring, can produce hadrons (or strong-force particles) via their common electromagnetic features. On the basis of the theory that hadrons are built up of quarks (a theory that has a growing weight of experimental support – see CERN Courier October 1974 pp331–333), it is possible to calculate the relative rates at which the electron–positron interaction will yield hadrons, and the rate should decrease as the energy goes higher. The results from the Cambridge bypass and SPEAR about a year ago showed hadrons being produced much more profusely than these predictions.
What seems to be the inverse of this observation is seen at the CERN Intersecting Storage Rings and the 400 GeV synchrotron at Fermilab. In interactions between hadrons, such as proton–proton collisions, leptons are seen coming off at much higher relative rates than could be predicted. Are the new particles behind this hadron–lepton mystery? And if so, how?
Other speculations are that the particles have new properties to add to the familiar ones like charge, spin, parity… As the complexity of particle behaviour has been uncovered, names have had to be selected to describe different aspects. These names are linked, in the mathematical description of what is going on, to quantum numbers. When particles interact, the quantum numbers are generally conserved – the properties of the particles going into the interaction are carried away, in some perhaps very different combination, by the particles which emerge. If there are new properties, they also will influence what interactions can take place.
To explain what might be happening, we can consider the property called “strangeness”. This was assigned to particles like the neutral kaon and lambda to explain why they were always produced in pairs – the strangeness quantum number is then conserved, the kaon carrying +1, the lambda carrying –1. It is because the kaon has strangeness that it is a very stable particle. It will not readily break up into other particles which do not have this property.
Two new properties have recently been invoked by the theorists – colour and charm. Colour is a suggested property of quarks which makes sense of the statistics used to calculate the consequences of their existence. This gives us nine basic quarks – three coloured varieties of each of the three familiar ones. Charm is a suggested property which makes sense of some observations concerning neutral current interactions (discussed below).
It is the remarkable stability of the new particles which makes it so attractive to invoke colour or charm. From the measured width of the resonances they seem to live for about 10⁻²⁰ seconds and do not decay rapidly like all the other resonances in their mass range. Perhaps they carry a new quantum number?
Unfortunately, even if the new particles are coloured, since they are formed electromagnetically they should be able to decay the same way and the sums do not give their high stability. In addition, the sums say that there is not enough energy around for them to be built up of charmed constituents. The answer may lie in new properties but not in a way that we can easily calculate.
Yet another possibility is that we are, at last, seeing the intermediate boson. This particle was proposed many years ago as an intermediary of the weak force. Just as the strong force is communicated between hadrons by passing mesons around and the electromagnetic force is communicated between charged particles by passing photons around, it is thought that the weak force could also act via the exchange of a particle rather than “at a point”.
When it was believed that the weak interactions always involved a change of electric charge between the lepton going into the interaction and the lepton going out, the intermediate boson (often referred to as the W particle) was always envisaged as a charged particle. The CERN discovery of neutral currents in 1973 revealed that a charge change between the leptons need not take place; there could also be a neutral version of the intermediate boson (often referred to as the Z particle). The Z particle can also be treated in the theory which has had encouraging success in uniting the interpretations of the weak and electromagnetic forces.
This work has taken the Z mass into the 70 GeV region, and its appearance around 3 GeV would damage some of the beautiful features of the unification theories. A strong clue could come from looking for asymmetries in the decays of the new particles because, if they are of the Z variety, parity violation should occur.
1974 has been one of the most fascinating years ever experienced in high-energy physics. Still reeling from the neutral-current discovery, the year began with the SPEAR hadron-production mystery, continued with new high-energy information from Fermilab and the CERN ISR, including the high lepton-production rate, and finished with the discovery of the new particles. And all this against a background of feverish theoretical activity trying to keep pace with what the new accelerators and storage rings have been uncovering.
For further details and an account of current challenges and opportunities in charm physics, see “Charming clues for existence”.
One of nature’s greatest mysteries lies in the masses of the elementary fermions. The second and third generations of quarks and charged leptons are progressively heavier than the first, which forms ordinary matter, but the overall pattern and vast mass differences remain empirical and unexplained. In the Standard Model (SM), charged fermions acquire mass through interactions with the Higgs field. Consequently, their interaction strength with the Higgs boson, a ripple of the Higgs field, is proportional to the fermions’ mass. Precise measurements of these interaction strengths could offer insights into the mass-generation mechanism and potentially uncover new physics to explain this mystery.
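Concretely (a standard SM relation, quoted here for orientation), the Yukawa coupling y_f of a fermion of mass m_f is

$$ y_f \;=\; \frac{\sqrt{2}\, m_f}{v}, \qquad v \approx 246\ \mathrm{GeV}, $$

so the top quark has y_t ≈ 1 while the electron has y_e ≈ 3 × 10⁻⁶ – a spread of nearly six orders of magnitude that the SM accommodates but does not explain.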
The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks (charm, bottom and top), based on the analysis of data collected during LHC Run 2 (2015–2018). The analyses refine two studies: Higgs-boson decays to charm- and bottom-quark pairs (H → cc and H → bb) in events where the Higgs boson is produced together with a weak boson V (W or Z); and, since the Higgs boson is too light to decay into a top-quark pair, the interaction with top quarks is probed in Higgs production in association with a top-quark pair (ttH) in events with H → bb decays. Sensitivity to H → cc and H → bb in VH production is increased by a factor of three and by 15%, respectively. Sensitivity to ttH, H → bb production is doubled.
Innovative analysis techniques were crucial to these improvements, several involving machine learning, such as state-of-the-art transformers in the extremely challenging ttH(bb) analysis. Both analyses utilised an upgraded algorithm for identifying particle jets from bottom and charm quarks. A bespoke implementation allowed, for the first time, VH events to be analysed coherently for both H → cc and H → bb decays. The enhanced classification of the signal against various background processes allowed a tripling of the number of selected ttH, H → bb events, and was the single largest improvement to the sensitivity to VH, H → cc. Both analyses also improved their background-estimation methods, including new theoretical predictions and a refined assessment of the related uncertainties – a key component in boosting the ttH, H → bb sensitivity.
Due to these improvements, ATLAS measured the ttH, H → bb cross-section with a precision of 24%, better than any single measurement before. The signal strength relative to the SM prediction is found to be 0.81 ± 0.21, consistent with the SM expectation of unity. It does not confirm previous results from ATLAS and CMS that left room for a lower-than-expected ttH cross section, dispelling speculations of new physics in this process. The compatibility between new and previous ATLAS results is estimated to be 21%.
In the new analysis VH, H → bb production was measured with a record precision of 18%; WH, H → bb production was observed for the first time with a significance of 5.3σ. Because H → cc decays are suppressed by a factor of 20 relative to H → bb decays, given the difference in quark masses, and are more difficult to identify, no significant sign of this process was found in the data. However, an upper limit on potential enhancements of the VH, H → cc rate of 11.3 times the SM prediction was placed at the 95% confidence level, allowing ATLAS to constrain the Higgs-charm coupling to less than 4.2 times the SM value, the strongest direct constraint to date.
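The factor of 20 follows from the quark masses (approximate running masses evaluated at the Higgs-boson mass are used here for illustration): partial widths scale with the squared quark mass, so

$$ \frac{\Gamma(H\to c\bar c)}{\Gamma(H\to b\bar b)} \;\sim\; \frac{\bar m_c^2(m_H)}{\bar m_b^2(m_H)} \;\approx\; \left(\frac{0.6\ \mathrm{GeV}}{2.8\ \mathrm{GeV}}\right)^{2} \;\approx\; \frac{1}{20}, $$

before even accounting for the additional experimental difficulty of identifying charm jets.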
The ttH and VH cross-sections were measured (double-)differentially with increased reach, granularity, and precision (figures 1 and 2). Notably, in the high transverse-momentum regime, where potential new physics effects are not yet excluded, the measurements were extended and the precision nearly doubled. However, neither analysis shows significant deviations from Standard Model predictions.
The significant new dataset from the ongoing Run 3 of the LHC, coupled with further advanced techniques like transformer-based jet identification, promises even more rigorous tests soon, and amplifies the excitement for the High-Luminosity LHC, where further precision will push the boundaries of our understanding of the Higgs boson – and perhaps yield clues to the mystery of the fermion masses.
Since its inception in the mid-1980s, the Strings conference has sought to summarise the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. As one of the most anticipated gatherings in theoretical physics, the conference serves as a platform for exchanging knowledge, fostering new collaborations and pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature. The most recent edition, Strings 2024, attracted about 400 in-person participants to CERN in June, with several hundred more scientists following on-line.
One way to view string theory is as a model of fundamental interactions that provides a unification of particle physics with gravity. While generic features of the Standard Model and gravity arise naturally in string theory, it has lacked concrete experimental predictions so far. In recent years, the strategy has shifted from concrete model building to more systematically understanding the universal features that models of particle physics must satisfy when coupled to quantum gravity.
Into the swamp
Remarkably, there are very subtle consistency conditions that are invisible in ordinary particle physics, as they involve indirect arguments such as whether black holes can evaporate in a consistent manner. This has led to the notion of the “Swampland”, which encompasses the set of otherwise well-behaved quantum field theories that fail these subtle quantum-gravity consistency conditions. This may lead to concrete implications for particle physics and cosmology.
An important question addressed during the conference was whether these low-energy consistency conditions always point back to string theory as the only consistent “UV completion” (fundamental realisation at distance scales shorter than can be probed at low energies) of quantum gravity, as suggested by numerous investigations. Whether there is any other possible UV completion involving a version of quantum gravity unrelated to string theory remains an important open question, so it is no surprise that significant research efforts are focused in this direction.
Attempts at explicit model construction were also discussed, together with a joint discussion on cosmology, particle physics and their connections to string theory. Among other topics, recent progress on realising accelerating cosmologies in string theory was reported, as well as a stringy model for dark energy.
A different viewpoint, shared by many researchers, is to employ string theory rather as a framework or tool to study quantum gravity, without any special emphasis on its unification with particle physics. It has long been known that there is a fundamental tension when trying to combine gravity with quantum mechanics, which many regard as one of the most important, open conceptual problems in theoretical physics. This becomes most evident when one zooms in on quantum black holes. It was in this context that the holographic nature of quantum gravity was discovered – the idea that all the information contained within a volume of space can be described by data on its boundary, suggesting that the universe’s fundamental degrees of freedom can be thought of as living on a holographic screen. This may not only hold the key for understanding the decay of black holes via Hawking radiation, but can also teach us important lessons about quantum cosmology.
Thousands of papers have been written on this subject over the last few decades, and indeed holographic quantum gravity continues to be one of string theory’s most active subfields. Recent breakthroughs include the exact or approximate solution of quantum gravity in low-dimensional toy models in anti-de Sitter space, the extension to de Sitter space, an improved understanding of the nature of black-hole microstates and the precise way they decay, the discovery of connections between emergent geometry and quantum information theory, and the development of powerful tools for investigating these phenomena, such as bootstrap methods.
Other developments that were reviewed include the use of novel kinds of generalised symmetries and string field theory. Strings 2024 also gave a voice to more tangentially related areas such as scattering amplitudes, non-perturbative quantum field theory, particle phenomenology and cosmology. Many of these topics are connected to the core areas mentioned in this article, and to each other, both technically and conceptually. It is this intricate web of highly non-trivial, consistent interconnections between subfields that generates meaning beyond the sum of its parts, and forms the unifying umbrella called string theory.
The conference concluded with a novel “future vision” session, which considered 100 crowd-sourced open questions in string theory that might plausibly be answered in the next 10 years. These 100 questions provide a glimpse of where string theory may head in the near future.
What are the microscopic origins of the Higgs boson? As long as we lack the short-wavelength probes needed to study its structure directly, our best tool to confront this question is to measure its interactions.
Let’s consider two with starkly contrasting experimental prospects. The coupling of the Higgs boson to two Z bosons (HZZ) has been measured with a precision of around 5%, which should improve to around 1.3% by the end of High-Luminosity LHC (HL-LHC) operations. The Higgs boson’s self-coupling (HHH) has so far only been constrained with a precision of the order of several hundred percent, improving to around the 50% level by the end of HL-LHC operations – though it’s now rumoured that this latter estimate may be too pessimistic.
Good motives
As HZZ can be measured much more precisely than HHH, is it the more promising window beyond the Standard Model (SM)? An agnostic might say that both measurements are equally valuable, while a “top down” theorist might seek to judge which theories are well motivated, and ask how they modify the two couplings. In supersymmetry and minimal composite Higgs models, for example, modifications to HZZ and HHH are typically of a similar magnitude. But “well motivated” is a slippery notion and I don’t entirely trust it.
Fortunately there is a happy compromise between these perspectives, using the tool of choice of the informed agnostic: effective field theory. It’s really the same physical principle as trying to look within an object when your microscope operates on wavelengths greater than its physical extent. Just as the microscopic structure of an atom is imprinted, at low energies, in its multipolar (dipole, quadrupole and so forth) interactions with photons, so too would the microscopic structure of the Higgs boson leave its trace in modifications to its SM interactions.
All possible coupling modifications from microscopic new physics can be captured by effective field theory and organised into classes of “UV-completion”. UV-completions are the concrete microscopic scenarios that could exist. (Here, ultraviolet light is a metaphor for the short-wavelength probes needed to study the Higgs boson’s microscopic origins in detail.) Scenarios with similar patterns are said to live in the same universality class. Families of universality classes can be identified from the bottom up. A powerful tool for this is naïve dimensional analysis (NDA).
One particularly sharp arrow in the NDA quiver is ℏ counting, which establishes how many couplings and/or ℏs must be present in the EFT modification of an interaction. Couplings tell you the number of fundamental interactions involved; ℏs establish the need for quantum effects. For instance, NDA tells us that the coefficient of the Fermi interaction must contain two couplings, which the electroweak theory duly supplies – a W boson transforms a neutron into a proton, and then decays into an electron and an antineutrino.
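The textbook relation makes the two couplings explicit (tree-level matching of the Fermi interaction onto the electroweak theory):

$$ \frac{G_F}{\sqrt{2}} \;=\; \frac{g^2}{8\, m_W^2}, $$

one factor of the weak coupling g for each W-boson vertex, with the heavy W mass setting the scale of the effective interaction.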
For our purposes, NDA tells us that modifications to HZZ must necessarily involve one more ℏ or two fewer couplings than any underlying EFT interaction that modifies HHH. In the case of one more ℏ, modifications to HZZ could potentially be an entire quantum loop factor smaller than modifications to HHH. In the case of two fewer couplings, modifications to HHH could be as large as a factor g² greater than for HZZ, where g is a generic coupling. Either way, it is theoretically possible that the BSM modifications could be up to a couple of orders of magnitude greater for HHH than for HZZ. (Naively, a loop factor counts as around 1/16π², or about 0.01, and in the most strongly interacting scenarios g² can rise to about 16π².)
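The arithmetic behind “a couple of orders of magnitude” is simply the naive counting evaluated numerically:

$$ 16\pi^2 \;\approx\; 158, \qquad \frac{1}{16\pi^2} \;\approx\; 6\times10^{-3}, $$

so in either limiting case the HHH modification can exceed the HZZ one by a factor of order 10².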
Why does this contrast so strongly with supersymmetry and the minimal composite Higgs? They are simply in universality classes where modifications to HZZ and HHH are comparable in magnitude. But there are more universality classes in heaven and Earth than are dreamt of in our well-motivated scenarios.
Faced with the theoretical possibility of a large hierarchy in coupling modifications, it behoves the effective theorist to provide an existence proof of a concrete UV-completion where this happens, or we may have revealed a universality class of measure zero. But such an example exists: the custodial quadruplet model. I often say it’s a model that only a mother could love, but it could exist in nature, and gives rise to coupling modifications a full loop factor of about 200 greater for HHH than HZZ.
When confronted with theories beyond the SM, all Higgs couplings are not born equal: UV-completions matter. Though HZZ measurements are arguably the most powerful general probe, future measurements of HHH will explore new territory that is inaccessible to other coupling measurements. This territory is largely uncharted, exotic and beyond the best guesses of theorists. Not bad circumstances for the start of any adventure.
Supersymmetry (SUSY) provides elegant solutions to many of the problems of the Standard Model (SM) by introducing new boson/fermion partners for each SM fermion/boson, and by extending the Higgs sector. If SUSY is realised in nature at the TeV scale, it would accommodate a light Higgs boson without excessive fine-tuning. It could furthermore provide a viable dark-matter candidate, and be a key ingredient to the unification of the electroweak and strong forces at high energy. The SUSY partners of the SM bosons can mix to form what are called charginos and neutralinos, collectively referred to as electroweakinos.
Electroweakinos would be produced only through the electroweak interaction, so their production cross-sections in proton–proton collisions are orders of magnitude smaller than those of strongly produced squarks and gluinos (the supersymmetric partners of quarks and gluons). Therefore, while extensive searches using the Run 1 (7–8 TeV) and Run 2 (13 TeV) LHC datasets have turned up null results, the corresponding chargino/neutralino exclusion limits remain substantially weaker than those for strongly interacting SUSY particles.
The ATLAS collaboration has recently released a comprehensive analysis of the electroweak SUSY landscape based on its Run 2 searches. Each individual search targeted specific chargino/neutralino production mechanisms and subsequent decay modes. The analyses were originally interpreted in so-called “simplified models”, where only one production mechanism and one possible decay are considered. However, if SUSY is realised in nature, its particles will have many possible production and decay modes, with rates depending on the SUSY parameters. The new ATLAS analysis brings these pieces together by reinterpreting 10 searches in the phenomenological Minimal Supersymmetric Standard Model (pMSSM), which includes a range of SUSY particles, production mechanisms and decay modes governed by 19 SUSY parameters. The results provide a global picture of ATLAS’s sensitivity to electroweak SUSY and, importantly, reveal the gaps that remain to be explored.
The 19-dimensional pMSSM parameter space was randomly sampled to produce a set of 20,000 SUSY model points. The 10 selected ATLAS searches were then applied to each model point to determine whether it is excluded at the 95% confidence level. This involved simulating datasets for each SUSY model, and re-running the corresponding analyses and statistical fits. An extensive suite of reinterpretation tools was employed to achieve this, including preserved likelihoods and RECAST – a framework for preserving analysis workflows and re-applying them to new signal models. The overall logic of such a scan is sketched below.
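The following sketch is purely illustrative – a stand-in for the real machinery of full simulation, RECAST workflows and preserved likelihoods, with invented yields, parameter mappings and signal-region counts – but it shows the shape of a reinterpretation scan: sample parameter points, predict signal yields per search region, and flag each point as excluded or surviving.

```python
# Illustrative pMSSM-style scan (toy numbers throughout, not the ATLAS analysis).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(seed=1)

N_POINTS = 20_000                      # number of sampled model points
N_PARAMS = 19                          # dimensionality of the pMSSM space
BKG = np.array([12.0, 4.0, 30.0])      # expected background per toy signal region
OBS = np.array([10, 6, 28])            # "observed" event counts (invented)

def toy_signal_yield(params):
    """Invented mapping from a parameter point to expected signal counts."""
    mass_scale = 100.0 + 900.0 * params[0]       # pretend parameter 0 sets a mass
    xsec = 50.0 * (100.0 / mass_scale) ** 4      # steeply falling toy cross-section
    return xsec * np.array([0.3, 0.1, 0.6])      # acceptance per signal region

def excluded_95(signal):
    """Crude exclusion test: Poisson p-value of the signal-plus-background
    hypothesis below 0.05 in the most sensitive region (a stand-in for a
    proper CLs computation with full systematic uncertainties)."""
    p_values = poisson.cdf(OBS, BKG + signal)    # P(n <= observed | s + b)
    return p_values.min() < 0.05

points = rng.uniform(size=(N_POINTS, N_PARAMS))  # flat sampling of the toy space
n_excluded = sum(excluded_95(toy_signal_yield(p)) for p in points)
print(f"{n_excluded}/{N_POINTS} toy model points excluded at ~95% CL")
```

In the real analysis each surviving or excluded point corresponds to a fully simulated SUSY spectrum, and the exclusion decision comes from the preserved statistical models of the original searches rather than a single-region Poisson test.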
The results show that, while electroweakino masses have been excluded up to 1 TeV in simplified models, the coverage with regard to the pMSSM is not exhaustive. Numerous scenarios remain viable, including mass regions nominally covered by previous searches (inside the dashed line in figure 1). The pMSSM models may evade detection due to smaller production cross-sections and decay probabilities compared to simplified models. Scenarios with small mass-splittings between the lightest and next-to-lightest neutralino can reproduce the dark-matter relic density, but are particularly elusive at the LHC. The decays in these models produce challenging event features with low-momentum particles that are difficult to reconstruct and separate from SM events.
Beyond ATLAS, experiments such as LZ aim to detect relic dark-matter particles through their scattering off target nuclei. This provides a complementary probe to ATLAS searches for dark matter produced in LHC collisions. Figure 2 shows the LZ sensitivity to the pMSSM models considered by ATLAS, compared to the sensitivity of the ATLAS SUSY searches. ATLAS is particularly sensitive to the region where the dark-matter candidate has a mass around half the Z- or Higgs-boson mass, causing enhanced dark-matter annihilation that could have reduced the otherwise overabundant dark-matter relic density to the observed value.
The new ATLAS results demonstrate the breadth and depth of its search programme for supersymmetry, while uncovering its gaps. Supersymmetry may still be hiding in the data, and several scenarios have been identified that will be targeted, benefiting from the incoming Run 3 data.
The measured all-particle energy spectrum for cosmic rays (CRs) is famously described by a steeply falling power law. The spectrum is almost featureless from energies of around 30 GeV to 3 PeV, where a break (also known as the “knee”) is encountered, after which the spectrum becomes steeper. It is believed that CRs with energies below the knee have galactic origins. This is supported by the observation of diffuse gamma rays from the galactic disk in the GeV range (a predominant mechanism for the production of gamma rays is via the decay of neutral pions created when relativistic protons interact with the ambient gas). The knee could be explained by either the maximum energy that galactic sources can accelerate CR particles to, or the escape of CR particles from the galaxy if they are energetic enough to overcome the confinement of galactic magnetic fields. Both scenarios, however, assume the presence of astrophysical sources within the galaxy that could accelerate CR particles up to PeV energies. For decades, scientists have therefore been on the hunt for such sources, reasonably called “pevatrons”.
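For orientation (representative values, since the exact indices vary between measurements), the all-particle flux is commonly written as a power law,

$$ \frac{dN}{dE} \;\propto\; E^{-\gamma}, \qquad \gamma \approx 2.7 \ \text{below the knee at} \sim 3\ \mathrm{PeV}, \qquad \gamma \approx 3.1 \ \text{above}, $$

so the “knee” is a change of spectral slope rather than a sharp cutoff.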
Recently, researchers at the High-Altitude Water Cherenkov (HAWC) observatory in Mexico reported the observation of ultra-high energy (> 100 TeV) gamma rays from the central region of the galaxy. Using nearly seven years of data, the team found that a point source, HAWC J1746-2856, with a simple power-law spectrum and no signs of a cutoff from 6 to 114 TeV best describes the observed gamma-ray flux. A total of 98 events were observed at energies above 100 TeV.
To analyse the spatial distribution of the observed gamma rays, the researchers plotted a significance map of the galactic centre. On this map, they also plotted the point-like supernova remnant SNR G0.9+0.1 and an unidentified extended source HESS J1745-303, both located 1° away from the galactic centre. While supernova remnants have long been a favoured candidate for galactic pevatrons, HAWC did not observe any excess at either of these source positions. There are, however, two other interesting point sources in this region: Sgr A* (HESS J1745-290), the supermassive black hole in the galactic centre; and HESS J1746-285, an unidentified source that is spatially coincident with the galactic radio arc. Imaging atmospheric Cherenkov telescopes such as HESS, VERITAS and MAGIC have measured the gamma-ray emissions from these sources up to an energy of about 20 TeV, but HAWC has an angular resolution about six times larger at such energies and therefore cannot resolve them.
To eliminate the contamination of the flux from these sources, the authors assumed that their spectra extend across the full HAWC energy range and estimated the corresponding event counts by convolving the best-fit models reported by HESS with the instrument-response functions of HAWC. The resulting HAWC spectral energy distribution, after subtracting these sources (see figure), is compatible with the diffuse-emission data points from HESS while still maintaining a power-law behaviour, with no signs of a cutoff and extending up to at least 114 TeV. This is the first detection of gamma rays at energies above 100 TeV from the galactic centre, providing convincing evidence of the presence of a pevatron.
Furthermore, the diffuse emission is spatially correlated with the morphology of the central molecular zone (CMZ) – a region in the innermost 500 pc of the galaxy consisting of enormous molecular clouds totalling around 60 million solar masses. Such a correlation supports a hadronic scenario for the origin of the cosmic rays, where gamma rays are produced via the interaction of relativistic protons with the ambient gas. In the leptonic scenario, electrons with energies above 100 TeV would produce gamma rays via inverse Compton scattering, but such electrons suffer severe radiative losses; for a magnetic field strength of 100 μG, the maximum distance they can traverse is much smaller than the size of the CMZ. In the hadronic case, on the other hand, the escape time for protons is orders of magnitude shorter than their cooling time (via π⁰-producing collisions). A stronger magnetic field could confine them for longer but, as the authors argue, the escape time is still much smaller than the age of the galaxy, pointing to a young source that is quasi-continuously injecting and accelerating protons into the CMZ.
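The link between the observed photons and “pevatron” protons is the usual kinematic rule of thumb for hadronic emission (an approximation, quoted here for orientation): gamma rays from π⁰ decay carry roughly a tenth of the parent proton’s energy,

$$ E_\gamma \;\approx\; \frac{E_p}{10}, $$

so photons at 100 TeV and above imply freshly accelerated protons at or near 1 PeV.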
The study also computes the energy density of cosmic-ray protons with energies above 100 TeV to be 8.1 × 10⁻³ eV/cm³. This is higher than the local measurement of 1 × 10⁻³ eV/cm³ from the Alpha Magnetic Spectrometer in 2015, indicating the presence of newly accelerated protons in the energy range 0.1–1 PeV. The capabilities of this study did not extend to identifying the source, but with better modelling of the CMZ in the future, and the improved performance of upcoming observatories such as CTAO and SWGO, candidate sites in the galactic centre are expected to be probed with much higher resolution.