In a game of snakes and ladders, players move methodically up the board, occasionally encountering opportunities to climb a ladder. The NA62 experiment at CERN is one such opportunity. Searching for ultra-rare decays at colliders and fixed-target experiments like NA62 can offer a glimpse at energy scales an order of magnitude higher than is directly accessible when creating particles in a frontier machine.
The trick is to study hadron decays that are highly suppressed by the GIM mechanism (see “Charming clues for existence”). Should massive particles beyond the Standard Model (SM) exist at the right energy scale, they could disrupt the delicate cancellations expected in the SM by making brief virtual appearances according to the limits imposed by Heisenberg’s uncertainty principle. In a recent featured article, Andrzej Buras (Technical University Munich) identified the six most promising rare decays where new physics might be discovered before the end of the decade (CERN Courier July/August 2024 p30). Among them is K+→ π+νν, the ultra-rare decay sought by NA62. In the SM, fewer than one K+ in 10 billion decays this way, requiring the team to exercise meticulous attention to detail in excluding backgrounds. The collaboration has now announced that it has observed the process with 5σ significance.
“This observation is the culmination of a project that started more than a decade ago,” says spokesperson Giuseppe Ruggiero of INFN and the University of Florence. “Looking for effects in nature that have probabilities of happening of the order of 10–11 is both fascinating and challenging. After rigorous and painstaking work, we have finally seen the process NA62 was designed and built to observe.”
In the NA62 experiment, kaons are produced by colliding a high-intensity proton beam from CERN’s Super Proton Synchrotron into a stationary beryllium target. Almost a billion secondary particles are produced each second. Of these, about 6% are positively charged kaons that are tagged and matched with positively charged pions from the decay K+→ π+νν, with the neutrinos escaping undetected. Upgrades to NA62 during Long Shutdown 2 increased the experiment’s signal efficiency while maintaining its sample purity, allowing the collaboration to double the expected signal of their previous measurement using new data collected between 2021 and 2022. A total of 51 events pass the stringent selection criteria, over an expected background of 18+3–2, definitively establishing the existence of this decay for the first time.
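As a rough illustration of why 51 events over this background is decisive, a naive counting-experiment significance can be estimated with the asymptotic “Asimov” formula. This is a sketch only: it ignores the quoted +3–2 background uncertainty and all systematics, which the collaboration’s full analysis treats properly and which bring the result to the quoted 5σ.

```python
# Naive significance of an excess in a counting experiment (illustration
# only; not the NA62 statistical analysis).
import math

n_obs = 51          # candidate events passing the selection
b = 18.0            # expected background (quoted as 18 +3/-2)
s = n_obs - b       # naive signal estimate

# Asymptotic formula Z = sqrt(2 * ((s+b) * ln(1 + s/b) - s)).
# Including the background uncertainty lowers this number.
z = math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
print(f"naive significance: {z:.1f} sigma")  # > 5 sigma even naively
```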
NA62 measures the branching ratio for K+→ π+νν to be 13.0+3.3–2.9 × 10–11 – the most precise measurement to date and about 50% higher than the SM prediction, though compatible with it within 1.7σ at the current level of precision. NA62’s full data set will be required to test the validity of the SM in this decay. Data taking is ongoing.
The LHCb collaboration has undertaken a new study of B → DD decays using data from LHC Run 2. In the case of B0→ D+D– decays, the analysis excludes CP-symmetry at a confidence level greater than six standard deviations – a first in the analysis of a single decay mode.
The study of differences between matter and antimatter (CP violation) is a core aspect of the physics programme at LHCb. Measurements of CP violation in decays of neutral B0 mesons play a crucial role in the search for physics beyond the Standard Model thanks to the ability of the B0 meson to oscillate into its antiparticle, the B̄0 meson. Given increases in experimental precision, improved control over the magnitude of hadronic effects becomes important, which is a major challenge in most decay modes. In this measurement, a neutral B meson decays to two charm D mesons – an interesting topology that offers a method to control these high-order hadronic contributions from the Standard Model via the concept of U-spin symmetry.
In the new analysis, B0→ D+D– and Bs0→ Ds+Ds– are studied simultaneously. U-spin symmetry exchanges the spectator down quarks in the first decay with strange quarks to form the second decay. A joint analysis therefore strongly constrains uncertainties related to hadronic matrix elements by relating CP-violation and branching-fraction measurements in the two decay channels.
In both decays, the same final state is accessible to both the matter and antimatter states of the B0 or Bs0 meson, enabling interference between two decay paths: the direct decay of the meson to the final state; and a decay after the meson has oscillated into its antiparticle counterpart. The time-dependent decay rate of each flavour (matter or antimatter) of the meson depends on CP-violating effects and is parameterised in terms of the properties of the B mesons and the CP-violating weak phases β and βs, in the case of B0 and Bs0 decays, respectively. The tree-level and exchange Feynman diagrams contributing to these decays, whose amplitudes depend on specific elements of the Cabibbo–Kobayashi–Maskawa quark-mixing matrix, determine the expected values of the β(s) phases. This matrix encodes our best understanding of CP-violating effects within the Standard Model, and testing its expected properties provides crucial closure tests of this theoretical framework.
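In the usual convention (shown here as a sketch; signs and conventions vary between analyses), the decay-time-dependent CP asymmetry for a decay to a CP eigenstate f is parameterised as

```latex
A_{CP}(t) = \frac{\Gamma(\bar{B}^0(t) \to f) - \Gamma(B^0(t) \to f)}
                 {\Gamma(\bar{B}^0(t) \to f) + \Gamma(B^0(t) \to f)}
          = S_f \sin(\Delta m\, t) - C_f \cos(\Delta m\, t),
```

where Δm is the mass difference of the neutral-meson mass eigenstates. For B0→ D+D–, neglecting contributions beyond the leading tree-level amplitude, Sf ≈ –sin 2β and Cf ≈ 0.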
The analysis uses flavour tagging to identify the matter or antimatter flavour of the neutral B meson at its production and thus allows the determination of the decay path – a key task in time-dependent measurements of CP violation. The flavour-tagging algorithms exploit the fact that b and b quarks are almost exclusively produced in pairs in pp collisions. When the b quark forms a B meson (and similarly for its antimatter equivalent), additional particles are produced in the fragmentation process of the pp collision. From the charges and species of these particles, the flavour of the signal B meson at production can be inferred. This information is combined with the reconstructed position of the decay vertex of the meson, allowing the flavour-tagged decay-time distribution of each analysed flavour to be measured.
Figure 1 shows the asymmetry between the decay-time distributions of the B0 and B̄0 mesons for the B0→ D+D– decay mode. Alongside the Bs0→ Ds+Ds– data, these results represent the most precise single measurements of the CP-violation parameters in their respective channels. Results from the two decay modes are used in combination with other B → DD measurements to precisely determine Standard Model parameters.
High-school physics curricula don’t include much particle physics. The Beamline for Schools (BL4S) competition seeks to remedy this by offering high-school students the chance to turn CERN or DESY into their own laboratory. Since 2014, more than 20,000 students from 2750 teams in 108 countries have competed in BL4S, with 25 winning teams coming to the labs to perform experiments they planned from blackboard to beamline. Though, at 10 years old, the competition is still young, multiple career trajectories have already been influenced, with the impact radiating out into participants’ communities of origin.
For Hiroki Kozuki, a member of a winning team from Switzerland in 2020, learning the fundamentals of particle physics while constructing his team’s project proposal was what first sparked his interest in the subject.
“Our mentor gave us after-school classes on particle physics, fundamentals, quantum mechanics and special relativity,” says Kozuki. “I really felt as though there was so much more depth to physics. I still remember this one lecture where he taught us about the fundamental forces and quarks… It’s like he just pulled the tablecloth out from under my feet. I thought: nature is so much more beautiful when I see all these mechanisms underneath it that I didn’t know existed. That’s the moment where I got hooked on particle physics.” Kozuki will soon graduate from Imperial College London, and hopes to pursue a career in research.
Sabrina Giorgetti, from an Italian team, tells a similar story. “I can say confidently that the reason I chose physics for my bachelor’s, master’s and PhD was because of this experience.” One of the competition’s earliest winners from back in 2015, Giorgetti is now working on the CMS experiment for her PhD. One of her most memorable experiences from BL4S was getting to know the other winning team, who were from South Africa. This solidified her decision to pursue a career in academia.
“You really feel like you can reach out and collaborate with people all over the world, which is something I find truly amazing,” she says. “Now it’s even more international than it was nine years ago. I learnt at BL4S that if you’re interested in research at a place like CERN, it’s not only about physics. It may look like that from the outside, but it’s also engineering, IT and science communication – it’s a very broad world.”
The power of collaboration
As well as getting hands-on with the equipment, one of the primary aims of BL4S is to encourage students to collaborate in a way they wouldn’t in a typical high-school context. While physics experiments in school are usually conducted in pairs, BL4S allows students to work in larger teams, as is common in professional and research environments. The competition provides the chance to explore uncharted territory, rather than repeating timeworn experiments in school.
2023 winner Isabella Vesely from the US is now majoring in physics, electrical engineering and computer science at MIT. Alongside trying to fix their experiment prior to running it on the beamline, her most impactful memories involve collaborating with the other winning team from Pakistan. “We overcame so many challenges with collaboration,” explains Vesely. “They were from a completely different background to us, and it was very cool to talk to them about the experiment, our shared interest in physics and get to know each other personally. I’m still in touch with them now.”
One fellow 2023 winner is just down the road at Harvard. Zohaib Abbas, a member of the winning Pakistan team that year, is now majoring in physics. “In Pakistan, there weren’t any physical laboratories, so nothing was hands-on and all the physics was theoretical,” he says, recalling his shock at the US team’s technical skills, which included 3D printing and coding. After his education, Abbas wants to bring some of this knowledge back to Pakistan in the hopes of growing the physics community in his hometown. “After I got into BL4S, there have been hundreds of people in Pakistan who have been reaching out to me because they didn’t know about this opportunity. I think that BL4S is doing a really great job at exposing people to particle physics.”
All of the students recalled the significant challenge of ensuring the functionality of their instruments across one of CERN’s or DESY’s beamlines. While the project seemed a daunting task at first, the participants enjoyed following the process from start to finish, from the initial idea through to the data collection and analysis.
“It was really exciting to see the whole process in such a short timescale,” said Vesely. “It’s pretty complicated seeing all the work that’s already been done at these experiments, so it’s really cool to contribute a small piece of data and integrate that with everything else.”
Kozuki concurs. Though only he went on to study physics, with teammates branching off into subjects ranging from mathematics to law and medicine, they still plan to get together and take another crack at the data they compiled in 2020. “We want to take another look and see if we find anything we didn’t see before. These projects go on far beyond those two weeks, and the team that you worked with are forever connected.”
For Kozuki, it’s all about collaboration. “I want to be in a field where everyone shares this fundamental desire to crack open some mysteries about the universe. I think that this incremental contribution to science is a very noble motivation. It’s one I really felt when working at CERN. Everyone is genuinely so excited to do their work, and it’s such an encouraging environment. I learnt so much about particle physics, the accelerators and the detectors, but I think those are somewhat secondary compared to the interpersonal connections I developed at BL4S. These are the sorts of international collaborations that accelerate science, and it’s something I want to be a part of.”
The Future Circular Collider (FCC) is envisaged to be a multi-stage facility for exploring the energy and intensity frontiers of particle physics. An initial electron–positron collider phase (FCC-ee) would focus on ultra-precise measurements at the centre-of-mass energies required to create Z bosons, W-boson pairs, Higgs bosons and top-quark pairs, followed by proton and heavy-ion collisions in a hadron-collider phase (FCC-hh), which would probe the energy frontier directly. As recommended by the 2020 update of the European strategy for particle physics, a feasibility study for the FCC is in full swing. Following the submission to the CERN Council of the study’s midterm report earlier this year (CERN Courier March/April 2024 pp25–38), and the signing of a joint statement of intent on planning for large research infrastructures by CERN and the US government (CERN Courier July/August 2024 p10), FCC Week 2024 convened more than 450 scientists, researchers and industry leaders in San Francisco from 10 to 14 June, with the aim of engaging the wider scientific community, in particular in North America. Since then, more than 20 groups have joined the FCC collaboration.
SLAC and LBNL directors John Sarrao and Mike Witherell opened the meeting by emphasising the vital roles of international collaboration between national laboratories in advancing scientific discovery. Sarrao highlighted SLAC’s historical contributions to high-energy physics and expressed enthusiasm for the FCC’s scientific potential. Witherell reflected on the legacy of particle accelerators in fundamental science and the importance of continued innovation.
CERN Director-General Fabiola Gianotti identified three pillars of her vision for the laboratory: flagship projects like the LHC; a diverse complementary scientific programme; and preparations for future projects. She identified the FCC as the best future match for this vision, asserting that it has unparalleled potential for discovering new physics and can accommodate a large and diverse scientific community. “It is crucial to design a facility that offers a broad scientific programme, many experiments and exciting physics to attract young talents,” she said.
FCC-ee would operate at several centre-of-mass energies corresponding to the Z-boson pole, W-boson pair production, Higgs-boson production or top-quark pair production. The beam current at each of these points would be determined by the design value of 50 MW synchrotron-radiation power per beam. At lower energies, the machine could accommodate more bunches, achieving 1.3 amperes and a luminosity in excess of 1036 cm–2 s–1 at the Z pole. Measurements of electroweak observables and Higgs-boson couplings would be improved by a factor of between 10 and 50. Remarkably, FCC-ee would also provide 10 times the ambitious design statistics of SuperKEKB/Belle II for bottom and charm quarks, making it the world-leading machine at the intensity frontier. Along with other measurements of electroweak observables, FCC-ee would indirectly probe energies up to 70 TeV for weakly interacting particles. Unlike at proposed linear colliders, four interaction points would increase scientific robustness, reduce systematic uncertainties and allow for specialised experiments, maximising the collider’s physics output.
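The quoted Z-pole current can be checked with a back-of-the-envelope calculation: at fixed synchrotron-radiation power, the allowed current is the power divided by the energy loss per turn. The loss per turn used below (about 39 MeV at Z energies) is an assumed, illustrative value, not an official design parameter.

```python
# Rough estimate of the FCC-ee beam current at the Z pole from the
# synchrotron-radiation power budget (assumed numbers for illustration).
P_sr = 50e6        # W, design SR power per beam
U0_z = 39.4e6      # eV lost per particle per turn at ~45.6 GeV (assumption)

# Each particle radiates U0 per turn, so P = I * U0 (U0 in eV, I in A).
I_beam = P_sr / U0_z
print(f"beam current at the Z pole: {I_beam:.2f} A")  # ~1.3 A
```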
For FCC-hh, two approaches are being pursued for the necessary high-field superconducting magnets. The first involves advancing niobium–tin technology, which is currently mastered at 11–12 T for the High-Luminosity LHC, with the goal of reaching operational fields of 14 T. The second focuses on high-temperature superconductors (HTS) such as REBCO and iron-based superconductors (IBS). REBCO comes mainly in tape form (CERN Courier May/June 2023 p37), whereas IBS comes in both tape and wire form. With niobium–tin, 14 T would allow proton–proton collision energies of 80 TeV in a 90 km ring. HTS-based magnets could potentially reach fields up to 20 T, and centre-of-mass energies proportionally higher, in the vicinity of 120 TeV. If HTS magnets prove technically feasible, they could greatly decrease the cryogenic power. The development of such technologies also holds great promise beyond fundamental research, for example in transportation and electricity transmission.
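Since the collision energy of a ring of fixed size scales linearly with the dipole field, the HTS reach quoted above follows from a simple proportionality, sketched here with the numbers from this paragraph:

```python
# Linear scaling of collision energy with dipole field for a fixed ring
# (E proportional to B times bending radius).
E_nb3sn = 80.0   # TeV centre-of-mass with 14 T niobium-tin magnets
B_nb3sn = 14.0   # T
B_hts = 20.0     # T, potential HTS operating field

E_hts = E_nb3sn * B_hts / B_nb3sn
print(f"estimated reach with 20 T magnets: {E_hts:.0f} TeV")  # ~114 TeV
```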
FCC study leader Michael Benedikt (CERN) outlined the status of the ongoing feasibility study, which is set to be completed by March 2025. No technical showstoppers have yet been found, paving the way for the next phase of detailed technical and environmental impact studies and critical site investigations. Benedikt stressed the importance of international collaboration, especially with the US, in ensuring the project’s success.
The next step for the FCC project is to provide information to the CERN Council, via the upcoming update of the European strategy for particle physics, to facilitate a decision on whether to pursue the FCC by the end of 2027 or in early 2028. This includes further developing the civil engineering and technical design of major systems and components to present a more detailed cost estimate, continuing technical R&D activities, and working with CERN’s host states on regional implementation development and authorisation processes along with the launch of an environmental impact study. FCC would intersect 31 municipalities in France and 10 in Switzerland. Detailed work is ongoing to identify and reserve plots of land for surface sites, address site-specific design aspects, and explore socio-economic and ecological opportunities such as waste-heat utilisation.
According to the cosmological standard model, the first generation of nuclei was produced during the cooling of the hot mixture of quarks and gluons that was created shortly following the Big Bang. Relativistic heavy-ion collisions create a quark–gluon plasma (QGP) on a small scale, producing a “little bang”. In such collisions, the nucleosynthesis mechanism at play is different from that of the Big Bang due to the rapid cooling of the fireball. Recently, the nucleosynthesis mechanism in heavy-ion collisions has been investigated via the measurement of hypertriton production by the ALICE collaboration.
The hypertriton, which consists of a proton, a neutron and a Λ hyperon, can be considered to be a loosely bound deuteron-Λ molecule (see “Inside pentaquarks and tetraquarks”). In this picture, the energy required to separate the Λ from the deuteron (BΛ) is about 100 keV, significantly lower than the binding energy of ordinary nuclei. This makes hypertriton production a sensitive probe of the properties of the fireball.
In heavy-ion collisions, the formation of nuclei can be explained by two main classes of models. The statistical hadronisation model (SHM) assumes that particles are produced from a system in thermal equilibrium. In this model, the production rate of nuclei depends only on their mass, quantum numbers and the temperature and volume of the system. On the other hand, in coalescence models, nuclei are formed from nucleons that are close together in phase space. In these models, the production rate of nuclei is also sensitive to their nuclear structure and size.
For an ordinary nucleus like the deuteron, coalescence and SHM predict similar production rates in all colliding systems, but for a loosely bound molecule such as the hypertriton, the predictions of the two models differ significantly. In order to identify the mechanism of nuclear production, the ALICE collaboration used the ratio between the production rates of hypertriton and helium-3 – also known as a yield ratio – as an observable.
ALICE measured hypertriton production as a function of charged-particle multiplicity density using Pb–Pb collisions collected at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. Figure 1 shows the yield ratio of hypertriton to 3He across different multiplicity intervals. The data points (red) exhibit a clear deviation from the SHM (dashed orange line), but are well-described by the coalescence model (blue band), supporting the conclusion that hypertriton formation at the LHC is driven by the coalescence mechanism.
The ongoing LHC Run 3 is expected to improve the precision of these measurements across all collision systems, allowing us to probe the internal structure of hypertriton and even heavier hypernuclei, whose properties remain largely unknown. This will provide insights into the interactions between ordinary nucleons and hyperons, which are essential for understanding the internal composition of neutron stars.
In November 1974, the research groups of Samuel Ting at Brookhaven National Laboratory and Burton Richter at SLAC independently discovered a resonance at 3.1 GeV that was less than 1 MeV wide. Posterity soon named it J/ψ, juxtaposing the names chosen by each group in a unique compromise. Its discovery would complete the second generation of fermions with the charm quark, giving experimental impetus to the new theories of electroweak unification (1967) and quantum chromodynamics (1973). But with the theories fresh and experimenters experiencing an annus mirabilis following the indirect discovery of the Z boson in neutral currents the year before, the nature of the J/ψ was not immediately clear.
“Why the excitement over the new discoveries?” asked the Courier in December 1974 (see “The new particles“). “A brief answer is that the particles have been found in a mass region where they were completely unexpected, with stability properties which, at this stage of the game, are completely inexplicable.”
The J/ψ is now known to be made up of a charm quark and a charm antiquark. Unable to decay via the strong interaction, its width is just 92.6 keV, corresponding to an unexpectedly long lifetime of 7.1 × 10–21 s. Charm quarks do not form ordinary matter like protons and neutrons, but J/ψ resonances and D mesons, which contain a charm quark and a less-massive up, down or strange antiquark.
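The quoted lifetime follows directly from the width via the uncertainty relation τ = ħ/Γ, as a quick calculation confirms:

```python
# Lifetime of the J/psi from its width via tau = hbar / Gamma.
hbar_eVs = 6.582119569e-16   # reduced Planck constant in eV * s
gamma = 92.6e3               # J/psi width in eV

tau = hbar_eVs / gamma
print(f"J/psi lifetime: {tau:.1e} s")  # -> 7.1e-21 s
```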
Fifty years on from the November Revolution, charm physics is experiencing a renaissance. The LHCb, BESIII and Belle II experiments are producing a huge number of interesting and precise measurements in the charm system, with two crucial groundbreaking results on D0 mesons by LHCb holding particular significance: the observation that they violate CP symmetry when they decay; and the observation that they oscillate into their antiparticles. The rate of CP violation is particularly interesting – about 10 times larger than the most sophisticated Standard Model (SM) predictions, preliminary and uncertain though they are. Are these predictions naive, or is this the first glimpse of why there is more matter than antimatter in the universe?
Suppressed
Despite the initial confusion, the charm quark had already been indirectly discovered in 1970 by Sheldon Glashow, John Iliopoulos and Luciano Maiani (GIM), who introduced it to explain why K0→ μ+μ– decays are suppressed. Their paper gained widespread recognition during the November Revolution, and the GIM mechanism they discovered impacts cutting-edge calculations in charm physics to this day.
Previously, only the three light quarks (up, down and strange) were known. Alongside electrons and electron neutrinos, up and down quarks make up the first generation of fermions. The detection of muons in cosmic rays in 1936 was the first evidence for a second generation, triggering Isidor Rabi’s famous exclamation “Who ordered that?” Strange particles were found in 1947, providing evidence for a second generation of quarks, though it took until 1964 for Murray Gell-Mann and George Zweig to discover this ordering principle of the subatomic world.
In a model of three quarks, the decay of a K0 meson (a down–antistrange system) into two muons can only proceed by briefly transforming the meson into a W+W– pair – an infamous flavour-changing neutral current – linked in a loop by a virtual up quark and virtual muon neutrino. While the amplitude for this process is problematically large given observed rates, the GIM mechanism cancels it almost exactly by introducing destructive quantum interference with a process that replaces the up quark with a new charm quark. The remaining finite value of the amplitude stems from the difference in the masses of the virtual quarks compared to the W boson, mu2/MW2 and mc2/MW2. Since both mass ratios are close to zero, K0→ μ+μ– is highly suppressed.
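Plugging in approximate quark masses (assumed here for illustration; the precise values depend on scheme and scale) shows just how small these suppression factors are:

```python
# Numerical illustration of the GIM suppression factors m_q^2 / M_W^2
# (illustrative mass values, not a precision calculation).
m_u = 2.2e-3    # up-quark mass, GeV
m_c = 1.27      # charm-quark mass, GeV
M_W = 80.4      # W-boson mass, GeV

r_u = (m_u / M_W) ** 2   # ~7e-10
r_c = (m_c / M_W) ** 2   # ~2.5e-4
print(r_u, r_c)          # both tiny, hence the strong suppression
```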
The interference is destructive because the Cabibbo matrix describing the coupling strength of the charged weak interaction is a rotation of the two generations of quarks. All four couplings in the matrix – up–down (cos θC), up–strange (sin θC), charm–strange (cos θC) and charm–down (–sin θC) – arise in the decay of a K0 meson, with the minus sign causing the cancellation.
The direct experimental detection of the first particle containing charm is typically attributed to Ting and Richter in 1974. However, there was already some direct evidence for charmed mesons in Japan in 1971, though unfortunately in only one cosmic-ray event, and with no estimation of background (see “Cosmic charm” figure). Unnoticed by Western scientists, the measurements indicated a charm-quark mass of the order of 1.5 GeV, which is close to current estimates. In 1973, the quark-mixing formalism was extended by Makoto Kobayashi and Toshihide Maskawa to three generations of quarks, incorporating CP violation in the SM by allowing the couplings to be complex numbers with an imaginary part. The amount of CP violation contained in the resulting Cabibbo–Kobayashi–Maskawa (CKM) matrix does not appear to be sufficient to explain the observed matter–antimatter asymmetry in the universe.
The third generation of quarks began to be experimentally established in 1977 with the discovery of ϒ resonances (bottom–antibottom systems). In 1986, GIM cancellations in the matter–antimatter oscillations of neutral B mesons (B0–B0 mixing) indicated a large value of the top-quark mass, with mt2/MW2 not negligible, in contrast to mu2/MW2 and mc2/MW2. The top quark was directly discovered at the Tevatron in 1995. With the discovery of the Higgs boson in 2012 at the LHC, the full particle spectrum of the SM has now been experimentally confirmed.
Charm renaissance
More recently, two crucial effects in the charm system have been experimentally confirmed. Both measurements present intriguing discrepancies by comparison with naive theoretical expectations.
First, in 2019, the LHCb collaboration at CERN observed the first definitive evidence for CP violation in charm. A difference in the behaviour of matter and antimatter particles, CP violation can be expressed directly in charm decays, indirectly in the matter–antimatter oscillations of charmed particles, or in a quantum admixture of both effects. To isolate direct CP violation, LHCb proved that the difference in matter–antimatter asymmetries seen in D0→ K+K– and D0→ π+π– decays (ΔACP) is nonzero. Though the observed CP violation is tiny, it is nevertheless approximately a factor 10 larger than the best available SM predictions. Currently the big question is whether these naive SM expectations can be enhanced by a factor of 10 due to non-perturbative effects, or whether the measurement of ΔACP is a first glimpse of physics beyond the SM, perhaps also answering the question of why there is more matter than antimatter in the universe.
Two years later, LHCb definitively demonstrated the transformation of neutral D0 mesons into their antiparticles (D0–D̄0 mixing). These transitions only involve virtual down-type quarks (down, strange and bottom), causing extreme GIM cancellations as md2/MW2, ms2/MW2 and mb2/MW2 are all negligible (see “Matter–antimatter mixing” figure). Theory calculations are preliminary here too, but naive SM predictions of the mass splitting between the mass eigenstates of the neutral D-meson system are at present several orders of magnitude below the experimental value.
The charm system has often proved to be more experimentally challenging than the bottom system, with matter–antimatter oscillations and direct and indirect CP violation all discovered first for the bottom quark, and indirect CP violation still awaiting confirmation in charm. The theoretical description of the charm system also presents several interesting features by comparison to the bottom system. They may be regarded as challenges, peculiarities, or even opportunities.
A challenge is the use of perturbation theory. The strong coupling at the scale of the charm-quark mass is quite large – αs(mc) ≈ 0.35 – and perturbative expansions in the strong coupling only converge as (1, 0.35, 0.12, …). The charm quark is also not particularly heavy, and perturbative expansions in Λ/mc only converge as roughly (1, 0.33, 0.11, …), assuming Λ is an energy scale of the order of the hadronic scale of the strong interaction. If the coefficients multiplying these terms are of similar size, the series converge, but only slowly.
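The sequences quoted above are simply successive powers of the two expansion parameters; the Λ ≈ 0.5 GeV and mc ≈ 1.5 GeV used below are assumed, round illustrative values:

```python
# First few terms of the two expansions quoted in the text.
alpha_s = 0.35              # strong coupling at the charm-quark mass
lam_over_mc = 0.5 / 1.5     # Lambda ~ 0.5 GeV over m_c ~ 1.5 GeV (assumed)

print([round(alpha_s ** n, 2) for n in range(3)])      # [1.0, 0.35, 0.12]
print([round(lam_over_mc ** n, 2) for n in range(3)])  # [1.0, 0.33, 0.11]
```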
Numerical cancellations are a peculiarity, and often classified as strong or even crazy in cases such as D0–D0 mixing, where contributions cancel to one part in 105.
The fact that CKM couplings involving the charm quark (Vcd, Vcs and Vcb) have almost vanishing imaginary parts is an opportunity. With CP-violating effects in charm systems expected to be tiny, any measurement of sizeable CP-violating effects would indicate the presence of physics beyond the SM (BSM).
A final peculiarity is that loop-induced charm decays and D-mixing both proceed exclusively via virtual down-type quarks, presenting opportunities to extend sensitivity to BSM physics via joint analyses with complementary bottom and strange decays.
At first sight, these effects complicate the theoretical treatment of the charm system. Many approaches are therefore based on approximations such as SU(3)F flavour symmetry or U-spin symmetry (see “Using U-spin to squeeze CP violation”). On the other hand, these properties can also be a virtue, making some observables very sensitive to higher orders in our expansions and providing an ideal testing ground for QCD tools.
Thanks to many theoretical improvements, we are now in a position to start answering the question of whether perturbative expansions in the strong coupling and the inverse of the quark mass are applicable in the charm system. Recently, progress has been made with observables that are free from severe cancellations: a double expansion in Λ/mc and αs (the heavy-quark expansion) seems to be able to reproduce the D0 lifetime (see “Charmed life” figure); and theoretical calculations of branching fractions for non-leptonic two-body D0 decays seem to be in good agreement with experimental values (see “Two body” figure).
All these theory predictions still suffer from large uncertainties, but they can be systematically improved. Demonstrating the validity of these theory tools with higher precision could imply that the measured value of CP violation in the charm system (ΔACP) has a BSM origin.
The future
Charm physics therefore has a bright future. Many of the current theory approaches can be systematically improved with currently available technologies by adding higher-order perturbative corrections. A full lattice-QCD description of D-mixing and non-leptonic D-meson decays requires new ideas, but first steps have already been taken. These theory developments should give us deeper insights into the question of whether ΔACP and D0–D0 mixing can be described within the SM.
More precise experimental data can also help in answering these questions. The BESIII experiment at IHEP in China and the Belle II experiment at KEK in Japan can investigate inclusive semileptonic charm decays and measure parameters that are needed for the heavy-quark expansion. LHCb and Belle II can investigate CP-violating effects in D0–D0 mixing and in channels other than D0→ K+K– and π+π–. The super tau–charm factory proposed by China could contribute further precise data and a future e+e– collider running as an ultimate Z factory could provide an independent experimental cross-check for ΔACP.
Another exciting field is that of rare charm decays such as D+→ π+μ+μ– and D+→ π+νν, which proceed via loop diagrams similar to those in K0→ μ+μ– decays and D0–D0 oscillations. Here, null tests can be constructed using observables that vanish precisely in the SM, allowing future experimental data to unambiguously probe BSM effects.
Maybe the charm quark will in the end provide the ultimate clue to explain our existence. Wouldn’t that be charming?
Anyone in touch with the world of high-energy physics will be well aware of the ferment created by the news from Brookhaven and Stanford, followed by Frascati and DESY, of the existence of new particles. But new particles have been unearthed in profusion by high-energy accelerators during the past 20 years. Why the excitement over the new discoveries?
A brief answer is that the particles have been found in a mass region where they were completely unexpected with stability properties which, at this stage of the game, are completely inexplicable. In this article we will first describe the discoveries and then discuss some of the speculations as to what the discoveries might mean.
We begin at the Brookhaven National Laboratory where, since the Spring of this year, an MIT/Brookhaven team have been looking at collisions between two protons which yielded (amongst other things) an electron and a positron. A series of experiments on the production of electron–positron pairs in particle collisions has been going on for about eight years by groups led by Sam Ting, mainly at the DESY synchrotron in Hamburg. The aim is to study some of the electromagnetic features of particles where energy is manifest in the form of a photon which materialises in an electron–positron pair. The experiments are not easy to do because the probability that the collisions will yield such a pair is very low. The detection system has to be capable of picking out an event from a million or more other types of event.
Beryllium bombardment
It was with long experience of such problems behind them that the MIT/Brookhaven team led by Ting, J J Aubert, U J Becker and P J Biggs brought into action a detection system with a double arm spectrometer in a slow ejected proton beam at the Brookhaven 33 GeV synchrotron. They used beams of 28.5 GeV bombarding a beryllium target. The two spectrometer arms span out at 15° either side of the incident beam direction and have magnets, Cherenkov counters, multiwire proportional chambers, scintillation counters and lead glass counters. With this array, it is possible to identify electrons and positrons coming from the same source and to measure their energy.
From about August, the realisation that they were on to something important began slowly to grow. The spectrometer was totting up an unusually large number of events where the combined energies of the electron and positron were equal to 3.1 GeV.
This is the classic way of spotting a resonance. An unstable particle, which breaks up too quickly to be seen itself, is identified by adding up the energies of more stable particles which emerge from its decay. Looking at many interactions, if energies repeatedly add up to the same figure (as opposed to the other possible figures all around it), they indicate that the measured particles are coming from the break up of an unseen particle whose mass is equal to the measured sum.
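The peak-hunting logic described here is easy to sketch: histogram the summed pair energies and look for a bin that stands out above its neighbours. The following toy illustration uses simulated numbers, not real data; the 3.1 GeV peak position and 0.02 GeV spread are illustrative assumptions:

```python
import random

# Toy model: most e+e- pair energies form a smooth background,
# but a fraction cluster at the mass of an unseen parent particle.
random.seed(1)
background = [random.uniform(2.0, 5.0) for _ in range(900)]
signal = [random.gauss(3.1, 0.02) for _ in range(100)]  # narrow resonance
energies = background + signal

# Histogram the summed pair energies in 0.1 GeV bins and find the peak.
bins = {}
for e in energies:
    b = round(e, 1)
    bins[b] = bins.get(b, 0) + 1

peak = max(bins, key=bins.get)
print(peak)  # the most populated bin sits at the resonance mass: 3.1
```

The background spreads roughly 30 events into each bin, while the narrow signal piles some 100 events into a single bin, which is exactly the kind of excess the MIT/Brookhaven spectrometer was totting up.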
The team went through extraordinary contortions to check their apparatus to be sure that nothing was biasing their results. The particle decaying into the electron and positron they were measuring was a difficult one to swallow. The energy region had been scoured before, even if not so thoroughly, without anything being seen. Also the resonance was looking “narrow” – this means that the energy sums were coming out at 3.1 GeV with great precision rather than, for example, spanning from 2.9 to 3.3 GeV. The width is a measure of the stability of the particle (from Heisenberg’s Uncertainty Principle, which requires only that the product of the average lifetime and the width be a constant). A narrow width means that the particle lives a long time. No other particle of such a heavy mass (over three times the mass of the proton) has anything like that stability.
By the end of October, the team had about 500 events from a 3.1 GeV particle. They were keen to extend their search to the maximum mass their detection system could pin down (about 5.5 GeV) but were prodded into print mid-November by dramatic news from the other coast of America. They baptised the particle J, which is a letter close to the Chinese symbol for “ting”. From then on, the experiment has had top priority. Sam Ting said that the Director of the Laboratory, George Vineyard, asked him how much time on the machine he would need – which is not the way such conversations usually go.
The apparition of the particle at the Stanford Linear Accelerator Center on 10 November was nothing short of shattering. Burt Richter described it as “the most exciting and frantic week-end in particle physics I have ever been through”. It followed an upgrading of the electron–positron storage ring SPEAR during the late Summer.
Until June, SPEAR was operating with beams of energy up to 2.5 GeV so that the total energy in the collision was up to a peak of 5 GeV. The ring was shut down during the late summer to install a new RF system and new power supplies so as to reach about 4.5 GeV per beam. It was switched on again in September and within two days beams were orbiting the storage ring again. Only three of the four new RF cavities were in action so the beams could only be taken to 3.8 GeV. Within two weeks the luminosity had climbed to 5 × 1030 cm–2 s–1 (the luminosity dictates the number of interactions the physicists can see) and time began to be allocated to experimental teams to bring their detection systems into trim.
It was the Berkeley/Stanford team led by Richter, M Perl, W Chinowsky, G Goldhaber and G H Trilling who went into action during the week-end 9–10 November to check back on some “funny” readings they had seen in June. They were using a detection system consisting of a large solenoid magnet, wire chambers, scintillation counters and shower counters, almost completely surrounding one of the two intersection regions where the electrons and positrons are brought into head-on collision.
Put through its paces
During the first series of measurements with SPEAR, when it went through its energy paces, the cross-section (or probability of an interaction between an electron and positron occurring) was a little high at 1.6 GeV beam energy (3.2 GeV collision energy) compared with the neighbouring beam energies. The June exercise, which gave the funny readings, was a look over this energy region again. Cross-sections were measured with electrons and positrons at 1.5, 1.55, 1.6 and 1.65 GeV. Again 1.6 GeV was a little high but 1.55 GeV was even more peculiar. In eight runs, six measurements agreed with the 1.5 GeV data while two were higher (one of them five-times higher). So, obviously, a gremlin had crept into the apparatus. While meditating during the transformation from SPEAR I to SPEAR II, the gremlin was looked for but not found. It was then that the suspicion grew that between 3.1 and 3.2 GeV collision energies could lie a resonance.
During the night of 9–10 November the hunt began, changing the beam energies in 0.5 MeV steps. By 11.00 a.m. Sunday morning the new particle had been unequivocally found. A set of cross-section measurements around 3.1 GeV showed that the probability of interaction jumped by a factor of 10 from 20 to 200 nanobarns. In a state of euphoria, the champagne was cracked open and the team began celebrating an important discovery. Gerson Goldhaber retired in search of peace and quiet to write the findings for immediate publication.
While he was away, it was decided to polish up the data by going slowly over the resonance again. The beams were nudged from 1.55 to 1.57 and everything went crazy. The interaction probability soared higher; from around 20 nanobarns the cross-section jumped to 2000 nanobarns and the detector was flooded with events producing hadrons. Pief Panofsky, the Director of SLAC, arrived and paced around invoking the Deity in utter amazement at what was being seen. Gerson Goldhaber then emerged with his paper proudly announcing the 200 nanobarn resonance and had to start again, writing 10 times more proudly.
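The jump in event rate follows directly from the luminosity quoted earlier: the interaction rate is R = L × σ. A quick numerical check using the SPEAR II luminosity from the text, remembering that 1 nanobarn = 10–33 cm2 (a sketch of the arithmetic, not analysis code):

```python
# Interaction rate R = luminosity x cross-section.
LUMINOSITY = 5e30   # cm^-2 s^-1, SPEAR II figure quoted in the text
NB_TO_CM2 = 1e-33   # 1 nanobarn in cm^2

def rate_per_second(cross_section_nb):
    """Events per second for a given cross-section in nanobarns."""
    return LUMINOSITY * cross_section_nb * NB_TO_CM2

print(rate_per_second(20))    # off-resonance, 20 nb: ~0.1 events/s
print(rate_per_second(2000))  # on the peak, 2000 nb: ~10 events/s
```

A hundredfold jump in cross-section means a hundredfold jump in rate, which is why the detector was suddenly flooded with hadron-producing events.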
Within hours of the SPEAR measurements, the telephone wires across the Atlantic were humming as information enquiries and rumours were exchanged. As soon as it became clear what had happened, the European Laboratories looked to see how they could contribute to the excitement. The obvious candidates, to be in on the act quickly, were the electron–positron storage rings at Frascati and DESY.
From 13 November, the experimental teams on the ADONE storage ring (from Frascati and the INFN sections of the universities of Naples, Padua, Pisa and Rome) began to search in the same energy region. They have detection systems for three experiments known as gamma–gamma (wide solid angle detector with high efficiency for detecting neutral particles), MEA (solenoidal magnetic spectrometer with wide gap spark chambers and shower detectors) and baryon–antibaryon (coaxial hodoscopes of scintillators covering a wide solid angle). The ADONE operators were able to jack the beam energy up a little above its normal peak of 1.5 GeV and on 15 November the new particle was seen in all three detection systems. The data confirmed the mass and the high stability. The experiments are continuing using the complementary abilities of the detectors to gather as much information as possible on the nature of the particle.
At DESY, the DORIS storage ring was brought into action with the PLUTO and DASP detection systems described later in this issue on page 427. During the week-end of 23–24 November, a clear signal at about 3.1 GeV total energy was seen in both detectors, with PLUTO measuring events with many emerging hadrons and DASP measuring two emerging particles. The angular distribution of elastic electron–positron scattering was measured at 3.1 GeV, and around it, and a distinct change was seen. The detectors are now concentrating on measuring branching ratios – the relative rate at which the particle decays in different ways.
Excitation times
In the meantime, SPEAR II had struck again. On 21 November, another particle was seen at 3.7 GeV. Like the first it is a very narrow resonance indicating the same high stability. The Berkeley/Stanford team have called the particles psi (3105) and psi (3695).
No-one had written the recipe for these particles and that is part of what all the excitement is about. At this stage, we can only speculate about what they might mean. First of all, for the past year, something has been expected in the hadron–lepton relationship. The leptons are particles, like the electron, which we believe do not feel the strong force. Their interactions, such as are initiated in an electron–positron storage ring, can produce hadrons (or strong force particles) via their common electromagnetic features. On the basis of the theory that hadrons are built up of quarks (a theory that has a growing weight of experimental support – see CERN Courier October 1974 pp331–333), it is possible to calculate relative rates at which the electron–positron interaction will yield hadrons and the rate should decrease as the energy goes higher. The results from the Cambridge bypass and SPEAR about a year ago showed hadrons being produced much more profusely than these predictions.
What seems to be the inverse of this observation is seen at the CERN Intersecting Storage Rings and the 400 GeV synchrotron at the FermiLab. In interactions between hadrons, such as proton–proton collisions, leptons are seen coming off at much higher relative rates than could be predicted. Are the new particles behind this hadron–lepton mystery? And if so, how?
Other speculations are that the particles have new properties to add to the familiar ones like charge, spin, parity… As the complexity of particle behaviour has been uncovered, names have had to be selected to describe different aspects. These names are linked, in the mathematical description of what is going on, to quantum numbers. When particles interact, the quantum numbers are generally conserved – the properties of the particles going into the interaction are carried away, in some perhaps very different combination, by the particles which emerge. If there are new properties, they also will influence what interactions can take place.
To explain what might be happening, we can consider the property called “strangeness”. This was assigned to particles like the neutral kaon and lambda to explain why they were always produced in pairs – the strangeness quantum number is then conserved, the kaon carrying +1, the lambda carrying –1. It is because the kaon has strangeness that it is a very stable particle. It will not readily break up into other particles which do not have this property.
Two new properties have recently been invoked by the theorists – colour and charm. Colour is a suggested property of quarks which makes sense of the statistics used to calculate the consequences of their existence. This gives us nine basic quarks – three coloured varieties of each of the three familiar ones. Charm is a suggested property which makes sense of some observations concerning neutral current interactions (discussed below).
It is the remarkable stability of the new particles which makes it so attractive to invoke colour or charm. From the measured width of the resonances they seem to live for about 10–20 seconds and do not decay rapidly like all the other resonances in their mass range. Perhaps they carry a new quantum number?
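The lifetime quoted here follows from Heisenberg's relation τ ≈ ħ/Γ: a narrow width means a long life. A back-of-envelope check, where the ~90 keV width is an assumed illustrative value rather than a number from the article:

```python
HBAR = 6.582e-22      # MeV * s (reduced Planck constant)
width_mev = 93e-3     # ~93 keV resonance width; assumed value for illustration

# Uncertainty relation: lifetime ~ hbar / width
lifetime = HBAR / width_mev
print(lifetime)  # ~7e-21 s, in the "about 1e-20 s" ballpark quoted above
```

Ordinary resonances in this mass range have widths of tens of MeV and thus lifetimes shorter by a factor of a thousand, which is why such a narrow peak was so hard to swallow.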
Unfortunately, even if the new particles are coloured, since they are formed electromagnetically they should be able to decay the same way and the sums do not give their high stability. In addition, the sums say that there is not enough energy around for them to be built up of charmed constituents. The answer may lie in new properties but not in a way that we can easily calculate.
Yet another possibility is that we are, at last, seeing the intermediate boson. This particle was proposed many years ago as an intermediary of the weak force. Just as the strong force is communicated between hadrons by passing mesons around and the electromagnetic force is communicated between charged particles by passing photons around, it is thought that the weak force could also act via the exchange of a particle rather than “at a point”.
When it was believed that the weak interactions always involved a change of electric charge between the lepton going into the interaction and the lepton going out, the intermediate boson (often referred to as the W particle) was always envisaged as a charged particle. The CERN discovery of neutral currents in 1973 revealed that a charge change between the leptons need not take place; there could also be a neutral version of the intermediate boson (often referred to as the Z particle). The Z particle can also be treated in the theory which has had encouraging success in uniting the interpretations of the weak and electromagnetic forces.
This work has taken the Z mass into the 70 GeV region and its appearance around 3 GeV would damage some of the beautiful features of the reunification theories. A strong clue could come from looking for asymmetries in the decays of the new particles because, if they are of the Z variety, parity violation should occur.
1974 has been one of the most fascinating years ever experienced in high-energy physics. Still reeling from the neutral current discovery, the year began with the SPEAR hadron production mystery, continued with new high-energy information from the FermiLab and the CERN ISR, including the high lepton production rate, and finished with the discovery of the new particles. And all this against a background of feverish theoretical activity trying to keep pace with what the new accelerators and storage rings have been uncovering.
For further details and an account of current challenges and opportunities in charm physics, see “Charming clues for existence”.
What role does science communication play in your academic career?
When I was a postdoc I started to realise that the science communication side of my life was really important to me. It felt like I was having a big impact – and in research, you don’t always feel like you’re having that big impact. When you’re a grad student or postdoc, you spend a lot of time dealing with rejection, feeling like you’re not making progress or you’re not good enough. I realised that with science communication, I was able to really feel like I did know something, and I was able to share that with people.
When I began to apply for faculty jobs, I realised I didn’t want to just do science writing as a nights and weekends job, I wanted it to be integrated into my career. Partially because I didn’t want to give up the opportunity to have that kind of impact, but also because I really enjoyed it. It was energising for me and helped me contextualise the work I was doing as a scientist.
How did you begin your career in science communication?
I’ve always enjoyed writing stories and poetry. At some point I figured out that I could write about science. When I went to grad school I took a class on science journalism and the professor helped me pitch some stories to magazines, and I started to do freelance science writing. Then I discovered Twitter. That was even better because I could share every little idea I had with a big audience. Between Twitter and freelance science writing, I garnered quite a large profile in science communication and that led to opportunities to speak and do more writing. At some point I was approached by agents and publishers about writing books.
Who is your audience?
When I’m not talking to other scientists, my main community is generally those who have a high-school education, but not necessarily a university education. I don’t tailor things to people who aren’t interested in science, or try to change people’s minds on whether science is a good idea. I try to help people who don’t have a science background feel empowered to learn about science. I think there are a lot of people who don’t see themselves as “science people”. I think that’s a silly concept but a lot of people conceptualise it that way. They feel like science is closed to them.
The more that science communicators can give people a moment of understanding, an insight into science, I think they can really help people get more involved in science. The best feedback I’ve ever gotten is when students have come up to me and said “I started studying physics because I followed you on Twitter and I saw that I could do this,” or they read my book and that inspired them. That’s absolutely the best thing that comes out of this. It is possible to have a big impact on individuals by doing social media and science communication – and hopefully change the situation in science itself over time.
What were your own preconceptions of academia?
I have been excited about science since I was a little kid. I saw that Stephen Hawking was called a cosmologist, so I decided I wanted to be a cosmologist too. I had this vision in my head that I would be a theoretical physicist. I thought that involved a lot of standing alone in a small room with a blackboard, writing equations and having eureka moments. That’s what was always depicted on TV: you just sit by yourself and think real hard. When I actually got into academia, I was surprised by how collaborative and social it is. That was probably the biggest difference between expectation and reality.
How do you communicate the challenges of academia, alongside the awe-inspiring discoveries and eureka moments?
I think it’s important to talk about what it’s really like to be an academic, in both good ways and bad. Most people outside of academia have no idea what we do, so it’s really valuable to share our experiences, both because it challenges stereotypes in terms of what we’re really motivated by and how we spend our time, but also because there are a lot of people who have the same impression I did: where you just sit alone in a room with a chalkboard. I believe it’s important to be clear about what you actually do in academia, so more people can see themselves happy in the job.
At the same time, there are challenges. Academia is hard and can be very isolating. My advice for early-career researchers is to have things other than science in your life. As a student you’re working on something that potentially no one else cares very much about, except maybe your supervisor. You’re going to be the world-expert on it for a while. It can be hard to go through that and not have anybody to talk to about your work. I think it’s important to acknowledge what people go through and encourage them to get support.
There are of course other parts of academia that can be really challenging, like moving all the time. I went from West coast to East coast between undergrad and grad school, and then from the US to the UK, from the UK to Australia, back to the US and then to Canada. That’s a lot. It’s hard. They’re all big moves so you lose whatever local support system you had and you have to start over in a new place, make new friends and get used to a whole new government bureaucracy.
So there are a whole lot of things that are difficult about academia, and you do need to acknowledge those because a lot of them affect equity. Some of these make it more challenging to have diversity in the field, and they disproportionately affect some groups more than others. It is important to talk about these issues instead of just sweeping people under the rug.
Do you think that social media can help to diversify science and research?
Yes! I think that a large reason why people from underrepresented groups leave science is because they lack the feeling of belonging. If you get into a field and don’t feel like you belong, it’s hard to power through that. It makes it very unpleasant to be there. So I think that one of the ways social media can really help is by letting people see scientists who are not the stereotypical old white men. Talking about what being a scientist is really like, what the lifestyle is like, is really helpful for dismantling those stereotypes.
Your first book, The End of Everything, explored astrophysics but your next will popularise particle physics. Have you had to change your strategy when communicating different subjects?
This book is definitely a lot harder to write. The first one was very big and dramatic: the universe is ending! In this one, I’m really trying to get deeper into how fundamental physics works, which is a more challenging story to tell. The way I’m framing it is through “how to build a universe”. It’s about how fundamental physics connects with the structure of reality, both in terms of what we experience in our daily lives, but also the structure of the universe, and how physicists are working to understand that. I also want to highlight some of the scientists who are doing that work.
So yes, it’s much harder to find a catchy hook, but I think the subject matter and topics are things that people are curious about and have a hunger to understand. There really is a desire amongst the public to understand what the point of studying particle physics is.
Is high-energy physics succeeding when it comes to communicating with the public?
I think that there are some aspects where high-energy physics does a fantastic job. When the Higgs boson was discovered in 2012, it was all over the news and everybody was talking about it. Even though it’s a really tough concept to explain, a lot of people got some inkling of its importance.
A lot of science communication in high-energy physics relies on big discoveries, but recently there have not been many discoveries at the level of international news. There have been plenty of interesting anomalies in recent years; in terms of discoveries, though, we had the Higgs and the neutrino mass in 1998, and I'm not sure there are many others that would really grab your attention if you're not already invested in physics.
Part of the challenge is just the phase of discovery that particle physics is in right now. We have a model, and we’re trying to find the edges of validity of that model. We see some anomalies and then we fix them, and some might stick around. We have some ideas and theories but they might not pan out. That’s kind of the story we’re working with right now, whereas if you’re looking at astronomy, we had gravitational waves and dark energy. We get new telescopes with beautiful pictures all the time, so it’s easier to communicate and get people excited than it is in particle physics, where we’re constantly refining the model and learning new things. It’s a fantastically exciting time, but there have been no big paradigm shifts recently.
How can you keep people engaged in a subject where big discoveries aren’t constantly being made?
I think it’s hard. There are a few ways to go about it. You can talk about the really massive journey we’re on: this hugely consequential and difficult challenge we’re facing in high-energy physics. It’s a huge task of massive global effort, so you can help people feel involved in the quest to go beyond the Standard Model of particle physics.
You need to acknowledge it’s going to be a long journey before we make any big discoveries. There’s much work to be done, and we’re learning lots of amazing things along the way. We’re getting much higher precision. The process of discovery is also hugely consequential outside of high-energy physics: there are so many technological spin-offs that tie into other fields, like cosmology. Discoveries are being made between particle and cosmological physics that are really exciting.
We don’t know what the end of the story looks like. There aren’t a lot of big signposts along the way where we can say “we’ve made so much progress, we’re halfway there!” Highlighting the purpose of discovery, the little exciting things that we accomplish along the way such as new experimental achievements, and the people who are involved and what they’re excited about – this is how we can get around this communication challenge.
Every little milestone is an achievement to be celebrated. CERN is the biggest laboratory in the world. It’s one of humanity’s crowning achievements in terms of technology and international collaboration – I don’t think that’s an exaggeration. CERN and the International Space Station. Those two labs are examples of where a bunch of different countries, which may or may not get along, collaborate to achieve something that they can’t do alone. Seeing how everyone works together on these projects is really inspiring. If more people were able to get a glimpse of the excitement and enthusiasm around these experiments, it would make a big difference.
During its September session, the CERN Council was presented with a revised schedule for Long Shutdown 3 (LS3) of the LHC and its injector complex. For the LHC, LS3 is now scheduled to begin at the start of July 2026, seven and a half months later than planned. The overall length of the shutdown will increase by around four months. Combined, these measures will shift the start of the High-Luminosity LHC (HL-LHC) by approximately one year, to June 2030. The extensive programme of work for the injectors will begin in September 2026, with a gradual restart of operations scheduled to take place in 2028.
“The decision to shift the start of the HL-LHC by approximately one year and increase the length of the shutdown reflects a consensus supported by our scientific committees,” explains Mike Lamont, CERN director for accelerators and technology. “The delayed start of LS3 is primarily due to significant challenges encountered during the Phase II upgrades of the ATLAS and CMS experiments, which have led to the erosion of contingency time and introduced considerable schedule risks. The challenges faced by the experiment teams included COVID-19 and the impact of the Russian invasion of Ukraine.”
LS3 represents a pivotal phase in enhancing CERN’s capabilities. During the shutdown, ATLAS and CMS will replace many of their detectors and a large part of their electronics. Schedule contingencies have been insufficient for the new inner tracker for ATLAS, and for the HGCAL and new tracker for CMS. The delayed start of LS3 will allow the collaborations more time to develop and build these highly sophisticated detectors and systems.
On the machine side, a key activity during LS3 is the drilling of 28 vertical cores to link the new HL-LHC technical galleries to the LHC tunnel. Initially expected to take six months, this timeframe was reduced to two months in 2021 to optimise the schedule. However, challenges encountered during the tendering process and in subsequent consultations with specialists necessitated a return to the original six-month timeline for core excavation.
In addition to high-luminosity enhancements, LS3 will involve a major programme of work across the accelerator complex. This includes the North Area consolidation project and the transformation of the ECN3 cavern into a high-intensity fixed-target facility; the dismantling of the CNGS target to make way for the next phase of wakefield-acceleration research at AWAKE; improvements to ISOLDE to boost the facility’s nuclear-studies potential; and extensive maintenance and consolidation across all machines and facilities to ensure operational safety, longevity and availability.
“All these activities are essential to ensuring the medium-term future of the laboratory and allowing full exploitation of its remarkable potential in the coming decades,” says Lamont.
One of nature’s greatest mysteries lies in the masses of the elementary fermions. Each successive generation of quarks and charged leptons is heavier than the previous one, with only the first generation forming ordinary matter, but the overall pattern and vast mass differences remain empirical and unexplained. In the Standard Model (SM), charged fermions acquire mass through interactions with the Higgs field. Consequently, their interaction strength with the Higgs boson, a ripple of the Higgs field, is proportional to the fermions’ mass. Precise measurements of these interaction strengths could offer insights into the mass-generation mechanism and potentially uncover new physics to explain this mystery.
The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks (charm, bottom and top), based on the analysis of data collected during LHC Run 2 (2015–2018). The analyses refine two studies: Higgs-boson decays to charm- and bottom-quark pairs (H → cc and H → bb) in events where the Higgs boson is produced together with a weak boson V (W or Z); and, since the Higgs boson is too light to decay into a top-quark pair, the interaction with top quarks is probed in Higgs production in association with a top-quark pair (ttH) in events with H → bb decays. Sensitivity to H → cc and H → bb in VH production is increased by a factor of three and by 15%, respectively. Sensitivity to ttH, H → bb production is doubled.
Innovative analysis techniques were crucial to these improvements, several involving machine learning, such as state-of-the-art transformer networks in the extremely challenging ttH(bb) analysis. Both analyses used an upgraded algorithm for identifying particle jets originating from bottom and charm quarks. A bespoke implementation allowed, for the first time, VH events to be analysed coherently for both H → cc and H → bb decays. The enhanced classification of the signal against various background processes tripled the number of selected ttH, H → bb events and was the single largest contributor to the improved VH, H → cc sensitivity. Both analyses also improved their background-estimation methods, incorporating new theoretical predictions and refined assessments of the related uncertainties – a key ingredient in boosting the ttH, H → bb sensitivity.
Thanks to these improvements, ATLAS measured the ttH, H → bb cross-section with a precision of 24%, better than any previous single measurement. The signal strength relative to the SM prediction is found to be 0.81 ± 0.21, consistent with the SM expectation of unity. This does not confirm earlier ATLAS and CMS results that left room for a lower-than-expected ttH cross-section, dispelling speculation about new physics in this process. The compatibility between the new and previous ATLAS results is estimated at 21%.
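As a back-of-the-envelope check of that consistency, the measured signal strength can be compared with the SM expectation using a simple Gaussian pull. This is only an illustrative approximation: the real result comes from a full profile-likelihood fit with asymmetric, correlated uncertainties.

```python
# Rough Gaussian compatibility check for the ttH, H -> bb signal strength.
# Assumes a single symmetric uncertainty; the ATLAS result itself uses a
# full profile-likelihood fit, so this is only an order-of-magnitude check.
mu_measured = 0.81  # signal strength relative to the SM prediction
sigma_mu = 0.21     # total quoted uncertainty
mu_sm = 1.0         # SM expectation

pull = (mu_sm - mu_measured) / sigma_mu
print(f"Deviation from the SM expectation: {pull:.1f} sigma")
```

A pull of under one standard deviation is well within the range expected from statistical fluctuations alone.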
In the new analysis VH, H → bb production was measured with a record precision of 18%; WH, H → bb production was observed for the first time with a significance of 5.3σ. Because H → cc decays are suppressed by a factor of 20 relative to H → bb decays, given the difference in quark masses, and are more difficult to identify, no significant sign of this process was found in the data. However, an upper limit on potential enhancements of the VH, H → cc rate of 11.3 times the SM prediction was placed at the 95% confidence level, allowing ATLAS to constrain the Higgs-charm coupling to less than 4.2 times the SM value, the strongest direct constraint to date.
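The factor-of-20 suppression quoted above follows from the leading-order scaling of the Higgs partial widths with the square of the quark mass. A minimal sketch, using approximate running quark masses evaluated near the Higgs-boson mass (indicative values, not precise analysis inputs):

```python
# Leading-order estimate of the H -> bb / H -> cc width ratio.
# Partial widths scale as the quark mass squared; the masses below are
# approximate MS-bar running masses at the Higgs-boson mass scale
# (indicative values only).
m_b_at_mH = 2.8   # GeV, approximate running bottom-quark mass
m_c_at_mH = 0.62  # GeV, approximate running charm-quark mass

suppression = (m_b_at_mH / m_c_at_mH) ** 2
print(f"Gamma(H->bb) / Gamma(H->cc) ~ {suppression:.0f}")
```

Using running rather than pole masses is essential here: with pole masses the ratio would come out closer to 10, while the running masses reproduce the factor of roughly 20 seen in the measured branching fractions.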
The ttH and VH cross-sections were measured (double-)differentially with increased reach, granularity and precision (figures 1 and 2). Notably, in the high transverse-momentum regime, where potential new-physics effects are not yet excluded, the measurements were extended and the precision nearly doubled. Neither analysis, however, shows significant deviations from SM predictions.
The significant new dataset from the ongoing Run 3 of the LHC, coupled with further advanced techniques like transformer-based jet identification, promises even more rigorous tests soon, and amplifies the excitement for the High-Luminosity LHC, where further precision will push the boundaries of our understanding of the Higgs boson – and perhaps yield clues to the mystery of the fermion masses.