
W mass snaps back

Based on the latest data inputs, the Standard Model (SM) constrains the mass of the W boson (mW) to be 80,353 ± 6 MeV. At tree level, mW depends only on the mass of the Z boson and the weak and electromagnetic couplings. The boson’s tendency to briefly transform into a top quark and a bottom quark causes the largest quantum correction. Any departure from the SM prediction could signal the presence of additional loops containing unknown heavy particles.
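In the on-shell scheme this relation takes a standard compact form, with α the electromagnetic coupling, GF the Fermi constant and Δr collecting the quantum corrections mentioned above (Δr = 0 at tree level):

```latex
% Tree level for \Delta r = 0; loops (top-bottom dominant) enter via \Delta r
m_W^2 = \frac{m_Z^2}{2}\left(1 + \sqrt{1 - \frac{4\pi\alpha}{\sqrt{2}\,G_F\,m_Z^2}\,\frac{1}{1-\Delta r}}\right)
```

New heavy particles circulating in loops would shift Δr, and with it the predicted mW.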

The CDF experiment at the Tevatron observed just such a departure in 2022, plunging the boson into a midlife crisis 39 years after it was discovered at CERN’s SppS collider (CERN Courier September/October 2023 p27). A new measurement from the CMS experiment at the LHC now contradicts the anomaly reported by CDF. While the CDF result stands seven standard deviations above the SM prediction, CMS’s measurement aligns with the SM and with previous results at the LHC. The CMS and CDF results claim joint first place in precision, provoking a dilemma for phenomenologists.

New-physics puzzle

“The result by CDF remains puzzling, as it is extremely difficult to explain the discrepancy with the three LHC measurements by the presence of new physics, in particular as there is also a discrepancy with D0 at the same facility,” says Jens Erler of Johannes Gutenberg-Universität Mainz. “Together with measurements of the weak mixing angle, the CMS result confirms the validity of the SM up to new physics scales well into the TeV region.”

“I would not call this ‘case closed’,” agrees Sven Heinemeyer of the Universidad Autónoma de Madrid. “There must be a reason why CDF got such an anomalously high value, and understanding what is going on may be very beneficial for future investigations. We know that the SM is not the last word, and there are clear cases that require physics beyond the SM (BSM). The question is at which scale BSM physics appears, or how strongly it is coupled to the SM particles.”

The result confirms the validity of the SM up to new physics scales well into the TeV region

To obtain their result, CDF analysed four million W-boson decays originating from 1.96 TeV proton–antiproton collisions at Fermilab’s Tevatron collider between 2002 and 2011. In stark disagreement with the SM, the analysis yielded a mass of 80,433.5 ± 9.4 MeV. This result prompted the ATLAS collaboration to revisit its 2017 analysis of W → μν and W → eν decays in 7 TeV proton–proton collisions using the latest global data on parton distribution functions, which describe the probable momenta of quarks and gluons inside the proton, together with a newly developed fit. The central value remained consistent with the SM, and the reduced uncertainty of 16 MeV increased the tension with the new CDF result. A less precise measurement by the LHCb collaboration also favoured the SM (CERN Courier May/June 2023 p10).

CMS now reports mW to be 80,360.2 ± 9.9 MeV, concluding a study of W → μν decays begun eight years ago.

“One of the main strategic choices of this analysis is to use a large dataset of Run 2 data,” says CMS spokesperson Gautier Hamel de Monchenault. “We are using 16.8 fb⁻¹ of 13 TeV data at a relatively high pileup of on average 25 interactions per bunch crossing, leading to very large samples of about 7.5 million Z bosons and 90 million W bosons.”

With high pileup and high energies come additional challenges. The measurement uses an innovative analysis technique that benchmarks W → μν decay systematics using Z → μμ decays as independent validation, wherein one muon is treated as a neutrino. The ultimate precision of the measurement relies on reconstructing the muon’s momentum in the detector’s silicon tracker to better than one part in 10,000 – a groundbreaking level of accuracy built on minutely modelling energy loss, multiple scattering, magnetic-field inhomogeneities and misalignments. “What is remarkable is that this incredible level of precision on the muon momentum measurement is obtained without using Z → μμ as a calibration candle, but only using a huge sample of J/ψ → μμ events,” says Hamel de Monchenault. “In this way, the Z → μμ sample can be used for an independent closure test, which also provides a competitive measurement of the Z mass.”

Measurement matters

Measuring mW using W → μν decays is challenging because the neutrino escapes undetected. mW must be inferred from either the distribution of the transverse mass visible in the events (mT) or the distribution of the transverse momentum of the muons (pT). The mT approach used by CDF is the most precise option at the Tevatron, but typically less precise at the LHC, where hadronic recoil is difficult to distinguish from pileup. The LHC experiments also face a greater challenge when reconstructing mW from distributions of pT. In proton–antiproton collisions at the Tevatron, W bosons could be created via the annihilation of pairs of valence quarks. In proton–proton collisions at the LHC, the antiquark in the annihilating pair must come from the less well understood sea; and at LHC energies, the partons have lower fractions of the proton’s momentum – a less well constrained domain of parton distribution functions.
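For reference, the transverse mass is built from the muon transverse momentum, the missing transverse momentum attributed to the neutrino, and the azimuthal angle Δφ between them:

```latex
m_T = \sqrt{2\,p_T^{\mu}\,p_T^{\text{miss}}\left(1 - \cos\Delta\phi\right)}
```

Its distribution has a kinematic edge near mW, which is what makes it sensitive to the W mass.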

“Instead of exploiting the Z → μμ sample to tune the parameters of W-boson production, CMS is using the W data themselves to constrain the theory parameters of the prediction for the pT spectrum, and using the independent Z → μμ sample to validate this procedure,” explains Hamel de Monchenault. “This validation gives us great confidence in our theory modelling.”

“The CDF collaboration doesn’t have an explanation for the incompatibility of the results,” says spokesperson David Toback of Texas A&M University. “Our focus is on the checks of our own analysis and understanding of the ATLAS and CMS methods so we can provide useful critiques that might be helpful in future dialogues. On the one hand, the consistency of the ATLAS and CMS results must be taken seriously. On the other, given the number of iterations and improvements needed over decades for our own analysis – CDF has published five times over 30 years – we still consider both LHC results ‘early days’ and look forward to more details, improved methodology and additional measurements.”

The LHC experiments each plan improvements using new data. The results will build on a legacy of electroweak precision at the LHC that was not anticipated to be possible at a hadron collider (CERN Courier September/October 2024 p29).

“The ATLAS collaboration is extremely impressed with the new measurement by CMS and the extraordinary precision achieved using high-pileup data,” says spokesperson Andreas Hoecker. “It is a tour de force, accomplished by means of a highly complex fit, for which we applaud the CMS collaboration.” ATLAS’s next measurement of mW will focus on low-pileup data, to improve sensitivity to mT relative to their previous result.

The ATLAS collaboration is extremely impressed with the new measurement by CMS

The LHCb collaboration is working on an update of its measurement using the full Run 2 data set. LHCb’s forward acceptance may prove to be powerful in a global fit. “LHCb probes parton density functions in different phase space regions, and that makes the measurements from LHCb anticorrelated with those of ATLAS and CMS, promising a significant impact on the average, even if the overall uncertainty is larger,” says spokesperson Vincenzo Vagnoni. The goal is to progress LHC measurements towards a combined precision of 5 MeV. CMS plans several improvements to its own analysis.

“There is still a significant factor to be gained on the momentum scale, with which we could reach the same precision on the Z-boson mass as LEP,” says Hamel de Monchenault. “We are confident that we can also use a future, large low-pileup run to exploit the W recoil and mT to complement the muon pT spectrum. Electrons can also be used, although in this case the Z sample could not be kept independent in the energy calibration.”

Shifting sands for muon g–2

Lattice–QCD calculation

The Dirac equation predicts the g-factor of the muon, which fixes its magnetic moment, to be precisely two. Virtual particles in loops add roughly 0.1% to this value, giving rise to a so-called anomalous contribution often quantified by aμ = (g–2)/2. Countless electromagnetic loops dominate the calculation, spontaneous symmetry breaking is evident in the effect of the weak interactions, and the contributions of the strong force are non-perturbative. Despite this formidable complexity, theoretical calculations of aμ have been experimentally verified to nine significant figures.
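To set the scale: the leading one-loop QED term, computed by Schwinger, already accounts for the bulk of that 0.1%, and the current dispute concerns digits far beyond it:

```latex
a_\mu = \frac{g-2}{2}, \qquad a_\mu^{\text{QED, 1-loop}} = \frac{\alpha}{2\pi} \approx 1.16\times 10^{-3}
```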

The devil is in the 10th digit. The experimental world average for aμ currently stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in a 2020 white paper. But two recent results may ease this tension in advance of a new showdown with experiment next year.

The first new input is data from the CMD-3 experiment at the Budker Institute of Nuclear Physics, which yields a prediction for aμ consistent with the experimental data. Comparable electron–positron (e⁺e⁻) collider data from the KLOE experiment at the National Laboratory of Frascati, the BaBar experiment at SLAC, the BESIII experiment at IHEP Beijing and CMD-3’s predecessor CMD-2 were the backbone of the 2020 theory white paper. With KLOE and CMD-3 now incompatible at the level of 5σ, theorists are exploring alternative bases for the theoretical prediction, such as an ab-initio approach based on lattice QCD and a data-driven approach using tau–lepton decays.

The second new result is an updated theory calculation of aμ by the Budapest–Marseille–Wuppertal (BMW) collaboration. BMW’s ab-initio lattice–QCD calculation of 2020 was the first to challenge the data-driven consensus expressed in the 2020 white paper. The recent update now claims superior precision, driven in part by the pragmatic implementation of a data-driven approach in the low-mass region, where experiments are in good agreement. Though it accounts for only 5% of the hadronic contribution to aμ, this “long-distance” region is often the largest source of error in lattice–QCD calculations, and is relatively insensitive to the use of finer lattices.

The new BMW result is fully compatible with the experimental world average, and incompatible with the 2020 white paper at the level of 4σ.

“It seems to me that the 0.9σ agreement between the direct experimental measurement of the magnetic moment of the muon and the ab-initio calculation of BMW has most probably postponed the possible discovery of new physics in this process,” says BMW spokesperson Zoltán Fodor (Wuppertal). “It is important to mention that other groups have partial results, too, so-called window results, and they all agree with us and in several cases disagree with the result of the data-driven method.”

These two analyses were among the many discussed at the seventh plenary workshop of the Muon g-2 Theory Initiative, held in Tsukuba, Japan, from 9 to 13 September. The theory initiative plans to release an updated prediction in a white paper due to be published in early 2025. With multiple mature e⁺e⁻ and lattice–QCD analyses under way for several years, attention now turns to tau decays – the subject of a soon-to-be-announced mini-workshop to ensure they are fully available for consideration as a possible basis for the 2025 white paper. Input data would likely originate from tau decays recorded by the Belle experiment at KEK and the ALEPH experiment at CERN, both now decommissioned.

I am hopeful we will be able to establish consolidation between independent lattice calculations at the sub-percent level

“From a theoretical point of view, the challenge for including the tau data is the isospin rotation that is needed to convert the weak hadronic tau decay to the desired input for hadronic vacuum polarisation,” explains theory-initiative chair Aida X El-Khadra (University of Illinois). Hadronic vacuum polarisation (HVP) is the most challenging part of the calculation of aμ, accounting for the effect of a muon emitting a virtual photon that briefly transforms into a flurry of quarks and gluons just before it absorbs the photon representing the magnetic field (CERN Courier May/June 2021 p25).

Lattice QCD offers the possibility of a purely theoretical calculation of HVP. While BMW remains the only group to have published a full lattice-QCD calculation, multiple groups are zeroing in on its most sensitive aspects (CERN Courier September/October 2024 p21).

“The main challenge in lattice-QCD calculations of HVP is improving the precision to the desired sub-percent level, especially at long distances,” continues El-Khadra. “With the new results for the long-distance contribution by the RBC/UKQCD and Mainz collaborations that were already reported this year, and the results that are still expected to be released this fall, I am hopeful that we will be able to establish consolidation between independent lattice calculations at the sub-percent level. In this case we will provide a lattice-only determination of HVP in the second white paper.”

A bestiary of exotic hadrons

Twenty-three exotic states discovered at the LHC

Seventy-six new particles have been discovered at the Large Hadron Collider (LHC) so far: the Higgs boson, 52 conventional hadrons and a bestiary of 23 exotic hadrons whose structure cannot yet be reliably explained, nor their existence predicted.

The exotic states are varied and complex, displaying little discernible pattern at first glance. They represent a fascinating detective story: an experimentally driven quest to understand the exotic offspring of the strong interaction, motivating rival schools of thought among theorists.

This surge in new hadrons has been one of the least expected outcomes of the LHC (see “Unexpected” figure). With a tenfold increase in data at the High-Luminosity LHC (HL-LHC) on the horizon, and further new states also likely to emerge at the Belle II experiment in Japan, the BESIII experiment in China, and perhaps at a super charm–tau factory in the same country, their story is in its infancy, with twists and turns still to come.

Building blocks

Just as electric charges arrange themselves in neutral atoms, the colour charges that carry the strong interaction arrange themselves into colourless composite states. As fundamental particles with colour charge, quarks (q) and gluons (g) therefore cannot exist independently, but only in colour-neutral composite states called hadrons. Since the discovery of the pion in 1947, a rich phenomenology of mesons (qq̄) and baryons (qqq) inspired the quark model and eventually the theory of quantum chromodynamics (QCD), which serves as an impeccable description of the strong interaction to this day.

But why should nature not also contain exotic colour-neutral combinations such as tetraquarks (qq̄qq̄), pentaquarks (qqqqq̄), hexaquarks (qqqqqq or qq̄qq̄qq̄), hybrid hadrons (qq̄g or qq̄gg) and glueballs (gg or ggg)?

Twenty-three exotic hadrons have been discovered so far at the LHC

The existence of exotic hadrons was debated without consensus for decades, with interest growing in the early 2000s, when new states with unexpected features were observed. In 2003, the BaBar experiment at SLAC discovered the D*s0(2317)+ meson, with a mass close to the sum of the masses of a D meson and a kaon. A few months later that year, Belle discovered the χc1(3872) meson, then called X(3872) (see “What’s in a name?” panel), with a mass close to the sum of the masses of a D0 meson and a D*0 meson. As well as their striking closeness to meson–meson thresholds, the “width” of their signals was much narrower than expected. (Measured in units of energy, such widths are reciprocal to particle lifetimes.)

Soon afterwards, a number of other charmonium-like and bottomonium-like states were observed. Belle’s observation in 2007 of the electrically charged charmonium-like state Z(4430)+ (now called Tcc1(4430)+) was a pathfinder in establishing the existence of QCD exotics. Though these states exhibited the telltale signs of being excitations of a charm–anticharm (cc̄) system (see “The new particles”), their net electric charge indicated a system that could not be composed of only a quark–antiquark pair, as particles and antiparticles have opposite electric charges. Two additional quarks had to be present.

Exotic states at the LHC

The start-up of the LHC opened up the trail, with 23 new exotic hadrons observed there so far (see “The 23 exotic hadrons discovered at the LHC” table). The harvest of new states began in autumn 2013, when the CMS experiment at the LHC reported the observation of the χc1(4140) state in the J/ψφ mass spectrum in B+ → J/ψφK+ decays, confirming a hint from the CDF experiment at Fermilab. Its minimal quark content is likely cc̄ss̄. CMS also reported evidence for a state at a higher mass, observed by the LHCb experiment at the LHC in 2016 as the χc1(4274), alongside two more states at masses of 4500 and 4700 MeV.

What’s in a name?

Reflecting their mystery, the first exotic states were named X, Y and Z. Later on, the proliferation of exotic states required an extension of the particle naming scheme. Manifestly exotic tetraquarks and pentaquarks are now denoted T and P, respectively, with a subscript listing the bottom (b), charm (c) and strange (s) quark content. Exotic quarkonium-like states follow the naming scheme of the conventional mesons, where the name is related to the quark content and spin-parity combination. For example, ψ denotes a state with at least a cc̄ quark pair and JPC = 1––, and χc1 denotes a state with at least a cc̄ quark pair and JPC = 1++. Numbers in parentheses refer to approximate measured masses in MeV. Exotic hadrons are classified as mesons or baryons depending on whether their baryon number is zero or not.

In a 2021 analysis of the same B+ → J/ψφK+ decay mode including LHC Run 2 data, LHCb reported two more neutral states, χc1(4685) and X(4630), that do not correspond to cc̄ states expected from the quark model. The analysis also reported two more resonances seen in the J/ψK+ mass spectrum, Tccs1(4000)+ and Tccs1(4220)+. Carrying charge and strangeness, these charmonium-like states are manifestly exotic, with a minimal quark content cc̄us̄.

For the Tccs1(4000)+, LHCb had sufficient data to produce an Argand diagram with the distinct signature of a resonance (see “Round resonances” panel). A possible isospin partner, Tccs1(4000)0, was later found in B0 → J/ψφK0S decays, lending further evidence that it is a resonance and not a kinematical feature. (According to an approximate symmetry of QCD, the strong interaction should treat a cc̄us̄ state almost exactly like a cc̄ds̄ state, as up and down quarks have the same colour charges and similar masses.) Other charmonium-like tetraquarks were later seen by LHCb in the decays χc0(3960) → Ds+Ds– and χc1(4010) → D*+D–.

Table of the 23 exotic hadrons discovered at the LHC

The world’s first pentaquarks were discovered by LHCb in 2015. Two pentaquarks appeared in the J/ψp spectrum in a study of Λ0b → J/ψpK– decays: Pcc(4380)+, a rather broad resonance with a width of 200 MeV; and Pcc(4450)+, which is narrower at 40 MeV. The observed decay mode implied a minimal quark content cc̄uud, excluding any conventional interpretation.

These states were hiding in plain sight: they were spotted independently by several LHCb physicists, including a CERN summer student. In a 2019 analysis using more data, the heavier state was identified as the sum of two overlapping pentaquarks, now called Pcc(4440)+ and Pcc(4457)+. Another narrow state was also seen at a mass of 4312 MeV. LHCb observed the first strange pentaquark in B– → J/ψΛp̄ decays in 2022, with a quark content cc̄uds.

Other manifestly exotic hadrons followed, with the Tcccc(6600) and Tcccc(6900) observed by LHCb, CMS and ATLAS in the J/ψJ/ψ spectrum. They can be interpreted as tetraquarks made of two charm and two anti-charm quarks – fully charmed tetraquarks. When both J/ψ mesons decay to a muon pair, the final state consists of four muons, allowing the LHCb, ATLAS and CMS experiments to study the final spectrum in multiple acceptance regions and transverse-momentum ranges. These states do not contain any light quarks, which eases their theoretical study and also implies an analogous state with four bottom quarks that could be long-lived.

Doubly charming

The world’s first double-open-charm meson was discovered by LHCb in 2021: the Tcc(3875)+. With a charm of two, it cannot be accommodated in the conventional qq̄ scheme. There is an intriguing similarity between the exotic Tcc(3875)+ (ccūd̄) and the charmonium-like (cc̄-like) χc1(3872) meson discovered by Belle in 2003, whose nature is still controversial. Both have similar masses and remarkably narrow widths. The jury is still out on their interpretation (see “Inside pentaquarks and tetraquarks“).

The discovery of the Tcc(3875)+ (ccūd̄) meson also implies the existence of a Tbb state, with a bbūd̄ quark content, that should be stable except with regard to weak decays. The observation of this first long-lived exotic state, with a sizable flight distance, is an intriguing goal for future experiments. At the HL-LHC, the search for Bc+ mesons displaced from the interaction point could return the first evidence for a Tbb tetraquark, given that the decays of weakly decaying double-beauty hadrons such as Ξbbq and Tbb are their only known sources.

Round resonances


Particles are most likely to be created in collisions when the centre-of-mass energy matches their mass. The longer the mean lifetime of the new particle, the greater the uncertainty on its decay time and, via Heisenberg’s uncertainty principle, the smaller the uncertainty on its energy. Such particles have narrow peaks in their energy spectra. Fast-decaying particles have broad peaks. Searching for such “resonances” can reveal new particles – but bumps can be deceiving. A more revealing analysis fits differential decay rates to measure the complex quantum amplitude A(s) describing the production of the particle. As the energy (√s) increases, the amplitude traces a circle counterclockwise in the complex plane, with the magnitude of the amplitude tracing the classic resonant peak observed in energy spectra (see figure above left).
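For a resonance of mass m and width Γ, this behaviour is captured by the standard Breit–Wigner amplitude; as √s rises through m, the phase of A(s) sweeps from 0 to π, tracing the counterclockwise circle, while |A|² traces the peak:

```latex
A(s) \propto \frac{-m\Gamma}{s - m^2 + i\,m\Gamma}, \qquad
|A(s)|^2 \propto \frac{m^2\Gamma^2}{(s - m^2)^2 + m^2\Gamma^2}
```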

Demonstrating this behaviour, as LHCb did in 2021 for the Tccs1(4000)+ meson (above, centre), is a significant experimental achievement, which the collaboration also performed in 2018 for the pathfinding Z(4430)+ (Tcc1(4430)+) meson discovered by Belle in 2007 (black points, above right). The LHCb measurement confirmed its resonant character and resolved any controversy over whether it was a true exotic state. The simulated measurement (blue points) illustrates the improvement such measurements stand to accrue with upgraded detectors and increased statistics at the HL-LHC.

There are also other exotic states predicted by QCD that are still missing in the particle zoo, such as meson–gluon hybrids and glueballs. Hybrid mesons could be identified by exotic spin-parity (JP) quantum numbers not allowed in the qq scheme. Glueballs could be observed in gluon-enriched heavy-ion collisions. A potential candidate has recently been observed by the BESIII collaboration, which is another major player in exotic spectroscopy.

Exotic hadrons might even have been observed in the light-quark sector without having been searched for. The scalar mesons are too numerous to fit in the conventional quark model, and some of them, for instance the f0(980) and a0(980) mesons, might be tetraquarks. Exotic light pentaquarks may also exist. Twenty years ago, the Θ+ baryon caused quite some excitement, being apparently openly exotic, with a positive strangeness and a minimal quark content uudds̄. No fewer than 10 different experiments presented evidence for it, including several quoting 5σ significance, before it disappeared in blind analyses of larger data samples with better background subtraction (CERN Courier April 2004 p29). Its story is now material for historians of science, but its interpretation triggered many theory papers that are still useful today.

The challenge of understanding how quarks are bound inside exotic hadrons is the greatest outstanding question in hadron spectroscopy. Models include a cloud of light quarks and gluons bound to a heavy qq̄ core by van-der-Waals-like forces (hadro-quarkonium); colour-singlet hadrons bound by residual nuclear forces (hadronic molecules); and compact tetraquarks [qq][q̄q̄] and pentaquarks [qq][qq]q̄ composed of diquarks [qq] and antidiquarks [q̄q̄], which masquerade as antiquarks and quarks, respectively.

The LHCb experiment at CERN

Some exotic hadrons may also have been misinterpreted as resonant states when they are actually “threshold cusps” – enhancements caused by rescattering. For instance, the Pcc(4457)+ pentaquark seen in Λ0b → J/ψpK– decays could in fact be rescattering between the D̄0 and Λc(2595)+ decay products in Λ0b → Λc(2595)+D̄0K–, which exchange a charm quark to form a J/ψp system. This hypothesis can be tested by searching for additional decay modes and isospin partners, or via detailed amplitude analyses – a process already completed for many of the aforementioned states, but not yet all.

Establishing the nature of the exotic hadrons will be challenging, and a comprehensive organisation of exotic hadrons into flavour multiplets is still missing. Establishing whether exotic hadrons obey the same flavour symmetries as conventional hadrons will be an important step forward in understanding their composition.

Effective predictions

The dynamics of quarks and gluons can be described perturbatively in hard processes thanks to the smallness of the strong coupling constant at short distances, but the spectrum of stable hadrons is affected by non-perturbative effects and cannot be computed directly from the fundamental theory. Though lattice QCD attempts this by discretising space–time on a cubic lattice, the calculations are time-consuming and limited in precision by computational power. Predictions therefore rely on approximate analytical methods such as effective field theories.

The challenge of understanding how quarks are bound inside exotic hadrons is the greatest outstanding question in hadron spectroscopy

Hadron physics is therefore driven by empirical data, and hadron spectroscopy plays a pivotal role in testing the predictions of lattice QCD, which is itself an increasingly important tool in precision electroweak physics and searches for physics beyond the Standard Model.

Like Mendeleev and Gell-Mann, we are at the beginning of a new field, in the taxonomy stage, discovering, studying and classifying exotic hadrons. The deeper challenge is to explain and anticipate them. Though the underlying principles are fully known, we are still far from being able to do the chemistry of quantum chromodynamics.

Inside pentaquarks and tetraquarks

Strange pentaquarks

Breakthroughs are like London buses. You wait a long time, and three turn up at once. In 1963 and 1964, Murray Gell-Mann, André Petermann and George Zweig independently developed the concept of quarks (q) and antiquarks (q̄) as the fundamental constituents of the observed bestiary of mesons (qq̄) and baryons (qqq).

But other states were allowed too. Additional qq̄ pairs could be added at will, to create tetraquarks (qq̄qq̄), pentaquarks (qqqqq̄) and other states besides. In the 1970s, Robert L Jaffe carried out the first explicit calculations of multiquark states, based on the framework of the MIT bag model. Under the auspices of the new theory of quantum chromodynamics (QCD), this computationally simplified model ignored gluon interactions and considered quarks to be free, though confined in a bag with a steep potential at its boundary. These and other early theoretical efforts triggered many experimental searches, but no clear-cut results.

New regimes

Evidence for such states took nearly two decades to emerge. The essential precursors were the discovery of the charm quark (c) at SLAC and BNL in the November Revolution of 1974, some 50 years ago (p41), and the discovery of the bottom quark (b) at Fermilab three years later. The masses and lifetimes of these heavy quarks allowed experiments to probe new regimes in parameter space where otherwise inexplicable bumps in energy spectra could be resolved (see “Heavy breakthroughs” panel).

Heavy breakthroughs

Double hidden charm

With the benefit of hindsight, it is clear why early experimental efforts did not find irrefutable evidence for multiquark states. For a multiquark state to be clearly identifiable, it is not enough to form a multiquark colour-singlet (a mixture of colourless red–green–blue, red–antired, green–antigreen and blue–antiblue components). Such a state also needs to be narrow and long-lived enough to stand out on top of the experimental background, and has to have distinct decay modes that cannot be explained by the decay of a conventional hadron. Multiquark states containing only light quarks (up, down and strange) typically have many open decay channels, with a large phase space, so they tend to be wide and short-lived. Moreover, they share these decay channels with excited states of conventional hadrons and mix with them, so they are extremely difficult to pin down.

Multiquark states with at least one heavy quark are very different. Once quarks are “dressed” by gluons, they acquire effective masses of the order of several hundred MeV, with all quarks coupling in the same way to gluons. For light quarks, the bare quark masses are negligible compared to the effective mass, and can be neglected to zeroth order. But for heavy quarks (c or b), the ratio of the bare quark mass to the effective mass dramatically affects the dynamics and the experimental situation, creating narrow multiquark states that stand out. These states were not seen in the early searches simply because the relevant production cross sections are very small and particle identification requires very high spatial resolution. These features became accessible only with the advent of the huge luminosity and the superb spatial resolution provided by vertex detectors in bottom and charm factories such as BaBar, Belle, BESIII and LHCb.

The attraction between two heavy quarks scales like αs²mq, where αs is the strong coupling constant and mq is the mass of the quarks. This is because the Coulomb-like part of the QCD potential dominates, scaling as –αs/r as a function of distance r, and yielding an analogue of the Bohr radius ~1/(αsmq). Thus, the interaction grows approximately linearly with the heavy-quark mass. In at least one case (discussed below), the highly anticipated but as yet undiscovered bbūd̄ tetraquark Tbb is expected to result in a state with a mass that is below the two-meson threshold, and therefore stable under strong interactions.
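The scaling can be made explicit with a hydrogen-like estimate; in this rough sketch colour factors and the running of αs are ignored, and μ = mq/2 is the reduced mass of the two-quark system:

```latex
V(r) \sim -\frac{\alpha_s}{r} \;\Rightarrow\;
r_{\text{Bohr}} \sim \frac{1}{\alpha_s\,\mu}, \qquad
E_{\text{bind}} \sim -\tfrac{1}{2}\,\alpha_s^2\,\mu, \qquad \mu = \frac{m_q}{2}
```

so the binding deepens roughly linearly with the heavy-quark mass, as stated above.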

Exclusively heavy states are also possible. In 2020 and in 2024, respectively, LHCb and CMS discovered the exotic states Tcccc(6900) and Tcccc(6600), which both decay into two J/ψ particles, implying a quark content (cc̄cc̄). The J/ψ does not couple to light quarks, so these states are unlikely to be hadronic molecules bound by light-meson exchange. Though they are too heavy to be the ground state of a (cc̄cc̄) compact tetraquark, they might perhaps be its excitations. Measuring their spin and parity would be very helpful in distinguishing between the various alternatives that have been proposed.

The first unambiguously exotic hadron, the X(3872) (dubbed χc1(3872) in the LHCb collaboration’s new taxonomy; see “What’s in a name?” panel), was discovered at the Belle experiment at KEK in Japan in 2003. Subsequently confirmed by many other experiments, its nature is still controversial. (More of that later.) Since then, there has been a rapidly growing body of experimental evidence for the existence of exotic multiquark hadrons. New states have been discovered at Belle, at the BaBar experiment at SLAC in the US, at the BESIII experiment at IHEP in China, and at the CMS and LHCb experiments at CERN (see “A bestiary of exotic hadrons“). In all cases with robust evidence, the exotic new states contain at least one heavy charm or bottom quark. The majority include two.

The key theoretical question is how the quarks are organised inside these multiquark states. Are they hadronic molecules, with two heavy hadrons bound by the exchange of light mesons? Or are they compact objects with all quarks located within a single confinement volume?

Compact candidate

The compact and molecular interpretations each provide a natural explanation for part of the data, but neither explains all. Both kinds of structures appear in nature, and certain states may be superpositions of compact and molecular states.

In the molecular case the deuteron is a good mental image. (As a bound state of a proton and a neutron, it is technically a molecular hexaquark.) In the compact interpretation, the diquark – an entangled pair of quarks with well-defined spin, colour and flavour quantum numbers – may play a crucial role. Diquarks have curious properties whereby, for example, a strongly correlated red–green pair of quarks can behave like a blue antiquark, opening up intriguing possibilities for the interpretation of qq̄qq̄ and qqqqq̄ states.

Compact states

A clearcut example of a compact structure is the Tbb tetraquark with quark content bbūd̄. Tbb has not yet been observed experimentally, but its existence is supported by robust theoretical evidence from several complementary approaches. As for any ground-state hadron, its mass is given to a good approximation by the sum of its constituent quark masses and their (negative) binding energy. The constituent masses implied here are effective masses that also include the quarks’ kinetic energies. The binding energy is negative as it was released when the compact state formed.

In the case of Tbb, the binding energy is expected to be so large that its mass is below all two-meson decay channels: it can only decay weakly, and must be stable with respect to the strong interaction. No such exotic hadron has yet been discovered, making Tbb a highly prized target for experimentalists. Such a large binding energy cannot be generated by meson exchange and must be due to colour forces between the very heavy b quarks. Tbb is an isoscalar with JP = 1+. Its charmed analogue, Tcc = (ccūd̄), also known as Tcc(3875)+, was observed by LHCb in 2021 to be a whisker away from stability, with a very small binding energy and a width less than 1 MeV (CERN Courier September/October 2021 p7). The big difference between the binding energies of Tbb and Tcc, which makes the former stable and the latter unstable, is due to the substantially greater mass of the b quark than the c quark, as discussed in the panel above. An intermediate case, Tbc = (bcūd̄), is very likely also below threshold for strong decay and therefore stable. It is also easier to produce and detect than Tbb and is therefore extremely tempting experimentally.

Molecular pentaquarks

At the other extreme, we have states that are most probably pure hadronic molecules. The most conspicuous examples are the Pc(4312), Pc(4440) and Pc(4457) pentaquarks discovered by LHCb in 2019, and labelled according to the convention adopted by the Particle Data Group as Pcc(4312)+, Pcc(4440)+ and Pcc(4457)+. All three have quark content (cc̄uud) and decay into J/ψp, with an energy release of order 300 MeV. Yet, despite having such a large phase space, all three have anomalously narrow widths of less than about 10 MeV. Put more simply, the pentaquarks decay remarkably slowly, given how much energy stands to be released.

But why should long life count against the pentaquarks being tightly bound and compact? In a compact (cc̄uud) state there is nothing to prevent the charm quark from binding with the anticharm quark, hadronising as J/ψ and leaving behind a (uud) proton. Such a state would decay immediately, with a large width.

Anomalously narrow

On the other hand, hadronic molecules such as ΣcD̄ and ΣcD̄* automatically provide a decay-suppression mechanism. Hadronic molecules are typically large, so the c quark inside the Σc baryon is typically far from the c̄ quark inside the D̄ or D̄* meson. Because of this, the formation of J/ψ = (cc̄) has a low probability, resulting in a long lifetime and a narrow width. (Unstable particles decay randomly within fixed half-lives. According to Heisenberg’s uncertainty principle, this uncertainty on their lifetime yields a reciprocal uncertainty on their energy, which may be directly observed as the width of the peak in the spectrum of their measured masses when they are created in particle collisions. Long-lived particles exhibit sharply spiked peaks, and short-lived particles exhibit broad peaks. Though the lifetimes of strongly interacting particles are usually not measurable directly, they may be inferred from these “widths”, which are measured in units of energy.)
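The reciprocity invoked here is simply the uncertainty relation Γτ = ħ; in practical units:

```latex
\tau = \frac{\hbar}{\Gamma}, \qquad \hbar \approx 6.58\times 10^{-22}\ \text{MeV\,s}
\;\Rightarrow\; \Gamma = 1\ \text{MeV} \;\leftrightarrow\; \tau \approx 6.6\times 10^{-22}\ \text{s}
```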

Additional evidence in favour of their molecular nature comes from the mass of Pc(4312) being just below the ΣcD̄ production threshold, and the masses of Pc(4440) and Pc(4457) being just below the ΣcD̄* production threshold. This is perfectly natural. Hadronic molecules are weakly bound, so they typically only form an S-wave bound state, with no orbital angular momentum. So ΣcD̄, which combines a spin-1/2 baryon and a spin-0 negative-parity meson, can only form a single state, with JP = 1/2–. By contrast, ΣcD̄*, which combines a spin-1/2 baryon and a spin-1 negative-parity meson, can form two closely spaced states with JP = 1/2– and 3/2–, with a small splitting coming from a spin–spin interaction.
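The counting is standard quantum-number arithmetic: in an S-wave the parities multiply and the spins add, so

```latex
\Sigma_c\left(\tfrac{1}{2}^{+}\right) \otimes \bar{D}\left(0^{-}\right) \to J^P = \tfrac{1}{2}^{-}, \qquad
\Sigma_c\left(\tfrac{1}{2}^{+}\right) \otimes \bar{D}^{*}\left(1^{-}\right) \to J^P = \tfrac{1}{2}^{-} \text{ or } \tfrac{3}{2}^{-}
```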

An example of a possible mixture of a compact state and a hadronic molecule is provided by the X(3872) meson

The robust prediction of the JP quantum numbers makes it very straightforward in principle to kill this physical picture, if one were to measure JP values different from these. Conversely, measuring the predicted values of JP would provide a strong confirmation (see “The 23 exotic hadrons discovered at the LHC” table).

These predictions have already received substantial indirect support from the strange-pentaquark sector. The spin-parity of the Pccs(4338), which also has a narrow width below 10 MeV, has been determined by LHCb to be 1/2–, exactly as expected for a ΞcD̄ molecule (see “Strange pentaquarks” figure).

The mysterious X(3872)

An example of a possible mixture of a compact state and a hadronic molecule is provided by the already mentioned X(3872) meson. Its mass is so close to the sum of the masses of a D0 meson and a D̄*0 meson that no difference has yet been established with statistical significance, but it is known to be less than about 1 MeV. It can decay to J/ψπ+π– with a branching ratio of (3.5 ± 0.9)%, releasing almost 500 MeV of energy. Yet its width is only of order 1 MeV. This is an even more striking case of relative stability in the face of naively expected instability than for the pentaquarks. At first sight, then, it is tempting to identify X(3872) as a clearcut D0D̄*0 hadronic molecule.

Particle precision

The situation is not that simple, however. If X(3872) is just a weakly bound hadronic molecule, it is expected to be very large, on the scale of a few fermi (10⁻¹⁵ m). It should therefore be very difficult to produce in hard reactions requiring a large momentum transfer. Yet this is not the case. A possible resolution might come from X(3872) being a mixture of a D0D̄*0 molecular state and χc1(2P), a conventional radial excitation of P-wave charmonium, which is much more compact and is expected to have a similar mass and the same JPC = 1++ quantum numbers. Additional evidence in favour of such a mixing comes from comparing the rates of the radiative decays X(3872) → J/ψγ and X(3872) → ψ(2S)γ.

The question associated with exotic mesons and baryons can be posed crisply: is an observed state a molecule, a compact multiquark system or something in between? We have given examples of each. Definitive compact-multiquark behaviour can be confirmed if a state’s flavour-SU(3) partners are identified. This is because compact states are bound by colour forces, which are only weakly sensitive to flavour-SU(3) rotations. (Such rotations exchange up, down and strange quarks, and to a good approximation the strong force treats these light flavours equally at the energies of charmed and beautiful exotic hadrons.) For example, if X(3872) should in fact prove to be a compact tetraquark, it should have charged isospin partners that have not yet been observed.

On the experimental front, the sensitivities of LHCb, Belle II, BESIII, CMS and ATLAS continue to reap great benefits for hadron spectroscopy. Together with the proposed super τ-charm factory in China, they are virtually guaranteed to discover additional exotic hadrons, expanding our understanding of QCD in its strongly interacting regime.

Data analysis in the age of AI

Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September at Imperial College London for PHYSTAT’s Statistics meets Machine Learning workshop. The goal of the meeting, which is part of the PHYSTAT series, was to discuss recent developments in machine learning (ML) and their impact on the statistical data-analysis techniques used in particle physics and astronomy.

Particle-physics experiments typically produce large amounts of highly complex data. Extracting information about the properties of fundamental physics interactions from these data is a non-trivial task. The general availability of simulation frameworks makes it relatively straightforward to model the forward process of data analysis: to go from an analytically formulated theory of nature to a sample of simulated events that describe the observation of that theory for a given particle collider and detector in minute detail. The inverse process – to infer from a set of observed data what is learned about a theory – is much harder as the predictions at the detector level are only available as “point clouds” of simulated events, rather than as the analytically formulated distributions that are needed by most statistical-inference methods.

Traditionally, statistical techniques have found a variety of ways to deal with this problem, mostly centred on simplifying the data via summary statistics that can be modelled empirically in an analytical form. A wide range of ML algorithms, ranging from neural networks to boosted decision trees trained to classify events as signal- or background-like, have been used in the past 25 years to construct such summary statistics.

The broader field of ML has experienced a very rapid development in recent years, moving from relatively straightforward models capable of describing a handful of observable quantities, to neural models with advanced architectures such as normalising flows, diffusion models and transformers. These boast millions to billions of parameters that are potentially capable of describing hundreds to thousands of observables – and can now extract features from the data with an order-of-magnitude better performance than traditional approaches. 

New generation

These advances are driven by newly available computation strategies that calculate not only the learned functions, but also their analytical derivatives with respect to all model parameters, greatly speeding up training times, in particular in combination with modern computing hardware with graphics processing units (GPUs) that facilitate massively parallel calculations. This new generation of ML models offers great potential for novel uses in physics data analyses, but has not yet found its way into the mainstream of published physics results on a large scale. Nevertheless, significant progress has been made in the particle-physics community in learning the technology needed, and many new developments using this technology were shown at the workshop.

This new generation of machine-learning models offers great potential for novel uses in physics data analyses

Many of these ML developments showcase the ability of modern ML architectures to learn multidimensional distributions from point-cloud training samples to a very good approximation, even when the number of dimensions is large, for example between 20 and 100. 

A prime use-case of such ML models is an emerging statistical analysis strategy known as simulation-based inference (SBI), where learned approximations of the probability density of signal and background over the full high-dimensional observables space are used, dispensing with the notion of summary statistics to simplify the data. Many examples were shown at the workshop, with applications ranging from particle physics to astronomy, pointing to significant improvements in sensitivity. Work is ongoing on procedures to model systematic uncertainties, and no published results in particle physics exist to date. Examples from astronomy showed that SBI can give results of comparable precision to the default Markov chain Monte Carlo approach for Bayesian computations, but with orders of magnitude faster computation times.
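The core of classifier-based SBI can be illustrated in a few lines. The sketch below is illustrative only: the toy forward model, names and settings are assumptions for this example, not code from any published analysis. It uses the well-known likelihood-ratio trick, in which a classifier trained to separate events simulated at two parameter points approximates their per-event likelihood ratio, with no analytical density ever written down.

```python
# Sketch of the likelihood-ratio trick used in simulation-based inference.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Toy forward model: a point cloud of 2D detector-level observables."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 2))

theta0, theta1 = 0.0, 0.5            # two hypotheses to compare
n_train = 50_000
x = np.vstack([simulate(theta0, n_train), simulate(theta1, n_train)])
y = np.hstack([np.zeros(n_train), np.ones(n_train)])

# Train a classifier to separate the two simulated point clouds.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200).fit(x, y)

# For balanced training samples, s(x) approximates p(theta1 | x), so the
# per-event likelihood ratio is r(x) = p(x|theta1)/p(x|theta0) = s/(1-s).
x_obs = simulate(theta1, 1_000)      # stand-in for observed data
s = np.clip(clf.predict_proba(x_obs)[:, 1], 1e-6, 1 - 1e-6)
print(f"summed log likelihood ratio: {np.log(s / (1 - s)).sum():.1f}")
```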

Beyond binning

A commonly used alternative to full theory-parameter inference from observed data is known as deconvolution or unfolding. Here the goal is to publish intermediate results in a form where the detector response has been taken out, while stopping short of interpreting the result in a particular theory framework. The classical approach to unfolding requires estimating a response matrix that captures the smearing effect of the detector on a particular observable, and applying its inverse to obtain an estimate of the theory-level distribution. This approach is challenging and limited in scope, however, as the inversion is numerically unstable and requires a low-dimensional binning of the data. Results on several ML-based approaches were presented, which either learn the response matrix by modelling distributions outright (the generative approach) or learn classifiers that reweight simulated samples (the discriminative approach). Both approaches show very promising results that do not share the binning and dimensionality limitations of the classical response-inversion approach.
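The instability that motivates these ML approaches is easy to demonstrate numerically. The toy below is an illustrative sketch with invented numbers, not any experiment's code: it builds a modest bin-to-bin smearing matrix, adds Poisson noise, and inverts, whereupon the unfolded spectrum fluctuates around the truth far more than the statistical noise alone would suggest, with anticorrelated neighbouring bins.

```python
# Toy demonstration of why naive response-matrix inversion is unstable.
import numpy as np

rng = np.random.default_rng(1)
n = 10
truth = 10_000 * np.exp(-0.5 * np.arange(n))     # falling true spectrum

# Response matrix: each true bin migrates 25% into each neighbouring bin.
R = 0.5 * np.eye(n)
for i in range(n - 1):
    R[i, i + 1] = R[i + 1, i] = 0.25

observed = rng.poisson(R @ truth).astype(float)  # smear, then add noise
unfolded = np.linalg.solve(R, observed)          # naive inversion

for t, u in zip(truth, unfolded):
    print(f"truth {t:9.1f}   unfolded {u:9.1f}")
```

Regularised unfolding, and the generative or discriminative ML methods above, trade a little bias for the removal of these oscillations.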

A third domain where ML is facilitating great progress is that of anomaly searches, where an anomaly can either be a single observation that doesn’t fit the distribution (mostly in astronomy), or a collection of events that together don’t fit the distribution (mostly in particle physics). Several analyses highlighted both the power of ML models in such searches and the bounds from statistical theory: it is impossible to optimise sensitivity for single-event anomalies without knowing the outlier distribution, and unsupervised anomaly detectors require a semi-supervised statistical model to interpret ensembles of outliers.

A final application of machine-learned distributions that was much discussed is data augmentation – sampling a new, larger data sample from a learned distribution. If the synthetic data sample is significantly larger than the training sample, its nominal statistical power will be greater, but that power derives from the smooth interpolation of the model, potentially generating so-called inductive bias. The validity of the assumed smoothness depends on its realism in a particular setting, for which there is no generic validation strategy. The use of a generative model therefore amounts to a trade-off between bias and variance.

Interpretable and explainable

Beyond the various novel applications of ML, there were lively discussions on the more fundamental aspects of artificial intelligence (AI), notably on the notion of and need for AI to be interpretable or explainable. Explainable AI aims to elucidate what input information was used, and its relative importance, but this goal has no unambiguous definition. The discussion on the need for explainability centres to a large extent on trust: would you trust a discovery if it is unclear what information the model used and how it was used? Can you convince peers of the validity of your result? The notion of interpretable AI goes beyond that. It is an often-desired quality by scientists, as human knowledge resulting from AI-based science is generally desired to be interpretable, for example in the form of theories based on symmetries, or structures that are simple, or “low-rank”. However, interpretability has no formal criteria, which makes it an impractical requirement. Beyond practicality, there is also a fundamental point: why should nature be simple? Why should models that describe it be restricted to being interpretable? The almost philosophical nature of this question made the discussion on interpretability one of the liveliest ones in the workshop, but for now without conclusion.

Human knowledge resulting from AI-based science is generally desired to be interpretable

For the longer-term future there are several interesting developments in the pipeline. In the design and training of new neural models, two techniques were shown to have great promise. The first is the concept of foundation models: very large models that are pre-trained on very large datasets to learn generic features of the data. When these pre-trained generic models are retrained to perform a specific task, they are shown to outperform purpose-trained models for that same task. The second is encoding domain knowledge in the network: networks that have known symmetry principles encoded in the model can significantly outperform models that are generically trained on the same data.

The evaluation of systematic effects is still mostly taken care of in the statistical post-processing step. Future ML techniques may more fully integrate systematic uncertainties, for example by reducing the sensitivity to these uncertainties through adversarial training or pivoting methods. Beyond that, future methods may also integrate the currently separate step of propagating systematic uncertainties (“learning the profiling”) into the training of the procedure. A truly global end-to-end optimisation of the full analysis chain may ultimately become feasible and computationally tractable for models that provide analytical derivatives.

Inside pyramids, underneath glaciers

Muon radiography – muography for short – uses cosmic-ray muons to probe and image large, dense objects. Coordinated by editors Paola Scampoli and Akitaka Ariga of the University of Bern, the authors of this book provide an invaluable snapshot of this booming research area. From muon detectors, which differ significantly from those used in fundamental physics research, to applications of muography in scientific, cultural, industrial and societal scenarios, a broad cross section of experts describe the physical principles that underpin modern muography.

Hiroyuki Tanaka of the University of Tokyo begins the book with historical developments and perspectives. He guides readers from the first documented use of cosmic-ray muons in 1955 for rock overburden estimation, to current studies of the sea-level dynamics in Tokyo Bay using muon detectors laid on the seafloor and visionary ideas to bring muography to other planets using teleguided rovers.

Scattering methods

Tanaka limits his discussion to the muon-absorption approach to muography, which images an object by comparing the muon flux before and after – or with and without – an object. The muon-scattering approach, invented two decades ago, instead exploits the deflection of muons passing through matter due to electromagnetic interactions with nuclei. The interested reader will find several examples of the application of muon scattering in other chapters, particularly that on civil and industrial applications by Davide Pagano (Pavia) and Altea Lorenzon (Padova). Scattering methods have an edge in these fields thanks to their sensitivity to the atomic number of the materials under investigation.

Cosmic Ray Muography

Peter Grieder (Bern), who sadly passed away shortly before the publication of the book, gives an excellent and concise introduction to the physics of cosmic rays, which Paolo Checchia (Padova) expands on, delving into the physics of interactions between muons and matter. Akira Nishio (Nagoya University) describes the history and physical principles of nuclear emulsions. These detectors played an important role in the history of particle physics, but are not very popular now as they cannot provide real-time information. Though modern detectors are a more common choice today, nuclear emulsions still find a niche in muography thanks to their portability. The large accumulation of data from muography experiments requires automatic analysis, for which dedicated scanning systems have been developed. Nishio includes a long and insightful discussion on how the nuclear-emulsions community reacted to supply-chain evolution. The transition from analogue to digital cameras meant that most film-producing firms changed their core business or simply disappeared, and researchers had to take a large part of the production process into their own hands.

Fabio Ambrosino and Giulio Saracino of INFN Napoli next take on the task of providing an overview of the much broader and more popular category of real-time detectors, such as those commonly used in experiments at particle colliders. Elaborating on the requirements set by the cosmic rate and environmental factors, their chapter explains why scintillator and gas-based tracking devices are the most popular options in muography. They also touch on more exotic detector options, including Cherenkov telescopes and cylindrical tracking detectors that fit in boreholes.

In spite of their superficial similarity, methods that are common in X-ray imaging need quite a lot of ingenuity to be adapted to the context of muography. For example, the source cannot be controlled in muography, and is not monochromatic. Both energy and direction are random and have a very broad distribution, and one cannot afford to take data from more than a few viewpoints. Shogo Nagahara and Seigo Miyamoto of the University of Tokyo provide a specialised but intriguing insight into 3D image reconstruction using filtered back-projection.

A broad cross section of experts describe the physical principles that underpin modern muography

Geoscience is among the most mature applications of muography. While Jacques Marteau (Claude Bernard University Lyon 1) provides a broad overview of decades of activities spanning from volcano studies to the exploration of natural caves, Ryuichi Nishiyama (Tokyo) explores recent studies where muography provided unique data on the shape of the bedrock underneath two major glaciers in the Swiss Alps.

One of the greatest successes of muography is the study of pyramids, which is given ample space in the chapter on archaeology by Kunihiro Morishima (Nagoya). In 1971, Nobel-laureate Luis Alvarez’s team pioneered the use of muography in archaeology during an investigation at the pyramid of Khafre in Giza, Egypt, motivated by his hunch that an unknown large chamber could be hiding in the pyramid. Their data convincingly excluded that possibility, but the attempt can be regarded as launching modern muography (CERN Courier May/June 2023 p32). Half a century later, muography was reintroduced to the exploration of Egyptian pyramids thanks to ScanPyramids – an international project led by particle-physics teams in France and Japan under the supervision of the Heritage Innovation and Preservation Institute. ScanPyramids aims at systematically surveying all of the main pyramids in the Giza complex, and recently made headlines by finding a previously unknown corridor-shaped cavity in Khufu’s Great Pyramid, which is the second largest pyramid in the world. To support the claim, which was initially based on muography alone, the finding was cross-checked with the more traditional surveying method based on ground penetrating radar, and finally confirmed via visual inspection through an endoscope.

Pedagogical focus

This book is a precious resource for anyone approaching muography, from students to senior scientists, and for potential practitioners from both academic and industrial communities. Other excellent books on the topic have already been published, showcasing original research, but Cosmic Ray Muography’s pedagogical focus, which prioritises the explanation of timeless first principles, will not become outdated any time soon. Given that each chapter was written independently, there is a certain degree of overlap and some inconsistency in terminology, but this gives the reader valuable exposure to different perspectives on what matters most in this type of research.

A rich harvest of results in Prague

The 42nd International Conference on High-Energy Physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the next long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data are of immense importance, as they will form the dataset that the experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs-boson and electroweak di-boson production, though most of the LHC results were of course based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This has also allowed ATLAS and CMS to bring in-depth improvements to their reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvement brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, at both the trigger and reconstruction levels. A striking example is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are di-Higgs-boson production, which is fundamental for the measurement of the Higgs-boson self-coupling (CERN Courier July/August 2024 p7), and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle, with per-mille precision approaching that of the LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant, extracted from the transverse-momentum differential cross-section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0μ+μ– angular distributions, which had previously shown discrepancies at the level of 3σ; taking long-distance contributions into account weakens the tension significantly, down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses, currently an obstacle to further progress towards higher luminosities. Possible causes have been identified and are under investigation. Meanwhile, with the substantial dataset collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the angle γ, Belle II (alongside BaBar) presented interesting new insights into the long-standing puzzle of inclusive versus exclusive |Vcb| and |Vub| measurements (CERN Courier July/August 2024 p30), with new exclusive |Vcb| measurements that significantly reduce the previous 3σ tension.

Maurizio Pierini

ATLAS and CMS furthered their systematic journey in the search for new phenomena, leaving no stone unturned at the energy frontier, with 20 new results presented at the conference. The continued absence of significant deviations – a landmark outcome of the LHC – puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. The accelerator-based experiments NOvA and T2K presented a first combined measurement of the neutrino mass difference, mixing and CP-violation parameters. The neutrino telescopes IceCube (with DeepCore) and KM3NeT (with ORCA, Oscillation Research with Cosmics in the Abyss) also presented results of impressive precision. Neutrino physics is now at the dawn of a bright new era of precision, with the next-generation accelerator-based long-baseline experiments DUNE and Hyper-Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium-baseline JUNO experiment. These experiments should bring definitive conclusions on the measurement of the CP phase in the neutrino sector and on the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards its ultimate sensitivity of 0.2 eV. The neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV, enough to either fully exclude the inverted-ordering hypothesis or discover this long-sought process. Progress on the reactor-neutrino anomaly was also reported: recent fission data suggest that the predicted fluxes are overestimated, weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct dark-matter searches, as the XENON collaboration announced the observation of nuclear-recoil events from coherent elastic scattering of ⁸B solar neutrinos on nuclei, signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out at the conference by Kathryn Zurek, which covers an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontiers, direct searches at low masses to cover relic-abundance-motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.

Lia Merminga and Fabiola Gianotti

Neutrinos also made the headlines in multi-messenger astrophysics, with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever observed. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, through beautiful neutrino, gamma-ray and gravitational-wave results.

The conference also saw new bridges being built across fields. The birth of collider-neutrino physics, with beautiful results from FASERν and SND@LHC, fills the gap in neutrino–nucleon cross-sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on ³He production that complement the AMS results; astrophysical ³He could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which bring splendid new baryon-acoustic-oscillation measurements. These new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI data also set an upper limit on the sum of neutrino masses of 0.072 eV, in tension with the inverted ordering of neutrino masses, though this limit depends on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

An obligation to engage

Science is for everyone, and everyone depends on science, so why not bring more of it to society? That was the idea behind the CERN & Society Foundation, established 10 years ago.

The longer I work in science, and the more people I talk to about science, the more I become convinced that everyone is interested in science, whether they realise it or not. Many people have emerged from their school education believing that science is hard and not for them, yet they ask the very same questions that those at the cutting edge of fundamental physics research ask, and that people have been asking since time immemorial: what is the universe made of, where did we come from and where are we going? Such curiosity is part of what it is to be human. On a more prosaic level, science and technology play an ever-growing role in modern society, and it is incumbent on all of us to understand their consequences and to engage in the debate about their uses.

The power to inspire

When I tell people about CERN, more often than not their eyes light up with excitement and they want to know more. Experiences like this show that the scientific community needs to do all it can to engage with society at large in a fast-changing world. We need to bring people closer to an understanding of science, of how science works and why critical evidence-based thinking is vital in every walk of life, not only in science.

Laboratories like CERN are extraordinary places where people from all over the world come together to explore nature’s mysteries. I believe that when we come together like this, we have the power to inspire and an obligation to use this power to address the critical challenge of public engagement in science and technology. CERN has always taken this responsibility seriously. Ten years ago, it added a new string to its bow in the form of the CERN & Society Foundation. Through philanthropy, the foundation spreads CERN’s spirit of scientific curiosity.

Rolf-Dieter Heuer

The CERN & Society Foundation helps the laboratory to deepen its impact beyond the core mission of fundamental physics research. Projects supported by the foundation encourage talented young people from around the globe to follow STEM careers, catalyse innovation for the benefit of all, and inspire wide and diverse audiences. From training high-school teachers to producing medical isotopes, donors’ generosity brings research excellence to all corners of society.

The foundation’s work rests on three pillars: education and outreach, innovation and knowledge exchange, and culture and creativity. Allow me to highlight one example from each pillar that I particularly like.

One of the flagships of the education and outreach pillar is the Beamline for Schools (BL4S) competition. Launched in 2014, BL4S invites groups of high-school students from around the world to submit a proposal for an experiment at CERN. The winning teams are invited to CERN to carry out their experiment under expert supervision from CERN scientists. More recently, the DESY laboratory has joined the programme and also welcomes high-school groups to work on a beamline there. Project proposals have ranged from fundamental physics to applied ideas, such as enabling cosmic-ray tomography of the pyramids by measuring muon transmission through limestone (see "Inside pyramids, underneath glaciers"). To date, some 20,000 students have taken part in the competition, with 25 winning teams coming to CERN or DESY to carry out their experiments (see "From blackboard to beamline").

Zenodo is a great example of the innovation and knowledge-exchange pillar. It provides a repository for free and easy access to research results, data and analysis code, thereby promoting the ideal of open science, which is at the very heart of scientific progress. Zenodo taps into CERN’s long-standing tradition and know-how in sharing and preserving scientific knowledge for the benefit of all. The scientific community can now store data in a non-commercial environment, freely available for society at large. Zenodo goes far beyond high-energy physics and played an important role during the COVID-19 pandemic.

Mutual inspiration

Our flagship culture-and-creativity initiative is the world-leading Arts at CERN programme, which recognises the creativity inherent in both the arts and the sciences, and harnesses them to generate benefits for both. Participating artists and scientists find mutual inspiration, going on to inspire audiences around the world.

“In an era where society needs science more than ever, inspiring new generations to believe in their dreams and giving them the tools and space to change the world is essential,” said one donor recently. It is encouraging to hear such sentiments, and there’s no doubt that the CERN & Society Foundation should feel satisfied with its first decade. Through the examples I have cited above, and many more that I have not mentioned, the foundation has made a tangible difference. It is, however, but one voice. Scientists and scientific organisations in prominent positions should take inspiration from the foundation: the world needs more ambassadors for science. On that note, all that remains is for me to say happy birthday, CERN & Society Foundation.

Combining clues from the Higgs boson

CMS figure 1

Following the discovery of the Higgs boson in 2012, the CMS collaboration has been exploring its properties with ever-increasing precision. Data recorded during LHC Run 2 have been used to measure differential production cross-sections of the Higgs boson in different decay channels – a pair of photons, two Z bosons, two W bosons and two tau leptons – and as functions of different observables. These results have now been combined to provide measurements of spectra at the ultimate achievable precision.

Differential cross-section measurements provide the most model-independent way to study Higgs-boson production at the LHC, for which theoretical predictions exist up to next-to-next-to-next-to-leading order in perturbative QCD. One of the most important observables is the transverse momentum (figure 1), whose distribution is particularly sensitive both to modelling issues in the Standard Model (SM) predictions and to possible contributions from physics beyond the SM (BSM).

In the new CMS result, two frameworks are used to test for hints of BSM: the κ-formalism and effective field theories.

The κ-formalism assumes that new-physics effects only modify the couplings between the Higgs boson and other particles. These effects are parameterised in terms of coupling modifiers, κ, as illustrated below. Using this approach, two-dimensional constraints are set on κc (the coupling modifier of the Higgs boson to the charm quark), κb (Higgs to bottom) and κt (Higgs to top). None show significant deviations from the SM at present.
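Schematically, each production cross-section and partial width is rescaled by the square of the corresponding modifier; the following is the standard textbook form of the framework rather than a formula quoted from the CMS analysis:

\[
\sigma(i \to H \to f) \;=\; \sigma_i^{\mathrm{SM}}\,\mathrm{BR}_f^{\mathrm{SM}}\,\frac{\kappa_i^2\,\kappa_f^2}{\kappa_H^2},
\qquad
\kappa_i^2 = \frac{\sigma_i}{\sigma_i^{\mathrm{SM}}},\quad
\kappa_f^2 = \frac{\Gamma_f}{\Gamma_f^{\mathrm{SM}}},\quad
\kappa_H^2 = \frac{\Gamma_H}{\Gamma_H^{\mathrm{SM}}},
\]

so that setting all κ = 1 reproduces the SM exactly.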

CMS figure 2

Effective field theories parametrise deviations from the SM by supplementing the Lagrangian with higher-dimensional operators and their associated Wilson coefficients (WCs). The effect of the operators is suppressed by powers of the putative new-physics energy scale, Λ. Measurements of WCs that differ from zero may hint at BSM physics.
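In the commonly used SMEFT notation (a general illustration, not the specific operator basis of the CMS fit), the expansion truncated at dimension six reads

\[
\mathcal{L}_{\mathrm{SMEFT}} \;=\; \mathcal{L}_{\mathrm{SM}} \;+\; \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i^{(6)} \;+\; \mathcal{O}\!\left(\frac{1}{\Lambda^4}\right),
\]

where the \(\mathcal{O}_i^{(6)}\) are dimension-six operators built from SM fields and the \(c_i\) are the corresponding Wilson coefficients.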

The CMS differential cross-section measurements are parametrised in terms of the WCs, and constraints are derived from a simultaneous fit. In the most challenging case, a set of 31 WCs is used as input to a principal-component-analysis procedure that identifies the directions to which the data are most sensitive. These directions (expressed as linear combinations of the WCs) are then constrained in a simultaneous fit (figure 2); in the upper panel, the limits on the WCs are converted into lower limits on the new-physics scale. The results agree with the SM predictions, with a moderate 2σ tension in one of the directions (EV5). Here the major contribution comes from the cHq3 coefficient, which mostly affects vector-boson fusion, VH production at high Higgs-boson transverse momenta (V = W, Z) and W-boson decays.
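The principal-component step can be illustrated with a short sketch. Everything here (the Fisher-like sensitivity matrix, its construction from random numbers, the variable names) is a hypothetical stand-in for the actual CMS likelihood machinery:

```python
# Illustrative principal-component analysis over Wilson coefficients (WCs).
# A Fisher-information-like matrix encodes how strongly the measured spectra
# respond to each WC; its eigen-decomposition yields linear combinations
# ordered by experimental sensitivity.
import numpy as np

rng = np.random.default_rng(seed=1)
n_wc = 31                           # number of WCs entering the fit
jac = rng.normal(size=(200, n_wc))  # stand-in for d(prediction)/d(WC) per bin
fisher = jac.T @ jac                # hypothetical sensitivity matrix

eigval, eigvec = np.linalg.eigh(fisher)
order = np.argsort(eigval)[::-1]    # most constrained directions first
directions = eigvec[:, order]       # columns = linear combinations of WCs

# Only the leading combinations are constrained in the fit; trailing
# eigenvalues correspond to directions the data cannot resolve.
print(eigval[order][:5])
```

The fit then quotes limits on the leading combinations, which is why the result is reported per direction (such as EV5) rather than per individual coefficient.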

The combined results not only provide highly precise measurements of Higgs-boson production, but also place stringent constraints on possible deviations from the SM, deepening our understanding while leaving open the possibility of new physics at higher precision or energy scales.

Dignitaries mark CERN’s 70th anniversary

On 1 October a high-level ceremony at CERN marked 70 years of science, innovation and collaboration. In attendance were 38 national delegations, including eight heads of state or government and 13 ministers, along with many scientific, political and economic leaders who demonstrated strong support for CERN’s mission and future ambition. “CERN has become a global hub because it rallied Europe, and this is even more crucial today,” said president of the European Commission Ursula von der Leyen. “China is planning a 100 km collider to challenge CERN’s global leadership. Therefore, I am proud that we have financed the feasibility study for CERN’s Future Circular Collider. As the global science race is on, I want Europe to switch gear.” CERN’s year-long 70th anniversary programme has seen more than 100 events organised in 63 cities in 28 countries, bringing together thousands of people to discuss the wonders and applications of particle physics. “I am very honoured to welcome representatives from our Member and Associate Member States, our Observers and our partners from all over the world on this very special day,” said CERN Director-General Fabiola Gianotti. “CERN is a great success for Europe and its global partners, and our founders would be very proud to see what CERN has accomplished over the seven decades of its life.”
