Invisibles, in sight

Around 150 researchers gathered at CERN from 1 to 5 September to discuss the origin of the observed matter–antimatter asymmetry in the universe, the source of its accelerated expansion, the nature of dark matter and the mechanism behind neutrino masses. The vibrant atmosphere of the annual meeting of the Invisibles research network encouraged lively discussions, particularly among early-career researchers.

Marzia Bordone (University of Zurich) highlighted central questions in flavour physics, such as the tensions in the determinations of quark flavour-mixing parameters and the anomalies in leptonic and semileptonic B-meson decays (CERN Courier January/February 2025 p14). She showed that new bosons beyond the Standard Model that primarily interact with the heaviest quarks are theoretically well motivated and could be responsible for these flavour anomalies. Bordone emphasised that collaboration between experiment and theory, as well as data from future colliders like FCC-ee, will be essential to understand whether these effects are genuine signs of new physics.

Lina Necib (MIT) shared impressive new results on the distribution of galactic dark matter. Though invisible, dark matter interacts gravitationally and is present in all galaxies across the universe. Her team used exquisite data from the ESA Gaia satellite to track stellar trajectories in the Milky Way and determine the local dark-matter distribution to within 20–30% precision – which translates to about 300,000 dark-matter particles per cubic metre, assuming they have a mass similar to that of the proton. This is a huge improvement over what was possible just a decade ago, and will aid experiments in their direct search for dark matter in laboratories worldwide.
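
As a back-of-the-envelope check (the figures here are standard reference values, not numbers quoted in the talk), dividing a commonly cited local dark-matter density of about 0.3 GeV per cubic centimetre by the proton mass reproduces the quoted number density:

\[
n_\chi = \frac{\rho_{\rm local}}{m_\chi} \approx \frac{0.3\ \mathrm{GeV\,cm^{-3}}}{0.94\ \mathrm{GeV}} \approx 0.3\ \mathrm{cm^{-3}} \approx 3\times10^{5}\ \mathrm{m^{-3}}.
\]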

The most quoted dark-matter candidates at Invisibles25 were probably axions: particles once postulated to explain why the strong interactions that bind protons and neutrons behave in the same way for particles and antiparticles. Nicole Righi (King’s College London) discussed how these particles are ubiquitous in string theory. According to Righi, their detection may imply a hot Big Bang, with a rather late thermal stage, or hint at some special feature of the geometry of ultracompact dimensions related to quantum gravity.

The most intriguing talk was perhaps the CERN colloquium given by the 2011 Nobel laureate Adam Riess (Johns Hopkins University). By setting up an impressive system of distance measurements to extragalactic systems, Riess and his team have measured the expansion rate of the universe – the Hubble constant – with per cent accuracy. Their results indicate a value about 10% higher than that inferred from the cosmic microwave background within the standard ΛCDM model, a discrepancy known as the “Hubble tension”. After more than a decade of scrutiny, no single systematic error appears sufficient to account for it, and theoretical explanations remain tightly constrained (CERN Courier March/April 2025 p28). In this regard, Julien Lesgourgues (RWTH Aachen University) pointed out that, despite the thousands of papers written on the Hubble tension, there is no compelling extension of ΛCDM that could truly accommodate it.

While 95% of the universe’s energy density is invisible, the community studying it is very real. Invisibles now has a long history and is based on three innovative training networks funded by the European Union, as well as two Marie Curie exchange networks. The network includes more than 100 researchers and 50 PhD students spread across key beneficiaries in Europe, as well as America, Asia and Africa – CERN being one of their long-term partners. The energy and enthusiasm of the participants at this conference were palpable, as nature continues to offer deep mysteries that the Invisibles community strives to unravel.

Higgs hunters revel in Run 3 data

The 15th Higgs Hunting workshop took place from 15 to 17 July at IJCLab in Orsay and LPNHE in Paris. It offered its roughly 100 participants an opportunity to step back and review the most recent LHC Run 2 and Run 3 Higgs-boson results, together with some of the latest theoretical developments.

One of the highlights concerned the Higgs boson’s coupling to the charm quark, with the CMS collaboration presenting a new search using Higgs production in association with a top–antitop pair. The analysis, targeting Higgs decays into charm-quark pairs, reached a sensitivity comparable to the best existing direct constraints on this elusive interaction. New ATLAS analyses showcased the impact of the large Run 3 dataset, hinting at great potential for Higgs physics in the years to come – for example, Run 3 data have reduced the uncertainties on the Higgs boson’s couplings to muons and to Zγ by 30% and 38%, respectively. On the di-Higgs front, the expected upper limit on the signal-strength modifier, measured in the bb̄γγ final state alone, has now surpassed in sensitivity the combination of all Run 2 HH channels (see “A step towards the Higgs self-coupling”). The sensitivity to di-Higgs production is expected to improve significantly during Run 3, raising hopes of seeing a signal before the next long shutdown, which runs from mid-2026 to the end of 2029.

Juan Rojo (Vrije Universiteit Amsterdam) discussed parton distribution functions for Higgs processes at the LHC, while Thomas Gehrmann (University of Zurich) reviewed recent developments in general Higgs theory. Mathieu Pellen (University of Freiburg) provided a review of vector-boson fusion, Jose Santiago Perez (University of Granada) summarised the effective field theory framework and Oleksii Matsedonskyi (University of Cambridge) reviewed progress on electroweak phase transitions. In his “vision” talk, Alfredo Urbano (INFN Rome) discussed the interplay between Higgs physics and early-universe cosmology. Finally, Benjamin Fuks (LPTHE, Sorbonne University) presented a toponium model, bringing the elusive romance of top–quark pairs back into the spotlight (CERN Courier September/October 2025 p9).

After a cruise on the Seine in the light of the Olympic Cauldron, participants were propelled toward the future during the European Strategy for Particle Physics session. The ESPPU secretary Karl Jakobs (University of Freiburg) and various session speakers set the stage for spirited discussions of the options before the community – in particular, the scenarios to pursue should the FCC programme, the clear plan A, not be realised. The next Higgs Hunting workshop will be held in Orsay and Paris from 16 to 18 September 2026.

All aboard the scalar adventure

Since the discovery of the Higgs boson in 2012, the ATLAS and CMS collaborations have made significant progress in scrutinising its properties and interactions. So far, measurements are compatible with an elementary Higgs boson, originating from the minimal scalar sector required by the Standard Model. However, current experimental precision leaves ample room for this picture to change. In particular, the full potential of the LHC and its high-luminosity upgrade to search for a richer scalar sector beyond the Standard Model (BSM) is only beginning to be tapped.

The first Workshop on the Impact of Higgs Studies on New Theories of Fundamental Interactions, which took place on the Island of Capri, Italy, from 6 to 10 October 2025, gathered around 40 experimentalists and theorists to explore the pivotal role of the Higgs boson in exploring BSM physics. Participants discussed the implications of extended scalar sectors and the latest ATLAS and CMS searches, including current potential anomalies in LHC data.

“The Higgs boson has moved from the realm of being just a new particle to becoming a tool for searches for BSM particles,” said Greg Landsberg (Brown University) in an opening talk.

An extended scalar sector can address several mysteries in the SM. For example, it could serve as a mediator to a hidden sector that includes dark-matter particles, or play a role in generating the observed matter–antimatter asymmetry during an electroweak phase transition. Modified or extended Higgs sectors also arise in supersymmetric and other BSM models that address why the 125 GeV Higgs boson is so light compared to the Planck mass – despite quantum corrections that should drive it to much higher scales – and might shed light on the perplexing pattern of fermion masses and flavours.

One way to look for new physics in the scalar sector is to search for modifications of the decay rates, coupling strengths and CP properties of the Higgs boson. Another is to look for signs of additional neutral or charged scalar bosons, such as those predicted in longstanding two-Higgs-doublet or Higgs-triplet models. The workshop saw ATLAS and CMS researchers present their latest limits on extended Higgs sectors, which are based on an increasing number of model-independent or signature-based searches. While the data so far are consistent with the SM, a few mild excesses have attracted the attention of some theorists.

In diphoton final states, a slight excess of events persists in CMS data at a mass of 95 GeV. Hints of a small excess at a mass of 152 GeV are also present in ATLAS data, while a previously reported excess at 650 GeV has faded after full examination of Run 2 data. Workshop participants also heard suggestions that the Brout–Englert–Higgs potential could allow for a second resonance at 690 GeV.

The High-Luminosity LHC will enable us to explore the scalar sector in detail

“We haven’t seen concrete evidence for extended Higgs sectors, but intriguing features appear in various mass scales,” said CMS collaborator Sezen Sekmen (Kyungpook National University). “Run 3 ATLAS and CMS searches are in full swing, with improved triggering, object reconstruction and analysis techniques.”

Di-Higgs production, the rate of which depends on the strength of the Higgs boson’s self-coupling, offers a direct probe of the shape of the Brout–Englert–Higgs potential and is a key target of the LHC Higgs programme. Multiple SM extensions predict measurable effects on the di-Higgs production rate. In addition to non-resonant searches in di-Higgs production, ATLAS and CMS are pursuing a number of searches for BSM resonances decaying into a pair of Higgs bosons, which were shown during the workshop.
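
In the Standard Model the self-coupling is fixed by the Higgs mass and vacuum expectation value. As a textbook illustration of what di-Higgs production probes (a standard expansion, not a result shown at the workshop), one can expand the Brout–Englert–Higgs potential about its minimum:

\[
V(h) = \tfrac{1}{2} m_H^2 h^2 + \lambda_3\, v\, h^3 + \tfrac{1}{4}\lambda_4\, h^4,
\qquad
\lambda_3^{\rm SM} = \lambda_4^{\rm SM} = \frac{m_H^2}{2v^2},
\]

with m_H ≈ 125 GeV and v ≈ 246 GeV. Extended scalar sectors generically shift λ3 away from its Standard Model value, and with it the di-Higgs rate.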

Rich exchanges between experimentalists and theorists in an informal setting gave rise to several new lines of attack for physicists to explore. Moreover, the critical role of the High-Luminosity LHC in probing the scalar sector of the SM at the TeV scale was made clear.

“Much discussed during this workshop was the concern that people in the field are becoming demotivated by the lack of discoveries at the LHC since the Higgs, and that we have to wait for a future collider to make the next advance,” says organiser Andreas Crivellin (University of Zurich). “Nothing could be further from the truth: the scalar sector is not only the least explored of the SM and the one with the greatest potential to conceal new phenomena, but one that the High-Luminosity LHC will enable us to explore in detail.”

Subtleties of quantum fields

Quantum field theory unites quantum physics with special relativity. It is the framework of the Standard Model (SM), which describes the electromagnetic, weak and strong interactions as gauge forces, mediated by photons, gluons and W and Z bosons, plus additional interactions mediated by the Higgs field. The success of the SM has exceeded all expectations, and its mathematical structure has led to a number of impressive predictions. These include the existence of the charm quark, discovered in 1974, and the existence of the Higgs boson, discovered in 2012.

Uncovering Quantum Field Theory and the Standard Model, by Wolfgang Bietenholz of the National Autonomous University of Mexico and Uwe-Jens Wiese of the University of Bern, explains the foundations of quantum field theory in great depth, from classical field theory and canonical quantisation to regularisation and renormalisation, via path integrals and the renormalisation group. What really makes the book special are the frequently discussed relations to statistical mechanics and condensed-matter physics.

Riding a wave

The section on particles and “wavicles” is highly original. In quantum field theory, quantised excitations of fields cannot be interpreted as point-like particles. Unlike massive particles in non-relativistic quantum mechanics, these excitations have non-trivial localisation properties, which apply to photons and electrons alike. To emphasise the difference between non-relativistic particles and wave excitations in a relativistic theory, one may refer to them as “wavicles”, following Frank Wilczek. As discussed in chapter 3, an intuitive understanding of wavicles can be gained by the analogy to phonons in a crystal. Another remarkable feature of charged fields is the infinite extension of their excitations due to their Coulomb field. This means that any charged state necessarily includes an infrared cloud of soft gauge bosons. As a result, they cannot be described by ordinary one-particle states and are referred to as “infraparticles”. Their properties, along with the related “superselection sectors”, are explained in the section on scalar quantum electrodynamics.

Uncovering Quantum Field Theory and the Standard Model

The SM can be characterised as a non-abelian chiral gauge theory. Bietenholz and Wiese explain the various aspects of chirality in great detail. Anomalies in global and local symmetries are carefully discussed in the continuum as well as on a space–time lattice, based on the Ginsparg–Wilson relation and Lüscher’s lattice chiral symmetry. Confinement of quarks and gluons, the hadron spectrum, the parton model and hard processes, chiral perturbation theory and deconfinement at high temperatures uncover perturbative and non-perturbative aspects of quantum chromodynamics (QCD), the theory of strong interactions. Numerical simulations of strongly coupled lattice Yang–Mills theories are very demanding. During the past four decades, much progress has been made in turning lattice QCD into a quantitatively reliable tool by controlling statistical and systematic uncertainties, progress that is clearly explained for the critical reader. The treatment of QCD is supplemented by an introduction to the electroweak theory covering the Higgs mechanism, electroweak symmetry breaking and flavour physics of quarks and leptons.

The number of quark colours, which is three in nature, plays a prominent role in this book. At the quantum level, gauge symmetries can fail due to anomalies, rendering a theory inconsistent. The SM is free of anomalies, but this only works because of a delicate interplay between quark and lepton charges and the number of colours. An important example of this interplay is the decay of the neutral pion into two photons. The subtleties of this process are explained in chapter 24.
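
As a standard illustration of this interplay (a textbook argument, not the book’s own wording), the electric charges of the fermions in a single generation must sum to zero for the anomalies to cancel, which works only for three colours:

\[
N_c\,(Q_u + Q_d) + Q_e + Q_\nu = 3\left(\tfrac{2}{3} - \tfrac{1}{3}\right) - 1 + 0 = 0.
\]

The π⁰ → γγ rate probes the same counting: its amplitude is proportional to N_c (Q_u² − Q_d²), and agreement with experiment again requires N_c = 3.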

The number of quark colours, which is three in nature, plays a prominent role in this book

Most remarkably, the SM predicts baryon-number-violating processes. This arises from the vacuum structure of the weak SU(2) gauge fields, which involves topologically distinct field configurations. Quantum tunnelling between them, together with the anomaly in the baryon-number current, leads to baryon-number-violating transitions, as discussed in chapter 26. Similarly, in QCD a non-trivial topology of the gluon field leads to an explicit breaking of the flavour-singlet axial symmetry and, subsequently, to the mass of the η′ meson. Moreover, the gauge-field topology gives rise to an additional parameter in QCD, the vacuum angle θ. This parameter induces an electric dipole moment of the neutron, which is experimentally constrained to be extremely small, confronting us with the strong-CP problem: what constrains θ to be so tiny that the experimental upper bound on the neutron dipole moment is satisfied? A solution may be provided by the Peccei–Quinn symmetry and axions, as discussed in a dedicated chapter.
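
The strength of the constraint can be seen from the rough standard estimate (indicative numbers, not taken from the book)

\[
d_n \sim \theta \times 10^{-16}\ e\,\mathrm{cm},
\qquad
|d_n| \lesssim 10^{-26}\ e\,\mathrm{cm}
\;\Rightarrow\;
|\theta| \lesssim 10^{-10}.
\]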

By analogy with the QCD vacuum angle, one can introduce a CP-violating electromagnetic parameter θ into the SM – even though it has no physical effect in pure QED. This brings us to a gem of the book: its discussion of the Witten effect. In the presence of such a θ, the electric charge of a magnetic monopole becomes θ/2π plus an integer. This leads to the remarkable conclusion that for non-zero θ, all monopoles become dyons, carrying both electric and magnetic charge.
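
In the convention in which the Witten effect is usually quoted (a standard formula, included here for concreteness), a monopole of minimal magnetic charge carries electric charge

\[
q = e\left(n - \frac{\theta}{2\pi}\right), \qquad n \in \mathbb{Z},
\]

so for any non-zero θ the electric charge is a non-integer multiple of e and every monopole is a dyon.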

The SM is an effective low-energy theory and we do not know at what energy scale elements of a more fundamental theory will become visible. Its gauge structure and quark and lepton content hint at a possible unification of the interactions into a larger gauge group, which is discussed in the final chapter. Once gravity is included, one is confronted with a hierarchy problem: the question of why the electroweak scale is so small compared to the Planck mass, at which the Compton wavelength of a particle and its Schwarzschild radius coincide. Hence, at Planck energies quantum-gravitational effects cannot be ignored. Perhaps solving the electroweak hierarchy puzzle requires working with supersymmetric theories. For all students and scientists struggling with the SM and exploring possible extensions, the nine appendices will be a very valuable source of information for their research.

Einstein’s entanglement

Quantum entanglement is the quantum phenomenon par excellence. Our world is a quantum world: the matter that we see and touch is the most obvious consequence of quantum physics, and it would not exist as it does in a purely classical world. In modern parlance, however, when we talk about quantum sensors or quantum computing, what makes these things “quantum” is the employment of entanglement. Entanglement was first discussed by Einstein and Schrödinger, and later became famous with the celebrated EPR (Einstein–Podolsky–Rosen) paper of 1935.

The magic of entanglement

In an entangled particle system, some properties have to be assigned to the system itself and not to individual particles. When a neutral pion decays into two photons, for example, conservation of angular momentum requires their total spin to be zero. Since the photons travel in opposite directions in the pion’s rest frame, in order for their spins to cancel they must share the same “helicity”. Helicity is the spin projection along the direction of motion, and only two states are possible: left- or right-handed. If one photon is measured to be left-handed, the other must be left-handed as well. The entangled photons must be thought of as a single quantum object: neither do the individual particles have predefined spins nor does the measurement performed on one cause the other to pick a spin orientation. Experiments in more complicated systems have ruled these possibilities out, at least in their simplest incarnations, and this is exactly where the magic of entanglement begins.
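
For readers comfortable with a little notation, the two-photon helicity state can be written as a single superposition (a minimal sketch; the relative sign is fixed by the pion’s negative parity):

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\left( |L\rangle_1 |L\rangle_2 - |R\rangle_1 |R\rangle_2 \right),
\]

in which neither photon alone has a definite helicity, yet the two measured helicities always agree.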

Quantum entanglement is the main topic of Einstein’s Entanglement by William Stuckey, Michael Silberstein and Timothy McDevitt, all currently teaching at Elizabethtown College, Pennsylvania. The trio have complementary expertise in physics, philosophy and maths, and this is not their first book on the foundations of physics. They aim to explain why entanglement is so puzzling to physicists and the various ways that have been employed over the years to explain (or even explain away) the phenomenon. They also want to introduce readers to their own idea of how to solve the riddle and argue for its merits.

Why is entanglement so puzzling to physicists, and what has been employed to explain the phenomenon?

General readers may struggle in places. The book does have accessible chapters, for example one at the start with a quantum-gloves experiment – a nice way to introduce the reader to the problem – as well as a chapter on special relativity. Much of the discussion about quantum mechanics, however, uses advanced concepts, such as Hilbert space and the Bloch sphere, that belong in an undergraduate course in quantum mechanics. Philosophical terminology, such as “wave-function realism”, is also used copiously. The explanations and discussion provided are of good quality, and a reader interested in the interpretations of quantum mechanics who has some background in physics has a lot to gain. The authors quote copiously from a superb list of references and include many interesting historical facts that make reading the book very entertaining.

In general, the book criticises constructive approaches to interpreting quantum mechanics that explicitly postulate physical phenomena. In the example of neutral-pion decay that I gave previously, the case in which the measurement of one photon causes the other photon to pick a spin would require a constructive explanation. These can be contrasted with principle explanations, which may involve, for example, invoking an overarching symmetry. To quote an example that is used many times in the book, the relativity principle can be used to explain Lorentz length contraction without the need for a physical mechanism to contract the bodies, which would require a constructive explanation.

The authors make the claim that the conceptual issues with entanglement can be solved by sticking to principle explanations and, in particular, with the demand that Planck’s constant is measured to be the same in all inertial reference frames. Whether this simple suggestion is adequate to explain the mysteries of quantum mechanics, I will leave to the reader. Seneca wrote in his Natural Questions that “our descendants will be astonished at our ignorance of what to them is obvious”. If the authors are correct, entanglement may prove to be a case in point.

Memories of quarkonia

The world of particle physics was revolutionised in November 1974 by the discovery of the J/ψ particle. At the time, most of the elements of the Standard Model of particle physics had already been formulated, but only a limited set of fundamental fermions were confidently believed to exist: the electron and muon, their associated neutrinos, and the up, down and strange quarks that were thought to make up the strongly interacting particles known at that time. The J/ψ proved to be a charm–anticharm bound state, vindicating the existence of a quark flavour first hypothesised by Sheldon Glashow and James Bjorken in 1964 (CERN Courier January/February 2025 p35). Its discovery eliminated any lingering doubts regarding the quark model of 1964 (see “Nineteen sixty-four“) and sparked the development of the Standard Model into its modern form.

This new “charmonium” state was the first example of quarkonium: a heavy quark bound to an antiquark of the same flavour. It was named by analogy to positronium, a bound state of an electron and a positron, which decays by mutual annihilation into two or three photons. Composed of unstable quarks, bound by gluons rather than photons, and decaying mainly via the annihilation of their constituent quarks, quarkonia have fascinated particle physicists ever since.

The charmonium interpretation of the J/ψ was cemented by the subsequent discovery of a spectrum of related cc̄ states, and ultimately by the observation of charmed particles in 1976. The discovery of charmonium was followed in 1977 by the identification of bottomonium mesons and particles containing bottom quarks. While toponium – a bound state of a top quark and antiquark – was predicted in principle, most physicists thought that its observation would have to wait for the innate precision of a next-generation e⁺e⁻ collider following the LHC, in view of the top quark’s large mass and exceptionally rapid decay, more than 10¹² times quicker than that of the bottom quark. The complex environment at a hadron collider, where the composite nature of protons precludes knowledge of the initial collision energy of pairs of colliding partons within them, would make toponium particularly difficult to identify at the LHC.
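
A quick estimate using standard values (not figures from the original discovery papers) supports that factor: the top quark’s width of about 1.4 GeV corresponds to a lifetime of roughly 5 × 10⁻²⁵ s, while b hadrons live for about 1.5 ps,

\[
\tau_t = \frac{\hbar}{\Gamma_t} \approx \frac{6.6\times10^{-25}\ \mathrm{GeV\,s}}{1.4\ \mathrm{GeV}} \approx 5\times10^{-25}\ \mathrm{s},
\qquad
\frac{\tau_b}{\tau_t} \approx \frac{1.5\times10^{-12}\ \mathrm{s}}{5\times10^{-25}\ \mathrm{s}} \approx 3\times10^{12}.
\]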

However, in the second half of 2024, the CMS collaboration reported an enhancement near the threshold for tt̄ production at the LHC, which is now most plausibly interpreted as the lowest-lying toponium state. The existence of this enhancement has recently been corroborated by the ATLAS collaboration (see “ATLAS confirms top–antitop excess”).

Here are the personal memories of an eyewitness who followed these 50 years of quarkonium discoveries firsthand.

Strangeonium?

In hindsight, the quarkonium story can be thought to have begun in 1963 with the discovery of the φ meson. The φ was an unexpectedly stable and narrow resonance, decaying mainly into kaons rather than the relatively light pions, despite lying only just above the KK̄ threshold. Heavier quarkonia cannot decay into a pair of mesons containing single heavy quarks, as their masses lie below the energy threshold for such “open flavour” decays.

The preference of the φ to decay into kaons was soon interpreted by Susumu Okubo as a consequence of approximate SU(3) flavour symmetry, developing mathematical ideas based on unitary 3 × 3 matrices with determinant one. At the beginning of 1964, quarks were proposed and George Zweig suggested that the φ was a bound state of a strange quark and a strange antiquark (or “aces”, as he termed them). After 1974, the portmanteau word “strangeonium” was retrospectively applied to the φ and similar heavier ss̄ bound states, but the name has never really caught on.

Why is R rising?

In the year or so prior to the discovery of the J/ψ in November 1974, there was much speculation about data from the Cambridge Electron Accelerator (CEA) at Harvard and the Stanford Positron–Electron Asymmetric Ring (SPEAR) at SLAC. Data from these e⁺e⁻ colliders indicated a rise in the ratio, R, of the cross-sections for hadron and μ⁺μ⁻ production (see “Why is R rising?” figure). Was this a failure of the parton model that had only recently found acceptance as a model for the apparently scale-invariant internal structure of hadrons observed in deep-inelastic scattering experiments? Did partons indeed have internal structure? Or were there “new” partons that had not been seen previously, such as charm or coloured quarks? I was asked on several occasions to review the dozens of theoretical suggestions on the market, including at the ICHEP conference in the summer of 1974. In preparation, I toted a large Migros shopping bag filled with dozens of theoretical papers around Europe. Playing the part of an objective reviewer, I did not come out strongly in favour of any specific interpretation. However, during talks that autumn in Copenhagen and Dublin, I finally spoke out in favour of charm as the best-motivated explanation of the increase in R.
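
In the quark–parton model, at leading order (the standard prediction, added here for context), R simply counts quark charges, so each new flavour produces a step:

\[
R \equiv \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} \simeq N_c \sum_q Q_q^2
=
\begin{cases}
3\left(\tfrac{4}{9}+\tfrac{1}{9}+\tfrac{1}{9}\right) = 2 & (u,d,s) \\[4pt]
2 + 3\cdot\tfrac{4}{9} = \tfrac{10}{3} & (u,d,s,c),
\end{cases}
\]

which is why a rising R read naturally to charm advocates as the opening of a new quark threshold.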

November revolution

Then, on 11 November 1974, the news broke that two experimental groups, one working at BNL under the leadership of Sam Ting and the other at SLAC led by Burt Richter, had discovered, in parallel, the narrow vector boson that bears the composite name J/ψ (see “Charmonium” figure). The worldwide particle-physics community went into convulsions (CERN Courier November/December 2024 p41) – and the CERN Theory Division was no exception. We held informal midnight discussion sessions around an open-mic phone with Fred Gilman in the SLAC theory group, who generously shared with us the latest J/ψ news. Away from the phone, like many groups around the world, we debated the merits and demerits of many different theoretical ideas. Rather than write a plethora of rival papers about these ideas, we decided to bundle our thoughts into a collective preprint. Instead of taking individual responsibility for our trivial thoughts, the preprint was anonymous, the place of the authors’ names being taken by a mysterious “CERN Theory Boson Workshop”. Eagle eyes will spot that the equations were handwritten by Mary K Gaillard (CERN Courier July/August 2025 p47). Informally, we called ourselves Co-Co, for communication collective. With “no pretensions to originality or priority,” we explored five hypotheses: a hidden charm vector meson, a coloured vector meson, an intermediate vector boson, a Higgs meson and narrow resonances in strong interactions.

Charmonium

My immediate instinct was to advocate the charmonium interpretation of the J/ψ, and this was the first interpretation to be described in our paper. This was on the basis of the Glashow–Iliopoulos–Maiani (GIM) mechanism, which accounted for the observed suppression of flavour-changing neutral currents by postulating the existence of a charm quark with a mass around 2 GeV (see CERN Courier July/August 2024 p30), and the Zweig rule, which suggested phenomenologically that quarkonia do not easily decay by quark–antiquark annihilation via gluons into other flavours of quarks. So I was somewhat surprised when one of the authors of the GIM paper wrote a paper proposing that the J/ψ might be an intermediate electroweak vector boson. A few days after the J/ψ discovery came the news of the (almost equally narrow) ψ′ discovery, which I heard about as I was walking along the theory corridor to my office one morning. My informant was a senior theorist who was convinced that this discovery would kill the charmonium interpretation of the J/ψ. However, before I reached my office I realised that an extension of the Zweig rule would also suppress ψ′ → J/ψ + light-meson decays, so the ψ′ could also be narrow.

Keen competition

The charmonium interpretation of the J/ψ and ψ′ states predicted that there should be intermediate P-wave states (with one unit of orbital angular momentum) that could be detected in radiative decays of the ψ′. In the first half of 1975 there was keen competition between teams at SLAC and DESY to discover these states. That summer I was visiting SLAC, where I discovered one day under the cover of a copying machine, before their discovery was announced, a sheet of paper with plots showing clear evidence for the P-wave states. I made a copy, went to Burt Richter’s office and handed him the sheet of paper. I also asked whether he wanted my copy. He graciously allowed me to keep it, as long as I kept quiet about it, which I did until the discovery was officially announced a few weeks later.

The story of quarkonium can be thought to have begun in 1963 with the discovery of the φ meson

Discussion about the interpretation of the new particles, in particular between advocates of charm and Han–Nambu coloured quarks – a different way to explain the new particles’ astounding stability by giving them a new quantum number – rumbled on for a couple of years until the discovery of charmed particles in 1976. During this period we conducted some debates in the main CERN auditorium moderated by John Bell. I remember one such debate in particular, during which a distinguished senior British theorist spoke for coloured quarks and I spoke for charm. I was somewhat taken aback when he described me as representing the “establishment”, as I was under 30 at the time.

Over the following year, my attention wandered to grand unified theories, and my first paper on the subject was with Michael Chanowitz and Mary K Gaillard, which we completed in May 1977. We realised while writing this paper that simple grand unified theories – which unify the electroweak and strong interactions – would relate the mass of the τ heavy lepton that had been discovered in 1975 to the mass of the bottom quark, which was confidently expected but whose mass was unknown. Our prediction was m_b/m_τ = 2 to 5, but we did not include it in the abstract. Shortly afterwards, while our paper was in proof, the discovery of the ϒ state (or states) by a group at Fermilab led by Leon Lederman (see “Bottomonium” figure) became known, implying that m_b ~ 4.5 GeV. I added our successful mass prediction by hand in the margin of the corrected proof. Unfortunately, the journal misunderstood my handwriting and printed our prediction as m_b/m_τ = 2605, a spectacularly inaccurate postdiction! It remains to be seen whether the idea of a grand unified theory is correct: it also successfully predicted the electroweak mixing angle θ_W and suggested that neutrinos might have mass, but direct evidence, such as the decay of the proton, has yet to be found.

Peak performance

Meanwhile, buoyed by the success of our prediction for m_b, Mary K Gaillard, Dimitri Nanopoulos, Serge Rudaz and I set to work on a paper about the phenomenology of the top and bottom quarks. One of our predictions was that the first two excited states of the ϒ, the ϒ′ and ϒ′′, should be detectable by the Lederman experiment because the Zweig rule would suppress their cascade decays to lighter bottomonia via light-meson emission. Indeed, the Lederman experiment found that the ϒ bump was broader than the experimental resolution, and the bump was eventually resolved into three bottomonium peaks.

Bottomonium

It was in the same paper that we introduced the terminology of “penguin diagrams”, wherein a quark bound in a hadron changes flavour not at tree level via W-boson exchange but via a loop containing heavy particles (like W bosons or top quarks), emitting a gluon, photon or Z boson. Similar diagrams had been discussed by the ITEP theoretical school in Moscow, in connection with K decays, and we realised that they would be important in B-hadron decays. I took an evening off to go to a bar in the Old Town of Geneva, where I got involved in a game of darts with the experimental physicist Melissa Franklin. She bet me that if I lost the game I had to include the word “penguin” in my next paper. Melissa abandoned the darts game before the end, and was replaced by Serge Rudaz, who beat me. I still felt obligated to carry out the conditions of the bet, but for some time it was not clear to me how to get the word into the b-quark paper that we were writing at the time. Then, another evening, after working at CERN, I stopped to visit some friends on my way back to my apartment, where I inhaled some (at that time) illegal substance. Later, when I got home and continued working on our paper, I had a sudden inspiration that the famous Russian diagrams look like penguins. So we put the word into our paper, and it has now appeared in almost 10,000 papers.

What of toponium, the last remaining frontier in the world of quarkonia? In the early 1980s there were no experimental indications as to how heavy the top quark might be, and there were hopes that it might be within the range of existing or planned e⁺e⁻ colliders such as PETRA, TRISTAN and LEP. When the LEP experimental programme was being devised, I was involved in setting “examination questions” for candidate experimental designs that included asking how well they could measure the properties of toponium. In parallel, the first theoretical papers on the formalism for toponium production in e⁺e⁻ and hadron–hadron collisions appeared.

Toponium will be a very interesting target for future e⁺e⁻ colliders

But the top quark did not appear until the mid-1990s at the Tevatron proton–antiproton collider at Fermilab, with a mass around 175 GeV, implying that toponium measurements would require an e⁺e⁻ collider with an energy much greater than LEP’s, around 350 GeV. Many theoretical studies were made of the cross section in the neighbourhood of the e⁺e⁻ → tt̄ threshold, and of how precisely the top-quark mass and its electroweak and Higgs couplings could be measured.

Meanwhile, a smaller number of theorists were calculating the possible toponium signal at the LHC, and the LHC experiments ATLAS and CMS started measuring tt̄ production with high statistics. CMS and ATLAS embarked on programmes to search for quantum-mechanical correlations in the final-state decay products of the top quarks and antiquarks, as should occur if the tt̄ system were produced in a specific spin-parity state. They both found decay correlations characteristic of tt̄ production in a pseudoscalar state: it was the first time such a quantum correlation had been observed at such high energies.

The CMS collaboration used these studies to improve the sensitivities of dedicated searches they were making for possible heavy Higgs bosons decaying into tt̄ final states, as would be expected in many extensions of the Standard Model. Intriguingly, hints of a possible excess of events around the tt̄ threshold with the type of correlation expected from a pseudoscalar tt̄ state began to emerge in the CMS data, but initially not with high significance.

Pseudoscalar states

I first heard about this excess at an Asia–CERN physics school in Thailand, and started wondering whether it could be due to the lowest-lying toponium state, which would decay predominantly via the weak decays of its constituent top quarks rather than via their annihilation, or to a heavy pseudoscalar Higgs boson, and how one might distinguish between these hypotheses. A few years previously, Abdelhak Djouadi, Andrei Popov, Jérémie Quevillon and I had studied in detail the possible signatures of heavy Higgs bosons in tt̄ final states at the LHC, and shown that they would have significant interference effects that would generate dips in the cross section as well as bumps.

Toponium?

The significance of the CMS signal subsequently increased to over 5σ, showing up in a tailored search for new pseudoscalar states decaying into tt̄ pairs with specific spin correlations, and recently this CMS discovery has been confirmed by the ATLAS collaboration, with a significance over 7σ. Unfortunately, the experimental resolution in the tt̄ invariant mass is not precise enough to see any dip due to pseudoscalar Higgs production, and Djouadi, Quevillon and I have concluded that it is not yet possible to discriminate between the toponium and Higgs hypotheses on purely experimental grounds.

However, despite being a fan of extra Higgs bosons, I have to concede that toponium is the more plausible interpretation of the CMS threshold excess. The mass is consistent with that expected for toponium, the signal strength is consistent with theoretical calculations in QCD, and the tt̄ spin correlations are just what one expects for the lowest-lying pseudoscalar toponium state that would be produced in gluon–gluon collisions.

Caution is still in order. The pseudoscalar Higgs hypothesis cannot (yet) be excluded. Nevertheless, it would be a wonderful golden anniversary present for quarkonium if, some 50 years after the discovery of the J/ψ, the appearance of its last, most massive sibling were to be confirmed.

Toponium will be a very interesting target for future e⁺e⁻ colliders, which will be able to determine its properties with much greater accuracy than a hadron collider could achieve, making precise measurements of the mass of the top quark and its electroweak couplings possible. The quarkonium saga is far from over.

Nineteen sixty-four

[Timeline of 1964 milestones: Murray Gell-Mann; George Zweig; evidence for SU(3) symmetry; cosmic microwave background radiation; James Bjorken and Sheldon Glashow; broken symmetry and the mass of gauge vector mesons; evidence for the 2π decay; Peter Higgs; global conservation laws and massless particles; spin and unitary-spin independence in a paraquark model of baryons and mesons.]

In the history of elementary particle physics, 1964 was truly an annus mirabilis. Not only did the quark hypothesis emerge – independently from two theorists half a world apart – but a multiplicity of theorists came up with the idea of spontaneous symmetry breaking as an attractive method to generate elementary particle masses. And two pivotal experiments that year began to alter the way astronomers, cosmologists and physicists think about the universe.

Shown on the left is a timeline of the key 1964 milestones: discoveries that laid the groundwork for the Standard Model of particle physics and continue to be actively studied and refined today (images: N Eskandari, A Epshtein).

Some of the insights published in 1964 were first conceived in 1963. Caltech theorist Murray Gell-Mann had been ruminating about quarks ever since a March 1963 luncheon discussion with Robert Serber at Columbia University. Serber was exploring the possibility of a triplet of fundamental particles that in various combinations could account for mesons and baryons in Gell-Mann’s SU(3) symmetry scheme, dubbed “the Eightfold Way”. But Gell-Mann summarily dismissed his suggestion, showing him on a napkin how any such fundaments would have to have fractional charges of –2/3 or 1/3 the charge on an electron, which seemed absurd.

From the ridiculous to the sublime

Still, he realised, such ridiculous entities might be allowable if they somehow never materialised outside of the hadrons. For much of the year, Gell-Mann toyed with the idea in his musings, calling such hypothetical entities by the nonsense word “quorks”, until he encountered the famous line in Finnegans Wake by James Joyce, “Three quarks for Muster Mark.” He even discussed it with his old MIT thesis adviser, then CERN Director-General Victor Weisskopf, who chided him not to waste their time talking about such nonsense on an international phone call.

In late 1963, Gell-Mann finally wrote the quark idea up for publication and sent his paper to the newer European journal Physics Letters rather than the (then) more prestigious Physical Review Letters, in part because he thought it would be rejected there. “A schematic model of baryons and mesons”, published on 1 February 1964, is brief and to the point. After a few preliminary remarks, he noted that “a simpler, more elegant scheme can be constructed if we allow non-integral values for the charges … We then refer to the members u(2/3), d(–1/3) and s(–1/3) of the triplet as ‘quarks’.” But toward the end, he hedged his bets, warning readers not to take the existence of these quarks too seriously: “A search for stable quarks of charge +2/3 or –1/3 … at the highest-energy accelerators would help to reassure us of the non-existence of real quarks.”

As often happens in the history of science, the idea of quarks had another, independent genesis – at CERN in 1964. George Zweig, a CERN postdoc who had recently been a Caltech graduate student with Richard Feynman and Gell-Mann, was wondering why the φ meson lived so long before decaying into a pair of K mesons. A subtle conservation law must be at work, he figured, which led him to consider a constituent model of the hadrons. If the φ were somehow composed of two more fundamental entities, one with strangeness +1 and the other with –1, then its great preference for kaon decays over other, energetically more favourable possibilities, could be explained. These two strange constituents would find it difficult to “eat one another,” as he later put it, so two individual, strange kaons would be required to carry each of them away.

Late in the fall of 1963, Zweig discovered that he could reproduce the meson and baryon octets of the Eightfold Way from such constituents if they carried fractional charges of 2/3 and –1/3. Although he at first thought this possibility artificial, it solved a lot of other problems, and he began working feverishly on the idea, day and night. He wrote up his theory for publication, calling his fractionally charged particles “aces” – in part because he figured there would be four of them. Mesons, built from pairs of these aces, formed the “deuces” and baryons the “treys” in his deck of cards. His theory first appeared as a long CERN report in mid-January 1964, just as Gell-Mann’s quark paper was awaiting publication at Physics Letters.

As chance would have it, intensive activity was going on in parallel that January – an experimental search for the Ω baryon that Gell-Mann had predicted just six months earlier at a Geneva particle-physics conference. With negative charge and a mass almost twice that of the proton, it had to have strangeness –3 and would sit atop the decuplet of heavy baryons predicted in his Eightfold Way. Brookhaven experimenter Nick Samios was eagerly seeking evidence of this very strange particle in the initial run of the 80-inch bubble chamber that he and colleagues had spent years planning and building. On 31 January 1964, he finally found a bubble-chamber photograph with just the right signatures. It might be the “gold-plated event” that could prove the existence of the Ω baryon.

After more detailed tests to make sure of this conclusion, the Brookhaven team delivered a paper with the unassuming title “Observation of a hyperon with strangeness minus three” to Physical Review Letters. With 33 authors, it reported only one event. But with that singular event, any remaining doubt about SU(3) symmetry and Gell-Mann’s Eightfold Way evaporated.

A fourth quark for Muster Mark?

Later in spring 1964, James Bjorken and Sheldon Glashow crossed paths in Copenhagen, on leave from Harvard and Stanford, working at Niels Bohr’s Institute for Theoretical Physics. Seeking to establish lepton–hadron symmetry, they needed a fourth quark because a fourth lepton – the muon neutrino – had been discovered in 1962 at Brookhaven. Bjorken and Glashow were early adherents of the idea that hadrons were made of quarks, but based their arguments on SU(4) symmetry rather than SU(3). “We called the new quark flavour ‘charm,’ completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day,” recalled Glashow (CERN Courier January/February 2025 p35). Their Physics Letters article appeared that summer, but it took another decade before solid evidence for charm turned up in the famous J/ψ discovery at Brookhaven and SLAC. The charm quark they had predicted in 1964 was the central player in the so-called November Revolution a decade later that led to widespread acceptance of the Standard Model of particle physics.

In the same year, Oscar Greenberg at the University of Maryland was wrestling with the difficult problem of how to confine three supposedly identical quarks within a volume hardly larger than a proton. According to the sacrosanct Pauli exclusion principle, identical spin-1/2 fermions could never occupy the exact same quantum state. So how, for example, could one ever cram three strange quarks inside an Ω baryon?

One possible solution, Greenberg realised, was that quarks carry a new physical property that distinguishes them from one another, so they are not in fact identical. Instead of a single quark triplet, that is, there could be three distinct triplets of what he dubbed “paraquarks”. He published his ideas in November 1964, capping an extraordinary year of insights into hadrons. We now recognise his insight as anticipating the existence of “coloured” quarks, where colour is the source of the relentless QCD force binding them within mesons and baryons.

The origin of mass

Although it took more than a decade for experiments to verify them, these insights unravelled the nature of hadrons, revealing a new family of fermions and hinting at the nature of the strong force. Yet they were not necessarily the most important ideas developed in particle physics in 1964. During that summer, three theorists – Robert Brout, François Englert and Peter Higgs – formulated an innovative technique to generate particle masses using spontaneous symmetry breaking of non-Abelian Yang–Mills gauge theories – a class of field theories that would later describe the electroweak and strong forces in the Standard Model.

Murray Gell-Mann and Yuval Ne’eman

Inspired by successful theories of superconductivity, symmetry-breaking ideas had been percolating among those few still working on quantum field theory, then in deep decline in particle physics, but they foundered whenever masses were introduced “by hand” into the theories. Or, as Yoichiro Nambu and Jeffrey Goldstone realised in the early 1960s, the theories contained massless bosons that did not correspond to anything observed in experiments.

If they existed, the W (and later, Z) bosons carrying the short-range weak force had to be extremely massive (as is now well known). Brout and Englert – and independently Higgs – found they could generate the masses of such vector bosons if the gauge symmetry governing their behaviour was instead spontaneously broken, preserving the underlying symmetry while allowing for distinctive, asymmetric particle states. In solid-state physics, for example, magnetic domains will spontaneously align along a single direction, breaking the underlying rotational symmetry. Brout and Englert published their solution in June 1964, while Higgs followed suit a month later (after his paper was rejected by Physics Letters). Higgs subsequently showed that this symmetry breaking required a scalar boson to exist that was soon named after him. Dubbed the “Higgs mechanism,” this mass-generating process became a crucial feature of the unification of the weak and electromagnetic forces a few years later by Steven Weinberg and Abdus Salam. And after their electroweak theory was shown in 1971 to be renormalisable, and hence calculable, the theoretical floodgates opened wide, leading to today’s dominant Standard Model paradigm.

Surprise, surprise!

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science. That summer saw the publication of an epochal experiment leading to the discovery of CP violation in the decays of long-lived neutral kaons. Led by Princeton physicists Jim Cronin and Val Fitch, their Brookhaven experiment had discerned a small but non-negligible fraction – 0.2% – of two-body decays into a pair of pions, instead of into the dominant CP-conserving three-body decays. For months, the group wrestled with trying to understand this surprising result before publishing it that July in Physical Review Letters.

Robert Brout and François Englert

It took almost another decade before Japanese theorists Makoto Kobayashi and Toshihide Maskawa proved that such a small amount of CP violation was the natural result of the Standard Model if there were three quark-lepton families instead of the two then known to exist. Whether this phenomenon has any causal relation to the dominance of matter in the universe is still up for grabs decades later. “Indeed, it is almost certain that the CP violation observed in the K-meson system is not directly responsible for the matter dominance of the universe,” wrote Cronin in the early 1990s, “but one would wish that it is related to whatever the mechanism was that created [this] matter dominance.”

Robert W Wilson and Arno Penzias

Another epochal 1964 observation was not published until 1965, but it deserves mention here because of its tremendous significance for the subsequent marriage of particle physics and cosmology. That summer, Arno Penzias and Robert W Wilson of Bell Telephone Labs were in the process of converting a large microwave antenna in Holmdel, NJ, for use in radio astronomy. Shaped like a giant alpenhorn lying on its side, the device had been developed for early satellite communications. But the microwave signals that it was receiving included a faint, persistent “hiss” no matter the direction in which the horn was pointed; they at first interpreted the hiss as background noise – possibly due to some smelly pigeon droppings that had accumulated inside, which they removed. Still it persisted. Penzias and Wilson were at a complete loss to explain it.

Cosmological consequences

It so happened that a Princeton group led by Robert Dicke and James Peebles was just then building a radiometer to search for the uniform microwave radiation that should suffuse the universe had it begun in a colossal fireball, as a few cosmologists had been arguing for decades. In the spring of 1965, Penzias read a preprint of a paper by Peebles on the subject and called Dicke to suggest he come to Holmdel to view their results. After arriving and realising they had been scooped, the Princeton physicists soon confirmed the Bell Labs results using their own rooftop radiometer.

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science

The results were published as back-to-back letters in the Astrophysical Journal on 7 May 1965. The Princeton group wrote extensively about the cosmological consequences of the discovery, while Penzias and Wilson submitted just a brief, dry description of their work, “A measurement of excess antenna temperature at 4080 Mc/s”, ruling out other possible interpretations of the uniform signal, which corresponded to the radiation expected from a 3.5 K blackbody.

Subsequent measurements at many other frequencies have established that this is indeed the cosmic background radiation expected from the Big Bang birth of the universe, confirming that the cosmos did indeed pass through an incredibly brief, hot, dense phase early in its existence – a realisation that has prodded many particle physicists to take up the study of its evolution and remnants. This discovery of the cosmic background radiation therefore serves as a fitting capstone on what was truly a pivotal year for particle physics.

Mixed signals from X17

MEG II and PADME experiments

Almost a decade after ATOMKI researchers reported an unexpected peak in electron–positron pairs from beryllium nuclear transitions, the case for a new “X17” particle remains open. Proposed as a light boson with a mass of about 17 MeV and very weak couplings, it would belong to the sometimes-overlooked low-energy frontier of physics beyond the Standard Model. Two recent results now pull in opposite directions: the MEG II experiment at the Paul Scherrer Institute found no signal in the same transition, while the PADME experiment at INFN Frascati reports a modest excess in electron–positron scattering at the corresponding mass.

The story of the elusive X17 particle began at the Institute for Nuclear Research (ATOMKI) in Debrecen, Hungary, where nuclear physicist Attila János Krasznahorkay and colleagues set out to study the de-excitation of a beryllium-8 state. Their target was the dark photon – a particle hypothesised to mediate interactions between ordinary and dark matter. In their setup, a beam of protons strikes a lithium-7 target, producing an excited beryllium nucleus that releases a proton or de-excites to the beryllium-8 ground state by emitting an 18.1 MeV gamma ray – or, very rarely, an electron–positron pair.

Controversial anomaly

In 2015, ATOMKI claimed to have observed an excess of electron–positron pairs with a statistical significance of 6.8σ. Follow-up measurements with different nuclei were also reported to yield statistically significant excesses at the same mass. The team claimed the excess was consistent with the creation of a short-lived neutral boson with a mass of about 17 MeV. Given that it would be produced in nuclear transitions and decay into electron–positron pairs, the X17 should couple to nucleons, electrons and positrons. But many relevant constraints squeeze the parameter space for new physics at low energies, and independent tests are essential to resolve an unexpected and controversial anomaly that is now a decade old.

In November 2024, MEG II announced a direct cross-check of the anomaly, publishing their results in July 2025. Designed for high-precision tracking and calorimetry, the experiment combines dedicated background monitors with a spectrometer based on a lightweight, single-volume drift chamber that records the ionisation trails of charged particles. The detector is designed to search for evidence of the rare lepton-flavour-violating decay μ⁺ → e⁺γ, with the collaboration recently reporting world-leading limits at EPS-HEP (see “High-energy physics meets in Marseille”). It is also well suited to probing electron–positron final states, and has the mass resolution required to test the narrow-resonance interpretation of the ATOMKI anomaly.

Motivated by interest in X17, the collaboration directed a proton beam with energy up to 1.1 MeV onto a lithium-7 target, to study the same nuclear process as ATOMKI. Their data disfavour the ATOMKI hypothesis and impose an upper limit on the branching ratio of 1.2 × 10⁻⁵ at 90% confidence.

“While the result does not close the case,” notes Angela Papa of INFN, the University of Pisa and the Paul Scherrer Institute, “it weakens the simplest interpretations of the anomaly.”

But MEG II is not the only cross-check in progress. In May, the PADME collaboration reported an independent test that doesn’t repeat the ATOMKI experiment, but seeks to disentangle the X17 question from the complexities of nuclear physics.


Initially designed to search for evidence of states that decay invisibly, like dark photons or axion-like particles, PADME directs a positron beam with energies reaching 550 MeV onto a 100 µm-thick active diamond target. Annihilations of positrons with electrons bound in the target material are reconstructed by detecting the resulting photons, with any peak in the missing-mass spectrum signalling an unseen product. The photon energy and impact position are measured by a finely segmented electromagnetic calorimeter with crystals refurbished from the L3 experiment at LEP.
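In symbols (a standard missing-mass definition rather than PADME-specific notation): for the process e⁺e⁻ → γ + X, the missing mass is M_miss² = (P_beam + P_target − ΣP_γ)², where P_beam is the four-momentum of the incoming positron, P_target that of the target electron at rest, and P_γ the measured photon four-momenta. An invisible particle X would appear as a peak at M_miss = m_X.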

“The PADME approach relies only on the suggested interaction of X17 with electrons and positrons,” remarks spokesperson Venelin Kozhuharov of Sofia University and INFN Frascati. “Since the ATOMKI excess was observed in electron–positron final states, this is the minimal possible assumption that can be made for X17.”

Instead of searching for evidence of unseen particles, PADME varied the beam energy to look for an electron–positron resonance in the expected X17 mass range. The collaboration claims that the combined dataset displays an excess near 16.90 MeV with a local significance of 2.5σ.
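The required beam energy follows from standard fixed-target kinematics (an illustrative estimate, not a figure from the PADME report): annihilation on an electron at rest gives a centre-of-mass energy √s ≈ √(2mₑE_beam), so producing a resonance at m_X ≈ 16.9 MeV needs E_beam ≈ m_X²/(2mₑ) ≈ (16.9 MeV)²/(2 × 0.511 MeV) ≈ 280 MeV – comfortably within the 550 MeV reach of the beam, allowing a scan across the resonance.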

For theorists, X17 is an awkward fit. Most consider dark photons and axions to be the best-motivated candidates for low-mass, weakly coupled new-physics states, says Claudio Toni of LAPTh. Another possibility, he says, is a bound state of known particles, though QCD states such as pions are about eight times heavier, and pure QED effects usually occur at much lower scales than 17 MeV.

“We should be cautious,” says Toni. “Since X17 is expected to couple to both protons and electrons, the absence of signals elsewhere forces any theoretical proposal to respect stringent constraints. We should focus on its phenomenology.”

ATLAS confirms top–antitop excess


At the LHC, almost all top–antitop pairs are produced in a smooth invariant-mass spectrum described by perturbative QCD. In March, the CMS collaboration announced the discovery of an additional contribution of about 1%, localised near the energy threshold for producing a top quark and its antiquark (CERN Courier May/June 2025 p7). The ATLAS collaboration has now confirmed this observation.

“The measurement was challenging due to the small cross section and the limited mass resolution of about 20%,” says Tomas Dado of the ATLAS collaboration and CERN. “Sensitivity was achieved by exploiting high statistics, lepton angular variables sensitive to spin correlations, and by carefully constraining modelling uncertainties.”

Toponium

The simplest explanation for the excess appears to be a spectrum of “quasi-bound” states of a top quark and its antiquark that are often collectively referred to as toponium, by reference to charmonium, discovered in the November Revolution of 1974, and bottomonium, discovered three years later (see “Memories of quarkonia”). But there the similarities end. Thanks to the unique properties of the most massive fundamental particle yet discovered, toponium is expected to be exceptionally broad rather than exceptionally narrow in energy spectra, and to disintegrate via the weak decay of its constituent quarks rather than via their mutual annihilation.

“Historically, it was assumed that the LHC would never reach the sensitivity required to probe such effects, but ATLAS and CMS have shown that this expectation was too pessimistic,” says Benjamin Fuks of the Sorbonne. “This regime corresponds to the production of a slowly moving top–antitop pair that has time to exchange multiple gluons before one of the top quarks decays. The invariant mass of the system lies slightly below the open top–antitop threshold, which implies that at least one of the top quarks is off-shell. This contrasts with conventional top–antitop production, where the tops are typically produced far above threshold, move relativistically and do not experience significant non-relativistic gluon dynamics.”
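A minimal numerical sketch makes the kinematics concrete (assuming m_t ≈ 172.5 GeV; the script is illustrative only, not part of the ATLAS analysis):

import math

m_t = 172.5  # assumed top-quark mass in GeV (illustrative)
for m_tt in (350.0, 400.0, 600.0):  # invariant mass of the pair in GeV
    # speed of each on-shell top in the pair rest frame
    beta = math.sqrt(max(0.0, 1.0 - (2.0 * m_t / m_tt) ** 2))
    print(f"m_tt = {m_tt:.0f} GeV -> beta = {beta:.2f}")

Just above the 2m_t ≈ 345 GeV threshold the tops move at under a fifth of the speed of light (β ≈ 0.17 at 350 GeV), leaving time for the multiple gluon exchanges Fuks describes, whereas at 600 GeV they are already relativistic (β ≈ 0.82).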

While CMS fitted a pseudo-scalar resonance that couples to gluons and top quarks – the essential features of the ground state of toponium – the new ATLAS analysis employs a model recently published by Fuks and his collaborators that additionally includes all S-wave excitations. ATLAS reports a cross section for such quasi-bound excitations of 9.0 ± 1.3 pb, consistent with CMS’s measurement of 8.8 ± 1.3 pb. ATLAS’s measurement rises to 13.9 ± 1.9 pb when applying the same signal model as CMS.

Future measurements of top quark–antiquark pairs will compare the threshold excess to the expectations of non-relativistic QCD, search for the possible presence of new fields beyond the Standard Model, and study the quantum entanglement of the top and antitop quarks.

“At the High-Luminosity LHC, the main objective is to exploit the much larger dataset to go beyond a single-bin description of the sub-threshold top–antitop invariant mass distribution,” says Fuks. “At a future electron–positron collider, the top–antitop threshold scan has long been recognised as a cornerstone measurement, with toponium contributions playing an essential role.”

For Dado, this story reflects a satisfying interplay between theorists and the LHC experiments.

“Theorists proposed entanglement studies, ATLAS demonstrated entangled top–antitop pairs and CMS applied spin-sensitive observables to reveal the quasi-bound-state effect,” he says. “The next step is for theory to deliver a complete description of the top–antitop threshold.”

Full coherence at fifty

The most common neutrino interactions are the most difficult to detect. But thanks to advances in detector technology, coherent elastic neutrino–nucleus scattering (CEνNS) is emerging from behind backgrounds, 50 years after it was first hypothesised. These low-energy interactions are insensitive to the intricacies of nuclear or nucleon structure, making them a promising tool for precision searches for physics beyond the Standard Model. They also offer a route to miniaturising neutrino detectors.

“I am convinced that we are seeing the beginning of a new field in neutrino physics based on CEνNS observations,” says Manfred Lindner (Max Planck Institute for Nuclear Physics in Heidelberg), the spokesperson for the CONUS+ experiment, which reported the first evidence for fully coherent CEνNS in July. “The technology of CONUS+ is mature and seems scalable. I believe that we are at the beginning of precision neutrino physics with CEνNS and CONUS+ is one of the door openers!”

Act of hubris

Daniel Z Freedman is not best known for CEνNS, but in 1974 the future supergravity architect suggested that experimenters search for evidence of neutrinos interacting not with individual nucleons but “coherently” with entire nuclei. This process should dominate when the de Broglie wavelength of the neutrino is comparable to the diameter of the nucleus or larger. Because it is then impossible to tell which specific neutron exchanged a Z boson with the incoming neutrino, the contributions of individual neutrons add in the quantum amplitude rather than in the probability, leading to an N² dependence on the number of neutrons N. As a result, CEνNS cross sections are typically enhanced by a factor of between 100 and 1000.
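A rough numerical sketch of the argument (the energy and nuclear parameters are illustrative assumptions, not figures from Freedman's paper or from the experiments):

import math

hbar_c = 197.3  # MeV*fm
E_nu = 5.0  # typical reactor-antineutrino energy in MeV (assumed)
wavelength = 2.0 * math.pi * hbar_c / E_nu  # de Broglie wavelength of a near-massless neutrino

A, N = 72.6, 40.6  # natural germanium: average mass and neutron numbers (assumed)
diameter = 2.0 * 1.2 * A ** (1.0 / 3.0)  # empirical nuclear size, R ~ 1.2*A^(1/3) fm

print(f"neutrino wavelength ~ {wavelength:.0f} fm vs nuclear diameter ~ {diameter:.0f} fm")
print(f"coherent scaling with neutron number: N^2 ~ {N * N:.0f}")

At reactor energies the wavelength (roughly 250 fm) dwarfs the nucleus (roughly 10 fm), so the coherence condition is comfortably met.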

Freedman noted that his proposal may have been an “act of hubris”, because the interaction rate, detector resolution and backgrounds would all pose grave experimental difficulties. His caveat was perspicacious. It took until 2017 for indisputable evidence for CEνNS to emerge at Oak Ridge National Laboratory in the US, where the COHERENT experiment observed CEνNS by neutrinos with a maximum energy of 52 MeV, emerging from pion decays at rest (CERN Courier October 2017 p8). At these energies, the coherence condition is only partially fulfilled, and nuclear structure still plays a role.

The CONUS+ collaboration now presents evidence for CEνNS in the fully coherent regime. The experiment – one of many launched at nuclear reactors following the COHERENT demonstration – uses reactor electron antineutrinos with energies below 10 MeV, recorded over 119 days at the Leibstadt Nuclear Power Plant in Switzerland. The team observed 395 ± 106 events, compared with a Standard Model expectation of 347 ± 59, corresponding to a statistical significance for the observation of CEνNS of 3.7σ.
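As a naive sanity check on the quoted significance (a simple Gaussian reading, cruder than the collaboration's likelihood analysis): the ratio of the measured signal to its uncertainty, 395/106 ≈ 3.7, reproduces the 3.7σ figure.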


It is no wonder that detection took 50 years. The only signal of CEνNS is a gentle nuclear recoil – often compared to the impact of a ping-pong ball on a tanker. In CONUS+, the nuclear recoils of the CEνNS interactions are detected using the ionisation signal of point-contact high-purity germanium detectors with energy thresholds as low as 160 eV.
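Two-body kinematics shows just how gentle (a generic estimate, not a CONUS+ number): a neutrino of energy E_ν scattering off a nucleus of mass M can transfer at most E_R(max) = 2E_ν²/(Mc² + 2E_ν) ≈ 2E_ν²/Mc², which for a 10 MeV antineutrino on germanium (Mc² ≈ 67.6 GeV) is about 3 keV – and far less at typical reactor energies, hence the need for sub-keV thresholds.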

The team has now increased the mass of their four semiconductor detectors from 1 to 2.4 kg to provide better statistics and potentially a lower threshold energy. CONUS+ is highly sensitive to physics beyond the Standard Model, says the team, including non-standard interaction parameters, new light mediators and electromagnetic properties of the neutrino such as electrical millicharges or neutrino magnetic moments. Lindner estimates that the CONUS+ technology could be scaled up to 100 kg, potentially yielding 100,000 CEνNS events per year of operation.

Into the neutrino fog

One researcher’s holy grail is another’s curse. In 2024, dark-matter experiments reported entering the “neutrino fog”, as their sensitivity to nuclear recoils crossed the threshold to detect a background of solar-neutrino CEνNS interactions. The PandaX-4T and XENONnT collaborations reported 2.6σ and 2.7σ evidence for CEνNS interactions in their liquid-xenon time projection chambers, based on estimated signals of 79 and 11 interactions, respectively. These were the first direct measurements of nuclear recoils from solar neutrinos with dark-matter detectors. Boron-8 solar neutrinos have slightly higher energies than those detected by CONUS+, and are also in the fully coherent regime.


“The neutrino flux in CONUS+ is many orders of magnitude bigger than in dark-matter detectors,” notes Lindner, who is also co-spokesperson of the XENON collaboration. “This is compensated by a much larger target mass, a larger CEνNS cross section due to the larger number of neutrons in xenon versus germanium, a longer running time and differences in detection efficiencies. Both experiments have in common that all backgrounds of natural or imposed radioactivity must be suppressed by many orders of magnitude such that the CEνNS process can be extracted over backgrounds.”

The current experimental frontier for CEνNS is towards low energy thresholds, concludes COHERENT spokesperson Kate Scholberg of Duke University. “The coupling of recoil energy to observable energy can be in the form of a dim flash of light picked up by light sensors, a tiny zap of charge collected in a semiconductor detector, or a small thermal pulse observed in a bolometer. A number of collaborations are pursuing novel technologies with sub-keV thresholds, among them cryogenic bolometers. A further goal is measurement over a range of nuclei, as this will test the SM prediction of an N² dependence of the CEνNS cross section. And for higher-energy neutrino sources, for which the coherence is not quite perfect, there are opportunities to learn about nuclear structure. Another future possibility is directional recoil detection. If we are lucky, nature may give us a supernova burst of CEνNS recoils. As for societal applications, CEνNS has promise for nuclear-reactor monitoring for nonproliferation purposes due to its large cross section and interaction threshold below that for inverse beta decay of 1.8 MeV.”
