
Memories of quarkonia

The world of particle physics was revolutionised in November 1974 by the discovery of the J/ψ particle. At the time, most of the elements of the Standard Model of particle physics had already been formulated, but only a limited set of fundamental fermions was confidently believed to exist: the electron and muon, their associated neutrinos, and the up, down and strange quarks that were thought to make up the strongly interacting particles known at that time. The J/ψ proved to be a charm–anticharm bound state, vindicating the existence of a quark flavour first hypothesised by Sheldon Glashow and James Bjorken in 1964 (CERN Courier January/February 2025 p35). Its discovery eliminated any lingering doubts regarding the quark model of 1964 (see “Nineteen sixty-four”) and sparked the development of the Standard Model into its modern form.

This new “charmonium” state was the first example of quarkonium: a heavy quark bound to an antiquark of the same flavour. It was named by analogy to positronium, a bound state of an electron and a positron, which decays by mutual annihilation into two or three photons. Composed of unstable quarks, bound by gluons rather than photons, and decaying mainly via the annihilation of their constituent quarks, quarkonia have fascinated particle physicists ever since.

The charmonium interpretation of the J/ψ was cemented by the subsequent discovery of a spectrum of related cc̄ states, and ultimately by the observation of charmed particles in 1976. The discovery of charmonium was followed in 1977 by the identification of bottomonium mesons and particles containing bottom quarks. While toponium – a bound state of a top quark and antiquark – was predicted in principle, most physicists thought that its observation would have to wait for the innate precision of a next-generation e⁺e⁻ collider following the LHC, in view of the top quark’s large mass and exceptionally rapid decay, more than 10¹² times faster than that of the bottom quark. The complex environment at a hadron collider, where the composite nature of protons precludes knowledge of the initial collision energy of pairs of colliding partons within them, would make toponium particularly difficult to identify at the LHC.
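As a rough cross-check of that factor of 10¹² (a back-of-the-envelope estimate using today’s measured values, not figures from the original article), the bottom-quark lifetime is about 1.5 × 10⁻¹² s, while the Standard Model top width of roughly 1.4 GeV corresponds to a lifetime of about 5 × 10⁻²⁵ s:

\[ \frac{\tau_b}{\tau_t} \approx \frac{1.5 \times 10^{-12}\ \mathrm{s}}{\hbar/\Gamma_t} \approx \frac{1.5 \times 10^{-12}\ \mathrm{s}}{4.7 \times 10^{-25}\ \mathrm{s}} \approx 3 \times 10^{12}, \]

consistent with the “more than 10¹² times faster” quoted above.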

However, in the second half of 2024, the CMS collaboration reported an enhancement near the threshold for tt̄ production at the LHC, which is now most plausibly interpreted as the lowest-lying toponium state. The existence of this enhancement has recently been corroborated by the ATLAS collaboration (see “ATLAS confirms top–antitop excess”).

Here are the personal memories of an eyewitness who followed these 50 years of quarkonium discoveries firsthand.

Strangeonium?

In hindsight, the quarkonium story can be thought to have begun in 1963 with the discovery of the φ meson. The φ was an unexpectedly stable and narrow resonance, decaying mainly into kaons rather than the relatively light pions, despite lying only just above the KK̄ threshold. Heavier quarkonia cannot decay into a pair of mesons containing single heavy quarks, as their masses lie below the energy threshold for such “open flavour” decays.

The preference of the φ to decay into kaons was soon interpreted by Susumu Okubo as a consequence of approximate SU(3) flavour symmetry, developing mathematical ideas based on unitary 3 × 3 matrices with determinant one. At the beginning of 1964, quarks were proposed, and George Zweig suggested that the φ was a bound state of a strange quark and a strange antiquark (or aces, as he termed them). After 1974, the portmanteau word “strangeonium” was retrospectively applied to the φ and similar heavier ss̄ bound states, but the name has never really caught on.

Why is R rising?

In the year or so prior to the discovery of the J/ψ in November 1974, there was much speculation about data from the Cambridge Electron Accelerator (CEA) at Harvard and the Stanford Positron–Electron Asymmetric Ring (SPEAR) at SLAC. Data from these e⁺e⁻ colliders indicated a rise in the ratio, R, of cross-sections for hadron and μ⁺μ⁻ production (see “Why is R rising?” figure). Was this a failure of the parton model that had only recently found acceptance as a model for the apparently scale-invariant internal structure of hadrons observed in deep-inelastic scattering experiments? Did partons indeed have internal structure? Or were there “new” partons that had not been seen previously, such as charm or coloured quarks? I was asked on several occasions to review the dozens of theoretical suggestions on the market, including at the ICHEP conference in the summer of 1974. In preparation, I toted a large Migros shopping bag filled with dozens of theoretical papers around Europe. Playing the part of an objective reviewer, I did not come out strongly in favour of any specific interpretation. However, during talks that autumn in Copenhagen and Dublin, I finally spoke out in favour of charm as the best-motivated explanation of the increase in R.
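For orientation (a textbook parton-model estimate rather than a result from the article), R simply counts the accessible quark flavours weighted by their squared charges and the three colours, so opening the charm channel produces a step:

\[ R \equiv \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} = 3\sum_q Q_q^2, \qquad R_{uds} = 3\left[\tfrac{4}{9}+\tfrac{1}{9}+\tfrac{1}{9}\right] = 2, \qquad R_{udsc} = 2 + 3\cdot\tfrac{4}{9} = \tfrac{10}{3}. \]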

November revolution

Then, on 11 November 1974, the news broke that two experimental groups, one working at BNL under the leadership of Sam Ting and the other at SLAC led by Burt Richter, had discovered, in parallel, the narrow vector boson that bears the composite name J/ψ (see “Charmonium” figure). The worldwide particle-physics community went into convulsions (CERN Courier November/December 2024 p41) – and the CERN Theory Division was no exception. We held informal midnight discussion sessions around an open-mic phone with Fred Gilman in the SLAC theory group, who generously shared with us the latest J/ψ news. Away from the phone, like many groups around the world, we debated the merits and demerits of many different theoretical ideas. Rather than write a plethora of rival papers about these ideas, we decided to bundle our thoughts into a collective preprint. Instead of taking individual responsibility for our trivial thoughts, we made the preprint anonymous, with the place of the authors’ names taken by a mysterious “CERN Theory Boson Workshop”. Eagle eyes will spot that the equations were handwritten by Mary K Gaillard (CERN Courier July/August 2025 p47). Informally, we called ourselves Co-Co, for communication collective. With “no pretentions to originality or priority,” we explored five hypotheses: a hidden charm vector meson, a coloured vector meson, an intermediate vector boson, a Higgs meson and narrow resonances in strong interactions.

Charmonium

My immediate instinct was to advocate the charmonium interpretation of the J/ψ, and this was the first interpretation to be described in our paper. This was on the basis of the Glashow–Iliopoulos–Maiani (GIM) mechanism, which accounted for the observed suppression of flavour-changing neutral currents by postulating the existence of a charm quark with a mass around 2 GeV (see CERN Courier July/August 2024 p30), and the Zweig rule, which suggested phenomenologically that quarkonia do not easily decay by quark–antiquark annihilation via gluons into other flavours of quarks. So I was somewhat surprised when one of the authors of the GIM paper wrote a paper proposing that it might be an intermediate electroweak vector boson. A few days after the J/ψ discovery came the news of the (almost equally narrow) ψ′ discovery, which I heard about as I was walking along the theory corridor to my office one morning. My informant was a senior theorist who was convinced that this discovery would kill the charmonium interpretation of the J/ψ. However, before I reached my office I realised that an extension of the Zweig rule would also suppress ψ′ → J/ψ + light-meson decays, so the ψ′ could also be narrow.

Keen competition

The charmonium interpretation of the J/ψ and ψ′ states predicted that there should be intermediate P-wave states (with one unit of orbital angular momentum) that could be detected in radiative decays of the ψ′. In the first half of 1975 there was keen competition between teams at SLAC and DESY to discover these states. That summer I was visiting SLAC, where I discovered one day under the cover of a copying machine, before their discovery was announced, a sheet of paper with plots showing clear evidence for the P-wave states. I made a copy, went to Burt Richter’s office and handed him the sheet of paper. I also asked whether he wanted my copy. He graciously allowed me to keep it, as long as I kept quiet about it, which I did until the discovery was officially announced a few weeks later.

The story of quarkonium can be thought to have begun in 1963 with the discovery of the φ meson

Discussion about the interpretation of the new particles, in particular between advocates of charm and Han–Nambu coloured quarks – a different way to explain the new particles’ astounding stability by giving them a new quantum number – rumbled on for a couple of years until the discovery of charmed particles in 1976. During this period we conducted some debates in the main CERN auditorium moderated by John Bell. I remember one such debate in particular, during which a distinguished senior British theorist spoke for coloured quarks and I spoke for charm. I was somewhat taken aback when he described me as representing the “establishment”, as I was under 30 at the time.

Over the following year, my attention wandered to grand unified theories, and I wrote my first paper on the subject with Michael Chanowitz and Mary K Gaillard, completing it in May 1977. We realised while writing this paper that simple grand unified theories – which unify the electroweak and strong interactions – would relate the mass of the τ heavy lepton that had been discovered in 1975 to the mass of the bottom quark, which was confidently expected but whose mass was unknown. Our prediction was mb/mτ = 2 to 5, but we did not include it in the abstract. Shortly afterwards, while our paper was in proof, the discovery of the ϒ state (or states) by a group at Fermilab led by Leon Lederman (see “Bottomonium” figure) became known, implying that mb ~ 4.5 GeV. I added our successful mass prediction by hand in the margin of the corrected proof. Unfortunately, the journal misunderstood my handwriting and printed our prediction as mb/mτ = 2605, a spectacularly inaccurate postdiction! It remains to be seen whether the idea of a grand unified theory is correct: it also successfully predicted the electroweak mixing angle θW and suggested that neutrinos might have mass, but direct evidence, such as the decay of the proton, has yet to be found.
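Plugging in today’s values (a quick check with modern numbers, which were of course not all available in 1977), the intended prediction sits comfortably within the quoted range:

\[ \frac{m_b}{m_\tau} \approx \frac{4.5\ \mathrm{GeV}}{1.78\ \mathrm{GeV}} \approx 2.5 \in [2,5]. \]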

Peak performance

Meanwhile, buoyed by the success of our prediction for mb, Mary K Gaillard, Dimitri Nanopoulos, Serge Rudaz and I set to work on a paper about the phenomenology of the top and bottom quarks. One of our predictions was that the first two excited states of the ϒ, the ϒ′ and ϒ′′, should be detectable by the Lederman experiment because the Zweig rule would suppress their cascade decays to lighter bottomonia via light-meson emission. Indeed, the Lederman experiment found that the ϒ bump was broader than the experimental resolution, and the bump was eventually resolved into three bottomonium peaks.

Bottomonium

It was in the same paper that we introduced the terminology of “penguin diagrams”, wherein a quark bound in a hadron changes flavour not at tree level via W-boson exchange but via a loop containing heavy particles (like W bosons or top quarks), emitting a gluon, photon or Z boson. Similar diagrams had been discussed by the ITEP theoretical school in Moscow, in connection with K decays, and we realised that they would be important in B-hadron decays. I took an evening off to go to a bar in the Old Town of Geneva, where I got involved in a game of darts with the experimental physicist Melissa Franklin. She bet me that if I lost the game I had to include the word “penguin” in my next paper. Melissa abandoned the darts game before the end, and was replaced by Serge Rudaz, who beat me. I still felt obligated to carry out the conditions of the bet, but for some time it was not clear to me how to get the word into the b-quark paper that we were writing at the time. Then, another evening, after working at CERN, I stopped to visit some friends on my way back to my apartment, where I inhaled some (at that time) illegal substance. Later, when I got home and continued working on our paper, I had a sudden inspiration that the famous Russian diagrams look like penguins. So we put the word into our paper, and it has now appeared in almost 10,000 papers.

What of toponium, the last remaining frontier in the world of quarkonia? In the early 1980s there were no experimental indications as to how heavy the top quark might be, and there were hopes that it might be within the range of existing or planned e⁺e⁻ colliders such as PETRA, TRISTAN and LEP. When the LEP experimental programme was being devised, I was involved in setting “examination questions” for candidate experimental designs that included asking how well they could measure the properties of toponium. In parallel, the first theoretical papers on the formalism for toponium production in e⁺e⁻ and hadron–hadron collisions appeared.

Toponium will be a very interesting target for future e⁺e⁻ colliders

But the top quark did not appear until the mid-1990s at the Tevatron proton–antiproton collider at Fermilab, with a mass around 175 GeV, implying that toponium measurements would require an e⁺e⁻ collider with an energy much greater than LEP, around 350 GeV. Many theoretical studies were made of the cross section in the neighbourhood of the e⁺e⁻ → tt̄ threshold, and of how precisely the top-quark mass and its electroweak and Higgs couplings could be measured.

Meanwhile, a smaller number of theorists were calculating the possible toponium signal at the LHC, and the LHC experiments ATLAS and CMS started measuring tt̄ production with high statistics. CMS and ATLAS embarked on programmes to search for quantum-mechanical correlations in the final-state decay products of the top quarks and antiquarks, as should occur if the tt̄ state were to be produced in a specific spin-parity state. They both found decay correlations characteristic of tt̄ production in a pseudoscalar state: it was the first time such a quantum correlation had been observed at such high energies.

The CMS collaboration used these studies to improve the sensitivities of dedicated searches they were making for possible heavy Higgs bosons decaying into tt̄ final states, as would be expected in many extensions of the Standard Model. Intriguingly, hints of a possible excess of events around the tt̄ threshold with the type of correlation expected from a pseudoscalar tt̄ state began to emerge in the CMS data, but initially not with high significance.

Pseudoscalar states

I first heard about this excess at an Asia–CERN physics school in Thailand, and started wondering whether it could be due to the lowest-lying toponium state, which would decay predominantly via the weak decays of its unstable constituent top quarks and antiquarks rather than via their annihilation, or to a heavy pseudoscalar Higgs boson, and how one might distinguish between these hypotheses. A few years previously, Abdelhak Djouadi, Andrei Popov, Jérémie Quevillon and I had studied in detail the possible signatures of heavy Higgs bosons in tt̄ final states at the LHC, and shown that they would have significant interference effects that would generate dips in the cross-section as well as bumps.

Toponium?

The significance of the CMS signal subsequently increased to over 5σ, showing up in a tailored search for new pseudoscalar states decaying into tt̄ pairs with specific spin correlations, and recently this CMS discovery has been confirmed by the ATLAS Collaboration, with a significance over 7σ. Unfortunately, the experimental resolution in the tt̄ invariant mass is not precise enough to see any dip due to pseudoscalar Higgs production, and Djouadi, Quevillon and I have concluded that it is not yet possible to discriminate between the toponium and Higgs hypotheses on purely experimental grounds.

However, despite being a fan of extra Higgs bosons, I have to concede that toponium is the more plausible interpretation of the CMS threshold excess. The mass is consistent with that expected for toponium, the signal strength is consistent with theoretical calculations in QCD, and the tt̄ spin correlations are just what one expects for the lowest-lying pseudoscalar toponium state that would be produced in gluon–gluon collisions.

Caution is still in order. The pseudoscalar Higgs hypothesis cannot (yet) be excluded. Nevertheless, it would be a wonderful golden anniversary present for quarkonium if, some 50 years after the discovery of the J/ψ, the appearance of its last, most massive sibling were to be confirmed.

Toponium will be a very interesting target for future e⁺e⁻ colliders, which will be able to determine its properties with much greater accuracy than a hadron collider could achieve, making precise measurements of the mass of the top quark and its electroweak couplings possible. The quarkonium saga is far from over.

Hidden treasures

Data resurrection

In 2009, the JADE experiment had been out of operation for 23 years. The PETRA electron–positron collider that served it had already completed a second life as a pre-accelerator for the HERA electron–proton collider and was preparing for a third life as an X-ray source. JADE and the other PETRA experiments were a piece of physics history, well known for seminal measurements of three-jet quark–antiquark–gluon events, and for early studies of quark fragmentation and jet hadronisation. But two decades after being decommissioned, the JADE collaboration was yet to publish one of its signature measurements.

At high energies and short distances, the strong force becomes weaker and quarks behave almost like free particles. This “asymptotic freedom” is a unique hallmark of QCD. In 2009, as now, JADE’s electron–positron data was unique in the low-energy range, with other data sets lost to history. When its data was reprocessed with modern next-to-next-to-leading-order QCD calculations and improved simulation tools, the DESY experiment was able to rival experiments at CERN’s higher-energy Large Electron–Positron (LEP) collider for precision on the strong coupling constant, contributing to a stunning proof of QCD’s most fundamental behaviour. The key was a farsighted and original initiative by Siggi Bethke to preserve JADE’s data and analysis software.

New perspectives

This data resurrection from JADE demonstrated how data can be reinterpreted to give new perspectives decades after an experiment ends. It was a timely demonstration. In 2009, HERA and SLAC’s PEP-II electron–positron collider had been recently decommissioned, and Fermilab’s Tevatron proton–antiproton collider was approaching the end of its operations. Each facility nevertheless had a strong analysis programme ahead, and CERN’s Large Hadron Collider (LHC) was preparing for its first collisions. How could all this data be preserved?

The uniqueness of these programmes, for which no upgrade or follow-up was planned for the coming decades, invited the consideration of data usability at horizons well beyond a few years. A few host labs risked a small investment, with dedicated data-preservation projects beginning, for example, at SLAC, DESY, Fermilab, IHEP and CERN (see “Data preservation” dashboard). To exchange data-preservation concepts, methodologies and policies, and to ensure the long-term preservation of HEP data, the Data Preservation in High Energy Physics (DPHEP) group was created in 2014. DPHEP is a global initiative under the supervision of the International Committee for Future Accelerators (ICFA), with strong support from CERN from the beginning. It actively welcomes new collaborators and new partner experiments, to ensure a vibrant and long-term future for the precious data sets being collected at present and future colliders.

At the beginning of our efforts, DPHEP designed a four-level classification of data abstraction. Level 1 corresponds to the information typically found in a scientific publication or its associated HEPData entry (a public repository for high-energy physics data tables). Level 4 includes all inputs necessary to fully reprocess the original data and simulate the experiment from scratch.

The concept of data preservation had to be extended too. Simply storing data and freezing software is bound to fail as operating systems evolve and analysis knowledge disappears. A sensible preservation process must begin early on, while the experiments are still active, and take into account the research goals and available resources. Long-term collaboration organisation plays a crucial role, as data cannot be preserved without stable resources. Software must adapt to rapidly changing computing infrastructure to ensure that the data remains accessible in the long term.

Return on investment

But how much research gain could be expected for a reasonable investment in data preservation? We conservatively estimate that for dedicated investments below 1% of the cost of the construction of a facility, the scientific output increases by 10% or more. Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations (see “Publications per year, during and after data taking” panel). Publication rates remain substantial well beyond the “canonical” five years after the end of the data taking, particularly for experiments that pursued dedicated data-preservation programmes. For some experiments, the lifetime of the preservation system is by now comparable with the data-taking period, illustrating the need to carefully define collaborations for the long term.

Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations

The most striking example is BaBar, an electron–positron-collider experiment at SLAC that was designed to investigate the violation of charge-parity symmetry in the decays of B mesons, and which continues to publish using a preservation system now hosted outside the original experiment site. Aging infrastructure is now presenting challenges, raising questions about the very-long-term hosting of historical experiments – “preservation 2.0” – or the definitive end of the programme. The other historical b-factory, Belle, benefits from a follow-up experiment on site.

Publications per year, during and after data taking

The publication record at experiments associated with the DPHEP initiative. Data-taking periods of the relevant facilities are shaded, and the fraction of peer-reviewed articles published afterwards is indicated as a percentage for facilities that are not still operational. The data, which exclude conference proceedings, were extracted from Inspire-HEP on 31 July 2025.

HERA, an electron– and positron–proton collider that was designed to study deep inelastic scattering (DIS) and the structure of the proton, continues to publish and even to attract new collaborators as the community prepares for the Electron Ion Collider (EIC) at BNL, nicely demonstrating the relevance of data preservation for future programmes. The EIC will continue studies of DIS in the regime of gluon saturation (CERN Courier January/February 2025 p31), with polarised beams exploring nucleon spin and a range of nuclear targets. The use of new machine-learning algorithms on the preserved HERA data has even allowed aspects of the EIC physics case to be explored: an example of those “treasures” not foreseen at the end of collisions.

IHEP in China conducts a vigorous data-preservation programme around BESIII data from electron–positron collisions in the BEPCII charm factory. The collaboration is considering using artificial intelligence to rank data priorities and user support for data reuse.

Remarkably, LEP experiments are still publishing physics analyses with archived ALEPH data almost 25 years after the completion of the LEP programme on 4 November 2000. The revival of the CERNLIB collection of FORTRAN data-analysis software libraries has also enabled the resurrection of the legacy software stacks of both DELPHI and OPAL, including the spectacular revival of their event displays (see “Data resurrection” figure). The DELPHI collaboration revised their fairly restrictive data-access policy in early 2024, opening and publishing their data via CERN’s Open Data Portal.

Some LEP data is currently being migrated into the standardised EDM4hep (event data model) format that has been developed for future colliders. As well as testing the format with real data, this will ensure data preservation and support software development, analysis training and detector design for the electron–positron collider phase of the proposed Future Circular Collider using real events.

The future is open

In the past 10 years, data preservation has grown in prominence in parallel with open science, which promotes free public access to publications, data and software in community-driven repositories, and according to the FAIR principles of findability, accessibility, interoperability and reusability. Together, data preservation and open science help maximise the benefits of fundamental research. Collaborations can fully exploit their data and share its unique benefits with the international community.

The two concepts are distinct but tightly linked. Data preservation focuses on maintaining data integrity and usability over time, whereas open data emphasises accessibility and sharing. They have in common the need for careful and resource-loaded planning, with a crucial role played by the host laboratory.

Treasure chest

Data preservation and open science both require clear policies and a proactive approach. Beginning at the very start of an experiment is essential. Clear guidelines on copyright, resource allocation for long-term storage, access strategies and maintenance must be established to address the challenges of data longevity. Last but not least, it is crucially important to design collaborations to ensure smooth international cooperation long after data taking has finished. By addressing these aspects, collaborations can create robust frameworks for preserving, managing and sharing scientific data effectively over the long term.

Today, most collaborations target the highest standards of data preservation (level 4). Open-source software should be prioritised, because the uncontrolled obsolescence of commercial software endangers the entire data-preservation model. It is crucial to maintain all of the data and the software stack, which requires continuous effort to adapt older versions to evolving computing environments. This applies to both software and hardware infrastructures. Synergies between old and new experiments can provide valuable solutions, as demonstrated by HERA and EIC, Belle and Belle II, and the Antares and KM3NeT neutrino telescopes.

From afterthought to forethought

In the past decade, data preservation has evolved from simply an afterthought as experiments wrapped up operations into a necessary specification for HEP experiments. Data preservation is now recognised as a source of cost-effective research. Progress has been rapid, but its implementation remains fragile and needs to be protected and planned.

In the past 10 years, data preservation has grown in prominence in parallel with open science

The benefits will be significant. Signals not imagined during the experiments’ lifetime can be searched for. Data can be reanalysed in light of advances in theory and observations from other realms of fundamental science. Education, training and outreach can be brought to life by demonstrating classic measurements with real data. And scientific integrity is fully realised when results are fully reproducible.

The LHC, having surpassed an exabyte of data, now holds the largest scientific data set ever accumulated. The High-Luminosity LHC will increase this by an order of magnitude. When the programme comes to an end, it will likely be the last data at the energy frontier for decades. History suggests that 10% of the LHC’s scientific programme will not yet have been published when collisions end, and a further 10% not even imagined. While the community discusses its strategy for future colliders, it must therefore also bear in mind data preservation. It is the key to unearthing hidden treasures in the data of the past, present and future.

Nineteen sixty-four

Timeline of key 1964 milestones: Murray Gell-Mann; George Zweig; evidence for SU(3) symmetry; the cosmic microwave background radiation; James Bjorken and Sheldon Glashow; broken symmetry and the mass of gauge vector mesons; evidence for the 2π decay; Peter Higgs; global conservation laws and massless particles; and spin and unitary-spin independence in a paraquark model of baryons and mesons.

In the history of elementary particle physics, 1964 was truly an annus mirabilis. Not only did the quark hypothesis emerge – independently from two theorists half a world apart – but a multiplicity of theorists came up with the idea of spontaneous symmetry breaking as an attractive method to generate elementary particle masses. And two pivotal experiments that year began to alter the way astronomers, cosmologists and physicists think about the universe.

Shown on the left is a timeline of the key 1964 milestones; discoveries that laid the groundwork for the Standard Model of particle physics and continue to be actively studied and refined today (images: N Eskandari, A Epshtein).

Some of the insights published in 1964 were first conceived in 1963. Caltech theorist Murray Gell-Mann had been ruminating about quarks ever since a March 1963 luncheon discussion with Robert Serber at Columbia University. Serber was exploring the possibility of a triplet of fundamental particles that in various combinations could account for mesons and baryons in Gell-Mann’s SU(3) symmetry scheme, dubbed “the Eightfold Way”. But Gell-Mann summarily dismissed his suggestion, showing him on a napkin how any such fundaments would have to have fractional charges of –2/3 or 1/3 the charge on an electron, which seemed absurd.

From the ridiculous to the sublime

Still, he realised, such ridiculous entities might be allowable if they somehow never materialised outside of the hadrons. For much of the year, Gell-Mann toyed with the idea in his musings, calling such hypothetical entities by the nonsense word “quorks”, until he encountered the famous line in Finnegans Wake by James Joyce, “Three quarks for Muster Mark.” He even discussed it with his old MIT thesis adviser, then CERN Director-General Victor Weisskopf, who chided him not to waste their time talking about such nonsense on an international phone call.

In late 1963, Gell-Mann finally wrote the quark idea up for publication and sent his paper to the newer European journal Physics Letters rather than the (then) more prestigious Physical Review Letters, in part because he thought it would be rejected there. “A schematic model of baryons and mesons”, published on 1 February 1964, is brief and to the point. After a few preliminary remarks, he noted that “a simpler, more elegant scheme can be constructed if we allow non-integral values for the charges … We then refer to the members u(2/3), d(–1/3) and s(–1/3) of the triplet as ‘quarks’.” But toward the end, he hedged his bets, warning readers not to take the existence of these quarks too seriously: “A search for stable quarks of charge +2/3 or –1/3 … at the highest-energy accelerators would help to reassure us of the non-existence of real quarks.”

As often happens in the history of science, the idea of quarks had another, independent genesis – at CERN in 1964. George Zweig, a CERN postdoc who had recently been a Caltech graduate student with Richard Feynman and Gell-Mann, was wondering why the φ meson lived so long before decaying into a pair of K mesons. A subtle conservation law must be at work, he figured, which led him to consider a constituent model of the hadrons. If the φ were somehow composed of two more fundamental entities, one with strangeness +1 and the other with –1, then its great preference for kaon decays over other, energetically more favourable possibilities could be explained. These two strange constituents would find it difficult to “eat one another,” as he later put it, so two individual strange kaons would be required to carry each of them away.

Late in the fall of 1963, Zweig discovered that he could reproduce the meson and baryon octets of the Eightfold Way from such constituents if they carried fractional charges of 2/3 and –1/3. Although he at first thought this possibility artificial, it solved a lot of other problems, and he began working feverishly on the idea, day and night. He wrote up his theory for publication, calling his fractionally charged particles “aces” – in part because he figured there would be four of them. Mesons, built from pairs of these aces, formed the “deuces” and baryons the “treys” in his deck of cards. His theory first appeared as a long CERN report in mid-January 1964, just as Gell-Mann’s quark paper was awaiting publication at Physics Letters.

As chance would have it, there was intensive activity going on in parallel that January – an experimental search for the Ω baryon that Gell-Mann had predicted just six months earlier at a Geneva particle-physics conference. With negative charge and a mass almost twice that of the proton, it had to have strangeness –3 and would sit atop the decuplet of heavy baryons predicted in his Eightfold Way. Brookhaven experimenter Nick Samios was eagerly seeking evidence of this very strange particle in the initial run of the 80-inch bubble chamber that he and colleagues had spent years planning and building. On 31 January 1964, he finally found a bubble-chamber photograph with just the right signatures. It might be the “gold-plated event” that could prove the existence of the Ω baryon.

After more detailed tests to make sure of this conclusion, the Brookhaven team delivered a paper with the unassuming title “Observation of a hyperon with strangeness minus three” to Physical Review Letters. With 33 authors, it reported only one event. But with that singular event, any remaining doubt about SU(3) symmetry and Gell-Mann’s Eightfold Way evaporated.

A fourth quark for Muster Mark?

Later in spring 1964, James Bjorken and Sheldon Glashow crossed paths in Copenhagen, on leave from Harvard and Stanford, working at Niels Bohr’s Institute for Theoretical Physics. Seeking to establish lepton–hadron symmetry, they needed a fourth quark because a fourth lepton – the muon neutrino – had been discovered in 1962 at Brookhaven. Bjorken and Glashow were early adherents of the idea that hadrons were made of quarks, but based their arguments on SU(4) symmetry rather than SU(3). “We called the new quark flavour ‘charm,’ completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day,” recalled Glashow (CERN Courier January/February 2025 p35). Their Physics Letters article appeared that summer, but it took another decade before solid evidence for charm turned up in the famous J/ψ discovery at Brookhaven and SLAC. The charm quark they had predicted in 1964 was the central player in the so-called November Revolution a decade later that led to widespread acceptance of the Standard Model of particle physics.

In the same year, Oscar Greenberg at the University of Maryland was wrestling with the difficult problem of how to confine three supposedly identical quarks within a volume hardly larger than a proton. According to the sacrosanct Pauli exclusion principle, identical spin–1/2 fermions could never occupy the exact same quantum state. So how, for example, could one ever cram three strange quarks inside an Ω baryon?

One possible solution, Greenberg realised, was that quarks carry a new physical property that distinguished them from one another so they were not in fact identical. Instead of a single quark triplet, that is, there could be three distinct triplets of what he dubbed “paraquarks”, publishing his ideas in November 1964, and capping an extraordinary year of insights into hadrons. We now recognise his insight as anticipating the existence of “coloured” quarks, where colour is the source of the relentless QCD force binding them within mesons and baryons.

The origin of mass

Although it took more than a decade for experiments to verify them, these insights unravelled the nature of hadrons, revealing a new family of fermions and hinting at the nature of the strong force. Yet they were not necessarily the most important ideas developed in particle physics in 1964. During that summer, three theorists – Robert Brout, François Englert and Peter Higgs – formulated an innovative technique to generate particle masses using spontaneous symmetry breaking of non-Abelian Yang–Mills gauge theories – a class of field theories that would later describe the electroweak and strong forces in the Standard Model.

Murray Gell-Mann and Yuval Ne’eman

Inspired by successful theories of superconductivity, symmetry-breaking ideas had been percolating among those few still working on quantum field theory, then in deep decline in particle physics, but they foundered whenever masses were introduced “by hand” into the theories. Or, as Yoichiro Nambu and Jeffrey Goldstone realised in the early 1960s, massless bosons that did not correspond to anything observed in experiments appeared in the theories.

If they existed, the W (and later, Z) bosons carrying the short-range weak force had to be extremely massive (as is now well known). Brout and Englert – and independently Higgs – found they could generate the masses of such vector bosons if the gauge symmetry governing their behaviour was instead spontaneously broken, preserving the underlying symmetry while allowing for distinctive, asymmetric particle states. In solid-state physics, for example, magnetic domains will spontaneously align along a single direction, breaking the underlying rotational symmetry even though the laws governing the system retain it. Brout and Englert published their solution in June 1964, while Higgs followed suit a month later (after his paper was rejected by Physics Letters). Higgs subsequently showed that this symmetry breaking required a scalar boson to exist that was soon named after him. Dubbed the “Higgs mechanism,” this mass-generating process became a crucial feature of the unification of the weak and electromagnetic forces a few years later by Steven Weinberg and Abdus Salam. And after their electroweak theory was shown in 1971 to be renormalisable, and hence calculable, the theoretical floodgates opened wide, leading to today’s dominant Standard Model paradigm.

Surprise, surprise!

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science. That summer saw the publication of an epochal experiment leading to the discovery of CP violation in the decays of long-lived neutral kaons. Led by Princeton physicists Jim Cronin and Val Fitch, their Brookhaven experiment had discerned a small but non-negligible fraction – 0.2% – of two-body decays into a pair of pions, instead of into the dominant CP-conserving three-body decays. For months, the group wrestled with trying to understand this surprising result before publishing it that July in Physical Review Letters.

Robert Brout and François Englert

It took almost another decade before Japanese theorists Makoto Kobayashi and Toshihide Maskawa proved that such a small amount of CP violation was the natural result of the Standard Model if there were three quark-lepton families instead of the two then known to exist. Whether this phenomenon has any causal relation to the dominance of matter in the universe is still up for grabs decades later. “Indeed, it is almost certain that the CP violation observed in the K-meson system is not directly responsible for the matter dominance of the universe,” wrote Cronin in the early 1990s, “but one would wish that it is related to whatever the mechanism was that created [this] matter dominance.”

Robert W Wilson and Arno Penzias

Another epochal 1964 observation was not published until 1965, but it deserves mention here because of its tremendous significance for the subsequent marriage of particle physics and cosmology. That summer, Arno Penzias and Robert W Wilson of Bell Telephone Labs were in the process of converting a large microwave antenna in Holmdel, NJ, for use in radio astronomy. Shaped like a giant alpenhorn lying on its side, the device had been developed for early satellite communications. But the microwave signals that it was receiving included a faint, persistent “hiss” no matter the direction in which the horn was pointed; they at first interpreted the hiss as background noise – possibly due to some smelly pigeon droppings that had accumulated inside, which they removed. Still it persisted. Penzias and Wilson were at a complete loss to explain it.

Cosmological consequences

It so happened that a Princeton group led by Robert Dicke and James Peebles was just then building a radiometer to search for the uniform microwave radiation that should suffuse the universe had it begun in a colossal fireball, as a few cosmologists had been arguing for decades. In the spring of 1965, Penzias read a preprint of a paper by Peebles on the subject and called Dicke to suggest he come to Holmdel to view their results. After arriving and realising they had been scooped, the Princeton physicists soon confirmed the Bell Labs results using their own rooftop radiometer.

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science

The results were published as back-to-back letters in the Astrophysical Journal on 7 May 1965. The Princeton group wrote extensively about the cosmological consequences of the discovery, while Penzias and Wilson submitted just a brief, dry description of their work, “A measurement of excess antenna temperature at 4080 Mc/s”, ruling out other possible interpretations of the uniform signal, which corresponded to the radiation expected from a 3.5 K blackbody.

Subsequent measurements at many other frequencies have established that this is indeed the cosmic background radiation expected from the Big Bang birth of the universe, confirming that it had in fact occurred. That incredibly brief, hot, dense phase of the universe’s existence has prodded many particle physicists to take up the study of its evolution and remnants. This discovery of the cosmic background radiation therefore serves as a fitting capstone on what was truly a pivotal year for particle physics.

Mixed signals from X17

MEG II and PADME experiments

Almost a decade after ATOMKI researchers reported an unexpected peak in electron–positron pairs from beryllium nuclear transitions, the case for a new “X17” particle remains open. Proposed as a light boson with a mass of about 17 MeV and very weak couplings, it would belong to the sometimes-overlooked low-energy frontier of physics beyond the Standard Model. Two recent results now pull in opposite directions: the MEG II experiment at the Paul Scherrer Institute found no signal in the same transition, while the PADME experiment at INFN Frascati reports a modest excess in electron–positron scattering at the corresponding mass.

The story of the elusive X17 particle began at the Institute for Nuclear Research (ATOMKI) in Debrecen, Hungary, where nuclear physicist Attila János Krasznahorkay and colleagues set out to study the de-excitation of a beryllium-8 state. Their target was the dark photon – a particle hypothesised to mediate interactions between ordinary and dark matter. In their setup, a beam of protons strikes a lithium-7 target, producing an excited beryllium nucleus that releases a proton or de-excites to the beryllium-8 ground state by emitting an 18.1 MeV gamma ray – or, very rarely, an electron–positron pair.

Controversial anomaly

In 2015, ATOMKI claimed to have observed an excess of electron–positron pairs with a statistical significance of 6.8σ. Follow-up measurements with different nuclei were also reported to yield statistically significant excesses at the same mass. The team claimed the excess was consistent with the creation of a short-lived neutral boson with a mass of about 17 MeV. Given that it would be produced in nuclear transitions and decay into electron–positron pairs, the X17 should couple to nucleons, electrons and positrons. But many relevant constraints squeeze the parameter space for new physics at low energies, and independent tests are essential to resolve an unexpected and controversial anomaly that is now a decade old.

In November 2024, MEG II announced a direct cross-check of the anomaly, publishing their results in July 2025. Designed for high-precision tracking and calorimetry, the experiment combines dedicated background monitors with a spectrometer based on a lightweight, single-volume drift chamber that records the ionisation trails of charged particles. The detector is designed to search for evidence of the rare lepton-flavour-violating decay μ⁺ → e⁺γ, with the collaboration recently reporting world-leading limits at EPS-HEP (see “High-energy physics meets in Marseille”). It is also well suited to probing electron–positron final states, and has the mass resolution required to test the narrow-resonance interpretation of the ATOMKI anomaly.

Motivated by interest in X17, the collaboration directed a proton beam with energy up to 1.1 MeV onto a lithium-7 target, to study the same nuclear process as ATOMKI. Their data disfavours the ATOMKI hypothesis and imposes an upper limit on the branching ratio of 1.2 × 10⁻⁵ at 90% confidence.

“While the result does not close the case,” notes Angela Papa of INFN, the University of Pisa and the Paul Scherrer Institute, “it weakens the simplest interpretations of the anomaly.”

But MEG II is not the only cross-check in progress. In May, the PADME collaboration reported an independent test that doesn’t repeat the ATOMKI experiment, but seeks to disentangle the X17 question from the complexities of nuclear physics.

For theorists, X17 is an awkward fit

Initially designed to search for evidence of states that decay invisibly, like dark photons or axion-like particles, PADME collides a positron beam with energies reaching 550 MeV with a 100 µm-thick active diamond target. Annihilations of positrons with electrons bound in the target material are reconstructed by detecting the resulting photons, with any peak in the missing-mass spectrum signalling an unseen product. The photon energy and impact position are measured by a finely segmented electromagnetic calorimeter with crystals refurbished from the L3 experiment at LEP.
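For readers unfamiliar with the technique, the missing mass is formed from four-momentum conservation in each annihilation; a sketch of the standard kinematics (with symbols chosen here for illustration) is

\[ M_{\mathrm{miss}}^2 = \left(p_{e^+} + p_{e^-} - p_\gamma\right)^2, \]

where p_{e⁺} is the four-momentum of the beam positron, p_{e⁻} that of an electron at rest in the target and p_γ that of the detected photon; an invisible particle of mass mX would appear as a peak at M²_miss = m²_X.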

“The PADME approach relies only on the suggested interaction of X17 with electrons and positrons,” remarks spokesperson Venelin Kozhuharov of Sofia University and INFN Frascati. “Since the ATOMKI excess was observed in electron–positron final states, this is the minimal possible assumption that can be made for X17.”

Instead of searching for evidence of unseen particles, PADME varied the beam energy to look for an electron–positron resonance in the expected X17 mass range. The collaboration claims that the combined dataset displays an excess near 16.90 MeV with a local significance of 2.5σ.

For theorists, X17 is an awkward fit. Most consider dark photons and axions to be the best motivated candidates for low mass, weakly coupled new physics states, says Claudio Toni of LAPTh. Another possibility, he says, is a bound state of known particles, though QCD states such as pions are about eight times heavier, and pure QED effects usually occur at much lower scales than 17 MeV.

“We should be cautious,” says Toni. “Since X17 is expected to couple to both protons and electrons, the absence of signals elsewhere forces any theoretical proposal to respect stringent constraints. We should focus on its phenomenology.”

ATLAS confirms top–antitop excess

Quasi-bound candidate

At the LHC, almost all top–antitop pairs are produced in a smooth invariant-mass spectrum described by perturbative QCD. In March, the CMS collaboration announced the discovery of an additional contribution, of around 1%, localised near the energy threshold for producing a top quark and its antiquark (CERN Courier May/June 2025 p7). The ATLAS collaboration has now confirmed this observation.

“The measurement was challenging due to the small cross section and the limited mass resolution of about 20%,” says Tomas Dado of the ATLAS collaboration and CERN. “Sensitivity was achieved by exploiting high statistics, lepton angular variables sensitive to spin correlations, and by carefully constraining modelling uncertainties.”

Toponium

The simplest explanation for the excess appears to be a spectrum of “quasi-bound” states of a top quark and its antiquark that are often collectively referred to as toponium, by reference to the charmonium states discovered in the November Revolution of 1974 and the bottomonium states found three years later (see “Memories of quarkonia”). But there the similarities end. Thanks to the unique properties of the most massive fundamental particle yet discovered, toponium is expected to be exceptionally broad rather than exceptionally narrow in energy spectra, and to disintegrate via the weak decay of its constituent quarks rather than via their mutual annihilation.
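To put “exceptionally broad” in perspective (a rough comparison using standard values rather than numbers from the ATLAS or CMS analyses), the J/ψ has a width of about 93 keV, whereas a toponium state inherits roughly twice the width of a single top quark:

\[ \Gamma_{t\bar t} \approx 2\,\Gamma_t \approx 2 \times 1.4\ \mathrm{GeV} \approx 2.7\ \mathrm{GeV}, \]

more than four orders of magnitude broader.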

“Historically, it was assumed that the LHC would never reach the sensitivity required to probe such effects, but ATLAS and CMS have shown that this expectation was too pessimistic,” says Benjamin Fuks of the Sorbonne. “This regime corresponds to the production of a slowly moving top–antitop pair that has time to exchange multiple gluons before one of the top quarks decays. The invariant mass of the system lies slightly below the open top–antitop threshold, which implies that at least one of the top quarks is off-shell. This contrasts with conventional top–antitop production, where the tops are typically produced far above threshold, move relativistically and do not experience significant non-relativistic gluon dynamics.”

While CMS fitted a pseudoscalar resonance that couples to gluons and top quarks – the essential features of the ground state of toponium – the new ATLAS analysis employs a model recently published by Fuks and his collaborators that additionally includes all S-wave excitations. ATLAS reports a cross-section for such quasi-bound excitations of 9.0 ± 1.3 pb, consistent with CMS’s measurement of 8.8 ± 1.3 pb. ATLAS’s measurement rises to 13.9 ± 1.9 pb when applying the same signal model as CMS.

Future measurements of top quark–antiquark pairs will compare the threshold excess to the expectations of non-relativistic QCD, search for the possible presence of new fields beyond the Standard Model, and study the quantum entanglement of the top and antitop quarks.

“At the High-Luminosity LHC, the main objective is to exploit the much larger dataset to go beyond a single-bin description of the sub-threshold top–antitop invariant mass distribution,” says Fuks. “At a future electron–positron collider, the top–antitop threshold scan has long been recognised as a cornerstone measurement, with toponium contributions playing an essential role.”

For Dado, this story reflects a satisfying interplay between theorists and the LHC experiments.

“Theorists proposed entanglement studies, ATLAS demonstrated entangled top–antitop pairs and CMS applied spin-sensitive observables to reveal the quasi-bound-state effect,” he says. “The next step is for theory to deliver a complete description of the top–antitop threshold.”

US publishes 40-year vision for particle physics

Elementary Particle Physics: The Higgs and Beyond

Big science requires long-term planning. In June, the US National Academies of Sciences, Engineering, and Medicine published an unprecedented 40-year strategy for US particle physics titled Elementary Particle Physics: The Higgs and Beyond. Its recommendations include participating in the proposed Future Circular Collider at CERN and hosting the world’s highest-energy elementary particle collider around the middle of the century (see “Eight recommendations” panel). The report assesses that a 10 TeV muon collider would complement the discovery potential of a 100 TeV proton collider.

“The shift to a 40-year horizon in the new report reflects a recognition that modern particle-physics projects and scientific questions are of unprecedented scale and complexity, demanding a much longer-term strategic commitment, international cooperation and investment for continued leadership,” says report co-chair Maria Spiropulu of the California Institute of Technology. “A staggered approach towards large research-infrastructure projects, rich in scientific advancement, technological breakthroughs and collaboration, can shield the field from stagnation.”

Eight recommendations

1. The US should host the world’s highest-energy elementary particle collider around the middle of the century. This requires the immediate creation of a national muon collider R&D programme to enable the construction of a demonstrator of the key new technologies and their integration.

2. The US should participate in the international Future Circular Collider Higgs factory currently under study at CERN to unravel the physics of the Higgs boson.

3. The US should continue to pursue and develop new approaches to questions ranging from neutrino physics and tests of fundamental symmetries to the mysteries of dark matter, dark energy, cosmic inflation and the excess of matter over antimatter in the universe.

4. The US should explore new synergistic partnerships across traditional science disciplines and funding boundaries.

5. The US should invest for the long journey ahead with sustained R&D funding in accelerator science and technology, advanced instrumentation, all aspects of computing, emerging technologies from other disciplines and a healthy core research programme.

6. The federal government should provide the means and the particle-physics community should take responsibility for recruiting, training, mentoring and retaining the highly motivated student and postdoctoral workforce required for the success of the field’s ambitious science goals.

7. The US should engage internationally through existing and new partnerships, and explore new cooperative planning mechanisms.

8. Funding agencies, national laboratories and universities should work to minimise the environmental impact of particle-physics research and facilities.

Source: National Academies of Sciences, Engineering, and Medicine 2025 Elementary Particle Physics: The Higgs and Beyond. Washington, DC: The National Academies Press.

The report is authored by a committee of leading scientists selected by the National Academies. Its mandate complements the grassroots-led Snowmass process and the budget-conscious P5 process (CERN Courier January/February 2024 p7). The previous report in this series, Revealing the Hidden Nature of Space and Time: Charting the Course for Elementary Particle Physics was published in 2006. It called for the full exploitation of the LHC, a strategic focus on linear-collider R&D, expanding particle astrophysics, and pursuing an internationally coordinated, staged programme in neutrino physics.

Two conclusions underpin the new report’s recommendations. The first identifies three workforce issues currently threatening the future of particle physics: the morale of early-career scientists, a shortfall in the number of accelerator scientists, and growing barriers to international exchanges. The second urges US leadership in elementary particle physics, citing benefits to science, the nation and humanity.

Full coherence at fifty

The most common neutrino interactions are the most difficult to detect. But thanks to advances in detector technology, coherent elastic neutrino–nucleus scattering (CEνNS) is emerging from behind backgrounds, 50 years after it was first hypothesised. These low-energy interactions are insensitive to the intricacies of nuclear or nucleon structure, making them a promising tool for precision searches for physics beyond the Standard Model. They also offer a route to miniaturising neutrino detectors.

“I am convinced that we are seeing the beginning of a new field in neutrino physics based on CEνNS observations,” says Manfred Lindner (Max Planck Institute for Nuclear Physics in Heidelberg), the spokesperson for the CONUS+ experiment, which reported the first evidence for fully coherent CEνNS in July. “The technology of CONUS+ is mature and seems scalable. I believe that we are at the beginning of precision neutrino physics with CEνNS and CONUS+ is one of the door openers!”

Act of hubris

Daniel Z Freedman is not best known for CEνNS, but in 1974 the future supergravity architect suggested that experimenters search for evidence of neutrinos interacting not with individual nucleons but “coherently” with entire nuclei. This process should dominate when the de Broglie wavelength of the neutrino is comparable to or larger than the diameter of the nucleus. Because the amplitudes for the incoming neutrino to exchange a Z boson with each individual neutron add in the quantum amplitude rather than in the probability, the cross section scales as N², the square of the number of neutrons. As a result, CEνNS cross sections are typically enhanced by a factor of between 100 and 1000.
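For orientation, the sketch below (a toy illustration with round numbers, not drawn from Freedman’s paper or any experiment’s code) compares the neutrino’s de Broglie wavelength with a nuclear diameter and shows the N² scaling for a germanium-like nucleus:

```python
# Toy illustration of the CEvNS coherence condition and N^2 scaling.
# All numbers are round, illustrative values.

HBARC_MEV_FM = 197.3  # hbar*c in MeV*fm

def reduced_wavelength_fm(e_nu_mev):
    """Reduced de Broglie wavelength of an (effectively massless) neutrino."""
    return HBARC_MEV_FM / e_nu_mev

def nuclear_diameter_fm(a):
    """Rough nuclear diameter: 2 x 1.2 fm x A^(1/3)."""
    return 2 * 1.2 * a ** (1 / 3)

A, N = 73, 41   # a germanium-like nucleus: mass number and neutron number
E_NU = 5.0      # MeV, a typical reactor antineutrino energy

print(reduced_wavelength_fm(E_NU))  # ~40 fm, much larger than ...
print(nuclear_diameter_fm(A))       # ~10 fm, the nuclear diameter: fully coherent
print(N ** 2)                       # ~1700: amplitudes over N neutrons add,
                                    # so the cross section scales as N^2, not N
```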

Freedman noted that his proposal may have been an “act of hubris”, because the interaction rate, detector resolution and backgrounds would all pose grave experimental difficulties. His caveat was perspicacious. It took until 2017 for indisputable evidence of CEνNS to emerge at Oak Ridge National Laboratory in the US, where the COHERENT experiment observed the process using neutrinos with a maximum energy of 52 MeV produced in pion decays at rest (CERN Courier October 2017 p8). At these energies, the coherence condition is only partially fulfilled, and nuclear structure still plays a role.

The CONUS+ collaboration now presents evidence for CEνNS in the fully coherent regime. The experiment – one of many launched at nuclear reactors following the COHERENT demonstration – uses reactor electron antineutrinos with energies below 10 MeV, recorded over 119 days of data-taking at the Leibstadt Nuclear Power Plant in Switzerland. The team observed 395 ± 106 CEνNS events compared to a Standard Model expectation of 347 ± 59, corresponding to a statistical significance for the observation of CEνNS of 3.7σ.

It is no wonder that detection took 50 years. The only signal of CEνNS is a gentle nuclear recoil – often compared to the effect of a ping-pong ball striking a supertanker. In CONUS+, the nuclear recoils from CEνNS interactions are detected via the ionisation signal of point-contact high-purity germanium detectors with energy thresholds as low as 160 eV.
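The scale of the challenge follows from elementary two-body kinematics: the maximum recoil energy a neutrino of energy E can impart to a nucleus of mass M is T_max = 2E²/(M + 2E). The sketch below (a back-of-the-envelope illustration, not CONUS+ analysis code) evaluates this for reactor antineutrinos on germanium:

```python
# Maximum nuclear-recoil energy from CEvNS on germanium, T_max = 2E^2/(M + 2E).
# Illustrative numbers only.

M_GE_MEV = 72.6 * 931.5  # average germanium nuclear mass in MeV

def t_max_kev(e_nu_mev, m_mev=M_GE_MEV):
    """Maximum recoil kinetic energy in keV for a neutrino of energy e_nu_mev (MeV)."""
    return 1e3 * 2 * e_nu_mev ** 2 / (m_mev + 2 * e_nu_mev)

for e in (3, 5, 10):  # typical reactor antineutrino energies in MeV
    print(f"E_nu = {e} MeV -> T_max = {t_max_kev(e):.2f} keV")

# Even the largest recoils are only a few keV, and typical recoils are far
# smaller, hence the value of thresholds as low as 160 eV.
```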

The team has now increased the mass of their four semiconductor detectors from 1 to 2.4 kg to provide better statistics and potentially a lower threshold energy. CONUS+ is highly sensitive to physics beyond the Standard Model, says the team, including non-standard interaction parameters, new light mediators and electromagnetic properties of the neutrino such as electrical millicharges or neutrino magnetic moments. Lindner estimates that the CONUS+ technology could be scaled up to 100 kg, potentially yielding 100,000 CEνNS events per year of operation.

Into the neutrino fog

One researcher’s holy grail is another’s curse. In 2024, dark-matter experiments reported entering the “neutrino fog”, as their sensitivity to nuclear recoils crossed the threshold to detect a background of solar-neutrino CEνNS interactions. The PandaX-4T and XENONnT collaborations reported 2.6σ and 2.7σ evidence for CEνNS interactions in their liquid–xenon time projection chambers, based on estimated signals of 79 and 11 interactions, respectively. These were the first direct measurements of nuclear recoils from solar neutrinos with dark-matter detectors. Boron-8 solar neutrinos have slightly higher energies than those detected by CONUS+, and are also in the fully coherent regime.

“The neutrino flux in CONUS+ is many orders of magnitude bigger than in dark-matter detectors,” notes Lindner, who is also co-spokesperson of the XENON collaboration. “This is compensated by a much larger target mass, a larger CEνNS cross section due to the larger number of neutrons in xenon versus germanium, a longer running time and differences in detection efficiencies. Both experiments have in common that all backgrounds of natural or imposed radioactivity must be suppressed by many orders of magnitude such that the CEνNS process can be extracted over backgrounds.”

The current experimental frontier for CEνNS is towards lower energy thresholds, concludes COHERENT spokesperson Kate Scholberg of Duke University. “The coupling of recoil energy to observable energy can be in the form of a dim flash of light picked up by light sensors, a tiny zap of charge collected in a semiconductor detector, or a small thermal pulse observed in a bolometer. A number of collaborations are pursuing novel technologies with sub-keV thresholds, among them cryogenic bolometers. A further goal is measurement over a range of nuclei, as this will test the SM prediction of an N² dependence of the CEνNS cross section. And for higher-energy neutrino sources, for which the coherence is not quite perfect, there are opportunities to learn about nuclear structure. Another future possibility is directional recoil detection. If we are lucky, nature may give us a supernova burst of CEνNS recoils. As for societal applications, CEνNS has promise for nuclear-reactor monitoring for nonproliferation purposes, due to its large cross section and an interaction threshold below the 1.8 MeV required for inverse beta decay.”

Einstein Probe detects exotic gamma-ray bursts

Supernovae are among the best-known astrophysical phenomena, yet the energies involved in these powerful explosions are dwarfed by those of gamma-ray bursts (GRBs). These extragalactic events are the most powerful electromagnetic explosions in the universe and play an important role in its evolution. First detected in 1967, they consist of a bright pulse of gamma rays lasting from several seconds to several minutes, followed by an afterglow that can be measured from X-ray down to radio energies for days or even months. Thanks to 60 years of observations by a range of detectors, we now know that long-duration GRBs are an extreme version of a core-collapse supernova, in which the death of the heavy star is accompanied by two powerful relativistic jets. If such a jet points towards Earth, gamma-ray photons can be detected even for GRBs at distances of billions of light years. Detailed observations have also shown the afterglow to be synchrotron emission produced as the jet crashes into the interstellar medium.

Dedicated gamma-ray satellites have detected the gamma-ray components of more than 10,000 GRBs, and the most common models associate the longer bursts with supernovae. This association has been confirmed by detections of afterglow emission coinciding with supernova events in other galaxies. The exact characteristics that cause some heavy stars to produce a GRB remain, however, poorly understood, and many open questions remain concerning the nature and origin of the relativistic jets and how the gamma rays are produced within them.

While the emission has been studied extensively in gamma rays, detections at soft X-ray energies are limited. This changed in early 2024 with the launch of the Einstein Probe (EP) satellite. EP is a novel X-ray telescope developed by the Chinese Academy of Sciences (CAS) in collaboration with ESA, the Max Planck Institute for Extraterrestrial Physics and the Centre National d’Études Spatiales. It is unique in its wide field of view (1/11th of the sky) in soft X-rays, made possible by sophisticated X-ray optics. As GRBs occur at random positions in the sky at random times, the large field of view increases the chances of observing them. Within its first year EP detected several GRB events, most of which challenge our understanding of these phenomena.

One of these occurred on 14 April 2024 and consisted of a bright flash of X-rays lasting about 2.5 minutes. The event was also observed by ground-based optical and radio telescopes, which were alerted to its location in the sky by EP. These observations at lower photon energies were consistent with a weak afterglow together with the signatures of a relatively standard supernova-like event. The supernova emission showed it to originate from a star that, prior to its death, had already shed its outer layers of hydrogen and helium. Along with the spectrum detected by EP, the detection of an afterglow indicates the existence of a relativistic jet. The overall picture is therefore consistent with a GRB. However, a crucial piece was missing: a gamma-ray component.

In addition, the emission spectrum observed by EP is significantly softer, peaking at keV rather than the hundreds of keV typical of GRBs. The results hint at an explosion that produced a relativistic jet which – for unknown reasons – was not energetic enough to generate the standard gamma-ray emission. The progenitor star therefore appears to bridge the stellar populations that produce “simple” core-collapse supernovae and those that produce GRBs.

Another event, detected on 15 March 2024, produced soft X-ray emission in six separate episodes spread over 17 minutes. Here, a gamma-ray component was detected by NASA’s Swift BAT instrument, confirming it to be a GRB. Unlike in any other GRB, however, the gamma-ray emission started long after the onset of the X-ray emission. This absence of gamma rays in the early stages is difficult to reconcile with standard emission models, in which the emission comes from a single uniform jet and the highest energies are emitted at the start, when the jet is at its most energetic.

In their publication in Nature Astronomy, the EP collaboration suggests that the early X-ray emission comes either from shocks produced by the supernova explosion itself or from weaker relativistic jets preceding the main powerful jet. Other proposed explanations invoke complex jet structures and posit that EP observed the jet far from its axis: matter at the centre of the jet moves with a large Lorentz factor, while material at the edges moves significantly more slowly, producing a lower-energy, longer-lasting emission that would have been undetectable before the launch of EP.

Overall, the two detections indicate that the GRBs detected over the past 60 years, whose emission was dominated by gamma rays, are only a subset of a more complex phenomenon. At a time when two of the most important instruments in GRB astronomy of the past two decades, NASA’s Fermi and Swift missions, are proposed to be switched off, EP is taking on an important role and opening the window to soft X-ray observations.

CP symmetry in diphoton Higgs decays

CMS figure 1

In addition to giving mass to elementary particles, the Brout–Englert–Higgs mechanism provides a testing ground for the fundamental symmetries of nature. In a recent analysis, the CMS collaboration searched for violations of charge–parity (CP) symmetry in the decays of Higgs bosons into two photons. The results set some of the strongest limits to date on anomalous Higgs-boson couplings that violate CP symmetry.

CP symmetry is particularly interesting as violations reveal fundamental differences in the behaviour of matter and antimatter, potentially explaining why the former appears to be much more abundant in the observed universe. While the Standard Model predicts that CP symmetry should be violated, the effect is not sufficient to account for the observed imbalance, motivating searches for additional sources of CP violation. CP symmetry requires that the laws of physics remain the same when particles are replaced by their corresponding antiparticles (C symmetry) and their spatial coordinates are reflected as in a mirror (P symmetry). In 1967, Andrei Sakharov established CP violation as one of three necessary requirements for a cosmic imbalance between matter and antimatter.

The CMS collaboration probed Higgs-boson interactions with electroweak bosons and gluons, using decays into two energetic photons. This final state permits particularly precise measurements: photons are well reconstructed thanks to the energy resolution of the CMS electromagnetic calorimeter, and backgrounds can be accurately estimated. The analysis employed 138 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of 13 TeV and focused on two main channels. Electroweak production of the Higgs boson, via vector boson fusion (VBF) or in association with a W or Z boson (VH), tests the Higgs boson’s couplings to electroweak gauge bosons. Gluon fusion, which occurs through loops dominated by the top quark, is sensitive to possible CP-violating interactions with fermions. A full angular analysis was performed to separate different coupling hypotheses, exploiting both the kinematic properties of the photons from the Higgs boson decay and the particles produced alongside it.

The matrix element likelihood approach (MELA) was used to minimise the number of observables, while retaining all essential information. Deep neural networks and boosted decision trees classified events based on their topology and kinematic properties, isolating signal-like events from background or alternative new-physics scenarios. Events were then grouped into analysis categories, each optimised to enhance sensitivity to anomalous couplings for a specific production mode.
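As a purely schematic illustration of the categorisation step – not the CMS analysis, whose classifiers, inputs and training samples are far more sophisticated – a toy boosted-decision-tree classifier acting on invented observables might look like this:

```python
# Toy event categorisation with a boosted decision tree (schematic only).
# The "observables" here are random placeholders, not CMS analysis variables.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 3))  # stand-ins for kinematic observables
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = HistGradientBoostingClassifier(max_iter=200).fit(X_train, y_train)
print("toy signal/background separation accuracy:", clf.score(X_test, y_test))
```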

The data favour the Standard Model configuration, with no significant deviation from its predictions (see figure 1). By placing some of the most stringent constraints yet on CP-violating interactions between the Higgs boson and vector bosons, the study highlights how precise measurements in simple final states can yield insights into the symmetries governing particle physics. With the upcoming data from Run 3 of the LHC and the High-Luminosity LHC, CMS is well positioned to push these limits further and potentially uncover hidden aspects of the Higgs sector.

Charming energy–energy correlators

ALICE figure 1

Narrow sprays of particles called jets erupt from high-energy quarks and gluons. The ALICE collaboration has now measured so-called energy–energy correlators (EECs) of charm-quark jets for the first time – revealing new details of the elusive “dead cone” effect.

Unlike in quantum electrodynamics, the quantum chromodynamics (QCD) coupling constant gets weaker at higher energies – a feature known as asymptotic freedom. This allows high-energy partons to scatter and radiate additional partons, forming showers. As the energy is shared among more and more products and falls towards the characteristic QCD confinement scale, the interactions become strong enough to bind the partons into colour-neutral hadrons. The structure, energy profile and angular distribution of particles within the jets bear traces of the initial collision and the parton-to-hadron transitions, making them powerful probes of both perturbative and non-perturbative QCD effects. To understand the interplay between these two regimes, researchers track how jet properties vary with the mass and colour of the initiating partons.
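The running of the coupling can be made concrete with the standard one-loop formula, αs(Q) = αs(MZ) / [1 + αs(MZ) b₀ ln(Q/MZ)/(2π)] with b₀ = 11 − 2nf/3. The sketch below is a generic textbook illustration, not part of the ALICE analysis, and holds the number of flavours fixed for simplicity:

```python
# One-loop running of the strong coupling: a textbook illustration of
# asymptotic freedom (fixed nf = 5 for simplicity).
import math

ALPHA_S_MZ = 0.118   # world-average strong coupling at the Z mass
MZ = 91.19           # GeV
NF = 5               # active quark flavours (held fixed here)
B0 = 11 - 2 * NF / 3

def alpha_s(q_gev):
    """One-loop strong coupling evolved from the Z mass to scale q_gev (GeV)."""
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * B0 / (2 * math.pi) * math.log(q_gev / MZ))

for q in (5, 91.19, 1000):
    print(f"alpha_s({q} GeV) = {alpha_s(q):.3f}")  # shrinks as the scale grows
```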

Due to the gluon’s larger colour charge, QCD predicts gluon-initiated jets to be broader and contain more low-momentum particles than those from quarks. Additionally, the significant mass of heavy quarks should suppress collinear gluon emission, inducing the so-called “dead-cone” effect at small angles. These expectations can be tested by comparing jet substructure across flavours. A key observable for this purpose is the EEC, which measures how energy is distributed within a jet as a function of the angular separation R_L between particle pairs. The large-R_L region is dominated by early partonic splittings, reflecting perturbative dynamics, while a small R_L value corresponds to later radiation shaped by final-state hadrons. The intermediate-R_L region captures the transition where hadronisation begins to affect the jet structure. This characteristic shape enables the separation of perturbative and non-perturbative regimes, revealing flavour-dependent dynamics of jet formation and hadronisation.
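In practice, the two-point EEC of a jet is built by summing over pairs of constituents, weighting each pair by the product of its transverse momenta normalised to the jet momentum, and binning in the pair’s angular separation. The sketch below is a minimal toy version of that construction – not the ALICE analysis code; the function and the toy constituents are invented for illustration:

```python
import numpy as np

def eec_histogram(pt, eta, phi, bins):
    """Two-point energy-energy correlator of a single jet.

    pt, eta, phi: arrays of constituent kinematics; bins: R_L bin edges.
    Each constituent pair enters with weight pT_i * pT_j / pT_jet^2.
    """
    pt, eta, phi = map(np.asarray, (pt, eta, phi))
    pt_jet = pt.sum()
    weights, angles = [], []
    for i in range(len(pt)):
        for j in range(i + 1, len(pt)):
            dphi = np.angle(np.exp(1j * (phi[i] - phi[j])))  # wrap to (-pi, pi]
            angles.append(np.hypot(eta[i] - eta[j], dphi))   # R_L of the pair
            weights.append(pt[i] * pt[j] / pt_jet ** 2)
    return np.histogram(angles, bins=bins, weights=weights)

# toy jet with three constituents
hist, edges = eec_histogram([20.0, 10.0, 5.0], [0.0, 0.1, -0.2],
                            [0.0, 0.15, -0.1], bins=np.linspace(0, 0.4, 9))
print(hist)
```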

The ALICE collaboration measured the EEC for charm-quark jets tagged with D⁰ mesons, reconstructed via the D⁰ → K⁻π⁺ decay mode (branching ratio 3.93 ± 0.04%), in proton–proton collisions at a centre-of-mass energy of 13 TeV. Jets are reconstructed from charged-particle tracks using the anti-kT algorithm, which clusters particles in momentum space with a resolution parameter R = 0.4.

At low transverse momentum, where the effect of the charm-quark mass is most prominent, the EEC amplitude is found to be significantly suppressed for charm jets relative to inclusive jets initiated by light quarks and gluons. The difference is most pronounced at small angles, as expected from the dead-cone effect (see figure 1). Despite the sizable charm-quark mass, the position of the distribution’s peak remains similar for the two populations, pointing to a complex mix of parton-flavour effects in the shower evolution and enhanced non-perturbative contributions such as hadronisation. Perturbative QCD calculations reproduce the general shape at large R_L but show tension near the peak, indicating the need for theoretical improvements for heavy-quark jets. The upward trend in the ratio of charm to inclusive jets as a function of R_L, reproduced by PYTHIA 8, suggests that the two classes of jets differ in their fragmentation.

This first measurement of the heavy-flavour jet EEC helps disentangle perturbative and non-perturbative QCD effects in jet formation, constraining theoretical models. Furthermore, it provides an essential vacuum baseline for future studies in heavy-ion collisions, where the quark–gluon plasma is expected to alter jet properties.
