
The many flavours of LHCb

The 15th edition of the Implications of LHCb Measurements and Future Prospects annual workshop took place at CERN from 4 to 7 November 2025, attracting more than 180 participants from the LHCb experiment and the theoretical physics community.

Peilian Li (UCAS) described how, thanks to the upgraded, fully software-based trigger, the dataset gathered in 2025 alone already exceeds the total collected in Run 1 and Run 2 combined. The future of LHCb was also discussed, with prospects for an upgrade targeting the high-luminosity phase of the LHC, where timing information will be introduced. Theorist Monika Blanke (KIT) concluded the workshop with a keynote on the status of the B-decay anomalies, highlighting the importance of LHCb measurements in constraining new-physics models.

Much attention went to the long-standing discrepancies between data and theory in lepton–flavour–universality tests – such as the measurement of the R(D) and R(D*) ratios in semileptonic B-meson decays. Marzia Bordone (UZH) gave a theoretical overview of the determination of the form factors describing B → D* transitions, highlighting discrepancies in the determination of some form-factor shapes, both among different lattice–QCD determinations and within extractions from different experimental datasets.

A new combination of all LHCb measurements of the CKM angle γ, which quantifies a key CP-violating phase in b-hadron decays, yielded an overall value of (62.8 ± 2.6)°. The collaboration reported flagship electroweak precision measurements of the effective weak mixing angle and the W-boson mass, as well as the first dedicated measurement of the Z-boson mass at the LHC.


An exciting focus for 2026 will be the search for the double open-beauty tetraquark Tbb(bbud) – the first accessible exotic hadron expected to be stable against strong decay (CERN Courier November/December 2024 p34). Saša Prelovšek (UL) presented the first lattice-QCD calculation of the state’s electromagnetic form factors, allowing her to rule out an interpretation of the tetraquark as a loosely bound B–B* molecule.

The legacy Run 1+2 B → K*μ+μ– angular analysis, based on a dataset roughly twice as large as that used in previous analyses, was presented. Previously seen tensions were confirmed with much increased precision and new observables were reported for the first time. Theorists Arianna Tinari (UZH), Giuseppe Gagliardi (INFN Rome3) and Nazila Mahmoudi (IP2I, CERN) reviewed the status of the non-local hadronic contributions that could affect this channel, discussing how different theoretical approaches can be used to determine these contributions and how compatible the current results are with theoretical expectations.

Zhengchen Lian (THU, INFN Firenze) showed how the characteristic “bowling-pin” deformation of neon nuclei was recently observed using the SMOG2 apparatus, which allows LHC protons to collide with a variety of light fixed-target nuclei injected into the beampipe (CERN Courier November/December 2025 p8).

Tokyo targets the two infinities

From 17 to 23 November, the second International Conference on Physics of the Two Infinities (P2I) gathered nearly 200 participants on the historic Hongo campus of the University of Tokyo. Organised by the ILANCE laboratory, a joint initiative of CNRS and the University of Tokyo, the P2I series aims to bridge the largest and smallest scales of the universe. In this spirit, the 2025 programme drew together results from cosmological surveys, particle colliders and neutrino detectors.

Two cosmological tensions will play a key role in the coming decades. One concerns how strongly matter clumps together to form structures such as galaxy clusters and filaments. The other involves the universe’s expansion rate, H0. In both cases, measurements based on early-universe data differ from those conducted in the local universe. The discrepancy in H0 has now reached about 6σ (CERN Courier March/April 2025 p28). Independent methods, such as strong lensing, lensed supernovae and gravitational-wave standard sirens, are essential to confirm or resolve this discrepancy. Several of these techniques are expected to reach 1% precision in the near future. More broadly, upcoming large-scale cosmological missions, including Euclid, DESI, LiteBIRD and the Legacy Survey of Space and Time (LSST) – which released its world-leading camera’s first images in June – are set to deliver important insights into inflation, dark energy and the cosmological effects of neutrino masses.

The dark universe featured prominently. Participants discussed an excess of gamma rays from the galactic centre detected by the Fermi telescope, which is consistent with the self-annihilation of weakly interacting massive particles (WIMPs) and may represent one of the strongest experimental hints for dark matter. Recent analyses of more than 40 million galaxies and quasars in DESI’s Data Release 2 show that fits to baryon acoustic oscillation distances deviate from the standard ΛCDM model at the 2.8 to 4.2σ level, with a dynamical dark energy providing a better match. Euclid, having identified approximately 26 million galaxies out to over 10.5 billion light-years, is poised to constrain the nature of dark matter by combining measurements of large-scale structure, gravitational-lensing statistics, small-scale substructure, dwarf-galaxy populations and stellar streams. Meanwhile, experiments such as XENONnT and PandaX-4T are pursuing a mature direct-detection programme.

Future colliders were a central topic at P2I. While new physics has long been expected to emerge near the TeV scale to stabilise the Higgs mass, the Standard Model remains in excellent agreement with current data, and precision flavour measurements constrain many possible new particles to lie at much higher energies. The LHC collaborations presented a flurry of new results and superb prospects for the high-luminosity phase, alongside new results from Belle II and NA64. Looking ahead, a major future collider will be essential for probing the laws connecting particle physics with the earliest moments of the universe.

The conference hosted the first-ever public presentation of JUNO’s experimental results, only a few hours after their appearance on arXiv. Despite relying on only 59.1 days of data – barely two months of data collection – the experiment has already demonstrated excellent detector performance and produced remarkably precise, competitive measurements of solar-neutrino oscillations that are fully consistent with previous results. Three major questions in neutrino physics remain unresolved: the ordering of the neutrino masses, the value of the CP-violating parameter and the octant of the mixing angle θ23. The next generation of experiments, including JUNO, DUNE, Hyper-K and upgraded neutrino telescopes, is specifically designed to answer these questions. Meanwhile, DESI has reported a new, stringent upper limit of 0.064 eV on the sum of neutrino masses within a flat ΛCDM framework – the tightest cosmological constraint to date.


New data from the JWST, Subaru and ALMA telescopes revealed an unexpectedly rich population of galaxies only 200–300 million years after the Big Bang. Many of these early systems appear to grow far more rapidly than predicted by the ΛCDM model, raising questions such as whether star-formation efficiency was significantly higher in the early universe or whether we currently underestimate the growth of dark-matter halos (CERN Courier November/December 2025 p11). These data also highlighted a surprisingly abundant population of high-redshift active galactic nuclei, with important implications for black-hole seeding and early supermassive black-hole formation.

A comprehensive review of the rapidly evolving field of supernova and transient astronomy was also presented. The mechanisms behind core-collapse supernovae remain only partially understood, and the thermonuclear explosions of white dwarfs continue to pose open questions. At the same time, observations keep identifying new transient classes whose physical origins are still under investigation.

Important insights into protostars, discs and planet formation were also discussed. Observations show that interstellar bubbles and molecular filaments shape the formation of stars and planets across a vast range of physical scales. More than 6000 exoplanets have been detected to date, from hot Jupiters to super-Earths and ocean planets, many without counterparts in our Solar System.

With more than 150 new gravitational-wave (GW) candidates now identified, including extreme ones with rapid spins and highly asymmetric component masses, GW astronomy offers outstanding opportunities to investigate gravity in the strong-field regime. Notably, the GW250114 event was shown to obey Hawking’s area law, which states that the total horizon area cannot decrease during a black-hole merger, providing strong confirmation of general relativity in the most nonlinear regime. Next-generation observatories such as the Einstein Telescope, Cosmic Explorer and LISA will allow detailed black-hole spectroscopy and impose tighter constraints on alternative theories of gravity.

Although the transition to multi-messenger astronomy began in the late 20th century, the first binary neutron-star merger, GW170817, remains its landmark event. An extraordinary global effort – more than 70 teams and 100 instruments pointed at the event for years – yielded several historic firsts: the first gravitational-wave “standard siren” measurement of the Hubble constant, the first association between a neutron-star merger and a short gamma-ray burst, the first observed kilonova, confirming the astrophysical site of heavy-element production, and the first direct comparison of the speed of gravity and light. Very-high-energy gamma-ray astronomy (HESS, MAGIC and VERITAS) also reported impressive results, with more than 300 sources observed above 100 GeV, and bright prospects, as the Cherenkov Telescope Array Observatory (CTAO) is about to start operations.

Tau leptons join the hunt


As the Standard Model (SM) withstands increasingly stringent experimental tests, rare decays remain a prime hunting ground for new physics. In a recent paper, the LHCb collaboration reports its first dedicated searches for the decays B0 → K+π–τ+τ– and Bs0 → K+K–τ+τ–, pushing hadron-collider flavour physics further into tau-rich territory.

At the quark level, the B0 → K+π–τ+τ– and Bs0 → K+K–τ+τ– decays proceed via the flavour-changing process b → sτ+τ–, which is highly suppressed in the SM. The expected branching fractions of around 10–7 place these decays well below current experimental sensitivity. However, many new-physics scenarios, such as those involving leptoquarks or additional Z′ bosons, predict mediators that couple preferentially to third-generation leptons.

The tensions with the SM observed in the ratios of semileptonic branching fractions R(D(*)) and in b → sμ+μ– processes could, for example, result in an enhancement of b → sτ+τ– decays. Yet, despite its potential to reveal signs of new physics, the tau sector remains largely unexplored.

The LHCb analysis considered only tau decays to muons, in order to exploit the detector’s excellent muon-identification systems. Reconstructing final states with tau leptons at a hadron collider is notoriously challenging, particularly when relying on leptonic decays such as τ+ → μ+νμν̄τ, which leave multiple neutrinos unreconstructed. Using the Run 2 dataset of about 5.4 fb–1 of proton–proton collisions, the collaboration applied machine-learning techniques that use topological and isolation features to separate the suppressed tau-pair signals from the background.

Due to the large amount of missing energy in the final state, the B-meson mass cannot be fully reconstructed; the output of the machine-learning algorithm was instead fitted to search for a b → sτ+τ– component. The search was primarily limited by the size of the control samples used to constrain the background shapes – a limitation that will be alleviated by the larger datasets expected in future LHC runs.

No significant signal excess was observed in either the K+π–τ+τ– or the K+K–τ+τ– final state. Upper limits on the branching fractions were then established in bins of the dihadron invariant masses, allowing separate exploration of regions dominated by dihadron resonances and those expected to be primarily non-resonant.


When interpreted in terms of resonant modes, the limits are B(B0 → K*(892)0τ+τ–) < 2.8 × 10–4 and B(Bs0 → φ(1020)τ+τ–) < 4.7 × 10–4 at the 95% confidence level. The B0 → K*(892)0τ+τ– limit improves on previous bounds by approximately an order of magnitude, while the limit on Bs0 → φ(1020)τ+τ– is the first ever established.

These results represent the world’s most stringent limits on b → sτ+τ– transitions. The analysis lays essential groundwork for future searches, as the larger LHCb datasets from LHC Run 3 and beyond are expected to open a new frontier in measurements of rare b-hadron transitions involving heavy leptons.

With the upgraded detector and the novel fully software-based trigger, the efficiency in selecting low-pT muons – and consequently the tau leptons from which they originate – will be much improved. Sensitivity to b → sτ+τ– transitions is therefore expected to increase substantially in the coming years.

Strangeness at its extremes

ALICE figure 1

Strangeness production in high-energy hadron collisions is a powerful tool for exploring quantum chromodynamics (QCD). Unlike up and down quarks, strange quarks are not present as valence quarks in the colliding protons and neutrons, and must therefore be created in the interaction. They are, however, still light enough to be abundantly produced at the LHC.

Over the past 15 years, the ALICE collaboration has shown that the relative abundance of strange to non-strange hadrons grows with event multiplicity in all collision systems. In particular, high-multiplicity proton–proton (pp) collisions display a significant strangeness enhancement, reaching saturation levels similar to those in heavy-ion collisions. In one of the most precise studies of strange-to-non-strange hadron production to date, the ALICE collaboration has now reported results from pp and lead–lead collisions at the LHC.

Strange hadrons (Ks0, Λ, Ξ, Ω) were reconstructed from their weak-decay topologies. Candidates were then selected by applying geometrical and kinematic cuts, estimating and subtracting backgrounds, and correcting the resulting distributions using detector-response simulations. The analyses were carried out at a centre-of-mass energy per nucleon pair of 5.02 TeV and span a wide multiplicity range, from 2 to 2000 charged particles at mid-rapidity.

To better understand how strangeness is produced, the collaboration has taken a significant step by measuring the probability distribution of forming a specific number of strange particles of the same species per event. This study, based on event-by-event strange-particle counting, moves beyond average yields and probes higher orders in the strange-particle production probability distribution. To account for the response of the detector, each candidate is assigned a probability of being genuine rather than background, and a Bayesian unfolding method iteratively corrects for particles that were missed or misidentified to reconstruct the true counts. This provides a novel technique for testing theoretical strangeness-production mechanisms, particularly in events characterised by a significant imbalance between strange and non-strange particles.
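The unfolding step lends itself to a compact illustration. Below is a minimal sketch of iterative Bayesian (D’Agostini-style) unfolding of the general kind described above; the response matrix, efficiencies and toy counts are invented placeholders, not the ALICE detector response:

```python
import numpy as np

def bayesian_unfold(measured, response, n_iter=4):
    """Iterative Bayesian unfolding.
    measured[j]  : observed counts in measured bin j.
    response[j,i]: P(observed in bin j | true bin i); column sums below 1
                   encode detection inefficiency."""
    n_true = response.shape[1]
    prior = np.full(n_true, 1.0 / n_true)      # flat starting prior
    eff = response.sum(axis=0)                 # efficiency per true bin
    for _ in range(n_iter):
        joint = response * prior               # P(j | i) P(i)
        posterior = joint / joint.sum(axis=1, keepdims=True)  # Bayes: P(i | j)
        unfolded = (posterior * measured[:, None]).sum(axis=0) / eff
        prior = unfolded / unfolded.sum()      # updated prior for the next pass
    return unfolded

# Toy example: three true bins smeared into three measured bins
R = np.array([[0.80, 0.10, 0.00],
              [0.15, 0.75, 0.10],
              [0.00, 0.10, 0.85]])
true = np.array([1000.0, 500.0, 100.0])
print(bayesian_unfold(R @ true, R))            # approaches [1000, 500, 100]
```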

Exploiting a large dataset of pp collisions, the probability of producing n particles of a given species S (S = Ks0, Λ, Ξ or Ω) per event, P(nS), could be determined up to a maximum of nS = 7 for Ks0, nS = 5 for Λ, nS = 4 for Ξ and nS = 2 for Ω (see figure 1). An increase of P(nS) with charged-particle multiplicity is observed, becoming more pronounced for larger n, as reflected by the growing separation between the curves corresponding to low- and high-multiplicity classes in the high-n tail of the distributions.

The average production yield of n particles per event can be calculated from the P(nS) distributions, taking into account all possible combinations that result in a given multiplet. This makes it possible to compare events with the same or different overall strange-quark content that hadronise into various combinations of hadrons in the final state. While the ratio of Ω triplets to single Ks0 production shows an extreme strangeness-enhancement pattern, spanning up to two orders of magnitude across multiplicity, comparing hadron combinations that differ in up- and down-quark content but share the same total s-quark content (for instance, Ω singlets compared to Λ triplets) helps isolate the part of the enhancement unrelated to strangeness.
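The combinatorial step is simple enough to sketch: an event containing k particles of a species holds C(k, n) distinct n-plets, so the average multiplet yield follows from P(n) by summation. The probabilities below are invented placeholders, not ALICE data:

```python
from math import comb

def mean_multiplets(p_n, n):
    """Average number of distinct n-plets per event, given P(0), P(1), ..."""
    return sum(comb(k, n) * p for k, p in enumerate(p_n))

p_toy = [0.97, 0.025, 0.005]          # hypothetical P(0), P(1), P(2)
print(mean_multiplets(p_toy, 1))      # average yield of singlets = <n>
print(mean_multiplets(p_toy, 2))      # average yield of pairs
```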

Comparisons with state-of-the-art phenomenological models show that this new approach greatly enhances sensitivity to the underlying physics mechanisms implemented in different event generators. Together with the traditional strange-to-pion observables, the multiplicity-differential probability distributions of strange hadrons provide a more detailed picture of how strange quarks are produced and hadronise in high-energy collisions, offering a stringent benchmark for the phenomenological description of non-perturbative QCD.

Introduction to neutrino and particle physics

Neutrino physics is a vibrant field of study, with spectacular recent advances. To this day, neutrino oscillations remain the only experimental evidence of physics beyond the Standard Model and, 25 years after their discovery, breathtaking progress has been achieved in both theory and experiment. Giulia Ricciardi’s new textbook provides a timely resource in this fast-developing field.

Entering this exciting field of research can be intimidating, given the breadth of topics that need to be mastered. As well as particle physics, neutrinos touch astroparticle physics, cosmology, astrophysics, nuclear physics and geophysics, and many neutrino textbooks assume advanced knowledge of quantum field theory and particle theory. Ricciardi achieves a brilliant balance by providing a solid foundation in these areas, alongside a comprehensive overview of neutrino theory and experiment. This sets her book apart from most other literature on the subject and makes it a precious resource for newcomers and experts alike. She provides a self-contained introduction to group theory, symmetries, gauge theories and the Standard Model, with an approach that is both accessible and scientifically rigorous, putting the emphasis on understanding key concepts rather than abstract formalisms.

With the theoretical foundations in place, Ricciardi then turns to neutrino masses, neutrino mixing, astrophysical neutrinos and neutrino oscillations. Dirac, Majorana and Dirac-plus-Majorana mass terms are explored, alongside the “see-saw” mechanism and its possible implementations. A full chapter is devoted to neutrino oscillations in vacuum and in matter, preparing the reader to explore neutrino oscillations in experiments, first from natural sources such as the Sun, supernovae, the atmosphere and cosmic neutrinos; a subsequent chapter then covers reactor and accelerator neutrinos, giving a detailed overview of the key theoretical and experimental issues. Ricciardi avoids a common omission in neutrino textbooks by addressing neutrino–nucleus interactions – a fast-developing topic in theory and a crucial aspect of interpreting current and future experiments. The book concludes with a look at current research and future prospects, including a discussion of neutrino-mass measurements and neutrinoless double-beta decay.

The clarity with which Ricciardi links theoretical concepts to experimental observations is remarkable. Her book is engaging and eminently enjoyable. I highly recommend it.

If Einstein had known

How would Einstein have reacted to Bell’s theorem and the experimental results derived from it? Alain Aspect’s new French-language book Si Einstein avait su (If Einstein had known) can be recommended to anybody interested in the Einstein–Bohr debates about quantum mechanics, how a CERN theorist, John Stewart Bell (1928–1990), weighed in in 1964, and how experimentalists converted Bell’s idea into ingenious physical experiments. Aspect shared the 2022 Nobel Prize in Physics with John F Clauser and Anton Zeilinger for this work.

The core part of Aspect’s book covers his own contributions to the experimental tests of Bell’s inequality from 1975 to 1985. He gives a very personal account of his involvement as an experimental physicist in this matter, starting soon after he visited Bell at CERN in spring 1975 for advice concerning his French Thèse d’État. With anecdotes that give the reader the impression of sitting next to the author and listening to his stories, Aspect recounts how, in 1975, captivated by Bell’s work, he set up experiments in underground rooms at the Institut d’Optique in Orsay to test hidden-variable theories. He explains his experiments in detail with diagrams and figures from his original publications as well as images of the apparatus used. From 1981, and for several years to come, it was Aspect’s experiments that came closest to Bell’s 1964 prescription for testing the inequality. Aspect defended his thesis in 1983 in a packed auditorium with illustrious examiners such as J S Bell, C Cohen-Tannoudji and B d’Espagnat. Not long afterwards, Cohen-Tannoudji invited him to the Collège de France and the Paris ENS to work on the laser cooling and manipulation of atoms – a quite different subject. At that time, Aspect didn’t see any point in closing some of the remaining loopholes in his experiments.

To prepare the terrain for his story, Aspect first tells the history of quantum mechanics from 1900 to 1935. He begins with Planck’s blackbody radiation (1900), Einstein’s description of the photoelectric effect (1905) and of the heat capacity of solids (1907), the wave–particle duality of light, the first Solvay Congress (1911), Bohr’s atomic model (1913) and matter–radiation interaction according to Einstein (1916). He then covers the Einstein–Bohr debates at the Solvay congresses of 1927 and 1930 on the interpretation of the probabilistic aspects of quantum mechanics.

Aspect then turns to the Einstein, Podolsky, Rosen (EPR) paper of 1935, which discusses a gedankenexperiment involving two entangled quantum-mechanical particles. Whereas the previous Einstein–Bohr debates ended with convincing arguments by Bohr refuting Einstein’s point of view, Bohr did not come up with a clear answer to Einstein’s objection of 1935, namely that he considered quantum mechanics to be incomplete. In 1935 and the years that followed, most physicists considered the Einstein–Bohr debate uninteresting and purely philosophical; it had practically no influence on the successful application of quantum mechanics. Between 1935 and 1964, the EPR subject lay nearly dormant, apart from David Bohm’s interventions during the 1950s. In 1964 Bell took up the EPR paradox, which had been advanced as an argument that quantum mechanics should be supplemented by additional variables (CERN Courier July/August 2025 p21).

Aspect describes clearly and convincingly how Bell entered the scene and how the inequality bearing his name spurred experimentalists to get involved: experiments with polarisation-entangled photons and their correlations could decide whether Einstein’s or Bohr’s view of quantum mechanics was correct. Bell’s discovery transferred the Einstein–Bohr debate from epistemology to the realm of experimental physics. At the end of the 1960s the first experiments based on Bell’s inequality started to take shape. Aspect describes how these analysed the polarisation correlations of entangled photons at a separation of a few metres. He discusses their difficulties and limitations, starting with the experiments launched by Clauser and collaborators.

In the final chapter, covering 1985 to the present, Aspect explains why he decided not to continue his research with entangled photons and to switch subject. His opinion was that the technology of the time was not ripe enough to close some of the remaining loopholes in his experiments – loopholes of a type that Bell considered less important. Aspect was convinced that if quantum mechanics were faulty, one would have seen indications of that in his experiments. It took until 2015 for two of the loopholes left open by Aspect’s experiments (the locality and detection loopholes) to be simultaneously closed. Yet, as Aspect notes, no experiment, however ideal, can be said to be totally loophole-free. The final chapter also covers more philosophical aspects of quantum non-locality and speculations about how Einstein would have reacted to the violation of Bell’s inequalities. In complementary sections, Aspect discusses the no-cloning theorem and technological applications of quantum optics such as quantum cryptography à la Ekert, quantum teleportation and quantum random-number generators.

Who will profit from reading this book? It should first be said that it is not a quantum-mechanics or quantum-optics textbook. Most of the material is written in such a way that it will be accessible and enjoyable to the educated layperson. For the more curious reader, supplementary sections cover physical aspects in deeper detail, and the book cites more than 80 original references. Aspect’s long experience and honed pedagogical skills are evident throughout. It is an engaging and authoritative introduction to one of the most profound debates in modern physics.

Alchemy by pure light

New results in fundamental physics can be a long time coming. Experimental discoveries of elementary particles have often occurred only decades after their prediction by theory.

Still, the discovery of the fundamental particles of the Standard Model has been speedy in comparison to another longstanding quest in natural philosophy: chrysopoeia, the medieval alchemists’ dream of transforming the “base metal” lead into the precious metal gold. This may have been motivated by the observation that the dull grey, relatively abundant metal lead is of similar density to gold, which has been coveted for its beautiful colour and rarity for millennia.

The quest goes back at least to the mythical, or mystical, notion of the philosopher’s stone and Zosimos of Panopolis around 300 CE. Its evolution, in various cultures, through medieval times and up to the 19th century, is a fascinating thread in the emergence of modern empirical science from earlier ways of thinking. Some of the leaders of this transition, such as Isaac Newton, also practised alchemy. While the alchemists pioneered many of the techniques of modern chemistry, it was only much later that it became clear that lead and gold are distinct chemical elements and that chemical methods are powerless to transmute one into the other.

With the dawn of nuclear physics in the 20th century, it was discovered that elements could transform into others through nuclear reactions, either naturally by radioactive decay or in the laboratory. In 1940, gold was produced at the Harvard Cyclotron by bombarding a mercury target with fast neutrons. Some 40 years ago, tiny amounts of gold were produced in nuclear reactions between beams of carbon and neon, and a bismuth target at the Bevalac in Berkeley. Very recently, gold isotopes were produced at the ISOLDE facility at CERN by bombarding a uranium target with proton beams (see “Historic gold” images).

Historic gold

Now, tucked away discreetly in the conclusions of a paper recently published by the ALICE collaboration, one can find the observation, originating from Igor Pshenichnov, Uliana Dmitrieva and Chiara Oppedisano, that “the transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC.”

ALICE has finally measured the transmutation of lead into gold, not via the crucibles and alembics of the alchemists, nor even by the established techniques of nuclear bombardment used in the experiments mentioned above, but in a novel and interesting way that has become possible in “near-miss” interactions of lead nuclei at the LHC.

At the LHC, lead has been transformed into gold by light.   

Since the first announcement, this story has attracted considerable attention in the media. Here I would like to put this assertion in scientific context and indicate its relevance in testing our understanding of processes that can limit the performance of the LHC and future colliders such as the FCC.

Electromagnetic pancakes

Any charged particle at rest is surrounded by lines of electric field radiating outwards in all directions. These fields are particularly strong close to a lead nucleus because it contains 82 protons, each with one elementary charge. In the LHC, the lead nuclei travel at 99.999994% of the speed of light, squeezing the field lines into a thin pancake transverse to the direction of motion in the laboratory frame of reference. This compression is so strong that, in the vicinity of the nucleus, we find the strongest magnetic and electric fields known in the universe – trillions of times stronger than even the prodigiously powerful superconducting magnets of the LHC, orders of magnitude greater than the Schwinger limit, where the vacuum polarises, and greater than the magnetic fields found in magnetars, rare rapidly spinning neutron stars. Of course, these fields extend only over a very short time as one nucleus passes by the other. Quantum mechanics, via a famous insight of Fermi, Weizsäcker and Williams, tells us that this electromagnetic flash is equivalent to a pulse of quasi-real photons whose intensity and energy are greatly boosted by the large charge and the relativistic compression.
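As a quick check of the compression quoted above, the beam speed implies a Lorentz factor of a few thousand; a minimal sketch in the lab frame (numbers rounded from the speed given in the text):

```latex
\beta = 0.99999994
\quad\Rightarrow\quad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}} \approx 2900 ,
\qquad
E_{\perp}^{\text{lab}} = \gamma\, E_{\perp}^{\text{rest}} ,
```

so the transverse electric field is enhanced by a factor of roughly 2900, and the field lines are confined to a pancake of opening angle of order 1/γ.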

When two beams of nuclei are brought into collision in the LHC, some hadronic interactions occur. In the unimaginable temperatures and densities of this ultimate crucible we create droplets of the quark–gluon plasma, the main subject of study of the heavy-ion programme. However, when nuclei “just miss” each other, the interactions of these electromagnetic fields amount to photon–photon and photon–nucleus collisions. Some of the processes occurring in these so-called ultra-peripheral collisions (UPCs) are so strong that they would limit the performance of the collider, were it not for special measures implemented in the last 10 years.

Spotting spectators

The ALICE paper is one among many exploring the rich field of fundamental physics studies opened up by UPCs at the LHC (CERN Courier January/February 2025 p31). Among them are electromagnetic dissociation processes, where a photon interacting with a nucleus can excite oscillations of its internal structure and result in the ejection of small numbers of neutrons and protons that are detected by ALICE’s zero degree calorimeters (ZDCs). The ALICE experiment is unique in having calorimeters to detect spectator protons as well as neutrons (see “Spotting spectators” figure). The residual nuclei are not detected, although they contribute to the signals measured by the beam-loss monitor system of the LHC.

Each 208Pb nucleus in the LHC beams contains 82 protons and 208 − 82 = 126 neutrons. To create gold, whose nuclei have a charge of 79, three protons must be removed, together with a variable number of neutrons.

Alchemy in ALICE

While less frequent than the creation of the elements thallium (single-proton emission) or mercury (two-proton emission), the results of the ALICE paper show that each of the two colliding lead-ion beams contributes a cross section of 6.8 ± 2.2 barns to gold production, implying that the LHC now produces gold at a maximum rate of about 89 kHz from lead–lead collisions at the ALICE collision point, or 280 kHz from all the LHC experiments combined. During Run 2 of the LHC (2015–2018), about 86 billion gold nuclei were created at all four LHC experiments, but in terms of mass this was only a tiny 2.9 × 10–11 g of gold. Almost twice as much has already been produced in Run 3 (since 2023).
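These figures can be cross-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a round Pb–Pb peak luminosity of about 6.5 × 1027 cm–2 s–1 per experiment (an assumption for illustration, not a number from the paper):

```python
# Cross-check of the quoted gold-production rate and Run 2 gold mass.
# The luminosity is an assumed round figure, not taken from the ALICE paper.
BARN = 1e-24                 # cm^2
sigma = 2 * 6.8 * BARN       # each beam contributes 6.8 b to gold production
lumi = 6.5e27                # assumed peak Pb-Pb luminosity, cm^-2 s^-1
print(f"rate ~ {sigma * lumi / 1e3:.0f} kHz")        # ~ 88 kHz (text: ~89 kHz)

N_AU = 86e9                  # gold nuclei quoted for Run 2
AMU_G = 1.6605e-24           # grams per atomic mass unit
print(f"mass ~ {N_AU * 197 * AMU_G:.1e} g")          # ~ 2.8e-11 g (text: 2.9e-11 g)
```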


Strikingly, this gold production is somewhat larger than the rate of hadronic nuclear collisions, which occur at about 50 kHz for a total cross section of 7.67 ± 0.25 barns.

Different isotopes of gold are created according to the number of neutrons that are emitted at the same time as the three protons. To create 197Au, the only stable isotope and the main component of natural gold, a further eight neutrons must be removed – a very unlikely process. Most of the gold produced is in the form of unstable isotopes with lifetimes of the order of a minute.

Although the ZDC signals confirm the proton and neutron emission, the transformed nuclei are not themselves detected by ALICE and their fate is not discussed in the paper. These interaction products nevertheless propagate hundreds of metres through the beampipe in several secondary beams whose trajectories can be calculated, as seen in the “Ultraperipheral products” figure.

Ultraperipheral products

The ordinate shows horizontal displacement from the central path of the outgoing beam. This coordinate system is commonly used in accelerator physics as it suppresses the bending of the central trajectory – downwards in the figure – and its separation into the beam pipes of the LHC arcs.   

The “5σ” envelope of the intense main beam of 208Pb nuclei that did not collide is shown in blue. Neutrons from electromagnetic dissociation and other processes are plotted in magenta. They begin with a certain divergence and then travel down the LHC beam pipe in straight lines, forming a cone, until they are detected by the ALICE ZDC, some 114 m away from the collision, after the place where the beam pipe splits in two. Because of the coordinate system, the neutron cone appears to bend sharply at the first separation dipole magnet.

Protons are shown in green. As they have only 40% of the magnetic rigidity of the main beam, they bend quickly away from the central trajectory in the first separation magnet, before being detected by a different part of the ZDC on the other side of the beam pipe.
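The 40% figure follows directly from the definition of magnetic rigidity, Bρ = p/q: a spectator proton carries roughly the beam momentum per nucleon, pN, while the 208Pb82+ beam carries 208 such momenta spread over 82 charges. Schematically:

```latex
\frac{(B\rho)_{p}}{(B\rho)_{\mathrm{Pb}}}
= \frac{p_{N}/e}{208\,p_{N}/82e}
= \frac{82}{208}
\approx 0.39 .
```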

Photon–photon interactions in UPCs copiously produce electron–positron pairs. In a small fraction of them, corresponding nevertheless to a large cross-section of about 280 barns, the electron is created in a bound state of one of the 208Pb nuclei, generating a secondary beam of 208Pb81+ single-electron ions. The beam from this so-called bound-free pair production (BFPP), shown in red, carries a power of about 150 W – enough to quench the superconducting coils of the LHC magnets, causing them to transition from the superconducting to the normal resistive state. Such quenches can seriously disrupt accelerator operation, as the stored magnetic energy is rapidly released as heat within the affected magnet.
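The quoted power can be checked in the same back-of-the-envelope spirit. A minimal sketch, assuming round values for the luminosity and for a Pb beam energy of 6.37 TeV per unit charge (both assumptions for illustration, not values from the text):

```python
# Rough check that a 280 b cross section implies ~150 W of BFPP beam power.
BARN = 1e-24
EV_J = 1.602e-19                       # joules per electronvolt
rate = 280 * BARN * 6.5e27             # BFPP ions per second at assumed luminosity
e_ion = 82 * 6.37e12 * EV_J            # energy of one 208Pb81+ ion, in joules
print(f"BFPP beam power ~ {rate * e_ion:.0f} W")     # ~ 150 W
```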

To prevent this, new “TCLD” collimators were installed on either side of ALICE during the second long shutdown of the LHC. Together with a variable-amplitude bump in the beam orbit, which pulls the BFPP beam away from the first impact point so that it can be safely absorbed on the TCLD, this allowed the luminosity to be increased to more than six times the original LHC design, just in time to exploit the full capacity of the upgraded ALICE detector in Run 3.

Light-ion collider

A first at the LHC

Besides lead, the LHC has recently collided beams of 16O and 20Ne (see “First oxygen and neon collisions at the LHC”), and nuclear transmutation has manifested itself in another way. In hadronic or electromagnetic events where equal numbers of protons and neutrons are emitted, the outgoing nucleus has almost the same charge-to-mass ratio as the beam, since nuclear binding energies are very small for the light nuclei at the top of the periodic table. It may then continue to circulate with the original beam, resulting in a small contamination that increases during the several hours of an LHC fill. Hybrid collisions can then occur, for example including a 14N nucleus formed by the ejection of a proton and a neutron from 16O. Fortunately, the momentum spread introduced by the interactions puts many of these nuclei outside the acceptance of the radio-frequency cavities that keep the beams bunched as they circulate around the ring, so the effect is smaller than had first been expected.

The most powerful beam from an electromagnetic-dissociation process is 207Pb from single neutron emission, plotted in green. It has comparable intensity to 208Pb81+ but propagates through the LHC arc to the collimation system at Point 3.

Similar electromagnetic-dissociation processes occur elsewhere, notably in beam interactions with the LHC collimation system. The recent ALICE paper, together with earlier ones on neutron emissions in UPCs, helps to test our understanding of the nuclear interactions that are an essential ingredient of complex beam-physics simulations. These are used to understand and control beam losses that might otherwise provoke frequent magnet quenches or beam dumps. At the LHC, a deep symbiosis has emerged between the fundamental nuclear physics studied by the experiments and the accelerator physics limiting its performance as a heavy-ion collider – or even as a light-ion collider (see “Light-ion collider” panel).

The figure also shows beams of the three heaviest gold isotopes produced. 204Au has an impact point in a dipole magnet but is far too weak to quench it. 203Au follows almost the same trajectory as the BFPP beam. 202Au propagates through the arc to Point 3. The extremely weak flux of 197Au, the only stable isotope of gold, is also shown.

Worth its weight in gold

Prospecting for gold at the LHC looks even more futile when we consider that the gold nuclei emerge from the collision point with very high energies. They hit the LHC beam pipe or collimators at various points downstream where they immediately fragment in hadronic showers of single protons, neutrons and other particles. The gold exists for tens of milliseconds at most.

And finally, the isotopically pure lead used in CERN’s ion source costs more by weight than gold, so realising the alchemists’ dream at the LHC was a poor business plan from the outset.

The moral of this story, perhaps, is that among modern-day natural philosophers, LHC physicists take issue with the designation of lead as a “base” metal. We find, on the contrary, that 208Pb, the heaviest stable isotope among all the elements, is worth far more than its weight in gold for the riches of the physics discoveries that it has led us to.

JUNO takes aim at neutrino-mass hierarchy

Compared to the quark sector, the lepton sector is the Wild West of the weak interaction, with large mixing angles and large uncertainties. To tame this wildness, neutrino physicists are set to bring a new generation of detectors online in the next five years, each roughly an order of magnitude larger than its predecessor. The first of these to become operational is the Jiangmen Underground Neutrino Observatory (JUNO) in Guangdong Province, China, which began data taking on 26 August. The new 20 kton liquid-scintillator detector will seek to resolve one of the major open questions in particle physics: whether the third neutrino-mass eigenstate (ν3) is heavier or lighter than the second (ν2).

“Building JUNO has been a journey of extraordinary challenges,” says JUNO chief engineer Ma Xiaoyan. “It demanded not only new ideas and technologies, but also years of careful planning, testing and perseverance. Meeting the stringent requirements of purity, stability and safety called for the dedication of hundreds of engineers and technicians. Their teamwork and integrity turned a bold design into a functioning detector, ready now to open a new window on the world of neutrinos.”

Main goals

Neutrinos interact only via the parity-violating weak interaction, providing direct evidence only for left-handed neutrinos. As a result, right-handed neutrinos are not part of the Standard Model (SM) of particle physics. As the SM explains fermion masses by a coupling of the Higgs field to a left-handed fermion and its right-handed counterpart of the same flavour, neutrinos are predicted to be massless – a prediction still consistent with every direct neutrino-mass measurement attempted to date. Yet decades of observations of the flavour oscillations of solar, atmospheric, reactor, accelerator and astrophysical neutrinos have provided incontrovertible indirect evidence that neutrinos must have tiny masses, below the sensitivity of current instruments. Observations of quantum interference between flavour eigenstates – the electron, muon and tau neutrinos – indicate that there must be a small mass splitting between ν1 and the slightly more massive ν2, and a larger mass splitting to ν3. But it is not yet known whether the mass eigenvalues follow a so-called normal hierarchy, m1 < m2 < m3, or an inverted hierarchy, m3 < m1 < m2. Resolving this question is the main physics goal of the JUNO experiment.
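For reference, the mass ordering enters through the standard three-flavour survival probability for reactor antineutrinos – a textbook formula, not one specific to JUNO:

```latex
P_{\bar\nu_e \to \bar\nu_e} = 1
- \cos^{4}\theta_{13}\,\sin^{2}2\theta_{12}\,\sin^{2}\Delta_{21}
- \sin^{2}2\theta_{13}\left(
    \cos^{2}\theta_{12}\,\sin^{2}\Delta_{31}
  + \sin^{2}\theta_{12}\,\sin^{2}\Delta_{32}\right),
\qquad
\Delta_{ij} = \frac{\Delta m^{2}_{ij}\,L}{4E} .
```

Flipping the sign of Δm²31 relative to Δm²21 – the difference between the two orderings – subtly shifts the interference pattern of the fast oscillation terms that JUNO is built to resolve.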


“Unlike other approaches, JUNO’s determination of the mass ordering does not rely on the scattering of neutrinos with atomic electrons in the Earth’s crust or the value of the leptonic CP phase, and hence is largely free of parameter degeneracies,” explains JUNO spokesperson Wang Yifang. “JUNO will also deliver order-of-magnitude improvements in the precision of several neutrino-oscillation parameters and enable cutting-edge studies of neutrinos from the Sun, supernovae, the atmosphere and the Earth. It will also open new windows to explore unknown physics, including searches for sterile neutrinos and proton decay.”

Additional eye

Located 700 m underground near Jiangmen city, JUNO detects antineutrinos produced 53 km away by the Taishan and Yangjiang nuclear power plants. At the heart of the experiment is a liquid-scintillator detector inside a 44 m-deep water pool. A stainless-steel truss supports an acrylic sphere housing the liquid scintillator, as well as 20,000 20-inch photomultiplier tubes (PMTs), 25,600 three-inch PMTs, front-end electronics, cabling and magnetic-compensation coils. All the PMTs operate simultaneously to capture scintillation light from neutrino interactions and convert it into electrical signals.

To resolve the extremely fine oscillation pattern that will reveal the neutrino-mass hierarchy, the experiment must achieve an energy resolution of about 50 keV for a typical 3 MeV reactor antineutrino. To attain this, JUNO had to push performance margins in several areas relative to the KamLAND experiment in Japan, previously the world’s largest liquid-scintillator detector.
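That figure is consistent with JUNO’s widely quoted design resolution of 3% at 1 MeV, assuming the usual stochastic scaling with energy (a sketch for orientation, not a number from the collaboration):

```latex
\frac{\sigma_E}{E} \approx \frac{3\%}{\sqrt{E/\mathrm{MeV}}}
\quad\Rightarrow\quad
\sigma_E(3\,\mathrm{MeV}) \approx \frac{0.03}{\sqrt{3}} \times 3\,\mathrm{MeV}
\approx 52\ \mathrm{keV} .
```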

“JUNO is a factor 20 larger than KamLAND, yet our required energy resolution is a factor two better,” explains Wang. “To achieve this, we have covered the full detector with PMTs with only 3 mm clearance and twice the photo-detection efficiency. By optimising the recipe of the liquid scintillator, we were able to improve its attenuation length by a factor of two to over 20 m, and increase its light yield by 50%.”

Go with the flow

Proposed in 2008 and approved in 2013, JUNO began underground construction in 2015. Detector installation started in December 2021 and was completed in December 2024, followed by a phased filling campaign. Within 45 days, the team filled the detector with 60 ktons of ultra-pure water, keeping the liquid-level difference between the inner and outer acrylic spheres within centimetres and maintaining a flow-rate uncertainty below 0.5% to safeguard structural integrity.

Over the next six months, 20 ktons of liquid scintillator progressively filled the 35.4 m diameter acrylic sphere while displacing the water. Stringent requirements on scintillator purity, optical transparency and extremely low radioactivity had to be maintained throughout. In parallel, the collaboration conducted detector debugging, commissioning and optimisation, enabling a seamless transition to full operations at the completion of filling.

JUNO is designed for a scientific lifetime of up to 30 years, with a possible upgrade path allowing a search for neutrinoless double-beta decay, says the team. Such an upgrade would probe the absolute neutrino-mass scale and test whether massive neutrinos are Dirac fermions, like the other fermions of the SM, or Majorana fermions without distinct antiparticles, as favoured by several attempts to address fundamental questions spanning particle physics and cosmology.

First oxygen and neon collisions at the LHC

In the first microseconds after the Big Bang, extreme temperatures prevented quarks and gluons from binding into hadrons, filling the universe with a deconfined quark–gluon plasma. Heavy-ion collisions between pairs of gold (197Au79+) or lead (208Pb82+) nuclei have long been observed to produce fleeting droplets of this medium, but light-ion collisions remain relatively unexplored. Between 29 June and 9 July 2025, LHC physicists pushed the study of the quark–gluon plasma into new territory, with the first dedicated studies of collisions between pairs of oxygen (16O8+) and neon (20Ne10+) nuclei, and between oxygen nuclei and protons.

“Early analyses have already helped characterise the geometry of oxygen and neon nuclei, including the latter’s predicted prolate ‘bowling-pin’ shape,” says Anthony Timmins of the University of Houston. “More importantly, they appear consistent with the onset of the quark-gluon plasma in light–ion collisions.”

As the quark–gluon plasma appears to behave like a near-perfect fluid with low viscosity, the key to modelling heavy-ion collisions is hydrodynamics – the physics of how fluids evolve under pressure gradients, viscous stresses and other forces. When two lead nuclei collide at the LHC, they create a tiny, extremely hot fireball where quarks and gluons interact so frequently they reach local thermal equilibrium within about 10–23 s. Measurements of gold–gold collisions at Brookhaven’s RHIC and lead–lead collisions at the LHC suggest that the quark–gluon plasma flows with an extraordinarily low viscosity, close to the quantum limit, allowing momentum to move rapidly across the system. But it’s not clear whether the same rules apply to the smaller nuclear systems involved in light–ion collisions.

“For hydrodynamics to work, along with the appropriate quark-gluon plasma equation of state, you need a separation of scales between the mean free path of quarks and gluons, the pressure gradients and overall system size,” explains Timmins. “As you move to smaller systems, those scales start to overlap. Oxygen and neon are expected to sit near that threshold, close to the limits of plasma formation.”

Across the oxygen–oxygen and neon–neon datasets, the ALICE, ATLAS and CMS collaborations decomposed the transverse distribution of emitted particles into Fourier modes – a way to search for collective, fluid-like behaviour. Measurements of the “elliptic” and “triangular” Fourier components as functions of event multiplicity support the emergence of a collective flow driven by the initial collision geometry. The collaborations observe signs of energetic-probe suppression in oxygen–oxygen collisions – a signature of the droplet “quenching” jets in a way not observed in proton–proton collisions. Similar features appeared in a one-day xenon–xenon run that took place in October 2017.
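Concretely, the decomposition expands the azimuthal distribution of emitted particles in each event in the standard flow series, where the coefficients vn and symmetry-plane angles Ψn are the fitted quantities:

```latex
\frac{dN}{d\varphi} \;\propto\; 1
+ 2\sum_{n=1}^{\infty} v_{n}\cos\!\big[n(\varphi - \Psi_{n})\big] ,
```

with v2 the “elliptic” and v3 the “triangular” component referred to above.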


CMS compared particle yields in light-ion collisions to a proton–proton reference. After scaling for the number of binary nucleon–nucleon interactions, the collaboration observed a maximum suppression of 0.69 ± 0.04 at a transverse momentum of about 6 GeV, more than five standard deviations from unity. While milder than that observed for lead–lead and xenon–xenon collisions, the data point to genuine medium-induced suppression in the smallest ion–ion system studied to date. Meanwhile, ATLAS reported the first dijet transverse-momentum imbalance in a light-ion system. The reduction in balanced jets is consistent with path-length-dependent energy-loss effects, though apparently weaker than in lead–lead collisions.
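The suppression CMS quotes is the minimum of the nuclear modification factor, the standard per-collision-scaled yield ratio:

```latex
R_{AA}(p_{T}) \;=\; \frac{1}{\langle N_{\mathrm{coll}}\rangle}\,
\frac{dN_{AA}/dp_{T}}{dN_{pp}/dp_{T}} ,
```

which equals unity in the absence of nuclear effects; the measured 0.69 ± 0.04 at around 6 GeV is therefore a direct signal of medium-induced energy loss.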

In “head-on” collisions, ALICE, ATLAS and CMS all observed a neon–oxygen–lead hierarchy in elliptic flow, suggesting that, if a quark–gluon plasma does form, it exhibits the most pronounced “almond shape” in neon collisions. This pattern reflects the expected nuclear geometries of each species. Lead-208 is a doubly magic nucleus, with complete proton and neutron shells that render it tightly bound and nearly spherical in its ground state. Conversely, neon is predicted to be prolate, with its inherent elongation producing a larger elliptic overlap. Oxygen falls in between, consistent with models describing it as roughly spherical or weakly clustered.

ALICE and ATLAS reported a hierarchy of flow coefficients in light-ion collisions, with elliptic, triangular and quadrangular flows progressively decreasing as their Fourier index rises, in line with hydrodynamic expectations. Like CMS’s charged hadron yields, ALICE’s preliminary neutral pion yields exhibit a suppression at large momenta.

In a previous fixed-target study, the LHCb collaboration also measured the elliptic and triangular components of the flow in lead–neon and lead–argon collisions, observing the distinctive shape of the neon nucleus. As for proton–oxygen collisions, LHCb’s forward-rapidity coverage can probe the partonic structure of nuclei at very small values of Bjorken-x – the fraction of the nucleon’s momentum carried by a quark or gluon. Such measurements help constrain nuclear parton distribution functions in the low-x region dominated by gluons and provide rare benchmarks for modelling ultra-high-energy cosmic rays colliding with atmospheric oxygen.

These initial results are just a smattering of those to come. In a whirlwind 11-day campaign, physicists made full use of the brief but precious opportunity to investigate the formation of quark–gluon plasma in the uncharted territory of light ions. Accelerator physicists and experimentalists came together to tackle peculiar problems, such as the appearance of polluting species in the beams due to nuclear transmutation (see “Alchemy by pure light“). Despite the tight schedule, luminosity targets for proton–oxygen, oxygen–oxygen and neon–neon collisions were exceeded by large factors, thanks to high accelerator availability and the high injector intensity delivered by the LHC team.

“These early oxygen and neon studies show that indications of collective flow and parton-energy-loss-like suppression persist even in much smaller systems, while providing new sensitivity to nuclear geometry and valuable prospects for forward-physics studies,” concludes Timmins. “The next step is to pin down oxygen’s nuclear parton distribution function. That will be crucial for understanding the hadron-suppression patterns we see, with proton–oxygen and ultra-peripheral collisions being great ways to get there.”

The puzzle of an excess of bright early galaxies

Since the Big Bang, primordial density perturbations have continually merged and grown to form ever larger structures. This “hierarchical” model of galaxy formation has withstood observational scrutiny for more than four decades. However, understanding the emergence of the earliest galaxies in the first few hundred million years after the Big Bang has remained a key frontier in the field of astrophysics. This is also one of the key science aims of the James Webb Space Telescope (JWST), launched on Christmas Day in 2021.

Its large, cryogenically cooled mirror and infrared instruments let it capture the faint, redshifted ultraviolet light from the universe’s earliest stars and galaxies. Since its launch, the JWST has collected unprecedented samples of astrophysical sources within the first 500 million years after the Big Bang, utterly transforming our understanding of early galaxy formation.

Stellar observations

Tantalisingly, JWST’s observations hint at an excess of galaxies very bright in the ultraviolet (UV) within the first 400 million years, compared to expectations from early structure formation within the standard Lambda Cold Dark Matter (ΛCDM) model. Given that UV photons are a key indicator of young star formation, these observations seem to imply that early galaxies in any given volume of space were overly efficient at forming stars in the infancy of the universe.

However, extraordinary claims demand extraordinary evidence. These puzzling observations have come under immense scrutiny to confirm that the sources lie at the inferred redshifts, and do not just probe over-dense regions that might preferentially host galaxies with high star-formation rates. It could still be the case that the apparent excess of bright galaxies is cosmic variance – a statistical fluctuation caused by the relatively small regions of the sky probed by the JWST so far.

Such observational caveats notwithstanding, theorists have developed a number of distinct “families” of explanations.

UV photons are readily attenuated by dust at low redshifts. If, however, these early galaxies had ejected all of their dust, one might be able to observe almost all of the intrinsic UV light they produced, making them brighter than expected based on lower-redshift benchmarks.

Bias may also arise from detecting only those sources powered by rapid bursts of star formation that briefly elevate galaxies to extreme luminosities.


Several explanations focus on modifying the physics of star formation itself, for example regarding “stellar feedback” – the energy and momentum that newly formed stars inject back into their surrounding gas, which can heat, ionise or expel the gas and slow or shut down further star formation. Early galaxies might have high star-formation rates because stellar feedback was largely inefficient, allowing them to retain most of their gas for further star formation, or perhaps because a larger fraction of gas was able to form stars in the first place.

While the relative number of low- and high-mass stars in a newly formed stellar population – the initial mass function (IMF) – has been mapped out in the local universe to some extent, its evolution with redshift remains an open question. Since the IMF crucially determines the total UV light produced per unit mass of star formed, a “top-heavy” IMF, with a larger fraction of massive stars compared to that in the local universe, could explain the observations.

Alternatively, the striking ultraviolet light may not arise solely from ordinary young stars – it could instead be powered by accretion onto black holes, which JWST is finding in unexpected numbers.

Alternative cosmologies

Finally, a number of works also appeal to alternative cosmologies to enhance structure formation at such early epochs, invoking an evolving dark-energy equation of state, primordial magnetic fields or even primordial black holes.

A key caveat involved in these observations is that redshifts are often inferred purely from broadband fluxes in different filters – a technique known as photometry. Spectroscopic data are urgently required, not only to verify the exact distances of these sources but also to distinguish between different physical scenarios, such as bursty star formation, an evolving IMF or contamination by active galactic nuclei, where supermassive black holes accrete gas. Upcoming deep observations with facilities such as the Atacama Large Millimeter/submillimeter Array (ALMA) and the Northern Extended Millimeter Array (NOEMA) will be crucial for constraining the dust content of these systems and thereby clarifying their intrinsic star-formation rates. Extremely large surveys with facilities such as Euclid, the Nancy Grace Roman Space Telescope and the Extremely Large Telescope will also be crucial for surveying early galaxies over large volumes and sampling all possible density fields.

Combining these datasets will be critical in shedding light on this unexpected puzzle unearthed by the JWST.
