The 25th Zimányi Winter School gathered 120 researchers in Budapest to discuss recent advances in medium- and high-energy nuclear physics. The programme focused on the properties of strongly-interacting matter produced in heavy-ion collisions – little bangs that recreate conditions a few microseconds after the Big Bang.
József Zimányi was a pioneer of Hungarian and international heavy-ion physics, playing a central role in establishing relativistic heavy-ion research in Hungary and contributing key developments to hydrodynamic descriptions of nuclear collisions. Much of the week’s programme revisited the problems that occupied his career, including how the hot, dense system created in a collision evolves and how it converts its energy into the observed hadrons.
Giuseppe Verde (INFN Catania) and Máté Csanád (ELTE) emphasised the role of femtoscopic methods, rooted in the Hanbury Brown–Twiss interferometry originally developed for stellar measurements, in understanding the system that emerges from heavy-ion collisions. Quantum entanglement in high-energy nuclear collisions – a subject closely connected to the 2025 Nobel Prize in Physics – was also explored in a dedicated invited lecture by Dmitri Kharzeev (Stony Brook University), who described his team’s approach and results, which suggest that the observed thermodynamic properties originate from quantum entanglement itself.
The NA61/SHINE collaboration reported ongoing studies of isospin-symmetry breaking, including a recent result in which the charged-to-neutral kaon ratio in argon–scandium collisions deviates by 4.7σ from expectations based on approximate isospin symmetry (CERN Courier March/April 2025 p9). Further detailed studies are planned, with potential implications for improving the understanding of antimatter production.
Hydrodynamic modelling remains one of the most successful tools in heavy-ion physics. Tetsufumi Hirano (Sophia University, Japan), the first recipient of the Zimányi Medal, discussed how the collision system behaves like an expanding relativistic fluid, whose collective motion encodes its initial conditions and transport properties. Hydrodynamic approaches incorporating spin effects – and the resulting polarisation effects in heavy-ion collisions – were discussed by Wojciech Florkowski (Jagiellonian University) and Victor E Ambrus (West University of Timisoara).
The 15th edition of the Implications of LHCb Measurements and Future Prospects annual workshop took place at CERN from 4 to 7 November 2025, attracting more than 180 participants from the LHCb experiment and the theoretical physics community.
Peilian Li (UCAS) described how, thanks to an upgraded, fully software-based trigger, the dataset gathered in 2025 alone already exceeds the combined total from Run 1 and Run 2. The future of LHCb was discussed, with prospects for an upgrade targeting the high-luminosity phase of the LHC, where timing information will be introduced. Theorist Monika Blanke (KIT) concluded the workshop with a keynote on the status of B-decay anomalies, highlighting the importance of LHCb measurements in constraining new-physics models.
Much attention went to the long-standing discrepancies between data and theory in lepton-flavour-universality tests – such as the measurement of the R(D) and R(D*) ratios in semileptonic B-meson decays. Marzia Bordone (UZH) gave a theoretical overview of the determination of the form factors describing B → D* transitions, highlighting discrepancies in the determination of some form-factor shapes, both among different lattice-QCD determinations and within extractions from different experimental datasets.
A new combination of all LHCb measurements of the CKM angle γ, which quantifies a key CP-violating phase in b-hadron decays, yielded an overall value of (62.8 ± 2.6)°. The collaboration reported flagship electroweak precision measurements of the effective weak mixing angle and the W-boson mass, as well as the first dedicated measurement of the Z-boson mass at the LHC.
An exciting focus for 2026 will be the search for the double open-beauty tetraquark Tbb(bbud) – the first accessible exotic hadron expected to be stable against strong decay (CERN Courier November/December 2024 p34). Saša Prelovšek (UL) presented the first lattice-QCD calculation of the state’s electromagnetic form factors, allowing her to rule out an interpretation of the tetraquark as a loosely bound B–B* molecule.
The legacy Run 1+2 B → K*μ+μ– angular analysis, based on a dataset roughly twice as large as those used in previous analyses, was presented. Previously seen tensions were confirmed with much increased precision, and new observables were reported for the first time. Theorists Arianna Tinari (UZH), Giuseppe Gagliardi (INFN Rome3) and Nazila Mahmoudi (IP2I, CERN) reviewed the status of the non-local hadronic contributions that could affect this channel, discussing how different theoretical approaches can be employed to determine these contributions and how compatible the current results are with theoretical expectations.
Zhengchen Lian (THU, INFN Firenze) showed the characteristic “bowling-pin” deformation of neon nuclei, as recently observed using the SMOG2 apparatus, which allows collisions of LHC protons with a variety of fixed-target light nuclei injected into the beampipe (CERN Courier November/December 2025 p8).
From 17 to 23 November, the second International Conference on Physics of the Two Infinities (P2I) gathered nearly 200 participants on the historic Hongo campus of the University of Tokyo. Organised by the ILANCE laboratory, a joint initiative by CNRS and the University of Tokyo, the P2I series aims to bridge the largest and smallest scales of the universe. In this spirit, the 2025 programme drew together results from cosmological surveys, particle colliders and neutrino detectors.
Two cosmological tensions will play a key role in the coming decades. One concerns how strongly matter clumps together to form structures such as galaxy clusters and filaments. The other involves the universe’s expansion rate, H0. In both cases, measurements based on early-universe data differ from those conducted in the local universe. The discrepancy on H0 has now reached about 6σ (CERN Courier March/April 2025 p28). Independent methods, such as strong lensing, lensed supernovae and gravitational-wave standard sirens, are essential to confirm or resolve this discrepancy. Several of these techniques are expected to reach 1% precision in the near future. More broadly, upcoming large-scale cosmological missions, including Euclid, DESI, LiteBIRD and the Legacy Survey of Space and Time (LSST) – which released its world-leading camera’s first images in June – are set to deliver important insights into inflation, dark energy and the cosmological effects of neutrino masses.
The dark universe featured prominently. Participants discussed an excess of gamma rays from the galactic centre detected by the Fermi telescope, which is consistent with the self-annihilation of weakly interacting massive particles (WIMPs) and may represent one of the strongest experimental hints of dark matter. Recent analyses of more than 40 million galaxies and quasars in DESI’s Data Release 2 show that fits to baryon acoustic oscillation distances deviate from the standard ΛCDM model at the 2.8 to 4.2σ level, with a dynamical dark energy providing a better match. Euclid, having identified approximately 26 million galaxies out to over 10.5 billion light-years, is poised to constrain the nature of dark matter by combining measurements of large-scale structure, gravitational-lensing statistics, small-scale substructure, dwarf-galaxy populations and stellar streams. Experiments such as XENONnT and PandaX-4T are meanwhile pursuing a mature direct-detection programme.
Future colliders were a central topic at P2I. While new physics has long been expected to emerge near the TeV scale to stabilise the Higgs mass, the Standard Model remains in excellent agreement with current data, and precision flavour measurements constrain many possible new particles to lie at much higher energies. The LHC collaborations presented a flurry of new results and superb prospects for the high-luminosity phase of the machine, alongside new results from Belle II and NA64. Looking ahead, a major future collider will be essential for probing the laws connecting particle physics with the earliest moments of the universe.
The conference hosted the first-ever public presentation of JUNO’s experimental results, only a few hours after their appearance on arXiv. Despite relying on only 59.1 days of data, the experiment has already demonstrated excellent detector performance and produced competitive measurements of solar-neutrino oscillation parameters that are fully consistent with previous results – a remarkable level of precision after barely two months of data collection. Three major questions in neutrino physics remain unresolved: the ordering of neutrino masses, the value of the CP-violating parameter and the octant of the mixing angle θ23. The next generation of experiments, including JUNO, DUNE, Hyper-K and upgraded neutrino telescopes, is specifically designed to answer these questions. Meanwhile, DESI has reported a new, stringent upper limit of 0.064 eV on the sum of neutrino masses, within a flat ΛCDM framework – the tightest cosmological constraint to date.
New data from the JWST, Subaru and ALMA telescopes revealed an unexpectedly rich population of galaxies only 200–300 million years after the Big Bang. Many of these early systems appear to grow far more rapidly than predicted by the ΛCDM model, raising questions such as whether star formation efficiency was significantly higher in the early universe or whether we currently underestimate the growth of dark-matter halos (CERN Courier November/December 2025 p11). These data also highlighted a surprisingly abundant population of high-redshift active galactic nuclei, with important implications for black-hole seeding and early supermassive black-hole formation. A comprehensive review of the rapidly evolving field of supernova and transient astronomy was also presented. The mechanisms behind core-collapse supernovae remain only partially understood, and the thermonuclear explosions of white dwarfs continue to pose open questions. At the same time, observations keep identifying new transient classes, whose physical origins are still under investigation. Important insights into protostars, discs and planet formation were also discussed. Observations show that interstellar bubbles and molecular filaments shape the formation of stars and planets across a vast range of physical scales. More than 6000 exoplanets have today been detected, from hot Jupiters to super Earths and ocean planets, many without counterparts in our Solar System.
With more than 150 new gravitational-wave (GW) candidates now identified, including extreme ones with rapid spins and highly asymmetric component masses, GW astronomy offers outstanding opportunities to investigate gravity in the strong-field regime. Notably, the GW250114 event was shown to obey Hawking’s area law, which states that the total horizon area cannot decrease during a black-hole merger, providing strong confirmation of general relativity in the most nonlinear regime. Next-generation observatories such as the Einstein Telescope, Cosmic Explorer and LISA will allow detailed black-hole spectroscopy and impose tighter constraints on alternative theories of gravity.
Although the transition to multi-messenger astronomy began in the late 20th century, the first binary neutron-star merger, GW170817, remains its landmark event. An extraordinary global effort – more than 70 teams and 100 instruments pointed at the event for years – yielded several historic firsts: the first gravitational-wave “standard siren” measurement of the Hubble constant, the first association between a neutron-star merger and a short gamma-ray burst, the first observed kilonova, confirming an astrophysical site of heavy-element production, and the first direct test comparing the speed of gravity and light. Very-high-energy gamma-ray observatories (HESS, MAGIC and VERITAS) also reported impressive results, with more than 300 sources above 100 GeV observed, and bright prospects, as the Cherenkov Telescope Array Observatory (CTAO) is about to start operations.
As the Standard Model (SM) withstands increasingly stringent experimental tests, rare decays remain a prime hunting ground for new physics. In a recent paper, the LHCb collaboration reports its first dedicated searches for the decays B0→ K+π–τ+τ– and Bs0→ K+K–τ+τ–, pushing hadron–collider flavour physics further into tau-rich territory.
At the quark level, the B0→ K+π–τ+τ– and Bs0→ K+K–τ+τ– decays proceed via the flavour-changing process b → sτ+τ–, which is highly suppressed in the SM. The expected branching fractions of around 10–7 place these decays well below the current experimental sensitivity. However, many new-physics scenarios, such as those involving leptoquarks or additional Z′ bosons, predict mediators that couple preferentially to third-generation leptons.
The tensions with the SM observed in the ratios of semileptonic branching fractions R(D(*)) and in b → sμ+μ– processes could, for example, result in an enhancement of b → sτ+τ– decays. Yet despite its potential to yield signs of new physics, the tau sector remains largely unexplored.
The LHCb analysis considered only tau decays to muons, in order to exploit the detector’s excellent muon-identification systems. Reconstructing final states with tau leptons at a hadron collider is notoriously challenging, particularly when relying on leptonic decays such as τ+→ μ+ντνμ, which leave multiple neutrinos unreconstructed. Using the Run 2 data set of about 5.4 fb–1 of proton–proton collisions, the collaboration applied machine-learning techniques that exploit topological and isolation features to separate the suppressed tau-pair signal from the background.
Because of the large amount of missing energy in the final state, the B-meson mass cannot be fully reconstructed; the output of the machine-learning algorithm was therefore fitted to search for a b → sτ+τ– component. The search was primarily limited by the size of the control samples used to constrain the background shapes – a limitation that will be alleviated by the larger datasets expected in future LHC runs.
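To make the fitting strategy concrete, below is a minimal, generic sketch of a binned maximum-likelihood fit of a signal yield in a classifier-output distribution. It is not the LHCb analysis: the templates, yields and binning are invented for illustration, and the background normalisation is simply held fixed.

```python
# Toy sketch: fit a signal yield in a classifier-output histogram using fixed
# signal and background template shapes. All inputs are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(0)
bins = np.linspace(0.0, 1.0, 21)
bin_width = np.diff(bins)

# Hypothetical templates: background peaks at low classifier output, signal at high output.
bkg_template, _ = np.histogram(rng.beta(2, 5, 100_000), bins=bins, density=True)
sig_template, _ = np.histogram(rng.beta(5, 2, 100_000), bins=bins, density=True)

# Pseudo-data: mostly background plus a small injected signal.
data, _ = np.histogram(
    np.concatenate([rng.beta(2, 5, 5_000), rng.beta(5, 2, 60)]), bins=bins)

def nll(n_sig, n_bkg=5_000.0):
    """Binned Poisson negative log-likelihood for a given signal yield."""
    expected = (n_sig * sig_template + n_bkg * bkg_template) * bin_width
    expected = np.clip(expected, 1e-9, None)  # guard against log(0)
    return -poisson.logpmf(data, expected).sum()

result = minimize_scalar(nll, bounds=(0.0, 1_000.0), method="bounded")
print(f"fitted signal yield: {result.x:.1f}")
```

In the real analysis the background shapes are themselves constrained by control samples, which is precisely where the statistical limitation mentioned above enters.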
No significant signal excess was observed in either the K+π–τ+τ– or the K+K–τ+τ– final states. Upper limits on the branching fractions were then established in bins of the dihadron invariant masses, allowing separate exploration of regions dominated by dihadron resonances and those expected to be primarily non-resonant.
When interpreted in terms of resonant modes, the limits are B(B0→ K*(892)0τ+τ–) < 2.8 × 10–4 and B(Bs0→ φ(1020)τ+τ–) < 4.7 × 10–4 at the 95% confidence level. The B0→ K*(892)0τ+τ– limit improves on previous bounds by approximately an order of magnitude, while the limit on Bs0→ φ(1020)τ+τ– is the first ever established.
These results represent the world’s most stringent limits on b → sτ+τ– transitions. The analysis lays essential groundwork for future searches, as the larger LHCb datasets from LHC Run 3 and beyond are expected to open a new frontier in measurements of rare b-hadron transitions involving heavy leptons.
With the upgraded detector and the novel fully software-based trigger, the efficiency in selecting low-pT muons – and consequently the tau leptons from which they originate – will be much improved. Sensitivity to b → sτ+τ– transitions is therefore expected to increase substantially in the coming years.
Strangeness production in high-energy hadron collisions is a powerful tool for exploring quantum chromodynamics (QCD). Unlike up and down quarks, strange quarks are not present as valence quarks in the colliding protons and neutrons, and must therefore be created in the interaction. They are, however, still light enough to be abundantly produced at the LHC.
Over the past 15 years, the ALICE collaboration has shown that the abundance of strange over non-strange hadrons grows with event multiplicity in all collision systems. In particular, high-multiplicity proton–proton (pp) collisions display a significant strangeness enhancement, reaching saturation levels similar to those in heavy-ion collisions. In one of the most precise studies of strange-to-non-strange hadron production to date, the ALICE collaboration has reported its recent results from pp and lead-lead collisions at the LHC.
Strange hadrons (Ks0, Λ, Ξ, Ω) were reconstructed from their weak-decay topologies. Candidates were then selected by applying geometrical and kinematic cuts, estimating and subtracting backgrounds, and correcting the resulting distributions using detector-response simulations. The analyses were carried out at a centre-of-mass energy per nucleon pair of 5.02 TeV and span a wide multiplicity range, from 2 to 2000 charged particles at mid-rapidity.
To better understand how strangeness is produced, the collaboration has taken a significant step by measuring the probability distribution of forming a specific number of strange particles of the same species per event. This study, based on event-by-event strange-particle counting, moves beyond average yields and probes higher orders in the strange-particle production probability distribution. To account for the response of the detector, each candidate is assigned a probability of being genuine rather than background, and a Bayesian unfolding method iteratively corrects for particles that were missed or misidentified to reconstruct the true counts. This provides a novel technique for testing theoretical strangeness-production mechanisms, particularly in events characterised by a significant imbalance between strange and non-strange particles.
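As an illustration of the kind of correction described above, the following toy sketch shows an iterative (D’Agostini-style) Bayesian unfolding of an event-by-event count distribution. It is not the ALICE implementation: the response matrix is a simple, invented binomial-efficiency model, and the candidate-by-candidate signal probabilities mentioned above are omitted.

```python
# Toy sketch of iterative Bayesian unfolding of an event-by-event count distribution.
# The response matrix is an assumed binomial-loss model, not a real detector response.
import numpy as np
from scipy.stats import binom

eff = 0.6                       # assumed per-particle reconstruction efficiency
n_max = 8
counts = np.arange(n_max + 1)

# response[m, t] = P(measure m particles | t true particles), here pure binomial loss
response = np.array([[binom.pmf(m, t, eff) for t in counts] for m in counts])

true_dist = np.exp(-1.2 * counts); true_dist /= true_dist.sum()   # toy "true" P(n)
measured = response @ true_dist                                   # folded distribution

def bayes_unfold(measured, response, iterations=4):
    prior = np.full(len(measured), 1.0 / len(measured))  # flat starting prior
    for _ in range(iterations):
        joint = response * prior                     # shape (m, t)
        posterior = joint / joint.sum(axis=1, keepdims=True)  # P(true t | measured m)
        unfolded = posterior.T @ measured            # redistribute measured counts
        efficiency = response.sum(axis=0)            # P(observed at all | t); 1 in this toy
        unfolded = unfolded / np.clip(efficiency, 1e-12, None)
        prior = unfolded / unfolded.sum()
    return prior

print(np.round(bayes_unfold(measured, response), 3))
print(np.round(true_dist, 3))   # compare with the toy truth
```

With more iterations the unfolded result approaches the toy truth, at the cost, in real data, of amplifying statistical fluctuations – the usual trade-off of iterative unfolding.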
Exploiting a large dataset of pp collisions, the probability of producing n particles of a given species S (S = Ks0, Λ, Ξ– or Ω–) per event, P(nS), could be determined up to a maximum of nS = 7 for Ks0, nS = 5 for Λ, nS = 4 for Ξ and nS = 2 for Ω (see figure 1). An increase of P(nS) with charged-particle multiplicity is observed, becoming more pronounced for larger n, as reflected by the growing separation between the curves corresponding to low- and high-multiplicity classes in the high-n tail of the distributions.
The average production yield of n particles per event can be calculated from the P(nS) distributions, taking into account all possible combinations that result in a given multiplet. This makes it possible to compare events with the same or a different overall strange-quark content that hadronise into various combinations of hadrons in the final state. While the ratio of Ω triplets to single Ks0 shows an extreme strangeness-enhancement pattern, spanning up to two orders of magnitude across multiplicity, comparing hadron combinations that differ in up- and down-quark content but share the same total s-quark content (for instance, Ω singlets compared to Λ triplets) helps isolate the part of the enhancement unrelated to strangeness.
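A plausible reading of “taking into account all possible combinations” – an interpretation for illustration, not a formula quoted in the text – is the standard combinatorial average, in which the mean number of k-plets per event follows from the measured P(nS) as

```latex
\langle n_{k\text{-plet}} \rangle \;=\; \sum_{n \ge k} \binom{n}{k}\, P(n_S = n),
```

so that k = 1 recovers the ordinary mean yield, while k = 3 applied to the Ω distribution would give the average number of Ω triplets entering the ratios discussed above.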
Comparisons with state-of-the-art phenomenological models show that this new approach greatly enhances sensitivity to the underlying physics mechanisms implemented in different event generators. Together with the traditional strange-to-pion observables, the multiplicity-differential probability distributions of strange hadrons provide a more detailed picture of how strange quarks are produced and hadronise in high-energy collisions, offering a stringent benchmark for the phenomenological description of non-perturbative QCD.
Neutrino physics is a vibrant field of study, with spectacular recent advances. To this day, neutrino oscillations are the only experimental evidence of physics beyond the Standard Model, and, 25 years after this discovery, breathtaking progress has been achieved in both theory and experiment. Giulia Ricciardi’s new textbook provides a timely resource in this fast-developing field.
Entering this exciting field of research can be intimidating, given the breadth of topics that need to be mastered. As well as particle physics, neutrinos touch on astroparticle physics, cosmology, astrophysics, nuclear physics and geophysics, and many neutrino textbooks assume advanced knowledge of quantum field theory and particle theory. Ricciardi achieves a brilliant balance by providing a solid foundation in these areas, alongside a comprehensive overview of neutrino theory and experiment. This sets her book apart from most other literature on the subject and makes it a precious resource for newcomers and experts alike. She provides a self-contained introduction to group theory, symmetries, gauge theories and the Standard Model, with an approach that is both accessible and scientifically rigorous, putting the emphasis on understanding key concepts rather than abstract formalisms.
With the theoretical foundations in place, Ricciardi then turns to neutrino masses, neutrino mixing, astrophysical neutrinos and neutrino oscillations. Dirac, Majorana and Dirac-plus-Majorana mass terms are explored, alongside the “see-saw” mechanism and its possible implementations. A full chapter is devoted to neutrino oscillations in vacuum and in matter, preparing the reader to explore neutrino oscillations in experiments, first from natural sources, such as the Sun, supernovae, the atmosphere and cosmic neutrinos; a subsequent chapter then covers reactor and accelerator neutrinos, giving a detailed overview of the key theoretical and experimental issues. Ricciardi avoids a common omission in neutrino textbooks by addressing neutrino–nucleus interactions – a fast-developing topic in theory and a crucial aspect of interpreting current and future experiments. The book concludes with a look at current research and future prospects, including a discussion of neutrino-mass measurements and neutrinoless double-beta decay.
The clarity with which Ricciardi links theoretical concepts to experimental observations is remarkable. Her book is engaging and eminently enjoyable. I highly recommend it.
How would Einstein have reacted to Bell’s theorem and the experimental results derived from it? Alain Aspect’s new French-language book Si Einstein avait su (If Einstein had known) can be recommended to anybody interested in the Einstein–Bohr debates about quantum mechanics, how a CERN theorist, John Stewart Bell (1928–1990), weighed in in 1964, and how experimentalists converted Bell’s idea into ingenious physical experiments. Aspect shared the 2022 Nobel Prize in Physics with John F Clauser and Anton Zeilinger for this work.
The core part of Aspect’s book covers his own contributions to the experimental test of Bell’s inequality spanning 1975 to 1985. He gives a very personal account of his involvement as an experimental physicist in this matter, starting soon after he visited Bell at CERN in spring 1975 for advice concerning his French Thèse d’État. With anecdotes that give the reader the impression of sitting next to the author and listening to his stories, Aspect recounts how, in 1975, captivated by Bell’s work, he set up experiments in underground rooms at the Institut d’Optique in Orsay to test hidden-variable theories. He explains his experiments in detail with diagrams and figures from his original publications as well as images of the apparatus used. By 1981 and for several years to come, it was Aspect’s experiments that came closest to Bell’s idea on how to test the inequality formulated in 1964. Aspect defended his thesis in 1983 in a packed auditorium with illustrious examiners such as J S Bell, C Cohen-Tannoudji and B d’Espagnat. Not long afterwards, Cohen-Tannoudji invited him to the Collège de France and the Paris ENS to work on the laser cooling and manipulation of atoms – a quite different subject. At that time, Aspect didn’t see any point in closing some of the remaining loopholes in his experiments.
To prepare the terrain for his story, Aspect first tells the history of quantum mechanics from 1900 to 1935. He begins with a discussion of Planck’s blackbody radiation (1900), Einstein’s description of the photoelectric effect (1905) and of the heat capacity of solids (1907), the wave–particle duality of light, the first Solvay Congress (1911), Bohr’s atomic model (1913) and matter–radiation interaction according to Einstein (1916). He then covers the Einstein–Bohr debates at the Solvay congresses of 1927 and 1930 on the interpretation of the probability aspects of quantum mechanics.
Aspect then turns to the Einstein, Podolsky, Rosen (EPR) paper of 1935, which discusses a gedankenexperiment involving two entangled quantum-mechanical particles. Whereas the previous Einstein–Bohr debates ended with convincing arguments by Bohr refuting Einstein’s point of view, Bohr didn’t come up with a clear answer to Einstein’s objection of 1935, namely that he considered quantum mechanics to be incomplete. In 1935 and the following years, most physicists considered the Einstein–Bohr debate uninteresting and purely philosophical; it had practically no influence on the successful application of quantum mechanics. Between 1935 and 1964, the EPR subject lay nearly dormant, apart from David Bohm’s interventions during the 1950s. In 1964 Bell took up the EPR paradox, which had been advanced as an argument that quantum mechanics should be supplemented by additional variables (CERN Courier July/August 2025 p21).
Aspect describes clearly and convincingly how Bell entered the scene and how the inequality bearing his name spurred experimentalists to get involved: experiments with polarisation-entangled photons and their correlations could decide whether Einstein’s or Bohr’s view of quantum mechanics was correct. Bell’s discovery transferred the Einstein–Bohr debate from epistemology to the realm of experimental physics. At the end of the 1960s the first experiments based on Bell’s inequality started to take shape. Aspect describes how these analysed the polarisation correlations of the entangled photons at a separation of a few metres. He discusses their difficulties and limitations, starting with the experiments launched by Clauser et al.
In the final chapter, covering 1985 to the present, Aspect explains why he decided not to continue his research with entangled photons and to switch subject. In his opinion, the technology of the time wasn’t ripe enough to close some of the remaining loopholes in his experiments – loopholes of a type that Bell considered less important. Aspect was convinced that if quantum mechanics were faulty, one would have seen indications of that in his experiments. It took until 2015 for two of the loopholes left open by Aspect’s experiments (the locality and detection loopholes) to be simultaneously closed. Yet, as Aspect notes, no experiment, however ideal, can be said to be totally loophole-free. The final chapter also covers more philosophical aspects of quantum non-locality and speculations about how Einstein would have reacted to the violation of Bell’s inequalities. In complementary sections, Aspect discusses the no-cloning theorem and technological applications of quantum optics such as quantum cryptography in the scheme proposed by Ekert, quantum teleportation and quantum random-number generators.
Who will profit from reading this book? First one should say that it is not a quantum-mechanics or quantum-optics textbook. Most of the material is written in such a way that it will be accessible and enjoyable to the educated layperson. For the more curious reader, supplementary sections cover physical aspects in deeper detail, and the book cites more than 80 original references. Aspect’s long experience and honed pedagogical skills are evident throughout. It is an engaging and authoritative introduction to one of the most profound debates in modern physics.
Hendrik Verweij, who was for many years a driving force in the development of electronics for high-energy physics, passed away on 11 August 2025 in Meyrin, Switzerland, at the age of 93.
Born in Linschoten near Gouda in the Netherlands, Henk earned a degree in electrical engineering at the Technical High School in Hilversum and started his career as an instrumentation specialist at Philips, working on oscilloscopes. He joined CERN in July 1956, bringing his expertise in electronics to the newly founded laboratory. With Ian Pizer, group leader of the electronics group of the nuclear-physics division, he published CERN Yellow Report 61-15 on a nanosecond-sampling oscilloscope, followed by a paper on a fast amplifier one year later.
During the next four decades, developments in electronics profoundly transformed the world. Henk played a crucial role in bringing this transformation to CERN’s electronics instrumentation, and he eventually succeeded Pizer as group leader. Over the years he worked with numerous colleagues on fast signal-processing circuits. The creation of a collection of standardised modules facilitated the setup of a variety of CERN experiments. With Bjorn Hallgren and others, he realised the simultaneous fast time and amplitude digitisation of the inner drift detector of the innovative UA1 experiment at CERN’s Super Proton Synchrotron, which, together with UA2, discovered the W and Z bosons.
In the 1960s, recognising the importance of standardisation for engaging industry, Henk built close ties with colleagues in the US, including at Lawrence Berkeley Laboratory, SLAC and the National Bureau of Standards (NBS). He took part in the discussions that led to the Nuclear Instrumentation Module (NIM) standard, defined in 1964 by the US Atomic Energy Commission, and served on the NIM committee chaired by Lou Costrell of the NBS.
Henk was also a member of the ESONE committee for the CAMAC and later FASTBUS standards, working alongside colleagues such as Bob Dobinson, Fred Iselin, Phil Ponting, Peggie Rimmer, Tim Berners-Lee and many others from across Europe and the US in this international effort. He contributed hardware for standard modules both before and after the publication of the FASTBUS specification in 1984, and reported regularly at conferences on the status of European developments. A strong advocate of collaboration with industry, he also helped persuade LeCroy to establish a facility near CERN.
Towards the end of his career, Henk became group leader of the microelectronics group at CERN, closing the loop in this transformational evolution of electronics with integrated-circuit developments for silicon microstrip, hybrid pixel and other detectors. When he retired in the 1990s, the group had built up the expertise needed to design optimised application-specific integrated circuits (ASICs) for the LHC detectors. Ultimately, these allow the recording of millions of frames per second and event selection from data stored on-chip.
Retirement did not diminish Henk’s interest in CERN and its electronics activities. He often dropped by the microelectronics group at CERN, regularly participating in Medipix meetings on the development of hybrid pixel-detector read-out chips for medical imaging and other applications.
Henk played an important role in making advances in microelectronics available to the high-energy physics community. His friends and colleagues will miss his experience, vision and irrepressible enthusiasm.
A major step toward shaping the future of European particle physics was reached on 2 October, with the release of the Physics Briefing Book of the 2026 update of the European Strategy for Particle Physics. Despite its 250 pages, it is a concise summary of the vast amount of work contained in the 266 written submissions to the strategy process and the deliberations of the Open Symposium in Venice in June (CERN Courier September/October 2025 p24).
The briefing book compiled by the Physics Preparatory Group is an impressive distillation of our current knowledge of particle physics, and a preview of the exciting prospects offered by future programmes. It provides the scientific basis for defining Europe’s long-term particle-physics priorities and determining the flagship collider that will best advance the field. To this end, it presents comparisons of the physics reach of the different candidate machines, which often have different strengths in probing new physics beyond the Standard Model (SM).
Condensing all this in a few sentences is difficult, though two messages are clear: if the next collider at CERN is an electron–positron collider, the exploration of new physics will proceed mainly through high-precision measurements; and the highest physics reach into the structure of physics beyond the SM via indirect searches will be provided by the combined exploration of the Higgs, electroweak and flavour domains.
Following a visionary outlook for the field from theory, the briefing book divides its exploration of the future of particle physics into seven sectors of fundamental physics and three technology pillars that underpin them.
1. Higgs and electroweak physics
In the new era that has dawned with the discovery of the Higgs boson, numerous fundamental questions remain, including whether the Higgs boson is an elementary scalar, part of an extended scalar sector, or even a portal to entirely new phenomena. The briefing book highlights how precision studies of the Higgs boson, the W and Z bosons, and the top quark will probe the SM to unprecedented accuracy, looking for indirect signs of new physics.
Addressing these questions requires highly precise measurements of the Higgs boson’s couplings, self-interaction and quantum corrections. While the High-Luminosity LHC (HL-LHC) will continue to improve several Higgs and electroweak measurements, the next qualitative leap in precision will be provided by future electron–positron colliders, such as the FCC-ee, the Linear Collider Facility (LCF), CLIC or LEP3. And while these would provide very important information, it would fall upon the shoulders of an energy-frontier machine like the FCC-hh or a muon collider to access potential heavy states. Using the absolute HZZ coupling from the FCC-ee, such machines would measure the single-Higgs-boson couplings with a precision better than 1%, and the Higgs self-coupling at the level of a few per cent (see “Higgs self-coupling” figure).
This anticipated leap in experimental precision will necessitate major advances in theory, simulation and detector technology. In the coming decades, electroweak physics and the Higgs boson in particular will remain a cornerstone of particle physics, linking the precision and energy frontiers in the search for deeper laws of nature.
2. Strong interaction physics
Precise knowledge of the strong interaction will be essential for understanding visible matter, exploring the SM with precision, and interpreting future discoveries at the energy frontier. Building upon advanced studies of QCD at the HL-LHC, future high-luminosity electron–positron colliders such as FCC-ee and LEP3 would, like LHeC, enable per-mille precision on the strong coupling constant, and a greatly improved understanding of the transition between the perturbative and non-perturbative regimes of QCD. The LHeC would bring increased precision on parton-distribution functions that would be very useful for many physics measurements at the FCC-hh. FCC-hh would itself open up a major new frontier for strong-interaction studies.
A deep understanding of the strong interaction also necessitates the study of strongly interacting matter under extreme conditions with heavy-ion collisions. ALICE and the other experiments at the LHC will continue to illuminate this physics, revealing insights into the early universe and the interiors of neutron stars.
3. Flavour physics
With high-precision measurements of quark and lepton processes, flavour studies test the SM at energy scales far above those directly accessible to colliders, thanks to their sensitivity to the effects of virtual particles in quantum loops. Small deviations from theoretical predictions could signal new interactions or particles influencing rare processes or CP-violating effects, making flavour physics one of the most sensitive paths toward discovering physics beyond the SM.
Global efforts are today led by the LHCb, ATLAS and CMS experiments at the LHC and by the Belle II experiment at SuperKEKB. These experiments have complementary strengths: huge data samples from proton–proton collisions at CERN and a clean environment in electron–positron collisions at KEK. Combining the two will provide powerful tests of lepton-flavour universality, searches for exotic decays and refinements in the understanding of hadronic effects.
The next major step in precision flavour physics would require “tera-Z” samples of a trillion Z bosons from a high-luminosity electron–positron collider such as the FCC-ee, alongside a spectrum of focused experimental initiatives at a more modest scale.
4. Neutrino physics
Neutrino physics addresses open fundamental questions related to neutrino masses and their deep connections to the matter–antimatter asymmetry in the universe and its cosmic evolution. Upcoming experiments, including long-baseline accelerator-neutrino experiments (DUNE and Hyper-Kamiokande), reactor experiments such as JUNO (see “JUNO takes aim at neutrino-mass hierarchy”) and astroparticle observatories (KM3NeT and IceCube; see also CERN Courier May/June 2025 p23), will likely unravel the neutrino mass hierarchy and discover leptonic CP violation.
In parallel, the hunt for neutrinoless-double-beta decay continues. A signal would indicate that neutrinos are Majorana fermions, which would be indisputable evidence for new physics! Such efforts extend the reach of particle physics beyond accelerators and deepen connections between disciplines. Efforts to determine the absolute mass of neutrinos are also very important.
The chapter highlights the growing synergy between neutrino experiments and collider, astrophysical and cosmological studies, as well as the pivotal role of theory developments. Precision measurements of neutrino interactions provide crucial support for oscillation measurements, and for nuclear and astroparticle physics. New facilities at accelerators explore neutrino scattering at higher energies, while advances in detector technologies have enabled the measurement of coherent neutrino scattering, opening new opportunities for new physics searches. Neutrino physics is a truly global enterprise, with strong European participation and a pivotal role for the CERN neutrino platform.
5. Cosmic messengers
Astroparticle physics and cosmology increasingly provide new and complementary information to laboratory particle-physics experiments in addressing fundamental questions about the universe. A rich set of recent achievements in these fields includes high-precision measurements of cosmological perturbations in the cosmic microwave background (CMB) and in galaxy surveys, a first measurement of an extragalactic neutrino flux, accurate antimatter fluxes and the discovery of gravitational waves (GWs).
Leveraging information from these experiments has given rise to the field of multi-messenger astronomy. The next generation of instruments, from neutrino telescopes to ground- and space-based CMB and GW observatories, promises exciting results with important clues for particle physics.
6. Beyond the Standard Model
The landscape for physics beyond the SM is vast, calling for an extended exploration effort with exciting prospects for discovery. It encompasses new scalar or gauge sectors, supersymmetry, compositeness, extra dimensions and dark-sector extensions that connect visible and invisible matter.
Many of these models predict new particles or deviations from SM couplings that would be accessible to next-generation accelerators. The briefing book shows that future electron–positron colliders such as FCC-ee, CLIC, LCF and LEP3 have sensitivity to the indirect effects of new physics through precision Higgs, electroweak and flavour measurements. With their per-mille precision measurements, electron–positron colliders will be essential tools for revealing the virtual effects of heavy new physics beyond the direct reach of colliders. In direct searches, CLIC would extend the energy frontier to 1.5 TeV, whereas FCC-hh would extend it to tens of TeV, potentially enabling the direct observation of new physics such as new gauge bosons, supersymmetric particles and heavy scalar partners. A muon collider would combine precision and energy reach, offering a compact high-energy platform for direct and indirect discovery.
This chapter of the briefing book underscores the complementarity between collider and non-collider experiments. Low-energy precision experiments, searches for electric dipole moments, rare decays and axion or dark-photon experiments probe new interactions at extremely small couplings, while astrophysical and cosmological observations constrain new physics over sprawling mass scales.
7. Dark matter and the dark sector
The nature of dark matter, and the dark sector more generally, remains one of the deepest mysteries in modern physics. A broad range of masses and interaction strengths must be explored, encompassing numerous potential dark-matter phenomenologies, from ultralight axions and hidden photons to weakly interacting massive particles, sterile neutrinos and heavy composite states. The theory space of the dark sector is just as crowded, with models involving new forces or “portals” that link visible and invisible matter.
As no single experimental technique can cover all possibilities, progress will rely on exploiting the complementarity between collider experiments, direct and indirect searches for dark matter, and cosmological observations. Diversity is the key aspect of this developing experimental programme!
8. Accelerator science and technology
The briefing book considers the potential paths to higher energies and luminosities offered by each proposal for CERN’s next flagship project: the two circular colliders FCC-ee and FCC-hh, the two linear colliders LCF and CLIC, and a muon collider; LEP3 and LHeC are also considered as colliders that could potentially offer a physics programme to bridge the time between the HL-LHC and the next high-energy flagship collider. The technical readiness, cost and timeline of each collider are summarised, alongside their environmental impact and energy efficiency (see “Energy efficiency” figure).
The two main development fronts in this technology pillar are high-field magnets and efficient radio-frequency (RF) cavities. High-field superconducting magnets are essential for the FCC-hh, while high-temperature superconducting magnet technology, which presents unique opportunities and challenges, might be relevant to the FCC-hh as a second-stage machine after the FCC-ee. Efficient RF systems are required by all accelerators (CERN Courier May/June 2025 p30). Research and development (R&D) on advanced acceleration concepts, such as plasma-wakefield acceleration and muon colliders, also shows much promise, but significant work is needed before these concepts can offer a viable solution for a future collider.
Preserving Europe’s leadership in accelerator science and technology requires a broad and extensive programme of work with continuous support for accelerator laboratories and test facilities. Such investments will continue to be very important for applications in medicine, materials science and industry.
9. Detector instrumentation
A wealth of lessons learned from the LHC and HL-LHC experiments are guiding the development of the next generation of detectors, which must have higher granularity, and – for a hadron collider – a higher radiation tolerance, alongside improved timing resolution and data throughput.
As the eyes through which we observe collisions at accelerators, detectors require a coherent and long-term R&D programme. Central to these developments will be the detector R&D collaborations, which have provided a structured framework for organising and steering the work since the previous update to the European Strategy for Particle Physics. These span the full spectrum of detector systems, with high-rate gaseous detectors, liquid detectors and high-performance silicon sensors for precision timing, precision particle identification, low-mass tracking and advanced calorimetry.
All these detectors will also require advances in readout electronics, trigger systems and real-time data processing. A major new element is the growing role of AI and quantum sensing, both of which already offer innovative methods for analysis, optimisation and detector design (CERN Courier July/August 2025 p31). As in computing, there are high hopes and well-founded expectations that these technologies will transform detector design and operation.
Maintaining Europe’s leadership in instrumentation requires sustained investment in test-beam infrastructures and engineering. This supports a mutually beneficial symbiosis with industry. Detector R&D is a portal to sectors as diverse as medical diagnostics and space exploration, providing essential tools such as imaging technologies, fast electronics and radiation-hard sensors for a wide range of applications.
10. Computing
If detectors are the eyes that explore nature, computing is the brain that deciphers the signals they receive. The briefing book pays much attention to the major leaps in computation and storage that are required by future experiments, with simulation, data management and processing at the top of the list (see “Data challenge” figure). Less demanding in resources, but equally demanding of further development, is data analysis. Planning for these new systems is guided by sustainable computing practices, including energy-efficient software and data centres. The next frontier is the HL-LHC, which will be the testing ground and the basis for future development, and serves as an example for the preservation of the current wealth of experimental data and software (CERN Courier September/October 2025 p41).
Several paradigm shifts hold great promise for the future of computing in high-energy physics. Heterogeneous computing integrates CPUs, GPUs and accelerators, providing hugely increased capabilities and better scaling than traditional CPU usage. Machine learning is already being deployed in event simulation, reconstruction and even triggering, and the first signs from quantum computing are very positive. The combination of AI with quantum technology promises a revolution in all aspects of software and of the development, deployment and usage of computing systems.
Some closing remarks
Beyond detailed physics summaries, two overarching issues appear throughout the briefing book.
First, progress will depend on a sustained interplay between experiment, theory and advances in accelerators, instrumentation and computing. The need for continued theoretical development is as pertinent as ever, as improved calculations will be critical for extracting the full physics potential of future experiments.
Second, all this work relies on people – the true driving force behind scientific programmes. There is an urgent need for academia and research institutions to attract and support experts in accelerator technologies, instrumentation and computing by offering long-term career paths. A lasting commitment to training the new generation of physicists who will carry out these exciting research programmes is equally important.
Revisiting the briefing book to craft the current summary brought home very clearly just how far the field of particle physics has come – and, more importantly, how much more there is to explore in nature. The best is yet to come!
Mere months after Wilhelm Röntgen discovered X-rays in 1895, doctors were already exploring their ability to treat superficial tumours. Today, the X-rays are generated by electron linacs rather than vacuum tubes, but the principle is the same, and radiotherapy is part of most cancer treatment programmes.
Charged hadrons offer distinct advantages. Though they are more challenging to manipulate in a clinical environment, protons and heavy ions deposit most of their energy just before they stop, at the so-called Bragg peak, allowing medical physicists to spare healthy tissue and target cancer cells precisely. Ever since Robert R Wilson proposed it in 1946, nearly 80 years ago, particle therapy has been an effective component of the most advanced cancer treatments.
With the incidence of cancer rising across the world, research into particle therapy is more valuable than ever to human wellbeing – and the science isn’t slowing down. Today, progress requires adapting accelerator physics to the demands of the burgeoning field of radiobiology. This is the scientific basis for developing and validating a whole new generation of treatment modalities, from FLASH therapy to combining particle therapy with immunotherapy.
Here are the top five facts accelerator physicists need to know about biology at the Bragg peak.
1. DNA is the main target
The control centre of almost every cell is its nucleus, which houses DNA – the body’s genetic instruction manual. If a cell’s DNA becomes compromised, the cell can mutate and lose control of its basic functions, leading it to die or to multiply uncontrollably. The latter results in cancer.
For more than a century, radiation doses have been effective in halting the uncontrollable growth of cancerous cells. Today, the key insight from radiobiology is that for the same radiation dose, biological effects such as cell death, genetic instability and tissue toxicity differ significantly based on both beam parameters and the tissue being targeted.
Biologists have discovered that a “linear energy transfer” of roughly 100 keV/μm produces the most significant biological effect. At this density of ionisation, the distance between energy deposition events is roughly equal to the diameter of the DNA double helix, creating complex, repair-resistant DNA lesions that strongly reduce cell survival. Beyond 100 keV/μm, energy is wasted.
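To see roughly where the 100 keV/μm figure comes from, consider the spacing between ionisation clusters implied by that linear energy transfer. The ~200 eV per cluster used below is an illustrative assumption, not a number from the text; the ~2 nm helix diameter is the textbook value.

```latex
\Delta x \;\approx\; \frac{E_{\mathrm{cluster}}}{\mathrm{LET}}
\;=\; \frac{200\ \mathrm{eV}}{100\ \mathrm{keV}/\mu\mathrm{m}}
\;=\; \frac{200\ \mathrm{eV}}{100\ \mathrm{eV}/\mathrm{nm}}
\;=\; 2\ \mathrm{nm}\;\sim\; d_{\mathrm{DNA}}.
```

At this spacing, a single particle track can plausibly break both strands of the helix within a few nanometres, which is what makes the resulting lesions so hard to repair.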
DNA is the main target of radiotherapy because it holds the genetic information essential for the cell’s survival and proliferation. Made up of a double helix that looks like a twisted ladder, DNA consists of two strands of nucleotides held together by hydrogen bonds. The sequence of these nucleotides forms the cell’s unique genetic code. A poorly repaired lesion on this ladder leaves a permanent mark on the genome.
When radiation induces a double-strand break, repair is primarily attempted through two pathways: either by rejoining the broken ends of the DNA, or by replacing the break with an identical copy of healthy DNA (see “Repair shop” image). The efficiency of these repairs decreases dramatically when the breaks occur in close spatial proximity or if they are chemically complex. Such scenarios frequently result in lethal mis-repair events or severe alterations in the genetic code, ultimately compromising cell survival.
This fundamental aspect of radiobiology strongly motivates the use of particle therapy over conventional radiotherapy. Whereas X-rays deliver less than 10 keV/μm, creating sparse ionisation events, protons deposit tens of keV/μm near the Bragg peak, and heavy ions 100 keV/μm or more.
2. Mitochondria and membranes matter too
For decades, radiobiology revolved around studying damage to DNA in cell nuclei. However, mounting evidence reveals that an important aspect of cellular dysfunction can be inflicted by damage to other components of cells, such as the cell membrane and the collection of “organelles” inside it. And the nucleus is not the only organelle containing DNA.
Mitochondria generate energy and serve as the body’s cellular executioners. If a mitochondrion recognises that its cell’s DNA has been damaged, it may order the cell membrane to become permeable. Without the structure of the cell membrane, the cell breaks apart, its fragments carried away by immune cells. This is one mechanism behind “programmed cell death” – a controlled form of death, where the cell essentially presses its own self-destruct button (see “Self-destruct” image).
Irradiated mitochondrial DNA can suffer from strand breaks, base-pair mismatches and deletions in the code. In space-radiation studies, damage to mitochondrial DNA is a serious health concern as it can lead to mutations, premature ageing and even the creation of tumours. But programmed cell death can prevent a cancer cell from multiplying into a tumour. By disrupting the mitochondria of tumour cells, particle irradiation can compromise their energy metabolism and amplify cell death, increasing the permeability of the cell membrane and encouraging the tumour cell to self-destruct. Though a less common occurrence, membrane damage by irradiation can also directly lead to cell death.
3. Bystander cells exhibit their own radiation response
For many years, radiobiology was driven by a simple assumption: only cells directly hit by radiation would be damaged. This view started to change in the 1990s, when researchers noticed something unexpected: even cells that had not been irradiated showed signs of stress or injury when they were near irradiated cells. This phenomenon, known as the bystander effect, revealed that irradiated cells can send biochemical signals to their neighbours, which may in turn respond as if they themselves had been exposed, potentially triggering an immune response (see “Communication” image).
“Non-targeted” effects propagate not only in space, but also in time, through the phenomenon of radiation-induced genomic instability. This temporal dimension is characterised by the delayed appearance of genomic alterations across multiple cell generations. Radiation damage propagates across cells and tissues, and over time, adding complexity beyond the simple dose–response paradigm.
Although the underlying mechanisms remain unclear, the clustered ionisation events produced by carbon ions generate complex DNA damage and cell death, while largely preserving nearby, unirradiated cells.
4. Radiation damage activates the immune system
Cancer cells multiply because the immune system fails to recognise them as a threat (see “Immune response” image). The modern pharmaceutical technique of immunotherapy seeks to chemically tag the cancer cells the immune system has ignored, alerting it to the threat they pose. Radiotherapy seeks to activate the immune system by inflicting recognisable cellular damage, but long courses of photon radiation can also weaken overall immunity.
This negative effect is often caused by the exposure of circulating blood and active blood-producing organs to radiation. Fortunately, particle therapy’s ability to tightly conform the dose to the target, subjecting surrounding tissues to a minimal dose, can significantly mitigate the reduction of immune blood cells, better preserving systemic immunity. By inflicting complex, clustered DNA lesions, heavy ions have the strongest potential to directly trigger programmed cell death, even in the most difficult-to-treat cancer cells, bypassing some of the molecular tricks that tumours use to survive and amplifying the immune response beyond conventional radiotherapy with X-rays. These clustered lesions, characteristic of high linear-energy-transfer radiation, trigger the DNA damage–repair signals strongly associated with immune activation.
These biological differences provide a strong rationale for the rapidly emerging research frontier of combining particle therapy with immunotherapy. Particle therapy’s key advantage is its ability to amplify immunogenic cell death, in which the dying cell’s surface changes, displaying “danger tags” that recruit immune cells to kill it and to recognise and kill others like it. This, together with its capacity to mitigate systemic immunosuppression, makes particle therapy a theoretically superior partner for immunotherapy compared to conventional X-rays.
5. Ultra-high dose rates protect healthy tissues
In recent years, the attention of clinicians and researchers has focused on the “FLASH” effect – a groundbreaking concept in cancer treatment where radiation is delivered at an ultra-high dose rate in excess of 40 Gy/s (40 J/kg/s). FLASH radiotherapy appears to minimise damage to healthy tissues while maintaining at least the same level of tumour control as conventional methods. Inflammation in healthy tissues is reduced, and the number of immune cells entering the tumour is increased, helping the body fight cancer more effectively. This can significantly widen the therapeutic window – the range of radiation doses that can successfully treat a tumour while minimising toxicity to healthy tissues.
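For a sense of scale, the following back-of-the-envelope comparison assumes an illustrative 10 Gy fraction and a conventional dose rate of order 0.1 Gy/s (a few Gy per minute); both numbers are assumptions for illustration, not values from the text. Since 1 Gy = 1 J/kg, the 40 J/kg/s FLASH threshold corresponds to 40 Gy/s.

```latex
t_{\mathrm{FLASH}} = \frac{D}{\dot{D}} = \frac{10\ \mathrm{Gy}}{40\ \mathrm{Gy/s}} = 0.25\ \mathrm{s},
\qquad
t_{\mathrm{conv}} = \frac{10\ \mathrm{Gy}}{0.1\ \mathrm{Gy/s}} = 100\ \mathrm{s}.
```

It is this compression of the delivery time by two to three orders of magnitude that is thought to drive the transient-hypoxia mechanism discussed below.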
Though the radiobiological mechanisms behind this protective effect remain unclear, several hypotheses have been proposed. A leading theory focuses on oxygen depletion or “hypoxia”.
As tumours grow, they outpace the surrounding blood vessels’ ability to provide oxygen (see “Oxygen depletion” image). By condensing the dose in a very short time, it is thought that FLASH therapy may induce transient hypoxia within normal tissues too, reducing oxygen-dependent DNA damage there, while killing tumour cells at the same rate. Using a similar mechanism, FLASH therapy may also preserve mitochondrial integrity and energy production in normal tissues.
It is still under investigation whether a FLASH effect occurs with carbon ions, but combining the biological benefits of high linear-energy-transfer radiation with those of FLASH delivery could be very promising.