As the Standard Model (SM) withstands increasingly stringent experimental tests, rare decays remain a prime hunting ground for new physics. In a recent paper, the LHCb collaboration reports its first dedicated searches for the decays B0→ K+π–τ+τ– and Bs0→ K+K–τ+τ–, pushing hadron–collider flavour physics further into tau-rich territory.
At the quark level, the B0→ K+π–τ+τ– and Bs0→ K+K–τ+τ– decays happen via the flavour-changing process b → sτ+τ–, which is highly suppressed in the SM. The expected branching fractions of around 10–7 would place these decays well below the current experimental sensitivity. However, many new-physics scenarios, such as those involving leptoquarks or additional Z′ bosons, predict mediators that couple preferentially to third-generation leptons.
The tensions with the SM observed in the ratios of semileptonic branching fractions R(D(*)) and in b → sμ+μ– processes could, for example, result in an enhancement of b → sτ+τ– decays. Yet despite its potential to yield signs of new physics, the tau sector remains largely unexplored.
The LHCb analysis considered only tau decays to muons, in order to exploit the detector’s excellent muon identification systems. Reconstructing decays to final states with tau leptons at a hadron collider is notoriously challenging, particularly when relying on leptonic decays such as τ+→ μ+ν̄τνμ, which result in multiple unreconstructed neutrinos. Using the Run 2 data set of about 5.4 fb–1 of proton–proton collisions, the collaboration applied machine-learning techniques that exploit topological and isolation features to separate the suppressed tau-pair signal from the background.
Due to the large amount of missing energy in the final state, the B-meson mass cannot be fully reconstructed and the output of the machine-learning algorithm was instead fitted to search for a b → sτ+τ– component. The search was primarily limited by the size of the control samples used to constrain the background shapes – a limitation that will be alleviated by the larger datasets expected in future LHC runs.
No significant signal excess was observed in either the K+π–τ+τ– or the K+K–τ+τ– final states. Upper limits on the branching fractions were then established in bins of the dihadron invariant masses, allowing separate exploration of regions dominated by dihadron resonances and those expected to be primarily non-resonant.
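As a purely schematic illustration of how such limits arise – not the collaboration’s actual procedure – an upper limit on a fitted signal yield can be converted into a branching-fraction limit via a normalisation channel of known branching fraction, assuming the signal and normalisation channels share the same parent-particle production. Every number and efficiency below is a hypothetical placeholder.

```python
# Hypothetical numbers purely for illustration; none are LHCb values.
n_sig_upper = 25.0          # assumed upper limit on the fitted signal yield
n_norm = 1.0e5              # assumed observed yield of a normalisation channel
eff_sig, eff_norm = 2.0e-4, 1.0e-2   # assumed selection efficiencies
bf_norm = 1.0e-3            # assumed branching fraction of the normalisation channel

# B(signal) < (N_sig_upper / eff_sig) * (eff_norm / N_norm) * B(norm)
bf_sig_upper = (n_sig_upper / eff_sig) * (eff_norm / n_norm) * bf_norm
print(f"B(signal) < {bf_sig_upper:.1e}")
```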
When interpreted in terms of resonant modes, the limits are B(B0→ K*(892)0τ+τ–) < 2.8 × 10–4 and B(Bs0→ φ(1020)τ+τ–) < 4.7 × 10–4 at the 95% confidence level. The B0→ K*(892)0τ+τ– limit improves on previous bounds by approximately an order of magnitude, while the limit on Bs0→ φ(1020)τ+τ– is the first ever established.
These results represent the world’s most stringent limits on b → sτ+τ– transitions. The analysis lays essential groundwork for future searches, as the larger LHCb datasets from LHC Run 3 and beyond are expected to open a new frontier in measurements of rare b-hadron transitions involving heavy leptons.
With the upgraded detector and the novel fully software-based trigger, the efficiency in selecting low-pT muons – and consequently the tau leptons from which they originate – will be much improved. Sensitivity to b → sτ+τ– transitions is therefore expected to increase substantially in the coming years.
Strangeness production in high-energy hadron collisions is a powerful tool for exploring quantum chromodynamics (QCD). Unlike up and down, strange quarks are not present as valence quarks in colliding protons and neutrons, and must therefore appear through interactions. They are, however, still light enough to be abundantly produced at the LHC.
Over the past 15 years, the ALICE collaboration has shown that the abundance of strange over non-strange hadrons grows with event multiplicity in all collision systems. In particular, high-multiplicity proton–proton (pp) collisions display a significant strangeness enhancement, reaching saturation levels similar to those in heavy-ion collisions. In one of the most precise studies of strange-to-non-strange hadron production to date, the ALICE collaboration has reported its recent results from pp and lead–lead collisions at the LHC.
Strange hadrons (Ks0, Λ, Ξ, Ω) were reconstructed from their weak-decay topologies. Candidates were then selected by applying geometrical and kinematic cuts, estimating and subtracting backgrounds, and correcting the resulting distributions using detector-response simulations. The analyses were carried out at a centre-of-mass energy per nucleon pair of 5.02 TeV and span a wide multiplicity range, from 2 to 2000 charged particles at mid-rapidity.
To better understand how strangeness is produced, the collaboration has taken a significant step by measuring the probability distribution of forming a specific number of strange particles of the same species per event. This study, based on event-by-event strange-particle counting, moves beyond average yields and probes higher orders in the strange-particle production probability distribution. To account for the response of the detector, each candidate is assigned a probability of being genuine rather than background, and a Bayesian unfolding method iteratively corrects for particles that were missed or misidentified to reconstruct the true counts. This provides a novel technique for testing theoretical strangeness-production mechanisms, particularly in events characterised by a significant imbalance between strange and non-strange particles.
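As a rough sketch of the unfolding step described above – not the ALICE implementation – the snippet below applies an iterative Bayesian (D’Agostini-style) correction, assuming a known response matrix P(reconstructed count | true count) and a flat starting prior; the function name and all inputs are placeholders.

```python
import numpy as np

def bayesian_unfold(measured, response, n_iter=4):
    """Iterative Bayesian unfolding sketch (D'Agostini-style).

    measured : observed counts per reconstructed bin, length M
    response : response[r, t] = P(reco bin r | true bin t); each column
               sums to the detection efficiency of true bin t
    Returns an estimate of the counts per true bin, length T.
    """
    measured = np.asarray(measured, dtype=float)
    response = np.asarray(response, dtype=float)
    n_true = response.shape[1]
    col_sums = response.sum(axis=0)
    efficiency = np.where(col_sums > 0, col_sums, 1.0)
    prior = np.full(n_true, measured.sum() / n_true)   # flat starting prior

    for _ in range(n_iter):
        joint = response * prior                       # P(r|t) * prior(t)
        norm = joint.sum(axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        posterior = joint / norm                       # P(t|r) via Bayes' theorem
        # fold the measured counts back to truth level and correct for efficiency
        unfolded = (posterior * measured[:, None]).sum(axis=0) / efficiency
        prior = unfolded                               # refined prior for next pass
    return prior
```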
Exploiting a large dataset of pp collisions, the probability of producing n particles of a given species S (S = Ks0, Λ, Ξ– or Ω–) per event, P(nS), could be determined up to a maximum of nS = 7 for Ks0, nS = 5 for Λ, nS = 4 for Ξ and nS = 2 for Ω (see figure 1). An increase of P(nS) with charged-particle multiplicity is observed, becoming more pronounced for larger n, as reflected by the growing separation between the curves corresponding to low- and high-multiplicity classes in the high-n tail of the distributions.
The average production yield of n particles per event can be calculated from the P(nS) distributions, taking into account all possible combinations that result in a given multiplet. This makes it possible to compare events with the same or a different overall strange-quark content that hadronise into various combinations of hadrons in the final state. While the ratio of Ω triplets to single Ks0 shows an extreme strangeness-enhancement pattern, spanning up to two orders of magnitude across multiplicity, comparing hadron combinations that differ in up- and down-quark content but share the same total s-quark content (for instance, Ω singlets compared to Λ triplets) helps isolate the part of the enhancement unrelated to strangeness.
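As a sketch of how such multiplet yields can be built from the measured P(nS) distributions – under the simple assumption that the average number of k-plets per event is the expectation value of the binomial coefficient C(n, k), with purely illustrative probabilities – one might write:

```python
from math import comb

def average_multiplet_yield(p_n, k):
    """Average number of distinct k-plets of a given species per event,
    computed from the per-event probability distribution P(n):
    <N_k> = sum_n P(n) * C(n, k)."""
    return sum(p * comb(n, k) for n, p in enumerate(p_n))

# Hypothetical illustration: P(n) for some species, n = 0, 1, 2, ...
p_n = [0.70, 0.20, 0.07, 0.02, 0.01]
singles = average_multiplet_yield(p_n, 1)   # equals <n>, the usual mean yield
triplets = average_multiplet_yield(p_n, 3)  # average number of triplets per event
print(singles, triplets)
```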
Comparisons with state-of-the-art phenomenological models show that this new approach greatly enhances sensitivity to the underlying physics mechanisms implemented in different event generators. Together with the traditional strange-to-pion observables, the multiplicity-differential probability distributions of strange hadrons provide a more detailed picture of how strange quarks are produced and hadronise in high-energy collisions, offering a stringent benchmark for the phenomenological description of non-perturbative QCD.
Neutrino physics is a vibrant field of study, with spectacular recent advances. To this day, neutrino oscillations are the only experimental evidence of physics beyond the Standard Model, and, 25 years after this discovery, breathtaking progress has been achieved in both theory and experiment. Giulia Ricciardi’s new textbook provides a timely resource in a fast-developing field.
Entering this exciting field of research can be intimidating, given the breadth of topics that need to be mastered. As well as particle physics, neutrinos touch on astroparticle physics, cosmology, astrophysics, nuclear physics and geophysics, and many neutrino textbooks assume advanced knowledge of quantum field theory and particle theory. Ricciardi achieves a brilliant balance by providing a solid foundation in these areas, alongside a comprehensive overview of neutrino theory and experiment. This sets her book apart from most other literature on the subject and makes it a precious resource for newcomers and experts alike. She provides a self-contained introduction to group theory, symmetries, gauge theories and the Standard Model, with an approach that is both accessible and scientifically rigorous, putting the emphasis on understanding key concepts rather than abstract formalisms.
With the theoretical foundations in place, Ricciardi then turns to neutrino masses, neutrino mixing, astrophysical neutrinos and neutrino oscillations. Dirac, Majorana and Dirac-plus-Majorana mass terms are explored, alongside the “see-saw” mechanism and its possible implementations. A full chapter is devoted to neutrino oscillations in vacuum and in matter, preparing the reader to explore neutrino oscillations in experiments, first from natural sources, such as the Sun, supernovae, the atmosphere and cosmic neutrinos; a subsequent chapter then covers reactor and accelerator neutrinos, giving a detailed overview of the key theoretical and experimental issues. Ricciardi avoids a common omission in neutrino textbooks by addressing neutrino–nucleus interactions – a fast-developing topic in theory and a crucial aspect of interpreting current and future experiments. The book concludes with a look at current research and future prospects, including a discussion of neutrino-mass measurements and neutrinoless double-beta decay.
The clarity with which Ricciardi links theoretical concepts to experimental observations is remarkable. Her book is engaging and eminently enjoyable. I highly recommend it.
How would Einstein have reacted to Bell’s theorem and the experimental results derived from it? Alain Aspect’s new French-language book Si Einstein avait su (If Einstein had known) can be recommended to anybody interested in the Einstein–Bohr debates about quantum mechanics, how a CERN theorist, John Stewart Bell (1928–1990), weighed in in 1964, and how experimentalists converted Bell’s idea into ingenious physical experiments. Aspect shared the 2022 Nobel Prize in Physics with John F Clauser and Anton Zeilinger for this work.
The core part of Aspect’s book covers his own contributions to the experimental test of Bell’s inequality spanning 1975 to 1985. He gives a very personal account of his involvement as an experimental physicist in this matter, starting soon after he visited Bell at CERN in spring 1975 for advice concerning his French Thèse d’État. With anecdotes that give the reader the impression of sitting next to the author and listening to his stories, Aspect recounts how, in 1975, captivated by Bell’s work, he set up experiments in underground rooms at the Institut d’Optique in Orsay to test hidden-variable theories. He explains his experiments in detail with diagrams and figures from his original publications as well as images of the apparatus used. By 1981 and for several years to come, it was Aspect’s experiments that came closest to Bell’s idea on how to test the inequality formulated in 1964. Aspect defended his thesis in 1983 in a packed auditorium with illustrious examiners such as J S Bell, C Cohen-Tannoudji and B d’Espagnat. Not long afterwards, Cohen-Tannoudji invited him to the Collège de France and the Paris ENS to work on the laser cooling and manipulation of atoms – a quite different subject. At that time, Aspect didn’t see any point in closing some of the remaining loopholes in his experiments.
To prepare the terrain for his story, Aspect first tells the history of quantum mechanics from 1900 to 1935. He begins with a discussion of Planck’s blackbody radiation (1900), Einstein’s description of the photoelectric effect (1905) and the heat capacity of solids (1907), the wave–particle duality of light, the first Solvay Congress (1911), Bohr’s atomic model (1913) and matter–radiation interaction according to Einstein (1916). He then covers the Einstein–Bohr debates at the Solvay congresses of 1927 and 1930 on the interpretation of the probability aspects of quantum mechanics.
Aspect then turns to the Einstein, Podolsky, Rosen (EPR) paper of 1935, which discusses a gedankenexperiment involving two entangled quantum mechanical particles. Whereas the previous Einstein–Bohr debates ended with convincing arguments by Bohr refuting Einstein’s point of view, Bohr didn’t come up with a clear answer to Einstein’s objection of 1935, namely that he considered quantum mechanics to be incomplete. In 1935 and the years that followed, most physicists considered the Einstein–Bohr debate uninteresting and purely philosophical; it had practically no influence on the successful application of quantum mechanics. Between 1935 and 1964, the EPR subject was nearly dormant, apart from David Bohm’s interventions during the 1950s. In 1964 Bell took up the EPR paradox, which had been advanced as an argument that quantum mechanics should be supplemented by additional variables (CERN Courier July/August 2025 p21).
Aspect describes clearly and convincingly how Bell entered the scene and how the inequality bearing his name triggered experimentalists to get involved: experiments with polarisation-entangled photons and their correlations could decide whether Einstein’s or Bohr’s view of quantum mechanics was correct. Bell’s discovery transferred the Einstein–Bohr debate from epistemology to the realm of experimental physics. At the end of the 1960s the first experiments based on Bell’s inequality started to take shape. Aspect describes how these analysed the polarisation correlation of the entangled photons at a separation of a few metres. He discusses their difficulties and limitations, starting with the experiments launched by Clauser et al.
In the final chapter, covering 1985 to the present, Aspect explains why he decided not to continue his research with entangled photons and to switch subject. His opinion was that the technology at the time wasn’t ripe enough to close some of the remaining loopholes in his experiments – loopholes of a type that Bell considered less important. Aspect was convinced that if quantum mechanics were faulty, one would have seen indications of that in his experiments. It took until 2015 for two of the loopholes left open by Aspect’s experiments (the locality and detection loopholes) to be simultaneously closed. Yet, as Aspect notes, no experiment, however ideal, can be said to be totally loophole-free. The final chapter also covers more philosophical aspects of quantum non-locality and speculations about how Einstein would have reacted to the violation of Bell’s inequalities. In complementary sections, Aspect speaks about the no-cloning theorem, technological applications of quantum optics like quantum cryptography according to Ekert, quantum teleportation and quantum random number generators.
Who will profit from reading this book? First one should say that it is not a quantum-mechanics or quantum-optics textbook. Most of the material is written in such a way that it will be accessible and enjoyable to the educated layperson. For the more curious reader, supplementary sections cover physical aspects in deeper detail, and the book cites more than 80 original references. Aspect’s long experience and honed pedagogical skills are evident throughout. It is an engaging and authoritative introduction to one of the most profound debates in modern physics.
Hendrik Verweij, who was for many years a driving force in the development of electronics for high-energy physics, passed away on 11 August 2025 in Meyrin, Switzerland, at the age of 93.
Born in Linschoten near Gouda in the Netherlands, Henk earned a degree in electrical engineering at the Technical High School in Hilversum and started his career as an instrumentation specialist at Philips, working on oscilloscopes. He joined CERN in July 1956, bringing his expertise in electronics to the newly founded laboratory. With Ian Pizer, group leader of the electronics group of the nuclear-physics division, he published CERN Yellow Report 61-15 on a nanosecond-sampling oscilloscope, followed by a paper on a fast amplifier one year later.
During the next four decades, developments in electronics profoundly transformed the world. Henk played a crucial role in bringing this transformation to CERN’s electronics instrumentation, and he eventually succeeded Pizer as group leader. Over the years he worked with numerous colleagues on fast signal-processing circuits. The creation of a collection of standardised modules facilitated the setup of a variety of CERN experiments. With Bjorn Hallgren and others, he realised the simultaneous, fast time and amplitude digitisation of the inner drift detector of the innovative UA1 experiment at CERN’s Super Proton Synchrotron, which discovered the W and Z bosons together with the UA2 experiment.
In the 1960s, recognising the importance of standardisation for engaging industry, Henk built close ties with colleagues in the US, including at Lawrence Berkeley Laboratory, SLAC and the National Bureau of Standards (NBS). He took part in the discussions that led to the Nuclear Instrumentation Module (NIM) standard, defined in 1964 by the US Atomic Energy Commission, and served on the NIM committee chaired by Lou Costrell of the NBS.
Henk was also a member of the ESONE committee for the CAMAC and later FASTBUS standards, working alongside colleagues such as Bob Dobinson, Fred Iselin, Phil Ponting, Peggie Rimmer, Tim Berners-Lee and many others from across Europe and the US in this international effort. He contributed hardware for standard modules both before and after the publication of the FASTBUS specification in 1984, and reported regularly at conferences on the status of European developments. A strong advocate of collaboration with industry, he also helped persuade LeCroy to establish a facility near CERN.
Towards the end of his career, Henk became group leader of the microelectronics group at CERN, closing the loop in this transformational electronics evolution with integrated circuit developments for silicon microstrip, hybrid pixel and other detectors. When he retired in the 1990s, the group had built up the necessary expertise to design optimised application-specific integrated circuits (ASICs) for the LHC detectors. Ultimately, these allow the recording of millions of frames per second and event selection from the on-chip stored data.
Retirement did not diminish Henk’s interest in CERN and its electronics activities. He often dropped by the microelectronics group at CERN, regularly participating in Medipix meetings on the development of hybrid pixel-detector read-out chips for medical imaging and other applications.
Henk played an important role in making advances in microelectronics available to the high-energy physics community. His friends and colleagues will miss his experience, vision and irrepressible enthusiasm.
A major step toward shaping the future of European particle physics was reached on 2 October, with the release of the Physics Briefing Book of the 2026 update of the European Strategy for Particle Physics. Despite its 250 pages, it is a concise summary of the vast amount of work contained in the 266 written submissions to the strategy process and the deliberations of the Open Symposium in Venice in June (CERN Courier September/October 2025 p24).
The briefing book compiled by the Physics Preparatory Group is an impressive distillation of our current knowledge of particle physics, and a preview of the exciting prospects offered by future programmes. It provides the scientific basis for defining Europe’s long-term particle-physics priorities and determining the flagship collider that will best advance the field. To this end, it presents comparisons of the physics reach of the different candidate machines, which often have different strengths in probing new physics beyond the Standard Model (SM).
Condensing all this in a few sentences is difficult, though two messages are clear: if the next collider at CERN is an electron–positron collider, the exploration of new physics will proceed mainly through high-precision measurements; and the highest physics reach into the structure of physics beyond the SM via indirect searches will be provided by the combined exploration of the Higgs, electroweak and flavour domains.
Following a visionary outlook for the field from theory, the briefing book divides its exploration of the future of particle physics into seven sectors of fundamental physics and three technology pillars that underpin them.
1. Higgs and electroweak physics
In the new era that has dawned with the discovery of the Higgs boson, numerous fundamental questions remain, including whether the Higgs boson is an elementary scalar, part of an extended scalar sector, or even a portal to entirely new phenomena. The briefing book highlights how precision studies of the Higgs boson, the W and Z bosons, and the top quark will probe the SM to unprecedented accuracy, looking for indirect signs of new physics.
Addressing these questions requires highly precise measurements of the Higgs boson’s couplings, self-interaction and quantum corrections. While the High-Luminosity LHC (HL-LHC) will continue to improve several Higgs and electroweak measurements, the next qualitative leap in precision will be provided by future electron–positron colliders, such as the FCC-ee, the Linear Collider Facility (LCF), CLIC or LEP3. And while these would provide very important information, it would fall upon the shoulders of an energy-frontier machine like the FCC-hh or a muon collider to access potential heavy states. Using the absolute HZZ coupling from the FCC-ee, such machines would measure the single-Higgs-boson couplings with a precision better than 1%, and the Higgs self-coupling at the level of a few per cent (see “Higgs self-coupling” figure).
This anticipated leap in experimental precision will necessitate major advances in theory, simulation and detector technology. In the coming decades, electroweak physics and the Higgs boson in particular will remain a cornerstone of particle physics, linking the precision and energy frontiers in the search for deeper laws of nature.
2. Strong interaction physics
Precise knowledge of the strong interaction will be essential for understanding visible matter, exploring the SM with precision, and interpreting future discoveries at the energy frontier. Building upon advanced studies of QCD at the HL-LHC, future high-luminosity electron–positron colliders such as FCC-ee and LEP3 would, like LHeC, enable per-mille precision on the strong coupling constant, and a greatly improved understanding of the transition between the perturbative and non-perturbative regimes of QCD. The LHeC would bring increased precision on parton-distribution functions that would be very useful for many physics measurements at the FCC-hh. FCC-hh would itself open up a major new frontier for strong-interaction studies.
A deep understanding of the strong interaction also necessitates the study of strongly interacting matter under extreme conditions with heavy-ion collisions. ALICE and the other experiments at the LHC will continue to illuminate this physics, revealing insights into the early universe and the interiors of neutron stars.
3. Flavour physics
With high-precision measurements of quark and lepton processes, flavour studies test the SM at energy scales far above those directly accessible to colliders, thanks to their sensitivity to the effects of virtual particles in quantum loops. Small deviations from theoretical predictions could signal new interactions or particles influencing rare processes or CP-violating effects, making flavour physics one of the most sensitive paths toward discovering physics beyond the SM.
Global efforts are today led by the LHCb, ATLAS and CMS experiments at the LHC and by the Belle II experiment at SuperKEKB. These experiments have complementary strengths: huge data samples from proton–proton collisions at CERN and a clean environment in electron–positron collisions at KEK. Combining the two will provide powerful tests of lepton-flavour universality, searches for exotic decays and refinements in the understanding of hadronic effects.
The next major step in precision flavour physics would require “tera-Z” samples of a trillion Z bosons from a high-luminosity electron–positron collider such as the FCC-ee, alongside a spectrum of focused experimental initiatives at a more modest scale.
4. Neutrino physics
Neutrino physics addresses open fundamental questions related to neutrino masses and their deep connections to the matter–antimatter asymmetry in the universe and its cosmic evolution. Upcoming experiments, including long-baseline accelerator-neutrino experiments (DUNE and Hyper-Kamiokande), reactor experiments such as JUNO (see “JUNO takes aim at neutrino-mass hierarchy”) and astroparticle observatories (KM3NeT and IceCube; see also CERN Courier May/June 2025 p23), will likely unravel the neutrino-mass hierarchy and discover leptonic CP violation.
In parallel, the hunt for neutrinoless-double-beta decay continues. A signal would indicate that neutrinos are Majorana fermions, which would be indisputable evidence for new physics! Such efforts extend the reach of particle physics beyond accelerators and deepen connections between disciplines. Efforts to determine the absolute mass of neutrinos are also very important.
The chapter highlights the growing synergy between neutrino experiments and collider, astrophysical and cosmological studies, as well as the pivotal role of theory developments. Precision measurements of neutrino interactions provide crucial support for oscillation measurements, and for nuclear and astroparticle physics. New facilities at accelerators explore neutrino scattering at higher energies, while advances in detector technologies have enabled the measurement of coherent neutrino scattering, opening new opportunities for new physics searches. Neutrino physics is a truly global enterprise, with strong European participation and a pivotal role for the CERN neutrino platform.
5. Cosmic messengers
Astroparticle physics and cosmology increasingly provide new and complementary information to laboratory particle-physics experiments in addressing fundamental questions about the universe. A rich set of recent achievements in these fields includes high-precision measurements of cosmological perturbations in the cosmic microwave background (CMB) and in galaxy surveys, a first measurement of an extragalactic neutrino flux, accurate antimatter fluxes and the discovery of gravitational waves (GWs).
Leveraging information from these experiments has given rise to the field of multi-messenger astronomy. The next generation of instruments, from neutrino telescopes to ground- and space-based CMB and GW observatories, promises exciting results with important clues for particle physics.
6. Beyond the Standard Model
The landscape for physics beyond the SM is vast, calling for an extended exploration effort with exciting prospects for discovery. It encompasses new scalar or gauge sectors, supersymmetry, compositeness, extra dimensions and dark-sector extensions that connect visible and invisible matter.
Many of these models predict new particles or deviations from SM couplings that would be accessible to next-generation accelerators. The briefing book shows that future electron–positron colliders such as FCC-ee, CLIC, LCF and LEP3 have sensitivity to the indirect effects of new physics through precision Higgs, electroweak and flavour measurements. With their per-mille precision measurements, electron–positron colliders will be essential tools for revealing the virtual effects of heavy new physics beyond the direct reach of colliders. In direct searches, CLIC would extend the energy frontier to 1.5 TeV, whereas FCC-hh would extend it to tens of TeV, potentially enabling the direct observation of new physics such as new gauge bosons, supersymmetric particles and heavy scalar partners. A muon collider would combine precision and energy reach, offering a compact high-energy platform for direct and indirect discovery.
This chapter of the briefing book underscores the complementarity between collider and non-collider experiments. Low-energy precision experiments, searches for electric dipole moments, rare decays and axion or dark-photon experiments probe new interactions at extremely small couplings, while astrophysical and cosmological observations constrain new physics over sprawling mass scales.
7. Dark matter and the dark sector
The nature of dark matter, and the dark sector more generally, remains one of the deepest mysteries in modern physics. A broad range of masses and interaction strengths must be explored, encompassing numerous potential dark-matter phenomenologies, from ultralight axions and hidden photons to weakly interacting massive particles, sterile neutrinos and heavy composite states. The theory space of the dark sector is just as crowded, with models involving new forces or “portals” that link visible and invisible matter.
As no single experimental technique can cover all possibilities, progress will rely on exploiting the complementarity between collider experiments, direct and indirect searches for dark matter, and cosmological observations. Diversity is the key aspect of this developing experimental programme!
8. Accelerator science and technology
The briefing book considers the potential paths to higher energies and luminosities offered by each proposal for CERN’s next flagship project: the two circular colliders FCC-ee and FCC-hh, the two linear colliders LCF and CLIC, and a muon collider; LEP3 and LHeC are also considered as colliders that could potentially offer a physics programme to bridge the time between the HL-LHC and the next high-energy flagship collider. The technical readiness, cost and timeline of each collider are summarised, alongside their environmental impact and energy efficiency (see “Energy efficiency” figure).
The two main development fronts in this technology pillar are high-field magnets and efficient radio-frequency (RF) cavities. High-field superconducting magnets are essential for the FCC-hh, while high-temperature superconducting magnet technology, which presents unique opportunities and challenges, might be relevant to the FCC-hh as a second-stage machine after the FCC-ee. Efficient RF systems are required by all accelerators (CERN Courier May/June 2025 p30). Research and development (R&D) on advanced acceleration concepts, such as plasma-wakefield acceleration and muon colliders, also holds much promise but necessitates significant work before it can offer a viable solution for a future collider.
Preserving Europe’s leadership in accelerator science and technology requires a broad and extensive programme of work with continuous support for accelerator laboratories and test facilities. Such investments will continue to be very important for applications in medicine, materials science and industry.
9. Detector instrumentation
A wealth of lessons learned from the LHC and HL-LHC experiments are guiding the development of the next generation of detectors, which must have higher granularity, and – for a hadron collider – a higher radiation tolerance, alongside improved timing resolution and data throughput.
As the eyes through which we observe collisions at accelerators, detectors require a coherent and long-term R&D programme. Central to these developments will be the detector R&D collaborations, which have provided a structured framework for organising and steering the work since the previous update to the European Strategy for Particle Physics. These span the full spectrum of detector systems, with high-rate gaseous detectors, liquid detectors and high-performance silicon sensors for precision timing, precision particle identification, low-mass tracking and advanced calorimetry.
All these detectors will also require advances in readout electronics, trigger systems and real-time data processing. A major new element is the growing role of AI and quantum sensing, both of which already offer innovative methods for analysis, optimisation and detector design (CERN Courier July/August 2025 p31). As in computing, there are high hopes and well-founded expectations that these technologies will transform detector design and operation.
To maintain Europe’s leadership in instrumentation, sustained investment in test-beam infrastructures and engineering is essential. This supports a mutually beneficial symbiosis with industry. Detector R&D is a portal to sectors as diverse as medical diagnostics and space exploration, providing essential tools such as imaging technologies, fast electronics and radiation-hard sensors for a wide range of applications.
10. Computing
If detectors are the eyes that explore nature, computing is the brain that deciphers the signals they receive. The briefing book pays much attention to the major leaps in computation and storage that are required by future experiments, with simulation, data management and processing at the top of the list (see “Data challenge” figure). Less demanding in resources, but equally demanding of further development, is data analysis. Planning for these new systems is guided by sustainable computing practices, including energy-efficient software and data centres. The next frontier is the HL-LHC, which will be the testing ground and the basis for future development, and serves as an example for the preservation of the current wealth of experimental data and software (CERN Courier September/October 2025 p41).
Several paradigm shifts hold great promise for the future of computing in high-energy physics. Heterogeneous computing integrates CPUs, GPUs and accelerators, providing hugely increased capabilities and better scaling than traditional CPU usage. Machine learning is already being deployed in event simulation, reconstruction and even triggering, and the first signs from quantum computing are very positive. The combination of AI with quantum technology promises a revolution in all aspects of software and of the development, deployment and usage of computing systems.
Some closing remarks
Beyond detailed physics summaries, two overarching issues appear throughout the briefing book.
First, progress will depend on a sustained interplay between experiment, theory and advances in accelerators, instrumentation and computing. The need for continued theoretical development is as pertinent as ever, as improved calculations will be critical for extracting the full physics potential of future experiments.
Second, all this work relies on people – the true driving force behind scientific programmes. There is an urgent need for academia and research institutions to attract and support experts in accelerator technologies, instrumentation and computing by offering long-term career paths. A lasting commitment to training the new generation of physicists who will carry out these exciting research programmes is equally important.
Revisiting the briefing book to craft the current summary brought home very clearly just how far the field of particle physics has come – and, more importantly, how much more there is to explore in nature. The best is yet to come!
In 1895, mere months after Wilhelm Röntgen discovered X-rays, doctors explored their ability to treat superficial tumours. Today, the X-rays are generated by electron linacs rather than vacuum tubes, but the principle is the same, and radiotherapy is part of most cancer treatment programmes.
Charged hadrons offer distinct advantages. Though they are more challenging to manipulate in a clinical environment, protons and heavy ions deposit most of their energy just before they stop, at the so-called Bragg peak, allowing medical physicists to spare healthy tissue and target cancer cells precisely. Particle therapy has been an effective component of the most advanced cancer therapies for nearly 80 years, since it was proposed by Robert R Wilson in 1946.
With the incidence of cancer rising across the world, research into particle therapy is more valuable than ever to human wellbeing – and the science isn’t slowing down. Today, progress requires adapting accelerator physics to the demands of the burgeoning field of radiobiology. This is the scientific basis for developing and validating a whole new generation of treatment modalities, from FLASH therapy to combining particle therapy with immunotherapy.
Here are the top five facts accelerator physicists need to know about biology at the Bragg peak.
1. DNA is the main target of radiation
Almost every cell’s control centre is contained within its nucleus, which houses DNA – your body’s genetic instruction manual. If the cell’s DNA becomes compromised, it can mutate and lose control of its basic functions, leading the cell to die or multiply uncontrollably. The latter results in cancer.
For more than a century, radiation doses have been effective in halting the uncontrollable growth of cancerous cells. Today, the key insight from radiobiology is that for the same radiation dose, biological effects such as cell death, genetic instability and tissue toxicity differ significantly based on both beam parameters and the tissue being targeted.
Biologists have discovered that a “linear energy transfer” of roughly 100 keV/μm produces the most significant biological effect. At this density of ionisation, the distance between energy-deposition events is roughly equal to the diameter of the DNA double helix, creating complex, repair-resistant DNA lesions that strongly reduce cell survival. Beyond 100 keV/μm, energy is wasted: additional ionisation is deposited in cells that are already lethally damaged – the so-called overkill effect.
DNA is the main target of radiotherapy because it holds the genetic information essential for the cell’s survival and proliferation. Made up of a double helix that looks like a twisted ladder, DNA consists of two strands of nucleotides held together by hydrogen bonds. The sequence of these nucleotides forms the cell’s unique genetic code. A poorly repaired lesion on this ladder leaves a permanent mark on the genome.
When radiation induces a double-strand break, repair is primarily attempted through two pathways: either by rejoining the broken ends of the DNA, or by replacing the break with an identical copy of healthy DNA (see “Repair shop” image). The efficiency of these repairs decreases dramatically when the breaks occur in close spatial proximity or if they are chemically complex. Such scenarios frequently result in lethal mis-repair events or severe alterations in the genetic code, ultimately compromising cell survival.
This fundamental aspect of radiobiology strongly motivates the use of particle therapy over conventional radiotherapy. Whereas X-rays deliver less than 10 keV/μm, creating sparse ionisation events, protons deposit tens of keV/μm near the Bragg peak, and heavy ions 100 keV/μm or more.
2. Mitochondria and membranes matter too
For decades, radiobiology revolved around studying damage to DNA in cell nuclei. However, mounting evidence reveals that an important aspect of cellular dysfunction can be inflicted by damage to other components of cells, such as the cell membrane and the collection of “organelles” inside it. And the nucleus is not the only organelle containing DNA.
Mitochondria generate energy and serve as the body’s cellular executioners. If a mitochondrion recognises that its cell’s DNA has been damaged, it may order the cell membrane to become permeable. Without the structure of the cell membrane, the cell breaks apart, its fragments carried away by immune cells. This is one mechanism behind “programmed cell death” – a controlled form of death, where the cell essentially presses its own self-destruct button (see “Self-destruct” image).
Irradiated mitochondrial DNA can suffer from strand breaks, base–pair mismatches and deletions in the code. In space-radiation studies, damage to mitochondrial DNA is a serious health concern as it can lead to mutations, premature ageing and even the creation of tumours. But programmed cell death can prevent a cancer cell from multiplying into a tumour. By disrupting the mitochondria of tumour cells, particle irradiation can compromise their energy metabolism and amplify cell death, increasing the permeability of the cell membrane and encouraging the tumour cell to self-destruct. Though a less common occurrence, membrane damage by irradiation can also directly lead to cell death.
3. Bystander cells exhibit their own radiation response
For many years, radiobiology was driven by a simple assumption: only cells directly hit by radiation would be damaged. This view started to change in the 1990s, when researchers noticed something unexpected: even cells that had not been irradiated showed signs of stress or injury when they were near the irradiated cells. This phenomenon, known as the bystander effect, revealed that irradiated cells can send biochemical signals to their neighbours, which may in turn respond as if they themselves had been exposed, potentially triggering an immune response (see “Communication” image).
“Non-targeted” effects propagate not only in space, but also in time, through the phenomenon of radiation-induced genomic instability. This temporal dimension is characterised by the delayed appearance of genomic alterations across multiple cell generations. Radiation damage propagates across cells and tissues, and over time, adding complexity beyond the simple dose–response paradigm.
Although the underlying mechanisms remain unclear, the clustered ionisation events produced by carbon ions generate complex DNA damage and cell death in the cells that are hit, while largely sparing nearby, unirradiated cells.
4. Radiation damage activates the immune system
Cancer cells multiply because the immune system fails to recognise them as a threat (see “Immune response” image). The modern pharmaceutical-based technique of immunotherapy seeks to alert the immune system to the threat posed by cancer cells it has ignored by chemically tagging them. Radiotherapy seeks to activate the immune system by inflicting recognisable cellular damage, but long courses of photon radiation can also weaken overall immunity.
This negative effect is often caused by the exposure of circulating blood and active blood-producing organs to radiation doses. Fortunately, particle therapy’s ability to tightly conform the dose to the target and subject surrounding tissues to a minimal dose can significantly mitigate the reduction of immune blood cells, better preserving systemic immunity. By inflicting complex, clustered DNA lesions, heavy ions have the strongest potential to directly trigger programmed cell death, even in the most difficult-to-treat cancer cells, bypassing some of the molecular tricks that tumours use to survive, and amplifying the immune response beyond conventional radiotherapy with X-rays. This is linked to the complex, clustered DNA lesions induced by high linear-energy-transfer (high-LET) radiation, which trigger the DNA damage–repair signals strongly associated with immune activation.
These biological differences provide a strong rationale for the rapidly emerging research frontier of combining particle therapy with immunotherapy. Particle therapy’s key advantage is its ability to amplify immunogenic cell death, where the cell’s surface changes, creating “danger tags” to recruit immune cells to come and kill it, recognise others like it, and kill those too. This ability for particle therapy to mitigate systemic immunosuppression makes it a theoretically superior partner for immunotherapy compared to conventional X-rays.
5. Ultra-high dose rates protect healthy tissues
In recent years, the attention of clinicians and researchers has focused on the “FLASH” effect – a groundbreaking concept in cancer treatment where radiation is delivered at an ultra-high dose rate in excess of 40 Gy (40 J/kg) per second. FLASH radiotherapy appears to minimise damage to healthy tissues while maintaining at least the same level of tumour control as conventional methods. Inflammation in healthy tissues is reduced, and the number of immune cells entering the tumour is increased, helping the body fight cancer more effectively. This can significantly widen the therapeutic window – the optimal range of radiation doses that can successfully treat a tumour while minimising toxicity to healthy tissues.
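To put the dose rate in perspective, here is a rough back-of-the-envelope comparison, assuming a conventional dose rate of about 0.1 Gy/s (a typical, but not universal, value) and an illustrative 10 Gy dose:

```python
# Illustrative delivery-time comparison; the dose and conventional rate are assumed values.
dose_gy = 10.0              # hypothetical single-fraction dose
conventional_rate = 0.1     # Gy/s, assumed typical value for a clinical linac
flash_rate = 40.0           # Gy/s, the FLASH threshold quoted in the text

print(f"conventional: {dose_gy / conventional_rate:.0f} s")   # roughly minutes-scale
print(f"FLASH:        {dose_gy / flash_rate:.2f} s")          # a fraction of a second
```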
Though the radiobiological mechanisms behind this protective effect remain unclear, several hypotheses have been proposed. A leading theory focuses on oxygen depletion or “hypoxia”.
As tumours grow, they outpace the surrounding blood vessels’ ability to provide oxygen (see “Oxygen depletion” image). By condensing the dose in a very short time, it is thought that FLASH therapy may induce transient hypoxia within normal tissues too, reducing oxygen-dependent DNA damage there, while killing tumour cells at the same rate. Using a similar mechanism, FLASH therapy may also preserve mitochondrial integrity and energy production in normal tissues.
It is still under investigation whether a FLASH effect occurs with carbon ions, but combining the biological benefits of high-LET radiation with those of FLASH could be very promising.
What excites you most about your research in 2025?
2025 has been a very exciting year. We just published a paper in Nature Physics about radioactive ion beams.
I also received an ERC Advanced Grant to study the FLASH effect with neon ions. We plan to go back to the 1970s, when Cornelius Tobias in Berkeley thought of using very heavy ions against radio-resistant tumours, but now using FLASH’s ultrahigh dose rates to reduce its toxicity to healthy tissues. Our group is also working on the simultaneous acceleration of different ions: carbon ions will stop in the tumour, but helium ions will cross the patient, providing an online monitor of the beam’s position during irradiation. The other big news in radiotherapy is vertical irradiation, where we don’t rotate the beam around the patient, but rotate the patient around the beam. This is particularly interesting for heavy-ion therapy, where building a rotating gantry that can irradiate the patient from multiple angles is almost as expensive as the whole accelerator. We are leading the Marie Curie UPLIFT training network on this topic.
Why are heavy ions so compelling?
Close to the Bragg peak, where very heavy ions are very densely ionising, the damage they cause is difficult to repair. You can kill the tumours much better than with protons. But carbon, oxygen and neon run the risk of inducing toxicity in healthy tissues. In Berkeley, more than 400 patients were treated with heavy ions. The results were not very good, and it was realised that these ions can be very toxic for normal tissue. The programme was stopped in 1992, and since then there has been no more heavy-ion therapy in the US, though carbon-ion therapy was established in Japan not long after. Today, most of the 130 particle-therapy centres worldwide use protons, but 17 centres across Asia and Europe offer carbon-ion therapy, with one now under construction at the Mayo Clinic in the US. Carbon is very convenient, because the plateau of the Bragg curve is similar to X-rays, while the peak is much more effective than protons. But still, there is evidence that it’s not heavy enough, that the charge is not high enough to get rid of very radio-resistant hypoxic tumours – tumours where you don’t have enough oxygenation. So that’s why we want to go heavier: neon. If we show that you can manage the toxicity using FLASH, then this is something that can be translated into the clinics.
There seems to be a lot of research into condensing the dose either in space, with microbeams, or in time, with the FLASH effect…
Absolutely.
Why does that spare healthy tissue at the expense of cancer cells?
That is a question I cannot answer. To be honest, nobody knows. We know that it works, but I want to make it very clear that we need more research to translate it completely to the clinic. It is true that if you either fractionate in space or compress in time, normal tissue is much more resistant, while the effect on the tumour is approximately the same, allowing you to increase the dose without harming the patient. The problem is that the data are still controversial.
So you would say that it is not yet scientifically established that the FLASH effect is real?
There is an overwhelming amount of evidence for the strong sparing of normal tissue at specific sites, especially for the skin and for the brain. But, for example, for gastrointestinal tumours the data is very controversial. Some data show no effect, some data show a protective effect, and some data show an increased effectiveness of FLASH. We cannot generalise.
Is it surprising that the effect depends on the tissue?
In medicine this is not so strange. The brain and the gut are completely different. In the gut, you have a lot of cells that are quickly duplicating, while in the brain, you almost have the same number of neurons that you had when you were a teenager – unfortunately, there is not much exchange in the brain.
So, your frontier at GSI is FLASH with neon ions. Would you argue that microbeams are equally promising?
Absolutely, yes, though millibeams more so than microbeams, because microbeams are extremely difficult to translate into the clinic. In the micron region, any kind of movement will jeopardise your spatial fractionation. But if you have millimetre spacing, then this becomes credible and feasible. You can create millibeams using a grid. Instead of having one solid beam, you have several stripes. If you use heavier ions, they don’t scatter very much and remain spatially fractionated. There is mounting evidence that fractionated irradiation of the tumour can elicit an immune response and that these immune cells eventually destroy the tumour. Research is still ongoing to understand whether it’s better to irradiate with a spatial fractionation of 1 millimetre or to irradiate only the centre of the tumour, allowing the immune cells to migrate and destroy the tumour.
What’s the biology of the body’s immune response to a tumour?
To become a tumour, a cell has to fool the immune system, otherwise our immune system will destroy it. So, we are desperately trying to find a way to teach the immune system to say: “look, this is not a friend – you have to kill it, you have to destroy it.” This is immunotherapy, the subject of the Nobel Prize in medicine in 2018 and also related to the 2025 Nobel Prize in medicine on regulation of the immune system. But these drugs don’t work for every tumour. Radiotherapy is very useful in this sense, because you kill a lot of cells, and when the immune system sees a lot of dead cells, it activates. A combination of immunotherapy and radiotherapy is now being used more and more in clinical trials.
You also mentioned radioactive ion beams and the simultaneous acceleration of carbon and helium ions. Why are these approaches advantageous?
The two big problems with particle therapy are cost and range uncertainty. Having energy deposition concentrated at the Bragg peak is very nice, but if it’s not in the right position, it can do a lot of damage. Precision is therefore much more important in particle therapy than in conventional radiotherapy, as X-rays don’t have a Bragg peak – even if the patient moves a little bit, or if there is an anatomical change, it doesn’t matter. That’s why many centres prefer X-rays. To change that, we are trying to create ways to see the beam while we irradiate. Radioactive ions decay while they deposit energy in the tumour, allowing you to see the beam using PET. With carbon and helium, you don’t see the carbon beam, but you see the helium beam. These are both ways to visualise the beam during irradiation.
How significantly does radiation therapy improve human well-being in the world today?
When I started to work in radiation therapy at Berkeley, many people were telling me: “Why do you waste your time in radiation therapy? In 10 years everything will be solved.” At that time, the trend was gene therapy. Other trends have come and gone, and after 35 years in this field, radiation therapy is still a very important tool in a multidisciplinary strategy for killing tumours. More than 50% of cancer patients need radiotherapy, but, even in Europe, it is not available to all patients who need it.
What are the most promising initiatives to increase access to radiotherapy in low- and middle-income countries?
Simply making the accelerators cheaper. The GDP of most countries in Africa, South America and Asia is also steadily increasing, so you can expect that – let’s say – in 20 or 30 years from now, there will be a big demand for advanced medical technologies in these countries, because they will have the money to afford it.
Is there a global shortage of radiation physicists?
Yes, absolutely. This is true not only for particle therapy, which requires a high number of specialists to maintain the machine, but also for conventional X-ray radiotherapy with electron linacs. It’s also true for diagnostics because you need a lot of medical physicists for CT, PET and MRI.
What is your advice to high-energy physicists who have just completed a PhD or a postdoc, and want to enter medical physics?
The next step is a specialisation course. In about four years, you will become a specialised medical physicist and can start to work in the clinics. Many who take that path continue to do research alongside their clinical work, so you don’t have to give up your research career, just reorient it toward medical applications.
How does PTCOG exert leadership over global research and development?
The Particle Therapy Co-Operative Group (PTCOG) is a very interesting association. Every particle-therapy centre is represented in its steering committee. We have two big roles. One is research, so we really promote international research in particle therapy, even with grants. The second is education. For example, Spain currently has 11 proton therapy centres under construction. Each will need maybe 10 physicists. PTCOG is promoting education in particle therapy to train the next generation of radiation-therapy technicians and medical oncologists. It’s a global organisation, representing science worldwide, across national and continental branches.
Do you have a message for our community of accelerator physicists and detector physicists? How can they make their research more interdisciplinary and improve the applications?
Accelerator physicists especially, but also detector physicists, have to learn to speak the language of the non-specialist. Sometimes they are lost in translation. Also, they have to be careful not to oversell what they are doing, because you can create expectations that are not matched by reality. Tabletop laser-driven accelerators are a very interesting research topic, but don’t oversell them as something that can go into the clinics tomorrow, because then you create frustration and disappointment. There is a similar situation with linear accelerators for particle therapy. Since I started to work in this field, people have been saying “Why do we use circular accelerators? We should use linear accelerators.” After 35 years, not a single linear accelerator has been used in the clinics. There must also be a good connection with industry, because eventually clinics buy from industry, not academia.
Are there missed opportunities in the way that fundamental physicists attempt to apply their research and make it practically useful with industry and medicine?
In my opinion, it should work the other way around. Don’t say “this is what I am good at”; ask the clinical environment, “what do you need?” In particle therapy, we want accelerators that are cheaper and with a smaller footprint. So in whatever research you do, you have to prove to me that the footprint is smaller, and the cost lower.
Do forums exist where medical doctors can tell researchers what they need?
PTCOG is definitely the right place for that. We keep medicine, physics and biology together, and it’s one of the meetings with the highest industry participation. All the industries in particle therapy come to PTCOG. So that’s exactly the right forum where people should talk. We expect 1500 people at the next meeting, which will take place in Deauville, France, from 8 to 13 June 2026, shortly after IPAC.
Are accelerator physicists welcome to engage in PTCOG even if they’ve not previously worked on medical applications?
Absolutely. This is something that we are missing. Accelerator physicists mostly go to IPAC but not to PTCOG. They should also come to PTCOG to speak more with medical physicists. I would say that PTCOG is 50% medical physics, 30% medicine and 20% biology. So, there are a lot of medical physicists, but we don’t have enough accelerator physicists and detector physicists. We need more particle and nuclear physicists to come to PTCOG to see what the clinical and biology community want, and whether they can provide something.
Do you have a message for policymakers and funding agencies about how they can help push forward research in radiotherapy?
Unfortunately, radiation therapy and even surgery are wrongly perceived as old technologies. There is not much investment in them, and that is a big problem for us. What we lack is good investment at the level of cooperative programmes that develop particle therapy in a collaborative fashion. At the moment, it’s becoming increasingly difficult. All the money goes into prevention and pharmaceuticals for immunotherapy and targeted therapy, and this is something that we are trying to reverse.
Are large accelerator laboratories well placed to host cooperative research projects?
Both GSI and CERN face the same challenge: their primary mission is nuclear and particle physics. Technological transfer is fine, but they may jeopardise their funding if they stray too far from their primary goal. I believe they should invest more in technological transfer, lobbying their funding agencies to demonstrate that there is a translation of their basic science into something that is useful for public health.
How does your research in particle therapy transfer to astronaut safety?
Particle therapy and space-radiation research have a lot in common. They use the same tools and there are also a lot of overlapping topics, for example radiosensitivity. One patient is more sensitive, one patient is more resistant, and we want to understand what the difference is. The same is true of astronauts – and radiation is probably the main health risk for long-term missions. Space is also a hostile environment in terms of microgravity and isolation, but here we understand the risks, and we have countermeasures. For space radiation, the problem is that we don’t understand the risk very well, because the type of radiation is so exotic. We don’t have that type of radiation on Earth, so we don’t know exactly how big the risk is. Plus, we don’t have effective countermeasures, because the radiation is so energetic that shielding will not be enough to protect the crews effectively. We need more research to reduce the uncertainty on the risk, and most of this research is done in ground-based accelerators, not in space.
Radiation therapy is probably the best interdisciplinary field that you can work in
I understand that you’re even looking into cryogenics…
Hibernation is considered science fiction, but it’s not science fiction at all – it’s something we can recreate in the lab. We call it synthetic torpor. This can be induced in animals that are non-hibernating. Bears and squirrels hibernate; humans and rats don’t, but we can induce it. And when you go into hibernation, you become more radioresistant, providing a possible countermeasure to radiation exposure, especially for long missions. You don’t need much food, you don’t age very much, metabolic processes are slowed down, and you are protected from radiation. That’s for space. This could also be applied to therapy. Imagine you have a patient with multiple metastases and no hope for treatment. If you can induce synthetic torpor, all the tumours will stop, because when you go into a low temperature and hibernation, the tumours don’t grow. This is not the solution, because when you wake the patient up, the tumours will grow again, but what you can do is treat the tumours during hibernation, while healthy tissue is more radiation resistant. The number of research groups working on this is low, so we’re quite far from considering synthetic torpor for spaceflight or clinical trials for cancer treatment. First of all, we have to see how long we can keep an animal in synthetic torpor. Second, we should translate the work to bigger animals like pigs or even non-human primates.
In the best-case scenario, what can particle therapy look like in 10 years’ time?
Ideally, we should probably at least double the number of particle-therapy centres that are now available, and expand into new regions. We finally have a particle-therapy centre in Argentina, which is the first one in South America. I would like to see many more in South America and in Africa. I would also like to see more centres that try to tackle tumours for which there is no treatment option, like glioblastoma or pancreatic cancer, where the mortality is the same as the incidence. If we can find ways to treat such cancers with heavy ions and give hope to these patients, this would be really useful.
Is there a final thought that you’d like to leave with readers?
Radiation therapy is probably the best interdisciplinary field that you can work in. It’s useful for society and it’s intellectually stimulating. I really hope that big centres like CERN and GSI commit more and more to the societal benefits of basic research. We need it now more than ever. We are living in a difficult global situation, and we have to prove that when we invest money in basic research, this is very well invested money. I’m very happy to be a scientist, because in science, there are no barriers, there is no border. Science is really, truly international. I’m an advocate of saying scientific collaboration should never stop. It didn’t even stop during the Cold War. At that time, the cooperation between East and West at the scientist level helped to reduce the risk of nuclear weapons. We should continue this. We don’t have to think that what is happening in the world should stop international cooperation in science: it eventually brings peace.
Herwig Schopper was born on 28 February 1924 in the German-speaking town of Landskron (today, Lanškroun) in the then young country of Czechoslovakia. He enjoyed an idyllic childhood, holidaying at his grandparents’ hotel in Abbazia (today, Opatija) on what is now the Croatian Adriatic coast. It was there that his interest in science was awakened through listening in on conversations between physicists from Budapest and Belgrade. In Landskron, he developed an interest in music and sport, learning to play both piano and double bass, and skiing in the nearby mountains. He also learned to speak English, not merely to read Shakespeare as was the norm at the time, but to be able to converse, thanks to a Jewish teacher who had previously spent time in England. This skill was to prove transformational later in life.
The idyll began to crack in 1938 when the Sudetenland was annexed by Germany. War broke out the following year, but the immediate impact on Herwig was limited. He remained in Landskron until the end of his high-school education, graduating as a German citizen – and with no choice but to enlist. Joining the Luftwaffe signals corps, because he thought that would help him develop his knowledge of physics, he served for most of the war on the Eastern Front, ensuring that communication lines remained open between military headquarters and the troops on the front lines. As the war drew to a close in March 1945, he was transferred west, just in time to see the Western Allies cross the Rhine at Remagen. Recalled to Berlin and given orders to head further west, Herwig instructed his driver to first make a short detour via Potsdam. It was a sign of the kind of person Herwig was that, amidst the chaos of the fall of Berlin, he wanted to see Schloss Sanssouci, Frederick the Great’s temple to the Enlightenment, while he had the chance.
By the time Herwig arrived in Schleswig–Holstein, the war was over, and he found himself a prisoner of the British. He later recalled, with palpable relief, that he had managed to negotiate the war without having to shoot at anyone. On discovering that Herwig spoke English, the British military administration engaged him as a translator. This came as a great consolation to Herwig since many of his compatriots were dispatched to the mines to extract the coal that would be used to reconstruct a shattered Germany. Herwig rapidly struck up a friendship with the English captain he was assigned to. This in turn eased his passage to the University of Hamburg, where he began his research career studying optics, and later enabled him to take the first of his scientific sabbaticals when travel restrictions on German academics were still in place (see “Academic overture” image).
In 1951, Herwig left for a year in Stockholm, where he worked with Lise Meitner on beta decay. He described this time as his first step up in energy from the eV-energies of visible light to the keV-energies of beta-decay electrons. A later sabbatical, starting in 1956, would see him in Cambridge, where he worked under Meitner’s nephew, Otto Frisch, in the Cavendish laboratory. As Austrian Jews, both Meitner and Frisch had sought exile before the war. By this time, Frisch had become director of the Cavendish’s nuclear physics department and a fellow of the Royal Society.
Initial interactions
While at Cambridge, Herwig took his first steps in the emerging field of particle physics, and became one of the first to publish an experimental verification of Lee and Yang’s proposal that parity would be violated in weak interactions. His single-author paper was published soon after that of Chien-Shiung Wu and her team, leading to a lifelong friendship between the two (see “Virtuosi” image).
Following Wu’s experimental verification of parity violation, cited by Herwig in his paper, Lee and Yang received the Nobel Prize. Wu was denied the honour, ostensibly on the basis that she was one of a team and the prize can only be shared three ways. It remains in the realm of speculation whether Herwig would have shared the prize had his paper been the first to appear.
A third sabbatical, arranged by Willibald Jentschke, who wanted Herwig to develop a user group for the newly established DESY laboratory, saw the Schopper family move to Ithaca, New York in 1960. At Cornell, Herwig learned the ropes of electron synchrotrons from Bob Wilson. He also learned a valuable lesson in the hands-on approach to leadership. Arriving in Ithaca on a Saturday, Herwig decided to look around the deserted lab. He found one person there, tidying up. It turned out not to be the janitor, but the lab’s founder and director, Wilson himself. For Herwig, Cornell represented another big jump in energy, cementing Schopper as an experimental particle physicist.
Cornell represented another big jump in energy, cementing Schopper as an experimental particle physicist
Herwig’s three sabbaticals gave him the skills he would later rely on in hardware development and physics analysis, but it was back in Germany that he honed his management skills and established himself as a skilled science administrator.
At the beginning of his career in Hamburg, Herwig worked under Rudolf Fleischmann, and when Fleischmann was offered a chair at Erlangen, Herwig followed. Among the research he carried out at Erlangen was an experiment to measure the helicity of gamma rays, a technique that he’d later deploy in Cambridge to measure parity violation.
It was not long before Herwig was offered a chair himself, and in 1958, at the tender age of 34, he parted from his mentor to move to Mainz. In his brief tenure there, he set wheels in motion that would lead to the later establishment of the Mainz Microtron laboratory, today known as MAMI. By this time, however, Herwig was much in demand, and he soon moved to Karlsruhe, taking up a joint position between the university and the Kernforschungszentrum, KfK. His plan was to merge the two under a single management structure as the Karlsruhe Institute for Experimental Nuclear Physics. In doing so, he laid the seeds for today’s Karlsruhe Institute of Technology, KIT.
Pioneering research
At Karlsruhe, Herwig established a user group for DESY, as Jentschke had hoped, and another at CERN. He also initiated a pioneering research programme into superconducting RF and had his first personal contacts with CERN, spending a year there in 1964. In typical Herwig fashion, he pursued his own agenda, developing a device he called a sampling total absorption counter, STAC, to measure neutron energies. At the time, few saw the need for such a device, but this form of calorimetry is now an indispensable part of any experimental particle physicist’s armoury.
In 1970, Herwig again took leave of absence from Karlsruhe to go to CERN. He’d been offered the position of head of the laboratory’s Nuclear Physics Division, but his stay was to be short lived (see “Prélude” image). The following year, Jentschke took up the position of Director-General of CERN alongside John Adams. Jentschke was to run the original CERN laboratory, Lab I, while Adams ran the new CERN Lab II, tasked with building the SPS. This left a vacancy at Germany’s national laboratory, and the job was offered to Herwig. It was too good an offer to refuse.
As chair of the DESY directorate, Herwig witnessed from afar the discovery of both charm and bottom quarks in the US. Although missing out on the discoveries, DESY’s machines were perfect laboratories to study the spectroscopy of these new quark families, and DESY went on to provide definitive measurements. Herwig also oversaw DESY’s development in synchrotron light science, repurposing the DORIS accelerator as a light source when its physics career was complete and it was succeeded by PETRA.
The ambition of the PETRA project put DESY firmly on course to becoming an international laboratory, setting the scene for the later HERA model. PETRA experiments went on to discover the gluon in 1979.
The following year, Herwig was named as CERN’s next Director-General, taking up office on 1 January 1981. By this time, the CERN Council had decided to call time on its experiment with two parallel laboratories, leaving Herwig with the task of uniting Lab I and Lab II. The Council was also considering plans to build the world’s most powerful accelerator, the Large Electron–Positron collider, LEP.
It fell to Herwig both to implement a new management structure for CERN and to see the LEP proposal through to approval (see “Architects of LEP” image). Unpopular decisions were inevitable, making the early years of Herwig’s mandate somewhat difficult. In order to get LEP approved, he had to make sacrifices. As a result, the Intersecting Storage Rings (ISR), the world’s only hadron collider, collided its final beams in 1984 and cuts had to be made across the research programme. Herwig was also confronted with a period of austerity in science funding, and found himself obliged to commit CERN to constant funding in real terms throughout the construction of LEP – and, as it turned out, in perpetuity.
It fell to Herwig both to implement a new management structure for CERN and to see the LEP proposal through to approval
Herwig’s battles were not only with the lab’s governing body; he also went against the opinions of some of his scientific colleagues concerning the size of the new accelerator. True to form, Herwig stuck with his instinct, insisting that the LEP tunnel should be 27 km around, rather than the more modest 22 km that would have satisfied the immediate research goals while avoiding the difficult geology beneath the Jura mountains. Herwig, however, was looking further ahead – to the hadron collider that would follow LEP. His obstinacy was fully vindicated with the discovery of the Higgs boson in 2012, confirming the Brout–Englert–Higgs mechanism, which had been proposed almost 50 years earlier. This discovery earned the Nobel Prize for Peter Higgs and François Englert in 2013 (see “Towards LEP and the LHC” image).
The CERN blueprint
Difficult though some of his decisions may have been, there is no doubt that Herwig’s 1981 to 1988 mandate established the blueprint for CERN to this day. The end of operations of the ISR may have been unpopular, and we’ll never know what it might have gone on to achieve, but the world’s second hadron collider at the SPS delivered CERN’s first Nobel prize during Herwig’s mandate, awarded to Carlo Rubbia and Simon van der Meer in 1984 for the discovery of the W and Z bosons.
Herwig turned 65 two months after stepping down as CERN Director-General, but retirement was never on his mind. In the years that followed, he carried out numerous roles for UNESCO, applying his diplomacy and foresight to new areas of science. UNESCO was in many ways a natural step for Herwig, whose diplomatic skills had been honed by the steady stream of high-profile visitors to CERN during his mandate as Director-General. At one point, he engineered a meeting at UNESCO between Jim Cronin, who was lobbying for the establishment of a cosmic-ray observatory in Argentina, and the country’s president, Carlos Menem. The following day, Menem announced the start of construction of the Pierre Auger Observatory. On another occasion, Herwig was tasked with developing the Soviet gift to Cuba of a small particle accelerator into a working laboratory. That initiative would ultimately come to nothing, but it helped Herwig prepare the groundwork for perhaps his greatest post-retirement achievement: SESAME, a light-source laboratory in Jordan that operates as an intergovernmental organisation following the CERN model (see “Science diplomacy” image). Mastering the political challenge of establishing an organisation that brings together countries from across the Middle East – including long-standing rivals – required a skill set that few possess.
Although the roots of SESAME can be traced to a much earlier date, by the end of the 20th century, when the idea was sufficiently mature for an interim organisation to be established, Herwig was the natural candidate to lead the new organisation through its formative years. His experience of running international science coupled with his post-retirement roles at UNESCO made him the obvious choice to steer SESAME from idea to reality. It was Herwig who modelled SESAME’s governing document on the CERN convention, and it was Herwig who secured the site in Jordan for the laboratory. Today, SESAME is producing world-class research – a shining example of what can be achieved when people set aside their differences and focus on what they have in common.
Establishing an organisation that brings together countries from across the Middle East required a skill set few possess
Herwig never stopped working for what he believed in. When CERN’s current Director-General convened a meeting with past Directors-General in 2024, along with the president of the CERN Council, Herwig was present. When initiatives were launched to establish an international research centre in the Balkans, Herwig stepped up to the task. He never lost his sense of what is right, and he never lost his mischievous sense of humour. Following an interview at his house in 2024 for the film The Peace Particle, the interviewer asked whether he still played the piano. Herwig stood up, walked to the piano and started to play a very simple arrangement of Christian Sinding’s “Rustle of Spring”. Just as curious glances started to be exchanged, he transitioned, with a twinkle in his eye, to a beautifully nuanced rendition of Liszt’s “Liebestraum No. 3”.
Herwig Schopper was a rare combination of genius, polymath, humanitarian and gentleman. Always humble, he could make decisions with nerves of steel when required. His legacy spans decades and disciplines, and has shaped the field of particle physics in many ways. With his passing, the world has lost a truly remarkable individual. He will be sorely missed.
New results in fundamental physics can be a long time coming. Experimental discoveries of elementary particles have often occurred only decades after their prediction by theory.
Still, the discovery of the fundamental particles of the Standard Model has been speedy in comparison to another longstanding quest in natural philosophy: chrysopoeia, the medieval alchemists’ dream of transforming the “base metal” lead into the precious metal gold. This may have been motivated by the observation that the dull grey, relatively abundant metal lead is of similar density to gold, which has been coveted for its beautiful colour and rarity for millennia.
The quest goes back at least to the mythical, or mystical, notion of the philosopher’s stone and Zosimos of Panopolis around 300 CE. Its evolution, in various cultures, through medieval times and up to the 19th century, is a fascinating thread in the emergence of modern empirical science from earlier ways of thinking. Some of the leaders of this transition, such as Isaac Newton, also practised alchemy. While the alchemists pioneered many of the techniques of modern chemistry, it was only much later that it became clear that lead and gold are distinct chemical elements and that chemical methods are powerless to transmute one into the other.
With the dawn of nuclear physics in the 20th century, it was discovered that elements could transform into others through nuclear reactions, either naturally by radioactive decay or in the laboratory. In 1940, gold was produced at the Harvard Cyclotron by bombarding a mercury target with fast neutrons. Some 40 years ago, tiny amounts of gold were produced at the Bevalac in Berkeley in nuclear reactions between beams of carbon and neon and a bismuth target. Very recently, gold isotopes were produced at the ISOLDE facility at CERN by bombarding a uranium target with proton beams (see “Historic gold” images).
Now, tucked away discreetly in the conclusions of a paper recently published by the ALICE collaboration, one can find the observation, originating from Igor Pshenichnov, Uliana Dmitrieva and Chiara Oppedisano, that “the transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC.”
ALICE has finally measured the transmutation of lead into gold, not via the crucibles and alembics of the alchemists, nor even by the established techniques of nuclear bombardment used in the experiments mentioned above, but in a novel and interesting way that has become possible in “near-miss” interactions of lead nuclei at the LHC.
At the LHC, lead has been transformed into gold by light.
Since the first announcement, this story has attracted considerable attention in the media. Here I would like to put this assertion in scientific context and indicate its relevance in testing our understanding of processes that can limit the performance of the LHC and future colliders such as the FCC.
Electromagnetic pancakes
Any charged particle at rest is surrounded by lines of electric field radiating outwards in all directions. These fields are particularly strong close to a lead nucleus because it contains 82 protons, each with one elementary charge. In the LHC, the lead nuclei travel at 99.999994% of the speed of light, squeezing the field lines into a thin pancake transverse to the direction of motion in the laboratory frame of reference. This compression is so strong that, in the vicinity of the nucleus, we find the strongest magnetic and electric fields known in the universe: trillions of times stronger than the fields of even the prodigiously powerful superconducting magnets of the LHC, and orders of magnitude greater than the Schwinger limit, where the vacuum polarises, or the magnetic fields found in rare, rapidly spinning neutron stars called magnetars. Of course, these fields extend only over a very short time as one nucleus passes by the other. Quantum mechanics, via a famous insight of Fermi, Weizsäcker and Williams, tells us that this electromagnetic flash is equivalent to a pulse of quasi-real photons whose intensity and energy are greatly boosted by the large charge and the relativistic compression.
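To get a feel for the size of this effect, the following minimal sketch computes the Lorentz factor implied by the beam speed quoted above; it uses nothing beyond standard special relativity, and the printed value (roughly 2900) is simply the factor by which the field “pancake” is flattened.

```python
import math

# Beam speed quoted above, as a fraction of the speed of light
beta = 0.99999994

# Lorentz factor: gamma = 1 / sqrt(1 - beta^2).
# The Coulomb field of the nucleus is contracted longitudinally by this factor.
gamma = 1.0 / math.sqrt(1.0 - beta**2)

print(f"Lorentz factor: gamma ~ {gamma:.0f}")               # roughly 2900
print(f"Field 'pancake' is ~{gamma:.0f}x thinner than at rest")
```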
When two beams of nuclei are brought into collision in the LHC, some hadronic interactions occur. In the unimaginable temperatures and densities of this ultimate crucible we create droplets of the quark–gluon plasma, the main subject of study of the heavy-ion programme. However, when nuclei “just miss” each other, the interactions of these electromagnetic fields amount to photon–photon and photon–nucleus collisions. Some of the processes occurring in these so-called ultra-peripheral collisions (UPCs) are so strong that they would limit the performance of the collider, were it not for special measures implemented in the last 10 years.
The ALICE paper is one among many exploring the rich field of fundamental physics studies opened up by UPCs at the LHC (CERN Courier January/February 2025 p31). Among them are electromagnetic dissociation processes where a photon interacting with a nucleus can excite oscillations of its internal structure and result in the ejection of small numbers of neutrons and protons that are detected by ALICE’s zero degree calorimeters (ZDCs). The ALICE experiment is unique in having calorimeters to detect spectator protons as well as neutrons (see “Spotting spectators” figure). The residual nuclei are not detected although they contribute to the signals measured by the beam-loss monitor system of the LHC.
Each 208Pb nucleus in the LHC beams contains 82 protons and 208–82 = 126 neutrons. To create gold, a nucleus with a charge of 79, three protons must be removed, together with a variable number of neutrons.
Alchemy in ALICE
Gold production is less frequent than the creation of the elements thallium (single-proton emission) or mercury (two-proton emission), but the results of the ALICE paper show that each of the two colliding lead-ion beams contributes a cross section of 6.8 ± 2.2 barns to gold production, implying that the LHC now produces gold at a maximum rate of about 89 kHz from lead–lead collisions at the ALICE collision point, or 280 kHz from all the LHC experiments combined. During Run 2 of the LHC (2015–2018), about 86 billion gold nuclei were created at all four LHC experiments, but in terms of mass this was only a tiny 2.9 × 10–11 g of gold. Almost twice as much has already been produced in Run 3 (since 2023).
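As a rough sanity check, the sketch below reproduces the quoted rate and mass from the cross section and the Run 2 nuclei count given above. The instantaneous luminosity is an assumed, illustrative value of about 6.5 × 1027 cm–2 s–1, not a number taken from the paper.

```python
# Back-of-the-envelope check of the gold-production numbers quoted above.
# The luminosity is an assumed, illustrative value; the cross section and the
# Run 2 nuclei count are the figures given in the text.

BARN = 1e-24                        # cm^2

lumi = 6.5e27                       # assumed Pb-Pb luminosity, cm^-2 s^-1
sigma_per_beam = 6.8 * BARN         # gold-production cross section per beam, cm^2

# Both beams contribute at the ALICE collision point
rate = 2 * sigma_per_beam * lumi
print(f"Gold-production rate at ALICE ~ {rate / 1e3:.0f} kHz")       # ~90 kHz

# Mass of the ~86 billion gold nuclei created during Run 2
n_nuclei = 86e9
mass_u = 197                        # take 197 u as a representative mass number
u_in_g = 1.66054e-24                # atomic mass unit in grams
print(f"Run 2 gold mass ~ {n_nuclei * mass_u * u_in_g:.1e} g")       # ~2.8e-11 g
```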
The transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC
Strikingly, this gold-production rate is somewhat larger than the rate of hadronic nuclear collisions, which occur at about 50 kHz for a total cross section of 7.67 ± 0.25 barns.
Different isotopes of gold are created according to the number of neutrons that are emitted at the same time as the three protons. To create 197Au, the only stable isotope and the main component of natural gold, a further eight neutrons must be removed – a very unlikely process. Most of the gold produced is in the form of unstable isotopes with lifetimes of the order of a minute.
Although the ZDC signals confirm the proton and neutron emission, the transformed nuclei are not themselves detected by ALICE and their fate is not discussed in the paper. These interaction products nevertheless propagate hundreds of metres through the beampipe in several secondary beams whose trajectories can be calculated, as seen in the “Ultraperipheral products” figure.
The ordinate shows horizontal displacement from the central path of the outgoing beam. This coordinate system is commonly used in accelerator physics as it suppresses the bending of the central trajectory – downwards in the figure – and its separation into the beam pipes of the LHC arcs.
The “5σ” envelope of the intense main beam of 208Pb nuclei that did not collide is shown in blue. Neutrons from electromagnetic dissociation and other processes are plotted in magenta. They begin with a certain divergence and then travel down the LHC beam pipe in straight lines, forming a cone, until they are detected by the ALICE ZDC, some 114 m away from the collision, after the place where the beam pipe splits in two. Because of the coordinate system, the neutron cone appears to bend sharply at the first separation dipole magnet.
Protons are shown in green. As they only have 40% of the magnetic rigidity of the main beam, they bend quickly away from the central trajectory in the first separation magnet, before being detected by a different part of the ZDC on the other side of the beam pipe.
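That 40% figure follows from simple counting: a spectator proton carries roughly the same momentum per nucleon as the parent nucleus, so its magnetic rigidity (momentum divided by charge) relative to the full 208Pb beam is just Z/A of lead. A minimal sketch, under that assumption:

```python
# Magnetic rigidity (p/q) of a spectator proton relative to the 208Pb beam,
# assuming it carries the same momentum per nucleon as the parent nucleus.
Z_Pb, A_Pb = 82, 208

# rigidity_proton / rigidity_Pb = (p_nucleon / 1) / (A_Pb * p_nucleon / Z_Pb)
ratio = Z_Pb / A_Pb
print(f"Proton rigidity relative to the beam ~ {ratio:.0%}")   # ~39%, i.e. the ~40% quoted
```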
Photon–photon interactions in UPCs copiously produce electron–positron pairs. In a small fraction of them, corresponding nevertheless to a large cross-section of about 280 barns, the electron is created in a bound state of one of the 208Pb nuclei, generating a secondary beam of 208Pb81+ single-electron ions. The beam from this so-called bound-free pair production (BFPP), shown in red, carries a power of about 150 W – enough to quench the superconducting coils of the LHC magnets, causing them to transition from the superconducting to the normal resistive state. Such quenches can seriously disrupt accelerator operation, as the stored magnetic energy is rapidly released as heat within the affected magnet.
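The quoted beam power can be estimated in the same spirit. The sketch below uses the ~280 barn cross section given above, together with two illustrative assumptions not taken from the paper: the same luminosity as in the earlier sketch and a beam energy of about 2.68 TeV per nucleon, a representative Run 3 value.

```python
# Rough estimate of the power carried by the bound-free pair-production (BFPP)
# beam of 208Pb81+ ions. Luminosity and beam energy are illustrative assumptions;
# the cross section is the value quoted in the text.

BARN = 1e-24                        # cm^2
EV = 1.602e-19                      # joules per electronvolt

lumi = 6.5e27                       # assumed Pb-Pb luminosity, cm^-2 s^-1
sigma_bfpp = 280 * BARN             # BFPP cross section, cm^2
e_nucleon = 2.68e12 * EV            # ~2.68 TeV per nucleon, in joules
e_ion = 208 * e_nucleon             # total energy of one 208Pb ion

rate = sigma_bfpp * lumi            # BFPP ions produced per second
power = rate * e_ion                # power carried by the secondary beam
print(f"BFPP beam power ~ {power:.0f} W")   # ~160 W, of the order of the ~150 W quoted
```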
To prevent this, new “TCLD” collimators were installed on either side of ALICE during the second long shutdown of the LHC. Together with a variable-amplitude bump in the beam orbit, which pulls the BFPP beam away from the first impact point so that it can be safely absorbed on the TCLD, this allowed the luminosity to be increased to more than six times the original LHC design, just in time to exploit the full capacity of the upgraded ALICE detector in Run 3.
Besides lead, the LHC has recently collided beams of 16O and 20Ne (see “First oxygen and neon collisions at the LHC”), and nuclear transmutation has manifested itself in another way. In hadronic or electromagnetic events where equal numbers of protons and neutrons are emitted, the outgoing nucleus has almost the same charge-to-mass ratio, since nuclear binding energies are very small at the top of the periodic table. It may then continue to circulate with the original beam, resulting in a small contamination that increases during the several hours of an LHC fill. Hybrid collisions can then occur, for example including a 14N nucleus formed by the ejection of a proton and a neutron from 16O. Fortunately, the momentum spread introduced by the interactions puts many of these nuclei outside the acceptance of the radio-frequency cavities that keep the beams bunched as they circulate around the ring, so the effect is smaller than had first been expected.
The most powerful beam from an electromagnetic-dissociation process is 207Pb from single neutron emission, plotted in green. It has comparable intensity to 208Pb81+ but propagates through the LHC arc to the collimation system at Point 3.
Similar electromagnetic-dissociation processes occur elsewhere, notably in beam interactions with the LHC collimation system. The recent ALICE paper, together with earlier ones on neutron emissions in UPCs, helps to test our understanding of the nuclear interactions that are an essential ingredient of complex beam-physics simulations. These are used to understand and control beam losses that might otherwise provoke frequent magnet quenches or beam dumps. At the LHC, a deep symbiosis has emerged between the fundamental nuclear physics studied by the experiments and the accelerator physics limiting its performance as a heavy-ion collider – or even as a light-ion collider (see “Light-ion collider” panel).
The figure also shows beams of the three heaviest gold isotopes, plotted in gold. 204Au has an impact point in a dipole magnet but is far too weak to quench it. 203Au follows almost the same trajectory as the BFPP beam. 202Au propagates through the arc to Point 3. The extremely weak flux of 197Au, the only stable isotope of gold, is also shown.
Worth its weight in gold
Prospecting for gold at the LHC looks even more futile when we consider that the gold nuclei emerge from the collision point with very high energies. They hit the LHC beam pipe or collimators at various points downstream where they immediately fragment in hadronic showers of single protons, neutrons and other particles. The gold exists for tens of milliseconds at most.
And finally, the isotopically pure lead used in CERN’s ion source costs more by weight than gold, so realising the alchemists’ dream at the LHC was a poor business plan from the outset.
The moral of this story, perhaps, is that among modern-day natural philosophers, LHC physicists take issue with the designation of lead as a “base” metal. We find, on the contrary, that 208Pb, the heaviest stable isotope among all the elements, is worth far more than its weight in gold for the riches of the physics discoveries that it has led us to.