Ten windows on the future of particle physics

A major step toward shaping the future of European particle physics was reached on 2 October, with the release of the Physics Briefing Book of the 2026 update of the European Strategy for Particle Physics. Despite its 250 pages, it is a concise summary of the vast amount of work contained in the 266 written submissions to the strategy process and the deliberations of the Open Symposium in Venice in June (CERN Courier September/October 2025 p24).

The briefing book compiled by the Physics Preparatory Group is an impressive distillation of our current knowledge of particle physics, and a preview of the exciting prospects offered by future programmes. It provides the scientific basis for defining Europe’s long-term particle-physics priorities and determining the flagship collider that will best advance the field. To this end, it presents comparisons of the physics reach of the different candidate machines, which often have different strengths in probing new physics beyond the Standard Model (SM).

Condensing all this into a few sentences is difficult, though two messages are clear: if the next collider at CERN is an electron–positron collider, the exploration of new physics will proceed mainly through high-precision measurements; and the highest physics reach into the structure of physics beyond the SM via indirect searches will be provided by the combined exploration of the Higgs, electroweak and flavour domains.

Following a visionary outlook for the field from theory, the briefing book divides its exploration of the future of particle physics into seven sectors of fundamental physics and three technology pillars that underpin them.

1. Higgs and electroweak physics

In the new era that has dawned with the discovery of the Higgs boson, numerous fundamental questions remain, including whether the Higgs boson is an elementary scalar, part of an extended scalar sector, or even a portal to entirely new phenomena. The briefing book highlights how precision studies of the Higgs boson, the W and Z bosons, and the top quark will probe the SM to unprecedented accuracy, looking for indirect signs of new physics.

Higgs self-coupling

Addressing these questions requires highly precise measurements of the Higgs boson’s couplings, self-interaction and quantum corrections. While the High-Luminosity LHC (HL-LHC) will continue to improve several Higgs and electroweak measurements, the next qualitative leap in precision will be provided by future electron–positron colliders, such as the FCC-ee, the Linear Collider Facility (LCF), CLIC or LEP3. And while these would provide very important information, it would fall upon the shoulders of an energy-frontier machine like the FCC-hh or a muon collider to access potential heavy states. Using the absolute HZZ coupling from the FCC-ee, such machines would measure the single-Higgs-boson couplings with a precision better than 1%, and the Higgs self-coupling at the level of a few per cent (see “Higgs self-coupling” figure).

This anticipated leap in experimental precision will necessitate major advances in theory, simulation and detector technology. In the coming decades, electroweak physics and the Higgs boson in particular will remain a cornerstone of particle physics, linking the precision and energy frontiers in the search for deeper laws of nature.

2. Strong interaction physics

Precise knowledge of the strong interaction will be essential for understanding visible matter, exploring the SM with precision, and interpreting future discoveries at the energy frontier. Building upon advanced studies of QCD at the HL-LHC, future high-luminosity electron–positron colliders such as FCC-ee and LEP3 would, like LHeC, enable per-mille precision on the strong coupling constant, and a greatly improved understanding of the transition between the perturbative and non-perturbative regimes of QCD. The LHeC would bring increased precision on parton-distribution functions that would be very useful for many physics measurements at the FCC-hh. FCC-hh would itself open up a major new frontier for strong-interaction studies.

A deep understanding of the strong interaction also necessitates the study of strongly interacting matter under extreme conditions with heavy-ion collisions. ALICE and the other experiments at the LHC will continue to illuminate this physics, revealing insights into the early universe and the interiors of neutron stars.

3. Flavour physics

With high-precision measurements of quark and lepton processes, flavour studies test the SM at energy scales far above those directly accessible to colliders, thanks to their sensitivity to the effects of virtual particles in quantum loops. Small deviations from theoretical predictions could signal new interactions or particles influencing rare processes or CP-violating effects, making flavour physics one of the most sensitive paths toward discovering physics beyond the SM.

The book highlights how precision studies of the Higgs boson, the W and Z bosons, and the top quark will probe the SM to unprecedented accuracy

Global efforts are today led by the LHCb, ATLAS and CMS experiments at the LHC and by the Belle II experiment at SuperKEKB. These experiments have complementary strengths: huge data samples from proton–proton collisions at CERN and a clean environment in electron–positron collisions at KEK. Combining the two will provide powerful tests of lepton-flavour universality, searches for exotic decays and refinements in the understanding of hadronic effects.

The next major step in precision flavour physics would require “tera-Z” samples of a trillion Z bosons from a high-luminosity electron–positron collider such as the FCC-ee, alongside a spectrum of focused experimental initiatives at a more modest scale.

4. Neutrino physics

Neutrino physics addresses open fundamental questions related to neutrino masses and their deep connections to the matter–antimatter asymmetry in the universe and its cosmic evolution. Upcoming experiments – long-baseline accelerator-neutrino experiments (DUNE and Hyper-Kamiokande), reactor experiments such as JUNO (see “JUNO takes aim at neutrino-mass hierarchy”) and astroparticle observatories such as KM3NeT and IceCube (see also CERN Courier May/June 2025 p23) – will likely resolve the neutrino mass hierarchy and could discover leptonic CP violation.

In parallel, the hunt for neutrinoless-double-beta decay continues. A signal would indicate that neutrinos are Majorana fermions, which would be indisputable evidence for new physics! Such efforts extend the reach of particle physics beyond accelerators and deepen connections between disciplines. Efforts to determine the absolute mass of neutrinos are also very important.

The chapter highlights the growing synergy between neutrino experiments and collider, astrophysical and cosmological studies, as well as the pivotal role of theory developments. Precision measurements of neutrino interactions provide crucial support for oscillation measurements, and for nuclear and astroparticle physics. New facilities at accelerators explore neutrino scattering at higher energies, while advances in detector technologies have enabled the measurement of coherent neutrino scattering, opening new opportunities for new-physics searches. Neutrino physics is a truly global enterprise, with strong European participation and a pivotal role for the CERN Neutrino Platform.

5. Cosmic messengers

Astroparticle physics and cosmology increasingly provide new and complementary information to laboratory particle-physics experiments in addressing fundamental questions about the universe. A rich set of recent achievements in these fields includes high-precision measurements of cosmological perturbations in the cosmic microwave background (CMB) and in galaxy surveys, a first measurement of an extragalactic neutrino flux, accurate antimatter fluxes and the discovery of gravitational waves (GWs).

Leveraging information from these experiments has given rise to the field of multi-messenger astronomy. The next generation of instruments, from neutrino telescopes to ground- and space-based CMB and GW observatories, promises exciting results with important clues for particle physics.

6. Beyond the Standard Model

The landscape for physics beyond the SM is vast, calling for an extended exploration effort with exciting prospects for discovery. It encompasses new scalar or gauge sectors, supersymmetry, compositeness, extra dimensions and dark-sector extensions that connect visible and invisible matter.

Many of these models predict new particles or deviations from SM couplings that would be accessible to next-generation accelerators. The briefing book shows that future electron–positron colliders such as FCC-ee, CLIC, LCF and LEP3 have sensitivity to the indirect effects of new physics through precision Higgs, electroweak and flavour measurements. With their per-mille precision measurements, electron–positron colliders will be essential tools for revealing the virtual effects of heavy new physics beyond the direct reach of colliders. In direct searches, CLIC would extend the energy frontier to 1.5 TeV, whereas FCC-hh would extend it to tens of TeV, potentially enabling the direct observation of new physics such as new gauge bosons, supersymmetric particles and heavy scalar partners. A muon collider would combine precision and energy reach, offering a compact high-energy platform for direct and indirect discovery.

This chapter of the briefing book underscores the complementarity between collider and non-collider experiments. Low-energy precision experiments, searches for electric dipole moments, rare decays and axion or dark-photon experiments probe new interactions at extremely small couplings, while astrophysical and cosmological observations constrain new physics over sprawling mass scales.

7. Dark matter and the dark sector

The nature of dark matter, and the dark sector more generally, remains one of the deepest mysteries in modern physics. A broad range of masses and interaction strengths must be explored, encompassing numerous potential dark-matter phenomenologies, from ultralight axions and hidden photons to weakly interacting massive particles, sterile neutrinos and heavy composite states. The theory space of the dark sector is just as crowded, with models involving new forces or “portals” that link visible and invisible matter.

As no single experimental technique can cover all possibilities, progress will rely on exploiting the complementarity between collider experiments, direct and indirect searches for dark matter, and cosmological observations. Diversity is the key aspect of this developing experimental programme!

8. Accelerator science and technology

The briefing book considers the potential paths to higher energies and luminosities offered by each proposal for CERN’s next flagship project: the two circular colliders FCC-ee and FCC-hh, the two linear colliders LCF and CLIC, and a muon collider; LEP3 and LHeC are also considered as colliders that could potentially offer a physics programme to bridge the time between the HL-LHC and the next high-energy flagship collider. The technical readiness, cost and timeline of each collider are summarised, alongside their environmental impact and energy efficiency (see “Energy efficiency” figure).

Energy efficiency

The two main development fronts in this technology pillar are high-field magnets and efficient radio-frequency (RF) cavities. High-field superconducting magnets are essential for the FCC-hh, while high-temperature superconducting magnet technology, which presents unique opportunities and challenges, might be relevant to the FCC-hh as a second-stage machine after the FCC-ee. Efficient RF systems are required by all accelerators (CERN Courier May/June 2025 p30). Research and development (R&D) on advanced acceleration concepts, such as plasma-wakefield acceleration and muon colliders, also holds much promise, but significant work is needed before these concepts can offer a viable solution for a future collider.

Preserving Europe’s leadership in accelerator science and technology requires a broad and extensive programme of work with continuous support for accelerator laboratories and test facilities. Such investments will continue to be very important for applications in medicine, materials science and industry.

9. Detector instrumentation

A wealth of lessons learned from the LHC and HL-LHC experiments is guiding the development of the next generation of detectors, which must offer higher granularity and – for a hadron collider – greater radiation tolerance, alongside improved timing resolution and data throughput.

As the eyes through which we observe collisions at accelerators, detectors require a coherent and long-term R&D programme. Central to these developments will be the detector R&D collaborations, which have provided a structured framework for organising and steering the work since the previous update to the European Strategy for Particle Physics. These span the full spectrum of detector systems, with high-rate gaseous detectors, liquid detectors and high-performance silicon sensors for precision timing, precision particle identification, low-mass tracking and advanced calorimetry.

If detectors are the eyes that explore nature, computing is the brain that deciphers the signals they receive

All these detectors will also require advances in readout electronics, trigger systems and real-time data processing. A major new element is the growing role of AI and quantum sensing, both of which already offer innovative methods for analysis, optimisation and detector design (CERN Courier July/August 2025 p31). As in computing, there are high hopes and well-founded expectations that these technologies will transform detector design and operation.

To maintain Europe’s leadership in instrumentation, sustained investment in test-beam infrastructures and engineering is essential. This supports a mutually beneficial symbiosis with industry. Detector R&D is a portal to sectors as diverse as medical diagnostics and space exploration, providing essential tools such as imaging technologies, fast electronics and radiation-hard sensors for a wide range of applications.

10. Computing

Data challenge

If detectors are the eyes that explore nature, computing is the brain that deciphers the signals they receive. The briefing book pays much attention to the major leaps in computation and storage that are required by future experiments, with simulation, data management and processing at the top of the list (see “Data challenge” figure). Less demanding in resources, but equally demanding of further development, is data analysis. Planning for these new systems is guided by sustainable computing practices, including energy-efficient software and data centres. The next frontier is the HL-LHC, which will be the testing ground and the basis for future development, and serves as an example for the preservation of the current wealth of experimental data and software (CERN Courier September/October 2025 p41).

Several paradigm shifts hold great promise for the future of computing in high-energy physics. Heterogeneous computing integrates CPUs, GPUs and accelerators, providing hugely increased capabilities and better scaling than traditional CPU usage. Machine learning is already being deployed in event simulation, reconstruction and even triggering, and the first signs from quantum computing are very positive. The combination of AI with quantum technology promises a revolution in all aspects of software and of the development, deployment and usage of computing systems.

Some closing remarks

Beyond detailed physics summaries, two overarching issues appear throughout the briefing book.

First, progress will depend on a sustained interplay between experiment, theory and advances in accelerators, instrumentation and computing. The need for continued theoretical development is as pertinent as ever, as improved calculations will be critical for extracting the full physics potential of future experiments.

Second, all this work relies on people – the true driving force behind scientific programmes. There is an urgent need for academia and research institutions to attract and support experts in accelerator technologies, instrumentation and computing by offering long-term career paths. A lasting commitment to training the new generation of physicists who will carry out these exciting research programmes is equally important.

Revisiting the briefing book to craft the current summary brought home very clearly just how far the field of particle physics has come – and, more importantly, how much more there is to explore in nature. The best is yet to come!

Biology at the Bragg peak

In 1895, mere months after Wilhelm Röntgen discovered X-rays, doctors explored their ability to treat superficial tumours. Today, the X-rays are generated by electron linacs rather than vacuum tubes, but the principle is the same, and radiotherapy is part of most cancer treatment programmes.

Charged hadrons offer distinct advantages. Though they are more challenging to manipulate in a clinical environment, protons and heavy ions deposit most of their energy just before they stop, at the so-called Bragg peak, allowing medical physicists to spare healthy tissue and target cancer cells precisely. Particle therapy has been an effective component of the most advanced cancer therapies for nearly 80 years, since it was proposed by Robert R Wilson in 1946.

With the incidence of cancer rising across the world, research into particle therapy is more valuable than ever to human wellbeing – and the science isn’t slowing down. Today, progress requires adapting accelerator physics to the demands of the burgeoning field of radiobiology. This is the scientific basis for developing and validating a whole new generation of treatment modalities, from FLASH therapy to combining particle therapy with immunotherapy.

Here are the top five facts accelerator physicists need to know about biology at the Bragg peak.

1. 100 keV/μm optimises damage to DNA

Repair shop

Almost every cell’s control centre is contained within its nucleus, which houses DNA – your body’s genetic instruction manual. If the cell’s DNA becomes compromised, it can mutate and lose control of its basic functions, leading the cell to die or multiply uncontrollably. The latter results in cancer.

For more than a century, radiation doses have been effective in halting the uncontrollable growth of cancerous cells. Today, the key insight from radiobiology is that for the same radiation dose, biological effects such as cell death, genetic instability and tissue toxicity differ significantly based on both beam parameters and the tissue being targeted.

Biologists have discovered that a “linear energy transfer” of roughly 100 keV/μm produces the most significant biological effect. At this density of ionisation, the distance between energy deposition events is roughly equal to the diameter of the DNA double helix, creating complex, repair-resistant DNA lesions that strongly reduce cell survival. Beyond 100 keV/μm, additional energy is wasted on cells that are already lethally damaged – the so-called overkill effect.
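As a rough back-of-the-envelope companion to this argument, the short Python sketch below converts an LET value into a mean spacing between energy-deposition events. The ~200 eV assumed per clustered event and the representative LET values are illustrative round numbers, chosen to be consistent with the ranges quoted in this article rather than taken from it.

# Illustrative only: mean spacing between energy-deposition events for a
# given linear energy transfer (LET). The ~200 eV per clustered event is
# an assumed round number; real particle-track structure is far richer.

EV_PER_EVENT = 200.0  # assumed energy per clustered ionisation event (eV)

def mean_event_spacing_nm(let_kev_per_um: float) -> float:
    """Mean distance between deposition events in nm (1 keV/um = 1 eV/nm)."""
    return EV_PER_EVENT / let_kev_per_um

# Representative (assumed) LET values for the beam types discussed here
for label, let in [("X-rays", 2.0),
                   ("protons near the Bragg peak", 30.0),
                   ("carbon ions", 100.0)]:
    print(f"{label}: {let:5.1f} keV/um -> "
          f"~{mean_event_spacing_nm(let):5.1f} nm between events")

# Only at ~100 keV/um does the spacing approach the ~2 nm diameter of the
# double helix, favouring the complex breaks that are hardest to repair.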

DNA is the main target of radiotherapy because it holds the genetic information essential for the cell’s survival and proliferation. Made up of a double helix that looks like a twisted ladder, DNA consists of two strands of nucleotides held together by hydrogen bonds. The sequence of these nucleotides forms the cell’s unique genetic code. A poorly repaired lesion on this ladder leaves a permanent mark on the genome.

When radiation induces a double-strand break, repair is primarily attempted through two pathways: either by rejoining the broken ends of the DNA, or by replacing the break with an identical copy of healthy DNA (see “Repair shop” image). The efficiency of these repairs decreases dramatically when the breaks occur in close spatial proximity or if they are chemically complex. Such scenarios frequently result in lethal mis-repair events or severe alterations in the genetic code, ultimately compromising cell survival.

This fundamental aspect of radiobiology strongly motivates the use of particle therapy over conventional radiotherapy. Whereas X-rays deliver less than 10 keV/μm, creating sparse ionisation events, protons deposit tens of keV/μm near the Bragg peak, and heavy ions 100 keV/μm or more.

2. Mitochondria and membranes matter too

For decades, radiobiology revolved around studying damage to DNA in cell nuclei. However, mounting evidence reveals that an important aspect of cellular dysfunction can be inflicted by damage to other components of cells, such as the cell membrane and the collection of “organelles” inside it. And the nucleus is not the only organelle containing DNA.

Self-destruct

Mitochondria generate energy and serve as the body’s cellular executioners. If a mitochondrion recognises that its cell’s DNA has been damaged, it may order the cell membrane to become permeable. Without the structure of the cell membrane, the cell breaks apart, its fragments carried away by immune cells. This is one mechanism behind “programmed cell death” – a controlled form of death, where the cell essentially presses its own self-destruct button (see “Self-destruct” image).

Irradiated mitochondrial DNA can suffer from strand breaks, base-pair mismatches and deletions in the code. In space-radiation studies, damage to mitochondrial DNA is a serious health concern as it can lead to mutations, premature ageing and even the creation of tumours. But programmed cell death can prevent a cancer cell from multiplying into a tumour. By disrupting the mitochondria of tumour cells, particle irradiation can compromise their energy metabolism and amplify cell death, increasing the permeability of the cell membrane and encouraging the tumour cell to self-destruct. Though a less common occurrence, membrane damage by irradiation can also directly lead to cell death.

3. Bystander cells exhibit their own radiation response

Communication

For many years, radiobiology was driven by a simple assumption: only cells directly hit by radiation would be damaged. This view started to change in the 1990s, when researchers noticed something unexpected: even cells that had not been irradiated showed signs of stress or injury when they were near the irradiated cells. This phenomenon, known as the bystander effect, revealed that irradiated cells can send biochemical signals to their neighbours, which may in turn respond as if they themselves had been exposed, potentially triggering an immune response (see “Communication” image).

“Non-targeted” effects propagate not only in space, but also in time, through the phenomenon of radiation-induced genomic instability. This temporal dimension is characterised by the delayed appearance of genomic alterations across multiple cell generations. Radiation damage propagates across cells and tissues, and over time, adding complexity beyond the simple dose–response paradigm.

Although the underlying mechanisms remain unclear, the clustered ionisation events produced by carbon ions generate complex DNA damage and cell death, while largely preserving nearby, unirradiated cells.

4. Radiation damage activates the immune system

Cancer cells multiply because the immune system fails to recognise them as a threat (see “Immune response” image). The modern pharmaceutical-based technique of immunotherapy seeks to alert the immune system to the threat posed by cancer cells it has ignored by chemically tagging them. Radiotherapy seeks to activate the immune system by inflicting recognisable cellular damage, but long courses of photon radiation can also weaken overall immunity.

Immune response

This negative effect is often caused by the exposure of circulating blood and active blood-producing organs to radiation doses. Fortunately, particle therapy’s ability to tightly conform the dose to the target and subject surrounding tissues to a minimal dose can significantly mitigate the reduction of immune blood cells, better preserving systemic immunity. By inflicting complex, clustered DNA lesions, heavy ions have the strongest potential to directly trigger programmed cell death, even in the most difficult-to-treat cancer cells, bypassing some of the molecular tricks that tumours use to survive, and amplifying the immune response beyond conventional radiotherapy with X-rays. This is linked to the complex, clustered DNA lesions induced by high linear-energy-transfer radiation, which trigger the DNA damage–repair signals strongly associated with immune activation.

These biological differences provide a strong rationale for the rapidly emerging research frontier of combining particle therapy with immunotherapy. Particle therapy’s key advantage is its ability to amplify immunogenic cell death, where the cell’s surface changes, creating “danger tags” that recruit immune cells to come and kill it, recognise others like it, and kill those too. This, together with particle therapy’s ability to mitigate systemic immunosuppression, makes it in theory a superior partner for immunotherapy compared to conventional X-rays.

5. Ultra-high dose rates protect healthy tissues

In recent years, the attention of clinicians and researchers has focused on the “FLASH” effect – a groundbreaking concept in cancer treatment where radiation is delivered at an ultra-high dose rate in excess of 40 J/kg/s. FLASH radiotherapy appears to minimise damage to healthy tissues while maintaining at least the same level of tumour control as conventional methods. Inflammation in healthy tissues is reduced, and the number of immune cells entering the tumour is increased, helping the body fight cancer more effectively. This can significantly widen the therapeutic window – the optimal range of radiation doses that can successfully treat a tumour while minimising toxicity to healthy tissues.
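To make the timescales concrete, here is a minimal sketch of the delivery-time arithmetic in Python. Only the 40 Gy/s threshold (40 J/kg per second) comes from the text above; the 10 Gy dose and the 0.1 Gy/s conventional dose rate are assumed, textbook-style round numbers.

# Minimal sketch: time needed to deliver a given dose at conventional
# versus FLASH dose rates. Apart from the 40 Gy/s FLASH threshold quoted
# in the text, the numbers are assumed for illustration.

DOSE_GY = 10.0  # assumed single-fraction dose (1 Gy = 1 J/kg)

RATES_GY_PER_S = {
    "conventional": 0.1,  # assumed typical conventional dose rate
    "FLASH": 40.0,        # ultra-high dose-rate threshold from the text
}

for label, rate in RATES_GY_PER_S.items():
    print(f"{label:>12}: {DOSE_GY / rate:7.2f} s to deliver {DOSE_GY} Gy")

# conventional: 100.00 s; FLASH: 0.25 s. The hypothesised transient
# oxygen depletion in healthy tissue (discussed below) relies on
# squeezing the whole dose into this sub-second window.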

Oxygen depletion

Though the radiobiological mechanisms behind this protective effect remain unclear, several hypotheses have been proposed. A leading theory focuses on oxygen depletion or “hypoxia”.

As tumours grow, they outpace the surrounding blood vessels’ ability to provide oxygen (see “Oxygen depletion” image). By condensing the dose in a very short time, it is thought that FLASH therapy may induce transient hypoxia within normal tissues too, reducing oxygen-dependent DNA damage there, while killing tumour cells at the same rate. Using a similar mechanism, FLASH therapy may also preserve mitochondrial integrity and energy production in normal tissues.

It is still under investigation whether a FLASH effect occurs with carbon ions, but combining the biological benefits of high linear-energy-transfer radiation with those of FLASH could be very promising.

The future of particle therapy

What excites you most about your research in 2025?

2025 has been a very exciting year. We just published a paper in Nature Physics about radioactive ion beams.

I also received an ERC Advanced Grant to study the FLASH effect with neon ions. We plan to go back to the 1970s, when Cornelius Tobias in Berkeley thought of using very heavy ions against radio-resistant tumours, but now using FLASH’s ultra-high dose rates to reduce its toxicity to healthy tissues. Our group is also working on the simultaneous acceleration of different ions: carbon ions will stop in the tumour, but helium ions will cross the patient, providing an online monitor of the beam’s position during irradiation. The other big news in radiotherapy is vertical irradiation, where we don’t rotate the beam around the patient, but rotate the patient around the beam. This is particularly interesting for heavy-ion therapy, where building a rotating gantry that can irradiate the patient from multiple angles is almost as expensive as the whole accelerator. We are leading the Marie Curie UPLIFT training network on this topic.

Why are heavy ions so compelling?

Close to the Bragg peak, where very heavy ions are very densely ionising, the damage they cause is difficult to repair. You can kill the tumours much better than with protons. But carbon, oxygen and neon run the risk of inducing toxicity in healthy tissues. In Berkeley, more than 400 patients were treated with heavy ions. The results were not very good, and it was realised that these ions can be very toxic for normal tissue. The programme was stopped in 1992, and since then there has been no more heavy-ion therapy in the US, though carbon-ion therapy was established in Japan not long after. Today, most of the 130 particle-therapy centres worldwide use protons, but 17 centres across Asia and Europe offer carbon-ion therapy, with one now under construction at the Mayo Clinic in the US. Carbon is very convenient, because the plateau of the Bragg curve is similar to X-rays, while the peak is much more effective than protons. But still, there is evidence that it’s not heavy enough, that the charge is not high enough to get rid of very radio-resistant hypoxic tumours – tumours where you don’t have enough oxygenation. So that’s why we want to go heavier: neon. If we show that you can manage the toxicity using FLASH, then this is something that can be translated into the clinics.

There seems to be a lot of research into condensing the dose either in space, with microbeams, or in time, with the FLASH effect…

Absolutely.

Why does that spare healthy tissue at the expense of cancer cells?

That is a question I cannot answer. To be honest, nobody knows. We know that it works, but I want to make it very clear that we need more research to translate it completely to the clinic. It is true that if you either fractionate in space or compress in time, normal tissue is much more resistant, while the effect on the tumour is approximately the same, allowing you to increase the dose without harming the patient. The problem is that the data are still controversial.

So you would say that it is not yet scientifically established that the FLASH effect is real?

There is an overwhelming amount of evidence for the strong sparing of normal tissue at specific sites, especially for the skin and for the brain. But, for example, for gastrointestinal tumours the data is very controversial. Some data show no effect, some data show a protective effect, and some data show an increased effectiveness of FLASH. We cannot generalise.

Is it surprising that the effect depends on the tissue?

In medicine this is not so strange. The brain and the gut are completely different. In the gut, you have a lot of cells that are quickly duplicating, while in the brain, you almost have the same number of neurons that you had when you were a teenager – unfortunately, there is not much exchange in the brain.

So, your frontier at GSI is FLASH with neon ions. Would you argue that microbeams are equally promising?

Absolutely, yes, though millibeams more so than microbeams, because microbeams are extremely difficult to take into clinical translation. In the micron region, any kind of movement will jeopardise your spatial fractionation. But if you have millimetre spacing, then this becomes credible and feasible. You can create millibeams using a grid. Instead of having one solid beam, you have several stripes. If you use heavier ions, they don’t scatter very much and remain spatially fractionated. There is mounting evidence that fractionated irradiation of the tumour can elicit an immune response and that these immune cells eventually destroy the tumour. Research is still ongoing to understand whether it’s better to irradiate with a spatial fractionation of 1 millimetre or to irradiate only the centre of the tumour, allowing the immune cells to migrate and destroy the tumour.

Radioactive-ion therapy

What’s the biology of the body’s immune response to a tumour?

To become a tumour, a cell has to fool the immune system, otherwise our immune system will destroy it. So, we are desperately trying to find a way to teach the immune system to say: “look, this is not a friend – you have to kill it, you have to destroy it.” This is immunotherapy, the subject of the Nobel Prize in medicine in 2018 and also related to the 2025 Nobel Prize in medicine on regulation of the immune system. But these drugs don’t work for every tumour. Radiotherapy is very useful in this sense, because you kill a lot of cells, and when the immune system sees a lot of dead cells, it activates. A combination of immunotherapy and radiotherapy is now being used more and more in clinical trials.

You also mentioned radioactive ion beams and the simultaneous acceleration of carbon and helium ions. Why are these approaches advantageous?

The two big problems with particle therapy are cost and range uncertainty. Having energy deposition concentrated at the Bragg peak is very nice, but if it’s not in the right position, it can do a lot of damage. Precision is therefore much more important in particle therapy than in conventional radiotherapy, as X-rays don’t have a Bragg peak – even if the patient moves a little bit, or if there is an anatomical change, it doesn’t matter. That’s why many centres prefer X-rays. To change that, we are trying to create ways to see the beam while we irradiate. Radioactive ions decay while they deposit energy in the tumour, allowing you to see the beam using PET. With carbon and helium, you don’t see the carbon beam, but you see the helium beam. These are both ways to visualise the beam during irradiation.

How significantly does radiation therapy improve human well-being in the world today?

When I started to work in radiation therapy at Berkeley, many people were telling me: “Why do you waste your time in radiation therapy? In 10 years everything will be solved.” At that time, the trend was gene therapy. Other trends have come and gone, and after 35 years in this field, radiation therapy is still a very important tool in a multidisciplinary strategy for killing tumours. More than 50% of cancer patients need radiotherapy, but, even in Europe, it is not available to all patients who need it.

Accelerator and detector physicists have to learn to speak the language of the non-specialist

What are the most promising initiatives to increase access to radiotherapy in low- and middle-income countries?

Simply making the accelerators cheaper. The GDP of most countries in Africa, South America and Asia is also steadily increasing, so you can expect that – let’s say – in 20 or 30 years from now, there will be a big demand for advanced medical technologies in these countries, because they will have the money to afford it.

Is there a global shortage of radiation physicists?

Yes, absolutely. This is true not only for particle therapy, which requires a high number of specialists to maintain the machine, but also for conventional X-ray radiotherapy with electron linacs. It’s also true for diagnostics because you need a lot of medical physicists for CT, PET and MRI.

What is your advice to high-energy physicists who have just completed a PhD or a postdoc, and want to enter medical physics?

The next step is a specialisation course. In about four years, you will become a specialised medical physicist and can start to work in the clinics. Many who take that path continue to do research alongside their clinical work, so you don’t have to give up your research career, just reorient it toward medical applications.

How does PTCOG exert leadership over global research and development?

The Particle Therapy Co-Operative Group (PTCOG) is a very interesting association. Every particle-therapy centre is represented in its steering committee. We have two big roles. One is research, so we really promote international research in particle therapy, even with grants. The second is education. For example, Spain currently has 11 proton therapy centres under construction. Each will need maybe 10 physicists. PTCOG is promoting education in particle therapy to train the next generation of radiation-therapy technicians and medical oncologists. It’s a global organisation, representing science worldwide, across national and continental branches.

Do you have a message for our community of accelerator physicists and detector physicists? How can they make their research more interdisciplinary and improve the applications?

Accelerator physicists especially, but also detector physicists, have to learn to speak the language of the non-specialist. Sometimes they are lost in translation. Also, they have to be careful not to oversell what they are doing, because you can create expectations that are not matched by reality. Tabletop laser-driven accelerators are a very interesting research topic, but don’t oversell them as something that can go into the clinics tomorrow, because then you create frustration and disappointment. There is a similar situation with linear accelerators for particle therapy. Since I started to work in this field, people have been saying “Why do we use circular accelerators? We should use linear accelerators.” After 35 years, not a single linear accelerator has been used in the clinics. There must also be a good connection with industry, because eventually clinics buy from industry, not academia.

Are there missed opportunities in the way that fundamental physicists attempt to apply their research and make it practically useful with industry and medicine?

In my opinion, it should work the other way around. Don’t say “this is what I am good at”; ask the clinical environment, “what do you need?” In particle therapy, we want accelerators that are cheaper and with a smaller footprint. So in whatever research you do, you have to prove to me that the footprint is smaller, and the cost lower.

Cave M

Do forums exist where medical doctors can tell researchers what they need?

PTCOG is definitely the right place for that. We keep medicine, physics and biology together, and it’s one of the meetings with the highest industry participation. All the industries in particle therapy come to PTCOG. So that’s exactly the right forum where people should talk. We expect 1500 people at the next meeting, which will take place in Deauville, France, from 8 to 13 June 2026, shortly after IPAC.

Are accelerator physicists welcome to engage in PTCOG even if they’ve not previously worked on medical applications?

Absolutely. This is something that we are missing. Accelerator physicists mostly go to IPAC but not to PTCOG. They should also come to PTCOG to speak more with medical physicists. I would say that PTCOG is 50% medical physics, 30% medicine and 20% biology. So, there are a lot of medical physicists, but we don’t have enough accelerator physicists and detector physicists. We need more particle and nuclear physicists to come to PTCOG to see what the clinical and biology community want, and whether they can provide something.

Do you have a message for policymakers and funding agencies about how they can help push forward research in radiotherapy?

Unfortunately, radiation therapy and even surgery are wrongly perceived as old technologies. There is not much investment in them, and that is a big problem for us. What we miss is good investment at the level of cooperative programmes that develop particle therapy in a collaborative fashion. At the moment, it’s becoming increasingly difficult. All the money goes into prevention and pharmaceuticals for immunotherapy and targeted therapy, and this is something that we are trying to reverse.

Are large accelerator laboratories well placed to host cooperative research projects?

Both GSI and CERN face the same challenge: their primary mission is nuclear and particle physics. Technological transfer is fine, but they may jeopardise their funding if they stray too far from their primary goal. I believe they should invest more in technological transfer, lobbying their funding agencies to demonstrate that there is a translation of their basic science into something that is useful for public health.

How does your research in particle therapy transfer to astronaut safety?

Particle therapy and space-radiation research have a lot in common. They use the same tools and there are also a lot of overlapping topics, for example radiosensitivity. One patient is more sensitive, one patient is more resistant, and we want to understand what the difference is. The same is true of astronauts – and radiation is probably the main health risk for long-term missions. Space is also a hostile environment in terms of microgravity and isolation, but here we understand the risks, and we have countermeasures. For space radiation, the problem is that we don’t understand the risk very well, because the type of radiation is so exotic. We don’t have that type of radiation on Earth, so we don’t know exactly how big the risk is. Plus, we don’t have effective countermeasures, because the radiation is so energetic that shielding will not be enough to protect the crews effectively. We need more research to reduce the uncertainty on the risk, and most of this research is done in ground-based accelerators, not in space.

Radiation therapy is probably the best interdisciplinary field that you can work in

I understand that you’re even looking into cryogenics…

Hibernation is considered science fiction, but it’s not science fiction at all – it’s something we can recreate in the lab. We call it synthetic torpor. This can be induced in animals that are non-hibernating. Bears and squirrels hibernate; humans and rats don’t, but we can induce it. And when you go into hibernation, you become more radioresistant, providing a possible countermeasure to radiation exposure, especially for long missions. You don’t need much food, you don’t age very much, metabolic processes are slowed down, and you are protected from radiation. That’s for space. This could also be applied to therapy. Imagine you have a patient with multiple metastases and no hope for treatment. If you can induce synthetic torpor, all the tumours will stop, because when you go into a low temperature and hibernation, the tumours don’t grow. This is not the solution, because when you wake the patient up, the tumours will grow again, but what you can do is treat the tumours while the patient is in hibernation, while healthy tissue is more radiation resistant. The number of research groups working on this is low, so we’re quite far from considering synthetic torpor for spaceflight or clinical trials for cancer treatment. First of all, we have to see how long we can keep an animal in synthetic torpor. Second, we should translate this into bigger animals like pigs or even non-human primates.

In the best-case scenario, what can particle therapy look like in 10 years’ time?

Ideally, we should probably at least double the number of particle-therapy centres that are now available, and expand into new regions. We finally have a particle-therapy centre in Argentina, which is the first one in South America. I would like to see many more in South America and in Africa. I would also like to see more centres that try to tackle tumours where there is no treatment option, like glioblastoma or pancreatic cancer, where the mortality is the same as the incidence. If we can find ways to treat such cancers with heavy ions and give hope to these patients, this would be really useful.

Is there a final thought that you’d like to leave with readers?

Radiation therapy is probably the best interdisciplinary field that you can work in. It’s useful for society and it’s intellectually stimulating. I really hope that big centres like CERN and GSI commit more and more to the societal benefits of basic research. We need it now more than ever. We are living in a difficult global situation, and we have to prove that when we invest money in basic research, this is very well invested money. I’m very happy to be a scientist, because in science, there are no barriers, there is no border. Science is really, truly international. I’m an advocate of saying scientific collaboration should never stop. It didn’t even stop during the Cold War. At that time, the cooperation between East and West at the scientist level helped to reduce the risk of nuclear weapons. We should continue this. We don’t have to think that what is happening in the world should stop international cooperation in science: it eventually brings peace.

Polymath, humanitarian, gentleman

Towards LEP and the LHC

Herwig Schopper was born on 28 February 1924 in the German-speaking town of Landskron (today, Lanškroun) in the then young country of Czechoslovakia. He enjoyed an idyllic childhood, holidaying at his grandparents’ hotel in Abbazia (today, Opatija) on what is now the Croatian Adriatic coast. It was there that his interest in science was awakened through listening in on conversations between physicists from Budapest and Belgrade. In Landskron, he developed an interest in music and sport, learning to play both piano and double bass, and skiing in the nearby mountains. He also learned to speak English, not merely to read Shakespeare as was the norm at the time, but to be able to converse, thanks to a Jewish teacher who had previously spent time in England. This skill was to prove transformational later in life.

The idyll began to crack in 1938 when the Sudetenland was annexed by Germany. War broke out the following year, but the immediate impact on Herwig was limited. He remained in Landskron until the end of his high-school education, graduating as a German citizen – and with no choice but to enlist. Joining the Luftwaffe signals corps, because he thought that would help him develop his knowledge of physics, he served for most of the war on the Eastern Front, ensuring that communication lines remained open between military headquarters and the troops on the front lines. As the war drew to a close in March 1945, he was transferred west, just in time to see the Western Allies cross the Rhine at Remagen. Recalled to Berlin and given orders to head further west, Herwig instructed his driver to first make a short detour via Potsdam. It was a sign of the kind of person Herwig was that, amidst the chaos of the fall of Berlin, he wanted to see Schloss Sanssouci, Frederick the Great’s temple to the enlightenment, while he had the chance.

Academic overture

By the time Herwig arrived in Schleswig–Holstein, the war was over, and he found himself a prisoner of the British. He later recalled, with palpable relief, that he had managed to negotiate the war without having to shoot at anyone. On discovering that Herwig spoke English, the British military administration engaged him as a translator. This came as a great consolation to Herwig since many of his compatriots were dispatched to the mines to extract the coal that would be used to reconstruct a shattered Germany. Herwig rapidly struck up a friendship with the English captain he was assigned to. This in turn eased his passage to the University of Hamburg, where he began his research career studying optics, and later enabled him to take the first of his scientific sabbaticals when travel restrictions on German academics were still in place (see “Academic overture” image).

In 1951, Herwig left for a year in Stockholm, where he worked with Lise Meitner on beta decay. He described this time as his first step up in energy from the eV-energies of visible light to the keV-energies of beta-decay electrons. A later sabbatical, starting in 1956, would see him in Cambridge, where he worked under Meitner’s nephew, Otto Frisch, in the Cavendish laboratory. As Austrian Jews, both Meitner and Frisch had sought exile before the war. By this time, Frisch had become director of the Cavendish’s nuclear physics department and a fellow of the Royal Society.

Initial interactions

While at Cambridge, Herwig took his first steps in the emerging field of particle physics, and became one of the first to publish an experimental verification of Lee and Yang’s proposal that parity would be violated in weak interactions. His single-author paper was published soon after that of Chien-Shiung Wu and her team, leading to a lifelong friendship between the two (see “Virtuosi” image).

Following Wu’s experimental verification of parity violation, cited by Herwig in his paper, Lee and Yang received the Nobel Prize. Wu was denied the honour, ostensibly on the basis that she was one of a team and the prize can only be shared three ways. It remains in the realm of speculation whether Herwig would have shared the prize had his paper been the first to appear.

Virtuosi

A third sabbatical, arranged by Willibald Jentschke, who wanted Herwig to develop a user group for the newly established DESY laboratory, saw the Schopper family move to Ithaca, New York in 1960. At Cornell, Herwig learned the ropes of electron synchrotrons from Bob Wilson. He also learned a valuable lesson in the hands-on approach to leadership. Arriving in Ithaca on a Saturday, Herwig decided to look around the deserted lab. He found one person there, tidying up. It turned out not to be the janitor, but the lab’s founder and director, Wilson himself. For Herwig, Cornell represented another big jump in energy, cementing Schopper as an experimental particle physicist.

Cornell represented another big jump in energy, cementing Schopper as an experimental particle physicist

Herwig’s three sabbaticals gave him the skills he would later rely on in hardware development and physics analysis, but it was back in Germany that he honed his management skills and established himself as a skilled science administrator.

At the beginning of his career in Hamburg, Herwig worked under Rudolf Fleischmann, and when Fleischmann was offered a chair at Erlangen, Herwig followed. Among the research he carried out at Erlangen was an experiment to measure the helicity of gamma rays, a technique that he’d later deploy in Cambridge to measure parity violation.

Prélude

It was not long before Herwig was offered a chair himself, and in 1958, at the tender age of 34, he parted from his mentor to move to Mainz. In his brief tenure there, he set wheels in motion that would lead to the later establishment of the Mainz Microtron laboratory, today known as MAMI. By this time, however, Herwig was much in demand, and he soon moved to Karlsruhe, taking up a joint position between the university and the Kernforschungszentrum, KfK. His plan was to merge the two under a single management structure as the Karlsruhe Institute for Experimental Nuclear Physics. In doing so, he sowed the seeds of today’s Karlsruhe Institute of Technology, KIT.

Pioneering research

At Karlsruhe, Herwig established a user group for DESY, as Jentschke had hoped, and another at CERN. He also initiated a pioneering research programme into superconducting RF and had his first personal contacts with CERN, spending a year there in 1964. In typical Herwig fashion, he pursued his own agenda, developing a device he called a sampling total absorption counter, STAC, to measure neutron energies. At the time, few saw the need for such a device, but this form of calorimetry is now an indispensable part of any experimental particle physicist’s armoury.

In 1970, Herwig again took leave of absence from Karlsruhe to go to CERN. He’d been offered the position of head of the laboratory’s Nuclear Physics Division, but his stay was to be short-lived (see “Prélude” image). The following year, Jentschke took up the position of Director-General of CERN alongside John Adams. Jentschke was to run the original CERN laboratory, Lab I, while Adams ran the new CERN Lab II, tasked with building the SPS. This left a vacancy at Germany’s national laboratory, and the job was offered to Herwig. It was too good an offer to refuse.

As chair of the DESY directorate, Herwig witnessed from afar the discovery of both the charm and bottom quarks in the US. Although DESY missed out on the discoveries, its machines were perfect laboratories to study the spectroscopy of these new quark families, and DESY went on to provide definitive measurements. Herwig also oversaw DESY’s development in synchrotron light science, repurposing the DORIS accelerator as a light source when its physics career was complete and it was succeeded by PETRA.

Architects of LEP

The ambition of the PETRA project put DESY firmly on course to becoming an international laboratory, setting the scene for the later HERA model. PETRA experiments went on to discover the gluon in 1979.

The following year, Herwig was named as CERN’s next Director-General, taking up office on 1 January 1981. By this time, the CERN Council had decided to call time on its experiment with two parallel laboratories, leaving Herwig with the task of uniting Lab I and Lab II. The Council was also considering plans to build the world’s most powerful accelerator, the Large Electron–Positron collider, LEP.

It fell to Herwig both to implement a new management structure for CERN and to see the LEP proposal through to approval (see “Architects of LEP” image). Unpopular decisions were inevitable, making the early years of Herwig’s mandate somewhat difficult. In order to get LEP approved, he had to make sacrifices. As a result, the Intersecting Storage Rings (ISR), the world’s only hadron collider, collided its final beams in 1984 and cuts had to be made across the research programme. Herwig was also confronted with a period of austerity in science funding, and found himself obliged to commit CERN to constant funding in real terms throughout the construction of LEP, and as it turns out, in perpetuity.

It fell to Herwig both to implement a new management structure for CERN and to see the LEP proposal through to approval

Herwig’s battles were not only with the lab’s governing body; he also went against the opinions of some of his scientific colleagues concerning the size of the new accelerator. True to form, Herwig stuck with his instinct, insisting that the LEP tunnel should be 27 km around, rather than the more modest 22 km that would have satisfied the immediate research goals while avoiding the difficult geology beneath the Jura mountains. Herwig, however, was looking further ahead – to the hadron collider that would follow LEP. His obstinacy was fully vindicated with the discovery of the Higgs boson in 2012, confirming the Brout–Englert–Higgs mechanism, which had been proposed almost 50 years earlier. This discovery earned the Nobel Prize for Peter Higgs and François Englert in 2013 (see “Towards LEP and the LHC” image).

The CERN blueprint

Difficult though some of his decisions may have been, there is no doubt that Herwig’s 1981 to 1988 mandate established the blueprint for CERN to this day. The end of operations of the ISR may have been unpopular, and we’ll never know what it might have gone on to achieve, but the world’s second hadron collider at the SPS delivered CERN’s first Nobel prize during Herwig’s mandate, awarded to Carlo Rubbia and Simon van der Meer in 1984 for the discovery of the W and Z bosons.

Herwig turned 65 two months after stepping down as CERN Director-General, but retirement was never on his mind. In the years that followed, he carried out numerous roles for UNESCO, applying his diplomacy and foresight to new areas of science. UNESCO was in many ways a natural step for Herwig, whose diplomatic skills had been honed by the steady stream of high-profile visitors to CERN during his mandate as Director-General. At one point, he engineered a meeting at UNESCO between Jim Cronin, who was lobbying for the establishment of a cosmic-ray observatory in Argentina, and the country’s president, Carlos Menem. The following day, Menem announced the start of construction of the Pierre Auger Observatory. On another occasion, Herwig was tasked with developing the Soviet gift to Cuba of a small particle accelerator into a working laboratory. That initiative would ultimately come to nothing, but it helped Herwig prepare the groundwork for perhaps his greatest post-retirement achievement: SESAME, a light-source laboratory in Jordan that operates as an intergovernmental organisation following the CERN model (see “Science diplomacy” image). Mastering the political challenge of establishing an organisation that brings together countries from across the Middle East – including long-standing rivals – required a skill set that few possess.

Science diplomacy

Although the roots of SESAME can be traced to a much earlier date, by the end of the 20th century, when the idea was sufficiently mature for an interim organisation to be established, Herwig was the natural candidate to lead the new organisation through its formative years. His experience of running international science coupled with his post-retirement roles at UNESCO made him the obvious choice to steer SESAME from idea to reality. It was Herwig who modelled SESAME’s governing document on the CERN convention, and it was Herwig who secured the site in Jordan for the laboratory. Today, SESAME is producing world-class research – a shining example of what can be achieved when people set aside their differences and focus on what they have in common.

Establishing an organisation that brings together countries from across the Middle East required a skill set few possess

Herwig never stopped working for what he believed in. When CERN’s current Director-General convened a meeting with past Directors-General in 2024, along with the president of the CERN Council, Herwig was present. When initiatives were launched to establish an international research centre in the Balkans, Herwig stepped up to the task. He never lost his sense of what is right, and he never lost his mischievous sense of humour. Following an interview at his house in 2024 for the film The Peace Particle, the interviewer asked whether he still played the piano. Herwig stood up, walked to the piano and started to play a very simple arrangement of Christian Sinding’s “Rustle of Spring”. Just as curious glances started to be exchanged, he transitioned, with a twinkle in his eye, to a beautifully nuanced rendition of Liszt’s “Liebestraum No. 3”.

Herwig Schopper was a rare combination of genius, polymath, humanitarian and gentleman. Always humble, he could make decisions with nerves of steel when required. His legacy spans decades and disciplines, and has shaped the field of particle physics in many ways. With his passing, the world has lost a truly remarkable individual. He will be sorely missed.

Alchemy by pure light

New results in fundamental physics can be a long time coming. Experimental discoveries of elementary particles have often occurred only decades after their prediction by theory.

Still, the discovery of the fundamental particles of the Standard Model has been speedy in comparison to another longstanding quest in natural philosophy: chrysopoeia, the medieval alchemists’ dream of transforming the “base metal” lead into the precious metal gold. This may have been motivated by the observation that the dull grey, relatively abundant metal lead is of similar density to gold, which has been coveted for its beautiful colour and rarity for millennia.

The quest goes back at least to the mythical, or mystical, notion of the philosopher’s stone and Zosimos of Panopolis around 300 CE. Its evolution, in various cultures, through medieval times and up to the 19th century, is a fascinating thread in the emergence of modern empirical science from earlier ways of thinking. Some of the leaders of this transition, such as Isaac Newton, also practised alchemy. While the alchemists pioneered many of the techniques of modern chemistry, it was only much later that it became clear that lead and gold are distinct chemical elements and that chemical methods are powerless to transmute one into the other.

With the dawn of nuclear physics in the 20th century, it was discovered that elements could transform into others through nuclear reactions, either naturally by radioactive decay or in the laboratory. In 1940, gold was produced at the Harvard Cyclotron by bombarding a mercury target with fast neutrons. Some 40 years ago, tiny amounts of gold were produced in nuclear reactions of carbon and neon beams with a bismuth target at the Bevalac in Berkeley. Very recently, gold isotopes were produced at the ISOLDE facility at CERN by bombarding a uranium target with proton beams (see “Historic gold” images).

Historic gold

Now, tucked away discreetly in the conclusions of a paper recently published by the ALICE collaboration, one can find the observation, originating from Igor Pshenichnov, Uliana Dmitrieva and Chiara Oppedisano, that “the transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC.”

ALICE has finally measured the transmutation of lead into gold, not via the crucibles and alembics of the alchemists, nor even by the established techniques of nuclear bombardment used in the experiments mentioned above, but in a novel and interesting way that has become possible in “near-miss” interactions of lead nuclei at the LHC.

At the LHC, lead has been transformed into gold by light.   

Since the first announcement, this story has attracted considerable attention in the media. Here I would like to put this assertion in scientific context and indicate its relevance in testing our understanding of processes that can limit the performance of the LHC and future colliders such as the FCC.

Electromagnetic pancakes

Any charged particle at rest is surrounded by lines of electric field radiating outwards in all directions. These fields are particularly strong close to a lead nucleus because it contains 82 protons, each with one elementary charge. In the LHC, the lead nuclei travel at 99.999994% of the speed of light, squeezing the field lines into a thin pancake transverse to the direction of motion in the laboratory frame of reference. This compression is so strong that, in the vicinity of the nucleus, we find the strongest magnetic and electric fields known in the universe: trillions of times stronger than even the prodigiously powerful superconducting magnets of the LHC, orders of magnitude beyond the Schwinger limit at which the vacuum polarises, and greater than the magnetic fields of magnetars, the rare, rapidly spinning neutron stars. Of course, these fields exist only for the very short time during which one nucleus passes the other. Quantum mechanics, via a famous insight of Fermi, Weizsäcker and Williams, tells us that this electromagnetic flash is equivalent to a pulse of quasi-real photons whose intensity and energy are greatly boosted by the large charge and the relativistic compression.
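To put numbers on the compression, here is a minimal sketch using only the beam velocity quoted above; the derived values are illustrative rather than taken from the ALICE paper.

    import math

    beta = 0.99999994                    # Pb beam velocity as a fraction of c (from the text)
    gamma = 1 / math.sqrt(1 - beta**2)   # Lorentz factor, roughly 2900
    # The transverse electric field of the moving nucleus is boosted by gamma,
    # while its field lines are flattened into a pancake of half-angle ~1/gamma.
    print(f"gamma ~ {gamma:.0f}, pancake half-angle ~ {1/gamma:.1e} rad")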

When two beams of nuclei are brought into collision in the LHC, some hadronic interactions occur. In the unimaginable temperatures and densities of this ultimate crucible we create droplets of the quark–gluon plasma, the main subject of study of the heavy-ion programme. However, when nuclei “just miss” each other, the interactions of these electromagnetic fields amount to photon–photon and photon–nucleus collisions. Some of the processes occurring in these so-called ultra-peripheral collisions (UPCs) are so strong that they would limit the performance of the collider, were it not for special measures implemented in the last 10 years.

Spotting spectators

The ALICE paper is one among many exploring the rich field of fundamental physics studies opened up by UPCs at the LHC (CERN Courier January/February 2025 p31). Among them are electromagnetic-dissociation processes, in which a photon interacting with a nucleus can excite oscillations of its internal structure, ejecting small numbers of neutrons and protons that are detected by ALICE’s zero-degree calorimeters (ZDCs). The ALICE experiment is unique in having calorimeters to detect spectator protons as well as neutrons (see “Spotting spectators” figure). The residual nuclei are not detected, although they contribute to the signals measured by the beam-loss monitor system of the LHC.

Each 208Pb nucleus in the LHC beams contains 82 protons and 208 − 82 = 126 neutrons. To create gold – a nucleus with a charge of 79 – three protons must be removed, together with a variable number of neutrons.

Alchemy in ALICE

Gold production is less frequent than the creation of the elements thallium (single-proton emission) or mercury (two-proton emission), but the results of the ALICE paper show that each of the two colliding lead-ion beams contributes a cross section of 6.8 ± 2.2 barns, implying that the LHC now produces gold at a maximum rate of about 89 kHz from lead–lead collisions at the ALICE collision point, or 280 kHz from all the LHC experiments combined. During Run 2 of the LHC (2015–2018), about 86 billion gold nuclei were created at all four LHC experiments, but in terms of mass this was only a tiny 2.9 × 10⁻¹¹ g of gold. Almost twice as much has already been produced in Run 3 (since 2023).
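These figures hang together under a simple rate estimate. The back-of-envelope sketch below assumes a typical upgraded lead–lead luminosity – an assumption, not a number from the paper – and takes everything else from the text.

    L_inst = 6.5e27               # assumed instantaneous Pb-Pb luminosity, cm^-2 s^-1
    sigma = 2 * 6.8e-24           # gold-production cross section, both beams, cm^2
    print(f"gold rate ~ {L_inst * sigma:.1e} per second")   # ~9e4 /s, i.e. ~90 kHz

    n_run2 = 86e9                          # gold nuclei created in Run 2 (from the text)
    mass_g = n_run2 * 197 * 1.66054e-24    # ~197 u per nucleus; 1 u = 1.66054e-24 g
    print(f"Run 2 gold mass ~ {mass_g:.1e} g")              # ~2.8e-11 g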

The transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC

Strikingly, this gold production is somewhat larger than the rate of hadronic nuclear collisions, which occur at about 50 kHz for a total cross section of 7.67 ± 0.25 barns.

Different isotopes of gold are created according to the number of neutrons that are emitted at the same time as the three protons. To create 197Au – the only stable isotope, and the sole component of natural gold – a further eight neutrons must be removed, a very unlikely process. Most of the gold produced is in the form of unstable isotopes with lifetimes of the order of a minute.

Although the ZDC signals confirm the proton and neutron emission, the transformed nuclei are not themselves detected by ALICE and their fate is not discussed in the paper. These interaction products nevertheless propagate hundreds of metres through the beampipe in several secondary beams whose trajectories can be calculated, as seen in the “Ultraperipheral products” figure.

Ultraperipheral products

The ordinate shows horizontal displacement from the central path of the outgoing beam. This coordinate system is commonly used in accelerator physics as it suppresses the bending of the central trajectory – downwards in the figure – and its separation into the beam pipes of the LHC arcs.   

The “5σ” envelope of the intense main beam of 208Pb nuclei that did not collide is shown in blue. Neutrons from electromagnetic dissociation and other processes are plotted in magenta. They begin with a certain divergence and then travel down the LHC beam pipe in straight lines, forming a cone, until they are detected by the ALICE ZDC, some 114 m from the collision point, just beyond the place where the beam pipe splits in two. Because of the coordinate system, the neutron cone appears to bend sharply at the first separation dipole magnet.

Protons are shown in green. As they have only about 40% of the magnetic rigidity of the main beam, they bend quickly away from the central trajectory in the first separation magnet, before being detected by a different part of the ZDC on the other side of the beam pipe.
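The 40% figure follows directly from the definition of magnetic rigidity, Bρ = p/q. In a sketch that assumes each spectator proton keeps the beam’s momentum per nucleon, p_N, the ratio of rigidities is

\[
\frac{(B\rho)_{p}}{(B\rho)_{\mathrm{Pb}}} = \frac{p_N/e}{208\,p_N/82e} = \frac{82}{208} \approx 0.39 .
\]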

Photon–photon interactions in UPCs copiously produce electron–positron pairs. In a small fraction of them – corresponding nevertheless to a large cross section of about 280 barns – the electron is created in a bound state of one of the 208Pb nuclei, generating a secondary beam of 208Pb81+ single-electron ions. The beam from this so-called bound-free pair production (BFPP), shown in red, carries a power of about 150 W – enough to quench the superconducting coils of the LHC magnets, causing them to transition from the superconducting to the normal resistive state. Such quenches can seriously disrupt accelerator operation, as the stored magnetic energy is rapidly released as heat within the affected magnet.
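The quoted beam power can be cross-checked with the same assumed luminosity as above; the per-nucleon beam energy here is also an assumption, roughly the lead value of recent runs.

    L_inst = 6.5e27                       # assumed Pb-Pb luminosity, cm^-2 s^-1
    sigma_bfpp = 280e-24                  # BFPP cross section, cm^2 (from the text)
    rate = L_inst * sigma_bfpp            # ~1.8e6 208Pb81+ ions per second
    E_ion = 208 * 2.68e12 * 1.602e-19     # assumed ~2.68 TeV per nucleon, in joules
    print(f"BFPP beam power ~ {rate * E_ion:.0f} W")   # ~160 W, near the quoted 150 W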

To prevent this, new “TCLD” collimators were installed on either side of ALICE during the second long shutdown of the LHC. Together with a variable-amplitude bump in the beam orbit, which pulls the BFPP beam away from the first impact point so that it can be safely absorbed on the TCLD, this allowed the luminosity to be increased to more than six times the original LHC design, just in time to exploit the full capacity of the upgraded ALICE detector in Run 3.

Light-ion collider

A first at the LHC

Besides lead, the LHC has recently collided beams of 16O and 20Ne (see “First oxygen and neon collisions at the LHC”), and nuclear transmutation has manifested itself in another way. In hadronic or electromagnetic events where equal numbers of protons and neutrons are emitted, the outgoing nucleus has almost the same charge-to-mass ratio as the beam species, since nuclear binding energies are only a small correction for light nuclei (see the worked ratio below). It may then continue to circulate with the original beam, resulting in a small contamination that increases during the several hours of an LHC fill. Hybrid collisions can then occur, for example involving a 14N nucleus formed by the ejection of a proton and a neutron from 16O. Fortunately, the momentum spread introduced by the interactions puts many of these nuclei outside the acceptance of the radio-frequency cavities that keep the beams bunched as they circulate around the ring, so the effect is smaller than had first been expected.
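For the example above, the charge-to-mass ratio is preserved exactly at the level of nucleon counting – a sketch that neglects the small binding-energy differences:

\[
\frac{Z}{A}\left({}^{16}\mathrm{O}\right) = \frac{8}{16} = 0.5, \qquad
\frac{Z}{A}\left({}^{14}\mathrm{N}\right) = \frac{7}{14} = 0.5 ,
\]

so a 14N contaminant sees almost exactly the same bending and accelerating fields as the 16O beam it circulates with.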

The most powerful beam from an electromagnetic-dissociation process is 207Pb from single neutron emission, plotted in green. It has comparable intensity to 208Pb81+ but propagates through the LHC arc to the collimation system at Point 3.

Similar electromagnetic-dissociation processes occur elsewhere, notably in beam interactions with the LHC collimation system. The recent ALICE paper, together with earlier ones on neutron emissions in UPCs, helps to test our understanding of the nuclear interactions that are an essential ingredient of complex beam-physics simulations. These are used to understand and control beam losses that might otherwise provoke frequent magnet quenches or beam dumps. At the LHC, a deep symbiosis has emerged between the fundamental nuclear physics studied by the experiments and the accelerator physics limiting its performance as a heavy-ion collider – or even as a light-ion collider (see “Light-ion collider” panel).

The figure also shows beams of the three heaviest gold isotopes, plotted, fittingly, in gold. 204Au has an impact point in a dipole magnet but is far too weak to quench it. 203Au follows almost the same trajectory as the BFPP beam. 202Au propagates through the arc to Point 3. The extremely weak flux of 197Au, the only stable isotope of gold, is also shown.

Worth its weight in gold

Prospecting for gold at the LHC looks even more futile when we consider that the gold nuclei emerge from the collision point with very high energies. They hit the LHC beam pipe or collimators at various points downstream where they immediately fragment in hadronic showers of single protons, neutrons and other particles. The gold exists for tens of milliseconds at most.

And finally, the isotopically pure lead used in CERN’s ion source costs more by weight than gold, so realising the alchemists’ dream at the LHC was a poor business plan from the outset.

The moral of this story, perhaps, is that among modern-day natural philosophers, LHC physicists take issue with the designation of lead as a “base” metal. We find, on the contrary, that 208Pb, the heaviest stable isotope among all the elements, is worth far more than its weight in gold for the riches of the physics discoveries that it has led us to.

The physicist who fought war and cancer

The courage of his convictions

Joseph Rotblat’s childhood was blighted by the destruction visited on Warsaw, first by the Tsarist Army, followed by the Central Powers and completed by the Red Army from 1918 to 1920. His father’s successful paper-importing business went bankrupt in 1914, and the family became destitute. After a short course in electrical engineering, Joseph and a teenaged friend became jobbing electricians. A committed autodidact, Rotblat found his way into the Free University, where he studied physics under Ludwik Wertenstein. Wertenstein had worked with Marie Skłodowska-Curie in Paris and was the chief of the Radiological Institute in Warsaw as well as teaching at the Free University. He was the first to recognise Rotblat’s brilliance and retained him as a researcher at the Institute. Rotblat’s main research was neutron-induced artificial radioactivity: he was among the first to produce cobalt-60, which became a standard source in radiotherapy machines before reliable linear accelerators were available.

Chadwick described Rotblat as “very intelligent and very quick”

By the late 1930s, Rotblat had published more than a dozen papers, some in English journals after translation by Wertenstein; the name Rotblat was becoming known in neutron physics. The professor regarded him as the likely next head of the Radiological Institute and thought he should prepare by working outside Poland. Rotblat wanted to gain experience of the cyclotron and, although he could have joined the Joliot–Curie group in Paris, elected to go to Liverpool, where James Chadwick was overseeing a machine expected to produce a proton beam within months. He arrived in Liverpool in April 1939 and was shocked by the city’s filth. He also found the Scouse dialect of its citizens incomprehensible. Despite the trying circumstances, Rotblat soon impressed Chadwick with his experimental skill and was rewarded with a prestigious fellowship. Chadwick wrote to Wertenstein in June describing Rotblat as “very intelligent and very quick”.

Brimming with enthusiasm

Chadwick had formed a long-distance friendship with Ernest Lawrence, the cyclotron’s inventor, who kept him apprised of developments in Berkeley. At the time of Rotblat’s arrival, Lawrence was brimming with enthusiasm about the potential of neutrons and radioactive isotopes from cyclotrons for medical research, especially in cancer treatment. Chadwick hired Bernard Kinsey, a Cambridge graduate who had spent three years with Lawrence, to take charge of the Liverpool cyclotron, and Kinsey befriended Rotblat. Liverpool had limited funding: Chadwick complained to Lawrence that the money “this laboratory has been running on in the past few years – is less than some men spend on tobacco.” Chadwick served on a Cancer Commission in Liverpool under the leadership of Lord Derby, which planned to bring cancer research to the Liverpool Radium Institute using products from the cyclotron.

James Chadwick

The modest stipend from the Oliver Lodge fellowship enabled Rotblat to return to Warsaw in August 1939 to collect his wife, Tola, and bring her to England. She was recovering from acute appendicitis, and her doctors persuaded Joseph that she was not fit to travel. So he returned alone on the last train allowed to pass through Berlin before the Germans attacked Poland once more. Tola wrote her last letter to Joseph in December 1939. While he was in Warsaw, Rotblat confided in Wertenstein his belief that a uranium fission bomb was feasible using fast neutrons, and he repeated this argument to Chadwick when he returned to Liverpool. Chadwick eventually became the leader of the British contingent on the Manhattan Project and arranged for Rotblat to come to Los Alamos in 1944 while remaining a Polish citizen. Rotblat worked in Robert Wilson’s cyclotron group and survived a significant radiation accident, receiving an estimated dose of 1.5 J/kg (1.5 Gy) to his upper torso and head. The circumstances of his leaving the project in December 1944 were far more complicated than the moralistic account he wrote in The Bulletin of the Atomic Scientists 40 years later, but no less noble.

Tragedy and triumph

As Chadwick wrote to Rotblat in London, he saw “very obvious advantages” for the future of nuclear physics in Britain from Rotblat’s return to Liverpool. For one thing, “Rotblat has a wider experience on the cyclotron than anyone now in England,” and he also possessed “a mass of information on the equipment used in Project Y [Los Alamos] and Chicago.” Chadwick had two major roles in mind for Rotblat. The first was to revitalise the depleted Liverpool department and stimulate cyclotron research in England; the second was to collate the detailed data on nuclear physics brought by British scientists returning from the Manhattan Project. In 1945, Rotblat discovered that six members of his family had miraculously survived the war in Poland; tragically, Tola had not. His despair deepened after the news of the atomic bombs being used against Japan: he knew about the possibility of a hydrogen bomb, and remembered conversations with Niels Bohr in Los Alamos about the risks of a nuclear arms race. He made two resolutions: to campaign against nuclear weapons, and to leave academic nuclear physics and become a medical physicist, using his scientific knowledge for the direct benefit of people.

Joseph Rotblat
Robert Wilson

When Chadwick returned to Liverpool from the US, he found the department in a much better state than he expected. The credit for this belonged largely to Rotblat’s leadership; Chadwick wrote to Lawrence praising his outstanding ability, combined with a truly remarkable concern for the staff and students. Chadwick and Rotblat then agreed to build a synchrocyclotron in Liverpool. Rotblat selected the abandoned crypt of an unbuilt Catholic cathedral as the best site, since the local topography would provide some radiation protection. The post-war shortages, especially of steel, made this an extremely ambitious project. Rotblat presented a successful application to the Department of Scientific and Industrial Research for the largest university grant it had awarded, and despite design and construction problems resulting in spiralling costs, the machine was in active research use from 1954 to 1968.

With the encouragement of physicians at Liverpool Royal Infirmary, Rotblat started to dabble in nuclear medicine to image thyroid glands and treat haematological disorders. In 1949 he saw an advert for the chair in physics at the Medical College of St. Bartholomew’s Hospital (Bart’s) in London and applied. While Rotblat was easily the most accomplished candidate, there was a long delay in his appointment on spurious grounds: that he was over-qualified to teach physics to medical students, that he was likely to be a heavy consumer of research funds, and plain xenophobia. Bart’s was a closed, reactionary institution. There was a clear division between the Medical College, with its links to London University, and the hospital, where the post-war teaching was suboptimal as it struggled to recover from the war and adjusted reluctantly to the new National Health Service (NHS). The Medical College, in Charterhouse Square, had been severely bombed in the Blitz and its physics department completely destroyed. Rotblat attempted to thwart his main opponent, the dean (described as “secretive and manipulative” in one history), by visiting the hospital and meeting senior clinicians and governors. There was also a determined effort, orchestrated by Chadwick, to retain him in the ranks of nuclear physicists.

When I interviewed Rotblat in 1994, he told me that Chadwick’s final tactic was to tell him that he was close to being elected as a fellow of the Royal Society, but if he took the position at Bart’s, it would never happen. Rotblat poignantly observed: “He was right.” I mentioned this to Lorna Arnold, the nuclear historian, who thought it was a shame. She said she would take it up with her friend Rudolf Peierls. Despite being in poor health, Peierls vowed to correct this omission, and the next year the Royal Society elected Rotblat a fellow at the age of 86.

Full-time medical physicist

Rotblat’s first task at Bart’s, when he finally arrived in 1950, was to prepare a five-year departmental plan: a task he was well qualified for after his experience with the synchrocyclotron in Liverpool. With wealthy, centuries-old hospitals such as Bart’s allowed to keep their endowments after the advent of the NHS, he also became an active committee member for the new Research Endowment Fund that provided internal grants and hired research assistants. The physics department soon collaborated with the biochemistry, pharmacology and physiology departments that required radioisotopes for research. He persuaded the Medical College to buy a 15 MV linear accelerator from Mullard, an English electronics company; the machine never worked for long without problems.

Rotblat resolved to campaign against nuclear weapons and use his scientific knowledge for the direct benefit of people

During his first two years, in addition to the radioisotope work, he studied the passage of electrons through biological tissue and the energy dissipation of neutrons in tissue – the 1950s were a golden age for radiobiology in England, and Rotblat forged close relationships with Hal Gray and his group at the Hammersmith Hospital. In the mid-1950s, he was approached by Patricia Lindop, a newly qualified Bart’s physician who had also obtained a first-class degree in physiology. Lindop had a five-year grant from the Nuffield Foundation to study ageing and, after discussions with Rotblat, it was soon arranged that she would study the acute and long-term effects of radiation in mice of different ages. This was a massive, prospective study that would eventually involve six research assistants and a colony of 30,000 mice. Rotblat acted as the supervisor for her PhD, and they published multiple papers together. In terms of acute death (within 30 days of a high, whole-body dose), she found that mice that were one day old at exposure could tolerate the highest doses, whereas four-week-old mice were the most vulnerable. The interpretation of long-term effects was much less clear-cut and provoked major disagreements within the radiobiology community. In a 1994 letter, Rotblat mused on the number of Manhattan Project scientists still alive: “According to my own studies on the effects of radiation on lifespan, I should have been dead a long time, having received a sub-lethal dose in Los Alamos. But here I am, advocating the closure of Los Alamos, Livermore and Sandia, instead of promoting them as health resorts!”

Patricia Lindop

In 1954, the US Bravo test obliterated part of Bikini Atoll and showered a Japanese fishing boat (Lucky Dragon No. 5) that was outside the exclusion zone with radioactive dust. American scientists realised that the weapon had massively exceeded its designed yield, and there was an unconvincing attempt to allay public fear. Rotblat was invited onto the BBC’s flagship current-affairs programme, Panorama, to explain to the public the difference between the original fission bombs and the H-bomb. His lucid delivery impressed Bertrand Russell, a mathematical philosopher and a leading pacifist in World War I, who also spoke on Panorama. The two became close friends. When Rotblat went to a radiobiology conference a few months later, he met a Japanese scientist who had analysed the dust recovered from Lucky Dragon No. 5. The dust comprised about 60% rare-earth isotopes, leading Rotblat to believe that most of the explosive energy was due to fission, not fusion. He wrote his own report – not based on any inside knowledge, and despite official opposition – concluding that this was a fission–fusion–fission bomb and that his TV presentation had underestimated its power by orders of magnitude. Rotblat’s report became public just as the British Cabinet decided in secret to develop thermonuclear weapons. The government was concerned that the Americans would view this as another breach of security by an ex-Manhattan Project physicist. Rotblat’s reputation as a man of the political left grew within the conservative institution of Bart’s.

Russell gave a radio address at the end of 1954 on the global existential threat posed by thermonuclear weapons, urging the public to “remember your humanity and forget the rest”. Six months later, Russell announced the Russell–Einstein Manifesto, with Rotblat as one of the signatories and the man Russell relied upon to answer questions from the press. The first Pugwash conference followed in 1957, with Rotblat as a prominent contributor. His active involvement, closely supported by Lindop, would last for the rest of his life, as he encouraged communication across the East–West divide and pushed for international arms-control agreements. Much of this work took place in his office at Bart’s. Rotblat and the Pugwash Conferences shared the 1995 Nobel Peace Prize.

JUNO takes aim at neutrino-mass hierarchy

Compared to the quark sector, the lepton sector is the Wild West of the weak interaction, with large mixing angles and large uncertainties. To tame this wildness, neutrino physicists are set to bring a new generation of detectors online in the next five years, each roughly an order of magnitude larger than its predecessor. The first of these to become operational is the Jiangmen Underground Neutrino Observatory (JUNO) in Guangdong Province, China, which began data taking on 26 August. The new 20 kton liquid-scintillator detector will seek to resolve one of the major open questions in particle physics: whether the third neutrino-mass eigenstate (ν3) is heavier or lighter than the second (ν2).

“Building JUNO has been a journey of extraordinary challenges,” says JUNO chief engineer Ma Xiaoyan. “It demanded not only new ideas and technologies, but also years of careful planning, testing and perseverance. Meeting the stringent requirements of purity, stability and safety called for the dedication of hundreds of engineers and technicians. Their teamwork and integrity turned a bold design into a functioning detector, ready now to open a new window on the world of neutrinos.”

Main goals

Neutrinos interact only via the parity-violating weak interaction, providing direct evidence only for left-handed neutrinos. As a result, right-handed neutrinos are not part of the Standard Model (SM) of particle physics. As the SM explains fermion masses by a coupling of the Higgs field to a left-handed fermion and its right-handed counterpart of the same flavour, neutrinos are predicted to be massless – a prediction still consistent with every direct attempt to measure a neutrino mass. Yet decades of observations of the flavour oscillations of solar, atmospheric, reactor, accelerator and astrophysical neutrinos have provided incontrovertible indirect evidence that neutrinos must have tiny masses, below the sensitivity of current instruments. Observations of quantum interference between flavour eigenstates – the electron, muon and tau neutrinos – indicate that there must be a small mass splitting between ν1 and the slightly more massive ν2, and a larger mass splitting to ν3. But it is not yet known whether the mass eigenvalues follow a so-called normal hierarchy, m1 < m2 < m3, or an inverted hierarchy, m3 < m1 < m2. Resolving this question is the main physics goal of the JUNO experiment.
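In terms of formulas, the quantity JUNO measures is the survival probability of reactor antineutrinos over its 53 km baseline. In the standard three-flavour framework – a textbook expression, not one quoted from the JUNO collaboration – it reads

\[
P_{\bar{\nu}_e \to \bar{\nu}_e} = 1
- \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right)
- \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21},
\qquad \Delta_{ij} \equiv \frac{\Delta m^2_{ij} L}{4E},
\]

where L is the baseline and E the antineutrino energy. The mass ordering enters through the interplay of the Δ31 and Δ32 terms, producing a subtle shift in the fast oscillation pattern that JUNO’s energy resolution is designed to resolve.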

JUNO’s determination of the mass ordering is largely free of parameter degeneracies

“Unlike other approaches, JUNO’s determination of the mass ordering does not rely on the scattering of neutrinos with atomic electrons in the Earth’s crust or the value of the leptonic CP phase, and hence is largely free of parameter degeneracies,” explains JUNO spokesperson Wang Yifang. “JUNO will also deliver order‑of‑magnitude improvements in the precision of several neutrino‑oscillation parameters and enable cutting‑edge studies of neutrinos from the Sun, supernovae, the atmosphere and the Earth. It will also open new windows to explore unknown physics, including searches for sterile neutrinos and proton decay.”

Additional eye

Located 700 m underground near Jiangmen city, JUNO detects antineutrinos produced 53 km away by the Taishan and Yangjiang nuclear power plants. At the heart of the experiment is a liquid-scintillator detector inside a 44 m-deep water pool. A stainless-steel truss supports an acrylic sphere housing the liquid scintillator, as well as 20,000 20-inch photomultiplier tubes (PMTs), 25,600 three-inch PMTs, front-end electronics, cabling and magnetic-compensation coils. All the PMTs operate simultaneously to capture scintillation light from neutrino interactions and convert it to electrical signals.

To resolve the extremely fine oscillation pattern that encodes the neutrino-mass hierarchy, the experiment must achieve an energy resolution of almost 50 keV for a typical 3 MeV reactor antineutrino. To attain this, JUNO had to push performance margins in several areas relative to the KamLAND experiment in Japan, previously the world’s largest liquid-scintillator detector.

“JUNO is a factor 20 larger than KamLAND, yet our required energy resolution is a factor two better,” explains Wang. “To achieve this, we have covered the full detector with PMTs with only 3 mm clearance and twice the photo-detection efficiency. By optimising the recipe of the liquid scintillator, we were able to improve its attenuation length by a factor of two to over 20 m, and increase its light yield by 50%.”
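These numbers are consistent with the photostatistics-driven scaling usually quoted for JUNO. In the sketch below, the 3%/√E(MeV) figure is an assumption taken from the widely cited design goal, not from this article.

    E = 3.0               # visible energy of a typical reactor event, in MeV
    a = 0.03              # assumed fractional resolution at 1 MeV (design goal)
    sigma_E = a * E**0.5  # sigma_E/E = a/sqrt(E), so sigma_E = a*sqrt(E), in MeV
    print(f"sigma(E) at {E:.0f} MeV ~ {sigma_E*1e3:.0f} keV")   # ~52 keV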

Go with the flow

Proposed in 2008 and approved in 2013, JUNO began underground construction in 2015. Detector installation started in December 2021 and was completed in December 2024, followed by a phased filling campaign. Within 45 days, the team filled the detector with 60 ktons of ultra-pure water, keeping the liquid-level difference between the inside and outside of the acrylic sphere within centimetres and maintaining a flow-rate uncertainty below 0.5% to safeguard structural integrity.

Over the next six months, 20 ktons of liquid scintillator progressively filled the 35.4 m diameter acrylic sphere while displacing the water. Stringent requirements on scintillator purity, optical transparency and extremely low radioactivity had to be maintained throughout. In parallel, the collaboration conducted detector debugging, commissioning and optimisation, enabling a seamless transition to full operations at the completion of filling.

JUNO is designed for a scientific lifetime of up to 30 years, with a possible upgrade path allowing a search for neutrinoless double-beta decay, says the team. Such an upgrade would probe the absolute neutrino-mass scale and test whether neutrinos are Dirac fermions, like the other fermions of the SM, or Majorana fermions without distinct antiparticles, as favoured by several attempts to address fundamental questions spanning particle physics and cosmology.

First oxygen and neon collisions at the LHC

In the first microseconds after the Big Bang, extreme temperatures prevented quarks and gluons from binding into hadrons, filling the universe with a deconfined quark–gluon plasma. Heavy-ion collisions between pairs of gold (197Au79+) or lead (208Pb82+) nuclei have long been observed to produce fleeting droplets of this medium, but light-ion collisions remain relatively unexplored. Between 29 June and 9 July 2025, LHC physicists pushed the study of the quark–gluon plasma into new territory, with the first dedicated studies of collisions between pairs of oxygen (16O8+) and neon (20Ne10+) nuclei, and between oxygen nuclei and protons.

“Early analyses have already helped characterise the geometry of oxygen and neon nuclei, including the latter’s predicted prolate ‘bowling-pin’ shape,” says Anthony Timmins of the University of Houston. “More importantly, they appear consistent with the onset of the quark-gluon plasma in light–ion collisions.”

As the quark–gluon plasma appears to behave like a near-perfect fluid with low viscosity, the key to modelling heavy-ion collisions is hydrodynamics – the physics of how fluids evolve under pressure gradients, viscous stresses and other forces. When two lead nuclei collide at the LHC, they create a tiny, extremely hot fireball in which quarks and gluons interact so frequently that they reach local thermal equilibrium within about 10⁻²³ s. Measurements of gold–gold collisions at Brookhaven’s RHIC and lead–lead collisions at the LHC suggest that the quark–gluon plasma flows with an extraordinarily low viscosity, close to the quantum limit, allowing momentum to move rapidly across the system. But it is not clear whether the same rules apply to the smaller nuclear systems involved in light-ion collisions.

“For hydrodynamics to work, along with the appropriate quark-gluon plasma equation of state, you need a separation of scales between the mean free path of quarks and gluons, the pressure gradients and overall system size,” explains Timmins. “As you move to smaller systems, those scales start to overlap. Oxygen and neon are expected to sit near that threshold, close to the limits of plasma formation.”

Across the oxygen–oxygen and neon–neon datasets, the ALICE, ATLAS and CMS collaborations decomposed the azimuthal distribution of emitted particles in the transverse plane into Fourier modes – a way to search for collective, fluid-like behaviour (see the expansion below). Measurements of the “elliptic” and “triangular” Fourier components as functions of event multiplicity support the emergence of a collective flow driven by the initial collision geometry. The collaborations also observe signs of energetic-probe suppression in oxygen–oxygen collisions – a signature of the droplet “quenching” jets in a way not observed in proton–proton collisions. Similar features appeared in a one-day xenon–xenon run that took place in October 2017.
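Concretely, the decomposition is the standard Fourier expansion of the azimuthal particle yield – the textbook form, not an expression quoted from the papers:

\[
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big(n(\varphi - \Psi_n)\big),
\]

where v2 and v3 are the elliptic and triangular flow coefficients and Ψn is the corresponding symmetry-plane angle.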

These initial results are just a smattering of those to come

CMS compared particle yields in light-ion collisions to a proton–proton reference. After scaling for the number of binary nucleon–nucleon interactions, the collaboration observed a maximum suppression of 0.69 ± 0.04 at a transverse momentum of about 6 GeV, more than five standard deviations from unity. While milder than that observed for lead–lead and xenon–xenon collisions, the data point to genuine medium-induced suppression in the smallest ion–ion system studied to date. Meanwhile, ATLAS reported the first dijet transverse-momentum imbalance in a light-ion system. The reduction in balanced jets is consistent with path-length-dependent energy-loss effects, though apparently weaker than in lead–lead collisions.
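The suppression quoted here is conventionally expressed through the nuclear modification factor, with ⟨Ncoll⟩ the average number of binary nucleon–nucleon collisions – the standard definition, given for orientation:

\[
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,\frac{dN_{AA}/dp_T}{dN_{pp}/dp_T},
\]

so that RAA = 1 corresponds to no medium effects and the measured 0.69 to a 31% suppression of high-momentum hadrons.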

In “head-on” collisions, ALICE, ATLAS and CMS all observed a neon–oxygen–lead hierarchy in elliptic flow, suggesting that, if a quark–gluon plasma does form, it exhibits the most pronounced “almond shape” in neon collisions. This pattern reflects the expected nuclear geometries of each species. Lead-208 is a doubly magic nucleus, with complete proton and neutron shells that render it tightly bound and nearly spherical in its ground state. By contrast, neon is predicted to be prolate, its inherent elongation producing a larger elliptic overlap. Oxygen falls in between, consistent with models describing it as roughly spherical or weakly clustered.

ALICE and ATLAS reported a hierarchy of flow coefficients in light-ion collisions, with elliptic, triangular and quadrangular flows progressively decreasing as their Fourier index rises, in line with hydrodynamic expectations. Like CMS’s charged hadron yields, ALICE’s preliminary neutral pion yields exhibit a suppression at large momenta.

In a previous fixed-target study, the LHCb collaboration also measured the elliptic and triangular components of the flow in lead–neon and lead–argon collisions, observing the distinctive shape of the neon nucleus. As for proton–oxygen collisions, LHCb’s forward-rapidity coverage can probe the partonic structure of nuclei at very small values of Bjorken-x – the fraction of the nucleon’s momentum carried by a quark or gluon. Such measurements help constrain nuclear parton distribution functions in the low-x region dominated by gluons and provide rare benchmarks for modelling ultra-high-energy cosmic rays colliding with atmospheric oxygen.

These initial results are just a smattering of those to come. In a whirlwind 11-day campaign, physicists made full use of the brief but precious opportunity to investigate the formation of quark–gluon plasma in the uncharted territory of light ions. Accelerator physicists and experimentalists came together to tackle peculiar problems, such as the appearance of polluting species in the beams due to nuclear transmutation (see “Alchemy by pure light“). Despite the tight schedule, luminosity targets for proton–oxygen, oxygen–oxygen and neon–neon collisions were exceeded by large factors, thanks to high accelerator availability and the high injector intensity delivered by the LHC team.

“These early oxygen and neon studies show that indications of collective flow and parton-energy-loss-like suppression persist even in much smaller systems, while providing new sensitivity to nuclear geometry and valuable prospects for forward-physics studies,” concludes Timmins. “The next step is to pin down oxygen’s nuclear parton distribution function. That will be crucial for understanding the hadron-suppression patterns we see, with proton–oxygen and ultra-peripheral collisions being great ways to get there.”

Prepped for re-entry

Francesca Luoni

When Francesca Luoni logs on each morning at NASA’s Langley Research Center in Virginia, she’s thinking about something few of us ever consider: how to keep astronauts safe from the invisible hazards of space radiation. As a research scientist in the Space Radiation Group, Luoni creates models to understand how high-energy particles from the Sun and distant supernovae interact with spacecraft structures and the human body – work that will help future astronauts safely travel deeper into space.

But Luoni is not a civil servant for NASA. She is contracted through the multinational engineering firm Analytical Mechanics Associates, continuing a professional slingshot from pure research to engineering and back again. Her career is an intriguing example of how to balance research with industrial engagement – a holy grail for early-career researchers in the late 2020s.

Leveraging expertise

Luoni’s primary aim is to optimise NASA’s Space Radiation Cancer Risk Model, which maps out the cancer incidence and mortality risk for astronauts during deep-space missions, such as NASA’s planned mission to Mars. To make this work, Luoni’s team leverages the expertise of all kinds of scientists, from engineers, statisticians and physicists, to biochemists, epidemiologists and anatomists.

“I’m applying my background in radiation physics to estimate the cancer risk for astronauts,” she explains. “We model how cosmic rays pass through the structure of a spacecraft, how they interact with shielding materials, and ultimately, what reaches the astronauts and their tissues.”

Before arriving in Virginia early this year, Luoni had already built a formidable career in space-radiation physics. After a physics PhD in Germany, she joined the GSI Helmholtz Centre for Heavy Ion Research, where she spent long nights at particle accelerators testing new shielding materials for spacecraft. “We would run experiments after the medical facility closed for the day,” she says. “It was precious work because there are so few facilities worldwide where you can acquire experimental data on how matter responds to space-like radiation.”

Her experiments combined experimental measurement data with Monte Carlo simulations to compare model predictions with reality – skills she honed during her time in nuclear physics that she still uses daily at NASA. “Modelling is something you learn gradually, through university, postgrads and research,” says Luoni. “It’s really about understanding physics, maths, and how things come together.”

In 2021 she accepted a fellowship in radiation protection at CERN. The work was different from the research she’d done before. It was more engineering-oriented, ensuring the safety of both scientists and surrounding communities from the intense particle beams of the LHC and SPS. “It may sound surprising, but at CERN the radiation is far more energetic than we see in space. We studied soil and water activation, and shielding geometries, to protect everyone on site. It was much more about applied safety than pure research.”

Luoni’s path through academia and research was not linear, to say the least. Having collected data as an experimental physicist at GSI, then worked as an engineer helping physicists conduct their own experiments at CERN, Luoni is excited to be diving back into pure research, even if it wasn’t the field she originally intended.

Despite her industry-contractor title, Luoni’s day-to-day work at NASA is firmly research-driven. Most of her time is spent refining computational models of space-radiation-induced cancer risk. While the coding skills she honed at CERN apply to her role now, Luoni still experienced a steep learning curve when transitioning to NASA.

“I am learning biology and epidemiology, understanding how radiation damages human tissues, and also deepening my statistics knowledge,” she says. Her team codes primarily in Python and MATLAB, with legacy routines in Fortran. “You have to be patient with Fortran,” she remarks. “It’s like building with tiny bricks rather than big built-in functions.”

Luoni is quick to credit not just the technical skills but the personal resilience gained from moving between countries and disciplines. Born in Italy, she has worked in Germany, Switzerland and now the US. “Every move teaches you something unique,” she says. “But it’s emotionally demanding. You face bureaucracy, new languages, distance from family and friends. You need to be at peace with yourself, because there’s loneliness too.”

Bravery and curiosity

But in the end, she says, it’s worth the price. Above all, Luoni counsels bravery and curiosity. “Be willing to step out of your comfort zone,” she says. “It takes strength to move to a new country or field, but it’s worth it. I feel blessed to have experienced so many cultures and to work on something I love.”

While she encourages travel, especially at the PhD and postdoc stages of a researcher’s career, Luoni advises care when presenting your experience in applications. Internships and shorter placements are welcome, but employers want to see that you have stayed somewhere long enough to really absorb and build on that organisation’s training.

“Moving around builds a unique skill set,” she says. “Like it or not, big names on your CV matter – GSI, CERN, NASA – people notice. But stay in each place long enough to really learn from your mentors, a year is the minimum. Take it one step at a time and say yes to every opportunity that comes your way.”

Luoni had been looking for a way into space research throughout her career, building up a diverse portfolio of skills across her various roles in academia and engineering. “Follow your heart and your passions,” she says. “Without that, even the smartest person can’t excel.”

The puzzle of an excess of bright early galaxies

Since the Big Bang, primordial density perturbations have continually merged and grown to form ever larger structures. This “hierarchical” model of galaxy formation has withstood observational scrutiny for more than four decades. However, understanding the emergence of the earliest galaxies in the first few hundred million years after the Big Bang has remained a key frontier in the field of astrophysics. This is also one of the key science aims of the James Webb Space Telescope (JWST), launched on Christmas Day in 2021.

Its large, cryogenically cooled mirror and infrared instruments let it capture the faint, redshifted ultraviolet light from the universe’s earliest stars and galaxies. Since its launch, the JWST has collected unprecedented samples of astrophysical sources within the first 500 million years after the Big Bang, utterly transforming our understanding of early galaxy formation.

Stellar observations

Tantalisingly, JWST’s observations hint at an excess of galaxies very bright in the ultraviolet (UV) within the first 400 million years, compared to expectations from structure formation within the standard Lambda Cold Dark Matter (ΛCDM) model. Given that UV photons are a key indicator of young star formation, these observations seem to imply that early galaxies in any given volume of space were unexpectedly efficient at forming stars in the infancy of the universe.

However, extraordinary claims demand extraordinary evidence. These puzzling observations have come under immense scrutiny, both to confirm that the sources lie at the inferred redshifts and to check that they do not simply probe over-dense regions that might preferentially host galaxies with high star-formation rates. It could still be the case that the apparent excess of bright galaxies is cosmic variance – a statistical fluctuation caused by the relatively small regions of the sky probed by the JWST so far.

Such observational caveats notwithstanding, theorists have developed a number of distinct “families” of explanations.

UV photons are readily attenuated by dust at low redshifts. If, however, these early galaxies had ejected all of their dust, one might be able to observe almost all of the intrinsic UV light they produced, making them brighter than expected based on lower-redshift benchmarks.

Bias may also arise from detecting only those sources powered by rapid bursts of star formation that briefly elevate galaxies to extreme luminosities.

Extraordinary claims demand extraordinary evidence

Several explanations focus on modifying the physics of star formation itself, for example regarding “stellar feedback” – the energy and momentum that newly formed stars inject back into their surrounding gas, which can heat, ionise or expel the gas and slow or shut down further star formation. Early galaxies might have high star-formation rates because stellar feedback was largely inefficient, allowing them to retain most of their gas for further star formation, or perhaps because a larger fraction of their gas was able to form stars in the first place.

While the relative number of low- and high-mass stars in a newly formed stellar population – the initial mass function (IMF) – has been mapped out in the local universe to some extent, its evolution with redshift remains an open question. Since the IMF crucially determines the total UV light produced per unit mass of star formed, a “top-heavy” IMF, with a larger fraction of massive stars compared to that in the local universe, could explain the observations.

Alternatively, the striking ultraviolet light may not arise solely from ordinary young stars – it could instead be powered by accretion onto black holes, which JWST is finding in unexpected numbers.

Alternative cosmologies

Finally, a number of works also appeal to alternative cosmologies to enhance structure formation at such early epochs, invoking an evolving dark-energy equation of state, primordial magnetic fields or even primordial black holes.

A key caveat of these observations is that redshifts are often inferred purely from broadband fluxes in different filters – a technique known as photometry. Spectroscopic data are urgently required, not only to verify the sources’ exact distances but also to distinguish between different physical scenarios such as bursty star formation, an evolving IMF or contamination by active galactic nuclei, where supermassive black holes accrete gas. Upcoming deep observations with facilities such as the Atacama Large Millimeter/submillimeter Array (ALMA) and the Northern Extended Millimeter Array (NOEMA) will be crucial for constraining the dust content of these systems and thereby clarifying their intrinsic star-formation rates. Extremely large surveys with facilities such as Euclid, the Nancy Grace Roman Space Telescope and the Extremely Large Telescope will likewise be essential for surveying early galaxies over large volumes and sampling all possible density fields.
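For reference, the redshift being estimated is defined by the stretching of wavelengths (the standard definition):

\[
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{rest}}},
\]

photometry approximates this by locating broad spectral features within a handful of filters, whereas spectroscopy pins down the wavelengths of individual lines.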

Combining these datasets will be critical in shedding light on this unexpected puzzle unearthed by the JWST.
