
Biology at the Bragg peak

In 1896, mere months after Wilhelm Röntgen discovered X-rays, doctors were already exploring their ability to treat superficial tumours. Today, X-rays are generated by electron linacs rather than vacuum tubes, but the principle is the same, and radiotherapy is part of most cancer treatment programmes.

Charged hadrons offer distinct advantages. Though they are more challenging to manipulate in a clinical environment, protons and heavy ions deposit most of their energy just before they stop, at the so-called Bragg peak, allowing medical physicists to spare healthy tissue and target cancer cells precisely. Particle therapy has been an effective component of the most advanced cancer therapies for nearly 80 years, since it was proposed by Robert R Wilson in 1946.

With the incidence of cancer rising across the world, research into particle therapy is more valuable than ever to human wellbeing – and the science isn’t slowing down. Today, progress requires adapting accelerator physics to the demands of the burgeoning field of radiobiology. This is the scientific basis for developing and validating a whole new generation of treatment modalities, from FLASH therapy to combining particle therapy with immunotherapy.

Here are the top five facts accelerator physicists need to know about biology at the Bragg peak.

1. 100 keV/μm optimises damage to DNA

Repair shop

Almost every cell’s control centre is its nucleus, which houses DNA – your body’s genetic instruction manual. If a cell’s DNA becomes compromised, the cell can mutate and lose control of its basic functions, leading it to die or multiply uncontrollably. The latter results in cancer.

For more than a century, radiation doses have been effective in halting the uncontrollable growth of cancerous cells. Today, the key insight from radiobiology is that for the same radiation dose, biological effects such as cell death, genetic instability and tissue toxicity differ significantly based on both beam parameters and the tissue being targeted.

Biologists have discovered that a “linear energy transfer” (LET) of roughly 100 keV/μm produces the most significant biological effect. At this density of ionisation, the distance between energy deposition events is roughly equal to the diameter of the DNA double helix, creating complex, repair-resistant DNA lesions that strongly reduce cell survival. Beyond 100 keV/μm, additional energy is wasted: cells along the track are already lethally damaged, so the extra ionisation brings no further biological effect, a phenomenon known as “overkill”.

DNA is the main target of radiotherapy because it holds the genetic information essential for the cell’s survival and proliferation. Made up of a double helix that looks like a twisted ladder, DNA consists of two strands of nucleotides held together by hydrogen bonds. The sequence of these nucleotides forms the cell’s unique genetic code. A poorly repaired lesion on this ladder leaves a permanent mark on the genome.

When radiation induces a double-strand break, repair is primarily attempted through two pathways: non-homologous end joining, which rejoins the broken ends of the DNA, and homologous recombination, which replaces the damaged section using an identical copy of healthy DNA (see “Repair shop” image). The efficiency of these repairs decreases dramatically when the breaks occur in close spatial proximity or are chemically complex. Such scenarios frequently result in lethal mis-repair events or severe alterations in the genetic code, ultimately compromising cell survival.

This fundamental aspect of radiobiology strongly motivates the use of particle therapy over conventional radiotherapy. Whereas X-rays deliver less than 10 keV/μm, creating sparse ionisation events, protons deposit tens of keV/μm near the Bragg peak, and heavy ions 100 keV/μm or more.
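
To make these numbers concrete, the short Python sketch below estimates how much energy a track deposits while crossing the width of the double helix at different LET values. The helix diameter (~2 nm) and the ~30 eV needed per ionisation in water are standard textbook figures rather than numbers from this article, and the LET values are illustrative orders of magnitude.

```python
# Energy deposited while a track crosses the ~2 nm diameter of the DNA
# double helix, for representative LET values. Illustrative only: the
# helix diameter, the ~30 eV-per-ionisation figure for water and the
# LET values are textbook-scale assumptions, not numbers from the text.

DNA_DIAMETER_NM = 2.0      # diameter of the double helix, ~2 nm
EV_PER_IONISATION = 30.0   # mean energy per ionisation in water

beams = {
    "X-rays (secondary electrons)": 2.0,       # keV/um: sparse ionisation
    "protons near the Bragg peak": 30.0,       # keV/um
    "carbon ions near the Bragg peak": 150.0,  # keV/um
}

for beam, let_kev_per_um in beams.items():
    # 1 keV/um equals 1 eV/nm, so the energy deposited across the helix
    # is simply LET multiplied by the diameter in nanometres.
    energy_ev = let_kev_per_um * DNA_DIAMETER_NM
    ionisations = energy_ev / EV_PER_IONISATION
    print(f"{beam}: ~{energy_ev:.0f} eV across the helix "
          f"(~{ionisations:.1f} ionisations)")
```

At X-ray-like LET a track rarely ionises even once within a single helix width, while a carbon ion near its Bragg peak can produce around ten ionisations there, which is why its lesions are so much harder to repair.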

2. Mitochondria and membranes matter too

For decades, radiobiology revolved around studying damage to DNA in cell nuclei. However, mounting evidence reveals that an important aspect of cellular dysfunction can be inflicted by damage to other components of cells, such as the cell membrane and the collection of “organelles” inside it. And the nucleus is not the only organelle containing DNA.

Self-destruct

Mitochondria generate energy and serve as the body’s cellular executioners. If a mitochondrion recognises that its cell’s DNA has been damaged, it may permeabilise its own outer membrane, releasing proteins that dismantle the cell from within. The cell breaks apart, its fragments carried away by immune cells. This is one mechanism behind “programmed cell death” – a controlled form of death, where the cell essentially presses its own self-destruct button (see “Self-destruct” image).

Irradiated mitochondrial DNA can suffer from strand breaks, base-pair mismatches and deletions in the code. In space-radiation studies, damage to mitochondrial DNA is a serious health concern as it can lead to mutations, premature ageing and even the creation of tumours. But programmed cell death can prevent a cancer cell from multiplying into a tumour. By disrupting the mitochondria of tumour cells, particle irradiation can compromise their energy metabolism and amplify cell death, increasing the permeability of the cell membrane and encouraging the tumour cell to self-destruct. Though a less common occurrence, membrane damage by irradiation can also directly lead to cell death.

3. Bystander cells exhibit their own radiation response

Communication

For many years, radiobiology was driven by a simple assumption: only cells directly hit by radiation would be damaged. This view started to change in the 1990s, when researchers noticed something unexpected: even cells that had not been irradiated showed signs of stress or injury when they were near the irradiated cells. This phenomenon, known as the bystander effect, revealed that irradiated cells can send biochemical signals to their neighbours, which may in turn respond as if they themselves had been exposed, potentially triggering an immune response (see “Communication” image).

“Non-targeted” effects propagate not only in space, but also in time, through the phenomenon of radiation-induced genomic instability. This temporal dimension is characterised by the delayed appearance of genomic alterations across multiple cell generations. Radiation damage propagates across cells and tissues, and over time, adding complexity beyond the simple dose–response paradigm.

Although the underlying mechanisms remain unclear, the clustered ionisation events produced by carbon ions generate complex DNA damage and cell death, while largely preserving nearby, unirradiated cells.

4. Radiation damage activates the immune system

Cancer cells multiply because the immune system fails to recognise them as a threat (see “Immune response” image). Immunotherapy, a modern pharmaceutical technique, chemically tags cancer cells that the immune system has ignored, alerting it to the threat they pose. Radiotherapy instead seeks to activate the immune system by inflicting recognisable cellular damage, though long courses of photon radiation can also weaken overall immunity.

Immune response

This negative effect is often caused by the exposure of circulating blood and active blood-producing organs to radiation. Fortunately, particle therapy’s ability to tightly conform the dose to the target, subjecting surrounding tissues to a minimal dose, can significantly mitigate the loss of immune blood cells, better preserving systemic immunity. By inflicting complex, clustered DNA lesions, heavy ions also have the strongest potential to directly trigger programmed cell death, even in the most difficult-to-treat cancer cells, bypassing some of the molecular tricks that tumours use to survive and amplifying the immune response beyond conventional radiotherapy with X-rays. These high-LET lesions trigger the DNA damage–repair signalling that is strongly associated with immune activation.

These biological differences provide a strong rationale for the rapidly emerging research frontier of combining particle therapy with immunotherapy. Particle therapy’s key advantage is its ability to amplify immunogenic cell death, in which the dying cell’s surface changes, creating “danger tags” that recruit immune cells to kill it, recognise others like it, and kill those too. Its capacity to mitigate systemic immunosuppression makes particle therapy a theoretically superior partner for immunotherapy compared to conventional X-rays.

5. Ultra-high dose rates protect healthy tissues

In recent years, the attention of clinicians and researchers has focused on the “FLASH” effect – a groundbreaking concept in cancer treatment where radiation is delivered at an ultra-high dose rate in excess of 40 Gy/s (40 J/kg per second). FLASH radiotherapy appears to minimise damage to healthy tissues while maintaining at least the same level of tumour control as conventional methods. Inflammation in healthy tissues is reduced, and the number of immune cells entering the tumour is increased, helping the body fight cancer more effectively. This can significantly widen the therapeutic window: the range of radiation doses that can successfully treat a tumour while minimising toxicity to healthy tissues.
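
To see what “ultra-high” means in practice, the minimal sketch below compares delivery times for a single 10 Gy fraction. The conventional dose rate of 0.1 Gy/s and the 10 Gy fraction are illustrative assumptions; only the 40 Gy/s threshold is taken from the text.

```python
# Time needed to deliver a single 10 Gy fraction at a conventional dose
# rate versus the FLASH threshold quoted above. The conventional rate of
# 0.1 Gy/s and the fraction size are illustrative assumptions.

FRACTION_GY = 10.0  # a single-fraction dose, for illustration

for label, rate_gy_per_s in [("conventional", 0.1), ("FLASH", 40.0)]:
    seconds = FRACTION_GY / rate_gy_per_s
    print(f"{label}: {seconds:.2f} s to deliver {FRACTION_GY:.0f} Gy")

# conventional: 100.00 s; FLASH: 0.25 s. Compressing the dose into a
# fraction of a second is what appears to spare healthy tissue.
```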

Oxygen depletion

Though the radiobiological mechanisms behind this protective effect remain unclear, several hypotheses have been proposed. A leading theory focuses on oxygen depletion or “hypoxia”.

As tumours grow, they outpace the surrounding blood vessels’ ability to provide oxygen (see “Oxygen depletion” image). By condensing the dose in a very short time, it is thought that FLASH therapy may induce transient hypoxia within normal tissues too, reducing oxygen-dependent DNA damage there, while killing tumour cells at the same rate. Using a similar mechanism, FLASH therapy may also preserve mitochondrial integrity and energy production in normal tissues.

It is still under investigation whether a FLASH effect occurs with carbon ions, but combining the biological benefits of high-LET radiation with those of FLASH could be very promising.

The future of particle therapy

What excites you most about your research in 2025?

2025 has been a very exciting year. We just published a paper in Nature Physics about radioactive ion beams.

I also received an ERC Advanced Grant to study the FLASH effect with neon ions. We plan to go back to the 1970s, when Cornelius Tobias in Berkeley thought of using very heavy ions against radio-resistant tumours, but now using FLASH’s ultrahigh dose rates to reduce its toxicity to healthy tissues. Our group is also working on the simultaneous acceleration of different ions: carbon ions will stop in the tumour, but helium ions will cross the patient, providing an online monitor of the beam’s position during irradiation. The other big news in radiotherapy is vertical irradiation, where we don’t rotate the beam around the patient, but rotate the patient around the beam. This is particularly interesting for heavy-ion therapy, where building a rotating gantry that can irradiate the patient from multiple angles is almost as expensive as the whole accelerator. We are leading the Marie Curie UPLIFT training network on this topic.

Why are heavy ions so compelling?

Close to the Bragg peak, where very heavy ions are very densely ionising, the damage they cause is difficult to repair. You can kill the tumours much better than with protons. But carbon, oxygen and neon run the risk of inducing toxicity in healthy tissues. In Berkeley, more than 400 patients were treated with heavy ions. The results were not very good, and it was realised that these ions can be very toxic for normal tissue. The programme was stopped in 1992, and since then there has been no more heavy-ion therapy in the US, though carbon-ion therapy was established in Japan not long after. Today, most of the 130 particle-therapy centres worldwide use protons, but 17 centres across Asia and Europe offer carbon-ion therapy, with one now under construction at the Mayo Clinic in the US. Carbon is very convenient, because the plateau of the Bragg curve is similar to X-rays, while the peak is much more effective than protons. But still, there is evidence that it’s not heavy enough, that the charge is not high enough to get rid of very radio-resistant hypoxic tumours – tumours where you don’t have enough oxygenation. So that’s why we want to go heavier: neon. If we show that you can manage the toxicity using FLASH, then this is something that can be translated into the clinics.

There seems to be a lot of research into condensing the dose either in space, with microbeams, or in time, with the FLASH effect…

Absolutely.

Why does that spare healthy tissue at the expense of cancer cells?

That is a question I cannot answer. To be honest, nobody knows. We know that it works, but I want to make it very clear that we need more research to translate it completely to the clinic. It is true that if you either fractionate in space or compress in time, normal tissue is much more resistant, while the effect on the tumour is approximately the same, allowing you to increase the dose without harming the patient. The problem is that the data are still controversial.

So you would say that it is not yet scientifically established that the FLASH effect is real?

There is an overwhelming amount of evidence for the strong sparing of normal tissue at specific sites, especially for the skin and for the brain. But, for example, for gastrointestinal tumours the data are very controversial. Some data show no effect, some show a protective effect, and some show an increased effectiveness of FLASH. We cannot generalise.

Is it surprising that the effect depends on the tissue?

In medicine this is not so strange. The brain and the gut are completely different. In the gut, you have a lot of cells that are quickly duplicating, while in the brain, you almost have the same number of neurons that you had when you were a teenager – unfortunately, there is not much exchange in the brain.

So, your frontier at GSI is FLASH with neon ions. Would you argue that microbeams are equally promising?

Absolutely, yes, though millibeams more so than microbeams, because microbeams are extremely difficult to translate into the clinic. In the micron region, any kind of movement will jeopardise your spatial fractionation. But if you have millimetre spacing, then this becomes credible and feasible. You can create millibeams using a grid. Instead of having one solid beam, you have several stripes. If you use heavier ions, they don’t scatter very much and remain spatially fractionated. There is mounting evidence that fractionated irradiation of the tumour can elicit an immune response and that these immune cells eventually destroy the tumour. Research is still ongoing to understand whether it’s better to irradiate with a spatial fractionation of 1 millimetre or to irradiate only the centre of the tumour, allowing the immune cells to migrate and destroy the rest.

Radioactive-ion therapy

What’s the biology of the body’s immune response to a tumour?

To become a tumour, a cell has to fool the immune system, otherwise our immune system will destroy it. So, we are desperately trying to find a way to teach the immune system to say: “look, this is not a friend – you have to kill it, you have to destroy it.” This is immunotherapy, the subject of the Nobel Prize in medicine in 2018 and also related to the 2025 Nobel Prize in medicine on regulation of the immune system. But these drugs don’t work for every tumour. Radiotherapy is very useful in this sense, because you kill a lot of cells, and when the immune system sees a lot of dead cells, it activates. A combination of immunotherapy and radiotherapy is now being used more and more in clinical trials.

You also mentioned radioactive ion beams and the simultaneous acceleration of carbon and helium ions. Why are these approaches advantageous?

The two big problems with particle therapy are cost and range uncertainty. Having energy deposition concentrated at the Bragg peak is very nice, but if it’s not in the right position, it can do a lot of damage. Precision is therefore much more important in particle therapy than in conventional radiotherapy, as X-rays don’t have a Bragg peak – even if the patient moves a little bit, or if there is an anatomical change, it doesn’t matter. That’s why many centres prefer X-rays. To change that, we are trying to create ways to see the beam while we irradiate. Radioactive ions decay while they deposit energy in the tumour, allowing you to see the beam using PET. With carbon and helium, you don’t see the carbon beam, but you see the helium beam. These are both ways to visualise the beam during irradiation.

How significantly does radiation therapy improve human well-being in the world today?

When I started to work in radiation therapy at Berkeley, many people were telling me: “Why do you waste your time in radiation therapy? In 10 years everything will be solved.” At that time, the trend was gene therapy. Other trends have come and gone, and after 35 years in this field, radiation therapy is still a very important tool in a multidisciplinary strategy for killing tumours. More than 50% of cancer patients need radiotherapy, but, even in Europe, it is not available to all patients who need it.

Accelerator and detector physicists have to learn to speak the language of the non-specialist

What are the most promising initiatives to increase access to radiotherapy in low- and middle-income countries?

Simply making the accelerators cheaper. The GDP of most countries in Africa, South America and Asia is also steadily increasing, so you can expect that – let’s say – in 20 or 30 years from now, there will be a big demand for advanced medical technologies in these countries, because they will have the money to afford it.

Is there a global shortage of radiation physicists?

Yes, absolutely. This is true not only for particle therapy, which requires a high number of specialists to maintain the machine, but also for conventional X-ray radiotherapy with electron linacs. It’s also true for diagnostics because you need a lot of medical physicists for CT, PET and MRI.

What is your advice to high-energy physicists who have just completed a PhD or a postdoc, and want to enter medical physics?

The next step is a specialisation course. In about four years, you will become a specialised medical physicist and can start to work in the clinics. Many who take that path continue to do research alongside their clinical work, so you don’t have to give up your research career, just reorient it toward medical applications.

How does PTCOG exert leadership over global research and development?

The Particle Therapy Co-Operative Group (PTCOG) is a very interesting association. Every particle-therapy centre is represented in its steering committee. We have two big roles. One is research, so we really promote international research in particle therapy, even with grants. The second is education. For example, Spain currently has 11 proton therapy centres under construction. Each will need maybe 10 physicists. PTCOG is promoting education in particle therapy to train the next generation of radiation-therapy technicians and medical oncologists. It’s a global organisation, representing science worldwide, across national and continental branches.

Do you have a message for our community of accelerator physicists and detector physicists? How can they make their research more interdisciplinary and improve the applications?

Accelerator physicists especially, but also detector physicists, have to learn to speak the language of the non-specialist. Sometimes they are lost in translation. Also, they have to be careful not to oversell what they are doing, because you can create expectations that are not matched by reality. Tabletop laser-driven accelerators are a very interesting research topic, but don’t oversell them as something that can go into the clinics tomorrow, because then you create frustration and disappointment. There is a similar situation with linear accelerators for particle therapy. Since I started to work in this field, people have been saying “Why do we use circular accelerators? We should use linear accelerators.” After 35 years, not a single linear accelerator has been used in the clinics. There must also be a good connection with industry, because eventually clinics buy from industry, not academia.

Are there missed opportunities in the way that fundamental physicists attempt to apply their research and make it practically useful with industry and medicine?

In my opinion, it should work the other way around. Don’t say “this is what I am good at”; ask the clinical environment, “what do you need?” In particle therapy, we want accelerators that are cheaper and with a smaller footprint. So in whatever research you do, you have to prove to me that the footprint is smaller, and the cost lower.

Cave M

Do forums exist where medical doctors can tell researchers what they need?

PTCOG is definitely the right place for that. We keep medicine, physics and biology together, and it’s one of the meetings with the highest industry participation. All the industries in particle therapy come to PTCOG. So that’s exactly the right forum where people should talk. We expect 1500 people at the next meeting, which will take place in Deauville, France, from 8 to 13 June 2026, shortly after IPAC.

Are accelerator physicists welcome to engage in PTCOG even if they’ve not previously worked on medical applications?

Absolutely. This is something that we are missing. Accelerator physicists mostly go to IPAC but not to PTCOG. They should also come to PTCOG to speak more with medical physicists. I would say that PTCOG is 50% medical physics, 30% medicine and 20% biology. So, there are a lot of medical physicists, but we don’t have enough accelerator physicists and detector physicists. We need more particle and nuclear physicists to come to PTCOG to see what the clinical and biology community want, and whether they can provide something.

Do you have a message for policymakers and funding agencies about how they can help push forward research in radiotherapy?

Unfortunately, radiation therapy and even surgery are wrongly perceived as old technologies. There is not much investment in them, and that is a big problem for us. What we miss is good investment at the level of cooperative programmes that develop particle therapy in a collaborative fashion. At the moment, it’s becoming increasingly difficult. All the money goes into prevention and pharmaceuticals for immunotherapy and targeted therapy, and this is something that we are trying to reverse.

Are large accelerator laboratories well placed to host cooperative research projects?

Both GSI and CERN face the same challenge: their primary mission is nuclear and particle physics. Technology transfer is fine, but they may jeopardise their funding if they stray too far from their primary goal. I believe they should invest more in technology transfer, lobbying their funding agencies to demonstrate that their basic science translates into something that is useful for public health.

How does your research in particle therapy transfer to astronaut safety?

Particle therapy and space-radiation research have a lot in common. They use the same tools and there are also a lot of overlapping topics, for example radiosensitivity. One patient is more sensitive, one patient is more resistant, and we want to understand what the difference is. The same is true of astronauts – and radiation is probably the main health risk for long-term missions. Space is also a hostile environment in terms of microgravity and isolation, but here we understand the risks, and we have countermeasures. For space radiation, the problem is that we don’t understand the risk very well, because the type of radiation is so exotic. We don’t have that type of radiation on Earth, so we don’t know exactly how big the risk is. Plus, we don’t have effective countermeasures, because the radiation is so energetic that shielding will not be enough to protect the crews effectively. We need more research to reduce the uncertainty on the risk, and most of this research is done in ground-based accelerators, not in space.

Radiation therapy is probably the best interdisciplinary field that you can work in

I understand that you’re even looking into cryogenics…

Hibernation is considered science fiction, but it’s not science fiction at all – it’s something we can recreate in the lab. We call it synthetic torpor. This can be induced in animals that are non-hibernating. Bears and squirrels hibernate; humans and rats don’t, but we can induce it. And when you go into hibernation, you become more radioresistant, providing a possible countermeasure to radiation exposure, especially for long missions. You don’t need much food, you don’t age very much, metabolic processes are slowed down, and you are protected from radiation. That’s for space. This could also be applied to therapy. Imagine you have a patient with multiple metastases and no hope for treatment. If you can induce synthetic torpor, all the tumours will stop, because when you go into low temperature and hibernation, the tumours don’t grow. This is not the solution, because when you wake the patient up, the tumours will grow again, but what you can do is treat the tumours during hibernation, while healthy tissue is more radiation resistant. The number of research groups working on this is low, so we’re quite far from considering synthetic torpor for spaceflight or clinical trials for cancer treatment. First of all, we have to see how long we can keep an animal in synthetic torpor. Second, we should translate this into bigger animals like pigs or even non-human primates.

In the best-case scenario, what can particle therapy look like in 10 years’ time?

Ideally, we should probably at least double the number of particle-therapy centres that are now available, and expand into new regions. We finally have a particle-therapy centre in Argentina, which is the first one in South America. I would like to see many more in South America and in Africa. I would also like to see more centres that try to tackle tumours where there is no treatment option, like glioblastoma or pancreatic cancer, where the mortality is the same as the incidence. If we can find ways to treat such cancers with heavy ions and give hope to these patients, this would be really useful.

Is there a final thought that you’d like to leave with readers?

Radiation therapy is probably the best interdisciplinary field that you can work in. It’s useful for society and it’s intellectually stimulating. I really hope that big centres like CERN and GSI commit more and more to the societal benefits of basic research. We need it now more than ever. We are living in a difficult global situation, and we have to prove that when we invest money in basic research, this is very well invested money. I’m very happy to be a scientist, because in science, there are no barriers, there is no border. Science is really, truly international. I’m an advocate of saying scientific collaboration should never stop. It didn’t even stop during the Cold War. At that time, the cooperation between East and West at the scientist level helped to reduce the risk of nuclear weapons. We should continue this. We don’t have to think that what is happening in the world should stop international cooperation in science: it eventually brings peace.

Prepped for re-entry

Francesca Luoni

When Francesca Luoni logs on each morning at NASA’s Langley Research Center in Virginia, she’s thinking about something few of us ever consider: how to keep astronauts safe from the invisible hazards of space radiation. As a research scientist in the Space Radiation Group, Luoni creates models to understand how high-energy particles from the Sun and distant supernovae interact with spacecraft structures and the human body – work that will help future astronauts safely travel deeper into space.

But Luoni is not a civil servant for NASA. She is contracted through the multinational engineering firm Analytical Mechanics Associates, continuing a professional slingshot from pure research to engineering and back again. Her career is an intriguing example of how to balance research with industrial engagement – a holy grail for early-career researchers in the late 2020s.

Leveraging expertise

Luoni’s primary aim is to optimise NASA’s Space Radiation Cancer Risk Model, which maps out the cancer incidence and mortality risk for astronauts during deep-space missions, such as NASA’s planned mission to Mars. To make this work, Luoni’s team leverages the expertise of all kinds of scientists, from engineers, statisticians and physicists, to biochemists, epidemiologists and anatomists.

“I’m applying my background in radiation physics to estimate the cancer risk for astronauts,” she explains. “We model how cosmic rays pass through the structure of a spacecraft, how they interact with shielding materials, and ultimately, what reaches the astronauts and their tissues.”

Before arriving in Virginia early this year, Luoni had already built a formidable career in space-radiation physics. After a physics PhD in Germany, she joined the GSI Helmholtz Centre for Heavy Ion Research, where she spent long nights at particle accelerators testing new shielding materials for spacecraft. “We would run experiments after the medical facility closed for the day,” she says. “It was precious work because there are so few facilities worldwide where you can acquire experimental data on how matter responds to space-like radiation.”

Her experiments combined measurement data with Monte Carlo simulations to compare model predictions with reality – skills she honed during her time in nuclear physics and still uses daily at NASA. “Modelling is something you learn gradually, through university, postgrads and research,” says Luoni. “It’s really about understanding physics, maths, and how things come together.”

In 2021 she accepted a fellowship in radiation protection at CERN. The work was different from the research she’d done before. It was more engineering-oriented, ensuring the safety of both scientists and surrounding communities from the intense particle beams of the LHC and SPS. “It may sound surprising, but at CERN the radiation is far more energetic than we see in space. We studied soil and water activation, and shielding geometries, to protect everyone on site. It was much more about applied safety than pure research.”

Luoni’s path through academia and research was not linear, to say the least: from experimental physicist collecting data at GSI, to engineer at CERN helping physicists conduct their own experiments. Now she is excited to be diving back into pure research, even if it wasn’t her initially intended field.

Despite her industry–contractor title, Luoni’s day-to-day work at NASA is firmly research-driven. Most of her time is spent refining computational models of space-radiation-induced cancer risk. While the coding skills she honed at CERN apply to her role now, Luoni still experienced a steep learning curve when transitioning to NASA.

“I am learning biology and epidemiology, understanding how radiation damages human tissues, and also deepening my statistics knowledge,” she says. Her team codes primarily in Python and MATLAB, with legacy routines in Fortran. “You have to be patient with Fortran,” she remarks. “It’s like building with tiny bricks rather than big built-in functions.”

Luoni is quick to credit not just the technical skills but the personal resilience gained from moving between countries and disciplines. Born in Italy, she has worked in Germany, Switzerland and now the US. “Every move teaches you something unique,” she says. “But it’s emotionally demanding. You face bureaucracy, new languages, distance from family and friends. You need to be at peace with yourself, because there’s loneliness too.”

Bravery and curiosity

But in the end, she says, it’s worth the price. Above all, Luoni counsels bravery and curiosity. “Be willing to step out of your comfort zone,” she says. “It takes strength to move to a new country or field, but it’s worth it. I feel blessed to have experienced so many cultures and to work on something I love.”

While she encourages travel, especially at the PhD and postdoc stages of a researcher’s career, Luoni advises care when presenting your experience in applications. Internships and shorter placements are welcome, but employers want to see that you have stayed somewhere long enough to really understand and harness that organisation’s training.

“Moving around builds a unique skill set,” she says. “Like it or not, big names on your CV matter – GSI, CERN, NASA – people notice. But stay in each place long enough to really learn from your mentors; a year is the minimum. Take it one step at a time and say yes to every opportunity that comes your way.”

Luoni had been looking for a way into space research throughout her career, building a diverse portfolio of skills across her roles in academia and engineering. “Follow your heart and your passions,” she says. “Without that, even the smartest person can’t excel.”

Machine learning and the search for the unknown

CMS figure 1

In particle physics, searches for new phenomena have traditionally been guided by theory, focusing on specific signatures predicted by models beyond the Standard Model. Machine learning offers a different way forward. Instead of targeting known possibilities, it can scan the data broadly for unexpected patterns, without assumptions about what new physics might look like. CMS analysts are now using these techniques to conduct model-independent searches for short-lived particles that could escape conventional analyses.

Dynamic graph neural networks operate on graph-structured data, processing both the attributes of individual nodes and the relationships between them. One such model is ParticleNet, which represents the constituents of large-radius jets as graphs to identify N-prong hadronic decays of highly boosted particles, predicting their parent’s mass. The tool recently aided a CMS search for the single production of a heavy vector-like quark (VLQ) decaying into a top quark and a scalar boson, either the Higgs or a new scalar particle. Alongside ParticleNet, a custom deep neural network was trained to identify leptonic top-quark decays by distinguishing them from background processes over a wide range of momenta. With this approach, the analysis achieved sensitivity to VLQ production cross-sections as small as 0.15 fb. Emerging methods such as transformer networks could provide even more sensitivity in future searches (see figure 1).
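
To illustrate the graph representation such networks use, the sketch below builds a k-nearest-neighbour graph over toy jet constituents and applies one EdgeConv-style aggregation with untrained random weights. It is a minimal, numpy-only caricature of the idea, not the CMS implementation: the real ParticleNet stacks several learned EdgeConv blocks and dynamically rebuilds the graph in feature space between them.

```python
# Minimal sketch of the "dynamic graph" idea behind networks like
# ParticleNet: jet constituents are nodes, edges connect each particle
# to its k nearest neighbours, and features flow along the edges.
import numpy as np

rng = np.random.default_rng(0)
n_constituents, n_features, k = 30, 4, 5

# Toy jet: one row per particle, columns e.g. (eta, phi, log pT, log E).
x = rng.normal(size=(n_constituents, n_features))

def knn_indices(coords, k):
    """Indices of the k nearest neighbours of each node (excluding itself)."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(x, coords, k, weight):
    """One EdgeConv-style block: message = ReLU(W [x_i, x_j - x_i]), mean-pooled."""
    nbrs = knn_indices(coords, k)                       # (N, k)
    x_i = np.repeat(x[:, None, :], k, axis=1)           # central-node features
    x_j = x[nbrs]                                       # neighbour features
    edge_feats = np.concatenate([x_i, x_j - x_i], -1)   # (N, k, 2F)
    messages = np.maximum(edge_feats @ weight, 0.0)     # linear layer + ReLU
    return messages.mean(axis=1)                        # aggregate over edges

w = rng.normal(size=(2 * n_features, 8))          # untrained, illustrative weights
h = edge_conv(x, coords=x[:, :2], k=k, weight=w)  # graph built in the (eta, phi) plane
print(h.shape)  # (30, 8): a learned-style representation per constituent
```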

CMS figure 2

Another novel approach combined two distinct machine-learning tools in the search for a massive scalar X decaying into a Higgs boson and a second scalar Y. While ParticleNet identified Higgs-boson decays to two bottom quarks, potential Y signals were assigned an “anomaly score” by an autoencoder – a neural network trained to reproduce its input and highlight atypical features in the data. This technique provided sensitivity to a wide range of unexpected decays without relying on specific theoretical models. By combining targeted identification with model-independent anomaly detection, the analysis achieved both enhanced performance and broad applicability.
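
The principle of autoencoder-based anomaly scoring fits in a few lines. The sketch below assumes a linear autoencoder with squared-error loss, which is equivalent to PCA and can therefore be “trained” in closed form with an SVD: background-like events reconstruct well, while events off the learned manifold earn high scores. The CMS analysis of course uses a deep, nonlinear network on jet substructure.

```python
# Autoencoder-style anomaly scoring in miniature: compress events,
# reconstruct them, and use the reconstruction error as the score.
import numpy as np

rng = np.random.default_rng(1)
n_background, n_features, latent_dim = 5000, 10, 3

# Toy "background" events living near a 3-dimensional subspace.
basis = rng.normal(size=(latent_dim, n_features))
background = rng.normal(size=(n_background, latent_dim)) @ basis
background += 0.05 * rng.normal(size=background.shape)  # detector-like noise

# "Training": principal components of the background define encoder/decoder.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:latent_dim]  # (latent_dim, n_features)

def anomaly_score(events):
    """Mean squared reconstruction error per event."""
    centred = events - mean
    reconstructed = centred @ components.T @ components
    return ((centred - reconstructed) ** 2).mean(axis=1)

signal = rng.normal(size=(5, n_features))  # off-subspace "anomalies"
print(anomaly_score(background[:5]))  # small errors: looks like background
print(anomaly_score(signal))          # larger errors: flagged as atypical
```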

Searches at the TeV scale sit at the frontier where not only ever-larger data sets but also algorithmic innovation drives experimental discovery. Tools such as targeted deep neural networks, parametric neural networks (PNNs), which efficiently scan multi-dimensional mass landscapes (see figure 2), and model-independent anomaly detection are opening new ways to search for deviations from the Standard Model. Analyses of the full LHC Run 2 dataset have already revealed intriguing hints, with several machine-learning studies reporting local excesses – including a 3.6σ excess in a search for V′ → VV or VH → jets, and deviations up to 3.3σ in various X → HY searches. While no definitive signal has yet emerged, the steady evolution of neural-network techniques is already changing how new phenomena are sought, and anticipation is high for what they may reveal in the larger Run 3 dataset.
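
To illustrate the parametric trick behind PNNs, the toy forward pass below conditions a single classifier on the signal-mass hypothesis by appending it to the input features, so one network can be evaluated across the whole mass landscape. The architecture, weights and scaling are arbitrary illustrations, not the CMS implementation.

```python
# A parametric neural network in miniature: instead of training one
# classifier per signal mass, append the mass hypothesis to the inputs
# so a single network interpolates across the mass landscape.
import numpy as np

rng = np.random.default_rng(2)
n_events, n_features = 8, 6
events = rng.normal(size=(n_events, n_features))  # toy kinematic features

# Untrained two-layer network; the "+1" input slot holds the mass parameter.
w1 = rng.normal(size=(n_features + 1, 16))
w2 = rng.normal(size=(16, 1))

def pnn_score(x, mass_gev):
    """Classifier score for events under a given signal-mass hypothesis."""
    mass_col = np.full((len(x), 1), mass_gev / 1000.0)  # crude rescaling to TeV
    h = np.maximum(np.concatenate([x, mass_col], axis=1) @ w1, 0.0)  # ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))              # sigmoid output in (0, 1)

# One network, many hypotheses: scan the mass landscape at evaluation time.
for mass in (500.0, 1000.0, 2000.0):
    print(f"m = {mass:.0f} GeV:", pnn_score(events, mass).ravel()[:3])
```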

Standardising sustainability: step one

For a global challenge like environmental sustainability, the only remedy is international cooperation. In September, the Sustainability Working Group, part of the Laboratory Directors Group (LDG), took a step forward by publishing a report standardising the evaluation of the carbon impact of accelerator projects. The report challenges the community to align on a common methodology for assessing sustainability and to define a small number of figures of merit that future accelerator facilities must report.

“There’s never been this type of report before,” says Maxim Titov (CEA Saclay), who co-chairs the LDG Sustainability Working Group. “The LDG Working Group consisted of representatives with technical expertise in sustainability evaluation from large institutions including CERN, DESY, IRFU, INFN, NIKHEF and STFC, as well as experts from future collider projects who signed off on the numbers.”

The report argues that carbon assessment cannot be left to the end of a project. Instead, facilities must evaluate their lifecycle footprint starting from the early design phase, all the way through construction, operation and decommissioning. Studies already conducted on civil-engineering footprints of large accelerator projects outline a reduction potential of up to 50%, says Titov.

In terms of accelerator technology, the report highlights cooling, ventilation, cryogenics, the RF cavities that accelerate charged particles, and the klystrons that power them as the largest sources of inefficiency. The report places particular emphasis on klystrons, identifying three high-efficiency designs currently under development that could boost their energy efficiency from 60 to 90% (CERN Courier May/June 2025 p30).
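
A one-line calculation shows what that efficiency gain means at the wall plug; the 1 MW RF power figure is an arbitrary illustration, not a number from the report.

```python
# Grid power needed to produce a fixed RF power at the quoted klystron
# efficiencies. The 1 MW RF requirement is an illustrative assumption.

RF_POWER_MW = 1.0

for efficiency in (0.60, 0.90):
    grid_mw = RF_POWER_MW / efficiency
    print(f"efficiency {efficiency:.0%}: {grid_mw:.2f} MW drawn from the grid")

# 60% -> 1.67 MW, 90% -> 1.11 MW: the same RF power for a third less
# wall-plug power.
```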

Carbon assessment cannot be left to the end of a project

The report also addresses the growing footprint of computing and AI. Training algorithms on more efficient hardware and adapting trigger systems to reduce unnecessary computation are identified as ways to cut energy use without compromising scientific output.

“You need to perform a life-cycle assessment at every stage of the project in order to understand your footprint, not just to produce numbers, but to optimise the design and improve it in discussions with policymakers,” emphasises Titov. “Conducting sustainability assessments is a complex process, as the criteria have to be tailored to the maturity of each project and developed separately for scientists, policymakers and society.”

Established by the CERN Council, the LDG is an international coordination body that brings together directors and senior representatives of the world’s major accelerator laboratories. Since 2021, the LDG has overseen five expert panels: high-field magnets, RF structures, plasma and laser acceleration, muon colliders and energy-recovery linacs. The Sustainability Working Group was added in January 2024.

ICFA meets in Madison

Once a year, the International Committee for Future Accelerators (ICFA) assembles for an in-person meeting, typically attached to a major summer conference. The 99th edition took place on 24 August at the Wisconsin IceCube Particle Astrophysics Center in downtown Madison, one day before Lepton–Photon 2025.

While the ICFA is neither a decision-making body nor a representative of funding agencies, its mandate assigns the committee the important task of promoting international collaboration and coordination in all phases of the construction and exploitation of very-high-energy accelerators. This role is especially relevant in today’s context of strategic planning and upcoming decisions – with the ongoing European Strategy update, the Chinese decision process on CEPC in full swing, and new perspectives emerging on the US side with the recent National Academy of Sciences report (CERN Courier September/October 2025 p10).

Consequently, the ICFA heard presentations on these important topics and discussed priorities and timelines. In addition, the theme of “physics beyond colliders” – and with it, the question of maintaining scientific diversity in an era of potentially vast and costly flagship projects – featured prominently. In this context, the importance of national laboratories capable of carrying out mid-sized particle-physics experiments was underlined. This also featured in the usual ICFA regional reports.

An important part of the work of the committee is carried out by the ICFA panels – groups of experts in specific fields of high relevance. The ICFA heard reports from the various panel chairs at the Wisconsin meeting, with a focus on the Instrumentation, Innovation and Development panel, where Stefan Söldner-Rembold (Imperial College London) recently took over as chair, succeeding the late Ian Shipsey. Among other things, the panel organises several schools and training events, such as the EDIT schools, as well as prizes that increase recognition for senior and early-career researchers working in the field of instrumentation.

Maintaining scientific diversity in an era of potentially vast and costly flagship projects featured prominently

Another focus was the recent work of the Data Lifecycle panel chaired by Kati Lassila-Perini (University of Helsinki). This panel, together with numerous expert stakeholders in the field, recently published recommendations for best practices for data preservation and open science in HEP, advocating the application of the FAIR principles of findability, accessibility, interoperability and reusability at all levels of particle-physics research. The document provides guidance for researchers, experimental collaborations and organisations on implementing best-practice routines. It will now be distributed as broadly as possible and will hopefully contribute to the establishment of open and FAIR science practices.

Formally, the ICFA is a working group of the International Union for Pure and Applied Physics (IUPAP) and is linked to Commission C11, Particles and Fields. IUPAP has recently begun a “rejuvenation” effort that also involves rethinking the role of its working groups. Reflecting the continuity and importance of the ICFA’s work, Marcelo Gameiro Munhoz, chair of C11, presented a proposal to transform the ICFA into a standing committee under C11 – a new type of entity within IUPAP. This would allow ICFA to overcome its transient nature as a working group.

Finally, there were discussions on plans for a new set of ICFA seminars – triennial events in different world regions that assemble up to 250 leaders in the field. Following the 13th ICFA Seminar on Future Perspectives in High-Energy Physics, hosted by DESY in Hamburg in late 2023, the baton has now passed to Japan, which is finalising the location and date for the next edition, scheduled for late 2026.

CEPC matures, but approval is on hold

CEPC reference detector

In October, the Circular Electron–Positron Collider (CEPC) study group completed its full suite of technical design reports, marking a key step for China’s Higgs-factory proposal. However, CEPC will not be considered for inclusion in China’s next five-year plan (2026–2030).

“Although our proposal that CEPC be included in the next five-year plan was not successful, IHEP will continue this effort, which an international collaboration has developed for the past 10 years,” says study leader Wang Yifang, of the Institute of High Energy Physics (IHEP) in Beijing. “We plan to submit CEPC for consideration again in 2030, unless FCC is officially approved before then, in which case we will seek to join FCC, and give up CEPC.”

Electroweak precision

CEPC has been under development at IHEP since shortly after the discovery of the Higgs boson at CERN in 2012. To enable precision studies of the new particle, Chinese physicists formally proposed a dedicated electron–positron collider in September 2012. CEPC shares a concept similar to the Future Circular Collider (FCC) proposed in parallel at CERN: its high-luminosity collisions would greatly improve precision in measuring Higgs and electroweak processes.

“CEPC is designed as a multi-purpose particle factory,” explains Wang. “It would not only serve as an efficient Higgs factory but would also precisely study other fundamental particles, and its tunnel can be re-used for a future upgrade to a more powerful super proton–proton collider.”

Following completion of the Conceptual Design Report in 2018, which defined the physics case and baseline layout, the CEPC collaboration entered a detailed technical phase to validate key technologies and complete subsystem designs. The accelerator Technical Design Report (TDR) was released in 2023, followed in October 2025 by the reference detector TDR, providing a mature blueprint for both components.

Although our proposal that CEPC be included in the next five-year plan was not successful, IHEP will continue this effort

Wang Yifang

Compared to the 2018 detector concept, the new technical report proposes several innovations. An electromagnetic calorimeter based on orthogonally oriented crystal bars and a hadronic calorimeter based on high-granularity scintillating glass have been optimised for advanced particle-flow algorithms, improving their energy resolution by a factor of 10 and a factor of two, respectively. A tracking detector employing AC-coupled low-gain avalanche-diode technology will enable simultaneous 10 µm position and 50 ps time measurements, enhancing vertex reconstruction and flavour tagging. Meanwhile, a readout chip developed in 55 nm technology will achieve state-of-the-art performance at 65% of the usual power consumption, enabling better resolution, large-scale integration and reduced cooling-pipe material. Among other advances, a new type of high-density, high-yield scintillating glass opens the possibility of a fully absorbing hadronic calorimeter.

To ensure the scientific soundness and feasibility of the design, the CEPC Study Group established an International Detector Review Committee in 2024, chaired by Daniela Bortoletto of the University of Oxford.

Design consolidation

“After three rounds of in-depth review, the committee concluded in September 2025 that the Reference Detector TDR defines a coherent detector concept with a clearly articulated physics reach,” says Bortoletto. “The collaboration’s ambitious R&D programme and sustained technical excellence have been key to consolidating the major design choices and positioning the project to advance from conceptual design into integrated prototyping and system validation.”

CEPC’s technical advance comes amid intense international interest in participating in a Higgs factory. Alongside the circular FCC concept at CERN, Higgs factories with linear concepts have been proposed in Europe and Japan, and both Europe and the US have named constructing or participating in a Higgs factory as a strategic priority. Following China’s decision to defer CEPC, attention now turns to Europe, where the ongoing update of the European Strategy for Particle Physics will prioritise recommendations for the laboratory’s flagship collider beyond the HL-LHC. Domestically, China will consider other large science projects for the 2026 to 2030 period, including a proposed Super Tau–Charm Facility to succeed the Beijing Electron–Positron Collider II.

With completion of its core technical designs, CEPC now turns to engineering design.

“The newly released detector report is the first dedicated to a circular electron–positron Higgs factory,” says Wang. “It showcases the R&D capabilities of Chinese scientists and lays the foundation for turning this concept into reality.”

Hidden treasures

Data resurrection

In 2009, the JADE experiment had been out of operation for 23 years. The PETRA electron–positron collider that served it had already completed a second life as a pre-accelerator for the HERA electron–proton collider and was preparing for a third life as an X-ray source. JADE and the other PETRA experiments were a piece of physics history, well known for seminal measurements of three-jet quark–antiquark–gluon events, and early studies of quark fragmentation and jet hadronisation. But two decades after being decommissioned, the JADE collaboration was yet to publish one of its signature measurements.

At high energies and short distances, the strong force becomes weaker. Quarks behave almost like free particles. This “asymptotic freedom” is a unique hallmark of QCD. In 2009, as now, JADE’s electron–positron data was unique in the low-energy range, with other data sets lost to history. When its data were reprocessed with modern next-to-next-to-leading-order QCD and improved simulation tools, the DESY experiment was able to rival experiments at CERN’s higher-energy Large Electron–Positron (LEP) collider for precision on the strong coupling constant, contributing to a stunning proof of QCD’s most fundamental behaviour. The key was a farsighted and original initiative by Siggi Bethke to preserve JADE’s data and analysis software.

New perspectives

This data resurrection from JADE demonstrated how data can be reinterpreted to give new perspectives decades after an experiment ends. It was a timely demonstration. In 2009, HERA and SLAC’s PEP-II electron–positron collider had been recently decommissioned, and Fermilab’s Tevatron proton–antiproton collider was approaching the end of its operations. Each facility nevertheless had a strong analysis programme ahead, and CERN’s Large Hadron Collider (LHC) was preparing for its first collisions. How could all this data be preserved?

The uniqueness of these programmes, for which no upgrade or followup was planned for the coming decades, invited the consideration of data usability at horizons well beyond a few years. A few host labs risked a small investment, with dedicated data-preservation projects beginning, for example, at SLAC, DESY, Fermlilab, IHEP and CERN (see “Data preservation” dashboard). To exchange data-preservation concepts, methodologies and policies, and to ensure the long-term preservation of HEP data, the Data Preservation in High Energy Physics (DPHEP) group was created in 2014. DPHEP is a global initiative under the supervision of the International Committee for Future Accelerators (ICFA), with strong support from CERN from the beginning. It actively welcomes new collaborators and new partner experiments, to ensure a vibrant and long-term future for the precious data sets being collected at present and future colliders.

At the beginning of our efforts, DPHEP designed a four-level classification of data abstraction. Level 1 corresponds to the information typically found in a scientific publication or its associated HEPData entry (a public repository for high-energy physics data tables). Level 4 includes all inputs necessary to fully reprocess the original data and simulate the experiment from scratch.

The concept of data preservation had to be extended too. Simply storing data and freezing software is bound to fail as operating systems evolve and analysis knowledge disappears. A sensible preservation process must begin early on, while the experiments are still active, and take into account the research goals and available resources. Long-term collaboration organisation plays a crucial role, as data cannot be preserved without stable resources. Software must adapt to rapidly changing computing infrastructure to ensure that the data remains accessible in the long term.

Return on investment

But how much research gain could be expected for a reasonable investment in data preservation? We conservatively estimate that for dedicated investments below 1% of the cost of the construction of a facility, the scientific output increases by 10% or more. Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations (see “Publications per year, during and after data taking” panel). Publication rates remain substantial well beyond the “canonical” five years after the end of the data taking, particularly for experiments that pursued dedicated data-preservation programmes. For some experiments, the lifetime of the preservation system is by now comparable with the data-taking period, illustrating the need to carefully define collaborations for the long term.

Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations

The most striking example is BaBar, an electron–positron-collider experiment at SLAC that was designed to investigate the violation of charge–parity symmetry in the decays of B mesons, and which continues to publish using a preservation system now hosted outside the original experiment site. Ageing infrastructure is now presenting challenges, raising questions about the very-long-term hosting of historical experiments – “preservation 2.0” – or the definitive end of the programme. The other historical B factory, Belle, benefits from a follow-up experiment on site.

Publications per year, during and after data taking

The publication record at experiments associated with the DPHEP initiative. Data-taking periods of the relevant facilities are shaded, and the fraction of peer-reviewed articles published afterwards is indicated as a percentage for facilities that are not still operational. The data, which exclude conference proceedings, were extracted from Inspire-HEP on 31 July 2025.

HERA, an electron–proton and positron–proton collider that was designed to study deep inelastic scattering (DIS) and the structure of the proton, continues to publish and even to attract new collaborators as the community prepares for the Electron–Ion Collider (EIC) at BNL, nicely demonstrating the relevance of data preservation for future programmes. The EIC will continue studies of DIS in the regime of gluon saturation (CERN Courier January/February 2025 p31), with polarised beams exploring nucleon spin and a range of nuclear targets. The use of new machine-learning algorithms on the preserved HERA data has even allowed aspects of the EIC physics case to be explored: an example of those “treasures” not foreseen at the end of collisions.

IHEP in China conducts a vigorous data-preservation programme around BESIII data from electron–positron collisions in the BEPCII charm factory. The collaboration is considering using artificial intelligence to rank data priorities and to support users in data reuse.

Remarkably, LEP experiments are still publishing physics analyses with archived ALEPH data almost 25 years after the completion of the LEP programme on 4 November 2000. The revival of the CERNLIB collection of FORTRAN data-analysis software libraries has also enabled the resurrection of the legacy software stacks of both DELPHI and OPAL, including the spectacular revival of their event displays (see “Data resurrection” figure). The DELPHI collaboration revised their fairly restrictive data-access policy in early 2024, opening and publishing their data via CERN’s Open Data Portal.

Some LEP data is currently being migrated into the standardised EDM4hep (event data model) format that has been developed for future colliders. As well as testing the format with real data, this will ensure data preservation and support software development, analysis training and detector design for the electron–positron collider phase of the proposed Future Circular Collider using real events.

The future is open

In the past 10 years, data preservation has grown in prominence in parallel with open science, which promotes free public access to publications, data and software in community-driven repositories, and according to the FAIR principles of findability, accessibility, interoperability and reusability. Together, data preservation and open science help maximise the benefits of fundamental research. Collaborations can fully exploit their data and share its unique benefits with the international community.

The two concepts are distinct but tightly linked. Data preservation focuses on maintaining data integrity and usability over time, whereas open data emphasises accessibility and sharing. Both require careful, properly resourced planning, with a crucial role played by the host laboratory.

Treasure chest

Data preservation and open science both require clear policies and a proactive approach, beginning at the very start of an experiment. Clear guidelines on copyright, resource allocation for long-term storage, access strategies and maintenance must be established to address the challenges of data longevity. Last but not least, it is crucially important to structure collaborations so that international cooperation runs smoothly long after data taking has finished. By addressing these aspects, collaborations can create robust frameworks for preserving, managing and sharing scientific data effectively over the long term.

Today, most collaborations target the highest standards of data preservation (level 4). Open-source software should be prioritised, because the uncontrolled obsolescence of commercial software endangers the entire data-preservation model. It is crucial to maintain all of the data and the software stack, which requires continuous effort to adapt older versions to evolving computing environments. This applies to both software and hardware infrastructures. Synergies between old and new experiments can provide valuable solutions, as demonstrated by HERA and EIC, Belle and Belle II, and the Antares and KM3NeT neutrino telescopes.

From afterthought to forethought

In the past decade, data preservation has evolved from an afterthought as experiments wrapped up operations into a necessary specification for HEP experiments, and is now recognised as a source of cost-effective research. Progress has been rapid, but implementation remains fragile and needs sustained planning and protection.

In the past 10 years, data preservation has grown in prominence in parallel with open science

The benefits will be significant. Signals not imagined during the experiments’ lifetime can be searched for. Data can be reanalysed in light of advances in theory and observations from other realms of fundamental science. Education, training and outreach can be brought to life by demonstrating classic measurements with real data. And scientific integrity is fully realised when results are fully reproducible.

The LHC, having surpassed an exabyte of data, now holds the largest scientific data set ever accumulated. The High-Luminosity LHC will increase this by an order of magnitude. When the programme comes to an end, it will likely be the last data at the energy frontier for decades. History suggests that 10% of the LHC’s scientific programme will not yet have been published when collisions end, and a further 10% not even imagined. While the community discusses its strategy for future colliders, it must therefore also bear in mind data preservation. It is the key to unearthing hidden treasures in the data of the past, present and future.

Sensing at quantum limits

Atomic energy levels. Spin orientations in a magnetic field. Resonant modes in cryogenic, high-quality-factor radio-frequency cavities. The transition from superconducting to normal conducting, triggered by the absorption of a single infrared photon. These are all simple yet exquisitely sensitive quantum systems with discrete energy levels. Each can serve as the foundation for a quantum sensor: an instrument that detects single photons, measures individual spins or records otherwise imperceptible energy shifts.

Over the past two decades, quantum sensors have taken on leading roles in the search for ultra-light dark matter and in precision tests of fundamental symmetries. Examples include the use of atomic clocks to probe whether Earth is sweeping through oscillating or topologically structured dark-matter fields, and cryogenic detectors to search for electric dipole moments – subtle signatures that could reveal new sources of CP violation. These areas have seen rapid progress, as challenges related to detector size, noise, sensitivity and complexity have been steadily overcome, opening new phase space in which to search for physics beyond the Standard Model. Could high-energy particle physics benefit next?

Low-energy particle physics

Most of the current applications of quantum sensors are at low energies, where their intrinsic sensitivity and characteristic energy scales align naturally with the phenomena being probed. For example, within the Project 8 experiment at the University of Washington, superconducting sensors are being developed to tackle a longstanding challenge: distinguishing the tiny mass of the neutrino from zero (see “Quantum-noise limited” image). Inward-looking phased arrays of quantum-noise-limited microwave receivers allow spectroscopy of cyclotron radiation from beta-decay electrons as they spiral in a magnetic field. The shape of the spectrum near its endpoint depends on the neutrino mass, and such sensors could probe masses as low as 40 meV.
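The principle can be quantified in a few lines: an electron’s cyclotron frequency falls as 1/γ with its total energy, so the radiated frequency directly encodes that energy. The sketch below, assuming an illustrative 1 T field, shows the roughly 5 MHz shift per 100 eV that the spectroscopy must resolve near the 18.6 keV tritium beta endpoint.

```python
E0_KEV = 510.999    # electron rest energy, keV
F_C0_HZ = 27.992e9  # cyclotron frequency eB/(2*pi*m_e) at B = 1 T, Hz

def cyclotron_freq_hz(kinetic_energy_kev: float, b_tesla: float = 1.0) -> float:
    gamma = 1.0 + kinetic_energy_kev / E0_KEV   # relativistic factor
    return F_C0_HZ * b_tesla / gamma

for ke in (18.5, 18.6):   # straddling the tritium beta endpoint
    print(f"{ke} keV electron -> {cyclotron_freq_hz(ke)/1e9:.6f} GHz")
```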

Quantum-noise limited

Beyond the Standard Model, superconducting sensors play a central role in the search for dark matter. At the lowest mass scales (peV to meV), experiments search for ultralight bosonic dark-matter candidates such as axions and axion-like particles (ALPs) through excitations of the vacuum field inside high-quality-factor microwave and millimetre-wave cavities (see “Quantum sensitivity” image). In the meV range, light-shining-through-wall experiments aim to reveal brief oscillations into weakly coupled hidden-sector particles such as dark photons or ALPs, and may employ quantum sensors for detecting reappearing photons, depending on the detection strategy. In the MeV to sub-GeV mass range, superconducting sensors are used to detect individual photons and phonons in cryogenic scintillators, enabling sensitivity to dark-matter interactions via electron recoils. At higher masses, reaching into the GeV regime, superfluid-helium detectors target nuclear recoils from heavier dark-matter particles such as WIMPs.
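The connection between axion mass and cavity frequency is a one-line conversion, f = mc²/h, which shows why µeV-scale axions map onto microwave cavities. A minimal sketch:

```python
H_EV_S = 4.135667696e-15   # Planck constant, eV*s

def cavity_freq_ghz(axion_mass_microev: float) -> float:
    # Resonant photon frequency matching a given axion rest energy
    return axion_mass_microev * 1e-6 / H_EV_S / 1e9

for m_a in (1.0, 4.0, 40.0):   # axion masses in micro-eV
    print(f"{m_a:5.1f} ueV axion -> {cavity_freq_ghz(m_a):6.2f} GHz cavity")
```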

These technologies also find broad application beyond fundamental physics. The ability of superconducting and other cryogenic sensors to detect single quanta with high efficiency and ultra-low noise is also the technological foundation of quantum communication.

Raising the temperature

While many superconducting quantum sensors require ultra-low temperatures of a few mK, spin-based quantum sensors such as nitrogen-vacancy (NV) centres in diamond and polarised rubidium atoms can function at or near room temperature.

NV centres are defects in the diamond lattice where a missing carbon atom – the vacancy – is adjacent to a lattice site where a carbon atom has been replaced by a nitrogen atom. The electronic spin states in NV centres have unique energy levels that can be probed by laser excitation and detection of spin-dependent fluorescence.

Researchers are increasingly exploring how quantum-control techniques can be integrated into high-energy-physics detectors

Rubidium is promising for spin-based sensors because it has a single unpaired valence electron. In the presence of an external magnetic field, its atomic energy levels are split by the Zeeman effect. When optically pumped with laser light, spin-polarised “dark” sublevels – those not excited by the light – become increasingly populated. These aligned spins precess in magnetic fields, forming the basis of atomic magnetometers and other quantum sensors.
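The numbers behind this sensitivity follow directly from the Zeeman splitting: for Rb-87 in its F = 2 ground state, the precession frequency works out to about 7 Hz per nanotesla. A minimal sketch:

```python
MU_B_OVER_H = 13.996e9   # Bohr magneton / Planck constant, Hz per tesla
G_F = 0.5                # Lande g-factor of the Rb-87 F = 2 ground state

def larmor_hz(b_tesla: float) -> float:
    # Spin-precession (Larmor) frequency in an applied field
    return G_F * MU_B_OVER_H * b_tesla

for b in (50e-6, 1e-9, 1e-12):   # Earth's field, 1 nT, 1 pT
    print(f"B = {b:.0e} T -> precession at {larmor_hz(b):.3g} Hz")
```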

Being exquisite magnetometers, both devices make promising detectors for ultralight bosonic dark-matter candidates such as axions. Fermion spins may interact with spatial or temporal gradients of the axion field, leading to tiny oscillating energy shifts. The coupling of axions to gluons could also show up as an oscillating nuclear electric dipole moment. These interactions could manifest as oscillating energy-level shifts in NV centres, or as time-varying NMR-like spin precession signals in the atomic magnetometers.
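A toy numerical model conveys the detection principle: a faint oscillation at the axion Compton frequency, buried in noise, is recovered by a Fourier transform of a long time series. All amplitudes and frequencies below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
f_axion = 37.0                       # Hz; arbitrary choice for the toy
t = np.arange(0, 100, 1e-3)          # 100 s of data at 1 kHz sampling
# Weak coherent oscillation plus much larger white noise
signal = 0.1 * np.sin(2 * np.pi * f_axion * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1e-3)
peak = np.argmax(spectrum[1:]) + 1   # skip the DC bin
print(f"strongest line at {freqs[peak]:.2f} Hz")   # recovers ~37 Hz
```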

Large-scale detectors

The situation is completely different in high-energy-physics detectors, which rely on numerous interactions between a particle and the detector medium. Charged particles cause many ionisation events, and when a neutral particle interacts it produces charged particles that result in similarly numerous ionisations. Even if quantum control were possible within individual units of a massive detector, the number of individual quantum sub-processes to be monitored would exceed the capabilities of any realistic device.

Increasingly, however, researchers are exploring how quantum-control techniques – such as manipulating individual atoms or spins using lasers or microwaves – can be integrated into high-energy-physics detectors. These methods could enhance detector sensitivity, tune detector response or enable entirely new ways of measuring particle properties. While these quantum-enhanced or hybrid detection approaches are still in their early stages, they hold significant promise.

Quantum dots

Quantum dots are nanoscale semiconductor crystals – typically a few nanometres in diameter – that confine charge carriers (electrons and holes) in all three spatial dimensions. This quantum confinement leads to discrete, atom-like energy levels and results in optical and electronic properties that are highly tunable with size, shape and composition. Originally studied for their potential in optoelectronics and biomedical imaging, quantum dots have more recently attracted interest in high-energy physics due to their fast scintillation response, narrow-band emission and tunability. Their emission wavelength can be precisely controlled through nanostructuring, making them promising candidates for engineered detectors with tailored response characteristics.
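The size-tunability follows from elementary quantum mechanics: the ground-state confinement energy of a carrier in a dot of radius R scales as 1/R², so smaller dots emit bluer light. The sketch below evaluates this for an illustrative effective mass of the order found in II-VI semiconductors.

```python
import math

HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # J per eV
M_EFF = 0.13 * M_E        # illustrative effective mass (II-VI-like)

def confinement_shift_ev(radius_nm: float) -> float:
    # Ground-state energy of a spherical infinite well of radius R
    r = radius_nm * 1e-9
    return (HBAR * math.pi) ** 2 / (2 * M_EFF * r * r) / EV

for r in (1.5, 2.5, 4.0):   # dot radii in nm
    print(f"R = {r} nm -> confinement shift of {confinement_shift_ev(r):.2f} eV")
```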

Chromatic calorimetry

While their radiation hardness remains an open question, engineering their composition, geometry, surface and size can yield very narrow-band (20 nm) emitters across the optical spectrum and into the infrared. Quantum dots such as these could enable the design of a “chromatic calorimeter”: a stack of quantum-dot layers, each tuned to emit at a distinct wavelength, for example red in the first layer, orange in the second, and so on through the visible spectrum to violet. Each layer would absorb higher-energy photons quite broadly but emit light in a narrow spectral band. The intensity of each colour would then correspond to the energy absorbed in that layer, while the emission wavelength would encode the position of energy deposition, revealing the shower shape (see “Chromatic calorimetry” figure). Because each layer is optically distinct, hermetic isolation would be unnecessary, reducing the overall material budget.
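In idealised form, reading out such a device is a linear unmixing problem: the measured intensity in each colour band is a weighted sum of the energies deposited in each layer, which a least-squares solve can invert. The response matrix below is invented for illustration.

```python
import numpy as np

# response[i, j]: light in spectral band i per unit energy in layer j;
# values are invented, mostly diagonal because each layer emits in its
# own narrow band.
response = np.array([
    [1.00, 0.05, 0.00],   # red band    <- mainly layer 1
    [0.04, 1.00, 0.06],   # orange band <- mainly layer 2
    [0.00, 0.05, 1.00],   # yellow band <- mainly layer 3
])

true_deposits = np.array([5.0, 2.5, 0.8])   # GeV deposited per layer
measured = response @ true_deposits         # idealised detector signal

recovered, *_ = np.linalg.lstsq(response, measured, rcond=None)
print(recovered)   # ~[5.0, 2.5, 0.8]: the longitudinal shower profile
```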

Rather than improving the energy resolution of existing calorimeters, quantum dots could provide additional information on the shape and development of particle showers if embedded in existing scintillators. Initial simulations and beam tests by CERN’s Quantum Technology Initiative (QTI) support the hypothesis that the spectral intensity of quantum-dot emission can carry information about the energy and species of incident particles. Ongoing work aims to explore their capabilities and limitations.

Beyond calorimetry, quantum dots could be formed within solid semiconductor matrices, such as gallium arsenide, to form a novel class of “photonic trackers”. Scintillation light from electronically tunable quantum dots could be collected by photodetectors integrated directly on top of the same thin semiconductor structure, such as in the DoTPiX concept. Thanks to a highly compact, radiation-tolerant scintillating pixel tracking system with intrinsic signal amplification and minimal material budget, photonic trackers could provide a scintillation-light-based alternative to traditional charge-based particle trackers.

Living on the edge

Low temperatures also offer opportunities at scale – and cryogenic operation is a well-established technique in both high-energy and astroparticle physics, with liquid argon (boiling point 87 K) widely used in time projection chambers and some calorimeters, and some dark-matter experiments using liquid helium (boiling point 4.2 K) to reach even lower temperatures. A range of solid-state detectors, including superconducting sensors, operate effectively at these temperatures and below, and offer significant advantages in sensitivity and energy resolution.

Single-photon phase transitions

Transition-edge sensors (TESs) operate in the narrow temperature range where a superconducting film undergoes a rapid transition from zero resistance to finite values. When a particle deposits energy in a TES, it slightly raises the temperature; because the transition is extremely steep, even a tiny temperature change leads to a detectable resistance change, allowing precise calorimetry. Magnetic microcalorimeters (MMCs) sense the same minute temperature rise differently, as a change in the magnetisation of a paramagnetic sensor.
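The scale of the effect is set by ΔT = E/C, which is why the heat capacity of the sensor must be tiny. A one-line estimate, with an assumed, illustrative picojoule-per-kelvin heat capacity:

```python
EV = 1.602176634e-19     # J per eV
E_PHOTON_EV = 6000.0     # a 6 keV X-ray
C_HEAT = 1e-12           # assumed heat capacity of the film, J/K

delta_t_kelvin = E_PHOTON_EV * EV / C_HEAT
print(f"temperature rise: {delta_t_kelvin*1e6:.0f} micro-kelvin")  # ~960 uK
```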

Functioning at millikelvin temperatures, TESs provide much higher energy resolution than solid-state detectors made from high-purity germanium crystals, which work by collecting electron–hole pairs created when ionising radiation interacts with the crystal lattice. TESs are increasingly used in high-resolution X-ray spectroscopy of pionic, muonic or antiprotonic atoms, and in photon detection for observational astronomy, despite the technical challenges associated with maintaining ultra-low operating temperatures.

By contrast, superconducting nanowire and microwire single-photon detectors (SNSPDs and SMSPDs) register only a change in state – from superconducting to normal conducting – allowing them to operate at higher temperatures than traditional low-temperature sensors. When made from high-critical-temperature (Tc) superconductors, operation at temperatures as high as 10 K is feasible, while maintaining excellent sensitivity to energy deposited by charged particles and ultrafast switching times on the order of a few picoseconds. Recent advances include the development of large-area devices with up to 400,000 micron-scale pixels (see “Single-photon phase transitions” figure), the fabrication of high-Tc SNSPDs and successful beam tests of SMSPDs. These technologies are promising candidates for detecting milli-charged particles – hypothetical particles arising in “hidden sector” extensions of the Standard Model – or for high-rate beam monitoring at future colliders.

Rugged, reliable and reproducible

Quantum sensor-based experiments have vastly expanded the phase space that has been searched for new physics. This is just the beginning of the journey, as larger-scale efforts build on the initial gold rush and new quantum devices are developed, perfected and brought to bear on the many open questions of particle physics.

Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance

To fully profit from their potential, a vigorous R&D programme is needed to scale up quantum sensors for future detectors. Ruggedness, reliability and reproducibility are key, as is establishing proof of principle for the numerous imaginative concepts that have already been conceived. Challenges range from access to test infrastructures to standardised test protocols that allow fair comparisons. In many cases, the greatest challenge is to foster an open exchange of ideas among the numerous local developments happening worldwide. Finding a common language to discuss developments in fields that at first glance have little in common builds on a willingness to listen, learn and exchange.

The European Committee for Future Accelerators (ECFA) detector R&D roadmap provides a welcome framework for addressing these challenges collaboratively through the Detector R&D (DRD) collaborations established in 2023 and now coordinated at CERN. Quantum sensors and emerging technologies are covered within the DRD5 collaboration, which ties together 112 institutes worldwide, many of them leaders in their particular field. Only a third stem from the traditional high-energy physics community.

These efforts build on the widespread expertise and enthusiastic efforts at numerous institutes and tie in with the quantum programmes being spearheaded at high-energy-physics research centres, among them CERN’s QTI. Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance. The best approach may prove to be “targeted blue-sky research”: a willingness to explore completely novel concepts while keeping their ultimate usefulness for particle physics firmly in mind.

Double plasma progress at DESY

What if, instead of using tonnes of metal to accelerate electrons, they were to “surf” on a wave of charge displacements in a plasma? This question, posed in 1979 by Toshiki Tajima and John Dawson, planted the seed for plasma wakefield acceleration (PWA). Scientists at DESY now report some of the first signs that PWA is ready to compete with traditional accelerators at low energies. The results tackle two of the biggest challenges in PWA: beam quality and bunch rate.

“We have made great progress in the field of plasma acceleration,” says Andreas Maier, DESY’s lead scientist for plasma acceleration, “but this is an endeavour that has only just started, and we still have a bit of homework to do to get the system integrated with the injector complexes of a synchrotron, which is our final goal.”

Riding a wave

PWA has the potential to radically miniaturise particle accelerators. Plasma waves are generated when a laser pulse or particle beam ploughs through a millimetres-long hydrogen-filled capillary, displacing electrons and creating a wake of alternating positive and negative charge regions behind it. The process is akin to flotsam and jetsam being accelerated in the wake of a speedboat, and the plasma “wakefields” can be thousands of times stronger than the electric fields in conventional accelerators, allowing particles to gain hundreds of MeV in just a few millimetres. But beam quality and intensity are significant challenges in such narrow confines.
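The claim of fields thousands of times stronger follows from the cold wave-breaking limit of a plasma, E₀ = mₑcωₚ/e, where ωₚ is the plasma frequency. Evaluating it for typical laser-plasma densities:

```python
import math

E = 1.602176634e-19       # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg
C = 2.99792458e8          # speed of light, m/s
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def wave_breaking_gv_per_m(n_e_per_cm3: float) -> float:
    n = n_e_per_cm3 * 1e6                            # convert to m^-3
    omega_p = math.sqrt(n * E * E / (EPS0 * M_E))    # plasma frequency, rad/s
    return M_E * C * omega_p / E / 1e9               # field in GV/m

for n in (1e17, 1e18, 1e19):   # typical laser-plasma densities, cm^-3
    print(f"n_e = {n:.0e} cm^-3 -> E0 ~ {wave_breaking_gv_per_m(n):.0f} GV/m")
```

For comparison, conventional radio-frequency accelerators sustain gradients of tens of MV/m.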

In a first study, a team from the LUX experiment at DESY and the University of Hamburg demonstrated, for the first time, a two-stage correction system to dramatically reduce the energy spread of accelerated electron beams. The first stage stretches the longitudinal extent of the beam from a few femtoseconds to several picoseconds using a series of four zigzagging bending magnets called a magnetic chicane. Next, a radio-frequency cavity reduces the energy variation to below 0.1%, bringing the beam quality in line with conventional accelerators.
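A toy longitudinal phase-space model captures the logic of the two stages, with all parameter values invented for illustration: the chicane converts each particle’s energy offset into a time of arrival, and an RF kick at the zero crossing then cancels the correlated spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
c = 2.99792458e8
t = rng.normal(0.0, 3e-15, n)       # arrival times: a ~3 fs bunch
delta = rng.normal(0.0, 1e-2, n)    # relative energy offsets: 1% spread

# Stage 1 - chicane: energy offset becomes time of arrival (strength R56)
R56 = 0.15                          # invented momentum compaction, metres
t = t + R56 * delta / c             # bunch stretches to ~5 ps, now chirped

# Stage 2 - RF cavity at the zero crossing: a time-dependent energy kick
# sized to cancel the chirp the chicane just imprinted
k = 2 * np.pi * 3e9 / c             # 3 GHz cavity wavenumber
amp = 1.0 / (k * R56)               # kick amplitude that nulls the chirp
delta = delta - amp * np.sin(k * c * t)

print(f"bunch length  : {t.std()*1e12:.2f} ps")
print(f"energy spread : {delta.std()*100:.4f} %")   # well below 0.1%
```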

“We basically trade beam current for energy stability,” explains Paul Winkler, lead author of a recent publication on active energy compression. “But for the intended application of a synchrotron injector, we would need to stretch the electron bunches anyway. As a result, we achieved performance levels so far only associated with conventional accelerators.”

But producing high-quality beams is only half the battle. To make laser-driven PWA a practical proposition, bunches must be accelerated not just once a second, like at LUX, but hundreds or thousands of times per second. This has now been demonstrated by KALDERA, DESY’s new high-power laser system (see “Beam quality and bunch rate” image).

“Already, on the first try, we were able to accelerate 100 electron bunches per second,” says principal investigator Manuel Kirchen, who emphasises the complementarity of the two advances. The team now plans to scale up the energy and deploy “active stabilisation” to improve beam quality. “The next major goal is to demonstrate that we can continuously run the plasma accelerators with high stability,” he says.

With the exception of CERN’s AWAKE experiment (CERN Courier May/June 2024 p25), almost all plasma-wakefield accelerators are designed with medical or industrial applications in mind. Medical applications are particularly promising as they require lower beam energies and place less demanding constraints on beam quality. Advances such as those reported by LUX and KALDERA raise confidence in this new technology and could eventually open the door to cheaper and more portable X-ray equipment, allowing medical imaging and cancer therapy to take place in university labs and hospitals.
