
A long-lived paradigm shift

Searches for new physics at high-energy colliders traditionally target heavy new particles with short lifetimes, and these searches have shaped detector design, data-acquisition systems and analysis methods. However, there could be new long-lived particles (LLPs) which travel through the detectors without decaying, either because they are light or because they have small couplings. Searches for LLPs have been under way at the LHC since the start of data taking, and were performed at previous colliders, but they are attracting increasing interest, especially in light of the absence of new particles in more mainstream searches.

Detecting LLPs at the LHC experiments requires a paradigm shift with respect to the usual data-analysis and trigger strategies. To that end, more than 200 experimentalists and theorists met online from 16 to 19 November for the eighth workshop of the LHC LLP community.

Dark quarks would undergo fragmentation and hadronisation, resulting in “dark showers”

Strong theoretical motivations underpin searches for LLPs. For example, dark matter could be part of a larger dark sector, parallel to the Standard Model (SM), with new particles and interactions. If dark quarks could be produced at the LHC, they would undergo fragmentation and hadronisation in the dark sector resulting in characteristic “dark showers” — one of the focuses of the workshop. Collider signatures for dark showers depend on the fraction of unstable particles they contain and their lifetime, with a range of categories presenting their own analysis challenges: QCD-like jets, semi-visible jets, emerging jets, and displaced vertices with missing transverse energy. Delegates agreed on the importance of connecting collider-level searches for dark showers with astrophysical and cosmological scales. In a similar spirit of collaboration across communities, a joint session with the HEP Software Foundation focused on triggering and reconstruction software for dedicated LLP detectors.

Heavy neutral leptons

The discovery of heavy neutral leptons (HNLs) could address several open questions of the SM. For example, neutrinos are massless and purely left-handed in the SM, yet they oscillate between flavours as their wavefunction evolves, providing evidence for nonzero masses too small to have been measured directly. One way to resolve this tension is to complete the field pattern of the SM with right-handed HNLs. The number and other characteristics of the HNLs depend on the model considered, but in many cases they are long-lived and connect to other important questions, such as dark matter and the baryon asymmetry of the universe. There are many ongoing searches for HNLs at the LHC and many more proposed elsewhere. During the November workshop the discussion touched on different models and simulations, reviewing what is available and what is needed for the different signal benchmarks.

Another focus was the reinterpretation of previous LLP searches. Recasting public results is common practice at the LHC and a good way to increase physics impact, but reinterpreting LLP searches is more difficult than doing so for prompt searches because of their non-standard selections and analysis-specific objects.


The latest results from CERN experiments were presented. ATLAS reported the first LHC search for sleptons using displaced-lepton final states, greatly improving sensitivity compared to LEP. CMS presented a search for strongly interacting massive particles with trackless jets, and a search for long-lived particles decaying to jets with displaced vertices. LHCb reported searches for low-mass dimuon resonances and a search for heavy neutrinos in the decay of a W boson into two muons and a jet, and the NA62 experiment at CERN’s SPS presented a search for π⁰ decays to invisible particles. These results bring important new constraints on the properties and parameters of LLP models.

Dedicated detectors

A series of dedicated LLP detectors at CERN — including the Forward Physics Facility for the HL-LHC, the CMS forward detector, FASER, CODEX-b and CODEX-β, MilliQan, MoEDAL-MAPP, MATHUSLA, ANUBIS, SND@LHC and FORMOSA — are in different stages between proposal and operation. These additional detectors, located at various distances from the LHC experiments, have diverse strengths: some, like MilliQan, look for specific particles (milli-charged particles, in that case), whereas others, like MATHUSLA, offer a very low-background environment in which to search for neutral LLPs. These complementary efforts will, in the near future, provide all the different pieces needed to build the most complete picture possible of a variety of LLP searches, from axion-like particles to exotic Higgs decays, potentially opening the door to a dark sector.

ATLAS reported the first LHC search for sleptons using displaced-lepton final states

The workshop featured a dedicated session on future colliders for the first time. Designing these experiments with LLPs in mind would radically boost discovery chances. Key considerations will be tracking and the tracking volume, timing information, trigger and DAQ, as well as potential additional instrumentation in tunnels or using the experimental caverns.

Together with the range of new results presented and many more in the pipeline, the 2020 LLP workshop was representative of a vibrant research community, constantly pushing the “lifetime frontier”.

Nuclear win for ISOLDE physicists

2020 Lise Meitner winners

The nuclear physics division of the European Physical Society today awarded the 2020 Lise Meitner Prize to three physicists who have played a decisive role in turning a small-scale nuclear-physics experiment at CERN into a world-leading facility for the investigation of nuclear structure.

Klaus Blaum (Max Planck Institute for Nuclear Physics), Björn Jonson (Chalmers University of Technology) and Piet Van Duppen (KU Leuven) are recognised for the development and application of online instrumentation and techniques, and for the precise and systematic investigation of properties of nuclei far from stability at CERN’s Isotope mass Separator On-Line facility (ISOLDE).

Blaum has made key contributions to the high-precision determination of nuclear ground-state properties using laser and mass-spectroscopic methods and to the development of new techniques in this field, while Jonson was acknowledged for his studies of the lightest exotic nuclei, namely halo nuclei, where he was the first to explain their surprisingly large matter radii. Van Duppen was recognised for driving the production and investigation of post-accelerated radioactive beams with REX-ISOLDE. Since the 1960s, the ISOLDE user facility has produced extreme nuclear systems to help physicists understand how the strong interaction binds the ingredients of atomic nuclei, with advanced traps and lasers recently offering new ways to look for physics beyond the Standard Model.

I’m very impressed by the breadth of the recent prize winners

Eckhard Elsen

The biennial Lise Meitner prize, named after one of the pioneers in the discovery of nuclear fission in 1939, was established in 2000 to acknowledge outstanding work in the fields of experimental, theoretical or applied nuclear science. Former winners include a quartet of physicists (Johanna Stachel, Peter Braun-Munzinger, Paolo Giubellino and Jürgen Schukraft) from the ALICE collaboration in 2014, for the experimental exploration of the quark-gluon plasma using ultra-relativistic nucleus-nucleus collisions, and for the design and construction of the ALICE detector.

This year’s awards were officially presented during the 2020 ISOLDE workshop and users meeting held online on 26–27 November. “I’m very impressed by the breadth of the recent prize winners… covering a range of topics and varying between individuals and teams,” said CERN director for research and computing Eckhard Elsen during the award ceremony. “It is a good indicator of the health and the push of the field – it is truly alive.”

A unique period for computing, but will it last?

Monica Marinucci and Ivan Deloose

Twenty-five years ago in Rio de Janeiro, at the 8th International Conference on Computing in High-Energy and Nuclear Physics (CHEP-95), I presented a paper on behalf of my research team titled “The PC as Physics Computer for LHC”. We highlighted impressive improvements in price and performance compared to other solutions on offer. In the years that followed, the community started moving to PCs in a massive way, and today the PC remains unchallenged as the workhorse for high-energy physics (HEP) computing.

HEP-computing demands have always been greater than the available capacity, yet our community does not have the financial clout to dictate the way computing should evolve, so constant innovation and research in computing and IT are needed to maintain progress. A few years before CHEP-95, RISC workstations and servers had started complementing the mainframes that had been acquired at high cost for the start-up of LEP in 1989. We thought we could do even better than RISC. The increased-energy LEP2 phase needed lots of simulation, and the same needs were already manifest for the LHC. These considerations inspired the move that saw PC servers start populating our computer centres – a move that was also helped by a fair amount of luck.

Fast change

HEP programs need good floating-point performance, and early generations of Intel’s x86 processors, such as the 486/487 chips, offered only mediocre capabilities. The Pentium processors that emerged in the mid-1990s changed the scene significantly, and the competitive race between Intel and AMD became a major driver of continued hardware innovation.

Another strong tailwind came from the relentless efforts to shrink transistor sizes in line with Moore’s law, which saw processor clock speeds increase from 50–100 MHz to 2000–3000 MHz in little more than a decade. After 2006, when further speed increases became impossible for thermal reasons, efforts moved to producing multi-core chips. HEP continued to profit, however: since each physics event at a collider such as the LHC is independent of all the others, it was sufficient to split a job into multiple jobs spread across all the cores.
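This embarrassingly parallel pattern is easy to sketch in code. The following is a minimal illustration (in Python, with a made-up process_event function and toy event data, not actual HEP production code):

    import multiprocessing as mp

    def process_event(event):
        # Each collision event is independent of all the others, so the
        # workers need no communication; this stand-in just sums values.
        return sum(event)

    if __name__ == "__main__":
        # Hypothetical per-event data standing in for real detector input.
        events = [[0.5, 1.2], [2.3, 0.7], [1.1, 1.1], [0.9, 3.4]]
        # One worker per core: a single job becomes one task per event.
        with mp.Pool(processes=mp.cpu_count()) as pool:
            results = pool.map(process_event, events)
        print(results)

The same one-job-per-core decomposition later carried over directly to batch farms and the grid.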

Sverre Jarp

The HEP community was also lucky with software. Back in 1995 we had chosen Windows/NT as the operating system, mainly because it supported multiprocessing, which significantly enhanced our price/performance. Physicists, however, insisted on Unix. In 1991 Linus Torvalds had released Linux version 0.01, and it quickly gathered momentum as a worldwide open-source project. When release 2.0 appeared in 1996, multiprocessing support was included and the operating system was quickly adopted by our community.

Furthermore, HEP adopted the Grid concept to cope with the demands of the LHC. Thanks to projects such as Enabling Grids for E-science, we built the Worldwide LHC Computing Grid, which today handles more than two million tasks across one million PC cores every 24 hours. Although grid computing remained mainly amongst scientific users, the analogous concept of cloud computing had the same cementing effect across industry. Today, all the major cloud-computing providers overwhelmingly rely on PC servers.

In 1995 we had seen a glimmer, but we had no idea that the PC would remain an uncontested winner during a quarter of a century of scientific computing. The question is whether it will last for another quarter century.

The contenders

The end of CPU scaling, argued a recent report by the HEP Software Foundation, demands radical changes in computing and software to ensure the success of the LHC and other experiments into the 2020s and beyond. There are many contenders that would like to replace the x86 PC architecture. One candidate is the graphics processor, where Intel, AMD and Nvidia are all active. A wilder guess is quantum computing, whereas a more conservative one would be processors similar to the x86 but based on other architectures, such as ARM or RISC-V.

The end of CPU scaling demands radical changes to ensure the success of the LHC and other high-energy physics experiments

During the PC project we collaborated with Hewlett-Packard, which had a division in Grenoble, not too far away. Such R&D collaborations have been vital to CERN and the community since the beginning, and they remain so today. They allow us to get insight into forthcoming products and future plans, while our feedback can help shape the products being planned. CERN openlab, which has been the focal point for such collaborations for two decades, coined the phrase “You make it, we break it” early on. However, whatever the future holds, it is fair to assume that PCs will remain the workhorse of HEP computing for many years to come.

Beating cardiac arrhythmia

EBAMed’s technical team

In December last year, a beam of protons was used to treat a patient with cardiac arrhythmia – an irregular beating of the heart that affects around 15 million people in Europe and North America alone. The successful procedure, performed at the National Center of Oncological Hadrontherapy (CNAO) in Italy, signalled a new application of proton therapy, which has been used to treat upwards of 170,000 cancer patients worldwide since the early 1990s.

In parallel to CNAO – which is based on accelerator technologies developed in conjunction with CERN via the TERA Foundation – a Geneva-based start-up called EBAMed (External Beam Ablation), founded by CERN alumnus Adriano Garonna, aims to develop and commercialise image-guidance solutions for non-invasive treatments of heart arrhythmias. EBAMed’s technology is centred on an ultrasound imaging system that monitors a patient’s heart activity, interprets the motion in real time and signals to the proton-therapy machine when the radiation should be delivered. Once targeted, the proton beam ablates specific heart tissues to stop the local conduction of the disrupted electrical signals.

Fast learner

“Our challenge was to find a solution using the precision of proton therapy on a fast and irregular moving target: the heart,” explains Garonna. “The device senses motion at a very fast rate, and we use machine learning to interpret the images in real time, which allows robust decision-making.” Unlike current treatments, which can be lengthy and costly, he adds, people can be treated as outpatients; the intervention is non-invasive and “completely pain-free”.

The recipient of several awards – including TOP 100 Swiss Startups 2019, Venture Business Plan 2018, MassChallenge 2018, Venture Kick 2018 and IMD 2017 Start-up Competition – EBAMed recently received a €2.4 million grant from the European Union to fund product development and the first human tests.

Garonna’s professional journey began when he was a summer student at CERN in 2007, working on user-interface software for a new optical position-monitoring system at LHC Point 5 (CMS). Following his graduation, Garonna returned to CERN as a PhD student with the TERA Foundation and École Polytechnique Fédérale de Lausanne, and then as a fellow working for the Marie Curie programme PARTNER, a training network for European radiotherapy. This led to a position as head of therapy accelerator commissioning at MedAustron in Austria – a facility for proton and ion therapy based, like CNAO, on TERA Foundation/CERN technology. After helping deliver the first patient treatments at MedAustron, Garonna returned to CERN and entered informal discussions with TERA founder Ugo Amaldi, who was one of Garonna’s PhD supervisors, about how to take the technology further. Along with former CERN engineer Giovanni Leo and arrhythmia expert Douglas Packer, the group founded EBAMed in 2018.

“Becoming an entrepreneur was not my initial purpose, but I was fascinated by the project and convinced that a start-up was the best vehicle to bring it to market,” says Garonna. Not having a business background, he benefitted from the CERN Knowledge Transfer entrepreneurship seminars as well as the support from the Geneva incubator Fongit and courses organised by Innosuisse, the Swiss innovation agency. Garonna also drew on previous experience gained while at CERN. “At CERN most of my projects involved exploring new areas. While I benefitted from the support of my supervisors, I had to drive projects on my own, seek the right solutions and build the appropriate ecosystem to obtain results. This certainly developed an initiative-driven, entrepreneurial streak in me.”

Healthy competition

Proton therapy is booming, with almost 100 facilities operating worldwide and more than 35 under construction. EBAMed’s equipment can be installed in any proton-therapy centre irrespective of its technology, says Garonna. “We already have prospective patients contacting us as they have heard of our device and wish to benefit from the treatment. As a company, we want to be the leaders in our field. We do have a US competitor, who has developed a planning system using conventional radiotherapy, and we are grateful that there is another player on the market as it helps pave the way to non-invasive treatments. Additionally, it is dangerous to be alone, as that could imply that there is no market in the first place.”

Leaving the security of a job to risk it all with a start-up is a gradual process, says Garonna. “It’s definitely challenging to jump into what seems like cold water… you have to think if it is worth the journey. If you believe in what you are doing, I think it will be worth it.”

In pursuit of the possible

Giulia Zanderighi

What do collider phenomenologists do?

I tend to prefer the term particle phenomenology because the collider is just the tool that we use. However, compared to other experiments, such as those searching for dark matter or axions, colliders provide a controlled laboratory where you decide how many collisions and what energy these collisions should have. This is quite unique. Today, accelerators and detectors have reached an immense level of sophistication, and this allows us to perform a vast amount of fundamental measurements. So, the field spans precision measurements of fundamental properties of particles, in particular of the Higgs boson, consistency tests of the Standard Model (SM), direct and indirect searches for new physics, measurements of rare decays, and much more. For essentially all these topics we have had new results in recent years, so it’s a very active and continuously evolving field. But of course we do not just measure things for the sake of it. We have big, fundamental questions and we are looking for hints from LHC data as to how to address them.

What’s hot in the field today?

One topic that I think is very cool is that we can benefit from the LHC, in its current setup, also as a lepton collider. In fact, at the LHC we are looking at elementary collisions between the proton’s constituents, quarks and gluons. But since the proton is charged, it also emits photons, and one can talk about the photon parton-distribution function (PDF), i.e. the photonic content of protons. These photons can split into lepton pairs, so when one collides protons one is also colliding leptons. The fascinating thing is that the “content” of leptons in protons is rather democratic, so one can look at collisions between, say, a muon and a tau lepton – something that can’t be done even at future proposed lepton colliders. Furthermore, by picking up a lepton from one proton and a quark from the other proton, one can place new constraints on leptoquarks, and plenty of other things. This idea was already proposed in the 1990s, but was essentially forgotten because the lepton PDF was not known. Now we know it very precisely, bringing new possibilities. But let me stress that this is just one idea – there are many other new ideas out there. For instance, one major branch of phenomenology is to use machine learning or deep learning to recognise the SM and extract from data what is not SM-like.

I’m the first female director, which of course is a great responsibility

How does the Max Planck Institute differ from your previous positions, for example at CERN and Oxford?

A long time ago, somebody told me that the best thing that can happen to you in Germany is the Max Planck Society. It’s true. You are given independence and the means to fully focus on research and ideas, largely free of teaching duties or the need to apply for grants. There are also very valuable interactions with universities, be it in research or via the International Max Planck Research Schools for PhD students. Our institute in Munich is a unique place – one can feel it immediately. As a guest in the theory department, for example, you get to sit in the Heisenberg office, which feels like going back in time. Our institute was founded in Berlin in 1917, with Albert Einstein as its first director. In 1958 the institute moved to Munich, with Werner Heisenberg as director. After more than 100 years I’m the first female director, which of course is a great responsibility. But I also really loved both CERN and Oxford. At CERN I felt like I was at the centre of the world. It is such a vibrant environment, and I loved the proximity to the experiments and the chats in the cafeteria about calculations or measurements. In Oxford I loved the multidisciplinary aspect, the dinners in college sitting next to academics working in completely different fields. I guess I’m lucky that I’ve been in so many and such different places.

What is the biggest challenge to reach higher precision in quantum-field-theory calculations of key SM processes?

Scattering processes

The biggest challenge is that often there is no single biggest challenge. For instance, for inclusive Higgs-boson production we have a number of theoretical uncertainties, but they are all quite comparable in size. This means that to reduce the overall uncertainty considerably, one needs to reduce all of them, and they all have very different physics origins and difficulties – from a better understanding of the incoming parton densities and a better knowledge of the strong coupling constant, to higher-order QCD or electroweak effects and effects related to heavy particles in virtual loops. Computing power can be a limiting factor for certain calculations, so making things numerically more efficient is also important. One of the main goals of the coming years will be the calculation of 2 → 3 scattering processes at the LHC at next-to-next-to-leading order (NNLO) in QCD. For instance, a milestone will be the calculation of top-pair production in association with a Higgs boson at that level of accuracy. This is the process in which we can most directly measure the top-Yukawa coupling. The importance of this measurement can’t be overstressed. While the big discovery at the LHC so far is the Higgs boson, one should also remember that the Yukawa interaction is a new type of fundamental interaction, proportional to the mass of the particle – just like gravity, and yet so different from gravity. For some calculations NNLO is already enough in terms of perturbative precision; going to N3LO doesn’t really add much yet. But there are a few cases where it helps already, such as super-clean Drell–Yan processes.

Is there a level of precision of LHC measurements beyond which indirect searches for new physics are no longer fruitful?

We will never rule out precision measurements as a route to search for new physics. We can always extend the reach and enhance the sensitivity of indirect searches. By increasing precision, we are exploring deeper in the ultraviolet region, where we can start to become sensitive to states exchanged in the loops that are more and more heavy. There is a limit, but we are very far from it. It’s like looking with a better and better microscope: the better the resolution, the more one can explore. However, the experimental precision has to go hand in hand with theoretical precision, and this is where the real challenge for phenomenologists lies. Of course, if you have a super precise measurement but no theory prediction, or vice versa, then you can’t do much with it. With the Higgs boson I am confident that the theory calculations will not be the deal breaker. We will eventually hit the wall in terms of experimental precision, but you can’t put a figure on where this will happen. Until you see a deviation you are never really done.

How would you characterise the state of particle physics today?

When I entered the field as a student, there were high expectations that supersymmetry would be discovered quickly at the LHC. Now things have turned out to be different, but this is what makes it exciting and challenging – even more so, because the same mysteries are still there. We have big, fundamental questions. We have hints from theory, from experiments. We have a powerful, multi-purpose machine – the LHC – that is only just getting started and will provide much more data. Of course, expectations like the quick discovery of supersymmetry have not been fulfilled, but nature is how it is. I think that progress in physics is driven by experiments. We have beautiful exceptions where progress comes from theory, like general relativity, or the postulation of the mechanism for electroweak symmetry breaking. When I think about the Higgs mechanism, I am still astonished that such a simple and powerful idea postulated in 1964 turns out to be realised in nature. But these cases, where theory precedes experiments, are the exception not the rule. In most cases progress in physics comes from observations. After all, it is a natural science, it is not mathematics.

There are some questions that are really tough, and that we may never see answered. But with the LHC there are many other smaller questions we certainly can address, such as understanding the proton structure or studying the interaction potential between nucleons and strange baryons, which is relevant to understanding the physics of neutron stars; answering these still advances knowledge. The brightest minds are attracted to the biggest problems, and this will always draw young researchers into the field.

Is naturalness a good guiding force in fundamental research?

Yes. We have plenty of examples where naturalness, in the sense of a quadratic sensitivity to an unknown ultraviolet scale, led to postulating a new particle: the self-energy of the electron (leading to the positron), the charged–neutral pion mass difference (leading to the rho meson) and the kaon transition rates and mixing, which led to the postulation of the charm quark in 1970, before its direct discovery in 1974 at SLAC and Brookhaven. In everyday life we constantly assume naturalness, so yes, it is puzzling that the Higgs mass appears to be fine-tuned. Certainly, there is a lot we still don’t understand here, but I would not give up on naturalness, at least not that easily. In the case of the electroweak naturalness problem, it is clear that any solution, such as supersymmetry or compositeness, will also leave an imprint on the Higgs couplings. So the LHC can, in principle, tell us about naturalness even if we do not discover new physics directly; we just have to measure very precisely whether the Higgs-boson couplings align on a straight line in the mass-versus-coupling plane.
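Schematically (a textbook estimate, not a statement from the interview), the electroweak version of the problem is that the dominant top-quark loop shifts the Higgs mass squared by

    \delta m_H^2 \simeq -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2,

where y_t is the top Yukawa coupling and Λ is the scale up to which the Standard Model is assumed to hold. For Λ anywhere near the Planck scale, the bare mass must cancel this correction to dozens of decimal places to leave the observed m_H ≈ 125 GeV.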

The presence of dark matter is overwhelming in the universe and it is embarrassing that we know little to nothing about its nature

Which collider should follow the LHC?

That is the billion-dollar question – I mean, the 25 billion-dollar question! To me one should go for the machines that explore as much as possible the new energy frontier, namely a 100 TeV hadron collider. It is a compromise between what we might be able to achieve from a machine-building/accelerator/engineering point of view and really exploring a new frontier. For instance, at a 100 TeV machine one can measure the Higgs self-coupling, which is intimately connected with the Higgs potential and to the profound question of the stability of the vacuum.

Which open question would you most like to see answered during your career?

Probably the nature of dark matter. The presence of dark matter is overwhelming in the universe and it is embarrassing that we know little to nothing about its nature and properties. There are many exciting possibilities, ranging from the lightest neutral states in new-physics models to a non-particle-like interpretation, like black holes. Either way, an answer to this question would be an incredible breakthrough.

How to find a Higgs boson

How to Find a Higgs Boson

Finding Higgs bosons can seem esoteric to the uninitiated. The spouse of a colleague of mine has such trouble describing what their partner does that they read from a card in the event that they are questioned on the subject. Do you experience similar difficulties in describing what you do to loved ones? If so, then Ivo van Vulpen’s book How to find a Higgs boson may provide you with an ideal gift opportunity.

Readers will feel like they are talking physics over a drink with van Vulpen, who is a lecturer at the University of Amsterdam and a member of the ATLAS collaboration. Originally published as De melodie van de natuur, the book’s Dutch origins are unmistakable. We read about Hans Lippershey’s lenses, Antonie van Leeuwenhoek’s microbiology, Antonius van den Broek’s association of charge with the number of electrons in an atom, and even Erik Verlinde’s theory of gravity as an emergent entropic force. Though the Higgs is dangled at the end of chapters as a carrot to keep the reader reading, van Vulpen’s text isn’t an airy pamphlet cashing in on the 2012 discovery, but a realistic representation of what it’s like to be a particle physicist. When he counsels budding scientists to equip themselves better than the North Pole explorer who sets out with a Hugo Boss suit, a cheese slicer and a bicycle, he tells us as much about himself as about what it’s like to be a physicist.

Van Vulpen is a truth teller who isn’t afraid to dent the romantic image of serene progress orchestrated by a parade of geniuses. 9,999 out of every 10,000 predictions from “formula whisperers” (theorists) turn out to be complete hogwash, he writes, in the English translation by David McKay. Sociological realities such as “mixed CMS–ATLAS” couples temper the physics, which is unabashedly challenging and unvarnished. The book boasts a particularly lucid and intelligible description of particle detectors for the general reader, and has a nice focus on applications: particle accelerators are discussed in relation to the “colour X-rays” of the Medipix project; spin in the context of MRI; radioactivity with reference to locating blocked arteries; and antimatter in the context of PET scans. Key ideas are brought to life in cartoons by Serena Oggero, formerly of the LHCb collaboration.

The weak interaction is like a dog on an attometre-long chain.

Attentive readers will occasionally be frustrated. For example, despite a stated aim of the book being to fight “formulaphobia”, Bohr’s famous recipe for energy levels lacks the crucial minus sign just a few lines before a listing of –3.6 eV (as opposed to –13.6 eV) for the energy of the ground state. Van Vulpen compares the beauty seen by physicists in equations to the beauty glimpsed by musicians as they read sheet music, but then prints Einstein’s field equations with half the tensor indices missing. But to quibble about typos in the English translation would be to miss the point of the book, which is to allow readers “to impress friends over a drink,” and talk physics “next time you’re in a bar”. Van Vulpen’s writing is always entertaining, but never condescending. Filled with amusing but perceptive one-liners, the book is perfectly calibrated for readers who don’t usually enjoy science. Life in a civilisation that evolved before supernovas would have no cutlery, he observes. Neutrinos are the David Bowie of particles. The weak interaction is like a dog on an attometre-long chain.
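For reference, the formula at issue (a standard result, quoted here independently of the book) gives the hydrogen energy levels as

    E_n = -\frac{13.6\ \mathrm{eV}}{n^{2}},

so the n = 1 ground state sits at –13.6 eV: drop the minus sign and the atom no longer binds; drop the leading digit and you get the –3.6 eV quoted above.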

This book could be the perfect gift for a curious spouse. But beware: fielding questions on the excellent last chapter, which takes in supersymmetry, SO(10), and millimetre-scale extra dimensions, may require some revision.

Heavy-flavour highlights from Beauty 2020

ATLAS, CMS and LHCb results

The international conference devoted to b-hadron physics at frontier machines, Beauty 2020, took place from 21 to 24 September, hosted virtually by Kavli IPMU, University of Tokyo. This year’s edition, the 19th in the series, attracted around 350 registrants, significantly more than have attended physical Beauty conferences in the past. Two days were devoted to parallel sessions, a change in approach necessitated by the online format, stimulating lively discussion. There were 64 invited talks, of which 13 were overviews given by theorists.

Studies of beauty hadrons have great sensitivity to possible physics beyond the Standard Model (SM), as was stressed by Gino Isidori (University of Zurich) in the opening talk of the conference. Possible lepton-universality anomalies that have emerged from analyses of decays into pairs of leptons and accompanying hadrons are particularly tantalising, as they show significant deviations from the SM in a manner that could be explained by the existence of new particles such as leptoquarks or Z′ bosons. We will know much more when LHCb releases measurements from the updated analysis of the full Run-2 data set. In the meantime, the combined results from ATLAS, CMS and LHCb for the branching ratio of the ultra-rare decay Bs → μ⁺μ⁻ generated much discussion. This final state is produced only a few times in every billion Bs decays, but is now measured to a remarkable precision of 13%. Intriguingly, the observed value of the branching ratio lies two standard deviations below the SM prediction (see “Ultra-rare” figure) – an effect that some commentators have noted could be driven by the same new particles invoked to explain the other flavour anomalies.

Recent impressive results were shown in the field of CP violation. LHCb presented the first ever observation of time-dependent CP violation in the Bs system – a phenomenon that had eluded previous experiments on account of the very fast (a rate of about 3 × 10¹² Hz) Bs oscillations and inadequate sample sizes. In addition, new LHCb results were shown for the CP-violating phase γ. The most precise of these comes from an analysis that isolates B± → DK± decays followed by D → KS π⁺π⁻ decays, with the distributions of the final-state particles compared depending on whether they originate from B⁻ or B⁺ mesons. This analysis is based on the full Run 1 and Run 2 data sets and constrains γ to a precision of five degrees, which from this single analysis alone represents around a four-fold improvement compared to when the LHC began operation. Further improvements are expected over the coming years.

Participants were eager to learn about the progress of the SuperKEKB accelerator and the Belle II experiment. SuperKEKB is now operating at higher luminosity than any previous electron–positron machine, and the data set collected by Belle II (of the order of 100 fb⁻¹) is already sufficient to demonstrate the capabilities of the detector and to allow important early physics studies, which were shown during the week. Belle II has superior performance to the first-generation B-factory experiments, BaBar and Belle, in areas such as flavour tagging and proper-time resolution, and will collect around 50 times their integrated luminosity. By the end of the decade Belle II will have accumulated 50 ab⁻¹ of data, from which many precise and exciting physics measurements are expected.

Recent impressive results were shown in the field of CP violation

Studies of kaon decays provide important insights into flavour physics that are complementary to those obtained from b-hadrons. The NA62 collaboration presented its updated branching ratio for the ultra-rare decay K⁺ → π⁺νν̄, which is predicted to be around 10⁻¹⁰ in the SM. The data set is now sufficiently large to see a signal with a significance of more than three standard deviations. Future running is planned to allow a measurement with 10–20% precision, which will provide a powerful test of the SM prediction (CERN Courier September/October 2020 p9).

The concluding plenary session focused on the future of flavour physics. The LHCb experiment is currently being upgraded, and a further upgrade is foreseen at the end of the decade. In parallel, the upgrades of ATLAS and CMS will increase their capabilities for beauty studies. In the electron–positron domain, Belle II will continue to accumulate data, and there is the exciting possibility of a super-tau-charm factory, situated in either China or Russia, which would collect very large data sets at lower energies. These prospects were surveyed in the conference summary talk by Phillip Urquijo (University of Melbourne), who stressed the importance of exploiting these ongoing and future facilities to the maximum. Flavour studies have a bright future, and they are sure to retain a central role in our search for physics beyond the SM.

Strong interest in feeble interactions

Searches for axion-like particles

Since the discovery of the Higgs boson in 2012, great progress has been made in our understanding of the Standard Model (SM) and the prospects for the discovery of new physics beyond it. Despite excellent advances in Higgs-sector measurements, searches for WIMP dark matter and the exploration of very rare processes in the flavour realm, no unambiguous signals of new fundamental physics have been seen. This is the reason behind the explosion of interest in feebly interacting particles (FIPs) over the past decade or so.

The inaugural FIPs 2020 workshop, hosted online by CERN from 31 August to 4 September, convened almost 200 physicists from around the world. Structured around the four “portals” that may link SM particles and fields to a rich dark sector – axions, dark photons, dark scalars and heavy neutral leptons – the workshop highlighted the synergies and complementarities among a great variety of experimental facilities, and called for close collaboration across different physics communities.

Today, conventional experimental efforts are driven by arguments based on the naturalness of the electroweak scale. They result in searches for new particles with sizeable couplings to the SM, and masses near the electroweak scale. FIPs represent an alternative paradigm to the traditional beyond-the-SM physics explored at the LHC. With masses below the electroweak scale, FIPs could belong to a rich dark sector and answer many open questions in particle physics (see “Four portals” figure). Diverse searches using proton beams (CERN and Fermilab), kaon beams (CERN and J-PARC), neutrino beams (J-PARC and Fermilab) and muon beams (PSI) today join more idiosyncratic experiments across the globe in a worldwide search for FIPs.

FIPs can arise from feeble couplings between new physics and SM particles and fields. These couplings may involve a small dimensionless coupling constant, or a “dimensionful” scale larger than that of the process being studied, defined by a higher-dimension operator that mediates the interaction. The smallness of the couplings can be due to an approximate symmetry that is only slightly broken, or to a large mass hierarchy between particles, as the absence of new-physics signals from direct and indirect searches seems to suggest.

A selection of open questions

Take the axion, for example. This is the particle that may be responsible for the conservation of charge–parity symmetry in strong interactions. It may also constitute a fraction or the entirety of dark matter, or explain the hierarchical masses and mixings of the SM fermions – the flavour puzzle.

Or take dark photons or dark Z′ bosons, both examples of new vector gauge bosons. Such particles are associated with extensions of the SM gauge group, and, in addition to indicating new forces beyond the four we know, could lead to evidence of dark-matter candidates with thermal origins and masses in the MeV to GeV range.

Exotic Higgs bosons could also have been responsible for cosmological inflation

Then there are exotic Higgs bosons. Light dark scalar or pseudoscalar particles related to the SM Higgs may provide novel ways of addressing the hierarchy problem, in which the Higgs mass can be stabilised dynamically via the time evolution of a so-called “relaxion” field. They could also have been responsible for cosmological inflation.

Finally, consider right-handed neutrinos, often referred to as sterile neutrinos or heavy neutral leptons, which could account for the origin of the tiny, nearly-degenerate masses of the neutrinos of the SM and their oscillations, as well as providing a mechanism for our universe’s matter–antimatter asymmetry.

Scientific diversity

No single experimental approach can cover the large parameter space of masses and couplings that FIPs models allow. The interconnections between open questions require that we construct a diverse research programme incorporating accelerator physics, dark-matter direct detection, cosmology, astrophysics and precision atomic experiments, with strong theoretical involvement. The breadth of searches for axions or axion-like particles (ALPs) is a good indication of the growing interest in FIPs (see “Scaling the ALPs” figure). Experimental efforts here span particle and astroparticle physics. In the coming years, helioscopes, which aim to detect solar axions via their conversion into photons (X-rays) in a strong magnetic field, will improve sensitivity across more than 10 orders of magnitude in mass in the sub-eV range. Haloscopes, which convert axions into visible photons inside a resonant microwave cavity placed in a strong magnetic field, will complement this quest by increasing the sensitivity to small couplings by six orders of magnitude, down to the theoretically motivated gold band in a mass region where the axion can be a dark-matter candidate. Accelerator-based experiments, meanwhile, can probe the strongly motivated QCD scale (MeV–GeV) and beyond for larger couplings. All these results will be complemented by lively theoretical activity aimed at interpreting astrophysical signals within axion and ALP models.

FIPs 2020 triggered lively discussions that will continue in the coming months via topical meetings on different subjects. Topics that attracted particular interest across communities included possible ways of comparing results from direct-detection dark-matter experiments in the MeV–GeV range with those obtained at extracted-beam-line and collider experiments; the connection between right-handed-neutrino properties and active-neutrino parameters; and the interpretation of astrophysical and cosmological bounds, which often dominate the constraints on each of the four portals.

The next FIPs workshop will take place at CERN next year.

Neutrinos for peace

The PROSPECT neutrino detector

The first nuclear-weapons test shook the desert in New Mexico 75 years ago. Weeks later, Hiroshima and Nagasaki were obliterated. So far, these two Japanese cities have been the only ones to suffer such a fate. Neutrinos can help to ensure that no other city has to be added to this dreadful list.

At the height of the arms race between the US and the USSR, stockpiles of nuclear weapons exceeded 50,000 warheads, with the majority being thermonuclear designs vastly more destructive than the fission bombs used in World War II. Significant reductions in global nuclear stockpiles followed the end of the Cold War, but the US and Russia still have about 12,500 nuclear weapons in total, and the other seven nuclear-armed nations have about 1500. Today, the politics of non-proliferation is once again tense and unpredictable. New nuclear security challenges have appeared, often from unexpected actors, as a result of leadership changes on both sides of the table. Nuclear arms races and the dissolution of arms-control treaties have yet again become a real possibility. A regional nuclear war involving just 1% of the global arsenal would cause a massive loss of life, trigger climate effects leading to crop failures and jeopardise the food supply of a billion people. Until we achieve global disarmament, nuclear non-proliferation efforts and arms control are still the most effective tools for nuclear security.

Not a bang but a whimper

The story of the neutrino is closely tied to nuclear weapons. The first serious proposal to detect the particle hypothesised by Pauli, put forward by Clyde Cowan and Frederick Reines in the early 1950s, was to use a nuclear explosion as the source (see “Daring experiment” figure). Inverse beta decay, whereby an electron-antineutrino strikes a free proton and transforms it into a neutron and a positron, was to be the detection reaction. The proposal was approved in 1952 as an addition to an already planned atmospheric nuclear-weapons test. However, while preparing for this experiment, Cowan and Reines realised that by capturing the neutron on a cadmium nucleus, and observing the delayed coincidence between the positron and this neutron, they could use the lower, but steady flux of neutrinos from a nuclear reactor instead (see “First detection” figure). This technique is still used today, but with gadolinium or lithium in place of cadmium.
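In symbols (standard notation, not reproduced from the original proposal), the detection chain reads

    \bar{\nu}_e + p \to e^+ + n \quad (\text{prompt positron signal}), \qquad n + {}^{113}\mathrm{Cd} \to {}^{114}\mathrm{Cd} + \gamma\text{'s} \quad (\text{delayed capture signal}),

with the microsecond-scale gap between the two flashes providing the coincidence that suppresses backgrounds.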

Proposal to discover particles using a nuclear explosion

The P reactor at the Savannah River site in South Carolina, which had been built and used to make plutonium and tritium for nuclear weapons, eventually hosted the experiment that first detected the neutrino, in 1956. Experiments testing the properties of the neutrino, including oscillation searches, continued there until the P reactor was shut down in 1988.

Neutrinos are not produced in nuclear fission itself, but by the beta decays of neutron-rich fission fragments – on average about six per fission. In a typical reactor fuelled by natural or low-enriched uranium, essentially all fissions initially come from uranium-235. During operation a significant number of neutrons are absorbed by uranium-238, which is far more abundant, forming uranium-239, which after two beta decays becomes plutonium-239. Plutonium-239 eventually contributes about 40% of the fissions, and hence of the energy production, in a commercial reactor. It is also the isotope used in nuclear weapons.
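Written out (standard nuclear data, added here for illustration), the breeding chain is

    {}^{238}\mathrm{U}(n,\gamma)\,{}^{239}\mathrm{U} \;\xrightarrow{\beta^-}\; {}^{239}\mathrm{Np} \;\xrightarrow{\beta^-}\; {}^{239}\mathrm{Pu},

with half-lives of roughly 23 minutes and 2.4 days for the two beta decays, so freshly bred plutonium-239 accumulates in the core within days.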

The dual-use nature of reactors is at the crux of nuclear non-proliferation. What distinguishes a plutonium-production reactor from a regular reactor producing electricity is whether it is operated in such a way that the plutonium can be taken out of the reactor core before it deteriorates and becomes difficult to use in weapons applications. A reactor with a low content of plutonium-239 makes more and higher energy neutrinos than one rich in plutonium-239.

Lev Mikaelyan and Alexander Borovoi, from the Kurchatov Institute in Moscow, realised that neutrino emissions can be used to infer the power and plutonium content of a reactor. In a series of trailblazing experiments at the Rovno nuclear power plant in the 1980s and early 1990s, their group demonstrated that a tonne-scale underground neutrino detector situated 10 to 20 metres from a reactor can indeed track its power and plutonium content.

The significant drawback of neutrino detectors in the 1980s was that they needed to be situated underground, beneath a substantial overburden of rock, to shield them from cosmic rays. This greatly limited potential deployment sites. There was a series of application-related experiments – notably the successful SONGS experiment conducted by researchers at Lawrence Livermore National Laboratory, which aimed to reduce cost and improve the robustness and remote operation of neutrino detectors – but all of these detectors still needed shielding.

From cadmium to gadolinium

Synergies with fundamental physics grew in the 1990s, when the evidence for neutrino oscillations was becoming impossible to ignore. With the range of potential oscillations frequencies narrowing, the Palo Verde and Chooz reactor experiments placed multi-tonne detectors about 1 km from nuclear reactors, and sought to measure the relatively small θ13 parameter of the neutrino mixing matrix, which expresses the mixing between electron neutrinos and the third neutrino mass eigenstate. Both experiments used large amounts of liquid organic scintillator doped with gadolinium. The goal was to tag antineutrino events by capturing the neutrons on gadolinium, rather than the cadmium used by Reines and Cowan. Gadolinium produces 8 MeV of gamma rays upon de-excitation after a neutron capture. As it has an enormous neutron-capture cross section, even small amounts greatly enhance an experiment’s ability to identify neutrons.
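The tagging logic itself is simple enough to sketch. The toy filter below (illustrative Python; the pulse list, energy windows and timing cuts are all invented for the example, not any experiment’s real selection) pairs a prompt positron-like pulse with a delayed high-energy capture pulse inside a fixed time window:

    # Toy delayed-coincidence filter: pair a prompt positron-like pulse
    # with a delayed ~8 MeV neutron-capture pulse (gadolinium tag).
    # Each pulse is (time in microseconds, energy in MeV); values invented.
    pulses = [(0.0, 4.1), (28.0, 8.0), (310.0, 2.2), (900.0, 7.9)]

    PROMPT_E = (1.0, 8.0)     # assumed positron energy window
    DELAYED_E = (6.0, 10.0)   # assumed Gd-capture energy window
    WINDOW_US = (1.0, 100.0)  # assumed capture-time window after prompt

    candidates = []
    for i, (t1, e1) in enumerate(pulses):
        if not PROMPT_E[0] <= e1 <= PROMPT_E[1]:
            continue
        for t2, e2 in pulses[i + 1:]:
            dt = t2 - t1
            if dt > WINDOW_US[1]:
                break  # pulses are time-ordered, so later ones are too late
            if dt >= WINDOW_US[0] and DELAYED_E[0] <= e2 <= DELAYED_E[1]:
                candidates.append((t1, t2))  # antineutrino candidate pair

    print(candidates)  # -> [(0.0, 28.0)] for the toy values above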

Delayed coincidence detection scheme

Eventually, neutrino oscillations became an accepted fact, redoubling the interest in measuring θ13. This resulted in three new experiments: Double Chooz in France, RENO in South Korea, and Daya Bay in China. Learning lessons from Palo Verde and Chooz, the experiments successfully measured θ13 more precisely than any other neutrino mixing parameter. A spin-off from the Double Chooz experiment was the Nucifer detector (see “Purpose driven” figure), which demonstrated the operation of a robust sub-tonne-scale detector designed with reactor-monitoring missions in mind, in line with requirements formulated at a 2008 workshop held by the International Atomic Energy Agency (IAEA). However, Nucifer still needed a significant overburden.

In 2011, however, shortly before the experiments established that θ13 is not zero, fundamental research once again galvanised the development of detector technology for reactor monitoring. In the run-up to the Double Chooz experiment, a group at Saclay started to re-evaluate the predictions for reactor neutrino fluxes – then and now based on measurements at the Institut Laue-Langevin in the 1980s – and found to their surprise that the reactor flux prediction came out 6% higher than before. Given that all prior experiments were in agreement with the old flux predictions, neutrinos were missing. This “reactor-antineutrino anomaly” persists to this day. A sterile neutrino with a mass of about 1 eV would be a simple explanation. This mass range has been suggested by experiments with accelerator neutrinos, most notably LSND and MiniBooNE, though it conflicts with predictions that muon neutrinos should oscillate into such a sterile neutrino, which experiments such as MINOS+ have failed to confirm.

To directly observe the high-frequency oscillations of an eV-scale sterile neutrino you need to get within about 10 m of the reactor. At this distance, backgrounds from the operation of the reactor are often non-negligible, and no overburden is possible – the same conditions a detector on a safeguards mission would encounter.

From gadolinium to lithium

Around half a dozen experimental groups are chasing sterile neutrinos using small detectors close to reactors. Some of the most advanced designs use fine spatial segmentation to reject backgrounds, and replace gadolinium with lithium-6 as the nucleus to capture and tag neutrons. Lithium has the advantage that upon neutron capture it produces an alpha particle and a triton rather than a handful of photons, resulting in a very well localised tag. In a small detector this improves event containment and thus efficiency, and also helps constrain event topology.
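The capture reaction in question (standard nuclear data, quoted for illustration) is

    n + {}^{6}\mathrm{Li} \to \alpha + {}^{3}\mathrm{H} + 4.78\ \mathrm{MeV},

and because the alpha particle and triton are heavy charged particles that stop within a few tens of micrometres, the tag is intrinsically point-like.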

Following the lithium and finely segmented technical paths, the PROSPECT collaboration and the CHANDLER collaboration (see “Rapid deployment” figure), in which I participate, independently reported the detection of a neutrino spectrum with minimal overburden and high detection efficiency in 2018. This is a major milestone in making non-proliferation applications a reality, since it is the first demonstration of the technology needed for tonne-scale detectors capable of monitoring the plutonium content of a nuclear reactor that could be universally deployed without the need for special site preparation.

The story of the neutrino is closely tied to nuclear weapons

The main difference between the two detectors is that PROSPECT, which reported its near-final sterile neutrino limit at the Neutrino 2020 conference, uses a traditional approach with liquid scintillator, whereas CHANDLER, currently an R&D project, uses plastic scintillator. The use of plastic scintillator allows the deployment time-frame to be shortened to less than 24 hours. On the other hand, liquid scintillator allows the exploitation of pulse-shape discrimination to reject cosmic-ray neutron backgrounds, allowing PROSPECT to achieve a much better signal-to-background ratio than any plastic detector to date. Active R&D is seeking to improve topological reconstruction in plastic detectors and imbue them with pulse-shape discrimination. In addition, a number of safeguard-specific detector R&D experiments have successfully detected reactor neutrinos using plastic scintillator in conjunction with gadolinium. In the UK, the VIDARR collaboration has seen neutrinos from the Wylfa reactor, and in Japan the PANDA collaboration successfully operated a truck-mounted detector.

In parallel to detector development, studies are being undertaken to understand how reactor monitoring with neutrinos would impact nuclear security and support non-proliferation objectives. Two very relevant situations being studied are the 2015 Iran Deal – the Joint Comprehensive Plan of Action (JCPOA) – and verification concepts for a future agreement with North Korea.

Nuclear diplomacy

One of the sticking points in negotiating the 2015 Iran deal was the future of the IR-40 reactor, which was being constructed at Arak, an industrial city in central Iran. The IR-40 was planned to be a 40 MW reactor fuelled by natural uranium and moderated with heavy water, with a stated purpose of isotope production for medical and scientific use. The choice of fuel and moderator is interesting, as it meshes with Iranian capabilities and would serve the stated purpose well and be cost effective, since no uranium enrichment is needed. Equally, however, if one were to design a plutonium-production reactor for a nascent weapons programme, this combination would be one of the top choices: it does not require uranium enrichment, and with the stated reactor power would result in the annual production of about 10 kg of rather pure plutonium-239. This matches the critical mass of a bare plutonium-239 sphere, and it is known that as little as 4 kg can be used to make an effective nuclear explosive. Within the JCPOA it was eventually agreed that the IR-40 could be redesigned, down-rated in power to 20 MW and the new core fuelled with 3.7% enriched fuel, reducing the annual plutonium production by a factor of six.

A spin-off from Double Chooz

A 10 to 20 tonne neutrino detector 20 m from the reactor would be able to measure its plutonium content with a precision of 1 to 2 kg. This would be particularly relevant in the so-called N-th month scenario, which models a potential crisis in Iran based on events in North Korea in June 1994. During the 1994 crisis, which risked precipitating war with the US, the nuclear reactor at Yongbyon was shut down, and enough spent fuel rods removed to make several bombs. IAEA protocols were sternly tested. The organisation’s conventional safeguards for operating reactors consist of containment and surveillance – seals, for example, to prevent the unnoticed opening of the reactor, and cameras to record the movement of fuel, most crucially during reactor shutdowns. In the N-th month scenario, the IR-40 reactor, in its pre-JCPOA configuration (40 MW, rather than the renegotiated power of 20 MW), runs under full safeguards for N–1 months. In month N, a planned reactor shutdown takes place. At this point the reactor would contain 8 kg of weapons-grade plutonium. For unspecified reasons the safeguards are then interrupted. In month N+1, the reactor is restarted and full safeguards are restored. The question is: are the 8 kg of plutonium still in the reactor core, or has the core been replaced with fresh fuel and the 8 kg of plutonium illicitly diverted?

The disruption of safeguards could either be due to equipment failure – a more frequent event than one might assume – or due to events in the political realm ranging from a minor unpleasantness to a full-throttle dash for a nuclear weapon. Distinguishing the two scenarios would be a matter of utmost urgency. According to an analysis including realistic backgrounds extrapolated from the PROSPECT results, this could be done in 8 to 12 weeks with a neutrino detector.

Neutrino detectors could be effective in addressing the safeguard challenges presented by advanced reactors

No conventional technology can match this performance without shutting the reactor down and sampling a significant fraction of the highly radioactive fuel. That approach would be extremely disruptive to reactor operations and would put inspectors and plant operators at risk of radiation exposure. Even if the host country were to agree in principle, developing a safe plan and having all sides agree on its feasibility would take months at the very least, creating dangerous ambiguity in the interim and giving hardliners on both sides time to push for an escalation of the crisis. The conventional approach would also be significantly more expensive than a neutrino detector.

New negotiating gambit

The June 1994 crisis at Yongbyon still overshadows negotiations with North Korea, since, as far as North Korea is concerned, it discredited the IAEA. Both during the crisis, and subsequently, international attempts at non-proliferation failed to prevent North Korea from acquiring nuclear weapons – its first nuclear-weapons test took place in 2006 – or even to constrain its progress towards a small-scale operational nuclear force. New approaches are therefore needed, and recent attempts by the US to achieve progress on this issue prompted an international group of about 20 neutrino experts from Europe, the US, Russia, South Korea, China and Japan to develop specific deployment scenarios for neutrino detectors at the Yongbyon nuclear complex.

The main concern is the 5 MWe reactor, which, though named for its electrical power, has a thermal power of 20 MW. This gas-cooled, graphite-moderated reactor, fuelled with natural uranium, has been the source of all of North Korea’s plutonium. The specifics of this reactor, and in particular its fuel cladding, which makes prolonged wet storage of irradiated fuel impossible, represent such a proliferation risk that anything short of a monitored shutdown followed by complete dismantlement appears inadequate. To safeguard against the regime reneging on such a deal, were it to be agreed, a relatively modest tonne-scale neutrino detector right outside the reactor building could detect a powering up of this reactor within a day.
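The one-day claim can be sanity-checked with the same toy rate model as above; the 20 m standoff, 50% efficiency and 10 events/day background are assumptions chosen for illustration, not numbers from the expert group’s study.

```python
import math

# Could a ~1 tonne detector just outside the reactor building notice the
# 5 MWe reactor (20 MW thermal) powering up within a day? The standoff,
# efficiency and background below are illustrative assumptions.

def ibd_events_per_day(power_mw, distance_m, mass_t, efficiency=0.5):
    fissions_per_s = power_mw * 1e6 / (200.0 * 1.602e-13)
    flux_factor = fissions_per_s / (4 * math.pi * (distance_m * 100.0) ** 2)
    return flux_factor * 6e-43 * 7e28 * mass_t * efficiency * 86400

signal = ibd_events_per_day(20, 20, 1.0)   # roughly 20 events per day
background = 10.0                          # assumed cosmogenic events/day
print(f"{signal:.0f} signal events, "
      f"~{signal / math.sqrt(background):.0f} sigma after one day")
```

Even this crude estimate gives a detection well beyond the 5-sigma level within the first day of operation.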

The MiniCHANDLER detector

North Korea is also constructing the Experimental Light Water Reactor at Yongbyon. A 150 MW water-moderated reactor running on low-enriched fuel, it would not be particularly well suited to plutonium production. Its design is not dissimilar to that of the much larger reactors used throughout the world to produce electricity, and it could help address the perennial lack of electricity that has limited the development and growth of the country’s economy. North Korea may wish to operate it indefinitely. A larger, 10 tonne neutrino detector could detect any irregularities during its refuelling – a tell-tale sign of non-civilian use of the reactor – on a timescale of three months, which is within the goals set by the IAEA.

In a different scenario, wherein the goal would be to monitor a total shutdown of all reactors at Yongbyon, it would be feasible to bury a Daya-Bay-style 50 tonne single-volume detector under Yak-san, a mountain about 2 km outside the perimeter of the nuclear installations (see “A different scenario” figure). The cost and deployment timescale would be more onerous than in the other scenarios.

At longer distances between reactor and detector, detector masses must increase to compensate for the inverse-square reduction in the reactor-neutrino flux. As cosmic-ray backgrounds remain constant, the detectors must also be deployed deep underground, beneath an overburden of several hundred metres of rock. To this end, the UK’s Science and Technology Facilities Council, the UK Atomic Weapons Establishment and the US Department of Energy are funding the WATCHMAN collaboration to pursue the construction of a multi-kilotonne water-Cherenkov detector at the Boulby mine in the UK, 20 km from the two reactors at Hartlepool. The goal is to demonstrate the ability to monitor the operational status of the reactors, which have a combined power of 3000 MW. In a use-case context this would translate to excluding the operation of an undeclared 10 to 20 MW reactor within a radius of a few kilometres, but no safeguards scenario has emerged where this would give a unique advantage.
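The required scaling is easy to quantify with the same toy model: holding the daily event count fixed, the detector mass must grow with the square of the distance. In the sketch below the 30% efficiency and 10 events/day target are, again, assumptions for illustration only.

```python
import math

# Detector mass needed to keep a fixed daily IBD count as the standoff
# distance grows. Same toy model as above; parameters are assumptions.

def mass_needed_tonnes(power_mw, distance_m, events_per_day=10, efficiency=0.3):
    fissions_per_s = power_mw * 1e6 / (200.0 * 1.602e-13)
    flux_factor = fissions_per_s / (4 * math.pi * (distance_m * 100.0) ** 2)
    per_tonne_per_day = flux_factor * 6e-43 * 7e28 * efficiency * 86400
    return events_per_day / per_tonne_per_day

# A 3000 MW source (Hartlepool-scale) viewed from increasing distances:
for d in (200, 2000, 20000):   # metres
    print(f"{d / 1000:5.1f} km -> {mass_needed_tonnes(3000, d):10,.1f} t")
```

The kilotonne scale that falls out at 20 km is consistent with the multi-kilotonne WATCHMAN design, although a real sensitivity estimate must also fold in the detector response, the background model and neutrino oscillations.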

Inverse-square scaling eventually breaks down at around 100 km, as at that distance the backgrounds from civilian reactors outshine any undeclared small reactor almost anywhere in the northern hemisphere. Small signals also rule out the use of neutrino detectors for nuclear-explosion monitoring, or for confirming that a suspicious seismic event was nuclear in origin, since conventional technologies are far more practical than the very large detectors that would be needed. A more promising future application of neutrino-detector technology is to meet the new challenges posed by advanced nuclear-reactor designs.

Advanced safeguards

The current safeguards regime relies on two key assumptions: that fuel comes in large, indivisible and individually identifiable units called “fuel assemblies”, and that power reactors need to be refuelled frequently. Most advanced reactor designs violate at least one of these assumptions. Fuel may come in thousands of small pebbles or be molten, and the coolant may not be transparent, in contrast to current designs, where water serves as moderator, coolant and storage medium in the first years after discharge. Either way, counting and identifying the fuel by serial number may be impossible. And unlike current power reactors, which are refuelled on a 12-to-18-month cycle, allowing the in-core fuel to be verified as well, advanced reactors may be refuelled only once in their lifetime.

Three 20 tonne neutrino detectors

Neutrino detectors would not be hampered by any of these novel features. Detailed simulations indicate that they could be effective in addressing the safeguard challenges presented by advanced reactors. Crucially, they would work in a very similar fashion for any of the new reactor designs.

In 2019 the US Department of Energy chartered and funded a study (which I co-chair) to determine the utility of the unique capabilities offered by neutrino detectors for nuclear-security and energy applications. The study includes investigators from US national laboratories and universities, and will engage and interview nuclear-security and policy experts within the Department of Energy, the State Department, NGOs, academia and international agencies such as the IAEA. The results are expected early in 2021. They should provide a good understanding of where neutrinos can play a role in current and future monitoring and verification agreements, and may help to guide neutrino detectors towards their first real-world applications.

The idea of using neutrinos to monitor reactors has been around for about 40 years. Only very recently, however, as a result of a surge of interest in sterile neutrinos, has detector technology become available that would be practical in real-world scenarios such as the JCPOA or a new North Korean nuclear agreement. The most likely initial application will be near-field reactor monitoring, with detectors inside the fence of the monitored facility as part of a regional nuclear deal. Such detectors will not be a panacea for all verification and monitoring needs, and can only be effective if there is sincere political will on both sides, but they do offer more room for creative diplomacy, and a technology that is robust against the kinds of political failures that have derailed past agreements.

CLIC lights the way for FLASH therapy

High-gradient accelerating structure

Technology developed for the proposed Compact Linear Collider (CLIC) at CERN is poised to make a novel cancer-radiotherapy facility a reality. Building on recently revived research from the 1970s, oncologists believe that ultrafast bursts of electrons damage tumours more than they damage healthy tissue. This “FLASH effect” could be realised by using high-gradient accelerator technology from CLIC to create a new facility at Switzerland’s Lausanne University Hospital (CHUV).

Traditional radiotherapy directs photon beams at the body from multiple angles to focus the radiation dose on tumours. More recently, hadron therapy has offered a further treatment modality: by tuning the energy of a beam of protons or ions so that they stop in the tumour, the particles deposit most of the radiation dose there (the so-called Bragg peak), comparatively sparing the surrounding healthy tissue. Both of these treatments deliver small doses of radiation to a patient over an extended period, whereas FLASH radiotherapy is thought to require at most three doses, each lasting less than 100 ms.
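For a sense of scale, compare the implied dose rates. The sketch below assumes a typical 10 Gy fraction and a conventional delivery rate of about 2 Gy per minute; both figures are illustrative assumptions rather than numbers from the CHUV programme.

```python
# Rough dose-rate comparison between conventional and FLASH delivery.
# The 10 Gy fraction and ~2 Gy/min conventional rate are assumed
# typical values, not figures from the CHUV programme.

dose_gy = 10.0
conventional_rate = 2.0 / 60.0   # ~2 Gy/min, expressed in Gy/s
flash_rate = dose_gy / 0.1       # the full dose in under 100 ms

print(f"conventional: {conventional_rate:.3f} Gy/s")
print(f"FLASH:        {flash_rate:.0f} Gy/s "
      f"(~{flash_rate / conventional_rate:,.0f}x faster)")
```

Delivering the full dose more than three orders of magnitude faster is what distinguishes the FLASH regime from conventional fractionated treatment.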

Look again

When the FLASH effect was first studied in the 1970s, it was assumed that all tissues suffer less damage when a dose is delivered ultrafast, regardless of whether they are healthy or tumorous. In 2014, however, CHUV researchers published a study of 200 mice in which some were given a single dose of 4.5 MeV gamma rays at a conventional therapy dose rate, while others received an equivalent dose at the much faster FLASH rate. The results showed clearly that while normal tissue was damaged significantly less by the ultrafast bursts, the damage to the tumour was the same for both dose rates. In 2019 CHUV applied the first FLASH treatment to a cancer patient, with similarly positive results: a 3.5 cm-diameter skin tumour completely disappeared under electrons from a 5.6 MeV linear accelerator, “with nearly no side effects”. The challenge now is to reach deeper tumours.

Now, using the high-gradient “X-band” radio-frequency cavity technology developed for CLIC, CHUV has teamed up with CERN to develop a facility that can produce electron beams with energies around 100 MeV, in order to reach tumour depths of up to 20 cm. The idea came about three years ago, when it was realised that CLIC technology was almost a perfect match for what CHUV was looking for: a compact, high-gradient accelerator that uses X-band technology to reach high energies over a short distance and that delivers the high beam current needed to treat a larger tumour volume.

“CLIC has the ability to accelerate a large amount of charge to get enough luminosity for physics studies,” explains Walter Wuensch of CERN, who heads the FLASH project at CERN. “People tend to focus on the accelerating gradient, but as important, or arguably more important, is the ability to control high-current, low-emittance beams.”

It really looks like it has the potential to be an important complement to existing radiation therapies

The first phase of the collaboration is nearing completion, with a conceptual design report, funded by CHUV, being prepared jointly by CERN and CHUV. The development and construction of the first facility, which would be housed at CHUV, is expected to cost around €25 million; CHUV aims to complete it within three years.

“The intention of CERN and the team is to be heavily involved in the process of getting the facility built and operating,” states Wuensch. “It really looks like it has the potential to be an important complement to existing radiation therapies.”

Cancer therapies have taken advantage of particle accelerators for many decades, with proton radiotherapy entering the scene in the 1990s. The CERN-based Proton-Ion Medical Machine Study, spawned by the TERA Foundation, led to the National Centre for Oncological Hadrontherapy (CNAO) in Italy and MedAustron in Austria, which have made significant progress in proton and ion therapy. FLASH radiotherapy would add electrons to the growing set of particle-therapy modalities.
