Finding Higgs bosons can seem esoteric to the uninitiated. The spouse of a colleague of mine has such trouble describing what their partner does that they read from a card whenever questioned on the subject. Do you experience similar difficulties in describing what you do to loved ones? If so, then Ivo van Vulpen’s book How to find a Higgs boson may provide you with an ideal gift opportunity.
Readers will feel like they are talking physics over a drink with van Vulpen, who is a lecturer at the University of Amsterdam and a member of the ATLAS collaboration. Originally published as De melodie van de natuur, the book’s Dutch origins are unmistakable. We read about Hans Lippershey’s lenses, Antonie van Leeuwenhoek’s microbiology, Antonius van den Broek’s association of charge with the number of electrons in an atom, and even Erik Verlinde’s theory of gravity as an emergent entropic force. Though the Higgs is dangled at the end of chapters as a carrot to get the reader to keep reading, van Vulpen’s text isn’t an airy pamphlet cashing in on the 2012 discovery, but a realistic representation of what it’s like to be a particle physicist. When he counsels budding scientists to equip themselves better than the North Pole explorer who sets out with a Hugo Boss suit, a cheese slicer and a bicycle, he tells us as much about himself as about what it’s like to be a physicist.
Van Vulpen is a truth teller who isn’t afraid to dent the romantic image of serene progress orchestrated by a parade of geniuses. He writes, in David McKay’s English translation, that 9999 out of every 10,000 predictions from “formula whisperers” (theorists) turn out to be complete hogwash. Sociological realities such as “mixed CMS–ATLAS” couples temper the physics, which is unabashedly challenging and unvarnished. The book boasts a particularly lucid and intelligible description of particle detectors for the general reader, and has a nice focus on applications. Particle accelerators are discussed in relation to the “colour X-rays” of the Medipix project. Spin in the context of MRI. Radioactivity with reference to locating blocked arteries. Antimatter in the context of PET scans. Key ideas are brought to life in cartoons by Serena Oggero, formerly of the LHCb collaboration.
The weak interaction is like a dog on an attometre-long chain.
Attentive readers will occasionally be frustrated. For example, despite a stated aim of the book being to fight “formulaphobia”, Bohr’s famous recipe for energy levels lacks the crucial minus sign just a few lines before a listing of –3.6 eV (as opposed to –13.6 eV) for the energy of the ground state. Van Vulpen compares the beauty seen by physicists in equations to the beauty glimpsed by musicians as they read sheet music, but then prints Einstein’s field equations with half the tensor indices missing. But to quibble about typos in the English translation would be to miss the point of the book, which is to allow readers “to impress friends over a drink,” and talk physics “next time you’re in a bar”. Van Vulpen’s writing is always entertaining, but never condescending. Filled with amusing but perceptive one-liners, the book is perfectly calibrated for readers who don’t usually enjoy science. Life in a civilisation that evolved before supernovas would have no cutlery, he observes. Neutrinos are the David Bowie of particles. The weak interaction is like a dog on an attometre-long chain.
This book could be the perfect gift for a curious spouse. But beware: fielding questions on the excellent last chapter, which takes in supersymmetry, SO(10), and millimetre-scale extra dimensions, may require some revision.
The international conference devoted to b-hadron physics at frontier machines, Beauty 2020, took place from 21 to 24 September, hosted virtually by Kavli IPMU, University of Tokyo. This year’s edition, the 19th in the series, attracted around 350 registrants, significantly more than have attended physical Beauty conferences in the past. Two days were devoted to parallel sessions, a change in approach necessitated by the online format, stimulating lively discussion. There were 64 invited talks, of which 13 were overviews given by theorists.
Studies of beauty hadrons have great sensitivity to possible physics beyond the Standard Model (SM), as was stressed by Gino Isidori (University of Zurich) in the opening talk of the conference. Possible lepton-universality anomalies that have emerged from analyses of decays into pairs of leptons and accompanying hadrons are particularly tantalising, as they show significant deviations from the SM in a manner that could be explained by the existence of new particles such as leptoquarks or Z′ bosons. We will know much more when LHCb releases measurements from the updated analysis of the full Run-2 data set. In the meantime, the combined results from ATLAS, CMS and LHCb for the branching ratio of the ultra-rare decay Bs→ μ+μ– generated much discussion. This final state is produced only a few times every billion Bs decays, but is now measured to a remarkable precision of 13%. Intriguingly, the observed value of the branching ratio lies two standard deviations below the SM prediction (see “Ultra-rare” figure) – an effect that some commentators have noted could be driven by the same new particles invoked to explain the other flavour anomalies.
Recent impressive results were shown in the field of CP violation. LHCb presented the first ever observation of time-dependent CP violation in the Bs system – a phenomenon that has eluded previous experiments on account of the very fast (about 3 × 10¹² Hz) Bs oscillations and inadequate sample sizes. In addition, new LHCb results were shown for the CP-violating phase γ. The most precise of these comes from an analysis that isolates B → DK decays followed by D → KSπ+π– decays, and compares the distributions of the final-state particles according to whether they originate from B– or B+ mesons. This analysis is based on the full Run 1 and Run 2 data sets and constrains γ to a precision of five degrees, which from this single analysis alone represents around a four-fold improvement compared to when the LHC began operation. Further improvements are expected over the coming years.
Participants were eager to learn about the progress of the SuperKEKB accelerator and Belle II experiment. SuperKEKB is now operating at higher luminosity than any previous electron–positron machine, and the data set collected by Belle II (of the order of 100 fb⁻¹) is already sufficient to demonstrate the capabilities of the detector and to allow for important early physics studies, which were shown during the week. Belle II has superior performance to the first-generation B-factory experiments, BaBar and Belle, in areas such as flavour tagging and proper-time resolution, and will collect around 50 times the integrated luminosity. By the end of the decade Belle II will have accumulated 50 ab⁻¹ of data, from which many precise and exciting physics measurements are expected.
Recent impressive results were shown in the field of CP violation
Studies of kaon decays provide important insights into flavour physics that are complementary to those obtained from b-hadrons. The NA62 collaboration presented its updated branching ratio for the ultra-rare decay K+ → π+νν̄, which is predicted to be around 10⁻¹⁰ in the SM. The data set is now sufficiently large to see a signal with a significance of more than three standard deviations. Future running is planned to allow a measurement to be made with a 10–20% precision, which will provide a powerful test of the SM prediction (CERN Courier September/October 2020 p9).
The concluding plenary session focused on the future of flavour physics. The LHCb experiment is currently being upgraded, and a further upgrade is foreseen at the end of the decade. In parallel, the upgrades of ATLAS and CMS will increase their capabilities for beauty studies. In the electron–positron domain, Belle II will continue to accumulate data, and there is the exciting possibility of a super-tau-charm factory, situated in either China or Russia, which will collect very large data sets at lower energies. These prospects were surveyed by Phillip Urquijo (University of Melbourne) in the summary talk of the conference, who stressed the importance of exploiting these ongoing and future facilities to the maximum. Flavour studies have a bright future, and they are sure to retain a central role in our search for physics beyond the SM.
Since the discovery of the Higgs boson in 2012, great progress has been made in our understanding of the Standard Model (SM) and the prospects for the discovery of new physics beyond it. Despite excellent advances in Higgs-sector measurements, searches for WIMP dark matter and exploration of very rare processes in the flavour realm, however, no unambiguous signals of new fundamental physics have been seen. This is the reason behind the explosion of interest in feebly interacting particles (FIPs) over the past decade or so.
The inaugural FIPs 2020 workshop, hosted online by CERN from 31 August to 4 September, convened almost 200 physicists from around the world. Structured around the four “portals” that may link SM particles and fields to a rich dark sector – axions, dark photons, dark scalars and heavy neutral leptons – the workshop highlighted the synergies and complementarities among a great variety of experimental facilities, and called for close collaboration across different physics communities.
Today, conventional experimental efforts are driven by arguments based on the naturalness of the electroweak scale. They result in searches for new particles with sizeable couplings to the SM, and masses near the electroweak scale. FIPs represent an alternative paradigm to the traditional beyond-the-SM physics explored at the LHC. With masses below the electroweak scale, FIPs could belong to a rich dark sector and answer many open questions in particle physics (see “Four portals” figure). Diverse searches using proton beams (CERN and Fermilab), kaon beams (CERN and J-PARC), neutrino beams (J-PARC and Fermilab) and muon beams (PSI) today join more idiosyncratic experiments across the globe in a worldwide search for FIPs.
FIPs can arise from the presence of feeble couplings in the interactions of new physics with SM particles and fields. These may be due to a dimensionless coupling constant or to a “dimensionful” scale, larger than that of the process being studied, which is defined by a higher-dimension operator that mediates the interaction. The smallness of these couplings can be due to the presence of an approximate symmetry that is only slightly broken, or to the presence of a large mass hierarchy between particles, as the absence of new-physics signals from direct and indirect searches seems to suggest.
Take the axion, for example. This is the particle that may be responsible for the conservation of charge–parity symmetry in strong interactions. It may also constitute a fraction or the entirety of dark matter, or explain the hierarchical masses and mixings of the SM fermions – the flavour puzzle.
Or take dark photons or dark Z′ bosons, both examples of new vector gauge bosons. Such particles are associated with extensions of the SM gauge group, and, in addition to indicating new forces beyond the four we know, could lead to evidence of dark-matter candidates with thermal origins and masses in the MeV to GeV range.
Exotic Higgs bosons could also have been responsible for cosmological inflation
Then there are exotic Higgs bosons. Light dark scalar or pseudoscalar particles related to the SM Higgs may provide novel ways of addressing the hierarchy problem, in which the Higgs mass can be stabilised dynamically via the time evolution of a so-called “relaxion” field. They could also have been responsible for cosmological inflation.
Finally, consider right-handed neutrinos, often referred to as sterile neutrinos or heavy neutral leptons, which could account for the origin of the tiny, nearly degenerate masses of the neutrinos of the SM and their oscillations, as well as providing a mechanism for our universe’s matter–antimatter asymmetry.
Scientific diversity
No single experimental approach can cover the large parameter space of masses and couplings that FIPs models allow. The interconnections between open questions require that we construct a diverse research programme incorporating accelerator physics, dark-matter direct detection, cosmology, astrophysics, and precision atomic experiments, with a strong theoretical involvement. The breadth of searches for axions or axion-like particles (ALPs) is a good indication of the growing interest in FIPs (see “Scaling the ALPs” figure). Experimental efforts here span particle and astroparticle physics. In the coming years, helioscopes, which aim to detect solar axions by their conversion into photons (X-rays) in a strong magnetic field, will improve the sensitivity across more than 10 orders of magnitude in mass in the sub-eV range. Haloscopes, which work by converting axions into visible photons inside a resonant microwave cavity placed inside a strong magnetic field, will complement this quest by increasing the sensitivity for small couplings by six orders of magnitude (down to the theoretically motivated gold band in a mass region where the axions can be a dark-matter candidate). Accelerator-based experiments, meanwhile, can probe the strongly motivated QCD scale (MeV–GeV) and beyond for larger couplings. All these results will be complemented by a lively theoretical activity aimed at interpreting astrophysical signals within axion and ALP models.
FIPs 2020 triggered lively discussions that will continue in the coming months via topical meetings on different subjects. Topics that motivated particular interest between communities included possible ways of comparing results from direct-detection dark-matter experiments in the MeV–GeV range against those obtained at extracted beam line and collider experiments; the connection between right-handed neutrino properties and active neutrino parameters; and the interpretation of astrophysical and cosmological bounds, which often overwhelm the interpretation of each of the four portals.
The next FIPs workshop will take place at CERN next year.
The first nuclear-weapons test shook the desert in New Mexico 75 years ago. Weeks later, Hiroshima and Nagasaki were obliterated. So far, these two Japanese cities have been the only ones to suffer such a fate. Neutrinos can help to ensure that no other city has to be added to this dreadful list.
At the height of the arms race between the US and the USSR, stockpiles of nuclear weapons exceeded 50,000 warheads, with the majority being thermonuclear designs vastly more destructive than the fission bombs used in World War II. Significant reductions in global nuclear stockpiles followed the end of the Cold War, but the US and Russia still have about 12,500 nuclear weapons in total, and the other seven nuclear-armed nations have about 1500. Today, the politics of non-proliferation is once again tense and unpredictable. New nuclear security challenges have appeared, often from unexpected actors, as a result of leadership changes on both sides of the table. Nuclear arms races and the dissolution of arms-control treaties have yet again become a real possibility. A regional nuclear war involving just 1% of the global arsenal would cause a massive loss of life, trigger climate effects leading to crop failures and jeopardise the food supply of a billion people. Until we achieve global disarmament, nuclear non-proliferation efforts and arms control are still the most effective tools for nuclear security.
Not a bang but a whimper
The story of the neutrino is closely tied to nuclear weapons. The first serious proposal to detect the particle hypothesised by Pauli, put forward by Clyde Cowan and Frederick Reines in the early 1950s, was to use a nuclear explosion as the source (see “Daring experiment” figure). Inverse beta decay, whereby an electron-antineutrino strikes a free proton and transforms it into a neutron and a positron, was to be the detection reaction. The proposal was approved in 1952 as an addition to an already planned atmospheric nuclear-weapons test. However, while preparing for this experiment, Cowan and Reines realised that by capturing the neutron on a cadmium nucleus, and observing the delayed coincidence between the positron and this neutron, they could use the lower, but steady flux of neutrinos from a nuclear reactor instead (see “First detection” figure). This technique is still used today, but with gadolinium or lithium in place of cadmium.
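In symbols, the inverse-beta-decay reaction that Cowan and Reines exploited, and that reactor-monitoring detectors still rely on, is

\[ \bar{\nu}_e + p \;\to\; e^{+} + n , \]

which proceeds only for antineutrino energies above a threshold of about 1.8 MeV; the prompt positron followed by the delayed neutron capture provides the coincidence tag.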
The P reactor at the Savannah River site in South Carolina, which had been built and used to make plutonium and tritium for nuclear weapons, eventually hosted the successful experiment that first detected the neutrino in 1956. Experiments testing the properties of the neutrino, including oscillation searches, continued there until 1988, when the P reactor was shut down.
Neutrinos are not produced in nuclear fission itself, but by the beta decays of neutron-rich fission fragments – on average about six per fission. In a typical reactor fuelled by natural uranium or low-enriched uranium, uranium-235 is the only fissile material present at start-up. During operation a significant number of neutrons are absorbed on uranium-238, which is far more abundant, leading to the formation of uranium-239, which after two beta decays becomes plutonium-239. Plutonium-239 eventually contributes to about 40% of the fissions, and hence energy production, in a commercial reactor. It is also the isotope used in nuclear weapons.
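The breeding chain that turns fertile uranium-238 into fissile plutonium-239 can be summarised as

\[ ^{238}\mathrm{U} + n \;\to\; {}^{239}\mathrm{U} \;\xrightarrow{\beta^-}\; {}^{239}\mathrm{Np} \;\xrightarrow{\beta^-}\; {}^{239}\mathrm{Pu} , \]

with the two beta decays having half-lives of roughly 23 minutes and 2.4 days, so the plutonium inventory builds up steadily while the reactor runs.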
The dual-use nature of reactors is at the crux of nuclear non-proliferation. What distinguishes a plutonium-production reactor from a regular reactor producing electricity is whether it is operated in such a way that the plutonium can be taken out of the reactor core before it deteriorates and becomes difficult to use in weapons applications. A reactor with a low content of plutonium-239 makes more and higher energy neutrinos than one rich in plutonium-239.
Lev Mikaelyan and Alexander Borovoi, from the Kurchatov Institute in Moscow, realised that neutrino emissions can be used to infer the power and plutonium content of a reactor. In a series of trailblazing experiments at the Rovno nuclear power plant in the 1980s and early 1990s, their group demonstrated that a tonne-scale underground neutrino detector situated 10 to 20 metres from a reactor can indeed track its power and plutonium content.
The significant drawback of neutrino detectors in the 1980s was that they needed to be situated underground, beneath a substantial overburden of rock, to shield them from cosmic rays. This greatly limited potential deployment sites. There was a series of application-related experiments – notably the successful SONGS experiment conducted by researchers at Lawrence Livermore National Laboratory, which aimed to reduce cost and improve the robustness and remote operation of neutrino detectors – but all of these detectors still needed shielding.
From cadmium to gadolinium
Synergies with fundamental physics grew in the 1990s, when the evidence for neutrino oscillations was becoming impossible to ignore. With the range of potential oscillation frequencies narrowing, the Palo Verde and Chooz reactor experiments placed multi-tonne detectors about 1 km from nuclear reactors, and sought to measure the relatively small θ13 parameter of the neutrino mixing matrix, which expresses the mixing between electron neutrinos and the third neutrino mass eigenstate. Both experiments used large amounts of liquid organic scintillator doped with gadolinium. The goal was to tag antineutrino events by capturing the neutrons on gadolinium, rather than the cadmium used by Reines and Cowan. Gadolinium produces 8 MeV of gamma rays upon de-excitation after a neutron capture. As it has an enormous neutron-capture cross section, even small amounts greatly enhance an experiment’s ability to identify neutrons.
Eventually, neutrino oscillations became an accepted fact, redoubling the interest in measuring θ13. This resulted in three new experiments: Double Chooz in France, RENO in South Korea, and Daya Bay in China. Learning lessons from Palo Verde and Chooz, the experiments successfully measured θ13 more precisely than any other neutrino mixing parameter. A spin-off from the Double Chooz experiment was the Nucifer detector (see “Purpose driven” figure), which demonstrated the operation of a robust sub-tonne-scale detector designed with missions to monitor reactors in mind, in alignment with requirements formulated at a 2008 workshop held by the International Atomic Energy Agency (IAEA). However, Nucifer still needed a significant overburden.
In 2011, however, shortly before the experiments established that θ13 is not zero, fundamental research once again galvanised the development of detector technology for reactor monitoring. In the run-up to the Double Chooz experiment, a group at Saclay started to re-evaluate the predictions for reactor neutrino fluxes – then and now based on measurements at the Institut Laue-Langevin in the 1980s – and found to their surprise that the reactor flux prediction came out 6% higher than before. Given that all prior experiments were in agreement with the old flux predictions, neutrinos were missing. This “reactor-antineutrino anomaly” persists to this day. A sterile neutrino with a mass of about 1 eV would be a simple explanation. This mass range has been suggested by experiments with accelerator neutrinos, most notably LSND and MiniBooNE, though it conflicts with predictions that muon neutrinos should oscillate into such a sterile neutrino, which experiments such as MINOS+ have failed to confirm.
To directly observe the high-frequency oscillations of an eV-scale sterile neutrino you need to get within about 10 m of the reactor. At this distance, backgrounds from the operation of the reactor are often non-negligible, and no overburden is possible – the same conditions a detector on a safeguards mission would encounter.
From gadolinium to lithium
Around half a dozen experimental groups are chasing sterile neutrinos using small detectors close to reactors. Some of the most advanced designs use fine spatial segmentation to reject backgrounds, and replace gadolinium with lithium-6 as the nucleus to capture and tag neutrons. Lithium has the advantage that upon neutron capture it produces an alpha particle and a triton rather than a handful of photons, resulting in a very well localised tag. In a small detector this improves event containment and thus efficiency, and also helps constrain event topology.
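The capture reaction itself is

\[ n + {}^{6}\mathrm{Li} \;\to\; {}^{3}\mathrm{H} + {}^{4}\mathrm{He} + 4.78\ \mathrm{MeV} , \]

and because the two heavy charged products stop within a fraction of a millimetre of the capture point, the tag is confined to essentially a single detector segment.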
Following the lithium and finely segmented technical paths, the PROSPECT collaboration and the CHANDLER collaboration (see “Rapid deployment” figure), in which I participate, independently reported the detection of a neutrino spectrum with minimal overburden and high detection efficiency in 2018. This is a major milestone in making non-proliferation applications a reality, since it is the first demonstration of the technology needed for tonne-scale detectors capable of monitoring the plutonium content of a nuclear reactor that could be universally deployed without the need for special site preparation.
The story of the neutrino is closely tied to nuclear weapons
The main difference between the two detectors is that PROSPECT, which reported its near-final sterile neutrino limit at the Neutrino 2020 conference, uses a traditional approach with liquid scintillator, whereas CHANDLER, currently an R&D project, uses plastic scintillator. The use of plastic scintillator allows the deployment time-frame to be shortened to less than 24 hours. On the other hand, liquid scintillator allows the exploitation of pulse-shape discrimination to reject cosmic-ray neutron backgrounds, allowing PROSPECT to achieve a much better signal-to-background ratio than any plastic detector to date. Active R&D is seeking to improve topological reconstruction in plastic detectors and imbue them with pulse-shape discrimination. In addition, a number of safeguard-specific detector R&D experiments have successfully detected reactor neutrinos using plastic scintillator in conjunction with gadolinium. In the UK, the VIDARR collaboration has seen neutrinos from the Wylfa reactor, and in Japan the PANDA collaboration successfully operated a truck-mounted detector.
In parallel to detector development, studies are being undertaken to understand how reactor monitoring with neutrinos would impact nuclear security and support non-proliferation objectives. Two very relevant situations being studied are the 2015 Iran Deal – the Joint Comprehensive Plan of Action (JCPOA) – and verification concepts for a future agreement with North Korea.
Nuclear diplomacy
One of the sticking points in negotiating the 2015 Iran deal was the future of the IR-40 reactor, which was being constructed at Arak, an industrial city in central Iran. The IR-40 was planned to be a 40 MW reactor fuelled by natural uranium and moderated with heavy water, with a stated purpose of isotope production for medical and scientific use. The choice of fuel and moderator is interesting, as it meshes with Iranian capabilities and would serve the stated purpose well and be cost effective, since no uranium enrichment is needed. Equally, however, if one were to design a plutonium-production reactor for a nascent weapons programme, this combination would be one of the top choices: it does not require uranium enrichment, and with the stated reactor power would result in the annual production of about 10 kg of rather pure plutonium-239. This matches the critical mass of a bare plutonium-239 sphere, and it is known that as little as 4 kg can be used to make an effective nuclear explosive. Within the JCPOA it was eventually agreed that the IR-40 could be redesigned, down-rated in power to 20 MW and the new core fuelled with 3.7% enriched fuel, reducing the annual plutonium production by a factor of six.
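The quoted production rate is consistent with a commonly used rule of thumb – a natural-uranium-fuelled reactor breeds very roughly 0.8 g of plutonium per megawatt(thermal)-day of operation – which gives

\[ 40\ \mathrm{MW} \times 365\ \mathrm{d} \times 0.8\ \mathrm{g\,MW^{-1}\,d^{-1}} \;\approx\; 12\ \mathrm{kg\ per\ year}, \]

or about 10 kg once a realistic duty cycle is taken into account.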
A 10 to 20 tonne neutrino detector 20 m from the reactor would be able to measure its plutonium content with a precision of 1 to 2 kg. This would be particularly relevant in the so-called N-th month scenario, which models a potential crisis in Iran based on events in North Korea in June 1994. During the 1994 crisis, which risked precipitating war with the US, the nuclear reactor at Yongbyon was shut down, and enough spent fuel rods removed to make several bombs. IAEA protocols were sternly tested. The organisation’s conventional safeguards for operating reactors consist of containment and surveillance – seals, for example, to prevent the unnoticed opening of the reactor, and cameras to record the movement of fuel, most crucially during reactor shutdowns. In the N-th month scenario, the IR-40 reactor, in its pre-JCPOA configuration (40 MW, rather than the renegotiated power of 20 MW), runs under full safeguards for N–1 months. In month N, a planned reactor shutdown takes place. At this point the reactor would contain 8 kg of weapons-grade plutonium. For unspecified reasons the safeguards are then interrupted. In month N+1, the reactor is restarted and full safeguards are restored. The question is: are the 8 kg of plutonium still in the reactor core, or has the core been replaced with fresh fuel and the 8 kg of plutonium illicitly diverted?
The disruption of safeguards could either be due to equipment failure – a more frequent event than one might assume – or due to events in the political realm ranging from a minor unpleasantness to a full-throttle dash for a nuclear weapon. Distinguishing the two scenarios would be a matter of utmost urgency. According to an analysis including realistic backgrounds extrapolated from the PROSPECT results, this could be done in 8 to 12 weeks with a neutrino detector.
Neutrino detectors could be effective in addressing the safeguard challenges presented by advanced reactors
No conventional non-neutrino technologies can match this performance without shutting the reactor down and sampling a significant fraction of the highly radioactive fuel. The conventional approach would be extremely disruptive to reactor operations and would put inspectors and plant operators at risk of radiation exposure. Even if the host country were to agree in principle, developing a safe plan and having all sides agree on its feasibility would take months at the very least, creating dangerous ambiguity in the interim and giving hardliners on both sides time to push for an escalation of the crisis. The conventional approach would also be significantly more expensive than a neutrino detector.
New negotiating gambit
The June 1994 crisis at Yongbyon still overshadows negotiations with North Korea, since, as far as North Korea is concerned, it discredited the IAEA. Both during the crisis, and subsequently, international attempts at non-proliferation failed to prevent North Korea from acquiring nuclear weapons – its first nuclear-weapons test took place in 2006 – or even to constrain its progress towards a small-scale operational nuclear force. New approaches are therefore needed, and recent attempts by the US to achieve progress on this issue prompted an international group of about 20 neutrino experts from Europe, the US, Russia, South Korea, China and Japan to develop specific deployment scenarios for neutrino detectors at the Yongbyon nuclear complex.
The main concern is the 5 MWe reactor, which, though named for its electrical power, has a thermal power of 20 MW. This gas-cooled graphite-moderated reactor, fuelled with natural uranium, has been the source of all of North Korea’s plutonium. The specifics of this reactor, and in particular its fuel cladding, which makes prolonged wet-storage of irradiated fuel impossible, represent such a proliferation risk that anything but a monitored shutdown prior to a complete dismantling appears inappropriate. To safeguard against the regime reneging on such a deal, were it to be agreed, a relatively modest tonne-scale neutrino detector right outside the reactor building could detect a powering up of this reactor within a day.
North Korea is also constructing the Experimental Light Water Reactor at Yongbyon. A 150 MW water-moderated reactor running with low-enriched fuel, this reactor would not be particularly well suited to plutonium production. Its design is not dissimilar to much larger reactors used throughout the world to produce electricity, and it could help address the perennial lack of electricity that has limited the development and growth of the country’s economy. North Korea may wish to operate it indefinitely. A larger, 10 tonne neutrino detector could detect any irregularities during its refuelling – a tell-tale sign of a non-civilian use of the reactor – on a timescale of three months, which is within the goals set by the IAEA.
In a different scenario, wherein the goal would be to monitor a total shutdown of all reactors at Yongbyon, it would be feasible to bury a Daya-Bay-style 50 tonne single volume detector under the Yak-san, a mountain about 2 km outside of the perimeter of the nuclear installations (see “A different scenario” figure). The cost and deployment timescale would be more onerous than in the other scenarios.
In the case of longer distances between reactor and detector, detector masses must increase to compensate for the inverse-square reduction in the reactor-neutrino flux. As cosmic-ray backgrounds remain constant, the detectors must be deployed deep underground, beneath an overburden of several hundred metres of rock. To this end, the UK’s Science and Technology Facilities Council, the UK Atomic Weapons Establishment and the US Department of Energy are funding the WATCHMAN collaboration to pursue the construction of a multi-kilotonne water-Cherenkov detector at the Boulby mine, 20 km from two reactors in Hartlepool, in the UK. The goal is to demonstrate the ability to monitor the operational status of the reactors, which have a combined power of 3000 MW. In a use-case context this would translate to excluding the operation of an undeclared 10 to 20 MW reactor within a radius of a few kilometres, but no safeguards scenario has emerged where this would give a unique advantage.
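The trade-off is easy to quantify: for a detector of target mass M at a distance L from a reactor of thermal power P, the antineutrino event rate scales as

\[ N \;\propto\; \frac{M\,P}{L^{2}} , \]

so keeping the same rate while moving from a 20 m to a 20 km standoff requires a factor of \((20\,000/20)^2 = 10^{6}\) more target mass per unit of reactor power – which is why remote monitoring pairs kilotonne-scale detectors with gigawatt-scale reactors.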
Inverse-square scaling eventually breaks down around 100 km, as at that distance the backgrounds caused by civilian reactors far outshine any undeclared small reactor almost anywhere in the northern hemisphere. Small signals also prevent the use of neutrino detectors for nuclear-explosion monitoring, or to confirm the origin of a suspicious seismic event as being nuclear, as conventional technologies are more feasible than the very large detectors that would be needed. A more promising future application of neutrino-detector technology is to meet the new challenges posed by advanced nuclear-reactor designs.
Advanced safeguards
The current safeguards regime relies on two key assumptions: that fuel comes in large, indivisible and individually identifiable units called “fuel assemblies”, and that power reactors need to be refuelled frequently. Most advanced reactor designs violate at least one of these assumptions. Fuel may come in thousands of small pebbles or be molten, and its coolant may not be transparent, in contrast to current designs, where water is used as moderator, coolant and storage medium in the first years after discharge. Either way, counting and identification of the fuel by serial number may be impossible. And unlike current power reactors, which are refuelled on a 12-to-18-month cycle, allowing in-core fuel to be verified as well, advanced reactors may be refuelled only once in their lifetime.
Neutrino detectors would not be hampered by any of these novel features. Detailed simulations indicate that they could be effective in addressing the safeguard challenges presented by advanced reactors. Crucially, they would work in a very similar fashion for any of the new reactor designs.
In 2019 the US Department of Energy chartered and funded a study (which I co-chair) with the goal of determining the utility of the unique capabilities offered by neutrino detectors for nuclear security and energy applications. This study includes investigators from US national laboratories and academia more broadly, and will engage and interview nuclear security and policy experts within the Department of Energy, the State Department, NGOs, academia, and international agencies such as the IAEA. The results are expected early in 2021. They should provide a good understanding of where neutrinos can play a role in current and future monitoring and verification agreements, and may help to guide neutrino detectors towards their first real-world applications.
The idea of using neutrinos to monitor reactors has been around for about 40 years. Only very recently, however, as a result of a surge of interest in sterile neutrinos, has detector technology become available that would be practical in real-world scenarios such as the JCPOA or a new North Korean nuclear agreement. The most likely initial application will be near-field reactor monitoring with detectors inside the fence of the monitored facility as part of a regional nuclear deal. Such detectors will not be a panacea to all verification and monitoring needs, and can only be effective if there is a sincere political will on both sides, but they do offer more room for creative diplomacy, and a technology that is robust against the kinds of political failures which have derailed past agreements.
Technology developed for the proposed Compact Linear Collider (CLIC) at CERN is poised to make a novel cancer radiotherapy facility a reality. Building on recently revived research from the 1970s, oncologists believe that ultrafast bursts of electrons damage tumours more than healthy tissue. This “FLASH effect” could be realised by using high-gradient accelerator technology from CLIC to create a new facility at Switzerland’s Lausanne University Hospital (CHUV).
Traditional radiotherapy scans photon beams from multiple angles to focus a radiation dose on tumours inside the body. More recently, hadron therapy has offered a further treatment modality: by tuning the energy of a beam of protons or ions so that they stop in the tumour, the particles deposit most of the radiation dose there (the so-called Bragg peak), while sparing the surrounding healthy tissue by comparison. Both of these treatments deliver small doses of radiation to a patient over an extended period, whereas FLASH radiotherapy is thought to require a maximum of three doses, all lasting less than 100 ms.
Look again
When the FLASH effect was first studied in the 1970s, it was assumed that all tissues suffer less damage when a dose is ultrafast, regardless of whether they are healthy or tumorous. In 2014, however, CHUV researchers published a study in which 200 mice were given a single dose of 4.5 MeV gamma rays at a conventional therapy dose-rate, while others were given an equivalent dose at the much faster FLASH-therapy rate. The results showed explicitly that while the normal tissue was damaged significantly less by the ultrafast bursts, the damage to the tumour stayed consistent for both therapies. In 2019, CHUV applied the first FLASH treatment to a cancer patient, finding similarly positive results: a 3.5 cm diameter skin tumour completely disappeared using electrons from a 5.6 MeV linear accelerator, “with nearly no side effects”. The challenge was to reach deeper tumours.
Now, using high-gradient “X-band” radio-frequency cavity technology developed for CLIC, CHUV has teamed up with CERN to develop a facility that can produce electron beams with energies around 100 MeV, in order to reach tumour depths of up to 20 cm. The idea came about three years ago when it was realised that CLIC technology was almost a perfect match for what CHUV were looking for: a high-powered accelerator, which uses X-band technology to accelerate particles over a short distance, has a high luminosity, and utilises a high current that allows a higher volume of tumour to be targeted.
“CLIC has the ability to accelerate a large amount of charge to get enough luminosity for physics studies,” explains Walter Wuensch of CERN, who heads the FLASH project at CERN. “People tend to focus on the accelerating gradient, but as important, or arguably more important, is the ability to control high-current, low-emittance beams.”
It really looks like it has the potential to be an important complement to existing radiation therapies
The first phase of the collaboration is nearing completion, with a conceptual design report, funded by CHUV, being created together by CERN and CHUV. The development and construction of the first facility, which would be housed at CHUV, is predicted to cost around €25 million, and CHUV aims to complete the facility within three years.
“The intention of CERN and the team is to be heavily involved in the process of getting the facility built and operating,” states Wuensch. “It really looks like it has the potential to be an important complement to existing radiation therapies.”
Cancer therapies have taken advantage of particle accelerators for many decades, with proton radiotherapy entering the scene in the 1990s. The CERN-based Proton-Ion Medical Machine Study, spawned by the TERA Foundation, resulted in the National Centre for Cancer Hadron Therapy (CNAO) in Italy and MedAustron in Austria, which have made significant progress in the field of proton and ion therapy. FLASH radiotherapy would add electrons to the growing modality of particle therapy.
Energetic beams of charged particles are essential for high-energy physics research, as well as for studies of nuclear structure and dynamics, and deciphering complex molecular structures. In principle, generating such beams is simple: provide an electric field for acceleration and a magnetic field for bending particle trajectories. In practice, however, the task becomes increasingly challenging as the desired particle energy goes up. Very high electric fields are required to attain the highest energy beams within practical real-estate constraints.
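A rough figure of merit makes the real-estate argument concrete: an on-crest particle of charge e gains an energy

\[ \Delta E \;\simeq\; e\,E_{\mathrm{acc}}\,L_{\mathrm{active}} \]

over an active accelerating length L_active, so reaching 10 GeV at a gradient of 25 MV/m already requires some 400 m of accelerating structure, and proportionally more at lower gradients.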
The most efficient way to generate the very high electric fields in a vacuum environment required to transport a beam is to build up a resonant excitation of radio waves inside a metallic cavity. There is something of an art to shaping such cavities to “get the best bang for the buck” for a particular application. The radio-frequency (RF) fields are inherently time-varying, and bunches of charged particles need to arrive with the right timing if they are to see only forward-accelerating electric fields. Desirable very high resonant electric fields (e.g. 5–40 MV/m) require the existence of very high currents in the cavity walls. These currents are simply not sustainable for long durations using even the best normal-conducting materials, as they would melt from resistive heating.
Superconducting materials, on the other hand, can sustain high accelerating gradients with an affordable electricity bill. Early pioneering work demonstrating the first beam acceleration using superconducting radio-frequency (SRF) cavities took place in the late 1960s and early 1970s at Stanford, Caltech, the University of Wuppertal and Karlsruhe. The potential for real utility was clear, but techniques and material refinements were needed. Several individual laboratories began to take up the challenge for their own research needs. Solutions were developed for electron acceleration at CESR, HERA, TRISTAN, LEP II and CEBAF, while heavy-ion SRF acceleration solutions were developed at Stony Brook, ATLAS, ALPI and others. The community of SRF accelerator physicists was small but the lessons learned were consistently shared and documented. By the early 1990s, SRF technology had matured such that complex large-scale systems were credible and the variety of designs and applications began to blossom.
The TESLA springboard
In 2020, the TESLA Technology Collaboration (TTC) celebrates 30 years of collaborative efforts on SRF technologies. The TTC grew out of the first international TESLA (TeV Energy Superconducting Linear Accelerator) workshop, which was held at Cornell University in July 1990. Its aim was to define the parameters for a superconducting linear collider for high-energy physics operating in the TeV region and to explore how to increase the gradients and lower the costs of the accelerating structures. It was clear from the beginning that progress would require a large international collaboration, and the Cornell meeting set in motion a series of successes that are ongoing to this day – including FLASH and the European XFEL at DESY. The collaboration also led to proposals for several large SRF-based research facilities including SNS, LCLS-II, ESS, PIP-II and SHINE, as well as a growing number of smaller facilities around the world.
Accelerating gradients above 40 MV/m are now attainable with niobium
At the time of the first TESLA collaboration meeting, the state-of-the-art in accelerating gradients for electrons was around 5 MV/m in the operating SRF systems of TRISTAN at KEK, HERA at DESY, LEP-II at CERN and CEBAF at Jefferson Lab (JLab), which were then under construction. Many participants in this meeting agreed to push for a five-fold increase in the design accelerating gradient to 25 MV/m to meet the dream goal for TESLA at a centre-of-mass energy of 1 TeV. The initial focus of the collaboration was centred on the design, construction and commissioning of a technological demonstrator, the TESLA Test Facility (TTF) at DESY. In 2004, SRF was selected as the basis for an International Linear Collider (ILC) design and, shortly afterwards, the TESLA collaboration was re-formed as the TESLA Technology Collaboration with a scope beyond the original motivation of high-energy physics. The TTC, with its incredible worldwide collaboration spirit, has had a major role in the growth of the SRF community, facilitating numerous important contributions over the past 30 years.
30 years of gradient march
Conceptually, the objective of simply providing “nice clean” niobium surfaces on RF structures seems pretty straightforward. Important subtleties begin to emerge, however, as one considers that the high RF-surface currents required to support magnetic fields up to ~100 mT flow only in the top 100 nm of the niobium surface, which must offer routine surface resistances at the nano-ohm level over areas of around 1 m2. Achieving blemish-free, contamination-free surfaces that present excellent crystal lattice structure even in this thin surface layer is far from easy.
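To put the nano-ohm figure in context, the intrinsic quality factor of a cavity is set by its average surface resistance R_s through the geometry factor G, which is about 270 Ω for the TESLA cell shape:

\[ Q_0 \;=\; \frac{G}{R_s} \;\approx\; \frac{270\ \Omega}{27\ \mathrm{n}\Omega} \;=\; 10^{10} . \]

Holding R_s at the level of a few tens of nano-ohms over square metres of surface is what makes the quality factors quoted later in this article achievable.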
The march of progress in cavity gradient for linacs and the many representative applications over the past 50 years (see figure “Gradient growth”) are due to breakthroughs in three main areas: material purity, fabrication and processing techniques. The TTC had a major impact on each of these areas.
With some notable exceptions, bulk niobium cavities fabricated from sheet stock material have been the standard, even though the required metallurgical processes present challenges. Cycles of electron-beam vacuum refining, rolling, and intermediate anneals are provided by only a few international vendors. Pushing up the purity of deliverable material required a concerted effort, resulting in the avoidance of foreign material inclusions, which can be deadly to performance when uncovered in the final step of surface processing. The figure-of-merit for purity is the ratio of room-temperature to cryogenic normal-conducting resistivity – the residual resistance ratio, RRR. The common cavity-grade niobium material specification has thus come to be known as high-RRR grade.
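Schematically, the specification amounts to

\[ \mathrm{RRR} \;=\; \frac{\rho(295\ \mathrm{K})}{\rho_{\mathrm{n}}(T \gtrsim T_{c})} \;\gtrsim\; 300 , \]

where the denominator is the normal-state resistivity measured just above the superconducting transition near 9.3 K; higher RRR means fewer interstitial impurities and a higher thermal conductivity for stabilising local hot spots.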
A later pursuit of pure niobium is the so-called “large grain” or “direct-from-ingot” material. Rather than insist on controlled ~30 µm grain-size distribution (grains being microcrystals in the structure), this material uses sheet slices cut directly from large ingots having much larger, but arbitrarily sized, grains. Although not yet widely used, this material has produced the highest gradient TESLA-style cavities to date – 45 MV/m with a quality factor Q0 > 10¹⁰. Here again, though the topic was initiated at JLab, this fruitful work was accomplished via international collaborations.
As niobium is a refractory metal that promptly cloaks itself with about 4 nm of dielectric oxide, welding niobium components has to be performed by vacuum electron beam welding. Collaborative efforts in Europe, North America and Asia refined the parameters required to yield consistent niobium welds. The community gradually realised that extreme cleanliness is required in the surface-weld preparation, since even microscopic foreign material will be vaporised during the weld process, leaving behind small voids that become performance-limiting defects.
Having the best niobium is not sufficient, however. Superconductors have inherent critical magnetic field limitations, or equivalently local surface-current density limitations. Because the current flow is so shallow, local magnetic field enhancements induced by microscopic topography translate into gradient-limiting quench effects. Etching of fabricated surfaces has routinely required a combination of hydrofluoric and nitric acids, buffered with phosphoric acid. This exothermic etching process inherently yields step-edge faceting at grain boundaries, which in turn creates local, even nanoscopic, field enhancements, anomalous losses and quenches as the mean surface field is increased. A progression of international efforts at KEK, DESY, CEA-Saclay and JLab eliminated this problem through the development of electro-polishing techniques. Following a deeper understanding of the underlying electrochemistry, accelerating gradients above 40 MV/m are now attainable with niobium.
Another vexing problem that TTC member institutions helped to solve was the presence of “Q-drop” in the region of high surface magnetic field, for which present explanations point to subtle migration of near-surface oxygen deeper into the lattice, where it inhibits the subsequent formation of lossy nanohydrides on cool-down. Avoidance of nanohydrides, whose superconductivity by proximity effect breaks down in the Q-drop regime, is required to sustain accelerating gradients above 25 MV/m for some structures.
Cleaning up
TTC members have also shared analyses and best practices in cleaning and cleanroom techniques, which have evolved dramatically during the past 30 years. This has helped to beat down the most common challenge for developers and users of SRF accelerating cavities: particulate-induced field emission, whereby very high peak surface electric fields can turn even micron-scale foreign material into parasitic electron field emission sources, with resulting cryogenic and radiation burdens. Extended interior final rinsing with high-pressure ultra-pure water prior to cavity assembly has become standard practice, while preparation and assembly of all beamline vacuum hardware under ISO 4 cleanroom conditions is necessary to maintain these clean surfaces for accelerator operations.
The most recent transformation has come with the recognition that interstitial doping of the niobium surface with nitrogen can reduce SRF surface resistance much more than was dreamed possible, reducing the cryogenic heat load to be cooled. While still the subject of material research, this new capability was rapidly adopted into the specification for LCLS-II cavities and is also being considered for an ILC. The effort started in the US and quickly propagated internationally via the TTC, for example in cavity tests at the European Spallation Source (see “Vertical test” image). Earlier this year, Q-values of 3–4 × 10¹⁰ at 2 K and 30 MV/m were reported in TESLA-style cavities – representing tremendous progress, but with much optimisation still to be carried out.
One of the main goals of the TTC has been to bridge the gap between state-of-the-art R&D on laboratory prototypes and actual accelerator components in operating facilities, with the clear long-term objective to enable superconducting technology for a TeV-scale linear collider. This objective demanded a staged approach and intense work on the development of all the many peripherals and subcomponents. The collaboration embraced a joint effort between the initial partners to develop the TTF at DESY, which aimed to demonstrate reliable operation of an electron superconducting linac at gradients above 15 MV/m in “vector sum” control – whereby many cavities are fed by a single high-power RF source to improve cost effectiveness. In 1993 the collaboration finalised a 1.3 GHz cavity design that is still the baseline of large projects like the European XFEL, LCLS-II and SHINE, and nearly all L-band-based facilities.
Towards a linear collider
An intense collaborative effort started for the development of all peripheral components, for example power couplers, high-order mode dampers, digital low-level RF systems and cryomodules with unprecedented heat load performances. Several of these components were designed by TTC partners in an open collaborative and competitive effort, and a number of them can be found in existing projects around the world. The tight requirements imposed by the scale of a linear collider required an integrated design of the accelerating modules, containing the cavities and their peripheral components, which led to the concept of the “TESLA style” cryomodules, variants of which provide the building blocks of the linacs in TTF, European XFEL, LCLS-II and SHINE.
The success of the TTF, which delivered its first beam in 1997, led it to become the driver for a next-generation light source at DESY, the VUV-FEL, which produced first light in 2005 and which later became the FLASH facility. The European XFEL built on this strong heritage, its large scale demanding a new level of design consolidation and industrialisation. It is remarkable to note that the total number of such TESLA-style cavities installed or to be installed in presently approved accelerators is more than 1800. Were a 250 GeV ILC to go ahead in Japan, approximately 8000 such units would be required. (Note that an alternative proposal for a high-energy linear collider, the Compact Linear Collider, relies on a novel dual-beam acceleration scheme that does not require SRF cavities.)
Since the partners collaborating on the early TESLA goal of a linear collider were also involved in other national and international projects for a variety of applications and domains, the first decade of the 21st century saw the TTC broaden its reach. For example, we started including reports from other projects, most notably the US Spallation Neutron Source, and gradually opened to the community working on low-beta ion and proton superconducting cavities, such as the half-wave resonator string collaboratively developed at Argonne National Lab and now destined for use in PIP-II at Fermilab (see “Low-beta cavities” image). TTC meetings include topical sessions with industries to discuss how to shorten the path from development to production. Recently, the TTC has also begun to facilitate collaborative exchanges on alternative SRF materials to bulk niobium, such as Nb3Sn and even hybrid multilayer films, for potential accelerator applications.
Sustaining success
The mission of the TTC is to advance SRF technology R&D and related accelerator studies across the broad diversity of scientific applications. It is to provide a bridge for open communication and sharing of ideas, development and testing across associated projects. The TTC supports and encourages the free and open exchange of scientific and technical knowledge, engineering designs and equipment. Furthermore, it is based on cooperative work on SRF accelerator technology by research groups at TTC member institution laboratories and test facilities. The current TTC membership consists of 60 laboratories and institutes in 12 countries across Europe, North America and Asia. Since progress in cavity performance and related SRF technologies is so rapid, the major TTC meetings have been frequent.
Particle accelerators using SRF technologies have been applied widely, from small facilities for medical applications up to large-scale projects for particle physics, nuclear physics, neutron sources and free-electron lasers (see “Global view” figure). Five large-scale (> 100 cavities) SRF projects are currently under construction in three regions: ESS in Europe, FRIB and LCLS-II in the US, and SHINE (China) and RAON (Korea) in Asia. Close international collaboration will continue to support progress in these and future projects, including SRF thin-film technology relevant for a possible future circular electron–positron collider. Perhaps the next wave of SRF technology will be the maturation of economical small-scale applications with high multiplicity and international standards. As an ultimate huge future SRF project, realising an ILC will indeed require sustained broad international collaboration.
The open and free-exchange model that for 30 years has enabled the TTC to make broad progress in SRF technology is a major contribution to science diplomacy efforts on a worldwide scale. We celebrate the many creative and collaborative efforts that have served the international community well via the TESLA Technology Collaboration.
The 2020 Nobel Prize in Physics, announced on 6 October, has recognised seminal achievements in the theoretical and experimental understanding of black holes. One half of the SEK 10 million ($1.15 million) award went to Roger Penrose of the University of Oxford “for the discovery that black-hole formation is a robust prediction of the general theory of relativity”. The other half was awarded jointly to Andrea Ghez of the University of California, Los Angeles and Reinhard Genzel of the Max Planck Institute for Extraterrestrial Physics “for the discovery of a supermassive compact object at the centre of our galaxy”, after the pair led separate research teams during the 1990s to identify a black hole at the centre of the Milky Way.
You might ask where the greatest entropy is in the universe – by an absolutely enormous factor it is in black holes
Roger Penrose
As soon as Einstein had completed his theory of general relativity in 1915, it was clear that solutions in the vicinity of a spherically symmetric, non-rotating mass allow space–time to be “pinched” to a point, or singularity, where known physics ceases to apply. Few people, however, including Einstein himself, thought that black holes really exist. But 50 years later, Penrose invented a mathematical tool called a trapped surface to show that black holes are a natural consequence of general relativity, proving that they each hide a singularity. His groundbreaking article (Phys. Rev. Lett. 14 57) is heralded as the first post-Einsteinian result in general relativity.
Penrose is also known for the “Penrose process”, whereby a particle that strays into the ergosphere of a rotating black hole can split in two, with one fragment falling into the black hole on a negative-energy trajectory and the other escaping with more energy than the original particle, thereby extracting energy and angular momentum from the black hole. He also proposed twistor theory, which has evolved into a rich branch of theoretical and mathematical physics with potential relevance to the unification of general relativity and quantum mechanics, among many other contributions.
“I really had to have a good idea of the space–time geometry. Not just 3D, you had to think of the whole 4D space–time… I do most of my thinking in visual terms, rather than writing down equations,” said Penrose in an interview with the Nobel Foundation following the award. “Black holes have become more and more important, also in ways that people don’t normally appreciate. They are the basis of the second law of thermodynamics… You might ask where the greatest entropy is in the universe – by an absolutely enormous factor it is in black holes.”
On 10 September the International Committee for Future Accelerators (ICFA) announced the structure and members of a new organisational team to prepare a “pre-laboratory” for an International Linear Collider (ILC) in Japan. The ILC International Development Team (ILC-IDT), which consists of an executive board and three working groups governing the pre-lab setup, accelerator, and physics and detectors, aims to complete the preparatory phase for the pre-lab on a timescale of around 1.5 years.
We hope that the effort by our Japanese colleagues will result in a positive move by the Japanese government
Tatsuya Nakada
The aim of the pre-lab is to prepare the ILC project, should it be approved, for construction. It will be based on memoranda of understanding among participating national and regional laboratories, rather than intergovernmental agreements, explains chair of the ILC-IDT executive board Tatsuya Nakada of École Polytechnique Fédérale de Lausanne. “The ILC-IDT is preparing a proposal for the organisational and operational framework of the pre-lab, which will have a central office in Japan hosted by the KEK laboratory,” says Nakada. “In parallel to our activities, we hope that the effort by our Japanese colleagues will result in a positive move by the Japanese government that is equally essential for establishing the pre-laboratory.”
In June the Linear Collider Board and Linear Collider Collaboration, which were established by ICFA in 2013 to promote the case for an electron–positron linear collider and its detectors as a worldwide collaborative project, reached the end of their terms in view of ICFA’s decision to set up the ILC-IDT.
The ILC has been on the table for almost two decades. Shortly after the discovery of the Higgs boson in 2012, the Japanese high-energy physics community proposed to host the estimated $7 billion project, with Japan’s prime minister at that time, Yoshihiko Noda, stressing the importance of establishing an international framework. In 2018 ICFA backed the ILC as a Higgs factory operating at a centre-of-mass energy of 250 GeV – half the energy set out five years earlier in the ILC’s technical design report.
Higgs factory
An electron–positron Higgs factory is the highest-priority next collider, concluded the 2020 update of the European strategy for particle physics (ESPPU). The ESPPU recommended that Europe, together with its international partners, explore the feasibility of a future hadron collider at CERN at the energy frontier with an electron–positron Higgs factory as a possible first stage, noting that the timely realisation of the ILC in Japan “would be compatible with this strategy”. Two further proposals exist: the Compact Linear Collider at CERN and the Circular Electron–Positron Collider in China. While the ILC is the most technically ready Higgs-factory proposal (see p35), physicists are still awaiting a concrete decision about its future.
In March 2019 Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT) expressed “continued interest” in the ILC, but announced that it had “not yet reached declaration” for hosting the project, arguing that it required further discussion in formal academic decision-making processes. In February KEK submitted an application for the ILC project to be considered in the MEXT 2020 roadmap for large-scale research projects. KEK withdrew the application the following month, announcing the move in September following the establishment of the ILC-IDT.
The ministry will keep an eye on discussions by the international research community
Koichi Hagiuda
“The ministry will keep an eye on discussions by the international research community while exchanging opinions with government authorities in the US and Europe,” said Koichi Hagiuda, Japanese minister of education, culture, sports, science and technology, at a press conference on 11 September.
Steinar Stapnes of CERN, who is a member of the ILC-IDT executive board representing Europe, says that clear support from the Japanese government is needed for the ILC pre-lab. “The overall project size is much larger than the usual science projects being considered in these processes and it is difficult to see how it could be funded within the normal MEXT budget for large-scale science,” he says. “During the pre-lab phase, intergovernmental discussions and negotiation about the share of funding and responsibilities for the ILC construction need to take place and hopefully converge.”
“It is our vision for CERN to be a role model for environmentally responsible research,” writes CERN Director-General Fabiola Gianotti in her introduction to a landmark environmental report released by the laboratory on 9 September. While CERN has a longstanding framework in place for environmental protection, and has documented its environmental impact for decades, this is its first public report. Two years in the making, and prepared according to the Global Reporting Initiative Sustainability Reporting Standards, it details the status of CERN’s environmental footprint, along with objectives for the coming years.
Given the energy consumption of large particle accelerators, environmental impact is a topic of increasing importance for high-energy physics research worldwide. Among the recommendations of the 2020 update of the European strategy for particle physics was a strong emphasis on the need to continue with efforts to minimise the environmental impact of accelerator facilities and maximise the energy efficiency of future projects.
When the Large Hadron Collider (LHC) is operating, CERN uses an average of 4300 TJ of electricity every year (30–50% less when not in operation) – enough energy to power just under half of the 200,000 homes in the canton of Geneva. “This is an inescapable fact, and one that CERN has always taken into consideration when designing new facilities,” states Frédérick Bordry, director for accelerators and technology.
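As a quick unit check on these figures (an editorial illustration, not part of CERN’s report), the short Python sketch below converts the quoted annual consumption from terajoules to terawatt-hours and derives the average per-home consumption implied by the article’s own numbers:

# Convert CERN's quoted annual electricity use from terajoules to terawatt-hours.
energy_tj = 4300                         # TJ per year with the LHC running (figure quoted above)
energy_twh = energy_tj * 1e12 / 3.6e15   # 1 TWh = 3.6e15 J
print(f"{energy_twh:.2f} TWh per year")  # ~1.19 TWh

# The article compares this to just under half of Geneva's ~200,000 homes,
# which implies an average of roughly this much electricity per home per year:
homes = 0.5 * 200_000
print(f"~{energy_twh * 1e6 / homes:.0f} MWh per home per year")  # ~12 MWh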
Action plan
An energy-management panel established at CERN in 2015 has already led to actions, including free cooling and air-flow optimisation, better optimised LHC cryogenics, and the implementation of energy-saving SPS magnetic cycles and stand-by modes, which together significantly reduce energy consumption. The LHC delivered twice as much data per joule in its second run (2015–2018) as in its first (2010–2013), states the new report. With the High-Luminosity LHC due to deliver a tenfold increase in luminosity towards the end of the decade, CERN has made it a priority to limit the increase in energy consumption to 5% up to the end of 2024, with longer-term objectives to be set in future reports.
CERN procures its electricity mainly from France, whose production capacity is 87.9% carbon-free. In terms of direct greenhouse-gas emissions, the 192,000 tonnes of carbon-dioxide equivalent emitted by CERN in 2018 is mainly due to fluorinated gases used in the LHC detectors for cooling, particle detection, air conditioning and electrical insulation. CERN has set a formal objective that, by 2024, direct greenhouse emissions will be reduced by 28% by replacing fluorinated gases – which were designed in the 1990s to be ozone-friendly – with carbon dioxide, which has a global-warming potential several thousand times lower.
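To make the accounting concrete, here is a minimal sketch of how a leak of fluorinated gas is converted into the CO2-equivalent tonnage quoted above; the leak size and the global-warming potential (GWP) used below are illustrative assumptions, not values from CERN’s report:

# CO2-equivalent emissions = mass of gas released x its global-warming potential (GWP),
# where GWP is defined relative to CO2 (GWP of CO2 = 1 by construction).
leaked_gas_tonnes = 10    # hypothetical annual leak of a detector cooling gas
gwp_f_gas = 4000          # illustrative GWP, typical order of magnitude for fluorinated gases
gwp_co2 = 1

print(leaked_gas_tonnes * gwp_f_gas)  # 40,000 tonnes CO2e with the fluorinated gas
print(leaked_gas_tonnes * gwp_co2)    # 10 tonnes CO2e if CO2 is used instead

Replacing a gas whose GWP is in the thousands by CO2 therefore reduces the CO2-equivalent contribution of a given leak by the same factor, which is the logic behind the 28% reduction target.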
CERN has set a formal objective that, by 2024, direct greenhouse emissions will be reduced by 28%
Other areas of environmental significance studied in the report include radiation exposure, noise and waste. CERN commits to limiting the dose of ionising radiation received by members of the public to no more than 0.3 mSv per year – less than a third of the annual dose limit for public exposure set by the European Council. The report states that the actual dose to any member of the public living in the immediate vicinity of CERN due to the laboratory’s activities is below 0.02 mSv per year, which is less than the exposure received from cosmic radiation during a transatlantic flight.
A 2018 measurement campaign showed that noise levels at CERN have not changed since the early 1990s, and are low by urban standards. Nevertheless, CERN has invested CHF 0.7 million to reduce noise at its perimeters to below 70 dB during the day and 60 dB at night (which corresponds to the level of conversational speech). The organisation has also introduced approaches to preserve the local landscape and protect flora, including 15 species of orchid growing on CERN’s sites.
Waste not
Water consumption, mostly drawn from Lac Léman, has slowly decreased over the past 10 years, the report notes, and CERN commits to keeping the increase in water consumption below 5% to the end of 2024, despite a growing demand for cooling from upgraded facilities. CERN also eliminates 100% of its waste, states the report, and has a recycling rate of 56% for non-hazardous waste (which comprises 81% of the total). A major project under construction since last year will see waste hot water from the cooling system for LHC Point 8 (where the LHCb experiment is located) channelled to a heating network in the nearby town of Ferney-Voltaire from 2022, with LHC Points 2 and 5 being considered for similar projects.
CERN plans to release further environmental reports every two years. “Today, more than ever, science’s flag-bearers need to demonstrate their relevance, their engagement, and their integration into society as a whole,” writes Gianotti. “This report underlines our strong commitment to environmental protection, both in terms of minimising our impact and applying CERN technologies for environmental protection.”
On 17 January 1957, a few months after Chien-Shiung Wu’s discovery of parity violation, Wolfgang Pauli wrote to Victor Weisskopf: “Ich glaube aber nicht, daß der Herrgott ein schwacher Linkshänder ist” (I cannot believe that God is a weak left-hander). But maximal parity violation is now well established within the Standard Model (SM). The weak interaction only couples to left-handed particles, as dramatically seen in the continuing absence of experimental evidence for right-handed neutrinos. In the same way, the polarisation of photons originating from transitions that involve the weak interaction is expected to be completely left-handed.
The LHCb collaboration recently tested the handedness of photons emitted in rare flavour-changing transitions from a b-quark to an s-quark. These are mediated by the bosons of the weak interaction according to the SM – but what if new virtual particles contribute too? Their presence could be clearly signalled by a right-handed contribution to the photon polarisation.
New virtual particles could be clearly signalled by a right-handed contribution to the photon polarisation
The b → sγ transition is rare. Fewer than one in a thousand b-quarks transform into an s-quark and a photon. This process has been studied for almost 30 years at particle colliders around the world. By precise measurements of its properties, physicists hope to detect hints of new heavy particles that current colliders are not powerful enough to produce.
The probability of this b-quark decay has been measured in previous experiments with a precision of about 5%, and found to agree with the SM prediction, which bears a similar theoretical uncertainty. A promising way to go further is to study the polarisation of the emitted photon. Measuring the b → sγ polarisation is not easy, though: the emitted photons are too energetic to be analysed by a polarimeter, and physicists must find innovative ways to probe them indirectly. For example, a right-handed polarisation contribution could induce a charge-parity asymmetry in the B0 → KSπ0γ or Bs0 → φγ decays. It could also contribute to the total rate of the inclusive radiative decay B → Xsγ, where Xs denotes any hadronic state containing a strange quark.
The LHCb collaboration has pioneered a new method to perform this measurement using virtual photons and the largest sample of the very rare B0 → K*0e+e– decay ever collected. First, the sub-sample of decays that come from B0 → K*0γ with a virtual photon that materialises in an electron–positron pair is isolated. The angular distributions of the B0 → K*0e+e– decay products are then used as a polarimeter to measure the handedness of the photon. The number of decays with a virtual photon is small compared to the decays with a real photon, but the latter cannot be used because the information on the polarisation is lost.
The size of the right-handed contribution to b → sγ is encoded in the magnitude of the complex parameter C′7/C7. This is a ratio of the right- and left-handed Wilson coefficients that are used in the effective description of b → s transitions. The new B0 → K*0e+e– analysis by the LHCb collaboration constrains the value of C′7/C7, and thus the photon polarisation, with unprecedented precision (figure 1). The measurement is compatible with the SM prediction.
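As a rough guide to what is being measured (a leading-order relation that neglects contributions from other operators, and which is not spelled out in the article), the fraction of right-handed photons in b → sγ is controlled by this ratio through

f_R \simeq \frac{|C'_7|^2}{|C_7|^2 + |C'_7|^2} = \frac{|C'_7/C_7|^2}{1 + |C'_7/C_7|^2}.

In the SM the ratio C′7/C7 is suppressed, roughly of order ms/mb, i.e. a few per cent, so the right-handed fraction is expected at the per-mille level; a significantly larger value would signal new virtual particles.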
This result showcases the exceptional capability of the LHCb experiment to study b → sγ transitions. The uncertainty is currently dominated by the data sample size, and thus more accurate studies are foreseen with the large data sample expected in Run 3 of the LHC. More precise measurements may yet unravel a small right-handed polarisation.