Don Cossairt and Matthew Quinn’s recently published book summarises both the basic concepts of the propagation of particles through matter and the fundamental aspects of protecting personnel and the environment against prompt radiation and radioactivity. It constitutes a compact and comprehensive compendium for radiation-protection professionals working at accelerators. The book’s content originates in a course that Cossairt, a longstanding and recently retired radiation expert at Fermilab, has taught at numerous sessions of the US Particle Accelerator School (USPAS) since the early 1990s. It is also available as a Fermilab report, which has stood the test of time as one of the standard health-physics handbooks for accelerator facilities for more than 20 years. Quinn, the book’s co-author, is the laboratory’s radiation-physics department manager.
The book begins with a short overview of physical and radiological quantities relevant for radiation protection assessments, and briefly sketches the mechanisms for energy loss and scattering during particle transport in matter. The introductory part concludes with chapters on the Boltzmann equation, which in this context describes the transport of particles through matter, and its solution using Monte Carlo methods. The following chapters illustrate the radiation fields which are induced by the interactions of electron, hadron and ion beams with beamline components. The tools described in these chapters are parametrised equations, handy rules-of-thumb and graphs of representative particle spectra and yields which serve for back-of-the-envelope calculations and describe the fundamental characteristics of radiation fields.
Practical questions
The second half of the book deals with practical questions encountered in everyday radiation-protection assessments, such as the selection of the most efficient shielding material for a given radiation field, the energy spectra to be expected outside of shielding where personnel might be present, and lists of radiologically relevant nuclides which are typically produced around accelerators. It also provides a compact introduction to activation at accelerators. The final chapter gives a comprehensive overview of radiation-protection instrumentation traditionally used at accelerators, helping the reader to select the most appropriate detector for a given radiation field.
Nowadays, assessments are more readily and accurately obtained with Monte Carlo simulations
Some topics have evolved since the material upon which the book is based was written. For example, the “rules of thumb” presented in the text are nowadays mostly used for cross-checking results obtained with much more powerful and user-friendly Monte Carlo transport programs. The reader will not, however, find information on the use and limitations of such codes. For example, the chapter on radiation-dose attenuation through passageways and ducts, as well as on environmental doses due to prompt radiation (“skyshine”), gives only analytical formulae, while such assessments are nowadays more readily and accurately obtained with Monte Carlo simulations. There is a risk, however, that such codes are treated as a “black box” and their results blindly believed. In this regard, the book provides many of the tools needed to obtain rough but valuable estimates for setting up simulations and cross-checking results.
The global scientific community has mobilised at an unprecedented rate in response to the COVID-19 pandemic, beyond just pharmaceutical and medical researchers. The world’s most powerful analytical tools, including neutron sources, harbour the unique ability to reveal the invisible, structural workings of the virus – which will be essential to developing effective treatments. Since the outbreak of the pandemic, researchers worldwide have been using large-scale research infrastructures such as synchrotron X-ray radiation sources (CERN Courier May/June 2020 p29), as well as cryogenic electron microscopy (cryo-EM) and nuclear magnetic resonance (NMR) facilities, to determine the 3D structures of proteins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes the respiratory disease COVID-19, and to identify potential drugs that can bind to these proteins in order to disable the viral machinery. In a remarkably short amount of time, this effort has already delivered a large number of structures, with more added each week, and increased our understanding of what potential drug candidates might look like.
COVID-19 impacted the operation of all advanced neutron sources worldwide. With one exception (ANSTO in Australia, which continued the production of radioisotopes) all of them were shut down in the context of national lockdowns aimed at reducing the spread of the disease. The neutron community, however, lost no time in preparing for the resumption of activities. Some facilities like Oak Ridge National Laboratory (ORNL) in the US have now restarted operation of their sources exclusively for COVID-19 studies. Here in Europe, while waiting (impatiently) for the restart of neutron facilities such as the Institut Laue-Langevin (ILL) in Grenoble, which is scheduled to be operational by mid-August, scientists have been actively pursuing SARS-CoV-2-related projects. Special research teams on the ILL site have been preparing for experiments using a range of neutron-scattering techniques including diffraction, small-angle neutron scattering, reflectometry and spectroscopy. Neutrons bring to the table what other probes cannot, and are set to make an important contribution to the fight against SARS-CoV-2.
Unique characteristics
Discovered almost 90 years ago, the neutron has been put to a multitude of uses to help researchers understand the structure and behaviour of condensed matter. These applications include a steadily growing number of investigations into biological systems. For the reasons explained below, these investigations are complementary to the use of X-rays, NMR and cryo-EM. The necessary infrastructure for neutron-scattering experiments is provided to the academic and industrial user communities by a global network of advanced neutron sources. Leading European neutron facilities include the ILL in Grenoble, France, MLZ in Garching, Germany, ISIS in Didcot, UK, and PSI in Villigen, Switzerland. The new European flagship neutron source – the European Spallation Source (ESS) – is under construction in Lund, Sweden.
Determining the biological structures that make up a virus such as SARS-CoV-2 allows scientists to see what they look like in three dimensions and to understand better how they function, speeding up the design of more effective anti-viral drugs. Knowledge of the structures highlights which parts are the most important: for example, once researchers know what the active site in an enzyme looks like, they can try to design drugs that fit well into it – the classic “lock-and-key” analogy. This is also useful in the development of vaccines. Knowledge of the structural components that make up a virus is important because vaccines are often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins.
Neutrons are a particularly powerful tool for the study of biological macromolecules in solutions, crystals and partially ordered systems. Their neutrality means neutrons can penetrate deep into matter without damaging the samples, so that experiments can be performed at room temperature, much closer to physiological temperatures. Furthermore, in contrast to X-rays, which are scattered by electrons, neutrons are scattered by atomic nuclei, and so neutron-scattering lengths show no correlation with the number of electrons but rather depend on nuclear forces, and can even vary between different isotopes. As such, while hydrogen (H) scatters X-rays very weakly, and protons (H⁺) do not scatter X-rays at all, with neutrons hydrogen scatters at a similar level to the other common elements (C, N, O, S, P) of biological macromolecules, allowing hydrogen atoms to be located. Moreover, since hydrogen and its isotope deuterium (²H/D) have scattering lengths of different magnitude and sign, this can be exploited in neutron studies to enhance the visibility of specific structural features by substituting one isotope for the other. Examples include small-angle neutron scattering (SANS) studies of macromolecular structures, which provide low-resolution 3D information on molecular shape without the need for crystallisation, and neutron-crystallography studies of proteins, which provide high-resolution structures including the locations of individual hydrogen atoms that have been exchanged for deuterium to make them particularly visible. Indeed, neutron crystallography can provide unique information on the chemistry occurring within biological macromolecules such as enzymes, as recent studies on HIV-1 protease, an enzyme essential for the life cycle of the HIV virus, illustrate.
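To make the contrast-variation principle concrete, the short sketch below estimates the H₂O/D₂O ratio at which a typical protiated protein becomes effectively invisible to neutrons (its contrast-match point). It is illustrative only: the scattering-length densities are representative literature values, not numbers taken from this article.

```python
# Illustrative SANS contrast-match estimate (representative literature values).
# Scattering-length densities (SLD) in units of 1e-6 per square angstrom.
SLD_H2O = -0.56      # light water
SLD_D2O = 6.36       # heavy water
SLD_PROTEIN = 2.4    # rough average for a protiated protein

def solvent_sld(d2o_fraction: float) -> float:
    """SLD of an H2O/D2O mixture, assuming simple linear mixing."""
    return d2o_fraction * SLD_D2O + (1.0 - d2o_fraction) * SLD_H2O

# The protein "disappears" where the solvent SLD equals the protein SLD.
match_point = (SLD_PROTEIN - SLD_H2O) / (SLD_D2O - SLD_H2O)
print(f"Approximate protein match point: {100 * match_point:.0f}% D2O")  # ~43% D2O
```

The same logic, run in reverse, is what allows selective deuteration to make a chosen component stand out against its surroundings.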
Treating and stopping COVID-19
Proteases are like biological scissors that cleave polypeptide chains – the primary structure of proteins – at precise locations. If the cleavage is inhibited, for example by appropriate anti-viral drugs, then so-called poly-proteins remain in their original state and the machinery of virus replication is blocked. For the treatment to be efficient this inhibition has to be robust – that is, the drug occupying the active site should be strongly bound, ideally to atoms in the main chain of the protease. This will increase the likelihood that treatments are effective in the long run, despite mutations of the enzyme, since mutations occur only within its side chains. Neutron research, therefore, provides essential input into the long-term development of pharmaceuticals. This role will be further enhanced in the context of advanced computer-aided drug development that will rely on an orchestrated combination of high-power computing, artificial intelligence and broad-band experimental data on structures.
Neutron crystallography data add supplementary structural information to X-ray data by providing key details regarding hydrogen atoms and protons, which are critical players in the binding of such drugs to their target enzyme through hydrogen bonding, and revealing important details of protein chemistry that help researchers decipher the exact enzyme catalytic pathway. In this way, neutron crystallography data can be hugely beneficial towards understanding how these enzymes function and the design of more effective medications to target them. For example, in the study of complexes between HIV-1 protease – the enzyme responsible for maturation of virus particles into infectious HIV virions – and drug molecules, neutrons can reveal hydrogen-bonding interactions that offer ways to enhance drug-binding and reduce drug-resistance of anti-retroviral therapies.
More than half of the SARS-CoV-2-related structures determined thus far are high-resolution X-ray structures of the virus’s main protease, with the majority of these bound to potential inhibitors. One of the main challenges for performing neutron crystallography is that larger crystals are required than for comparable X-ray crystallography studies, owing to the lower flux of neutron beams relative to X-ray beam intensities. Nevertheless, given the benefits provided by the visualisation of hydrogen-bonding networks for understanding drug-binding, scientists have been optimising crystallisation conditions for the growth of larger crystals, in combination with the production of fully deuterated protein in preparation for neutron crystallography experiments in the near future. Currently, teams at ORNL, ILL and the DEMAX facility in Sweden are growing crystals for SARS-CoV-2 investigations.
Proteases are, however, not the only proteins for which neutron crystallography can provide essential information. For example, the spike protein (S-protein) of SARS-CoV-2, which is responsible for mediating attachment and entry into human cells, is of great relevance for developing therapeutic defence strategies against the virus. Here, neutron crystallography can potentially provide unique information about the specific domain of the S-protein where the virus binds to human cell receptors. Comparison of the structure of this region between different coronaviruses (SARS-CoV-2 and SARS-CoV), obtained using X-rays, suggests that small alterations to the amino-acid sequence may enhance the binding affinity of the S-protein to the human receptor hACE2, making SARS-CoV-2 more infectious. Neutron studies will provide further insight into this binding, which is crucial for the attachment of the virus. These experiments are scheduled to take place at, for example, ILL and ORNL (and possibly MLZ) as soon as large enough crystals have been grown.
The big picture
Biological systems have a hierarchy of structures: molecules assemble into structures such as proteins; these form complexes which, as supramolecular arrangements like membranes, are the building blocks of cells – and cells, of course, are the building blocks of our bodies. Every part of this huge machinery is subject to continuous reorganisation. To understand the functioning, or in the case of disease the malfunctioning, of a biological system, we must therefore gain insight into the biological mechanisms on all of these different length scales.
When it comes to studying the function of larger biological complexes such as assembled viruses, SANS becomes an important analytical tool. The technique’s capacity to distinguish specific regions (RNA, proteins and lipids) of the virus – thanks to advanced deuteration methods – enables researchers to map out the arrangement of the various components, contributing invaluable information to structural studies of SARS-CoV-2. While other analytical techniques provide the detailed atomic-resolution structure of small biological assemblies, neutron scattering allows researchers to pan back to see the larger picture of full molecular complexes, at lower resolution. Neutron scattering is also uniquely suited to determining the structure of functional membrane proteins in physiological conditions. It will therefore make it possible to map out the structure of the complex formed by the S-protein and the hACE2 receptor.
Neutrons can penetrate deep into matter without damaging the samples
Last but not least, a full understanding of the virus’s life cycle requires studying the interaction of the virus with the cell membrane and the mechanism it uses to penetrate the host cell. Like HIV, SARS-CoV-2 possesses a viral envelope composed of lipids, proteins and sugars. By providing information on its molecular structure and composition, the technique of neutron reflectometry – whereby highly collimated neutrons are incident on a flat surface and the intensity of the reflected radiation is measured as a function of angle or neutron wavelength – helps to elucidate the precise mechanism the virus uses to penetrate the cell. As in the case of SANS, the strength of neutron reflectometry lies in the fact that it provides a different contrast to X-rays, and that this contrast can be varied via deuteration, making it possible, for example, to distinguish a protein inserted into a membrane from the membrane itself. For SARS-CoV-2, this means that neutron reflectometry can provide detailed structural information on the interaction of small protein fragments, so-called peptides, that mimic the S-protein and are believed to be responsible for binding to the receptor of the host cell. Defining this mechanism, which is decisive for the infection, will be essential to controlling the virus and its potential future mutations in the long term.
Tool of choice
And we should not forget that viruses in their physiological environments are highly dynamic systems. Knowing how they move, deform and cluster is essential for optimising diagnostic and therapeutic treatments. Neutron spectroscopy, which is ideally suited to follow the motion of matter from small chemical groups to large macromolecular assemblies, is the tool of choice to provide this information.
The League of Advanced European Neutron Sources (CERN Courier May/June 2020 p49) has rapidly mobilised to conduct all relevant experiments. We are equally in close contact with our international partners, some of whom have reopened, or are just in the process of reopening, their facilities. Scientists have to make sure that each research subject is provided with the best-suited analytical tool – in other words, those that have the samples will be given the necessary beam time. Neutron facilities are adapting fast, with special access channels to beam time having been implemented to allow the scientific community to respond without delay to the challenge posed by COVID-19.
Since the inception of the Standard Model (SM) of particle physics half a century ago, experiments of all shapes and sizes have put it to increasingly stringent tests. The largest and most well-known are collider experiments, which in particular have enabled the direct discovery of various SM particles. Another approach utilises the tools of atomic physics. The relentless improvement in the precision of the tools and techniques of atomic physics, both experimental and theoretical, has led to the verification of the SM’s predictions with ever greater accuracy. Examples include measurements of atomic parity violation that reveal the effects of the Z boson on atomic states, and measurements of atomic energy levels that verify the predictions of quantum electrodynamics (QED). Precision atomic-physics experiments also include a vast array of searches for effects predicted by theories beyond the SM (BSM), such as fifth forces and permanent electric dipole moments that violate parity and time-reversal symmetries. These tests probe potentially subtle yet constant (or controllable) changes of atomic properties that can be revealed by averaging away noise and controlling systematic errors.
But what if the glimpses of BSM physics that atomic spectroscopists have so painstakingly searched for over the past decades are not effects that persist over the many weeks or months of a typical measurement campaign, but rather transient events that occur only sporadically? For example, might not cataclysmic astrophysical events such as black-hole mergers or supernova explosions produce hypothetical ultralight bosonic fields impossible to generate in the laboratory? Or might not Earth occasionally pass through some invisible “cloud” of a substance (such as dark matter) produced in the early universe? Such transient phenomena could easily be missed by experimenters when data are averaged over long times to increase the signal-to-noise ratio.
Detecting such unconventional events presents several challenges. If a transient signal heralding new physics were observed with a single detector, it would be exceedingly difficult to confidently distinguish the exotic-physics signal from the many sources of noise that plague precision atomic-physics measurements. However, if transient interactions occur on a global scale, a network of detectors geographically distributed over Earth could search for specific patterns in the timing and amplitude of such signals that would be unlikely to occur randomly. By correlating the readouts of many detectors, local effects can be filtered away and exotic physics can be distinguished from mundane physics.
This idea forms the basis for the Global Network of Optical Magnetometers to search for Exotic physics (GNOME), an international collaboration involving 14 institutions from all over the world (see “Correlated” figure). Such an idea, like so many others in physics, is not entirely new. The same concept is at the heart of the worldwide network of interferometers used to observe gravitational waves (LIGO, Virgo, GEO, KAGRA, TAMA, CLIO), and the global network of proton-precession magnetometers used to monitor geomagnetic and solar activity. What distinguishes GNOME from other global sensor networks is that it is specifically dedicated to searching for signals from BSM physics that have evaded detection in earlier experiments.
GNOME is a growing network of more than a dozen optical atomic magnetometers, with stations in Europe, North America, Asia and Australia. The project was proposed in 2012 by a team of physicists from the University of California at Berkeley, Jagiellonian University, California State University – East Bay, and the Perimeter Institute. The network started taking preliminary data in 2013, with the first dedicated science-run beginning in 2017. With more data on the way, the GNOME collaboration, consisting of more than 50 scientists from around the world, is presently combing the data for signs of the unexpected, with its first results expected later this year.
Exotic-physics detectors
Optical atomic magnetometers (OAMs) are among the most sensitive devices for measuring magnetic fields. However, the atomic vapours at the heart of GNOME’s OAMs are placed inside multi-layer shielding systems, reducing the effects of external magnetic fields by a factor of more than a million. Thus, in spite of using extremely sensitive magnetometers, GNOME sensors are largely insensitive to magnetic signals. The reasoning is that many BSM theories predict the existence of exotic fields that couple to atomic spins and would penetrate magnetic shields largely unaffected. Since the OAM signal is proportional to the spin-dependent energy shift regardless of whether or not a magnetic field causes the shift, OAMs – even enclosed within magnetic shields – are sensitive to a broad class of exotic fields.
The basic principle behind OAM operation (see “Optical rotation” figure) involves optically measuring spin-dependent energy shifts by controlling and monitoring an ensemble of atomic spins via angular-momentum exchange between the atoms and light. The high efficiency of optical pumping and probing of atomic spin ensembles, along with a wide array of clever techniques to minimise atomic spin relaxation (even at high atomic vapour densities), has enabled OAMs to achieve sensitivities to spin-dependent energy shifts at levels well below 10⁻²⁰ eV after only one second of integration. One of the 14 OAM installations, at California State University – East Bay, is shown in the “Benchtop physics” image.
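For orientation only: if one interprets such an energy shift as an effective Zeeman shift with a g-factor of order unity (an assumption made here for illustration, not a statement from the collaboration), the quoted sensitivity corresponds to sub-femtotesla magnetic resolution:

```latex
% Illustrative conversion, assuming a Zeeman-like coupling with g_F ~ 1
\Delta E = g_F\,\mu_B\,B
\quad\Rightarrow\quad
B \sim \frac{10^{-20}\ \mathrm{eV}}{5.8\times10^{-5}\ \mathrm{eV/T}}
\approx 2\times10^{-16}\ \mathrm{T} \simeq 0.2\ \mathrm{fT}.
```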
However, one might wonder: do any of the theoretical scenarios suggesting the existence of exotic fields predict signals detectable by a magnetometer network while also evading all existing astrophysical and laboratory constraints? This is not a trivial requirement, since previous high-precision atomic spectroscopy experiments have established stringent limits on BSM physics. In fact, OAM techniques have been used by a number of research groups (including our own) over the past several decades to search for spin-dependent energy shifts caused by exotic fields sourced by nearby masses or polarised spins. Closely related work has ruled out vast areas of BSM parameter space by comparing measurements of hyperfine structure in simple hydrogen-like atoms to QED calculations. Furthermore, if exotic fields do exist and couple strongly enough to atomic spins, they could cause noticeable cooling of stars and affect the dynamics of supernovae. So far, all laboratory experiments have produced null results and all astrophysical observations are consistent with the SM. Thus if such exotic fields exist, their coupling to atomic spins must be extremely feeble.
Despite these constraints and requirements, theoretical scenarios that are both consistent with existing constraints and predict effects measurable with GNOME do exist. Prime examples, and the present targets of the GNOME collaboration’s search efforts, are ultralight bosonic fields. A canonical example of an ultralight boson is the axion. The axion emerged from an elegant solution, proposed by Roberto Peccei and Helen Quinn in the late 1970s, to the strong-CP problem. The Peccei–Quinn mechanism explains the mystery of why the strong interaction, to the highest precision we can measure, respects the combined CP symmetry, whereas quantum chromodynamics naturally accommodates CP violation at a level ten orders of magnitude larger than present constraints allow. If CP violation in the strong interaction is described not by a constant term but rather by a dynamical (axion) field, it can be strongly suppressed following spontaneous symmetry breaking at a high energy scale. If the symmetry-breaking scale is at the grand-unification-theory (GUT) scale (~10¹⁶ GeV), the axion mass is around 10⁻¹⁰ eV, and at the Planck scale (10¹⁹ GeV) around 10⁻¹³ eV – both many orders of magnitude less massive than even neutrinos. Searching for ultralight axions therefore offers the exciting possibility of probing physics at the GUT and Planck scales, far beyond the direct reach of any existing collider.
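These mass estimates follow from the standard (approximate) QCD-axion relation between the axion mass and the symmetry-breaking scale, here the axion decay constant f_a; the numbers below are quoted for orientation:

```latex
% Approximate QCD-axion mass--decay-constant relation
m_a \;\approx\; 5.7\ \mu\mathrm{eV}\times\frac{10^{12}\ \mathrm{GeV}}{f_a}
\quad\Rightarrow\quad
m_a \sim 6\times10^{-10}\ \mathrm{eV}\ (f_a = 10^{16}\ \mathrm{GeV}),\qquad
m_a \sim 6\times10^{-13}\ \mathrm{eV}\ (f_a = 10^{19}\ \mathrm{GeV}).
```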
Beyond the Standard Model
In addition to the axion, there is a wide range of other hypothetical ultralight bosons that couple to atomic spins and could generate signals potentially detectable with GNOME. Many theories predict the existence of spin-0 bosons with properties similar to the axion (so-called axion-like particles, ALPs). A prominent example is the relaxion, proposed by Peter Graham, David Kaplan and Surjeet Rajendran to explain the hierarchy problem: the mystery of why the electroweak force is about 24 orders of magnitude stronger than the gravitational force. In 2010, Asimina Arvanitaki and colleagues found that string theory suggests the existence of many ALPs of widely varying masses, from 10⁻³³ eV to 10⁻¹⁰ eV. From the perspective of BSM theories, ultralight bosons are ubiquitous. Some predict ALPs such as “familons”, “majorons” and “arions”. Others predict new ultralight spin-1 bosons such as dark and hidden photons. There is even the possibility of exotic spin-0 or spin-1 gravitons: while the graviton for a quantum theory of gravity matching that described by general relativity must be spin-2, alternative gravity theories (for example torsion gravity and scalar–vector–tensor gravity) predict additional spin-0 and/or spin-1 gravitons.
It also turns out that such ultralight bosons could explain dark matter. Most searches for ultralight bosonic dark matter assume the bosons to be approximately uniformly distributed throughout the dark matter halo that envelops the Milky Way. However, in some theoretical scenarios the ultralight bosons can clump together into bosonic “stars” due to self-interactions. In other scenarios, due to a non-trivial vacuum energy landscape, the ultralight bosons could take the form of “topological” defects, such as domain walls that separate regions of space with different vacuum states of the bosonic field (see “New domains” figure). In either case, the mass-energy associated with ultralight bosonic dark matter would be concentrated in large composite structures that Earth might only occasionally encounter, leading to the sort of transient signals that GNOME is designed to search for.
Yet another possibility is that intense bursts of ultralight bosonic fields might be generated by cataclysmic astrophysical events such as black-hole mergers. Much of the underlying physics of coalescing singularities is unknown, possibly involving quantum-gravity effects far beyond the reach of high-energy experiments on Earth, and it turns out that quantum gravity theories generically predict the existence of ultralight bosons. Furthermore, if ultralight bosons exist, they may tend to condense in gravitationally bound halos around black holes. In these scenarios, a sizable fraction of the energy released when black holes merge could plausibly be emitted in the form of ultralight bosonic fields. If the energy density of the ultralight bosonic field is large enough, networks of atomic sensors like GNOME might be able to detect a signal.
In order to use OAMs to search for exotic fields, the effects of environmental magnetic noise must be reduced, controlled, or cancelled. Even though the GNOME magnetometers are enclosed in multi-layer magnetic shields so that signals from external electromagnetic fields are significantly suppressed, there is a wide variety of phenomena that can mimic the sorts of signals one would expect from ultralight bosonic fields. These include vibrations, laser instabilities, and noise in the circuitry used for data acquisition. To combat these spurious signals, each GNOME station uses auxiliary sensors to monitor electromagnetic fields outside the shields (which could leak inside the shields at a far-reduced level), accelerations and rotations of the apparatus, and overall magnetometer performance. If the auxiliary sensors indicate data may be suspect, the data are flagged and ignored in the analysis (see “Spurious signals” figure).
GNOME data that have passed this initial quality check can then be scanned for signals matching the patterns expected under various exotic-physics hypotheses. For example, to test the hypothesis that dark matter takes the form of ALP domain walls, one searches for the signal pattern resulting from the passage of Earth through an astronomically sized plane with a finite thickness given by the ALP’s Compton wavelength. The relative velocity between the domain wall and Earth is unknown, but can be assumed to be randomly drawn from the velocity distribution of virialised dark matter, which has an average speed of about one thousandth the speed of light. The relative timing of signals appearing in different GNOME magnetometers should be consistent with a single velocity v: stations that are close together along the direction of the wall’s propagation should detect signals with smaller delays, stations that are far apart should detect signals with larger delays, and the time delays should occur in a sensible sequence. The energy shift that could lead to a detectable signal in GNOME magnetometers is caused by an interaction of the domain-wall field φ with the atomic spin S whose strength is proportional to the scalar product of the spin with the gradient of the field, S·∇φ. The gradient of the domain-wall field, ∇φ, is proportional to its momentum relative to S, and hence the signals appearing in different GNOME magnetometers are proportional to S·v. Both the signal-timing pattern and the signal-amplitude pattern should be consistent with a single value of v; signals inconsistent with such a pattern can be rejected as noise.
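As a schematic illustration of this consistency check (not the collaboration’s analysis code: the station coordinates, spin axes and wall velocity below are invented for the example), one can compute the expected delays and relative amplitudes for a hypothetical wall crossing:

```python
import numpy as np

# Hypothetical station positions (km, common Earth-fixed frame) and
# directions of their sensitive (spin-polarisation) axes -- illustrative only.
stations = {
    "A": {"r": np.array([0.0, 0.0, 0.0]),          "s": np.array([0.0, 0.0, 1.0])},
    "B": {"r": np.array([6000.0, 1000.0, 0.0]),    "s": np.array([1.0, 0.0, 0.0])},
    "C": {"r": np.array([-3000.0, 8000.0, 500.0]), "s": np.array([0.0, 1.0, 0.0])},
}

v = np.array([250.0, 80.0, -30.0])   # assumed wall velocity in km/s (~1e-3 c)
speed = np.linalg.norm(v)
v_hat = v / speed

# A planar wall with normal along v crosses station i at
# t_i = (r_i - r_ref) . v_hat / |v| relative to a reference station,
# and the expected signal amplitude scales with s_i . v (the S.grad(phi) coupling).
r_ref = stations["A"]["r"]
for name, st in stations.items():
    delay = np.dot(st["r"] - r_ref, v_hat) / speed   # seconds
    amplitude = np.dot(st["s"], v_hat)               # relative units
    print(f"{name}: delay = {delay:+.2f} s, relative amplitude = {amplitude:+.2f}")
```

A candidate event would have to reproduce both the timing and the amplitude pattern for a single value of v before being considered further.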
If such exotic fields exist, their coupling to atomic spins must be extremely feeble
To claim discovery of a signal heralding BSM physics, detections must be compared to the background rate of spurious false-positive events that are consistent with the expected signal pattern but not generated by exotic physics. The false-positive rate can be estimated by analysing time-shifted data: the data stream from each GNOME magnetometer is shifted in time relative to the others by an amount much larger than any delay resulting from the propagation of ultralight bosonic fields through Earth. Such time-shifted data can be assumed to be free of exotic-physics signals, so any detections are necessarily false positives: merely random coincidences due to noise. When the GNOME data are analysed without time shifts, a signal must surpass the 5σ threshold, as compared to the background determined with the time-shifted data, to be regarded as an indication of BSM physics. This means that, for a year-long data set, an event due to noise coincidentally matching the assumed signal pattern throughout the network would be expected only once every 3.5 million years.
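One way to read the quoted figure (an interpretation added here, not a statement from the collaboration): a one-sided 5σ threshold corresponds to a false-alarm probability of roughly 3 × 10⁻⁷ per year-long data set, so the expected waiting time for such a noise coincidence is

```latex
% Rough reading of the 5-sigma false-alarm rate
p(>5\sigma,\ \text{one-sided}) \approx 2.9\times10^{-7}
\quad\Rightarrow\quad
\frac{1\ \text{yr}}{2.9\times10^{-7}} \approx 3.5\times10^{6}\ \text{yr}.
```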
Inspiring efforts
Having already collected over a year of data, and with more on the way, the GNOME collaboration is presently combing the data for signs of BSM physics. New results based on recent GNOME science runs are expected in 2020; these would represent the first-ever search for such transient exotic spin-dependent effects. Improvements in magnetometer sensitivity, signal characterisation and data-analysis techniques are expected to improve on these initial results over the next several years. Significantly, GNOME has inspired similar efforts using other networks of precision quantum sensors: atomic clocks, interferometers, cavities, superconducting gravimeters and more. In fact, the results of searches for exotic transient signals using clock networks have already been reported in the literature, constraining significant parameter space for various BSM scenarios. We would suggest that all experimentalists seriously consider accurately time-stamping, storing and sharing their data so that searches for correlated signals due to exotic physics can be conducted a posteriori. One never knows what nature might be hiding just beyond the frontier of the precision of past measurements.
A career as a surveyor offers the best of two worlds, thinks Dominique Missiaen, a senior member of CERN’s survey, mechatronics and measurements (SMM) group: “I wanted to be a surveyor because I felt I would like to be inside part of the time and outside the other, though being at CERN is the opposite because the field is in the tunnels!” After qualifying as a surveyor and spending time doing metrology for a cement plant in Burma and for the Sorbonne in Paris, Missiaen arrived at CERN as a stagiaire in 1986. He never left, starting in a staff position working on the alignment of the pre-injector for LEP, then of LEP itself, and then leading the internal metrology of the magnets for the LHC. From 2009 to 2018 he was in charge of the whole survey section, and since last year he has had a new role as coordinator for special projects, such as the development of a train to remotely survey the magnets in the arcs of the LHC.
“Being a surveyor at CERN is completely different to other surveying jobs,” explains Missiaen. “We are asked to align components within a couple of tenths of a millimetre, whereas in the normal world they tend to work with an accuracy of 1–2 cm, so we have to develop new and special techniques.”
A history of precision
When building the Proton Synchrotron in the 1950s, engineers needed an instrument to align components to 50 microns in the horizontal plane. A device to measure such distances did not exist on the market, so the early CERN team invented the “distinvar” – an instrument to ensure the nominal tension of an invar wire while measuring the small length to be added to obtain the distance between two points. It was still used as recently as 10 years ago, says Missiaen. Another “stretched wire” technique developed for the ISR in the 1960s and still in use today replaces small-angle measurements by a short-distance measurement: instead of measuring the angle between two directions, AB and AC, using a theodolite, it measures the distance between the point B and the line AC. The AC line is realised by a nylon wire, while the distance is measured using a device invented at CERN called the “ecartometer”.
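To see why replacing an angle measurement by a short-distance measurement pays off, consider the small-angle geometry involved; the numbers below are purely illustrative and are not CERN specifications:

```latex
% Offset of point B from the line AC for a small angle \alpha at A
d \;=\; |AB|\sin\alpha \;\approx\; |AB|\,\alpha
\quad\Rightarrow\quad
\alpha \;\approx\; \frac{0.05\ \text{mm}}{100\ \text{m}} \;=\; 5\times10^{-7}\ \text{rad}
\;\approx\; 0.1''.
```

Measuring a sub-tenth-of-a-millimetre offset with the ecartometer is thus equivalent to an angular measurement at the level of a tenth of an arcsecond.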
Invention and innovation haven’t stopped. The SMM group recently adapted a metrology technique called frequency sweeping interferometry for use in a cryogenic environment to align components inside the sealed cryostats of the future High-Luminosity LHC (HL-LHC), which contract by up to 12 mm when cooled to operational temperatures. Another recent innovation, in collaboration with the Institute of Plasma Physics in Prague that came about while developing the challenging alignment system for HIE-ISOLDE, is a non-diffractive laser beam with a central axis that diverges by just a few mm over distances of several hundred metres and which can “reconstruct” itself after meeting an obstacle.
The specialised nature of surveying at CERN means the team has to spend a lot of time finding the right people and training newcomers. “It’s hard to measure at this level and to maintain the accuracy over long distances, so when we recruit we look for people who have a feeling for this level of precision,” says Missiaen, adding that a constant feed of students is important. “Every year I go back to my engineering school and give a talk about metrology, geodesy and topometry at CERN so that the students understand there is something special they can do in their career. Some are not interested at all, while others are very interested – I never find students in between!”
We see the physics results as a success that we share in too
CERN’s SMM group has more than 120 people, with around 35 staff members. Contractors push the numbers up further during periods such as the current long shutdown 2 (LS2), during which the group is tasked with measuring all the components of the LHC in the radial and vertical directions. “It takes two years,” says Jean-Frederic Fuchs, who is section leader for accelerators, survey and geodesy. “During a technical stop, we are in charge of the 3D-position determination of the components in the tunnels and their alignment at the level of a few tenths of a millimetre. There is a huge number of various accelerator elements along the 63 km of beam lines at CERN.”
Fuchs did his master’s thesis at CERN in the domain of photogrammetry and then left to work in Portugal, where he was in charge of guiding a tunnel-boring machine for a railway project. He returned to CERN in the early 2000s as a fellow, followed by a position as a project associate working on the assembly and alignment of the CMS experiment. He then left to join EDF where he worked on metrology inside nuclear power plants, finally returning to CERN as a staff member in 2011 working on accelerator alignment. “I too sought a career in which I didn’t have to spend too much time in the office. I also liked the balance between measurements and calculations. Using theodolites and other equipment to get the data is just one aspect of a surveyor’s job – post-treatment of the data and planning for measurement campaigns is also a big part of what we do.”
With experience in both experiment and accelerator alignment, Fuchs knows all too well the importance of surveying at CERN. Some areas of the LHC tunnel are moving by about 1 mm per year due to underground movement inside the rock. The tunnel is rising at point 5 (where CMS is located) and falling between P7 and P8, while the huge mass of the LHC experiments largely keeps them at the same vertical position, therefore requiring significant realignment of the LHC magnets. During LS2, the SMM group plans to lower the LHC at point 5 by 3 mm to better match the CMS interaction point, by adjusting jacks that allow the LHC to be raised or lowered by around 20 mm in each direction. For newer installations, the movement can be much greater. For example, Linac4 has moved up by 5 mm in the source area, leading to a slope that must be corrected. The new beam-dump tunnels in the LHC and the freshly excavated HL-LHC tunnels at points 1 and 5 are also moving slightly compared to the main LHC tunnel. “Today we almost know all the places where it moves,” says Fuchs. “For sure, if you want to run the LHC for another 18 years there will be a lot of measurement and realignment work to be done.” His team also works closely with machine physicists to compare its measurements to those performed with the beams themselves.
It is clear that CERN’s accelerator infrastructure could not function at the level it does without the field and office work of surveyors. “We see the physics results as a success that we share in too,” says Missiaen. “When the LHC turned on you couldn’t know if a mistake had been made somewhere, so in seeing the beam go from one point to another, we take pride that we have made that possible.”
A new record for the highest luminosity at a particle collider has been set by SuperKEKB at the KEK laboratory in Tsukuba, Japan. On 15 June, electron–positron collisions at the 3 km-circumference double-ring collider reached an instantaneous luminosity of 2.22 × 10³⁴ cm⁻² s⁻¹ – surpassing the LHC’s record of 2.14 × 10³⁴ cm⁻² s⁻¹ set with proton–proton collisions in 2018. A few days later, SuperKEKB pushed the luminosity record to 2.4 × 10³⁴ cm⁻² s⁻¹. This milestone follows more than two years of commissioning of the new machine, which delivers asymmetric electron–positron collisions to the Belle II detector at energies corresponding to the Υ(4S) resonance (10.58 GeV) to produce copious amounts of B and D mesons and τ leptons.
We can spare no words in thanking KEK for their pioneering work in achieving results that push forward both the accelerator frontier and the related physics frontier
Pantaleo Raimondi
SuperKEKB is an upgrade of the KEKB B-factory, which operated from 1998 until June 2010 and held the luminosity record of 2.11 × 10³⁴ cm⁻² s⁻¹ for almost ten years until the LHC edged past it. SuperKEKB’s new record was achieved with a product of beam currents less than 25% of that at KEKB, thanks to a novel “nano-beam” scheme originally proposed by accelerator physicist Pantaleo Raimondi of the ESRF, Grenoble. The scheme, which works by focusing the very low-emittance beams using powerful magnets at the interaction point, squeezes the vertical height of the beams at the collision point to about 220 nm. This is expected to decrease to approximately 50 nm by the time SuperKEKB reaches its design performance.
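The gain from squeezing the vertical beam size can be read off the textbook luminosity expression for colliding Gaussian beams, quoted here for orientation with the geometric effects of the crossing angle (which the nano-beam scheme exploits) lumped into a reduction factor R:

```latex
% Simplified luminosity: N_+/N_- bunch populations, n_b bunches,
% revolution frequency f, transverse spot sizes at the interaction point.
L \;=\; \frac{N_{+} N_{-}\, n_b\, f}{4\pi\,\sigma^{*}_{x}\,\sigma^{*}_{y}}\; R
```

At fixed beam currents, reducing σ*y from about 220 nm towards roughly 50 nm therefore raises the luminosity by a corresponding factor.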
“We, as the accelerator community, have been working together with the KEK team for a very long time and we can spare no words in thanking KEK for their pioneering work in achieving results that push forward both the accelerator frontier and the related physics frontier,” says Raimondi.
The first collider to employ the nano-beam scheme and to achieve a β*y focusing parameter of 1 mm, SuperKEKB required significant upgrades to KEKB, including a new low-energy-ring beam pipe, a new and complex system of superconducting final-focusing magnets, a positron damping ring and an advanced injector. The most recent improvement, completed in April, was the introduction of crab-waist technology, which stabilises beam–beam blow-up using carefully tuned sextupole magnets located symmetrically on either side of the interaction point (IP). It was first used at DAΦNE, which had much less demanding tolerances than SuperKEKB, and differs from the “crab-crossing” technology based on special radio-frequency cavities, which was used to boost the luminosity at KEKB and is now being implemented at CERN for the High-Luminosity LHC.
This luminosity milestone marks the start of the super B-factory era
Yukiyoshi Ohnishi
“The vertical beta at the IP is 1 mm which is the smallest value for colliders in the world. Now we are testing 0.8 mm,” says Yukiyoshi Ohnishi, commissioning leader for SuperKEKB. “The difference between DAΦNE and SuperKEKB is the size of the Piwinski angle, which is much larger than 1 as found in ordinary head-on or small crossing-angle colliders.”
In the coming years, the luminosity of SuperKEKB is to be increased by a factor of around 40 to reach its design target of 8 × 10³⁵ cm⁻² s⁻¹. Over the next ten years this will deliver to Belle II, which produced its first physics result in April, around 50 times more data than its predecessor, Belle, collected at KEKB. The large expected dataset, containing about 50 billion B-meson pairs and similar numbers of charm mesons and tau leptons, will enable Belle II to study rare decays and test the Standard Model with unprecedented precision, allowing deeper investigations of the flavour anomalies reported by LHCb and sensitive searches for very weakly interacting dark-sector particles.
“This luminosity milestone, which was the result of extraordinary efforts of the SuperKEKB and Belle II teams, marks the start of the super B-factory era. It was a special thrill for us, coming in the midst of a global pandemic that was difficult in so many ways for work and daily life,” says Ohnishi. “In the coming years, we will significantly increase the beam currents and focus the beams even harder, reducing the β*y parameter far below 1 mm. However, there will be many more difficult technical challenges on the long road ahead to design luminosity, which is expected towards the end of the decade.”
The crab-waist scheme is also envisaged for a possible Super Tau Charm factory and for the proposed Future Circular Collider (FCC-ee) at CERN, says Raimondi. “For both these projects there is a solid design based on this concept and in general all circular lepton colliders are apt to benefit from it.”
First isolated in 2004 by physicists at the University of Manchester using pieces of sticky tape and a graphite block, the one-atom-thick carbon allotrope graphene has been touted as a wonder material on account of its exceptional electrical, thermal and physical properties. Turning these properties into scalable commercial devices has proved challenging, however, which makes a recently agreed collaboration between CERN and UK firm Paragraf on graphene-based Hall-probe sensors especially novel.
There is probably no other facility in the world to be able to confirm this, so the project has been a big win on both sides
Ellie Galanis
With particle accelerators requiring large numbers of normal-conducting and superconducting magnets, high-precision and reliable magnetic measurements are essential. While the workhorse for these measurements is the rotating-coil magnetometer, with a resolution limit of the order of 10⁻⁸ Vs, the most important tool for local field mapping is the Hall probe, which produces a voltage proportional to the field strength when a current-carrying sensor is placed perpendicular to a magnetic field. However, measurement uncertainties in the 10⁻⁴ range, required for determining field multipoles, are difficult to obtain even with state-of-the-art devices. False signals caused by non-perpendicular field components in the three-dimensional sensing region of existing Hall probes can increase the measurement uncertainty, requiring complex and time-consuming calibration and processing to separate true signals from systematic errors. With an active sensing component made of atomically thin graphene, which is effectively two-dimensional, a graphene-based Hall probe in principle suffers negligible planar Hall effects and could therefore enable higher precision mapping of local magnetic fields.
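In the ideal, textbook picture (included here for orientation), the Hall voltage of a thin sensor carrying a current I depends only on the field component perpendicular to its sensing plane:

```latex
% Ideal Hall response of a two-dimensional sheet with carrier density n_s
V_H \;=\; \frac{I\,B_{\perp}}{n_s\,e}
```

The spurious planar-Hall contributions described above, which depend on the in-plane field components, are associated with the three-dimensional sensing region of conventional probes; an effectively two-dimensional graphene channel suppresses them, which is what the tests at CERN set out to confirm.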
Inspiration strikes
Stephan Russenschuck, head of the magnetic measurement section at CERN, spotted the potential of graphene-based Hall probes when he heard about a talk given by Paragraf – a recent spin-out from the department of materials science at the University of Cambridge – at a magnetic measurement conference in December 2018. This led to a collaboration, formalised between CERN and Paragraf in April, which has seen several graphene sensors installed and tested at CERN during the past year. The firm sought to develop and test the device ahead of a full product launch by the end of this year, and the results so far, based on well-calibrated field measurements in CERN’s reference magnets, have been very promising. “The collaboration has proved that the sensor has no planar effect,” says Paragraf’s Ellie Galanis. “This was a learning step. There is probably no other facility in the world to be able to confirm this, so the project has been a big win on both sides.”
The graphene Hall sensor also operates over a wide temperature range, down to liquid-helium temperatures at which superconducting magnets in the LHC operate. “How these sensors behave at cryogenic temperatures is very interesting,” says Russenschuck. “Usually the operation of Hall sensors at cryogenic temperatures requires careful calibration and in situ cross-calibration with fluxmetric methods. Moreover, we are now exploring the sensors on a rotating shaft, which could be a breakthrough for extracting local, transversal field harmonics. Graphene sensors could get rid of the spurious modes that come from nonlinearities and planar effects.”
CERN and Paragraf, which has patented a scalable process for depositing two-dimensional materials directly onto semiconductor-compatible substrates, plan to release a joint white paper communicating the results so far and detailing the sensor’s performance across a range of magnetic fields.
More than 3000 accelerator specialists gathered in cyber-space from 11 to 14 May for the 11th International Particle Accelerator Conference (IPAC). The conference was originally destined for the GANIL laboratory in Caen, a charming city in Normandy, and host to the flagship radioactive-ion-beam facility SPIRAL-2, but the coronavirus pandemic forced the cancellation of the in-person meeting and the French institutes CNRS/IN2P3, CEA/IRFU, GANIL, Soleil and ESRF agreed to organise a virtual conference. Oral presentations and the accelerator-prize session were maintained, though unfortunately the poster and industry sessions had to be cancelled. The scientific programme committee whittled down more than 2000 proposals for talks into 77 presentations which garnered more than 43,000 video views across 60 countries, making IPAC’20 an involuntary pioneer of virtual conferencing and a lighthouse of science during the lockdown.
Recent trends indicate a move towards the use of permanent magnets
IPAC’20’s success relied on a programme of recent technical highlights, new developments and future plans in the accelerator world. Weighing in at 1,998 views, the most popular talk of the conference was by Ben Shepherd from STFC’s Daresbury Laboratory in the UK, who spoke on high-technology permanent magnets. Accelerators do not only accelerate ensembles of particles, but also use strong magnetic fields to guide and focus them into very small volumes, typically just micrometres or nanometres in size. Recent trends indicate a move towards the use of permanent magnets, which provide strong fields without requiring external power and can offer outstanding field quality. Describing the major advances for permanent magnets in terms of production, radiation resistance, tolerances and field tuning, Shepherd presented high-tech devices developed and used for the SIRIUS, ESRF-EBS, SPring-8, CBETA, SOLEIL and CUBE-ECRIS facilities, and also presented the Zero-Power Tunable Optics (ZEPTO) collaboration between STFC and CERN, which offers 15–60 T/m tunability in quadrupoles and 0.46–1.1 T in dipoles.
Top of the talks
The seven IPAC’20 presentations with the most views included four by outstanding female scientists. CERN Director-General Fabiola Gianotti presented strategic considerations for future accelerator-based particle physics. While pointing out the importance of Europe participating in projects elsewhere in the world, she made the strong point that CERN should host an ambitious future collider, and discussed the options being considered, pointing to the update of the European Strategy for Particle Physics soon to be approved by the CERN Council. Sarah Cousineau from Oak Ridge National Laboratory reported on accelerator R&D as a driver for science in general, pointing out that accelerators have directly contributed to more than 25 Nobel Prizes, including through the Higgs-boson discovery at the LHC in 2012. The development of superconducting accelerator technology has enabled projects for colliders, photon science, nuclear physics and neutron spallation sources around the world, with several light sources and neutron facilities currently engaged in COVID-19 studies.
SPIRAL-2 will explore exotic nuclei near the limits of the periodic table
The benefits of accelerator-based photon science for society were also emphasised by Jerry Hastings from Stanford University and SLAC, who presented the tremendous progress in structural biology driven by accelerator-based X-ray sources, and noted that research can continue during COVID-19 times thanks to the remote synchrotron access pioneered at SSRL. Stressing the value of international collaboration, Hastings presented the outcome of an international X-ray facilities meeting that took place in April and defined an action plan for ensuring the best possible support for COVID-19 research. GANIL Director Alahari Navin presented new horizons in nuclear science, reviewing facilities around the world and presenting his own laboratory’s latest activities. GANIL has now started commissioning SPIRAL-2, which will allow users to explore the as-yet unknown properties of exotic nuclei near the limits of the periodic table of elements, and has performed its initial science experiment. Liu Lin from LNLS in Brazil presented the commissioning results for the new fourth-generation SIRIUS light source, showing that the functionality of the facility has already been demonstrated by storing 15 mA of beam current. Last, but not least among the top-seven most-viewed talks, Anke-Susanne Müller from KIT presented the status of the study for a 100 km Future Circular Collider – just one of the options for an ambitious post-LHC project at CERN.
Many other highlights from the accelerator field were presented during IPAC’20. Kyo Shibata (KEK) discussed the progress in physics data-taking at the SuperKEKB factory, where the Belle II experiment recently reported its first result. Ferdinand Willeke (BNL) presented the electron–ion collider approved to be built at BNL, Porntip Sudmuang (SLRI) showed construction plans for a new light source in Thailand, and Mohammed Eshraqi (ESS) discussed the construction of the European Spallation Source in Sweden. At the research frontier towards compact accelerators, Chang Hee Nam (IBS, Korea) explained prospects for laser-driven GeV-electron beams from plasma-wakefield accelerators, and Arnd Specka (LLR/CNRS) showed plans for the compact European plasma-accelerator facility EuPRAXIA, which is entering its next phase after successful completion of a conceptual design report. The accelerator-application session rounded off the picture with presentations by Annalisa Patriarca (Institut Curie) about accelerator challenges in a new radiation-therapy technique called FLASH, in which ultra-fast delivery of the radiation dose reduces damage to healthy tissue, by Charlotte Duchemin (CERN) on the production of non-conventional radionuclides for medical research at the MEDICIS hadron-beam facility, by Toms Torims (Riga Technical University) on the treatment of marine exhaust gases using electron beams, and by Adrian Fabich (SCK-CEN) on proton-driven nuclear-waste transmutation.
To the credit of the French organisers, the virtual setup worked seamlessly. The concept relied on pre-recorded presentations and a text-driven chat function which allowed registered participants to participate from time zones across the world. Activating the sessions in half-day steps preserved the appearance of live presentations to some degree, before a final live session, during which the four prizes of the accelerator group of the European Physical Society were awarded.
The steady increase in the energy of colliders during the past 40 years, which has fuelled some of the greatest discoveries in particle physics, was possible thanks to progress in superconducting materials and accelerator magnets. The highest particle energies have been reached by proton–proton colliders, where high-rigidity beams travelling on a piecewise circular trajectory require magnetic fields largely in excess of those that can be produced using resistive electromagnets. Starting with the Tevatron in 1983, through HERA in 1991 and RHIC in 2000, and finally the LHC in 2008, all large-scale hadron colliders have been built using superconducting magnets.
Large superconducting magnets for detectors are just as important to high-energy-physics experiments as beamline magnets are to particle accelerators. In fact, detector magnets are where superconductivity first took hold, right from the infancy of the technology in the 1960s, with major installations such as the large bubble-chamber solenoid at Argonne National Laboratory, followed by the giant BEBC solenoid at CERN, which held the record for the highest stored energy for many years. A long line of superconducting magnets has provided the magnetic fields for the detectors of all large-scale high-energy-physics colliders, the most recent and largest realisations being the LHC experiments CMS and ATLAS.
Optimisation
All past accelerator and detector magnets had one thing in common: they were built using composite Nb–Ti/Cu wires and cables. Nb–Ti is a ductile alloy with a critical field of 14.5 T and critical temperature of 9.2 K, made from almost equal parts of the two constituents. It was discovered to be superconducting in 1962 and its performance, quality and cost have been optimised over more than half a century of research, development and large-scale industrial production. Indeed, it is unlikely that the performance of the LHC dipole magnets, operated so far at 7.7 T and expected to reach nominal conditions at 8.33 T, can be surpassed using the same superconducting material, or any foreseeable improvement of this alloy.
And yet, approved projects and studies for future circular machines are all calling for the development of superconducting magnets that produce fields beyond those achieved for the LHC. These include the High-Luminosity LHC (HL-LHC), which is currently taking shape, and the Future Circular Collider (FCC) design study, both at CERN, together with studies and programmes outside Europe, such as the Super proton–proton Collider (SppC) in China or the past studies of a Very Large Hadron Collider at Fermilab and the US–DOE Muon Accelerator Program (see HL-LHC quadrupole successfully tested). This requires that we turn to other superconducting materials and novel magnet technology.
The HL-LHC springboard
To reach its main objective – to increase the levelled LHC luminosity at ATLAS and CMS, and the integrated luminosity by a factor of 10 – the HL-LHC requires very large-aperture quadrupoles, with field levels at the coil in the range of 12 T, in the interaction regions. These quadrupoles, currently being built and tested at CERN and Fermilab (see HL-LHC quadrupole successfully tested), are the main fruit of the 10-year US–DOE LHC Accelerator Research Program (US–LARP) – a joint venture between CERN, Brookhaven National Laboratory, Fermilab and Lawrence Berkeley National Laboratory. In addition, the increased beam intensity calls for collimators to be inserted in locations within the LHC “dispersion suppressor”, the portion of the accelerator where the regular magnet lattice is modified to ensure that off-momentum particles are centred in the interaction points. To gain the required space, standard arc dipoles will be substituted by dipoles of shorter length and higher field, approximately 11 T. As described earlier, such fields require the use of new materials. For the HL-LHC, the material of choice is the intermetallic compound of niobium and tin, Nb3Sn, which was discovered in 1954. Nb3Sn has a critical field of about 30 T and a critical temperature of about 18 K, outperforming Nb–Ti by a factor of two. Though discovered before Nb–Ti, and exhibiting better performance, Nb3Sn has not been used for accelerator magnets so far because in its final form it is brittle and cannot withstand large stress and strain without special precautions.
The HL-LHC is the springboard to the future of high-field accelerator magnets
In fact, Nb3Sn was one of the candidate materials considered for the LHC in the late 1980s and mid 1990s. Already at that time it was demonstrated that accelerator magnets could be built with Nb3Sn, but it was also clear that the technology was complex, with a number of critical steps, and not ripe for large-scale production. A good 20 years of progress in basic material performance, cable development, magnet engineering and industrial process control was necessary to reach the present state; during that time, the successful production of Nb3Sn for the ITER fusion experiment gave confidence in the credibility of this material for large-scale applications. As a result, magnet experts are now convinced that Nb3Sn technology is sufficiently mature to satisfy the challenging field levels required by the HL-LHC.
A difficult recipe
The present manufacturing recipe for Nb3Sn accelerator magnets consists of winding the magnet coil with glass-fibre insulated cables made of multi-filamentary wires that contain Nb and Sn precursors in a Cu matrix. In this form the cables can be handled and plastically deformed without breakage. The coils then undergo heat treatment, typically at a temperature of around 650 °C, during which the precursor elements react chemically and form the desired Nb3Sn superconducting phase. At this stage, the reacted coil is extremely fragile and needs to be protected from any mechanical action. This is done by injecting a polymer, which fills the interstitial spaces among cables, and is subsequently cured to become a matrix of hardened plastic providing cohesion and support to the cables.
The above process, though conceptually simple, presents a number of technical difficulties that call for top-of-the-line engineering and production control. To give some examples: the texture of the electrical insulation, consisting of a few tenths of a millimetre of glass fibre, needs to withstand the high-temperature heat-treatment step but also retain its dielectric and mechanical properties at liquid-helium temperatures nearly 1000 °C lower. The superconducting wire also changes its dimensions by a few percent, which is orders of magnitude larger than the dimensional accuracy requested for field quality and therefore must be predicted and accommodated by appropriate magnet and tooling design. The finished coil, even if made solid by the polymer cast, remains stress- and strain-sensitive: the stress that can be tolerated without breakage is at most about 150 MPa, to be compared with the electromagnetic stress in optimised magnets operating at 12 T, which can reach levels in the range of 100 MPa. This does not leave much headroom for engineering margins and manufacturing tolerances. Finally, protecting high-field magnets from quenches, with their large stored energy, requires a protection system with a very fast reaction – three times faster than at the LHC – and excellent noise rejection to avoid false trips related to flux jumps in the large Nb3Sn filaments.
The next jump
The CERN magnet group, in collaboration with the US–DOE laboratories participating in the LHC Accelerator Upgrade Project, is in the process of addressing these and other challenges, finding solutions suitable for magnet production on the scale required for the HL-LHC. A total of six 11 T dipoles (each about 6 m long) and 20 inner-triplet quadrupoles (up to 7.5 m long) are in production at CERN and in the US, and the first magnets have been tested (see “Power couple” image). And yet, it is clear that we are not ready to extrapolate such production to a much larger scale, i.e. to the thousands of magnets required for a possible future hadron collider such as FCC-hh. This is exactly why the HL-LHC is so critical to the development of high-field magnets for future accelerators: not only will it be the first demonstration of Nb3Sn magnets in operation, steering and colliding beams, but by building it on a scale that can be managed at the laboratory level we have a unique opportunity to identify all the areas of necessary development, and the open technology issues, to allow the next jump. Beyond its prime physics objective, the HL-LHC is therefore the springboard to the future of high-field accelerator magnets.
Climb to higher peak fields
For future circular colliders, the target dipole field has been set at 16 T for FCC-hh, allowing proton–proton collisions at an energy of 100 TeV, while China’s proposed pp collider (SppC) aims at a 12 T dipole field, to be followed by a 20 T dipole. Are these field levels realistic? And based on which technology?
Looking at the dipole fields produced by Nb3Sn development magnets during the past 40 years (figure 1), fields up to 16 T have been achieved in R&D demonstrators, suggesting that the FCC target can be reached. In 2018 “FRESCA2” – a large-aperture (100 mm) dipole developed over the past decade through a collaboration between CERN and CEA-Saclay in the framework of the European Union project EuCARD – attained a record field of 14.6 T at 1.9 K (13.9 T at 4.5 K). Another very recent result, obtained in June 2019, is the successful test at Fermilab by the US Magnet Development Programme (MDP) of a “cos-theta” dipole with an aperture of 60 mm called MDPCT1 (see “Cos-theta 1” image), which reached a field of 14.1 T at 4.5 K (CERN Courier September/October 2019 p7). In February this year, the CERN magnet group set a new Nb3Sn record with an enhanced racetrack model coil (eRMC), developed in the framework of the FCC study. The setup, which consists of two racetrack coils assembled without a mid-plane gap (see “Racetrack demo” image), produced a 16.36 T central field at 1.9 K and a 16.5 T peak field on the coil, which is the highest ever reached for a magnet of this configuration. The magnet was also tested at 4.5 K and reached a field of about 16.3 T (see HL-LHC quadrupole successfully tested). These results send a positive signal for the feasibility of next-generation hadron colliders.
A field of 16 T seems to be the upper limit that can be reached with a Nb3Sn accelerator magnet. Indeed, though the conductor performance can still be improved, as demonstrated by recent results obtained at the National High Magnetic Field Laboratory (NHMFL), Ohio State University and Fermilab within the scope of the US-MDP, this is the point at which the material itself will run out of steam. As for any other superconductor, the critical current density drops as the field grows, requiring an increasing amount of material to carry a given current. The effect becomes dramatic when approaching a significant fraction of the critical field. Akin to Nb–Ti in the region of 8 T, a further field increase with Nb3Sn beyond 16 T would require an exceedingly large coil and an impractical amount of conductor. Reaching the ultimate performance of Nb3Sn, which will be situated between the present 12 T and the expected maximum of 16 T, still requires much work. The technology issues identified by the ongoing work on the HL-LHC magnets are exacerbated by the increase in field, electromagnetic force and stored energy. Innovative industrial solutions will be needed, and the conductor itself brought to a level of maturity comparable to Nb–Ti in terms of performance, quality and cost. This work is the core of the ongoing FCC magnet-development programme that CERN is pursuing in collaboration with laboratories, universities and industries worldwide.
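To make the "running out of steam" argument concrete, the field dependence of the Nb3Sn critical current density is often described by a Kramer-type empirical fit (quoted here as a commonly used parametrisation, not a formula from the article):

\[
  J_c(B)\;\propto\;\frac{1}{\sqrt{B}}\left(1-\frac{B}{B_{c2}}\right)^{2}.
\]

As the operating point approaches a sizeable fraction of the upper critical field B_c2, the current density collapses roughly quadratically, so each additional tesla demands a disproportionately larger coil cross-section and amount of conductor.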
As the limit of Nb3Sn comes into view, we see history repeating itself: the only way to push beyond it to higher fields will be to resort to new materials. Since Nb3Sn is technically the low-temperature superconductor (LTS) with the highest performance, this will require a shift to high-temperature superconductors.
High-temperature superconductivity (HTS), discovered in 1986, is of great relevance in the quest for high fields. When operated at low temperature (the same liquid-helium range as LTS), HTS materials have exceedingly large critical fields, in the range of 100 T and above. And yet, only recently have material and magnet engineering reached the point where HTS materials can generate magnetic fields in excess of LTS ones. The first user applications coming to fruition are ultra-high-field NMR magnets, as recently delivered by Bruker BioSpin, and the intense magnetic fields required by materials science, for example the 32 T all-superconducting user facility built at NHMFL.
As for their application in accelerator magnets, the potential of HTS to make a quantum leap is enormous. But it is also clear that the tough challenges that needed to be solved for Nb3Sn will escalate to a formidable level in HTS accelerator magnets. The magnetic force scales with the square of the field produced by the magnet, and for HTS the problem will no longer be whether the material can carry the super-currents, but rather how to manage stresses approaching structural material limits. Stored energy has the same square dependence on the field, and quench detection and protection in large HTS magnets are still a spectacular challenge. In fact, HTS magnet engineering will probably differ so much from the LTS paradigm that it is fair to say that we do not yet know whether we have identified all the issues that need to be solved. HTS is the most exciting class of material to work with – a new world for brave explorers. But it is still too early to count on practical applications, not least because the production cost for this rather complex class of ceramic materials is about two orders of magnitude higher than that of good old Nb–Ti.
It is thus logical to expect the near future to be based mainly on Nb3Sn. With the first demonstration to come imminently in the LHC, we need to consolidate the technology and bring it to the maturity necessary for large-scale production. This will likely take place in steps – exploring 12 T territory first, while seeking solutions to the challenges of ultimate Nb3Sn performance towards 16 T – and could take as long as a decade. For China’s SppC, iron-based HTS has been suggested as a route to 20 T dipoles. This technology study is interesting from the point of view of the material, but the magnet technology for iron-based superconductors is still a long way off.
Meanwhile, nurtured by novel ideas and innovative solutions, HTS could grow from the present state of a material of great potential to its first applications. The LHC already uses HTS tapes (based on Bi-2223) for the superconducting part of the current leads. The HL-LHC will go further, by pioneering the use of MgB2 to transport the large currents required to power the new magnets over considerable distances (thereby shielding power converters and making maintenance much easier). The grand challenges posed by HTS will likely require a revolution rather than an evolution of magnet technology, and significant technology advancement leading to large-scale application in accelerators can only be imagined on the 25-year horizon.
Road to the future
There are two important messages to retain from this rather simplified perspective on high-field magnets for accelerators. Firstly, given the long lead times of this technology, and even in times of uncertainty, it is important to maintain a healthy and ambitious programme so that the next step in technology is at hand when critical decisions on the accelerators of the future are due. The second message is that with such long development cycles and very specific technology, it is not realistic to rely on the private sector to advance and sustain the specific demands of HEP. In fact, the business model of high-energy physics is very peculiar, involving long investment times followed by short production bursts, and not sustainable by present industry standards. So, without taking the place of industry, it is crucial to secure critical know-how and infrastructure within the field to meet development needs and ensure the long-term future of our accelerators, present and to come.
Physics beyond the Standard Model must exist, to account for dark matter, the smallness of neutrino masses and the dominance of matter over antimatter in the universe; but we have no real clue of its energy scale. It is also widely recognised that new and more precise tools will be needed to be certain that the 125 GeV boson discovered in 2012 is indeed the particle postulated by Brout, Englert, Higgs and others – the quantum of a field that, through its self-coupling, modified the vacuum potential of the whole universe, liberating the energy that gives mass to the W and Z bosons.
To tackle these big questions, and others, the Future Circular Collider (FCC) study, launched in 2014, proposed the construction of a new 100 km circular tunnel to first host an intensity-frontier 90 to 365 GeV e+e– collider (FCC-ee), and then an energy-frontier (> 100 TeV) hadron collider, which could potentially also allow electron–hadron collisions. Potentially following the High-Luminosity LHC in the late 2030s, FCC-ee would provide 5 × 10^12 Z decays – over five orders of magnitude more than the full LEP era – followed by 10^8 W pairs, 10^6 Higgs bosons (ZH events) and 10^6 top-quark pairs. In addition to providing the highest parton centre-of-mass energies foreseeable today (up to 40 TeV), FCC-hh would also produce more than 10^13 top quarks and W bosons, and 50 billion Higgs bosons per experiment.
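A quick arithmetic cross-check of the "five orders of magnitude" statement, assuming LEP recorded of order 1.7 × 10^7 Z decays (a figure recalled here, not taken from the text):

import math

fcc_ee_Z = 5e12    # Z decays foreseen at FCC-ee (figure quoted in the text)
lep_Z    = 1.7e7   # approximate Z decays recorded during the LEP era (assumption)
ratio = fcc_ee_Z / lep_Z
print(f"FCC-ee/LEP ~ {ratio:.1e}, i.e. about {math.log10(ratio):.1f} orders of magnitude")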
Rising to the challenge
Following the publication of the four-volume conceptual design report and submissions to the European strategy discussions, the third FCC Physics and Experiments Workshop was held at CERN from 13 to 17 January, gathering more than 250 participants for 115 presentations, and establishing a considerable programme of work for the coming years. Special emphasis was placed on the feasibility of theory calculations matching the experimental precision of FCC-ee. The theory community is rising to the challenge. To reach the required precision at the Z-pole, three-loop calculations of quantum electroweak corrections must include all the heavy Standard Model particles (W±, Z, H, t).
In parallel, a significant focus of the meeting was on detector designs for FCC-ee, with the aim of forming experimental proto-collaborations by 2025. The design of the interaction region allows for a beam vacuum tube of 1 cm radius in the experiments – a very promising condition for vertexing, lifetime measurements and the separation of bottom and charm quarks from light-quark and gluon jets. Elegant solutions have been found to bring the final-focus magnets close to the interaction point, using either standard quadrupoles or a novel magnet design using a superposition of off-axis (“canted”) solenoids. Delegates discussed solutions for vertexing, tracking and calorimetry during a Z-pole run at FCC-ee, where data acquisition and trigger electronics would be confronted with visible Z decays at 70 kHz, all of which would have to be recorded in full detail. A new subject was π/K/p identification at energies from 100 MeV to 40 GeV – a consequence of the strategy process, during which considerable interest was expressed in the flavour-physics programme at FCC-ee.
Physicists cannot refrain from investigating improvements
The January meeting showed that physicists cannot refrain from investigating improvements, in spite of the impressive statistics offered by the baseline design of FCC-ee. Increasing the number of interaction points from two to four is a promising way to nearly double the total delivered luminosity for little extra power consumption, but construction costs and compatibility with a possible subsequent hadron collider must be determined. A bolder idea discussed at the workshop aims to improve both luminosity (by a factor of 10) and energy reach (perhaps up to 600 GeV) by turning FCC-ee into a 100 km energy-recovery linac. The cost, and how well this would actually work, are yet to be established. Finally, a tantalising possibility is to produce the Higgs boson directly in the s-channel, e+e– → H, sitting exactly at a centre-of-mass energy equal to the Higgs-boson mass. This would allow unique access to the tiny coupling of the Higgs boson to the electron. As the Higgs width (4.2 MeV in the Standard Model) is more than 20 times smaller than the natural energy spread of the beam, this would require a beam manipulation called monochromatisation and a careful running procedure, which a task force was nominated to study.
The ability to precisely probe the self-coupling of the Higgs boson is the keystone of the FCC physics programme. As noted above, this self-interaction is the key to the electroweak phase transition, and could have important cosmological implications. Building on the solid foundation of precise and model-independent measurements of Higgs couplings at FCC-ee, FCC-hh would be able to access Hμμ, Hγγ, HZγ and Htt couplings at sub-percent precision. Further study of double Higgs production at FCC-hh shows that a measurement of the Higgs self-coupling could be made with a statistical precision of a couple of percent with the full statistics – which is to say that after the first few years of running the uncertainty will already be below 10%. This is much faster than previously realised, and definitely constituted the highlight of the workshop.
High-energy particle colliders have proved to be indispensable tools in the investigation of the nature of the fundamental forces. The LHC, at which the discovery of the Higgs boson was made in 2012, is a prime recent example. Several major projects have been proposed to push our understanding of the universe once the LHC reaches the end of its operations in the late 2030s. These have been the focus of discussions for the soon-to-conclude update of the European strategy for particle physics. An electron–positron Higgs factory that allows precision measurements of the Higgs boson’s couplings and the Higgs potential seems to have garnered consensus as the best machine for the near future. The question is: what type will it be?
Today, mature options for electron–positron colliders exist: the Future Circular Collider (FCC-ee) and the Compact Linear Collider (CLIC) proposals at CERN; the International Linear Collider (ILC) in Japan; and the Circular Electron–Positron Collider (CEPC) in China. FCC-ee offers very high luminosities at the required centre-of-mass energies. However, the maximum energy that can be reached is limited by the emission of synchrotron radiation in the collider ring, and corresponds to a centre-of-mass energy of 365 GeV for a 100 km-circumference machine. Linear colliders accelerate particles without the emission of synchrotron radiation, and hence can reach higher energies. The ILC would initially operate at 250 GeV, extendable to 1 TeV, while the highest energy proposal, CLIC, has been designed to reach 3 TeV. However, there are two principal challenges that must be overcome to go to higher energies with a linear machine: first, the beam has to be accelerated to full energy in a single passage through the main linac; and, second, it can only be used once in a single collision. At higher energies the linac has to be longer (around 50 km for a 1 TeV ILC and a 3 TeV CLIC) and is therefore more costly, while the single collision of the beam also limits the luminosity that can be achieved for a reasonable power consumption.
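The synchrotron-radiation ceiling can be illustrated with the standard energy-loss-per-turn formula for electrons, U0 ≈ 8.85 × 10^–5 E^4/ρ (E in GeV, ρ in m); the 10 km effective bending radius assumed below for a 100 km ring is an illustrative figure, not one quoted in the text.

# Energy radiated per turn by an electron beam on a circular orbit.
E_beam = 182.5                   # GeV per beam at the 365 GeV top-pair point
rho    = 10_000.0                # assumed effective bending radius in metres (illustrative)
U0 = 8.85e-5 * E_beam**4 / rho   # standard formula, result in GeV
print(f"energy loss per turn ~ {U0:.1f} GeV")
# ~10 GeV per turn, which the RF system must restore on every revolution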
Beating the lifetime
An ingenious solution to overcome these issues is to replace the electrons and positrons with muons and anti-muons. In a muon collider, fundamental particles that are not constituents of ordinary matter would collide for the first time. Being about 200 times heavier than the electron, the muon emits about two billion times less synchrotron radiation. Rings can therefore be used to accelerate muon beams efficiently and to bring them into collision repeatedly. Also, more than one experiment can be served simultaneously to increase the amount of data collected. Provided the technology can be mastered, it appears possible to reach a ratio of luminosity to beam power that increases with energy. The catch is that muons live on average for 2.2 μs, which reduces the number of muons by about an order of magnitude between production and their entry into the storage ring. One therefore has to be rather quick in producing, accelerating and colliding the muons; this rapid handling provides the main challenges of such a project.
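Two of the numbers in this paragraph follow directly from the particle masses (the 1.5 TeV beam energy below is simply an illustrative choice): synchrotron radiation scales as the inverse fourth power of the mass, and time dilation stretches the 2.2 μs rest-frame lifetime in the laboratory.

# Mass-ratio suppression of synchrotron radiation and the lab-frame muon lifetime.
m_mu, m_e = 105.66e-3, 0.511e-3   # muon and electron masses in GeV
tau_mu = 2.197e-6                 # muon lifetime at rest, in seconds
print(f"radiation suppressed by ~ {(m_mu / m_e)**4:.2e}")    # ~1.8e9, i.e. about two billion times
E_beam = 1500.0                   # GeV, illustrative multi-TeV beam energy (assumption)
gamma = E_beam / m_mu
print(f"lab-frame lifetime ~ {gamma * tau_mu * 1e3:.1f} ms")  # ~31 ms at 1.5 TeV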
The development of a muon collider is not as advanced as the other lepton-collider options that were submitted to the European strategy process. Therefore the unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility. Extensive studies submitted to the strategy update show that a muon collider in the multi-TeV energy range would be competitive both as a precision and as a discovery machine, and that a full effort by the community could demonstrate that a muon collider operating at a few TeV can be ready on a time scale of about 20 years. While the full physics capabilities at high energies remain to be quantified, and provided the beam energy and detector resolutions at a muon collider can be maintained at the parts-per-mille level, the number of Higgs bosons produced would allow the Higgs’ couplings to fermions and bosons to be measured with extraordinary precision. A muon collider operating at lower energies, such as those for the proposed FCC-ee (250 and 365 GeV) or stage-one CLIC (380 GeV) machines, has not been studied in detail, since the beam-induced background will be harsher and careful optimisation of machine parameters would be required to reach the needed luminosity. Moreover, a muon collider generating a centre-of-mass energy of 10 TeV or more and with a luminosity of the order of 10^35 cm^–2 s^–1 would allow a direct measurement of the trilinear and quadrilinear self-couplings of the Higgs boson, enabling a precise determination of the shape of the Higgs potential. While the precision on Higgs measurements achievable at muon colliders has not yet been evaluated thoroughly enough for a comparison with other future colliders, theorists have recently shown that a muon collider is competitive in measuring the trilinear Higgs coupling and that it could allow a determination of the quartic self-coupling that is significantly better than what is currently considered attainable at other future colliders. Owing to the muon’s greater mass, the coupling of the muon to the Higgs boson is enhanced by a factor of about 10^4 compared to the electron–Higgs coupling. To exploit this, previous studies have also investigated a muon collider operating at a centre-of-mass energy of 126 GeV (the Higgs pole) to measure the Higgs-boson line shape. The specifications for such a machine are demanding, as it requires knowledge of the beam-energy spread at the level of a few parts in 10^5.
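The two factors quoted at the end of this paragraph can be reproduced from the particle masses and the Higgs parameters given in the text (a simple arithmetic cross-check):

# Muon-versus-electron Higgs-coupling enhancement and the relative energy spread
# needed to resolve the Higgs line shape at the 126 GeV pole.
m_mu, m_e = 105.66, 0.511            # masses in MeV
Gamma_H, m_H = 4.2e-3, 125.0         # Higgs width and mass in GeV (values quoted in the text)
print(f"coupling enhancement ~ {(m_mu / m_e)**2:.2e}")   # ~4.3e4, the 'factor of about 10^4'
print(f"Gamma_H / m_H ~ {Gamma_H / m_H:.1e}")            # ~3e-5, i.e. a few parts in 10^5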
The idea of a muon collider was first introduced 50 years ago by Gersh Budker and then developed by Alexander Skrinsky and David Neuffer until the Muon Collider Collaboration became a formal entity in 1997, with more than 100 physicists from 20 institutions in the US and a few more from Russia, Japan and Europe. Brookhaven’s Bob Palmer was a key figure in driving the concept forward, leading the outline of a “complete scheme” for a muon collider in 2007. Exploratory work towards a muon collider and neutrino factory was also carried out at CERN around the turn of the millennium. It was only when the Muon Accelerator Program (MAP), directed by Mark Palmer of Brookhaven, was formally approved in 2011 in the US that a systematic effort started to develop and demonstrate the concepts and critical technologies required to produce, capture, condition, accelerate and store intense beams of muons for a muon collider on the Fermilab site. Although MAP was wound down in 2014, it generated a reservoir of expertise and enthusiasm that the current international effort on physics, machine and detector studies cannot do without.
So far, two concepts have been proposed for a muon collider (figure 1). The first design, developed by MAP, is to shoot a proton beam into a target to produce pions, many of which decay into muons. This cloud of muons (with positive and negative charge) is captured, and an ionisation cooling system of a type first imagined by Budker rapidly cools the muons from the showers to obtain a dense beam. The muons are cooled in a chain of low-Z absorbers in which they lose energy by ionising the matter, reducing their phase-space volume; the lost energy would then be replaced by acceleration. This is so far the only concept that can achieve cooling within the timeframe of the muon lifetime. The beams would be accelerated in a sequence of linacs and rings, and injected at full energy into the collider ring. A fully integrated conceptual design for the MAP concept remains to be developed.
The unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility
The alternative approach to a muon collider, proposed in 2013 by Mario Antonelli of INFN-LNF and Pantaleo Raimondi of the ESRF, avoids a specific cooling apparatus. Instead, the Low Emittance Muon Accelerator (LEMMA) scheme would send 45 GeV positrons into a target where they collide with electrons to produce muon pairs with a very small phase space (the energies of the electron and positron in the centre-of-mass frame are small, so little transverse momentum can be generated). The challenge with LEMMA is that the probability for a positron to produce a muon pair is exceedingly low, requiring an unprecedented positron-beam current and inducing high stress in the target system. The muon beams produced would be circulated about 1000 times, limited by the muon lifetime, in a ring that collects muons produced from as many positron bunches as possible before they are accelerated and collided in a fashion similar to the proton-driven scheme of MAP. The low emittance of the LEMMA beams potentially allows the use of lower muon currents, easing the challenges of operating a muon collider due to the remnants of the decaying muons. The initial LEMMA scheme offered limited performance in terms of luminosity, and further studies are required to optimise all parameters of the source before capture and fast acceleration. With novel ideas and a dedicated expert team, LEMMA could potentially be shown to be competitive with the MAP scheme.
Concerning the ambitious muon ionisation-cooling complex (figure 2), which is the key challenge of MAP’s proton-driven muon-collider scheme, the Muon Ionization Cooling Experiment (MICE) collaboration recently published results demonstrating the feasibility of the technique (CERN Courier March/April 2020 p7). Since muons produced from proton interactions in a target emerge in a rather undisciplined state, MICE set out to show that their transverse phase-space could be cooled by passing the beam through an energy-absorbing material and accelerating structures embedded within a focusing magnetic lattice – all before the muons have time to decay. For the scheme to work, the cooling (squeezing the beam in transverse phase space) due to ionisation energy loss must exceed the heating due to multiple Coulomb scattering within the absorber. Materials with low multiple scattering and a long radiation length, such as liquid hydrogen and lithium hydride, are therefore ideal.
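The competition between cooling and heating described here is usually written as the transverse ionisation-cooling equation, quoted below in its standard textbook form (the symbols are defined after the equation; none of this notation appears in the article itself):

\[
\frac{d\varepsilon_N}{ds}\;\simeq\;-\,\frac{1}{\beta^{2}}\left|\frac{dE_\mu}{ds}\right|\frac{\varepsilon_N}{E_\mu}
\;+\;\frac{1}{\beta^{3}}\,\frac{\beta_\perp\,(13.6\ \mathrm{MeV})^{2}}{2\,E_\mu\,m_\mu c^{2}\,X_0},
\]

where ε_N is the normalised transverse emittance, β the muon velocity, |dE_μ/ds| the ionisation energy loss, β_⊥ the betatron function at the absorber and X_0 the radiation length of the absorber material. Cooling wins when the first (energy-loss) term outweighs the second (multiple-scattering) term, which is why low-Z materials with a long radiation length are preferred.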
MICE, which was based at the ISIS neutron and muon source at the Rutherford Appleton Laboratory in the UK, was approved in 2005. Using data collected in 2018, the MICE collaboration was able to determine the distance of a muon from the centre of the beam in 4D phase space (its so-called amplitude or “single-particle emittance”) both before and after it passed through the absorber, from which it was possible to estimate the degree of cooling that had occurred. The results (figure 3) demonstrated that ionisation cooling occurs with a liquid-hydrogen or lithium-hydride absorber in place. Data from the experiment were found to be well described by a Geant4-based simulation, validating the designs of ionisation cooling channels for an eventual muon collider. The next important step towards a muon collider would be to design and build a cooling module combining the cavities with the magnets and absorbers, and to achieve full “6D” cooling. This effort could profit from tests at Fermilab of accelerating cavities that can operate in a very high magnetic field, and also from the normal-conducting cavity R&D undertaken for the CLIC study, which pushed accelerating gradients to the limit.
Collider ring
The collider ring itself is another challenging aspect of a muon collider. Since the charge of the injected beams decreases over time due to the random decays of muons, superconducting magnets with the highest possible field are needed to minimise the ring circumference and thus maximise the average number of collisions. A larger muon energy makes it harder to bend the beam and thus requires a larger ring circumference. Fortunately, the lifetime of the muon also increases with its energy, which fully compensates for this effect. Dipole magnets with a field of 10.5 T would allow the muons to survive about 2000 turns. Such magnets, which are about 20% more powerful than those in the LHC, could be built from niobium-tin (Nb3Sn) as used in the new magnets for the HL-LHC (see Taming the superconductors of tomorrow).
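The ~2000-turn figure can be reproduced with a one-line estimate: because both the lab-frame lifetime and the ring circumference grow linearly with the muon energy, the energy cancels and the number of turns depends only on the dipole field and the fraction of the ring occupied by dipoles (the 65% fill factor below is an illustrative assumption).

import math

# Turns survived before decay: N ~ (tau_mu * e * B / (2*pi*m_mu)) * dipole fill factor.
tau_mu = 2.197e-6      # s, muon lifetime at rest
m_mu   = 1.8835e-28    # kg, muon mass
e      = 1.602e-19     # C, elementary charge
B      = 10.5          # T, dipole field quoted in the text
f_dip  = 0.65          # assumed fraction of the circumference filled with dipoles (illustrative)
N = tau_mu * e * B * f_dip / (2 * math.pi * m_mu)
print(f"turns before decay ~ {N:.0f}")   # ~2000, consistent with the figure in the text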
The electrons and positrons produced when muons decay pose an additional challenge for the magnet design. The decay products will hit the magnets and can lead to a quench (whereby the magnet suddenly loses its superconductivity, rapidly releasing an immense amount of stored energy). It is therefore important to protect the magnets. The solutions considered include the use of large-aperture magnets in which shielding material can be placed, or designs where the magnets have no superconductor in the plane of the beam. Future magnets based on high-temperature superconductors could also help to improve the robustness of the bends against this problem since they can tolerate a higher heat load.
Other systems necessary for a muon collider are only seemingly more conventional. The ring that accelerates the beam to the collision energy is a prime example. It has to ramp the beam energy in a period of milliseconds or less, which means the beam has to circulate at very different energies through the same magnets. Several solutions are being explored. One, featuring a so-called fixed-field alternating-gradient ring, uses a complicated system of magnets that enables particles at a wider than normal range of energies to fly on different orbits that are close enough to fit into the same magnet apertures. Another possibility is to use a fast-ramping synchrotron: when the beam is injected at low energy it is kept on its orbit by operating the bending magnets at low field. The beam is then accelerated and the strength of the bends is increased accordingly until the beam can be extracted into the collider. It is very challenging to ramp superconducting magnets at the required speed, however. Normal-conducting magnets can do better, but their magnetic field is limited. As a consequence, the accelerator ring has to be larger than the collider ring, which can use superconducting magnets at full strength without the need to ramp them. Systems that combine static superconducting and fast-ramping normal-conducting bends have been explored by the MAP collaboration. In these designs, the energy in the fields of the fast-ramping bends will be very high, so it is important that the energy is recuperated for use in a subsequent accelerating cycle. This requires a very efficient energy-recovery system which extracts the energy after each cycle and reuses it for the next one. Such a system, called POPS (“power for PS”), is used to power the magnets of CERN’s Proton Synchrotron. The muon collider, however, requires more stored energy and much higher power flow, which calls for novel solutions.
High occupancy
Muon decays also induce the presence of a large amount of background in the detectors at a muon collider – a factor that must be studied in detail since it strongly depends on the beam energy at the collision point and on the design of the interaction region. The background particles reaching the detector are mainly produced by the interactions between the decay products of the muon beams and the machine elements. Their type, flux and characteristics therefore strongly depend on the machine lattice and the configuration of the interaction point, which in turn depends on the collision energy. The background particles (mainly photons, electrons and neutrons) may be produced tens of metres upstream of the interaction point. To mitigate the effects of the beam-induced background inside the detector, tungsten shielding cones, called nozzles, are proposed in this configuration and their opening angle has to be optimised for a specific beam energy, which affects the detector acceptance (see figure 4). Despite these mitigations, a large particle flux reaches the detector, causing a very high occupancy in the first layers of the tracking system, which impacts the detector performance. Since the arrival time in each sub-detector is asynchronous with respect to the beam crossing, due to the different paths taken by the beam-induced background and the muons, new-generation 4D silicon sensors that allow exploitation of the time distribution will be needed to remove a significant fraction of the background hits.
Energy expansion
It was recently demonstrated, by a team supported by INFN and Padova University in collaboration with MAP researchers, that state-of-the-art detector technology for tracking and jet reconstruction would make one of the most critical measurements at a muon collider – the vector-boson-fusion channel μ+μ– → (W*W*)νν̄ → Hνν̄, with H → bb̄ – feasible in this harsh environment, with a high level of precision, competitive with other proposed machines (figure 5). A muon collider could in principle expand its energy reach to several TeV with good luminosity, allowing unprecedented exploration in direct searches and high-precision tests of Standard Model phenomena, in particular the Higgs self-couplings.
The technology for a muon collider also underpins a so-called neutrino factory, in which beams of equal numbers of electron and muon neutrinos are produced from the decay of muons circulating in a storage ring – in stark contrast to the neutrino beams used at T2K and NOvA, and envisaged for DUNE and Hyper-K, which use neutrinos from the decays of pions and kaons from proton collisions on a fixed target. In such a facility it is straightforward to tune the neutrino-beam energy because the neutrinos carry away a substantial fraction of the muon’s energy. This, combined with the excellent knowledge of the beam composition and energy spectrum that arises from the precise knowledge of muon-decay characteristics, makes a neutrino factory an attractive place to measure neutrino oscillations with great precision and to look for oscillation phenomena that are outside the standard three-neutrino-mixing paradigm. One proposal – nuSTORM, an entry-level facility proposed for the precise measurement of neutrino scattering and the search for sterile neutrinos – can provide the ideal test-bed for the technologies required to deliver a muon collider.
Muon-based facilities have the potential to provide lepton–antilepton collisions at centre-of-mass energies in excess of 3 TeV and to revolutionise the production of neutrino beams. Where could such a facility be built? A 14 TeV muon collider in the 27 km-circumference LHC tunnel has recently been discussed, while another option is to use the LHC tunnel to accelerate the muons and construct a new, smaller tunnel for the actual collider. Such a facility is estimated to provide a physics reach comparable to a 100 TeV circular hadron collider, such as the proposed Future Circular Collider, FCC-hh. A LEMMA-like positron driver scheme with a potentially lower neutrino radiation could possibly extend this energy range still further. Fermilab, too, has long been considered a potential site for a muon collider, and it has been demonstrated that the footprint of a muon facility is small enough to fit in the existing Fermilab or CERN sites. However, the realistic performance and feasibility of such a machine would have to be confirmed by a detailed feasibility study identifying the required R&D to address its specific issues, especially the compatibility of existing facilities with muon decays. Minimising off-site neutrino radiation is one of the main challenges to the design and civil-engineering aspects of a high-energy muon collider because, while the interaction probability is tiny, the total flux of neutrinos is sufficiently high in a very small area in the collider plane to produce localised radiation that can reach a fraction of natural-radiation levels. Beam wobbling, whereby the lattice is modified periodically so that the neutrino flux pointing to Earth’s surface is spread out, is one of the promising solutions to alleviate the problem, although it requires further studies.
It was only when the Muon Accelerator Program was formally approved in 2011 in the US that a systematic effort started
A muon collider would be a unique lepton-collider facility at the high-energy frontier. Today, muon-collider concepts are not as mature as those for FCC-ee, CLIC, ILC or CEPC. It is now important that a programme is established to prove the feasibility of the muon collider, address the key remaining technical challenges, and provide a conceptual design that is affordable and has an acceptable power consumption. The promise of the very high-energy lepton frontier suggests that this opportunity should not be missed.