Accelerators meet gravitational waves

Gravitational waves (GWs) crease and stretch the fabric of spacetime as they ripple out across the universe. As they pass through regions where beams circulate in storage rings, they should therefore cause charged-particle orbits to appear to contract and expand as the particles climb new peaks and plumb new troughs, with potentially observable effects.

SRGW2021

Proposals in this direction have appeared intermittently over the past 50 years, including during and after the construction of LEP and the LHC. Now that the existence of GWs has been established by the LIGO and VIRGO detectors, and as new, even larger storage rings are being proposed in Europe and China, this question has renewed relevance. We are on the cusp of the era of GW astronomy — a young and dynamic domain of research with much to discover, in which particle accelerators could conceivably play a major role.

From 2 February to 31 March this year, a topical virtual workshop titled “Storage Rings and Gravitational Waves” (SRGW2021) shone light on this tantalising possibility. Organised within the European Union’s Horizon 2020 ARIES project, the meeting brought together more than 100 accelerator experts, particle physicists and members of the gravitational-physics community to explore several intriguing proposals.

Theoretically subtle

GWs are extremely feebly interacting. The cooling and expanding universe should have become “transparent” to them early in its history, long before the timescales probed through other known phenomena. Detecting cosmological backgrounds of GWs would, therefore, provide us with a picture of the universe at earlier times than we can currently access, prior to photon decoupling and Big-Bang nucleosynthesis. It could also shed light on high-energy phenomena, such as high-temperature phase transitions, inflation and new heavy particles that cannot be directly produced in the laboratory.

Gravitational wave sources and sensitivities

In the opening session of the workshop, Jorge Cervantes (ININ Mexico) presented a vivid account of the history of GWs, revealing how subtle they are theoretically. It took about 40 years and a number of conflicting papers to definitively establish their existence. Bangalore S. Sathyaprakash (Penn State and Cardiff) reviewed the main expected sources of GWs: the coalescence of binaries of compact objects such as black holes, neutron stars and white dwarfs; supernovae and other transient phenomena; spinning neutron stars; and stochastic backgrounds with either astrophysical or cosmological origins. The GW frequency range of interest extends from 0.1 nHz to 1 MHz (see figure “Sources and sensitivities”).

The frequency range of interest extends from 0.1 nHz to 1 MHz

Raffaele Flaminio (LAPP Annecy) reviewed the mindboggling precision of VIRGO and LIGO, which can measure motion 10,000 times smaller than the width of an atomic nucleus. Jörg Wenninger (CERN) reported the similarly impressive sensitivity of LEP and the LHC to small effects, such as tides and earthquakes on the other side of the planet. Famously, LEP’s beam-energy resolution was so precise that it detected a diurnal distortion of the 27 km ring at an amplitude of a single millimetre, and the LHC beam-position-monitor system can achieve measurement resolutions on the average circumference approaching the micrometre scale over time intervals of one hour. While impressive for machines designed with completely different goals in mind, this is still far from the precision achieved by LIGO and VIRGO. However, one can strongly enhance the sensitivity to GWs by exploiting resonant effects and the long distances travelled by the particles over their storage times. In one hour, protons at the LHC travel through the ring about 40 million times. In principle, the precision of modern accelerator optics could allow storage rings and accelerator technologies to cover a portion of the enormous GW frequency range of interest.
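
A quick plausibility check of that turn count (a back-of-envelope sketch, not a calculation from the workshop, assuming the nominal 26.7 km LHC circumference):

```python
# Back-of-envelope check: how many turns do LHC protons make in one hour?
# Assumes the nominal ring circumference; protons are ultra-relativistic.
circumference_m = 26_659           # approximate LHC circumference in metres
c = 299_792_458                    # speed of light in m/s

f_rev = c / circumference_m        # revolution frequency, ~11.2 kHz
turns_per_hour = f_rev * 3600      # ~4.0e7, i.e. about 40 million turns

print(f"revolution frequency ~ {f_rev/1e3:.1f} kHz")
print(f"turns per hour       ~ {turns_per_hour:.2e}")
```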

Resonant responses

Since the invention of the synchrotron, storage rings have been afflicted by difficult-to-control resonance effects which degrade beam quality. When a new ring is commissioned, accelerator physicists work diligently to “tune” the machine’s parameters to avoid such effects. But could they instead turn the tables, enhancing these effects to observe resonances caused by the passage of GWs?

In accelerators and storage rings, charged particles are steered and focused in the two directions transverse to their motion by dipole, quadrupole and higher-order magnets — the “betatron motion” of the beam. The beam is also kept bunched in the longitudinal plane as a result of an energy-dependent path length and oscillating electric fields in radio-frequency (RF) cavities — the “synchrotron motion” of the beam. A gravitational wave can resonantly interact with either the transverse betatron motion of a stored beam, at a frequency of several kHz, or with the longitudinal synchrotron motion at a frequency of tens of hertz.
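
Where these frequencies come from can be sketched from the machine “tunes” (the number of transverse or longitudinal oscillations per turn). The tune values in the snippet below are typical LHC numbers assumed for illustration, not figures quoted at the workshop:

```python
# Rough illustration of the kHz betatron and tens-of-hertz synchrotron scales.
# The tune values are representative LHC numbers assumed for illustration.
f_rev = 11_245                      # LHC revolution frequency in Hz (~c / 26.7 km)

q_betatron_frac = 0.31              # fractional part of the betatron tune (assumed)
Q_synchrotron = 0.002               # synchrotron tune (assumed order of magnitude)

f_betatron = q_betatron_frac * f_rev    # lowest betatron sideband, ~3.5 kHz
f_synchrotron = Q_synchrotron * f_rev   # ~22 Hz, i.e. "tens of hertz"

print(f"betatron sideband     ~ {f_betatron/1e3:.1f} kHz")
print(f"synchrotron frequency ~ {f_synchrotron:.0f} Hz")
```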

Antenna optics

Katsunobu Oide (KEK and CERN) discussed the transverse betatron resonances that a gravitational wave can excite for a beam circulating in a storage ring. Typical betatron frequencies for the LHC are a few kHz, offering potential sensitivity to GWs with frequencies of a similar order of magnitude. Starting from a standard 30 km ring, Oide proposed special beam-optical insertions with a large beta function, which would serve as “GW antennas” to enhance the resonance strength, resulting in 37.5 km-long optics (see figure “Antenna optics”). Among several parameters, the sensitivity to GWs should depend on the size of the ring. Oide derived a special resonance condition, k_GW R ± 2 = Q_x, with R the ring radius, k_GW the GW wavenumber and Q_x the horizontal betatron tune.

Suvrat Rao (Hamburg University) presented an analysis of the longitudinal beam response of the LHC. An impinging GW affects the revolution period, in a similar way to the static gravitational-gradient effect due to the presence of Mont Blanc (which alters the revolution time at the level of 10⁻¹⁶ s) and the diurnal effect of the changing locations of the Sun and Moon (10⁻¹⁸ s) — the latter effect being about six orders of magnitude smaller than the tidal effect on the ring circumference.

The longitudinal beam response to a GW should be enhanced for perturbations close to the synchrotron frequency, which, for the LHC, would be in the range 10 to 60 Hz. Raffaele D’Agnolo (IPhT) estimated the sensitivity to the gravitational strain, h, at the synchrotron frequency, without any backgrounds, as h ~ 10⁻¹³, and listed three possible paths to further improve the sensitivity by several orders of magnitude. Rao also highlighted that storage-ring GW detection could potentially allow an earth-based GW observatory sensitive to millihertz GWs, which would complement space-based laser interferometers such as LISA, planned for launch in 2034. This would improve the sky localisation of GW sources, which is useful for electromagnetic follow-up studies with astronomical telescopes.

Out of the ordinary

More exotic accelerators were also mooted. A “coasting-beam” experiment might have zero restoring voltage and no synchrotron oscillations. Cold “crystalline” beams of stable, ordered 1D, 2D or 3D structures of ions could open up a whole new frequency spectrum, as their phonon spectrum, which could be excited by a GW, extends up to the MHz range. Witek Krasny (LPNHE) suggested storing beams of unstable particles in the LHC, whose decay times and transition rates could be modified by an incident GW. The stored particles could, for example, include the excited partially stripped heavy ions that are the basis of a “gamma factory”.

Finally on the storage-ring front, Andrey Ivanov (TU Vienna) and co-workers discussed how the circumference of a storage ring, such as the 1.4 km light source SPring-8 in Japan, might shrink under the influence of the relic GW background.

The Gertsenshtein effect

Delegates at SRGW2021 also proposed completely different ways of using accelerator technology to detect GWs. Sebastian Ellis (IPhT) explained how a superconducting radio-frequency (SRF) cavity might act as a resonant bar or serve as a Gertsenshtein converter, in both cases converting a graviton into a photon in the presence of a strong background magnetic field and yielding a direct electromagnetic signal — similar to axion searches. Related attempts at GW detection using cavities were pioneered in the 1970s by teams in the Soviet Union and Italy, but RF technology has made big strides in quality factors, cooling and insulation since then, and a new series of experiments appears to be well justified.

Another promising approach for GW detection is atomic-beam interferometry. Instead of light interference, as in LIGO and VIRGO, an incident GW would cause interference between carefully prepared beams of cold atoms. This approach is being pursued by the recently approved AION experiment using ultra-cold-strontium atomic clocks over increasingly large path lengths, including the possible use of an LHC access shaft to house a 100-metre device targeting the 0.01 to 1 Hz range. Meanwhile, a space-based version, AEDGE, could be realised with a pair of satellites in medium earth orbit separated by 4.4×10⁷ m.

Storage rings as sources

Extraordinarily, storage rings could act not only as GW detectors, but also as observable sources of GWs. Pisin Chen (NTU Taiwan) discussed how relativistic charged particles executing circular orbital motion can emit gravitational waves in two channels: “gravitational synchrotron radiation” (GSR) emitted directly by the massive particle, and  “resonant conversion” in which, via the Gertsenshtein effect, electromagnetic synchrotron radiation (EMSR) is converted into GWs.

Gravitons could be emitted via “gravitational beamstrahlung”

John Jowett (GSI, retired from CERN) and Fritz Caspers (also retired from CERN) recalled that GSR from beams at the SPS and other colliders had been discussed at CERN as early as the 1980s. It was realised that these beams would be among the most powerful terrestrial sources of gravitational radiation, although the total radiated power would still be many orders of magnitude lower than from regular synchrotron radiation. The dominant frequency of direct GSR is the revolution frequency, 10 kHz, while the dominant frequency of resonant EMSR-GSR conversion is a factor γ³ higher, around 10 THz at the LHC, conceivably allowing the observation of gravitons. If all particles and bunches of a beam were to excite the GW coherently, the space-time metric perturbation could be as large as h_GSR ~ 10⁻¹⁸. Gravitons could also be emitted via “gravitational beamstrahlung” during the collision with an opposing beam, perhaps producing the most prominent GW signal at future proposed lepton colliders. At the LHC, argued Caspers, such signals could be detected by a torsion-balance experiment with a very sensitive, resonant mechanical pickup installed close to the beam in one of the arcs. In a phase-lock mode of operation, an effective resolution bandwidth of millihertz or below could be possible, opening the exciting prospect of detecting synthetic sources of GWs.

Towards an accelerator roadmap

The concluding workshop discussion, moderated by John Ellis (King’s College London), focused on the GW-detection proposals considered closest to implementation: resonant betatron oscillations near 10 kHz; changes in the revolution period using “low-energy” coasting ion beams without a longitudinally focusing RF system; “heterodyne” detection using SRF cavities up to 10 MHz; beam-generated GWs at the LHC; and atomic interferometry. These potential components of a future R&D plan cover significant regions of the enormous GW frequency space.

Apart from an informal meeting at CERN in the 1990s, SRGW2021 was the first workshop to link accelerators and GWs and bring together the implicated scientific communities. Lively discussions in this emerging field attest to the promise of employing accelerators in a completely different way to either detect or generate GWs. The subtleties of the particle dynamics when embedded in an oscillating fabric of space and time, and the inherent sensitivity problems in detecting GWs, pose exceptional challenges. The great interest prompted by SRGW2021, and the tantalising preliminary findings from this workshop, call for more thorough investigations into harnessing future storage rings and accelerator technologies for GW physics.

Intercepting the beams

The SPS internal beam dump

Imagine standing in the LHC tunnel when the machine is operating. Proton beams are circulating around the 27 km ring more than 11,000 times per second, colliding at four points to generate showers of particles that are recorded by ATLAS, CMS, ALICE, LHCb and other detectors. After a few hours of operation, the colliding beams need to be disposed of to allow a new physics fill. Operators in the CERN control centre instruct beam-transfer equipment to shunt the circulating beams into external trajectories that transport them away from the cryogenic superconducting magnets. Each beam exits the ring and travels for 600 metres in a straight line before reaching a compact cavern housing a large steel cylinder roughly 9 m long, 70 cm in diameter and containing about 4.4 tonnes of graphitic material. Huge forces are generated in the impact. If you could witness the event up close, you would hear a massive “bang” – like a bell – generated by the sudden expansion and subsequent contraction of the steel shell.

What you would have witnessed is a beam-intercepting system in action. Of course, experiencing a beam dump in person is not possible, due to the large amount of radiation generated in the impact, which is one of the reasons why access to high-energy accelerators is strictly forbidden during operation.

Beam-intercepting systems are essential devices designed to absorb the energy and power of a particle beam. Generally, they are classified in three categories depending on their use: particle-producing devices, such as targets; systems for beam cleaning and control, such as collimators or scrapers; and those with safety functions, such as beam dumps or beam stoppers. During the current long-shutdown 2 (LS2), several major projects have been undertaken to upgrade some of the hundreds of beam-intercepting systems across CERN’s accelerator complex, in particular to prepare the laboratory for the high-luminosity LHC era.

Withstanding stress

Beam-intercepting devices have to withstand enormous mechanical and thermally induced stresses. In the case of the LHC beam dump, for example, the upgrades of the LHC injectors will deliver beams whose stored kinetic energy at top energy will reach 560 MJ during LHC Run 3, roughly corresponding to the energy required to melt 2.7 tonnes of copper. Released in a period of just 86 μs, this corresponds to a peak power of 6.3 TW or, put differently, 8.6 billion horsepower.
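
The headline figures follow from simple arithmetic, as the rough check below illustrates (the copper latent heat is an assumed textbook value, and only the heat of fusion is counted; small differences from the quoted 6.3 TW and 8.6 billion horsepower reflect rounding of the inputs):

```python
# Rough check of the beam-dump figures quoted above.
E_beam = 560e6          # stored beam energy in J (560 MJ)
t_dump = 86e-6          # dump duration in s (86 microseconds)

peak_power = E_beam / t_dump          # ~6.5e12 W, i.e. a few TW
horsepower = peak_power / 745.7       # ~8.7 billion hp

# Copper mass whose latent heat of fusion matches 560 MJ
# (latent heat of fusion of copper ~2.07e5 J/kg, assumed textbook value).
L_fusion_cu = 2.07e5
melted_mass = E_beam / L_fusion_cu    # ~2.7e3 kg, i.e. ~2.7 tonnes

print(f"peak power  ~ {peak_power/1e12:.1f} TW")
print(f"            ~ {horsepower/1e9:.1f} billion horsepower")
print(f"copper mass ~ {melted_mass/1e3:.1f} tonnes")
```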

The upgraded LHC beam dump

In general, the energy deposited in beam-intercepting devices depends directly on the beam energy, its intensity and the beam-spot size, as well as on the density of the absorbing material. From the point of view of the materials, this energy is transformed into heat. In a beam dump, for example, the collision volume (which is usually much smaller than the beam-intercepting device itself) is heated to temperatures of 1500 °C or more. This heat causes the small volume to try to expand but, because the surrounding area has a much lower temperature, there is no room for expansion. Instead, the hot volume pushes against the colder surrounding area, risking breaking the structure. Because of the high energy of the beams in CERN’s accelerators, reaching sufficient attenuation requires devices that in some cases are several metres long.
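
The scale of that stress can be estimated with the standard relation for fully constrained thermal expansion, σ ≈ EαΔT. The sketch below uses assumed, representative properties for isostatic graphite rather than values from the article:

```python
# Order-of-magnitude thermal stress in a constrained, rapidly heated volume:
#   sigma ~ E * alpha * dT   (fully constrained expansion, linear elasticity)
# Material values are assumed, representative numbers for isostatic graphite.
E_modulus = 11e9       # Young's modulus in Pa (~11 GPa, assumed)
alpha = 4.5e-6         # coefficient of thermal expansion in 1/K (assumed)
dT = 1500.0            # temperature rise in K, as in the text

sigma = E_modulus * alpha * dT                  # ~7e7 Pa
print(f"thermal stress ~ {sigma/1e6:.0f} MPa")  # tens of MPa, comparable to graphite's strength
```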

Beam-intercepting devices must be able to withstand routine operation and also accident scenarios, where they serve to protect more delicate equipment such as cryomagnets. Amongst the many challenges that need to be faced are operation under ultra-high-vacuum conditions, and maintaining integrity and functionality when enduring energy densities up to several kJ/cm³ or power densities up to several MW/cm³. For physics applications, optimisation processes have led to the use of low-strength materials, such as pure lead for the generation of neutrons at the n_TOF facility or iridium and tantalum for the generation of antiprotons at the Antiproton Decelerator (AD) facility.

Preparing for HL-LHC 

The LHC Injectors Upgrade (LIU) Project, which was launched in 2010 and for which the hardware was installed during LS2, will allow beams with a higher intensity and a smaller spot size to be injected into the LHC. This is a precondition for the full execution of the High-Luminosity LHC (HL-LHC), which will enable a large increase in the integrated luminosity collected by the experiments. To safely protect sensitive equipment in the accelerator chain, the project required a series of new devices in the injector complex from the PS Booster to the SPS, including new beam-intercepting devices. One example is the new SPS internal beam dump, the so-called TIDVG (Target Internal Dump Vertical Graphite), which was installed in straight-section five of the SPS during 2020 (see “Structural integrity” image). The main challenge for this device was the need to dissipate a large amount of power rapidly and efficiently, to avoid reaching temperatures unacceptable for the beam-dump materials.

Dispersion-suppressor collimators being installed and checked

The TIDVG is used to dispose of the SPS circulating beam whenever necessary, for example in case of emergency during LHC beam-setup, filling or machine-development periods, and to dispose of the part of the beam dedicated to fixed-target experiments that remains after the slow-extraction process. Aiming at reducing the energy density deposited in the dump core’s absorbing material (and hence minimising the associated thermo-mechanical stresses), the beam is diluted by kicker magnets, producing a sinusoidal pattern on the front of the first absorbing block. The dump is designed to absorb all beam energies in the SPS, from 14 GeV (injection from the PS) to 450 GeV. 

The LHC Injectors Upgrade Project will allow beams with a higher intensity and a smaller spot size to be injected into the LHC

With respect to the pre-LS2 device, the beam power to be absorbed by the dump will be four times higher, with an average power of 300 kW. To reduce the local energy deposition whilst maintaining the total required beam absorption, the length of the new dump has been increased by 70 cm, leading to a 5 m-long dump. The dump blocks are arranged so that the density of the absorbing materials increases as the beam passes through the device: 4.4 m of isostatic graphite, 20 cm of a molybdenum alloy and 40 cm of pure tungsten. This ensures that the stresses associated with the resulting thermal gradients are kept within acceptable values. The core of the component, which receives the highest thermal load, is cooled directly by a dedicated copper-alloy jacket surrounding the blocks, which can only release their heat through the contact with the jacket; to maximise the thermal conductivity at the interfaces between the stainless-steel cooling pipes and the copper alloy, these materials are diffusion-bonded by means of hot isostatic pressing. The entire core is embedded in an air-cooled, seamless 15 mm-thick stainless-steel hollow cylinder. Due to the high activation of the dump expected after operation, in addition to the first cast-iron shielding, the assembly is surrounded by a massive, multi-layered external shield comprising an inner layer of 50 cm of concrete, followed by 1 m of cast iron and an external layer of 40 cm of marble. Marble is used on the three sides accessible by personnel to minimise the residual dose rate in the vicinity after short cool-down times.

Collimator system upgrades

Beam collimators and masks are essential components in accelerator systems. They act as intermediate absorbers and dilutors of the beam in case of beam losses, minimising the thermal energy received by components such as superconducting magnets (which could otherwise quench) or delicate materials in the LHC experiments. The other function of the collimators is to clean up the beam halo by removing particles that stray from the correct orbit. Collimators generally consist of two jaws – moveable blocks of robust materials – that close around the beam to clean it of stray particles. More than 100 of these vital devices are placed around the LHC in critical locations.

Upgraded LHC external dumps

The jaw materials can withstand the extreme temperatures and stresses resulting from deposited energy densities of up to 6 kJ/cm³, while maintaining – at least for the LHC collimators – good electrical conductivity to reduce the impedance contribution to the machine. Several developments were incorporated in the SPS-to-LHC transfer-line collimators built in the framework of the LIU project, as well as in the LHC collimators for the HL-LHC. For the former, dedicated and extremely robust 3D carbon-composite materials were developed at CERN in collaboration with European industry, while for the latter, dedicated molybdenum carbide–graphite composites were developed, again in collaboration with European firms. In total, more than 30 new collimators have been built and installed in the SPS and LHC during LS2 (see “New collimators” image).

LHC beam-dump upgrades

Several challenges associated with the LHC beam-dump system had to be overcome, especially on the dump block itself: it needs to be ready at any time to accept protons, from injection at 450 GeV up to top energy (6.5 TeV, with up to 7 TeV in the future); it must be reliable (~200 dump events per year); and it must accept fast-extracted beams, given that the entire LHC ring is emptied in just 86 μs. At 560 MJ, the projected stored beam energy during Run 3 will also be 75% higher than it was during Run 2.

Welding of the upstream cover and proton window

The dump core (around 8 m long) consists of a sandwich of graphitic materials of sufficiently low density to limit the temperature rise – and therefore the resulting thermally induced stresses – in the material (see “End of the line” image). The graphite is contained in a 12 mm-thick special stainless-steel grade (see “Dump upgrades” image) and the assembly is surrounded by shielding blocks. Roughly 75% (~430 MJ) of the energy deposited by the electromagnetic showers and the ionisation losses of hadrons and muons ends up in the graphite, while around 5% (~25 MJ) is deposited in the thin steel vessel, and the remaining energy is deposited in the shielding assembly. Despite the very low density (1.1 g/cm³) employed in the middle section of the core, temperatures up to 1000 °C have been reached during Run 2. From Run 3, temperatures up to 1500 °C will be reached. These temperatures could be much higher if it were not for the fact that the beam is “painted” on the face of the dump by means of dilution kickers situated hundreds of metres upstream. The dump must also guarantee its structural integrity even in the case of failures of these dilution systems.

Although the steel vessel is responsible for absorbing just 5% of the deposited energy, the short timescales involved lead to a semi-instantaneous rise in temperature of more than 150 °C, generating accelerations up to 2000 g and forces of several hundred tonnes. Following the operational experience during LHC Run 1 and Run 2, several upgrades have been implemented on the dump during LS2. These include complex instrumentation to yield information and operational feedback during Run 3, until 2025. In the later HL-LHC era, the dump will have to absorb 50% more energy per dump than during Run 3 (up to 750 MJ per dump), presenting one of numerous beam-interception challenges to be faced.

Fixed-target challenges 

Beyond the LHC, challenging conditions are also encountered for antiproton production at CERN’s Antiproton Decelerator (AD), which serves several antimatter experiments. In this case, high-density materials are required to make sources as point-like as possible to improve the capture capabilities of the downstream magnetic-horn focusing system. Energy densities up to 7 kJ/cm³ and temperatures up to 2500 °C are reached in refractory materials such as iridium, tantalum and tungsten. Such intense energy densities, and the large gradients resulting from the very small transverse beam size, generate large thermal stresses and produce damage in the target material, which must be minimised to maintain the reliability of the AD’s physics programme. To this end, a new air-cooled antiproton production target will be installed in the antiproton target area this year. Similar challenges are faced when producing neutrons for the n_TOF facility: in this case a new nitrogen-cooled pure-lead spallation target weighing roughly 1.5 tonnes will be commissioned this year, ready to produce neutrons spanning 11 orders of magnitude in energy, from 25 meV to several GeV (see “Neutron production target” image).
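
The quoted dynamic range of the neutron spectrum is straightforward to verify (a trivial check, taking “several GeV” as roughly 3 GeV):

```python
import math

# Span of the n_TOF neutron energy spectrum quoted above.
E_min = 25e-3      # 25 meV, expressed in eV
E_max = 3e9        # "several GeV", taken here as 3 GeV in eV (assumption)

orders = math.log10(E_max / E_min)   # ~11 orders of magnitude
print(f"energy span ~ {orders:.0f} orders of magnitude")
```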

Preparation for irradiation of graphite and copper alloy

Reliability is a key aspect in the construction of beam-intercepting devices, not just because machine operation strongly depends on them, but because replacing devices is not easy due to their residual radioactivation after operation. But how do we know that new devices will fulfil their function successfully once installed in the machine? CERN’s HiRadMat facility, which allows single proton pulse testing using a high-intensity beam from the SPS, is one solution. Extremely high energy densities can be reached in test materials and in complex systems, allowing the experimental teams to investigate – in a controlled manner – the behaviour of materials or complex mechanical systems when impacted by proton (or ion) beams. During the past few years, the facility was heavily employed by both CERN and external teams from laboratories such as STFC, Fermilab, KEK and GSI, testing materials from graphite to copper and iridium across the whole spectrum of densities (see “Material integrity test” image). To be able to correctly predict the behaviour of materials when impacted by protons and other charged particles, a full understanding of thermo-physical and material properties is mandatory. Examples of critical properties include the coefficient of thermal expansion, heat capacity, thermal and electrical conductivity, Young’s modulus and yield strength, as well as their temperature dependence.

Dealing with radiation damage is becoming increasingly important as facilities move to higher beam intensities and energies, presenting potential show-stoppers for some beam-intercepting devices. To better understand and predict the radiation response of materials, the RaDIATE collaboration was founded in 2012, bringing together the high-energy physics, nuclear and related communities. The collaboration’s research includes determining the effect of high-energy proton irradiation on the mechanical properties of potential target and beam-window materials, and developing our understanding via micro-structural studies. The goal is to enable accurate lifetime predictions for materials subjected to beam impact, to design robust components for high-intensity beams, and to develop new materials to extend lifetimes. CERN is a partner in this collaboration, alongside Fermilab, STFC/UKRI, Oak Ridge, KEK, Pacific Northwest National Laboratory, and other institutions and laboratories worldwide.

Future projects 

High-energy physics laboratories across the world are pursuing new energy and/or intensity frontiers, with either hadron or lepton machines. In all cases, whether for collider physics or fixed-target, neutrino or beam-dump experiments, beam-intercepting devices are at the heart of accelerator operations. For the proposed 100 km-circumference Future Circular Collider (FCC), several challenges have already been identified. Owing to the small emittances and high luminosities involved in a first electron–positron FCC phase, the positron source, with its target and capture system, will require dedicated R&D and testing, as will the two lepton dumps. FCC’s proton–proton phase, further in the future, will draw on lessons from HL-LHC operation, but it will also operate at uncharted energy densities for beam-intercepting devices, both for the beam-cleaning and shaping collimators and for the beam dumps.

Installation of the tantalum-clad pure tungsten block

The recently launched muon-collider initiative, meanwhile, will require a target system capable of providing copious amounts of muons generated either by proton beams or electrons impacting on a target, depending on the scheme under consideration. For the former, proton beams of several MW could impinge on a production target, which will have to be very efficient at producing muons of the required momenta while being sufficiently reliable to operate without failure for long periods. The muon-collider target and front-end systems will also require magnets and shielding to be located quite close to the production target and will have to cope with radiation load and heat deposition. These challenges will be tackled extensively in the next few years, both from a physics and an engineering perspective.

Successful beam-intercepting devices require extensive knowledge and skills

As one of the front-runner projects in the Physics Beyond Colliders initiative, the proposed Beam Dump Facility at CERN would require the construction of a general-purpose high-intensity and high-energy fixed-target complex, initially foreseen to be exploited by the Search for Hidden Particles (SHiP) experiment. At the heart of the installation resides a target/dump assembly that can safely absorb the full high-intensity 400 GeV/c SPS beam, while maximising the production of charm and beauty mesons and using high-Z materials, such as pure tungsten and molybdenum alloy, to reduce muon background for the downstream experiment. The nature of the beam pulse induces very high temperature excursions between pulses (up to 100 °C), leading to considerable thermally induced stresses and long-term fatigue considerations. The high average power deposited on target (305 kW) also creates a challenge for heat removal. A prototype target was built and tested at the end of 2018, at one tenth of the nominal power but able to reach the equivalent energy densities and thermal stresses (see “Beam-dump facility” image).

Human efforts

The development, construction and operation of successful beam-intercepting devices require extensive knowledge and skills, ranging from mechanical and nuclear engineering to physics, vacuum technologies and advanced production techniques. Technicians also constitute the backbone of the design, assembly and installation of such equipment. International exchange with experts in these fields and with laboratories facing similar challenges is essential, as is cross-discipline collaboration, for example with the aerospace, nuclear and advanced-materials sectors. In addition, universities provide key students and personnel capable of mastering and developing these techniques both at CERN and in CERN’s member states’ laboratories and industries. This intense multidisciplinary effort is vital to successfully tackle the challenges related to current and future high-energy and high-intensity facilities and infrastructures, as well as to develop systems with broader societal impact, for example in X-ray synchrotrons, medical linacs, and the production of radioisotopes for nuclear medicine.

Lectures on Accelerator Physics

Alex Chao, one of the leading practitioners in the field, has written an introductory textbook on accelerator physics. It is a lucid and insightful presentation of the principles behind the workings of modern accelerators, touching on a multitude of aspects, from elegant mathematical concepts and fundamental electromagnetism to charged-particle optics and the stability of charged particle beams. At the same time, numerous practical examples illustrate key concepts employed in the most advanced machines currently in operation, from high-energy colliders to free-electron lasers. 

The author is careful to keep the text rigorous, yet not to overload it with formal derivations, and exhibits a keen sense for finding simple, convincing arguments to introduce the basic physics. A large number of homework problems (most of them with solutions) facilitate the stated aim of stimulating thinking; their variety is the fruit of extensive teaching experience. The book assumes only a basic understanding of special relativity and electromagnetism, while readers with advanced language skills will benefit from occasional remarks in Chinese, mainly philosophical in nature (translated in most cases). The present reviewer could not help wondering about the missed punchlines.

The discussion on “symplecticity” and Liouville’s theorem lets physics ideas stand out against the background of mathematics

Beginners and advanced students alike will find pleasure in striking derivations of basic properties of simple physical systems by dimensional analysis. Students will also find the presentation on the use of phase-space (coordinate-momentum space) concepts in classical mechanics capable of clearing the fog in their heads. In particular, an insightful presentation of transverse and longitudinal phase-space manipulation techniques provides modern-day examples of advanced designs. Furthermore, an important discussion on “symplecticity” and Liouville’s theorem – ideas that yield powerful constraints on the evolution of dynamical systems – lets physics ideas stand out against the background of formal mathematics. The discussion should help students avoid imagining typical unphysical ideas such as beams focused to infinitesimally small dimensions: the infamous “death rays” first dreamt up in the 1920s and 1930s. The treatment of the stability criteria for linear and non-linear systems, in the latter case introducing the notion of dynamical aperture (the stable region of phase space in a circular accelerator), serves as a concrete illustration of these deep and beautiful concepts of classical mechanics.

The physics of synchrotron radiation and its detailed effects on beam dynamics of charged-particle beams provide the essentials for understanding the properties of lepton and future very-high-energy hadron colliders. Lectures on Accelerator Physics also describes the necessary fundamentals of accelerator-based synchrotron light sources, reaching as far as the physics principles of free-electron lasers and diffraction-limited storage rings.

A chapter on collective instability introduces some of the most important effects related to the stability of beams as multi-particle systems. A number of essential effects, including head–tail instability and the Landau damping mechanism, which play a crucial role in the operation of present and future particle accelerators and colliders, are explained with great elegance. The beginner, armed with the insights gained from these lectures, is well advised to turn to Chao’s classic 1993 text Physics of Collective Beam Instabilities in High Energy Accelerators for a more in-depth treatment of these phenomena.

This book is a veritable “All you wanted to know about accelerator physics but were afraid to ask”. It is a compilation of ideas, and can be used as a less dry companion to yet another classic compilation, in this case of formulas: the Handbook of Accelerator Physics and Engineering, edited by Chao and Maury Tigner.

High-power linac shows promise for accelerator-driven reactors

Physicists at the Institute of Modern Physics (IMP) in Lanzhou, China, have achieved a significant milestone towards an accelerator-driven sub-critical system – a proposed technology for sustainable fission energy. In February, the institute’s prototype front-end linac for the China Accelerator Driven Subcritical System (C-ADS) reached its design goal with the successful commissioning of a 10 mA, 205 kW continuous-wave (CW) proton beam at an energy of 20 MeV. The result breaks the world record for a high-power CW superconducting linac, says Yuan He, director of IMP’s Linac Center: “This result consists of ten years of hard work by IMP scientists, and brings the realisation of an actual ADS facility one step closer to the world.”

The ADS concept, which was proposed by Carlo Rubbia at CERN in the late 1990s, offers a potential technology for nuclear-waste transmutation and the development of safe, sustainable nuclear power. The idea is to sustain fission reactions in a subcritical reactor core with neutrons generated by directing a high-energy proton or electron beam, which can be switched on or off at will, at a heavy-metal spallation target. Such a system could run on non-fissile thorium fuel, which is more abundant than uranium and produces less waste. The challenge is to design an accelerator with the required beam power and long-term reliability, for which a superconducting proton linac is a promising candidate.

CAFe is the world’s first CW superconducting proton linac stepping into the hundred-kilowatt level

Yuan He

In 2011, a team at IMP launched a programme to build a superconducting proton linac (CAFe) with an unprecedented 10 mA beam current. It was upgraded in 2018 by replacing the radio-frequency quadrupole and a cryomodule, but the team faced difficulties in reaching the design goals. Challenges including beam-loss control and detection, heavy beam loading and rapid fault recovery were finally overcome in early 2021, enabling the 38 m-long facility to achieve its design performance at the start of the Chinese new year. CAFe’s beam availability during long-term, high-power operation was measured to be 93–96%, indicating high reliability: 12 hours of operation at 174 kW/10 mA and 108 hours at 126 kW/7.3 mA.
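
For a CW proton linac, beam power is simply the current multiplied by the beam energy per unit charge, so the quoted operating points can be cross-checked (a rough sketch; the 205 kW figure suggests the final beam energy was slightly above 20 MeV):

```python
# Beam power of a CW proton linac: P [kW] = I [mA] * E [MeV]
# (1 mA x 1 MV = 1 kW for singly charged particles).
def beam_power_kw(current_mA, energy_MeV):
    return current_mA * energy_MeV

print(beam_power_kw(10, 20))      # 200 kW, close to the quoted 205 kW at "20 MeV"
print(beam_power_kw(10, 17.4))    # the 174 kW point implies ~17.4 MeV
print(beam_power_kw(7.3, 17.3))   # the 126 kW point implies ~17.3 MeV
```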

The full C-ADS project is expected to be completed this decade. A similar project called MYRRHA is under way at SCK CEN in Belgium, the front-end linac for which recently entered construction. Other ADS projects are under study in Japan, India and other countries.

“CAFe is the world’s first CW superconducting proton linac stepping into the hundred-kilowatt level,” says He. “The successful operation of the 10 mA beam meets the beam-intensity requirement for an experimental ADS demo facility – a breakthrough for ADS linac development and an outstanding achievement in the accelerator field.”

 

The CERN Quantum Technology Initiative

Quantum technologies have the potential to revolutionise science and society, but are still in their infancy. In recent years, the growing importance and potential impact of quantum-technology development have been highlighted by increasing investments in R&D worldwide, in both academia and industry.

Cutting-edge research in quantum systems has been performed at CERN for many years to investigate the many open questions in quantum mechanics and particle physics. Only recently, however, have the different ongoing activities in quantum computing, sensing, communications and theory been brought under a common strategy to assess their potential impact on future CERN experiments.

This webinar, presented by Alberto Di Meglio, will introduce the new CERN Quantum Technology Initiative, give an overview of the Laboratory’s R&D activities and plans in this field, and give examples of the potential impact on research. It will also touch upon the rich international network of activities and how CERN fosters research collaborations.

Alberto Di Meglio is the head of CERN openlab in the IT Department at CERN and co-ordinator of the CERN Quantum Technology Initiative. Alberto is an aerospace engineer (MEng) and electronic engineer (PhD) by education and has extensive experience in the design, development and deployment of distributed computing and data infrastructures and software services for both commercial and research applications.

He joined CERN in 1998 as a data-centre systems engineer. In 2004, he took part in the early stages of development of the High-Energy Physics Computing Grid. From 2010 to 2013, Alberto was project director of the European Middleware Initiative (EMI), a project responsible for developing and maintaining most of the software services powering the Worldwide LHC Computing Grid.

Since 2013, Alberto has been leading CERN openlab, a long-term initiative to organise public–private collaborative R&D projects between CERN, academia and industry in ICT, computer and data science, covering many aspects of today’s technology, from heterogeneous architectures and distributed computing to AI and quantum technologies.









Tooling up to hunt dark matter

Bullet Cluster

The past century has seen ever stronger links forged between the physics of elementary particles and the universe at large. But the picture is mostly incomplete. For example, numerous observations indicate that 87% of the matter of the universe is dark, suggesting the existence of a new matter constituent. Given a plethora of dark-matter candidates, numerical tools are essential to advance our understanding. Fostering cooperation in the development of such software, the TOOLS 2020 conference attracted around 200 phenomenologists and experimental physicists for a week-long online workshop in November.

The viable mass range for dark matter spans 90 orders of magnitude, while the uncertainty about its interaction cross section with ordinary matter is even larger (see “Theoretical landscape” figure). Dark matter may be new particles belonging to theories beyond the Standard Model (BSM), an aggregate of new or SM particles, or very heavy objects such as primordial black holes (PBHs). On the latter subject, Jérémy Auffinger (IP2I Lyon) updated TOOLS 2020 delegates on codes for very light PBHs, noting that “BlackHawk” is the first open-source code for Hawking-radiation calculations.

Flourishing models

Weakly interacting massive particles (WIMPs) have enduring popularity as dark-matter candidates, and are amenable to search strategies ranging from colliders to astrophysical observations. In the absence of any clear detection of WIMPs at the electroweak scale, the number of models has flourished. Above the TeV scale, these include general hidden-sector models, FIMPs (feebly interacting massive particles), SIMPs (strongly interacting massive particles), super-heavy and/or composite candidates and PBHs. Below the GeV scale, besides FIMPs, candidates include the QCD axion, more generic ALPs (axion-like particles) and ultra-light bosonic candidates. ALPs are a class of models that received particular attention at TOOLS 2020, and are now being sought in fixed-target experiments across the globe.

For each dark-matter model, astroparticle physicists must compute the theoretical predictions and characteristic signatures of the model and confront those predictions with the experimental bounds to select the model parameter space that is consistent with observations. To this end, the past decade has seen the development of a huge variety of software – a trend mapped and encouraged by the TOOLS conference series, initiated by Fawzi Boudjema (LAPTh Annecy) in 1999, which has brought the community together every couple of years since.

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users

Three continuously tested codes currently dominate generic BSM dark-matter model computations. Each allows for the computation of the relic density from freeze-out and predictions for direct and indirect detection, often including next-to-leading-order corrections. Agreement between them is maintained at the per-cent level. “micrOMEGAs” is by far the most widely used code, and is capable of predicting observables for any generic model of WIMPs, including those with multiple dark-matter candidates. “DarkSUSY” is more oriented towards supersymmetric theories, but it can be used for generic models as the code has a very convenient modular structure. Finally, “MadDM” can compute WIMP observables for any BSM model from MeV to hundreds of TeV. As MadDM is a plugin of MadGraph, it inherits unique features such as the automatic computation of new dark-matter observables, including indirect-detection processes with an arbitrary number of final-state particles and loop-induced processes. This is essential for analysing sharp spectral features in indirect-detection gamma-ray measurements that cannot be mimicked by any known astrophysical background.

Interaction cross sections versus mass

Both micrOMEGAs and MadDM permit the user to confront theories with recast experimental likelihoods for several direct- and indirect-detection experiments. Jan Heisig (UCLouvain) reported that this is a work in progress, with many more experimental data sets to be included shortly. Torsten Bringmann (University of Oslo) noted that a strength of DarkSUSY is the modelling of qualitatively different production mechanisms in the early universe. Alongside the standard freeze-out mechanism, several new scenarios can arise, such as freeze-in (relevant for FIMP models, in which chemical and kinetic equilibrium are never achieved), dark freeze-out, reannihilation and “cannibalism”, to name just a few. Freeze-in is now supported by micrOMEGAs.
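
These codes solve the full Boltzmann equation numerically, but the familiar textbook freeze-out approximation conveys the basic scaling. The snippet below is a schematic estimate only, not what micrOMEGAs, DarkSUSY or MadDM actually implement:

```python
# Textbook freeze-out estimate of the WIMP relic abundance:
#   Omega * h^2  ~  3e-27 cm^3 s^-1 / <sigma v>
# Schematic scaling only; the codes discussed here solve the Boltzmann
# equation in full.
def omega_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

# A weak-scale annihilation cross section lands near the observed ~0.12:
print(omega_h2(3e-26))   # ~0.1
```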

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users. For example, micrOMEGAs interfaces with SModelS, which is capable of quickly applying all LHC-relevant supersymmetry searches. The software also includes long-lived particles, as commonly found in FIMP models. As MadDM is embedded in MadGraph, noted Benjamin Fuks (LPTHE Paris), tools such as MadAnalysis may be used to recast CMS and ATLAS searches. Celine Degrande (UCLouvain) described another nice tool, FeynRules, which produces model files in both the MadDM and micrOMEGAs formats given the Lagrangian of the BSM model, providing a very useful automated chain from the model directly to the dark-matter observables, high-energy predictions and comparisons with experimental results. Meanwhile, MadDump extends MadGraph’s predictions and detector simulations from the high-energy collider limits to fixed-target experiments such as NA62. To complete a vibrant landscape of development efforts, Tomas Gonzalo (Monash) presented the GAMBIT collaboration’s work to provide tools for global fits to generic dark-matter models.

A phenomenologist’s dream

Huge efforts are underway to develop a computational platform to study new directions in experimental searches for dark matter, and TOOLS 2020 showed that we are already very close to the phenomenologist’s dream for WIMPs. TOOLS 2020 wasn’t just about dark matter either – it also covered developments in Higgs and flavour physics, precision tests and general fitting, and other tools. Interested parties are welcome to join in the next TOOLS conference due to take place in Annecy in 2022.

Iodine aerosol production could accelerate Arctic melting

Sea ice

Researchers at CERN’s CLOUD experiment have uncovered a new mechanism that could accelerate the loss of Arctic sea ice. In a paper published in Science on 5 February, the team showed that aerosol particles made of iodic acid can form extremely rapidly in the marine boundary layer – the portion of the atmosphere that is in direct contact with the ocean. Aerosol particles are important for the climate because they provide the seeds on which cloud droplets form. Marine new-particle formation is especially important since particle concentrations are low and the ocean is vast. However, how new aerosol particles form and influence clouds and climate remains relatively poorly understood.

In polar regions, aerosols and clouds have a warming effect because they absorb infrared radiation otherwise lost to space and then radiate it back down to the surface

Jasper Kirkby

“Our measurements are the first to show that the part-per-trillion-by-volume iodine levels found in marine regions will lead to rapid formation and growth of iodic acid particles,” says CLOUD spokesperson Jasper Kirkby of CERN, adding that the particle formation rate is also strongly enhanced by ions from galactic cosmic rays. “Although most atmospheric particles form from sulphuric acid, our study shows that iodic acid – which is produced by the action of sunlight and ozone on molecular iodine emitted by the sea surface, sea ice and exposed seaweed – may be the main driver in pristine marine regions.”

CLOUD is a one-of-a-kind experiment that uses an ultraclean cloud chamber to measure the formation and growth of aerosol particles from a mixture of vapours under precisely controlled atmospheric conditions, including the use of a high-energy beam from the Proton Synchrotron to simulate cosmic rays up to the top of the troposphere. Last year, the team found that small inhomogeneities in the concentrations of ammonia and nitric acid can have a major role in driving winter smog episodes in cities. The latest result is similarly important but in a completely different area, says Kirkby.

“In polar regions, aerosols and clouds have a warming effect because they absorb infrared radiation otherwise lost to space and then radiate it back down to the surface, whereas they reflect no more incoming sunlight than the snow-covered surface. As more sea surface is exposed by melting ice, the increased iodic acid aerosol and cloud-seed formation could provide a previously unaccounted positive feedback that accelerates the loss of sea ice. However, the effect has not yet been modelled so we can’t quantify it yet.”

Farewell Daya Bay, hello JUNO

Daya Bay

In October 2007, neutrino physicists broke ground 55 km north-east of Hong Kong to build the Daya Bay Reactor Neutrino Experiment. Comprising eight 20-tonne liquid-scintillator detectors sited within 2 km of the Daya Bay nuclear plant, its aim was to look for the disappearance of electron antineutrinos as a function of distance to the reactor. This would constitute evidence for mixing between the electron and the third neutrino mass eigenstate, as described by the parameter θ13. Back then, θ13 was the least well known angle in the Pontecorvo–Maki–Nakagawa–Sakata matrix, which quantifies lepton mixing, with only an upper limit available. Today, it is the best known angle by some margin, and the knowledge that it is nonzero has opened the door to measuring leptonic CP violation at long-baseline accelerator-neutrino experiments.
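
The disappearance signal that the experiments targeted is described, to a good approximation, by the two-flavour survival probability. The sketch below uses approximate present-day oscillation parameters (assumed values, not numbers from the article):

```python
import math

# Approximate reactor electron-antineutrino survival probability:
#   P ~ 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV (two-flavour approximation).
def survival_probability(L_km, E_GeV, sin2_2theta13=0.085, dm2_eV2=2.5e-3):
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Daya Bay far detectors: baseline ~1.6 km, typical antineutrino energy ~4 MeV,
# giving a deficit of several per cent (parameter values are assumptions).
print(survival_probability(1.6, 0.004))   # ~0.92
```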

Daya Bay was one of a trio of experiments located in close proximity to nuclear reactors, along with RENO in South Korea and Double Chooz in France, which were responsible for this seminal measurement. Double Chooz published the first hint that θ13 was nonzero in 2011, before Daya Bay and RENO established this conclusively the following spring. The experiments also failed to dispel the reactor–antineutrino anomaly, whereby observed neutrino fluxes are a few percent lower than calculations predict. This has triggered a slew of new experiments located mere metres from nuclear-reactor cores, in search of evidence for oscillations involving additional, sterile light neutrinos. As the Daya Bay experiment’s detectors are dismantled, after almost a decade of data taking, the three collaborations can reflect on the rare privilege of having pencilled the value of a previously unknown parameter into the Standard Model Lagrangian.

Particle physics is fundamental and influential, and deserves to be supported

Yi-Fang Wang

Founding Daya Bay co-spokesperson Yi-Fang Wang says the experiment has had a transformative effect on Chinese particle physics, emboldening the country to explore major projects such as a circular electron–positron collider. “One important lesson we learnt from Daya Bay is that we should just go ahead and do it if it is a good project, rather than waiting until everything is ready. We convinced our government that we could do a great job, that world-class jobs need to be international, and that particle physics is fundamental and influential, and deserves to be supported.”

JUNO

The experiment has also paved the way for China to build a successor, the Jiangmen Underground Neutrino Observatory (JUNO), for which Wang is now spokesperson. JUNO will tackle the neutrino mass hierarchy – the question of whether the third neutrino mass eigenstate is the most or least massive of the three. An evolution of Daya Bay, the new experiment will also measure a deficit of electron antineutrinos, but at a distance of 53 km, seeking to resolve fast and shallow oscillations that are expected to differ depending on the neutrino mass hierarchy. Excavation of a cavern for the 20 kilotonne liquid-scintillator detector 700 m beneath the Dashi hill in Guangdong was completed at the end of 2020. The construction of a concrete water pool is the next step.

The next steps in reactor-neutrino physics will involve an extraordinary miniaturisation

Thierry Lasserre

The detector concept that the three experiments used to uncover θ13 was designed by the Double Chooz collaboration. Thierry Lasserre, one of the experiment’s two founders, recalls that it was difficult, 20 years ago, to convince the community that the measurement was possible at reactors. “It should not be forgotten that significant experimental efforts were also undertaken in Angra dos Reis, Braidwood, Diablo Canyon, Krasnoyarsk and Kashiwazaki,” he says. “Reactor neutrino detectors can now be used safely, routinely and remotely, and some of them can even be deployed on the surface, which will be a great advantage for non-proliferation applications.” The next steps in reactor-neutrino physics, he explains, will now involve an extraordinary miniaturisation to cryogenic detectors as small as 10 grams, which take advantage of the much larger cross section of coherent neutrino scattering.

HPC computing collaboration kicks off

CERN has welcomed more than 120 delegates to an online kick-off workshop for a new collaboration on high-performance computing (HPC). CERN, SKAO (the organisation leading the development of the Square Kilometre Array), GÉANT (the pan-European network and services provider for research and education) and PRACE (the Partnership for Advanced Computing in Europe) will work together to realise the full potential of the coming generation of HPC technology for data-intensive science.

It is an exascale project for an exascale problem

Maria Girone

“It is an exascale project for an exascale problem,” said Maria Girone, CERN coordinator of the collaboration and CERN openlab CTO, in opening remarks at the workshop. “HPC is at the intersection of several important R&D activities: the expansion of computing resources for important data-intensive science projects like the HL-LHC and the SKA, the adoption of new techniques such as artificial intelligence and machine learning, and the evolution of software to maximise the potential of heterogeneous hardware architectures.”

The 29 September workshop, which was organised with the support of CERN openlab, saw participants establish the collaboration’s foundations, outline initial challenges and begin to define the technical programme. Four main initial areas of work were discussed at the event: training and centres of expertise, benchmarking, data access, and authorisation and authentication.

One of the largest challenges in using new HPC technology is the need to adapt to heterogeneous hardware. This involves the development and dissemination of new programming skills, which is at the core of the new HPC collaboration’s plan. A number of examples showing the potential of heterogeneous systems were discussed. One is the EU-funded DEEP-EST project, which is developing a modular supercomputing prototype for exascale computing. DEEP-EST has already contributed to the re-engineering of high-energy physics algorithms for accelerated architectures, highlighting the significant mutual benefits of collaboration across fields when it comes to HPC. PRACE’s excellent record of providing support and training will also be critical to the success of the collaboration.

Benchmarking progress

Establishing a common benchmark suite will help the organisations to measure and compare the performance of different types of computing resources for data-analysis workflows from astronomy and particle physics. The suite will include applications representative of the HEP and astrophysics communities – reflecting today’s needs, as well as those of the future – and augment the existing Unified European Applications Benchmark Suite.

Access is another challenge when using HPC resources. Data from the HL-LHC and the SKA will be globally distributed and will be moved over high-capacity networks, staged and cached to reduce latency, and eventually processed, analysed and redistributed. Accessing the HPC resources themselves involves adherence to strict cyber-security protocols. A technical area devoted to authorisation and authentication infrastructure is defining demonstrators to enable large scientific communities to securely access protected resources.

The collaboration will now move forward with its ambitious technical programme. Working groups are forming around specific challenges, with the partner organisations providing access to appropriate testbed resources. Important activities are already taking place in all four areas of work, and a second collaboration workshop will soon be organised.

Learning language by machine

Lingvist CEO Mait Müntel talks to Rachel Bray

Mait Müntel came to CERN as a summer student in 2004 and quickly became hooked on particle physics, completing a PhD in the CMS collaboration in 2008 with a thesis devoted to signatures of doubly charged Higgs bosons. Continuing in the field, he was one of the first to do shifts in the CMS control room when the LHC ramped up. It was then that he realised that the real LHC data looked nothing like the Monte Carlo simulations of his student days. Many things had to be rectified, but Mait admits he was none too fond of coding and didn’t have any formal training. “I thought I would simply ‘learn by doing’,” he says. “However, with hindsight, I should probably have been more systematic in my approach.” Little did he know that, within a few years, he would be running a company with around 40 staff developing advanced language-learning algorithms.

Memory models

Despite spending long periods in the Geneva region, Mait had not found the time to pick up French. Frustrated, he began to take an interest in the use of computers to help humans learn languages at an accelerated speed. “I wanted to analyse from a statistical point of view the language people were actually speaking, which, having spent several years learning both Russian and English, I was convinced was very different to what is found in academic books and courses,” he says. Over the course of one weekend, he wrote a software crawler that enabled him to download a collection of French subtitles from a film database. His next step was to study memory models to understand how one acquires new knowledge, calculating that, if a computer program could intelligently decide what would be optimal to learn in the next moment, it would be possible to learn a language in only 200 hours. He started building some software using ROOT (the object-oriented program and library developed by CERN for data analysis) and, within two weeks, was able to read a proper book in French. “I had included a huge book library in the software and as the computer knew my level of vocabulary, it could recommend books for me. This was immensely gratifying and pushed me to progress even further.” Two months later, he passed the national French language exam in Estonia.
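
As a generic illustration of the kind of memory-model-driven scheduling described here, the toy sketch below models each word with an exponential forgetting curve and always reviews the item most at risk of being forgotten. It is not Lingvist’s actual algorithm, and all names and values are hypothetical:

```python
import math
import time

def recall_probability(seconds_since_review, stability_s):
    """Estimated probability that the learner still remembers the item
    (exponential forgetting curve; purely illustrative)."""
    return math.exp(-seconds_since_review / stability_s)

def pick_next_word(words, now):
    """Choose the word with the lowest predicted recall probability."""
    return min(
        words,
        key=lambda w: recall_probability(now - w["last_review"], w["stability"]),
    )

now = time.time()
words = [                                   # hypothetical learner state
    {"word": "fromage", "last_review": now - 3600, "stability": 7200},
    {"word": "pont",    "last_review": now - 600,  "stability": 1800},
]
print(pick_next_word(words, now)["word"])   # "fromage" is closest to being forgotten
```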

Mait became convinced that he had to do something with his idea. So he went on holiday, and hired two software developers to adapt his code so it would work on the web. Whilst on holiday, he happened to meet a friend of a friend, who helped him set up Lingvist as a company. Estonia, he says, has a fantastic start-up and software-development culture thanks to Skype, which was invented there. Later, Mait met the technical co-founder of Skype at a conference, who coincidentally had been working on software to accelerate human learning; the co-founder dropped his own attempts and became Lingvist’s first investor.

Short-term memory capabilities can differ between five minutes and two seconds!

Mait Müntel

The pair secured a generous grant from the European Union’s Horizon 2020 programme and things were falling into place, though it wasn’t all easy, says Mait: “You can use the analogy of sitting in a nice warm office at CERN, surrounded by beautiful mountains. In the office, you are safe and protected, but if you go outside and climb the mountains, you encounter rain and hail, it is an uphill struggle and very uncomfortable, but immensely satisfying when you reach the summit. Even if you work more than 100 hours per week.”

Lingvist currently has three million users, and Mait is convinced that the technology can be applied to all types of education. “What our data have demonstrated is that levels of learning in people are very different. Short-term memory capabilities can differ between five minutes and two seconds! Currently, based on our data, the older generation has much better memory characteristics. The benefit of our software is that it measures memory, and no matter one’s retention capabilities, the software will help improve retention rates.”

New talents

Faced with a future where artificial intelligence will make many jobs extinct, and many people will need to retrain, competitiveness will be derived from the speed at which people can learn, says Mait. He is now building Lingvist’s data-science research team to grow the company to its full potential, and is always on the lookout for new CERN talent. “Traditionally, physicists have excellent modelling, machine-learning and data-analysis skills, even though they might not be aware of it,” he says.
