Topics

CERN shares beampipe know-how for gravitational-wave observatories

The direct detection of gravitational waves in 2015 opened a new window on the universe, allowing researchers to study the cosmos by combining data from multiple sources. There are currently four gravitational-wave telescopes (GWTs) in operation: LIGO at two sites in the US, Virgo in Italy, KAGRA in Japan and GEO600 in Germany. Discussions are ongoing to establish an additional site in India. The detection of gravitational waves is based on Michelson laser interferometry with Fabry–Perot cavities, which reveals the expansion and contraction of space at the level of ten-thousandths of the size of an atomic nucleus, i.e. 10⁻¹⁹ m. Despite the extremely low strain that must be detected, on average one gravitational wave is measured per week of observation, thanks to the careful study and minimisation of all possible noise sources, including seismic vibration and residual-gas scattering. The latter is reduced by placing the interferometer in a pipe in which an ultrahigh vacuum is maintained. In the case of Virgo, the pressure inside the two perpendicular 3 km-long arms of the interferometer is lower than 10⁻⁹ mbar.
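To put the quoted displacement in context, a back-of-envelope calculation converts a dimensionless strain h into an arm-length change ΔL = h·L. The strain value below is an assumed, typical order of magnitude for detected events, not a figure from this article:

```python
# Back-of-envelope arm-length change for a Michelson interferometer.
# The strain value is a typical order of magnitude for detected events,
# assumed here for illustration only.

def arm_length_change(strain: float, arm_length_m: float) -> float:
    """Return the absolute arm-length change dL = h * L in metres."""
    return strain * arm_length_m

h = 3e-23          # dimensionless strain (illustrative assumption)
L = 3e3            # Virgo arm length in metres
dL = arm_length_change(h, L)
print(f"dL = {dL:.1e} m")   # of order 1e-19 m, far smaller than a nucleus
```

For 3 km arms, a strain of a few times 10⁻²³ thus corresponds to the 10⁻¹⁹ m displacement scale quoted above.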

While current facilities are being operated and upgraded, the gravitational-wave community is also focusing on a new generation of GWTs that will provide even better sensitivity. This would be achieved by longer interferometer arms, together with a drastic reduction of noise that might require cryogenic cooling of the mirrors. The two leading studies are the Einstein Telescope (ET) in Europe and the Cosmic Explorer (CE) in the US. The total length of the vacuum vessels envisaged for the ET and CE interferometers is 120 km and 160 km, respectively, with a tube diameter of 1 to 1.2 m. The required operational pressures are similar to those needed for modern accelerators (i.e. in the region of 10⁻¹⁰ mbar for hydrogen and even lower for other gas species). The next generation of GWTs would therefore represent the largest ultrahigh vacuum systems ever built.

The next generation of gravitational-wave telescopes would represent the largest ultrahigh vacuum systems ever built.

Producing these pressures is not difficult, as the present vacuum systems of GWT interferometers achieve a comparable degree of vacuum. The challenge, instead, is cost. If previous-generation solutions were adopted, the vacuum-pipe system would amount to half of the estimated cost of CE and not far from one third of that of ET, which is dominated by underground civil engineering. Reducing the cost of the vacuum systems requires the development of different technical approaches from those of previous-generation facilities. Developing cheaper technologies is also a key subject for future accelerators, and a synergy in terms of manufacturing methods, surface treatments and installation procedures is already visible.

Within an official framework between CERN and the lead institutes of the ET study – Nikhef in the Netherlands and INFN in Italy – CERN’s TE-VSC and EN-MME groups are sharing their expertise in vacuum, materials, manufacturing and surface treatments with the gravitational-wave community. The activity started in September 2022 and is expected to conclude at the end of 2025 with a technical design report and a full test of a vacuum-vessel pilot sector. During the workshop “Beampipes for Gravitational Wave Telescopes 2023”, held at CERN from 27 to 29 March, 85 specialists from different communities encompassing accelerator and gravitational-wave technologies, and from companies focusing on steel production, pipe manufacturing and vacuum equipment, gathered to discuss the latest progress. The event followed a similar one hosted by LIGO Livingston in 2019, which gave important directions for research topics.

Plotting a course
In a series of introductory contributions, the basic theoretical elements regarding vacuum requirements and the status of CE and ET studies were presented, highlighting initiatives in vacuum and material technologies undertaken in Europe and the US. The detailed description of current GWT vacuum systems provided a starting point for the presentations of ongoing developments. To conduct an effective cost analysis and reduction, the entire process must be taken into account — including raw material production and treatment, manufacturing, surface treatment, logistics, installation, and commissioning in the tunnel. Additionally, the interfaces with the experimental areas and other services such as civil engineering, electrical distribution and ventilation are essential to assess the impact of technological choices for the vacuum pipes.

The selection criteria for the structural materials of the pipe were discussed, with steel currently being the material of choice. Ferritic steels would bring a significant cost reduction compared with the austenitic steels currently used in accelerators, because they do not contain nickel. Furthermore, thanks to their body-centred cubic crystal structure, ferritic steels have a much lower content of residual hydrogen – the main obstacle to attaining ultrahigh vacuum – and thus do not require expensive solid-state degassing treatments. The cheapest ferritic steels are “mild steels”, which are common in gas pipelines once treated against corrosion. Ferritic stainless steels, which contain more than 12% by weight of dissolved chromium, are also being studied for GWT applications. While first results are encouraging, the magnetic properties of these materials must be considered to avoid anomalous transmission of electromagnetic signals and of induced mechanical vibrations.
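The practical impact of low hydrogen outgassing can be sketched with a simple throughput estimate, S = q·A/P: for a specific outgassing rate q and wall area A, the pumping speed S needed per metre of pipe to hold a target pressure P scales directly with q. The outgassing rates below are illustrative assumptions, not measured figures for these steels:

```python
import math

# Rough estimate of the pumping speed needed per metre of beampipe to
# reach a target pressure, given a specific outgassing rate.  All numbers
# are illustrative assumptions, not project specifications.

def pumping_speed_per_metre(q, diameter_m, target_p):
    """q: outgassing rate [mbar*l/(s*cm^2)], target_p: pressure [mbar].
    Returns the required pumping speed [l/s] per metre of pipe."""
    area_cm2 = math.pi * (diameter_m * 100) * 100   # inner wall area per metre
    return q * area_cm2 / target_p

# Baked austenitic steel vs a low-hydrogen ferritic steel (assumed rates)
for name, q in [("austenitic, q = 1e-12", 1e-12), ("ferritic, q = 1e-14", 1e-14)]:
    s = pumping_speed_per_metre(q, diameter_m=1.0, target_p=1e-10)
    print(f"{name}: {s:.0f} l/s per metre")
```

Under these assumptions, a hundred-fold lower outgassing rate reduces the pumping speed needed per metre from hundreds of litres per second to a few, which over 120 km of pipe translates directly into cost.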

Four solutions regarding the design and manufacturing of the pipes and their support system were discussed at the March workshop. The baseline is a 3 to 4 mm-thick tube similar to those operating in Virgo and LIGO, with some modifications to cope with the new tunnel environment and stricter sensitivity requirements. Another option is a 1 to 1.5 mm-thick corrugated vessel that does not require reinforcement and expansion bellows. Designs based on double-wall pipes were also discussed, with a thin, easy-to-heat inner wall and an external wall performing the structural role. An insulation vacuum would be generated between the two walls, without the cleanliness and pressure requirements imposed on the laser-beam vacuum. The forces acting on the inner wall during pressure transients would be minimised by opening axial movement valves, which are not yet fully designed. Finally, a gas-pipeline solution was also considered, based on a half-inch-thick wall made of mild steel. The main advantage of this solution is its relatively low cost, as it is a standard approach in the oil and gas industry; however, corrosion protection and ultrahigh-vacuum needs would require surface treatments, currently under consideration, on both sides of the pipe walls. For all designs, the integration of optical baffles (which reduce the pipe aperture at intervals along its length to block scattered photons) is a matter of intense study, with options for position, material, surface treatment and installation reported. The transfer of vibrations from the tunnel structure to the baffles is another hot topic.
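A hedged estimate illustrates why the thin-walled options need corrugation, stiffening rings or a structural outer wall: the classical elastic-buckling pressure of a long, smooth, thin-walled tube under external pressure is p_cr ≈ 2E/(1 − ν²)·(t/d)³. This is a textbook formula; the wall thicknesses and material values below are assumptions, not project figures:

```python
# Classical elastic-buckling estimate for a long, smooth, thin-walled tube
# under uniform external pressure: p_cr = 2E/(1 - nu^2) * (t/d)^3.
# Material properties and geometries are illustrative assumptions.

def critical_external_pressure(E, nu, t, d):
    """Critical buckling pressure [Pa] for wall thickness t and diameter d [m]."""
    return 2 * E / (1 - nu**2) * (t / d) ** 3

E_steel = 200e9    # Young's modulus of steel [Pa] (assumed)
nu = 0.3           # Poisson's ratio
atm = 101_325.0    # external pressure on an evacuated pipe [Pa]

for t in (0.004, 0.0015):   # baseline vs thin-walled wall thickness [m]
    p = critical_external_pressure(E_steel, nu, t, d=1.0)
    print(f"t = {t*1000:.1f} mm: p_cr = {p/atm:.2f} atm")
```

Under these assumptions both results fall below 1 atm, so a smooth evacuated tube of this diameter and wall thickness would buckle without reinforcement, which is what the corrugated and double-wall designs address.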

The manufacturing of the pipes directly from metal coils, and their surface treatment, can be carried out at supplier facilities or directly at the installation site. The former approach would reduce the cost of infrastructure and manpower, while the latter would reduce transport costs and give the global logistics an additional degree of freedom, as storage areas would be minimised. The study of in-situ production was taken to its limit in a conceptual study of a process that could deliver pipes of any desired length directly in the underground areas: the metal coil arrives in the tunnel and is installed in a dedicated machine that unrolls it and welds the metal sheet to form a pipe of arbitrary length.

These topics will undergo further development in the coming months, and the results will be incorporated into a comprehensive technical design report. This report will include a detailed cost optimization and will be validated in a pilot sector at CERN. With just under two and a half years of the project remaining, its success will demand a substantial effort and resolute motivation. The optimism instilled by the enthusiasm and collaborative approach demonstrated by all participants at the workshop is therefore highly encouraging.

Europe targets a user facility for plasma acceleration

Electron-driven plasma wakefield acceleration

Energetic beams of particles are used to explore the fundamental forces of nature, produce known and unknown particles such as the Higgs boson at the LHC, and generate new forms of matter, for example at the future FAIR facility. Photon science also relies on particle beams: electron beams that emit pulses of intense synchrotron light, including soft and hard X-rays, in either circular or linear machines. Such light sources enable time-resolved measurements of biological, chemical and physical structures on the molecular down to the atomic scale, allowing a diverse global community of users to investigate systems ranging from viruses and bacteria to materials science, planetary science, environmental science, nanotechnology and archaeology. Last but not least, particle beams for industry and health support many societal applications ranging from the X-ray inspection of cargo containers to food sterilisation, and from chip manufacturing to cancer therapy. 

This scientific success story has been made possible through a continuous cycle of innovation in the physics and technology of particle accelerators, driven for many decades by exploratory research in nuclear and particle physics. The invention of radio-frequency (RF) technology in the 1920s opened the path to energy gains of several tens of MeV per metre. Very-high-energy accelerators were constructed with RF technology, reaching the GeV and finally the TeV energy scales at the Tevatron and the LHC. New collision schemes were developed, for example the mini “beta squeeze” in the 1970s, advancing luminosity and collision rates by orders of magnitude. The invention of stochastic cooling at CERN enabled the discovery of the W and Z bosons 40 years ago.

However, intrinsic technological and conceptual limits mean that the size and cost of RF-based particle accelerators are increasing as researchers seek higher beam energies. Colliders for particle physics have reached a circumference of 27 km at LEP/LHC and close to 100 km for next-generation facilities such as the proposed Future Circular Collider. Machines for photon science, operating in the GeV regime, occupy a footprint of up to several km and the approval of new facilities is becoming limited by physical and financial constraints. As a result, the exponential progress in maximum beam energy that has taken place during the past several decades has started to saturate (see “Levelling off” figure). For photon science, where beam-time on the most powerful facilities is heavily over-subscribed, progress in scientific research and capabilities threatens to become limited by access. It is therefore hoped that the development of innovative and compact accelerator technology will provide a practical path to more research facilities and ultimately to higher beam energies for the same investment. 

Maximum beam acceleration

At present the most successful new technology relies on the concept of plasma acceleration. Proposed in 1979, this technique promises energy gains of up to 100 GeV per metre of acceleration, up to 1000 times higher than is possible in RF accelerators. In essence, the metallic walls of an RF cavity, with their intrinsic field limitations, are replaced by a dynamic and robust plasma structure with very high fields. First, the free electrons in a neutral plasma are used to convert the transverse ponderomotive force of a laser, or the transverse space-charge force of a charged particle beam, into a longitudinal accelerating field. While the “light” electrons in the plasma column are expelled from the path of the driving force, the “heavy” plasma ions remain in place. The ions therefore establish a restoring force and re-attract the oscillating plasma electrons. A plasma cavity forms behind the drive pulse, in which the main electron beam is placed and accelerated at up to 100 GV per metre. Difficulties in the plasma-acceleration scheme arise from the small scales involved (sub-mm transverse diameter) and the micrometre-level tolerances and stability required. Different concepts include laser-driven plasma wakefield acceleration (LWFA), electron-driven plasma wakefield acceleration (PWFA) and proton-beam-driven plasma wakefield acceleration. Gains in electron energy have reached 8 GeV (BELLA, Berkeley), 42 GeV (FFTB, SLAC) and 2 GeV (AWAKE, CERN) in these three schemes, respectively.
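The ~100 GV/m figure can be reproduced from the cold, non-relativistic wave-breaking field E₀ = mₑcω_p/e, a standard plasma-physics estimate, with ω_p the plasma frequency. The electron density chosen below is an assumed, typical experimental value:

```python
import math

# Cold, non-relativistic wave-breaking field E0 = m_e * c * omega_p / e for
# a plasma of electron density n, with omega_p = sqrt(n e^2 / (eps0 m_e)).
# The density is a typical value for plasma-wakefield experiments (assumed).

E_CHARGE = 1.602176634e-19     # elementary charge [C]
M_E = 9.1093837015e-31         # electron mass [kg]
EPS0 = 8.8541878128e-12        # vacuum permittivity [F/m]
C = 299_792_458.0              # speed of light [m/s]

def wave_breaking_field(n_per_m3: float) -> float:
    """Return the wave-breaking field [V/m] for electron density n [m^-3]."""
    omega_p = math.sqrt(n_per_m3 * E_CHARGE**2 / (EPS0 * M_E))
    return M_E * C * omega_p / E_CHARGE

n = 1e24                       # i.e. 1e18 cm^-3
print(f"E0 = {wave_breaking_field(n)/1e9:.0f} GV/m")   # ~96 GV/m
```

The same plasma frequency sets the wakefield wavelength of a few tens of micrometres at this density, which is the origin of the sub-mm structures and micrometre tolerances mentioned above.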

At the same time, the beam quality of plasma-acceleration schemes has advanced sufficiently to reach that required for free-electron lasers (FELs): linac-based facilities that produce extremely brilliant, short pulses of radiation for the study of ultrafast molecular and other processes. There have been several reports of free-electron lasing in plasma-based accelerators in recent years, one relying on LWFA by a team in China and one on PWFA by the EuPRAXIA team in Frascati, Italy. Another publication by a French and German team has recently demonstrated seeding of the FEL process in an LWFA plasma accelerator.

Scientific and technical progress in plasma accelerators is driven by several dozen groups and a number of major test facilities worldwide, including internationally leading programmes at CERN, STFC, CNRS, DESY, various centres and institutes in the Helmholtz Association, INFN, LBNL, RAL, Shanghai XFEL, SCAPA, SLAC, SPring-8, Tsinghua University and others. In Europe, the 2020 update of the European strategy for particle physics included plasma accelerators as one of five major themes, and a strategic analysis towards a possible plasma-based collider was published in a 2022 CERN Yellow Report on future accelerator R&D.

Enter EuPRAXIA

In 2014 researchers in Europe agreed that a combined, coordinated R&D effort should be set up to realise a larger plasma-based accelerator facility that serves as a demonstrator. The project should aim to produce high-quality 5 GeV electron beams via innovative laser- and electron-driven plasma wakefield acceleration, achieving a significant reduction in size and possible savings in cost over state-of-the-art RF accelerators. This project was named the European Plasma Research Accelerator with Excellence in Applications (EuPRAXIA), and it was agreed that it should deliver pulses of X-rays, photons, electrons and positrons to users from several disciplines. EuPRAXIA’s beams will mainly serve the fields of structural biology, chemistry, materials science, medical imaging, particle-physics detectors and archaeology. It is not a dedicated particle-physics facility but will be an important stepping stone towards any plasma-based collider.

EuPRAXIA project consortia

The EuPRAXIA project started in 2015 with a design study, funded under the European Union (EU) Horizon 2020 programme, which culminated at the end of 2019 with the publication of the world’s first conceptual design report for a plasma-accelerator facility. The targets set out in 2014 could all be achieved in the EuPRAXIA conceptual design. In particular, it was shown that sufficiently competitive performance could be reached and that an initial reduction in facility size by a factor of two to three is indeed achievable for a 5 GeV plasma-based FEL facility. The published design includes realistic constraints on transfer lines, facility infrastructure, laser-lab space, undulator technologies, user areas and radiation shielding. Several innovative solutions were developed, including the use of magnetic chicanes for high-quality, multi-stage plasma accelerators. The EuPRAXIA conceptual design report was submitted to peer review and published in 2020.

The EuPRAXIA implementation plan proposes a distributed research infrastructure with two construction and user sites and several centres of excellence. The presently foreseen centres, in the Czech Republic, France, Germany, Hungary, Portugal and the UK, will support R&D, prototyping and the construction of machine components for the two user sites. This distributed concept will ensure international competitiveness and leverage existing investments in Europe in an optimal way. Having received official government support from Italy, Portugal, the Czech Republic, Hungary and the UK, the consortium applied in 2020 to the European Strategy Forum on Research Infrastructures (ESFRI). The proposed facility for a free-electron laser was then included in the 2021 ESFRI roadmap, which identifies research facilities of pan-European importance that correspond to the long-term needs of European research communities. EuPRAXIA is the first ever plasma-accelerator project on the ESFRI roadmap and the first accelerator project since the placement of the High-Luminosity LHC in 2016.

Stepping stones to a user facility 

In 2023 the European plasma-accelerator community received a major impulse for the development of a user-ready plasma-accelerator facility with the funding of several multi-million euro initiatives under the umbrella of the EuPRAXIA project. These are the EuPRAXIA preparatory phase, EuPRAXIA doctoral network and EuPRAXIA advanced photon sources, as well as funding for the construction of one of the EuPRAXIA sites in Frascati, near Rome (see “Frascati future” image). 

Proposed EuPRAXIA building

The EU, Switzerland and the UK have awarded €3.69 million to the EuPRAXIA preparatory phase, which comprises 34 participating institutes from Italy, the Czech Republic, France, Germany, Greece, Hungary, Israel, Portugal, Spain, Switzerland, the UK, the US and CERN as an international organisation. The new grant will give the consortium a unique chance to prepare the full implementation of EuPRAXIA over the next four years. The project will fund 548 person-months, including additional funding from the UK and Switzerland, and will be supported by an additional 1010 person-months in-kind. The preparatory-phase project will connect research institutions and industry from the above countries plus China, Japan, Poland and Sweden, which signed the EuPRAXIA ESFRI consortium agreement, and define the full implementation of the €569 million EuPRAXIA facility as a new, distributed research infrastructure for Europe. 

Alongside the EuPRAXIA preparatory phase, a new Marie Skłodowska-Curie doctoral network, coordinated by INFN, has also been funded by the EU and the UK. The network, which started in January 2023 and benefits from more than €3.2 million over its four-year duration, will offer 12 high-level fellowships across 10 universities, six research centres and seven industry partners, which will carry out an interdisciplinary and cross-sector plasma-accelerator research and training programme. The project’s focus is on scientific and technical innovations, and on boosting the career prospects of its fellows.

EuPRAXIA at Frascati

Italy is supporting the EuPRAXIA advanced photon sources project (EuAPS) with €22 million. This project has been promoted by INFN, CNR and Tor Vergata University of Rome. EuAPS will fulfil some of the scientific goals defined in the EuPRAXIA conceptual design report by building and commissioning a distributed user facility providing users with advanced photon sources; these consist of a plasma-based betatron source delivering soft X-rays, a mid-power, high-repetition-rate laser and a high-power laser. The funding comes in addition to about €120 million for construction of the beam-driven facility and the FEL facility of EuPRAXIA at Frascati. R&D activities for the beam-driven facility are currently being performed at the INFN SPARC_LAB laboratory. 

EuPRAXIA is the first ever plasma-accelerator project on the ESFRI roadmap 

EuPRAXIA will be the user facility of the future for the INFN Frascati National Laboratory. The European site for the second, laser-driven leg of EuPRAXIA will be decided in 2024 as part of the preparatory-phase project. Present candidate sites include ELI-Beamlines in the Czech Republic, the future EPAC facility in the UK and CNR in Italy. With its foreseen electron energy range of 1–5 GeV, the facility will enable applications in diverse domains, for instance, as a compact free-electron laser, compact sources for medical imaging and positron generation, tabletop test beams for particle detectors, and deeply penetrating X-ray and gamma-ray sources for materials testing. The first parts of EuPRAXIA are foreseen to enter into operation in 2028 at Frascati and are designed to be a stepping stone for possible future plasma-based facilities, such as linear colliders at the energy frontier. The project is driven by the excellence, ingenuity and hard work of several hundred physicists, engineers, students and support staff who have worked on EuPRAXIA since 2015, connecting, at present, 54 institutes and industries from 18 countries in Europe, Asia and the US.

Cosmic rays for cultural heritage

In 1965, three years before being awarded a Nobel prize for his decisive contributions to elementary particle physics, Luis Alvarez proposed to use cosmic muons to look inside an Egyptian pyramid. A visit to the Giza pyramid complex a few years earlier had made him ponder why, despite the comparable size of the Great Pyramid of Khufu and the Pyramid of Khafre, the latter was built with a simpler structure – simpler even than the tomb of Khufu’s great-grandfather Sneferu, under whose reign there had been architectural experimentation and pyramids had grown in complexity. Only one burial chamber is known in the superstructure of Khafre’s pyramid, while two are located in the tombs of each of his two predecessors. Alvarez’s doubts were not shared by many archaeologists, and he was certainly aware that the history of architecture is not a continuous process and that family relationships can be complicated; but like many adventurers before him, he was fascinated by the idea that some hidden chambers could still be waiting to be discovered. 

The principles of muon radiography, or “muography”, were already textbook knowledge at that time. Muons are copiously produced in particle cascades originating from naturally occurring interactions between primary cosmic rays and atmospheric nuclei. The energy of most of these cosmogenic muons is large enough that, despite their relatively short intrinsic lifetime, relativistic time dilation allows them to survive the journey from the upper atmosphere to Earth’s surface – where their penetrating power makes them a promising tool to probe the depths of very large and dense volumes non-destructively. Thick and dense objects attenuate the cosmic-muon flux significantly by stopping its low-energy component, thus casting a “shadow” analogous to a conventional radiograph. The earliest known attempt to use muon-flux attenuation for practical purposes was the estimation of the overburden of a tunnel in Australia using Geiger counters on a rail, published in 1955 in an engineering journal. This obscure precedent was probably unknown to Alvarez, who didn’t cite it.
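The penetrating power of cosmogenic muons can be gauged with a minimal sketch: assuming only the average ionisation loss of roughly 2 MeV per g/cm², the minimum energy a muon needs to cross an object scales with its opacity (density × thickness). The thicknesses and densities below are illustrative assumptions:

```python
# Rough minimum muon energy needed to traverse a thickness of material,
# using only the average ionisation loss of ~2 MeV per g/cm^2.
# Densities and thicknesses are illustrative assumptions.

DEDX_MEV_PER_G_CM2 = 2.0   # approximate minimum-ionising energy loss

def min_muon_energy_gev(thickness_m: float, density_g_cm3: float) -> float:
    """Return the approximate energy [GeV] a muon needs to cross the object."""
    opacity = thickness_m * 100 * density_g_cm3    # opacity in g/cm^2
    return DEDX_MEV_PER_G_CM2 * opacity / 1000     # convert MeV to GeV

# ~100 m of limestone (pyramid-scale) vs ~1 km of rock (volcano or tunnel)
print(f"~100 m of limestone: {min_muon_energy_gev(100, 2.3):.0f} GeV")
print(f"~1 km of rock: {min_muon_energy_gev(1000, 2.65):.0f} GeV")
```

Since the cosmic-muon spectrum at the surface falls steeply with energy, raising this threshold by adding material sharply reduces the transmitted flux, which is what a muography detector measures.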

Led by Alvarez, the Joint Pyramid Project was officially established in 1966. The detector that the team built and installed in the known large chamber at the bottom of Khafre’s pyramid was based on spark chambers, which were standard equipment for particle-physics experiments at that time. Less common were the computers provided by IBM for Monte Carlo simulations, which played a crucial role in the data interpretation. It took some time for the project to take off. Just as the experiment was ready to take data, the Six-Day War broke out, delaying progress by several months until diplomatic relationships were restored between Cairo and Washington. All this might sound like a promising subject for a Hollywood blockbuster were it not for its anticlimax: no hidden chamber was found. Alvarez always insisted that there is a difference between not finding what you search for and conclusively excluding its existence, but despite this important distinction, one wonders how much muography’s fame would have benefitted from a discovery. Their study, published in Science in 1970, set an example that was followed in subsequent decades by many more interdisciplinary applications.  

The second pyramid to be muographed was in Mexico more than 30 years later, when researchers from the National Autonomous University of Mexico (UNAM) started to search for hidden chambers in the Pyramid of the Sun at Teotihuacan. Built about 1800 years ago, it is the third largest pyramid in the world after Khufu’s and Khafre’s, and its purpose is still a mystery. Although there is no sign that it contains burial chambers, the hypothesis that this monument served as a tomb is not entirely ruled out. After more than a decade of data taking, the UNAM muon detector (composed of six layers of multi-wire chambers occupying a total volume of 1.5 m³) found no hidden chamber. But the researchers did find evidence, reported in 2013, for a very wide low-density volume in the southern side, which is still not understood and led to speculation that this side of the pyramid might be in danger of collapse.

Big void 

Muography returned to Egypt with the ScanPyramids project, which has been taking data since 2015. The project made headlines in 2017 by revealing an unexpected low-density anomaly in Khufu’s Great Pyramid, tantalisingly similar in size and shape to the Grand Gallery of the same building. Three teams of physicists from Japan and France participated in the endeavour, cross-checking each other by using different detector technologies: nuclear emulsions, plastic scintillators and Micromegas. The latter, being gaseous detectors, had to be located outside the pyramid to comply with safety regulations. Publishing in Nature Physics, all three teams reported a statistically significant excess in muon flux originating from the same 3D position (see “Khufu’s pyramid” figure).

Khufu’s pyramid

This year, based on a larger data sample, the ScanPyramids team concluded that this “Big Void” is a horizontal corridor about 9 m long with a transverse section of around 2 × 2 m². Confidence in the solidity of these conclusions was provided by a cross-check measurement with ground-penetrating radar and ultrasound, performed by Egyptian and German experts, which had been taking data since 2020 and was published simultaneously. The consistency of the data from muography and conventional methods motivated a visual inspection via an endoscope, confirming the claim. While the purpose of this unexpected feature of the pyramid is not yet known, the work represents the first characterisation of the position and dimensions of a void detected by cosmic-ray muons with a sensitivity of a few centimetres.

New projects exploring the Giza pyramids are now sprouting. A particularly ambitious project by researchers in Egypt, the US and the UK – Exploring the Great Pyramid (EGP) – uses movable large-area detectors to perform precise 3D tomography of the pyramid. Thanks to its larger surface and some methodological improvements, EGP aims to surpass ScanPyramids’ sensitivity after two years of data taking. Although still at the simulation studies stage, the detector technology – plastic scintillator bars with a triangular section and encapsulated wavelength shifter fibres – is already being used by the ongoing MURAVES muography project to scan the interior of the Vesuvius volcano in Italy. The project will also profit from synergy with the upcoming Mu2e experiment at Fermilab, where the very same detectors are used. Finally, proponents of the ScIDEP (Scintillator Imaging Detector for the Egyptian Pyramids) experiment from Egypt, the US and Belgium are giving Khafre’s pyramid a second look, using a high-resolution scintillator-based detector to take data from the same location as Alvarez’s spark chambers.

Muography data in the Xi’an city walls

Pyramids easily make headlines, but there is no scarcity of monuments around the world where muography can play a role. Recently, a Russian team used emulsion detectors to explore the Svyato–Troitsky Danilov Monastery, the main buildings of which have undergone several renovations across the centuries but with associated documentation lost. The results of their survey, published in 2022, include evidence for two unknown rooms and areas of significantly higher density (possible walls) in the immured parts of certain vaults, and of underground voids speculated to be ancient crypts or air ducts. Muography is also being used to preserve buildings of historical importance. The defensive wall structures of Xi’an, one of the Four Great Ancient Capitals of China, suffered serious damage due to heavy rainfall, but repairs in the 1980s were insufficiently documented, motivating non-destructive techniques to assess their internal status. Taking data from six different locations using a compact and portable muon detector to extract a 3D density map of a rampart, a Chinese team led by Lanzhou University has recently reported density anomalies that potentially pose safety hazards (see “Falling walls” figure). 

The many flavours of muography

All the examples described so far are based on the same basic principle as Alvarez’s experiment: the attenuation of the muon flux through dense matter. But there are other ways to utilise muons as probes. For example, it is possible to exploit their deflection in matter due to Coulomb scattering from nuclei, offering the possibility of elemental discrimination. Such muon scattering tomography (MST) has been proposed to help preserve the Santa Maria del Fiore cathedral in Florence, built between 1420 and 1436 by Filippo Brunelleschi, the iconic dome of which is cracking under its own weight. Accurate modelling is needed to guide reinforcement efforts, but uncertainties exist on the internal structure of the walls. According to some experts, Brunelleschi might have inserted iron chains inside the masonry of the dome to stabilise it; however, no conclusive evidence has been obtained with traditional remote-sensing methods. Searching for iron within masonry is therefore the goal of the proposed experiment (see “Preserving a masterpiece” figure), for which a proof-of-principle test on a mock-up wall has already been carried out in Los Alamos.
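The discriminating power of MST can be sketched with the Highland formula for the RMS multiple-scattering angle, which grows with the thickness traversed in units of the material’s radiation length X₀; dense, high-Z materials such as iron have a much shorter X₀ than masonry. The muon momentum and the masonry radiation length below are assumptions (the latter taken similar to standard rock):

```python
import math

# Highland formula for the RMS projected multiple-scattering angle of a
# singly charged particle of momentum p traversing a thickness x of material
# with radiation length X0:
#   theta0 = (13.6 MeV / (beta*c*p)) * sqrt(x/X0) * (1 + 0.038 ln(x/X0)).
# The iron radiation length is a standard value; the masonry value and the
# muon momentum are assumptions for illustration.

def theta0_mrad(p_mev: float, x_cm: float, x0_cm: float, beta: float = 1.0) -> float:
    """RMS projected scattering angle in milliradians."""
    ratio = x_cm / x0_cm
    return 13.6 / (beta * p_mev) * math.sqrt(ratio) * (1 + 0.038 * math.log(ratio)) * 1e3

P = 3000.0   # 3 GeV/c muon (typical cosmic-muon momentum, assumed)
print(f"10 cm masonry (X0 ~ 10 cm):  {theta0_mrad(P, 10, 10.0):.1f} mrad")
print(f"10 cm iron    (X0 = 1.76 cm): {theta0_mrad(P, 10, 1.76):.1f} mrad")
```

Iron scatters the same muons more than twice as strongly as the surrounding masonry in this sketch, which is the contrast an MST detector would look for when searching for Brunelleschi’s hypothesised chains.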

Beyond cultural heritage, muography has also been advocated as a powerful remote-sensing method for a variety of applications in the nuclear sector. It has been used, for example, to assess the damage and the impact of radioactivity in the Fukushima power plant, where four nuclear reactors were damaged in 2011. Absorption-based muography was applied to map density differences, such as the thickness of the walls, within the reactors, while MST was applied to locate the nuclear fuel. Muography, especially MST, has also allowed the investigation of other extreme systems, including blast furnaces and nuclear-waste barrels.

Santa Maria del Fiore cathedral

Volcanology is a further important application of muography, where it is used to discover empty magma chambers and voids. As muons are more strongly absorbed by thick and dense objects, such as the rock at the base of a volcano, the absorption provides key information about its inner structure. The density images created via muography can even be fed into machine-learning models to help predict eruptive patterns, and similar methods can be applied in glaciology, as has been done to estimate the topography of mountains hidden by overlying glaciers. Among these projects is Eiger-μ, designed to explore the mechanisms of glacial erosion.

Powerful partnership 

Muography creates bridges across the world between particle physics and cultural-heritage preservation. The ability to perform radiography of a large object from a distance or from pre-existing tunnels is very appealing in situations where invasive excavations are impossible, as is often the case in highly populated urban or severely constrained areas. Geophysical remote-sensing methods are already part of the archaeological toolkit, but in general they are expensive, have a limited resolution and demand strong model assumptions for interpreting the data. Muography is now gaining acceptance in the cultural-heritage preservation world because its data are intrinsically directional and can be easily interpreted in terms of density distributions.

From the pioneering work of Alvarez to the state-of-the-art systems available today, progress in muography has gone hand-in-hand with the development of detectors for particle physics. The ScanPyramids project, for example, uses micropattern gaseous detectors such as those developed within the CERN RD51 collaboration and nuclear emulsion detectors such as those of the OPERA neutrino experiment, while the upcoming EGP project will benefit from detector technologies for the Mu2e experiment at Fermilab. R&D for next-generation muography includes the development of scintillator-based muon detectors, resistive plate chambers, trackers based on multi-wire proportional chambers and more. There are proposals to use microstrip silicon detectors from the CMS experiment and Cherenkov telescopes inspired by the CTA astrophysics project, showing how R&D for fundamental physics continues to drive exotic applications in archaeology and cultural-heritage preservation.

ASAPP 2023

Advances in Space AstroParticle Physics: Frontier technologies for particle measurements in space

The ASAPP 2023 International Conference aims to review progress in the design, development, integration and testing of instrumentation for the measurement of particles and high-energy radiation in space. The deployment and operation of such novel instrumentation will pave the way for future astroparticle missions investigating fundamental physics and the cosmos (e.g. cosmic-ray physics, the search for dark matter, the matter–antimatter asymmetry and multimessenger astronomy); applications for monitoring the space radiation environment; and investigations of the impact of low-energy ionising particles on instrumentation, space weather and the Earth sciences.

This conference will be a unique opportunity on the international stage for direct discussion between different experimental communities, with the aim of achieving common targets and fostering synergies. The conference programme will consequently consist of plenary talks only, with large slots dedicated to questions and discussion, taking advantage of the conference venue and social events to encourage constructive exchanges between participants and to establish networks of expertise.

The event is planned to be held with in-person participation. No remote connection will be provided unless exceptional circumstances require it.

SCIENTIFIC PROGRAM (in brief)

The conference organizers invite the community to submit abstract proposals on the topics listed below (see the Scientific Program page for details):

  • Instrumentation and missions for direct high-energy cosmic ray measurements in space
  • Instrumentation and missions for indirect high-energy cosmic ray measurements in space
  • Instrumentation and missions for direct low-energy cosmic ray measurements in space
  • Instrumentation and missions for hard X-ray and γ-ray direct measurements in space
  • R&D of novel approaches and instruments for particle and high-energy radiation measurements in space, including (and not limited to):
    • Tracking detectors
    • Calorimetry detectors
    • Fast Time-of-Flight systems
    • Detectors for particle ID
    • High Temperature Superconducting magnets
    • FE and DAQ systems

Given the varied approaches that have consolidated in the current era of space observations, contributions targeting all types of space platform will be considered, from cubesats and nanosatellite constellations up to large space missions, including stratospheric balloon flights.

26TH INTERNATIONAL CONFERENCE ON COMPUTING IN HIGH ENERGY & NUCLEAR PHYSICS

The CHEP conferences address the computing, networking and software issues for the world’s leading data‐intensive science experiments that currently analyze hundreds of petabytes of data using worldwide computing resources. The Conference provides a unique opportunity for computing experts across Particle and Nuclear Physics to come together to learn from each other and typically attracts over 500 participants. The event features plenary sessions, parallel sessions, and poster presentations; it publishes peer-reviewed proceedings.

The focus of the conference evolves with time to highlight changing technologies and major scientific initiatives. Through the plenary sessions, related scientific and computing topics are presented to ensure a broad and thoughtful program that engages the community. This edition of the conference will place special emphasis on high-performance data organization, management, and access (DOMA), a topic of interest and relevance throughout the scientific community.

The nine parallel session tracks focus on specific topics and often have very animated discussions on the technical merits of various approaches. Birds of a feather sessions promote international communities of common interest.

The CHEP 2023 organizers are committed to fostering a supportive and diverse environment with opportunities for everyone. We welcome full participation from the whole community, and everybody in the field is encouraged to attend. Attendance of students at CHEP 2023 is strongly encouraged. A diversity event will be scheduled.

The CHEP conference location rotates between the Americas, Asia and Europe, with editions typically held eighteen months apart. The CHEP 2023 conference will be hosted by the Thomas Jefferson National Accelerator Facility (Jefferson Lab) at the newly renovated Norfolk Marriott Waterside hotel in Norfolk, Virginia.

The conference will be held from Monday, May 8, 2023, through Friday, May 12, 2023. For the most up-to-date travel guidelines, please reference https://www.cdc.gov/coronavirus/2019-ncov/travelers/international-travel-during-covid19.html.

A WLCG/HSF pre-conference workshop will be held on the prior weekend (May 6-7).

Muon4Future

The Muon4Future workshop aims to start a discussion that, for the first time, compares the results of muon-based experiments, involving both the experimental and theoretical communities. Such a comparison is indispensable today, since many of the discrepancies between the Standard Model and measurement are concentrated in the muon sector. The purpose of the workshop is not limited to examining experiments currently taking data, already approved or under construction; it also aims to discuss possible future proposals. The goal is to identify the most promising physics experiments and measurements that would allow further tests of the Standard Model and searches for new physics, comparing new ideas, relevant issues and related challenges.

The proceedings of the workshop will be published. A report containing the summary and findings of the discussions of each session, including Wednesday afternoon’s, will be prepared and published, with the session conveners as editors and the workshop participants as authors.

The Workshop will be held in person in Venice, at “Palazzo Franchetti” of the “Istituto Veneto di Scienze, Lettere ed Arti”. It will consist of plenary sessions, with talks given by invited speakers.

Participation in the workshop requires payment of a fee, and registration is mandatory via the relevant registration form on this site.
Before registering and making the payment, please visit the “Fee and Payment” page on this site for detailed instructions.

The Workshop is organized by INFN-Sezione di Padova with the support of the Physics and Astronomy Department of Padova University.

FCC Week 2023

The ninth edition of the Future Circular Collider (FCC) Conference will take place in London, United Kingdom from 5 to 9 June 2023. The meeting brings together the international scientific community pursuing a feasibility study for a visionary post-LHC research infrastructure at CERN and is organized with the support of the EU-funded H2020 FCCIS project.

Leading experts from academia and industry will review the recent progress en route to the completion of the feasibility study in 2025 and set the near-term goals for the coming years. The physics opportunities opened by the FCC integrated programme, as well as the status of key technology R&D programmes, will be discussed along with the technological opportunities on offer for building new collaborative projects. The meeting is an excellent opportunity to reinforce the bonds between the FCC collaborating institutes and to draft the work plans for the submission of the FCC mid-term review to the CERN Council later this year.

The FCC Week 2023 will follow the traditional layout of plenary and parallel sessions covering all aspects of the study: physics, experiments, machine design, technologies, infrastructures and civil engineering. Monday features a set of plenary keynote presentations by top-ranking international speakers from the worlds of science, industry and European affairs, offering an overview of the ongoing activities across all parts of the study and informing study members about the updated boundary conditions from placement studies, the latest machine parameters and progress in understanding the physics potential that the FCC integrated programme can offer during its lifetime. Parallel sessions will focus on specific areas. Satellite meetings for UK-related projects and for the governance bodies of the FCC study will be included in the programme that is being developed. Participation of industry is highly encouraged, as addressing the technological challenges of a new research infrastructure presents opportunities for co-innovation.

The work carried out in the framework of the FCC Feasibility Study will inform the next update of the European Strategy and benefit society in areas beyond particle physics. We strongly encourage submission of proposals for posters via Indico on the FCCW2023 site. Oral contributions are by invitation.

New superconducting technologies for the HL-LHC and beyond

The era of high-temperature superconductivity started in 1986 with the discovery, by IBM researchers Georg Bednorz and Alex Müller, of superconductivity in a lanthanum barium copper oxide. This discovery was revolutionary: not only did the new, brittle superconducting compound belong to the family of ceramic oxides, which are generally insulators, but it had the highest critical temperature ever recorded (up to 35 K, compared with about 18 K in conventional superconductors). In the following years, scientists discovered other cuprate superconductors (bismuth–strontium–copper oxide and yttrium–barium–copper oxide) and achieved superconductivity at temperatures above 77 K, the boiling point of liquid nitrogen (see “Heat is rising” figure). The possibility of operating superconducting systems with inexpensive, abundant and inert liquid nitrogen generated tremendous enthusiasm in the superconducting community.

Several applications of high-temperature superconducting materials with a potentially high impact on society were studied. Among them, superconducting transmission lines were identified as an innovative and effective solution for bulk power transmission. The unique advantages of superconducting transmission are high capacity, very compact volume and low losses. This enables the sustainable transfer of up to tens of GW of power at low and medium voltages in narrow channels, together with energy savings. Demonstrators have been built worldwide in conjunction with industry and utility companies, some of which have successfully operated in national electricity grids. However, widespread adoption of the technology has been hindered by the cost of cuprate superconductors. 

Critical temperature of superconductors

In particle physics, superconducting magnets allow high-energy beams to circulate in colliders and provide stronger fields for detectors to be able to handle higher collision energies. The LHC is the largest superconducting machine ever built, and the first to also employ high-temperature superconductors at scale. Realising its high-luminosity upgrade and possible future colliders is driving the use of next-generation superconducting materials, with applications stretching far beyond fundamental research.

High-temperature superconductivity (HTS) was discovered at the time when the conceptual study for the LHC was ongoing. While the new materials were still in a development phase, the potential of HTS for use in electrical transmission was immediately recognised. The powering of the LHC magnets (which are based on the conventional superconductor niobium titanium, cooled by superfluid helium) requires the transfer of about 3.4 MA of current, generated at room temperature, in and out of the cryogenic environment. This is done via devices called current leads, of which more than 3000 units are installed at different underground locations around the LHC’s circumference. The conventional current-lead design, based on vapour-cooled metallic conductors, imposes a lower limit (about 1.1 W/kA) on the heat in-leak into the liquid helium. The adoption of the HTS BSCCO 2223 (bismuth–strontium–calcium copper oxide ceramic) tape – operated in the LHC current leads in the temperature range 4.5 to 50 K – enabled thermal conduction and ohmic dissipation to be disentangled. Successful multi-disciplinary R&D followed by prototyping at CERN and then industrialisation, with series production of the approximately 1100 LHC HTS current leads starting in 2004, resulted in both capital and operational savings (avoiding one extra cryoplant and an economy of about 5000 l/h of liquid helium). It also encouraged wider adoption of BSCCO 2223 current-lead technology, for instance in the magnet circuits for the ITER tokamak, which benefited from a collaboration agreement with CERN on the development and design of HTS current leads.
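
The savings quoted above can be cross-checked with back-of-envelope arithmetic. The current and heat in-leak figures are from the text; the latent heat of liquid helium (about 2.6 kJ per litre) is an assumed textbook-order value:

```python
# Heat load if all ~3.4 MA were carried by conventional vapour-cooled leads
total_current_kA = 3400.0  # ~3.4 MA in and out of the cryogenic environment
heat_per_kA_W = 1.1        # lower limit for conventional metallic leads
heat_load_W = total_current_kA * heat_per_kA_W  # ~3.7 kW at liquid-helium temperature

# Equivalent liquid-helium boil-off (latent heat ~2.6 kJ per litre, assumed)
latent_heat_J_per_l = 2.6e3
boiloff_l_per_h = heat_load_W / latent_heat_J_per_l * 3600.0
print(f"{heat_load_W:.0f} W -> ~{boiloff_l_per_h:.0f} l/h of liquid helium")
```

The result, roughly 5000 l/h, is of the same order as the economy quoted for the HTS leads.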

MgB2 links at the HL-LHC 

The discovery of superconductivity in magnesium diboride (MgB2) in 2001 generated new enthusiasm for HTS applications. This material, classified as a medium-temperature superconductor, has remarkable features: it has a critical temperature (39 K) some 30 K higher than that of niobium titanium, a high current density (demonstrated to date at low and medium magnetic fields) and, crucially, it can be industrially produced as round multi-filamentary wire in long (kilometre) lengths. These characteristics, along with a cost that is intrinsically lower than that of other available HTS materials, make it a promising candidate for electrical applications.

At the LHC the current leads are located in the eight straight sections. For the high-luminosity upgrade of the LHC (HL-LHC), scheduled to be operational in 2029, the decision was taken to locate the power converters in new, radiation-free underground technical galleries above the LHC tunnel. The distance between the power converters and the HL-LHC magnets spans about 100 m and includes a vertical path via an 8 m shaft connecting the technical galleries and the LHC tunnel. The large current to be transferred across such a distance, the need for compactness, and the search for energy efficiency and potential savings led to the selection of HTS transmission as the enabling technology.

Complex cabling

The electrical connection, at cryogenic temperature, between the HL-LHC current leads and the magnets is performed via superconducting links based on MgB2 technology. MgB2 wire is assembled in cables with different layouts to transfer currents ranging from 0.6 kA to 18 kA. The individual cables are then arranged in a compact assembly that constitutes the final cable feeding the magnet circuits of either the HL-LHC inner triplets (a series of quadrupole magnets that provides the final focusing of the proton beams before collision in ATLAS and CMS) or the HL-LHC matching sections (which match the optics in the arcs to those at the entrance of the final-focus quadrupoles), and the final cable is incorporated in a flexible cryostat with an external diameter of up to 220 mm. The eight HL-LHC superconducting links are about 100 m long and transfer currents of about 120 kA for the triplets and 50 kA for the matching sections at temperatures up to 25 K, with cryogenic cooling performed with helium gas.

The R&D programme for the HL-LHC superconducting links started in around 2010 with the evaluation of the MgB2 conductor and the development, with industry, of a round wire with mechanical properties enabling cabling after reaction. Brittle superconductors, such as Nb3Sn – used in the HL-LHC quadrupoles and also under study for future high-field magnets – need to be reacted into the superconducting phase via heat treatments, at high temperatures, performed after their assembly in the final configuration. In other words, those conductors are not superconducting until cabling and winding have been performed. When the R&D programme was initiated, industrial MgB2 conductor existed in the form of multi-filamentary tape, which was successfully used by ASG Superconductors in industrial open MRI systems for transporting currents of a few hundred amperes. The requirement for the HL-LHC to transfer current to multiple circuits for a total of up to 120 kA in a compact configuration, with multiple twisting and transposition steps necessary to provide uniform current distribution in both the wires and cables, called for the development of an optimised multi-filamentary round wire. 

Carried out in conjunction with ASG Superconductors, this development led to the introduction of thin niobium barriers around the MgB2 superconducting filaments to separate MgB2 from the surrounding nickel and avoid the formation of brittle MgB2–Ni reaction layers that compromise electro-mechanical performance; the adoption of higher purity boron powder to increase current capability; the optimisation in the fraction of Monel (a nickel-copper alloy used as the main constituent of the wire) in the 1 mm-diameter wire to improve mechanical properties; the minimisation of filament size (about 55 µm) and twist pitch (about 100 mm) for the benefit of electro-mechanical properties; the addition of a copper stabiliser around the Monel matrix; and the coating of tin–silver onto the copper to ensure the surface quality of the wire and a controlled electrical resistance among wires (inter-strand resistance) when assembled into cables. After successive implementation and in-depth experimental validation of all improvements, a robust 1 mm-diameter MgB2 wire with required electro-mechanical characteristics was produced. 

REBCO tape and cables

The next step was to manufacture long unit lengths of MgB2 wire via larger billets (the assembled composite rods that are then extruded and drawn down in a long wire). The target unit length of several kilometres was reached in 2018 when series procurement of the wire was launched. In parallel, different cable layouts were developed and validated at CERN. This included round MgB2 cables in a co-axial configuration rated for 3 kA and for 18 kA at 25 K (see “Complex cabling” figure). While the prototypes made at CERN were 20 to 30 m long, the cable layout incorporated, from the outset, characteristics to enable production via industrial cabling machines of the type used for conventional cables. Splice techniques as well as detection and protection aspects were addressed in parallel with wire and cable development. Both technologies are strongly dependent on the characteristics of the superconductor, and are of key importance for the reliability of the final system. 

The first qualification at 24 K of a 20 kA MgB2 cable produced at CERN, comprising two 20 m lengths connected together, took place in 2014. This followed the qualification at CERN of short-model cables and other technological aspects, as well as the construction of a dedicated test station enabling the measurement of long cables operated at higher temperatures in a forced flow of helium gas. The cables were then industrially produced at TRATOS Cavi via a contract with ICAS, in a close and fruitful collaboration that, while operating heavy industrial equipment, met the requirements identified during the R&D phase. The complexity of the final cables required a multi-step process that used different cabling, braiding and electrically insulating lines, and the implementation of a corresponding quality-assurance programme. The first industrial cables, which were 60 m long, were successfully qualified at CERN in 2018. Final prototype cables of the type needed for the HL-LHC (for both the triplets and matching sections) were validated at CERN in 2020, when series production of the final cables was launched. As of today, the full series of about 1450 km of MgB2 wire – the first large-scale production of this material – and five of the eight final MgB2 cables needed for the HL-LHC have been produced.

Superconducting wire and cables are the core of a superconducting system, but the system itself requires a global optimisation, which is achieved via an integrated design. Following this approach, the challenge was to investigate and develop, in industry, long and flexible cryostats for the superconducting links with enhanced cryogenic performance. The goal was to achieve a low static heat load (< 1.5 W/m) into the cryogenic volume of the superconducting cables while adopting a design – a two-wall cryostat without intermediate thermal screen – that simplifies the cooling of the system, improves the mechanical flexibility of the links and eases handling during transport and installation. This development, which ran in parallel with the wire and cable activities, led to the desired results and, after an extensive test campaign at CERN, the developed technology was adopted. Series production of these cryostats is taking place at Cryoworld in the Netherlands.

The optimised system minimises the cryogenic cost for the cooling such that a superconducting link transfers – from the tunnel to the technical galleries – just enough helium gas to cool the resistive section of the current leads and brings it to the temperature (about 20 K) for which the leads are optimised. In other words, the superconducting link does not add cryogenic cost to the refrigeration of the system. The links, which are rated for currents up to 120 kA, are sufficiently flexible to be transported, as with conventional power cables, on drums about 4 m in diameter and can be manually pulled, without major tooling, during installation (see “kA currents” image). The challenge of dealing with the thermal contraction of the superconducting links, which shrink by about 0.5 m when cooled down to cryogenic temperature, was also addressed. An innovative solution, which takes advantage of bends and is compatible with the fixed position of the current lead cryostat, was validated with prototype tests.
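
The quoted shrinkage is consistent with a one-line estimate, assuming an integrated contraction of about 0.4% between room temperature and cryogenic temperature (a typical order of magnitude for stainless-steel-type structures; the exact coefficient is an assumption here):

```python
link_length_m = 100.0
integrated_contraction = 0.004  # ~0.4% from ~300 K to ~20 K, assumed value
shrinkage_m = link_length_m * integrated_contraction
print(f"expected shrinkage: ~{shrinkage_m:.1f} m")  # of the order of the quoted ~0.5 m
```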

Novel HTS leads

Whereas MgB2 cables transfer high DC currents from the 4.5 K liquid helium environment in the LHC tunnel to about 20 K in the HL-LHC new underground galleries, a different superconducting material is required to transfer the current from 20 to 50 K, where the resistive part of the current leads makes the bridge to room temperature. To cope with the system requirements, novel HTS current leads based on REBCO (rare-earth barium copper oxide) HTS superconducting tape – a material still in a development phase at the time of the LHC study – have been conceived, constructed and qualified to perform this task (see “Bridging the gap” image). Compact, round REBCO cables ensure, across a short (few-metre-long) length, the electrical transfer from the MgB2 cables to 50 K, after which the resistive part of the current leads finally brings the current to room temperature. In view of the complexity of dealing with the REBCO conductor, the corresponding R&D was done at CERN, where a complex dedicated cabling machine was also constructed.

Cable assembly

While REBCO tape is procured from industry, the challenges encountered during the development of the cables were many. Specific issues associated with the tape conductor, for example electrical resistance internal to the tape and the dependence of electrical properties on temperature and cycles applied during soldering, were identified and solved with the tape manufacturers. A conservative approach imposing zero critical current degradation of the tape after cabling was implemented. The lessons learnt from this development are also instrumental for future projects employing REBCO conductors, including the development of high-field REBCO coils for future accelerator magnets. 

The series components of the HL-LHC cold-powering systems (superconducting links with corresponding terminations) are now in production, with the aim of having all systems available and qualified in 2025 for installation in the LHC underground areas during the following years. Series production and industrialisation were preceded by the completion of R&D and technological validations at CERN. Important milestones have been the test of a sub-scale 18 kA superconducting link connected to a pair of novel REBCO current leads in 2019, and the test of full-cross-section, 60 m-long superconducting lines of the type needed for the LHC triplets and for the matching sections, both in 2020.

The complex terminations of the superconducting links involve two types of cryostat that contain, at the 20 K side, the HTS current leads and the splices between REBCO and MgB2 cables and, at the 4.2 K side, the splices between the niobium titanium and the MgB2 cables. A specific development in the design was to increase compactness and enable the connection of the cryostat with the current leads to the superconducting link at the surface, prior to installation in the HL-LHC underground areas (see “End of the line” figure). The series production of the two cryostat terminations is taking place via collaboration agreements with the University of Southampton and Uppsala University.     

The displacement of the current leads via the adoption of superconducting links brings a number of advantages. These include freeing precious space in the main collider ring, which becomes available for other accelerator equipment, and the ability to locate powering equipment and associated electronics in radiation-free areas. The latter relaxes radiation-hardness requirements for the hardware and eases access for personnel to carry out the various interventions required during accelerator operations. 

Cooling with low-density helium gas also makes electrical transfer across long vertical distances feasible. The ability to transfer high currents from underground tunnels to surface buildings – as initially studied for the HL-LHC – is therefore of interest for future machines, such as the proposed Future Circular Collider at CERN. Flexible superconducting links can also be applied to “push–pull” arrangements of detectors at linear colliders such as the proposed CLIC and ILC, where the adoption of flexible powering lines can simplify and reduce the time for the exchange of experiments sharing the same interaction region.

An enabling technology

Going beyond fundamental research in physics, superconductivity is an enabling technology for the transfer of gigawatts of power across long distances. The main benefits, in addition to incomparably higher power transmission, are small size, low total electrical losses, minimised environmental impact and more sustainable transmission. HTS offers the possibility of replacing resistive high-voltage overhead lines, operated across thousands of kilometres at voltages reaching about 1000 kV, with lower voltage lines, laid underground with reduced footprints.

Cryostat termination

Long-distance power transmission using hydrogen-cooled MgB2 superconducting links, potentially associated with renewable energy sources, has been identified as one of the leading routes towards a future sustainable energy system. Since hydrogen is liquid at 20 K (the temperature at which MgB2 is superconducting), large amounts can be stored and used as a coolant for superconducting lines, acting at the same time as the energy vector and the cryogen. In this direction, CERN participated – at a very early stage of the HL-LHC superconducting links development – in a project launched by Carlo Rubbia as scientific director of the Institute for Advanced Sustainability Studies (IASS) in Potsdam. Around 10 years ago, joint CERN–IASS research culminated in the record demonstration of the first 20 kA MgB2 transmission line operated at liquid-hydrogen temperature. This activity continued with a European initiative called BestPaths, which demonstrated a monopole MgB2 cable system operated in helium gas at 20 K. This system was qualified in industry for 320 kV operation and at 10 kA at CERN, proving 3.2 GW power-transmission capability. The initiative involved European industry and France’s transmission system operator. In Italy, the INFN has recently launched a project called IRIS based on similar technology (see CERN Courier January/February 2023 p9).
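
The 3.2 GW transmission figure follows directly from P = VI with the quoted qualification values:

```python
voltage_V = 320e3            # qualified for 320 kV operation
current_A = 10e3             # 10 kA demonstrated at CERN
power_GW = voltage_V * current_A / 1e9
print(f"{power_GW:.1f} GW")  # 3.2 GW
```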

In addition to transferring power across long distances with low losses and minimal environmental impact, the development of high-performance, low-cost, sustainable and environmentally friendly energy storage and production systems is a key challenge for society. The use of hydrogen can diversify energy sources, as it significantly reduces greenhouse-gas emissions and environmental pollution during energy conversion. In aviation, alternative propulsion systems are being studied to reduce CO2 emissions and move towards zero-emission flight. Scaling up electric propulsion to larger aircraft is a major challenge. Superconducting technologies are a promising solution, as they can increase the power density in the propulsion chain while significantly lowering the mass of the electrical distribution system. In this context, a collaboration agreement has recently been launched between CERN and Airbus UpNext, and the construction of a demonstrator of superconducting distribution in aircraft called SCALE (Super-Conductor for Aviation with Low Emissions), which uses the HL-LHC superconducting-link technology, has begun at CERN.

The experience CERN has developed in superconducting-link technology is also of interest to large data centres, with a collaboration agreement between CERN and Meta under discussion. The possibility of locating energy equipment remotely from servers, of efficiently transferring large amounts of power in a compact volume, and of meeting sustainability goals by reducing carbon footprints is motivating a global re-evaluation of conventional systems in light of the potential of superconducting transmission.

Such applications demonstrate the virtuous circle between fundamental and applied research. The requirements of fundamental exploration in particle physics research have led to the development of increasingly powerful and sophisticated accelerators. In this endeavour, scientists and engineers engage in developments initially conceived to address specific challenges. This often requires a multi-disciplinary approach and collaboration with industry to transform prototypes into mature technology ready for large-scale application. Accelerator technology is a key driver of innovation that may also have a wider impact on society. The superconducting-link system for the HL-LHC project is a shining example.

Deep learning for safer driving

How quickly can a computer make sense of what it sees without losing accuracy? And to what extent can AI tasks be performed on hardware with limited computing resources? Aiming to answer these and other questions, car-safety software company Zenseact, founded by Volvo Cars, sought out CERN’s unique capabilities in real-time data analysis to investigate applications of machine learning to autonomous driving. 

In the future, self-driving cars are expected to considerably reduce the number of road-accident fatalities. To advance developments, in 2019 CERN and Zenseact began a three-year project to research machine-learning models that could enable self-driving cars to make better decisions faster. Carried out in an open-source software environment, the project’s focus was “computer vision” – an AI discipline dealing with how computers interpret the visual world and then automate actions based on that understanding.

“Deep learning has strongly reshaped computer vision in the last decade, and the accuracy of image-recognition applications is now at unprecedented levels. But the results of our research show that there’s still room for improvement when it comes to running the deep-learning algorithms faster and being more energy-efficient on resource-limited on-device hardware,” said Christoffer Petersson, research lead at Zenseact. “Simply put, machine-learning techniques might help drive faster decision-making in autonomous cars.” 

The need to react fast and make quick decisions imposes strict runtime requirements on the neural networks that run on embedded hardware in an autonomous vehicle. By compressing the neural networks, for example using fewer parameters and bits, the algorithms can be executed faster and use less energy. For this task, the CERN–Zenseact team chose field-programmable gate arrays (FPGAs) as the hardware benchmark. Used at CERN for many years, especially for trigger readout electronics in the large LHC experiments, FPGAs are configurable integrated circuits that can execute complex decision-making algorithms in periods of microseconds. The main result of the FPGA experiment, says Petersson, was a practical demonstration that computer-vision tasks for automotive applications can be performed with high accuracy and short latency, even on a processing unit with limited computational resources. “The project clearly opens up for future directions of research. The developed workflows could be applied to many industries.”
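The compression step described above can be made concrete with a minimal sketch of post-training weight quantization, one common way to use "fewer parameters and bits". This is an illustrative example only, not the project's actual toolchain: the layer size, scale scheme and helper names are assumptions.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float32 weights to int8.

    A single scale factor maps the largest absolute weight to 127, so each
    weight is stored in 1 byte instead of 4.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# A hypothetical 256x256 dense layer with small random weights
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x memory reduction: int8 uses 1 byte per weight, float32 uses 4
print(w.nbytes // q.nbytes)  # → 4
# rounding error is bounded by half the quantization step
print(float(np.max(np.abs(w - w_hat))) <= 0.5 * scale)
```

On an FPGA the benefit is larger still: fixed-point arithmetic at reduced precision frees logic resources and shortens the critical path, which is what enables the microsecond-scale latencies mentioned above.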

The compression techniques in FPGAs elucidated by this project could also have a significant effect on “edge” computing, explains Maurizio Pierini of CERN: “Besides improving the trigger systems of ATLAS and CMS, future development of this research area could be used for on-site computation tasks, such as on portable devices, satellites, drones and obviously vehicles.”

CLEAR highlights and goals

Particle accelerators have revolutionised our understanding of nature at the smallest scales, and continue to do so with facilities such as the LHC at CERN. Surprisingly, however, the number of accelerators used for fundamental research represents a mere fraction of the 50,000 or so accelerators currently in operation worldwide. Around two thirds of these are employed in industry, for example in chip manufacturing, while the rest are used for medical purposes, in particular radiotherapy. While many of these devices are available “off-the-shelf”, accelerator R&D in particle physics remains the principal driver of innovative, next-generation accelerators for applications further afield.

The CERN Linear Electron Accelerator for Research (CLEAR) is a prominent example. Launched in August 2017 (CERN Courier November 2017 p8), CLEAR is a user facility developed from the former CTF3 project, which tested technologies for the Compact Linear Collider (CLIC) – a proposed e+e– collider at CERN that would follow the LHC. During the past five years, beams with a wide range of parameters have been provided to groups from more than 30 institutions across more than 10 nations.

CLEAR was proposed as a response to the low availability of test-beam facilities in Europe. In particular, there was very little time available to users on accelerators with electron beams at energies of a few hundred MeV, as these tend to be used in dedicated X-ray light-source and other specialist facilities. CLEAR therefore serves as a unique facility for R&D towards a wide range of accelerator-based technologies in this energy range. Independent of CERN’s other accelerator installations, CLEAR has been able to provide beams for around 35 weeks per year since 2018, including during long shutdowns, and even maintained successful operation during the COVID-19 pandemic. 

Flexible physics

As a relatively small facility, CLEAR operates in a flexible fashion. Operators can vary the range of beams available with relative ease by tailoring many different parameters, such as the bunch charge, length and energy, for each user. There is regular weekly access to the machine and, thanks to the low levels of radioactivity, it is possible to gain access to the facility several times per day to adjust experimental setups if needed. This flexibility, along with CLEAR’s location at the heart of CERN, has attracted an eager stream of users from day one.

CLEAR has attracted an eager stream of users from day one

Among the first was a team from the European Space Agency working in collaboration with the Radiation to Electronics (R2E) group at CERN. The users irradiated electronic components for the JUICE (Jupiter Icy Moons Explorer) mission with 200 MeV electron beams. Their experiments demonstrated that high-energy electrons trapped in the strong magnetic fields around Jupiter could induce faults, so-called single event upsets, in the craft’s electronics, leading to the development and validation of components with the appropriate radiation-hardness. The initial experiment has been built upon by the R2E group to investigate the effect of electron beams on electronics.

Inspecting beamline equipment

As the daughter of CTF3, CLEAR has continued to be used to test the key technological developments necessary for CLIC. There are two prototype CLIC accelerating structures in the facility’s beamline. Originally installed to test CLIC’s unique two-beam acceleration scheme, the structures have been used to study short-range “wakefield kicks” that can deflect the beam away from the planned path and reduce the luminosity of a linear collider. Additionally, prototypes of the high-resolution cavity beam position monitors, which are vital to measure and control the CLIC beam, have been tested, showing promising initial results.

One of the main activities at CLEAR concerns the development and testing of beam instrumentation. Here, the flexibility and the large beam-parameter range provided by the facility, together with easy access, especially in its dedicated in-air test station, have proven to be very effective. CLEAR covers all phases of the development of novel beam diagnostics devices, from the initial exploration of a concept or physical mechanism to the first prototyping and to the testing of the final instrument adapted for use in an operational accelerator. Examples are beam-loss monitors based on optical fibres, and beam-position and bunch-length monitors based on Cherenkov diffraction radiation under development by the beam instrumentation group at CERN.

Advanced accelerator R&D

There is a strong collaboration between CLEAR and the Advanced Wakefield Experiment (AWAKE), a facility at CERN used to investigate proton-driven plasma wakefield acceleration. In this scheme, which promises higher acceleration gradients than conventional radio-frequency accelerator technology and thus more compact accelerators, charged particles such as electrons are accelerated by forcing them to “surf” atop a longitudinal plasma wave that contains regions of positive and negative charges. Several beam diagnostics for the AWAKE beamline were first tested and optimised at CLEAR. A second phase of the AWAKE project, presently being commissioned for operation in 2026, requires a new source of electron beams to provide shorter, higher quality beams. Before its final installation in AWAKE, it is proposed to use this source to increase the range of beam parameters available at CLEAR.

Installation of novel microbeam position monitors

Further research into compact, plasma-based accelerators has been undertaken at CLEAR thanks to the installation of an active plasma lens on the beamline. Such lenses use gases ionised by very high electric currents to provide focusing fields many orders of magnitude stronger than can be achieved with conventional magnets. Previous work on active plasma lenses had shown that the focusing force was nonlinear and reduced the beam quality. However, experiments performed at CLEAR showed, for the first time, that by simply swapping the commonly used helium gas for a heavier gas like argon, a linear magnetic field could be produced and focusing could be achieved without reducing the beam quality (CERN Courier December 2018 p8). 
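The linearity claim can be made concrete with a textbook sketch of the ideal case, assuming the discharge current is distributed uniformly over the plasma column:

```latex
% Azimuthal field inside a plasma column of radius R carrying a
% uniform discharge current I (Ampere's law):
B_\varphi(r) = \frac{\mu_0 I}{2\pi R^2}\, r , \qquad r \le R ,
```

i.e. a field that grows linearly with radius $r$, equivalent to a quadrupole-like focusing gradient $g = \mu_0 I / (2\pi R^2)$ that acts equally across the beam. When the current density is not uniform, as happens through nonuniform heating in light gases such as helium, $B_\varphi$ becomes nonlinear and degrades the beam quality; the heavier argon keeps the current density closer to this ideal.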

Plasma acceleration is not the only novel accelerator technology that has been studied at CLEAR over the past five years. The significant potential of using accelerators to produce intense beams of radiation in the THz frequency range has also been demonstrated. Such light, on the boundary between microwaves and infrared, is difficult to produce, but has a variety of uses ranging from imaging and security scanning to the control of materials at the quantum level. Compact linear-accelerator-based sources of THz light could be advantageous over other sources as they tend to produce significantly higher photon fluxes. By using long trains of ultrashort, sub-ps bunches, it was shown at CLEAR that THz radiation can be generated through coherent transition radiation in thin metal foils, through coherent Cherenkov radiation, and through coherent “Smith–Purcell” radiation in periodic gratings. The peak power emitted in experiments at CLEAR was around 0.1 MW. However, simulations have shown that with relatively minor reductions in the length of the electron bunches it will be possible to generate a peak power of more than 100 MW. 
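The strong sensitivity to bunch length reflects the standard scaling of coherent emission. For a bunch of $N$ electrons with a Gaussian longitudinal profile of rms duration $\sigma_t$ (a simplifying assumption; real bunch shapes differ), the radiated power at frequency $\omega$ is

```latex
P(\omega) \;\simeq\; N^2\, |F(\omega)|^2\, p_1(\omega) ,
\qquad
F(\omega) = e^{-\omega^2 \sigma_t^2 / 2} ,
```

where $p_1(\omega)$ is the power radiated by a single electron and $F(\omega)$ is the bunch form factor. At frequencies where the bunch is shorter than the wavelength, $|F|\approx 1$ and the emission is enhanced by the factor $N^2$; shortening $\sigma_t$ pushes this coherent enhancement to higher THz frequencies, which is why modest reductions in bunch length translate into orders of magnitude more peak power.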

FLASH forward

Advances in high-gradient accelerator technology for projects like CLIC (CERN Courier April 2018 p32) have led to a surge of interest in using electron beams with energies between 50 and 250 MeV to perform radiotherapy, one of the key tools in the treatment of cancer. The use of so-called very-high-energy electron (VHEE) beams could provide advantages over existing treatment types. Of particular interest is using VHEE beams to perform radiotherapy at ultra-high dose rates, which could potentially generate the so-called FLASH effect in patients. Here, tumour cells are killed while sparing the surrounding healthy tissues, with the potential to significantly improve treatment outcomes. 

FLASH radiotherapy

So far, CLEAR has been the only facility in the world studying VHEE radiotherapy and FLASH with 200 MeV electron beams. As such, there has been a large increase in beam-time requests in this field. Initial tests performed by researchers from the University of Manchester demonstrated that, unlike other types of radiotherapy beams, VHEE beams are relatively insensitive to inhomogeneities in tissue that typically result in less targeted treatment. The team, along with another from the University of Strathclyde, also looked at how focused VHEE beams could be used to further target doses inside a patient by mimicking the Bragg peak seen in proton radiotherapy. Experiments with the University Hospital of Lausanne to determine whether the FLASH effect can be induced with VHEE beams are ongoing (CERN Courier January/February 2023 p8). 

Even if the FLASH effect can be produced in the lab, there are issues that need to be overcome to bring it to the clinic. Chief among them is the development of novel dosimetric methods. As CLEAR and other facilities have shown, conventional real-time dosimetric methods do not work at ultra-high dose rates. Ionisation chambers, the main pillar of conventional radiotherapy dosimetry, were shown to exhibit strongly nonlinear behaviour at such dose rates, with recombination times that were too long. CLEAR has therefore been involved in testing modified ionisation chambers, as well as other more innovative detector technologies from the world of particle physics, for use in a future FLASH facility. 

High impact 

As well as being a test-bed for new technologies and experiments, CLEAR has provided an excellent training infrastructure for the next generation of physicists and engineers. Numerous master’s and doctoral students have spent a large portion of their time performing experiments at CLEAR, either as one-time users or long-term collaborators. Additionally, CLEAR is used for practical accelerator training for the Joint Universities Accelerator School.

Numerous master’s and doctoral students have spent time performing experiments at CLEAR

As in all aspects of life, the COVID-19 pandemic placed significant strain on the facility. The planned beam schedule for 2020 and beyond had to be scrapped as beam operation was halted during the first lockdown and external users were barred from travelling. However, through the hard work of the team, CLEAR was able to recover and run at almost full capacity within weeks. Several internal CERN users, many of whom were unable to travel to external facilities, were able to use CLEAR during this period to continue their research. Furthermore, CLEAR was involved in CERN’s own response to the pandemic by undertaking sterilisation tests of personal protective equipment.

Test-beam facilities such as CLEAR are vital for developing future physics technology, and the impact that such a small facility has been able to produce in just a few years is impressive. A variety of different experiments from several different fields of research have been performed, with many more that are not mentioned in this article. Unfortunately for the world of high-energy physics, the aforementioned shortage of accelerator test facilities has not gone away. CLEAR will continue to play its role in helping provide test beams, with operations due to continue until at least 2025 and perhaps long after. There is an exciting physics programme lined up for the next few years, featuring many experiments similar to those that have already been performed but also many that are new, to ensure that accelerator technology continues to benefit both science and society.
