First beam at SLAC plasma facility

First beam

FACET-II, a new facility for accelerator research at SLAC National Accelerator Laboratory in California, has produced its first electrons. FACET-II is an upgrade to the Facility for Advanced Accelerator Experimental Tests (FACET), which operated from 2011 to 2016, and will produce high-quality electron beams to develop plasma-wakefield acceleration techniques. The $26 million project, recently approved by the US Department of Energy (DOE), will also operate as a federally sponsored user facility for advanced accelerator research that is open to scientists on a competitive, peer-reviewed basis.

“As a strategically important national user facility, FACET-II will allow us to explore the feasibility and applications of plasma-driven accelerator technology,” said James Siegrist of the DOE Office of Science. “We’re looking forward to seeing the groundbreaking science in this area that FACET-II promises, with the potential for a significant reduction in the size and cost of future accelerators, including free-electron lasers and medical accelerators.”

Whereas conventional accelerators impart energy to charged particles via radiofrequency fields inside metal structures, plasma-wakefield accelerators send a bunch of very energetic particles through a hot ionised gas to create a plasma wake on which a trailing bunch can “surf” and gain energy. This yields much higher acceleration gradients, and therefore potentially much smaller machines, but several crucial steps are required before plasma accelerators can become a reality. This is where FACET-II comes in, offering higher-quality beams than FACET, explains project scientist Mark Hogan. “We need to show that we’re able to preserve the quality of the beam as it passes through plasma. High-quality beams are an absolute requirement for future applications in particle and X-ray laser physics.”
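
To put “much higher” into numbers, a standard back-of-the-envelope estimate (a textbook rule of thumb, not a FACET-II design figure) is the cold, non-relativistic wave-breaking field, which depends only on the plasma electron density:

```latex
% Cold, non-relativistic wave-breaking field: a standard estimate of the
% maximum accelerating gradient a plasma of electron density n_0 can sustain
% (an illustrative rule of thumb, not a quoted FACET-II parameter):
E_0 \;=\; \frac{m_e\,c\,\omega_p}{e}
      \;\approx\; 96\,\sqrt{n_0\,[\mathrm{cm^{-3}}]}\ \mathrm{V\,m^{-1}},
\qquad \omega_p = \sqrt{\frac{n_0\,e^2}{\varepsilon_0\,m_e}} .
% For a typical plasma density of n_0 ~ 10^{17} cm^{-3} this gives
% E_0 ~ 30 GV/m, roughly a thousand times the gradient of conventional
% radiofrequency structures.
```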

Aerial view

SLAC has a rich history in developing such techniques, and the previous FACET facility enabled researchers to demonstrate electron-driven plasma acceleration for both electrons and positrons. FACET-II will use the middle third (corresponding to a length of 1 km) of SLAC’s linear accelerator to generate a 10 GeV electron beam, and will be kitted out with diagnostics and computational tools that accurately measure and simulate the physics of the new facility’s beams. The FACET-II design also allows for adding the capability to produce and accelerate positrons at a later stage, paving the way for plasma-based electron–positron colliders.

FACET-II has issued its first call for proposals for experiments that will run when the facility goes online in 2020. In mid-October, prospective users of FACET-II presented their ideas for a first round of experiments for evaluation, and the number of proposals is already larger than the number of experiments that can possibly be scheduled for the facility’s first run.

Last year, the AWAKE experiment at CERN demonstrated the first ever acceleration of a beam in a proton-driven plasma (CERN Courier October 2018 p7). Laser-driven plasma-wakefield acceleration is also receiving much attention thanks to advances in high-power lasers (CERN Courier November 2018 p7). “The FACET-II programme is very interesting, with many plasma-wakefield experiments,” says technical coordinator and CERN project leader for AWAKE, Edda Gschwendtner, who is also chair of the FACET-II programme advisory committee.

LHCb’s momentous metamorphosis

Tender loving care

In November 2018 the LHC brilliantly fulfilled its promise to the LHCb experiment, delivering a total integrated proton–proton luminosity of 10 fb⁻¹ from Run 1 and Run 2 combined. This is what LHCb was designed for, and more than 450 physics papers have come from the adventure so far. Having recently finished swallowing these exquisite data, however, the LHCb detector is due some tender loving care.

In fact, during the next 24 months of long-shutdown two (LS2), the 4500 tonne detector will be almost entirely rebuilt. When it emerges from this metamorphosis, LHCb will be able to collect physics events at a rate 10 times higher than today. This will be achieved by installing new detectors capable of sustaining up to five times the instantaneous luminosity seen at Run 2, and by implementing a revolutionary software-only trigger that will enable LHCb to process signal data in an upgraded CPU farm at the frenetic rate of 40 MHz – a pioneering step among the LHC experiments.

Subdetector structure

LHCb is unique among the LHC experiments in that it is asymmetric, covering only one forward region. That reflects its physics focus: B mesons, which, rather than flying out uniformly in all directions, are preferentially produced at small angles (i.e. close to the beam direction) in the LHC’s proton collisions. The detector stretches for 20 m along the beam pipe, with its sub-detectors stacked behind each other like books on a shelf, from the vertex locator (VELO) to a ring-imaging Cherenkov detector (RICH1), the silicon upstream tracker (UT), the scintillating fibre tracker (SciFi), a second RICH (RICH2), the calorimeters and, finally, the muon detector.

The LHCb upgrade was first outlined in 2008, proposed in 2011 and approved the following year at a cost of about 57 million Swiss francs. The collaboration started dismantling the current detector just before the end of 2018 and the first elements of the upgrade are about to be moved underground.

Physics boost

The LHCb collaboration has so far made numerous important measurements in the heavy-flavour sector, such as the first observation of the rare decay B⁰s → µ⁺µ⁻, precise measurements of quark-mixing parameters and the observation of new baryonic and pentaquark states. However, many crucial measurements are currently statistically limited. The LHCb upgrade will boost the experiment’s physics reach by allowing the software trigger to handle an input rate around 30 times higher than before, bringing greater precision to theoretically clean observables.

Under construction

Flowing at an immense rate of 4 TB/s, data will travel from the cavern, straight from the detector electronics, via some 9000 optical fibres, each 300 m long, into front-end computers located in a brand-new data centre that is currently nearing completion. There, around 500 powerful custom-made boards will receive the data and transfer it to thousands of processing cores. The current trigger hardware will be removed, and new front-end electronics have been designed for all the experiment’s sub-detectors to cope with the substantially higher readout rates.
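
As a rough consistency check (using only the totals quoted above, with the fibre and board counts treated as approximate), the average throughput per optical fibre and per readout board works out as follows:

```python
# Back-of-the-envelope check of the LHCb upgrade readout numbers quoted above.
# The totals (4 TB/s, ~9000 fibres, ~500 boards) come from the text; the
# per-link figures derived here are averages only, not design specifications.

total_rate_bytes = 4e12          # 4 TB/s of event data out of the detector
n_fibres = 9000                  # ~9000 optical fibres, each 300 m long
n_boards = 500                   # ~500 custom readout boards

per_fibre_gbps = total_rate_bytes * 8 / n_fibres / 1e9   # gigabits per second
per_board_gbps = total_rate_bytes * 8 / n_boards / 1e9

print(f"average per fibre: {per_fibre_gbps:.1f} Gb/s")   # ~3.6 Gb/s
print(f"average per board: {per_board_gbps:.0f} Gb/s")   # ~64 Gb/s
```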

For the largest and heaviest LHCb devices, namely the calorimeters and muon stations, the detector elements will remain mostly in place. All the other LHCb detector systems are to be entirely replaced, apart from a few structural frames, the dipole magnet, shielding elements and gas or vacuum enclosures.

Development

Subdetector activities

The VELO at the heart of LHCb, which allows precise measurements of primary and displaced vertices of short-lived particles, is one of the key detectors to be upgraded during LS2. Replacing the current system based on silicon microstrip modules, the new VELO consists of 26 tracking layers made from 55 × 55 µm² pixel technology, which offers better hit resolution and simpler track reconstruction. The new VELO will also be closer to the beam axis, which poses significant design challenges. A new chip, the VELOPIX, capable of collecting signal hits from 256 × 256 pixels and sending data at a rate of up to 15 Gb/s, was developed for this purpose. Pixel modules include a cutting-edge cooling substrate based on an array of microchannels trenched out of a 260 µm-thick silicon wafer that carry liquid carbon dioxide to keep the silicon at a temperature of –20 °C. This is vital to prevent thermal runaway, since these sensors will receive the heaviest irradiation of all LHC detectors. Prototype modules have recently been assembled and characterised in tests with high-energy particles at the Super Proton Synchrotron.

The RICH detector will still be composed of two systems: RICH1, which discriminates kaons from pions in the low-momentum range, and RICH2, which performs this task in the high-momentum range. The RICH mirror system, which is required to deflect and focus Cherenkov photons onto photodetector planes, will be replaced with a new one that has been optimised for the much increased particle densities of future LHC runs. RICH detector columns are composed of six photodetector modules (PDMs), each containing four elementary cells hosting the multi-anode photomultiplier tubes. A full PDM was successfully operated during 2018, providing first particle signals.

Mounted between RICH1 and the dipole magnet, the upstream tracker (UT) consists of four planes of silicon microstrip detectors. To counter the effects of irradiation, the detector is contained in a thermal enclosure and cooled to approximately –5 °C using a CO₂ evaporative cooling system. Lightweight staves, with a carbon foam back-plane and embedded cooling pipe, are dressed with flex cables and instrumented with 14 modules, each composed of a polyimide hybrid circuit, a boron nitride stiffener and a silicon microstrip sensor.

VELO upgrade

Further downstream, nestled between the magnet and RICH2, will sit the SciFi – a new tracker based on scintillating fibres and silicon photomultiplier (SiPM) arrays, which replaces the drift straw detectors and silicon microstrip sensors used by the current three tracking stations. The SciFi represents a major challenge for the collaboration, not only due to its complexity, but also because the technology has never been used for such a large area in such a harsh radiation environment. More than 11,000 km of fibre was ordered, meticulously verified and even cured of a few rare, localised imperfections. From this, about 1400 mats of fibre layers were recently fabricated in four institutes and assembled into 140 rigid 5 × 0.5 m² modules. In parallel, SiPMs were assembled on flex cables and joined in groups of 16 with a 3D-printed titanium cooling tube to form sophisticated photodetection units for the modules, which will be operated at about –40 °C.

As this brief overview demonstrates, the LHCb detector is undergoing a complete overhaul during LS2 – with large parts being totally replaced – to allow this unique LHC experiment to deepen and broaden its exploration programme. CERN support teams and the LHCb technical crew are now busily working in the cavern, and many of the 79 institutes involved in the LHCb collaboration from around the world have shifted their focus to this herculean task. The entire installation will have to be completed by mid-2020 for the commissioning of the new detector, in time for the start of Run 3 in 2021.

ALICE revitalised

ALICE (A Large Ion Collider Experiment) will soon have enhanced physics capabilities thanks to a major upgrade of the detectors, data-taking and data-processing systems. These upgrades will improve the precision of measurements of the high-density, high-temperature phase of strongly interacting matter, the quark–gluon plasma (QGP), and extend the exploration of new phenomena in quantum chromodynamics (QCD). Since the start of the LHC programme, ALICE has been participating in all data runs, with the main emphasis on heavy-ion collisions, such as lead–lead, proton–lead and xenon–xenon collisions. The collaboration has been making major inroads into the understanding of the dynamics of the QGP – a state of matter that prevailed in the first instants of the universe and is recreated in droplets at the LHC.

To perform precision measurements of strongly interacting matter, ALICE must focus on rare probes – such as heavy-flavour particles, quarkonium states, real and virtual photons, and low-mass dileptons – as well as the study of jet quenching and exotic nuclear states. Observing rare phenomena requires very large data samples, which is why ALICE is looking forward to the increased luminosity provided by the LHC in the coming years. The interaction rate of lead ions during LHC Run 3 is foreseen to reach around 50 kHz, corresponding to an instantaneous luminosity of 6 × 10²⁷ cm⁻² s⁻¹. This will enable ALICE to accumulate 10 times more integrated luminosity (more than 10 nb⁻¹) and a data sample 100 times larger than has been obtained so far. In addition, the upgraded detector system will have better efficiency for the detection of short-lived particles containing heavy-flavour quarks thanks to the improved precision of the tracking detectors.
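
A quick, hedged calculation shows how the quoted instantaneous luminosity and the integrated-luminosity goal fit together; it ignores the machine duty factor and downtime, so it gives only the effective colliding-beam time required:

```python
# Effective lead-lead beam time needed to accumulate the quoted 10 nb^-1
# at the quoted instantaneous luminosity of 6e27 cm^-2 s^-1.
# Order-of-magnitude check only; the LHC duty factor is neglected.

inst_lumi_cm2s = 6e27            # cm^-2 s^-1
target_int_lumi_nb = 10.0        # nb^-1
NB_TO_CM2 = 1e-33                # 1 nb = 1e-33 cm^2

inst_lumi_nb_per_s = inst_lumi_cm2s * NB_TO_CM2   # ~6e-6 nb^-1 per second
seconds_needed = target_int_lumi_nb / inst_lumi_nb_per_s

print(f"{seconds_needed:.2e} s ~ {seconds_needed / 86400:.0f} days of effective beam")
# ~1.7e6 s, i.e. roughly 19 days of colliding beams, spread over several runs.
```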

During long-shutdown two (LS2), several major upgrades to the ALICE detector will take place. These include: a new inner tracking system (ITS) with a new high-resolution, low-material-budget silicon tracker, which extends to the forward rapidities with the new muon forward tracker (MFT); an upgraded time projection chamber (TPC) with gas electron multiplier (GEM) detectors, along with a new readout chip for faster readout; a new fast interaction trigger (FIT) detector and forward diffraction detector. New readout electronics will be installed in multiple subdetectors (the muon spectrometer, time-of-flight detector, transition radiation detector, electromagnetic calorimeter, photon spectrometer and zero-degree calorimeter) and an integrated online–offline (O2) computing system will be installed to process and store the large data volumes.

Detector upgrades

A new all-pixel silicon inner tracker based on CMOS monolithic active pixel sensor (MAPS) technology will be installed, covering mid-rapidity (|η| < 1.5) with the ITS and forward rapidity (–3.6 < η < –2.45) with the MFT. In MAPS technology, both the sensor for charge collection and the readout circuit for digitisation are hosted in the same piece of silicon instead of being bump-bonded together. The chip developed by ALICE is called ALPIDE, and uses a 180 nm CMOS process provided by TowerJazz. With this chip, the silicon material budget per layer is reduced by a factor of seven compared to the present ITS. The ALPIDE chip is 15 × 30 mm² in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its low power consumption (< 40 mW/cm²) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.

Inner tracker

The ITS consists of seven cylindrical layers of ALPIDE chips, summing up to 12.5 billion pixels and a total area of 10 m². The pixel chips are installed on staves at radial distances of 22–400 mm from the interaction point (IP). The beam pipe has also been redesigned with a smaller outer radius of 19 mm, allowing the first detection layer to be placed closer to the IP at a radius of 22.4 mm compared to 39 mm at present. The brand-new ITS detector will improve the impact parameter resolution by a factor of three in the transverse plane and by a factor of five along the beam axis. It will extend the tracking capabilities to much lower pT, allowing ALICE to perform measurements of heavy-flavour hadrons with unprecedented precision and down to zero pT.
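
The headline ITS numbers hang together, as the following rough cross-check shows (the chip dimensions and pixel counts are taken from the ALPIDE description above; the derived pitch and chip count are estimates, not official figures):

```python
# Cross-check of the new ITS figures quoted above using the ALPIDE chip
# parameters (15 x 30 mm^2, 512 x 1024 pixels). Derived values are estimates.

chip_area_cm2 = 1.5 * 3.0                # 15 mm x 30 mm
pixels_per_chip = 512 * 1024             # ~0.52 million pixels per chip
total_pixels = 12.5e9                    # quoted total for the seven-layer ITS

pitch_um = 30_000 / 1024                 # column pitch in micrometres (~29 um)
n_chips = total_pixels / pixels_per_chip
total_area_m2 = n_chips * chip_area_cm2 / 1e4

print(f"pixel pitch  ~ {pitch_um:.0f} um")
print(f"chips needed ~ {n_chips:.0f}")            # ~24 000 chips
print(f"total area   ~ {total_area_m2:.1f} m^2")  # ~11 m^2, consistent with ~10 m^2
```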

In the forward-rapidity region, ALICE detects muons using the muon spectrometer. The new MFT detector is designed to add vertexing capabilities to the muon spectrometer and will enable a number of new measurements that are currently beyond reach. As an example, it will allow us to distinguish J/ψ mesons that are produced directly in the collision from those that come from decays of mesons that contain a beauty quark. The MFT consists of five disks, each composed of two MAPS detection planes, placed perpendicular to the beam axis between the IP and the hadron absorber of the muon spectrometer.

The TPC is the main device for tracking and charged-particle identification in ALICE. The readout rate of the TPC in its present form is limited by its readout chambers, which are based on multi-wire proportional chambers. In order to avoid drift-field distortions produced by ions from the amplification region, the present readout chambers feature a charge-gating scheme to collect back-drifting ions, which limits the readout rate to 3.5 kHz. To overcome this limitation, new readout chambers employing a novel configuration of stacks of four GEMs have been developed during an extensive R&D programme. This arrangement allows for continuous readout at 50 kHz with lead–lead collisions, at no cost to detector performance. The production of the 72 inner (one GEM stack each) and outer (three GEM stacks each) chambers is now practically completed and certified. The replacement of the chambers in the TPC will take place in summer 2019, once the TPC is extracted from the experimental cavern and transported to the surface.
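
The need for continuous readout can be made concrete with a small estimate. Assuming a TPC electron drift time of order 100 µs (a typical value for a detector of this size, not a number quoted above), several collisions overlap inside the drift volume at a 50 kHz interaction rate, so a conventional gated, triggered readout cannot cleanly separate them:

```python
# Why the upgraded ALICE TPC reads out continuously: at 50 kHz the mean
# spacing between lead-lead collisions is shorter than the electron drift
# time, so events pile up inside the drift volume.
# The ~100 us drift time is an assumed, typical order of magnitude.

interaction_rate_hz = 50e3
drift_time_s = 100e-6            # assumed full drift time, order of magnitude

mean_spacing_s = 1.0 / interaction_rate_hz          # 20 us between collisions
overlapping_events = drift_time_s / mean_spacing_s  # ~5 events in the volume

print(f"mean spacing between collisions: {mean_spacing_s * 1e6:.0f} us")
print(f"events overlapping within one drift time: ~{overlapping_events:.0f}")
```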

The new fast interaction trigger, FIT, comprises two arrays of Cherenkov radiators with MCP–PMT sensors and a single, large-size scintillator ring. The arrays will be placed on both sides of the IP. It will be the primary trigger, luminosity and collision-time-measurement detector in ALICE. The detector will be capable of triggering at an interaction rate of 50 kHz, with a time resolution better than 30 ps and 99% efficiency.

The newly designed ALICE readout system presents a change in approach, as all lead–lead collisions that are produced in the accelerator, at a rate of 50 kHz, will be read out in a continuous stream. However, triggered readout will be used by some detectors and for commissioning and calibration runs, and the central trigger processor is being upgraded to accommodate the higher interaction rate. The readout of the TPC and muon chambers will be performed by SAMPA, a newly developed, 32-channel front-end analogue-to-digital converter with integrated digital signal processor.

Performance boost

The significantly improved ALICE detector will allow the collaboration to collect 100 times more events during LHC Run 3 compared to Run 1 and Run 2, which requires the development and implementation of a completely new readout and computing system. The O2 system is designed to combine all the computing functionalities needed in the experiment: detector readout, event building, data recording, detector calibration, data reconstruction, physics simulation and analysis. The total data volume produced by the front-end cards of the detectors will increase significantly, reaching a sustained data throughput of up to 3 TB/s. To minimise the data-processing and storage requirements, the ALICE computing model is designed to reduce the data volume read out from the detectors as early as possible in the processing chain. This is achieved by online processing of the data, including detector calibration and reconstruction of events in several steps synchronously with data taking. At its peak, the estimated data throughput to mass storage is 90 GB/s.
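
Putting the quoted throughput figures side by side gives the compression the online processing has to deliver (a simple ratio, nothing more):

```python
# Online data reduction implied by the O2 throughput figures quoted above.

detector_readout = 3e12      # up to 3 TB/s out of the front-end cards
to_mass_storage = 90e9       # ~90 GB/s peak to mass storage

reduction_factor = detector_readout / to_mass_storage
print(f"required online reduction: roughly a factor of {reduction_factor:.0f}")
# ~33x, achieved by calibrating and reconstructing events synchronously
# with data taking rather than storing raw data.
```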

Enhancements

A new computing facility for the O2 system is being installed on the surface, near the experiment. It will have a data-storage system with enough capacity to accommodate a large fraction of a full year’s data, and will provide the interface to permanent data storage at the tier-0 Grid computing centre at CERN, as well as other data centres.

ALICE upgrade activities are proceeding at a frenetic pace. Soon after the machine stopped in December, experts entered the cavern to open the massive doors of the magnet and started dismounting the detector in order to prepare for the upgrade. Detailed planning and organisation of the work are mandatory to stay on schedule, as Arturo Tauro, the deputy technical coordinator of ALICE explains: “Apart from the new detectors, which require dedicated infrastructure and procedures, we have to install a huge number of services (for example, cables and optical fibres) and perform regular maintenance of the existing apparatus. We have an ambitious plan and a tight schedule ahead of us.”

When the ALICE detector emerges revitalised from the two busy and challenging years of work ahead, it will be ready to enter into a new era of high-precision measurements that will expand and deepen our understanding of the physics of hot and dense QCD matter and the quark–gluon plasma. 

J-PET’s plastic revolution

The J-PET detector

It is some 60 years since the conception of positron emission tomography (PET), which revolutionised the imaging of physiological and biochemical processes. Today, PET scanners are used around the world, in particular providing quantitative and 3D images for early-stage cancer detection and for maximising the effectiveness of radiation therapies. Some of the first PET images were recorded at CERN in the late 1970s, when physicists Alan Jeavons and David Townsend used the technique to image a mouse. While the principle of PET already existed, the detectors and algorithms developed at CERN made a major contribution to its development. Techniques from high-energy physics could now be about to enable another leap in PET technology.

In a typical PET scan, a patient is administered a radioactive solution that concentrates in malignant cancers. Positrons from β⁺ decay annihilate with electrons from the body, resulting in the back-to-back emission of two 511 keV gamma rays that are registered in a crystal via the photoelectric effect. These signals are then used to reconstruct an image. Significant advances in PET imaging have taken place in the past few decades, and the vast majority of existing scanners use inorganic crystals – usually bismuth germanium oxide (BGO) or lutetium yttrium orthosilicate (LYSO) – organised in a ring to detect the emitted PET photons.

The main advantages of crystal detectors are their large stopping power, high probability of photoelectric conversion and good energy resolution. However, the use of inorganic crystals is expensive, limiting the number of medical facilities equipped with PET scanners. Moreover, conventional detectors are limited in their axial field of view: currently a distance of only about 20 cm along the body can be simultaneously examined from a single-bed position, meaning that several overlapping bed positions are needed to carry out a whole-body scan, and only 1% of quanta emitted from a patient’s body are collected. Extension of the scanned region from around 20 to 200 cm would not only improve the sensitivity and signal-to-noise ratio, but also reduce the radiation dose needed for a whole-body scan.

To address this challenge, several different designs for whole-body scanners have been introduced based on resistive-plate chambers, straw tubes and alternative crystal scintillators. In 2009, particle physicist Paweł Moskal of Jagiellonian University in Kraków, Poland, introduced a system that uses inexpensive plastic scintillators instead of inorganic ones for detecting photons in PET systems. Called the Jagiellonian PET (J-PET) detector, and based on technologies already employed in the ATLAS, LHCb, KLOE, COSY-11 and other particle-physics experiments, the system aims to make whole-body PET imaging cost-effective.

Whole-body imaging

The current J-PET setup comprises a ring of 192 detection modules axially arranged in three layers as a barrel-shaped detector, and the construction is based on 17 patent-protected solutions. Each module consists of a 500 × 19 × 7 mm³ scintillator strip made of a commercially available material called EJ-230, with a photomultiplier tube (PMT) connected at each side. Photons are registered via the Compton effect and each analogue signal from the PMTs is sampled in the voltage domain at four thresholds by dedicated field-programmable gate arrays.

In addition to recording the location and time of the electron–positron annihilation, J-PET determines the energy deposited by annihilation photons. The 2D position of a hit is known from the scintillator position, while the third space component is calculated from the time difference of signals arriving at both ends of the scintillator, enabling direct 3D image reconstruction. PMTs connected to both sides of the scintillator strips compensate for the low detection efficiency of plastic compared to crystal scintillators and enable multi-layer detection. A modular and relatively easy-to-transport PET scanner with a non-magnetic, low-density central part can be used as an insert compatible with magnetic resonance imaging (MRI) or computed tomography. Furthermore, since plastic scintillators are produced in various shapes, the J-PET approach can also be introduced for positron emission mammography (PEM) and as a range monitor for hadron therapy.
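
The reconstruction principle described above can be sketched in a few lines. In this toy version (the effective signal speed and all numbers are illustrative assumptions, not J-PET calibration values), the hit position along a strip follows from the difference of the arrival times at its two ends, and the annihilation point along the line of response follows from the difference of the two hit times:

```python
# Toy sketch of the J-PET time-based reconstruction described above.
# V_EFF and the example numbers are illustrative assumptions only.

C_LIGHT = 30.0        # cm/ns, speed of light in vacuum
V_EFF = 12.0          # cm/ns, assumed effective light-propagation speed in a strip

def hit_position_along_strip(t_left_ns, t_right_ns):
    """Position of the Compton hit relative to the strip centre (cm)."""
    return 0.5 * V_EFF * (t_left_ns - t_right_ns)

def annihilation_point_along_lor(hit1_time_ns, hit2_time_ns):
    """Displacement of the annihilation point from the midpoint of the line
    of response joining the two hits (cm), from the time-of-flight difference."""
    return 0.5 * C_LIGHT * (hit2_time_ns - hit1_time_ns)

# Example: signals 0.5 ns apart at the strip ends -> hit 3 cm from the centre;
# photons arriving 0.2 ns apart -> annihilation 3 cm from the LOR midpoint.
print(hit_position_along_strip(10.0, 9.5))       # 3.0
print(annihilation_point_along_lor(10.0, 10.2))  # 3.0
```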

The J-PET detector offers a powerful new tool to test fundamental symmetries

J-PET can also build images from positronium (a bound state of electron and positron) that gets trapped in intermolecular voids. In about 40% of cases, positrons injected into the human body create positronium, whose lifetime and other properties are sensitive to the local environment. Currently this information is neither recorded nor used for PET imaging, but recent J-PET measurements of the positronium lifetime in normal and cancerous skin cells indicate that the properties of positronium may be used as diagnostic indicators for cancer therapy. Medical doctors are excited by the avenues opened by J-PET. These include a larger axial view (e.g. to check correlations between organs separated by more than 20 cm in the axial direction), the possibility of performing combined PET–MRI imaging at the same time and place, and the possibility of simultaneous PET and positronium (morphometric) imaging, paving the way for in vivo determination of cancer malignancy.

Such a large detector is not only potentially useful for medical applications. It can also be used in materials science, where positron annihilation lifetime spectroscopy (PALS) enables the study of voids and defects in solids, while precise measurements of positronium atoms lead to morphometric imaging and physics studies. In this latter regard, the J-PET detector offers a powerful new tool to test fundamental symmetries.

Combinations of discrete symmetries (charge conjugation C, parity P, and time reversal T) play a key role in explaining the observed matter–antimatter asymmetry in the universe (CP violation) and are the starting point for all quantum field theories preserving Lorentz invariance, unitarity and locality (CPT symmetry). Positronium is a good system enabling a search for C, T, CP and CPT violation via angular correlations of annihilation quanta, while the positronium lifetime measurement can be used to separate the ortho- and para-positronium states (o-Ps and p-Ps). Such decays also offer the potential observation of gravitational quantum states, and are used to test Lorentz and CPT symmetry in the framework of the Standard Model Extension.

At J-PET, the following reaction chain is predominantly considered: ²²Na → ²²Ne* + e⁺ + νe, ²²Ne* → ²²Ne + γ, followed by e⁺e⁻ → o-Ps → 3γ annihilation. The detection of the 1274 keV prompt γ emitted in the ²²Ne* de-excitation is the start signal for the positronium-lifetime measurement. Currently, tests of discrete symmetries and quantum entanglement of photons originating from the decay of positronium atoms are the main physics topics investigated by the J-PET group. The first data taking was conducted in 2016 and six data-taking campaigns have concluded with almost 1 PB of data. Physics studies are based on data collected with a point-like source placed in the centre of the detector and covered by a porous polymer to increase the probability of positronium formation. A test measurement with a source surrounded by an aluminium cylinder was also performed. The use of a cylindrical target (figure 1, left) allows researchers to separate in space the positronium formation and annihilation (cylinder wall) from the positron emission (source). Most recently, measurements by J-PET were also performed with a cylinder with the inner wall covered by the porous material.

Figure 1

The J-PET programme aims to beat the precision of previous measurements for C, CP and CPT symmetry tests in positronium, and to be the first to observe a potential T-symmetry violation. Tests of C symmetry, on the other hand, are conducted via searches for forbidden decays of the positronium triplet state (o-Ps) to 4γ and the singlet state (p-Ps) to 3γ. Tests of the other fundamental symmetries and their combinations will be performed by measuring the expectation values of symmetry-odd operators constructed using the spin of o-Ps and the momenta and polarisation vectors of the photons originating from its annihilation (figure 1, right). The physical limit of such tests is expected at the level of about 10−9 due to photon–photon interactions, which is six orders of magnitude below the present experimental limits (e.g. at the University of Tokyo and by the Gammasphere experiment).

Since J-PET is built of plastic scintillators, it provides an opportunity to determine the photon’s polarisation through the registration of primary and secondary Compton scatterings in the detector. This, in turn, enables the study of multi-partite entanglement of photons originating from the decays of positronium atoms. The survival of particular entanglement properties in the mixing scenario may make it possible to extract quantum information in the form of distinct entanglement features, e.g. from metabolic processes in human bodies.

Currently a new, fourth J-PET layer is under construction (figure 2), with each unit of the layer comprising 13 plastic-scintillator strips. With a mass of about 2 kg per detection unit, the system is easy to transport, and a portable tomographic chamber can be assembled on-site with a radius that can be adjusted for different purposes by using a given number of such units.

Figure 2

The J-PET group is a collaboration between several Polish institutions – Jagiellonian University, the National Centre for Nuclear Research Świerk and Maria Curie-Skłodowska University – as well as the University of Vienna and the National Laboratory in Frascati. The research is funded by the Polish National Centre for Research and Development, by the Polish Ministry of Science and Higher Education and by the Foundation for Polish Science. Although the general interest in improved quality of medical diagnosis was the first step towards this new detector for positron annihilation, today the basic-research programme is equally advanced. The only open question at J-PET is whether a high-resolution full human body tomographic image will be presented before the most precise test of one of nature’s fundamental symmetries.

The rise of deep learning

It is 1965 and workers at CERN are busy analysing photographs of trajectories of particles travelling through a bubble chamber. Such scanning staff were employed by CERN and laboratories across the world to scan countless photographs by hand, seeking to identify specific patterns contained in them. It was their painstaking work – which required significant skill and a lot of visual effort – that put particle physics in high gear. Researchers used the photographs (see figures 1 and 3) to make discoveries that would form a cornerstone of the Standard Model of particle physics, such as the observation of weak neutral currents with the Gargamelle bubble chamber in 1973.

In the subsequent decades the field moved away from photographs to collision data collected with electronic detectors. Not only had data volumes become unmanageable, but Moore’s law had begun to take hold and a revolution in computing power was under way. The marriage between high-energy physics and computing was to become one of the most fruitful in science. Today, the Large Hadron Collider (LHC), with its hundreds of millions of proton–proton collisions per second, generates data at a rate of 25 GB/s – leading the CERN data centre to pass the milestone of 200 PB of permanently archived data last summer. Modelling, filtering and analysing such datasets would be impossible had the high-energy-physics community not invested heavily in computing and a distributed-computing network called the Grid.

Learning revolution

The next paradigm change in computing, now under way, is based on artificial intelligence. The so-called deep learning revolution of the late 2000s has significantly changed how scientific data analysis is performed, and has brought machine-learning techniques to the forefront of particle-physics analysis. Such techniques offer advances in areas ranging from event selection to particle identification to event simulation, accelerating progress in the field while offering considerable savings in resources. In many cases, images of particle tracks are making a comeback – although in a slightly different form from their 1960s counterparts.

Fig. 1.

Artificial neural networks are at the centre of the deep learning revolution. These algorithms are loosely based on the structure of biological brains, which consist of networks of neurons interconnected by signal-carrying synapses. In artificial neural networks these two entities – neurons and synapses – are represented by mathematical equivalents. During the algorithm’s “training” stage, the values of parameters such as the weights representing the synapses are modified to lower the overall error rate and improve the performance of the network for a particular task. Possible tasks vary from identifying images of people’s faces to isolating the particles into which the Higgs boson decays from a background of identical particles produced by other Standard Model processes.
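
The training loop described above, adjusting the weights to reduce the error, can be illustrated with a deliberately tiny example. The sketch below (NumPy only, toy data, no connection to any real physics analysis) trains a single-hidden-layer network by gradient descent on a two-class problem:

```python
# Minimal illustration of the "adjust the weights to lower the error" idea
# described above: a one-hidden-layer network trained by gradient descent
# on toy two-dimensional data. Purely pedagogical; not a physics analysis.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.7, (200, 2)), rng.normal(+1.0, 0.7, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Network parameters ("synapses"): 2 -> 8 -> 1 with a sigmoid output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)                  # hidden "neurons"
    p = sigmoid(h @ W2 + b2).ravel()          # predicted probability of class 1
    # Gradients of the binary cross-entropy loss (backpropagation).
    grad_out = ((p - y) / len(y)).reshape(-1, 1)
    dW2 = h.T @ grad_out;  db2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    dW1 = X.T @ grad_h;    db1 = grad_h.sum(axis=0)
    # Gradient-descent update: nudge the weights to lower the error.
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after 500 steps: {accuracy:.2f}")
```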

Artificial neural networks have been around since the 1960s. But it took several decades of theoretical and computational development for these algorithms to outperform humans in some specific tasks. For example: in 1996, IBM’s chess-playing computer Deep Blue won its first game against the then world chess champion Garry Kasparov; in 2016 Google DeepMind’s AlphaGo deep neural-network algorithm defeated the best human players in the game of Go; modern self-driving cars are powered by deep neural networks; and in December 2017 the latest DeepMind algorithm, called AlphaZero, learned how to play chess in just four hours and defeated the world’s best chess-playing computer program. So important is artificial intelligence in potentially addressing intractable challenges that the world’s leading economies are establishing dedicated investment programmes to better harness its power.

Computer vision

The immense computing and data challenges of high-energy physics are ideally suited to modern machine-learning algorithms. Because the signals measured by particle detectors are stored digitally, it is possible to recreate an image from the outcome of particle collisions. This is most easily seen for cases where detectors offer discrete pixelised position information, such as in some neutrino experiments, but it also applies, on a more complex basis, to collider experiments. Not long after computer-vision techniques based on so-called convolutional neural networks (figure 2) were applied to the analysis of everyday images, particle physicists applied them to detector images – first of jets and then of photons, muons and neutrinos – simplifying the task of understanding ever-larger and more abstract datasets and making it more intuitive.
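
For readers who want to see what “applying computer vision to detector images” looks like in code, the sketch below defines a small convolutional network of the general kind used for jet images. It is a generic, hypothetical architecture written with the PyTorch library, not the network of any particular experiment:

```python
# A generic convolutional network for binary classification of detector
# "images" (e.g. a calorimeter grid treated as pixels). Toy, hypothetical
# architecture; not the model used by any LHC or neutrino experiment.
import torch
import torch.nn as nn

class JetImageClassifier(nn.Module):
    def __init__(self, image_size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local energy patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (image_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64),
            nn.ReLU(),
            nn.Linear(64, 1),   # one logit: signal-like vs background-like
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 single-channel 32x32 "calorimeter images".
model = JetImageClassifier()
fake_images = torch.randn(8, 1, 32, 32)
logits = model(fake_images)
print(logits.shape)   # torch.Size([8, 1])
```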

Fig. 2.

Particle physicists were among the first to use artificial-intelligence techniques in software development, data analysis and theoretical calculations. The first of a series of workshops on this topic, titled Artificial Intelligence in High-Energy and Nuclear Physics (AIHENP), was held in 1990. At the time, several changes were taking effect. For example, neural networks were being evaluated for event-selection and analysis purposes, and theorists were calling on algebraic or symbolic artificial-intelligence tools to cope with a dramatic increase in the number of terms in perturbation-theory calculations.

Over the years, the AIHENP series was renamed ACAT (Advanced Computing and Analysis Techniques) and expanded to span a broader range of topics. However, following a new wave of adoption of machine learning in particle physics, the focus of the 18th edition of the workshop, ACAT 2017, was again machine learning – featuring its role in event reconstruction and classification, fast simulation of detector response, measurements of particle properties, and AlphaGo-inspired calculations of Feynman loop integrals, to name a few examples.

Learning challenge

For these advances to happen, machine-learning algorithms had to improve and a physics community dedicated to machine learning needed to be built. In 2014 a machine-learning challenge set up by the ATLAS experiment to identify the Higgs boson garnered close to 2000 participants on the machine-learning competition platform Kaggle. To the surprise of many, the challenge was won by a computer scientist armed with an ensemble of artificial neural networks. In 2015 the Inter-experimental LHC Machine Learning working group was born at CERN out of a desire of physicists from across the LHC to have a platform for machine-learning work and discussions. The group quickly grew to include all the LHC experiments and to involve others outside CERN, like the Belle II experiment in Japan and neutrino experiments worldwide. More dedicated training efforts in machine learning are now emerging, including the Yandex machine learning school for high-energy physics and the INSIGHTS and AMVA4NewPhysics Marie Skłodowska-Curie Innovative Training Networks (see Learning machine learning).

Fig. 3.

Event selection, reconstruction and classification are arguably the most important particle-physics tasks to which machine learning has been applied. As in the time of manual scanning, when the photographs of particle trajectories were analysed to select events of potential physics interest, modern trigger systems are used by many particle-physics experiments, including those at the LHC, to select events for further analysis (figure 3). The decision of whether to save or throw away an event has to be made within a few microseconds and requires specialised hardware located directly on the trigger systems’ logic boards. In 2010 the CMS experiment introduced machine-learning algorithms to its trigger system to better estimate the momentum of muons, which may help identify physics beyond the Standard Model. At around the same time, the LHCb experiment also began to use such algorithms in its trigger system for event selection.

Neutrino experiments such as NOvA and MicroBooNE at Fermilab in the US have also used computer-vision techniques to reconstruct and classify various types of neutrino events. In the NOvA experiment, using deep learning techniques for such tasks is equivalent to collecting 30% more data, or alternatively building and using more expensive detectors – potentially saving global taxpayers significant amounts of money. Similar efficiency gains are observed by the LHC experiments.

Currently, about half of the Worldwide LHC Computing Grid’s computing budget is spent simulating the numerous possible outcomes of high-energy proton–proton collisions. To achieve a detailed understanding of the Standard Model and any physics beyond it, a tremendous number of such Monte Carlo events needs to be simulated. But despite the best efforts by the community worldwide to optimise these simulations, the speed is still a factor of 100 short of the needs of the High-Luminosity LHC, which is scheduled to start taking data around 2026. If a machine-learning model could directly learn the properties of the reconstructed particles and bypass the complicated simulation process of the interactions between the particles and the material of the detectors, it could lead to simulations orders of magnitude faster than those currently available.

Competing networks

One idea for such a model relies on algorithms called generative adversarial networks (GANs). In these algorithms, two neural networks compete with each other for a particular goal, with one of them acting as an adversary that the other network is trying to fool. CERN’s openlab and software for experiments group, along with others in the LHC community and industry partners, are starting to see the first results of using GANs for faster event and detector simulations.
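
The adversarial game described above can be written down as a short training loop. The following is a deliberately simplified, hypothetical sketch in PyTorch, with a generator producing fake “shower” vectors and a discriminator trying to tell them from “real” ones; it is not the fast-simulation code used at CERN:

```python
# Minimal generative-adversarial-network (GAN) training loop, illustrating the
# two competing networks described above. Everything here is a toy: the "real"
# samples are random vectors standing in for simulated shower observables.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n):
    # Stand-in for detailed-simulation output (e.g. calorimeter shower features).
    return torch.randn(n, data_dim) * 0.5 + 1.0

for step in range(200):
    # 1) Train the discriminator to separate real from generated samples.
    real = real_batch(64)
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"final losses  D: {d_loss.item():.3f}  G: {g_loss.item():.3f}")
```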

Particle physics has come a long way from the heyday of manual scanners in understanding elementary particles and their interactions. But there are gaps in our understanding of the universe that need to be filled – the nature of dark matter, dark energy, matter–antimatter asymmetry, neutrinos and colour confinement, to name a few. High-energy physicists hope to find answers to these questions using the LHC and its upcoming upgrades, as well as future lepton colliders and neutrino experiments. In this endeavour, machine learning will most likely play a significant part in making data processing, data analysis and simulation, and many other tasks, more efficient.

Driven by the promise of great returns, big companies such as Google, Apple, Microsoft, IBM, Intel, Nvidia and Facebook are investing hundreds of millions of dollars in deep learning technology including dedicated software and hardware. As these technologies find their way into particle physics, together with high-performance computing, they will boost the performance of current machine-learning algorithms. Another way to increase the performance is through collaborative machine learning, which involves several machine-learning units operating in parallel. Quantum algorithms running on quantum computers might also bring orders-of-magnitude improvement in algorithm acceleration, and there are probably more advances in store that are difficult to predict today. The availability of more powerful computer systems together with deep learning will likely allow particle physicists to think bigger and perhaps come up with new types of searches for new physics or with ideas to automatically extract and learn physics from the data.

That said, machine learning in particle physics still faces several challenges. Some of the most significant include understanding how to treat systematic uncertainties while employing machine-learning models and interpreting what the models learn. Another challenge is how to make complex deep learning algorithms work in the tight time window of modern trigger systems, to take advantage of the deluge of data that is currently thrown away. These challenges aside, the progress we are seeing today in machine learning and in its application to particle physics is probably just the beginning of the revolution to come.

CERN’s prowess in nothingness

From freeze-dried foods to flat-panel displays and space simulation, vacuum technology is essential in many fields of research and industry. Globally, vacuum technologies represent a multi-billion-dollar, and growing, market. However, it is only when vacuum is applied to particle accelerators for high-energy physics that the technology displays its full complexity and multidisciplinary nature – which bears little resemblance to the common perception of vacuum as being just about pumps and valves.

Particle beams require extremely low pressure in the pipes in which they travel to ensure that their lifetime is not limited by interactions with residual gas molecules and to minimise backgrounds in the physics detectors. The peculiarity of particle accelerators is that the particle beam itself is the main source of gas: ions, protons and electrons interact with the walls of the vacuum vessels and extract gas molecules, either due to direct beam losses or mediated by photons (synchrotron radiation) and electrons (for example by “multipacting”).

Nowadays, vacuum technology for particle accelerators is focused on this key challenge: understand, simulate, control and mitigate the direct and indirect effects of particle beams on material surfaces. It is thanks to major advances made at CERN and elsewhere in this area that machines such as the LHC are able to achieve the high beam stability that they do.

Since most of the effort in vacuum technology is concentrated in the few-nanometre-thick top slice of materials, CERN has merged into a single group surface-physics specialists, thin-film-coating experts and galvanic-treatment professionals, together with teams of designers and colleagues dedicated to the operation of large vacuum equipment. Bringing this expertise together “under one roof” makes CERN one of the world’s leading R&D centres for extreme vacuum technology, contributing to major existing and future accelerator projects at CERN and beyond.

Intersecting history

Vacuum technology for particle accelerators has been pioneered by CERN since its early days, with the Intersecting Storage Rings (ISR) bringing the most important breakthroughs. At the turn of the 1960s and 1970s, this technological marvel – the world’s first hadron collider – required proton beams of unprecedented intensity (of the order of 10 A) and extremely low vacuum pressures in the interaction areas (below 10⁻¹¹ mbar). The former challenge stimulated studies of ion instabilities and led to innovative surface treatments – for instance glow-discharge cleaning – to mitigate the effects. The low-vacuum requirement, on the other hand, drove the development of materials and their treatments – both chemical and thermal – in addition to novel high-performance cryogenic pumps and vacuum gauges that are still in use today. The technological successes of the ISR also led to the lowest pressure ever measured in a laboratory at room temperature, 2 × 10⁻¹⁴ mbar, a record that still stands today.

The Large Electron Positron collider (LEP) inspired the next chapter in CERN’s vacuum story. Even though LEP’s residual gas density and current intensities were less demanding than those of the ISR, the exceptional length and the intense synchrotron-light power distributed along its 27 km ring triggered the need for unconventional solutions at reasonable cost. Responding to this challenge, the LEP vacuum team developed extruded aluminium vacuum chambers and introduced, for the first time, linear pumping by non-evaporable getter (NEG) strips.

In parallel, LEP project leader Emilio Picasso launched another fruitful development that led to the production of the first superconducting radio-frequency (RF) cavities based on niobium thin-film coating on copper substrates. The ability to attain very low vacuum gained with the ISR, the acquired knowledge in film deposition, and the impressive results obtained in surface treatments of copper were the ingredients for success. The present accelerating RF cavities of the LHC and HIE-ISOLDE (figure 1) are essentially based on the expertise assimilated for LEP (CERN Courier May 2018 p26).

The coexistence in the same team of both NEG and thin-film expertise was the seed for another breakthrough in vacuum technology: NEG thin-film coatings, driven by the LHC project requirements and the vision of LHC project leader Lyn Evans. The NEG material, a micron-thick coating made of a mixture of titanium, zirconium and vanadium, is deposited onto the inner wall of vacuum chambers and, after activation by heating in the accelerator, provides pumping for most of the gas species present in accelerators. The Low Energy Ion Ring (LEIR) was the first CERN accelerator to implement extensive NEG coating in around 2006. For the LHC, one of the technology’s key benefits is its low secondary-electron emission, which suppresses the growth of electron clouds in the room-temperature part of the machine (figure 2).

Studying clouds

Electron clouds had to be studied in depth for the LHC. CERN’s vacuum experts provided direct measurements of the effect in the Super Proton Synchrotron (SPS) with LHC beams, contributing to a deeper understanding of electron emission from technical surfaces over a large range of temperatures. New concepts for vacuum systems at cryogenic temperatures were invented, in particular the beam screen. Conceived at BINP (Russia) and further developed at CERN, this key technology is essential in keeping the gas density stable and to reduce the heat load to the 1.9 K cold-mass temperature of the magnets. This non-exhaustive series of advancements is another example of how CERN’s vacuum success is driven by the often daunting requirements of new projects to pursue fundamental research.

Preparing for the HL-LHC

As the LHC restarts this year for the final stage of Run 2 at a collision energy of 13 TeV, preparations for the high-luminosity LHC (HL-LHC) upgrade are getting under way. The more intense beams of HL-LHC will amplify the effect of electron clouds on both the beam stability and the thermal load to the cryogenic systems. While NEG coatings are very effective in eradicating electron multipacting, their application is limited to room-temperature beam pipes that can be heated (“bakeable” in vacuum jargon) to around 200 °C to activate the coating. Therefore, an alternative strategy has to be found for the parts of the accelerators that cannot be heated, for example those in the superconducting magnets of the LHC and the vacuum chambers in the SPS.

Thin-film coatings made from carbon offer a solution. The idea originated at CERN in 2006 following the observation that beam-scrubbed surfaces – those that have been cleared of trapped gas molecules which increase electron-cloud effects – are enriched in graphite-like carbon. During the past 10 years, this material has been the subject of intense study at CERN. Carbon’s characteristics at cryogenic temperatures are extremely interesting in terms of gas adsorption and electron emission, and the material has already been deposited on tens of SPS vacuum chambers within the LHC Injectors Upgrade project (CERN Courier October 2017 p32). The most challenging activity in the coming years is by far the HL-LHC work: coating the beam screens inserted in the triplet magnets, which will be situated on both sides of the four LHC experiments to squeeze the proton beams to a smaller size at the collision points. A dedicated sputtering source has been developed that allows alternate deposition of titanium, to improve adherence, and carbon. At the end of the process, the latter layer will be just 50 nm thick.

Another idea to fight electron clouds for the HL-LHC, originally proposed by researchers at the STFC Accelerator Science and Technology Centre (ASTeC) and the University of Dundee in the UK, involves laser-treating surfaces to roughen them: secondary electrons are then intercepted by the surrounding surfaces and cannot be accelerated by the beam. In collaboration with UK researchers and GE Inspection Robotics, CERN’s vacuum team has recently developed a miniature robot that can direct the laser onto the LHC beam screen (“Miniature robot” image). The possibility of in situ surface treatments by lasers opens new perspectives for vacuum technology in the next decades, including studies for future circular colliders.

An additional drawback of the HL-LHC’s intense beams is the higher rate of induced radioactivity in certain locations: the extremities of the detectors, owing to the higher flux of interaction debris, and the collimation areas due to the increased proton losses. To minimise the integrated radioactive dose received by personnel during interventions, it is necessary to properly design all components and define a layout that facilitates and accelerates all manual operations. Since a large fraction of the intervention time is taken up by connecting pieces of equipment, remote assembling and disassembling of flanges is a key area for potential improvements.

One interesting idea that is being developed by CERN’s vacuum team, in collaboration with the University of Calabria (Italy), concerns shape-memory alloys. Given appropriate thermomechanical pre-treatment, a ring of such materials delivers radial forces that tighten the connection between two metallic pipes: heating provokes the clamping, while cooling generates the unclamping. Both actions can be easily implemented remotely, reducing human intervention significantly. Although the invention was motivated by the HL-LHC, it has other applications that are not yet fully exploited, such as flanges for radioactive-beam accelerators and, more generally, the coupling of pipes made of different materials.

Synchrotron applications

Technology development sometimes veers away from its initial goals, and this is clearly illustrated by one of our most recent innovations. In the main linac of the proposed Compact Linear Collider (CLIC), a high-energy linear electron–positron collider, the quadrupole magnets need a beam pipe with a very small diameter (about 8 mm) and pressures in the ultra-high-vacuum range. The vacuum requirement can be met by NEG-coating the vacuum vessel, but the coating process in such a high-aspect-ratio geometry is not easy due to the very small space available for the material source and the plasma needed for its sputtering.

This troublesome issue has been solved by a complete change of the production process: the NEG material is no longer coated directly on the wall of the tiny pipe, but instead on the external wall of a sacrificial mandrel made of high-purity aluminium (figure 3). On top of the coated mandrel, the beam pipe is built up by copper electroforming, a well-known electrolytic technique, and in the final production step the mandrel is dissolved chemically in a caustic-soda solution. This production process has no limitations in the diameter of the coated beam pipe, and even non-cylindrical geometries can be conceived. The flanges can be assembled during electroforming so that welding or brazing is no longer necessary.

It turns out that the CLIC requirement is common with that of next-generation synchrotron-light sources. For these accelerators, future constraints for vacuum technology are quite clear: very compact magnets with magnetic poles as close as possible to the beam – to reduce costs and improve beam performance – call for very-small-diameter vacuum pipes (less than 5 mm in diameter and more than 2 m long). CERN has already produced prototypes that should fit with these requirements. Indeed, the collaboration between the CERN vacuum group and vacuum experts of light sources has a long history. It started with the need for photon beams for the study of vacuum chambers for LEP and beam screens for the LHC, and continued with NEG coating as an efficient choice for reducing residual gas density – a typical example is MAX IV, for which CERN was closely involved (CERN Courier September 2017 p38). The new way to produce small-diameter beam pipes represents another step in this fruitful collaboration.

Further technology transfer has come from the sophisticated simulations necessary for the HL-LHC and the Future Circular Collider study. A typical example is the integration of electromagnetic and thermomechanical phenomena during a magnet quench to assess the integrity of the vacuum vessel. Another example is the simulation of gas-density and photon-impingement profiles by Monte Carlo methods. These simulation codes have found a large variety of applications well beyond the accelerator field, from the coating of electronic devices to space simulation. For the latter, codes have been used to model the random motion and migration of any chemical species present on the surfaces of satellites at the time of their launch, which is a critical step for future missions to Mars looking for traces of organic compounds.
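
To give a flavour of what such Monte Carlo codes do, the sketch below estimates the probability that a gas molecule entering one end of a cylindrical tube escapes from the other end. It is a toy test-particle calculation under idealised assumptions (collisionless straight flights and diffuse, cosine-law wall re-emission), not one of the production codes used at CERN:

```python
# Toy test-particle Monte Carlo for free molecular flow: estimate the
# transmission probability of a cylindrical tube (radius R, length L).
# Assumptions: collisionless straight flights, diffuse (cosine-law)
# re-emission at the wall. Illustrative only, not a CERN production code.
import numpy as np

rng = np.random.default_rng(1)

def cosine_direction(normal):
    """Random unit vector distributed with a cosine law about `normal`."""
    u, phi = rng.random(), 2 * np.pi * rng.random()
    ct, st = np.sqrt(u), np.sqrt(1 - u)       # cos(theta) ~ sqrt(uniform)
    # Build an orthonormal frame (t1, t2, normal).
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return st * np.cos(phi) * t1 + st * np.sin(phi) * t2 + ct * normal

def transmission(R=1.0, L=2.0, n_molecules=20000):
    transmitted = 0
    for _ in range(n_molecules):
        # Enter at z = 0, uniform over the disc, cosine-distributed direction.
        r, ang = R * np.sqrt(rng.random()), 2 * np.pi * rng.random()
        pos = np.array([r * np.cos(ang), r * np.sin(ang), 0.0])
        d = cosine_direction(np.array([0.0, 0.0, 1.0]))
        while True:
            # Distance to the cylindrical wall along the current direction.
            a = d[0]**2 + d[1]**2
            b = 2 * (pos[0] * d[0] + pos[1] * d[1])
            c = pos[0]**2 + pos[1]**2 - R**2
            disc = max(b * b - 4 * a * c, 0.0)
            t_wall = (-b + np.sqrt(disc)) / (2 * a) if a > 1e-12 else np.inf
            t_exit = (L - pos[2]) / d[2] if d[2] > 0 else -pos[2] / d[2] if d[2] < 0 else np.inf
            if t_exit <= t_wall:              # leaves through an end plane
                transmitted += d[2] > 0       # count only the far end (z = L)
                break
            pos = pos + t_wall * d            # hit the wall: diffuse re-emission
            inward = np.array([-pos[0] / R, -pos[1] / R, 0.0])
            d = cosine_direction(inward)
    return transmitted / n_molecules

# For L = 2R the estimate should come out around 0.67 (the Clausing factor).
print(f"estimated transmission probability: {transmission():.3f}")
```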

Of course, the main objective of the CERN vacuum group is the operation of CERN’s accelerators, in particular those in the LHC chain. Here, the relationship with industry is key because the vacuum industry across CERN’s Member and Associate Member states provides us with state-of-the-art components, valves, pumps, gauges and control equipment that have contributed to the high reliability of our vacuum systems. On the other hand, the LHC gives high visibility to industrial products that, in turn, can be beneficial for the image of our industrial partners. Collaborating with industry is a win–win situation.

The variety of projects and activities performed at CERN provides us with continuous stimulation to improve and extend our competences in vacuum technology. The fervour of new collider concepts and experimental approaches in the physics community drives us towards innovation. Other typical examples are antimatter physics, which requires very low gas density (figure 4), and radioactive-beam physics, which imposes severe controls on contamination and gas exhaust. New challenges are already visible on the horizon, for example physics with gas targets, higher-energy beams in the LHC, and coating beam pipes with high-temperature superconductors to reduce beam impedance.

An orthogonal driver of innovation is reducing the cost and operational downtime of CERN’s accelerators. In the long term, our dream is to avoid the bakeout of vacuum systems, so that very low pressures can be attained without the heavy operation of heating the vacuum vessels in situ, which is done principally to remove water vapour. Such advances are possible only if the puzzling interaction between water molecules and technical materials is understood – and here again, only a very thin layer at the material surface makes the difference. Achieving ultra-high vacuum within a few hours and at reduced cost would also have an impact well beyond the high-energy-physics community. This and other challenges at CERN will guarantee that we continue to push the limits of vacuum technology well into the 21st century.

Charting a course for advanced accelerators

Progress in experimental particle physics is driven by advances in accelerators. The conversion of storage rings into colliders in the 1970s is one example; another is the use of superconducting magnets and RF structures to reach higher energies. CERN’s Large Hadron Collider (LHC) is halfway through its second run at an energy of 13 TeV, and its high-luminosity upgrade is expected to operate until the mid-2030s. Several machines are under consideration for the post-LHC era, and many will be weighed up during the European Strategy for Particle Physics process beginning in 2019. All are large facilities based on advanced but essentially existing accelerator technologies.

A completely different breed of accelerator based on novel accelerating technologies is also under intense study. Capable of operating with an accelerating gradient larger than 1 GV/m, advanced and novel accelerators (ANAs) could reach energies in the 1–10 TeV range in much more compact and efficient ways. The technological challenge is huge and the timescales are long, but the eventual goal is to have a linear electron–positron or an electron–proton collider at the energy frontier. Such a machine would have a smaller footprint than conventional collider designs and promises energies that otherwise are technologically extremely difficult and expensive to reach.

The first Advanced and Novel Accelerators for High Energy Physics Roadmap (ANAR) workshop took place at CERN in April, focusing on the application of ANAs to high-energy physics (CERN Courier June 2017 p7). The workshop was organised under the umbrella of the International Committee for Future Accelerators as a step towards an international ANA scientific roadmap for an advanced linear collider, with the aim of delivering a technical design report by 2035. The first task towards this goal is to take stock of the scientific landscape by outlining global priorities and identifying necessary facilities and existing programmes.

The ANA landscape

The idea of accelerating particles in a plasma dates back to 1979 and a seminal publication by Tajima and Dawson. It involved the use of wakefields – longitudinal accelerating electric fields generated in a plasma in the wake of a driving laser pulse or particle bunch – to accelerate and focus a relativistic bunch of particles. In ANAs that use a plasma as the medium, the wakefields are sustained by a charge separation in the plasma driven by a laser pulse or a particle beam. Large energy gains over short distances can also be reached in ANAs based on dielectric structures, which can sustain maximum accelerating fields larger than those possible in metallic structures. These ANAs can accelerate electrons as well as positrons and can also be driven by laser pulses or particle bunches.

Initial experiments with electrons took place at SLAC and elsewhere in the 1990s, demonstrating the principles of the technique, but the advent of high-power lasers as wakefield drivers led to increased activity. After the first demonstration of peaked electron spectra in millimetre-scale plasmas in 2004, GeV electron beams were obtained with 40 TW laser pulses in 2006, and electron beams with multi-GeV energies have since been reported with PW-class laser systems and few-centimetre-long plasmas. Advanced and novel technologies for accelerators have made remarkable progress over the past two decades: they are now capable of bringing electrons to energies of a few GeV over a distance of a few centimetres, compared with the roughly 0.1 MeV gained per centimetre in the Large Electron–Positron (LEP) collider. Reaching such energies with ANAs has therefore sparked interest for high-energy physics, in addition to their potential for the industrial, security and health sectors.
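
To put those numbers side by side (a rough comparison added here for illustration, not a figure from the original text), the corresponding gradients differ by three to four orders of magnitude:
\[ \frac{\text{a few GeV}}{\text{a few cm}} \sim 10\text{–}100\ \text{GV/m} \qquad \text{versus} \qquad 0.1\ \text{MeV/cm} = 10\ \text{MV/m at LEP}. \]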

Several challenges must be addressed before a technical design for an advanced linear collider (ALC) can be proposed, requiring the sustained efforts of a diverse community that currently includes more than 62 laboratories in more than 20 countries. The key challenges relate either to the fundamental components of ANAs – such as the injectors, accelerating structures, staging of components and their reliability – or to beam dynamics at high energy and the preservation of energy spread, emittance and efficiency.

A major component necessary for the application of an ANA to high-energy physics is a wakefield driver. In practice, this could be an efficient and reliable laser pulse with a peak power topping 100 TW, or a particle bunch with an energy higher than 1 GeV. In both cases, however, the duration of the pulse must be shorter than 100 fs.
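
For orientation (a back-of-the-envelope figure added here, not quoted in the article), a 100 TW pulse lasting 100 fs carries an energy of
\[ E_{\text{pulse}} = P\,\tau = (100\ \text{TW}) \times (100\ \text{fs}) = 10\ \text{J}, \]
which sets the scale of the driver energies discussed below.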

The plasma medium, separated into successive stages, is another key component. Assuming accelerating gradients in the range 10–50 GeV/m and energy gains of 10–20 GeV per stage, plasma media 20–200 cm long are required. The main challenges for the plasma medium are reproducibility, density uniformity, the density ramps at the entrance and exit of each stage, and the high repetition rate required for collider operation. Tailoring the density ramps is important to mitigate the usually large mismatch between the small transverse size of the accelerated beam inside the plasma and the relatively large beam size that the inter-stage optics must handle between plasma modules.
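
That stage length follows directly from dividing the energy gain per stage by the gradient (a simple check added here for illustration):
\[ L_{\text{stage}} = \frac{\Delta E}{E_z}\,, \qquad \frac{10\ \text{GeV}}{50\ \text{GeV/m}} = 0.2\ \text{m} \quad\text{to}\quad \frac{20\ \text{GeV}}{10\ \text{GeV/m}} = 2\ \text{m}. \]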

Staging successive accelerator modules is a further challenge. Staging is necessary because the energy carried by most drivers is much smaller than the final energy desired for the accelerated bunch, e.g. 1.6 kJ for 2 × 10¹⁰ electrons or positrons at an energy of 500 GeV. Since state-of-the-art femtosecond laser pulses and relativistic electron bunches carry less than 100 J, multiple drivers and multiple stages are needed. Staging must, in a compact way, couple the accelerated bunch out of one plasma module and into the next while preserving all of its properties, and at the same time extract the spent driver and bring in a fresh one before the next stage. Staging has been demonstrated in a number of schemes, although with low-energy beams (<200 MeV), most recently at the BELLA Center at LBNL. Injection of electrons from a laser-plasma injector into a plasma module providing acceleration to 5–10 GeV is one of the goals of the French APOLLON CILEX laser facility, which starts operation in 2018, and of the baseline explored in the design study EuPRAXIA (see panel on right). The AWAKE experiment at CERN, meanwhile, aims to use protons to drive a plasma wakefield in a single plasma section, with the long-term goal of accelerating electrons to TeV energies.
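
As a quick consistency check (added here for illustration, not from the source), the quoted bunch energy is simply the number of particles times their final energy:
\[ E_{\text{bunch}} = N\,E_{b} = \left(2\times10^{10}\right)\times\left(500\ \text{GeV}\right) = 10^{22}\ \text{eV} \approx 1.6\ \text{kJ}. \]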

Stability, reproducibility and reliability are trademarks of accelerators used for particle physics. Results obtained with ANAs often appear less stable and reproducible than those obtained with conventional accelerators. However, it is important to note that ANAs are mostly run as experiments and research tools, with limited resources devoted to the feedback and control systems that are one of the major features of conventional accelerators. A strong effort therefore has to be put into developing the proper tools and devices, for instance by exploiting synergies with the RF-accelerator community to develop more reliable technologies.

Testing the components of an eventual ALC requires major facilities, most likely located at national or international laboratories. ANA technology might be more compact than that of conventional accelerators, but producing even 10–100 GeV-range prototypes is beyond the capability of university labs, since it requires a broad range of engineering skills to demonstrate reliable operation in a safe environment. The size and cost of such facilities are better justified in a collaborative environment, in line with the development of accelerators relevant for high-energy physics.

Four-phase roadmap

Co-ordination of the advanced-accelerator field is at different stages around the world. In the US, roadmaps were drawn up in 2016 for plasma- and structure-based ANAs, with application to high-energy physics and the construction of a linear collider in the 2040s. One outcome of this year’s ANAR workshop was a first attempt at an international scientific roadmap. Arranged into four distinct phases, it describes the stages deemed scientifically necessary to arrive at a design for a multi-TeV linear collider.

The first is a five-year-long period in which to develop injectors and accelerating structures with controlled parameters, such as an injector–accelerator unit producing GeV-range electron and positron beams with high-quality bunches, low emittance and low relative energy spread. A second five-year phase will lead to improved bunch quality at higher energy, with the staging of two accelerating structures and first proposals of conceptual ALC designs. The third phase, also lasting five years, will focus on the reliability of the acceleration process, while the fourth phase will be dedicated to technical design reports for an ALC by 2035, following selection of the most promising options.

Community effort

Many important challenges remain, such as improving the quality, stability and efficiency of the beams accelerated with ANAs, but no show-stopper has been identified to date. However, the proposed time frame is achievable only if there is an intensive and co-ordinated R&D effort, supported by sufficient funding, for ANA technology with particle-physics applications. The preparation of an eventual technical design report for an ALC at the energy frontier should therefore be undertaken by the ANA community with significant contributions from the whole accelerator community.

From the current state of wakefield acceleration in plasmas and dielectrics, it is clear that advanced concepts offer several promising options for energy-frontier electron–positron and electron–proton colliders. In view of the significant cost of intense R&D towards an ALC, an international programme, with some level of international co-ordination, is more suitable than a regional approach. Following the April ANAR workshop, a study group towards advanced linear colliders, named ALEGRO (Advanced LinEar collider study GROup), has been set up to co-ordinate the preparation of a proposal for an ALC in the multi-TeV energy range. ALEGRO consists of scientists with expertise in advanced accelerator concepts or accelerator physics and technology, drawn from national institutions and universities in Asia, Europe and the US. The group will organise a series of workshops on relevant topics to engage the scientific community. Its first objective is to prepare and deliver, by the end of 2018, a document detailing the international roadmap and strategy for ANAs, with clear priorities, as input for the European Strategy Group. Another objective is to provide a framework that amplifies international co-ordination on this topic at the scientific level, fosters worldwide collaboration towards an ALC and possibly broadens the community. After all, ANA technology represents the next generation of colliders and could potentially define particle physics into the 22nd century.

EAAC workshop showcases advanced accelerator progress

The 3rd European Advanced Accelerator Concept (EAAC) workshop, held every two years, took place from 24 to 30 September on the Island of Elba, Italy. Around 300 scientists attended, with advanced linear colliders at the centre of discussions. Specialists from accelerator physics, RF technology, plasma physics, instrumentation and the laser field discussed ideas and directions towards a new generation of ultra-compact and cost-effective accelerators with novel applications in science, medicine and industry.

Among the many outstanding presentations at EAAC 2017, at which 70 PhD students presented their work, were reports on: laser-driven kHz generation of MeV beams at LOA/TU Vienna; dielectric acceleration results from PSI/DESY/Cockcroft; first results from the AWAKE experiment at CERN; 7 GeV electrons in laser plasma acceleration from LBNL; 0.5 nC electron bunches from HZDR; new R&D directions towards high-power lasers at LLNL; controllable electron beams from Osaka and LLNL; undulator X-ray generation after laser plasma accelerators from DESY/University of Hamburg/SOLEIL/LOA; important progress in hadron beams from plasma accelerators from Belfast/HZDR/GSI; and future collider plans from CERN.

A special session was devoted to the Horizon2020 design study EuPRAXIA (European Plasma Research Accelerator with eXcellence In Applications). EuPRAXIA is a consortium of 38 institutes, co-ordinated by DESY, which aims to design a European plasma accelerator facility. This future research infrastructure will deliver high-brightness electron beams of up to 5 GeV for pilot users interested in free-electron laser applications, tabletop test beams for high-energy physics, medical imaging and other applications. This study, conceived at the EAAC meeting in 2013, is strongly supported by the European laser industry.

The EAAC was founded by the European Network for Novel Accelerators in 2013 and has grown in its third edition into a meeting with worldwide visibility, rapidly catching up with the long tradition of the Advanced Accelerator Concepts workshop (AAC) in the US. The EAAC2017 workshop was supported by the EuroNNAc3 network through the EU project ARIES, INFN as the host organisation, DESY and the Helmholtz association, CERN and the industrial sponsors Amplitude, Vacuum FAB and Laser Optronic.

Ralph Assmann, DESY, Massimo Ferrario, INFN and Edda Gschwendtner, CERN.

CLEAR prospects for accelerator research

A new user facility for accelerator R&D, the CERN Linear Electron Accelerator for Research (CLEAR), started operation in August and is ready to provide beam for experiments. CLEAR evolved from the former CTF3 test facility for the Compact Linear Collider (CLIC), which ended a successful programme in December 2016. Following approval of the CLEAR proposal, the necessary hardware modifications started in January and the facility is now able to host and test a broad range of ideas in the accelerator field.

CLEAR’s primary goal is to enhance and complement the existing accelerator R&D programme at CERN, as well as offering a training infrastructure for future accelerator physicists and engineers. The focus is on general accelerator R&D and component studies for existing and possible future accelerator applications. This includes studies of high-gradient acceleration methods, such as CLIC X-band and plasma technologies, as well as prototyping and validation of accelerator components for the high-luminosity LHC upgrade.

The scientific programme for 2017 includes: a combined test of critical CLIC technologies, continuing previous tests performed at CTF3; measurements of radiation effects on electronic components to be installed on space missions in a Jovian environment and for dosimetry tests aimed at medical applications; beam instrumentation R&D; and the use of plasma for beam focusing. Further experiments, such as those exploring THz radiation for accelerator applications and direct impedance measurements of equipment to be installed in CERN accelerators, are also planned.

The experimental programme for 2018 and beyond is still open to new and challenging proposals. An international scientific committee is currently being formed to prioritise proposals, and a user request form is available at the CLEAR website: cern.ch/clear.

Milestone for US dark-matter detector

The US Department of Energy (DOE) has formally approved a key construction milestone for the LUX-ZEPLIN (LZ) experiment, propelling the project towards its April 2020 goal for completion. On 9 February the project passed a DOE review and approval stage known as “Critical Decision 3”, which accepts the final design and formally launches construction. The LZ detector, which will be built roughly 1.5 km underground at the Sanford Underground Research Facility in South Dakota and be filled with 10 tonnes of liquid xenon to detect dark-matter interactions, is considered one of the best bets to determine whether dark-matter candidates known as WIMPs exist.

The project stems from the merger of two previous experiments: LUX (Large Underground Xenon) and ZEPLIN (ZonEd Proportional scintillation in LIquid Noble gases). It was first approved in 2014 and currently has about 250 participating scientists in 37 institutions in the US, UK, Portugal, Russia and Korea. The detector is expected to be at least 50 times more sensitive to finding signals from dark-matter particles than its predecessor LUX, and will compete with other liquid-xenon experiments under development worldwide in the race to detect dark matter. A planned upgrade to the current XENON1T experiment (called XENONnT) at Gran Sasso National Laboratory in Italy and China’s plans to advance the PandaX-II detector, for instance, are both expected to have a similar schedule and scale to LZ.

The LZ collaboration plans to release a Technical Design Report later this year. “We will try to go as fast as we can to have everything completed by April 2020,” says LZ project director Murdock Gilchriese. “We got a very strong endorsement to go fast and to be first.”

Electron gun shrunk to matchbox size

An interdisciplinary team of researchers from DESY in Germany and MIT in the US has built a new kind of electron gun that is about the size of a matchbox. The new device uses laser-generated terahertz radiation, rather than traditional radio-frequency fields, to accelerate electrons from rest. Since terahertz radiation has a much shorter wavelength than radio waves, the new device measures just 34 × 24.5 × 16.8 mm – compared with the size of a car for traditional state-of-the-art electron guns.

This device reached an accelerating gradient of 350 MV per metre, which the team says is almost twice that of current electron guns. “We achieved an acceleration of a dense packet of 250,000 electrons from rest to 0.5 keV with minimal energy spread,” explains lead author W Ronny Huang of MIT, who carried out the work at the Center for Free-Electron Laser Science in Hamburg. The electron beams emerging from the device could already be used for low-energy electron diffraction experiments, he says, and will also have applications in ultrafast electron diffraction or for injecting electrons into linacs and X-ray light sources.
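
As an illustrative back-of-envelope figure (added here; the article does not quote it), a final energy of 0.5 keV at a gradient of 350 MV/m corresponds to an effective acceleration length of only
\[ d \approx \frac{0.5\ \text{keV}}{350\ \text{MV/m}} \approx 1.4\ \mu\text{m}, \]
underlining how compact terahertz-driven accelerating structures can be.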
