On 20 June the European Space Agency (ESA) gave the official go-ahead for the Laser Interferometer Space Antenna (LISA), which will comprise a trio of satellites to detect gravitational waves in space. LISA is the third large-class mission in ESA’s Cosmic Vision plan, which covers the next two decades, and has been given a launch date of 2034.
Predicted a century ago by general relativity, gravitational waves are vibrations of space–time that were first detected by the ground-based Laser Interferometer Gravitational-Wave Observatory (LIGO) in September 2015. While upgrades to LIGO and other ground-based observatories are planned, LISA will access a much lower-frequency region of the gravitational-wave universe. Three craft, separated by 2.5 million km in a triangular formation, will follow Earth in its orbit around the Sun, waiting to be distorted by a tiny fractional amount by a passing gravitational wave.
Detecting gravitational waves in space is highly challenging experimentally, but a LISA test mission called LISA Pathfinder has recently demonstrated key technologies needed to do so (CERN Courier January/February 2017 p34). These include free-falling test masses linked by lasers and isolated from all external and internal forces except gravity. LISA Pathfinder concluded its pioneering mission at the end of June, as LISA enters a more detailed phase of study. Following ESA’s selection, the design and costing of the LISA mission can be completed. The project will then be proposed for “adoption” before construction begins.
Following the first and second detections of gravitational waves by LIGO in September and December 2015, on 1 June the collaboration announced the detection of a third event (Phys. Rev. Lett. 118 221101). Like the previous two, “GW170104” – the signal for which arrived on Earth on 4 January – is thought to have been produced when two black holes merged into a single, larger black hole billions of years ago.
CERN has recently implemented two important steps towards the High Luminosity LHC (HL-LHC) – an upgrade that will increase the intensity of the LHC’s collisions significantly from the early 2020s. Preparing CERN’s existing accelerator complex to cope with more intense proton beams presents several challenges, in particular concerning the system that injects protons into the LHC.
At a ceremony on 9 May, a major new linear accelerator, Linac 4, was inaugurated. It will replace Linac 2, which has been in service since 1978, and is CERN’s first new accelerator since the LHC, due to feed the accelerator complex with higher-energy particle beams. After an extensive testing period, Linac 4 will be connected to the existing infrastructure during the long technical shutdown in 2019/2020.
To cope with the higher-intensity and higher-energy beams emerging from Linac 4, the Proton Synchrotron Booster (PSB), the second accelerator in the LHC injector chain, will be completely overhauled during that same period. At the beginning of June, the first radio-frequency cavity of the new PSB acceleration system was completed, with a further 27 under assembly. The new cavities are based on a composite magnetic material called FINEMET, developed by Hitachi Metals, which allows them to operate with a large bandwidth and means that a single cavity can cover all necessary frequency bands. The PSB cavity project was launched in 2012 in collaboration with KEK in Japan and involved intensive testing at CERN. KEK contributed a substantial fraction of the FINEMET cores and shared its experience with similar technology.
On 12 June, two large detector modules for the ICARUS experiment were loaded onto trucks at CERN to begin a six-week journey to Fermilab in the US. ICARUS will form part of Fermilab’s short-baseline neutrino programme, which aims to make detailed measurements of neutrino interactions and search for eV-scale sterile neutrinos (CERN Courier June 2017 p25).
Based on advanced liquid-argon time-projection technology, ICARUS began its life under a mountain at the Gran Sasso National Laboratory in Italy in 2010, recording data from neutrino beams sent from CERN. Since 2014, it has been at CERN undergoing an upgrade and refurbishment at the CERN Neutrino Platform (CERN Courier July/August 2016 p21). It left CERN in two parts by road, then travelled by barge along the Rhine to the port of Antwerp in Belgium, where it was loaded onto a ship. As the Courier went to press, ICARUS was already heading across the Atlantic to Fermilab via the Great Lakes, equipped with a GPS unit that allows its progress to be tracked in real time (icarustrip.fnal.gov).
Just two days after ICARUS left CERN, another key component of the CERN Neutrino Platform was on the move, albeit on a smaller lorry. Baby MIND, a 75-tonne prototype for a magnetised iron neutrino detector that will precisely identify and track muons, was moved from its construction site in building 180 to the East Hall of the Proton Synchrotron. Following commissioning and full characterisation in the T9 test beam, at the end of July Baby MIND will be transported to Japan to join the WAGASCI experiment at J-PARC, where it will contribute to a better understanding of neutrino interactions for the T2K experiment.
Massive stars are traditionally expected to end their life cycle by triggering a supernova, a violent event in which the stellar core collapses into a neutron star, potentially followed by a further collapse into a black hole. During this process, a shock wave ejects large amounts of material from the star into interstellar space at high velocities, producing heavy elements along the way, while the supernova outshines all the stars in its host galaxy combined.
In the past few years, however, there has been mounting evidence that not all massive-star deaths are accompanied by these catastrophic events. Instead, it seems that for some stars only a small part of the outer layers is ejected before the rest of the star collapses into a massive black hole. For instance, there are hints that the birth rate and supernova rate of massive stars do not match. Furthermore, results from the LIGO gravitational-wave observatory in the US indicate the existence of black holes with masses more than 30 times that of the Sun, which is easier to explain if stars can collapse without a large explosion.
Motivated by this indirect evidence, researchers from Ohio State University began a search for stars that quietly form a black hole without triggering a supernova. Using the Large Binocular Telescope (LBT) in Arizona, in 2015 the team identified its first candidate. The star, called N6946-BH1, was approximately 25 times more massive than the Sun and resided in the Fireworks galaxy, which is known for hosting a large number of supernovae. Having previously shown a stable luminosity, the star brightened during 2009, although not to the level expected for a supernova, before disappearing completely at optical wavelengths in 2010 (see image).
The lack of emission observed by the LBT triggered follow-up searches for the star using both the Hubble Space Telescope (HST) and the Spitzer Space Telescope (SST). While the HST found no sign of the star at optical wavelengths, the SST did observe infrared emission. A careful analysis of the data disfavoured alternative explanations, such as a large dust cloud obscuring the optical emission from the star, and the infrared data were also shown to be compatible with emission from remaining matter falling into a black hole.
If the star did indeed directly collapse into a black hole, as these findings suggest, the in-falling matter is expected to radiate in the X-ray region. The team is therefore waiting for observations from the space-based Chandra X-ray Observatory to search for this emission.
If confirmed in X-ray data, this result would be the first measurement of the birth of a black hole and the first observation of a failed supernova. The results would explain why we observe fewer supernovae than expected and could reveal the origin of the massive black holes responsible for the gravitational waves seen by LIGO, in addition to having implications for the production of heavy elements in the universe.
The past few decades have witnessed an explosion in X-ray sources and techniques, impacting science and technology significantly. Large synchrotron X-ray facilities around the world based on advanced storage rings and X-ray optics are used daily by thousands of scientists across numerous disciplines. From the shelf life of washing detergents to the efficiency of fuel-injection systems, and from the latest pharmaceuticals to the chemical composition of archaeological remains, highly focused and brilliant beams of X-rays allow researchers to characterise materials over an enormous range of length and timescales, and therefore link the microscopic behaviour of a system with its bulk properties.
So-called third-generation light sources based on synchrotrons produce stable beams of X-rays over a wide range of photon energies and beam parameters. The availability of more intense, shorter and more coherent X-ray pulses opens even further scientific opportunities, such as making high-resolution movies of chemical reactions or providing industry with real-time nanoscale imaging of working devices. This boils down to maximising a parameter called peak brilliance. While accelerator physicists have made enormous strides in increasing the peak brilliance of synchrotrons, this quantity experienced a leap forward by many orders of magnitude when the first free-electron lasers (FELs) started operating in the X-ray range more than a decade ago.
FLASH, the soft-X-ray FEL at DESY in Hamburg, was inaugurated in 2005 and marked the beginning of this new epoch in X-ray science. Based on superconducting accelerating structures developed initially for a linear collider for particle physics (see “The world’s longest superconducting linac”), it provided flashes of VUV radiation with peak brilliances almost 10 orders of magnitude higher than any storage-ring-based source in the same wavelength range. The unprecedented peak power of the beam immediately led to groundbreaking new research in physics, chemistry and biology. But importantly, FLASH also demonstrated that the amplification scheme responsible for the huge gain of FELs – Self-Amplified Spontaneous Emission (SASE) – was feasible at short wavelengths and could likely be extended to the hard-X-ray regime.
The first hard-X-ray FEL to enter operation based on the SASE principle was the Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory in California, which obtained first light in 2009 using a modified version of the old SLAC linac and operates at X-ray energies up to around 11 keV. Since then, several facilities have been inaugurated or are close to start-up: SACLA in Japan, the Pohang FEL in South Korea and SwissFEL in Switzerland. The European X-ray Free-Electron Laser (European XFEL) in Schenefeld-Hamburg, Germany, marks a further step-change in X-ray science, promising to produce the brightest beams with the highest photon energies and the highest repetition rates. Construction of the €1.2 billion facility began in January 2009, funded by 11 countries: Denmark, France, Germany, Hungary, Italy, Poland, Russia, Slovakia, Spain, Sweden and Switzerland, with Germany (58%) and Russia (27%) as the largest contributors. It is expected that the UK will join the European XFEL in 2017.
The European XFEL extends over a distance of 3.4 km in underground tunnels (figure 1). It begins with the electron injector at DESY in Bahrenfeld-Hamburg, which produces electrons and injects them into a 2 km-long superconducting linear accelerator, where the desired electron energy (up to 17.5 GeV) is reached. Exiting the linac, the electrons are rapidly deflected in an undulating left–right pattern as they traverse a periodic array of magnets called an undulator (figure 1, bottom right), causing them to emit intense beams of X-ray photons. The X-rays emerging from the undulator then travel along 1 km-long photon-transport tunnels equipped with various X-ray optics before arriving at the European XFEL headquarters in Schenefeld, where the experiments will take place.
In addition to the development of the electron linac, which was commissioned earlier this year and involved a major effort by DESY in collaboration with numerous other accelerator facilities over the past decade (see “The world’s longest superconducting linac”), the European XFEL has driven the development of both undulator technology and advanced X-ray optics. This multinational and multidisciplinary effort now opens perspectives for novel scientific experiments. When fully commissioned, towards the end of 2018, the facility will deliver 4000 hours of accelerator time per year for user experiments that are approved via external peer review.
Manipulating X-rays
Synchrotron radiation was first detected experimentally at Cornell in 1947, and the first generation of synchrotron-radiation users were termed “parasitic” because they made use of X-rays produced as a byproduct of particle-physics experiments. Dedicated “second-generation” X-ray sources were established in the early 1970s, while much more brilliant “third-generation” sources based on devices called undulators started to appear in the early 1990s (figure 2). The SASE technology underpinning XFELs, which followed from work undertaken in the mid-1960s, ensures that the produced X-rays are much more intense and more coherent than those emitted by storage rings (see SASE panel below). Like the light coming from an optical laser, the X-rays generated by SASE are almost 100% transversely coherent, compared with less than one per cent for third-generation synchrotrons, indicating that the radiation is an almost perfect plane wave. Even though the longitudinal-coherence length is not comparable to that of a single-mode optical laser, the use of the term “X-ray laser” is clearly justified for facilities such as the European XFEL.
A major challenge with X-ray lasers is to develop the mirrors, monochromators and other optical components that enable high-energy X-rays to be manipulated and their coherence to be preserved. Compared with the visible light emerging from a standard red helium-neon laser, which has a wavelength of 632 nm, the typical wavelength of hard X-rays is around 0.1 nm. Consequently, X-ray laser light is up to 6000 times more sensitive to distortions in the optics. On the other hand, X-ray mirrors work at extremely small grazing incidence angles (typically around 0.1° for hard X-rays at the European XFEL) because the interaction between X-rays and matter is so weak. This reduces the sensitivity to profile distortions and makes errors of up to 2 nm tolerable on a 1 m-long X-ray mirror, before the reflected X-ray wavefront becomes noticeably affected. Still, these requirements on profile errors are extremely high – about 10 times more stringent than for the Hubble Space Telescope mirror, for example.
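To put these tolerances in perspective, the back-of-envelope sketch below reproduces the numbers quoted above; it assumes the standard estimate that a mirror height error h viewed at grazing angle θ distorts the reflected wavefront by roughly 2h·sinθ.

```python
import math

# Sensitivity of X-ray optics compared with visible-light optics (rough sketch)
lam_hene = 632e-9           # He-Ne laser wavelength in metres
lam_xray = 0.1e-9           # typical hard-X-ray wavelength in metres
print(lam_hene / lam_xray)  # ~6300: the "up to 6000 times" factor quoted above

# Wavefront distortion from a 2 nm mirror profile error at 0.1 degree grazing incidence,
# using the path-length estimate delta ~ 2*h*sin(theta)
h = 2e-9                    # mirror height error in metres
theta = math.radians(0.1)   # grazing-incidence angle
delta = 2 * h * math.sin(theta)
print(delta)                # ~7e-12 m, i.e. only about lambda/14 of the X-ray wavelength
```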
The technology to produce these ultra-flat X-ray mirrors was only developed in recent years in Japan and Europe. It is based on a process called deterministic polishing, in which material is removed atomic layer by atomic layer according to a very precisely measured map of the initial profile’s deviations from an ideal shape. After years of development and many months of deterministic polishing iterations, the first 95 cm-long silicon X-ray mirror fulfilling the tight specifications of the European XFEL was completed in March 2016, with 10 more mirrors of similar quality following shortly thereafter. In the final configuration, 27 of these extremely precise mirrors will be used to steer the X-ray laser beam along the photon-transport tunnels to all the scientific instruments.
Managing the large heat loads on the European XFEL mirrors is a major challenge. To remove the heat generated by the X-ray laser beam without distorting the highly sensitive mirrors, a liquid-metal film is used to couple the mirror to a water-cooling system in a tension- and vibration-free fashion. Another mirror system will be cooled to a temperature of around 100 K, at which the thermal-expansion coefficient of silicon is close to zero. This solution, which is vital to deal with the high repetition rate of the European XFEL, is often employed for smaller silicon crystals acting as crystal monochromators but is rarely necessary for large mirror bodies where the grazing-incidence geometry spreads the heat over a large area.
Indeed, the SASE pulses have potentially devastating power – especially close to the sample, where the beam may be focused to small dimensions. A typical SASE X-ray pulse of 100 fs duration contains about 2 mJ of X-ray energy (corresponding to 10¹² photons at 12 keV photon energy), which means that a copper beam-stop placed close behind the sample would be heated to a temperature of several hundred thousand degrees Celsius and could therefore be evaporated (along with the sample) by just one pulse. While this is not necessarily a problem for samples that can be replaced via advanced injection schemes and where data can be collected before destruction takes place, it could shorten the lifetime of slits, attenuators, windows and other standard beamline components. The solution is to intersect the beam only where it has a larger size and to use only light elements that absorb less X-ray energy per atom. Still, stopping the X-ray laser beam remains a challenge at the European XFEL, with up to 2700 pulses in a 600 μs pulse train (figure 3). Indeed, the entire layout of the photon-distribution system was adapted to counteract this damaging effect of the X-ray laser beam, and a facility-wide machine-protection system restricts the pulse-train length to a safe value, depending on the optical configuration. Since a misguided X-ray laser beam can quickly drill through the stainless-steel pipes of the vacuum system, diamond plates are positioned around the beam trajectory and will light up if hit by X-rays, triggering a dump of the electron beam.
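For readers who like to check the arithmetic, a minimal sketch of the pulse-energy figure quoted above (the 100 fs duration is taken from the text; the photon-energy conversion is standard):

```python
# Energy carried by one SASE pulse of 1e12 photons at 12 keV
n_photons = 1e12
photon_energy_J = 12e3 * 1.602176634e-19   # 12 keV in joules
pulse_energy = n_photons * photon_energy_J
print(pulse_energy)                        # ~1.9e-3 J, i.e. roughly 2 mJ per pulse

# Spread over ~100 fs this corresponds to an enormous instantaneous power
print(pulse_energy / 100e-15)              # ~2e10 W: tens of gigawatts during the pulse
```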
The business end of things
At the European XFEL, the generation of X-ray beams is largely “behind the scenes”. The scientific interest in XFEL experiments stems from the ability to deliver around 10¹² X-ray photons in one ultrafast pulse (with a duration in the range 10–100 fs) and with a high degree of coherence. Performing experiments within such short pulses allows users to generate ultrafast snapshots of dynamics that would be smeared out with longer exposure times and give rise to diffuse scattering. Combined with spectroscopic information, a complete picture of atomic motion and molecular rearrangements, as well as the charge and spin states and their dynamics, can be built up. This leads to the notion of a “molecular movie”, in which the dynamics are triggered by an external optical laser excitation (acting as an optical pump) and the response of a molecule is monitored by ultrafast X-ray scattering and spectroscopy (X-ray probe). Pump-probe experiments are typically ensemble-averaged measurements of many molecules that are randomly aligned with respect to each other and not distinguishable within the scattering volume. The power and coherence of the European XFEL beams will allow such investigations with unprecedented resolution in time and space compared to today’s best synchrotrons.
In particular, the coherence of the European XFEL beam allows users to distinguish features beyond those arising from average properties. These features are encoded in the scattering images as grainy regions of varying intensity called speckle, which results from the self-interference of the scattered beam and can be exploited to obtain higher spatial resolution than is possible in “incoherent” X-ray scattering experiments (figure 4). Since the speckles reflect the exact real-space arrangement of the scattering volume, even subtle structural changes can alter the speckle pattern dramatically due to interference effects.
The combination of ultrafast pulses, huge peak intensity and a high degree of beam coherence is truly unique to FEL facilities and has already enabled experiments that otherwise were impossible. In addition, the European XFEL has a huge average intensity due to the many pulses delivered each second. This allows a larger number of experimental sessions per operation cycle and/or better signal-to-noise ratios within a given experimental time frame. The destructive power of the beam means that many experiments will be of the single-shot type, which requires a continuous injection scheme because the sample cannot be reused. Other experiments will operate with reduced peak flux, allowing multi-exposure schemes as also demonstrated in work at LCLS and FLASH.
Six experimental stations are planned for the European XFEL start-up, two per SASE beamline. The first, situated at the hard-X-ray undulator SASE-1, is devoted to the study of single particles and biomolecules, serial femtosecond crystallography, and femtosecond X-ray experiments in biology and chemistry. SASE-2 caters to dynamics investigations in condensed-matter physics and materials-science experiments, specialising in extreme states of matter and plasmas. At the soft-X-ray branch SASE-3, two instruments will allow investigations of electronic states of matter and atomic/cluster physics, among other studies. The three SASE undulators will deliver photons in parallel and the instruments will share their respective beams in 12-hour shifts, so that three instruments are always operating at any given time.
Eight years after the project officially began, the European XFEL finally achieved first light in 2017 and its commissioning is progressing according to schedule. The facility is the culmination of a worldwide effort led by DESY for the electron linac and by European XFEL GmbH for the development of the X-ray photon transport and experimental stations. The facility is conveniently situated among other European light sources – synchrotrons that are also continuously evolving towards higher brilliance – and a handful of hard-X-ray FELs worldwide. The European XFEL is by far the most powerful hard-X-ray source in the world and will remain at the forefront for at least the next 20–30 years. Continuous investment in instrumentation and detectors will be required to capitalise fully on its impressive specifications, and the facility has the potential to add about six additional instruments and possibly even a second experimental hall, all fed by X-rays generated by the existing superconducting electron linac. Without a doubt, Europe has now entered the extreme X-ray era.
Self-Amplified Spontaneous Emission (SASE), the underlying principle of X-ray free-electron lasers, is based on the interaction between a relativistic electron beam and the radiation emitted by the electrons as they are accelerated through a long alternating magnetic undulator array (see image). If the undulator is short, on the order of a few metres, and the undulating path is well defined with a small amplitude, the radiation emitted by one electron adds up coherently at one particular wavelength as it travels through the undulator. Hence, the intensity is proportional to Np², where Np is the number of undulator periods (typically around 100). This is the regular undulator radiation generated at third-generation synchrotron sources such as the ESRF in France or APS in the US, and also at the next generation of diffraction-limited storage rings, such as MAX IV in Sweden. On the other hand, if the undulator is very long, the interactions between the electrons and the radiation field that builds up will eventually lead to micro-bunching of the electron beam into coherent packages that radiate in phase (see image). This results in a huge amplification (lasing) of the emitted intensity, as it becomes proportional to Ne², where Ne is the number of electrons emitting in phase within the co-operation length (typically 10⁶ or more). The hard-X-ray undulators of the European XFEL have magnetic lengths of 175 m in order to ensure that SASE works over a wide range of photon energies and electron-beam parameters. High electron energy, small energy spread and a small emittance (the product of beam size and divergence) are crucial for SASE to work in the X-ray range. Together with the requirement of very long undulators, this favours the use of linac sources, instead of storage rings, for X-ray lasers.
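As a rough guide to the size of this effect (a simplified comparison that treats the electrons within the co-operation length as radiating either completely independently or perfectly in phase):

\[
I_{\text{incoherent}} \propto N_e, \qquad I_{\text{SASE}} \propto N_e^{2}
\quad\Rightarrow\quad
\frac{I_{\text{SASE}}}{I_{\text{incoherent}}} \sim N_e \approx 10^{6},
\]

which conveys why micro-bunching boosts the emitted intensity by many orders of magnitude over ordinary undulator radiation.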
The European X-ray Free-Electron Laser (European XFEL), now entering operations at Hamburg in Germany, will generate ultrashort X-ray flashes at a rate of 27,000 per second with a peak brilliance one billion times higher than the best conventional X-ray sources. The outstanding characteristics of the facility will open up completely new research opportunities for scientists and industrial users (see “Europe enters the extreme X-ray era”). Involving close co-operation with nearby DESY and other organisations worldwide, the European XFEL is a joint effort between many countries. No fewer than 17 European institutes contributed to the accelerator complex, with the largest in-kind (> 70%) and other contributions coming from DESY.
The story of the European XFEL is a wonderful example of R&D synergy between the high-energy physics and light-source worlds. At the heart of the European XFEL are superconducting radio-frequency (SRF) cavities that allow the 1.4 km-long linac to accelerate electrons highly efficiently. Despite the clear benefits of using SRF cavities, before the mid-1990s the technology was not mature enough and too expensive to be practical for a large facility. Experience gained at DESY and other major accelerator facilities – including LEP at CERN and CEBAF at Jefferson Lab – changed that picture. It became clear that superconducting accelerating structures with reasonably large gradients can produce high-energy electron beams in long continuous linac sections.
Enter TESLA
A major character in the European XFEL story is the TESLA (TeV Energy Superconducting Linear Accelerator) collaboration, which was founded in 1990 by key players of the SRF community. Among its challenges was to make SRF cavities more affordable. DESY offered to host essential infrastructure and a test facility to operate newly designed accelerator modules housing eight standardised cavities. The first module was built in the mid-1990s in collaboration with many of the later contributors to the European XFEL, and the first electron beam was accelerated in 1997.
The enormous flexibility in how electron bunches can be structured has meant that, from the beginning, there has been a close connection between free-electron lasers and superconducting accelerator technology: examples can be found at Stanford University, Darmstadt University, Dresden-Rossendorf, Jefferson Lab and DESY. From the start of the TESLA R&D, it was envisaged that SRF technology would drive a superconducting linear collider operating at a centre-of-mass energy of 500 GeV, with the possibility of extending this to 800 GeV. This facility would have had two linear accelerators pointing towards one another: one for electrons, which would also be used to drive an X-ray laser facility, and one for positrons. At the time, high-energy physicists were weighing up other linear-collider designs in the US and Japan, but TESLA was unique in its choice of superconducting accelerating cavities. In 1997, DESY and the TESLA collaboration published a Conceptual Design Report for a superconducting linear collider with an integrated X-ray laser facility.
Although DESY was preparing for a hard-X-ray FEL, the first goal was to build an intermediate facility operating at slightly lower X-ray energies (corresponding to an output in the VUV region). In 2005 the VUV-FEL at DESY (today known as FLASH) produced laser light at a wavelength of 30 nm based on the SASE principle, which allows the generation of coherent X-ray light. The project preparation phase for the European XFEL began in 2007, with the official start declared in 2009 after the foundation of the European XFEL company. Plans to build a linear collider at DESY were dropped, but in 2004 the TESLA design was chosen for a new International Linear Collider (ILC). This machine is now “shovel ready” and the Japanese government has expressed interest in hosting it, although a final decision is awaited. Since the European XFEL uses TESLA technology at a large scale, the now finished superconducting linac can be considered a prototype for the linear collider. Moreover, the successful technology transfer with industry that underpinned the construction of the European XFEL serves as a model for a worldwide linear collider effort.
The European XFEL, measuring 3.4 km in length, begins with the injector, which comprises a normal-conducting RF electron gun with a high bunch charge and low emittance. This is followed by a standard superconducting eight-cavity XFEL accelerator module, which takes the electron bunch to an energy of around 130 MeV. A harmonic 3.9 GHz accelerator module (provided by INFN and DESY) further alters the longitudinal beam profile, while a laser heater provided by Uppsala University increases the uncorrelated energy spread. At the end of the injector, 600 μs-long electron-bunch trains of typically 500 pC bunches are available for acceleration.
Once in the main linac of the European XFEL, the electron beam is accelerated in three sections. The first consists of four superconducting XFEL modules and presents a fairly modest gradient (far below the XFEL design gradient of 23.6 MV/m). The second linac section consists of 12 accelerator modules, from which the beam emerges with a relative energy spread of 0.3% at 2.4 GeV. The third and last linac section consists of 80 accelerator modules with an installed length of just less than 1 km. Bunch-compressor sections between the three main linac sections include dipole-magnet chicanes, further focusing elements and beam diagnostics.
Taking into account all installed main-linac accelerator modules, the achievable electron beam energy of the European XFEL is above its design energy of 17.5 GeV, although the exact figure will depend on optimising the RF control. The complete linac is suspended from the ceiling to keep the tunnel floor free for transport and the installation of electronics. During accelerator operation the electrons are distributed via fast kicker magnets into one of the two electron beamlines that feed several photon beamlines. Here, undulators provide X-ray photon beams for various experiments (see “Europe enters the extreme X-ray era”).
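A quick consistency check of these numbers is sketched below; the roughly 1 m active length per cavity is an assumption typical of TESLA-type 1.3 GHz cavities rather than a figure from the text.

```python
# Rough energy reach of the main linac if every cavity ran at the design gradient
modules = 4 + 12 + 80          # accelerator modules in the three main-linac sections
cavities_per_module = 8
active_length_m = 1.04         # assumed active length of one TESLA-type cavity
design_gradient_MV_per_m = 23.6

energy_MV = modules * cavities_per_module * active_length_m * design_gradient_MV_per_m
print(energy_MV / 1e3)         # ~18.8 GeV: comfortably above the 17.5 GeV design energy,
                               # leaving margin for the low-gradient first section and for
                               # cavities operated below the design value
```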
Meeting the production challenge
The superconducting accelerator modules for the European XFEL linac were contributed by DESY, CEA Saclay and LAL Orsay in France, INFN Milano in Italy, IPJ Swierk and Soltan Institute in Poland, CIEMAT in Spain and BINP in Russia. More than 100 modules were needed, and although they were based on a prototype developed for the TESLA linear collider, they had to be modified for large-scale industrial production. DESY, which had responsibility for the construction and operation of the particle accelerator, developed a consortium scheme in which collaborators could contribute in-kind, either by producing sub-components or by assuming responsibility for module assembly or component testing. A sophisticated supply chain was established and the pioneering work at FLASH provided invaluable help in dealing with initial challenges.
A standard accelerator module contains eight superconducting cavities, each supplied by one RF power coupler, and a superconducting quadrupole package, which includes correction coils and a beam-position monitor. Each module also contains cold vacuum components such as bellows and valves, and frequency tuners. During the R&D and project-preparation phases, less than one accelerator module per year was assembled, so building the European XFEL required a factor-of-30 increase in the production rate. Two European companies – Research Instruments in Germany and Zanon in Italy – shared the task of producing 800 superconducting cavities from solid niobium. Cavity-string and module assembly took place at CEA Saclay/Irfu using completely new infrastructure called the XFEL village. Assembly was directly affected by the availability of all accelerator-module sub-components, and any break in the supply chain was seen as a risk to the overall project schedule. In the end, a total of 96 successfully tested XFEL modules were made available for tunnel installation within a period of just two years.
The operation of the superconducting accelerator modules also requires extensive dedicated infrastructure. DESY provided the RF high-power system and developed the required 10 MW multi-beam klystrons with industrial partners. A total of 27 klystrons, each supplying RF power for 32 superconducting structures (four accelerator modules), were ordered from two vendors. Precision regulation of the RF fields inside the accelerating cavities, which is essential to provide a highly reproducible and stable electron beam, is achieved by a powerful control system developed at DESY. BINP Novosibirsk produced and delivered major cryogenic equipment for the linac, while the cryogenic plant itself (an in-kind contribution from DESY) guarantees that pressure variations will stay below 1%. The largest visible contributions to the warm beamline sections are the more than 700 beam-transport magnets and the 3 km of vacuum system. While most of the magnets were delivered by the Efremov Institute in St Petersburg, a small fraction was built by BINP Novosibirsk and completed at Stockholm University. Many metres of beamline, whether simple straight chambers or the more sophisticated flat bunch-compressor chambers, were also fabricated by BINP Novosibirsk.
State-of-the-art electron-beam diagnostics is vital for the success of the European XFEL. Thus 64 screens and 12 wire scanner stations, 460 beam-position monitors of eight different types, 36 toroids and six dark-current monitors are distributed along the accelerator. Longitudinal bunch properties are measured by bunch-compression monitors, beam-arrival monitors, electro-optical devices and transverse deflecting systems. Major contributions to the electron-beam diagnostics came from DESY, PSI in Switzerland, CEA Saclay in France, and from INR Moscow in Russia.
Technology goes full circle
Commissioning for the European XFEL accelerator began in December 2016 with the cool-down of the complete cryogenic system. First beam was injected into the main linac in January 2017, and by March bunches with a sufficient beam quality to allow lasing were accelerated to 12 GeV and stopped in a beam dump. After passing this beam through the “SASE1” undulator, first lasing at a wavelength of 0.9 nm was observed on 2 May. Further improvements to the beam quality and alignment led to lasing at 0.2 nm on 24 May. More than 90% of the installed accelerator modules are now in RF operation, with effective accelerating gradients reaching the expected performance in fully commissioned stations.
The first hard-X-ray SASE free-electron laser, the Linac Coherent Light Source (LCLS) at SLAC in the US, was based on a normal-conducting accelerator. Its upgrade, LCLS-II, now aims for continuous-wave operation using 280 superconducting cavities of essentially the same design as those of the European XFEL. Improvements to the superconducting technology were made to further reduce the cryogenic load of the accelerator structures. New techniques such as nitrogen doping and infusion, developed by Fermilab and other LCLS-II partners, are also essential, and the established procedures and expertise with series production will benefit future FEL user operation. The European SRF expertise and collaboration scheme that now exists also sketches out a mechanism for a European in-kind contribution to a Japan-hosted ILC.
The European XFEL is one of the largest accelerator-based research facilities in the world, and is driven by the longest and most advanced superconducting linac ever constructed. This was possible thanks to the great collaborative effort and team spirit of all partners involved in this project over the past 20 years or more.
Natural diamonds are old, almost as old as the planet itself. They mostly originated in the Earth’s mantle around 1 to 3.5 billion years ago and typically were brought to the surface during deep and violent volcanic eruptions some tens of millions of years ago. Diamonds have been sought after for millennia and still hold status. They are also one of our best windows into our planet’s dynamics and can, in what is essentially a galactic narrative, convey a rich story of planetary science. Each diamond is unique in its chemical and crystallographic detail, with micro-inclusions and impurities within them having been protected over vast timescales.
Diamonds are usually found in or near the volcanic pipe that brought them to the surface. It was at one of these, near Kimberley, South Africa, that the diamond rush began in 1871 – and where the mineral that hosts most diamonds got its name: kimberlite. Many diamond sources have since been discovered and there are now more than 6000 known kimberlite pipes (figure 1 overleaf). However, with current mining extraction technology, which generally involves breaking up raw kimberlite to see what’s inside, diamonds are often damaged and sources are steadily becoming mined out. Today, a diamond mine typically lasts for a few decades, and it costs around $10–26 to process each tonne of rock. With the number of new, economically viable diamond sources declining – combined with high rates of extraction, ageing mines and increasing costs – most forecasts predict a decline in rough-diamond production compared to demand, starting as soon as 2020.
A new diamond-discovery technology called MinPET (mineral positron emission tomography) could help to ensure that precious sources of natural diamonds last for much longer. Inspired by the same principles used in the modern, high-rate, high-granularity detectors commonly found in high-energy physics experiments, MinPET uses a high-energy photon beam and PET imaging to scan mined kimberlite for large diamonds before the rocks are smashed to pieces.
From eagle eyes to camera vision
Over millennia, humans have invented numerous ways to look for diamonds. Early techniques to recover loose diamonds used the principle that diamonds are hydrophobic, so resist water but stick readily to grease or fat. Some stories even tell of eagles recovering diamonds from deep, inaccessible valleys, when fatty meat thrown onto a valley floor might stick to a gem: a bird would fly down, devour the meat, and return to its nest, where the diamond could be recovered from its droppings. Today, technology hasn’t evolved much. Grease tables are still used to sort diamond from rock, and the current most popular technique for recovering diamonds (a process called dense media separation) relies on the principle that kimberlite particles float in a special slurry while diamonds sink. The excessive processing required with these older technologies wastes water, takes up huge amounts of land, releases dust into the surrounding atmosphere, and also leads to severe diamond breakage.
Just 1% of the world’s diamond sources have economically viable grades of diamond and are worth mining. At most sites the gemstones are hidden within the kimberlite, so diamond-recovery techniques must first crush each rock into gravel. The more barren rock there is compared to diamonds, the more sorting has to be done. This varies from mine to mine, but is typically under one carat per tonne – more dilute than gold ores. Global production was around 127 million carats in 2015, meaning that mines are wasting millions of dollars crushing and processing about 100 million tonnes of kimberlite per year that contains no diamonds. We therefore have an extreme case of a very high-value particle within a large amount of worthless material – making it an excellent candidate for sensor-based sorting.
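To translate these grades into mass fractions, here is a short back-of-envelope sketch (one metric carat is 0.2 g; the other numbers are those quoted above):

```python
# Diamond grade expressed as a mass fraction
carat_g = 0.2                          # one metric carat in grams
grade_g_per_tonne = 1.0 * carat_g      # "under one carat per tonne"
print(grade_g_per_tonne / 1e6)         # ~2e-7, i.e. about 0.2 parts per million by mass

# Global 2015 production quoted above: ~127 million carats
total_diamond_tonnes = 127e6 * carat_g / 1e6
print(total_diamond_tonnes)            # ~25 tonnes of diamond, recovered from more than
                                       # 100 million tonnes of processed kimberlite
```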
Early forms of sensor-based sorting, which have only been in use since 2010, use a technique called X-ray-stimulated optical fluorescence, which essentially targets the micro-impurities and imperfections in each diamond (figure 2). Using this method, the mined rocks are dropped during the extraction process at the plant, and the curtain of falling rock is illuminated by X-rays, allowing a proportion of liberated or exposed diamonds to fluoresce and then be automatically extracted. The transparency of diamond makes this approach quite effective. When Petra Diamonds Ltd introduced this technique with several X-ray sorting machines costing around $6 million, the apparatus paid for itself in just a few months when the firm recovered four large diamonds worth around $43 million. These diamonds, presumed to be fragments of a single larger stone, were 508, 168, 58 and 53 carats, compared with the average one-carat engagement ring.
Very pure diamonds that do not fluoresce, and gems completely surrounded by rock, can remain hidden to these sensors. As such, a newer sensor-based sorting technique that uses an enhanced form of dual-energy X-ray transmission (XRT), similar to the technology for screening baggage in airports, has been invented to get around this problem. It can recover liberated diamonds down to 5 mm diameter, where 1 mm is usually the smallest size recovered commercially, and, unlike the fluorescing technique, can detect some locked diamonds. These two techniques have brought the benefits of sensor-based sorting into sharp focus for more efficient, greener mines and for reducing breakage.
Recent innovations in particle-accelerator and particle-detector technology, in conjunction with high-throughput electronics, image-processing algorithms and high-performance computing, have greatly enhanced the economic viability of a new diamond-sensing technology using PET imaging. PET, which has strongly benefitted from many innovations in detector development at CERN, such as BGO scintillating crystals for the LEP experiments, has traditionally been used to observe processes inside the body. A patient must first absorb a small amount of a positron-emitting isotope; the ensuing annihilations produce patterns of gamma rays that can be reconstructed to build a 3D picture of metabolic activity. Since a rock cannot be injected with such a tracer, MinPET requires us to irradiate rocks with a high-energy photon beam and generate the positron emitter via transmutation.
The birth of MinPET
The idea to apply PET imaging to mining began in 1988, in Johannesburg, South Africa, where our small research group of physicists used PET emitters and positron spectroscopy to study the crystal lattice of diamonds. We learnt of the need for intelligent sensor-based sorting from colleagues in the diamond mining industry and naturally began discussing how to create an integrated positron-emitting source.
Advances in PET imaging over the next two decades led to increased interest from industry, and in 2007 MinPET achieved its first major success in an experiment at Karolinska hospital in Stockholm, Sweden. With a kimberlite rock playing the role of a patient, irradiation was performed at the hospital’s photon-based cancer-therapy facility and the kimberlite was then imaged at the small-animal PET facility in the same hospital. The images clearly revealed the diamond within, with PET imaging of diamond in kimberlite reaching an activity contrast of more than 50 (figure 3). This result led to a working technology demonstrator involving a conveyor belt that presented phantoms to a PET camera: rocks doped with a sodium PET emitter represented the kimberlite, some of them containing a sodium hotspot to represent a hidden diamond. These promising results attracted funding, staff and students, enabling the team to develop a MinPET research laboratory at iThemba LABS in Johannesburg. The work also provided an important early contribution to South Africa’s involvement in the ATLAS experiment at CERN’s Large Hadron Collider.
By 2015 the technology was ready to move out of the lab and into a diamond mine. The MinPET process (figure 4) involves using a high-energy photon beam of some tens of MeV to irradiate a kimberlite rock stream, turning some of the light stable isotopes within the kimberlite into transient positron emitters, or PET isotopes, which can then be imaged much as in medical PET diagnostics. The rock stream is buffered for a period of 20 minutes before imaging, because by then carbon-11 is the dominant PET isotope. Since non-diamond sources of carbon have a much lower carbon concentration than diamond, or are diluted and finely dispersed within the kimberlite, diamonds show up on the image as a carbon-concentration hotspot.
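A minimal sketch of why the 20-minute buffer works, assuming for illustration that a short-lived competitor such as oxygen-15 (half-life about 2 minutes, produced from the oxygen in silicate rock) starts out with comparable activity:

```python
import math

def remaining_fraction(t_min, half_life_min):
    """Fraction of an isotope's activity left after t_min minutes of decay."""
    return math.exp(-math.log(2) * t_min / half_life_min)

buffer_min = 20.0
print(remaining_fraction(buffer_min, 20.4))  # carbon-11: ~0.5 of its activity survives
print(remaining_fraction(buffer_min, 2.0))   # oxygen-15: ~0.001 survives, so carbon-11 dominates
```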
The speed of imaging is crucial to the viability of MinPET. The detector system must process up to 1000 tonnes of rock per hour to meet the rate of commercial rock processing, with PET images acquired in just two seconds and image processing taking just five seconds. This is far in excess of medical-imaging needs and required the development of a very high-rate PET camera, which was optimised, designed and manufactured in a joint collaboration between the present authors and a nuclear electronic technology start-up called NeT Instruments. MinPET must also take into account rate capacity, granularity, power consumption, thermal footprints and improvements in photon detectors. The technology demonstrator is therefore still used to continually improve MinPET’s performance, from the camera to raw data event building and fast-imaging algorithms.
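A rough sense of what those numbers imply, assuming a single continuous rock stream feeding the camera:

```python
# Rock throughput seen by the PET camera (back-of-envelope)
tonnes_per_hour = 1000.0
tonnes_per_second = tonnes_per_hour / 3600.0
print(tonnes_per_second)                  # ~0.28 tonnes of kimberlite every second

acquisition_s = 2.0                       # each PET image is acquired in two seconds
print(tonnes_per_second * acquisition_s)  # ~0.56 tonnes of rock per acquisition window,
                                          # far beyond anything a clinical PET scanner handles
```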
An important consideration when dealing with PET technology is ensuring that radiation remains within safe limits. If diamonds are exposed to extremely high doses of radiation, their colour can change – something that can be done deliberately to alter the gems, but which reduces customer confidence in a gem’s history. Although the diamonds are irradiated, the dose they receive during the MinPET activation process is well below what they would receive from nature’s own background. It has turned out, quite amazingly, that MinPET offers a uniquely radiologically clean scenario. The carbon PET activity and a small amount of sodium activity are the only significant activations, and these have relatively short half-lives of 20 minutes and 15 hours, respectively. The irradiated kimberlite stream soon becomes indistinguishable from non-irradiated kimberlite, and therefore has a low activity and allows normal mine operation.
Currently, XRT imaging techniques require each particle of kimberlite rock being processed to be isolated and smaller than 75 mm; within this stream only liberated diamonds that are at least 5 mm wide can be detected and XRT can only provide 2D images. MinPET is far more efficient because it is currently able to image locked diamonds with a width of 4 mm within a 100 mm particle of rock, with full 3D imaging. The size of diamonds MinPET detects means it is currently ideally suited for mines that make their revenue predominantly from large diamonds (in some mines breakage is thought to cause up to a 50% drop in revenue). There is no upper limit for finding a liberated diamond particle using MinPET, and it is expected that larger diamonds could be detected in up to 160 mm-diameter kimberlite particles.
To crumble or shine
MinPET has now evolved from a small-scale university experiment to a novel commercial technology, and negotiations with a major financial partner are currently at an advanced stage. Discussions are also under way with several accelerator manufacturers to produce a 40 MeV beam of electrons with a power of 40–200 kW, which is needed to produce the original photon beam that kick-starts the MinPET detection system.
Although the MinPET detection system costs slightly more than other sorting techniques, overall expenditure is less because processing costs are reduced. Envisaged MinPET improvements over the next year are expected to take the lower limit of discovery down to as little as 1.5 mm for locked diamonds. The ability to reveal entire diamonds in 3D, and locating them before the rocks are crushed, means that MinPET also eliminates much of the breakage and damage that occurs to large diamonds. The technique also requires less plant, energy and water – all without causing any impact on normal mine activity.
The world’s diamond mines are increasingly required to be greener and more efficient. But the industry is also under pressure to become safer, and the ethics of mining operations are a growing concern among consumers. In a world increasingly favouring transparency and disclosure, the future of diamond mining has to be in using intelligent, sensor-based sorting that can separate diamonds from rock. MinPET is the obvious solution – eventually allowing marginal mines to become profitable and the lifetime of existing mines to be extended. And although today’s synthetic diamonds offer serious competition, natural stones are unique, billions of years old, and came to the surface in a violent fiery eruption as part of a galactic narrative. They will always hold their romantic appeal, and so will always be sought after.
Where were you on 4 July 2012, the day the Higgs boson discovery was announced? Many people will be able to answer without referring to their diary. Perhaps you were among the few who had managed to secure a seat in CERN’s main auditorium, or who joined colleagues in universities and laboratories around the world to watch the webcast. For me, the memory is indelible: 3.00 a.m. in Watertown, Massachusetts, huddled over my laptop at the kitchen table. It was well worth the tired eyes to witness remotely an event that will happen once in a lifetime.
“I think we have it, no?” was the question posed in the CERN auditorium on 4 July 2012 by Rolf Heuer, CERN’s Director-General at the time. The answer was as obvious as the emotion on faces in the crowd. The then ATLAS and CMS spokespersons, Fabiola Gianotti and Joe Incandela, had just presented the latest Higgs search results based on roughly two years of LHC operations at energies of 7 and 8 TeV. Given the hints for the Higgs presented a few months earlier in December 2011, the frenzy of rumours on blogs and intense media interest during the preceding weeks, and a title for the CERN seminar that left little to the imagination, the outcome was anticipated. This did not temper excitement.
Since then, we have learnt much about the properties of this new scalar particle, yet we are still at the beginning of our understanding. It is the final and most interesting particle of the Standard Model of particle physics (SM), and its connections to many of the deepest current mysteries in physics mean the Higgs will remain a focus of activities for experimentalists and theorists for the foreseeable future.
Speculative theories
The Higgs story began in the 1960s with speculative ideas. Theoretical physicists understood how the symmetries of materials can spontaneously break down, such as the spontaneous alignment of atoms when a magnet is cooled from high temperatures, but it was not yet understood how this might happen for the symmetries present in the fundamental laws of physics. Then, in three separate publications by Brout and Englert, by Higgs, and by Guralnik, Hagen and Kibble in 1964, the broad particle-physics structures for spontaneous symmetry breaking were fleshed out. In this and subsequent work it became clear that a scalar field was a cornerstone of the general symmetry-breaking mechanism. This field may be excited and oscillate, much like the ripples that appear on a disturbed pond, and the excitation of the Higgs field is known as the Higgs boson.
As the detailed theoretical structure of symmetry breaking in nature was later developed, in particular by Weinberg, Glashow, Salam, ’t Hooft and Veltman, the precise role of the Higgs in the SM evolved to its modern form. In addition to explaining what we see in modern particle detectors, the Higgs plays a leading role in the evolution of the universe. In the hot early epoch an infinitesimally small fraction of a second after the Big Bang, the Higgs field spontaneously “slipped” from having zero average value everywhere in space to having an average value equivalent to about 246 GeV. When this happened, any field that was previously kept massless by the SU(2) × U(1) gauge symmetries of the SM instantly became massive.
Before delving further into the vital role of the Higgs, it is worth revisiting a couple of common misconceptions. One is that the Higgs boson gives mass to all particles. Although all of the known massive fundamental particles obtain their mass by interacting with the pervasive Higgs field, there are non-elementary particles, such as the proton, whose mass is dominated by the binding energy of the strong force that holds its constituent gluons and quarks together. So very little of the mass we see in nature comes directly from the Higgs field. Another misconception is that the Higgs boson gives mass to everything it interacts with. On the contrary, the Higgs has very important interactions with two massless fundamental fields: the photon and the gluon. The Higgs is not charged under the forces associated with the photon and the gluon (quantum electrodynamics and quantum chromodynamics), and therefore cannot give them mass, but it can still interact with them. Indeed, somewhat ironically, it was precisely its interactions with massless gluons and photons that revealed the existence of the Higgs boson in the summer of 2012.
The one remaining unmeasured free parameter of the SM at that time, which governs which production and decay modes the particle can have, was the Higgs boson mass. In the early days it was not at all clear what the mass of the Higgs boson would be, since in the SM this is an input parameter of the theory. Indeed, it is notable that in the seminal 1975 paper on its experimental phenomenology by Ellis, Gaillard and Nanopoulos, the allowed Higgs mass range spanned four orders of magnitude, from 18 MeV to over 100 GeV, with experimental prospects at the upper end of that range opaque at best (figure 1).
How the Higgs was found
By 4 July 2012 the picture was radically different. The Higgs no-show at previous colliders, including LEP at CERN and the Tevatron at Fermilab, had constrained its mass to be greater than 114 GeV and to lie outside the range 147–180 GeV, while theoretical limits on the allowed properties of W- and Z-boson scattering required it to be below around 800 GeV. If nature used the SM version of the Higgs mechanism, there was nowhere left to hide once CERN’s LHC switched on. In the end, the Higgs weighed in at the relatively light mass of 125 GeV. How the different Higgs cross-sections, which set the production rates for the various processes, depend on this mass is shown in figure 2 (left).
Producing the Higgs alone would not be sufficient for discovery. It would also have to be observed, which depends on the fractions in which the Higgs boson decays to different final states (figure 2, right). If heavy, one would have to search for decays to the weak gauge bosons, W and Z; if lighter, a cocktail of decays would light up detectors. Going further, if thousands of Higgs bosons could be produced, then decays to pairs of photons might show up. Thus, by the time of LHC operation, the basic theoretical recipe was relatively simple: pick a Higgs mass, calculate the SM predictions and search.
On the other hand, the experimental recipe was far from simple. The LHC, a particle accelerator capable of colliding protons at energies far beyond anything previously achieved, was a necessity. But energy alone was not enough, as sufficient numbers of Higgs bosons also had to be produced. Although occurring at a low rate, Higgs decays into pairs of massless photons would prove to be experimentally clean and furnish the best opportunity for discovery. Once detection efficiencies, backgrounds and requirements of statistical significance are folded into the mix, on the order of 100,000 Higgs bosons would be required for discovery. This is a tall order, yet that is what the accelerator teams delivered to the detectors.
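A hedged back-of-envelope version of that statement, taking the Standard Model diphoton branching ratio at 125 GeV (about 0.2%) and assuming, for illustration, a 50% overall selection efficiency:

```python
# Why of order 100,000 Higgs bosons were needed for a diphoton discovery
n_higgs = 1e5
br_diphoton = 2.3e-3        # SM branching ratio for H -> gamma gamma at ~125 GeV
efficiency = 0.5            # assumed overall trigger and selection efficiency (illustrative)

n_signal = n_higgs * br_diphoton * efficiency
print(n_signal)             # ~100 signal events, to be isolated as a narrow bump on a
                            # large, smoothly falling diphoton background
```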
With the accelerator running, it remained to observe the new particle. This would push ingenuity to its limits. Physicists on the ATLAS and CMS experiments would need to work night and day to filter through the particle detritus of innumerable proton–proton collisions to select the data sets of interest. The search set tremendous challenges for the energy-resolution and particle-identification capabilities of the detectors, not to mention dealing with enormous volumes of data. In the end, the result of this labour reduced to a couple of plots (figure 3). The discovery was clear for each collaboration: a significance reaching the 5σ “discovery” threshold. In a further irony for the mass-giving Higgs, the discovery was driven primarily by the rare but powerful diphoton decays, followed closely by Higgs decays to Z bosons. Global media erupted in a science-fuelled frenzy. It turns out that everyone gets excited when a fundamental building block of nature is discovered.
The hard work begins
The joy in the experimental and theoretical communities in the summer of 2012 was palpable. If we were to liken early studies of the electroweak forces to listening to a crackling radio, LEP had given us black and white TV and the LHC was about to show us the world in full cinematic colour. Particle physicists now had the work they had waited a lifetime to do. Is it the SM Higgs boson, or something else, something exotic? All we knew at the time was that there was a new boson, with mass of roughly 125 GeV, that decayed to photons and Z bosons.
Despite the huge success of the SM, there was every reason to hope that the new boson would not be of the common variety. The Higgs brings us face-to-face with questions that the SM cannot answer, such as what constitutes dark matter (observed to make up roughly 80% of all the matter in the universe). Unlike the other SM particles, it is uncharged and without spin, and can therefore interact easily with any other neutral scalar particles. This makes it a formidable tool in the hunt for dark matter – a possibility we often call the “Higgs portal”. The ATLAS and CMS collaborations have been busy exploring the Higgs portal, and we now know that the Higgs decay rate into invisible new dark particles must be less than 34% of its total rate into known particles. This is an incredible thing to know for a particle that is itself so elusive, and a significant early step for dark-sector physics.
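The portal referred to here has a simple mathematical origin. Because the combination H†H is the only gauge-invariant SM operator of mass dimension two, a new neutral scalar S can couple to the SM through a single renormalisable term, often written schematically (a textbook parametrisation, not tied to any specific model) as

\mathcal{L} \supset -\,\lambda_{hs}\,(H^{\dagger}H)\,S^{2},

where λ_hs is an unknown coupling. After electroweak symmetry breaking this term lets the 125 GeV boson decay to pairs of S particles, and if S is stable those decays show up only as missing energy – exactly the invisible decay rate that the 34% limit constrains.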
Another deep puzzle, even more esoteric than dark matter and one that has driven the theoretical community to distraction for decades, is the hierarchy problem. We know that at higher energies (smaller sizes) there must be more structure to the laws of nature: the scale of quantum gravity, the Planck scale, is one example, but there are hints of others. For any other SM particle, this new physics at high energies has no dramatic effect, since fundamental particles with nonzero spin possess special protective symmetries that shield them from large quantum corrections. But the Higgs possesses no such symmetry, and is thus a sensitive creature: quantum-mechanical effects give large corrections to its mass, pulling it all the way up to the masses of the new particles it interacts with. That has clearly not happened, given the mass we measure in experiments, so what is going on?
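To make the sensitivity concrete: if new physics enters at an energy scale Λ, quantum corrections shift the Higgs mass-squared by contributions whose typical size is, for the dominant top-quark loop (a standard rough estimate rather than a precise formula),

\delta m_H^2 \sim -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2,

where y_t is the top Yukawa coupling. For Λ anywhere near the Planck scale this dwarfs the observed (125 GeV)² unless something cancels the correction with extraordinary precision.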
Thus the discovery of the Higgs brings the hierarchy problem to the fore. If the Higgs is composite, being made up of other particles, in a similar fashion to the ubiquitous QCD pion, then the problem simply goes away because there is no fundamental scalar in the first place. Another popular theory, supersymmetry, postulates new space–time symmetries, which protect the Higgs boson from these quantum corrections and could modify its properties. Measurements of the Higgs interactions thus indirectly probe this deepest of questions in modern particle physics. For example, we now know the interaction between the Higgs boson and the Z boson to an accuracy at the level of 10%, a significant constraint on these theories.
It is also crucial that we understand the way the Higgs interacts with fermions. Anyone who has ever looked up the masses of the quarks and leptons will have seen that they follow cryptic hierarchical patterns, while families of fermions can also mix into one another, through the emission of a W boson, in peculiar patterns that we do not yet understand. By playing the starring role in generating particle masses, and a supporting one in generating the mixings, the Higgs could shed light on these mysteries.
At the time of the Higgs discovery in 2012, the only interactions we were certain of concerned bosons: photons, W and Z bosons, and, to a certain degree, gluons. There was emerging evidence for interactions with top quarks, but it was circumstantial, coming from the role of the top quark in the quantum-mechanical process that generates Higgs interactions with gluons and photons. After a four-year wait, in 2016 ATLAS and CMS combined forces to reach the first 5σ direct discovery of Higgs interactions with a fermion: the τ lepton, to be precise. This was a significant milestone, not least because it also happened to give the first direct evidence of Higgs interactions with leptons.
The scope of the Higgs programme has also broadened since the early days of the discovery. This applies not only to the precision with which certain couplings are measured, but also to the energy at which they are probed. For example, when the Higgs boson is produced via the fusion of two gluons at the LHC, additional gluons or quarks may be emitted at high energies. By observing such “associated production” we may gain information both about the magnitude of a Higgs interaction and about its detailed structure. Hence, if new particles that influence Higgs interactions exist at high energies, probing Higgs couplings at high energies may reveal their existence. The price to be paid for associated production is that the probability, and hence the rate, is low (figure 2). The ever-increasing number of Higgs production events recorded at the LHC over the past five years has allowed physicists to begin mapping the nature of the Higgs boson’s interactions.
What’s next?
We have much to anticipate. Although the Higgs is too light to decay into pairs of top quarks, experimentalists will study its interactions with the top quark by observing Higgs bosons produced in association with top-quark pairs. Another anticipated discovery, which is difficult to pick out above other background processes, is the decay of the Higgs to bottom quarks. Amazingly, despite the tiny signal rate, the upgraded High-Luminosity LHC will be able to discover Higgs decays to muons. This would be the first observation of Higgs interactions with the second generation of fermions, pointing a floodlight towards the flavour puzzle. These measurements will bring the overall picture of how the Higgs generates particle masses into sharper focus. Even now, after only five years, the picture is becoming clear: Higgs physics is becoming a precision science at the LHC (figure 4).
There is more to Higgs physics than a shopping list of couplings, however. By the end of the LHC’s operation in the mid-2030s, more than one hundred million Higgs bosons will have been produced. That will allow us to search for extremely rare and exotic Higgs production and decay modes, perhaps revealing a first crack in the SM. On the opposing flank, by observing the standard production processes in extreme kinematic corners, such as Higgs production at very high momentum, we will be able to measure its interactions over a range of energies. In both cases the challenge will not only be experimental: the SM predictions must also keep pace with the accuracy of the measurements – a requirement that is already driving revolutions in our theoretical understanding.
Setting our sights on the distant future of Higgs physics, it would be remiss to overlook its “white whale”: the Higgs self-interaction. In yet another unique twist, the Higgs is the only particle in the SM that can scatter off an identical copy of itself (figure 5); gluons, by contrast, interact only with gluons in different colour states. If we could access the Higgs self-interaction, by determining how one Higgs boson scatters off another in measurements of Higgs-pair production, we would be measuring the shape of the Higgs scalar potential. This is tremendously important because, in theory, it determines the fate of the entire universe: if the scalar potential “turns back over” at high field values, it would imply that we live in a metastable state. There is mounting evidence, in the form of measured SM parameters such as the mass of the top quark, that this may be the case. Unfortunately, the LHC will not be able to measure this interaction well enough to definitively determine the shape of the Higgs scalar potential, so we must ultimately look to future colliders to answer this question, among others.
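In the SM the shape of that potential is fixed by quantities already measured. For the physical Higgs field h, with vacuum expectation value v ≈ 246 GeV, the textbook form reads

V(h) = \tfrac{1}{2} m_H^2 h^2 + \lambda v\, h^3 + \tfrac{1}{4}\lambda\, h^4, \qquad \lambda = \frac{m_H^2}{2 v^2} \approx 0.13,

and it is the trilinear h³ term that Higgs pair production probes. Any measured departure of the self-coupling from m_H²/2v² would therefore be direct evidence of physics beyond the SM.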
The Higgs is the keystone of the SM and therefore everything we learn about this new particle is central to the deepest laws of nature. When huddled over my laptop at 3.00 a.m. on 4 July 2012, I was 27 years old and in the first year of my first postdoctoral position. To me, and presumably the rest of my generation, it felt like a new scientific continent had been discovered, one that would take a lifetime to explore. On that day we finally knew it existed. Today, after five years of feverish exploration, we have in our hands a sketch of the coastline. We have much to learn before the mountains and valleys of the enigmatic Higgs boson are revealed.
Supernova explosions, neutron-star mergers and rare radioactive ions might not seem to have much connection to terrestrial matters. Yet, while the lightest elements were synthesised immediately after the Big Bang, and elements up to iron were created in stellar cores, the heavier elements, such as gold and platinum, were produced via complex production paths during extreme astrophysical events. Experiments with intense heavy-ion beams produced at the international Facility for Antiproton and Ion Research (FAIR), which is under construction at Darmstadt in Germany, promise new and detailed insights into the nuclear reactions and rare radioactive ion species that underpin the synthesis of heavy elements in the universe.
FAIR is a multipurpose accelerator facility that will provide beams, from protons up to uranium ions, with a wide range of intensities and energies, in addition to secondary beams of antiprotons and rare isotopes. Complementary to CERN’s Large Hadron Collider and Super Proton Synchrotron, FAIR pushes the intensity rather than the energy frontier for hadron beams. It will enable scientists to produce and study reactions involving rare exotic hadronic states and very short-lived radioactive nuclei, and to investigate processes under the extreme temperatures and pressures that prevail in large planets, stars and stellar explosions. FAIR will also allow physicists to produce and study dense hadronic matter and its transition to quark matter, and permit tests of quantum electrodynamics in the regime of very strong electromagnetic fields, to name but a few goals.
Overall, FAIR’s scientific programme comprises hadron physics, nuclear structure and astrophysics, atomic physics, plasma physics, materials research, and radiation biophysics and its applications in cancer therapy and space research. Its science is divided between four main pillars (see panel “FAIR’s four scientific pillars” below), including experiments similar in design to those in high-energy physics. After a lengthy and complex phase of development, a groundbreaking ceremony held on 4 July 2017 marked the start of construction of the FAIR facility.
Project evolution
FAIR was developed by the international science community and the GSI laboratory (the Helmholtz Centre for Heavy Ion Research) around the turn of the millennium. GSI, founded in 1969, has a long tradition in nuclear and atomic physics and, more generally, heavy-ion research, and was therefore a natural site on which to develop the next generation of accelerators and experiments for these fields. FAIR was formally launched on 7 October 2010, when nine partner countries (Finland, France, Germany, India, Poland, Romania, Russia, Slovenia and Sweden) signed an intergovernmental agreement for its construction and operation. The UK joined FAIR as an associate member in 2013.
During late 2014, the then FAIR management reported difficulties surrounding new construction requirements. Although not unusual for a complex, one-of-a-kind facility such as FAIR, these difficulties forced major modifications of the civil-construction design and resulted in a delay and cost increase for the overall project. In September 2015 the FAIR Council, representing the nine shareholders, unanimously agreed to adapt the FAIR construction budget and timeline to the necessary design modifications.
Following this key decision, FAIR was completely reorganised and consolidated: the FAIR and GSI GmbH companies aligned their managerial and administrative structures and processes, and a joint management team was installed in a stepwise process, with the former spokesperson of the ALICE experiment at CERN, Paolo Giubellino, appointed scientific managing director and spokesperson of FAIR-GSI in January 2017. Thanks to these and other changes, civil-construction work for the tunnel that will house FAIR’s main accelerator began on schedule this summer, with the goal of finishing all FAIR buildings by the end of 2022. In parallel, procurement of the FAIR accelerator systems and construction of the FAIR detector instrumentation are progressing well. Following the installation and commissioning of the accelerators and experiments, starting in 2020/2021, the FAIR science programme is expected to begin in 2025.
A journey through FAIR
The FAIR accelerator complex is optimised to deliver intense and energetic beams of particles to different production targets. The resulting beams will then be steered to various fixed-target experiments or injected into storage-cooler rings for novel in-ring experiments with beams of secondary antiprotons or radioactive ions at the highest beam qualities. The central machines of FAIR are: the fast-ramping SIS100 synchrotron, which provides intense primary beams; the large-aperture Super Fragment Separator (Super-FRS), which filters out the exotic ion beams; and the cooler storage rings CR and HESR (see image). The SIS100 is the heart of FAIR. With a circumference of 1.1 km and a maximum magnetic bending power of 100 Tm, the machine will accelerate ion beams with maximum intensities ranging from 4 × 1013 protons at 29 GeV to 5 × 1011 uranium (28+) ions at 2.7 GeV/u. The existing GSI accelerators UNILAC and SIS18 will serve as injectors and pre-accelerators for SIS100, while a new proton linac will be installed for high-intensity injection into the SIS18/SIS100 synchrotron chain.
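The quoted beam energies follow from the 100 Tm bending power via the standard magnetic-rigidity relation p [GeV/c] ≈ 0.2998 · Bρ [T m] · q. A minimal Python sketch of that conversion, using the charge states and mass numbers given in the text, is shown below.

import math

def kinetic_energy_per_nucleon(b_rho_tm, charge, mass_number, m_nucleon_gev=0.9315):
    """Kinetic energy per nucleon (GeV/u) at magnetic rigidity b_rho_tm (T*m)."""
    p_total = 0.2998 * b_rho_tm * charge   # total momentum in GeV/c
    p_per_u = p_total / mass_number        # momentum per nucleon
    total_e = math.sqrt(p_per_u**2 + m_nucleon_gev**2)
    return total_e - m_nucleon_gev         # subtract the rest mass per nucleon

# Protons (q = 1, A = 1) and U(28+) ions (q = 28, A = 238) at the SIS100 rigidity of 100 T*m
print(f"protons at 100 Tm: {kinetic_energy_per_nucleon(100, 1, 1):.0f} GeV")
print(f"U(28+) at 100 Tm:  {kinetic_energy_per_nucleon(100, 28, 238):.1f} GeV/u")

The result, about 29 GeV for protons and 2.7 GeV per nucleon for U(28+), reproduces the SIS100 design values quoted above.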
To maximise the luminosity of the SIS100, fast-ramped superconducting superferric magnets with a maximum field of 1.9 T and ramp rates of up to 4 T per second have been developed, enabling cycle times of the same order as the cooling times in the storage rings (see image). Together with the upgraded SIS18 pre-accelerator, the SIS100 will provide uranium-ion beams 10 times more intense than those previously available at GSI. The cold machine design has a further advantage: the cold SIS100 beam pipe itself pumps heavy residual-gas components, helping to stabilise the dynamic vacuum pressure. Owing to the tight beam-loss budget, the iron yokes of the superconducting magnets must be built with the highest precision and reproducibility. Production of the SIS100 dipole magnets has already started, and first beams from the SIS100 are foreseen for 2025. Three test facilities, at GSI, JINR in Dubna and CERN, have been established to assess the different types of superconducting magnets.
Two production targets, for rare-isotope and antiproton beams, will be served by the SIS100. A primary ion beam can either be extracted slowly to the Super-FRS over a period of many seconds, to produce radioactive secondary beams for fixed-target experiments, or extracted quickly as a single, compressed, short bunch, to produce a secondary beam of antiprotons or exotic ions. The in-flight-generated rare isotopes, produced via projectile fragmentation of any primary beam up to uranium-238 or via fission of uranium-238 beams, are efficiently separated in the large aperture of the Super-FRS. Thanks to the large acceptance of this machine, the gain in primary-beam intensity for uranium ions in the SIS100 translates into a gain of more than a factor of 1000 in the intensity of secondary beams of rare radioactive isotopes.
After production and separation, the hot secondary ion beams drive three experimental scenarios: they can be stopped to allow studies of their ground-state properties; used in in-flight and secondary reactions to produce even more exotic species; or stored and pre-cooled in the collector ring (CR). The fast stochastic cooling process in the CR relies on a fast de-bunching of the injected short bunch. Pre-cooled secondaries will then be transferred from the CR to the high-energy storage ring (HESR), where they can be accumulated and accelerated up to an energy of 15 GeV for antiprotons and about 5–6 GeV/u for very heavy ions. The HESR can also store and cool stable high-charge-state heavy-ion beams, directly injected from the SIS100 via the CR, for precision studies in atomic, nuclear and fundamental physics, such as tests of quantum electrodynamics (QED) in strong fields or tests of special relativity.
FAIR science ahead
About 3000 scientists, including more than 500 PhD students from around the world, will carry out experiments at FAIR to understand the fundamental structure of matter, to explore its exotic forms and to learn how the universe evolved from its primordial state. FAIR’s science programme is structured into four pillars and organised in four large collaborations with several hundred members each: APPA, serving communities in atomic physics, plasma physics and applications; CBM, the Compressed Baryonic Matter experiment; NUSTAR, the NUclear STructure, Astrophysics and Reactions programme; and PANDA (antiProton ANnihilation at DArmstadt), which aims to study hadrons using antiproton beams. APPA and NUSTAR consist of several sub-collaborations, while CBM and PANDA are monolithic experiments built around large detectors (see panel “FAIR’s four scientific pillars” below).
Well before the start of SIS100 operation in 2025, an upgrade of the GSI accelerators, due for completion this year, will allow extensive testing of FAIR components. It will also allow researchers to trial novel FAIR instrumentation in an attractive intermediate research programme named FAIR phase 0. For instance, the NUSTAR “R3B” spectrometer, the CRYRING and the HITRAP facility will be available and, in combination with the intensity-upgraded SIS18 synchrotron and GSI’s fragment separator, will enable novel experiments on nuclear structure and reactions in hitherto unexplored regions of the nuclear chart.
The CRYRING and the HITRAP facility will enable physicists to further increase the precision both of atomic-physics measurements of QED effects in highly charged heavy ions and of measurements of fundamental constants. Moreover, the hadronic-matter programme of HADES (High Acceptance Di-Electron Spectrometer) will benefit from the higher intensities available from the SIS18. HADES is a versatile detector for the study of dielectron (e+e−) and hadron production in heavy-ion collisions, as well as in proton- and pion-induced reactions, in the energy range of 1–4 GeV. These are just a few examples from the intermediate research programme, which will start in 2018 and offer about three months of beam time per year, bridging the gap until the commissioning of the SIS100.
The FAIR phase 0 programme intends to maintain and further establish the FAIR-GSI community by offering attractive science before the full complex is up and running. It will also educate and train the next generation of scientists and engineers for FAIR and, last but not least, maintain and extend the technical skills required to operate such a large accelerator complex. While FAIR phase 0 is an important and necessary step offering new and excellent research opportunities for users, full exploitation of the unique science potential opened up by FAIR has to await the start of SIS100 operation in 2025.
Depending on how rich the scientific harvest from FAIR turns out to be, and in which directions it proves most prominent, several upgrade options can be conceived. One is a further increase of intensities, by up to two orders of magnitude, for nuclear structure, reactions and astrophysics, which would also benefit dense-plasma research. Another is a further increase of beam energy, by a factor of 3–6, for hadron- and quark-matter research. Other possibilities include strengthening the antiproton research programme, via cooled low-energy antiproton beams, for the study of fundamental interactions and symmetries. FAIR is expected to be the flagship facility for hadron, nuclear and atomic physics – as well as related science fields exploiting intense beams of antiprotons and heavy ions – until around 2040.
FAIR's four scientific pillars
Atomic and Plasma Physics, and Applied sciences (APPA)
With about 700 participants, APPA is an umbrella for several sub-collaborations working across atomic physics, plasma physics and applied sciences, with specific programmes in biophysics, medical physics and materials science. Several experimental stations, in addition to the CRYRING and HESR storage rings and the trapping facility HITRAP, will allow the APPA community to tackle a variety of challenges. In atomic physics, for example, high-precision tests of bound-state QED in the non-perturbative regime become possible. A precise determination of fundamental constants such as the fine-structure constant is also a target, via very precise measurements of the bound-state g-factors of medium- to high-Z hydrogen-like ions confined in a trap. Plasma physicists will be able to create and probe dense plasmas to test models of planetary and stellar structure. Using FAIR beams, dedicated irradiation experiments can also simulate the high-energy component of galactic cosmic radiation, to assess the risks that space missions pose to astronauts and electronic equipment. Finally, the materials-science and geoscience communities will be able to test how materials respond to the simultaneous application of irradiation and pressure, which is of interest for the synthesis of new materials under highly non-equilibrium conditions and for understanding processes in the Earth’s mantle.
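To see why the bound-state g-factor mentioned above is such a sensitive probe, note that for the 1s electron of a hydrogen-like ion with nuclear charge Z, the Dirac (point-nucleus, QED-free) prediction is the standard result

g_D = \frac{2}{3}\left(1 + 2\sqrt{1-(Z\alpha)^{2}}\right),

which reduces to g = 2 as Zα → 0. Measured deviations from this value isolate bound-state QED and nuclear contributions in extremely strong fields, and the explicit dependence on α is what offers a complementary route to the fine-structure constant.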
The Compressed Baryonic Matter experiment (CBM)
The CBM experiment, which has more than 500 participants and is organised similarly to the LHC experiments at CERN, will use high-energy nucleus–nucleus collisions to investigate highly compressed nuclear matter. The fixed-target experiment is 10 m long and comprises a large-aperture superconducting dipole magnet followed by seven detector systems providing tracking and particle identification. CBM collisions will recreate the matter densities found in supernova explosions, the cores of neutron stars and neutron-star mergers. In contrast to the very high temperatures and low net-baryon densities reached at the Relativistic Heavy Ion Collider in Brookhaven and the LHC at CERN (conditions similar to those that prevailed microseconds after the Big Bang), the energies of the FAIR beams are perfectly suited to studying the QCD phase diagram of strongly interacting matter at large net-baryon densities and low temperatures. Here, the QCD phase diagram is expected to exhibit a rich structure, such as a critical point, a first-order phase transition between hadronic and partonic matter, or new phases such as quarkyonic matter. Discovering these landmarks would be a breakthrough in our understanding of the strong interaction. The CBM experiment is designed to run at interaction rates of up to 10 MHz, 3–4 orders of magnitude higher than the rates reached in other high-energy heavy-ion experiments. To achieve this it has very fast and radiation-hard detectors, a novel data read-out and analysis concept, and a high-performance computing cluster for online event reconstruction and selection.
The PANDA experiment
The antiProton ANnihilation at DArmstadt (PANDA) collaboration comprises more than 400 scientists from 19 countries – similar in structure to, but smaller than, the LHC experiments at CERN. Its goal is to understand hadrons using the power of an antiproton beam striking fixed hydrogen or other nuclear targets. Antiproton–proton annihilations have considerable advantages over proton–proton collisions, such as small momentum transfer at the maximum released energy, well-defined initial states and high-precision mass scanning. The vast difference in mass between the proton and its individual quark constituents is a result of the binding among quarks in the confinement regime, and exotic hadrons such as tetra- and pentaquarks, hybrids and glueballs will reveal uncharted properties of this binding. PANDA will use proton form-factor measurements, deep virtual Compton scattering and quark dynamics, as well as the behaviour of hadrons inside nuclear media, as highly complementary tools with which to understand the very nature of hadrons. Strange quarks in hyperons, for instance, can be used as tags to trace quark dynamics with very high cross-sections and spin degrees of freedom. The PANDA experiment features a modern multipurpose detector with excellent tracking, calorimetry and particle-identification capabilities. Together with the high-quality antiproton beam of FAIR’s high-energy storage ring (HESR), an unprecedented annihilation rate and sophisticated event filtering, it will be ideally suited to address important questions in all aspects of this field.
NUclear STructure, Astrophysics and Reactions (NUSTAR)
The NUSTAR collaboration at FAIR has more than 800 participants from 180 institutes located in 38 countries. Similar to APPA, NUSTAR does not represent a single monolithic experiment but is structured in several sub-collaborations across different experimental set-ups tailored to various aspects of secondary radioactive ions, such as mass and lifetime measurements. A major goal of NUSTAR is to improve our knowledge of the synthesis and abundance of chemical elements, for which the collaboration will explore the structure and reaction properties of very rare radioactive ions produced for the first time by FAIR. Although much has been learnt about the behaviour of stable and unstable nuclei in past decades, we are still far from understanding how the very heavy elements are formed through reactions involving rare nuclei at the limit of stability. FAIR will allow scientists to artificially produce the nuclei that occur as radioactive intermediate products in the formation of stable isotopes, measuring directly in the laboratory the different processes involved. FAIR offers unique tools for such studies. The Super-FRS will make very efficient use of the highly intense beams at high energies to separate beams of the heaviest and most neutron-rich nuclei, while FAIR’s complex network of storage rings will allow mass and lifetime measurements. This will place NUSTAR at the forefront of this branch of science. Many of NUSTAR’s experimental set-ups are already complete, and the collaboration plans to transfer them into the new buildings starting from 2023.
It is perhaps no coincidence that many dystopian visions of the future in popular fiction, such as Nineteen Eighty-Four, Brave New World and Fahrenheit 451, have breach of data privacy at the core of their plots. With an ever growing level of interaction between humans and a global infrastructure tied together by the internet, there is always the fear that others know more about you than you would like. How can we save ourselves from such a bleak future?
The answer has been to create, over the past 20 years, a number of strict legal obligations and rights when dealing with the personal data of individuals. You will notice that you are increasingly asked for consent for use of your personal data on websites and to allow software to store cookies on your computer. Such legislation is sometimes criticised for generating bureaucracy that gets in the way of “real work”. But for those who work in the data-privacy arena, it is clear that we need to adapt quickly to a rapidly evolving digital environment. What you do, where you go and how long you spend there are valuable assets in the information world.
In 2012 the European Union (EU) proposed new data-protection reforms to strengthen the fundamental rights of citizens. Three years later, EU institutions reached agreement on the rules, and in May 2016 a new regulation, the General Data Protection Regulation (GDPR), was adopted; it will apply in all European Economic Area (EEA) countries from 25 May 2018.
You probably haven’t heard much about the GDPR until now, yet it is almost certain to affect the way our field deals with personal data. The central idea is that your personal data is truly yours: it cannot be taken or processed without safeguards to its privacy, and any data collection or processing must have an appropriate legal basis. The new rules give a very broad interpretation of what “personal data” and “processing” mean, and set out a number of legal bases that must be considered. Personal data is anything that could be used to identify you, including obvious things like your name and address but also more subtle information such as GPS location or IP address. Processing is equally loosely defined, ranging from storing data in a database to viewing data on a screen and even copying a file.
Although in practice there are many details still to be determined, the intention of the regulators is evident: to prevent the use of people’s personal data except for well-defined purposes that, in fairness to the individual, must be made clear at the time the data are collected. Crucially, the new regulations aim to be technology-agnostic and therefore apply equally to an online database and to a filing cabinet full of paper.
All EEA institutions, companies, labs and universities will be subject to the GDPR. Although CERN, as an international organisation, is not directly subject to EU regulations, in light of the coming changes it is reviewing its internal legislation to offer equivalent levels of personal-data protection. Consequently, in January this year CERN established the Office of Data Privacy Protection to assist services that process personal data and to help anyone who is concerned about how their personal data is being handled by the Organization.
Given the broad scope of personal data and data processing, it can be complicated and somewhat burdensome to comply with these new practices. For instance, it will require us to review how passport information should be sent, how records such as medical information and personal attributes are secured, as well as how photos and CCTV are used. At the same time, we need to recognise that protecting privacy is important and that adopting a “nothing to hide, nothing to fear” approach does not protect us from future unknown uses of our personal data.
So, if in any doubt, simply adopt the golden rule of personal data: if you don’t really need it, don’t collect or store it; and if you do collect it, delete it as soon as it is no longer needed.