The LHC was built with a guaranteed discovery: the ATLAS and CMS experiments would either find a Higgs boson, or they would discover new physics in vector boson scattering (VBS) at high energies. The discovery of a Higgs-like boson in July 2012 confirmed that the W and Z bosons acquire mass through the Higgs mechanism, but to determine whether the observed particle corresponds to the single Higgs boson expected in the Standard Model (SM), it is now paramount to precisely measure the Higgs boson’s contributions to VBS. Because the behaviour of VBS amplitudes is sensitive to the way the Higgs and vector bosons couple to one another, and to the Higgs boson’s mass, models of physics beyond the SM predict enhancements to VBS, either through modifications to the Higgs sector or through the presence of additional resonances.
A recent analysis by CMS aimed to identify events in which a W-boson pair is produced purely via the electroweak interaction. Requiring events to have a same-sign W-boson pair reduces the probability of production via the strong interaction, making it an ideal signature for VBS studies. The first experimental results on this final state were reported by ATLAS and CMS based on 20 fb⁻¹ of LHC data collected in 2012 at an energy of 8 TeV, but were insufficient to claim an observation. The new study is based on 36 fb⁻¹ of data collected in 2016 at 13 TeV. Events were selected by requiring that they contain two leptons (electrons or muons) with the same electric charge, moderate missing transverse momentum, and two jets with a large rapidity separation and a large dijet mass. About 67 signal events were expected, with the dominant backgrounds coming from top quark–antiquark pairs and WZ boson pairs. The event yield of the signal process is then extracted using a 2D fit of the dijet and dilepton mass distributions (figure, left).
The new CMS study provides the first observation of the electroweak production of same-sign W-boson pairs in proton–proton collisions, with an observed significance of 5.5 standard deviations. The result does not point to physics beyond the SM: a cross-section of 3.8±0.7 fb is measured within the defined fiducial signal region, corresponding to 90±22% of the result expected. An excess of events could have been caused by the presence of a doubly charged Higgs boson that couples to W bosons, and the analysis sets upper bounds on the product of the cross-section and branching fraction for such particles (figure, right). Bounds on the structure of quartic vector-boson interactions are also obtained in the framework of dimension-eight effective field theory operators, and the measurements set 95% confidence-level limits that are up to six times more stringent than previous results.
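As a quick consistency check (a back-of-envelope illustration, not a calculation from the CMS paper), the quoted fraction implies an SM expectation for the fiducial cross-section of about 4.2 fb:

```python
# Back-of-envelope check using only the numbers quoted above;
# this illustrates the arithmetic, not the CMS analysis itself.
sigma_measured = 3.8   # measured fiducial cross-section, in fb
fraction_of_sm = 0.90  # measured value as a fraction of the SM expectation

sigma_sm = sigma_measured / fraction_of_sm
print(f"implied SM expectation: {sigma_sm:.1f} fb")  # about 4.2 fb
```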
This first observation of the purely electroweak production of same-sign W-boson pairs is an important milestone towards precision tests of VBS at the LHC, and there is much more to be learned from the rapidly growing data sets. Studies demonstrate that the High Luminosity LHC, due to enter operation in the early 2020s, should even allow a direct investigation of longitudinal W-boson scattering.
The ATLAS collaboration has released new results on measurements of the properties of the Higgs boson using the full LHC proton–proton collision data set collected at a centre-of-mass energy of 13 TeV in 2015 and 2016, corresponding to an integrated luminosity of 36.1 fb⁻¹.
One of the most sensitive measurement channels involves Higgs boson decays via two Z bosons to four leptons (two pairs of oppositely charged electrons or muons). Although it occurs in only about one in every 8000 Higgs decays, this channel gives the cleanest signature of all the Higgs decay modes.
Using this channel, ATLAS measured both the inclusive and differential cross-sections for Higgs boson production. Although these have been measured before at lower LHC collision energy, the increased integrated luminosity and larger cross-section compared to LHC Run 1 allows their magnitudes to be determined with increased precision. In total, around 70 Higgs boson to four-lepton events were measured with a fit to the invariant mass distribution, allowing the inclusive cross-section to be measured with an accuracy of about 16%.
Candidate Higgs boson events were corrected for detector measurement effects and classified according to their kinematic properties to measure differential production cross-sections. Among these, the measurement of the momentum of the Higgs boson transverse to the beam axis probes different Higgs boson production mechanisms. By measuring the number and properties of jets produced in these events, Higgs boson production via the fusion of two gluons was studied. The measured inclusive and differential cross-sections were found to be in agreement with the Standard Model (SM) predictions. The results were used to constrain possible anomalous Higgs boson interactions with SM particles.
The precise particle-identification and momentum-measurement capabilities of the ALICE experiment allow researchers to reconstruct a variety of short-lived particles, or resonances, in heavy-ion collisions. These serve as a probe of in-medium effects during the last stages of evolution of the quark–gluon plasma (QGP). Recently, the ALICE collaboration has made a precise measurement of the yields (number of particles per event) of two such resonances: K*(892)⁰ and φ(1020). Both have similar masses and the same spin, and both are neutral strange mesons, yet their lifetimes differ by roughly a factor of ten (4.16±0.05 fm/c for the K*⁰, and 46.3±0.4 fm/c for the φ).
The shorter lifetime of the K*⁰ means that it decays within the medium, allowing its decay products (π and K) to re-scatter off other hadrons. This would be expected to inhibit the reconstruction of the parent K*⁰, but π and K in the medium may also scatter into a K*⁰ resonance state, and the interplay of these two competing re-scattering and regeneration processes becomes relevant for determining the K*⁰ yield. The processes depend on the time interval between chemical freeze-out (vanishing inelastic collisions) and kinetic freeze-out (vanishing elastic collisions), in addition to the source size and the interaction cross-sections of the daughter hadrons. In contrast, owing to the longer lifetime of the φ meson, both re-scattering and regeneration effects are expected to be negligible for it.
Using lead–lead collision data recorded at an energy of 2.76 TeV, ALICE observed that the ratio K*⁰/K⁻ decreases as a function of system size (see figure). In small impact-parameter collisions, the ratio is significantly lower than in proton–proton collisions and than predicted by models without re-scattering effects. In contrast, no such suppression was observed in the φ/K⁻ ratio. This measurement thus suggests the existence of re-scattering effects on resonances in the last stages of heavy-ion collisions at LHC energies. Furthermore, the suppression of K*⁰ yields can be used to estimate the time difference between the chemical and the kinetic freeze-out of the system.
At higher momenta (pT > 8 GeV/c), on the other hand, these resonances were suppressed with respect to proton–proton collisions by similar amounts. The magnitude of this suppression for K*⁰ and φ mesons was also found to be similar to that for pions, kaons, protons and D mesons. The striking independence of this suppression from particle mass, baryon number and the quark-flavour content of the hadron puts a stringent constraint on models of particle-production mechanisms, fragmentation processes and parton energy loss in the QGP medium.
In future, it will be important to perform such measurements for high-multiplicity events in pp collisions at the LHC.
Researchers from the XENON1T dark-matter experiment at Gran Sasso National Laboratory in Italy reported their first results at the 13th Patras Workshop on Axions, WIMPs and WISPs, held in Thessaloniki from 15–19 May (see “Exploring axions and WIMPs in Greece” in Faces & Places). XENON1T is the first tonne-scale detector of its kind and is designed to search for WIMP dark matter by measuring nuclear recoils from WIMP–nucleus scattering. Continuing the programme of the previous XENON10 and XENON100 detectors, the new apparatus contains 3200 kg of ultra-pure liquid xenon (LXe) – 20 times more than its predecessor – in a dual-phase xenon time projection chamber (TPC) to detect nuclear recoils. The TPC encloses about 2000 kg of LXe, while another 1200 kg provides additional shielding.
The experiment started collecting data in November 2016. A blind search based on 34.2 live days of data acquired until January 2017, when earthquakes in the region temporarily suspended the run, revealed the data to be consistent with the background-only hypothesis. This allowed the collaboration to derive the most stringent exclusion limits on the spin-independent WIMP–nucleon interaction cross-section for WIMP masses above 10 GeV/c², with a minimum of 7.7 × 10⁻⁴⁷ cm² for 35 GeV/c² WIMPs at 90% confidence level.
These first results demonstrate that XENON1T has the lowest low-energy background level ever achieved by a dark-matter experiment, with the intrinsic background from krypton and radon reduced to unprecedentedly low levels. The sensitivity of XENON1T will continue to improve as the experiment records data until the end of 2018, when the collaboration plans to upgrade to a larger TPC due to come online by 2019. Several other experiments, such as PandaX and LUX-ZEPLIN, are also competing for the first WIMP detection.
“With our experiment working so beautifully, even exceeding our expectations, it is really exciting to have data in hand to further explore one of the most exciting secrets we have in physics: the nature of dark matter,” says XENON spokesperson Elena Aprile of Columbia University in the US.
The Muon g-2 experiment at Fermilab has begun its three-year-long campaign to measure the magnetic moment of the muon with unprecedented precision. On 31 May, a beam of muons was fired into the experiment’s 14 m-diameter storage ring, where powerful electromagnetic fields cause the spins, and hence the magnetic moments, of individual muons to precess. The last time this experiment was performed, using the same electromagnet at Brookhaven National Laboratory in the late 1990s and early 2000s, the result disagreed with predictions by more than three standard deviations. This hinted at the presence of previously unknown particles or forces affecting the muon’s properties, and motivated further measurements to check the result.
Sixteen years later, the reincarnated Muon g-2 experiment will make use of Fermilab’s intense muon beams to definitively answer the questions raised by the Brookhaven experiment. It turned out to be 10 times cheaper to move the apparatus to Fermilab than it would have cost to build a new machine at Brookhaven, and the large, fragile superconducting magnet was transported in one piece from Long Island to the suburbs of Chicago in the summer of 2013.
Since its arrival, the Fermilab team has reassembled the magnet and spent a year adjusting, or “shimming”, its field to improve uniformity. The field created by the g-2 magnet is now three times more uniform than the one it created at Brookhaven. In the past year, the team has worked around the clock to install detectors, build a control room and prepare for first beam. The work has included: the creation of a new beamline to deliver a pure beam of muons; instrumentation to measure the magnetic field; and entirely new instrumentation to measure the muon’s spin-precession signal.
Over the next few weeks the Muon g-2 team will test the equipment, with science-quality data expected later in the year. The experiment aims to achieve a precision on the anomalous magnetic moment of the muon of 0.14 parts per million, compared to around 0.54 parts per million previously. If the inconsistency with theory remains, it could indicate that the Standard Model of particle physics is in need of revision.
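The quoted goals imply roughly a four-fold gain in precision, a one-line check (the two parts-per-million figures are simply those stated above):

```python
# Precision on the muon's anomalous magnetic moment, in parts per million,
# as quoted in the text above.
previous_ppm = 0.54  # Brookhaven-era precision
goal_ppm = 0.14      # Fermilab Muon g-2 target

print(f"improvement factor: {previous_ppm / goal_ppm:.1f}")  # roughly four-fold
```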
On 20 June the European Space Agency (ESA) gave the official go-ahead for the Laser Interferometer Space Antenna (LISA), which will comprise a trio of satellites to detect gravitational waves in space. LISA is the third mission in ESA’s Cosmic Vision plan, set to last for the next two decades, and has been given a launch date of 2034.
Predicted a century ago by general relativity, gravitational waves are vibrations of space–time that were first detected by the ground-based Laser Interferometer Gravitational-Wave Observatory (LIGO) in September 2015. While upgrades to LIGO and other ground-based observatories are planned, LISA will access a much lower-frequency region of the gravitational-wave universe. Three craft, separated by 2.5 million km in a triangular formation, will follow Earth in its orbit around the Sun, waiting for their separations to be distorted by a tiny fractional amount by a passing gravitational wave.
Although highly challenging experimentally, a LISA test mission called Pathfinder has recently demonstrated key technologies needed to detect gravitational waves from space (CERN Courier January/February 2017 p34). These include free-falling test masses linked by lasers and isolated from all external and internal forces except gravity. LISA Pathfinder concluded its pioneering mission at the end of June, as LISA enters a more detailed phase of study. Following ESA’s selection, the design and costing of the LISA mission can be completed. The project will then be proposed for “adoption” before construction begins.
Following the first and second detections of gravitational waves by LIGO in September and December 2015, on 1 June the collaboration announced the detection of a third event (Phys. Rev. Lett. 118 221101). Like the previous two, it is thought that “GW170104” – the signal for which arrived on Earth on 4 January – was produced when two black holes merged into a larger one, billions of years ago.
CERN has recently implemented two important steps towards the High Luminosity LHC (HL-LHC) – an upgrade that will increase the intensity of the LHC’s collisions significantly from the early 2020s. Preparing CERN’s existing accelerator complex to cope with more intense proton beams presents several challenges, in particular concerning the system that injects protons into the LHC.
At a ceremony on 9 May, a major new linear accelerator, Linac 4, was inaugurated. Replacing Linac 2, which had been in service since 1978, it is CERN’s first new accelerator since the LHC and is due to feed the accelerator complex with higher-energy particle beams. After an extensive testing period, Linac 4 will be connected to the existing infrastructure during the long technical shutdown in 2019/2020.
To cope with the higher-intensity and higher-energy beams emerging from Linac 4, the Proton Synchrotron Booster (PSB), which is the second accelerator of the LHC injector chain, will be completely overhauled during that same period. At the beginning of June, the first radio-frequency cavity of the new PSB acceleration system was completed, with a further 27 under assembly. The new cavities are based on a composite magnetic material called FINEMET, developed by Hitachi Metals, which allows them to operate with a large bandwidth and means that a single cavity can cover all necessary frequency bands. The PSB cavity project was launched in 2012 in collaboration with KEK in Japan, and involved intensive testing at CERN. KEK contributed a substantial fraction of the FINEMET cores and shared its experience with similar technology.
On 12 June, two large detector modules for the ICARUS experiment were loaded onto trucks at CERN to begin a six-week journey to Fermilab in the US. ICARUS will form part of Fermilab’s short-baseline neutrino programme, which aims to make detailed measurements of neutrino interactions and search for eV-scale sterile neutrinos (CERN Courier June 2017 p25).
Based on advanced liquid-argon time-projection technology, ICARUS began its life under a mountain at the Gran Sasso National Laboratory in Italy in 2010, recording data from neutrino beams sent from CERN. Since 2014, it has been at CERN undergoing an upgrade and refurbishment at the CERN Neutrino Platform (CERN Courier July/August 2016 p21). It left CERN in two parts by road, then travelled by boat along the Rhine to Antwerp, Belgium, where it was loaded onto a ship. As the Courier went to press, ICARUS was already heading across the Atlantic to Fermilab via the Great Lakes, equipped with a GPS unit that allows its progress to be tracked in real time (icarustrip.fnal.gov).
Just two days after ICARUS left CERN, another key component of the CERN Neutrino Platform was on the move, albeit on a smaller lorry. Baby MIND, a 75 tonne prototype for a magnetised iron neutrino detector that will precisely identify and track muons, was moved from its construction site in building 180 to the East Hall of the Proton Synchrotron. Following commissioning and full characterisation in the T9 test beam, at the end of July Baby MIND will be transported to Japan to be part of the WAGASCI experiment at J-PARC, where it will contribute to a better understanding of neutrino interactions for the T2K experiment.
Massive stars are traditionally expected to end their life cycle by triggering a supernova, a violent event in which the stellar core collapses into a neutron star, potentially followed by a further collapse into a black hole. During this process, a shock wave ejects large amounts of material from the star into interstellar space at high velocities, producing heavy elements in the process, while the supernova outshines all the stars in its host galaxy combined.
In the past few years, however, there has been mounting evidence that not all massive-star deaths are accompanied by these catastrophic events. Instead, it seems that for some stars only a small part of their outer layers is ejected before the rest of the volume collapses into a massive black hole. For instance, there are hints that the birth rate and supernova rate of massive stars do not match. Furthermore, results from the LIGO gravitational-wave observatory in the US indicate the existence of black holes with masses more than 30 times that of the Sun, which is easier to explain if stars can collapse without a large explosion.
Motivated by this indirect evidence, researchers from Ohio State University began a search for stars that quietly form a black hole without triggering a supernova. Using the Large Binocular Telescope (LBT) in Arizona, in 2015 the team identified its first candidate. The star, called N6946-BH1, was approximately 25 times more massive than the Sun and lived in the Fireworks galaxy, which is known for hosting a large number of supernovae. Having previously shown a stable luminosity, the star brightened during 2009, although not to the level expected for a supernova, before disappearing completely at optical wavelengths in 2010 (see image).
The lack of emission observed by the LBT triggered follow-up searches for the star using both the Hubble Space Telescope (HST) and the Spitzer Space Telescope (SST). While the HST found no sign of the star at optical wavelengths, the SST did observe infrared emission. A careful analysis of the data disfavoured alternative explanations, such as a large dust cloud obscuring the optical emission from the star, and the infrared data were also shown to be compatible with emission from remaining matter falling into a black hole.
If the star did indeed directly collapse into a black hole, as these findings suggest, the in-falling matter is expected to radiate in the X-ray region. The team is therefore waiting for observations from the space-based Chandra X-ray Observatory to search for this emission.
If confirmed in X-ray data, this result would be the first observation of the birth of a black hole and the first measurement of a failed supernova. The results would explain why we observe fewer supernovae than expected, and could reveal the origin of the massive black holes responsible for the gravitational waves seen by LIGO, in addition to having implications for the production of heavy elements in the universe.
The past few decades have witnessed an explosion in X-ray sources and techniques, impacting science and technology significantly. Large synchrotron X-ray facilities around the world based on advanced storage rings and X-ray optics are used daily by thousands of scientists across numerous disciplines. From the shelf life of washing detergents to the efficiency of fuel-injection systems, and from the latest pharmaceuticals to the chemical composition of archaeological remains, highly focused and brilliant beams of X-rays allow researchers to characterise materials over an enormous range of length and timescales, and therefore link the microscopic behaviour of a system with its bulk properties.
So-called third-generation light sources based on synchrotrons produce stable beams of X-rays over a wide range of photon energies and beam parameters. The availability of more intense, shorter and more coherent X-ray pulses opens even further scientific opportunities, such as making high-resolution movies of chemical reactions or providing industry with real-time nanoscale imaging of working devices. This boils down to maximising a parameter called peak brilliance. While accelerator physicists have made enormous strides in increasing the peak brilliance of synchrotrons, this quantity experienced a leap forward by many orders of magnitude when the first free-electron lasers (FELs) started operating in the X-ray range more than a decade ago.
FLASH, the soft-X-ray FEL at DESY in Hamburg, was inaugurated in 2005 and marked the beginning of this new epoch in X-ray science. Based on superconducting accelerating structures developed initially for a linear collider for particle physics (see “The world’s longest superconducting linac”), it provided flashes of VUV radiation with peak brilliances almost 10 orders of magnitude higher than any storage-ring-based source in the same wavelength range. The unprecedented peak power of the beam immediately led to groundbreaking new research in physics, chemistry and biology. But importantly, FLASH also demonstrated that the amplification scheme responsible for the huge gain of FELs – Self Amplified Spontaneous Emission (SASE) – was feasible at short wavelengths and could likely be extended to the hard-X-ray regime.
The first hard-X-ray FEL to enter operation based on the SASE principle was the Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory in California, which obtained first light in 2009 using a modified version of the old SLAC linac and operates at X-ray energies up to around 11 keV. Since then, several facilities have been inaugurated or are close to start-up: SACLA in Japan, PAL-XFEL in South Korea and SwissFEL in Switzerland. The European X-ray Free-Electron Laser (European XFEL) in Schenefeld-Hamburg, Germany, marks a further step-change in X-ray science, promising to produce the brightest beams with the highest photon energies and the highest repetition rates. Construction of the €1.2 billion facility began in January 2009, funded by 11 countries: Denmark, France, Germany, Hungary, Italy, Poland, Russia, Slovakia, Spain, Sweden and Switzerland, with Germany (58%) and Russia (27%) as the largest contributors. It is expected that the UK will join the European XFEL in 2017.
The European XFEL extends over a distance of 3.4 km in underground tunnels (figure 1). It begins with the electron injector at DESY in Bahrenfeld-Hamburg, which produces and injects electrons into a 2 km-long superconducting linear accelerator where the desired electron energy (up to 17.5 GeV) is achieved. Exiting the linac, electrons are then rapidly deflected in an undulating left–right pattern by traversing a periodic array of magnets called an undulator (figure 1, bottom right), causing the electrons to emit intense beams of X-ray photons. X-rays emerging from the undulator, via 1 km-long photon-transport tunnels equipped with various X-ray optics elements, finally arrive at the European XFEL headquarters in Schenefeld where the experiments will take place.
In addition to the development of the electron linac, which was commissioned earlier this year and involved a major effort by DESY in collaboration with numerous other accelerator facilities over the past decade (see “The world’s longest superconducting linac”), the European XFEL has driven the development of both undulator technology and advanced X-ray optics. This multinational and multidisciplinary effort now opens perspectives for novel scientific experiments. When fully commissioned, towards the end of 2018, the facility will deliver 4000 hours of accelerator time per year for user experiments that are approved via external peer review.
Manipulating X-rays
Synchrotron radiation was first observed experimentally in 1947, at a General Electric synchrotron, and the first generation of synchrotron-radiation users were termed “parasitic” because they made use of X-rays produced as a byproduct of particle-physics experiments. Dedicated “second-generation” X-ray sources were established in the early 1970s, while much more brilliant “third-generation” sources based on devices called undulators started to appear in the early 1990s (figure 2). The SASE technology underpinning XFELs, which followed from work undertaken in the mid-1960s, ensures that the X-rays produced are much more intense and more coherent than those emitted by storage rings (see SASE panel below). Like the light coming from an optical laser, the X-rays generated by SASE are almost 100% transversely coherent, compared with less than one per cent for third-generation synchrotrons, indicating that the radiation is an almost perfect plane wave. Even though the longitudinal-coherence length is not comparable to that of a single-mode optical laser, the use of the term “X-ray laser” is clearly justified for facilities such as the European XFEL.
A major challenge with X-ray lasers is to develop the mirrors, monochromators and other optical components that enable high-energy X-rays to be manipulated and their coherence to be preserved. Compared with the visible light emerging from a standard red helium-neon laser, which has a wavelength of 632 nm, the typical wavelength of hard X-rays is around 0.1 nm. Consequently, X-ray laser light is up to 6000 times more sensitive to distortions in the optics. On the other hand, X-ray mirrors work at extremely small grazing incidence angles (typically around 0.1° for hard X-rays at the European XFEL) because the interaction between X-rays and matter is so weak. This reduces the sensitivity to profile distortions and makes errors of up to 2 nm tolerable on a 1 m-long X-ray mirror, before the reflected X-ray wavefront becomes noticeably affected. Still, these requirements on profile errors are extremely high – about 10 times more stringent than for the Hubble Space Telescope mirror, for example.
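Both sensitivity figures above follow from simple geometry. The sketch below reproduces them from the numbers in the text, using the standard small-angle estimate that a surface height error h perturbs the reflected path by about 2h·sin(θ) at grazing angle θ (an illustration, not the facility’s optics calculation):

```python
import math

# Numbers quoted in the text above (illustrative, not a spec sheet)
lam_hene = 632e-9   # He-Ne laser wavelength, m
lam_xray = 0.1e-9   # typical hard-X-ray wavelength, m
print(f"wavelength ratio: {lam_hene / lam_xray:.0f}")  # ~6000-fold sensitivity gap

# Grazing incidence relaxes the tolerance on mirror figure errors:
theta = math.radians(0.1)  # grazing angle quoted for hard X-rays
h = 2e-9                   # tolerable profile error on a 1 m mirror, m
path_error = 2 * h * math.sin(theta)
print(f"wavefront error: {path_error / lam_xray:.3f} wavelengths")  # a few per cent
```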
The technology to produce these ultra-flat X-ray mirrors was only developed in recent years in Japan and Europe. It is based on a process called deterministic polishing, in which material is removed atomic layer by atomic layer according to a very precisely measured map of the initial profile’s deviations from an ideal shape. After years of development and many months of deterministic polishing iterations, the first 95 cm-long silicon X-ray mirror fulfilling the tight specifications of the European XFEL was completed in March 2016, with 10 more mirrors of similar quality following shortly thereafter. In the final configuration, 27 of these extremely precise mirrors will be used to steer the X-ray laser beam along the photon-transport tunnels to all the scientific instruments.
Managing the large heat loads on the European XFEL mirrors is a major challenge. To remove the heat generated by the X-ray laser beam without distorting the highly sensitive mirrors, a liquid-metal film is used to couple the mirror to a water-cooling system in a tension- and vibration-free fashion. Another mirror system will be cooled to a temperature of around 100 K, at which the thermal-expansion coefficient of silicon is close to zero. This solution, which is vital to deal with the high repetition rate of the European XFEL, is often employed for smaller silicon crystals acting as crystal monochromators but is rarely necessary for large mirror bodies where the grazing-incidence geometry spreads the heat over a large area.
Indeed, the SASE pulses have potentially devastating power – especially close to the sample, where the beam may be focused to small dimensions. A typical SASE X-ray pulse of 100 fs duration contains about 2 mJ of X-ray energy (corresponding to 10¹² photons at 12 keV photon energy), which means that a copper beam-stop placed close behind the sample would be heated to a temperature of several hundred thousand degrees Celsius and could be evaporated (along with the sample) by just one pulse. While this is not necessarily a problem for samples that can be replaced via advanced injection schemes, and where data can be collected before destruction takes place, it could shorten the lifetime of slits, attenuators, windows and other standard beamline components. The solution is to intersect the beam only where it has a larger size and to use only light elements, which absorb less X-ray energy per atom. Still, stopping the X-ray laser beam remains a challenge at the European XFEL, with up to 2700 pulses in a 600 μs pulse train (figure 3). Indeed, the entire layout of the photon-distribution system was adapted to counteract this damaging effect of the X-ray laser beam, and a facility-wide machine-protection system limits the pulse-train length to a safe value, depending on the optical configuration. Since a misguided X-ray laser beam can quickly drill through the stainless-steel pipes of the vacuum system, diamond plates are positioned around the beam trajectory and will light up if hit by X-rays, triggering a dump of the electron beam.
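The quoted pulse energy is straightforward to verify (a back-of-envelope illustration; the photon number and photon energy are those given above):

```python
# Energy carried by one SASE pulse: 1e12 photons of 12 keV each.
EV_TO_J = 1.602176634e-19  # joules per electronvolt (CODATA value)

n_photons = 1e12
photon_energy_eV = 12e3
pulse_energy = n_photons * photon_energy_eV * EV_TO_J
print(f"pulse energy: {pulse_energy * 1e3:.1f} mJ")  # about 2 mJ

# A full 600 us train of 2700 such pulses then carries several joules:
train_energy = 2700 * pulse_energy
print(f"train energy: {train_energy:.1f} J")  # about 5 J
```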
The business end of things
At the European XFEL, the generation of X-ray beams is largely “behind the scenes”. The scientific interest in XFEL experiments stems from the ability to deliver around 10¹² X-ray photons in one ultrafast pulse (with a duration in the range 10–100 fs) and with a high degree of coherence. Performing experiments within such short pulses allows users to generate ultrafast snapshots of dynamics that would be smeared out with longer exposure times and give rise to diffuse scattering. Combined with spectroscopic information, a complete picture of atomic motion and molecular rearrangements, as well as the charge and spin states and their dynamics, can be built up. This leads to the notion of a “molecular movie”, in which the dynamics are triggered by an external optical laser excitation (acting as an optical pump) and the response of a molecule is monitored by ultrafast X-ray scattering and spectroscopy (X-ray probe). Pump-probe experiments are typically ensemble-averaged measurements of many molecules that are randomly aligned with respect to each other and not distinguishable within the scattering volume. The power and coherence of the European XFEL beams will allow such investigations with unprecedented resolution in time and space compared to today’s best synchrotrons.
In particular, the coherence of the European XFEL beam allows users to distinguish features beyond those arising from average properties. These features are encoded in the scattering images as grainy regions of varying intensity called speckle, which results from the self-interference of the scattered beam and can be exploited to obtain higher spatial resolution than is possible in “incoherent” X-ray scattering experiments (figure 4). Since the speckles reflect the exact real-space arrangement of the scattering volume, even subtle structural changes can alter the speckle pattern dramatically due to interference effects.
The combination of ultrafast pulses, huge peak intensity and a high degree of beam coherence is truly unique to FEL facilities and has already enabled experiments that would otherwise be impossible. In addition, the European XFEL has a huge average intensity due to the many pulses delivered each second. This allows a larger number of experimental sessions per operation cycle and/or better signal-to-noise ratios within a given experimental time frame. The destructive power of the beam means that many experiments will be of the single-shot type, which requires a continuous injection scheme because the sample cannot be reused. Other experiments will operate with reduced peak flux, allowing multi-exposure schemes as also demonstrated in work at LCLS and FLASH.
Six experimental stations are planned for the European XFEL start-up, two per SASE beamline. The first, situated at the hard-X-ray undulator SASE-1, is devoted to the study of single particles and biomolecules, serial femtosecond crystallography, and femtosecond X-ray experiments in biology and chemistry. SASE-2 caters to dynamics investigations in condensed-matter physics and material-science experiments, specialising in extreme states of matter and plasmas. At the soft-X-ray branch SASE-3, two instruments will allow investigations of electronic states of matter and atomic/cluster physics, among other studies. The three SASE undulators will deliver photons in parallel and the instruments will share their respective beams in 12-hour shifts, so that three instruments are operating at any given time.
Eight years after the project officially began, the European XFEL finally achieved first light in 2017 and its commissioning is progressing according to schedule. The facility is the culmination of a worldwide effort led by DESY on the electron linac and by European XFEL GmbH on the development of X-ray photon transport and experimental stations. The facility is conveniently situated among other European light sources – synchrotrons that are also continuously evolving towards larger brilliance – and a handful of hard-X-ray FELs worldwide. The European XFEL is by far the most powerful hard-X-ray source in the world and will remain at the forefront for at least the next 20–30 years. Continuous investment in instrumentation and detectors will be required to capitalise fully on the impressive specifications, and the facility has the potential to construct about six additional instruments and possibly even a second experimental hall, all fed by X-rays generated by the existing superconducting electron linac. Without a doubt, Europe has now entered the extreme X-ray era.
Self-amplified spontaneous emission (SASE), the underlying principle of X-ray free-electron lasers, is based on the interaction between a relativistic electron beam and the radiation emitted by the electrons as they are accelerated through a long alternating magnetic undulator array (see image). If the undulator is short, on the order of a few metres, and the undulating path is well defined with a small amplitude, the radiation emitted by one electron adds up coherently at one particular wavelength as it travels through the undulator. Hence, the intensity is proportional to Np², where Np is the number of undulator periods (typically around 100). This is the regular undulator radiation generated at third-generation synchrotron sources such as the ESRF in France or the APS in the US, and also at the next generation of diffraction-limited storage rings, such as MAX IV in Sweden. On the other hand, if the undulator is very long, the interactions between the electrons and the radiation field that builds up will eventually lead to micro-bunching of the electron beam into coherent packages that radiate in phase (see image). This results in a huge amplification (lasing) of the emitted intensity, which becomes proportional to Ne², where Ne is the number of electrons emitting in phase within the co-operation length (typically 10⁶ or more). The hard-X-ray undulators of the European XFEL have magnetic lengths of 175 m in order to ensure that SASE works over a wide range of photon energies and electron-beam parameters. High electron energy, small energy spread and small emittance (the product of beam size and divergence) are crucial for SASE to work in the X-ray range. Together with the requirement of very long undulators, this favours the use of linac sources, instead of storage rings, for X-ray lasers.