The top quark – the heaviest known elementary particle – differs from the other quarks by its much larger mass and a lifetime that is shorter than the time needed to form hadronic bound states. Within the Standard Model (SM), the top quark decays almost exclusively into a W boson and a b quark, and the dominant production mechanism in proton–proton (pp) collisions is top-quark pair (tt̄) production.
Measurements of tt̄ production at various pp centre-of-mass energies at the LHC probe different values of Bjorken-x, the fraction of the proton’s longitudinal momentum carried by the parton participating in the initial interaction. In particular, the fraction of tt̄ events produced through quark–antiquark annihilation increases from 11% at 13 TeV to 25% at 5.02 TeV. A measurement of the tt̄ production cross-section thus places additional constraints on the proton’s parton distribution functions (PDFs), which describe the probabilities of finding quarks and gluons at particular x values.
In November 2017, the ATLAS experiment recorded a week of pp-collision data at a centre-of-mass energy of 5.02 TeV. Although the main motivation for this 5.02 TeV dataset is to provide a proton reference sample for the ATLAS heavy-ion physics programme, it also offers a unique opportunity to study top-quark production at a previously unexplored energy in ATLAS. The majority of the data was recorded with a mean number of two inelastic pp collisions per bunch crossing, compared to roughly 35 collisions during the 13 TeV runs. To exploit the much lower pileup conditions, the ATLAS calorimeter cluster noise thresholds were adjusted and a dedicated jet-energy-scale calibration was performed.
Now, the ATLAS collaboration has released its measurement of the tt̄ production cross-section at 5.02 TeV in two final states. Events in the dilepton channel were selected by requiring opposite-charge pairs of leptons, resulting in a small, high-purity sample. Events in the single-lepton final states were separated into subsamples with different signal-to-background ratios, and a multivariate technique was used to further separate signal from background events. The two measurements were combined, taking the correlated systematic uncertainties into account.
The measured cross section in the dilepton channel (65.7 ± 4.9 pb) corresponds to a relative uncertainty of 7.5%, of which 6.8% is statistical. The single-lepton measurement (68.2 ± 3.1 pb), on the other hand, has a 4.5% uncertainty that is primarily systematic. This measurement is slightly more precise than the single-lepton measurement at 13 TeV, despite the much smaller (almost a factor of 500!) integrated luminosity. The combination of the two measurements gives 67.5 ± 2.6 pb, corresponding to an uncertainty of just 3.9%.
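For reference, the quoted relative precisions follow directly from the ratios of total uncertainty to central value:
\[ \frac{4.9}{65.7} \approx 7.5\%, \qquad \frac{3.1}{68.2} \approx 4.5\%, \qquad \frac{2.6}{67.5} \approx 3.9\%. \]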
The new ATLAS result is consistent with the SM prediction and with a measurement by the CMS collaboration, though with a total uncertainty reduced by almost a factor of two. It thus improves our understanding of top-quark production at different centre-of-mass energies and allows an important test of the compatibility with predictions from different PDF sets (see figure 1). The result also provides a new measurement of high-x proton structure and yields a 5% reduction in the gluon PDF uncertainty in the region around x = 0.1, which is relevant for Higgs-boson production. Moreover, the measurement paves the way for the study of top-quark production in collisions involving heavy ions.
For the past 60 years, the second has been defined in terms of atomic transitions between two hyperfine states of caesium-133. Such transitions, which correspond to radiation in the microwave regime, enable state-of-the-art atomic clocks to keep time at the level of one second in more than 300 million years. A newer breed of optical clocks developed since the 2000s exploits frequencies that are about 10⁵ times higher. While still under development, optical clocks based on aluminium ions are already reaching accuracies of about one second in 33 billion years, corresponding to a relative systematic frequency uncertainty below 1 × 10⁻¹⁸.
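The headline numbers translate into relative frequency uncertainties in a simple way: a clock that gains or loses one second over a time T has a fractional uncertainty of roughly 1/T, with T expressed in seconds. For the two cases quoted above,
\[ \frac{1\ \mathrm{s}}{3\times10^{8}\ \mathrm{yr}\times3.15\times10^{7}\ \mathrm{s/yr}} \approx 1\times10^{-16}, \qquad \frac{1\ \mathrm{s}}{3.3\times10^{10}\ \mathrm{yr}\times3.15\times10^{7}\ \mathrm{s/yr}} \approx 1\times10^{-18}. \]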
To further reduce these uncertainties, in 2003 Ekkehard Peik and Christian Tamm of Physikalisch-Technische Bundesanstalt in Germany proposed using a nuclear rather than an atomic transition for time measurements. Because of its small nuclear moments (a consequence of the vastly different dimensions of atoms and nuclei), and thus its very weak coupling to perturbing electromagnetic fields, a “nuclear clock” is less vulnerable to external perturbations. In addition to enabling a more accurate timepiece, this offers the potential for nuclear clocks to be used as quantum sensors to test fundamental physics.
Clockwork
A clock typically consists of an oscillator and a frequency-counting device. In a nuclear clock (see “Nuclear clock schematic” figure), the oscillator is provided by the frequency of a transition between two nuclear states (in contrast to a transition between two states in the electronic shell in the case of an atomic clock). For the frequency-counting device, a narrow-band laser resonantly excites the nuclear-clock transition, while the corresponding oscillations of the laser light are counted using a frequency comb. This device (the invention of which was recognised by the 2005 Nobel Prize in Physics) is a laser source whose spectrum consists of a series of discrete, equally spaced frequency lines. After a certain number of oscillations, given by the frequency of the nuclear transition, one second has elapsed.
The need for direct laser excitation strongly constrains applicable nuclear-clock transitions: their energy has to be low enough to be accessible with existing laser technology, while simultaneously exhibiting a narrow linewidth. As the linewidth is determined by the lifetime of the excited nuclear state, the latter has to be long enough to allow for highly stable clock operation. So far, only the metastable (isomeric) first excited state of 229Th, denoted 229mTh, qualifies as a candidate for a nuclear clock, due to its exceptionally low excitation energy.
The existence of the isomeric state was conjectured in 1976 from gamma-ray spectroscopy of 229Th, and its excitation energy has only recently been determined to be 8.19 ± 0.12 eV (corresponding to a vacuum-ultraviolet wavelength of 151.4 ± 2.2 nm). Not only is it the lowest nuclear excitation among the roughly 184,000 excited states of the 3300 or so known nuclides, its expected lifetime is of the order of 1000 s, resulting in an extremely narrow relative linewidth (ΔE/E ~ 10⁻²⁰) for its ground-state transition (see “Unique transition” figure). Besides the high resilience against external perturbations, this narrow linewidth is another attractive property for a thorium nuclear clock.
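The quoted energy, the corresponding wavelength and the frequency a clock laser would have to address are related through the usual photon relations E = hν = hc/λ; using hc ≈ 1239.8 eV nm,
\[ \lambda = \frac{hc}{E} \approx \frac{1239.8\ \mathrm{eV\,nm}}{8.19\ \mathrm{eV}} \approx 151\ \mathrm{nm}, \qquad \nu = \frac{E}{h} \approx \frac{8.19\ \mathrm{eV}}{4.136\times10^{-15}\ \mathrm{eV\,s}} \approx 2.0\times10^{15}\ \mathrm{Hz}. \]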
Achieving optical control of the nuclear transition via direct laser excitation would open a broad range of applications. A nuclear clock’s sensitivity to the gravitational redshift, which causes a clock’s relative frequency to change with its height in the Earth’s gravitational potential, could enable more accurate global positioning systems and high-sensitivity detection of fluctuations of Earth’s gravitational potential induced by seismic or tectonic activity. Furthermore, while the few-eV thorium transition emerges from a fortunate near-degeneracy of the two lowest nuclear-energy levels in 229Th, the Coulomb and strong-force contributions to these energies differ at the MeV level. This makes the nuclear-level structure of 229Th uniquely sensitive to variations of fundamental constants and to ultralight dark matter. Many theories predict variations of the fine-structure constant, for example, but at tiny yearly rates. The high sensitivity provided by the thorium isomer could allow such variations to be identified. Moreover, networks of ultra-precise synchronised clocks could enable a search for ultralight dark-matter signals.
Two different approaches have been proposed to realise a nuclear clock: one based on trapped ions and another using doped solid-state crystals. The first approach starts from individually trapped Th ions, which promises an unprecedented suppression of systematic clock-frequency shifts and leads to an expected relative clock accuracy of about 1 × 10⁻¹⁹. The other approach relies on embedding 229Th atoms in a vacuum–ultraviolet (VUV) transparent crystal such as CaF2. This has the advantage of a large concentration (> 10¹⁵/cm³) of Th nuclei in the crystal, leading to a considerably higher signal-to-noise ratio and thus greater clock stability.
Precise characterisation
A precise characterisation of the thorium isomer’s properties is a prerequisite for any kind of nuclear clock. In 2016 the present authors and colleagues made the first direct identification of 229mTh by detecting electrons emitted in its dominant decay mode, internal conversion (IC), whereby a nuclear excited state decays by ejecting one of the atomic electrons (see “Isomeric signal” figure). This brought the long-term objective of a nuclear clock into the focus of international research.
Currently, experimental access to 229mTh is possible only via radioactive decays of heavier isotopes or by X-ray pumping from higher-lying rotational nuclear levels, as shown by Takahiko Masuda and co-workers in 2019. The former, based on the alpha decay of 233U (2% branching ratio), is the most commonly used approach. Very recently, however, a promising new experiment exploiting the β⁻ decay of 229Ac was performed at CERN’s ISOLDE facility, led by a team at KU Leuven. Here, 229Ac is produced online and mass-separated before being implanted into a large-bandgap VUV-transparent crystal. In both population schemes, either photons or conversion electrons emitted during the isomeric decay are detected.
In the IC-based approach, a positively charged 229mTh ion beam is generated from alpha-decay daughter products recoiling off a 233U source placed inside a buffer-gas stopping cell. The decay products are thermalised, guided by electric fields towards an exit nozzle and extracted into a longitudinally 15-fold segmented radiofrequency quadrupole (RFQ) that acts as an ion guide, phase-space cooler and, optionally, beam buncher, followed by a quadrupole mass separator for beam purification. In charged thorium isomers, the otherwise dominant IC decay branch is energetically forbidden, prolonging the lifetime by up to nine orders of magnitude.
Operating the segmented RFQ as a linear Paul trap to generate sharp ion pulses enables the half-life of the thorium isomer to be determined. In work performed by the present authors in 2017, pulsed ions from the RFQ were collected and neutralised on a metal surface, triggering their IC decay. Since the long ionic lifetime was inaccessible due to the limited ion-storage time imposed by the trap’s vacuum conditions, the drastically reduced lifetime of neutral isomers was targeted. Time-resolved detection of the low-energy conversion electrons determined the lifetime to be 7 ± 1 μs.
Excitation energy
Recently, considerable progress has been made in determining the 229mTh excitation energy – a milestone en route to a nuclear clock. In general, experimental approaches to determine the excitation energy fall into three categories: indirect measurements via gamma-ray spectroscopy of energetically low-lying rotational transitions in 229Th; direct spectroscopy of fluorescence photons emitted in radiative decays; and via electrons emitted in the IC decay of neutral 229mTh. The first approach led to the conjecture of the isomer’s existence and finally, in 2007, to the long-accepted value of 7.6 ± 0.5 eV. The second approach tries to measure the energy of photons emitted directly in the ground-state decay of the thorium isomer.
The first direct measurement of the thorium isomer’s excitation energy was reported by the present authors and co-workers in 2019. Using a compact magnetic-bottle spectrometer equipped with a repulsive electrostatic potential, followed by a microchannel-plate detector, the kinetic energy of the IC electrons released after an in-flight neutralisation of Th ions emitted from a 233U source could be determined. The experiment provided a value for the excitation energy of the nuclear-clock transition of 8.28 ± 0.17 eV. At around the same time in Japan, Masuda and co-workers used synchrotron radiation to achieve the first population of the isomer via resonant X-ray pumping into the second excited nuclear state of 229Th at 29.19 keV, which decays predominantly into 229mTh. By combining their measurement with earlier published gamma-spectroscopic data, the team could constrain the isomeric excitation energy to the range 2.5–8.9 eV. More recently, led by teams at Heidelberg and Vienna, the excited isomers were implanted into the absorber of a custom-built cryogenic magnetic micro-calorimeter and the isomeric energy was measured by detecting the temperature-induced change of the magnetisation using SQUIDs. This produced a value of 8.10 ± 0.17 eV for the clock-transition energy, resulting in a world average of 8.19 ± 0.12 eV.
Besides precise knowledge of the excitation energy, another prerequisite for a nuclear clock is the ability to monitor the nuclear excitation on short timescales. Peik and Tamm proposed a method to do this in 2003 based on the “double resonance” principle, which requires knowledge of the hyperfine structure of the thorium isomer. Therefore, in 2018, two different laser beams were collinearly superimposed on the 229Th ion beam, initiating a two-step excitation in the atomic shell of 229Th. By varying both laser frequencies, resonant excitations of hyperfine components of both the 229Th ground state and the 229mTh isomer could be identified, and thus the hyperfine-splitting signature of both states could be established by detecting their de-excitation (see “Hyperfine splitting” figure). This 2018 observation of the 229mTh hyperfine structure not only enabled the isomer’s magnetic dipole and electric quadrupole moments, and its mean-square charge radius, to be determined, but will in future allow a non-destructive verification of the nuclear excitation.
Roadmap towards a nuclear clock
So far, the identification and characterisation of the thorium isomer has largely been driven by nuclear physics, where techniques such as gamma spectroscopy, conversion-electron spectroscopy and radioactive decays offer a description in units of electron volts. Now the challenge is to refine our knowledge of the isomeric excitation energy with laser-spectroscopic precision to enable optical control of the nuclear-clock transition. This requires bridging a gap of about 12 orders of magnitude in the precision of the 229mTh excitation energy, from around 0.1 eV to the sub-kHz regime. In a first step, existing broad-band laser technology can be used to localise the nuclear resonance with an accuracy of about 1 GHz. In a second step, using VUV frequency-comb spectroscopy presently under development, it is envisaged to improve the accuracy into the (sub-)kHz range.
Another practical challenge when designing a high-precision ion-trap-based nuclear clock is the generation of thermally decoupled, ultra-cold 229Th ions via laser cooling. 229Th3+ is particularly suited due to its electronic level structure, with only one valence electron. Because of the high chemical reactivity of thorium, a cryogenic Paul trap is the ideal environment for laser cooling, since almost all residual gas atoms freeze out at 4 K, increasing the trapping time into the region of a few hours. This will form the basis for direct laser excitation of 229mTh and will also enable a measurement of the isomeric lifetime of 229Th ions, which has not yet been determined experimentally. For the alternative development of a compact solid-state nuclear clock, it will be necessary to suppress the 229mTh decay via internal conversion in a large-bandgap, VUV-transparent crystal and to detect the γ decay of the excited nuclear state. Proof-of-principle studies of this approach are currently ongoing at ISOLDE.
Many of the recent breakthroughs in understanding the 229Th clock transition emerged from the European Union project “nuClock”, which terminated in 2019. A subsequent project, ThoriumNuclearClock (ThNC), aims to demonstrate at least one nuclear clock by 2026. Laser-spectroscopy activities on the thorium isomer are also ongoing in the US, for example at JILA, NIST and UCLA.
In view of the large progress in recent years and ongoing worldwide efforts both experimentally and theoretically, the road is paved towards the first nuclear clock. It will complement highly precise optical atomic clocks, while in some areas, in the long run, nuclear clocks might even have the potential to replace them. Moreover, and beyond its superb timekeeping capabilities, a nuclear clock is a unique type of quantum sensor allowing for fundamental physics tests, from the variation of fundamental constants to searches for dark matter.
In particle accelerators, large vacuum systems guarantee that the beams travel as freely as possible. Yet even at one 25-trillionth of the density of Earth’s atmosphere, a tiny concentration of gas molecules remains. These pose a problem: their collisions with accelerated particles reduce the beam lifetime and induce instabilities. It is therefore vital, from the early design stage, to plan efficient vacuum systems and predict residual pressure profiles.
Surprisingly, it is almost impossible to find commercial software that can carry out the underlying vacuum calculations. Since the background pressure in accelerators (of the order of 10⁻⁹–10⁻¹² mbar) is so low, molecules rarely collide with one another, so the results of codes based on computational fluid dynamics aren’t valid. Although workarounds exist (solving vacuum equations analytically, modelling a vacuum system as an electrical circuit, or taking advantage of similarities between ultra-high vacuum and thermal radiation), a CERN-developed simulator, “Molflow” (for molecular flow), has become the de facto industry standard for ultra-high-vacuum simulations.
Instead of trying to solve the surprisingly difficult equations governing gas behaviour over a large system analytically in one step, Molflow is based on the so-called test-particle Monte Carlo method. In a nutshell: if the geometry is known, a single test particle is created at a gas source and “bounced” through the system until it reaches a pump. Repeating this millions of times, with each bounce happening in a random direction, just like in the real world, the program can calculate the hit density anywhere, from which the pressure is obtained.
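To make the idea concrete, here is a minimal sketch of the test-particle Monte Carlo method for a single cylindrical tube, written in Python. The geometry, dimensions, cosine-law wall reflection and perfectly absorbing ends are illustrative assumptions for this sketch; it is not Molflow’s implementation or file format.

```python
# Test-particle Monte Carlo sketch: free molecular flow through a cylinder of
# radius R and length L (arbitrary units). Molecules enter at z = 0, bounce
# diffusely (cosine law) off the wall and are absorbed when they leave through
# either end. The wall-hit density is the quantity a pressure profile would be
# derived from. All parameters are illustrative assumptions.
import math
import random

R, L, N_PARTICLES, N_BINS = 1.0, 10.0, 100_000, 20

def cosine_direction(n):
    """Random unit vector distributed with the cosine law about the normal n."""
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = (n[1]*a[2] - n[2]*a[1], n[2]*a[0] - n[0]*a[2], n[0]*a[1] - n[1]*a[0])
    norm = math.sqrt(sum(c*c for c in u))
    u = tuple(c / norm for c in u)
    v = (n[1]*u[2] - n[2]*u[1], n[2]*u[0] - n[0]*u[2], n[0]*u[1] - n[1]*u[0])
    phi = 2.0 * math.pi * random.random()
    ct = math.sqrt(random.random())        # cos(theta) ~ sqrt(U) gives the cosine law
    st = math.sqrt(1.0 - ct*ct)
    return tuple(st*math.cos(phi)*u[i] + st*math.sin(phi)*v[i] + ct*n[i]
                 for i in range(3))

hits = [0] * N_BINS       # wall-hit counts along the tube ("hit density")
transmitted = 0

for _ in range(N_PARTICLES):
    # Start uniformly on the entrance disc, cosine-distributed about +z
    r, ang = R * math.sqrt(random.random()), 2.0 * math.pi * random.random()
    p = [r * math.cos(ang), r * math.sin(ang), 0.0]
    d = cosine_direction((0.0, 0.0, 1.0))
    while True:
        # Distance to the cylindrical wall along d (larger root of the quadratic)
        a2 = d[0]*d[0] + d[1]*d[1]
        if a2 > 1e-12:
            b = p[0]*d[0] + p[1]*d[1]
            c = p[0]*p[0] + p[1]*p[1] - R*R
            t_wall = (-b + math.sqrt(max(b*b - a2*c, 0.0))) / a2
        else:
            t_wall = float("inf")
        # Distance to whichever end plane the molecule is heading towards
        if d[2] > 0:
            t_end = (L - p[2]) / d[2]
        elif d[2] < 0:
            t_end = -p[2] / d[2]
        else:
            t_end = float("inf")
        if t_end < t_wall:                 # leaves through an end: absorbed
            if d[2] > 0:
                transmitted += 1           # reached z = L, i.e. the "pump"
            break
        # Diffuse bounce on the wall: record the hit, draw a new direction
        p = [p[i] + t_wall * d[i] for i in range(3)]
        hits[min(int(p[2] / L * N_BINS), N_BINS - 1)] += 1
        d = cosine_direction((-p[0] / R, -p[1] / R, 0.0))

print(f"Transmission probability: {transmitted / N_PARTICLES:.3f}")
print("Relative wall-hit density:", [round(h / max(max(hits), 1), 2) for h in hits])
```

Run as-is, the sketch prints the fraction of molecules that make it through the tube (the Clausing transmission factor) and how the wall-hit density falls off along its length, which is exactly the kind of quantity Molflow converts into a pressure profile.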
The idea for Molflow emerged in 1988 when the author (RK) visited CERN to discuss the design of the Elettra light source with CERN vacuum experts (see “From CERN to Elettra, ESRF, ITER and back” panel). Back then, few people could have foreseen the numerous applications outside particle physics that it would have. Today, Molflow is used in applications ranging from chip manufacturing to the exploration of the Martian surface, with more than 1000 users worldwide and many more downloads from the dedicated website.
Molflow in space
While at CERN we naturally associate ultra-high vacuum with particle accelerators, there is another domain where operating pressures are extremely low: space. In 2017, after first meeting at a conference, a group from German satellite manufacturer OHB visited the CERN vacuum group, interested to see our chemistry lab and the cleaning process applied to vacuum components. We also demoed Molflow for vacuum simulations. It turned out that they were actively looking for a modelling tool that could simulate specific molecular-contamination transport phenomena for their satellites, since the industrial code they were using had very limited capabilities and was not open-source.
A high-quality, clean mirror for a space telescope, for example, must spend up to two weeks encapsulated in the closed fairing from launch until it is deployed in orbit. During this time, without careful prediction and mitigation, certain volatile compounds (such as adhesive used on heating elements) present within the spacecraft can find their way to and become deposited on optical elements, reducing their reflectivity and performance. It is therefore necessary to calculate the probability that molecules migrate from a certain location, through several bounces, and end up on optical components. Whereas this is straightforward when all simulation parameters are static, adding chemical processes and molecule accumulation on surfaces required custom development. Even though Molflow could not handle these processes “out of the box”, the OHB team was able to use it as a basis that could be built on, saving the effort of creating the graphical user interface and the ray-tracing parts from scratch. With the help of CERN’s knowledge-transfer team, a collaboration was established with the Technical University of Munich: a “fork” in the code was created; new physical processes specific to their application were added; and the code was also adapted to run on computer clusters. The work was made publicly available in 2018, when Molflow became open source.
From CERN to Elettra, ESRF, ITER and back
Molflow emerged in 1988 during a visit to CERN from its original author (RK), who was working at the Elettra light source in Trieste at the time. CERN vacuum expert Alberto Pace showed him a computer code written in Fortran that enabled the trajectories of particles to be calculated, via a technique called ray tracing. On returning to Trieste, and realising that the CERN code couldn’t be run there due to hardware and software incompatibilities, RK decided to rewrite it from scratch. Three years later the code was formally released. Once more, credit must be given to CERN for having been the birthplace of new ideas for other laboratories to develop their own applications.
Molflow was originally written in Turbo Pascal, had (black and white) graphics, and visualised geometries in 3D – even allowing basic geometry editing and pressure plots. While today such features are found in every simulator, at the time the code stood out and was used in the design of several accelerator facilities, including the Diamond Light Source, the Spallation Neutron Source, Elettra and Alba – as well as for the analysis of a gas-jet experiment for the PANDA experiment at GSI Darmstadt. That said, the early code had its limitations. For example, the upper limit of user memory (640 kB for MS-DOS) significantly restricted the number of polygons that could be used to describe the geometry, and the code was single-processor.
In 2007 the original code was given a new lease of life at the European Synchrotron Radiation Facility in Grenoble, where RK had moved as head of the vacuum group. The code was ported to C++ and multi-processor capability was added, which is particularly suitable for Monte Carlo calculations: if you have eight CPU cores, for example, you can trace eight molecules at the same time. OpenGL (Open Graphics Library) acceleration made the visualisation very fast even for large structures, allowing the usual camera controls of CAD editors to be added. Between 2009 and 2011 Molflow was used at ITER, again following its original author, for the design and analysis of vacuum components for the international tokamak project.
In 2012 the project was resumed at CERN, where RK had arrived the previous year. From here, the focus was on expanding the physics and applications: ray-tracing terms like “hit density” and “capture probability” were replaced with real-world quantities such as pressure and pumping speed. To publish the code, a website was created within the group, with downloads, tutorial videos and a user forum. Later that year, a sister code “Synrad” for synchrotron-radiation calculations, also written in Trieste in the 1990s, was ported to the modern environment. The two codes could, for the first time, be used as a package: first, a synchrotron-radiation simulation determines where light hits a vacuum chamber, then the results are imported into a subsequent vacuum simulation to trace the gas desorbed from the chamber walls. This is the so-called photon-stimulated desorption effect, which is a major hindrance to many accelerators, including the LHC.
Molflow and Synrad have been downloaded more than 1000 times in the past year alone, and anonymous user metrics hint at around 500 users who launch it at least once per month. The code is used by far the most in China, followed by the US, Germany and Japan. Switzerland, including users at CERN, places only fifth. Since 2018, the roughly 35,000-line code has been available open-source and, although originally written for Windows, it is now available for other operating systems, including the new ARM-based Macs and several versions of Linux.
One year later, the Contamination Control Engineering (CCE) team from NASA’s Jet Propulsion Laboratory (JPL) in California reached out to CERN in the context of its three-stage Mars 2020 mission. The Mars 2020 Perseverance Rover, built to search for signs of ancient microbial life, successfully landed on the Martian surface in February 2021 and has collected and cached samples in sealed tubes. A second mission plans to retrieve the cache canister and launch it into Mars orbit, while a third would locate and capture the orbital sample and return it to Earth. Each spacecraft experiences and contributes to its own contamination environment through thruster operations, material outgassing and other processes. JPL’s CCE team performs the identification, quantification and mitigation of such contaminants, from the concept-generation to the end-of-mission phase. Key to this effort is the computational physics modelling of contaminant transport from materials outgassing, venting, leakage and thruster plume effects.
Contamination consists of two types: molecular (thin-film deposition effects) and particulate (producing obscuration, optical scatter, erosion or mechanical damage). Both can lead to degradation of optical properties and spurious chemical composition measurements. As more sensitive space missions are proposed and built – particularly those that aim to detect life – understanding and controlling outgassing properties requires novel approaches to operating thermal vacuum chambers.
Just like accelerator components, most spacecraft hardware undergoes long-duration vacuum baking at relatively high temperatures to reduce outgassing. Outgassing rates are verified with quartz crystal microbalances (QCMs), rather than vacuum gauges as used at CERN. These probes measure the resonance frequency of oscillation, which is affected by the accumulation of adsorbed molecules, and are very sensitive: a 1 ng deposition on 1 cm² of surface de-tunes the resonance frequency by 2 Hz. By performing free-molecular transport simulations in the vacuum-chamber test environment, measurements by the QCMs can be translated to outgassing rates of the sources, which are located some distance from the probes. For these calculations, JPL currently uses both Monte Carlo schemes (via Molflow) and “view factor matrix” calculations (through in-house solvers). During one successful Molflow application (see “Molflow in space” image, top) a vacuum chamber with a heated inner shroud was simulated, and optimisation of the chamber geometry resulted in a factor-40 increase of transmission to the QCMs over the baseline configuration.
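With the quoted sensitivity, the deposited areal mass follows directly from the measured frequency shift; for a hypothetical shift of 10 Hz, for instance,
\[ \Delta m \approx \frac{\Delta f}{2\ \mathrm{Hz/(ng\,cm^{-2})}} = \frac{10\ \mathrm{Hz}}{2\ \mathrm{Hz/(ng\,cm^{-2})}} = 5\ \mathrm{ng\,cm^{-2}}, \]
which the transport simulations then relate back to the outgassing rate of the distant source.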
From SPHEREx to LISA
Another JPL project involving free molecular-flow simulations is the future near-infrared space observatory SPHEREx (Spectro-Photometer for the History of the Universe and Ices Explorer). This instrument has cryogenically cooled optical surfaces that may condense molecules in vacuum and are thus prone to significant performance degradation from the accumulation of contaminants, including water. Even when taking as much care as possible during the design and preparation of the systems, some contaminants, such as water, cannot be entirely removed from a spacecraft and will desorb from materials persistently. It is therefore vital to know where and how much contamination will accumulate. For SPHEREx, water outgassing, molecular transport and adsorption were modelled using Molflow against internal thermal predictions, enabling a decontamination strategy to keep its optics free from performance-degrading accumulation (see “Molflow in space” image, left). Molflow has also complemented other NASA JPL codes to estimate the return flux (whereby gas particles desorbing from a spacecraft return to it after collisions with a planet’s atmosphere) during a series of planned fly-bys around Jupiter’s moon Europa. For such exospheric sampling missions, it is important to distinguish the actual collected sample from return-flux contaminants that originated from the spacecraft but ended up being collected due to atmospheric rebounds.
It is the ability to import large, complex geometries (through a triangulated file format called STL, used in 3D printing and supported by most CAD software) that makes Molflow usable for JPL’s molecular transport problems. In fact, the JPL team “boosted” our codes with external post-processing: instead of built-in visualisation, they parsed the output file format to extract pressure data on individual facets (polygons representing a surface cell), and sometimes even changed input parameters programmatically – once again working directly on Molflow’s own file format. They also made a few feature requests, such as adding histograms showing how many times molecules bounce before adsorption, or the total distance or time they travel before being adsorbed on the surfaces. These were straightforward to implement, and because JPL’s scientific interests also matched those of CERN users, such additions are now available for everyone in the public versions of the code. Similar requests have come from experiments employing short-lived radioactive beams, such as those generated at CERN’s ISOLDE beamlines. Last year, against all odds during COVID-related restrictions, the JPL team managed to visit CERN. During the visit we showed the team around the site and the chemistry laboratory, they held a seminar for our vacuum group about contamination control at JPL, and we presented the outlook for Molflow developments.
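This kind of external post-processing is straightforward once per-facet results are available in tabular form. The sketch below assumes a hypothetical CSV export with one row per facet (facet id, area and pressure); the file name and column names are illustrative, not Molflow’s actual output format.

```python
# Hypothetical post-processing sketch: read per-facet results exported to CSV
# (assumed columns: facet_id, area_cm2, pressure_mbar), report the highest-
# pressure facets and an area-weighted average pressure.
import csv

def load_facets(path):
    with open(path, newline="") as f:
        return [
            {"id": int(row["facet_id"]),
             "area": float(row["area_cm2"]),
             "pressure": float(row["pressure_mbar"])}
            for row in csv.DictReader(f)
        ]

facets = load_facets("facet_results.csv")          # assumed export file
total_area = sum(f["area"] for f in facets)
avg_pressure = sum(f["pressure"] * f["area"] for f in facets) / total_area

print(f"Area-weighted average pressure: {avg_pressure:.3e} mbar")
for f in sorted(facets, key=lambda f: f["pressure"], reverse=True)[:5]:
    print(f"Facet {f['id']}: {f['pressure']:.3e} mbar over {f['area']:.1f} cm2")
```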
Our latest space-related collaboration, started in 2021, concerns the European Space Agency’s LISA mission, a future gravitational-wave interferometer in space (see CERN Courier September/October 2022 p51). Molflow is being used to analyse data from the recently completed LISA Pathfinder mission, which explored the feasibility of keeping two test masses in gravitational free-fall and using them as inertial sensors by measuring their motion with extreme precision. Because the satellite’s sides have different temperatures, and because the gas sources are asymmetric around the masses, there is a difference in outgassing between the two sides. Moreover, the gas molecules that reach the test mass are slightly faster on one side than the other, resulting in a net force and torque acting on the mass, of the order of femtonewtons. When such precise inertial measurements are required, this phenomenon has to be quantified, along with other microscopic forces, such as Brownian noise resulting from the random bounces of molecules on the test mass. To this end, Molflow is currently being modified to add molecular force calculations for LISA, along with relevant physical quantities such as noise and resulting torque.
Sky’s the limit
Molflow has proven to be a versatile and effective computational physics tool for the characterisation of free-molecular flow, having been adopted for use in space exploration and the aerospace sector. It promises to continue to intertwine different fields of science in unexpected ways. Thanks to the ever-growing gaming industry, which uses ray tracing to render photorealistic scenes with multiple light sources, consumer-grade graphics cards started supporting ray tracing in 2019. Although intended for gaming, they are programmable for generic purposes, including science applications. Simulating on graphics-processing units is much faster than on traditional CPUs, but it is also less precise: in the vacuum world, tiny imprecisions in the geometry can result in “leaks”, with some simulated particles crossing internal walls. If this issue can be overcome, the speedup potential is huge. In-house testing carried out recently at CERN by PhD candidate Pascal Bahr demonstrated a speedup factor of up to 300 on entry-level Nvidia graphics cards, for example.
Another planned Molflow feature is to include surface processes that change the simulation parameters dynamically. For example, some getter films gradually lose their pumping ability as they saturate with gas molecules. This saturation depends on the pumping speed itself, resulting in two parameters (pumping speed and molecular surface saturation) that depend on each other. The way around this is to perform the simulation in iterative time steps, which is straightforward to add but raises many numerical problems.
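A minimal sketch of the iterative time-stepping idea is shown below for a saturating getter surface: the sticking probability is assumed to fall linearly with accumulated coverage, and each step uses the current sticking value to compute how much gas is captured. The incoming flux, capacity and initial sticking factor are made-up illustrative numbers, not measured getter properties.

```python
# Illustrative time-stepping of a saturating getter film: the sticking
# probability depends on the accumulated coverage, and the coverage depends on
# how much has been pumped -- so the two are advanced together in small steps.
incoming_flux = 1.0e15      # molecules hitting the getter per second (assumed)
capacity      = 5.0e17      # molecules the film can absorb before saturating
s0            = 0.5         # initial sticking probability (assumed)
dt            = 10.0        # time step in seconds
coverage      = 0.0         # molecules absorbed so far

for step in range(100):
    sticking = s0 * max(0.0, 1.0 - coverage / capacity)   # degrades with saturation
    pumped = incoming_flux * sticking * dt                  # captured in this step
    coverage = min(coverage + pumped, capacity)
    if step % 20 == 0:
        print(f"t = {step*dt:6.0f} s  sticking = {sticking:.3f}  "
              f"coverage = {coverage/capacity:6.1%}")
```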
Finally, a much-requested feature is automation. The most recent versions of the code already allow scripting, that is, running batch jobs with physics parameters changed step-by-step between executions. Extending these automation capabilities, and adding export formats that allow easier post-processing with common tools (Matlab, Excel and common Python libraries), would significantly increase usability. If GPU ray tracing and iterative simulations are added successfully, the resulting – much faster and more versatile – Molflow code will remain an important tool to predict and optimise the complex vacuum systems of future colliders.
Colliding particles at high energies is a tried and tested route to uncover the secrets of the universe. In a collider, charged particles are packed in bunches, accelerated and smashed into each other to create new forms of matter. Whether accelerating elementary electrons or composite hadrons, past and existing colliders all deal with matter constituents. Colliding force-carrying particles such as photons is more ambitious, but can be done, even at the Large Hadron Collider (LHC).
The LHC, as its name implies, collides hadrons (protons or ions) into one another. In most cases of interest, projectile protons break up in the collision and a large number of energetic particles are produced. Occasionally, however, protons interact through a different mechanism, whereby they remain intact and exchange photons that fuse to create new particles (see “Photon fusion” figure). Photon–photon fusion has a unique signature: the particles originating from this kind of interaction are produced exclusively, i.e. they are the only ones in the final state along with the protons, which often do not disintegrate. Despite this clear imprint, when the LHC operates at nominal instantaneous luminosities, with a few dozen proton–proton interactions in a single bunch crossing, the exclusive fingerprint is contaminated by extra particles from different interactions. This makes the identification of photon–photon fusion challenging.
Protons that survive the collision, having lost a small fraction of their momentum, leave the interaction point still packed within the proton bunch, but gradually drift away as they travel further along the beamline. During LHC Run 2, the CMS collaboration installed a set of forward proton detectors, the Precision Proton Spectrometer (PPS), at a distance of about 200 m from the interaction point on both sides of the CMS apparatus. The PPS detectors can get as close to the beam as a few millimetres and detect protons that have lost between 2% and 15% of their initial kinetic energy (see “Precision Proton Spectrometer up close” panel). They are the CMS detectors located the farthest from the interaction point and the closest to the beam pipe, opening the door to a new physics domain, represented by central-exclusive-production processes in standard LHC running conditions.
Testing the Standard Model
Central exclusive production (CEP) processes at the LHC allow novel tests of the Standard Model (SM) and searches for new phenomena by potentially granting access to some of the rarest SM reactions so far unexplored. The identification of such exclusive processes relies on the correlation between the proton momentum loss measured by PPS and the kinematics of the central system, allowing the mass and rapidity of the central system in the interaction to be inferred very accurately (see “Tagging exclusive events” and “Exclusive identification” figures). Furthermore, the rules for exclusive photon–photon interactions only allow states with certain quantum numbers (in particular, spin and parity) to be produced.
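In the limit where both protons stay intact and lose fractions ξ₁ and ξ₂ of their momentum, the mass and rapidity of the centrally produced system follow from the PPS measurement alone:
\[ m_X = \sqrt{s\,\xi_1 \xi_2}, \qquad y_X = \tfrac{1}{2}\ln\frac{\xi_1}{\xi_2}. \]
With both fractional losses at the 2% acceptance edge, for example, m_X = 0.02 × 13 TeV = 260 GeV, which is the lower edge of the mass range quoted below.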
Precision Proton Spectrometer up close
PPS was born in 2014 as a joint project between the CMS and TOTEM collaborations (CERN Courier April 2017 p23), and in 2018 became a subsystem of CMS following an MoU between CERN, CMS and TOTEM. For the specialised PPS setup to work as designed, its detectors must be located within a few millimetres of the LHC proton beam. The Roman Pots technique – moveable steel “pockets” enclosing the detectors under moderate vacuum conditions with a thin wall facing the beam – is perfectly suited for this task. This technique has been successfully exploited by the TOTEM and ATLAS collaborations at the LHC and was used in the past by experiments at the ISR, the SPS, the Tevatron and HERA. The challenge for PPS is the requirement that the detectors operate continuously during standard LHC running conditions, as opposed to dedicated special runs with a very low interaction rate.
The PPS design for LHC Run 2 incorporated tracking and timing detectors on both sides of CMS. The tracking detector comprises two stations located 10 m apart, capable of reconstructing the position and angle of the incoming proton. Precise timing is needed to associate the production vertex of two protons to the primary interaction vertex reconstructed by the CMS tracker. The first tracking stations of the proton spectrometer were equipped with silicon-strip trackers from TOTEM – a precise and reliable system used since the start of the LHC. In parallel, a suitable detector technology for efficient operation during standard LHC runs was developed, and in 2017 half of the tracking stations (one per side) were replaced by new silicon pixel trackers designed to cope with the higher hit rate. The x, y coordinates provided by the pixels resolve multiple proton tracks in the same bunch crossing, while the “3D” technology used for sensor fabrication greatly enhances resistance against radiation damage. The transition from strips was completed in 2018, when the fully pixel-based tracker was employed.
In parallel, the timing system was set up. It is based on diamond pad sensors initially developed for a new TOTEM detector. The signal collection is segmented in relatively large pads, read out individually by custom, high-speed electronics. Each plane contributes to the time measurement of the proton hit with a resolution of about 100 ps. The design of the detector evolved during Run 2 with different geometries and set-ups, improving the performance in terms of efficiency and overall time resolution.
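The connection between timing and vertex position is purely geometric: for protons detected at equal distances on either side of CMS, the longitudinal vertex coordinate follows from the difference of their arrival times,
\[ z_{\mathrm{vtx}} = \frac{c}{2}\,(t_{\mathrm{arm\,2}} - t_{\mathrm{arm\,1}}), \]
so a combined per-arm resolution of a few tens of picoseconds corresponds, roughly, to a vertex resolution of the order of a centimetre.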
The most common and cleanest process in photon–photon collisions is the exclusive production of a pair of leptons. Theoretical calculations of such processes date back almost a century to the well-known Breit–Wheeler process. The first result obtained by PPS after commissioning in 2016 was the measurement of (semi-)exclusive production of e⁺e⁻ and μ⁺μ⁻ pairs using about 10 fb⁻¹ of CMS data: 20 candidate events were identified with a dilepton mass greater than 110 GeV. This process is now used as a “standard candle” to calibrate PPS and validate its performance. The cross section of this process has been measured by the ATLAS collaboration with their forward proton spectrometer, AFP (CERN Courier September/October 2020 p15).
An interesting process to study is the exclusive production of W-boson pairs. In the SM, electroweak gauge bosons are allowed to interact with each other through point-like triple and quartic couplings. Most extensions of the SM modify the strength of these couplings. At the LHC, electroweak self-couplings are probed via gauge-boson scattering, and specifically photon–photon scattering. A notable advantage of exclusive processes is the excellent mass resolution obtained from PPS, allowing the study of self-couplings at different scales with very high precision.
During Run 2, PPS reconstructed intact protons that lost as little as 2% of their kinetic energy, which for proton–proton collisions at 13 TeV translates to sensitivity to central masses above 260 GeV. In the production of electroweak boson pairs, WW or ZZ, the quartic self-coupling mainly contributes to the high invariant-mass tail of the di-boson system. The analysis searched for anomalously large values of the quartic gauge couplings; the results provide the first constraint on the γγZZ coupling in an exclusive channel and a constraint on γγWW that is competitive with other vector-boson-scattering searches.
Many SM processes proceeding via photon fusion have a relatively low cross section. For example, the predicted cross section for CEP of top quark–antiquark pairs is of the order of 0.1 fb. A search for this process was performed early this year using about 30 fb⁻¹ of CMS data recorded in 2017, with protons tagged by PPS. While the sensitivity of the analysis is not sufficient to test the SM prediction, it can probe possible enhancements due to additional contributions from new physics. Also, the analysis established tools with which to search for exclusive production processes in a multi-jet environment using machine-learning techniques.
Uncharted domains
The SM provides very accurate predictions for processes occurring at the LHC. Yet, it cannot explain the origin of several observations such as the existence of dark matter, the matter–antimatter asymmetry in the universe and neutrino masses. So far, the LHC experiments have been unable to provide answers to those questions, but the search is ongoing. Since physics with PPS mostly targets photon collisions, the only assumption is that the new physics is coupled to the electroweak sector, opening a plethora of opportunities for new searches.
Photon–photon scattering has already been observed in heavy-ion collisions by the LHC experiments, for example by ATLAS (CERN Courier December 2016 p9). But new physics would be expected to enter at higher di-photon masses, which is where PPS comes into play. Recently, a search for di-photon exclusive events was performed using about 100 fb⁻¹ of CMS data at a di-photon mass greater than 350 GeV, where SM contributions are negligible. In the absence of an unexpected signal, a new best limit was set on anomalous four-photon coupling parameters. In addition, a limit on the coupling of axion-like particles to photons was set in the mass region 500–2000 GeV. These are the most restrictive limits to date.
A new, interesting possibility to look for unknown particles is represented by the “missing mass” technique. The exclusivity of CEP makes it possible, in two-particle final states, to infer the four-momentum of one particle if the other is measured. This is done by exploiting the fact that, if the protons are measured and the beam energy is known, the kinematics of the centrally produced final state can be determined: no direct measurements of the second particle are required, allowing us to “see the unseen”. This technique was demonstrated for the first time at the LHC this year, using around 40 and 2 fb⁻¹ of Run 2 data in a search for pp → pZXp and pp → pγXp, respectively, where X represents a neutral, integer-spin particle with an unspecified decay mode. In the absence of an observed signal, the analysis sets the first upper limits for the production of an unspecified particle in the mass range 600–1600 GeV.
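The “missing mass” is obtained from four-momentum conservation: with both outgoing protons measured by PPS and the Z boson (or photon) reconstructed centrally, the four-momentum of the unseen particle X is fixed,
\[ p_X = p_{\mathrm{beam}_1} + p_{\mathrm{beam}_2} - p_{p'_1} - p_{p'_2} - p_{Z/\gamma}, \qquad m_X = \sqrt{p_X^2}, \]
so its mass can be computed without detecting any of its decay products.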
Looking forward with PPS
For LHC Run 3, which began in earnest on 5 July, the PPS team has implemented several upgrades to maximise the physics output from the expected increase in integrated luminosity. The mechanics and readout electronics of the pixel tracker have been redesigned to allow remote shifting of the sensors in several small steps, which better distributes the radiation damage caused by the highly non-uniform irradiation. All timing stations are now equipped with “double diamond” sensors, and from 2023 an additional, second station will be added to each PPS arm. This will improve, by at least a factor of two, the resolution of the measured arrival time of the protons, which is crucial for reconstructing the z coordinate of a possible common vertex. Finally, a new software trigger has been developed that requires the presence of tagged protons in both PPS arms, thus allowing the use of lower energy thresholds for the selection of events with two particle jets in CMS.
The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2, despite only a doubling of the integrated luminosity. This significant increase is due to the upgrade of the detectors, especially of the timing stations, thus placing PPS in the spotlight of the Run 3 research programme. Timing detectors also play a crucial role in the planning for the high-luminosity LHC (HL-LHC) phase. The CMS collaboration has released an expression of interest to pursue studies of CEP at the HL-LHC with the ambitious plan of installing near-beam proton spectrometers at 196, 220, 234, and 420 m from the interaction point. This would extend the accessible mass range to the region between 50 GeV and 2.7 TeV. The main challenge here is to mitigate high “pileup” effects using the timing information, for which new detector technologies, including synergies with the future CMS timing detectors, are being considered.
PPS significantly extends the LHC physics programme, and is a tribute to the ingenuity of the CMS collaboration in the ongoing search for new physics.
With so many new hadronic states being discovered at the LHC (67 and counting, with the vast majority seen by LHCb), it can be difficult to keep track of what’s what. While most are variations of known mesons and baryons, LHCb is uncovering an increasing number of exotic hadrons, namely tetraquarks and pentaquarks. A case in point is its recent discovery, announced at CERN on 5 July, of a new strange pentaquark (with quark content cc̄uds) and a new tetraquark pair: one constituting the first doubly charged open-charm tetraquark (cs̄ud̄) and the other a neutral isospin partner (cs̄ūd). The situation has prompted the LHCb collaboration to introduce a new naming scheme. “We’re creating ‘particle zoo 2.0’,” says Niels Tuning, LHCb physics coordinator. “We’re witnessing a period of discovery similar to the 1950s, when a ‘zoo’ of hadrons ultimately led to the quark model of conventional hadrons in the 1960s.”
While the quark model allows the existence of multiquark states beyond two- and three-quark mesons and baryons, the traditional naming scheme for hadrons doesn’t make much allowance for what these particles should be called. When the first tetraquark candidate was discovered at the Belle experiment in 2003, it was denoted by “X” because it didn’t seem to be a conventional charmonium state. Shortly afterwards, a similarly mysterious but different state turned up at BaBar and was denoted “Y”. Subsequent exotic states seen at Belle and BESIII were dubbed “Z”, and more recently tetraquarks discovered at LHCb were labelled “T”.
Complicating matters further, the subscripts added to differentiate between the various states lack consistency. For example, the first known tetraquark states contained both charm and anticharm quarks, so a subscript “c” was added. But the recent discoveries of tetraquarks and pentaquarks containing a single strange quark require an extra subscript “s”. On top of all of that, explains LHCb’s Tim Gershon, who initiated the new naming scheme, tetraquarks discovered by LHCb in 2020 contain a single charm quark. “We couldn’t assign the subscript ‘c’ because we’ve always used that to denote states containing charm and anticharm, so we didn’t know what symbols to use,” he explains. “Things were starting to become a bit confusing, so we thought it was time to bring some kind of logic to the naming scheme. We have done this over an extended period, not only within LHCb but also involving other experiments and theorists in this field.”
Helpfully, the new proposal labels all tetraquarks “T” and all pentaquarks “P”, with a set of rules regarding the necessary subscripts and superscripts. In this scheme, the two different spin states of the open-charm tetraquarks discovered by LHCb in 2020 become Tcs0(2900)⁰ and Tcs1(2900)⁰ instead of X0(2900)⁰ and X1(2900)⁰, for example, while the latest pentaquark is denoted PΛψs(4338)⁰. The collaboration hopes that the new scheme, which can be extended to six- or seven-quark hadrons, will make it easier for experts to communicate while also helping newcomers to the field.
Importantly, it could make it easier to spot patterns that might have been missed before, perhaps shedding light on the central question of whether exotic hadrons are compact, tightly bound multi-quark states or more loosely bound molecular-like states. The new LHCb scheme might even help researchers predict new exotic hadrons, just as the multiplets arising from the quark model made it possible to predict new mesons and baryons such as the Ω⁻.
“Before this new scheme it was almost like a Tower of Babel situation where it was difficult to communicate,” says Gershon. “We have created a document that people can use as a kind of dictionary, in the hope that it will help the field to progress more rapidly.”
The radio-frequency (RF) cavities that accelerate charged particles in machines like the LHC are powered by devices called klystrons. These electro-vacuum tubes, which amplify RF signals by converting an initial velocity modulation of a stream of electrons into an intensity modulation, produce RF power in a wide frequency range (from several hundred MHz to tens of GHz) and can be used in pulsed or continuous-wave mode to deliver RF power from hundreds of kW to hundreds of MW. The close connection between klystron performance and the power consumption of an accelerator has driven researchers at CERN to develop more efficient devices for current and future colliders.
The efficiency of a klystron is calculated as the ratio between generated RF power and the electrical power that is delivered from the grid. Experience with many thousands of such devices during the past seven decades has established that at low frequency and moderate RF power levels (as required by the LHC), klystrons can deliver an efficiency of 60–65%. For pulsed, high-frequency and high peak-RF power devices, efficiencies are about 40–45%. The efficiency of RF power production is a key element of an accelerator’s overall efficiency. Taking the proposed future electron–positron collider FCC-ee as an example: by increasing klystron efficiency from 65 to 80%, the electrical power savings over a 10-year period could be as much as 1 TWh. In addition, reduced demand on electrical power storage capacity and cooling and ventilation may further reduce the original investment cost.
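The scaling behind such estimates is simple (neglecting auxiliary loads and cooling): for a given RF output power P_RF delivered over a time T, the electricity drawn from the grid is P_RF/η, so raising the efficiency from η₁ to η₂ saves
\[ \Delta E = P_{\mathrm{RF}}\,T\left(\frac{1}{\eta_1}-\frac{1}{\eta_2}\right), \]
which for the 65% → 80% step quoted above amounts to roughly 29% of the RF energy delivered.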
In 2013 the development of high-efficiency klystrons started at CERN within the Compact Linear Collider study as a means to reduce the global energy consumed by the proposed collider. Thanks to strong support from management, this evolved into a project inside the CERN RF group. A small team of five people at CERN and Lancaster University, led by Igor Syratchev, developed accurate computer tools for klystron simulations and in-depth analysis of the beam dynamics, and used them to evaluate effects that limit klystron efficiency. Finally, the team proposed novel technological solutions (including new bunching methods and higher-order harmonic cavities) that can improve klystron efficiency by 10–30% compared to commercial analogues. These new technologies were applied to develop new high-efficiency klystrons for use in the high-luminosity LHC (HL-LHC), FCC-ee and the CERN X-band high-gradient facilities, as well as in medical and industrial accelerators. Some of the new tube designs are now undergoing prototyping in close collaboration between CERN and industry.
The first commercial prototype of a high-efficiency 8 MW X-band klystron developed at CERN was built and tested by Canon Electron Tubes and Devices in July this year. Delivering the expected power level with an efficiency of 53.3%, measured at the company’s factory in Japan, it is the first demonstration of the technological solution developed at CERN, showing an efficiency increase of more than 10% compared to commercially available devices. In terms of RF power production, this translates to an overall increase of 25% using the same wall-plug power as the model currently working at CERN’s X-band facility. Later this year the klystron will arrive at CERN and replace Canon’s conventional 6 MW tube. The next project in progress aims to fabricate a high-efficiency version of the LHC klystron, which, if successful, could be used in the HL-LHC.
“These results give us confidence for the coming high-efficiency version of the LHC klystrons and for the development of FCC-ee,” says RF group leader Frank Gerigk. “It is also an excellent demonstration of the powerful collaboration between CERN and industry.”
A multidisciplinary team in the UK has received seed funding to investigate the feasibility of a new facility for ion-therapy research based on novel accelerator, instrumentation and computing technologies. At the core of the facility would be a laser-hybrid accelerator dubbed LhARA: a high-power pulsed laser striking a thin foil target would create a large flux of protons or ions, which are captured using strong-focusing electron–plasma lenses and then accelerated rapidly in a fixed-field alternating-gradient accelerator. Such a device, says the team, offers enormous clinical potential by providing more flexible, compact and cost-effective multi-ion sources.
High-energy X-rays are by far the most common radiotherapy tool, but recent decades have seen a growth in particle-beam radiotherapy. In contrast to X-rays, proton and ion beams can be manipulated to deliver radiation doses more precisely than conventional radiotherapy, sparing surrounding healthy tissue. Unfortunately, ion-treatment facilities are few in number because they require large synchrotrons to accelerate the ions. The Proton-Ion Medical Machine Study undertaken at CERN during the late 1990s, for example, underpinned the CNAO (Italy) and MedAustron (Austria) treatment centres that helped propel Europe to the forefront of the field – work that is now being continued by CERN’s Next Ion Medical Machine Study (CERN Courier July/August 2021 p23).
“LhARA will greatly accelerate our understanding of how protons and ions interact and are effective in killing cancer cells, while simultaneously giving us experience in running a novel beam,” says LhARA biological science programme manager Jason Parsons of the University of Liverpool. “Together, the technology and the science will help us make a big step forward in optimising radiotherapy treatments for cancer patients.”
A small number of laboratories in Europe already work on laser-driven sources for biomedical applications. The LhARA collaboration, which comprises physicists, biologists, clinicians and engineers, aims to build on this work to demonstrate the feasibility of capturing and manipulating the flux created in the laser-target interaction to provide a beam that can be accelerated rapidly to the desired energy. The laser-driven source offers the opportunity to capture intense, nanosecond-long pulses of protons and ions at an energy of 15 MeV, says the team. This is two orders of magnitude greater than in conventional sources, allowing the space-charge limit on the instantaneous dose to be evaded.
In July, UK Research and Innovation granted £2 million over the next two years to deliver a conceptual design report for an Ion Therapy Research Facility (ITRF) centred around LhARA. The first goal is to demonstrate the feasibility of the laser-hybrid approach in a facility dedicated to biological research, after which the team will work with national and international partnerships to develop the clinical technique. While the programme carries significant technical risk, says LhARA co-spokesperson Kenneth Long from Imperial College London/STFC, it is justified by the high level of potential reward: “The multidisciplinary approach of the LhARA collaboration will place the ITRF at the forefront of the field, partnering with industry to pave the way for significantly enhanced access to state-of-the-art particle-beam therapy.”
The keenly awaited first science-grade images from the James Webb Space Telescope were released on 12 July – and they did not disappoint. Thanks to Webb's unprecedented 6.5 m mirror, together with its four main instruments (NIRCam, NIRSpec, NIRISS and MIRI), the $10 billion observatory marks a new dawn for observational astrophysics.
The six months since Webb's launch from French Guiana have been devoted to commissioning, including alignment and calibration of the mirrors and cooling to cryogenic temperatures to minimise noise from heat radiated by the equipment (CERN Courier March/April 2022 p7). Unlike the Hubble Space Telescope, Webb does not observe at ultraviolet or visible wavelengths but is primarily sensitive to near- and mid-infrared light. This enables it to observe the farthest galaxies and stars as they were just a few hundred million years after the Big Bang.
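To illustrate why infrared sensitivity is what opens up this early epoch, a minimal sketch with an assumed, representative redshift (the value z = 10 is an illustration, not a number from the article):

```python
# Rest-frame ultraviolet light from a galaxy seen a few hundred million
# years after the Big Bang is redshifted into Webb's near-infrared band.
lyman_alpha_nm = 121.6                 # rest-frame Lyman-alpha wavelength in nanometres
z = 10                                 # assumed redshift for illustration
observed_um = lyman_alpha_nm * (1 + z) / 1000

print(f"observed wavelength: {observed_um:.2f} micron")  # ~1.34 micron, in the near-infrared
```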
Wealth of information
Pictured here are some of Webb's early-release images. The first deep-field image (top) covers the same area of the sky as a grain of sand held at arm's length, and is swarming with galaxies. At the centre is a galaxy cluster called SMACS 0723, whose combined mass is so high that its gravitational field bends the light of objects that lie behind it (resulting in arc-like features), revealing galaxies that existed when the universe was less than a billion years old. The image was taken using NIRCam and is a combination of exposures at different wavelengths. The spectrographs, NIRSpec and NIRISS, will provide a wealth of information on the composition of stars, galaxies and their clusters, offering a rare peek into the earliest stages of their formation and evolution.
Stephan’s Quintet (bottom left) is a visual grouping of five galaxies that was first discovered in 1877 and remains one of the most studied compact galaxy groups. The actual grouping involves only four galaxies, which are predicted to eventually merge. The non-member, NGC 7320, which lies about 40 million light years from Earth rather than the roughly 290 million light years of the true group, is seen on the left, with vast regions of active star formation in its numerous spiral arms.
A third stunning image, of the Southern Ring nebula (bottom right), shows a dying star. With its reservoirs of light elements already exhausted, the star starts using up any available heavier elements to sustain itself – a complex and violent process that ejects large amounts of material at intervals, visible as shells.
These images are just a taste, yet not all Webb data will be so visually spectacular. By extending Hubble’s observations of distant supernovae and other standard candles, for example, the telescope should enable the local rate of expansion to be determined more precisely, possibly shedding light on the nature of dark energy. By measuring the motion and gravitational lensing of early objects, it will also survey the distribution of dark matter, and might even hint at what it’s made of. Using transmission spectroscopy, Webb will also reveal exoplanets in unprecedented detail, learn about their chemical compositions and search for signatures of habitability.
An ambitious upgrade of the US’s flagship X-ray free-electron-laser facility – the Linac Coherent Light Source (LCLS) at SLAC in California – is nearing completion. Set for “first light” early next year, LCLS-II will deliver X-ray laser beams that are 10,000 times brighter than LCLS at repetition rates of up to a million pulses per second – generating more X-ray pulses in just a few hours than the current laser has delivered through the course of its 12-year operational lifetime. The cutting-edge physics of the new facility – underpinned by a cryogenically cooled superconducting radio-frequency (SRF) linac – will enable the two beams from LCLS and LCLS-II to work in tandem. This, in turn, will help researchers observe rare events that happen during chemical reactions and study delicate biological molecules at the atomic scale in their natural environments, as well as potentially shed light on exotic quantum phenomena with applications in next-generation quantum computing and communications systems.
Successful delivery of the LCLS-II linac was made possible by a multi-centre collaborative effort involving US national laboratories and universities, spanning everything from the 2014 decision to pursue an SRF-based machine through the design, assembly, testing, transportation and installation of a string of 37 SRF cryomodules (most of them more than 12 m long) in the SLAC tunnel. All told, this major undertaking required the construction of forty 1.3 GHz SRF cryomodules (five of them spares) and three 3.9 GHz cryomodules (one spare), with delivery of approximately one cryomodule per month from February 2019 until December 2020 allowing the LCLS-II linac installation to be completed on schedule in November 2021.
This industrial-scale programme of works was shaped by a strategic commitment, early on in the LCLS-II design phase, to transfer, and ultimately iterate, the established SRF capabilities of the European XFEL in Hamburg into the core technology platform used for the LCLS-II SRF cryomodules. Put simply: it would not have been possible to complete the LCLS-II project, within cost and on schedule, without the sustained cooperation of the European XFEL consortium – in particular, colleagues at DESY, CEA Saclay and several other European laboratories as well as KEK – that generously shared their experiences and know-how.
Better together
These days, large-scale accelerator or detector projects are very much a collective endeavour. Not only is the sprawling scope of such projects beyond a single organisation, but the risks of overspend and slippage can greatly increase with a “do-it-on-your-own” strategy. When the LCLS-II project opted for an SRF technology pathway in 2014 to maximise laser performance, the logical next step was to build a broad-based coalition with other US Department of Energy (DOE) national laboratories and universities. In this case, SLAC, Fermilab, Jefferson Lab (JLab) and Cornell University contributed expertise for cryomodule production, while Argonne National Laboratory and Lawrence Berkeley National Laboratory managed delivery of the undulators and photoinjector for the project. For sure, the start-up time for LCLS-II would have increased significantly without this joint effort, extending the overall project by several years.
Each partner brought something unique to the LCLS-II collaboration. While SLAC was still a relative newcomer to SRF technologies, the lab had a management team that was familiar with building large-scale accelerators (following successful delivery of the LCLS). The priority for SLAC was therefore to scale up its small nucleus of SRF experts by recruiting experienced SRF technologists and engineers to the staff team. In contrast, the JLab team brought an established track-record in the production of SRF cryomodules, having built its own machine, the Continuous Electron Beam Accelerator Facility (CEBAF), as well as cryomodules for the Spallation Neutron Source (SNS) linac at Oak Ridge National Laboratory in Tennessee. Cornell, too, came with a rich history in SRF R&D – capabilities that, in turn, helped to solidify the SRF cavity preparation process for LCLS-II.
Finally, Fermilab had, at the time, recently built two cutting-edge cryomodules of the same style as that chosen for LCLS-II. To fabricate these modules, Fermilab worked closely with the team at DESY to set up the same type of production infrastructure used on the European XFEL. From that perspective, the required tooling and fixtures were all ready to go for the LCLS-II project. While Fermilab was the “designer of record” for the SRF cryomodule, with primary responsibility for delivering a working design to meet LCLS-II requirements, the realisation of an optimised technology platform was a team effort involving SRF experts from across the collaboration.
Collective problems, collective solutions
While the European XFEL provided the template for the LCLS-II SRF cryomodule design, several key elements of the LCLS-II approach subsequently evolved to align with the requirements of continuous-wave (CW) operation and the specifics of the SLAC tunnel. Success in tackling these technical challenges – across design, assembly, testing and transportation of the cryomodules – is testament to the strength of the LCLS-II collaboration and the collective efforts of the participating teams in the US and Europe.
Challenges are inevitable when developing new facilities at the limits of known technology
For one, the thermal performance specification of the SRF cavities exceeded the state-of-the-art and required development and industrialisation of the concept of nitrogen doping (a process in which SRF cavities are heat-treated in a nitrogen atmosphere to increase their cryogenic efficiency and, in turn, lower the overall operating costs of the linac). The nitrogen-doping technique was invented at Fermilab in 2012 but, prior to LCLS-II construction, had been used only in an R&D setting.
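The link between a higher quality factor and lower operating cost can be made explicit with the standard expression for the RF power dissipated in the cavity walls (generic SRF bookkeeping rather than LCLS-II design values):

\[
P_\mathrm{diss} \;=\; \frac{V_\mathrm{acc}^{2}}{(R/Q)\,Q_{0}},
\]

so, at fixed accelerating voltage \(V_\mathrm{acc}\), doubling \(Q_{0}\) halves the heat deposited in the cavity walls at 2 K; since each watt removed at 2 K costs of order several hundred watts of wall-plug power at the cryoplant, the gains from nitrogen doping compound across the whole linac.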
The priority was clear: to transfer the nitrogen-doping capability to LCLS-II's industry partners, so that the cavity manufacturers could perform the necessary materials processing before final helium-vessel jacketing. During this knowledge transfer, it was found that nitrogen-doped cavities are particularly sensitive to the base niobium sheet material – something the collaboration only realised once the cavity vendors were in full production. This resulted in a number of changes to the heat-treatment temperature, depending on which material supplier was used and the specific properties of the niobium sheet deployed in different production runs. JLab, for its part, held the contract for the cavities and pulled out all the stops to ensure success.
At the same time, the conversion from pulsed to CW operation necessitated a faster cooldown cycle for the SRF cavities, requiring several changes to the internal piping, a larger exhaust chimney on the helium vessel, as well as the addition of two new cryogenic valves per cryomodule. Also significant is the 0.5% slope in the longitudinal floor of the existing SLAC tunnel, which dictated careful attention to liquid-helium management in the cryomodules (with a separate two-phase line and liquid-level probes at both ends of every module).
However, the biggest setback during LCLS-II construction involved the loss of beamline vacuum during cryomodule transport. Two cryomodules had their beamlines vented and required complete disassembly and rebuilding, resulting in a five-month moratorium on shipping completed cryomodules in the second half of 2019. It turned out that a small change to a coupler flange, thought at the time to be inconsequential, left the cold coupler assembly susceptible to resonances excited during transport. The result was a bellows tear that vented the beamline. Unfortunately, initial road tests with a similar, though not identical, prototype cryomodule had not revealed this behaviour.
Such challenges are inevitable when developing new facilities at the limits of known technology. In the end, the problem was successfully addressed using the diverse talents of the collaboration to brainstorm solutions, with the available access ports allowing an elastomer wedge to be inserted to secure the vulnerable section. A key take-away here is the need for future projects to perform thorough transport analysis, verify the transport loads using mock-ups or dummy devices, and install adequate instrumentation to ensure granular data analysis before long-distance transport of mission-critical components.
Upon completion of the assembly phase, all LCLS-II cryomodules were tested at either Fermilab or JLab, with one module tested at both locations to ensure reproducibility and consistency of results. For high Q0 performance in nitrogen-doped cavities, cooldown flow rates of at least 30 g/s of liquid helium were found to give the best results, helping to expel magnetic flux that could otherwise become trapped in the cavity. Overall, cryomodule performance on the test stands exceeded specifications, with a total accelerating voltage per cryomodule of 158 MV (versus a specification of 128 MV) and an average Q0 of 3 × 10¹⁰ (versus a specification of 2.7 × 10¹⁰). Looking ahead, attention is already shifting to real-world cryomodule performance in the SLAC tunnel, which was measured for the first time in 2022.
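A rough sanity check of what the test-stand numbers imply for the machine as a whole (an illustrative estimate only: it assumes the quoted averages apply uniformly to the 35 installed 1.3 GHz cryomodules and leaves the two 3.9 GHz modules out of the headline voltage):

```python
# Illustrative headroom estimate from the quoted per-cryomodule figures.
# Assumes all 35 installed 1.3 GHz cryomodules match the quoted averages
# and ignores the two 3.9 GHz modules and any operational derating.
n_modules = 35
spec_mv, measured_mv = 128, 158

print(f"total at specification: {n_modules * spec_mv / 1000:.2f} GV")      # ~4.5 GV
print(f"total as tested:        {n_modules * measured_mv / 1000:.2f} GV")  # ~5.5 GV, ample margin for a 4 GeV beam
```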
Transferable lessons
For all members of the collaboration working on the LCLS-II cryomodules, this challenging project holds many lessons. Most important is to build a strong team and use that strength to address problems in real-time as they arise. The mantra “we are all in this together” should be front-and-centre for any multi-institutional scientific endeavour – as it was in this case. Solutions need to be thought of in a more global sense, as the best answer might mean another collaborator taking more onto their plate. Collaboration implies true partnership and a working model very different to a transactional customer–vendor relationship.
From a planning perspective, it's vital to ensure that the initial project cost and schedule are consistent with the technical challenges and the preparedness of the infrastructure. Prototypes and pre-series production runs reduce risk and cost in the long term and should be part of the plan, but there must be sufficient time for data analysis and changes to be made after a prototype run for it to be useful. Time spent on detailed technical reviews is also time well spent. New designs of complex components need comprehensive oversight and review, and should be controlled by a team rather than a single individual, so that sign-off on any detailed design changes is made by an informed collective.
LCLS-II science: capturing atoms and molecules in motion like never before
The strobe-like pulses of the LCLS, which produced its first light in April 2009, are just a few millionths of a billionth of a second long, and a billion times brighter than previous X-ray sources. This enables users from a wide range of fields to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials and explore fundamental processes in living things. LCLS-II will provide a major jump in capability – moving from 120 pulses per second to 1 million, enabling experiments that were previously impossible. The scientific community has identified six areas where the unique capabilities of LCLS-II will be essential for further scientific progress:
Nanoscale materials dynamics, heterogeneity and fluctuations
Programmable trains of soft X-ray pulses at high repetition rate will characterise spontaneous fluctuations and heterogeneities at the nanoscale across many decades, while coherent hard X-ray scattering will provide unprecedented spatial resolution of material structure, its evolution and relationship to functionality under operating conditions.
Fundamental energy and charge dynamics
High-repetition-rate soft X-rays will enable new techniques that will directly map charge distributions and reaction dynamics at the scale of molecules, while new nonlinear X-ray spectroscopies offer the potential to map quantum coherences in an element-specific way for the first time.
Catalysis and photocatalysis
Time-resolved, high-sensitivity, element-specific spectroscopy will provide the first direct view of charge dynamics and chemical processes at interfaces, characterise subtle conformational changes associated with charge accumulation, and capture rare chemical events in operating catalytic systems across multiple time and length scales – all of which are essential for designing new, more efficient systems for chemical transformation and solar-energy conversion.
Emergent phenomena in quantum materials
Fully coherent X-rays will enable new high-resolution spectroscopy techniques to map the collective excitations that define these new materials in unprecedented detail. Ultrashort X-ray pulses and optical fields will facilitate new methods for manipulating charge, spin and phonon modes to both advance fundamental understanding and point the way to new approaches for materials control.
Revealing biological function in real time
The high repetition rate of LCLS-II will provide a unique capability to follow the dynamics of macromolecules and interacting complexes in real time and in native environments. Advanced solution-scattering and coherent imaging techniques will characterise the conformational dynamics of heterogeneous ensembles of macromolecules, while the ability to generate “two-colour” hard X-ray pulses will resolve atomic-scale structural dynamics of biochemical processes that are often the first step leading to larger-scale protein motions.
Matter in extreme environments
The capability of LCLS-II to generate soft and hard X-ray pulses simultaneously will enable the creation and observation of extreme conditions that are far beyond our present reach, with the latter allowing the characterisation of unknown structural phases. Unprecedented spatial and temporal resolution will enable direct comparison with theoretical models relevant for inertial-confinement fusion and planetary science.
Work planning and control is another essential element for success and safety. This idea needs to be built into the “manufacturing system”, including into the cost and schedule, and to be part of each individual’s daily checklist. No one disagrees with this concept, but good intentions on their own will not suffice. As such, required safety documentation should be clear and unambiguous, and be reviewed by people with relevant expertise. Production data and documentation need to be collected, made easily available to the entire project team, and analysed regularly for trends, both positive and negative.
Supply chain, of course, is critical in any production environment – and LCLS-II is no exception. When possible, it is best to have parts procured, inspected, accepted and on-the-shelf before production begins, thereby eliminating possible workflow delays. Pre-stocking also allows adequate time to recycle and replace parts that do not meet project specifications. Also worth noting is that it’s often the smaller components – such as bellows, feedthroughs and copper-plated elements – that drive workflow slowdowns. A key insight from LCLS-II is to place purchase orders early, stay on top of vendor deliveries, and perform parts inspections as soon as possible post-delivery. Projects also benefit from having clearly articulated pass/fail criteria and established procedures for handling non-conformance – all of which alleviates the need to make critical go/no-go acceptance decisions in the face of schedule pressures.
As with many accelerator projects, LCLS-II is not an end-point in itself, more an evolutionary transition within a longer term roadmap
Finally, it is worth highlighting the broader impact, both personal and professional, on the individual team members participating in a big-science collaboration like LCLS-II. At the end of the build, what remained after the designs were completed, the problems solved, the production rates met, and the cryomodules delivered and installed, were the friendships that had been nurtured over several years. The collaboration among partners, formal and informal, who truly cared about the project's success and had each other's backs when issues arose: this is what solidified the mutual respect and camaraderie and, in the end, made LCLS-II such a rewarding project.
First light
In April 2022 the new LCLS-II linac was successfully cooled to its 2 K operating temperature. The next step was to pump the SRF cavities with more than a megawatt of microwave power to accelerate the electron beam from the new source. Following further commissioning of the machine, first X-rays are expected to be produced in early 2023.
As with many accelerator projects, LCLS-II is not an end-point in itself, more an evolutionary transition within a longer term roadmap. In fact, work is already under way on LCLS-II-HE, a project that will increase the energy of the CW SRF linac from 4 to 8 GeV, enabling the photon-energy range to be extended to at least 13 keV, and potentially up to 20 keV, at 1 MHz repetition rates. To ensure continuity of production, 25 next-generation cryomodules with even higher performance specifications than their LCLS-II counterparts are in the works, while upgrades to the source and beam transport are also being finalised.
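The jump in photon-energy reach follows from the standard on-axis undulator resonance condition (a generic relation, sketched here with the undulator period and deflection parameter held fixed rather than taken from the LCLS-II-HE design):

\[
\lambda_\mathrm{photon} \;=\; \frac{\lambda_u}{2\gamma^{2}}\left(1 + \frac{K^{2}}{2}\right),
\]

so at fixed undulator period \(\lambda_u\) and deflection parameter \(K\) the photon energy scales as \(\gamma^{2}\): doubling the electron energy from 4 to 8 GeV raises the photon-energy reach by roughly a factor of four, consistent with the extension towards 20 keV.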
While the fascinating science opportunities for LCLS-II-HE continue to be refined and expanded, of one thing we can be certain: strong collaboration and the collective efforts of the participating teams are crucial.
What happened? A tragedy has befallen Ukraine, leaving many in despair or facing a dilemma. After 70 mainly peaceful years for much of Europe, we were surprised by war, because we had forgotten that it takes an effort to maintain peace.
Having witnessed the horrors of war at first hand, for several years as a soldier and then as a displaced person, I could not imagine that humanity would unleash another war on the continent. As one of the last witnesses of that war, I wonder what advice should be passed on, especially to younger colleagues, about what to do in the short term and, perhaps more importantly, what to do afterwards.
Scientists have a special responsibility. Fortunately, there is no doubt today that science is independent of political doctrines. There is no “German physics” any more. We have established human relationships with our colleagues based on our enthusiasm for our profession, which has led to mutual trust and tolerance.
This has been practised at CERN for 70 years and continued at SESAME, where delegates from Israel, Palestine, Iran, Cyprus, Turkey and other governments sit peacefully around a table. Another offshoot of CERN, the South East European International Institute for Sustainable Technologies (SEEIIST), is in the making in the Balkans. Apart from fostering science, the aim is to transfer ethical achievements from science to politics: science diplomacy, as it has come to be known. In practice, this is done, for example, in the CERN Council where each government sends a representative and an additional scientist who work effectively together on a daily basis.
In the case of imminent political conflicts, “Science for Peace” cannot of course help immediately, but occasionally opportunities arise even for this. In 1985, when disarmament negotiations between Gorbachev and Reagan in Geneva reached an impasse, one of the negotiators asked me to invite the key experts to CERN on neutral territory, and at a confidential dinner the knot was untied. This showed how trust built up in scientific cooperation can impact politics.
Hot crises put us in particularly difficult dilemmas. It is therefore understandable that the CERN Council has to follow, to a large extent, the guidelines of the individual governments and sometimes introduce harsh sanctions. This causes considerable damage to many excellent projects, damage that should be mitigated as much as possible. But it seems equally important to prevent, or at least alleviate, human suffering among scientific colleagues and their families, and in doing so we should allow them tolerance and full freedom of expression. I am sure the CERN management will try to achieve this, as in the past.
Day after
But what I consider most important is to prepare for the situation after the war. Somehow and sometime there will be a solution to the Russian invasion. On that “day after”, it will be necessary to talk to each other again and build a new world out of the ruins. This was facilitated after World War II because, despite the Nazi reign of terror, some far-sighted scientists maintained human relations as well as scientific ones. I remember with pleasure how I was invited to spend a sabbatical year in 1948 in Sweden with Lise Meitner. I was also one of the first German citizens to be invited to a scientific conference in Israel in 1957, where I was received without resentment.
CERN was the first scientific organisation whose mission was not only to conduct excellent science, but also to help improve relations between nations. CERN did this initially in Europe with great success. Later, during the most intense period of the Cold War, it was CERN that signed an agreement in the 1960s with the Soviet laboratory at Serpukhov. Together with contacts with JINR in Dubna, this offered one of the few opportunities for scientific West–East cooperation. CERN followed these principles during the occupation of the Czechoslovak Socialist Republic in 1968 and during the Afghanistan crisis in 1979.
The aim is to transfer ethical achievements from science to politics
CERN has become a symbol of what can be achieved when working on a common project without discrimination, for the benefit of science and humanity. In recent decades, when peace has reigned in Europe, this second goal of CERN has somewhat receded into the background. The present crisis reminds us to make greater efforts in this direction again, even more so when many powers disregard ethical principles or formal treaties by pretending that their fundamental interests are violated. Science for Peace tries to help create a minimum of human trust between governments. Without this, we run the risk that future political treaties will be based only on deterrence. That would be a gloomy world.
A vision for the day after requires courage and more Science for Peace than ever before.