Tracing molecules at the vacuum frontier

Thermal-radiation calculation

In particle accelerators, large vacuum systems ensure that the beams travel as freely as possible. Yet even at one 25-trillionth of the density of Earth’s atmosphere, a tiny concentration of gas molecules remains. These pose a problem: their collisions with accelerated particles reduce the beam lifetime and induce instabilities. It is therefore vital, from the early design stage, to plan efficient vacuum systems and predict residual pressure profiles.

Surprisingly, it is almost impossible to find commercial software that can carry out the underlying vacuum calculations. Since the background pressure in accelerators (of the order of 10⁻⁹–10⁻¹² mbar) is so low, molecules rarely collide with one another, so the results of codes based on computational fluid dynamics aren’t valid. Although workarounds exist (solving vacuum equations analytically, modelling a vacuum system as an electrical circuit, or taking advantage of similarities between ultra-high vacuum and thermal radiation), a CERN-developed simulator called “Molflow”, for molecular flow, has become the de facto industry standard for ultra-high-vacuum simulations.

Instead of trying to analytically solve the surprisingly difficult gas behaviour over a large system in one step, Molflow is based on the so-called test-particle Monte Carlo method. In a nutshell: if the geometry is known, a single test particle is created at a gas source and “bounced” through the system until it reaches a pump. Then, repeating this millions of times, with each bounce happening in a random direction, just like in the real world, the program can calculate the hit-density anywhere, from which the pressure is obtained.
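To make the principle concrete, here is a minimal sketch in Python of a test-particle Monte Carlo for the textbook case of molecules traversing a cylindrical tube – our illustration of the method, not Molflow’s actual implementation; the geometry, names and sample count are arbitrary:

```python
import math
import random

def unit(v):
    s = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/s, v[1]/s, v[2]/s)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def cosine_direction(n):
    """Random unit vector following Lambert's cosine law about unit normal n."""
    phi = 2.0 * math.pi * random.random()
    sin_t = math.sqrt(random.random())        # P(theta) proportional to cos(theta)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = unit(cross(a, n))
    t2 = cross(n, t1)
    return tuple(sin_t * (math.cos(phi) * t1[i] + math.sin(phi) * t2[i]) + cos_t * n[i]
                 for i in range(3))

def trace_one(R, L):
    """One molecule entering the tube at z = 0; True if it escapes at z = L."""
    r, ang = R * math.sqrt(random.random()), 2.0 * math.pi * random.random()
    p = (r * math.cos(ang), r * math.sin(ang), 0.0)
    d = cosine_direction((0.0, 0.0, 1.0))
    while True:
        # Distance to the cylindrical wall x^2 + y^2 = R^2 (positive root)
        a = d[0]*d[0] + d[1]*d[1]
        b = 2.0 * (p[0]*d[0] + p[1]*d[1])
        c = p[0]*p[0] + p[1]*p[1] - R*R
        t_wall = math.inf
        if a > 0.0:
            disc = b*b - 4.0*a*c
            if disc > 0.0:
                t = (-b + math.sqrt(disc)) / (2.0 * a)
                if t > 1e-12:
                    t_wall = t
        # Distance to whichever end plane the molecule is heading towards
        t_end = (L - p[2]) / d[2] if d[2] > 0 else (-p[2]) / d[2] if d[2] < 0 else math.inf
        if t_end < t_wall:
            return d[2] > 0                   # escaped: transmitted, or back out
        p = (p[0] + t_wall*d[0], p[1] + t_wall*d[1], p[2] + t_wall*d[2])
        d = cosine_direction(unit((-p[0], -p[1], 0.0)))   # diffuse wall bounce

random.seed(1)
N = 100_000
hits = sum(trace_one(1.0, 1.0) for _ in range(N))
print(f"Transmission probability: {hits/N:.3f} (Clausing's value for L = R is ~0.672)")
```

For a tube whose length equals its radius, the estimate converges to Clausing’s classic transmission factor of about 0.672, illustrating how hit counts turn into physically meaningful quantities.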

The idea for Molflow emerged in 1988 when the author (RK) visited CERN to discuss the design of the Elettra light source with CERN vacuum experts (see “From CERN to Elettra, ESRF, ITER and back” panel). Back then, few people could have foreseen the numerous applications outside particle physics that it would have. Today, Molflow is used in applications ranging from chip manufacturing to the exploration of the Martian surface, with more than 1000 users worldwide and many more downloads from the dedicated website.

Molflow in space 

While at CERN we naturally associate ultra-high vacuum with particle accelerators, there is another domain where operating pressures are extremely low: space. In 2017, after first meeting at a conference, a group from German satellite manufacturer OHB visited the CERN vacuum group, interested in seeing our chemistry lab and the cleaning process applied to vacuum components. We also gave a demonstration of Molflow for vacuum simulations. It turned out that they were actively looking for a modelling tool that could simulate specific molecular-contamination transport phenomena for their satellites, since the industrial code they were using had very limited capabilities and was not open source.

Molflow has complemented NASA JPL codes to estimate the return flux during a series of planned fly-bys around Jupiter’s moon Europa

A high-quality, clean mirror for a space telescope, for example, must spend up to two weeks encapsulated in the closed fairing from launch until it is deployed in orbit. During this time, without careful prediction and mitigation, certain volatile compounds (such as adhesive used on heating elements) present within the spacecraft can find their way to and become deposited on optical elements, reducing their reflectivity and performance. It is therefore necessary to calculate the probability that molecules migrate from a certain location, through several bounces, and end up on optical components. Whereas this is straightforward when all simulation parameters are static, adding chemical processes and molecule accumulation on surfaces required custom development. Even though Molflow could not handle these processes “out of the box”, the OHB team was able to use it as a basis that could be built on, saving the effort of creating the graphical user interface and the ray-tracing parts from scratch. With the help of CERN’s knowledge-transfer team, a collaboration was established with the Technical University of Munich: a “fork” in the code was created; new physical processes specific to their application were added; and the code was also adapted to run on computer clusters. The work was made publicly available in 2018, when Molflow became open source.

From CERN to Elettra, ESRF, ITER and back

Molflow simulation

Molflow emerged in 1988 during a visit to CERN by its original author (RK), who was working at the Elettra light source in Trieste at the time. CERN vacuum expert Alberto Pace showed him a computer code written in Fortran that enabled the trajectories of particles to be calculated via a technique called ray tracing. On returning to Trieste, and realising that the CERN code couldn’t be run there due to hardware and software incompatibilities, RK decided to rewrite it from scratch. Three years later the code was formally released. Once more, credit must go to CERN for being the birthplace of ideas that other laboratories have developed into applications of their own.

Molflow was originally written in Turbo Pascal, had (black-and-white) graphics, and visualised geometries in 3D – even allowing basic geometry editing and pressure plots. While today such features are found in every simulator, at the time the code stood out and was used in the design of several accelerator facilities, including the Diamond Light Source, the Spallation Neutron Source, Elettra and Alba – as well as for the analysis of a gas-jet experiment for the PANDA experiment at GSI Darmstadt. That said, the early code had its limitations: the upper limit of user memory (640 kB for MS-DOS) significantly restricted the number of polygons that could describe the geometry, and it ran on a single processor.

In 2007 the original code was given a new lease of life at the European Synchrotron Radiation Facility in Grenoble, where RK had moved as head of the vacuum group. The code was ported to C++ and gained multi-processor capability, which is particularly suitable for Monte Carlo calculations: if you have eight CPU cores, for example, you can trace eight molecules at the same time. OpenGL (Open Graphics Library) acceleration made the visualisation very fast even for large structures, allowing the usual camera controls of CAD editors to be added. Between 2009 and 2011 Molflow was used at ITER, again following its original author, for the design and analysis of vacuum components for the international tokamak project.

In 2012 the project was resumed at CERN, where RK had arrived the previous year. From here, the focus was on expanding the physics and applications: ray-tracing terms like “hit density” and “capture probability” were replaced with real-world quantities such as pressure and pumping speed. To publish the code, the group created a website with downloads, tutorial videos and a user forum. Later that year, a sister code, “Synrad”, for synchrotron-radiation calculations, also written in Trieste in the 1990s, was ported to the modern environment. The two codes could, for the first time, be used as a package: first, a synchrotron-radiation simulation determines where light hits a vacuum chamber; then the results are imported into a subsequent vacuum simulation to trace the gas desorbed from the chamber walls. This is the so-called photon-stimulated desorption effect, a major hindrance to many accelerators, including the LHC.

Molflow and Synrad have been downloaded more than 1000 times in the past year alone, and anonymous user metrics hint at around 500 users who launch the codes at least once per month. Use is by far the highest in China, followed by the US, Germany and Japan; Switzerland, including users at CERN, places only fifth. Since 2018 the roughly 35,000-line code has been available open source and, although originally written for Windows, it is now available for other operating systems, including the new ARM-based Macs and several versions of Linux.

One year later, the Contamination Control Engineering (CCE) team from NASA’s Jet Propulsion Laboratory (JPL) in California reached out to CERN in the context of its three-stage Mars 2020 mission. The Mars 2020 Perseverance Rover, built to search for signs of ancient microbial life, successfully landed on the Martian surface in February 2021 and has collected and cached samples in sealed tubes. A second mission plans to retrieve the cache canister and launch it into Mars orbit, while a third would locate and capture the orbital sample and return it to Earth. Each spacecraft experiences and contributes to its own contamination environment through thruster operations, material outgassing and other processes. JPL’s CCE team performs the identification, quantification and mitigation of such contaminants, from the concept-generation to the end-of-mission phase. Key to this effort is the computational physics modelling of contaminant transport from materials outgassing, venting, leakage and thruster plume effects.

Contamination consists of two types: molecular (thin-film deposition effects) and particulate (producing obscuration, optical scatter, erosion or mechanical damage). Both can lead to degradation of optical properties and spurious chemical composition measurements. As more sensitive space missions are proposed and built – particularly those that aim to detect life – understanding and controlling outgassing properties requires novel approaches to operating thermal vacuum chambers. 

Just like accelerator components, most spacecraft hardware undergoes long-duration vacuum baking at relatively high temperatures to reduce outgassing. Outgassing rates are verified with quartz crystal microbalances (QCMs), rather than the vacuum gauges used at CERN. These probes measure a resonance frequency of oscillation, which is affected by the accumulation of adsorbed molecules, and are very sensitive: a deposition of 1 ng on 1 cm² of surface de-tunes the resonance frequency by 2 Hz. By performing free-molecular transport simulations of the vacuum-chamber test environment, QCM measurements can be translated into outgassing rates of the sources, which sit some distance from the probes. For these calculations, JPL currently uses both Monte Carlo schemes (via Molflow) and “view factor matrix” calculations (through in-house solvers). In one successful Molflow application (see “Molflow in space” image, top), a vacuum chamber with a heated inner shroud was simulated, and optimisation of the chamber geometry resulted in a factor-of-40 increase in transmission to the QCMs over the baseline configuration.
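As a back-of-envelope illustration of how such measurements are used (only the 2 Hz per ng/cm² sensitivity comes from the text; the frequency shift, duration and simulated transmission factor are invented for the example):

```python
SENSITIVITY = 2.0   # Hz of frequency shift per ng/cm^2 of deposit (from the text)

def areal_rate(delta_f_hz, duration_s):
    """Deposition rate at the probe, in ng/cm^2/s."""
    return delta_f_hz / SENSITIVITY / duration_s

rate_at_qcm = areal_rate(delta_f_hz=36.0, duration_s=3600.0)  # example shift
transmission = 2.5e-3   # simulated fraction of outgassed molecules reaching the QCM
rate_at_source = rate_at_qcm / transmission  # up to geometry factors
print(f"{rate_at_qcm:.1e} ng/cm2/s at the probe -> "
      f"{rate_at_source:.1e} ng/cm2/s at the source")
```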

From SPHEREx to LISA

Another JPL project involving free molecular-flow simulations is the future near-infrared space observatory SPHEREx (Spectro-Photometer for the History of the Universe and Ices Explorer). This instrument has cryogenically cooled optical surfaces that may condense molecules in vacuum and are thus prone to significant performance degradation from the accumulation of contaminants, including water. Even when taking as much care as possible during the design and preparation of the systems, some elements, such as water, cannot be entirely removed from a spacecraft and will desorb from materials persistently. It is therefore vital to know where and how much contamination will accumulate. For SPHEREx, water outgassing, molecular transport and adsorption were modelled using Molflow against internal thermal predictions, enabling a decontamination strategy to keep its optics free from performance-degrading accumulation (see “Molflow in space” image, left). Molflow has also complemented other NASA JPL codes to estimate the return flux (whereby gas particles desorbing from a spacecraft return to it after collisions with a planet’s atmosphere) during a series of planned fly-bys around Jupiter’s moon Europa. For such exospheric sampling missions, it is important to distinguish the actual collected sample from return-flux contaminants that originated from the spacecraft but ended up being collected due to atmospheric rebounds.

Vacuum-chamber simulation for NASA

It is the ability to import large, complex geometries (through a triangulated file format called STL, used in 3D printing and supported by most CAD software) that makes Molflow usable for JPL’s molecular-transport problems. In fact, the JPL team “boosted” our codes with external post-processing: instead of using the built-in visualisation, they parsed the output file format to extract pressure data on individual facets (polygons representing a surface cell), and sometimes even changed input parameters programmatically – once again working directly on Molflow’s own file format. They also made a few feature requests, such as histograms showing how many times molecules bounce before adsorption, or the total distance or time they travel before being adsorbed on the surfaces. These were straightforward to implement, and because JPL’s scientific interests matched those of CERN users, such additions are now available to everyone in the public versions of the code. Similar requests have come from experiments employing short-lived radioactive beams, such as those generated at CERN’s ISOLDE beamlines. Last year, against all odds during COVID-related restrictions, the JPL team managed to visit CERN. While being shown around the site and the chemistry laboratory, they gave a seminar to our vacuum group about contamination control at JPL, and we presented the outlook for Molflow developments.
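For reference, the binary STL format mentioned above is simple enough to read directly – an 80-byte header, a 32-bit triangle count, then 50 bytes per facet – which is essentially what such external post-processing amounts to. A minimal reader (our sketch, not JPL or Molflow code):

```python
import struct

def read_stl(path):
    """Return a list of facets (normal, v1, v2, v3) from a binary STL file."""
    with open(path, "rb") as f:
        f.read(80)                                    # header, ignored
        (n_tri,) = struct.unpack("<I", f.read(4))     # number of triangles
        facets = []
        for _ in range(n_tri):
            rec = struct.unpack("<12fH", f.read(50))  # normal, 3 vertices, attribute
            facets.append((rec[0:3], rec[3:6], rec[6:9], rec[9:12]))
        return facets

# e.g. facets = read_stl("chamber.stl")  # the file name is hypothetical
```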

Our latest space-related collaboration, started in 2021, concerns the European Space Agency’s LISA mission, a future gravitational-wave interferometer in space (see CERN Courier September/October 2022 p51). Molflow is being used to analyse data from the recently completed LISA Pathfinder mission, which explored the feasibility of keeping two test masses in gravitational free-fall and using them as inertial sensors by measuring their motion with extreme precision. Because the satellite’s sides have different temperatures, and because the gas sources are asymmetric around the masses, there is a difference in outgassing between the two sides. Moreover, the gas molecules that reach the test mass are slightly faster on one side than the other, resulting in a net force and torque acting on the mass, of the order of femtonewtons. When such precise inertial measurements are required, this phenomenon has to be quantified, along with other microscopic forces, such as Brownian noise resulting from the random bounces of molecules on the test mass. To this end, Molflow is currently being modified to add molecular force calculations for LISA, along with relevant physical quantities such as noise and resulting torque.

Sky’s the limit 

High-energy applications

Molflow has proven to be a versatile and effective computational physics model for the characterisation of free-molecular flow, having been adopted for use in space exploration and the aerospace sector. It promises to continue to intertwine different fields of science in unexpected ways. Thanks to the ever-growing gaming industry, which uses ray tracing to render photorealistic scenes with multiple light sources, consumer-grade graphics cards started supporting ray tracing in 2019. Although intended for gaming, they are programmable for generic purposes, including science applications. Simulating on graphics processing units is much faster than on traditional CPUs, but it is also less precise: in the vacuum world, tiny imprecisions in the geometry can result in “leaks”, where some simulated particles cross internal walls. If this issue can be overcome, the speedup potential is huge: in-house testing carried out recently at CERN by PhD candidate Pascal Bahr demonstrated a speedup factor of up to 300 on entry-level Nvidia graphics cards.

Our latest space-related collaboration concerns the European Space Agency’s LISA mission

Another planned Molflow feature is to include surface processes that change the simulation parameters dynamically. For example, some getter films gradually lose their pumping ability as they saturate with gas molecules. This saturation depends on the pumping speed itself, resulting in two parameters (pumping speed and molecular surface saturation) that depend on each other. The way around this is to perform the simulation in iterative time steps, which is straightforward to add but raises many numerical problems.
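A minimal sketch of that iterative scheme, assuming a linear saturation model and arbitrary example numbers, might look as follows:

```python
# Coupled getter saturation and pumping speed, stepped in time.
# The linear saturation model and all numbers are illustrative assumptions.
Q = 1e-5           # constant gas influx into the chamber (mbar*l/s)
capacity = 1e-3    # quantity the getter film can absorb before saturating (mbar*l)
s_fresh = 100.0    # pumping speed of the fresh film (l/s)
dt = 10.0          # time step (s)
absorbed = 0.0

for step in range(10):
    speed = s_fresh * max(0.0, 1.0 - absorbed / capacity)  # falls as the film fills
    pressure = Q / speed if speed > 0 else float("inf")    # quasi-static equilibrium
    absorbed += Q * dt                                     # gas captured in this step
    print(f"t = {step * dt:4.0f} s  speed = {speed:6.1f} l/s  p = {pressure:.2e} mbar")
```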

Finally, a much-requested feature is automation. The most recent versions of the code already allow scripting, that is, running batch jobs with physics parameters changed step by step between executions. Extending these automation capabilities, and adding export formats that allow easier post-processing with common tools (Matlab, Excel and common Python libraries), would significantly increase usability. If the additions of GPU ray tracing and iterative simulations are successful, the resulting – much faster and more versatile – Molflow code will remain an important tool for predicting and optimising the complex vacuum systems of future colliders.

CMS looks forward to new physics with PPS

PPS timing detector

Colliding particles at high energies is a tried and tested route to uncover the secrets of the universe. In a collider, charged particles are packed in bunches, accelerated and smashed into each other to create new forms of matter. Whether accelerating elementary electrons or composite hadrons, past and existing colliders all deal with matter constituents. Colliding force-carrying particles such as photons is more ambitious, but can be done, even at the Large Hadron Collider (LHC). 

The LHC, as its name implies, collides hadrons (protons or ions) into one another. In most cases of interest, projectile protons break up in the collision and a large number of energetic particles are produced. Occasionally, however, protons interact through a different mechanism, whereby they remain intact and exchange photons that fuse to create new particles (see “Photon fusion” figure). Photon–photon fusion has a unique signature: the particles originating from this kind of interaction are produced exclusively, i.e. they are the only ones in the final state along with the protons, which often do not disintegrate. Despite this clear imprint, when the LHC operates at nominal instantaneous luminosities, with a few dozen proton–proton interactions in a single bunch crossing, the exclusive fingerprint is contaminated by extra particles from different interactions. This makes the identification of photon–photon fusion challenging.

The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2

Protons that survive the collision, having lost a small fraction of their momentum, leave the interaction point still packed within the proton bunch, but gradually drift away as they travel further along the beamline. During LHC Run 2, the CMS collaboration installed a set of forward proton detectors, the Precision Proton Spectrometer (PPS), at a distance of about 200 m from the interaction point on both sides of the CMS apparatus. The PPS detectors can get as close to the beam as a few millimetres and detect protons that have lost between 2% and 15% of their initial kinetic energy (see “Precision Proton Spectrometer up close” panel). They are the CMS detectors located the farthest from the interaction point and the closest to the beam pipe, opening the door to a new physics domain, represented by central-exclusive-production processes in standard LHC running conditions.

Testing the Standard Model

Central exclusive production (CEP) processes at the LHC allow novel tests of the Standard Model (SM) and searches for new phenomena by potentially granting access to some of the rarest SM reactions so far unexplored. The identification of such exclusive processes relies on the correlation between the proton momentum loss measured by PPS and the kinematics of the central system, allowing the mass and rapidity of the central system in the interaction to be inferred very accurately (see “Tagging exclusive events” and “Exclusive identification” figures). Furthermore, the rules for exclusive photon–photon interactions only allow states with certain quantum numbers (in particular, spin and parity) to be produced. 
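The reconstruction itself is compact: if ξ₁ and ξ₂ are the fractional momentum losses of the two tagged protons, the central system has mass M = √(ξ₁ξ₂s) and rapidity y = ½ ln(ξ₁/ξ₂). A quick numerical check in Python (our illustration):

```python
import math

SQRT_S = 13000.0  # GeV, LHC Run-2 centre-of-mass energy

def central_system(xi1, xi2):
    """Mass and rapidity of the central system from the two proton momentum losses."""
    mass = SQRT_S * math.sqrt(xi1 * xi2)   # M = sqrt(xi1 * xi2 * s)
    rapidity = 0.5 * math.log(xi1 / xi2)   # y = 0.5 * ln(xi1 / xi2)
    return mass, rapidity

print(central_system(0.02, 0.02))  # (260.0, 0.0): the lower edge of PPS acceptance
print(central_system(0.10, 0.05))  # (~919 GeV, y ~ 0.35): an asymmetric event
```

The first case reproduces the 260 GeV mass threshold quoted below for protons tagged at the 2% acceptance edge.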

Precision Proton Spectrometer up close

Tracking station

PPS was born in 2014 as a joint project between the CMS and TOTEM collaborations (CERN Courier April 2017 p23), and in 2018 became a subsystem of CMS following an MoU between CERN, CMS and TOTEM. For the specialised PPS setup to work as designed, its detectors must be located within a few millimetres of the LHC proton beam. The Roman Pots technique – moveable steel “pockets” enclosing the detectors under moderate vacuum conditions with a thin wall facing the beam – is perfectly suited for this task. This technique has been successfully exploited by the TOTEM and ATLAS collaborations at the LHC and was used in the past by experiments at the ISR, the SPS, the Tevatron and HERA. The challenge for PPS is the requirement that the detectors operate continuously during standard LHC running conditions, as opposed to dedicated special runs with a very low interaction rate.

The PPS design for LHC Run 2 incorporated tracking and timing detectors on both sides of CMS. The tracking detector comprises two stations located 10 m apart, capable of reconstructing the position and angle of the incoming proton. Precise timing is needed to associate the production vertex of the two protons with the primary interaction vertex reconstructed by the CMS tracker. The first tracking stations of the proton spectrometer were equipped with silicon-strip trackers from TOTEM – a precise and reliable system used since the start of the LHC. In parallel, a suitable detector technology for efficient operation during standard LHC runs was developed, and in 2017 half of the tracking stations (one per side) were replaced by new silicon pixel trackers designed to cope with the higher hit rate. The x, y coordinates provided by the pixels resolve multiple proton tracks in the same bunch crossing, while the “3D” technology used for sensor fabrication greatly enhances resistance against radiation damage. The transition from strips was completed in 2018, when the fully pixel-based tracker was employed.

In parallel, the timing system was set up. It is based on diamond pad sensors initially developed for a new TOTEM detector. The signal collection is segmented in relatively large pads, read out individually by custom, high-speed electronics. Each plane contributes to the time measurement of the proton hit with a resolution of about 100 ps. The design of the detector evolved during Run 2 with different geometries and set-ups, improving the performance in terms of efficiency and overall time resolution.
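To see why this matters, note that the z coordinate of the common vertex follows from the protons’ arrival-time difference in the two arms, z = c(t₁ − t₂)/2, so the vertex resolution is c/2 times the combined time resolution. A rough illustration (the 1/√N combination of planes is our simplifying assumption):

```python
import math

C_MM_PER_NS = 299.792458   # speed of light in mm/ns

def vertex_z_resolution_mm(sigma_plane_ps, n_planes):
    """sigma_z = (c/2) * sigma_t, combining n planes as 1/sqrt(n) (our assumption)."""
    sigma_t_ns = sigma_plane_ps * 1e-3 / math.sqrt(n_planes)
    return 0.5 * C_MM_PER_NS * sigma_t_ns

print(f"{vertex_z_resolution_mm(100, 1):.1f} mm from a single 100 ps plane")
print(f"{vertex_z_resolution_mm(100, 4):.1f} mm from four planes combined")
```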

The most common and cleanest process in photon–photon collisions is the exclusive production of a pair of leptons. Theoretical calculations of such processes date back almost a century to the well-known Breit–Wheeler process. The first result obtained by PPS after commissioning in 2016 was the measurement of (semi-)exclusive production of e⁺e⁻ and μ⁺μ⁻ pairs using about 10 fb⁻¹ of CMS data: 20 candidate events were identified with a di-lepton mass greater than 110 GeV. This process is now used as a “standard candle” to calibrate PPS and validate its performance. The cross section of this process has been measured by the ATLAS collaboration with their forward proton spectrometer, AFP (CERN Courier September/October 2020 p15).

An interesting process to study is the exclusive production of W-boson pairs. In the SM, electroweak gauge bosons are allowed to interact with each other through point-like triple and quartic couplings. Most extensions of the SM modify the strength of these couplings. At the LHC, electroweak self-couplings are probed via gauge-boson scattering, and specifically photon–photon scattering. A notable advantage of exclusive processes is the excellent mass resolution obtained from PPS, allowing the study of self-couplings at different scales with very high precision. 

During Run 2, PPS reconstructed intact protons that lost as little as 2% of their kinetic energy, which for proton–proton collisions at 13 TeV translates to sensitivity to central mass values above 260 GeV. In the production of electroweak boson pairs, WW or ZZ, the quartic self-coupling mainly contributes to the high invariant-mass tail of the di-boson system. The analysis searched for anomalously large values of the quartic gauge couplings, and the results provide the first constraint on γγZZ in an exclusive channel and a competitive constraint on γγWW compared to other vector-boson-scattering searches.

Final states produced via photon–photon fusion

Many SM processes proceeding via photon fusion have a relatively low cross section. For example, the predicted cross section for CEP of top quark–antiquark pairs is of the order of 0.1 fb. A search for this process was performed early this year using about 30 fb⁻¹ of CMS data recorded in 2017, with protons tagged by PPS. While the sensitivity of the analysis is not sufficient to test the SM prediction, it can probe possible enhancements due to additional contributions from new physics. Also, the analysis established tools with which to search for exclusive production processes in a multi-jet environment using machine-learning techniques.

Uncharted domains 

The SM provides very accurate predictions for processes occurring at the LHC. Yet, it cannot explain the origin of several observations such as the existence of dark matter, the matter–antimatter asymmetry in the universe and neutrino masses. So far, the LHC experiments have been unable to provide answers to those questions, but the search is ongoing. Since physics with PPS mostly targets photon collisions, the only assumption is that the new physics is coupled to the electroweak sector, opening a plethora of opportunities for new searches. 

Tagging exclusive events

Photon–photon scattering has already been observed in heavy-ion collisions by the LHC experiments, for example by ATLAS (CERN Courier December 2016 p9). But new physics would be expected to enter at higher di-photon masses, which is where PPS comes into play. Recently, a search for di-photon exclusive events was performed using about 100 fb⁻¹ of CMS data at di-photon masses greater than 350 GeV, where SM contributions are negligible. In the absence of an unexpected signal, a new best limit was set on anomalous four-photon coupling parameters. In addition, a limit on the coupling of axion-like particles to photons was set in the mass region 500–2000 GeV. These are the most restrictive limits to date.

A new, interesting possibility to look for unknown particles is represented by the “missing mass” technique. The exclusivity of CEP makes it possible, in two-particle final states, to infer the four-momentum of one particle if the other is measured. This is done by exploiting the fact that, if the protons are measured and the beam energy is known, the kinematics of the centrally produced final state can be determined: no direct measurements of the second particle are required, allowing us to “see the unseen”. This technique was demonstrated for the first time at the LHC this year, using around 40 and 2 fb⁻¹ of Run 2 data in a search for pp → pZXp and pp → pγXp, respectively, where X represents a neutral, integer-spin particle with an unspecified decay mode. In the absence of an observed signal, the analysis sets the first upper limits for the production of an unspecified particle in the mass range 600–1600 GeV.
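In four-vector form the technique is simply energy–momentum conservation: the unseen particle’s four-momentum is whatever remains after subtracting the measured protons and Z from the initial beams. A toy example with simplified, purely longitudinal proton kinematics (all numbers illustrative):

```python
import math

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def mass(p):
    """Invariant mass for a four-vector (E, px, py, pz), metric (+,-,-,-)."""
    E, px, py, pz = p
    return math.sqrt(E*E - px*px - py*py - pz*pz)

E_BEAM = 6500.0                               # GeV, Run-2 beam energy
beam1 = (E_BEAM, 0.0, 0.0,  E_BEAM)           # massless-proton approximation
beam2 = (E_BEAM, 0.0, 0.0, -E_BEAM)

# Toy pp -> pZXp event: both protons lose 5% of their momentum, and the
# Z is "measured" with an illustrative four-momentum (mass(pZ) ~ 91 GeV).
p1 = (0.95 * E_BEAM, 0.0, 0.0,  0.95 * E_BEAM)
p2 = (0.95 * E_BEAM, 0.0, 0.0, -0.95 * E_BEAM)
pZ = (139.1, 0.0, 0.0, 105.0)

pX = sub(sub(sub(add(beam1, beam2), p1), p2), pZ)
print(f"Missing mass: {mass(pX):.0f} GeV")    # ~500 GeV, without ever seeing X
```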

Looking forward with PPS

di-photon exclusive production

For LHC Run 3, which began in earnest on 5 July, the PPS team has implemented several upgrades to maximise the physics output from the expected increase in integrated luminosity. The mechanics and readout electronics of the pixel tracker have been redesigned to allow remote shifting of the sensors in several small steps, which better distributes the radiation damage caused by the highly non-uniform irradiation. All timing stations are now equipped with “double diamond” sensors, and from 2023 an additional, second station will be added to each PPS arm. This will improve the resolution of the measured arrival time of protons, which is crucial for reconstructing the z coordinate of a possible common vertex, by at least a factor of two. Finally, a new software trigger has been developed that requires the presence of tagged protons in both PPS arms, thus allowing the use of lower energy thresholds for the selection of events with two particle jets in CMS.

The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2, despite only a doubling of the integrated luminosity. This significant increase is due to the upgrade of the detectors, especially of the timing stations, thus placing PPS in the spotlight of the Run 3 research programme. Timing detectors also play a crucial role in the planning for the high-luminosity LHC (HL-LHC) phase. The CMS collaboration has released an expression of interest to pursue studies of CEP at the HL-LHC with the ambitious plan of installing near-beam proton spectrometers at 196, 220, 234, and 420 m from the interaction point. This would extend the accessible mass range to the region between 50 GeV and 2.7 TeV. The main challenge here is to mitigate high “pileup” effects using the timing information, for which new detector technologies, including synergies with the future CMS timing detectors, are being considered.

PPS significantly extends the LHC physics programme, and is a tribute to the ingenuity of the CMS collaboration in the ongoing search for new physics.

Exotic hadrons brought into order by LHCb

LHCb’s latest tetraquarks

With so many new hadronic states being discovered at the LHC (67 and counting, with the vast majority seen by LHCb), it can be difficult to keep track of what’s what. While most are variations of known mesons and baryons, LHCb is uncovering an increasing number of exotic hadrons, namely tetraquarks and pentaquarks. A case in point is its recent discovery, announced at CERN on 5 July, of a new strange pentaquark (with quark content cc̄uds) and a new tetraquark pair: one constituting the first doubly charged open-charm tetraquark (cs̄ud̄) and the other a neutral isospin partner (cs̄dū). The situation has prompted the LHCb collaboration to introduce a new naming scheme. “We’re creating ‘particle zoo 2.0’,” says Niels Tuning, LHCb physics coordinator. “We’re witnessing a period of discovery similar to the 1950s, when a ‘zoo’ of hadrons ultimately led to the quark model of conventional hadrons in the 1960s.”

While the quark model allows the existence of multiquark states beyond two- and three-quark mesons and baryons, the traditional naming scheme for hadrons doesn’t make much allowance for what these particles should be called. When the first tetraquark candidate was discovered at the Belle experiment in 2003, it was denoted by “X” because it didn’t seem to be a conventional charmonium state. Shortly afterwards, a similarly mysterious but different state turned up at BaBar and was denoted “Y”. Subsequent exotic states seen at Belle and BESIII were dubbed “Z”, and more recently tetraquarks discovered at LHCb were labelled “T”. 

Complicating matters further, the subscripts added to differentiate between the various states lack consistency. For example, the first known tetraquark states contained both charm and anticharm quarks, so a subscript “c” was added. But the recent discoveries of tetraquarks and pentaquarks containing a single strange quark require an extra subscript “s”. On top of all of that, explains LHCb’s Tim Gershon, who initiated the new naming scheme, tetraquarks discovered by LHCb in 2020 contain a single charm quark. “We couldn’t assign the subscript ‘c’ because we’ve always used that to denote states containing charm and anticharm, so we didn’t know what symbols to use,” he explains. “Things were starting to become a bit confusing, so we thought it was time to bring some kind of logic to the naming scheme. We have done this over an extended period, not only within LHCb but also involving other experiments and theorists in this field.”  

Helpfully, the new proposal labels all tetraquarks “T” and all pentaquarks “P”, with a set of rules regarding the necessary subscripts and superscripts. In this scheme, the two different spin states of the open-charm tetraquarks discovered by LHCb in 2020 become Tcs0(2900)⁰ and Tcs1(2900)⁰ instead of X0(2900)⁰ and X1(2900)⁰, for example, while the latest pentaquark is denoted PΛψs(4338)⁰. The collaboration hopes that the new scheme, which can be extended to six- or seven-quark hadrons, will make it easier for experts to communicate while also helping newcomers to the field.

The new scheme could make it easier to spot patterns that might have been missed before

Importantly, it could make it easier to spot patterns that might have been missed before, perhaps shedding light on the central question of whether exotic hadrons are compact tightly bound multi-quark states or more loosely bound molecular-like states. The new LHCb scheme might even help researchers predict new exotic hadrons, just as the multiplets arising from the quark model made it possible to predict new mesons and baryons such as the Ω. 

“Before this new scheme it was almost like a Tower of Babel situation where it was difficult to communicate,” says Gershon. “We have created a document that people can use as a kind of dictionary, in the hope that it will help the field to progress more rapidly.” 

CERN and Canon demonstrate efficient klystron

E37117 klystron

The radio-frequency (RF) cavities that accelerate charged particles in machines like the LHC are powered by devices called klystrons. These electro-vacuum tubes, which amplify RF signals by converting an initial velocity modulation of a stream of electrons into an intensity modulation, produce RF power in a wide frequency range (from several hundred MHz to tens of GHz) and can be used in pulsed or continuous-wave mode to deliver RF power from hundreds of kW to hundreds of MW. The close connection between klystron performance and the power consumption of an accelerator has driven researchers at CERN to develop more efficient devices for current and future colliders. 

The efficiency of a klystron is the ratio between the RF power generated and the electrical power delivered from the grid. Experience with many thousands of such devices during the past seven decades has established that at low frequency and moderate RF power levels (as required by the LHC), klystrons can deliver an efficiency of 60–65%. For pulsed, high-frequency and high peak-RF-power devices, efficiencies are about 40–45%. The efficiency of RF power production is a key element of an accelerator’s overall efficiency. Taking the proposed future electron–positron collider FCC-ee as an example: by increasing klystron efficiency from 65 to 80%, the electrical energy savings over a 10-year period could be as much as 1 TWh. In addition, reduced demand on electrical power storage capacity, cooling and ventilation may further reduce the original investment cost.
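The scale of that saving is easy to check. With assumed round numbers for the installed RF power and running time (ours, not FCC-ee design values):

```python
P_RF_MW = 100.0           # assumed installed RF power (illustrative, not a design value)
HOURS_PER_YEAR = 5000.0   # assumed physics-operation hours per year
YEARS = 10

def grid_energy_twh(efficiency):
    """Electrical energy drawn from the grid to produce the RF power, in TWh."""
    return P_RF_MW / efficiency * HOURS_PER_YEAR * YEARS / 1e6  # MWh -> TWh

saving = grid_energy_twh(0.65) - grid_energy_twh(0.80)
print(f"Energy saved over {YEARS} years: {saving:.1f} TWh")     # ~1.4 TWh
```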

In 2013 the development of high-efficiency klystrons started at CERN within the Compact Linear Collider study as a means to reduce the global energy consumed by the proposed collider. Thanks to strong support by management, this evolved into a project inside the CERN RF group. A small team of five people at CERN and Lancaster University, led by Igor Syratchev, developed accurate computer tools for klystron simulations and in-depth analysis of the beam dynamics, and used them to evaluate effects that limit klystron efficiency. Finally, the team proposed novel technological solutions (including new bunching methods and higher order harmonic cavities) that can improve klystron efficiency by 10–30% compared to commercial analogues. These new technologies were applied to develop new high-efficiency klystrons for use in the high-luminosity LHC (HL-LHC), FCC-ee and the CERN X-band high-gradient facilities, as well as in medical and industrial accelerators. Some of the new tube designs are now undergoing prototyping in close collaboration between CERN and industry.

The first commercial prototype of a high-efficiency 8 MW X-band klystron developed at CERN was built and tested by Canon Electron Tubes and Devices in July this year. Delivering the expected power level with an efficiency of 53.3%, measured at the company’s factory in Japan, it is the first demonstration of the technological solution developed at CERN, showing an efficiency increase of more than 10% compared to commercially available devices. In terms of RF power production, this translates to an overall increase of 25% using the same wall-plug power as the model currently working at CERN’s X-band facility. Later this year the klystron will arrive at CERN and replace Canon’s conventional 6 MW tube. The next project in progress aims to fabricate a high-efficiency version of the LHC klystron, which, if successful, could be used in the HL-LHC.

“These results give us confidence for the coming high-efficiency version of the LHC klystrons and for the development of FCC-ee,” says RF group leader Frank Gerigk. “It is also an excellent demonstration of the powerful collaboration between CERN and industry.” 

Exploring a laser-hybrid accelerator for radiotherapy

LhARA

A multidisciplinary team in the UK has received seed funding to investigate the feasibility of a new facility for ion-therapy research based on novel accelerator, instrumentation and computing technologies. At the core of the facility would be a laser-hybrid accelerator dubbed LhARA: a high-power pulsed laser striking a thin foil target would create a large flux of protons or ions, which are captured using strong-focusing electron–plasma lenses and then accelerated rapidly in a fixed-field alternating-gradient accelerator. Such a device, says the team, offers enormous clinical potential by providing more flexible, compact and cost-effective multi-ion sources.

High-energy X-rays are by far the most common radiotherapy tool, but recent decades have seen a growth in particle-beam radiotherapy. In contrast to X-rays, protons and ion beams can be manipulated to deliver radiation doses more precisely than conventional radiotherapy, sparing surrounding healthy tissue. Unfortunately, the number of ion treatment facilities is few because they require large synchrotrons to accelerate the ions. The Proton-Ion Medical Machine Study undertaken at CERN during the late 1990s, for example, underpinned the CNAO (Italy) and MedAustron (Austria) treatment centres that helped propel Europe to the forefront of the field – work that is now being continued by CERN’s Next Ion Medical Machine Study (CERN Courier July/August 2021 p23).

“LhARA will greatly accelerate our understanding of how protons and ions interact and are effective in killing cancer cells, while simultaneously giving us experience in running a novel beam,” says LhARA biological science programme manager Jason Parsons of the University of Liverpool. “Together, the technology and the science will help us make a big step forward in optimising radiotherapy treatments for cancer patients.” 

A small number of laboratories in Europe already work on laser-driven sources for biomedical applications. The LhARA collaboration, which comprises physicists, biologists, clinicians and engineers, aims to build on this work to demonstrate the feasibility of capturing and manipulating the flux created in the laser-target interaction to provide a beam that can be accelerated rapidly to the desired energy. The laser-driven source offers the opportunity to capture intense, nanosecond-long pulses of protons and ions at an energy of 15 MeV, says the team. This is two orders of magnitude greater than in conventional sources, allowing the space-charge limit on the instantaneous dose to be evaded. 

In July, UK Research and Innovation granted £2 million over the next two years to deliver a conceptual design report for an Ion Therapy Research Facility (ITRF) centred around LhARA. The first goal is to demonstrate the feasibility of the laser-hybrid approach in a facility dedicated to biological research, after which the team will work with national and international partnerships to develop the clinical technique. While the programme carries significant technical risk, says LhARA co-spokesperson Kenneth Long from Imperial College London/STFC, it is justified by the high level of potential reward: “The multidisciplinary approach of the LhARA collaboration will place the ITRF at the forefront of the field, partnering with industry to pave the way for significantly enhanced access to state-of-the-art particle-beam therapy.”

Webb opens new era in observational astrophysics

JWST

The keenly awaited first science-grade images from the James Webb Space Telescope were released on 12 July – and they did not disappoint. Thanks to Webb’s unprecedented 6.5 m mirror, together with its four main instruments (NIRCam, NIRSpec, NIRISS and MIRI), the $10 billion observatory marks a new dawn for observational astrophysics.

The past six months since Webb’s launch from French Guiana have been devoted to commissioning, including alignment and calibration of the mirrors and bringing temperatures down to cryogenic levels to minimise noise from heat radiated by the equipment (CERN Courier March/April 2022 p7). Unlike the Hubble Space Telescope, Webb does not look at ultraviolet or visible light but is primarily sensitive to near- and mid-infrared wavelengths. This enables it to look at the farthest galaxies and stars, as early as a few hundred million years after the Big Bang.

Wealth of information

Pictured here are some of Webb’s early-release images. The first deep-field image (top) covers the same area of the sky as a grain of sand held at arm’s length, and is swarming with galaxies. At the centre is a cluster called SMACS 0723, whose combined mass is so high that its gravitational field bends the light of objects that lie behind it (resulting in arc-like features), revealing galaxies that existed when the universe was less than a billion years old. The image was taken using NIRCam and is a combination of exposures at different wavelengths. The spectrographs, NIRSpec and NIRISS, will provide a wealth of information on the composition of stars, galaxies and their clusters, offering a rare peek into the earliest stages of their formation and evolution.

Stephan’s Quintet (bottom left) is a visual grouping of five galaxies that was first discovered in 1877 and remains one of the most studied compact galaxy groups. The actual grouping involves only four galaxies, which are predicted to eventually merge. The non-member, NGC 7320, seen on the left, lies about 40 million light years from Earth rather than the roughly 290 million of the true group, and shows vast regions of active star formation in its numerous spiral arms.

A third stunning image, of the Southern Ring nebula (bottom right), shows a dying star. With its reservoirs of light elements exhausted, the star uses up any available heavier elements to sustain itself – a complex and violent process that ejects large amounts of material at intervals, visible as shells.

These images are just a taste, yet not all Webb data will be so visually spectacular. By extending Hubble’s observations of distant supernovae and other standard candles, for example, the telescope should enable the local rate of expansion to be determined more precisely, possibly shedding light on the nature of dark energy. By measuring the motion and gravitational lensing of early objects, it will also survey the distribution of dark matter, and might even hint at what it’s made of. Using transmission spectroscopy, Webb will also reveal exoplanets in unprecedented detail, learn about their chemical compositions and search for signatures of habitability. 

First light beckons at SLAC’s LCLS-II

The LCLS undulator hall

An ambitious upgrade of the US’s flagship X-ray free-electron-laser facility – the Linac Coherent Light Source (LCLS) at SLAC in California – is nearing completion. Set for “first light” early next year, LCLS-II will deliver X-ray laser beams that are 10,000 times brighter than LCLS at repetition rates of up to a million pulses per second – generating more X-ray pulses in just a few hours than the current laser has delivered through the course of its 12-year operational lifetime. The cutting-edge physics of the new facility – underpinned by a cryogenically cooled superconducting radio-frequency (SRF) linac – will enable the two beams from LCLS and LCLS-II to work in tandem. This, in turn, will help researchers observe rare events that happen during chemical reactions and study delicate biological molecules at the atomic scale in their natural environments, as well as potentially shed light on exotic quantum phenomena with applications in next-generation quantum computing and communications systems. 

Successful delivery of the LCLS-II linac was possible thanks to a multi-centre collaborative effort involving US national and university laboratories, stretching from the 2014 decision to pursue an SRF-based machine through the design, assembly, testing, transportation and installation of a string of 37 SRF cryomodules (most of them more than 12 m long) in the SLAC tunnel. All told, this major undertaking necessitated the construction of forty 1.3 GHz SRF cryomodules (five of them spares) and three 3.9 GHz cryomodules (one spare) – with delivery of approximately one cryomodule per month from February 2019 until December 2020, allowing completion of the LCLS-II linac installation on schedule by November 2021.

This industrial-scale programme of works was shaped by a strategic commitment, early on in the LCLS-II design phase, to transfer, and ultimately iterate, the established SRF capabilities of the European XFEL in Hamburg into the core technology platform used for the LCLS-II SRF cryomodules. Put simply: it would not have been possible to complete the LCLS-II project, within cost and on schedule, without the sustained cooperation of the European XFEL consortium – in particular, colleagues at DESY, CEA Saclay and several other European laboratories as well as KEK – that generously shared their experiences and know-how. 

Better together 

These days, large-scale accelerator or detector projects are very much a collective endeavour. Not only is the sprawling scope of such projects beyond a single organisation, but the risks of overspend and slippage can greatly increase with a “do-it-on-your-own” strategy. When the LCLS-II project opted for an SRF technology pathway in 2014 to maximise laser performance, the logical next step was to build a broad-based coalition with other US Department of Energy (DOE) national laboratories and universities. In this case, SLAC, Fermilab, Jefferson Lab (JLab) and Cornell University contributed expertise for cryomodule production, while Argonne National Laboratory and Lawrence Berkeley National Laboratory managed delivery of the undulators and photoinjector for the project. For sure, the start-up time for LCLS-II would have increased significantly without this joint effort, extending the overall project by several years.

Superconducting accelerator

Each partner brought something unique to the LCLS-II collaboration. While SLAC was still a relative newcomer to SRF technologies, the lab had a management team that was familiar with building large-scale accelerators (following successful delivery of the LCLS). The priority for SLAC was therefore to scale up its small nucleus of SRF experts by recruiting experienced SRF technologists and engineers to the staff team. In contrast, the JLab team brought an established track-record in the production of SRF cryomodules, having built its own machine, the Continuous Electron Beam Accelerator Facility (CEBAF), as well as cryomodules for the Spallation Neutron Source (SNS) linac at Oak Ridge National Laboratory in Tennessee. Cornell, too, came with a rich history in SRF R&D – capabilities that, in turn, helped to solidify the SRF cavity preparation process for LCLS-II. 

Finally, Fermilab had, at the time, recently built two cutting-edge cryomodules of the same style as that chosen for LCLS-II. To fabricate these modules, Fermilab worked closely with the team at DESY to set up the same type of production infrastructure used on the European XFEL. From that perspective, the required tooling and fixtures were all ready to go for the LCLS-II project. While Fermilab was the “designer of record” for the SRF cryomodule, with primary responsibility for delivering a working design to meet LCLS-II requirements, the realisation of an optimised technology platform was a team effort involving SRF experts from across the collaboration.

Collective problems, collective solutions 

While the European XFEL provided the template for the LCLS-II SRF cryomodule design, several key elements of the LCLS-II approach subsequently evolved to align with the continuous-wave (CW) operation requirements and the specifics of the SLAC tunnel. Success in tackling these technical challenges – across design, assembly, testing and transportation of the cryomodules – is testament to the strength of the LCLS-II collaboration and the collective efforts of the participating teams in the US and Europe.

Challenges are inevitable when developing new facilities at the limits of known technology

For one, the thermal performance specification of the SRF cavities exceeded the state-of-the-art and required development and industrialisation of the concept of nitrogen doping (a process in which SRF cavities are heat-treated in a nitrogen atmosphere to increase their cryogenic efficiency and, in turn, lower the overall operating costs of the linac). The nitrogen-doping technique was invented at Fermilab in 2012 but, prior to LCLS-II construction, had been used only in an R&D setting.

The priority was clear: to transfer the nitrogen-doping capability to LCLS-II’s industry partners, so that the cavity manufacturers could perform the necessary materials-processing before final helium-vessel jacketing. During this knowledge transfer, it was found that nitrogen-doped cavities are particularly sensitive to the base niobium sheet material – something the collaboration only realised once the cavity vendors were into full production. This resulted in a number of changes to the heat-treatment temperature, depending on which material supplier was used and the specific properties of the niobium sheet deployed in different production runs. JLab, for its part, held the contract for the cavities and pulled out all the stops to ensure success.

SRF cryomodules

At the same time, the conversion from pulsed to CW operation necessitated a faster cooldown cycle for the SRF cavities, requiring several changes to the internal piping, a larger exhaust chimney on the helium vessel, as well as the addition of two new cryogenic valves per cryomodule. Also significant is the 0.5% slope in the longitudinal floor of the existing SLAC tunnel, which dictated careful attention to liquid-helium management in the cryomodules (with a separate two-phase line and liquid-level probes at both ends of every module). 

However, the biggest setback during LCLS-II construction involved the loss of beamline vacuum during cryomodule transport. Specifically, two cryomodules had their beamlines vented and required complete disassembly and rebuilding – resulting in a five-month moratorium on shipping completed cryomodules in the second half of 2019. It turned out that a small, seemingly inconsequential change in a coupler flange made the cold coupler assembly susceptible to resonances excited by transport. The result was a bellows tear that vented the beamline. Unfortunately, initial “road tests” with a similar, though not exactly identical, prototype cryomodule had not revealed this behaviour.

Such challenges are inevitable when developing new facilities at the limits of known technology. In the end, the problem was successfully addressed using the diverse talents of the collaboration to brainstorm solutions, with the available access ports allowing an elastomer wedge to be inserted to secure the vulnerable section. A key take-away here is the need for future projects to perform thorough transport analysis, verify the transport loads using mock-ups or dummy devices, and install adequate instrumentation to ensure granular data analysis before long-distance transport of mission-critical components. 

The last cryomodule from Fermilab

Upon completion of the assembly phase, all LCLS-II cryomodules were subsequently tested at either Fermilab or JLab, with one module tested at both locations to ensure reproducibility and consistency of results. For high Q₀ performance in nitrogen-doped cavities, cooldown flow rates of at least 30 g/s of liquid helium were found to give the best results, helping to expel magnetic flux that could otherwise be trapped in the cavity. Overall, cryomodule performance on the test stands exceeded specifications, with a total accelerating voltage per cryomodule of 158 MV (versus a specification of 128 MV) and an average Q₀ of 3 × 10¹⁰ (versus a specification of 2.7 × 10¹⁰). Looking ahead, attention is already shifting to the real-world cryomodule performance in the SLAC tunnel – something that was measured for the first time in 2022.

Transferable lessons

For all members of the collaboration working on the LCLS-II cryomodules, this challenging project holds many lessons. Most important is to build a strong team and use that strength to address problems in real-time as they arise. The mantra “we are all in this together” should be front-and-centre for any multi-institutional scientific endeavour – as it was in this case. Solutions need to be thought of in a more global sense, as the best answer might mean another collaborator taking more onto their plate. Collaboration implies true partnership and a working model very different to a transactional customer–vendor relationship.

From a planning perspective, it’s vital to ensure that the initial project cost and schedule are consistent with the technical challenges and preparedness of the infrastructure. Prototypes and pre-series production runs reduce risk and cost in the long term and should be part of the plan, but there must be sufficient time for data analysis and changes to be made after a prototype run in order for it to be useful. Time spent on detailed technical reviews is also time well spent. New designs of complex components need comprehensive oversight and review, and should be controlled by a team, rather than a single individual, so that sign-off on any detailed design change is made by an informed collective.

LCLS-II science: capturing atoms and molecules in motion like never before

LCLS-II science

The strobe-like pulses of the LCLS, which produced its first light in April 2009, are just a few millionths of a billionth of a second long, and a billion times brighter than previous X-ray sources. This enables users from a wide range of fields to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials and explore fundamental processes in living things. LCLS-II will provide a major jump in capability – moving from 120 pulses per second to 1 million, enabling experiments that were previously impossible. The scientific community has identified six areas where the unique capabilities of LCLS-II will be essential for further scientific progress:

Nanoscale materials dynamics, heterogeneity and fluctuations 

Programmable trains of soft X-ray pulses at high rep rate will characterise spontaneous fluctuations and heterogeneities at the nanoscale across many decades, while coherent hard X-ray scattering will provide unprecedented spatial resolution of material structure, its evolution and relationship to functionality under operating conditions.

Fundamental energy and charge dynamics

High-repetition-rate soft X-rays will enable new techniques that will directly map charge distributions and reaction dynamics at the scale of molecules, while new nonlinear X-ray spectroscopies offer the potential to map quantum coherences in an element-specific way for the first time.

Catalysis and photocatalysis

Time-resolved, high-sensitivity, element-specific spectroscopy will provide the first direct view of charge dynamics and chemical processes at interfaces, characterise subtle conformational changes associated with charge accumulation, and capture rare chemical events in operating catalytic systems across multiple time and length scales – all of which are essential for designing new, more efficient systems for chemical transformation and solar-energy conversion.

Emergent phenomena in quantum materials

Fully coherent X-rays will enable new high-resolution spectroscopy techniques to map the collective excitations that define these new materials in unprecedented detail. Ultrashort X-ray pulses and optical fields will facilitate new methods for manipulating charge, spin and phonon modes to both advance fundamental understanding and point the way to new approaches for materials control.

Revealing biological function in real time

The high repetition rate of LCLS-II will provide a unique capability to follow the dynamics of macromolecules and interacting complexes in real time and in native environments. Advanced solution-scattering and coherent imaging techniques will characterise the conformational dynamics of heterogeneous ensembles of macromolecules, while the ability to generate “two-colour” hard X-ray pulses will resolve atomic-scale structural dynamics of biochemical processes that are often the first step leading to larger-scale protein motions.

Matter in extreme environments

The capability of LCLS-II to generate soft and hard X-ray pulses simultaneously will enable the creation and observation of extreme conditions that are far beyond our present reach, with the hard X-rays allowing the characterisation of unknown structural phases. Unprecedented spatial and temporal resolution will enable direct comparison with theoretical models relevant for inertial-confinement fusion and planetary science.

Work planning and control is another essential element for success and safety. This idea needs to be built into the “manufacturing system”, including into the cost and schedule, and to be part of each individual’s daily checklist. No one disagrees with this concept, but good intentions on their own will not suffice. Required safety documentation should therefore be clear and unambiguous, and be reviewed by people with relevant expertise. Production data and documentation need to be collected, made easily available to the entire project team, and analysed regularly for trends, both positive and negative.

Supply chain, of course, is critical in any production environment – and LCLS-II is no exception. When possible, it is best to have parts procured, inspected, accepted and on-the-shelf before production begins, thereby eliminating possible workflow delays. Pre-stocking also allows adequate time to recycle and replace parts that do not meet project specifications. Also worth noting is that it’s often the smaller components – such as bellows, feedthroughs and copper-plated elements – that drive workflow slowdowns. A key insight from LCLS-II is to place purchase orders early, stay on top of vendor deliveries, and perform parts inspections as soon as possible post-delivery. Projects also benefit from having clearly articulated pass/fail criteria and established procedures for handling non-conformance – all of which alleviates the need to make critical go/no-go acceptance decisions in the face of schedule pressures.

As with many accelerator projects, LCLS-II is not an end-point in itself, more an evolutionary transition within a longer-term roadmap

Finally, it’s worth highlighting the broader impact – both personal and professional – on individual team members participating in a big-science collaboration like LCLS-II. At the end of the build, what remained after designs were completed, problems solved, production rates met, and cryomodules delivered and installed, were the friendships that had been nurtured over several years. The collaboration amongst partners, both formal and informal, who truly cared about the project’s success and had each other’s backs when issues arose: these are the things that solidified the mutual respect and camaraderie and, in the end, made LCLS-II such a rewarding project.

First light

In April 2022 the new LCLS-II linac was successfully cooled to its 2 K operating temperature. The next step was to pump the SRF cavities with more than a megawatt of microwave power to accelerate the electron beam from the new source. Following further commissioning of the machine, first X-rays are expected to be produced in early 2023. 

As with many accelerator projects, LCLS-II is not an end-point in itself, more an evolutionary transition within a longer-term roadmap. In fact, work is already under way on LCLS-II-HE – a project that will increase the energy of the CW SRF linac from 4 to 8 GeV, enabling the photon energy range to be extended to at least 13 keV, and potentially up to 20 keV at 1 MHz repetition rates. To ensure continuity of production for LCLS-II-HE, 25 next-generation cryomodules are in the works, with even higher performance specifications than their LCLS-II counterparts, while upgrades to the source and beam transport are also being finalised.

While the fascinating science opportunities for LCLS-II-HE continue to be refined and expanded, of one thing we can be certain: strong collaboration and the collective efforts of the participating teams are crucial. 

Science for peace? More than ever!

What happened? A tragedy has befallen Ukraine, leaving many in despair or facing a dilemma. After 70 mainly peaceful years for much of Europe, we were surprised by war, because we had forgotten that it takes an effort to maintain peace.

Having witnessed the horrors of war first hand, first for several years as a soldier and then as a displaced person, I could not imagine that humanity would unleash another war on the continent. As one of the last witnesses of that time, I wonder what advice should be passed on, especially to younger colleagues: what to do in the short term and, perhaps more importantly, what to do afterwards.

Scientists have a special responsibility. Fortunately, there is no doubt today that science is independent of political doctrines. There is no “German physics” any more. We have established human relationships with our colleagues based on our enthusiasm for our profession, which has led to mutual trust and tolerance.

This has been practised at CERN for 70 years and continued at SESAME, where delegates from Israel, Palestine, Iran, Cyprus, Turkey and other governments sit peacefully around a table. Another offshoot of CERN, the South East European International Institute for Sustainable Technologies (SEEIIST), is in the making in the Balkans. Apart from fostering science, the aim is to transfer ethical achievements from science to politics: science diplomacy, as it has come to be known. In practice, this is done, for example, in the CERN Council where each government sends a representative and an additional scientist who work effectively together on a daily basis.

Herwig Schopper

In the case of imminent political conflicts, “Science for Peace” cannot of course help immediately, but occasionally opportunities arise even for this. In 1985, when disarmament negotiations between Gorbachev and Reagan in Geneva reached an impasse, one of the negotiators asked me to invite the key experts to CERN on neutral territory, and at a confidential dinner the knot was untied. This showed how trust built up in scientific cooperation can impact politics.

Hot crises put us in particularly difficult dilemmas. It is therefore understandable that the CERN Council has to follow, to a large extent, the guidelines of the individual governments and sometimes introduce harsh sanctions. This leads to considerable damage for many excellent projects, which should be mitigated as much as possible. But it seems equally important to prevent or at least alleviate human suffering among scientific colleagues and their families, and in doing so we should allow them tolerance and full freedom of expression. I am sure the CERN management will try to achieve this, as in the past.

Day after

But what I consider most important is to prepare for the situation after the war. Somehow and sometime there will be a solution to the Russian invasion. On that “day after”, it will be necessary to talk to each other again and build a new world out of the ruins. This was facilitated after World War II because, despite the Nazi reign of terror, some far-sighted scientists maintained human relations as well as scientific ones. I remember with pleasure how I was invited to spend a sabbatical year in 1948 in Sweden with Lise Meitner. I was also one of the first German citizens to be invited to a scientific conference in Israel in 1957, where I was received without resentment. 

CERN was the first scientific organisation whose mission was not only to conduct excellent science, but also to help improve relations between nations. CERN did this initially in Europe with great success. Later, during the most intense period of the Cold War, it was CERN that signed an agreement in the 1960s with the Soviet laboratory at Serpukhov. Together with contacts with JINR in Dubna, this offered one of the few opportunities for scientific West–East cooperation. CERN followed these principles during the occupation of the Czechoslovak Socialist Republic in 1968 and during the Afghanistan crisis in 1979.

The aim is to transfer ethical achievements from science to politics

CERN has become a symbol of what can be achieved when working on a common project without discrimination, for the benefit of science and humanity. In recent decades, when peace has reigned in Europe, this second goal of CERN has somewhat receded into the background. The present crisis reminds us to make greater efforts in this direction again, even more so when many powers disregard ethical principles or formal treaties by pretending that their fundamental interests are violated. Science for Peace tries to help create a minimum of human trust between governments. Without this, we run the risk that future political treaties will be based only on deterrence. That would be a gloomy world.

A vision for the day after requires courage and more Science for Peace than ever before. 

Counting down to LISA

Stefano Vitale

What is LISA? 

LISA (Laser Interferometer Space Antenna) is a giant Michelson interferometer comprising three spacecraft that form an equilateral triangle with sides of about 2.5 million km. You can think of one satellite as the central building of a terrestrial observatory like Virgo or LIGO, and the other two as the end stations of the two interferometer arms. The mirrors at the two ends of each arm are replaced by a pair of free-falling test masses, whose relative distance is measured by a laser interferometer. When a gravitational wave (GW) passes, it alternately stretches one arm and squeezes the other, causing these distances to oscillate by an almost imperceptible amount (just a few nm). The nature and position of the GW sources are encoded in the time evolution of this distortion. Unlike terrestrial observatories, which keep their arms locked in a fixed position, LISA must keep track of the satellite positions by counting the millions of wavelengths by which their separation changes each second. All interferometer signals are combined on the ground and a sophisticated analysis is used to determine the differential distance changes between the test masses.
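
To put these numbers in perspective, they can be folded into the dimensionless strain that GW detectors usually quote. Here is a minimal Python sketch using only the figures given above; the 3 nm value stands in for “a few nm” and is illustrative:

# Illustrative strain estimate from the figures quoted above.
L = 2.5e9             # arm length in metres (2.5 million km)
dL = 3e-9             # arm-length oscillation, "a few nm", in metres

# A passing GW of strain h changes an arm of length L by roughly h * L,
# so the strain corresponding to the quoted displacement is:
h = dL / L
print(f"strain h ~ {h:.0e}")    # of order 1e-18

A fractional length change of about one part in a billion billion is why the interferometry must track millions of laser wavelengths so precisely.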

What will LISA tell us that ground-based observatories can’t?

Most GW sources, such as the merger of two black holes detected for the first time by LIGO and Virgo in 2015, consist of binary systems: as the two compact companions spiral into each other, they generate GWs. In these extreme binary mergers, the frequency of the GWs decreases both with the increasing mass of the objects and the further the system is from its final merger. GWs with frequencies down to about a few Hz, corresponding to objects with masses up to a few thousand solar masses, are detectable from the ground. Below that, however, Earth’s gravity is too noisy. To access milli-hertz and sub-milli-hertz frequencies we need to go to space. This low-frequency regime is the realm of supermassive objects with millions of solar masses located in galactic centres, and also where tens of thousands of compact objects in our galaxy, including some of the Virgo/LIGO black holes, emit their signals for years and centuries as they peacefully orbit each other before entering the final few seconds of their coalescence. The LISA mission will therefore be highly complementary to existing and future ground-based observatories such as the Einstein Telescope. Theorists are excited about the physics that can be probed by multiband GW astronomy.
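
The inverse mass-frequency scaling can be made concrete with the textbook innermost-stable-circular-orbit estimate, f ~ c^3 / (6^(3/2) pi G M). A back-of-envelope Python sketch, with approximate constants and purely order-of-magnitude output:

import math

# Rough GW frequency near merger for a binary of total mass M
# (the standard ISCO estimate; constants are approximate).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def f_merger(mass_in_suns):
    M = mass_in_suns * M_sun
    return c**3 / (6**1.5 * math.pi * G * M)

print(f"{f_merger(4e3):.1f} Hz")        # few-thousand-solar-mass system: ~1 Hz
print(f"{f_merger(1e6)*1e3:.1f} mHz")   # million-solar-mass system: ~4 mHz

The first case sits at the edge of ground-based sensitivity; the second lands squarely in the milli-hertz band that only a space mission can reach.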

When and how did you get involved in LISA?

LISA was an idea by Pete Bender and colleagues in the 1980s. It was first proposed to the European Space Agency (ESA) in 1993 as a medium-sized mission – an envelope it could not possibly fit within. Nevertheless, ESA got excited by the idea and studies immediately began toward a larger mission. I became aware of the project around that time, immediately fell in love with it and, in 1995, joined the team of enthusiastic scientists led by Karsten Danzmann. At the time it was not clear that a detection of GWs from the ground was possible, whereas unless general relativity was badly wrong, LISA would certainly detect binary systems in our galaxy. It soon became clear that such a daring project needed a technology precursor to prove the feasibility of test-mass freefall. This built on my field of expertise, and I became principal investigator, with Karsten as co-principal investigator, of LISA Pathfinder.

What were the key findings of LISA Pathfinder? 

Pathfinder essentially squeezed one of LISA’s arms from millions of kilometres to half a metre and placed it inside a single spacecraft: two test masses in near-perfect gravitational freefall, with their relative distance tracked by a laser interferometer. It launched in December 2015 and exceeded all expectations. We were able to control and measure the relative motion of the test masses with unprecedented accuracy using innovative technologies including capacitive sensors, optical metrology and a micro-newton thruster system. By reducing and eliminating all sources of disturbance, Pathfinder observed the most perfect freefall ever created: the test masses were almost motionless with respect to each other, with a relative acceleration less than a millionth of a billionth of Earth’s gravitational acceleration.
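
For reference, the quoted freefall quality translates into SI units as follows (a simple conversion, nothing more):

# "A millionth of a billionth" of Earth's gravitational acceleration:
g = 9.81                           # m/s^2
rel_accel = 1e-6 * 1e-9 * g        # ~1e-14 m/s^2
print(f"residual acceleration ~ {rel_accel:.0e} m/s^2")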

What is LISA’s status today?

LISA is in its final study phase (“B1”) and marching toward adoption, possibly late next year, after which ESA will release the large industrial contracts to build the mission. Following Pathfinder, many of the necessary technologies are in a high state of maturity: the test masses will be the same, with only minor adjustments, and we also demonstrated a pm-resolution interferometer to detect the motion of the test masses inside the spacecraft – something we need in LISA too. What we could not test in Pathfinder is the million-kilometre-long pm-resolution interferometer, which is very challenging. Whereas LIGO’s 4 km-long arms allow you to send laser light back and forth between the mirrors and reach kW powers, LISA will have a 1 W laser: if you try to reflect it off a small test mass 2.5 million km away, you get back just 20 photons per second! The instrument therefore needs a transponder scheme: one spacecraft sends light to another, which collects it and measures its frequency to see if there is a shift due to a passing GW. You do this with all six test masses (two per spacecraft), combining the signals in one heck of an analysis to make a “synthetic” LIGO. Since this is mostly a matter of optics, you don’t need zero-g space tests, and based on laboratory evidence we are confident it will work. Although LISA is no longer a technology-research project, it will take a few more years to iron out some of the small problems and build the actual flight hardware, so there is no shortage of papers or PhD theses to be written.
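
The photon starvation follows directly from diffraction. A rough Python sketch makes the point; the 1064 nm wavelength, 30 cm telescope aperture and centimetre-scale test-mass size are illustrative assumptions rather than figures from the interview:

# One-way beam spreading over a LISA arm (illustrative numbers).
lam = 1.064e-6        # assumed laser wavelength, metres
D_tx = 0.3            # assumed transmitting-telescope aperture, metres
L = 2.5e9             # arm length, metres

theta = lam / D_tx    # diffraction-limited divergence, radians
spot = theta * L      # beam radius on arrival: kilometres across

# Fraction of the light a centimetre-scale test mass would intercept:
a = 0.02              # assumed test-mass half-width, metres
frac = a**2 / spot**2
print(f"spot radius ~ {spot/1e3:.0f} km, intercepted fraction ~ {frac:.0e}")

Losses of this order would be incurred again on the return trip, which is why a passive reflection delivers only a trickle of photons and why each spacecraft instead actively re-emits a fresh beam locked to the incoming one.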

How is the LISA consortium organised?

ESA’s science missions are often a collaboration in which ESA builds, launches and operates the satellite and its member states – via their universities and industries – contribute all or part of the scientific instruments, such as a telescope or a camera. NASA is a major partner with responsibilities that include the lasers, the device to discharge the test masses as they get charged up by cosmic rays, and the telescope to exchange laser beams among the satellites. Germany, which holds the consortium’s leadership role, also shares responsibility for a large part of the interferometry with the UK. Italy leads the development of the test-mass system; France the science data centre and the sophisticated ground testing of LISA optics; and Spain the science-diagnostics development. Critical hardware components are also contributed by Switzerland, the Netherlands, Belgium, the Czech Republic, Denmark and Poland, while scientists worldwide contribute to various aspects of the preparation of mission operation, data analysis and science utilisation. The LISA consortium has around 1500 members. 

What is the estimated cost of the mission, and what is industry’s role?

A very crude estimate suggests that the sum of ESA, NASA and member-state contributions adds up to something below two billion dollars. One of the main drivers of ESA’s scientific programme is to maintain the technological level of European aerospace, so the involvement of industry, in close cooperation with scientific institutes, is crucial. Once the mission passes adoption, ESA will grant contracts to prime industrial contractors who take responsibility for the mission. To foster industrial competition during the study phase, ESA has awarded contracts to two independent contractors, in our case Airbus and Thales Alenia. In addition, international partners and member-state contributions often, if not always, involve industry.

What scientific and technological synergies exist with other fields?

LISA will look for deviations from general relativity, in particular in the case where compact objects fall into a supermassive black hole. In terms of their importance, deviations from general relativity are a very close cousin of deviations from the Standard Model of particle physics. Which will come first we don’t know, but LISA is certainly an outstanding laboratory for fundamental gravitational physics. Then there are expectations for cosmology, such as tracing the history of black-hole formation or perhaps detecting stochastic backgrounds of GWs, such as the “cusps” predicted in string theory. Wherever you push the frontiers to investigate the universe at large, you push the frontiers of fundamental interactions – so it’s not surprising that one of our best cosmologists now works at CERN! Technologically speaking, we have just started a collaboration with CERN’s vacuum group. In LISA we have a tiny vacuum volume in the region where the test masses are located, and it is full of components and cables. It was a big challenge for Pathfinder, but for LISA we definitely need to understand more. The CERN vacuum group is really interested in understanding this, so we are very happy with this new collaboration. As with LIGO, Advanced Virgo and the Einstein Telescope, LISA is a CERN-recognised experiment.

There is no other space mission with as many papers published about its science expectations before it even leaves the ground

What’s the secret to maintaining the momentum in a complex, long-term global project in fundamental physics? 

The LISA mission is so fascinating that it is “self-selling”. Scientists liked it, engineers liked it, industry liked it, space agencies liked it. Obviously Pathfinder helped a lot – it meant that even in the darkest moments we knew we were “real”. But in the meantime, our theory colleagues did so much work. As far as I know, there is no other space mission with as many papers published about its science expectations before it even leaves the ground. It’s not just that the science is inspiring, but the fact that you can calculate things. The instrumentation is also so fascinating that students want to do it. With Pathfinder, we faced many difficulties. We were naïve in thinking that we could take this thing that we built in the lab and turn it into an industrial project. Of course we needed to grow and learn, but because we loved the project so much, we never ever gave up. One needs this mindset and resilience to make big scientific projects work.

When do you envision launch? 

Currently it’s planned for the mid-2030s. This is a bit in the future at my age, but I am grateful to have seen the launch of LISA Pathfinder and I am happy to think that many of my young colleagues will see it, and share the same emotions we did with Pathfinder, as a new era in GW astronomy opens up.

Determining the lifetime of the Bs

LHCb figure 1

As the LHCb experiment prepares for data taking with an upgraded detector for LHC Run 3, the rich harvest of results using data collected in Run 1 and Run 2 of the LHC continues.

A fascinating area of study is the quantum-mechanical oscillation of neutral mesons between their particle and antiparticle states, which implies a coupled system whose two mass eigenstates have different lifetimes. The phenomenology of the Bs system is particularly interesting as it provides a sensitive probe of physics beyond the Standard Model. A Bs meson oscillates with a frequency of about 3 × 1012 Hz, switching between particle and antiparticle on average about nine times during its lifetime, τ. In addition, a sizeable difference between the decay widths of the heavy (ΓH) and light (ΓL) mass eigenstates is expected. Measuring the lifetime of a CP-even Bs decay mode determines τL = 1/ΓL.
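
These numbers can be cross-checked from two measured quantities, the Bs mixing frequency and its average lifetime; the values in the Python sketch below are approximate and inserted here purely for illustration:

import math

# Cross-check of the oscillation numbers quoted above (approximate inputs).
dm_s = 17.77e12           # Bs mixing angular frequency, rad/s (~17.77 ps^-1)
tau = 1.52e-12            # average Bs lifetime, s

nu = dm_s / (2 * math.pi)    # oscillation frequency: ~2.8e12 Hz
cycles = nu * tau            # ~4.3 full particle-antiparticle cycles
switches = 2 * cycles        # ~9 changes of identity per lifetime
print(f"nu ~ {nu:.1e} Hz, identity switches per lifetime ~ {switches:.0f}")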

LHCb has recently released a new and precise measurement of this parameter, making use of Bs → J/ψη decays selected from 5.7 fb–1 of Run 2 data. The study improves on the previous Run 1 precision by a factor of two. Due to the combinatorial background, the reconstruction of the η meson via its two-photon decay mode is a particular challenge for this analysis. Despite this, and even though the modest energy resolution of the calorimeter leads to a relatively broad mass peak that partially overlaps with the signal from the B0 → J/ψη decay, a competitive accuracy has been achieved. By exploiting the latest machine-learning techniques to reduce the background, together with the well-understood response of the LHCb detector, the Bs → J/ψη decay is observed (figure 1), and τL is extracted from a two-dimensional fit to the mass and decay-time distributions.
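
For the curious, the heart of such a measurement is typically an unbinned maximum-likelihood fit of the decay-time distribution. The toy Python sketch below fits a pure exponential to simulated decay times; the real analysis is far richer, fitting mass and decay time simultaneously with backgrounds, acceptance and resolution effects:

import numpy as np
from scipy.optimize import minimize_scalar

# Toy lifetime fit: generate exponential decay times and recover tau.
rng = np.random.default_rng(1)
t = rng.exponential(scale=1.445, size=100_000)    # toy decay times, ps

def nll(tau):
    # Negative log-likelihood for the pdf (1/tau) * exp(-t/tau).
    return len(t) * np.log(tau) + t.sum() / tau

res = minimize_scalar(nll, bounds=(0.5, 3.0), method="bounded")
print(f"fitted lifetime = {res.x:.3f} ps")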

LHCb figure 2

The analysis finds τL = 1.445 ± 0.016 (stat) ± 0.008 (syst) ps, which is the most precise measurement of this quantity. Combined with the LHCb Run 1 study of this and the Bs → Ds+Ds– decay mode, τL = 1.437 ± 0.014 ps, which agrees well both with the Standard Model expectation (τL = 1.422 ± 0.013 ps) and with the value inferred from measurements of Γs and ΔΓs in Bs → J/ψφ decays. Further improvements in the knowledge of τL are expected from other CP-even Bs decays to final states containing η or η′ mesons, from the Bs → Ds+Ds– dataset collected during Run 2, and from the upcoming Run 3.
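
For readers wondering how such results are combined, the standard tool is the inverse-variance weighted average, sketched below in Python. The second input is a made-up placeholder, and real combinations also account for correlated systematics, which this sketch ignores:

import math

def combine(measurements):
    # Inverse-variance weighted average of independent (value, error) pairs.
    weights = [1.0 / err**2 for _, err in measurements]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return mean, math.sqrt(1.0 / total)

new = (1.445, math.hypot(0.016, 0.008))   # Run 2: stat and syst in quadrature
old = (1.48, 0.03)                        # illustrative placeholder only
print(combine([new, old]))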
