Topics

ESO’s Extremely Large Telescope halfway to completion

The construction of the world’s largest optical telescope, the Extremely Large Telescope (ELT), has reached its mid-point, the European Southern Observatory (ESO) announced on 11 July. Originally expected to see first light in the early 2020s, the telescope is now scheduled to begin operations in 2028, owing to delays inherent in building such a large and complex instrument as well as to the COVID-19 pandemic.

The base and frame of the ELT’s dome structure on Cerro Armazones in the Chilean Atacama Desert have now been set. Meanwhile, at European sites, the mirrors of the ELT’s five-mirror optical system are being manufactured. More than 70% of the supports and blanks for the main mirror – which at 39 m across will be the biggest primary mirror ever built – are complete, and mirrors two and three have been cast and are now being polished.

Along with six laser guide sources that will act as reference stars, mirrors four and five form part of a sophisticated adaptive-optics system to correct for atmospheric disturbances. The ELT will observe the universe in the near-infrared and visible regions to track down Earth-like exoplanets, investigate faint objects in the solar system and study the first stars and galaxies. It will also explore black holes and the dark universe, and test fundamental constants (CERN Courier November/December 2019 p25).

ALICE ups its game for sustainable computing

The Large Hadron Collider (LHC) roared back to life on 5 July 2022, when proton–proton collisions at a record centre-of-mass energy of 13.6 TeV resumed for Run 3. To enable the ALICE collaboration to benefit from the increased instantaneous luminosity of this and future LHC runs, the ALICE experiment underwent a major upgrade during Long Shutdown 2 (2019–2022) that will substantially improve track reconstruction in terms of spatial precision and tracking efficiency, in particular for low-momentum particles. The upgrade will also enable an increased interaction rate of up to 50 kHz for lead–lead (PbPb) collisions in continuous readout mode, which will allow ALICE to collect a data sample more than 10 times larger than the combined Run 1 and Run 2 samples.

ALICE is a unique experiment at the LHC devoted to the study of extreme nuclear matter. It comprises a central barrel (the largest data producer) and a forward muon “arm”. The central barrel relies mainly on four subdetectors for particle tracking: the new inner tracking system (ITS), which is a seven-layer, 12.5 gigapixel monolithic silicon tracker (CERN Courier July/August 2021 p29); an upgraded time projection chamber (TPC) with GEM-based readout for continuous operation; a transition radiation detector; and a time-of-flight detector. The muon arm is composed of three tracking devices: a newly installed muon forward tracker (a silicon tracker based on monolithic active pixel sensors), revamped muon chambers and a muon identifier.

Due to the increased data volume in the upgraded ALICE detector, storing all the raw data produced during Run 3 is impossible. One of the major ALICE upgrades in preparation for the latest run was therefore the design and deployment of a completely new computing model: the O2 project, which merges online (synchronous) and offline (asynchronous) data processing into a single software framework. In addition to an upgrade of the experiment’s computing farms for data readout and processing, this necessitates efficient online compression and the use of graphics processing units (GPUs) to speed up processing. 

Pioneering parallelism

As their name implies, GPUs were originally designed to accelerate computer-graphics rendering, especially in 3D gaming. While they continue to be utilised for such workloads, GPUs have become general-purpose vector processors for use in a variety of settings. Their intrinsic ability to perform several tasks simultaneously gives them a much higher compute throughput than traditional CPUs and enables them to be optimised for data processing rather than, say, data caching. GPUs thus reduce the cost and energy consumption of associated computing farms: without them, about eight times as many servers of the same type and other resources would be required to handle the ALICE TPC online processing of PbPb collision data at a 50 kHz interaction rate. 
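
To make the data-parallel model concrete, the sketch below (a minimal illustration in CUDA, not the actual ALICE O2 code; all names and numbers are invented) applies a per-channel gain calibration to a large batch of TPC-like ADC samples, with one GPU thread per sample – exactly the kind of embarrassingly parallel workload on which GPUs outperform CPUs.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy calibration kernel: one thread per ADC sample. Illustrative only.
__global__ void calibrate(const unsigned short* adc, const float* gain,
                          const int* channel, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = adc[i] * gain[channel[i]];
}

int main()
{
    const int n = 1 << 20;                 // ~1 million toy samples
    unsigned short* adc; int* channel; float *gain, *out;
    cudaMallocManaged(&adc, n * sizeof(unsigned short));
    cudaMallocManaged(&channel, n * sizeof(int));
    cudaMallocManaged(&gain, 16 * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { adc[i] = 100; channel[i] = i % 16; }
    for (int c = 0; c < 16; ++c) gain[c] = 1.0f + 0.01f * c;

    // Thousands of threads execute the same operation concurrently
    calibrate<<<(n + 255) / 256, 256>>>(adc, gain, channel, out, n);
    cudaDeviceSynchronize();
    printf("first calibrated sample: %.2f\n", out[0]);

    cudaFree(adc); cudaFree(channel); cudaFree(gain); cudaFree(out);
    return 0;
}
```

On a CPU the equivalent loop would be spread over at most a few dozen cores; on a GPU many thousands of lightweight threads run it at once, which is where the throughput advantage comes from.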

ALICE detector dataflow

Since 2010, when the high-level trigger online computer farm (HLT) entered operation, the ALICE detector has pioneered the use of GPUs for data compression and processing in high-energy physics. The HLT had direct access to the detector readout hardware and was crucial for compressing the data obtained from heavy-ion collisions. In addition, the HLT software framework was advanced enough to perform online data reconstruction. The experience gained during its operation in LHC Runs 1 and 2 was essential for the design and development of the current O2 software and hardware systems.

For data readout and processing during Run 3, the ALICE detector front-end electronics are connected via radiation-tolerant gigabit-transceiver links to custom field-programmable gate arrays (see “Data flow” figure). The latter, hosted in the first-level processor (FLP) farm nodes, perform continuous readout and zero-suppression (the removal of data containing no physics signal). In the case of the ALICE TPC, zero-suppression reduces the data rate from a prohibitive 3.3 TB/s at the front end to 900 GB/s for 50 kHz minimum-bias PbPb operations. This data stream is then pushed by the FLP readout farm to the event processing nodes (EPN) using data-distribution software running on both farms.
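
In ALICE this zero-suppression runs in FPGA firmware; the hypothetical CUDA fragment below merely illustrates the principle – discard samples at or below a pedestal-plus-threshold level and keep the rest – with invented data and an invented threshold.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Principle of zero-suppression: keep only samples above threshold and count
// what survives. In the real system this logic lives in FPGA firmware.
__global__ void zeroSuppress(const unsigned short* adc, int n, unsigned short thr,
                             unsigned short* kept, unsigned int* nKept)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && adc[i] > thr) {
        unsigned int pos = atomicAdd(nKept, 1u);  // compact survivors (order not preserved)
        kept[pos] = adc[i];
    }
}

int main()
{
    const int n = 1 << 16;
    unsigned short *adc, *kept; unsigned int* nKept;
    cudaMallocManaged(&adc, n * sizeof(unsigned short));
    cudaMallocManaged(&kept, n * sizeof(unsigned short));
    cudaMallocManaged(&nKept, sizeof(unsigned int));
    for (int i = 0; i < n; ++i) adc[i] = i % 100;  // toy data: mostly below threshold
    *nKept = 0;

    zeroSuppress<<<(n + 255) / 256, 256>>>(adc, n, 90, kept, nKept);
    cudaDeviceSynchronize();
    printf("kept %u of %d samples\n", *nKept, n);   // ~9% survive with this toy threshold

    cudaFree(adc); cudaFree(kept); cudaFree(nKept);
    return 0;
}
```

In the real detector this single step accounts for the reduction from 3.3 TB/s to 900 GB/s quoted above.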

Located in three containers on the surface close to the ALICE site, the EPN farm currently comprises 350 servers, each equipped with eight AMD GPUs with 32 GB of RAM each, two 32-core AMD CPUs and 512 GB of memory. The EPN farm is optimised for the fastest possible TPC track reconstruction, which constitutes the bulk of the synchronous processing, and provides most of its computing power in the form of GPU processing. As data flow from the front end into the farms and cannot be buffered, the EPN computing capacity must be sufficient for the highest data rates expected during Run 3.

Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing

Due to the continuous readout approach at the ALICE experiment, processing does not occur on a particular “event” triggered by some characteristic pattern in detector signals. Instead, all data recorded during a predefined time slot is read out and stored in a time frame (TF) data structure. The TF length is usually chosen as a multiple of one LHC orbit (corresponding to about 90 microseconds). Since a whole TF must always fit into a GPU’s memory, the collaboration chose GPUs with 32 GB of memory to allow enough flexibility in operating with different TF lengths, and an optimisation effort was made to reuse GPU memory in consecutive processing steps. During the 2022 proton run the system was stress-tested by increasing the proton collision rate beyond that needed to maximise the integrated luminosity for physics analyses; in this scenario the TF length was set to 128 LHC orbits. These high-rate tests aimed to reproduce occupancies similar to those expected for PbPb collisions, and demonstrated that the EPN processing can sustain rates nearly twice the nominal design value of 600 GB/s originally foreseen for PbPb collisions: with high-rate proton collisions at 2.6 MHz the readout reached 1.24 TB/s, which was fully absorbed and processed on the EPNs. However, because of fluctuations in centrality and luminosity, the number of TPC hits (and thus the required memory size) varies somewhat, demanding a certain safety margin.
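
A rough cross-check (using the round numbers quoted above; the exact LHC orbit is closer to 89 µs) shows why a TF fits comfortably in 32 GB of GPU memory:

$$
T_{\rm TF} \approx 128 \times 90\ \mu\mathrm{s} \approx 11.5\ \mathrm{ms},
\qquad
S_{\rm TF} \approx 900\ \mathrm{GB/s} \times 11.5\ \mathrm{ms} \approx 10\ \mathrm{GB},
$$

and even at the 1.24 TB/s reached in the high-rate pp tests a single TF is only about 14 GB, leaving headroom for reconstruction buffers and for the occupancy fluctuations mentioned above.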

Flexible compression 

At the incoming raw-data rates during Run 3 it is impossible to store the data, even temporarily, so the data is compressed in real time to a manageable size on the EPN farm. During the network transfer from the FLPs to the EPNs, event building is carried out by the data-distribution suite, which collects the partial TFs sent by the individual detectors and schedules the assembly of complete TFs. At the end of the transfer, each EPN node receives and then processes a full TF containing data from all ALICE detectors.

GPUs manufactured by AMD

The detector generating by far the largest data volume is the TPC, contributing more than 90% to the total data size. The EPN farm compresses this to a manageable rate of around 100 GB/s (depending on the interaction rate), which is then stored to the disk buffer. The TPC compression is particularly elaborate, employing several steps including a track-model compression to reduce the cluster entropy before the entropy encoding. Evaluating the TPC space-charge distortion during data taking is also the most computing-intensive aspect of online calibrations, requiring global track reconstruction for several detectors. At the increased Run 3 interaction rate, processing on the order of one percent of the events is sufficient for the calibration.
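
The idea behind the track-model step can be sketched as follows (a hypothetical CUDA fragment, not the O2 implementation; coordinate names, units and the quantisation step are invented): instead of storing absolute cluster coordinates, store small quantised residuals with respect to the fitted track, which peak sharply around zero and therefore cost far fewer bits after entropy encoding.

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Sketch of track-model compression: one thread per cluster computes the
// quantised residual between the measured cluster position and the track
// model's prediction at the same pad row. Residuals have low entropy and
// compress well in the subsequent entropy-encoding stage.
__global__ void residuals(const float* clusterY, const float* trackY,
                          short* resQ, int n, float lsb)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) resQ[i] = static_cast<short>(roundf((clusterY[i] - trackY[i]) / lsb));
}

int main()
{
    const int n = 1024;
    float *clusterY, *trackY; short* resQ;
    cudaMallocManaged(&clusterY, n * sizeof(float));
    cudaMallocManaged(&trackY, n * sizeof(float));
    cudaMallocManaged(&resQ, n * sizeof(short));
    for (int i = 0; i < n; ++i) {
        trackY[i] = 0.1f * i;                                   // toy track prediction
        clusterY[i] = trackY[i] + 0.03f * std::sin(0.5f * i);   // measured cluster nearby
    }
    residuals<<<(n + 255) / 256, 256>>>(clusterY, trackY, resQ, n, 0.01f);
    cudaDeviceSynchronize();
    printf("first residuals (in 0.01 units): %d %d %d\n", resQ[0], resQ[1], resQ[2]);
    cudaFree(clusterY); cudaFree(trackY); cudaFree(resQ);
    return 0;
}
```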

During data taking, the EPN system operates synchronously and the TPC reconstruction fully loads the GPUs. With the EPN farm providing 90% of its compute performance via GPUs, it is also desirable to maximise the GPU utilisation in the asynchronous phase. Since the relative contribution of the TPC processing to the overall workload is much smaller in the asynchronous phase, GPU idle times would be high and processing would be CPU-limited if the TPC part only ran on the GPUs. To use the GPUs maximally, the central-barrel asynchronous reconstruction software is being implemented with native GPU support. Currently, around 60% of the workload can run on a GPU, yielding a speedup factor of about 2.25 compared to CPU-only processing. With the full adaptation of the central-barrel tracking software to the GPU, it is estimated that 80% of the reconstruction workload could be processed on GPUs.
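
The numbers can be read through a simple Amdahl’s-law estimate (an illustration, not an official ALICE projection): if a fraction f of the workload is offloaded to GPUs that execute it an effective factor s faster, the overall speed-up over CPU-only processing is

$$
S(f) = \frac{1}{(1-f) + f/s}.
$$

The quoted speed-up of about 2.25 at f ≈ 0.6 corresponds to s ≈ 13; keeping the same s at f ≈ 0.8 would give S ≈ 3.9, which is why porting the remaining central-barrel code to GPUs pays off.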

In contrast to synchronous processing, asynchronous processing includes the reconstruction of data from all detectors, and all events instead of only a subset; physics analysis-ready objects produced from asynchronous processing are then made available on the computing Grid. As a result, the processing workload for all detectors, except the TPC, is significantly higher in the asynchronous phase. For the TPC, clustering and data compression are not necessary during asynchronous processing, while the tracking runs on a smaller input data set because some of the detector hits were removed during data compression. Consequently, TPC processing is faster in the asynchronous phase than in the synchronous phase. Overall, the TPC contributes significantly to asynchronous processing, but is not dominant. The asynchronous reconstruction will be divided between the EPN farm and the Grid sites. While the final distribution scheme is still to be decided, the plan is to split reconstruction between the online computing farm, the Tier 0 and the Tier 1 sites. During the LHC shutdown periods, the EPN farm nodes will almost entirely be used for asynchronous processing.

Great shape

Synchronous processing was first run and successfully commissioned in 2021, during the first pilot-beam collisions at injection energy. In 2022 it was used during nominal LHC operations, with ALICE performing online processing of pp collisions at a 2.6 MHz inelastic interaction rate. At lower interaction rates (for both pp and PbPb collisions), ALICE ran additional processing tasks on free EPN resources – for instance online determination of the TPC charged-particle energy loss – which would not be possible at the full 50 kHz PbPb collision rate. The particle-identification performance is shown in the “Particle ID” figure, in which no additional selections on the tracks or detector calibrations were applied.

ALICE TPC performance

Another performance metric used to assess the quality of the online TPC reconstruction is the charged-particle tracking efficiency. The efficiency for reconstructing tracks from PbPb collisions at a centre-of-mass energy of 5.52 TeV per nucleon pair ranges from 94% to 100% for pT > 0.1 GeV/c. The fake-track rate is negligible; the clone rate, however, increases significantly for low-pT primary tracks because of incomplete merging of track segments from very low-momentum particles that curl in the ALICE solenoidal field and leave and re-enter the TPC multiple times.
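
For reference, these quantities are usually defined along the following lines (standard Monte Carlo-based definitions; the exact ALICE prescriptions may differ in detail):

$$
\varepsilon = \frac{N_{\rm reco}^{\rm matched}}{N_{\rm gen}},\qquad
r_{\rm fake} = \frac{N_{\rm reco}^{\rm unmatched}}{N_{\rm reco}},\qquad
r_{\rm clone} = \frac{N_{\rm reco}^{\rm duplicate}}{N_{\rm reco}},
$$

where a clone is an additional reconstructed track matched to a generated particle that already has a matched track – exactly what happens when a curling track is split into several segments.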

The effective use of GPUs makes ALICE’s data processing highly efficient; it also improves data quality and the cost-effectiveness of computing – advantages that have not been overlooked by the other LHC experiments. To manage its data rates in real time, LHCb developed the Allen project, a first-level trigger processed entirely on GPUs that reduces the data rate by a factor of 30–60 before the alignment, calibration and final reconstruction steps. With this approach, 4 TB/s are processed in real time, with around 10 GB/s of the most interesting collision data selected for physics analysis.

At the beginning of Run 3, the CMS collaboration deployed a new HLT farm comprising 400 CPUs and 400 GPUs. With respect to a traditional CPU-only solution, this configuration reduced the processing time of the high-level trigger by 40%, improved the data-processing throughput by 80% and cut the power consumption of the farm by 30%. ATLAS uses GPUs extensively for physics analyses, especially for machine-learning applications; focus has also been placed on data processing, with the expectation that much of it can be offloaded to GPUs in the coming years. For all four LHC experiments, the future use of GPUs is crucial to reducing the cost, size and power consumption of computing at the higher luminosities of the LHC.

Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing. Today, 99% of synchronous processing is performed on GPUs, dominated by the largest contributor, the TPC.

More code

On the other hand, only about 60% of asynchronous processing – offline data processing on the EPN farm, here for pp collisions at 650 kHz – currently runs on GPUs. Although the TPC remains an important contributor to the asynchronous compute load, several other subdetectors also matter, and an effort is under way to port considerably more code to GPUs. This will raise the fraction of GPU-accelerated code beyond 80% once the full central-barrel tracking runs on GPUs; eventually, ALICE aims to run 90% of the entire asynchronous processing on GPUs.

PbPb collisions in the ALICE TPC

In November 2022 the upgraded ALICE detectors and central systems saw PbPb collisions for the first time during a two-day pilot run at a collision rate of about 50 Hz. High-rate PbPb processing was validated by injecting Monte Carlo data into the readout farm and running the whole data-processing chain on 230 EPN nodes. Because the TPC data volumes turned out to be somewhat larger than initially expected, this stress test is now being repeated with the final, continuously optimised TPC firmware on 350 EPN nodes, to provide the required 20% compute margin with respect to the 50 kHz PbPb operations foreseen for October 2023. Together with the upgraded detector components, the ALICE experiment has never been in better shape to probe extreme nuclear matter during the current and future LHC runs.

Report explores quantum computing in particle physics

A quantum computer built by IBM

Researchers from CERN, DESY, IBM Quantum and more than 30 other organisations have published a white paper identifying activities in particle physics that could benefit from quantum-computing technologies. Posted on arXiv on 6 July, the 40-page paper is the outcome of a working group set up at the QT4HEP conference held at CERN last November, which identified topics in theoretical and experimental high-energy physics where quantum algorithms may produce significant insights and results that are very hard, or even impossible, to obtain with classical computers.

Combining quantum mechanics and information theory, quantum computing is natively aligned with the underlying physics of the Standard Model. Quantum bits, or qubits, are the computational representation of a state that can be entangled and brought into superposition. Unlike classical bits, which are always either 0 or 1, a qubit exists in a superposition of the two states until measured, at which point it yields 0 or 1 with probabilities determined by the state’s amplitudes. Quantum-computing algorithms can exploit this to achieve computational advantages in terms of speed and accuracy, especially for processes that are yet to be understood.
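
In the usual notation, a single-qubit state is

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,\qquad |\alpha|^2 + |\beta|^2 = 1,
$$

with a measurement returning 0 with probability |α|² and 1 with probability |β|²; a register of n qubits is described by 2ⁿ such amplitudes, which is the exponential bookkeeping that classical simulation struggles to reproduce.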

“Quantum computing is very promising, but not every problem in particle physics is suited to this model of computing,” says Alberto Di Meglio, head of IT Innovation at CERN and one of the white paper’s lead authors alongside Karl Jansen of DESY and Ivano Tavernelli of IBM Quantum. “It’s important to ensure that we are ready and that we can accurately identify the areas where these technologies have the potential to be most useful.” 

Neutrino oscillations in extreme environments, such as supernovae, are one promising example given. In the context of quantum computing, neutrino oscillations can be considered strongly coupled many-body systems that are driven by the weak interaction. Even a two-flavour model of oscillating neutrinos is almost impossible to simulate exactly for classical computers, making this problem well suited for quantum computing. The report also identifies lattice-gauge theory and quantum field theory in general as candidates that could enjoy a quantum advantage. The considered applications include quantum dynamics, hybrid quantum/classical algorithms for static problems in lattice gauge theory, optimisation and classification problems. 
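
For orientation, the textbook two-flavour vacuum-oscillation probability for a single neutrino of energy E after a distance L is

$$
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right);
$$

the hard part in a supernova is not this formula but the fact that neutrino–neutrino forward scattering couples a vast number of such neutrinos into an entangled many-body system, whose state space again grows exponentially with the number of particles.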

With quantum computing we address problems in those areas that are very hard to tackle with classical methods

In experimental physics, potential applications range from simulations to data analysis and include jet physics, track reconstruction and algorithms used to simulate detector performance. One key advantage here is the speed-up in processing time compared to classical algorithms. Quantum-computing algorithms might also be better at finding correlations in data, while Monte Carlo simulations could benefit from random numbers generated by a quantum computer.

“With quantum computing we address problems in those areas that are very hard – or even impossible – to tackle with classical methods,” says Karl Jansen (DESY). “We can now explore physical systems to which we still do not have access.” 

The working group will meet again at CERN for a special workshop on 16 and 17 November, immediately before the Quantum Techniques in Machine Learning conference from 19 to 24 November.

Joined-up thinking in vacuum science

The first detection of gravitational waves in 2015 stands as a confirmation of Einstein’s prediction in his general theory of relativity and represents one of the most significant milestones in contemporary physics. Not only that, direct observation of gravitational ripples in the fabric of space-time opened up a new window on the universe that enables astronomers to study cataclysmic events such as black-hole collisions, supernovae and the merging of neutron stars. The hope is that the emerging cosmological data sets will, over time, yield unique insights to address fundamental problems in physics and astrophysics – the distribution of matter in the early universe, for example, and the search for dark matter and dark energy.

By contrast, an altogether more down-to-earth agenda – Beampipes for Gravitational Wave Telescopes 2023 – provided the backdrop for a three-day workshop held at CERN at the end of March. Focused on enabling technologies for current and future gravitational-wave observatories – specifically, their ultrahigh-vacuum (UHV) beampipe requirements – the workshop attracted a cross-disciplinary audience of 85 specialists drawn from the particle-accelerator and gravitational-wave communities alongside industry experts spanning steel production, pipe manufacturing and vacuum technologies (CERN Courier July/August 2023 p18). 

If location is everything, Geneva ticks all the boxes in this regard. With more than 125 km of beampipes and liquid-helium transfer lines, CERN is home to one of the world’s largest vacuum systems – and certainly the longest and most sophisticated in terms of particle accelerators. All of which ensured a series of workshop outcomes shaped by openness, encouragement and collaboration, with CERN’s technology and engineering departments proactively sharing their expertise in vacuum science, materials processing, advanced manufacturing and surface treatment with counterparts in the gravitational-wave community. 

Measurement science

To put all that knowledge-share into context, however, it’s necessary to revisit the basics of gravitational-wave metrology. The principal way to detect gravitational waves is to use a laser interferometer comprising two perpendicular arms, each several kilometres long and arranged in an L shape. At the intersection of the L, the laser beams in the two branches interact, whereupon the resulting interference signal is captured by photodetectors. When a gravitational wave passes through Earth, it induces differential length changes in the interferometer arms – such that the laser beams traversing the two arms experience dissimilar path lengths, resulting in a phase shift and corresponding alterations in their interference pattern. 
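
In the simplest single-pass picture (real detectors add arm cavities and recycling, which boost the effect), a gravitational wave of strain h changes the arm lengths differentially by ΔL ≃ hL, giving a phase shift

$$
\Delta\phi \simeq \frac{4\pi\,\Delta L}{\lambda} = \frac{4\pi\,h\,L}{\lambda},
$$

so for h ≈ 10⁻²¹, L = 4 km and λ ≈ 1 µm, Δφ is of order 10⁻¹¹ rad – which is why every competing phase-noise source, including the refractive-index fluctuations of residual gas discussed below, must be suppressed so aggressively.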

Better by design: the Einstein Telescope beampipes

Beampipe studies

The baseline for the Einstein Telescope’s beampipe design studies is the Virgo gravitational-wave experiment. The latter’s beampipe – which is made of austenitic stainless steel (AISI 304L) – consists of a 4 mm thick wall reinforced with stiffener rings and equipped with an expansion bellows (to absorb shock and vibration).

While steel remains the material of choice for the Einstein Telescope beampipe, other grades beyond AISI 304L are under consideration. Ferritic steels, for example, can contribute to a significant cost reduction per unit mass compared to austenitic stainless steel, which contains nickel. Ferrite also has a body-centred-cubic crystallographic structure that results in lower residual hydrogen levels versus face-centred-cubic austenite – a feature that eliminates the need for expensive solid-state degassing treatments when pumping down to UHV. 

Options currently on the table include the cheapest ferritic steels, known as “mild steels”, which are used in gas pipelines after undergoing corrosion treatment, as well as ferritic stainless steels containing more than 12% chromium by weight. While initial results with the latter show real promise, plastic deformation of welded joints remains an open topic, while the magnetic properties of these materials must also be considered to prevent anomalous transmission of electromagnetic signals and induced mechanical vibrations.

Along a related coordinate, CERN is developing an alternative solution with respect to the “baseline design” that involves corrugated walls with a thickness of 1.3 mm, eliminating the need for bellows and reinforcements. Double-wall pipe designs are also in the mix – either with an insulation vacuum or thermal insulators between the two walls. 

Beyond the beampipe material, studies are exploring the integration of optical baffles, which intermittently reduce the pipe aperture to block scattered photons. Various aspects such as positioning, material, surface treatment and installation are under review, while the transfer of vibrations from the tunnel structure to the baffle represents another line of enquiry. 

With this in mind, the design of the beampipe support system aims to minimise the transmission of vibrations to the baffles and to keep the frequency of the first vibrational eigenmode within a range where the Einstein Telescope is expected to be less sensitive. Defining the vibration transfer function from the tunnel’s near-environment to the beampipe is another key objective, as are the vibration levels induced by airflow in the tunnel (around the beampipe) and by stray electromagnetic fields from beampipe instrumentation.

Another thorny challenge is integration of the beampipes into the Einstein Telescope tunnel. Since the beampipes will be made up of approximately 15 m-long units, welding in the tunnel will be mandatory. CERN’s experience in welding cryogenic transfer lines and magnet junctions in the LHC tunnel will be useful in this regard, with automatic welding and cutting machines being one possible option to streamline deployment. 

Also under scrutiny is the logistics chain from raw material to final installation. Several options are being evaluated, including manufacturing and treating the beampipes on-site to reduce storage needs and align production with the pace of installation. While this solution would reduce the shipping costs of road and maritime transport, it would require specialised production personnel and dedicated infrastructure at the Einstein Telescope site.

Finally, the manufacturing and treatment processes of the beampipes will have a significant impact on cost and vacuum performance – most notably with respect to dust control, an essential consideration to prevent excessive light scattering due to falling particles and changes in baffle reflectivity. Dust issues are common in particle accelerators and the lessons learned at CERN and other facilities may well be transferable to the Einstein Telescope initiative. 

These are no ordinary interferometers, though. The instruments operate at the outer limits of measurement science and are capable of tracking changes in length down to a few tens of zeptometres (1 zm = 10⁻²¹ m), a length scale roughly 10,000 times smaller than the diameter of a proton. This achievement is the result of extraordinary progress in optical technologies over recent decades – advances in laser stability and mirror design, for example – as well as the ongoing quest to minimise sources of noise arising from seismic vibrations and quantum effects.

With the latter in mind, the interferometer laser beams must also propagate through vacuum chambers to avoid potential scattering of the light by gas molecules. The residual gas present within these chambers introduces spatial and temporal fluctuations in the refractive index of the medium through which the laser beam propagates – primarily caused by statistical variations in gas density. 

As such, the coherence of the laser beam can be compromised as it traverses regions characterised by a non-uniform refractive index, resulting in phase distortions. To mitigate the detrimental effects of coherence degradation, it is therefore essential to maintain hydrogen levels at pressures lower than 10⁻⁹ mbar, while even stricter UHV requirements are in place for heavier molecules (depending on their polarisability and thermal speed).

Now and next

Right now, there are four gravitational-wave telescopes in operation: LIGO (across two sites in the US), Virgo in Italy, KAGRA in Japan, and GEO600 in Germany (while India has recently approved the construction of a new gravitational-wave observatory in the western state of Maharashtra). Coordination is a defining feature of this collective endeavour, with the exchange of data among the respective experiments crucial for eliminating local interference and accurately pinpointing the detection of cosmic events.

Meanwhile, the research community is already planning for the next generation of gravitational-wave telescopes. The primary objective: to expand the portion of the universe that can be comprehensively mapped and, ultimately, to detect the primordial gravitational waves generated by the Big Bang. In terms of implementation, this will demand experiments with longer interferometer arms accompanied by significant reductions in noise levels (necessitating, for example, the implementation of cryogenic cooling techniques for the mirrors). 

The beampipe for the ALICE experiment

Two leading proposals are on the table: the Einstein Telescope in Europe and the Cosmic Explorer in the US. The latter proposes a 40 km long interferometer arm with a 1.2 m diameter beampipe, configured in the traditional L shape and across two different sites (as per LIGO). Conversely, the former proposes six 60° Ls in an underground tunnel laid out in an equilateral triangle configuration (10 km long sides, 1 m beampipe diameter and with a high- and low-frequency detector at each vertex). 

For comparison, the current LIGO and Virgo installations feature arm lengths of 4 km and 3 km, respectively. As a result, the total length of vacuum vessel required for the Einstein Telescope is projected to be 120 km, while for the Cosmic Explorer it is expected to be 160 km. In short: both programmes will require the most extensive and ambitious UHV systems ever constructed.

Extreme vacuum 

At a granular level, the vacuum requirements for the Einstein Telescope and Cosmic Explorer assume that the noise induced by residual gas is significantly lower than the allowable noise budget of the gravitational interferometers themselves. This comparison is typically made in terms of amplitude spectral density. A similar approach is employed in particle accelerators, where an adequately low residual gas density is imperative to minimise any impacts on beam lifetimes (which are predominantly constrained by other unavoidable factors such as beam-beam interactions and collimation). 

The specification for the Einstein Telescope states that the contribution of residual gas density to the overall noise budget must not exceed 10%, which necessitates that hydrogen partial pressure be maintained in the low 10⁻¹⁰ mbar range. Achieving such pressures is commonplace in leading-edge particle accelerator facilities and, as it turns out, not far beyond the limits of current gravitational-wave experiments. The problem, though, comes when mapping current vacuum technologies to next-generation experiments like the Einstein Telescope.

In such a scenario, the vacuum system would represent one of the biggest capital equipment costs – on a par, in fact, with the civil engineering works (the main cost-sink). As a result, one of the principal tasks facing the project teams is the co-development – in collaboration with industry – of scalable vacuum solutions that will enable the cost-effective construction of these advanced experiments without compromising on UHV performance and reliability. 

Follow the money

It’s worth noting that the upward trajectory of capital/operational costs versus length of the experimental beampipe is a challenge that’s common to both next-generation particle accelerators and gravitational-wave telescopes – and one that makes cost reduction mandatory when it comes to the core vacuum technologies that underpin these large-scale facilities. In the case of the proposed Future Circular Collider at CERN, for instance, a vacuum vessel exceeding 90 km in length would be necessary. 

Of course, while operational and maintenance costs must be prioritised in the initial design phase, the emphasis on cost reduction touches all aspects of project planning and, thereafter, requires meticulous optimisation across all stages of production – encompassing materials selection, manufacturing processes, material treatments, transport, logistics, equipment installation and commissioning. Systems integration is also paramount, especially at the interfaces between the vacuum vessel’s technical systems and adjacent infrastructure (for example, surface buildings, underground tunnels and caverns). Key to success in every case is a well-structured project that brings together experts with diverse competencies as part of an ongoing “collective conversation” with their counterparts in the physics community and industrial supply chain.

Welding services

Within this framework, CERN’s specialist expertise in managing large-scale infrastructure projects such as the HL-LHC can help to secure the success of future gravitational-wave initiatives. Notwithstanding CERN’s capabilities in vacuum system design and optimisation, other areas of shared interest between the respective communities include civil engineering, underground safety and data management, to name a few. 

Furthermore, such considerations align well with the latest update of the European strategy for particle physics – which explicitly prioritises the synergies between particle and astroparticle physics – and are reflected operationally through a collaboration agreement (signed in 2020) between CERN and the lead partners on the Einstein Telescope feasibility study – Nikhef in the Netherlands and INFN in Italy.

In this way, CERN is engaged directly as a contributing partner on the beampipe studies for the Einstein Telescope (see “Better by design: the Einstein Telescope beampipes”). The three-year project, which kicked off in September 2022, will deliver the main technical design report for the telescope’s beampipes. CERN’s contribution is structured in eight work packages, from design and materials choice to logistics and installation, including surface treatments and vacuum systems. 

CERN teams are engaged directly on the beampipe studies for the Einstein Telescope

The beampipe pilot sector will also be installed at CERN, in a building previously used for testing cryogenic helium transfer lines for the LHC. Several measurements are planned for 2025, including tests relating to installation, alignment, in-situ welding, leak detection and achievable vacuum levels. Other lines of enquiry will assess the efficiency of the bakeout process, which involves the injection of electrical current directly into the beampipe walls (heating them in the 100–150 °C range) to minimise subsequent outgassing levels under vacuum.

Given that installation of the beampipe pilot sector is time-limited, while details around the manufacturing and treatment of the vacuum chambers are still to be clarified, the engagement of industry partners in this early design stage is a given – an approach, moreover, that seeks to replicate the collaborative working models pursued as standard within the particle-accelerator community. While there’s a lot of ground to cover in the next two years, the optimism and can-do mindset of all participants at Beampipes for Gravitational Wave Telescopes 2023 bodes well.

Event displays in motion

The first event displays in particle physics were direct images of the traces left by particles when they interacted with gases or liquids. The oldest event display of an elementary particle, published in Charles Wilson’s 1927 Nobel lecture and taken between 1912 and 1913, showed the trajectory of an electron: a trail of small droplets condensed along the ionisation left by a cosmic-ray electron passing through the gas of a cloud chamber, the trajectory being bent by the electrostatic field (see “First light” figure). Bubble chambers, which work in a similar way to cloud chambers but are filled with liquid rather than gas, were key in proving the existence of neutral currents 50 years ago, along with many other important results. In both cases a particle crossing the detector triggered a camera that took photographs of the trajectories.

Following the discovery of the Higgs boson in particular, outreach has become another major pillar of event displays

Georges Charpak’s invention of the multi-wire proportional chamber in 1968, which made it possible to distinguish single tracks electronically, paved the way for three-dimensional (3D) event displays. With 40 drift chambers, and computers able to process the large amounts of data produced by the UA1 detector at the SppS, it was possible to display the tracks of decaying W and Z bosons along the beam axis, aiding their 1983 discovery (see “Inside events” figure, top).  

Design guidelines 

With the advent of LEP and the availability of more powerful computers and reconstruction software, physicists knew that the amount of data would increase to the point where displaying all of it would make pictures incomprehensible. In 1995 members of the ALEPH collaboration released guidelines – implemented in a programme called Dali, which succeeded Megatek – to make event displays as easy to understand as possible, and the same principles apply today. To better match human perception, two different layouts were proposed: the wire-frame technique and the fish-eye transformation. The former shows detector elements via a rendering of their shape, resulting in a 3D impression (see “Inside events” figure, bottom). However, the wire-frame pictures needed to be simplified when too many trajectories and detector layers were shown. This gave rise to the fish-eye view, or projection in x versus y, which emphasised the role of the tracking system. The remaining issue of superimposed detector layers was mitigated by showing a cross section of the detector in the same event display (see “Inside events” figure, middle). Together with a colour palette that helped distinguish the different objects, such as jets, from one another, these design principles prevailed into the LHC era.

First ever event display

The LHC took not only data acquisition, software and analysis algorithms to a new level, but also event displays. As at LEP, the displays initially served more as a debugging tool for the experiments to visualise events and see how the reconstruction software and detector work. In this case, a static image of the event is created and sent to the control room in real time, where it is examined by experts for anomalies, for example due to incorrect cabling. “Visualising the data is really powerful and shows you how beautiful the experiment can be, but also the brutal truth because it can tell you something that does not work as expected,” says ALICE’s David Dobrigkeit Chinellato. “This is especially important after long shutdowns or the annual year-end-technical stops.”

Largely based on the software used to create event displays at LEP, each of the four main LHC experiments developed its own tools, tailored to its specific analysis software (see “LHC returns” figure). The detector geometry is loaded into the software, followed by the event data; if the detector layout doesn’t change, the geometry is not recreated. As at LEP, both fish-eye and wire-frame images are used. Thanks to better rendering software and hardware developments such as more powerful CPUs and GPUs, wire-frame images are becoming ever more realistic (see “LHC returns” figure). Computing developments and the additional pile-up from increased collision rates have motivated more advanced event displays. Driven by the enthusiasm of individual physicists, and in time for the start of the LHC Run 3 ion run in October 2022, ALICE experimentalists have begun to use software that renders each event to give it a more realistic and crisper view (see “Picture perfect” image). In particular, for lead–lead collisions at 5.36 TeV per nucleon pair measured with ALICE, the fully reconstructed tracks are plotted to achieve the most efficient visualisation.

Inside events

ATLAS also uses both fish-eye and wire-frame views. Their current event-display framework, Virtual Point 1 (VP1), creates interactive 3D event displays and integrates the detector geometry to draw a selected set of particle passages through the detector. As with the other experiments, different parts of the detector can be added or removed, resulting in a sliced view. Similarly, CMS visualises their events using in-house software known as Fireworks, while LHCb has moved from a traditional view using Panoramix software to a 3D one using software based on Root TEve.

In addition, ATLAS, CMS and ALICE have developed virtual-reality views. VP1, for instance, allows data to be exported in a format that is used for videos and 3D images, enabling both physicists and the public to immerse themselves fully in the detector. CMS physicists created a first virtual-reality version during a hackathon at CERN in 2016 and integrated this feature, with small modifications, into the application they use for outreach. ALICE’s augmented-reality application “More than ALICE”, which is intended for visitors, overlays descriptions of the detectors and even event displays, and works on mobile devices.

Phoenix rising

To streamline the work on event displays at CERN, developers in the LHC experiments joined forces and published a visualisation whitepaper in 2017 to identify challenges and possible solutions. As a result it was decided to create an experiment-agnostic event display, later named Phoenix. “When we realised the overlap of what we are doing across many different experiments, we decided to develop a flexible browser-based framework, where we can share effort and leverage our individual expertise, and where users don’t need to install any special software,” says main developer Edward Moyse of ATLAS. While experiment-specific frameworks are closely tied to the experiments’ data format and visualise all incoming data, experiment-agnostic frameworks only deal with a simplified version of the detectors and a subset of the event data. This makes them lightweight and fast, but requires an extra processing step, as the experimental data need to be converted into a generic format and thus lose some detail. Furthermore, not every experiment has the symmetric layout of ATLAS and CMS; this applies to LHCb, for instance.

Event displays of the first LHC Run 3 collisions

Phoenix initially supported the geometry and event-display formats of LHCb and ATLAS; those of CMS were added soon after, and the FCC has now joined. The platform had its first test in 2018 with the TrackML computing challenge, which used a fictitious High-Luminosity LHC (HL-LHC) detector created with Phoenix. The main reason to launch this challenge was to find new machine-learning algorithms that can deal with the unprecedented increase in data collection and pile-up expected in detectors during the HL-LHC runs, and at proposed future colliders.

Painting outreach

Following the discovery of the Higgs boson in particular, outreach has become another major pillar of event displays. Visually pleasing images and videos of particle collisions, which help in the communication of results, are tailor-made for today’s era of social media and high-bandwidth internet connections. “We created a special event display for the LHCb masterclass,” mentions LHCb’s Ben Couturier. “We show the students what an event looks like from the detector to the particle tracks.” CMS’s iSpy application is web-based and primarily used for outreach and CMS masterclasses, and has also been extended with a virtual-reality application. “When I started to work on event displays around 2007, the graphics were already good but ran in dedicated applications,” says CMS’s Tom McCauley. “For me, the big change is that you can now use all these things on the web. You can access them easily on your mobile phone or your laptop without needing to be an expert on the specific software.”

Event displays from LHCb and the simulated HL-LHC detector

Being available via a browser means that Phoenix is a versatile tool for outreach as well as physics. In places where the bandwidth needed to create event displays is scarce, pre-generated events can be used to highlight the main physics objects and to display the detector as clearly as possible. Another new way to experience a collision, and to immerse oneself fully in an event, is to wear virtual-reality goggles.

An even older and more experiment-agnostic framework than Phoenix, offering virtual-reality experiences, exists at CERN and is aptly called TEV (Total Event Display). Formerly used to show event displays in the LHC interactive tunnel as well as in the Microcosm exhibition, it is now used at the CERN Globe and the new Science Gateway centre. There, visitors will be able to play a game called “proton football”, where the collision energy depends on the “kick” the players give their protons. “This game shows that event displays are the best of both worlds,” explains developer Joao Pequenao of CERN. “They inspire children to learn more about physics by simply playing a soccer game, and they help physicists to debug their detectors.”

A soft spot for heavy metal

Welding is the technique of fusing two materials, often metals, by heating them to their melting points, creating a seamless union. Mastery of the materials involved, meticulous caution and remarkable steadiness are integral elements of a proficient welder’s skillset. The ability to adapt to various situations, such as mechanised or manual welding, is also essential. Audrey Vichard’s role as a welding engineer in CERN’s mechanical and materials engineering (MME) group encompasses comprehensive technical guidance in the realm of welding. She evaluates methodologies, improves welding processes, develops innovative solutions, and ensures compliance with global standards and procedures. This combination of tasks allows for the effective execution of complex projects for CERN’s accelerators and experiments. “It’s a kind of art,” says Audrey. “Years of training are required to achieve high-quality welds.”

Audrey is one of the newest additions to the MME group, which provides specific engineering solutions combining mechanical design, fabrication and material sciences for accelerator components and physics detectors to the CERN community. She joined the forming and welding section as a fellow in January 2023, having previously studied metallurgy in the engineering school at Polytech Nantes in France. “While in school, I did an internship in Toulon, where they build submarines for the army. I was in a group with a welder, who passed on his passion for welding to me – especially when applied in demanding applications.”

Extreme conditions

What sets welding at CERN apart are the variety of materials used and the environments the finished parts have to withstand: radioactivity, anything from high pressure to ultra-high vacuum, and cryogenic temperatures. Stainless steel is the most frequently used material, says Audrey, but rarer ones such as niobium also come into play. “You don’t really find niobium for welding outside CERN – it is very specific, so it’s interesting and challenging to study niobium welds. To keep the purity of this material in particular, we have to apply a special vacuum welding process using an electron beam.” The same is true for titanium, a material of choice for its low density and good mechanical properties, currently under study for the next-generation HL-LHC beam dump. Whether it’s steel, titanium, copper, niobium or aluminium, each material has a unique metallurgical behaviour that greatly influences the welding process. To meet the strict operating conditions over the lifetime of the components, the welding parameters are developed accordingly, and rigorous quality control and traceability are essential.

“Although it is the job of the physicists at CERN to come up with the innovative machines they need to push knowledge further, it is an interesting exchange to learn from each other, juggling between ideal objects and industrial realities,” explains Audrey. “It is a matter of adaptation. The physicists come here and explain what they need and then we see if it’s feasible with our machines. If not, we can adapt the design or material, and the physicists are usually quite open to the change.”

Touring the main CERN workshop – which was one of CERN’s first buildings and has been in service since 1957 – Audrey is one of the few women present. “We are a handful of women graduating as International Welding Engineers (IWE). I am proud to be part of the greater scientific community and to promote my job in this domain, historically dominated by men.”

The physicists come here and explain what they need and then we see if it’s feasible with our machines

In the main workshop at CERN, Audrey is, along with her colleagues, a member of the welding experts’ team. “My daily task is to support welding activities for current fabrication projects CERN-wide. On a typical day, I can go from performing visual inspections of welds in the workshop to overseeing the welding quality, advising the CERN community according to the most recent standards, participating in large R&D projects and, as a welding expert, advising the CERN community in areas such as the framework of the pressure equipment directive.”

Together with colleagues from CERN’s vacuum, surfaces and coatings group (TE-VSC), and MME, Audrey is currently working on R&D for the Einstein Telescope – a proposed next-generation gravitational-wave observatory in Europe. It is part of a new collaboration between CERN, Nikhef and the INFN to design the telescope’s colossal vacuum system – the largest ever attempted (see CERN shares beampipe know-how for gravitational-wave observatories). To undertake this task, the collaboration is initially investigating different materials to find the best candidate combining ultra-high vacuum compatibility, weldability and cost efficiency. So far, one fully prototyped beampipe has been finished using stainless steel and another is in production with common steel; the third is yet to be done. The next main step will then be to go from the current 3 m-long prototype to a 50 m version, which will take about a year and a half. Audrey’s task is to work with the welders to optimise the welding parameters and ultimately provide a robust industrial solution to manufacture this giant vacuum chamber. “The design is unusual; it has not been used in any industrial application, at least not at this quality. I am very excited to work on the Einstein Telescope. Gravitational waves have always interested me, and it is great to be part of the next big experiment at such an early stage.”

A new TPC for T2K upgrade

In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties. 

To reduce these uncertainties further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino-beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector for the next-generation long-baseline neutrino-oscillation experiment Hyper-Kamiokande.

Meanwhile, R&D and testing for the prototype detectors for the DUNE experiment at the Long Baseline Neutrino Facility at Fermilab/SURF in the US are entering their final stages.

Extreme detector design for a future circular collider

FCC-hh reference detector

The Future Circular Collider (FCC) is the most powerful post-LHC experimental infrastructure proposed to address key open questions in particle physics. Under study for almost a decade, it envisions an electron–positron collider phase, FCC-ee, followed by a proton–proton collider, FCC-hh, in the same 91 km-circumference tunnel at CERN. The hadron collider would operate at a centre-of-mass energy of 100 TeV, extending the energy frontier by almost an order of magnitude compared to the LHC, and provide an integrated luminosity a factor of 5–10 larger. The mass reach for direct discovery at FCC-hh extends to several tens of TeV, allowing, for example, the production of new particles whose existence could first be exposed indirectly by precision measurements at FCC-ee.

The potential of FCC-hh offers an unprecedented opportunity to address fundamental unknowns about our universe

At the time of the kick-off meeting for the FCC study in 2014, the physics potential and the requirements for detectors at a 100 TeV collider were already heavily debated. These discussions were eventually channelled into a working group that provided the input to the 2020 update of the European strategy for particle physics and recently concluded with a detailed write-up in a 300-page CERN Yellow Report. To focus the effort, it was decided to study one reference detector capable of fully exploiting the FCC-hh physics potential. At first glance it resembles a super CMS detector with two LHCb detectors attached (see “Grand designs” image). A detailed detector-performance study followed, allowing an efficient assessment of the key physics capabilities.

The first detector challenge at FCC-hh is related to the luminosity, which is expected to reach 3 × 10³⁵ cm⁻²s⁻¹. This is six times larger than the HL-LHC luminosity and 30 times larger than the nominal LHC luminosity. Because the FCC will operate beams with a 25 ns bunch spacing, the so-called pile-up (the number of pp collisions per bunch crossing) scales by approximately the same factor. This results in almost 1000 simultaneous pp collisions, requiring a highly granular detector. Evidently, the assignment of tracks to their respective vertices in this environment is a formidable task.
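
The pile-up figure follows from a back-of-the-envelope estimate (assuming an inelastic pp cross-section of roughly 100 mb = 10⁻²⁵ cm² at 100 TeV):

$$
\mu \approx \mathcal{L}\,\sigma_{\rm inel}\,\Delta t_{\rm bunch}
\approx 3\times10^{35}\ \mathrm{cm^{-2}s^{-1}} \times 10^{-25}\ \mathrm{cm^{2}} \times 25\times10^{-9}\ \mathrm{s} \approx 750,
$$

i.e. of order 1000 once the cross-section and the bunch-filling scheme are treated more carefully.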

Longitudinal cross-section of the FCC-hh reference detector

The plan to collect an integrated pp luminosity of 30 ab⁻¹ brings the radiation-hardness requirements for the first layers of the tracking detector close to 10¹⁸ hadrons/cm², which is around 100 times more than the requirement for the HL-LHC. Still, the tracker volume with such a high radiation load is not excessively large. From a radial distance of around 30 cm outwards, radiation levels are already close to those expected for the HL-LHC, thus the silicon technology for these detector regions is already available.

The high radiation levels also demand very radiation-hard calorimetry, making a liquid-argon calorimeter the first choice for the electromagnetic calorimeter and for the forward regions of the hadron calorimeter. The power deposited in the very forward regions will be 4 kW per unit of rapidity, and it will be an interesting task to keep cryogenic liquids cold in such an environment. Thanks to the large shielding effect of the calorimeters, which have to be quite thick to contain the highest-energy particles, the radiation levels in the muon system are not too different from those at the HL-LHC, so the technology needed for this system is available.

Looking forward 

At an energy of 100 TeV, important SM particles such as the Higgs boson are abundantly produced in the very forward region. The forward acceptance of FCC-hh detectors therefore has to be much larger than at the LHC detectors. ATLAS and CMS enable momentum measurements up to pseudorapidities (a measure of the angle between the track and beamline) of around η = 2.5, whereas at FCC-hh this will have to be extended to η = 4 (see “Far reaching” figure). Since this is not achievable with a central solenoid alone, a forward magnet system is assumed on either side of the detector. Whether the optimum forward magnets are solenoids or dipoles still has to be studied and will depend on the requirements for momentum resolution in the very forward region. Forward solenoids have been considered that extend the precision of momentum measurements by one additional unit of rapidity. 

Momentum resolution versus pseudorapidity

A silicon tracking system with a radius of 1.6 m and a total length of 30 m provides a momentum resolution of around 0.6% for low-momentum particles, 2% at 1 TeV and 20% at 10 TeV (see “Forward momentum” figure). To detect at least 90% of the very forward jets that accompany a Higgs boson in vector-boson-fusion production, the tracker acceptance has to be extended up to η = 6. At the LHC such an acceptance is already achieved up to η = 4. The total tracker surface of around 400 m² at FCC-hh is “just” a factor of two larger than that of the HL-LHC trackers, and the total number of channels (16.5 billion) is around eight times larger.
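
The momentum-resolution figures above follow the usual tracker parameterisation (an illustrative fit to the quoted numbers, not the official FCC-hh performance function), in which a multiple-scattering term dominates at low momentum and the curvature-measurement term grows linearly with transverse momentum:

$$
\frac{\sigma_{p_T}}{p_T} \approx a \oplus b\,p_T,\qquad a \approx 0.6\%,\quad b \approx 2\%/\mathrm{TeV},
$$

which reproduces the roughly 2% at 1 TeV and 20% at 10 TeV quoted above for the central region.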

It is evident that the FCC-hh reference detector is more challenging than the LHC detectors, but not at all out of reach. The diameter and length are similar to those of the ATLAS detector. The tracker and calorimeters are housed inside a large superconducting solenoid 10 m in diameter, providing a magnetic field of 4 T. For comparison, CMS uses a solenoid with the same field and an inner diameter of 6 m. This difference does not seem large at first sight, but the stored energy (13 GJ) is about five times larger than that of the CMS coil, which requires very careful design of the quench protection system.
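The quoted stored energy can be checked with a back-of-the-envelope estimate, treating the field as uniform inside the coil bore (E = B²/2μ₀ × volume). The coil length used below is an assumption for illustration, so the result indicates only the order of magnitude.

import math

# Back-of-the-envelope stored energy of a solenoid: E = B^2 / (2*mu0) * volume.
mu0 = 4e-7 * math.pi  # vacuum permeability in T*m/A
B = 4.0               # field in tesla
radius = 5.0          # bore radius in metres (10 m diameter)
length = 20.0         # assumed coil length in metres (illustrative)

energy = B ** 2 / (2 * mu0) * math.pi * radius ** 2 * length
print(f"stored energy ~ {energy / 1e9:.0f} GJ")  # ~10 GJ, the same order as the quoted 13 GJ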

For the FCC-hh calorimeters, the major challenge, besides the high radiation dose, is the required energy resolution and particle identification in the high pile-up environment. The key to achieving the required performance is therefore a highly segmented calorimeter. The need for longitudinal segmentation calls for a solution different from the “accordion” geometry employed by ATLAS. Flat lead/steel absorbers, inclined by 50 degrees with respect to the radial direction, are interleaved with liquid-argon gaps and straight electrodes carrying high-voltage and signal pads (see “Liquid argon” figure). The readout of these pads on the back of the calorimeter is then possible thanks to the use of multi-layer electrodes fabricated as straight printed circuit boards. This idea has already been successfully prototyped within the CERN EP detector R&D programme.

The considerations for a muon system for the reference detector are quite different from those for the LHC experiments. When the detectors for the LHC were originally conceived in the late 1980s, it was not clear whether precise tracking in the vicinity of the collision point would be possible in this unprecedented radiation environment. Silicon detectors were excessively expensive and gas detectors were at the limit of applicability. For the LHC detectors, great emphasis was therefore placed on muon systems with good stand-alone performance, specifically for the ATLAS detector, which is able to provide a robust measurement of, for example, the decay of a Higgs particle into four muons, with the muon system alone.

Liquid argon

Thanks to the formidable advancement of silicon-sensor technology, which has led to full silicon trackers capable of dealing with around 140 simultaneous pp collisions every 25 ns at the HL-LHC, standalone performance is no longer a stringent requirement. The muon systems for FCC-hh can therefore fully rely on the silicon trackers, assuming just two muon stations outside the coil that measure the exit point and the angle of the muons. The muon track provides muon identification, the muon angle provides a coarse momentum measurement for triggering and the track position provides improved muon momentum measurement when combined with the inner tracker. 

The major difference between an FCC-hh detector and CMS is that there is no yoke for the return flux of the solenoid, as the cost would be excessive and its only purpose would be to shield the cavern from the magnetic field. The baseline design assumes that the cavern infrastructure can be built to be compatible with this stray field. Infrastructure that is sensitive to the magnetic field will be placed in the service cavern 50 m from the solenoid, where the stray field is sufficiently low.

Higgs self-coupling

The high granularity and acceptance of the FCC-hh reference detector will result in about 250 TB/s of data for calorimetry and the muon system, about 10 times more than in the ATLAS and CMS HL-LHC scenarios. There is no doubt that it will be possible to digitise and read out this data volume at the full bunch-crossing rate for these detector systems. The question remains whether the data rate of almost 2500 TB/s from the tracker can also be read out at the full bunch-crossing rate, or whether calorimeter, muon and possibly coarse tracker information need to be used for a first-level trigger decision, reducing the tracker readout rate to the few-MHz level without the loss of important physics. Even if the optical-link technology for full tracker readout were available and affordable, the required radiation hardness of the devices and the constraints on power and cooling services are prohibitive with current technology, calling for R&D on low-power, radiation-hard optical links.
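To put the tracker figure into perspective, dividing the quoted throughput by the 40 MHz bunch-crossing rate gives the data volume that would have to be shipped off-detector for every crossing, and shows how a first-level trigger at a few MHz relaxes the requirement. The sketch below is simple arithmetic on the quoted numbers; the 4 MHz trigger rate is an assumed example.

bunch_crossing_rate = 40e6  # Hz, for 25 ns bunch spacing
tracker_rate = 2500e12      # bytes/s quoted for the full tracker
calo_muon_rate = 250e12     # bytes/s quoted for calorimetry plus muon system

per_bx = tracker_rate / bunch_crossing_rate
print(f"tracker data per bunch crossing: ~{per_bx / 1e6:.0f} MB")  # ~60 MB

trigger_rate = 4e6  # assumed first-level trigger accept rate (a few MHz)
reduced = tracker_rate * trigger_rate / bunch_crossing_rate
print(f"tracker readout rate after trigger: ~{reduced / 1e12:.0f} TB/s")  # ~250 TB/s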

Benchmark physics

The potential of FCC-hh in the realms of precision Higgs and electroweak physics, high mass reach and dark-matter searches offers an unprecedented opportunity to address fundamental unknowns about our universe. The performance requirements for the FCC-hh baseline detector have been defined through a set of benchmark physics processes, selected among the key ingredients of the physics programme. The detector’s increased acceptance compared to the LHC detectors, and the higher energy of FCC-hh collisions, will allow physicists to uniquely improve the precision of measurements of Higgs-boson properties for a whole spectrum of production and decay processes complementary to those accessible at the FCC-ee. This includes measurements of rare processes such as Higgs pair-production, which provides a direct measure of the Higgs self-coupling – a crucial parameter for understanding the stability of the vacuum and the nature of the electroweak phase transition in the early universe – with a precision of 3 to 7% (see “Higgs self-coupling” figure).

Dark matters

Moreover, thanks to the extremely large Higgs-production rates, FCC-hh offers the potential to measure rare decay modes in a novel boosted kinematic regime well beyond what is currently studied at the LHC. These include the decay to second-generation fermions (muons), which can be measured to a precision of 1%. The Higgs branching fraction to invisible states can be probed down to a value of 10⁻⁴, allowing the parameter space for dark matter to be further constrained. The much higher centre-of-mass energy of FCC-hh, meanwhile, significantly extends the mass reach for discovering new particles. The potential for detecting heavy resonances decaying into di-muons and di-electrons extends to 40 TeV, while for coloured resonances such as excited quarks the reach is 45 TeV, pushing the current limits by almost an order of magnitude. In the context of supersymmetry, FCC-hh will be capable of probing stop squarks with masses up to 10 TeV, also well beyond the reach of the LHC.

In terms of dark-matter searches, FCC-hh has immense potential – particularly for probing scenarios of weakly interacting massive particles such as higgsinos and winos (see “Dark matters” figure). Electroweak multiplets are typically elusive, especially in hadron collisions, due to their weak interactions and large masses (needed to explain the relic abundance of dark matter in our universe). Their nearly degenerate mass spectrum produces an elusive final state in the form of so-called “disappearing tracks”. Thanks to the dense coverage of the FCC-hh detector tracking system, a general-purpose FCC-hh experiment could detect these particle decays directly, covering the full mass range expected for this type of dark matter. 

A detector at a 100 TeV hadron collider is clearly a challenging project. But detailed studies have shown that it should be possible to build a detector that can fully exploit the physics potential of such a machine, provided we invest in the necessary detector R&D. Experience with the Phase-II upgrades of the LHC detectors for the HL-LHC, developments for further exploitation of the LHC and detector R&D for future Higgs factories will be important stepping stones in this endeavour.

End-to-end simulation of particle accelerators using Sirepo



This webinar will give a high-level overview of how scientists can model particle accelerators using Sirepo, an open-source scientific computing gateway.

The speaker, Jonathan Edelen, will work through examples using three of Sirepo’s applications that best highlight the different modelling regimes for simulating a free-electron laser.


Jonathan Edelen, president, earned a PhD in accelerator physics from Colorado State University, after which he was selected for the prestigious Bardeen Fellowship at Fermilab. While at Fermilab he worked on RF systems and thermionic cathode sources at the Advanced Photon Source. Currently, Jon is focused on building advanced control algorithms for particle accelerators including solutions involving machine learning.

CERN shares beampipe know-how for gravitational-wave observatories

The direct detection of gravitational waves in 2015 opened a new window to the universe, allowing researchers to study the cosmos by merging data from multiple sources. There are currently four gravitational-wave telescopes (GWTs) in operation: LIGO at two sites in the US, Virgo in Italy, KAGRA in Japan and GEO600 in Germany. Discussions are ongoing to establish an additional site in India. The detection of gravitational waves is based on Michelson laser interferometry with Fabry–Perot cavities, which reveals the expansion and contraction of space at the level of ten-thousandths of the size of an atomic nucleus, i.e. 10⁻¹⁹ m. Despite the extremely low strain that needs to be detected, on average one gravitational wave is measured per week of observation by studying and minimising all possible noise sources, including seismic vibration and residual-gas scattering. The latter is reduced by placing the interferometer in a pipe in which ultrahigh vacuum is generated. In the case of Virgo, the pressure inside the two perpendicular 3 km-long arms of the interferometer is lower than 10⁻⁹ mbar.
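The quoted displacement can be translated into the dimensionless strain h = ΔL/L that interferometers actually measure by dividing it by the arm length, as in the short sketch below for Virgo's 3 km arms.

# Convert the quoted arm-length change into a dimensionless strain h = dL / L.
delta_L = 1e-19   # metres, the displacement quoted above
arm_length = 3e3  # metres, length of each Virgo arm

h = delta_L / arm_length
print(f"strain sensitivity h ~ {h:.0e}")  # ~3e-23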

While current facilities are being operated and upgraded, the gravitational-wave community is also focusing on a new generation of GWTs that will provide even better sensitivity. This would be achieved by longer interferometer arms, together with a drastic reduction of noise that might require cryogenic cooling of the mirrors. The two leading studies are the Einstein Telescope (ET) in Europe and the Cosmic Explorer (CE) in the US. The total length of the vacuum vessels envisaged for the ET and CE interferometers is 120 km and 160 km, respectively, with a tube diameter of 1 to 1.2 m. The required operational pressures are similar to those needed for modern accelerators (i.e. in the region of 10⁻¹⁰ mbar for hydrogen and even lower for other gas species). The next generation of GWTs would therefore represent the largest ultrahigh-vacuum systems ever built.


Producing these pressures is not difficult, as the present vacuum systems of GWT interferometers achieve a comparable degree of vacuum. The challenge, instead, is cost. If previous-generation solutions were adopted, the vacuum-pipe system would amount to half of the estimated cost of CE and not far from one third of that of ET, which is dominated by underground civil engineering. Reducing the cost of vacuum systems requires the development of different technical approaches with respect to previous-generation facilities. Developing cheaper technologies is also a key subject for future accelerators, and a synergy in terms of manufacturing methods, surface treatments and installation procedures is already visible.

Within an official framework between CERN and the lead institutes of the ET study – Nikhef in the Netherlands and INFN in Italy – CERN’s TE-VSC and EN-MME groups are sharing their expertise in vacuum, materials, manufacturing and surface treatments with the gravitational-wave community. The activity started in September 2022 and is expected to conclude at the end of 2025 with a technical design report and a full test of a vacuum-vessel pilot sector. During the workshop “Beampipes for Gravitational Wave Telescopes 2023”, held at CERN from 27 to 29 March, 85 specialists from the accelerator and gravitational-wave communities, and from companies focusing on steel production, pipe manufacturing and vacuum equipment, gathered to discuss the latest progress. The event followed a similar one hosted by LIGO Livingston in 2019, which gave important directions for research topics.

Plotting a course
In a series of introductory contributions, the basic theoretical elements regarding vacuum requirements and the status of CE and ET studies were presented, highlighting initiatives in vacuum and material technologies undertaken in Europe and the US. The detailed description of current GWT vacuum systems provided a starting point for the presentations of ongoing developments. To conduct an effective cost analysis and reduction, the entire process must be taken into account — including raw material production and treatment, manufacturing, surface treatment, logistics, installation, and commissioning in the tunnel. Additionally, the interfaces with the experimental areas and other services such as civil engineering, electrical distribution and ventilation are essential to assess the impact of technological choices for the vacuum pipes.

The selection criteria for the structural materials of the pipe were discussed, with steel currently being the material of choice. Ferritic steels would contribute to a significant cost reduction compared to austenitic steel, which is currently used in accelerators, because they do not contain nickel. Furthermore, thanks to their body-centred cubic crystallographic structure, ferritic steels have a much lower content of residual hydrogen – the first enemy in the attainment of ultrahigh vacuum – and thus do not require expensive solid-state degassing treatments. The cheapest ferritic steels are “mild steels”, which are common materials in gas pipelines after treatment against corrosion. Ferritic stainless steels, which contain more than 12% by weight of dissolved chromium, are also being studied for GWT applications. While first results are encouraging, the magnetic properties of these materials must be considered to avoid anomalous transmission of electromagnetic signals and of the induced mechanical vibrations.

Four solutions regarding the design and manufacturing of the pipes and their support system were discussed at the March workshop. The baseline is a 3 to 4 mm-thick tube similar to those operational in Virgo and LIGO, with some modifications to cope with the new tunnel environment and stricter sensitivity requirements. Another option is a 1 to 1.5 mm-thick corrugated vessel that does not require reinforcement or expansion bellows. Additionally, designs based on double-wall pipes were discussed, with the inner wall being thin and easy to heat and the external wall performing the structural role. An insulation vacuum would be generated between the two walls, without the cleanliness and pressure requirements imposed on the laser-beam vacuum. The forces acting on the inner wall during pressure transients would be minimised by opening axial movement valves, which are not yet fully designed. Finally, a gas-pipeline solution was also considered, based on a half-inch-thick wall made of mild steel. The main advantage of this solution is its relatively low cost, as it is a standard approach used in the oil and gas industry. However, corrosion protection and ultrahigh-vacuum requirements would demand surface treatments on both sides of the pipe walls; these treatments are currently under consideration. For all types of design, the integration of optical baffles (which reduce the pipe aperture at intervals to block scattered photons) is a matter of intense study, with options for position, material, surface treatment and installation reported. The transfer of vibrations from the tunnel structure to the baffles is another hot topic.

The manufacturing of the pipes directly from metal coils and their surface treatment can be carried out at supplier facilities or directly at the installation site. The former approach would reduce the cost of infrastructure and manpower, while the latter would reduce transport costs and provide an additional degree of freedom to the overall logistics, as the storage area would be minimised. The idea of in-situ production was taken to its limit in a conceptual study of a process that could deliver pipes of any desired length directly in the underground areas: the metal coil arrives in the tunnel and is installed in a dedicated machine that unrolls it and welds the metal sheet into a pipe of arbitrary length.

These topics will undergo further development in the coming months, and the results will be incorporated into a comprehensive technical design report. This report will include a detailed cost optimisation and will be validated in a pilot sector at CERN. With just under two and a half years of the project remaining, its success will demand a substantial effort and resolute motivation. The enthusiasm and collaborative approach demonstrated by all participants at the workshop are therefore highly encouraging.
