HL-LHC counts down to LS3

Oliver Brüning and Markus Zerlauth describe the latest progress and next steps for the validation of key technologies, tests of prototypes and the series production of equipment.

Since the start of physics operations in 2010, the Large Hadron Collider (LHC) has enabled a global user community of more than 10,000 physicists to explore the high-energy frontier. This unique scientific programme – which has seen the discovery of the Higgs boson, countless measurements of high-energy phenomena, and exhaustive searches for new particles – has already transformed the field. To increase the LHC’s discovery potential further, for example by enabling higher precision and the observation of rare processes, the High-Luminosity LHC (HL-LHC) upgrade aims to boost the amount of data collected by the ATLAS and CMS experiments by a factor of 10 and enable CERN’s flagship collider to operate until the early 2040s.

Following the completion of the second long shutdown (LS2) in 2022, during which the LHC injectors upgrade project was successfully implemented, Run 3 commenced at a record centre-of-mass energy of 13.6 TeV. Only two years of operation remain before the start of LS3 in 2026. This is when the main installation phase of the HL-LHC will commence, starting with the excavation of the vertical cores that will link the LHC tunnel to the new HL-LHC galleries and followed by the installation of new accelerator components. Approved in 2016, the HL-LHC project is driving several innovative technologies, including: niobium-tin (Nb3Sn) accelerator magnets, a cold powering system made from MgB2 high-temperature superconducting cables and a flexible cryostat, the integration of compact niobium crab cavities to compensate for the larger beam crossing angle, and new technology for beam collimation and machine protection.

Efforts at CERN and across the HL-LHC collaboration are now focusing on the series production of all project deliverables in view of their installation and validation in the LHC tunnel. A centrepiece of this effort, which involves institutes from around the world and strong collaboration with industry, is the assembly and commissioning of the new insertion-region magnets that will be installed on either side of ATLAS and CMS to enable high-luminosity operations from 2029. In parallel, intense work continues on the corresponding upgrades of the LHC detectors: completely new inner trackers will be installed by ATLAS and CMS during LS3 (CERN Courier January/February 2023 p22 and 33), while LHCb and ALICE are working on proposals for radically new detectors for installation in the 2030s (CERN Courier March/April 2023 p22 and 35).

Civil-engineering complete

The targeted higher performance at the ATLAS and CMS interaction points (IPs) demands increased cooling capacity for the final focusing quadrupole magnets left and right of the experiments to deal with the larger flux of collision debris. Additional space is also needed to accommodate new equipment such as power converters and machine-protection devices, as well as shielding to reduce their exposure to radiation, and to allow easy access for faster interventions and thus improved machine availability. All these requirements have been addressed by the construction of new underground structures at ATLAS and CMS. Both sites feature a new access shaft and cavern that will house a new refrigerator cold box, a roughly 400 m-long gallery for the new power converters and protection equipment, four service tunnels and 12 vertical cores connecting the gallery to the existing LHC tunnel. A new staircase at each side of the experiment also connects the new underground structures to the existing LHC tunnel for personnel.

Buildings and infrastructure

Civil-engineering works started at the end of 2018 to allow the bulk of the interventions requiring heavy machinery to be carried out during LS2, since it was estimated that the vibrations would otherwise have a detrimental impact on the LHC performance. All underground civil-engineering works were completed in 2022 and the construction of the new surface buildings, five at each IP, in spring 2023. The new access lifts encountered a delay of about six months due to some localised concrete spalling inside the shafts, but the installation at both sites was completed in autumn 2023.

The installation of the technical infrastructures is now progressing at full speed in both the underground and surface areas (see “Buildings and infrastructure” image). It is remarkable that, even though the civil-engineering work extended throughout the COVID-19 shutdown period and was exposed to market volatility in the aftermath of Russia’s invasion of Ukraine, it could essentially be completed on schedule and within budget. This represents a huge milestone for the HL-LHC project and for CERN.

The new triplet quadrupole magnets, with their increased radiation tolerance, are a cornerstone of the HL-LHC upgrade. A total of 24 large-aperture Nb3Sn focusing quadrupole magnets will be installed around ATLAS and CMS to focus the beams more tightly, representing the first use of Nb3Sn magnet technology in an accelerator for particle physics. Due to the higher collision rates in the experiments, radiation levels and integrated doses will increase accordingly, requiring particular care in the choice of materials used to construct the magnet coils, as well as the integration of additional tungsten shielding into the beam screens. To provide sufficient space for this shielding, the coil apertures need to be roughly doubled compared to the existing Nb-Ti LHC triplets; this in turn allows the β* parameter (which relates to the beam size at the collision points) to be reduced by a factor of four compared to the nominal LHC design, fully exploiting the improved beam emittances delivered by the upgraded LHC injector chain.
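
A quick way to see why the aperture and β* numbers go together (a textbook scaling argument, not HL-LHC design documentation; ε denotes the geometric emittance and L* the distance from the triplet to the collision point):

    \sigma^{*} = \sqrt{\varepsilon\,\beta^{*}}, \qquad
    \beta_{\mathrm{triplet}} \simeq \frac{L^{*2}}{\beta^{*}} \quad\Longrightarrow\quad
    \sigma_{\mathrm{triplet}} \simeq \sqrt{\varepsilon}\,\frac{L^{*}}{\sqrt{\beta^{*}}}

Squeezing β* by a factor of four thus doubles the beam size inside the triplet, which – together with the tungsten shielding – is what drives the roughly doubled coil aperture.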

Quadrupole magnets

For the HL-LHC, reaching the required integrated magnetic gradient with Nb-Ti technology and twice the magnet aperture would require a much longer triplet. Choosing Nb3Sn allows fields of 12 T to be reached, and therefore a doubling of the triplet aperture while keeping the magnets relatively compact (the total triplet length increases from 23 m to 32 m). Intensive R&D and prototyping of Nb3Sn magnets started 20 years ago under the US-based LHC Accelerator Research Program (LARP), which united LBNL, SLAC, Fermilab and BNL. Following the official launch of the HL-LHC as a design study in 2011, this US effort has since been converted into the Accelerator Upgrade Program (AUP, which involves LBNL, Fermilab and BNL) for the industrialisation and series-production phase of all main components.

The HL-LHC inner-triplet magnets are designed and constructed in a collaboration between AUP and CERN. The 10 Q1 and Q3 cryo-assemblies (eight for installation and two spares), which each contain two 4.2 m-long quadrupole magnets (MQXFA), will be provided as an in-kind contribution from AUP, while the 10 longer Q2 versions (each containing a single 7.2 m-long quadrupole magnet, MQXFB, and one dipole orbit-corrector assembly) will be produced at CERN. The first of these magnets was tested and fully validated in the US in 2019, and the first cryo-assembly consisting of two individual magnets was assembled, tested and validated at Fermilab in 2023. This cryo-assembly arrived at CERN in November 2023 and is now being prepared for validation and testing. The US cable and coil production reached completion in 2023, and magnet and cryo-assembly production is picking up pace for the series.

The first three Q2 prototype magnets showed some limitations. This prompted an extensive three-phase improvement plan after the second prototype test, addressing the different stages of coil production, the coil and stainless-steel shell assembly procedure, and the welding of the final cold mass. All three improvements were implemented in the third prototype (MQXFBP3), which is the first magnet to show no limitations at either the 1.9 K or the 4.5 K operating temperature, and thus the first that is earmarked for installation in the tunnel (see “Quadrupole magnets” image).

Dipole magnets

Beyond the triplets, the HL-LHC insertion regions require several other novel magnets to manipulate the beams. For some magnet types, such as the nonlinear corrector magnets (produced by LASA in Milan as an in-kind contribution from INFN), production has been completed and all magnets have been delivered to CERN. The new separation and recombination dipole magnets – which are located on the far side of the insertion regions to guide the two counter-rotating beams from the separated apertures in the arc onto a common trajectory that allows collisions at the IPs – are produced as in-kind contributions from Japan and Italy. The single-aperture D1 dipole magnets are produced by KEK with Hitachi as the industrial partner, while the twin-aperture D2 dipole magnets are produced in industry by ASG in Genoa, again as an in-kind contribution from INFN. Even though both dipole types are based on established Nb-Ti superconductor technology (the workhorse of the LHC), they push the conductor into uncharted territory. For example, the D1 dipole features a large aperture of 150 mm and a peak dipole field of 5.6 T, resulting in very large forces in the coils during operation. Hitachi has already produced three of the six series magnets, and the prototype D1 dipole was delivered to CERN in 2023 and cryostated in its final configuration. The D2 prototype magnet has been tested and fully validated at CERN in its final cryostat configuration, and the first series D2 magnet has been delivered from ASG to CERN (see “Dipole magnets” image).

Production of the remaining new HL-LHC magnets is also in full swing. The nested canted-cosine-theta magnets – a novel magnet design comprising two solenoids with canted coil layers, needed to correct the orbit next to the D2 dipole – are progressing well in China as an in-kind contribution from IHEP, with Bama as the industrial partner. The nested dipole orbit-corrector magnets, required for orbit correction within the triplet area, are based on Nb-Ti technology (an in-kind contribution from CIEMAT in Spain) and are also advancing well, with the final validation of the long-magnet version demonstrated in 2023 (see “Corrector magnets” image).

Superconducting link

With the new power converters in the HL-LHC underground galleries located approximately 100 m away from, and 8 m above, the magnets in the tunnel, a cost- and energy-efficient way to carry currents of up to 18 kA between them was needed. “Simple” water-cooled copper cables and busbars were ruled out because removing their Ohmic losses would have been unacceptably inefficient, while Nb-Ti links requiring liquid-helium cooling would have been too technically challenging and expensive given the height difference between the new galleries and the LHC tunnel. Instead, it was decided to develop a novel cold powering system featuring a flexible cryostat and magnesium-diboride (MgB2) cables that can carry the required currents at temperatures of up to 50 K.

Corrector magnets

With this unprecedented system, helium boils off from the magnet cryostats in the tunnel and propagates through the flexible cryostat to the new underground galleries. This process cools both the MgB2 cable and the high-temperature superconducting current leads (which connect the normal-conducting power converters to the superconducting magnets) to nominal temperatures between 15 K and 35 K. The gaseous helium is then collected in the new galleries, compressed, liquefied and fed back into the cryogenic system. The new cables and cryostats have been developed with companies in Italy (ASG and Tratos) and the Netherlands (Cryoworld), and are now available as commercial materials for other projects (CERN Courier May/June 2023 p37).

Three demonstrator tests conducted in CERN’s SM18 facility have already fully validated the MgB2 cable and flexible-cryostat concept. The feed boxes that connect the MgB2 cable to the power converters in the galleries and to the magnets in the tunnel have been developed and produced as in-kind contributions by the University of Southampton, with Puma as industrial partner, in the UK and by Uppsala University, with RFR as industrial partner, in Sweden. A complete assembly of the superconducting link with its two feed boxes has been put together and is being tested in SM18 in preparation for its installation in the inner-triplet string in 2024 (see “Superconducting feed” image).

IT string assembly

The inner-triplet (IT) string – which replicates the full magnet, powering and protection assembly left of CMS from the triplet magnets up to the D1 separation dipole magnet – is the next emerging milestone of the HL-LHC project (see “Inner-triplet string” image). The goal of the IT string is to validate the assembly and connection procedures and tools required for its construction. It also serves to assess the collective behaviour of the superconducting magnet chain in conditions as close as possible to those of their later operation in the HL-LHC, and as a training opportunity for the equipment teams for their later work in the LHC tunnel. The IT string includes all the systems required for operation at nominal conditions, such as the vacuum (albeit without the magnet beam screens), cryogenics, powering and protection systems. The installation is planned to be completed in 2024, and the main operational period will take place in 2025.

HL-LHC insertion regions

The entire IT string – measuring about 90 m long – just fits at the back of the SM18 test hall, where the necessary liquid-helium infrastructure is available. The new underground galleries are mimicked by a metallic structure situated above the magnets. The structure houses the power converters and quench-protection system, the electrical disconnector box, and the feed box that connects the superconducting link to normal-conducting powering systems. The superconducting link extends from the metallic structure above the magnet assembly to the D1 end of the IT string where (after a vertical descent mimicking the passage through the underground vertical cores) it is connected to a prototype of the feed box that connects to the magnets.

The installation of the normal-conducting powering and machine-protection systems of the IT string is nearing completion. Together with the already completed infrastructures of the facility, the complete normal-conducting powering system of the string entered its first commissioning phase in December 2023 with the execution of short-circuit tests. The cryogenic distribution line for the IT string has been successfully tested at cold temperatures and will soon undergo a second cooldown to nominal temperature, ahead of the installation of the magnets and cold-powering system this year.

Collimation

Controlling beam losses caused by high-energy particles deviating from their ideal trajectory is essential to ensure the protection and efficient operation of accelerator components, and in particular superconducting elements such as magnets and cavities. The existing LHC collimation system, which already comprises more than 100 individual collimators installed around the ring, needs to be upgraded to address the unprecedented challenges brought about by the brighter HL-LHC beams. Following a first upgrade of the LHC collimation and shielding systems deployed during LS2, the production of new insertion-region collimators and the second batch of low-impedance collimators is now being launched in industry.

String and installation

LS2 and the subsequent year-end technical stop also saw the completion of the novel crystal-collimation scheme (CERN Courier November/December 2022 p35). Located in “IR7” between CMS and LHCb, this scheme comprises four goniometers with bent crystals – one per beam and plane – to channel halo particles onto a downstream absorber (see “Crystal collimators” image). After extensive studies with beam during the past few years, crystal collimation was used operationally in a nominal physics run for the first time during the 2023 heavy-ion run, where it was shown to increase the cleaning efficiency by a factor of up to five compared to the standard collimation scheme. Following this successful deployment and comprehensive machine-development tests, the HL-LHC performance goals have been conclusively confirmed for both proton and ion operations. This has enabled the baseline solution using a standard collimator inserted in IR7 (which would have required replacing a standard 8.3 T LHC dipole with two short 11 T Nb3Sn dipoles to create the necessary space) to be descoped from the HL-LHC project.

Crab cavities

A second cornerstone of the HL-LHC project, after the triplet magnets, is the superconducting radiofrequency “crab” cavities. Positioned next to the D2 dipole and the Q4 matching-section quadrupole magnet in the insertion regions, these are necessary to compensate for the detrimental effect of the crossing angle on luminosity by applying a transverse momentum kick to each bunch entering the interaction regions of ATLAS and CMS. Two different types of cavity will be installed: the radio-frequency dipole (RFD) and the double quarter wave (DQW), deflecting bunches in the horizontal and vertical crossing planes, respectively (see “Crab cavities” image). Series production of the RFD cavities is about to begin at Zanon in Italy under the lead of AUP, while the DQW cavity series production is well underway at RI in Germany under the lead of CERN, following the successful validation of two pre-series bare cavities.
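
The luminosity loss that the crab cavities recover is usually quantified by the geometric reduction factor (a standard textbook expression, quoted here for orientation rather than as an HL-LHC design formula), where θ_c is the full crossing angle, σ_z the bunch length and σ* the transverse beam size at the IP:

    R \;=\; \frac{1}{\sqrt{1+\left(\dfrac{\theta_{c}\,\sigma_{z}}{2\,\sigma^{*}}\right)^{2}}}

The transverse kick from the cavities tilts each bunch so that the two beams still overlap head-on at the collision point, pushing R back towards unity despite the larger crossing angle.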

Crystal collimators

A fully assembled DQW cryomodule has been undergoing highly successful beam tests in the Super Proton Synchrotron (SPS) since 2018, demonstrating the crabbing of proton beams and allowing for the development and validation of the necessary low-level RF and machine-protection systems (CERN Courier March/April 2022 p45). For the RFD, two dressed cavities were delivered at the end of 2021 to the UK collaboration after their successful qualification at CERN. These were assembled into a first complete RFD cryomodule that was returned to CERN in autumn 2023 and is currently undergoing validation tests at 1.9 K, revealing some non-conformities to be resolved before it is ready for installation in the SPS in 2025 for tests with beams. Series production of the necessary ancillaries and higher-order-mode couplers has also started for both cavity types at CERN and AUP after the successful validation of prototypes. Prior to fabrication, the crab-cavity concept underwent a long period of R&D with the support of LARP, JLAB, UK-STFC and KEK.

On schedule

2023 and 2024 are the last two years of major spending and allocation of industrial contracts for the HL-LHC project. With the completion of the civil-engineering contracts and the placement of contracts for the new cryogenic compressors and distribution systems, the project has now committed more than 75% of its budget at completion. An HL-LHC cost-and-schedule review held at CERN in November 2023, conducted by an international panel of accelerator experts from other laboratories, congratulated the project on the overall good progress and agreed with the projection to be ready for installation of the major equipment during LS3 starting in 2026.

Crab cavities

The major milestones for the HL-LHC project over the next two years will be the completion and operation of the IT-string installation in 2024 and 2025, and the completion of the installation of the technical infrastructures in the new underground galleries. All new magnet components should be delivered to CERN by the end of 2026, while the drilling of the vertical cores connecting the new and old underground areas should complete the major construction activities and mark the start of the installation of the new equipment in the LHC tunnel.

The HL-LHC will push the largest scientific instrument ever built to unprecedented levels of performance and extend the flagship collider of the European and US high-energy physics programme by another 15 years. It is the culmination of more than 25 years of R&D, carried out in close cooperation with industry in CERN’s member states, and establishes new accelerator technologies for use in future projects. All hands are now on deck to ensure the brightest future possible for the LHC.

ESO’s Extremely Large Telescope halfway to completion

The construction of the world’s largest optical telescope, the Extremely Large Telescope (ELT), has reached its mid-point, the European Southern Observatory (ESO) announced on 11 July. Originally expected to see first light in the early 2020s, the telescope will now start operations in 2028, owing to delays inherent in building such a large and complex instrument as well as to the COVID-19 pandemic.

The base and frame of the ELT’s dome structure on Cerro Armazones in the Chilean Atacama Desert have now been set. Meanwhile, at European sites, the mirrors for the ELT’s five-mirror optical system are being manufactured. More than 70% of the supports and blanks for the main mirror – which at 39 m across will be the biggest primary mirror ever built – are complete, and mirrors two and three have been cast and are now being polished.

Along with six laser guiding sources that will act as reference stars, mirrors four and five form part of a sophisticated adaptive-optics system to correct for atmospheric disturbances. The ELT will observe the universe in the near-infrared and visible regions to track down Earth-like exoplanets, investigate faint objects in the solar system and study the first stars and galaxies. It will also explore black holes, the dark universe and test fundamental constants (CERN Courier November/December 2019 p25).

ALICE ups its game for sustainable computing

The Large Hadron Collider (LHC) roared back to life on 5 July 2022, when proton–proton collisions at a record centre-of-mass energy of 13.6 TeV resumed for Run 3. To enable the ALICE collaboration to benefit from the increased instantaneous luminosity of this and future LHC runs, the ALICE experiment underwent a major upgrade during Long Shutdown 2 (2019–2022) that will substantially improve track reconstruction in terms of spatial precision and tracking efficiency, in particular for low-momentum particles. The upgrade will also enable an increased interaction rate of up to 50 kHz for lead–lead (PbPb) collisions in continuous readout mode, which will allow ALICE to collect a data sample more than 10 times larger than the combined Run 1 and Run 2 samples.

ALICE is a unique experiment at the LHC devoted to the study of extreme nuclear matter. It comprises a central barrel (the largest data producer) and a forward muon “arm”. The central barrel relies mainly on four subdetectors for particle tracking: the new inner tracking system (ITS), which is a seven-layer, 12.5 gigapixel monolithic silicon tracker (CERN Courier July/August 2021 p29); an upgraded time projection chamber (TPC) with GEM-based readout for continuous operation; a transition radiation detector; and a time-of-flight detector. The muon arm is composed of three tracking devices: a newly installed muon forward tracker (a silicon tracker based on monolithic active pixel sensors), revamped muon chambers and a muon identifier.

Due to the increased data volume in the upgraded ALICE detector, storing all the raw data produced during Run 3 is impossible. One of the major ALICE upgrades in preparation for the latest run was therefore the design and deployment of a completely new computing model: the O2 project, which merges online (synchronous) and offline (asynchronous) data processing into a single software framework. In addition to an upgrade of the experiment’s computing farms for data readout and processing, this necessitates efficient online compression and the use of graphics processing units (GPUs) to speed up processing. 

Pioneering parallelism

As their name implies, GPUs were originally designed to accelerate computer-graphics rendering, especially in 3D gaming. While they continue to be utilised for such workloads, GPUs have become general-purpose vector processors for use in a variety of settings. Their intrinsic ability to perform several tasks simultaneously gives them a much higher compute throughput than traditional CPUs and enables them to be optimised for data processing rather than, say, data caching. GPUs thus reduce the cost and energy consumption of associated computing farms: without them, about eight times as many servers of the same type and other resources would be required to handle the ALICE TPC online processing of PbPb collision data at a 50 kHz interaction rate. 

ALICE detector dataflow

Since 2010, when the high-level trigger online computer farm (HLT) entered operation, the ALICE detector has pioneered the use of GPUs for data compression and processing in high-energy physics. The HLT had direct access to the detector readout hardware and was crucial to compress data obtained from heavy-ion collisions. In addition, the HLT software framework was advanced enough to perform online data reconstruction. The experience gained during its operation in LHC Run 1 and 2 was essential for the design and development of the current O2 software and hardware systems.

For data readout and processing during Run 3, the ALICE detector front-end electronics are connected via radiation-tolerant gigabit-transceiver links to custom field programmable gate arrays (see “Data flow” figure). The latter, hosted in the first-level processor (FLP) farm nodes, perform continuous readout and zero-suppression (the removal of data without physics signal). In the case of the ALICE TPC, zero-suppression reduces the data rate from a prohibitive 3.3 TB/s at the front end to 900 GB/s for 50 kHz minimum-bias PbPb operations. This data stream is then pushed by the FLP readout farm to the event processing nodes (EPN) using data-distribution software running on both farms. 
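
As a rough illustration of what the zero-suppression step does (a deliberately simplified sketch in Python – the real FPGA firmware operates on ADC samples per TPC pad with detector-specific thresholds not described here):

    import numpy as np

    def zero_suppress(samples, threshold=3):
        """Keep only (index, value) pairs above a noise threshold."""
        return [(i, int(v)) for i, v in enumerate(samples) if v > threshold]

    rng = np.random.default_rng(0)
    channel = rng.poisson(0.5, size=1000)   # mostly electronics noise
    channel[[100, 500, 900]] += 40          # a few genuine signal samples

    hits = zero_suppress(channel)
    print(f"kept {len(hits)} of {len(channel)} samples ({len(hits) / len(channel):.1%})")

Only the samples that carry a physics signal survive, which is how the TPC stream shrinks from 3.3 TB/s at the front end to the 900 GB/s shipped to the EPNs.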

Located in three containers on the surface close to the ALICE site, the EPN farm currently comprises 350 servers, each equipped with eight AMD GPUs with 32 GB of RAM each, two 32-core AMD CPUs and 512 GB of memory. The EPN farm is optimised for the fastest possible TPC track reconstruction, which constitutes the bulk of the synchronous processing, and provides most of its computing power in the form of GPU processing. As data flow from the front end into the farms and cannot be buffered, the EPN computing capacity must be sufficient for the highest data rates expected during Run 3.

Due to the continuous readout approach at the ALICE experiment, processing does not occur on a particular “event” triggered by some characteristic pattern in the detector signals. Instead, all data read out during a predefined time slot are stored in a time frame (TF) data structure. The TF length is usually chosen as a multiple of one LHC orbit (corresponding to about 90 microseconds). Since a whole TF must always fit into a GPU’s memory, the collaboration chose GPUs with 32 GB of memory to allow enough flexibility in operating with different TF lengths, and an optimisation effort was made to reuse GPU memory across consecutive processing steps. During the 2022 proton run the system was stress-tested by raising the collision rate well beyond that needed to maximise the integrated luminosity for physics analyses; in this scenario the TF length was set to 128 LHC orbits. These high-rate tests aimed to reproduce occupancies similar to those expected for PbPb collisions, and they demonstrated that the EPN processing could sustain rates nearly twice the nominal design value of 600 GB/s originally foreseen for PbPb collisions: using high-rate proton collisions at 2.6 MHz, the readout reached 1.24 TB/s, which was fully absorbed and processed on the EPNs. However, because fluctuations in centrality and luminosity cause the number of TPC hits (and thus the required memory) to vary somewhat, a certain safety margin is needed.
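
The headline numbers above can be checked with a short back-of-the-envelope calculation (a sketch using only figures quoted in the text; the real memory accounting is more involved):

    # Size of one time frame (TF) versus the 32 GB of memory per EPN GPU
    LHC_ORBIT_S = 90e-6          # one LHC orbit, roughly 90 microseconds
    TF_ORBITS   = 128            # TF length used in the 2022 proton run
    GPU_MEM_GB  = 32

    tf_duration = TF_ORBITS * LHC_ORBIT_S                 # ~11.5 ms
    for label, rate_gb_s in [("nominal PbPb", 900), ("2022 pp stress test", 1240)]:
        tf_size = rate_gb_s * tf_duration                 # GB arriving per TF
        print(f"{label}: {tf_size:.1f} GB per TF, "
              f"{tf_size / GPU_MEM_GB:.0%} of one GPU's memory")

The gap between these figures and the 32 GB limit is the safety margin that absorbs the fluctuations in centrality and luminosity mentioned above.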

Flexible compression 

At the incoming raw-data rates during Run 3 it is impossible to store the data, even temporarily, so the outgoing data are compressed in real time to a manageable size on the EPN farm. During the network transfer from the FLP to the EPN farm, event building is carried out by the data-distribution suite, which collects all the partial TFs sent by the detectors and schedules the building of the complete TF. At the end of the transfer, each EPN node receives and then processes a full TF containing data from all ALICE detectors.
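
Conceptually, the event building amounts to collecting the partial TFs from every detector under a common TF identifier and releasing the full TF once the set is complete. The toy sketch below (invented names, not the actual data-distribution API) captures the bookkeeping:

    from collections import defaultdict

    DETECTORS = {"TPC", "ITS", "TOF", "TRD", "MFT"}       # illustrative subset

    class ToyTFBuilder:
        def __init__(self):
            self.partial = defaultdict(dict)              # tf_id -> {detector: payload}

        def add(self, tf_id, detector, payload):
            """Store one partial TF; return the full TF once all detectors have reported."""
            self.partial[tf_id][detector] = payload
            if set(self.partial[tf_id]) == DETECTORS:
                return self.partial.pop(tf_id)            # complete TF, ready to process
            return None

    builder = ToyTFBuilder()
    for det in sorted(DETECTORS):
        full_tf = builder.add(tf_id=42, detector=det, payload=b"...")
    print("TF complete" if full_tf else "TF still incomplete")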

GPUs manufactured by AMD

The detector generating by far the largest data volume is the TPC, contributing more than 90% to the total data size. The EPN farm compresses this to a manageable rate of around 100 GB/s (depending on the interaction rate), which is then stored to the disk buffer. The TPC compression is particularly elaborate, employing several steps including a track-model compression to reduce the cluster entropy before the entropy encoding. Evaluating the TPC space-charge distortion during data taking is also the most computing-intensive aspect of online calibrations, requiring global track reconstruction for several detectors. At the increased Run 3 interaction rate, processing on the order of one percent of the events is sufficient for the calibration.
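
The gain from the track-model step can be seen in a toy example (made-up numbers, not the actual O2 compression code): clusters stored as small residuals with respect to a fitted track need far fewer bits from the subsequent entropy encoder than absolute coordinates do.

    import numpy as np

    def shannon_entropy(values):
        """Empirical Shannon entropy in bits per symbol."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(1)
    rows = np.arange(152)                         # one cluster per pad row (illustrative)
    track_model = (40.0 + 0.8 * rows).round()     # pad position predicted by the track fit
    clusters = track_model + rng.integers(-2, 3, size=rows.size)

    print(f"absolute coordinates: {shannon_entropy(clusters):.2f} bits/cluster")
    print(f"track-model residuals: {shannon_entropy(clusters - track_model):.2f} bits/cluster")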

During data taking, the EPN system operates synchronously and the TPC reconstruction fully loads the GPUs. With the EPN farm providing 90% of its compute performance via GPUs, it is also desirable to maximise the GPU utilisation in the asynchronous phase. Since the relative contribution of the TPC processing to the overall workload is much smaller in the asynchronous phase, GPU idle times would be high and processing would be CPU-limited if the TPC part only ran on the GPUs. To use the GPUs maximally, the central-barrel asynchronous reconstruction software is being implemented with native GPU support. Currently, around 60% of the workload can run on a GPU, yielding a speedup factor of about 2.25 compared to CPU-only processing. With the full adaptation of the central-barrel tracking software to the GPU, it is estimated that 80% of the reconstruction workload could be processed on GPUs.
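
The quoted numbers follow the usual Amdahl-style accounting (a sketch; the per-kernel GPU gain below is inferred by inverting the formula against the 2.25× figure, not an ALICE measurement): with a fraction p of the workload offloaded and a per-kernel gain g, the overall speed-up is 1/((1 − p) + p/g).

    def overall_speedup(p, g):
        """Amdahl's law: fraction p offloaded to the GPU, per-kernel gain g."""
        return 1.0 / ((1.0 - p) + p / g)

    g = 13.5   # assumed per-kernel gain, chosen so that p = 0.6 gives ~2.25x
    for p in (0.6, 0.8, 0.9):
        print(f"{p:.0%} on GPU -> overall speed-up {overall_speedup(p, g):.2f}x")

This is why pushing the GPU-accelerated fraction from 60% towards 80% and beyond pays off so strongly in the asynchronous phase.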

In contrast to synchronous processing, asynchronous processing includes the reconstruction of data from all detectors, and all events instead of only a subset; physics analysis-ready objects produced from asynchronous processing are then made available on the computing Grid. As a result, the processing workload for all detectors, except the TPC, is significantly higher in the asynchronous phase. For the TPC, clustering and data compression are not necessary during asynchronous processing, while the tracking runs on a smaller input data set because some of the detector hits were removed during data compression. Consequently, TPC processing is faster in the asynchronous phase than in the synchronous phase. Overall, the TPC contributes significantly to asynchronous processing, but is not dominant. The asynchronous reconstruction will be divided between the EPN farm and the Grid sites. While the final distribution scheme is still to be decided, the plan is to split reconstruction between the online computing farm, the Tier 0 and the Tier 1 sites. During the LHC shutdown periods, the EPN farm nodes will almost entirely be used for asynchronous processing.

Great shape

In 2021, during the first pilot-beam collisions at injection energy, synchronous processing was successfully commissioned. In 2022 it was used during nominal LHC operations, with ALICE performing online processing of pp collisions at a 2.6 MHz inelastic interaction rate. At lower interaction rates (both for pp and PbPb collisions), ALICE ran additional processing tasks on free EPN resources – for instance online determination of the charged-particle energy loss in the TPC – which would not be possible at the full 50 kHz PbPb collision rate. The particle-identification performance is shown in the “Particle ID” figure, in which no additional selections on the tracks or detector calibrations were applied.

ALICE TPC performance

Another performance metric used to assess the quality of the online TPC reconstruction is the charged-particle tracking efficiency. The efficiency for reconstructing tracks from PbPb collisions at a centre-of-mass energy of 5.52 TeV per nucleon pair ranges from 94% to 100% for pT > 0.1 GeV/c. The fake-track rate is negligible; however, the clone rate increases significantly for low-pT primary tracks, owing to incomplete merging of tracks from very low-momentum particles that curl in the ALICE solenoidal field and leave and re-enter the TPC multiple times.

The effective use of GPU resources makes data processing far more efficient. GPUs also improve data quality while reducing compute cost and energy consumption – advantages that have not been overlooked by the other LHC experiments. To manage their data rates in real time, LHCb developed the Allen project, a first-level trigger processed entirely on GPUs that reduces the data rate prior to the alignment, calibration and final reconstruction steps by a factor of 30–60. With this approach, 4 TB/s are processed in real time, with around 10 GB/s of the most interesting collision data selected for physics analysis.

At the beginning of Run 3, the CMS collaboration deployed a new HLT farm comprising 400 CPUs and 400 GPUs. With respect to a traditional CPU-only solution, this configuration reduced the processing time of the high-level trigger by 40%, improved the data-processing throughput by 80% and reduced the power consumption of the farm by 30%. ATLAS uses GPUs extensively for physics analyses, especially for machine-learning applications; focus has also been placed on data processing, anticipating that in the coming years much of it can be offloaded to GPUs. For all four LHC experiments, the future use of GPUs is crucial to reducing the cost, size and power consumption of computing at the higher luminosities of the LHC.

Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing. Today, 99% of synchronous processing is performed on GPUs, dominated by the largest contributor, the TPC.

More code

By contrast, only about 60% of asynchronous processing – that is, offline data processing on the EPN farm – currently runs on GPUs (for 650 kHz pp collisions). Even though the TPC remains an important contributor to the asynchronous compute load, several other subdetectors matter as well, and an ongoing effort aims to port considerably more code to the GPUs. This will raise the fraction of GPU-accelerated code beyond 80% for full barrel tracking, and eventually ALICE aims to run 90% of the whole asynchronous processing on GPUs.

PbPb collisions in the ALICE TPC

In November 2022 the upgraded ALICE detectors and central systems saw PbPb collisions for the first time during a two-day pilot run at a collision rate of about 50 Hz. High-rate PbPb processing was validated by injecting Monte Carlo data into the readout farm and running the whole data-processing chain on 230 EPN nodes. Because the TPC data volumes turned out to be somewhat larger than initially expected, this stress test is now being repeated with the final, continuously optimised TPC firmware and 350 EPN nodes, so as to provide the required 20% compute margin with respect to the 50 kHz PbPb operations foreseen in October 2023. Together with the upgraded detector components, the ALICE experiment has never been in better shape to probe extreme nuclear matter during the current and future LHC runs.

Report explores quantum computing in particle physics

A quantum computer built by IBM

Researchers from CERN, DESY, IBM Quantum and more than 30 other organisations have published a white paper identifying activities in particle physics that could benefit from quantum-computing technologies. Posted on arXiv on 6 July, the 40-page paper is the outcome of a working group set up at the QT4HEP conference held at CERN last November, which identified topics in theoretical and experimental high-energy physics where quantum algorithms may produce significant insights and results that are very hard, or even impossible, to obtain with classical computers.

Combining quantum and information theory, quantum computing is natively aligned with the underlying physics of the Standard Model. Quantum bits, or qubits, are the computational representation of a state that can be entangled and brought into superposition. Unlike a classical bit, which is always either 0 or 1, a qubit can exist in a superposition of both states; measuring it yields 0 or 1 with probabilities determined by the amplitudes of that superposition. Quantum-computing algorithms can exploit this behaviour to achieve computational advantages in terms of speed and accuracy, especially for processes that are yet to be fully understood.
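
A few lines of NumPy make the distinction concrete (a minimal sketch, deliberately avoiding any quantum-computing framework): the state is a pair of complex amplitudes, and each measurement returns 0 or 1 with probabilities given by the Born rule.

    import numpy as np

    state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition of |0> and |1>
    probs = np.abs(state) ** 2                              # Born rule: |amplitude|^2

    rng = np.random.default_rng(7)
    shots = rng.choice([0, 1], size=1000, p=probs)          # each shot yields 0 or 1
    print("P(0), P(1) =", probs.round(3), "| measured frequencies:",
          np.bincount(shots) / len(shots))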

“Quantum computing is very promising, but not every problem in particle physics is suited to this model of computing,” says Alberto Di Meglio, head of IT Innovation at CERN and one of the white paper’s lead authors alongside Karl Jansen of DESY and Ivano Tavernelli of IBM Quantum. “It’s important to ensure that we are ready and that we can accurately identify the areas where these technologies have the potential to be most useful.” 

Neutrino oscillations in extreme environments, such as supernovae, are one promising example given. In the context of quantum computing, neutrino oscillations can be considered strongly coupled many-body systems that are driven by the weak interaction. Even a two-flavour model of oscillating neutrinos is almost impossible to simulate exactly for classical computers, making this problem well suited for quantum computing. The report also identifies lattice-gauge theory and quantum field theory in general as candidates that could enjoy a quantum advantage. The considered applications include quantum dynamics, hybrid quantum/classical algorithms for static problems in lattice gauge theory, optimisation and classification problems. 

In experimental physics, potential applications range from simulations to data analysis and include jet physics, track reconstruction and algorithms used to simulate the detector performance. One key advantage here is the speed-up in processing time compared to classical algorithms. Quantum-computing algorithms might also be better at finding correlations in data, while Monte Carlo simulations could benefit from random numbers generated by a quantum computer. 

“With quantum computing we address problems in those areas that are very hard – or even impossible – to tackle with classical methods,” says Karl Jansen (DESY). “We can now explore physical systems to which we still do not have access.” 

The working group will meet again at CERN for a special workshop on 16 and 17 November, immediately before the Quantum Techniques in Machine Learning conference from 19 to 24 November.

Joined-up thinking in vacuum science

The first detection of gravitational waves in 2015 stands as a confirmation of Einstein’s prediction in his general theory of relativity and represents one of the most significant milestones in contemporary physics. Not only that, direct observation of gravitational ripples in the fabric of space-time opened up a new window on the universe that enables astronomers to study cataclysmic events such as black-hole collisions, supernovae and the merging of neutron stars. The hope is that the emerging cosmological data sets will, over time, yield unique insights to address fundamental problems in physics and astrophysics – the distribution of matter in the early universe, for example, and the search for dark matter and dark energy.

By contrast, an altogether more down-to-earth agenda – Beampipes for Gravitational Wave Telescopes 2023 – provided the backdrop for a three-day workshop held at CERN at the end of March. Focused on enabling technologies for current and future gravitational-wave observatories – specifically, their ultrahigh-vacuum (UHV) beampipe requirements – the workshop attracted a cross-disciplinary audience of 85 specialists drawn from the particle-accelerator and gravitational-wave communities alongside industry experts spanning steel production, pipe manufacturing and vacuum technologies (CERN Courier July/August 2023 p18). 

If location is everything, Geneva ticks all the boxes in this regard. With more than 125 km of beampipes and liquid-helium transfer lines, CERN is home to one of the world’s largest vacuum systems – and certainly the longest and most sophisticated in terms of particle accelerators. All of which ensured a series of workshop outcomes shaped by openness, encouragement and collaboration, with CERN’s technology and engineering departments proactively sharing their expertise in vacuum science, materials processing, advanced manufacturing and surface treatment with counterparts in the gravitational-wave community. 

Measurement science

To put all that knowledge-share into context, however, it’s necessary to revisit the basics of gravitational-wave metrology. The principal way to detect gravitational waves is to use a laser interferometer comprising two perpendicular arms, each several kilometres long and arranged in an L shape. At the intersection of the L, the laser beams in the two branches interact, whereupon the resulting interference signal is captured by photodetectors. When a gravitational wave passes through Earth, it induces differential length changes in the interferometer arms – such that the laser beams traversing the two arms experience dissimilar path lengths, resulting in a phase shift and corresponding alterations in their interference pattern. 
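
In the simplest single-pass picture (a textbook relation quoted for orientation, ignoring the Fabry–Pérot arm cavities and recycling mirrors used in the real instruments), a gravitational wave of strain h changes the arm lengths differentially by ΔL and shifts the relative phase of the recombined beams of wavelength λ by

    \Delta L \simeq h\,L, \qquad \Delta\phi \;=\; \frac{4\pi\,\Delta L}{\lambda} \;\simeq\; \frac{4\pi\,h\,L}{\lambda}

which is the quantity the photodetectors ultimately record; the arm cavities then multiply this phase shift by the number of effective round trips of the light.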

Better by design: the Einstein Telescope beampipes

Beampipe studies

The baseline for the Einstein Telescope’s beampipe design studies is the Virgo gravitational-wave experiment. The latter’s beampipe – which is made of austenitic stainless steel (AISI 304L) – consists of a 4 mm thick wall reinforced with stiffener rings and equipped with an expansion bellows (to absorb shock and vibration).

While steel remains the material of choice for the Einstein Telescope beampipe, other grades beyond AISI 304L are under consideration. Ferritic steels, for example, can contribute to a significant cost reduction per unit mass compared to austenitic stainless steel, which contains nickel. Ferrite also has a body-centred-cubic crystallographic structure that results in lower residual hydrogen levels versus face-centred-cubic austenite – a feature that eliminates the need for expensive solid-state degassing treatments when pumping down to UHV. 

Options currently on the table include the cheapest ferritic steels, known as “mild steels”, which are used in gas pipelines after undergoing corrosion treatment, as well as ferritic stainless steels containing more than 12% chromium by weight. While initial results with the latter show real promise, plastic deformation of welded joints remains an open topic, while the magnetic properties of these materials must also be considered to prevent anomalous transmission of electromagnetic signals and induced mechanical vibrations.

Along a related coordinate, CERN is developing an alternative solution with respect to the “baseline design” that involves corrugated walls with a thickness of 1.3 mm, eliminating the need for bellows and reinforcements. Double-wall pipe designs are also in the mix – either with an insulation vacuum or thermal insulators between the two walls. 

Beyond the beampipe material, studies are exploring the integration of optical baffles, which intermittently reduce the pipe aperture to block scattered photons. Various aspects such as positioning, material, surface treatment and installation are under review, while the transfer of vibrations from the tunnel structure to the baffle represents another line of enquiry. 

With this in mind, the design of the beampipe support system aims to minimise the transmission of vibrations to the baffles and to reduce the frequency of the first vibrational eigenmode to within a range where the Einstein Telescope is expected to be less sensitive. Defining the vibration transfer function from the tunnel’s near-environment to the beampipe is another key objective, as are the vibration levels induced by airflow in the tunnel (around the beampipe) and by stray electromagnetic fields from the beampipe instrumentation.

Another thorny challenge is integration of the beampipes into the Einstein Telescope tunnel. Since the beampipes will be made up of approximately 15 m-long units, welding in the tunnel will be mandatory. CERN’s experience in welding cryogenic transfer lines and magnet junctions in the LHC tunnel will be useful in this regard, with automatic welding and cutting machines being one possible option to streamline deployment. 

Also under scrutiny is the logistics chain from raw material to final installation. Several options are being evaluated, including manufacturing and treating the beampipes on-site to reduce storage needs and align production with the pace of installation. While this solution would reduce the shipping costs of road and maritime transport, it would require specialised production personnel and dedicated infrastructure at the Einstein Telescope site.

Finally, the manufacturing and treatment processes of the beampipes will have a significant impact on cost and vacuum performance – most notably with respect to dust control, an essential consideration to prevent excessive light scattering due to falling particles and changes in baffle reflectivity. Dust issues are common in particle accelerators and the lessons learned at CERN and other facilities may well be transferable to the Einstein Telescope initiative. 

These are no ordinary interferometers, though. The instruments operate at the outer limits of measurement science and are capable of tracking changes in length down to a few tens of zeptometres (1 zm = 10⁻²¹ m), a length scale roughly 10,000 times smaller than the diameter of a proton. This achievement is the result of extraordinary progress in optical technologies over recent decades – advances in laser stability and mirror design, for example – as well as the ongoing quest to minimise sources of noise arising from seismic vibrations and quantum effects. 

With the latter in mind, the interferometer laser beams must also propagate through vacuum chambers to avoid potential scattering of the light by gas molecules. The residual gas present within these chambers introduces spatial and temporal fluctuations in the refractive index of the medium through which the laser beam propagates – primarily caused by statistical variations in gas density. 

As such, the coherence of the laser beam can be compromised as it traverses regions characterised by a non-uniform refractive index, resulting in phase distortions. To mitigate the detrimental effects of coherence degradation, it is therefore essential to maintain hydrogen levels at pressures lower than 10⁻⁹ mbar, while even stricter UHV requirements are in place for heavier molecules (depending on their polarisability and thermal speed).
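
To put that number in perspective, a back-of-the-envelope conversion via the ideal-gas law (an illustration, not part of any formal specification) gives the corresponding residual hydrogen density:

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    P = 1e-9 * 100.0        # 1e-9 mbar expressed in Pa
    T = 293.0               # room temperature, K

    n = P / (K_B * T)       # molecules per cubic metre
    print(f"{n:.2e} molecules per m^3")   # roughly 2.5e13

That is some twelve orders of magnitude more dilute than air at atmospheric pressure, yet the statistical density fluctuations of this residual gas along kilometre-scale arms must still be kept below the interferometer noise budget.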

Now and next

Right now, there are four gravitational-wave telescopes in operation: LIGO (across two sites in the US), Virgo in Italy, KAGRA in Japan, and GEO600 in Germany (while India has recently approved the construction of a new gravitational-wave observatory in the western state of Maharashtra). Coordination is a defining feature of this collective endeavour, with the exchange of data among the respective experiments crucial for eliminating local interference and accurately pinpointing the detection of cosmic events.

Meanwhile, the research community is already planning for the next generation of gravitational-wave telescopes. The primary objective: to expand the portion of the universe that can be comprehensively mapped and, ultimately, to detect the primordial gravitational waves generated by the Big Bang. In terms of implementation, this will demand experiments with longer interferometer arms accompanied by significant reductions in noise levels (necessitating, for example, the implementation of cryogenic cooling techniques for the mirrors). 

The beampipe for the ALICE experiment

Two leading proposals are on the table: the Einstein Telescope in Europe and the Cosmic Explorer in the US. The latter proposes a 40 km long interferometer arm with a 1.2 m diameter beampipe, configured in the traditional L shape and across two different sites (as per LIGO). Conversely, the former proposes six 60° Ls in an underground tunnel laid out in an equilateral triangle configuration (10 km long sides, 1 m beampipe diameter and with a high- and low-frequency detector at each vertex). 

For comparison, the current LIGO and Virgo installations feature arm lengths of 4 km and 3 km, respectively. As a result, the anticipated length of the vacuum vessel for the Einstein Telescope is projected to be 120 km, while for the Cosmic Explorer it is expected to be 160 km. In short: both programmes will require the most extensive and ambitious UHV systems ever constructed. 

Extreme vacuum 

At a granular level, the vacuum requirements for the Einstein Telescope and Cosmic Explorer assume that the noise induced by residual gas is significantly lower than the allowable noise budget of the gravitational interferometers themselves. This comparison is typically made in terms of amplitude spectral density. A similar approach is employed in particle accelerators, where an adequately low residual gas density is imperative to minimise any impacts on beam lifetimes (which are predominantly constrained by other unavoidable factors such as beam-beam interactions and collimation). 

The specification for the Einstein Telescope states that the contribution of residual gas density to the overall noise budget must not exceed 10%, which necessitates that the hydrogen partial pressure be maintained in the low 10⁻¹⁰ mbar range. Achieving such pressures is commonplace in leading-edge particle-accelerator facilities and, as it turns out, not far beyond the limits of current gravitational-wave experiments. The problem, though, comes when mapping current vacuum technologies to next-generation experiments like the Einstein Telescope. 

In such a scenario, the vacuum system would represent one of the biggest capital equipment costs – on a par, in fact, with the civil engineering works (the main cost-sink). As a result, one of the principal tasks facing the project teams is the co-development – in collaboration with industry – of scalable vacuum solutions that will enable the cost-effective construction of these advanced experiments without compromising on UHV performance and reliability. 

Follow the money

It’s worth noting that the upward trajectory of capital/operational costs versus length of the experimental beampipe is a challenge that’s common to both next-generation particle accelerators and gravitational-wave telescopes – and one that makes cost reduction mandatory when it comes to the core vacuum technologies that underpin these large-scale facilities. In the case of the proposed Future Circular Collider at CERN, for instance, a vacuum vessel exceeding 90 km in length would be necessary. 

Of course, while operational and maintenance costs must be prioritised in the initial design phase, the emphasis on cost reduction touches all aspects of project planning and, thereafter, requires meticulous optimisation across all stages of production – encompassing materials selection, manufacturing processes, material treatments, transport, logistics, equipment installation and commissioning. Systems integration is also paramount, especially at the interfaces between the vacuum vessel’s technical systems and adjacent infrastructure (for example, surface buildings, underground tunnels and caverns). Key to success in every case is a well-structured project that brings together experts with diverse competencies as part of an ongoing “collective conversation” with their counterparts in the physics community and industrial supply chain.

Welding services

Within this framework, CERN’s specialist expertise in managing large-scale infrastructure projects such as the HL-LHC can help to secure the success of future gravitational-wave initiatives. Notwithstanding CERN’s capabilities in vacuum system design and optimisation, other areas of shared interest between the respective communities include civil engineering, underground safety and data management, to name a few. 

Furthermore, such considerations align well with the latest update of the European strategy for particle physics – which explicitly prioritises the synergies between particle and astroparticle physics – and are reflected operationally through a collaboration agreement (signed in 2020) between CERN and the lead partners on the Einstein Telescope feasibility study – Nikhef in the Netherlands and INFN in Italy. 

In this way, CERN is engaged directly as a contributing partner on the beampipe studies for the Einstein Telescope (see “Better by design: the Einstein Telescope beampipes”). The three-year project, which kicked off in September 2022, will deliver the main technical design report for the telescope’s beampipes. CERN’s contribution is structured in eight work packages, from design and materials choice to logistics and installation, including surface treatments and vacuum systems. 

The beampipe pilot sector will also be installed at CERN, in a building previously used for testing cryogenic helium transfer lines for the LHC. Several measurements are planned for 2025, including tests relating to installation, alignment, in-situ welding, leak detection and achievable vacuum levels. Other lines of enquiry will assess the efficiency of the bakeout process, which involves the injection of electrical current directly into the beampipe walls (heating them in the 100–150 °C range) to minimise subsequent outgassing levels under vacuum.

Given that installation of the beampipe pilot sector is time-limited, while details around the manufacturing and treatment of the vacuum chambers are still to be clarified, the engagement of industry partners in this early design stage is a given – an approach, moreover, that seeks to replicate the collaborative working models pursued as standard within the particle-accelerator community. While there’s a lot of ground to cover in the next two years, the optimism and can-do mindset of all participants at Beampipes for Gravitational Wave Telescopes 2023 bodes well.

Event displays in motion

The first event displays in particle physics were direct images of traces left by particles when they interacted with gases or liquids. The oldest event display of an elementary particle, published in Charles Wilson’s Nobel lecture from 1927 and taken between 1912 and 1913, showed a trajectory of an electron. It was a trail made by small droplets caused by the interaction between an electron coming from cosmic rays and gas molecules in a cloud chamber, the trajectory being bent due to the electrostatic field (see “First light” figure). Bubble chambers, which work in a similar way to cloud chambers but are filled with liquid rather than gas, were key in proving the existence of neutral currents 50 years ago, along with many other important results. In both cases a particle crossing the detector triggered a camera that took photographs of the trajectories. 

Following the discovery of the Higgs boson in particular, outreach has become another major pillar of event displays

Georges Charpak’s invention of the multi-wire proportional chamber in 1968, which made it possible to distinguish single tracks electronically, paved the way for three-dimensional (3D) event displays. With 40 drift chambers, and computers able to process the large amounts of data produced by the UA1 detector at the SppS, it was possible to display the tracks of decaying W and Z bosons along the beam axis, aiding their 1983 discovery (see “Inside events” figure, top).  

Design guidelines 

With the advent of LEP and the availability of more powerful computers and reconstruction software, physicists knew that the amount of data would increase to the point where displaying all of it would make pictures incomprehensible. In 1995 members of the ALEPH collaboration released guidelines – implemented in a programme called Dali, which succeeded Megatek – to make event displays as easy to understand as possible, and the same principles apply today. To better match human perception, two different layouts were proposed: the wire-frame technique and the fish-eye transformation. The former shows detector elements via a rendering of their shape, resulting in a 3D impression (see “Inside events” figure, bottom). However, the wire-frame pictures needed to be simplified when too many trajectories and detector layers were shown. This gave rise to the fish-eye view, a projection in x versus y that emphasised the role of the tracking system. The remaining issue of superimposed detector layers was mitigated by showing a cross section of the detector in the same event display (see “Inside events” figure, middle). Together with a colour palette that helped distinguish the different objects, such as jets, from one another, these design principles prevailed into the LHC era.
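
For readers curious about what such a transformation does in practice, the short sketch below applies a simple radial compression of the kind used for fish-eye views – r → r/(1 + a·r) – to two hit positions. The functional form and the compression parameter are illustrative choices, not the exact formula used in Dali.

```python
# Illustrative fish-eye (radial compression) transform of the kind used in
# event displays: inner tracking layers are magnified relative to the outer
# calorimeter layers. Form and parameter are assumptions, not Dali's code.
import math

def fisheye(x, y, a=0.5):
    """Compress the radius r -> r / (1 + a*r), keeping the azimuthal angle."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    r_new = r / (1.0 + a * r)
    return r_new * math.cos(phi), r_new * math.sin(phi)

# A hit at r = 0.1 m (tracker) and one at r = 4 m (calorimeter):
for x, y in [(0.1, 0.0), (4.0, 0.0)]:
    print((x, y), "->", fisheye(x, y))
```

The inner hit keeps almost its true radius while the outer one is pulled inwards, so the tracking volume occupies a much larger fraction of the picture.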

First ever event display

The LHC took not only data acquisition, software and analysis algorithms to a new level, but also event displays. As at LEP, the displays served primarily as a debugging tool, allowing the experiments to visualise events and see how the reconstruction software and detector are performing. A static image of the event is created and sent to the control room in real time, where it is examined by experts for anomalies, for example due to incorrect cabling. “Visualising the data is really powerful and shows you how beautiful the experiment can be, but also the brutal truth because it can tell you something that does not work as expected,” says ALICE’s David Dobrigkeit Chinellato. “This is especially important after long shutdowns or the annual year-end technical stops.”

Largely based on the software used to create event displays at LEP, each of the four main LHC experiments developed its own tools, tailored to its specific analysis software (see “LHC returns” figure). The detector geometry is loaded into the software, followed by the event data; if the detector layout doesn’t change, the geometry is not recreated. As at LEP, both fish-eye and wire-frame images are used. Thanks to better rendering software and hardware developments such as more powerful CPUs and GPUs, wire-frame images are becoming ever more realistic (see “LHC returns” figure). Computing developments, and the additional pile-up that comes with higher collision rates, have motivated more advanced event displays. Driven by the enthusiasm of individual physicists, and in time for the start of the LHC Run 3 ion run in October 2022, ALICE experimentalists have begun to use software that renders each event to give it a more realistic and crisper view (see “Picture perfect” image). In particular, for lead–lead collisions at 5.36 TeV per nucleon pair measured with ALICE, the fully reconstructed tracks are plotted to achieve the most efficient visualisation.

Inside events

ATLAS also uses both fish-eye and wire-frame views. Its current event-display framework, Virtual Point 1 (VP1), creates interactive 3D event displays and integrates the detector geometry to draw a selected set of particle passages through the detector. As with the other experiments, different parts of the detector can be added or removed, resulting in a sliced view. Similarly, CMS visualises its events using in-house software known as Fireworks, while LHCb has moved from a traditional view using Panoramix software to a 3D one using software based on ROOT’s TEve.

In addition, ATLAS, CMS and ALICE have developed virtual-reality views. VP1, for instance, allows data to be exported in a format that is used for videos and 3D images, enabling both physicists and the public to immerse themselves fully in the detector. CMS physicists created a first virtual-reality version during a hackathon at CERN in 2016 and, with small modifications, integrated this feature into the application they use for outreach. ALICE’s augmented-reality application “More than ALICE”, which is intended for visitors, overlays descriptions of the detectors and even event displays, and works on mobile devices.

Phoenix rising

To streamline the work on event displays at CERN, developers in the LHC experiments joined forces and published a visualisation whitepaper in 2017 to identify challenges and possible solutions. As a result it was decided to create an experiment-agnostic event display, later named Phoenix. “When we realised the overlap of what we are doing across many different experiments, we decided to develop a flexible browser-based framework, where we can share effort and leverage our individual expertise, and where users don’t need to install any special software,” says main developer Edward Moyse of ATLAS. While experiment-specific frameworks are closely tied to the experiments’ data formats and visualise all incoming data, experiment-agnostic frameworks deal only with a simplified version of the detectors and a subset of the event data. This makes them lightweight and fast, but requires an extra processing step, as the experimental data need to be put into a generic format and thus lose some detail. Furthermore, not every experiment has the symmetric layout of ATLAS and CMS – LHCb, for instance, does not.
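
To give a feel for what “a simplified version of the detectors and a subset of the event data” can look like, here is a hypothetical, stripped-down event record in the spirit of such experiment-agnostic formats. The keys and structure are invented for illustration and are not the actual Phoenix schema.

```python
# A hypothetical, simplified event record of the kind an experiment-agnostic
# display might consume: only high-level objects, no raw detector data.
# Keys and structure are illustrative, not the actual Phoenix JSON schema.
import json

event = {
    "run_number": 123456,
    "event_number": 789,
    "tracks": [
        # each track as a charge plus a list of (x, y, z) points in millimetres
        {"charge": -1, "points": [[0, 0, 0], [120, 35, 80], [480, 140, 320]]},
    ],
    "jets": [
        {"eta": 0.8, "phi": 1.2, "energy_gev": 95.0},
    ],
    "calo_clusters": [
        {"eta": 0.7, "phi": 1.1, "energy_gev": 60.0},
    ],
}

print(json.dumps(event, indent=2))
```

Converting an experiment’s full event data model into a flat record like this is the “extra processing step” mentioned above: the display becomes fast and portable, at the price of dropping detail that only the experiment-specific tools retain.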

Event displays of the first LHC Run 3 collisions

Phoenix initially supported the geometry and event-display formats of LHCb and ATLAS; those of CMS were added soon after, and FCC has now joined. The platform had its first test in 2018 with the TrackML computing challenge, using a fictitious High-Luminosity LHC (HL-LHC) detector created with Phoenix. The main reason to launch this challenge was to find new machine-learning algorithms that can deal with the unprecedented increase in data rates and pile-up expected in the detectors during the HL-LHC runs, and at proposed future colliders.

Painting outreach

Following the discovery of the Higgs boson in particular, outreach has become another major pillar of event displays. Visually pleasing images and videos of particle collisions, which help in the communication of results, are tailor-made for today’s era of social media and high-bandwidth internet connections. “We created a special event display for the LHCb master class,” mentions LHCb’s Ben Couturier. “We show the students what an event looks like from the detector to the particle tracks.” CMS’s iSpy application is web-based and primarily used for outreach and CMS masterclasses, and has also been extended with a virtual-reality application. “When I started to work on event displays around 2007, the graphics were already good but ran in dedicated applications,” says CMS’s Tom McCauley. “For me, the big change is that you can now use all these things on the web. You can access them easily on your mobile phone or your laptop without needing to be an expert on the specific software.”

Event displays from LHCb and the simulated HL-LHC detector

Being available via a browser makes Phoenix a versatile tool for outreach as well as physics. In places where the bandwidth needed to create event displays on the fly is limited, pre-created events can be used to highlight the main physics objects and to display the detector as clearly as possible. Another way to experience a collision, and to immerse oneself fully in an event, is to wear virtual-reality goggles.

An even older and more experiment-agnostic framework than Phoenix, offering virtual-reality experiences, exists at CERN and is aptly called TEV (Total Event Display). Formerly used to show event displays in the LHC interactive tunnel as well as in the Microcosm exhibition, it is now used at the CERN Globe and the new Science Gateway centre. There, visitors will be able to play a game called “proton football”, where the collision energy depends on the “kick” the players give their protons. “This game shows that event displays are the best of both worlds,” explains developer Joao Pequenao of CERN. “They inspire children to learn more about physics by simply playing a soccer game, and they help physicists to debug their detectors.”

A soft spot for heavy metal

Welding is the technique of fusing two materials, often metals, by heating them to their melting points, creating a seamless union. Mastery of the materials involved, meticulous caution and remarkable steadiness are integral elements to a proficient welder’s skillset. The ability to adjust to various situations, such as mechanised or manual welding, is also essential. Audrey Vichard’s role as a welding engineer in CERN’s mechanical and materials engineering group (MME) encompasses comprehensive technical guidance in the realm of welding. She evaluates methodologies, improves the welding process, develops innovative solutions, and ensures compliance with global standards and procedures. This amalgamation of tasks allows for the effective execution of complex projects for CERN’s accelerators and experiments. “It’s a kind of art,” says Audrey. “Years of training are required to achieve high-quality welds.” 

Audrey is one of the newest additions to the MME group, which provides specific engineering solutions combining mechanical design, fabrication and material sciences for accelerator components and physics detectors to the CERN community. She joined the forming and welding section as a fellow in January 2023, having previously studied metallurgy in the engineering school at Polytech Nantes in France. “While in school, I did an internship in Toulon, where they build submarines for the army. I was in a group with a welder, who passed on his passion for welding to me – especially when applied in demanding applications.”

Extreme conditions

What sets welding at CERN apart are the variety of materials used and the environments the finished parts have to withstand: radioactivity, anything from high pressure to ultra-high vacuum, and cryogenic temperatures. Stainless steel is the most frequently used material, says Audrey, but rarer ones like niobium also come into play. “You don’t really find niobium for welding outside CERN – it is very specific, so it’s interesting and challenging to study niobium welds. To keep the purity of this material in particular, we have to apply a special vacuum welding process using an electron beam.” The same is true for titanium, a material of choice for its low density and high mechanical strength, currently under study for the next-generation HL-LHC beam dump. Whether it’s steel, titanium, copper, niobium or aluminium, each material has a unique metallurgical behaviour that greatly influences the welding process. To meet the strict operating conditions over the lifetime of the components, the welding parameters are developed accordingly, and rigorous quality control and traceability are essential.

“Although it is the job of the physicists at CERN to come up with the innovative machines they need to push knowledge further, it is an interesting exchange to learn from each other, juggling between ideal objects and industrial realities,” explains Audrey. “It is a matter of adaptation. The physicists come here and explain what they need and then we see if it’s feasible with our machines. If not, we can adapt the design or material, and the physicists are usually quite open to the change.”

Touring the main CERN workshop – which was one of CERN’s first buildings and has been in service since 1957 – Audrey is one of the few women present. “We are a handful of women graduating as International Welding Engineers (IWE). I am proud to be part of the greater scientific community and to promote my job in this domain, historically dominated by men.”

The physicists come here and explain what they need and then we see if it’s feasible with our machines

In the main workshop at CERN, Audrey is, along with her colleagues, a member of the welding experts’ team. “My daily task is to support welding activities for current fabrication projects CERN-wide. On a typical day, I can go from performing visual inspections of welds in the workshop to overseeing the welding quality, advising the CERN community according to the most recent standards, participating in large R&D projects and, as a welding expert, advising the CERN community in areas such as the framework of the pressure equipment directive.”

Together with colleagues from CERN’s vacuum, surfaces and coatings group (TE-VSC) and MME, Audrey is currently working on R&D for the Einstein Telescope – a proposed next-generation gravitational-wave observatory in Europe. It is part of a new collaboration between CERN, Nikhef and INFN to design the telescope’s colossal vacuum system – the largest ever attempted (see “CERN shares beampipe know-how for gravitational-wave observatories”). To undertake this task, the collaboration is initially investigating different materials to find the best candidate combining ultra-high-vacuum compatibility, weldability and cost efficiency. So far, one fully prototyped beampipe has been finished using stainless steel and another is in production with common steel; a third is yet to be made. The next main step will be to go from the current 3 m-long prototype to a 50 m version, which will take about a year and a half. Audrey’s task is to work with the welders to optimise the welding parameters and ultimately provide a robust industrial solution for manufacturing this giant vacuum chamber. “The design is unusual; it has not been used in any industrial application, at least not at this quality. I am very excited to work on the Einstein Telescope. Gravitational waves have always interested me, and it is great to be part of the next big experiment at such an early stage.”

A new TPC for T2K upgrade

In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties. 

To improve these measurements further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector for the next-generation long-baseline neutrino oscillation experiment Hyper-Kamiokande.

Meanwhile, R&D and testing of the prototype detectors for the DUNE experiment at the Long Baseline Neutrino Facility at Fermilab/SURF in the US are entering their final stages.

Extreme detector design for a future circular collider

FCC-hh reference detector

The Future Circular Collider (FCC) is the most powerful post-LHC experimental infrastructure proposed to address key open questions in particle physics. Under study for almost a decade, it envisions an electron–positron collider phase, FCC-ee, followed by a proton–proton collider in the same 91 km-circumference tunnel at CERN. The hadron collider, FCC-hh, would operate at a centre-of-mass energy of 100 TeV, extending the energy frontier by almost an order of magnitude compared to the LHC, and provide an integrated luminosity a factor of 5–10 larger. The mass reach for direct discovery at FCC-hh extends to several tens of TeV and would allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements at FCC-ee.

The potential of FCC-hh offers an unprecedented opportunity to address fundamental unknowns about our universe

At the time of the kick-off meeting for the FCC study in 2014, the physics potential and the requirements for detectors at a 100 TeV collider were already heavily debated. These discussions were eventually channelled into a working group that provided input to the 2020 update of the European strategy for particle physics and recently concluded with a detailed write-up in a 300-page CERN Yellow Report. To focus the effort, it was decided to study one reference detector capable of fully exploiting the FCC-hh physics potential. At first glance it resembles a super CMS detector with two LHCb detectors attached (see “Grand designs” image). A detailed detector-performance study followed, allowing an efficient assessment of the key physics capabilities.

The first detector challenge at FCC-hh is related to the luminosity, which is expected to reach 3 × 10^35 cm^-2 s^-1. This is six times larger than the HL-LHC luminosity and 30 times larger than the nominal LHC luminosity. Because the FCC will operate beams with a 25 ns bunch spacing, the so-called pile-up (the number of pp collisions per bunch crossing) scales by approximately the same factor. This results in almost 1000 simultaneous pp collisions, requiring a highly granular detector. Evidently, the assignment of tracks to their respective vertices in this environment is a formidable task.
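
The quoted pile-up of almost 1000 follows directly from the luminosity, the bunch spacing and the inelastic proton–proton cross section. The sketch below reproduces the estimate; the cross-section value at 100 TeV is an assumed round number of order 100 mb.

```python
# Back-of-the-envelope pile-up estimate for FCC-hh:
# mu = L * sigma_inel * delta_t  (mean pp collisions per bunch crossing)
luminosity = 3e35       # cm^-2 s^-1, FCC-hh target quoted in the text
sigma_inel = 105e-27    # cm^2; ~105 mb assumed for inelastic pp at 100 TeV
bunch_spacing = 25e-9   # s

pileup = luminosity * sigma_inel * bunch_spacing
print(f"~{pileup:.0f} pp collisions per bunch crossing")  # of order 800-1000
```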

Longitudinal cross-section of the FCC-hh reference detector

The plan to collect an integrated pp luminosity of 30 ab^-1 brings the radiation-hardness requirements for the first layers of the tracking detector close to 10^18 hadrons/cm^2, around 100 times more than the requirement for the HL-LHC. Still, the tracker volume subject to such a high radiation load is not excessively large. From a radial distance of around 30 cm outwards, radiation levels are already close to those expected for the HL-LHC, so the silicon technology for these detector regions is already available.

The high radiation levels also call for very radiation-hard calorimetry, making a liquid-argon calorimeter the first choice for the electromagnetic calorimeter and for the forward regions of the hadron calorimeter. The energy deposit in the very forward regions will be 4 kW per unit of rapidity, and it will be an interesting task to keep cryogenic liquids cold in such an environment. Thanks to the large shielding effect of the calorimeters, which have to be quite thick to contain the highest-energy particles, the radiation levels in the muon system are not too different from those at the HL-LHC, so the technology needed for this system is available.

Looking forward 

At an energy of 100 TeV, important SM particles such as the Higgs boson are abundantly produced in the very forward region. The forward acceptance of FCC-hh detectors therefore has to be much larger than at the LHC detectors. ATLAS and CMS enable momentum measurements up to pseudorapidities (a measure of the angle between the track and beamline) of around η = 2.5, whereas at FCC-hh this will have to be extended to η = 4 (see “Far reaching” figure). Since this is not achievable with a central solenoid alone, a forward magnet system is assumed on either side of the detector. Whether the optimum forward magnets are solenoids or dipoles still has to be studied and will depend on the requirements for momentum resolution in the very forward region. Forward solenoids have been considered that extend the precision of momentum measurements by one additional unit of rapidity. 
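
For reference, pseudorapidity is related to the polar angle θ of a track with respect to the beamline by

\[
\eta = -\ln\tan\frac{\theta}{2}, \qquad \theta = 2\arctan e^{-\eta},
\]

so η = 2.5 corresponds to about 9.4° from the beam axis, η = 4 to about 2.1° and η = 6 (the tracker acceptance discussed below) to about 0.3°.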

Momentum resolution versus pseudorapidity

A silicon tracking system with a radius of 1.6 m and a total length of 30 m provides a momentum resolution of around 0.6% for low-momentum particles, 2% at 1 TeV and 20% at 10 TeV (see “Forward momentum” figure). To detect at least 90% of the very forward jets that accompany a Higgs boson in vector-boson-fusion production, the tracker acceptance has to be extended up to η = 6. At the LHC such an acceptance is already achieved up to η = 4. The total tracker surface of around 400 m^2 at FCC-hh is “just” a factor of two larger than that of the HL-LHC trackers, and the total number of channels (16.5 billion) is around eight times larger.
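
The three quoted resolution figures are consistent with the usual generic parametrisation in which a constant multiple-scattering term is added in quadrature to a term growing linearly with transverse momentum. The sketch below fixes the linear coefficient from the 1 TeV point and checks the 10 TeV value; it is a consistency check of the standard form, not a statement about the actual FCC-hh tracker model.

```python
# Generic tracker resolution model: sigma(pT)/pT = a (+) b*pT, summed in quadrature.
# Fix a from the quoted low-momentum value and b from the quoted 2% at 1 TeV,
# then check the prediction at 10 TeV against the quoted ~20%.
import math

a = 0.006                                # 0.6% multiple-scattering term
b = math.sqrt(0.02**2 - a**2) / 1000.0   # per-GeV coefficient from 2% at 1 TeV

def resolution(pt_gev):
    return math.hypot(a, b * pt_gev)

print(f"b = {b:.2e} per GeV")
print(f"at 10 TeV: {100 * resolution(10000):.0f}%")  # ~19%, close to the quoted ~20%
```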

It is evident that the FCC-hh reference detector is more challenging than the LHC detectors, but not at all out of reach. Its diameter and length are similar to those of the ATLAS detector. The tracker and calorimeters are housed inside a large superconducting solenoid 10 m in diameter, providing a magnetic field of 4 T. For comparison, CMS uses a solenoid with the same field and an inner diameter of 6 m. This difference does not seem large at first sight, but the stored energy (13 GJ) is about five times larger than that of the CMS coil, which calls for very careful design of the quench-protection system.
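
The quoted stored energy can be checked with a back-of-the-envelope estimate of the magnetic energy density times the bore volume; the coil length used below (of order 20 m) is an assumption for illustration, since only the 10 m diameter is given above:

\[
E \simeq \frac{B^2}{2\mu_0}\,\pi r^2 \ell \approx \frac{(4\ \mathrm{T})^2}{2\mu_0} \times \pi\,(5\ \mathrm{m})^2 \times 20\ \mathrm{m} \approx 1\times10^{10}\ \mathrm{J} = 10\ \mathrm{GJ},
\]

the same order of magnitude as the quoted 13 GJ.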

For the FCC-hh calorimeters, the major challenge, besides the high radiation dose, is the required energy resolution and particle identification in the high pile-up environment. The key to achieving the required performance is therefore a highly segmented calorimeter. The need for longitudinal segmentation calls for a solution different from the “accordion” geometry employed by ATLAS: flat lead/steel absorbers, inclined by 50 degrees with respect to the radial direction, are interleaved with liquid-argon gaps and straight electrodes carrying the high-voltage and signal pads (see “Liquid argon” figure). Reading out these pads at the back of the calorimeter is made possible by multi-layer electrodes fabricated as straight printed-circuit boards. This idea has already been successfully prototyped within the CERN EP detector R&D programme.

The considerations for a muon system for the reference detector are quite different compared to the LHC experiments. When the detectors for the LHC were originally conceived in the late 1980s, it was not clear whether precise tracking in the vicinity of the collision point was possible in this unprecedented radiation environment. Silicon detectors were excessively expensive and gas detectors were at the limit of applicability. For the LHC detectors, a very large emphasis was therefore put on muon systems with good stand-alone performance, specifically for the ATLAS detector, which is able to provide a robust measurement of, for example, the decay of a Higgs particle into four muons, with the muon system alone. 

Liquid argon

Thanks to the formidable advancement of silicon-sensor technology, which has led to full silicon trackers capable of dealing with around 140 simultaneous pp collisions every 25 ns at the HL-LHC, standalone performance is no longer a stringent requirement. The muon systems for FCC-hh can therefore fully rely on the silicon trackers, assuming just two muon stations outside the coil that measure the exit point and the angle of the muons. The muon track provides muon identification, the muon angle provides a coarse momentum measurement for triggering and the track position provides improved muon momentum measurement when combined with the inner tracker. 
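
The “coarse momentum measurement” from the muon angle follows from the usual relation between bending angle, field and path length: a particle of transverse momentum p_T (in GeV/c) traversing a length L (in m) of a field B (in T) is deflected by approximately

\[
\alpha \;\approx\; \frac{0.3\,B\,L}{p_T}.
\]

For illustration only, with B = 4 T and an assumed 2 m lever arm, a 1 TeV muon is deflected by about 2.4 mrad, which sets the scale of the angular precision the two muon stations need in order to make this measurement useful for triggering.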

The major difference between an FCC-hh detector and CMS is that there is no yoke for the return flux of the solenoid, as the cost would be excessive and its only purpose would be to shield the cavern from the magnetic field. The baseline design assumes the cavern infrastructure can be built to be compatible with this stray field. Infrastructure that is sensitive to the magnetic field will be placed in the service cavern 50 m from the solenoid, where the stray field is sufficiently low.

Higgs self-coupling

The high granularity and acceptance of the FCC-hh reference detector will result in about 250 TB/s of data from the calorimeters and the muon system, about 10 times more than in the ATLAS and CMS HL-LHC scenarios. There is no doubt that it will be possible to digitise and read out this data volume at the full bunch-crossing rate for these detector systems. The question remains whether the data rate of almost 2500 TB/s from the tracker can also be read out at the full bunch-crossing rate, or whether calorimeter, muon and possibly coarse tracker information will need to be used for a first-level trigger decision, reducing the tracker readout rate to the few-MHz level without the loss of important physics. Even if optical-link technology for full tracker readout were available and affordable, the required radiation hardness of the devices, together with infrastructure constraints from power and cooling services, is prohibitive with current technology, calling for R&D on low-power, radiation-hard optical links.

Benchmark physics

The potential of FCC-hh in the realms of precision Higgs and electroweak physics, high mass reach and dark-matter searches offers an unprecedented opportunity to address fundamental unknowns about our universe. The performance requirements for the FCC-hh baseline detector have been defined through a set of benchmark physics processes, selected from among the key ingredients of the physics programme. The detector’s increased acceptance compared to the LHC detectors, and the higher energy of FCC-hh collisions, will allow physicists to improve the precision of measurements of Higgs-boson properties across a whole spectrum of production and decay processes complementary to those accessible at FCC-ee. This includes measurements of rare processes such as Higgs pair-production, which provides a direct measure of the Higgs self-coupling – a crucial parameter for understanding the stability of the vacuum and the nature of the electroweak phase transition in the early universe – with a precision of 3 to 7% (see “Higgs self-coupling” figure).

Dark matters

Moreover, thanks to the extremely large Higgs-production rates, FCC-hh offers the potential to measure rare decay modes in a novel boosted kinematic regime well beyond what is currently studied at the LHC. These include the decay to muons – second-generation fermions – which can be measured to a precision of 1%. The Higgs branching fraction to invisible states can be probed down to a value of 10^-4, allowing the parameter space for dark matter to be further constrained. The much higher centre-of-mass energy of FCC-hh, meanwhile, significantly extends the mass reach for discovering new particles. The potential for detecting heavy resonances decaying into di-muons and di-electrons reaches 40 TeV, while for coloured resonances such as excited quarks it reaches 45 TeV, extending the current limits by almost an order of magnitude. In the context of supersymmetry, FCC-hh will be capable of probing stop squarks with masses up to 10 TeV, also well beyond the reach of the LHC.

In terms of dark-matter searches, FCC-hh has immense potential – particularly for probing scenarios of weakly interacting massive particles such as higgsinos and winos (see “Dark matters” figure). Electroweak multiplets are typically elusive, especially in hadron collisions, due to their weak interactions and large masses (needed to explain the relic abundance of dark matter in our universe). Their nearly degenerate mass spectrum produces an elusive final state in the form of so-called “disappearing tracks”. Thanks to the dense coverage of the FCC-hh detector tracking system, a general-purpose FCC-hh experiment could detect these particle decays directly, covering the full mass range expected for this type of dark matter. 

A detector at a 100 TeV hadron collider is clearly a challenging project. But detailed studies have shown that it should be possible to build a detector that can fully exploit the physics potential of such a machine, provided we invest in the necessary detector R&D. Experience with the Phase-II upgrades of the LHC detectors for the HL-LHC, developments for further exploitation of the LHC and detector R&D for future Higgs factories will be important stepping stones in this endeavour.

End-to-end simulation of particle accelerators using Sirepo



This webinar will give a high-level overview of how scientists can model particle accelerators using Sirepo, an open-source scientific computing gateway.

The speaker, Jonathan Edelen, will work through examples using three of Sirepo’s applications that best highlight the different modelling regimes for simulating a free-electron laser.


Jonathan Edelen, president, earned a PhD in accelerator physics from Colorado State University, after which he was selected for the prestigious Bardeen Fellowship at Fermilab. While at Fermilab he worked on RF systems and thermionic cathode sources at the Advanced Photon Source. Currently, Jon is focused on building advanced control algorithms for particle accelerators including solutions involving machine learning.
