Precision calculations in the Standard Model and beyond are essential for the experimental programme of the LHC, for planned high-energy colliders and for the gravitational-wave detectors of the future. Following two years of pandemic-imposed virtual discussions, 25 invited experts gathered from 26 to 30 July at Cadenabbia on Lake Como, Italy, to present new results and discuss paths through the computational landscape at this year’s “Loop Summit”.
The conference surveyed topics relating to multi-loop and multi-leg calculations in quantum chromodynamics (QCD) and electroweak processes. In scattering processes, loops are closed particle lines and legs represent external particles. Both present computational challenges. Recent progress on many inclusive processes has been reported at three- or four-loop order, including for deep-inelastic scattering, jets at colliders, the Drell–Yan process, top-quark and Higgs-boson production, and aspects of bottom-quark physics. Much improved descriptions of scaling violations of parton densities, heavy-quark effects at colliders, power corrections, mixed QCD and electroweak corrections, and high-order QED corrections for e⁺e⁻ colliders have also recently been obtained. These will be important for many processes at the LHC, and pave the way to physics at facilities such as the proposed Future Circular Collider (FCC).
Quantum field theory provides a very elegant way to solve Einsteinian gravity
Weighty considerations
Although merging black holes can have millions of solar masses, the physics describing them remains classical: quantum-gravity effects, if they occurred at all, belong to the epoch shortly after the Big Bang. Nevertheless, quantum field theory provides an elegant way to solve Einsteinian gravity. At this year’s Loop Summit, perturbative approaches to gravity were discussed that use field-theoretic methods at the level of the 5th and 6th post-Newtonian approximations, where the nth post-Newtonian order corresponds to a classical n-loop calculation between black-hole world lines. These calculations allow predictions of the binding energy and periastron advance of inspiralling black-hole pairs, and relate them to gravitational-wave effects. In these calculations, the classical loops all link to world lines in classical graviton networks within the framework of an effective-field-theory representation of Einsteinian gravity.
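Schematically (standard post-Newtonian power counting, not a result quoted at the meeting), the small parameter being expanded in is the ratio of the orbital velocity to the speed of light, which the virial theorem ties to the strength of the gravitational potential:

\[
  \epsilon \;\sim\; \frac{v^2}{c^2} \;\sim\; \frac{G M}{r c^2},
  \qquad
  E_{\mathrm{binding}} \;=\; E_{\mathrm{Newtonian}}\left(1 + a_1\,\epsilon + a_2\,\epsilon^2 + \dots + a_n\,\epsilon^n + \dots\right),
\]

with the nPN term \(a_n\,\epsilon^n\) arising, in the effective-field-theory picture, from diagrams with n classical loops stretched between the two black-hole world lines; the 5th and 6th orders therefore call for five- and six-loop technology.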
Other talks discussed important progress on advanced analytic computation technologies and new mathematical methods, such as computational improvements in massive Dirac algebra, new ways to calculate loop integrals analytically, new ways to deal consistently with polarised processes, the efficient reduction of highly connected systems of integrals, the solution of gigantic systems of differential equations, and numerical methods based on loop-tree duality. All these methods will reduce the theoretical uncertainties for many processes due to be measured in the high-luminosity phase of the LHC, and beyond.
Half of the meeting was devoted to developing new ideas in subgroups. In-person meetings are invaluable for highly technical exchanges such as these — there is still no substitute for gathering informally around the blackboard and jotting down equations and diagrams. The next Loop Summit in this triennial series will take place in summer 2024.
The Deep Underground Neutrino Experiment (DUNE) in the US is set to replicate that marvel of model-making, the ship-in-a-bottle, on an impressive scale. More than 3000 tonnes of steel and other components for DUNE’s four giant detector modules, or cryostats, must be lowered 1.5 km through narrow shafts beneath the Sanford Lab in South Dakota, before being assembled into four 66 × 19 × 18 m³ containers. And the maritime theme is more than a metaphor: to realise DUNE’s massive cryostats, each of which will keep 17.5 kt of liquid argon (LAr) at a temperature of about –186 °C, CERN is working closely with the liquefied natural gas (LNG) shipping industry.
Since it was established in 2013, CERN’s Neutrino Platform has enabled significant European participation in long-baseline neutrino experiments in the US and Japan. For DUNE, which will beam neutrinos 1300 km through the Earth’s crust from Fermilab to Sanford, CERN has built and operated two large-scale prototypes for DUNE’s LAr time-projection chambers (TPCs). All aspects of the detectors have been validated. The “ProtoDUNE” detectors’ cryostats will now pave the way for the Neutrino Platform team to design and engineer cryostats that are 20 times bigger. CERN had already committed to build the first of these giant modules. In June, following approval from the CERN Council, the organisation also agreed to provide a second.
Scaling up
Weighing more than 70,000 tonnes, DUNE will be the largest ever deployment of LAr technology, which serves as both target and tracker for neutrino interactions, and was proposed by Carlo Rubbia in 1977. The first large-scale LAr TPC – ICARUS, which was refurbished at CERN and shipped to Fermilab’s short-baseline neutrino facility in 2017 – is a mere twentieth of the size of a single DUNE module.
Scaling LAr technology to industrial levels presents several challenges, explains Marzio Nessi, who leads CERN’s Neutrino Platform. Typical cryostats are carved from big chunks of welded steel, which does not lend itself to a modular design. Insulation is another challenge. In smaller setups, vacuum insulation comprising two stiff walls would be used. But at the scale of DUNE, the cryostats will deform by tens of centimetres when cooled from room temperature, potentially imperilling the integrity of the instrumentation, leading CERN to adopt foam-based insulation with an ingenious membrane design.
The nice idea from the liquefied-natural-gas industry is to have an internal membrane which can deform like a spring
Marzio Nessi
“The nice idea from the LNG industry is that they have found a way to have an internal membrane, which can deform like a spring, as a function of the thermal conditions. It’s a really beautiful thing,” says Nessi. “We are collaborating with French LNG firm GTT because there is a reciprocal interest for them to optimise the process. They never went to LAr temperatures like these, so we are both learning from each other and have built a fruitful ongoing collaboration.”
Having passed all internal reviews at CERN and in the US, the first cryostat is now ready for procurement. Several industrial partners across CERN’s member states and beyond are involved, with delivery and installation at Sanford Lab expected to start in 2024. The cryostat is only one aspect of the ProtoDUNE project: instrumentation, readout, high-voltage supply and many other aspects of detector design have been optimised through more than five years of R&D. Two technologies were trialled at the Neutrino Platform: single- and dual-phase LAr TPCs. The single-phase technology has been selected for the first full-size DUNE module. The Neutrino Platform team is now qualifying a hybrid single/dual-phase version based on a vertical drift, which may prove to be simpler, more cost effective and easier to install.
Step change
In parallel with efforts towards the US neutrino programme, CERN has developed the BabyMIND magnetic spectrometer, which sandwiches magnetised iron and scintillator to detect relatively low-energy muon neutrinos, and participates in the T2K experiment, which sends neutrinos 295 km from Japan’s J-PARC accelerator facility to the Super-Kamiokande detector. CERN will contribute to the upgrade of T2K’s near detector, and a proposal has been made for a new water-Cherenkov test-beam experiment at CERN, to later be placed about 1 km from the neutrino beam source of the Hyper-Kamiokande experiment. Excavation of underground caverns for Hyper-Kamiokande and DUNE has already begun.
DUNE and Hyper-Kamiokande, along with short-baseline experiments and major non-accelerator detectors such as JUNO in China, will enable high-precision neutrino-oscillation measurements to tackle questions such as leptonic CP violation, the neutrino mass hierarchy, and hints of additional “sterile” neutrinos, as well as a slew of questions in multi-messenger astronomy. Entering operation towards the end of the decade, Hyper-Kamiokande and DUNE will mark a step-change in the scale of neutrino experiments, demanding a global approach.
“The Neutrino Platform has become one of the key projects at CERN after the LHC,” says Nessi. “The whole thing is a wonderful example – even a prototype – for the global participation and international collaboration that will be essential as the field strives to build ever more ambitious projects like a future collider.”
The 25th International Conference on Computing in High-Energy and Nuclear Physics (CHEP) gathered more than 1000 participants online from 17 to 21 May. Dubbed “vCHEP”, the event took place virtually after this year’s in-person event in Norfolk, Virginia, had to be cancelled due to the COVID-19 pandemic. Participants tuned in across 20 time zones, from Brisbane to Honolulu, to live talks, recorded sessions, excellent discussions on chat apps (to replace the traditional coffee-break interactions) and special sessions that linked job seekers with recruiters.
Given vCHEP’s virtual nature, the content took a different focus this year. Plenary speakers are usually invited, but this time the organisers invited papers of up to 10 pages to be submitted, and chose a plenary programme from the most interesting and innovative. Just 30 could be selected from the more than 200 submissions — twice as many as expected — and the outcome was a diverse programme tackling the huge issues of data rate and event complexity in future experiments in nuclear and high-energy physics (HEP).
Artificial intelligence
So what were the hot topics at vCHEP? One standout was artificial intelligence and machine learning. There were more papers submitted on this theme than any other, showing that the field is continuing to innovate in this domain.
Interest in using graph neural networks for the problem of charged-particle tracking was very high, with three plenary talks. Representing the hits in a tracker as the nodes of a graph, and possible connections between hits as its edges, is a very natural way to encode the data that experiments produce. The network can be effectively trained to pick out the edges representing true tracks and reject those that are just spurious connections. The time needed to reach a good solution has improved dramatically in just a few years, and the scaling of the solution to dense environments, such as at the High-Luminosity LHC (HL-LHC), is very promising for this relatively new technique.
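As a concrete illustration of that graph encoding (a minimal sketch with invented hit coordinates, cut values and layer geometry, not any experiment’s actual pipeline), the snippet below turns a handful of tracker hits into graph nodes and keeps only geometrically plausible pairs on adjacent layers as candidate edges; an edge-classifying network would then learn which of those edges belong to real tracks.

import math
from itertools import product

# Toy hits: (layer index, radius r [mm], azimuthal angle phi [rad], z [mm]).
hits = [
    (0, 30.0, 0.10, 5.0),
    (0, 30.0, 1.50, -20.0),
    (1, 60.0, 0.12, 11.0),
    (1, 60.0, 1.48, -41.0),
    (2, 90.0, 0.14, 17.0),
]

def candidate_edges(hits, max_dphi=0.05, max_dz_slope=1.0):
    """Connect hits on adjacent layers whose azimuthal and longitudinal
    separations pass loose cuts, i.e. plausible segments of a single track."""
    edges = []
    for (i, a), (j, b) in product(enumerate(hits), repeat=2):
        if b[0] != a[0] + 1:          # only inner layer -> next layer out
            continue
        dphi = abs(math.atan2(math.sin(b[2] - a[2]), math.cos(b[2] - a[2])))
        dz_slope = abs(b[3] - a[3]) / (b[1] - a[1])
        if dphi < max_dphi and dz_slope < max_dz_slope:
            edges.append((i, j))      # a pair of node indices is one candidate edge
    return edges

edges = candidate_edges(hits)
print(f"{len(hits)} nodes, {len(edges)} candidate edges: {edges}")
# A graph neural network takes per-node features (r, phi, z) plus this edge
# list, and is trained to keep true-track edges and drop the spurious ones.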
ATLAS showed off their new fast-simulation framework
On the simulation side, work was presented showcasing new neural-network architectures that use a “bounded information-bottleneck autoencoder” to improve training stability, providing a solution that replicates important features such as how real minimum-ionising particles interact with calorimeters. ATLAS also showed off their new fast-simulation framework, which combines traditional parametric simulation with generative adversarial networks, to provide better agreement with Geant4 than ever before.
New architectures
Machine learning is very well suited to new computing architectures, such as graphics processing units (GPUs), but many other experimental-physics codes are also being rewritten to take advantage of these new architectures. IceCube are simulating photon transport in the Antarctic ice on GPUs, and presented detailed work on the performance analysis that led to recent significant speed-ups. Meanwhile, LHCb will introduce GPUs to their trigger farm for Run 3, and showed how much this will reduce the energy consumed per event in the high-level trigger. This will help to meet the physical constraints of power and cooling close to the detector, and is a first step towards bringing HEP’s overall computing energy consumption to the table as an important parameter.
LHCb will introduce GPUs to their trigger farm for Run 3
Encouraging work on porting event generation to GPUs was also presented — particularly appropriately, given the spiralling costs of higher-order generators for HL-LHC physics. Looking at the long-term future of these new code bases, there were investigations of porting calorimeter simulation and liquid-argon time-projection-chamber software to different toolkits for heterogeneous programming, a topic that will become even more important as computing centres diversify their offerings.
Keeping up with benchmarking and valuing these heterogeneous resources is an important topic for the Worldwide LHC Computing Grid, and a report from the HEPiX Benchmarking group pointed to the future for evaluating modern CPUs and GPUs for a variety of real-world HEP applications. Staying on the facilities topic, R&D was presented on how to optimise delivering reliable and affordable storage for HEP, based on CephFS and the CERN-developed EOS storage system. This will be critical to providing the massive storage needed in the future. The network between facilities will likely become dynamically configurable in the future, and how best to take advantage of machine learning for traffic prediction is being investigated.
Quantum computing
vCHEP was also the first edition of CHEP with a dedicated parallel session on quantum computing. Meshing very well with CERN’s Quantum Initiative, this showed how seriously the community is taking investigations into future uses of this technology. Interesting results on using quantum support-vector machines to train networks for signal/background classification for B-meson decays were highlighted.
On a meta note, presentations also explored how to adapt outreach events to a virtual setup, to keep up public engagement during lockdown, and how best to use online software training to equip the future generation of physicists with the advanced software skills they will need.
Was vCHEP a success? So far, the feedback is overwhelmingly positive. It was a showcase for the excellent work going on in the field, and 11 of the best papers will be published in a special edition of Computing and Software for Big Science — another first for CHEP in 2021.
The annual International Particle Accelerator Conference (IPAC) promotes collaboration among scientists, engineers, technicians, students and industrial partners across the globe. Originally to be hosted this year by the Laboratório Nacional de Luz Síncrotron (LNLS) in Campinas, Brazil, the conference was moved online when it became clear that the global pandemic would prohibit travel. IPAC21 was nevertheless highly successful, attracting more than 1750 participants online from 24 to 28 May. Despite the technical and logistical challenges, the virtual platform provided many advantages, including low or zero registration fees and a larger, younger and more diverse demographic than typical in-person events, which tend to attract about 1000 delegates.
In order to allow worldwide virtual participation, live plenary presentations were limited to two hours daily. Highlights included Harry Westfahl, Jr. (LNLS) on the scientific capabilities of fourth-generation storage-ring light sources; Thomas Glasmacher (FRIB) on the newly commissioned Facility for Rare Isotope Beams at Michigan State University; Norbert Holtkamp (SLAC) on the future of high-power free-electron lasers; Houjun Qian (DESY) on radio-frequency photocathode guns; and Young-Kee Kim (University of Chicago) on future directions in US particle physics. The closing plenary talk was a sobering presentation on climate change and the Brazilian Amazonia region by Paulo Artaxo (University of São Paulo).
The remainder of the talks were pre-recorded with live Q&A sessions, and 400 teleconferencing rooms per day were set up to allow virtual poster sessions. Highlights in topical sessions included “Women in Science: The Inconvenient Truth” by Márcia Barbosa (Universidade Federal do Rio Grande do Sul) and an industrial forum hosted by Raffaella Geometrante (KYMA) on the intersection between government accelerator projects and industry.
IPAC22 is currently planned as an in-person conference in Bangkok, Thailand, from 17 to 22 June next year.
This year’s Future Circular Collider (FCC) Week took place online from 28 June to 2 July, attracting 700 participants from all over the world to debate the next steps needed to produce a feasibility report in 2025/2026, in time for the next update of the European Strategy for Particle Physics in 2026/2027. The current strategy, agreed in 2020, sets an electron–positron Higgs factory as the highest-priority facility after the LHC, along with an investigation of the technical and financial feasibility of such a Higgs factory followed by a high-energy hadron collider placed in the same 100 km tunnel. The FCC feasibility study will focus on the first stage (tunnel and e⁺e⁻ collider) over the next five years.
Although the FCC is a long-term project with a horizon up to the 22nd century, its timescales are rather tight. A post-LHC collider should be operational around the 2040s, ensuring a smooth continuation from the High-Luminosity LHC, so construction would need to begin in the early 2030s. Placement studies to balance geological and territorial constraints with machine requirements and physics performance suggest that the most suitable scenarios are based on a 92 km-circumference tunnel with eight surface sites.
The next steps are subsurface investigations of high-risk areas, surface-site initial-state analysis and verification of in-principle feasibility with local authorities. A “Mining the Future” competition has been launched to solicit ideas for how to best use the nine million cubic metres of molasse that would be excavated from the tunnel.
The present situation in particle physics is reminiscent of the early days of superconductivity
A highlight of the week was the exploration of the physics case of a post-LHC collider. Matthew Reece (Harvard University) identified dark matter, the baryon asymmetry and the origin of primordial density perturbations as key experimental motivations, and the electroweak hierarchy problem, the strong CP problem and the mystery of flavour-mixing patterns as key theoretical motivations. The present situation in particle physics is reminiscent of the early days of superconductivity, he noted, when physicists had a phenomenological description of symmetry breaking in superconductors but no microscopic picture. Constraining the shape of the Higgs potential could allow a similar breakthrough for electroweak symmetry breaking. Regarding recent anomalous measurements, such as those of the muon’s magnetic moment, Reece noted that while such measurements can pin down the coefficient of one higher-dimension operator in an effective-field-theory description of new physics, only colliders can systematically produce and characterise the nature of any new physics. FCC-ee and FCC-hh both have exciting and complementary roles to play.
A key technology for FCC-ee is the development of efficient superconducting radio-frequency (SRF) cavities to compensate for the 100 MW synchrotron radiation power loss in all modes of operation from the Z pole up to the top threshold at 365 GeV. A staged RF system is foreseen as the baseline scenario, with low-impedance single-cell 400 MHz Nb/Cu cavities for Z running replaced by four-cell Nb/Cu cavities for W and Higgs operation, and later augmented by five-cell 800 MHz bulk Nb cavities at the top threshold.
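The 100 MW figure, and the need to stage the RF system, follow from textbook synchrotron-radiation scaling (quoted here as a rough guide, not as FCC design arithmetic): the energy an electron loses per turn grows as the fourth power of its energy, so at a fixed radiated-power budget the storable beam current falls steeply between Z-pole and top-threshold running.

\[
  U_0 \;\simeq\; C_\gamma\,\frac{E^4}{\rho},
  \qquad C_\gamma \approx 8.85\times 10^{-5}\ \mathrm{m\,GeV^{-3}},
  \qquad
  P_{\mathrm{SR}} \;=\; \frac{U_0\,I}{e}
  \;\;\Rightarrow\;\;
  I \;\propto\; \frac{P_{\mathrm{SR}}\,\rho}{E^4},
\]

which is why the Z-pole stage emphasises handling large beam currents at modest voltage, while the top-threshold stage needs far more accelerating voltage for a much smaller current.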
As well as investigations into the use of HIPIMS coating and the fabrication of copper substrates, an innovative slotted waveguide elliptical (SWELL) cavity design was presented that would operate at 600 or 650 MHz. SWELL cavities optimise the surface area, simplify the coating process and avoid the need for welding in critical areas, where welds can degrade cavity performance. The design profits from previous work on CLIC, and may offer a simplified installation schedule while also finding applications outside of high-energy physics. A prototype will be tested later this year.
Several talks also pointed out synergies with the RF systems needed for the proposed electron–ion collider at Brookhaven and the powerful energy-recovery linac for experiments (PERLE) project at Orsay, and called for stronger collaboration between the projects.
Machine design
Another key aspect of the study regards the machine design. Since the conceptual design report in 2019, the pre-injector layout for FCC-ee has been simplified, and key FCC-ee concepts have been demonstrated at Japan’s SuperKEKB collider, including a new world-record luminosity of 3.12 × 10³⁴ cm⁻² s⁻¹ in June with a vertical betatron function at the interaction point of βy* = 1 mm. Separate tests squeezed the beam to just βy* = 0.8 mm in both rings.
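The attention paid to βy* follows from the standard luminosity formula for head-on collisions of Gaussian beams (an illustrative scaling that ignores the crossing-angle and crab-waist schemes used in the real machines):

\[
  \mathcal{L} \;=\; \frac{N_1 N_2\, n_b\, f_{\mathrm{rev}}}{4\pi\,\sigma_x^{*}\,\sigma_y^{*}},
  \qquad
  \sigma_y^{*} \;=\; \sqrt{\beta_y^{*}\,\varepsilon_y},
\]

so at fixed bunch populations and vertical emittance, squeezing βy* from 1 mm to 0.8 mm raises the luminosity by a factor of roughly \(1/\sqrt{0.8}\approx 1.12\).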
Other studies reported during FCC Week 2021 demonstrated that hosting four experiments is compatible with a new four-fold symmetric ring. This redundancy is thought to be essential for high-precision measurements, and different detector solutions will be invaluable in uncovering hidden systematic biases. The meeting also followed up on the proposal for energy-recovery linacs (ERLs) at FCC-ee, potentially extending the energy reach to 600 GeV if results from the preceding physics runs warrant it. First studies of the use of the FCC-ee booster as a photon source were also presented, potentially leading to applications in medicine and industry, precision QED studies and fundamental-symmetry tests.
Participants also tackled concepts for power reduction and power recycling, to ensure that FCC is sustainable and environmentally friendly. Ideas relating to FCC-ee include making the magnets superconducting rather than normal conducting, improving the klystron efficiency, using ERLs and other energy-storage devices, designing “twin” dipole and quadrupole magnets with a factor-two power saving, and coating SRF cavities with a high-temperature superconductor.
All in all, FCC Week 2021 saw tremendous progress across different areas of the study. The successful completion of the FCC Feasibility Study (2021–2025) will be a crucial milestone for the future of CERN and the field.
Pixel detectors have their roots in photography. Up until 50 years ago, every camera contained a roll of film on which images were photochemically recorded with each exposure, after which the completed roll was sent to be “developed” to finally produce eagerly awaited prints a week or so later. For decades, film also played a big part in particle tracking, with nuclear emulsions, cloud chambers and bubble chambers. The silicon chip, first unveiled to the world in 1961, was to change this picture forever.
During the past 40 years, silicon sensors have transformed particle tracking in high-energy physics experiments
By the 1970s, new designs of silicon chips were invented that consisted of a 2D array of charge-collection sites or “picture elements” (pixels) below the surface of the silicon. During the exposure time, an image focused on the surface generated electron–hole pairs via the photoelectric effect in the underlying silicon, with the electrons collected as signal information in the pixels. These chips came in two forms: the charge-coupled device (CCD) and the monolithic active pixel sensor (MAPS) – more commonly known commercially as the CMOS image sensor (CIS). Willard Boyle and George Smith of Bell Labs in the US were awarded the Nobel Prize for Physics in 2009 for inventing the CCD.
In a CCD, the charge signals are sequentially transferred to a single on-chip output circuit by applying voltage pulses to the overlying electrode array that defines the pixel structure. At the output circuit the charge is converted to a voltage signal to enable the chip to interface with external circuitry. In the case of the MAPS, each pixel has its own charge-integrating detection circuitry and a voltage signal is again sequentially read out from each by on-chip switching or “scanning” circuitry. Both architectures followed rapid development paths, and within a couple of decades had completely displaced photographic film in cameras.
For the consumer camera market, CCDs had the initial lead, which passed to MAPS by about 1995. For scientific imaging, CCDs are preferred for most astronomical applications (most recently the 3.2 Gpixel optical camera for the Vera Rubin Observatory), while MAPS are the preferred option for fast imaging such as super-resolution microscopy, cryoelectron microscopy and pioneering studies of protein dynamics at X-ray free-electron lasers. Recent CMOS imagers with very small, low-capacitance pixels achieve sufficiently low noise to detect single electrons. A third member of the family is the hybrid pixel detector, which is MAPS-like in that the signals are read out by scanning circuitry, but in which the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit (ROIC).
During the past 40 years, these devices (along with their silicon-microstrip counterparts, to be described in a later issue) have transformed particle tracking in high-energy physics experiments. The evolution of these device types is intertwined to such an extent that any attempt at historical accuracy, or who really invented what, would be beyond the capacity of this author, for which I humbly apologise. Space constraints have also led to a focus on the detectors themselves, while ignoring the exciting work in ROIC development, cooling systems, mechanical supports, not to mention the advanced software for device simulation, the simulation of physics performance, and so forth.
CCD design inspiration
The early developments in CCD detectors were disregarded by the particle-detector community. This is because gaseous drift chambers, with a precision of around 100 μm, were thought to be adequate for all tracking applications. However, the 1974 prediction by Gaillard, Lee and Rosner that particles containing charm quarks “might have lifetimes measurable in emulsions”, followed by the discovery of charm in 1975, set the world of particle-physics instrumentation ablaze. Many groups with large budgets tried to develop or upgrade existing types of detectors to meet the challenge: bubble chambers became holographic; drift chambers and streamer chambers were pressurised; silicon microstrips became finer-pitched, etc.
Illustrations of a CCD (left), MAPS (middle) and hybrid chip (right). The first two typically contain 1 k × 1 k pixels, up to 4 k × 4 k or beyond by “stitching”, with an active layer thickness (depleted) of about 20 µm and a highly doped bulk layer back-thinned to around 100 µm, enabling a low-mass tracker, even potentially bent into cylinders round the beampipe.
The CCD (where I is the imaging area, R the readout register, TG the transfer gate, CD the collection diode, and S, D, G the source, drain and gate of the sense transistor) is pixellised in the I direction by conducting gates. Signal charges are shifted in this direction by manipulating the gate voltages so that the image is shifted down, one row at a time. Charges from the bottom row are tipped into the linear readout register, within which they are transferred, all together in the orthogonal direction, towards the output node. As each signal charge reaches the output node, it modulates the voltage on the gate of the output transistor; this is sensed, and transmitted off-chip as an analog signal.
In a MAPS chip, pixellisation is implemented by orthogonal channel stops and signal charges are sensed in-pixel by a tiny front-end transistor. Within a depth of about 1 µm below the surface, each pixel contains complex CMOS electronics. The simplest readout is “rolling shutter”, in which peripheral logic along the chip edge addresses rows in turn, and analogue signals are transmitted by column lines to peripheral logic at the bottom of the imaging area. Unlike in a CCD, the signal charges never move from their “parent” pixel.
In the hybrid chip, like a MAPS, signals are read out by scanning circuitry. However, the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit. Bump-bonding interconnection technology is used to keep up with pixel miniaturisation.
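The contrast between the two readout patterns described in this panel can be caricatured in a few lines of code (a toy sketch with made-up charge values, not a model of any real device): the CCD routine physically marches every charge packet towards a single output node, whereas the MAPS-style rolling shutter senses each pixel in place, one row at a time.

import numpy as np

# Toy 4 x 4 frame of collected charge (arbitrary units); the values are invented.
rng = np.random.default_rng(seed=1)
image = rng.poisson(lam=2.0, size=(4, 4)).astype(float)

def ccd_readout(pixels):
    """CCD-style: rows are shifted towards a serial register, whose contents
    are then clocked one by one through a single output node."""
    samples = []
    frame = pixels.copy()
    while frame.size:
        serial_register = frame[-1].copy()    # bottom row tipped into the register
        frame = frame[:-1]                    # remaining rows shift down by one
        for charge in serial_register[::-1]:  # transfer towards the output node
            samples.append(charge)            # sensed at the single output circuit
    return samples

def maps_rolling_shutter(pixels):
    """MAPS-style rolling shutter: peripheral logic addresses one row at a time
    and each pixel is sensed in place; charge never leaves its parent pixel."""
    samples = []
    for row in pixels:                # row selected by logic along the chip edge
        samples.extend(row.tolist())  # signals sent down the column lines
    return samples

assert sorted(ccd_readout(image)) == sorted(maps_rolling_shutter(image))
print("Same charges recovered; only the transport to the outside world differs.")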
The ACCMOR Collaboration (Amsterdam, CERN, Cracow, Munich, Oxford, RAL) had built a powerful multi-particle spectrometer, operating at CERN’s Super Proton Synchrotron, to search for hadronic production of the recently-discovered charm particles, and make the first measurements of their lifetimes. We in the RAL group picked up the idea of CCDs from astronomers at the University of Cambridge, who were beginning to see deeper into space than was possible with photographic film (see left figure in “Pixel architectures” panel). The brilliant CCD developers in David Burt’s team at the EEV Company in Chelmsford (now Teledyne e2v) suggested designs that we could try for particle detection, notably to use epitaxial silicon wafers with an active-layer thickness of about 20 μm. At a collaboration meeting in Cracow in 1978, we demonstrated via simulations that just two postage-stamp-sized CCDs, placed 1 and 2 cm beyond a thin target, could cover the whole spectrometer aperture and might be able to deliver high-quality topological reconstruction of the decays of charm particles with expected lifetimes of around 10⁻¹³ s.
We still had to demonstrate that these detectors could be made efficient for particle detection. With a small telescope comprising three CCDs in the T6 beam from CERN’s Proton Synchrotron, we established a hit efficiency of more than 99%, a track-measurement precision of 4.5 μm in x and y, and a two-track resolution of 40 μm. Nothing like this had been seen before in an electronic detector. Downstream of us, in the same week, a Yale group led by Bill Willis obtained signals from a small liquid-argon calorimeter. A bottle of champagne was shared!
It was then a simple step to add two CCDs to the ACCMOR spectrometer and start looking for charm particles. During 1984, on the initial shift, we found our first candidate (see “First charm” figure), which, after adding the information from the downstream microstrips, drift chambers (with two large-aperture magnets for momentum measurement), plus a beautiful assembly of Cherenkov hodoscopes from the Munich group, proved to be a D⁺ → K⁺π⁺π⁻ event.
It was more challenging to develop a CCD-based vertex detector for the SLAC Large Detector (SLD) at the SLAC Linear Collider (SLC), which became operational in 1989. The level of background radiation required a 25 mm-radius beam pipe, and the physics demanded large solid-angle coverage, as in all general-purpose collider detectors. The physics case for SLD had been boosted by the discovery in 1983 that the lifetime of particles containing b quarks was longer than for charm, in contrast to the theoretical expectation of being much shorter. So the case for deploying high-quality vertex detectors at SLC and LEP, which were under construction to study Z⁰ decays, was indeed compelling (see “Vertexing” figure). All four LEP experiments employed a silicon-microstrip vertex detector.
Early in the silicon vertex-detector programme, e2v perfected the art of “stitching” reticles limited to an area of 2 × 2 cm², to make large CCDs (8 × 1.6 cm² for SLD). This enabled us to make a high-performance vertex detector that operated from 1996 until SLD shut down in 1998, and which delivered a cornucopia of heavy-flavour physics from Z⁰ decays (see “Pioneering pixels” figure). During this time, the LEP beam pipe, limited by background to 54 mm radius, permitted its experiments’ microstrip-based vertex detectors to do pioneering b physics. But they had reduced capability for the more elusive charm, which was shorter lived and left fewer decay tracks.
Between LEP, with its much higher luminosity, and SLD, with its small beam pipe, state-of-the-art vertex detector and highly polarised electron beam, the study of Z⁰ decays yielded rich physics. Highlights included very detailed studies at LEP of an enormous sample of gluon jets from Z⁰ → bb̄g events with cleanly tagged b jets, and, at SLD, measurements of Ac, the parity-violation parameter in the coupling of the Z⁰ to c quarks. However, the most exciting discovery of that era was the top quark at Fermilab, in which the SVX microstrip detector of the CDF experiment played an essential part (see “Top detector” figure). This triggered a paradigm shift. Before then, vertex detectors were an “optional extra” in experiments; afterwards, they became obligatory in every energy-frontier detector system.
Hybrid devices
While CCDs pioneered the use of silicon pixels for precision tracking, their use was restricted by two serious limitations: poor radiation tolerance and long readout time (tens of ms due to the need to transfer the charge signals pixel by pixel through a single output circuit). There was clearly a need for pixel detectors in more demanding environments, and this led to the development of hybrid pixel detectors. The idea was simple: reduce the strip length of well-developed microstrip technology to equal its width, and you had your pixel sensor. However, microstrip detectors were read out at one end by ASIC (application-specific integrated circuit) chips having their channel pitch matched to that of the strips. For hybrid pixels, the ASIC readout required a front-end circuit for each pixel, resulting in modules with the sensor chip facing the readout chip, with electrical connections made by metal bump-bonds (see right figure in “Pixel architectures” panel). The use of relatively thick sensor layers (compared to CCDs) compensated for the higher node capacitance associated with the hybrid front-end circuit.
Although the idea was simple, its implementation involved a long and challenging programme of engineering at the cutting edge of technology. This had begun by about 1988, when Erik Heijne and colleagues in the CERN microelectronics group had the idea to fit full nuclear-pulse processing electronics in every pixel of the readout chip, with additional circuitry such as digitisation, local memory and pattern recognition on the chip periphery. With a 3 μm feature size, they were obliged to begin with relatively large pixels (75 × 500 μm), and only about 80 transistors per pixel. They initiated the RD19 collaboration, which eventually grew to 150 participants, with many pioneering developments over a decade, leading to successful detectors in at least three experiments: WA97 in the Omega Spectrometer; NA57; and forward tracking in DELPHI. As the RD19 programme developed, the steady reduction in feature size permitted the use of in-pixel discriminators and fast shapers that enhanced the noise performance, even at high rates. This would be essential for operation of large hybrid pixel systems in harsh environments, such as ATLAS and CMS at the LHC. RD19 initiated a programme of radiation hardness by design (enclosed-gate transistors, guard rings, etc), which was further developed and broadly disseminated by the CERN microelectronics group. These design techniques are now used universally across the LHC detector systems. There is still much to be learned, and advances to a smaller feature size bring new opportunities but also surprises and challenges.
The advantages of the hybrid approach include the ability to choose almost any commercial CMOS process and combine it with the sensor best adapted to the application. This can deliver optimal speed of parallel processing, and radiation hardness as good as can be engineered in the two component chips. The disadvantages include a complex and expensive assembly procedure, high power dissipation due to large node capacitance, and more material than is desirable for a tracking system. Thanks to the sustained efforts of many experts, an impressive collection of hybrid pixel tracking detectors has been brought to completion in a number of detector facilities. As vertex detectors, their greatest triumph has been in the inferno at the heart of ATLAS and CMS where, for example, they were key to the recent measurement of the branching ratio for H → bb̄.
Facing up to the challenge
The high-luminosity upgrade to the LHC (HL-LHC) is placing severe demands on ATLAS and CMS, none more so than developing even more powerful hybrid vertex detectors to accommodate a “pileup” level of 200 events per bunch crossing. For the sensors, a 3D variant invented by Sherwood Parker has adequate radiation hardness, and may provide a more secure option than the traditional planar pixels, but this question is still open. 3D pixels have already proved themselves in ATLAS, for the insertable B layer (IBL), where the signal charge is drifted transversely within the pixel to a narrow column of n-type silicon that runs through the thickness of the sensor. But for HL-LHC, the innermost pixels need to be at least five times smaller in area than the IBL’s, putting extreme pressure on the readout chip. The RD53 collaboration led by CERN has worked for years on the development of an ASIC using a 65 nm feature size, which enables the huge amount of radiation-resistant electronics to fit within the pixel area, reaching the limit of 50 × 50 μm². Assembling these delicate modules, and dealing with the thermal stresses associated with the power dissipation in the warm ASICs mechanically coupled to the cold sensor chips, is still a challenge. These pixel tracking systems (comprising five layers of barrel and forward trackers) will amount to about 6 Gpixels – seven times larger than before. Beyond the fifth layer, conditions are sufficiently relaxed that microstrip tracking will still be adequate.
The latest experiment to upgrade from strips to pixels is LHCb, which has an impressive track record of b and charm physics. Its adventurous Vertex Locator (VELO) detector has 26 disks along the beamline, equipped with orthogonally oriented r and ϕ microstrips, starting from inside the beampipe about 8 mm from the LHC beam axis. LHCb has collected the world’s largest sample of charmed hadrons, and with the VELO has made a number of world-leading measurements including the discovery of CP violation in charm. LHCb is now statistics-limited for many rare decays and will ramp up its event samples with a major upgrade implemented in two stages (see State-of-the-art tracking for high luminosities).
For the first upgrade, due to begin operation early next year, the luminosity will increase by a factor of up to five, and the additional pattern-recognition challenge will be addressed by a new pixel detector incorporating 55 μm pixels and installed even closer (5.1 mm) to the beam axis. The pixel detector uses evaporative CO₂ microchannel cooling to allow operation under vacuum. LHCb will double its efficiency by removing the hardware trigger and reading out the data at the beam-crossing frequency of 40 MHz. The new “VeloPix” readout chip will achieve this with readout speeds of up to 20 Gb/s, and the software trigger will select heavy-flavour events based on full event reconstruction. For the second upgrade, due to begin in about 2032, the luminosity will be increased by a further factor of 7.5, allowing LHCb to eventually accumulate 10 times its current statistics. Under these conditions, there will be, on average, 40 interactions per beam crossing, which the collaboration plans to resolve by enhanced timing precision (around 20 ps) in the VELO pixels. The upgrade will require both an enhanced sensor and readout chip. This is an adventurous long-term R&D programme, and LHCb retain a fallback option with timing layers downstream of the VELO, if required.
Monolithic active pixels
Being monolithic, the architecture of MAPS is very similar to that of CCDs (see middle figure in “Pixel architectures” panel). The fundamental difference is that in a CCD, the signal charge is transported physically through some centimetres of silicon to a single charge-sensing circuit in the corner of the chip, while in a MAPS the communication between the signal charge and the outside world is via in-pixel electronics, with metal tracks to the edge of the chip. The MAPS architecture looked very promising from the beginning, as a route to solving the problems of both CCDs and hybrid pixels. With respect to CCDs, the radiation tolerance could be greatly increased by sensing the signal charge within its own pixel, instead of transporting it over thousands of pixels. The readout speed could also be dramatically increased by in-pixel amplitude discrimination, followed by sparse readout of only the hit pixels. With respect to hybrid pixel modules, the expense and complications of bump-bonded assemblies could be eliminated, and the tiny node capacitance opened the possibility of much thinner active layers than were needed with hybrids.
MAPS have emerged as an attractive option for a number of future tracking systems. They offer small pixels where needed (notably for inner-layer vertex detectors) and thin layers throughout the detector volume, thereby minimising multiple scattering and photon conversion, both in barrels and endcaps. Excess material in the forward region of tracking systems such as time-projection and drift chambers, with their heavy endplate structures, has in the past led to poor track reconstruction efficiency, loss of tracks due to secondary interactions, and excess photon conversions. In colliders at the energy frontier (whether pp or e⁺e⁻), however, interesting events for physics are often multi-jet, so there are nearly always one or more jets in the forward region.
The first MAPS devices contained little more than a collection diode, a front-end transistor operated as a source follower, reset transistor and addressing logic. They needed only relaxed charge-collection time, so diffusive collection sufficed. Sherwood Parker’s group demonstrated their capability for particle tracking in 1991, with devices processed in the Center for Integrated Systems at Stanford, operating in a Fermilab test beam. In the decades since, advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel. For fast operation, the active volume below the collection diode needs to be depleted, including in the corners of the pixels, to avoid loss of tracking efficiency.
The Strasbourg group led by Marc Winter has a long and distinguished record of MAPS development. As well as highly appreciated telescopes in test beams at DESY for general use, the group supplied its MIMOSA-28 devices for the first MAPS-based vertex detector: a 356 Mpixel two-layer barrel system for the STAR experiment at Brookhaven’s Relativistic Heavy Ion Collider. Operational for a three-year physics run starting in 2014, this detector enhanced the capability to look into the quark–gluon plasma, the extremely hot form of matter that characterised the birth of the universe.
Advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel
An ingenious MAPS variant developed by the Semiconductor Laboratory of the Max Planck Society – the Depleted P-channel FET (DEPFET) – is also serving as a high-performance vertex detector, part of which is already operating, in the Belle II detector at SuperKEKB in Japan. In the DEPFET, the signal charge drifts to a “virtual gate” located in a buried channel deeper than the current flowing in the sense transistor. As Belle II pushes to even higher luminosity, it is not yet clear which technology will deliver the required radiation hardness.
The small collection electrode of the standard MAPS pixel presents a challenge in terms of radiation hardness, since it is not easy to preserve full depletion after high levels of bulk damage. An important effort to overcome this was initiated in 2007 by Ivan Perić of KIT, in which the collection electrode is expanded to cover most of the pixel area, below the level of the CMOS electronics, so that the charge-collection path is much reduced. Impressive further developments have been made by groups at Bonn University and elsewhere. This approach has achieved high radiation resistance with the ATLASpix prototypes, for instance. However, the standard MAPS approach with a small collection electrode may be tunable to achieve the required radiation resistance, while preserving the advantage of superior noise performance due to the much lower sensor capacitance. Both approaches have strong backing from talented design groups, but the eventual outcome is unclear.
Advanced MAPS
Advanced MAPS devices were proposed for detectors at the International Linear Collider (ILC). In 2008 Konstantin Stefanov of the Open University suggested that MAPS chips could provide an overall tracking system of about 30 Gpixels with performance far beyond the baseline options at the time, which were silicon microstrips and a gaseous time-projection chamber. This development was shelved due to delays to the ILC, but the dream has become a reality in the MAPS-based tracking system for the ALICE detector at the LHC, which builds on the impressive ALPIDE chip development by Walter Snoeys and his collaborators. The ALICE ITS-2 system, with 12.5 Gpixels, sets the record for any pixel system (see ALICE tracks new territories). This beautiful tracker has operated smoothly on cosmic rays and is now being installed in the overall ALICE detector. The group is already pushing to upgrade the three central layers using wafer-scale stitching and curved sensors to significantly reduce the material budget. At the 2021 International Workshop on Future Linear Colliders held in March, the SiD concept group announced that they will switch to a MAPS-based tracking system. R&D for vertexing at the ILC is also being revived, including the possibility of CCDs making a comeback with advanced designs from the KEK group led by Yasuhiro Sugimoto.
The most ambitious goal for MAPS-based detectors is for the inner-layer barrels at ATLAS and CMS, during the second phase of the HL-LHC era, where smaller pixels would provide important advantages for physics. At the start of high-luminosity operation, these layers will be equipped with hybrid pixels of 25 × 100 μm² and 150 μm active thickness, the pixel area being limited by the readout chip, which is based on a 65 nm technology node. Encouraging work led by the CERN ATLAS and microelectronics groups and the Bonn group is underway, and could result in a MAPS option of 25 × 25 μm², requiring an active-layer thickness of only about 20 μm, using a 28 nm technology node. The improvement in tracking precision could be accompanied by a substantial reduction in power dissipation. The four-times greater pixel density would be more than offset by the reduction in operating voltage, plus the much smaller node capacitance. This route could provide greatly enhanced vertex detector performance at a time when the hybrid detectors will be coming to the end of their lives due to radiation damage. However, this is not yet guaranteed, and an evolution to stacked devices may be necessary. A great advantage of moving to monolithic or stacked devices is that the complex processes are then in the hands of commercial foundries that routinely turn out thousands of 12 inch wafers per week.
High-speed and stacked
During HL-LHC operations there is a need for ultra-fast tracking devices to ameliorate the pileup problems in ATLAS, CMS and LHCb. Designs with a timing precision of tens of picoseconds are advancing rapidly – initially low-gain avalanche diodes, pioneered by groups from Torino, Barcelona and UCSC, followed by other ultra-fast silicon pixel devices. There is a growing list of applications for these devices. For example, ATLAS will have a layer adjacent to the electromagnetic calorimeter in the forward region, where the pileup problems will be severe, and where coarse granularity (~1 mm pixels) is sufficient. LHCb is more ambitious for its stage-two upgrade, as already mentioned. There are several experiments in which such detectors have potential for particle identification, notably π/K separation by time-of-flight up to a momentum limit that depends on the scale of the tracking system, typically 8 GeV/c.
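The quoted momentum reach follows from standard time-of-flight kinematics (a rough estimate, not an experiment-specific design number): two species of the same momentum p separate in arrival time over a flight path L as

\[
  \Delta t \;=\; \frac{L}{c}\left(\sqrt{1+\frac{m_K^2 c^2}{p^2}}-\sqrt{1+\frac{m_\pi^2 c^2}{p^2}}\right)
  \;\approx\; \frac{L}{c}\,\frac{(m_K^2-m_\pi^2)\,c^2}{2\,p^2},
\]

which for a 6 m flight path amounts to roughly 35 ps at p = 8 GeV/c, comparable to the tens-of-picosecond resolutions now being targeted.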
Monolithic and hybrid pixel detectors answer many of the needs for particle tracking systems now and in the future. But there remain challenges, for example the innermost layers at ATLAS and CMS. In order to deliver the required vertexing capability for efficient, cleanly separated b and charm identification, we need pixels of dimensions about 25 × 25 μm, four times below the current goals for HL-LHC. They should also be thinner, down to say 20 μm, to preserve precision for oblique tracks.
Solutions to these problems, and similar challenges in the much bigger market of X-ray imaging, are coming into view with stacked devices, in which layers of CMOS-processed silicon are stacked and interconnected. The processing technique, in which wafers are bonded face-to-face, with electrical contacts made by direct-bond interconnects and through-silicon vias, is now a mature technology and is in the hands of leading companies such as Sony and Samsung. The CMOS imaging chips for phone cameras must be one of the most spectacular examples of modern engineering (see “Up close” figure).
Commercial CMOS image sensor development is a major growth area, with approximately 3000 patents per year. In future these developers, advancing to smaller-node chips, will add artificial intelligence, for example to take a number of frames of fast-moving subjects and deliver the best one to the user. Imagers under development for the automotive industry include those that will operate in the short-wavelength infrared region, where silicon is still sensitive. In this region, rain and fog are transparent, so a driverless car equipped with the technology will be able to travel effortlessly in the worst weather conditions.
While we developers of pixel imagers for science have not kept up with the evolution of stacked devices, several academic groups have over the past 15 years taken brave initiatives in this direction, most impressively a Fermilab/BNL collaboration led by Ron Lipton, Ray Yarema and Grzegorz Deptuch. This work was done before the technical requirements could be serviced by a single technology node, so they had to work with a variety of pioneering companies in concert with excellent in-house facilities. Their achievements culminated in three working prototypes, two for particle tracking and one for X-ray imaging, namely a beautiful three-tier stack comprising a thick sensor (for efficient X-ray detection), an analogue tier and a digital tier (see “Stacking for physics” figure).
The relatively recent term “technology node” embraces a number of aspects of commercial integrated circuit (IC) production. First and foremost is the feature size, which originally meant the minimum line width that could be produced by photolithography, for example the length of a transistor gate. With the introduction of novel transistor designs (notably the FinFET), this term has been generalised to indicate the functional density of transistors that is achievable. At the start of the silicon-tracker story, in the late 1970s, the feature size was about 3 µm. The current state-of-the-art is 5 nm, and the downward Moore’s law trend is continuing steadily, although such narrow lines would of course be far beyond the reach of photolithography. There are other aspects of ICs that are included in the description of any technology node. One is whether they support stitching, which means the production of larger chips by step-and-repeat of reticles, enabling the production of single devices of sizes 10 × 10 cm² and beyond, in principle up to the wafer scale (which these days is a diameter of 200 or 300 mm, evolving soon to 450 mm). Another is whether they support wafer stacking, which is the production of multi-layer sandwiches of thinned devices using various interconnect technologies such as through-silicon vias and direct-bond interconnects. A third aspect is whether they can be used for imaging devices, which implies optimised control of dark current and noise. For particle tracking, the most advanced technology nodes are unaffordable (the development cost of a single 5 nm ASIC is typically about $500 million, so it needs a large market). However, other features that are desirable and becoming essential for our needs (imaging capability, stitching and stacking) are widely available and less expensive. For example, Global Foundries, which produces 3.5 million wafers per annum, offers these capabilities at their 32 and 14 nm nodes.
For the HL-LHC inner layers, one could imagine a stacked chip comprising a thin sensor layer (with excellent noise performance enabled by an on-chip front-end circuit for each pixel), followed by one or more logic layers. Depending on the technology node, one should be able to fit all the logic (building on the functionality of the RD53 chip) in one or two layers of 25 × 25 μm pixels. The overall thickness could be 20 μm for the imaging layer, and 6 μm per logic layer, with a bottom layer sufficiently thick (~100 μm) to give the necessary mechanical stability to the relatively large stitched chips. The resulting device would still be thin enough for a high-quality vertex detector, and the thin planar sensor-layer pixels including front-end electronics would be amenable to full depletion up to the 10-year HL-LHC radiation dose.
There are groups in Japan (at KEK led by Yasuo Arai, and at RIKEN led by Takaki Hatsui) that have excellent track records for developing silicon-on-insulator devices for particle tracking and for X-ray detection, respectively. The RIKEN group is now believed to be collaborating with Sony to develop stacked devices for X-ray imaging. Given Sony’s impressive achievements in visible-light imaging, this promises to be extremely interesting. There are many applications (for example at ITER) where radiation-resistant X-ray imaging will be of crucial importance, so this is an area in which stacked devices may well own the future.
Outlook
The story of frontier pixel detectors is a bit like that of an art form – say cubism. With well-defined beginnings 50 years ago, it has blossomed into a vast array of beautiful creations. The international community of designers see few boundaries to their art, being sustained by the availability of stitched devices to cover large-area tracking systems, and moving into the third dimension to create the most advanced pixels, which are obligatory for some exciting physics goals.
Face-to-face wafer bonding is now a commercially mature technology
Just like the attribute of vision in the natural world, which started as a microscopic light-sensitive spot on the surface of a unicellular protozoan, and eventually reached one of its many pinnacles in the eye of an eagle, with its amazing “stacked” data processing behind the retina, silicon pixel devices are guaranteed to continue evolving to meet the diverse needs of science and technology. Will they one day be swept away, like photographic film or bubble chambers? This seems unthinkable at present, but history shows there’s always room for a new idea.
The original silicon pixel detector for CMS – comprising three barrel layers and two endcap disks – was designed for a maximum instantaneous luminosity of 10³⁴ cm⁻² s⁻¹ and a maximum average pile-up of 25. Following LHC upgrades in 2013–2014, it was replaced with an upgraded system (the CMS Phase-1 pixel detector) in 2017 to cope with higher instantaneous luminosities. With a lower mass and an additional barrel layer and endcap disk, it was an evolutionary upgrade maintaining the well-tested key features of the original detector while enabling higher-rate capability, improved radiation tolerance and more robust tracking. During Long Shutdown 2, maintenance work on the Phase-1 device included the installation of a new innermost layer (see “Present and future” image) to enable the delivery of high-quality data until the end of LHC Run 3.
During the next long shutdown, scheduled for 2025, the entire tracker detector will be replaced in preparation for the High-Luminosity LHC (HL-LHC). This Phase-2 pixel detector will need to cope with a pile-up and hit rate eight times higher than before, and with a trigger rate and radiation dose 7.5 and 10 times higher, respectively. To meet these extreme requirements, the CMS collaboration, in partnership with ATLAS via the RD53 collaboration, is developing a next-generation hybrid-pixel chip utilising 65 nm CMOS technology. The overall system is much bigger than the Phase-1 device (~5 m² compared to 1.75 m²) with vastly more read-out channels (~2 billion compared to 120 million). With six times smaller pixels, increased detection coverage, reduced material budget, a new readout chip to enable a lower detection threshold, and a design that continues to allow easy installation and removal, the state-of-the-art Phase-2 pixel detector will serve CMS well into the HL-LHC era.
LHCb’s Vertex Locator (VELO) has played a pivotal role in the experiment’s flavour-physics programme. Contributing to triggering, tracking and vertexing, and with a geometry optimised for particles traveling close to the beam direction, its 46 orthogonal silicon-strip half-disks have enabled the collaboration to pursue major results. These include the 2019 discovery of CP violation in charm using the world’s largest reconstructed samples of charm decays, a host of matter–antimatter asymmetry measurements and rare-decay searches, and the recent hints of lepton non-universality in B decays.
Placing the sensors as close as possible to the primary proton–proton interactions requires the whole VELO system to sit inside the LHC vacuum pipe (separated from the primary vacuum by a 1.1 m-long thin-walled “RF foil”), and a mechanical system to move the disks out of harm’s way during the injection and stabilisation of the beams. After more than a decade of service witnessing the passage of some 10²⁶ protons, the original VELO is now being replaced with a new one to prepare for a factor-five increase in luminosity for LHCb in LHC Run 3.
The entirety of the new VELO will be read out at a rate of 40 MHz, requiring a huge data bandwidth: up to 20 Gbits/s for the hottest ASICs, and 3 Tbit/s in total. Cooling using the minimum of material is another major challenge. The upgraded VELO will be kept at –20° via the novel technique of evaporative CO2 circulating in 120 × 200 µm channels within a silicon substrate (see “Fine structure” image, left). The harsh radiation environment also demands a special ASIC, the VeloPix, which has been developed with the CERN Medipix group and will allow the detector to operate a much more efficient trigger. To cope with increased occupancies at higher luminosity, the original silicon strips have been replaced with pixels. The new sensors (in the form of rectangles rather than disks) will be located even closer to the interaction point (5.1 mm versus the previous 8.2 mm for the first measured point), which requires the RF foil to sit just 3.5 mm from the beam and 0.9 mm from the sensors. The production of the foil was a huge technical achievement. It was machined from a solid-forged aluminium block with 98% of the material removed and the final shape machined to a thickness of 250 µm, with further chemical etching taking it to just 100 µm (see “Fine structure” image, right).
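To get a feel for what these bandwidth figures mean at a 40 MHz readout rate, here is an illustrative back-of-the-envelope sketch in Python; the 32 bits per hit is an assumed round number for the estimate, not the VeloPix data format:

    # Illustrative link-budget arithmetic for the VELO upgrade numbers quoted above.
    # The bits-per-hit figure is an assumed round number, not the VeloPix data format.

    bunch_crossing_rate_hz = 40e6      # 40 MHz readout
    hottest_asic_bandwidth = 20e9      # up to 20 Gbit/s for the hottest ASICs
    total_bandwidth = 3e12             # ~3 Tbit/s for the whole detector

    bits_per_hit = 32                  # assumption for the sketch
    hits_per_crossing_hottest = hottest_asic_bandwidth / (bunch_crossing_rate_hz * bits_per_hit)
    print(f"hottest ASIC: ~{hits_per_crossing_hottest:.0f} hits per bunch crossing")  # ~16

    equivalent_asics = total_bandwidth / hottest_asic_bandwidth
    print(f"total bandwidth equivalent to ~{equivalent_asics:.0f} 'hottest' ASICs")   # ~150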
Around half of the VELO-module production is complete, with the work shared between labs in the UK and the Netherlands (see “In production” image). Assembly of the 52 modules into the “hood”, which provides cooling, services and vacuum, is now under way, with installation in LHCb scheduled to start in August. The VELO Upgrade I is expected to serve LHCb throughout Run 3 and Run 4. Looking further to the future, the next upgrade will require the detector to operate with a huge jump in luminosity, where vertexing will pose a significant challenge. Proposals under consideration include a new “4D” pixel detector with time-stamp information per hit, which could conceivably be achieved by moving to a smaller CMOS node. At this stage, however, the collaboration is actively investigating all options, with detailed technical design reports expected towards the middle of the decade.
The ATLAS collaboration upgraded its original pixel detector in 2014, adding an innermost layer to create a four-layer device. The new layer featured a much smaller pixel pitch, 3D sensors at large angles and CO2 cooling, and the upgraded pixel tracker will continue to serve ATLAS throughout LHC Run 3. Like CMS, the collaboration has long been working towards the replacement of the full inner tracker during the next long shutdown expected in 2025, in preparation for HL-LHC operations. The innermost layers of this state-of-the-art all-silicon tracker, called the ITk, will be built from pixel detectors with an area almost 10 times larger than that of the current device. With 13 m2 of active silicon across five barrel layers and two end caps, the pixel detector will contribute to precision tracking up to a pseudorapidity |η| = 4, with the innermost two layers expected to be replaced a few years into the HL-LHC era, and the outermost layers designed to last the lifetime of the project. Most of the detector will use planar silicon sensors, with 3D sensors (which are more radiation-hard and less power-hungry) in the innermost layer. Like the CMS Phase-2 pixel upgrade, the sensors will be read out by new chips being developed by the RD53 collaboration, with support structures made of low-mass carbon materials and cooling provided by evaporative CO2 flowing in thin-walled pipes. The device will have a total of 5.1 Gpixels (55 times more than the current one), and the very high expected HL-LHC data rates, especially in the innermost layers, will require the development of new technologies for high-bandwidth transmission and handling. The ITk pixel detector is now in the final stages of R&D and moving into production. After that, the subdetectors assembled at ATLAS institutes worldwide will be integrated on the surface at CERN before final installation underground.
Today, the tools of experimental particle physics are ubiquitous in hospitals and biomedical research. Particle beams damage cancer cells; high-performance computing infrastructures accelerate drug discoveries; computer simulations of how particles interact with matter are used to model the effects of radiation on biological tissues; and a diverse range of particle-physics-inspired detectors, from wire chambers to scintillating crystals to pixel detectors, all find new vocations imaging the human body.
CERN has actively pursued medical applications of its technologies since as far back as the 1970s. At that time, knowledge transfer happened – mostly serendipitously – through the initiative of individual researchers. An eminent example is Georges Charpak, a detector physicist of outstanding creativity who invented the Nobel-prize-winning multiwire proportional chamber (MWPC) at CERN in 1968. The MWPC’s ability to record millions of particle tracks per second opened a new era for particle physics (CERN Courier December 1992 p1). But Charpak strove to ensure that the technology could also be used outside the field – for example in medical imaging, where its sensitivity promised to reduce radiation doses during imaging procedures – and in 1989 he founded a company that developed an imaging technology for radiography, which is currently deployed in orthopaedic applications. Following his example, CERN has continued to build a culture of entrepreneurship ever since.
Triangulating tumours
Since as far back as the 1950s, a stand-out application for particle-physics detector technology has been positron-emission tomography (PET) – a “functional” technique that images changes in the metabolic process rather than anatomy. The patient is injected with a compound carrying a positron-emitting isotope, which accumulates in areas of the body with high metabolic activity (the uptake of glucose, for example, could be used to identify a malignant tumour). Pairs of back-to-back 511 keV photons are detected when a positron annihilates with an electron in the surrounding matter, allowing the tumour to be triangulated.
Pioneering developments in PET instrumentation took place in the 1970s. While most scanners were based on scintillating crystals, the work done with wire chambers at the University of California at Berkeley inspired CERN physicists David Townsend and Alan Jeavons to use high-density avalanche chambers (HIDACs) – Charpak’s detector plus a photon-conversion layer. In 1977, with the participation of CERN radiobiologist Marilena Streit-Bianchi, this technology was used to create some of the first PET images, most famously of a mouse. The HIDAC detector later contributed significantly to 3D PET image reconstruction, while a prototype partial-ring tomograph developed at CERN was a forerunner for combined PET and computed tomography (CT) scanners. Townsend went on to work at the Cantonal Hospital in Geneva and then in the US, where his group helped develop the first PET/CT scanner, which combines functional and anatomic imaging.
Crystal clear
In the onion-like configuration of a collider detector, an electromagnetic calorimeter often surrounds a descendant of Charpak’s wire chambers, causing photons and electrons to cascade and measuring their energy. In 1991, to tackle the challenges posed by future detectors at the LHC, the Crystal Clear collaboration was formed to study innovative scintillating crystals suitable for electromagnetic calorimetry. Since its early years, Crystal Clear has also sought to apply the technology to other fields, including healthcare. Several breast, pancreas, prostate and animal-dedicated PET scanner prototypes have since been developed, and the collaboration continues to push the limits of coincidence-time resolution for time-of-flight (TOF) PET.
In TOF–PET, the difference between the arrival times of the two back-to-back photons is recorded, allowing the location of the annihilation along the axis connecting the detection points to be pinned down. Better time resolution therefore improves image quality and reduces the acquisition time and radiation dose to the patient. Crystal Clear continues this work to this day through the development of innovative scintillating-detector concepts, including at a state-of-the-art laboratory at CERN.
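The underlying geometry is simple: a time difference Δt between the two photons shifts the reconstructed annihilation point from the midpoint of the line of response by cΔt/2. The short Python sketch below illustrates this; the 200 ps coincidence-time resolution is an assumed example value, not a figure from the text:

    # TOF-PET localisation: a time difference dt between the two 511 keV photons
    # places the annihilation point c*dt/2 away from the midpoint of the line
    # of response. The timing resolution used here is purely illustrative.

    c = 2.998e8                      # speed of light, m/s

    def tof_offset_mm(dt_s):
        """Offset of the annihilation point from the line-of-response midpoint."""
        return 0.5 * c * dt_s * 1e3

    coincidence_time_resolution = 200e-12      # 200 ps, assumed for the example
    print(f"position uncertainty ~ {tof_offset_mm(coincidence_time_resolution):.0f} mm")  # ~30 mm

By the same arithmetic, a 10 ps resolution would pin the annihilation point down to about 1.5 mm along the line of response, which is why pushing coincidence-time resolution pays off so directly in image quality and dose.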
The dual aims of the collaboration have led to cross-fertilisation, whereby the work done for high-energy physics spills over to medical imaging, and vice versa. For example, the avalanche photodiodes developed for the CMS electromagnetic calorimeter were adapted for the ClearPEM breast-imaging prototype, and technology developed for detecting pancreatic and prostate cancer (EndoTOFPET-US) inspired the “barrel timing layer” of crystals that will instrument the central portion of the CMS detector at the HL-LHC.
Pixel perfect
In the same 30-year period, the family of Medipix and Timepix read-out chips has arguably made an even bigger impact on med-tech and other application fields, becoming one of CERN’s most successful technology-transfer cases. Developed with the support of four successive Medipix collaborations, involving a total of 37 research institutes, the technology is inspired by the high-resolution hybrid pixel detectors initially developed to address the challenges of particle tracking in the innermost layers of the LHC experiments. In hybrid detectors, the sensor array and the read-out chip are manufactured independently and later coupled by a bump-bonding process. This means that a variety of sensors can be connected to the Medipix and Timepix chips, according to the needs of the end user.
The first Medipix chip produced in the 1990s by the Medipix1 collaboration was based on the front-end architecture of the Omega3 chip used by the half-million-pixel tracker of the WA97 experiment, which studied strangeness production in lead–ion collisions. The upgraded Medipix1 chip also included a counter per pixel. This demonstrated that the chips could work like a digital camera, providing high-resolution, high-contrast and noise-hit-free images, making them uniquely suitable for medical applications. The Medipix2 collaboration improved the spatial resolution and produced a modified version called Timepix, which offers time or amplitude measurements in addition to hit counting. Medipix3 and Timepix3 then allowed the energy of each individual photon to be measured – Medipix3 allocates incoming hits to energy bins in each pixel, providing colour X-ray images, while Timepix3 times hits with a precision of 1.6 ns, and sends the full hit data – coordinate, amplitude and time – off chip. Most recently, the Medipix4 collaboration, which was launched in 2016, is designing chips that can seamlessly cover large areas, and is developing new read-out architectures, thanks to the possibility of tiling the chips on all four sides.
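The step from plain hit counting to per-pixel energy binning is what turns a black-and-white X-ray image into a “colour” one. The minimal Python sketch below illustrates the idea only; the energy bins and the hit list are invented for the example and do not reflect the actual Medipix3 architecture or data format:

    # Minimal illustration of photon-counting versus spectral (energy-binned)
    # imaging, in the spirit of Medipix/Medipix3. Thresholds, energies and the
    # event list are invented for the example; this is not the chip's data format.
    from collections import defaultdict

    energy_bins_keV = [(10, 30), (30, 60), (60, 120)]   # assumed bin edges

    counts = defaultdict(int)          # plain photon counting: one number per pixel
    spectral = defaultdict(lambda: [0] * len(energy_bins_keV))  # counts per bin per pixel

    hits = [((12, 40), 25.0), ((12, 40), 72.0), ((13, 41), 45.0)]  # (pixel, energy/keV)

    for pixel, energy in hits:
        counts[pixel] += 1
        for i, (lo, hi) in enumerate(energy_bins_keV):
            if lo <= energy < hi:
                spectral[pixel][i] += 1
                break

    print(dict(counts))     # a black-and-white image: total hits per pixel
    print(dict(spectral))   # a 'colour' image: an energy spectrum per pixel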
Medipix and Timepix chips find applications in widely varied fields, from medical imaging to cultural heritage, space dosimetry, materials analysis and education. The industrial partners and licence holders commercialising the technology range from established enterprises to start-up companies. In the medical field, the technology has been applied to X-ray CT prototype systems for digital mammography, CT imagers for mammography, and beta- and gamma-autoradiography of biological samples. In 2018 the first 3D colour X-ray images of human extremities were taken by a scanner developed by MARS Bioimaging Ltd, using the Medipix3 technology. By analysing the spectrum recorded in each pixel, the scanner can distinguish multiple materials in a single scan, opening up a new dimension in medical X-ray imaging: with this chip, images are no longer black and white, but in colour (see “Colour X-ray” image).
Although the primary aim of the Timepix3 chip was applications outside of particle physics, its development also led directly to new solutions in high-energy physics, such as the VeloPix chip for the ongoing LHCb upgrade, which permits data-driven, trigger-free operation for the first time in a pixel vertex detector in a high-rate experiment.
Dosimetry
CERN teams are also exploring the potential uses of Medipix technology in dosimetry. In 2019, for example, Timepix3 was employed to determine the exposure of medical personnel to ionising radiation in an interventional radiology theatre at Christchurch Hospital in New Zealand. The chip was able to map the radiation fluence and energy spectrum of the scattered photon field that reaches the practitioners, and can also provide information about which parts of the body are most exposed to radiation.
Meanwhile, “GEMPix” detectors are being evaluated for use in quality assurance in hadron therapy. GEMPix couples gas electron multipliers (GEMs) – a type of gaseous ionisation detector developed at CERN – with the Medipix integrated circuit as readout to provide a hybrid device capable of detecting all types of radiation with a high spatial resolution. Following initial results from tests on a carbon-ion beam performed at the National Centre for Oncological Hadrontherapy (CNAO) in Pavia, Italy, a large-area GEMPix detector with an innovative optical read-out is now being developed at CERN in collaboration with the Holst Centre in the Netherlands. A version of the GEMPix called GEMTEQ is also currently under development at CERN for use in “microdosimetry”, which studies the temporal and spatial distributions of absorbed energy in biological matter to improve the safety and effectiveness of cancer treatments.
As a publicly funded laboratory, CERN has a remit, in addition to its core mission to perform fundamental research in particle physics, to expand the opportunities for its technology and expertise to deliver tangible benefits to society. The CERN Knowledge Transfer group strives to maximise the impact of CERN technologies and know-how on society in many ways, including through the establishment of partnerships with clinical, industrial and academic actors, support to budding entrepreneurs and seed funding to CERN personnel.
Supporting the knowledge-transfer process from particle physics to medical research and the med-tech industry is a promising avenue to boost healthcare innovation and provide solutions to present and future health challenges. CERN has provided a framework for the application of its technologies to the medical domain through a dedicated strategy document approved by its Council in June 2017. CERN will continue its efforts to maximise the impact of the laboratory’s know-how and technologies on the medical sector.
Two further dosimetry applications illustrate how technologies developed for CERN’s needs have expanded into commercial medical applications. The B-RAD, a hand-held radiation survey meter designed to operate in strong magnetic fields, was developed by CERN in collaboration with the Polytechnic of Milan and is now available off-the-shelf from an Italian company. Originally conceived for radiation surveys around the LHC experiments and inside ATLAS with the magnetic field on, it has found applications in several other tasks, such as radiation measurements on permanent magnets, radiation surveys at PET-MRI scanners and at MRI-guided radiation therapy linacs. Meanwhile, the radon dose monitor (RaDoM) tackles exposure to radon, a natural radioactive gas that is the second leading cause of lung cancer after smoking. The RaDoM device directly estimates the dose by reproducing the energy deposition inside the lung instead of deriving the dose from a measurement of radon concentration in air; CERN also developed a cloud-based service to collect and analyse the data, to control the measurements and to drive mitigation measures based on real-time data. The technology is licensed to the CERN spin-off BAQ.
Cancer treatments
Having surveyed the medical applications of particle detectors, we turn to the technology driving the beams themselves. Radiotherapy is a mainstay of cancer treatment, using ionising radiation to damage the DNA of cancer cells. In most cases, a particle accelerator is used to generate a therapeutic beam. Conventional radiation therapy uses X-rays generated by a linac, and is widely available at relatively low cost.
Medipix and Timepix read-out chips have become one of CERN’s most successful technology-transfer cases
Radiotherapy with protons was first proposed by Fermilab’s founding director Robert Wilson in 1946 while he was at Berkeley, and interest in the use of heavier ions such as carbon arose soon after. While X-rays lose energy roughly exponentially as they penetrate tissue, protons and other ions deposit almost all of their energy in a sharp “Bragg” peak at the very end of their path, enabling the dose to be delivered to the tumour target while sparing the surrounding healthy tissues. Carbon ions have the additional advantage of a higher radiobiological effectiveness, and can control tumours that are radio-resistant to X-rays and protons. Widespread adoption of hadron therapy is, however, limited by the cost and complexity of the required infrastructures, and by the need for more pre-clinical and clinical studies.
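The reason the beam energy sets the treatment depth is that the range of the Bragg peak scales strongly with energy. A rough Python illustration using the empirical Bragg–Kleeman rule, R ≈ αE^p, is sketched below; the coefficient and exponent are commonly quoted approximate values for protons in water, not parameters from this article, and the sketch is for illustration only, not treatment planning:

    # Approximate proton range in water from the empirical Bragg-Kleeman rule,
    # R ~ alpha * E**p. The constants are commonly quoted approximate values for
    # water and are used here only to illustrate the scaling.

    ALPHA_CM = 0.0022   # cm / MeV**p, approximate
    P_EXP = 1.77        # dimensionless, approximate

    def proton_range_cm(energy_MeV):
        return ALPHA_CM * energy_MeV ** P_EXP

    for e in (70, 150, 230):                      # typical therapy energies, MeV
        print(f"{e:3d} MeV -> range ~ {proton_range_cm(e):.1f} cm in water")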
PIMMS and NIMMS
Between 1996 and 2000, on the initiative of Ugo Amaldi, Meinhard Regler and Phil Bryant, CERN hosted the Proton-Ion Medical Machine Study (PIMMS). PIMMS produced and made publicly available an optimised design for a cancer-therapy synchrotron capable of using both protons and carbon ions. After further enhancement by Amaldi’s TERA foundation, and with seminal contributions from the Italian research organisation INFN, the PIMMS concept evolved into the accelerator at the heart of the CNAO hadron therapy centre in Pavia. The MedAustron centre in Wiener Neustadt, Austria, was then based on the CNAO design. CERN continues to collaborate with CNAO and MedAustron by sharing its expertise in accelerator and magnet technologies.
In the 2010s, CERN teams put to use the experience gained in the construction of Linac 4, which became the source of proton beams for the LHC in 2020, and developed an extremely compact high-frequency radio-frequency quadrupole (RFQ) to be used as injector for a new generation of high-frequency, compact linear accelerators for proton therapy. The RFQ accelerates the proton beam to 5 MeV after only 2 m, and operates at 750 MHz – almost double the frequency of conventional RFQs. A major advantage of using linacs for proton therapy is the possibility of changing the energy of the beam, and hence the depth of treatment in the body, from pulse to pulse by switching off some of the accelerating units. The RFQ technology was licensed to the CERN spin-off ADAM, now part of AVO (Advanced Oncotherapy), and is being used as an injector for a breakthrough linear proton therapy machine at the company’s UK assembly and testing centre at STFC’s Daresbury Laboratory.
In 2019 CERN launched the Next Ion Medical Machine Study (NIMMS) to develop cutting-edge accelerator technologies for a new generation of compact and cost-effective ion-therapy facilities. The goal is to propel the use of ion therapy, given that proton installations are already commercially available and that only four ion centres exist in Europe, all based on bespoke solutions.
NIMMS is organised along four different lines of activities. The first aims to reduce the footprint of facilities by developing new superconducting magnet designs with large apertures and curvatures, and for pulsed operation. The second is the design of a compact linear accelerator optimised for installation in hospitals, which includes an RFQ based on the design of the proton therapy RFQ, and a novel source for fully stripped carbon ions. The third concerns two innovative gantry designs, with the aim of reducing the size, weight and complexity of the massive magnetic structures that allow the beam to reach the patient from different angles: the SIGRUM lightweight rotational gantry originally proposed by TERA, and the GaToroid gantry invented at CERN, which eliminates the need to mechanically rotate the structure by using a toroidal magnet (see figure “GaToroid”). Finally, new high-current synchrotron designs will be developed to reduce the cost and footprint of facilities while reducing the treatment time compared to present European ion-therapy centres: these will include a superconducting and a room-temperature option, and advanced features such as multi-turn injection for 10¹⁰ particles per pulse, fast and slow extraction, and multiple-ion operation. Through NIMMS, CERN is contributing to the efforts of a flourishing European community, and a number of collaborations have already been established.
Another recent example of frontier radiotherapy techniques is the collaboration with Switzerland’s Lausanne University Hospital (CHUV) to build a new cancer therapy facility that would deliver high doses of radiation from very-high-energy electrons (VHEE) in milliseconds instead of minutes. The goal here is to exploit the so-called FLASH effect, wherein radiation doses administered over short time periods appear to damage tumours more than healthy tissue, potentially minimising harmful side-effects. This pioneering installation will be based on the high-gradient accelerator technology developed for the proposed CLIC electron–positron collider. Various research teams have been performing biomedical research related to VHEE and FLASH at the CERN Linear Electron Accelerator for Research (CLEAR), one of the few facilities available for characterising VHEE beams.
Radioisotopes
CERN’s accelerator technology is also deployed in a completely different way to produce innovative radioisotopes for medical research. In nuclear medicine, radioisotopes are used both for internal radiotherapy and for diagnosis of cancer and other diseases, and progress has always been connected to the availability of novel radioisotopes. Here, CERN has capitalised on the experience of its ISOLDE facility, which during the past 30 years has used the proton beam from the CERN PS Booster to produce 1300 different isotopes from 73 chemical elements for research ranging from nuclear physics to the life sciences. A new facility, called ISOLDE-MEDICIS, is entirely dedicated to the production of unconventional radioisotopes with the right properties to enhance the precision of both patient imaging and treatment. In operation since late 2017, MEDICIS will expand the range of radioisotopes available for medical research – some of which can be produced only at CERN – and send them to partner hospitals and research centres for further studies. During its 2019 and 2020 harvesting campaigns, for example, MEDICIS demonstrated the capability of purifying isotopes such as ¹⁶⁹Er or ¹⁵³Sm to new purity grades, making them suitable for innovative treatments such as targeted radioimmunotherapy.
Data handling and simulations
The expertise of particle physicists in data handling and simulation tools is also increasingly finding applications in the biomedical field. The FLUKA and Geant4 simulation toolkits, for example, are being used in several applications, from detector modelling to treatment planning. Recently, CERN contributed its know-how in large-scale computing to the BioDynaMo collaboration, initiated by CERN openlab together with Newcastle University, which initially aimed to provide a standardised, high-performance and open-source platform to support complex biological simulations (see figure “Computational neuroscience”). By hiding its computational complexity, BioDynaMo allows researchers to easily create, run and visualise 3D agent-based simulations. It is already used by academia and industry to simulate cancer growth, accelerate drug discoveries and simulate how the SARS-CoV-2 virus spreads through the population, among other applications, and is now being extended beyond biological simulations to visualise the collective behaviour of groups in society.
The expertise of particle physicists in data handling and simulation tools is increasingly finding applications in the biomedical field
Many more projects related to medical applications are in their initial phases. The breadth of knowledge and skills available at CERN was also evident during the COVID-19 pandemic when the laboratory contributed to the efforts of the particle-physics community in fields ranging from innovative ventilators to masks and shields, from data management tools to open-data repositories, and from a platform to model the concentration of viruses in enclosed spaces to epidemiologic studies and proximity-sensing devices, such as those developed by Terabee.
Fundamental research has a priceless goal: knowledge for the sake of knowledge. The theories of relativity and quantum mechanics were considered abstract and esoteric when they were developed; a century later, we owe to them the remarkable precision of GPS systems and the transistors that are the foundation of the electronics-based world we live in. Particle-physics research acts as a trailblazer for disruptive technologies in the fields of accelerators, detectors and computing. Even though their impact is often difficult to track as it is indirect and diffused over time, these technologies have already greatly contributed to the advances of modern medicine and will continue to do so.
In the coming decade, the study of nucleus–nucleus, proton–nucleus and proton–proton collisions at the LHC will offer rich opportunities for a deeper exploration of the quark–gluon plasma (QGP). An expected 10-fold increase in the number of lead–lead (Pb–Pb) collisions should both increase the precision of measurements of known probes of the QGP medium and give access to new ones. Studies of rare probes down to very low transverse momentum, such as heavy-flavour particles, quarkonium states and real and virtual photons, as well as of jet quenching and exotic heavy nuclear states, will require very large data samples.
To seize these opportunities, the ALICE collaboration has undertaken a major upgrade of its detectors to increase the event readout, online data processing and recording capabilities by nearly two orders of magnitude (CERN Courier January/February 2019 p25). This will allow Pb–Pb minimum-bias events to be recorded at rates in excess of 50 kHz, which is the expected Pb–Pb interaction rate at the LHC in Run 3, as well as proton–lead (p–Pb) and proton–proton (pp) collisions at rates of about 500 kHz and 1 MHz, respectively. In addition, the upgrade will improve the ability of the ALICE detector to distinguish secondary vertices of particle decays from the interaction vertex and to track very low transverse-momentum particles, allowing measurements of heavy-flavour hadrons and low-mass dileptons with unprecedented precision and down to zero transverse momentum.
These ambitious physics goals have motivated the development of an entirely new inner tracking system, ITS2. Starting from LHC Run 3 next year, the ITS2 will allow pp and Pb–Pb collisions to be read out 100 and 1000 times more quickly than was possible in previous runs, offering superior ability to measure particles at low transverse momenta (see “High impact” figure). Moreover, the inner three layers of the ITS2 feature a material budget three times lower than the original detector, which is also important for improving the tracking performance at low transverse momentum.
With its 10 m2 of active silicon area and nearly 13 billion pixels, the ITS2 is the largest pixel detector ever built. It is also the first detector at the LHC to use monolithic active pixel sensors (MAPS), instead of the more conventional and well-established hybrid pixels and silicon microstrips.
Change of scale
The particle sensors and the associated read-out electronics used for vertexing and tracking detection systems in particle-physics experiments have very demanding requirements in terms of granularity, material thickness, readout speed and radiation hardness. The development of sensors based on silicon-semiconductor technology and read-out integrated circuits based on CMOS technology revolutionised the implementation of such detection systems. The development of silicon microstrips, already successfully used at the Large Electron-Positron (LEP) collider, and, later, the development of hybrid pixel detectors, enabled the construction of tracking and vertexing detectors that meet the extreme requirements – in terms of particle rates and radiation hardness – set by the LHC. As a result, silicon microstrip and pixel sensors are at the heart of the particle-tracking systems in most particle-physics experiments today.
Nevertheless, compromises exist in the implementation of this technology. Perhaps the most significant is the interface between the sensor and the readout electronics, which are typically separate components. To go beyond these limitations and construct detection systems with higher granularity and less material thickness requires the development of new technology. The optimal way to achieve this is to integrate both sensor and readout electronics to create a single detection device. This is the approach taken with CMOS active pixel sensors (APSs). Over the past 20 years, extensive R&D has been carried out on CMOS APSs, making this a viable option for vertexing and tracking detection systems in particle and nuclear physics, although their performance in terms of radiation hardness is not yet at the level of hybrid pixel detectors.
ALPIDE, which is the result of an intensive R&D effort, is the building block of the ALICE ITS2
The first large-scale application of CMOS APS technology in a collider experiment was the STAR PXL detector at Brookhaven’s Relativistic Heavy-Ion Collider in 2014 (CERN Courier October 2015 p6). The ALICE ITS2 has benefitted from significant R&D since then, in particular concerning the development of a more advanced CMOS imaging sensor, named ALPIDE, with a minimum feature size of 180 nm. This has led to a significant improvement in the field of MAPS for single-particle detection, reaching unprecedented performance in terms of signal/noise ratio, spatial resolution, material budget and readout speed.
ALPIDE sensors
ALPIDE, which is the result of an intensive R&D effort carried out by ALICE over the past eight years, is the building block of the ALICE ITS2. The chip is 15 × 30 mm2 in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its very low power consumption (< 40 mW/cm2) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.
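From these figures the pixel pitch follows directly, and the quoted ~5 μm resolution can be compared with the naive pitch/√12 limit of a purely binary readout; the short sketch below ignores the chip periphery, and the improvement beyond the naive limit comes from charge sharing between neighbouring pixels and cluster-centre reconstruction:

    # Pixel pitch and naive resolution estimate from the ALPIDE figures above
    # (chip periphery ignored for simplicity).
    import math

    chip_x_mm, chip_y_mm = 30.0, 15.0      # 15 x 30 mm2 chip
    n_cols, n_rows = 1024, 512

    pitch_x_um = chip_x_mm * 1000 / n_cols     # ~29 um
    pitch_y_um = chip_y_mm * 1000 / n_rows     # ~29 um
    binary_resolution_um = pitch_x_um / math.sqrt(12)   # ~8.5 um for binary readout

    print(f"pitch ~ {pitch_x_um:.1f} x {pitch_y_um:.1f} um")
    print(f"naive binary resolution ~ {binary_resolution_um:.1f} um")
    # The measured ~5 um comes from charge sharing and cluster-centre finding.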
In ALPIDE the sensitive volume is a 25 μm-thick layer of high-resistivity p-type silicon (> 1 kΩ cm) grown epitaxially on top of a standard (low-resistivity) CMOS wafer (see “ALPIDE journeys” figure). The electric charge generated by particles traversing the sensitive volume is collected by an array of n–p diodes reverse-biased with a positive potential (~1 V) applied on the n-well electrode and a negative potential (down to a minimum of –6 V) applied to the substrate (backside). The possibility of varying the reverse-bias voltage in the range 1 to 7 V allows control over the size of the depleted volume (the fraction of the sensitive volume where the charge is collected by drift due to the presence of an electric field) and, correspondingly, the charge-collection time. Measurements carried out on sensors with characteristics identical to ALPIDE have shown an average charge-collection time consistently below 15 ns for a typical reverse-bias voltage of 4 V. Applying reverse substrate bias to the ALPIDE sensor also increases the tolerance to non-ionising energy loss to well beyond 10¹³ 1 MeV neq/cm², which is largely sufficient to meet ALICE’s requirements.
Another important feature of ALPIDE is the use of a p-well to shield the full CMOS circuitry from the epitaxial layer. Only the n-well collection electrode is not shielded. The deep p-well prevents all other n-wells – which contain circuitry – from collecting signal charge from the epitaxial layer, and therefore allows the use of full CMOS and consequently more complex readout circuitry in the pixel. ALICE is the first experiment where this has been used to implement a MAPS with a pixel front-end (amplifier and discriminator) and a sparsified readout within the pixel matrix similar to hybrid sensors. The low capacitance of the small collection electrode (about 2 × 2 μm2), combined with a circuit that performs sparsified readout within the matrix without a free-running clock, keeps the power consumption as low as 40 nW per pixel.
The ITS2 consists of seven layers covering a radial extension from 22 to 430 mm with respect to the beamline (see “Cylindrical structure” figure). The innermost three layers form the inner barrel (IB), while the middle two and the outermost two layers form the outer barrel (OB). The radial position of each layer was optimised to achieve the best combined performance in terms of pointing resolution, momentum resolution and tracking efficiency in the expected high track-density environment of a Pb–Pb collision. It covers a pseudo-rapidity range |η| < 1.22 for 90% of the most luminous beam interaction region, extending over a total surface of 10 m2 and containing about 12.5 Gpixels with binary readout, and is operated at room temperature using water cooling.
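Pseudorapidity is related to the polar angle θ measured from the beamline by η = −ln tan(θ/2), so the quoted acceptance can be translated into a simple geometric picture. The Python sketch below assumes a point-like interaction region at the nominal vertex, a simplification of the 90%-of-luminous-region definition used above:

    # Convert the ITS2 acceptance |eta| < 1.22 into a polar angle and into the
    # z-extent of the innermost layer (radius 22 mm), assuming for simplicity a
    # point-like interaction region at the nominal vertex.
    import math

    def theta_from_eta(eta):
        return 2.0 * math.atan(math.exp(-eta))

    eta_max = 1.22
    r_inner_mm = 22.0

    theta = theta_from_eta(eta_max)                     # polar angle w.r.t. beamline
    z_extent_mm = r_inner_mm / math.tan(theta)          # z at which such a track crosses the layer

    print(f"theta(|eta| = 1.22) ~ {math.degrees(theta):.1f} deg")        # ~33 deg
    print(f"innermost layer covers |z| up to ~ {z_extent_mm:.0f} mm")    # ~34 mm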
Given the small size of the ALPIDE (4.5 cm2), sensors are tiled-up to form the basic detector unit, which is called a stave. It consists of a “space-frame” (a carbon-fibre mechanical support), a “cold plate” (a carbon ply embedding two cooling pipes) and a hybrid integrated circuit (HIC) assembly in which the ALPIDE chips are glued and electrically connected to a flexible printed circuit. An IB HIC and an OB HIC include one row of nine chips and two rows of seven chips, respectively. The HICs are glued to the mechanical support: 1 HIC for the IB and 8 or 14 HICs for the two innermost and two outermost layers of the OB, respectively (see “State of the art” figure).
Zero-suppressed hit data are transmitted from the staves to a system of about 200 readout boards located 7 m away from the detector. Data are transmitted serially at bit rates of up to 1.2 Gb/s over more than 3800 twin-axial cables, reaching an aggregate bandwidth of about 2 Tb/s. The readout boards aggregate the data and re-transmit them over 768 optical-fibre links to the first-level processors of the combined online/offline (O2) computing farm. The data are then sequenced in frames, each containing the hit information of the collisions occurring in contiguous time intervals of constant duration, typically 22 μs.
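Combining the 22 μs frame length with the 50 kHz Pb–Pb interaction rate quoted earlier shows why continuous readout is natural here: on average only about one collision falls in each frame, with a Poisson spread around that mean. An illustrative calculation:

    # Average number of Pb-Pb interactions per 22 us readout frame at 50 kHz,
    # and the Poisson probabilities around that mean. Purely illustrative.
    import math

    interaction_rate_hz = 50e3     # expected Pb-Pb rate in Run 3
    frame_length_s = 22e-6         # time-frame duration quoted above

    mean = interaction_rate_hz * frame_length_s        # ~1.1 collisions per frame
    print(f"mean collisions per frame ~ {mean:.2f}")

    for k in range(4):
        p = math.exp(-mean) * mean**k / math.factorial(k)
        print(f"P({k} collisions in a frame) ~ {p:.2f}")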
The process and procedures to build the HICs and staves are rather complex and time-intensive. More than 10 construction sites distributed worldwide worked together to develop the assembly procedure and to build the components. More than 120 IB and 2500 OB HICs were built using a custom-made automatic module-assembly machine, implementing electrical testing, dimension measurement, integrity inspection and alignment for assembly. A total of 96 IB staves, enough to build two copies of the three IB layers, and a total of 160 OB staves, including 20% spares, have been assembled.
A large cleanroom was built at CERN for the full detector assembly and commissioning activities. Here the same backend system that will be used in the experiment was installed, including the powering system, cooling system, and the full readout and trigger chains. Staves were installed on the mechanical support structures to form layers, and the layers were assembled into half-barrels: top and bottom halves of the IB (layers 0, 1 and 2) and of the OB (layers 3, 4, 5 and 6). Each stave was then connected to the power-supply and readout systems. The commissioning campaign started in May 2019 to fully characterise and calibrate all the detector components, and installation of both the OB and the IB was completed in May this year.
Physics ahead
After nearly 10 years of R&D, the upgrade of the ALICE experimental apparatus – which includes an upgraded time projection chamber, a new muon forward tracker, a new fast-interaction trigger detector, a forward diffraction detector, new readout electronics and an integrated online–offline computing system – is close to completion. Most of the new or upgraded detectors, including the ITS2, have already been installed in the experimental area and the global commissioning of the whole apparatus will be completed this year, well before the start of Run 3, which is scheduled for the spring of 2022.
The significant enhancements to the performance of the ALICE detector will enable the exploration of new phenomena
The significant enhancements to the performance of the ALICE detector will enable detailed, quantitative characterisation of the high-density, high-temperature phase of strongly interacting matter, together with the exploration of new phenomena. The ITS2 is at the core of this programme. With improved pointing resolution and tracking efficiency at low transverse momentum, it will enable the determination of the total production cross-section of the charm quark. This is fundamental for understanding the interplay between the production of charm quarks in the initial hard scattering, their energy loss in the QGP and possible in-medium thermal production. Moreover, the ITS2 will also make it possible to measure a larger number of different charmed and beauty hadrons, including baryons, opening the possibility for determining the heavy-flavour transport coefficients. A third area where the new ITS will have a major impact is the measurement of electron–positron pairs emitted as thermal radiation during all stages of the heavy-ion collision, which offer an insight into the bulk properties and space–time evolution of the QGP.
More in store
The full potential of the ALPIDE chip underpinning the ITS2 is yet to be fully exploited. For example, ALICE has explored a variant of ALPIDE with an additional low-dose, deep n-type implant that creates a planar junction in the epitaxial layer below the wells containing the CMOS circuitry. This yields much faster charge collection and significantly improves the radiation hardness, paving the way for much more radiation-tolerant sensors.
Further improvements to MAPS for high-energy physics detectors could come by exploiting the rapid progress in imaging for consumer applications. One of the features offered recently by CMOS imaging sensor technologies, called stitching, will enable a new generation of MAPS with an area up to the full wafer size. Moreover, the reduction in the sensor thickness to about 30–40 μm opens the door to large-area curved sensors, making it possible to build a cylindrical layer of silicon-only sensors with a further significant reduction in the material thickness. The ALICE collaboration is already preparing a new detector based on these concepts, which consists of three cylindrical layers based on curved wafer-scale stitched sensors (see “Into the future” figure). This new vertex detector will be installed during Long Shutdown 3 towards the middle of the decade, replacing the three innermost layers of the ITS2. With the first detection layer closer to the interaction point (from 23 to 18 mm) and a reduction in the material budget close to the interaction point by a factor of six, the new vertex detector will further improve the tracking precision and efficiency at low transverse momentum.
The technologies developed by ALICE for the ITS2 detector are now being used or considered for several other applications in high-energy physics, including the vertex detector of the sPHENIX experiment at RHIC, and the inner tracking system for the NICA MPD experiment at JINR. The technology is also being applied to areas outside of the field, including in medical and space applications. The Bergen pCT collaboration and INFN Padova’s iMPACT project, for example, are developing novel ALPIDE-based devices for clinical particle therapy to reconstruct 3D human body images. The HEPD02 detector for the Chinese–Italian CSES-02 mission, meanwhile, includes a charged-particle tracker made of three layers of ALPIDE sensors that represents a pioneering test for next-generation space missions. Driven by a desire to learn more about the fundamental laws of nature, it is clear that advanced silicon-tracker technology continues to make an impact on wider society, too.
Particle-physics experiments rely heavily on gaseous detectors thanks to their large volumes and cost effectiveness. Unfortunately, environmentally harmful fluorinated gases known as freons play an important role in traditional gas mixtures. To address this issue, more than 200 gas-detector experts participated in a workshop hosted online by CERN on 22 April to study the operational behaviour of novel gases and alternative gas mixtures.
Large gas molecules absorb energy in vibrational and rotational modes of excitation
Freon-based gases are essential to many detectors currently used at CERN, especially for tracking and triggering. Examples run from muon systems, ring-imaging Cherenkov (RICH) detectors and time-projection chambers (TPCs) to wire chambers, resistive-plate chambers (RPCs) and micro-pattern gas detectors (MPGDs). While the primary gas in the mixture is typically a noble gas, adding a “quencher” gas helps achieve a stable gas gain, well separated from the noise of the electronics. Large gas molecules such as freons absorb energy in relevant vibrational and rotational modes of excitation, thereby preventing secondary effects such as photon feedback and field emission. Extensive R&D is needed to reach the stringent performance required of each gas mixture.
CERN has developed several strategies to reduce greenhouse gas (GHG) emissions from particle detectors. As demonstrated by the ALICE experiment’s TPC, upgrading gas-recirculation systems can reduce GHG emissions by almost 100%. When it is not possible to recirculate all of the gas mixture, gas recuperation is an option – for example, the recuperation of CF4 by the CMS experiment’s cathode-strip chamber (CSC) muon detector and the LHCb experiment’s RICH-2 detector. A complex gas-recuperation system for the C2H2F4 (R134a) in RPC detectors is also under study, and physicists are exploring the use of commonplace gases. In the future, new silicon photomultipliers could reduce chromatic error and increase photon yield, potentially allowing CF4 to be replaced with CO2. Meanwhile, in LHCb’s RICH-1 detector, C4F10 could possibly be replaced with hydrocarbons like C4H10 if the flammability risk is addressed.
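The scale of the problem can be illustrated with global-warming-potential (GWP) figures. The values below are approximate 100-year GWPs commonly quoted for these gases (they are not numbers reported at the workshop) and are used only to show why recirculation and recuperation matter:

    # CO2-equivalent of leaking one kilogram of common detector gases, using
    # approximate 100-year GWP values; these GWPs are commonly quoted figures,
    # not numbers reported at the workshop.

    approx_gwp_100yr = {
        "CO2": 1,
        "C2H2F4 (R134a)": 1430,
        "C4F10": 8860,
        "CF4": 7390,
    }

    leak_kg = 1.0
    for gas, gwp in approx_gwp_100yr.items():
        print(f"{gas:15s}: {leak_kg:.0f} kg leaked ~ {leak_kg * gwp:,.0f} kg CO2-equivalent")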
Eco-gases
Finally, alternative “eco-gases” are the subject of intense R&D. Eco-gases have a low global-warming potential because of their very limited stability in the atmosphere as they react with water or decompose in ultraviolet light. Unfortunately, these conditions are also present in gaseous detectors, potentially leading to detector aging. In addition to their stability, there is also the challenge of adapting current LHC detectors, given that access is difficult and many components cannot be replaced.
Roberto Guida (CERN), Davide Piccolo (Frascati), Rob Veenhof (Uludağ University) and Piet Verwilligen (Bari) convened workshop sessions at the April event. Groups from Turin, Frascati, Rome, CERN and GSI presented results based on the new hydro-fluoro-olefin (HFO) mixture with the addition of neutral gases such as helium and CO2 as a way of lowering the high working-point voltage. Despite challenges related to the larger signal charge and streamer probability, encouraging results have been obtained in test beams in the presence of LHC-like background gamma rays. CMS’s CSC detector is an interesting example where HFO could replace CF4. In this case, its decomposition could even be a positive factor, although further studies are needed.
We now need to create a compendium of simulations and measurements for “green” gases in a similar way to the concerted effort in the 1990s and 2000s that proved indispensable to the design of the LHC detectors. To this end, the INRS-hosted LXCat database enables the sharing and evaluation of data to model non-equilibrium low-temperature plasmas. Users can upload data on electron- and ion-scattering cross sections and compare “swarm” parameters. The ETH (Zürich), Aachen and HZDR (Dresden) groups presented measurements of transport parameters, opening up possibilities for collaboration, while the Bari group sought feedback and collaboration on a proposal to precisely measure transport parameters for green gases in MPGDs using electron and laser beams.
Obtaining funding for this work can be difficult due to a lack of expected technological breakthroughs in low-energy plasma physics
Future challenges will be significant. The volumes of detector systems for the High-Luminosity LHC and the proposed Future Circular Collider, for example, range from 10 to 100 m3, posing a significant environmental threat in the case of leaks. Furthermore, an EU “F-gas” regulation has been in force since 2014, with the aim of reducing sales of such gases to one-fifth by 2030. Given the environmental impact and the uncertain availability and price of freon-based gases, preparing a mitigation plan for future experiments is of fundamental importance to the high-energy-physics community, and the next generation of detectors must be designed entirely around eco-mixtures. Although obtaining funding for this work can be difficult, for example due to a lack of expected technological breakthroughs in low-energy plasma physics, the workshop showed that a vibrant cadre of physicists is committed to taking the field forward. The next workshop will take place in 2022.