More than a century after its discovery, physicists are still working hard to understand how fundamental properties of the proton – such as its mass and spin – arise from its underlying structure. A particular puzzle concerns the proton’s size, which is an important input for understanding nuclei, for example. Elastic electron–proton scattering experiments in the late 1950s revealed the spatial distribution of charge inside the proton, allowing its radius to be deduced. A complementary way to determine this “charge radius”, one that relies on precise quantum-electrodynamics calculations, is to measure the shift it produces in the lowest energy levels of the hydrogen atom. Over the decades, numerous experiments have measured the proton’s size with increasing precision.
By 2006, based on results from scattering and spectroscopic measurements, the Committee on Data for Science and Technology (CODATA) had established the proton charge radius to be 0.8760(78) fm. Then, in 2010, came a surprise: the CREMA collaboration at the Paul Scherrer Institut (PSI) reported a value of 0.8418(7) fm based on a novel, high-precision spectroscopic measurement of muonic hydrogen. Disagreeing with previous spectroscopic measurements, and lying more than 5σ below the CODATA world average, the result gave rise to the “proton radius puzzle”. While the most recent electron–proton scattering and hydrogen-spectroscopy measurements are in closer agreement with the latest muonic-hydrogen results, the discrepancies with earlier experiments are not yet fully understood.
Now, the MINERνA collaboration has brought a new tool to gauge the proton’s size: neutrino scattering. Whereas traditional scattering measurements probe the proton’s electric or magnetic charge distributions, which are encoded in vector form factors, scattering by neutrinos allows the analogous axial-vector form factor FA, which characterises the proton’s weak charge distribution, to be measured. In addition to providing a complementary probe of proton structure, FA is key to precise measurements of neutrino-oscillation parameters at experiments such as DUNE, Hyper-K, NOvA and T2K.
MINERνA is a segmented scintillator detector with hexagonal planes made from strips of triangular cross-section, assembled perpendicular to the incoming beam. By studying how a beam of muon antineutrinos produced by Fermilab’s NuMI neutrino beamline interacts with a polystyrene target, which contains hydrogen closely bonded to carbon, the MINERνA researchers were able to make the first high-statistics measurement of the ν̄μ p → μ+ n cross-section using the hydrogen atoms in polystyrene. Extracting FA from 5580 ± 180 signal events (observed over an estimated background of 12,500), they measured the nucleon axial charge radius to be 0.73(17) fm, in agreement with the electric charge radius measured with electron scattering.
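How a form factor translates into a radius can be made concrete with a simple dipole parametrisation – a common ansatz used here purely for illustration, not necessarily the parametrisation adopted in the MINERνA analysis – for which r_A² = 12/M_A²:

```python
# Illustrative sketch (not the MINERvA fit itself): a dipole parametrisation of
# the axial form factor and its relation to the axial charge radius.
import math

HBARC = 0.19733  # GeV*fm, conversion between natural units and femtometres

def dipole_FA(Q2_GeV2, MA_GeV, FA0=-1.2723):
    """Dipole ansatz F_A(Q^2) = F_A(0) / (1 + Q^2/M_A^2)^2 (sign convention-dependent)."""
    return FA0 / (1.0 + Q2_GeV2 / MA_GeV**2) ** 2

def axial_radius_fm(MA_GeV):
    """For a dipole form factor, r_A = sqrt(12)/M_A, converted to fm."""
    return math.sqrt(12.0) / MA_GeV * HBARC

# A radius of 0.73 fm corresponds to a dipole axial mass of roughly 0.94 GeV:
MA = math.sqrt(12.0) * HBARC / 0.73
print(f"M_A ~ {MA:.2f} GeV  ->  r_A = {axial_radius_fm(MA):.2f} fm")
print(f"F_A at Q^2 = 0.5 GeV^2: {dipole_FA(0.5, MA):.3f}")
```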
“If we weren’t optimists, we’d say [this measurement] was impossible,” says lead author Tejin Cai, who proposed the idea of using a polystyrene target to access neutrino-hydrogen scattering while a PhD student at the University of Rochester. “The hydrogen and carbon are chemically bonded, so the detector sees interactions on both at once. But then, I realised that the very nuclear effects that made scattering on carbon complicated also allowed us to select hydrogen and would allow us to subtract off the carbon interactions.”
A new experiment called AMBER, at the M2 beamline of CERN’s Super Proton Synchrotron, is about to open another perspective on the proton charge radius. AMBER is the successor to COMPASS, which played a major role in resolving the proton “spin crisis” (the finding, by the European Muon Collaboration in 1987, that quarks account for less than a third of the total proton spin) by studying the contribution to the proton spin from gluons. Instead of electrons, AMBER will use muon scattering at unprecedented energies (around 100 GeV) to access the small momentum transfers needed to measure the proton radius. A future experiment at PSI called MUSE, meanwhile, aims to determine the proton radius through simultaneous measurements of muon–proton and electron–proton scattering.
AMBER is scheduled to start with a pilot run in September 2023 and to operate for up to three years, with the goal of determining the proton radius – expected from previous experiments to lie in the range 0.84–0.88 fm – with an uncertainty of about 0.01 fm. “Some colleagues say that there is no proton-radius puzzle, only problematic measurements,” says AMBER spokesperson Jan Friedrich of TU Munich. “The discrepancy between theory and experiments, as well as between individual experiments, will have to shrink and align as much as possible. After all, there is only one true proton radius.”
Gamma-ray bursts (GRBs) are the result of the most violent explosions in the universe. They are named for their bright burst of high-energy emission, mostly in the keV to MeV region, which can last from milliseconds to hundreds of seconds and is followed by an afterglow that covers the full electromagnetic spectrum. The extreme nature of these extragalactic events and their important role in the universe – for example in the production of heavy elements, the potential acceleration of cosmic rays, or even mass-extinction events on Earth-like planets – make them one of the most studied astrophysical phenomena.
Since their discovery in 1967, detailed studies of thousands of GRBs have shown that they are the result of cataclysmic events, such as neutron-star binary mergers. The observed gamma-ray emission is produced (through a yet-unidentified mechanism) within relativistic jets that decelerate when they strike interstellar matter, resulting in the observed afterglow.
But interest in GRBs goes beyond astrophysics. Due to the huge energies involved, they are also a unique lab to study the laws of physics at their extremes. This once again became clear on 9 October 2022, when a GRB was detected that was not only the brightest ever but also appeared to have produced an emission that is difficult to explain using standard physics.
Eye-catching emission
“GRB 221009A” immediately caught the eye of the multi-messenger community, its gamma-ray emission being so bright that it saturated many observatories. As a result, it was also observed by a wide range of detectors covering the electromagnetic spectrum, including at energies exceeding 10 TeV. Two separate ground-based experiments – the Large High Altitude Air Shower Observatory (LHAASO) in China and the Carpet-2 air-shower array in Russia – claimed detections of photons with energies of 18 TeV and 251 TeV, respectively. This is roughly an order of magnitude higher than the previous record for TeV emission from GRBs, reported by the MAGIC and HESS telescopes in 2019 (CERN Courier January/February 2020 p10). Adding further intrigue, such high-energy emission from GRBs should not be able to reach Earth at all.
For photons with energies exceeding several TeV, electron–positron pair-production with optical photons becomes possible. Although the cross section for this process is small for photons just above the threshold, which lies at around 2.6 TeV, this is compensated by the billions of light years of space filled with optical light that the TeV photons need to traverse before reaching us. Despite uncertainties in the density of this so-called extragalactic background light, a rough calculation using the distance of GRB 221009A (z = 0.151) suggests that the probability for an 18 TeV photon to reach Earth is around 10⁻⁸.
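As a rough illustration of the numbers involved, the kinematic threshold for pair production on a background photon, and the survival probability implied by a given optical depth, can be estimated with a few lines of arithmetic; the background-photon energy used below is an assumption chosen to reproduce the quoted 2.6 TeV threshold, and the optical depth is simply read off from the quoted probability.

```python
# Back-of-envelope sketch of gamma-gamma absorption on background light.
# The background-photon energy and optical depth are illustrative assumptions,
# not values from the LHAASO/Carpet-2 analyses or any EBL model.
import math

m_e = 0.511e6   # electron rest energy in eV
eps = 0.1       # assumed background (far-infrared) photon energy in eV

# Head-on kinematic threshold: E_gamma * eps >= (m_e c^2)^2
E_threshold = m_e**2 / eps   # in eV
print(f"Threshold for pair production: {E_threshold/1e12:.1f} TeV")  # ~2.6 TeV

# If propagation over z = 0.151 corresponds to an optical depth tau,
# the survival probability is exp(-tau). A probability of ~1e-8 implies:
P_survival = 1e-8
tau = -math.log(P_survival)
print(f"Survival probability {P_survival:g} corresponds to tau ~ {tau:.0f}")
```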
Clearly we need to wait for the detailed analyses by LHAASO and Carpet-2 to confirm the measurements
The reported measurements have thus far only been provided through alerts shared among the multi-messenger community, while detailed data analyses are still ongoing. Their significance, however, led to tens of beyond-the-Standard Model (BSM) explanations being posted on the arXiv preprint server within days of the alert. While each differs in the specific mechanism hypothesised, the overall idea is similar: instead of being produced directly in the GRB, the photons are posited to be a secondary product of BSM particles produced during or close to the GRB. Examples range from light scalar particles or right-handed neutrinos produced in the GRB and decaying within our galaxy, to photons that converted into axions close to the GRB and turned back into photons in the galactic magnetic field before reaching Earth.
Clearly the community needs to wait for the detailed analyses by the LHAASO and Carpet-2 collaborations to confirm the measurements. The published energy resolution of LHAASO keeps open the possibility that their results can be explained with Standard Model physics, while the 251 TeV emission from Carpet-2 is more difficult to attribute to known systematic effects. This result could, however, be explained by secondary particles resulting from an ultra-high-energy cosmic ray (UHECR) produced in the GRB, which, although it would not represent new physics, would still confirm GRBs as a source of UHECRs for the first time. Analysis results from both collaborations are therefore highly anticipated.
The STEREO experiment, located at the high-flux research reactor at the Institut Laue-Langevin (ILL), Grenoble, is the latest to cast doubt on the existence of an additional, sterile neutrino state. Based on the full dataset collected from October 2017 until the experiment shut down in November 2020, the results support the conclusion of a global analysis of all neutrino data that a normalisation bias in the beta-decay spectrum of ²³⁵U is the most probable explanation for the deficit of electron antineutrinos seen at reactor experiments during the past decade.
The confirmation of neutrino oscillations 25 years ago showed that the lepton content of a given neutrino evolves as it propagates, generating a change of flavour. Numerous experiments based on solar, atmospheric, accelerator, reactor and geological neutrino sources have determined the oscillation parameters in detail, reaffirming the three-neutrino picture obtained by precise measurements of the Z boson’s decay width at LEP. However, several anomalies have also shown up, one of the most prominent being the so-called reactor antineutrino anomaly. Following a re-evaluation of the expected ν̄e flux from nuclear reactors by a team at CEA and Subatech in 2011, a deficit appeared in the number of ν̄e detected by reactor neutrino experiments. Combined with a longstanding anomaly reported by short-baseline accelerator-neutrino experiments such as LSND and a deficit in νe seen in calibration data for the solar-neutrino detectors GALLEX and SAGE, excitement grew that an additional neutrino state – a sterile or right-handed neutrino with no standard interactions, which arises in many extensions of the Standard Model – might be at play.
We anticipate that this result will allow progress towards finer tests of the fundamental properties of neutrinos
Designed specifically to investigate the sterile-neutrino hypothesis, STEREO was positioned about 10 m from the ILL reactor core to measure the evolution of the antineutrino energy spectrum from ²³⁵U fission at short distances with high precision. Comprising six cells filled with gadolinium-doped liquid scintillator, positioned at different distances from the reactor core and producing six spectra, the setup allows the hypothesis that ν̄e undergo a fast oscillation into a sterile neutrino to be tested independently of the predicted shape of the emitted ν̄e spectrum.
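A minimal sketch of the idea, in the standard two-flavour short-baseline approximation and with purely illustrative oscillation parameters and cell baselines, shows how a relative comparison between cells works: the ratios of the spectra depend only on the oscillation parameters, not on the absolute flux prediction.

```python
# Sketch of a two-flavour short-baseline survival probability, evaluated at
# different detector-cell baselines. All parameter values are illustrative.
import math

def survival_prob(L_m, E_MeV, sin2_2theta=0.1, dm2_eV2=1.0):
    """P = 1 - sin^2(2theta) * sin^2(1.27 * dm^2[eV^2] * L[m] / E[MeV])."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

E = 4.0  # MeV, typical reactor antineutrino energy
for L in (9.4, 9.8, 10.2, 10.6, 11.0, 11.4):  # assumed cell baselines in metres
    print(f"L = {L:5.1f} m : P = {survival_prob(L, E):.3f}")
# A sterile neutrino with dm^2 ~ 1 eV^2 would imprint a distance-dependent
# distortion on the ratio of spectra between cells, independent of the
# absolute flux prediction.
```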
The measured antineutrino energy spectrum, based on 107,558 detected antineutrinos, suggests that the previously reported anomalies originate from biases in the nuclear experimental data used for the predictions, while rejecting the hypothesis of a light sterile neutrino with a mass of about 1 eV. “Our result supports the neutrino content of the Standard Model and establishes a new reference for the ²³⁵U antineutrino energy spectrum,” writes the team. “We anticipate that this result will allow progress towards finer tests of the fundamental properties of neutrinos but also to benchmark models and nuclear data of interest for reactor physics and for observations of astrophysical or geoneutrinos.”
Gallium remains
STEREO’s findings fit those reported recently by other neutrino-oscillation experiments. A 2021 analysis by the MicroBooNE collaboration at Fermilab, for example, favoured the Standard Model over an anomalous signal seen by the nearby MiniBooNE experiment, assuming the latter was due to the existence of a non-standard neutrino. Yet the story of the sterile neutrino is not over. In 2022, new results from the Baksan Experiment on Sterile Transitions (BEST) further confirmed the deficit in the νe flux emitted from radioactive sources as seen by the SAGE and GALLEX experiments – the so-called gallium anomaly – which, if interpreted in the context of neutrino oscillations, is consistent with νe → νs oscillations with a relatively large squared mass difference and mixing angle.
“Under the sterile neutrino hypothesis, a signal in MicroBooNE, MiniBooNE or LSND would require the sterile neutrino to mix with both νe and νμ, whereas for the gallium anomaly, mixing with νe alone is sufficient,” explains theorist Joachim Kopp of CERN. “Even though the reactor anomaly seems to be resolved, we’d still like to understand what’s behind the others.”
How quickly can a computer make sense of what it sees without losing accuracy? And to what extent can AI tasks on hardware be performed with limited computing resources? Aiming to answer these and other questions, car-safety software company Zenseact, founded by Volvo Cars, sought out CERN’s unique capabilities in real-time data analysis to investigate applications of machine-learning to autonomous driving.
In the future, self-driving cars are expected to considerably reduce the number of road-accident fatalities. To advance developments, in 2019 CERN and Zenseact began a three-year project to research machine-learning models that could enable self-driving cars to make better decisions faster. Carried out in an open-source software environment, the project’s focus was “computer vision” – an AI discipline dealing with how computers interpret the visual world and then automate actions based on that understanding.
“Deep learning has strongly reshaped computer vision in the last decade, and the accuracy of image-recognition applications is now at unprecedented levels. But the results of our research show that there’s still room for improvement when it comes to running the deep-learning algorithms faster and being more energy-efficient on resource-limited on-device hardware,” said Christoffer Petersson, research lead at Zenseact. “Simply put, machine-learning techniques might help drive faster decision-making in autonomous cars.”
The need to react fast and make quick decisions imposes strict runtime requirements on the neural networks that run on embedded hardware in an autonomous vehicle. Compressing the neural networks – for example by using fewer parameters and fewer bits per parameter – allows the algorithms to be executed faster and with less energy. For this task, the CERN–Zenseact team chose field-programmable gate arrays (FPGAs) as the hardware benchmark. Used at CERN for many years, especially for trigger readout electronics in the large LHC experiments, FPGAs are configurable integrated circuits that can execute complex decision-making algorithms within microseconds. The main result of the FPGA experiment, says Petersson, was a practical demonstration that computer-vision tasks for automotive applications can be performed with high accuracy and short latency, even on a processing unit with limited computational resources. “The project clearly opens up for future directions of research. The developed workflows could be applied to many industries.”
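The sort of compression involved can be illustrated with a minimal post-training quantisation sketch in plain NumPy; this is a generic example, not the actual CERN–Zenseact workflow or its FPGA toolchain.

```python
# Minimal illustration of weight quantisation for a neural-network layer.
# Generic sketch only: weights are mapped to low-bit integers, trading a small
# accuracy loss for much cheaper arithmetic on resource-limited hardware.
import numpy as np

def quantize(weights, n_bits=8):
    """Uniformly quantise an array of weights to n_bits signed integers."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax      # one scale factor per tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(64, 64)).astype(np.float32)  # dummy layer weights
q, s = quantize(w, n_bits=8)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"max quantisation error: {err:.4f}")  # small compared with typical weights
# On an FPGA the int8 weights map directly onto fixed-point multipliers,
# reducing both latency and energy per inference.
```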
The compression techniques in FPGAs elucidated by this project could also have a significant effect on “edge” computing, explains Maurizio Pierini of CERN: “Besides improving the trigger systems of ATLAS and CMS, future development of this research area could be used for on-site computation tasks, such as on portable devices, satellites, drones and obviously vehicles.”
Antinuclei can travel vast distances through the Milky Way without being absorbed, concludes a novel study by the ALICE collaboration. The results, published in December, indicate that the search for anti-³He nuclei in space is a highly promising way to probe dark matter.
First observed in 1965 in the form of the antideuteron at CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, antinuclei are exceedingly rare. Since they annihilate on contact with regular matter, no natural sources exist on Earth. However, light antinuclei have been produced and studied at accelerator facilities, including recent precision measurements of the mass difference between deuterons and antideuterons and between ³He and anti-³He by ALICE, and between the hypertriton and antihypertriton by the STAR collaboration at RHIC.
Antinuclei can in principle also be produced in space, for example in collisions between cosmic rays and the interstellar medium. However, the expected production rates are very small. A more intriguing possibility is that light antinuclei are produced by the annihilation of dark-matter particles. In such a scenario, the detection of antinuclei in cosmic rays could provide experimental evidence for the existence of dark-matter particles. Space-based experiments such as AMS-02 and PAMELA, along with the upcoming Antarctic balloon mission GAPS, are among the few experiments able to detect light antinuclei. But to be able to interpret future results, precise knowledge of the production and disappearance probabilities of antinuclei is vital.
The latter is where the new ALICE study comes in. The unprecedented energies of proton–proton and lead–lead collisions at the LHC produce, on average, as many nuclei as antinuclei. By studying the change in the rate of anti-³He as a function of the distance from the production point, the collaboration was able to determine the inelastic cross section, or disappearance probability, of anti-³He nuclei for the first time. These values were then used as input for astrophysics simulations.
Two models of the anti-³He flux expected near Earth after the nuclei’s journey from sources in the Milky Way were considered: one assumes that the sources are cosmic-ray collisions with the interstellar medium, and the other annihilations of hypothetical weakly interacting massive particles (WIMPs). For each model, the Milky Way’s transparency to anti-³He – that is, its ability to let the nuclei through without being absorbed – was estimated. The WIMP dark-matter model led to a transparency of about 50%, whereas for the cosmic-ray model the transparency ranged from 25 to 90%, depending on the energy of the antinucleus. These values show that anti-³He nuclei originating from dark-matter annihilations or cosmic-ray collisions can travel distances of several kiloparsecs in the Milky Way without being absorbed, even from as far away as the galactic centre.
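The logic of such a transparency estimate can be sketched with a simple exponential attenuation law; the grammage and cross-section below are assumed, illustrative values rather than the inputs of the published propagation simulations.

```python
# Toy estimate of Milky Way transparency to anti-3He using a simple
# exponential attenuation law. The grammage and cross-section values are
# illustrative assumptions, not those of the ALICE/astrophysics study.
import math

m_p = 1.67e-24        # g, mass of a hydrogen atom (dominant ISM component)
sigma_inel = 100e-27  # cm^2, assumed inelastic anti-3He cross-section per H atom
grammage = 10.0       # g/cm^2, assumed column of matter traversed during propagation

# Interaction length in g/cm^2 and resulting transparency exp(-X/lambda):
lambda_int = m_p / sigma_inel
transparency = math.exp(-grammage / lambda_int)
print(f"interaction length ~ {lambda_int:.0f} g/cm^2, transparency ~ {transparency:.0%}")
```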
“This new result illustrates the close connection between accelerator-based experiments and observations of particles produced in the cosmos,” says ALICE spokesperson Marco van Leeuwen. “In the near future, these studies will be extended to anti-⁴He and to the lower-momentum region with much larger datasets.”
The sixth annual LHC Career Networking Event, which took place at CERN on 21 November 2022, attracted more than 200 scientists and engineers (half in person) seeking to explore careers beyond CERN. Seven former members of the LHC-experiment collaborations and representatives from CERN’s knowledge transfer group discussed their experiences, good and bad, upon transitioning to the diverse employment world outside particle physics. Lively Q&A sessions and panel discussions enabled the audience to voice their questions and concerns.
While the motivations for leaving academia expressed by the speakers differed according to their personal stories, common themes emerged. The long time-scales of experimental physics, coupled with job instability and the glacial pace of funding cycles for new projects, sometimes led to demotivation, whereas the speakers found that industry had exciting shorter-term projects to explore. Several speakers sought a better work–life balance in subjects they could enthuse about, having previously experienced a sense of stagnation. Another factor related to that balance was the more favourable relationship between salary, performance expectations and hours worked.
Case studies
Caterina Deplano, formerly an ALICE experimentalist, and Giorgia Rauco, ex-CMS, described the personal constraints that led them to search for a job in the local area, and showed that this need not be a limiting factor. Both assessed their skills frankly and opted for further training in their target sectors: education and data science, respectively. Deplano’s path to teaching in Geneva led her to go back and study for four years, improving her French-language skills while obtaining a Swiss teaching qualification. The reward was apparent in the enthusiasm with which she talked about her students and her chosen career. Rauco explained how she came to contemplate life outside academia and talked participants through the application process, emphasising that finding the “right” employment fit had meant many months of work with frequent disappointments, the memory of which was erased by the final acceptance letter. Both speakers gave links to valuable resources for training and further education, and Rauco offered some top-tips for prospective transitioners: be excited for what is coming next, start as soon as possible if you are thinking about changing and don’t feel guilty about your choice.
Maria Elena Stramaglia, formerly ATLAS, described the anguish of deciding whether to stay in academia or go to industry, and her frank assessment of transferable skills weighed up against personal desires and her own work–life balance. Her decision to join Hitachi Energy was based on the right mix of personal and technical motivation, she said. In moving from LHCb to data science and management, Albert Puig Navarro joined a newly established department at Proton (the developers of ProtonMail, which was founded by former ATLAS members; CERN Courier September/October 2019 p53), in which he ended up being responsible for hiring a mix of data scientists, engineers and operations managers, conducting more than 200 interviews in the process. He discussed the pitfalls of over-confidence, the rather different requirements of the industrial sector, and the shift in motivations between pure science and industry. Cécile Deterre, a former ATLAS physicist now working on technology for sustainable fish farming, focussed on CV-writing for industrial job applications, during which she emphasised transferable skills and how to make your technical experience more accessible to future employers.
With one foot still firmly in particle physics, Alex Winkler, formerly CMS, joined a company that makes X-ray detectors for medical, security and industrial applications; in a serendipitous exception among the speakers, he described how he was head-hunted while contemplating life beyond CERN, and mentioned the novel pressures implicit in working in a for-profit environment. Massimo Marino, ex-ATLAS, gave a lively talk about his experiences in a number of diverse environments: Apple, the World Economic Forum and the medical energy industries, to name a few. Diverting along the way to write a series of books, his talk covered the personal challenges and expectations in different roles and environments over a long career.
Throughout the evening, which culminated in a panel session, participants had the opportunity to quiz the speakers about their sectors and the personal decisions and processes that led them there. Head of CERN Alumni Relations Rachel Bray also explained how the Alumni Network can help facilitate contact between current CERN members and their predecessors who have left the field. The interest shown by the audience and the detailed testimonials of the speakers demonstrated that this event remains a vital source of information and encouragement for those considering a career transition.
Last year marked the 10th anniversary of the discovery of the Higgs particle. Ten years is a short lapse of time when we consider the profound implications of this discovery. Breakthroughs in science mark a leap in understanding, and their ripples may extend for decades and even centuries. Take Kirchhoff’s blackbody proposal more than 150 years ago: a theoretical construction, an academic exercise that opened the path towards a quantum revolution, the implications of which we are still trying to understand today.
Imagine now the vast network of paths opened by ideas, such as emission theory, that came to nothing despite their originality. Was pursuing these useful, or a waste of resources? Scientists would answer that the spirit of basic research is precisely to follow those paths with unknown destinations; it’s how humanity reached the level of knowledge that sustains modern life. As particle physicists, as long as the aim is to answer nature’s outstanding mysteries, the path is worth following. The Higgs-boson discovery is the latest triumph of this approach and, as for the quantum revolution, we are still working hard to make sense of it.
Particle discoveries are milestones in the history of our field, but they signify something more profound: the realisation of a new principle in nature. Naively, it may seem that the Higgs discovery marked the end of our quest to understand the TeV scale. The opposite is true. The behaviour of the Higgs boson, in the form in which it was initially proposed, does not make sense at a quantum level. As a fundamental scalar, it experiences quantum effects that grow with energy, doggedly pushing its mass towards the Planck scale. The Higgs discovery solidified the idea that gauge symmetries could be hidden, spontaneously broken by the vacuum. But it did not provide an explanation of how this mechanism makes sense with a fundamental scalar sensitive to mysterious phenomena such as quantum gravity.
Now comes the hard part. Most of the ideas proposed during the past decades to make sense of the Higgs boson – supersymmetry being the most prominent – predicted that it would have an entourage of companion particles with electroweak or even strong couplings. Arguments of naturalness, that these companions should be close by to prevent troublesome fine-tunings of nature, led to the expectation that discoveries would follow or even precede that of the Higgs. Ten years on, this wish has not been fulfilled. Instead, we are faced with a cold reality that can lead us to sway between attitudes of nihilism and hubris, especially when it comes to the question of whether particle physics has a future beyond the Higgs. Although these extremes do not apply to everyone, they are understandable reactions to viewing our field next to those with more immediate applications, or to the personal disappointment of a lifelong career devoted to ideas that were not chosen by nature.
Such despondence is not useful. Remember that the no-lose theorem we enjoyed when planning the LHC, i.e. the certainty that we would find something new, Higgs boson or not, at the TeV scale, was an exception to the rules of basic research. Currently, there is no no-lose theorem for the LHC, or for any future collider. But this is precisely the inherent premise of any exploration worth doing. After the incredible success we have had, we need to refocus and unify our discourse. We face the uncertainty of searching in the dark, with the hope that we will initiate the path to a breakthrough, still aware of the small likelihood that this actually happens.
The no-lose theorem we enjoyed when planning the LHC was an exception to the rules of basic research
Those hopes are shared by wider society, which understands the importance of exploring big questions. From searching for exoplanets that may support life to understanding the human mind, few people assume these paths will lead to immediate results. The challenge for our field is to work out a coherent message that can enthuse people. Without straying far from collider physics, we could notice that there is a different type of conversation going on in the search for dark matter. Here, there is no no-lose theorem either, and despite the current exclusion of most vanilla scenarios, there is excitement and cohesion, which are effectively communicated. As for our critics, they should be openly confronted and viewed as an opportunity to build stronger arguments.
We have powerful arguments to keep delving into the smallest scales, with the unknown nature of dark matter, neutrinos and the matter–antimatter asymmetry the most well-known examples. As a field, we need to renew the excitement that led us where we are, from the shock of watching alpha particles bounce back from a thin gold sheet, to building a colossus like the LHC. We should be outspoken about our ambition to know the true face of nature and the profound ideas we explore, and embrace the new path that the Higgs discovery has opened.
Particle accelerators have revolutionised our understanding of nature at the smallest scales, and continue to do so with facilities such as the LHC at CERN. Surprisingly, however, the number of accelerators used for fundamental research represents a mere fraction of the 50,000 or so accelerators currently in operation worldwide. Around two thirds of these are employed in industry, for example in chip manufacturing, while the rest are used for medical purposes, in particular radiotherapy. While many of these devices are available “off-the-shelf”, accelerator R&D in particle physics remains the principal driver of innovative, next-generation accelerators for applications further afield.
The CERN Linear Electron Accelerator for Research (CLEAR) is a prominent example. Launched in August 2017 (CERN Courier November 2017 p8), CLEAR is a user facility developed from the former CTF3 project, which existed to test technologies for the Compact Linear Collider (CLIC) – a proposed e⁺e⁻ collider at CERN that would follow the LHC. During the past five years, beams with a wide range of parameters have been provided to groups from more than 30 institutions across more than 10 nations.
CLEAR was proposed as a response to the low availability of test-beam facilities in Europe. In particular, there was very little time available to users on accelerators with electron beams with an energy of a few hundred MeV, as these tend to be used in dedicated X-ray light-source and other specialist facilities. CLEAR therefore serves as a unique facility to perform R&D towards a wide range of accelerator-based technologies in this energy range. Independent of CERN’s other accelerator installations, CLEAR has been able to provide beams for around 35 weeks per year since 2018, including during long shutdowns, and even managed successful operation during the COVID-19 pandemic.
Flexible physics
As a relatively small facility, CLEAR operates in a flexible fashion. Operators can vary the range of beams available with relative ease by tailoring many different parameters, such as the bunch charge, length and energy, for each user. There is regular weekly access to the machine and, thanks to the low levels of radioactivity, it is possible to gain access to the facility several times per day to adjust experimental setups if needed. Along with CLEAR’s location at the heart of CERN, the facility has attracted an eager stream of users from day one.
CLEAR has attracted an eager stream of users from day one
Among the first was a team from the European Space Agency working in collaboration with the Radiation to Electronics (R2E) group at CERN. The users irradiated electronic components for the JUICE (Jupiter Icy Moons Explorer) mission with 200 MeV electron beams. Their experiments demonstrated that high-energy electrons trapped in the strong magnetic fields around Jupiter could induce faults, so-called single event upsets, in the craft’s electronics, leading to the development and validation of components with the appropriate radiation-hardness. The initial experiment has been built upon by the R2E group to investigate the effect of electron beams on electronics.
As the daughter of CTF3, CLEAR has continued to be used to test the key technological developments necessary for CLIC. There are two prototype CLIC accelerating structures in the facility’s beamline. Originally installed to test CLIC’s unique two-beam acceleration scheme, the structures have been used to study short-range “wakefield kicks” that can deflect the beam away from the planned path and reduce the luminosity of a linear collider. Additionally, prototypes of the high-resolution cavity beam position monitors, which are vital to measure and control the CLIC beam, have been tested, showing promising initial results.
One of the main activities at CLEAR concerns the development and testing of beam instrumentation. Here, the flexibility and the large beam-parameter range provided by the facility, together with easy access, especially in its dedicated in-air test station, have proven to be very effective. CLEAR covers all phases of the development of novel beam diagnostics devices, from the initial exploration of a concept or physical mechanism to the first prototyping and to the testing of the final instrument adapted for use in an operational accelerator. Examples are beam-loss monitors based on optical fibres, and beam-position and bunch-length monitors based on Cherenkov diffraction radiation under development by the beam instrumentation group at CERN.
Advanced accelerator R&D
There is a strong collaboration between CLEAR and the Advanced Wakefield Experiment (AWAKE), a facility at CERN used to investigate proton-driven plasma wakefield acceleration. In this scheme, which promises higher acceleration gradients than conventional radio-frequency accelerator technology and thus more compact accelerators, charged particles such as electrons are accelerated by forcing them to “surf” atop a longitudinal plasma wave that contains regions of positive and negative charges. Several beam diagnostics for the AWAKE beamline were first tested and optimised at CLEAR. A second phase of the AWAKE project, presently being commissioned for operation in 2026, requires a new source of electron beams to provide shorter, higher quality beams. Before its final installation in AWAKE, it is proposed to use this source to increase the range of beam parameters available at CLEAR.
Further research into compact, plasma-based accelerators has been undertaken at CLEAR thanks to the installation of an active plasma lens on the beamline. Such lenses use gases ionised by very high electric currents to provide focusing for beams many orders of magnitude stronger than can be achieved with conventional magnets. Previous work on active plasma lenses had shown that the focusing force was nonlinear and reduced the beam quality. However, experiments performed at CLEAR showed, for the first time, that by simply swapping the commonly used helium gas for a heavier gas like argon, a linear magnetic field could be produced and focusing could be achieved without reducing the beam quality (CERN Courier December 2018 p8).
Plasma acceleration is not the only novel accelerator technology that has been studied at CLEAR over the past five years. The significant potential of using accelerators to produce intense beams of radiation in the THz frequency range has also been demonstrated. Such light, on the boundary between microwaves and infrared, is difficult to produce, but has a variety of different uses ranging from imaging and security scanning to the control of materials at the quantum level. Compact linear-accelerator-based sources of THz light could potentially be advantageous over other sources as they tend to produce significantly higher photon fluxes. By using long trains of ultrashort, sub-ps bunches, it was shown at CLEAR that THz radiation can be generated through coherent transition radiation in thin metal foils, through coherent Cherenkov radiation, and through coherent “Smith–Purcell” radiation in periodic gratings. The peak power emitted in experiments at CLEAR was around 0.1 MW. However, simulations have shown that with relatively minor reductions in the length of the electron bunches it will be possible to generate a peak power of more than 100 MW.
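The gain from shorter bunches comes from coherence: at wavelengths longer than the bunch, the N electrons radiate in phase and the intensity scales roughly as N² times a bunch form factor. A minimal sketch with a Gaussian bunch and assumed, illustrative numbers (not CLEAR machine parameters):

```python
# Sketch of coherent enhancement of radiation from a Gaussian electron bunch.
# Textbook scaling only; the numbers below are illustrative assumptions.
import math

def coherent_enhancement(freq_Hz, sigma_t, N):
    """Spectral intensity relative to one electron: N + N(N-1)|F|^2,
    with a Gaussian bunch form factor F(omega) = exp(-(omega*sigma_t)^2 / 2)."""
    omega = 2.0 * math.pi * freq_Hz
    F2 = math.exp(-(omega * sigma_t) ** 2)
    return N + N * (N - 1) * F2

N = 1e9      # electrons per bunch (assumed)
f = 0.3e12   # 0.3 THz
for sigma_t in (1.0e-12, 0.3e-12, 0.1e-12):   # bunch lengths: 1 ps down to 100 fs
    gain = coherent_enhancement(f, sigma_t, N) / N
    print(f"sigma_t = {sigma_t*1e15:5.0f} fs -> coherent gain ~ {gain:.2e}")
# Shortening the bunch pushes the form factor towards 1 at THz frequencies,
# which is why modest reductions in bunch length can raise the peak power
# by orders of magnitude.
```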
FLASH forward
Advances in high-gradient accelerator technology for projects like CLIC (CERN Courier April 2018 p32) have led to a surge of interest in using electron beams with energies between 50 and 250 MeV to perform radiotherapy, which is one of the key tools used in the treatment of cancer. The use of so-called very-high-energy electron (VHEE) beams could provide advantages over existing treatment types. Of particular interest is using VHEE beams to perform radiotherapy at ultra-high dose rates, which could potentially generate the so-called FLASH effect in patients. Here, tumour cells are killed while sparing the surrounding healthy tissues, with the potential to significantly improve treatment outcomes.
So far, CLEAR has been the only facility in the world studying VHEE radiotherapy and FLASH with 200 MeV electron beams. As such, there has been a large increase in beam-time requests in this field. Initial tests performed by researchers from the University of Manchester demonstrated that, unlike other types of radiotherapy beams, VHEE beams are relatively insensitive to inhomogeneities in tissue that typically result in less targeted treatment. The team, along with another from the University of Strathclyde, also looked at how focused VHEE beams could be used to further target doses inside a patient by mimicking the Bragg peak seen in proton radiotherapy. Experiments with the University Hospital of Lausanne to try to demonstrate whether the FLASH effect can be induced with VHEE beams are ongoing (CERN Courier January/February 2023 p8).
Even if the FLASH effect can be produced in the lab, there are issues that need to be overcome to bring it to the clinic. Chief among them is the development of novel dosimetric methods. As CLEAR and other facilities have shown, conventional real-time dosimetric methods do not work at ultra-high dose rates. Ionisation chambers, the main pillar of conventional radiotherapy dosimetry, were shown to have very nonlinear behaviour at such dose rates, and recombination times that were too long. Due to this, CLEAR has been involved in the testing of modified ionisation chambers as well as other more innovative detector technologies from the world of particle physics for use in a future FLASH facility.
High impact
As well as being a test-bed for new technologies and experiments, CLEAR has provided an excellent training infrastructure for the next generation of physicists and engineers. Numerous masters and doctoral students have spent a large portion of their time performing experiments at CLEAR either as one-time users or long-term collaborators. Additionally, CLEAR is used for practical accelerator training for the Joint Universities Accelerator School.
Numerous masters and doctoral students have spent time performing experiments at CLEAR
As in all aspects of life, the COVID-19 pandemic placed significant strain on the facility. The planned beam schedule for 2020 and beyond had to be scrapped as beam operation was halted during the first lockdown and external users were barred from travelling. However, through the hard work of the team, CLEAR was able to recover and run at almost full capacity within weeks. Several internal CERN users, many of whom were unable to travel to external facilities, were able to use CLEAR during this period to continue their research. Furthermore, CLEAR was involved in CERN’s own response to the pandemic by undertaking sterilisation tests of personal protective equipment.
Test-beam facilities such as CLEAR are vital for developing future physics technology, and the impact that such a small facility has been able to produce in just a few years is impressive. A variety of different experiments from several different fields of research have been performed, with many more that are not mentioned in this article. Unfortunately for the world of high-energy physics, the aforementioned shortage of accelerator test facilities has not gone away. CLEAR will continue to play its role in helping provide test beams, with operations due to continue until at least 2025 and perhaps long after. There is an exciting physics programme lined up for the next few years, featuring many experiments similar to those that have already been performed but also many that are new, to ensure that accelerator technology continues to benefit both science and society.
The LHCb collaboration is never idle. While the collaboration was building and commissioning its brand new Upgrade I detector, which entered operation last year with the start of LHC Run 3, planning for Upgrade II was already under way. This proposed new detector, envisioned to be installed during Long Shutdown 4 in time for High-Luminosity LHC (HL-LHC) operations continuing in Run 5, scheduled to begin in 2034/2035, would operate at a peak luminosity of 1.5 × 10³⁴ cm⁻² s⁻¹. This is 7.5 times higher than in Run 3 and would generate data samples of heavy-flavoured hadron decays six times larger than those obtainable at the LHC, allowing the collaboration to explore a wide range of flavour-physics observables with extreme precision. Unprecedented tests of the CP-violation paradigm (see “On point” figure) and searches for new physics at double the mass scales possible during Run 3 are among the physics goals on offer.
Attaining the same excellent performance as the original detector has been a pivotal constraint in the design of LHCb Upgrade I. While achieving the same in the much harsher collision environments at the HL-LHC remains the guiding principle for Upgrade II, the LHCb collaboration is investigating the possibilities to go even further. And these challenges need to be met while keeping the existing footprint and arrangement of the detector (see “Looking forward” figure). Radiation-hard and fast 3D silicon pixels, a new generation of extremely fast and efficient photodetectors, and front-end electronics chips based on 28 nm semiconductor technology are just a few examples of the innovations foreseen for LHCb Upgrade II, and will also set the direction of R&D for future experiments.
Rethinking the data acquisition, trigger and data processing, along with intense use of hardware accelerators such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), will be fundamental to manage the expected five-times higher average data rate than in Upgrade I. The Upgrade II “framework technical design report”, completed in 2022, is also the first to consider the experiment’s energy consumption and greenhouse-gas emissions, as part of a close collaboration with CERN to define an effective environmental protection strategy.
Extreme tracking
At the maximum expected luminosity of the HL-LHC, around 2000 charged particles will be produced per bunch crossing within the LHCb apparatus. Efficiently reconstructing these particles and their associated decay vertices in real time represents a significant challenge. It requires the existing detector components to be modified to increase the granularity, reduce the amount of material and benefit from the use of precision timing.
The future VELO will be a true 4D-tracking detector
The new Vertex Locator (VELO) will be based, as it was for Upgrade I (CERN Courier May/June 2022 p38), on high-granularity pixels operated in vacuum in close proximity to the LHC beams. For Upgrade II, the trigger and online reconstruction will rely on the selection of events, or parts of events, with displaced tracks at an early stage of the data processing. The VELO must therefore be capable of independently reconstructing primary vertices and identifying displaced tracks, while coping with a dramatic increase in event rate and radiation dose. Excellent spatial resolution will not be sufficient, given the large density of primary interactions along the beam axis expected under HL-LHC conditions. A new coordinate – time – must be introduced. The future VELO will be a true 4D-tracking detector that includes timing information with a precision of better than 50 ps per hit, leading to a track time-stamp resolution of about 20 ps (see “Precision timing” figure).
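The quoted per-track precision follows from the usual naive averaging of independent hit times along a track; the hit counts below are assumptions for illustration, ignoring correlated effects.

```python
# Naive illustration of how per-hit timing precision translates into a
# per-track time stamp: sigma_track ~ sigma_hit / sqrt(N_hits).
import math

sigma_hit_ps = 50.0
for n_hits in (4, 6, 8):  # assumed numbers of VELO hits per track
    print(f"{n_hits} hits -> track time resolution ~ {sigma_hit_ps/math.sqrt(n_hits):.0f} ps")
```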
The new VELO sensors, which include 28 nm technology application-specific integrated circuits (ASICs), will need to achieve this time resolution while being radiation-hard. The important goal of a 10 ps time resolution has recently been achieved with irradiated prototype 3D-trench silicon sensors. Depending on the rate-capability of the new detectors, the pitch may have to be reduced and the material budget significantly decreased to reach comparable spatial resolution to the current Run 3 detector. The VELO mechanics have to be redesigned, in particular to reduce the material of the radio-frequency foil that separates the secondary vacuum – where the sensors are located – from the machine vacuum. The detector must be built with micron-level precision to control systematic uncertainties.
The tracking system will take advantage of a detector located upstream of the dipole magnet, the Upstream Tracker (UT), and of a detector made of three tracking stations, the Mighty Tracker (MT), located downstream of the magnet. In conjunction with the VELO, the tracking system ensures the ability to reconstruct the trajectory of charged particles bending through the detector due to the magnetic field, and provides a high-precision momentum measurement for each particle. The track direction is a necessary input to the photon-ring searches in Ring Imaging Cherenkov (RICH) detectors, which identify the particle species. Efficient real-time charged-particle reconstruction in a very high particle-density environment requires not only good detector efficiency and granularity, but also the ability to quickly reject combinations of hits not produced by the same particle.
The UT and the inner region of the MT will be instrumented with high-granularity silicon pixels. The emerging radiation-hard monolithic active pixel sensor (MAPS) technology is a strong candidate for these detectors. LHCb Upgrade II would represent the first large-scale implementation of MAPS in a high-radiation environment, with the first prototypes currently being tested (see “Mighty pixels” figure). The outer region of the MT will be covered by scintillating fibres, as in Run 3, with significant developments foreseen to cope with the radiation damage. The availability of high-precision vertical-coordinate hit information in the tracking, provided for the first time in LHCb by pixels in the high-occupancy regions of the tracker, will be crucial to reject combinations of track segments or hits not produced by the same particle. To substantially extend the coverage of the tracking system to lower momenta, with consequent gains for physics measurements, the internal surfaces of the magnet side walls will be instrumented with scintillating bar detectors, the so-called magnet stations (MS).
Extreme particle identification
A key factor in the success of the LHCb experiment has been its excellent particle identification (PID) capabilities. PID is crucial to distinguish different decays with final-state topologies that are backgrounds to each other, and to tag the flavour of beauty mesons at production, which is a vital ingredient in many mixing and CP-violation measurements. For particle momenta from a few GeV/c up to 100 GeV/c, efficient hadron identification at LHCb is provided by two RICH detectors. Cherenkov light emitted by particles traversing the gaseous radiators of the RICHes is projected by mirrors onto a plane of photodetectors. To maintain Upgrade I performance, the maximum occupancy over the photodetector plane must be kept below 30%, the single-photon Cherenkov-angle resolution must be below 0.5 mrad, and the time resolution on single-photon hits should be well below 100 ps (see “RICH rewards” figure).
Next-generation silicon photomultipliers (SiPMs) with improved timing and a pixel size of 1 × 1 mm², together with re-optimised optics, are deemed capable of delivering these specifications. The high “dark” rates of SiPMs, especially after elevated radiation doses, would be controlled with cryogenic cooling and neutron shielding. Vacuum tubes based on micro-channel plates (MCPs) are a potential alternative due to their excellent time resolution (30 ps) for single-photon hits and lower dark rate, but suffer in high-rate environments. New eco-friendly gaseous radiators with a lower refractive index can improve the PID performance at higher momenta (above 80 GeV/c), but meta-materials such as photonic crystals are also being studied. In the momentum region below 10 GeV/c, PID will profit from TORCH – an innovative 30 m² time-of-flight detector consisting of quartz plates in which charged particles produce Cherenkov light. The light propagates by internal reflection to arrays of high-granularity MCP–PMTs optimised to operate at high rates, with a prototype already showing performance close to the target of 70 ps per photon.
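To see why tens-of-picosecond timing matters in this momentum range, consider the pion–kaon time-of-flight difference over a TORCH-like flight path; the path length below is an assumption for illustration.

```python
# Time-of-flight difference between pions and kaons of the same momentum,
# illustrating the timing precision needed for PID. Flight path is assumed.
import math

C = 0.299792458              # m/ns
M_PI, M_K = 0.13957, 0.49368 # GeV/c^2

def tof_ns(p_GeV, mass, L_m):
    E = math.hypot(p_GeV, mass)
    beta = p_GeV / E
    return L_m / (beta * C)

L = 9.5   # m, assumed flight path to the TORCH plane
for p in (2.0, 5.0, 10.0):
    dt_ps = (tof_ns(p, M_K, L) - tof_ns(p, M_PI, L)) * 1e3
    print(f"p = {p:4.1f} GeV/c : pi-K time difference ~ {dt_ps:6.1f} ps")
# At 10 GeV/c the difference shrinks to a few tens of picoseconds, which is
# why per-photon resolutions of order 70 ps (averaged over many photons)
# are needed.
```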
Excellent photon and π0 reconstruction and e–π separation are provided by LHCb’s electromagnetic calorimeter (ECAL). But the harsh occupancy conditions of the HL-LHC impose the development of 5D calorimetry, which complements precise position and energy measurements of electromagnetic clusters with a time resolution of about 20 ps. The most crowded inner regions will be equipped with so-called spaghetti calorimeter (SPACAL) technology, which consists of arrays of scintillating fibres either made of plastic or garnet crystals arranged along the beam direction, embedded in a lead or tungsten matrix. The less-crowded outer regions of the calorimeter will continue to be instrumented with the current “Shashlik” technology with refurbished modules and increased granularity. A timing layer, either based on MCPs or on alternated tungsten and silicon-sensor layers placed within the front and back ECAL sections, is also a possibility to achieve the ultimate time resolution. Several SPACAL prototypes have already demonstrated that time resolutions down to an impressive 15 ps are feasible (see “Spaghetti calorimetry” image).
The final main LHCb subdetector is the muon system, based on four stations of multiwire proportional chambers (MWPCs) interleaved with iron absorbers. For Upgrade II, it is proposed that MWPCs in the inner regions, where the rate will be as high as a few MHz/cm², are replaced with new-generation micro-pattern gaseous detectors, the micro-RWELL, a prototype of which has proved able to reach a detection efficiency of approximately 97% and a rate-capability of around 10 MHz/cm². The outer regions, characterised by lower rates, will be instrumented either by reusing a large fraction (∼95%) of the current MWPCs or by implementing other solutions based on resistive plate chambers or scintillating-tile-based detectors. As with all Upgrade II subdetectors, dedicated ASICs in the front-end electronics, which integrate fast time-to-digital converters or high-frequency waveform samplers, will be necessary to measure time with the required precision.
Trigger and computing
The detectors for LHCb Upgrade II will produce data at a rate of up to 200 Tbit/s (see “On the up” figure), which for practical reasons needs to be reduced by four orders of magnitude before being written to permanent storage. The data acquisition therefore needs to be reliable, scalable and cost-efficient. It will consist of a single type of custom-made readout board combined with readily available data-centre hardware. The readout boards collect the data from the various sub-detectors using the radiation-hard, low-power GBit transceiver links developed at CERN and transfer the data to a farm of readout servers via next-generation “PCI Express” connections or Ethernet. For every collision, the information from the subdetectors is merged by passing through a local area network to the builder server farm.
With up to 40 proton–proton interactions, every bunch crossing at the HL-LHC will contain multiple heavy-flavour hadrons within the LHCb acceptance. For efficient event selection, hits not associated with the proton–proton collision of interest need to be discarded as early as possible in the data-processing chain. The real-time analysis system performs reconstruction and data reduction in two high-level-trigger (HLT) stages. HLT1 performs track reconstruction and partial PID to apply inclusive selections, after which the data is stored in a large disk buffer while alignment and calibration tasks run in semi real-time. The final data reduction occurs at the HLT2 level, with exclusive selections based on full offline-quality event reconstruction. Starting from Upgrade I, all HLT1 algorithms are running on a farm of GPUs, which enabled, for the first time at the LHC, track reconstruction to be performed at a rate of 30 MHz. The HLT2 sequence, on the other hand, is run on a farm of CPU servers – a model that would be prohibitively costly for Upgrade II. Given the current evolution of processor performance, the baseline approach for Upgrade II is to perform the reconstruction algorithms of both HLT1 and HLT2 on GPUs. A strong R&D activity is also foreseen to explore alternative co-processors such as FPGAs and new emerging architectures.
The second computing challenge for LHCb Upgrade II derives from detector simulations. A naive extrapolation from the computing needs of the current detector implies that 2.5 million cores will be needed for simulation in Run 5, which is one order of magnitude above what is available with a flat budget assuming a 10% performance increase of processors per year. All experiments in high-energy physics face this challenge, motivating a vigorous R&D programme across the community to improve the processing time of simulation tools such as GEANT4, both by exploiting co-processors and by parametrising the detector response with machine-learning algorithms.
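The size of the gap can be illustrated with a simple compound-growth estimate; the time span and starting capacity below are assumptions for the sketch, anchored only to the numbers quoted in the text.

```python
# Rough illustration of the simulation-computing gap quoted for Run 5.
# The starting capacity and time span are assumptions for this sketch.
needed_cores = 2.5e6        # cores needed for Run 5 simulation (from the text)
growth_per_year = 1.10      # flat budget, ~10% more performance per year
years = 12                  # assumed span from now until Run 5 operations

available_now = needed_cores / 10.0   # "one order of magnitude" below the need
available_run5 = available_now * growth_per_year ** years
print(f"available at Run 5: ~{available_run5/1e6:.1f}M cores "
      f"vs needed {needed_cores/1e6:.1f}M")
# Even after a decade of 10%-per-year gains (a factor of ~3), a flat budget
# falls well short, hence the drive towards faster simulation and co-processors.
```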
Intimately linked with digital technologies today are energy consumption and efficiency. Already in Run 3, the GPU-based HLT1 is up to 30% more energy-efficient than the originally planned CPU-based version. The data centre is designed for the highest energy-efficiency, resulting in a power usage that compares favourably with other large computing centres. Also for Upgrade II, special focus will be placed on designing efficient code and fully exploiting efficient technologies, as well as designing a compact data acquisition system and optimally using the data centre.
A flavour of the future
The LHC is a remarkable machine that has already made a paradigm-shifting discovery with the observation of the Higgs boson. Exploration of the flavour-physics domain, which is a complementary but equally powerful way to search for new particles in high-energy collisions, is essential to pursue the next major milestone. The proposed LHCb Upgrade II detector will be able to accomplish this by exploring energy scales well beyond those reachable by direct searches. The proposal has received strong support from the 2020 update of the European strategy for particle physics, and the framework technical design report was positively reviewed by the LHC experiments committee. The challenges of performing precision flavour physics in the very harsh conditions of the HL-LHC are daunting, triggering a vast R&D programme at the forefront of technology. The goal of the LHCb teams is to begin construction of all detector components in the next few years, ready to install the new detector at the time of Long Shutdown 4.
The ALICE experiment at the LHC was conceived to study the properties of the quark–gluon plasma (QGP), the state of matter prevailing a few microseconds after the Big Bang. Collisions between large nuclei in the LHC produce matter at temperatures of about 3 × 10¹² K, sufficiently high to liberate quarks and gluons, and thus to study the deconfined QGP state in the laboratory. The heavy-ion programme at LHC Runs 1 and 2 has already enabled the ALICE collaboration to study the formation of the QGP, its collective expansion and its properties, using for example the interactions of heavy quarks and high-energy partons with the QGP. ALICE 3 builds on these discoveries to reach the next level of understanding.
One of the most striking discoveries at the LHC is that J/ψ mesons not only “melt” in the QGP but can also be regenerated from charm quarks produced in independent hard scatterings. The LHC programme has also shown that the energy loss of partons propagating through the plasma depends on their mass. Furthermore, collective behaviour and enhanced strange-baryon production have been observed in selected proton–proton collisions in which large numbers of particles are produced, signalling that high densities may be reached in such collisions.
During Long Shutdown 2, a major upgrade of the ALICE detector (ALICE 2) was completed on budget and in time for the start of Run 3 in 2022. Together with improvements to the LHC itself, the experiment will profit from a factor-50 higher Pb–Pb collision rate and will also provide better pointing resolution. This will bring qualitative improvements to the entire physics programme, in particular for the detection of heavy-flavour hadrons and of thermal di-electron radiation. However, several important questions – for example concerning the mechanisms leading to thermal equilibrium and the formation of hadrons in the QGP – will remain open even after Runs 3 and 4. To address these, the collaboration is pursuing next-generation technologies to build a new detector with significantly larger rapidity coverage and excellent pointing resolution and particle identification (see “Brand new” figure). A letter of intent for ALICE 3, to be installed in 2033/2034 (Long Shutdown 4) and operated during Runs 5 and 6 (starting in 2035), was submitted to the LHC experiments committee in 2021 and received a positive evaluation from the extended review panel in March 2022.
Behind the curtain of hadronisation
In heavy-ion collisions at the LHC, a large amount of energy is deposited in a small volume, forming a QGP. The plasma immediately starts expanding and cooling, eventually reaching a temperature at which hadrons are formed. Although the hadrons formed at this phase boundary carry information about the expansion of the plasma, they do not directly reveal the temperature and other properties of the hot plasma phase that precedes hadronisation. Photons and lepton pairs (dileptons), which are produced as thermal radiation in electromagnetic processes and do not participate in the strong interaction, allow us to look behind the curtain of hadronisation. However, measurements of photon and dilepton emission are challenging owing to the large background from electromagnetic decays of light hadrons and weak decays of heavy-flavour hadrons.
One of the goals of the current ALICE 2 upgrades is to enable, during Runs 3 and 4, the first measurements of the thermal emission of electron–positron pairs (from virtual photons), and thus to determine the average temperature of the system before hadrons form. To track how the temperature evolves with time, larger data samples and excellent background rejection are needed. The early-stage temperature is determined from the exponential slope of the mass distribution above the ρ resonance, i.e. for pair masses larger than 1.2 GeV/c² (see “Taking the temperature” figure, upper panel). ALICE 3 would be able to explore the time dependence of the temperature before hadronisation using more differential measurements, e.g. of the azimuthal asymmetry of di-electron emission and of the slope of the mass spectrum as a function of transverse momentum.
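As a toy illustration of how such a slope measurement translates into a temperature, the following sketch fits a simple exp(−M/T) form to a simulated spectrum above 1.2 GeV/c². The functional form (which neglects phase-space and detector effects) and all numbers are assumptions for illustration only, not the ALICE analysis.

```python
# Toy illustration of extracting a temperature from the exponential slope of
# a thermal di-electron mass spectrum above 1.2 GeV/c^2. The dN/dM ~ exp(-M/T)
# form and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
T_true = 0.30                                  # assumed "true" temperature in GeV
masses = np.linspace(1.2, 2.5, 14)             # GeV/c^2, above the rho region
yields = rng.poisson(1e4 * np.exp(-masses / T_true)).astype(float)  # with counting noise

def thermal(m, norm, T):
    return norm * np.exp(-m / T)

popt, _ = curve_fit(thermal, masses, yields, p0=(1e4, 0.2),
                    sigma=np.sqrt(np.maximum(yields, 1.0)))
print(f"fitted T = {popt[1]*1000:.0f} MeV")    # close to the input of 300 MeV
```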
The di-electron mass spectrum also carries unique information about the mechanism of chiral symmetry breaking – a fundamental quantum-chromodynamics (QCD) effect that generates most of the hadron mass. At the phase transition to the QGP, chiral symmetry is restored and quarks and gluons are deconfined. One of the predicted signals of this transition is mixing between the ρ (vector) and a1 (axial-vector) meson states, which gives the di-electron invariant-mass spectrum a characteristic exponential shape in the mass range above the ρ-meson peak (0.8–1.1 GeV/c²). Only the excellent electron identification and rejection of electrons from heavy-flavour decays possible with ALICE 3 can give physicists experimental access to this effect (see “Taking the temperature” figure, lower panel).
Another important goal of the ALICE physics programme is to understand how energetic quarks and gluons interact with the QGP and eventually thermalise and form a plasma that behaves as a fluid with very low internal friction. The thermalisation process and the properties of the QGP are governed by low-momentum interactions between quarks and gluons, which cannot be calculated using perturbative techniques. Experimental input is therefore important to understand these phenomena and to link them to fundamental QCD.
Heavy quarks
The heavy charm and beauty quarks are of particular interest because their interactions with the plasma can be calculated using lattice-QCD techniques with good theoretical control. Heavy quarks and antiquarks are mostly produced as back-to-back pairs in hard scatterings in the early phase of the collision. Subsequent interactions between the quarks and the plasma change the angle between the quark and antiquark. In addition, the “drag” from the plasma leads to an asymmetry in the overall azimuthal distributions of heavy quarks (elliptic flow) with respect to the reaction plane. The size of these effects is a measure of the strength of the interactions with the plasma. Since quark flavour is conserved in interactions in the plasma, measurements of hadrons containing heavy quarks, such as the D meson and Λc baryon, are directly sensitive to the interactions between heavy quarks and the plasma. While the increase in statistics and the improved spatial resolution of ALICE 2 will already allow us to measure the production of charm baryons, measurements of azimuthal correlations of charm–hadron pairs are needed to directly address how they interact with the plasma. These will only become possible with the precision, statistics and acceptance of ALICE 3.
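For readers unfamiliar with the observable, the elliptic flow mentioned above is quantified by the second Fourier coefficient v2 of the azimuthal distribution, dN/dφ ∝ 1 + 2v2 cos[2(φ − Ψ_RP)], where Ψ_RP is the reaction-plane angle. The toy sketch below, with purely illustrative numbers and a perfectly known reaction plane, shows how v2 reduces to an average of cos 2(φ − Ψ_RP).

```python
# Minimal sketch of estimating an elliptic-flow coefficient v2 from azimuthal
# angles relative to the reaction plane, via dN/dphi ~ 1 + 2*v2*cos(2*(phi - Psi_RP)).
# Toy numbers and an exactly known reaction plane; real analyses must estimate it.
import numpy as np

rng = np.random.default_rng(1)
v2_true, psi_rp = 0.10, 0.0                    # assumed input values

# sample angles from the flow-modulated distribution by accept-reject
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
weight = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi_rp))
phi = phi[rng.uniform(0.0, 1.0 + 2.0 * abs(v2_true), phi.size) < weight]

v2_est = np.mean(np.cos(2.0 * (phi - psi_rp)))  # event-plane estimate of v2
print(f"v2 = {v2_est:.3f}")                     # close to the input of 0.10
```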
Heavier beauty quarks are expected to take longer to thermalise and therefore lose less information through their interactions with the QGP. Therefore, systematic measurements of transverse-momentum distributions and azimuthal asymmetries of beauty mesons and baryons in heavy-ion collisions are essential to map out the interactions of heavy-flavour quarks with the QGP and to understand the mechanisms that drive the system towards thermal equilibrium.
To understand how hadrons emerge from the QGP, those containing multiple heavy quarks are of particular interest because they can only be formed from quarks that were produced in separate hard-scattering processes. If full thermal equilibrium is reached in Pb–Pb collisions, the production rates of such states are expected to be enhanced by up to three orders of magnitude with respect to pp collisions. This implies enormous sensitivity to the probability of combining independently produced quarks during hadronisation and to the degree of thermalisation. ALICE 3 will substantially enhance the precision with which multi-charm baryon yields can be measured (see “Multi-charm production” figure).
In addition to precision measurements of di-electrons and heavy-flavour hadrons, ALICE 3 will allow us to investigate many more aspects of the QGP. These include fluctuations of conserved quantum numbers, such as flavour and baryon number, which are sensitive to the nature of the deconfinement phase transition of QCD. ALICE 3 will also aim to answer questions in hadron physics, for example by searching for the existence of nuclei containing charm baryons (analogous to strange baryons in hypernuclei) and by studying the interaction potentials between unstable hadrons, which may elucidate the structure of exotic hadronic states that have recently been discovered in electron–positron collisions and in hadronic collisions at the LHC. In addition, ALICE 3 will use ultra-peripheral collisions to study the structure of resonances such as the ρ′ and to look for new fundamental particles, such as axion-like particles and dark photons. A dedicated detector system is foreseen to study very low-energy photon production, which can be used to test “soft theorems” that link the production of very soft photons in a collision to the hadronic final state.
Pushing the experimental limits
To pursue this ambitious physics programme, ALICE 3 is designed as a compact, large-acceptance tracking and particle-identification detector with excellent pointing resolution as well as high readout rates. The main tracking information is provided by an all-silicon tracker located in a magnetic field produced by a superconducting magnet system, complemented by a dedicated vertex detector that will have to be retractable to provide the required aperture for the LHC at injection energy. To achieve the ultimate pointing resolution, the first hits must be detected as close as possible to the interaction point (5 mm at top energy) and the amount of material in front of the first detection layer must be kept to a minimum. The inner tracking layers will also enable so-called strangeness tracking – the direct detection of strange baryons before they decay – to improve the pointing resolution and suppress combinatorial background, for example in the measurement of multi-charm baryon decays.
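To see why the 5 mm radius and the 0.05% material budget matter, one can estimate the multiple-scattering contribution to the pointing resolution with the standard Highland parametrisation. The sketch below uses the layer radius and material budget quoted above; the choice of a 200 MeV/c pion, and the neglect of the beam pipe and of the intrinsic hit resolution, are simplifying assumptions.

```python
# Rough estimate of the multiple-scattering contribution to the pointing
# resolution from the first detection layer, using the PDG Highland formula.
# Layer radius and material budget follow the text; particle type and momentum
# are illustrative, and the beam pipe and hit resolution are neglected.
import math

def theta0(p_gev, beta, x_over_X0):
    """Highland multiple-scattering angle (radians) for a unit-charge particle."""
    return (0.0136 / (beta * p_gev)) * math.sqrt(x_over_X0) * (1 + 0.038 * math.log(x_over_X0))

r1 = 5e-3            # first hit at 5 mm from the interaction point (from the text)
x_over_X0 = 5e-4     # 0.05% of a radiation length per vertex layer (from the text)

p, m = 0.2, 0.14     # a 200 MeV/c pion, as an example of a soft track
beta = p / math.hypot(p, m)
sigma_ms = r1 * theta0(p, beta, x_over_X0)
print(f"~{sigma_ms*1e6:.0f} micron")   # of order a few microns for these assumptions
```

Because the scattering contribution grows linearly with the radius of the first layer and with the square root of its thickness, thin detection layers placed as close as possible to the beam are the key to the required pointing performance.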
First feasibility studies of the mechanical design of the vertex tracker and of its integration with the LHC have been conducted, and engineering models have been produced to demonstrate the concept and to explore production techniques for the components (see “Close encounters” image). The detection layers are to be constructed from bent, wafer-scale pixel sensors. The development of the next generation of CMOS pixel sensors in 65 nm technology, with higher radiation tolerance and improved spatial resolution, has already started in the context of the ITS 3 project in ALICE, which will be an important milestone on the way to ALICE 3 (see “Next-gen tracking” image). The outer tracker, which has to cover a cylindrical volume out to a radius of 80 cm and over a length of ±4 m along the beam direction, will also use CMOS pixel sensors. These will be integrated into larger modules to instrument a surface of about 60 m² while minimising the material used for mechanical support and services. The foreseen material budget is 1% of a radiation length per layer for the outer tracker and only 0.05% per layer for the vertex tracker.
For particle identification, five detector systems are foreseen: a silicon-based time-of-flight system and a ring-imaging Cherenkov (RICH) detector, which together provide hadron and electron identification over a broad momentum range; a muon identifier, effective above a transverse momentum of about 1.5 GeV/c; an electromagnetic calorimeter for photon detection and identification; and a forward tracker to reconstruct very low-momentum photons from their conversions to electron–positron pairs. For the time-of-flight system, the main R&D line aims at integrating a gain layer in monolithic CMOS sensors to achieve the required time resolution of 20 ps or better (alternatively, low-gain avalanche diodes with external readout circuitry could be used). The calorimeter is based on a combination of lead-sampling and lead-tungstate segments, both read out by commercially available silicon photomultipliers (SiPMs). For the detection layers of the muon identifier, both resistive-plate chambers and scintillating bars are being considered. Finally, for the RICH, the R&D goal is to integrate the digital readout circuitry in the SiPMs to enable efficient detection of photons in the visible range.
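The 20 ps target can be motivated with a simple flight-time estimate: the pion–kaon time difference over a given path shrinks rapidly with momentum, so the achievable separation is set directly by the timing resolution. The 1 m flight path in the sketch below is an assumed, illustrative number, not the ALICE 3 geometry.

```python
# Illustration of why ~20 ps timing is needed for time-of-flight particle
# identification: pion/kaon flight-time difference versus momentum.
# The 1 m flight path is an assumed, illustrative number.
import math

C = 0.299792458                 # speed of light in m/ns
M_PI, M_K = 0.1396, 0.4937      # pion and kaon masses in GeV/c^2

def flight_time_ns(p, m, path_m=1.0):
    """Time of flight for momentum p (GeV/c) and mass m over path_m metres."""
    return (path_m / C) * math.sqrt(1 + (m / p) ** 2)

for p in (1.0, 2.0, 3.0, 4.0):
    dt_ps = (flight_time_ns(p, M_K) - flight_time_ns(p, M_PI)) * 1e3
    n_sigma = dt_ps / 20.0      # separation in units of a 20 ps resolution
    print(f"p = {p} GeV/c: dt = {dt_ps:.0f} ps, {n_sigma:.1f} sigma")
```

In this simple picture the π/K separation falls to a couple of standard deviations by about 3 GeV/c, which illustrates why timing at the 20 ps level is needed for hadron identification over a broad momentum range.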
ALICE 3 provides a roadmap for an exciting heavy-ion physics programme in Runs 5 and 6, alongside the other three large LHC experiments. An R&D programme is being set up for the coming years to establish the required technologies and to enable the preparation of technical design reports in 2026/2027. These developments not only constitute an important contribution to the full physics exploitation of the LHC, but are also of strategic interest for future particle detectors and will benefit the particle- and nuclear-physics community at large.