The Future Circular Collider (FCC) is envisaged to be a multi-stage facility for exploring the energy and intensity frontiers of particle physics. An initial electron–positron collider phase (FCC-ee) would focus on ultra-precise measurements at the centre-of-mass energies required to create Z bosons, W-boson pairs, Higgs bosons and top-quark pairs, followed by proton and heavy-ion collisions in a hadron-collider phase (FCC-hh), which would probe the energy frontier directly. As recommended by the 2020 update of the European strategy for particle physics, a feasibility study for the FCC is in full swing. Following the submission to the CERN Council of the study’s midterm report earlier this year (CERN Courier March/April 2024 pp25–38), and the signing of a joint statement of intent on planning for large research infrastructures by CERN and the US government (CERN Courier July/August 2024 p10), FCC Week 2024 convened more than 450 scientists, researchers and industry leaders in San Francisco from 10 to 14 June, with the aim of engaging the wider scientific community, in particular in North America. Since then, more than 20 groups have joined the FCC collaboration.
SLAC and LBNL directors John Sarrao and Mike Witherell opened the meeting by emphasising the vital roles of international collaboration between national laboratories in advancing scientific discovery. Sarrao highlighted SLAC’s historical contributions to high-energy physics and expressed enthusiasm for the FCC’s scientific potential. Witherell reflected on the legacy of particle accelerators in fundamental science and the importance of continued innovation.
CERN Director-General Fabiola Gianotti identified three pillars of her vision for the laboratory: flagship projects like the LHC; a diverse complementary scientific programme; and preparations for future projects. She identified the FCC as the best future match for this vision, asserting that it has unparalleled potential for discovering new physics and can accommodate a large and diverse scientific community. “It is crucial to design a facility that offers a broad scientific programme, many experiments and exciting physics to attract young talents,” she said.
International collaboration, especially with the US, is important in ensuring the project’s success
FCC-ee would operate at several centre-of-mass energies corresponding to the Z-boson pole, W-boson pair production, Higgs-boson production and top-quark pair production. The beam current at each of these points would be determined by the design value of 50 MW of synchrotron-radiation power per beam. At lower energies, the machine could accommodate more bunches, achieving 1.3 amperes and a luminosity in excess of 10³⁶ cm⁻² s⁻¹ at the Z pole. Measurements of electroweak observables and Higgs-boson couplings would be improved by a factor of between 10 and 50. Remarkably, FCC-ee would also provide 10 times the ambitious design statistics of SuperKEKB/Belle II for bottom and charm quarks, making it the world-leading machine at the intensity frontier. Along with other measurements of electroweak observables, FCC-ee would indirectly probe energies up to 70 TeV for weakly interacting particles. Unlike at proposed linear colliders, four interaction points would increase scientific robustness, reduce systematic uncertainties and allow for specialised experiments, maximising the collider’s physics output.
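The arithmetic behind those numbers is compact enough to sketch. The snippet below estimates the beam current allowed by the 50 MW synchrotron-radiation budget at the Z pole, assuming a beam energy of 45.6 GeV and an effective bending radius of about 9900 m for the ~90 km ring; these are illustrative values, not official FCC-ee parameters.

```python
# Back-of-envelope check of the quoted 1.3 A at the Z pole. The beam energy
# (45.6 GeV) and effective bending radius (~9900 m) are assumptions chosen
# for illustration, not official FCC-ee design parameters.
C_GAMMA = 8.85e-5  # m GeV^-3, synchrotron-radiation constant for electrons

def beam_current(energy_gev: float, bend_radius_m: float, sr_power_mw: float) -> float:
    """Beam current (A) allowed by a given synchrotron-radiation power budget."""
    u0_gev = C_GAMMA * energy_gev**4 / bend_radius_m  # energy loss per turn
    return sr_power_mw * 1e6 / (u0_gev * 1e9)         # I = P / U0 (eV == V per electron)

print(f"{beam_current(45.6, 9900, 50):.2f} A")  # ~1.29 A, close to the quoted 1.3 A
```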
For FCC-hh, two approaches are being pursued for the necessary high-field superconducting magnets. The first involves advancing niobium–tin technology, which is currently mastered at 11–12 T for the High-Luminosity LHC, with the goal of reaching operational fields of 14 T. The second focuses on high-temperature superconductors (HTS) such as REBCO and iron-based superconductors (IBS). REBCO comes mainly in tape form (CERN Courier May/June 2023 p37), whereas IBS comes in both tape and wire form. With niobium–tin, 14 T would allow proton–proton collision energies of 80 TeV in a 90 km ring. HTS-based magnets could potentially reach fields up to 20 T, and centre-of-mass energies proportionally higher, in the vicinity of 120 TeV. If HTS magnets prove technically feasible, they could greatly decrease the cryogenic power. The development of such technologies also holds great promise beyond fundamental research, for example in transportation and electricity transmission.
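The quoted energies follow from the standard bending relation p ≈ 0.3 Bρ (momentum in GeV, field in tesla, bending radius in metres). A minimal sketch, assuming an effective bending radius of ~9.5 km for the 90 km ring (a dipole filling factor of roughly two-thirds, our assumption):

```python
# Rough scaling of collision energy with dipole field, using p [GeV] ~ 0.3 * B * rho.
# The effective bending radius of 9500 m is an illustrative assumption.
def cm_energy_tev(field_t: float, bend_radius_m: float = 9500) -> float:
    """Proton-proton centre-of-mass energy (TeV) for a given dipole field."""
    beam_energy_gev = 0.3 * field_t * bend_radius_m
    return 2 * beam_energy_gev / 1000  # two counter-rotating beams

print(cm_energy_tev(14))  # ~80 TeV with niobium-tin magnets
print(cm_energy_tev(20))  # ~114 TeV, in line with "the vicinity of 120 TeV" for HTS
```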
FCC study leader Michael Benedikt (CERN) outlined the status of the ongoing feasibility study, which is set to be completed by March 2025. No technical showstoppers have yet been found, paving the way for the next phase of detailed technical and environmental impact studies and critical site investigations. Benedikt stressed the importance of international collaboration, especially with the US, in ensuring the project’s success.
The next step for the FCC project is to provide information to the CERN Council, via the upcoming update of the European strategy for particle physics, to facilitate a decision on whether to pursue the FCC by the end of 2027 or in early 2028. This includes further developing the civil engineering and technical design of major systems and components to present a more detailed cost estimate, continuing technical R&D activities, and working with CERN’s host states on regional implementation development and authorisation processes along with the launch of an environmental impact study. The FCC would cross the territory of 31 municipalities in France and 10 in Switzerland. Detailed work is ongoing to identify and reserve plots of land for surface sites, address site-specific design aspects, and explore socio-economic and ecological opportunities such as waste-heat utilisation.
In the Large Hadron Collider (LHC), counter-rotating beams of protons travel in separate chambers under high vacuum to avoid scattering with gas molecules. At four places around the 27-km ring, the beams enter a single chamber, where they collide. To ensure that particles emerging from the high-energy collisions pass into the ALICE, ATLAS, CMS and LHCb detectors with minimal disturbance, the experiments’ vacuum chambers must be as transparent as possible to radiation, placing high demands on materials and production.
The sole material suitable for the beam pipes at the heart of the LHC experiments is beryllium — a substance used in only a few other domains, such as the aerospace industry. Its low atomic number (Z = 4) leads to minimal interaction with high-energy particles, reducing scattering and energy loss. The only solid element with a lower atomic number is lithium (Z = 3), but it cannot be used as it oxidises rapidly and reacts violently with moisture, producing flammable hydrogen gas. Despite being less dense than aluminium, beryllium is six times stronger than steel, and can withstand the mechanical stresses and thermal loads encountered during collider operations. Beryllium also has good thermal conductivity, which helps dissipate the heat generated during beam collisions, preventing the beam pipe from overheating.
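Transparency can be made quantitative through the radiation length X0, the distance over which a high-energy electron loses all but 1/e of its energy. Using standard PDG values, the sketch below compares how much of a radiation length a thin wall of each material presents; the 0.8 mm wall thickness is an illustrative assumption, not an LHC specification.

```python
# Why low-Z matters: fraction of a radiation length presented by a thin wall.
# X0 values are standard PDG numbers; the wall thickness is illustrative.
X0_CM = {"Be": 35.28, "Al": 8.897, "Fe": 1.757}

wall_cm = 0.08  # assumed 0.8 mm wall
for material, x0 in X0_CM.items():
    print(f"{material}: {100 * wall_cm / x0:.2f}% of X0")
# Be: ~0.23% of X0 -- roughly 4x more transparent than aluminium and
# ~20x more than iron (steel) of the same thickness
```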
But beryllium also has drawbacks. It is expensive to procure as it comes in the form of a powder that must be compressed at very high pressure to obtain metal rods, and as beryllium is toxic, all manufacturing steps require strict safety procedures.
By bringing beam-pipe production in-house, CERN will acquire unique expertise
The last supplier worldwide able to machine and weld beryllium beam pipes within the strict tolerances required by the LHC experiments decided to discontinue production in 2023. Given the need for multiple new beam pipes as part of the forthcoming high-luminosity upgrade to the LHC (HL-LHC), CERN has decided to build a new facility to manufacture vacuum pipes on site, including parts made of beryllium. A 650 m2 workshop is scheduled to begin operations on CERN’s Prévessin site next year.
By insourcing beryllium beam-pipe production, CERN will gain direct control of the manufacturing process, allowing stricter quality assurance and greater flexibility to meet changing experimental requirements. The new facility will include several spaces to perform metallurgical analysis, machining of components, surface treatments, final assembly by electron-beam welding, and quality control steps such as metrology and non-destructive tests. Once the beryllium beam pipes are fabricated, they will follow the usual ultra-high-vacuum conditioning steps already available in CERN’s facilities. These include helium leak tests, non-evaporable-getter thin-film coatings, the installation of bakeout equipment, and final vacuum assessments.
Once the new workshop is operational, the validation of the different manufacturing processes will continue until mid-2026. Production will then begin for new beam pipes for the ALICE, ATLAS and CMS experiments in time for the HL-LHC, as each experiment will replace their pixel tracker – the sub-detector closest to the beam – and therefore require a new vacuum chamber. With stricter manufacturing requirements than ever achieved before, and a conical section designed to maximise transparency in the forward regions, where particles pass through at smaller angles, ALICE’s vacuum chamber will pose a particular challenge. Together totalling 21 m in length, the first three beam pipes to be constructed at CERN will be installed in the detectors during the LHC’s Long Shutdown 3 from 2027 to 2028.
By bringing beam-pipe production in-house, CERN will acquire unique expertise that will be useful not only for the HL-LHC experiments, but also for future projects and other accelerators around the world, and preserve a fundamental technology for experimental beam pipes.
In 1895, in a darkened lab in Würzburg, Bavaria, Wilhelm Röntgen noticed that a screen coated with barium platinocyanide fluoresced, despite being shielded from the electron beam of his cathode-ray tube. Hitherto undiscovered “X”-rays were being emitted as the electrons braked sharply in the tube’s anode and glass casing. A week later, Röntgen imaged his wife’s hand using a photographic plate, and medicine was changed forever. X-rays would be used for non-invasive diagnosis and treatment, and would inspire countless innovations in medical imaging. Röntgen declined to patent the discovery of X-ray imaging, believing that scientific advancements should benefit all of humanity, and donated the proceeds of the first Nobel Prize for Physics to his university.
One hundred years later, medical imaging would once again be disrupted – not in a darkened lab in Bavaria, but in the heart of the Large Hadron Collider (LHC) at CERN. The innovation in question is the hybrid pixel detector, which allows remarkably clean track reconstruction. When the technology is adapted for use in a medical context, by modifying the electronics at the pixel level, X-rays can be individually detected and their energy measured, leading to spectroscopic X-ray images that distinguish between different materials in the body. In this way, black and white medical imaging is being reinvented in full colour, allowing more precise diagnoses with lower radiation doses.
The next step is to exploit precise timing in each pixel. The benefits will be broadly felt. Electron microscopy of biological samples can be clearer and more detailed. Biomolecules can be more precisely identified and quantified by imaging time-of-flight mass spectrometry. Radiation doses can be better controlled in hadron therapy, reducing damage to healthy tissue. Ultra-fast changes can be captured in detail at synchrotron light sources. Hybrid pixel detectors with fast time readout are even being used to monitor quantum-mechanical processes.
Digital-camera drawbacks
X-ray imaging has come a long way since the photographic plate. Most often, the electronics work in the same way as a cell-phone camera. A scintillating material converts X-rays into visible photons that are detected by light-sensitive diodes connected to charge-integrating electronics. The charge from high-energy and low-energy photons is simply added up within the pixel in the same way a photographic film is darkened by X-rays.
Charge integration is the technique of choice in the flat-panel detectors used in radiology as large surfaces can be covered relatively cheaply, but there are several drawbacks. It’s difficult to collect the scintillation light from an X-ray on a single pixel, as it spreads out. And information about the energy of the X-rays is lost.
By the 1990s, however, LHC detector R&D was driving the development of the hybrid pixel detector, which could solve both problems by detecting individual photons. It soon became clear that “photon counting” could be as useful in a hospital ward as it would prove to be in a high-energy-physics particle detector. In 1997 the Medipix collaboration first paired semiconductor sensors with readout chips capable of counting individual X-rays.
Nearly three decades later, hybrid pixel detectors are making their mark in hospital wards. Parallel to the meticulous process of preparing a technology for medical applications in partnership with industry, researchers have continued to push the limits of the technology, in pursuit of new innovations and applications.
Photon counting
In a hybrid pixel detector, semiconductor sensor pixels are individually fixed to readout chips by an array of bump bonds – tiny balls of solder that permit the charge signal in each sensor pixel to be passed to each readout pixel (see “Hybrid pixels” figure). In these detectors, low-noise pulse-processing electronics take advantage of the intrinsic properties of semiconductors to provide clean track reconstruction even at high rates (see “Semiconductor subtlety” panel).
Since silicon detectors are relatively transparent to the X-ray energies used in medical imaging (approximately 20 to 140 keV), denser sensor materials with higher stopping power are required to capture every photon passing through the patient. This is where hybrid pixel detectors really come into their own. For X-ray photons with an energy above about 20 keV, a highly absorbing material such as cadmium telluride can be used in place of the silicon used in the LHC experiments. Provided precautions are taken to deal with charge sharing between pixels, the number of X-rays in every energy bin can be recorded, allowing each pixel to measure the spectrum of the interacting X-rays.
Semiconductor subtlety
In insulators, the conduction band is far above the energy of electrons in the valence band, making it difficult for current to flow. In conductors, the two bands overlap and current flows with little resistance. In semiconductors, the gap is just a couple of electron-volts. Passing charged particles, such as those created in the LHC experiments, promote thousands of valence electrons into the conduction band, creating positively charged “holes” in the valence band and allowing current to flow.
Silicon has four valence electrons and therefore forms four covalent bonds with neighbouring atoms to fill up its outermost shell in silicon crystals. These crystals can be doped with impurities that either add additional electrons to the conduction band (n-type doping) or additional holes to the valence band (p-type doping). The silicon pixel sensors used at the LHC are made up of rectangular pixels doped with additional holes on one side coupled to a single large electrode doped with additional electrons on the rear (see “Pixel picture” figure).
In p-n junctions such as these, “depletion zones” develop at the pixel boundaries, where neighbouring electrons and holes recombine, generating a natural electric field. The depletion zones can be extended throughout the whole sensor by applying a strong “reverse-bias” field in the opposite direction. When a charged particle passes, electrons and holes are created as before, but thanks to the field a directed pulse of charge now flows across the bump bond into the readout chip. Charge collection is prompt, permitting the pixel to be ready for the next particle.
In each readout pixel the detected charge pulse is compared with an externally adjustable threshold. If the pulse exceeds the threshold, its amplitude and timing can be measured. The threshold level is typically set to be many times higher than the electronic noise of the detection circuit, permitting noise-free images. Because of the intimate contact between the sensor and the readout circuit, the noise is typically less than a root-mean-square value of 100 electrons, and any signal higher than a threshold of about 500 electrons can be unambiguously detected. Pixels that are not hit remain silent.
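Rough numbers show why such a threshold yields noise-free images. The sketch below compares typical signals with the ~500-electron threshold, using standard pair-creation energies (about 3.6 eV per electron–hole pair in silicon, about 4.4 eV in cadmium telluride); the sensor thickness and ionisation density are typical figures assumed for illustration.

```python
# Rough signal sizes versus the ~500-electron threshold quoted above.
# Pair-creation energies are standard values; the 300 um thickness and
# ~80 pairs/um for a minimum-ionising particle are typical assumed figures.
def pairs(deposited_ev: float, pair_energy_ev: float) -> int:
    """Electron-hole pairs created by a given energy deposit."""
    return round(deposited_ev / pair_energy_ev)

print(pairs(60_000, 4.4))  # 60 keV X-ray stopped in CdTe: ~13,600 pairs
print(80 * 300)            # MIP crossing 300 um of silicon: ~24,000 pairs
# Both signals dwarf the ~500-electron threshold and ~100-electron noise,
# so genuine hits are cleanly detected while empty pixels stay silent.
```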
In the LHC, each passing particle liberates thousands of electrons, allowing clean images of the collisions to be taken even at very high rates. Hybrid pixels have therefore become the detector of choice in many large experiments where fast and clean images are needed, and are the heart of the ATLAS, CMS and LHCb experiments. In cases where the event rates are lower, such as the ALICE experiment at the LHC and the Belle II experiment at SuperKEKB at KEK in Japan, it has now become possible to use “monolithic” active pixel detectors, where the sensor and readout electronics are implemented in the same substrate. In the future, as the semiconductor industry shifts to three-dimensional chip and wafer stacking, the distinction between hybrid and monolithic pixel detectors will be blurred.
Protocols regarding the treatment of patients are strictly regulated in the interest of safety, making it challenging to introduce new technologies. Therefore, in parallel with the development of successive generations of Medipix readout chips, a workshop series on the medical applications of spectroscopic X-ray detectors has been hosted at CERN. Now in its seventh edition (see “Threshold moment for medical photon counting”), the workshop gathers cross-disciplinary specialists ranging from the designers of readout chips to specialists at the large equipment suppliers, and from medical physicists all the way up to opinion-leading radiologists. The role of the workshop is the formation and development of a community of practitioners from diverse fields willing to share knowledge – and, of course, reasonable doubts – in order to encourage the transition of spectroscopic photon counting from the lab to the clinic. CERN and the Medipix collaborations have played a pathfinding role in this community, exploring avenues well in advance of their introduction to medical practice.
The Medipix2 (1999–present), Medipix3 (2005–present) and Medipix4 (2016–present) collaborations are composed only of publicly funded research institutes and universities, which helps keep the development programmes driven by science. There have been hundreds of peer-reviewed publications and dozens of PhD theses written by the designers and users of the various chips. With the help of CERN’s Knowledge Transfer Office, several start-up companies have been created and commercial licences signed. This has led to many unforeseen applications and helped enormously with the dissemination of the technology. The publications of the clients of the industrial partners now represent a large share of the scientific outcome from these efforts, totalling hundreds of papers.
Spectroscopic X-ray imaging is now arriving in clinical practice. Siemens Healthineers were first to market in 2022 with the Naeotom Alpha photon counting CT scanner, and many of the first users have been making ground-breaking studies exploiting the newly available spectroscopic information in the clinical domain. CERN’s Medipix3 chip is at the heart of the MARS Bioimaging scanner, which brings unprecedented imaging performance to the point of patient care, opening up new patient pathways and saving time and money.
ASIC (application-specific integrated circuit) development is still moving forwards rapidly in the Medipix collaborations. For example, in the Medipix3 and Medipix4 chips, on-pixel circuitry mitigates the impact of X-ray fluorescence and charge diffusion in the semiconductor by summing up the charge in a localised region and allocating the hit to one pixel. The fine segmentation of the detector not only leads to unprecedented spatial resolution but also mitigates “hole trapping” – a common bugbear of the high-density sensor materials used in medical imaging, whereby photons of the same energy induce different charges according to their interaction depth in the sensor. Where the pixel size is significantly smaller than the perpendicular sensor thickness – as in the Medipix case – only one of the charge species (usually electrons) contributes to the measured charge, and no matter where the X-ray is deposited in the sensor thickness, the total charge detected is the same.
But photon counting is only half the story. Another parameter that has not yet been exploited in high-spatial-resolution medical imaging systems can also be measured at the pixel level.
A new dimension
In 2005, Dutch physicists working with gas detectors requested a modification that would permit each pixel to measure arrival times instead of counting photons. The Medipix2 collaboration agreed and designed a chip with three acquisition modes: photon counting, arrival time and time over threshold, which provides a measure of energy. The Timepix family of pixel-detector readout chips was born.
The most recent generations of Timepix chips, such as Timepix3 (released in 2016) and Timepix4 (released in 2022), stream hit information off chip as soon as it is generated – a significant departure from Medipix chips, which process hits locally, assuming them to be photons, and send only a spectroscopic image off chip. With Timepix, each time a charge exceeds the threshold, a packet of information is sent off chip that contains the coordinates of the hit pixel, the particle’s arrival time and the time over threshold (66 bits in total per hit). This allows offline reconstruction of individual clusters of hits, opening up a myriad of potential new applications.
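A hypothetical decoder illustrates the principle of this data-driven readout. The 66-bit total is from the text, but the individual field widths below are invented for illustration and do not reflect the real chip format.

```python
# Illustrative unpacking of a data-driven hit word. Field widths are
# hypothetical (9 + 9 + 34 + 14 = 66 bits); the actual chip format differs.
from typing import NamedTuple

class Hit(NamedTuple):
    col: int  # pixel column
    row: int  # pixel row
    toa: int  # time of arrival, in clock bins
    tot: int  # time over threshold, a proxy for deposited charge

def decode(packet: int) -> Hit:
    """Unpack one 66-bit hit word (assumed layout, for illustration)."""
    tot = packet & 0x3FFF               # lowest 14 bits
    toa = (packet >> 14) & (2**34 - 1)  # next 34 bits
    row = (packet >> 48) & 0x1FF        # next 9 bits
    col = (packet >> 57) & 0x1FF        # top 9 bits
    return Hit(col, row, toa, tot)
```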
One advantage of Timepix is that particle event reconstruction is not limited to photons. Cosmic muons leave a straight track. Low-energy X-rays interact in a point-like fashion, lighting up only a small number of pixels. Electrons interact with atomic electrons in the sensor material, leaving a curly track. Alpha particles deposit a large quantity of charge in a characteristic blob. To spark the imagination of young people, Timepix chips have been incorporated on a USB thumb drive that can be read out on a laptop computer (see “Thumb-drive detector” figure). The CERN & Society Foundation is raising funds to make these devices widely available in schools.
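This shape information lends itself to simple offline classification. The sketch below is a toy classifier based only on cluster size and elongation, with thresholds of our own choosing; real analyses use far richer features.

```python
# Toy classifier for the track morphologies described above. The feature
# thresholds are invented for illustration, not taken from any analysis.
import numpy as np

def classify(pixels: np.ndarray) -> str:
    """pixels: (N, 2) array of hit coordinates forming one cluster."""
    if len(pixels) <= 4:
        return "photon (point-like)"
    evals = np.linalg.eigvalsh(np.cov(pixels.T))   # spread along principal axes
    elongation = evals[1] / max(evals[0], 1e-9)    # large for straight tracks
    if elongation > 50:
        return "muon (straight track)"
    if len(pixels) > 30 and elongation < 3:
        return "alpha (heavy blob)"
    return "electron (curly track)"
```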
Timepix chips have also been adapted to dose monitoring for astronauts. Following a calibration effort by the University of Houston, NASA and the Institute for Experimental and Applied Physics in Prague, a USB device identical to that used in classrooms precisely measures the doses experienced by flight crews in space. Timepix is now deployed on the International Space Station (see “Radiation monitoring” figure), the Artemis programme and several European space-weather studies, and will be deployed on the Lunar Gateway programme.
Stimulating innovation
Applications in science, industry and medicine are too numerous to mention in detail. In time-of-flight mass spectrometry, the vast number of channels allowed by Timepix promises new insights into biomolecules. Large-area time-resolved X-ray cameras are valuable at synchrotrons, where they have applications in structural biology, materials science, chemistry and environmental science. In the aerospace, manufacturing and construction industries, non-destructive X-ray testing using backscattering can probe the integrity of materials and structures while requiring access from one side only. Timepix chips also play a crucial role in X-ray diffraction for materials analysis and medical applications such as single-photon-emission computed tomography (SPECT), and beam tracking and dose-deposition monitoring in hadron therapy (see “Carbon therapy” figure). The introduction of noise-free hit streaming with timestamp precision down to 200 picoseconds has also opened up entirely new possibilities in quantum science, and early applications of Timepix3 in experiments exploring the quantum behaviour of particles are already being reported. We are just beginning to uncover the potential of these innovations.
It’s also important to note that applications of the Timepix chips are not limited to the readout of semiconductor pixels made of silicon or cadmium telluride. A defining feature of hybrid pixel detectors is that the same readout chip can be used with a variety of sensor materials and structures. In cases where visible photons are to be detected, an electron can be generated in a photocathode and then amplified using a micro-channel plate. The charge cloud from the micro-channel plate is then detected on a bare readout chip in much the same way as the charge cloud in a semiconductor sensor. Some gas-filled detectors are constructed using gas electron multipliers and micromegas foils, which amplify charge passing through holes in the foils. Timepix chips can be used for readout in place of the conventional pad arrays, providing much higher spatial and time resolution than would otherwise be available.
Successive generations of Timepix and Medipix chips have followed Moore’s law, permitting more and more circuitry to be fitted into a single pixel as the minimum feature size of transistors has shrunk. In the Timepix3 and Timepix4 chips, data-driven architecture and on-pixel time stamping are the unique features. The digital circuitry of the pixel has become so complex that an entirely new approach to chip design – “digital-on-top” – was employed. These techniques were subsequently deployed in ASIC developments for the LHC upgrades.
Just as hybrid-pixel R&D at the LHC has benefitted societal applications, R&D for these applications now benefits fundamental research. Making highly optimised chips available to industry “off the shelf” can also save substantial time and effort in many applications in fundamental research, and the highly integrated R&D model whereby detector designers keep one foot in both camps generates creativity and the reciprocal sparking of ideas and sharing of expertise. Timepix3 is used as readout of the beam–gas-interaction monitors at CERN’s Proton Synchrotron and Super Proton Synchrotron, providing non-destructive images of the beams in real time for the first time. The chips are also deployed in the ATLAS and MoEDAL experiments at the LHC, and in numerous small-scale experiments, and Timepix3 know-how helped develop the VeloPix chip used in the upgraded tracking system for the LHCb experiment. Timepix4 R&D is now being applied to the development of a new generation of readout chips for future use at CERN, in applications where a time bin of 50 ps or less is desired.
All these developments have relied on collaborating research organisations being willing to pool the resources needed to take strides into unexplored territory. The effort has been based on the solid technical and administrative infrastructure provided by CERN’s experimental physics department and its knowledge transfer, finance and procurement groups, and many applications have been made possible by hardware provided by the innovative companies that license the Medipix and Timepix chips.
With each new generation of chips, we have pushed the boundaries of what is possible by taking calculated risks ahead of industry. But the high-energy-physics community is under intense pressure, with overstretched resources. Can blue-sky R&D such as this be justified? We believe, in the spirit of Röntgen before us, that we have a duty to make our advancements available to a larger community than our own. Experience shows that when we collaborate across scientific disciplines and with the best in industry, the fruits lead directly back into advancements in our own community.
CPT symmetry, the most fundamental symmetry of the Standard Model, implies exact equality between the fundamental properties of particles and their antimatter conjugates. Testing it requires antimatter particles to be cooled to the lowest possible temperatures. The BASE experiment, located at CERN, has passed a major milestone in this regard. Using a sophisticated system of Penning traps, the collaboration has reduced the time required to cool an antiproton by a factor of more than 100. The considerable improvement makes it possible to measure the antiproton’s properties with unparalleled precision, perhaps shedding light on the mystery of why matter outnumbers antimatter in the universe.
Magnetic moments
BASE (Baryon Antibaryon Symmetry Experiment) specialises in the study of antiprotons by measuring properties such as the magnetic moment and charge-to-mass ratio. The latter quantity has been shown to agree with that of the proton within an experimental uncertainty of 16 parts per trillion. While not nearly as precise due to much higher complexity, measurements of the antiproton’s magnetic moment provide an equally important probe of CPT symmetry.
To determine the antiproton’s magnetic moment, BASE measures the frequency of spin flips of single antiprotons – a remarkable feat that requires the particle to be cooled to less than 200 mK. BASE’s previous setup could achieve this, but only after 15 hours of cooling, explains lead author Barbara Latacz (RIKEN/CERN): “As we need to perform 1000 measurement cycles, it would have taken us three years of non-stop measurements, which would have been unrealistic. By reducing the cooling time to eight minutes, BASE can now obtain all of the 1000 measurements it needs – and thereby improve its precision – in less than a month.” By cooling antiprotons to such low energies, the collaboration has been able to detect antiproton spin transitions with an error rate (< 0.000023) more than three orders of magnitude better than in previous experiments.
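The gain is easy to verify in round numbers; the sketch below counts only the cooling stage (the full cycle also includes the measurement itself, which is why the totals quoted in the text are somewhat longer).

```python
# Round-number check of the quoted speed-up (cooling stage only).
cycles = 1000
old_days = cycles * 15 / 24        # 15 h of cooling per cycle -> ~625 days
new_days = cycles * 8 / (60 * 24)  # 8 min per cycle -> ~5.6 days
print(old_days, new_days, old_days / new_days)  # speed-up well above 100x
```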
Underpinning the BASE breakthrough is an improved cooling trap. BASE takes antiprotons that have been decelerated by the Antiproton Decelerator and the Extra Low Energy Antiproton ring (ELENA) and stores them in batches of around 100 in a Penning trap, which holds them in place using electric and magnetic fields. A single antiproton is then extracted into a system made up of two Penning traps: the first trap measures its temperature and, if it is too high, transfers the antiproton to a second trap to be cooled further. The particle goes back and forth between the two traps until the desired temperature is reached.
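Schematically, this is a simple feedback loop. The toy model below is a sketch; the halving factor per pass and the starting temperature are invented for illustration, and the real control system is considerably more involved.

```python
# Toy model of the two-trap cooling cycle described above (illustrative only).
TARGET_MK = 200.0   # cool below 200 mK before attempting spin-flip detection

temp_mk = 4000.0    # assumed initial cyclotron temperature (a few kelvin)
round_trips = 0
while temp_mk > TARGET_MK:  # temperature check in the measurement trap
    temp_mk *= 0.5          # one pass through the cooling trap (~5 s each)
    round_trips += 1        # transfer back and re-measure
print(f"cold after {round_trips} round trips: {temp_mk:.0f} mK")
```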
The new cooling trap has a diameter of just 3.8 mm, less than half the size of that used in previous experiments, and is equipped with innovative segmented electrodes to reduce the amplitude of one of the antiproton oscillations – the cyclotron mode – more effectively. The readout electronics have also been optimised to reduce background noise. The new system reduces the time spent by the antiproton in the cooling trap during each cycle from 10 minutes to 5 seconds, while improvements to the measurement trap have also made it possible to reduce the measurement time fourfold.
“Up to now, we have been able to compare the magnetic moments of the antiproton and the proton with a precision of one part per billion,” says BASE spokesperson Stefan Ulmer (Max Planck–RIKEN–PTB). “Our new device will allow us to reach a precision of a tenth of a part per billion and, in the very long term, will even allow us to perform experiments with 10 parts-per-trillion resolution. The slightest discrepancy could help solve the mystery of the imbalance between matter and antimatter in the universe.”
The high data rate at the LHC creates challenges as well as opportunities. Great care is required to identify interesting events, as only a tiny fraction can trigger the detector’s readout. With the LHC achieving record-breaking instantaneous luminosity, the CMS collaboration has innovated to protect and expand its flavour-physics programme, which studies rare decays and subtle differences between particles containing beauty and charm quarks. Enhancements in the CMS data-taking strategy such as “data parking” have enabled the detector to surpass its initial performance limits. This has led to notable advances in charm physics, including CMS’s first analysis of CP violation in the charm sector and achieving world-leading sensitivity to the rare decay of the D0 meson into a pair of muons.
Data parking stores subsets of raw data that cannot be processed promptly due to computing limitations. By parking events triggered by a single muon, CMS collected an inclusive sample of approximately 10 billion b-hadrons in 2018. This sample allowed CMS to reconstruct D0 and D̄0 decays into a pair of long-lived K0S mesons, which are relatively easy to detect in the CMS detector despite the high level of pileup and the large number of low-momentum tracks.
CP violation is necessary to explain the matter–antimatter asymmetry observed in the universe, but the magnitude of CP violation from known sources is insufficient. Charmed meson decays are the only meson decays involving an up-type quark where CP violation can be studied. CP violation would be evident if the decay rates for D0 → K0S K0S and D̄0 → K0S K0S were found to differ. In the analysis, the flavour of the initial D0 or D̄0 meson is determined from the charge of the pion accompanying its creation in the decay of a D*+ meson (see figure 1). To eliminate systematic effects arising from the charge asymmetry in production and detector response, the CP asymmetry is measured relative to that in D0 → K0S π+π−. The resulting asymmetry is found to be ACP(K0SK0S) = 6.2% ± 3.0% (stat) ± 0.2% (syst) ± 0.8% (PDG), consistent with no CP violation within 2.0 standard deviations. Previous analyses by LHCb and Belle were consistent with no CP violation within 2.7 and 1.8 standard deviations, respectively. Before data parking, searching for direct CP violation in the charm sector with a fully hadronic final state was deemed unattainable for CMS.
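Schematically, the raw asymmetry in the signal channel is corrected using the control channel, whose world-average CP asymmetry is taken from the PDG; this is the origin of the third, (PDG), uncertainty quoted above. Production and detection asymmetries, common to both channels, cancel in the difference. A hedged sketch of the relation:

```latex
A_{CP}(K^0_S K^0_S) \;\simeq\; A_{\mathrm{raw}}(K^0_S K^0_S)
  \;-\; A_{\mathrm{raw}}(K^0_S \pi^+ \pi^-)
  \;+\; A_{CP}(K^0_S \pi^+ \pi^-)\big|_{\mathrm{PDG}}
```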
The CMS collaboration has expanded its flavour-physics programme
For Run 3 the programme was enhanced by introducing an inclusive dimuon trigger covering the low mass range up to 8.5 GeV. With improvements in the CMS Tier-0 prompt reconstruction workflow, Run-3 parking data is now reconstructed without delay using the former Run-2 high-level trigger farm at LHC Point 5 and European Tier-1 resources. In 2024 CMS is collecting data at rates seven times higher than the nominal rates for Run 2, already reaching approximately 70% of the nominal trigger rate for the HL-LHC.
Using the data collected in 2022 and 2023, CMS performed a search for the rare D0-meson decay into a pair of muons, which was presented at the ICHEP conference in Prague. Rare decays of the charm quark, less explored than those of the bottom quark, offer an opportunity to probe new physics effects beyond the direct reach of current colliders, thanks to possible quantum interference with unknown heavy virtual particles. In 2023, the LHCb collaboration set an upper limit on the branching ratio of 3.5 × 10⁻⁹ at 95% confidence using Run-2 data. CMS surpassed the LHCb result, achieving a sensitivity of 2.6 × 10⁻⁹ at 95% confidence. Given that the Standard Model prediction is four orders of magnitude smaller, there is still considerable territory to explore.
Beginning with the 2024 run, the CMS flavour-physics programme will gain an additional data stream known as data scouting. This stream captures, at very high rate and in a reduced format, events triggered by new high-purity single-muon level-one triggers. The format is suitable for reconstructing decays of heavy hadrons, offering performance comparable to standard data processing.
From 29 January to 1 February, the Chamonix Workshop 2024 upheld its long tradition of fostering open and collaborative discussions within CERN’s accelerator and physics communities. This year marked a significant shift with more explicit inclusion of the injector complex, acknowledging its crucial role in shaping future research endeavours. Chamonix discussions focused on three main areas: maximising the remaining years of Run 3; the High-Luminosity LHC (HL-LHC), preparations for Long Shutdown 3 and operations in Run 4; and a look to the further future and the proposed Future Circular Collider (FCC).
Immense effort
Analysing the performance of CERN’s accelerator complex, speakers noted the impressive progress to date, examined limitations in the LHC and injectors and discussed improvements for optimal performance in upcoming runs. It’s difficult to do justice to the immense technical effort made by all systems, operations and technical infrastructure teams that underpins the exploitation of the complex. Machine availability emerged as a crucial theme, recognised as critical for both maximising the potential of existing facilities and ensuring the success of the HL-LHC. Fault tracking, dedicated maintenance efforts and targeted infrastructure improvements across the complex were highlighted as key contributors to achieving and maintaining optimal uptime.
As the HL-LHC project moves into full series production, the technical challenges associated with magnets, cold powering and crab cavities are being addressed (CERN Courier January/February 2024 p37). Looking beyond Long Shutdown 3 (LS3), potential limitations are already being targeted now, with, for example, electron-cloud mitigation measures planned to be deployed in LS3. The transition to the high-luminosity era will involve a huge programme of work that requires meticulous preparation and a well-coordinated effort across the complex during LS3, which will see the deployment of the HL-LHC, a widespread consolidation effort, and other upgrades such as that planned for the ECN3 cavern at CERN’s North Area.
The vision for the next decades of these facilities is diverse, imaginative and well-motivated from a physics perspective
The breadth and depth of the physics being performed at CERN facilities is quite remarkable, and the Chamonix workshop reconfirmed the high demand from experimentalists across the board. The unique capabilities of ISOLDE, n_TOF, AD-ELENA, and the East and North Areas were recognised. The North Area, for example, provides protons, hadrons, electrons and ion beams for detector R&D, experiments, the CERN neutrino platform and irradiation facilities, and counts more than 2000 users. The vision for the next decades of these facilities is diverse, imaginative and well-motivated from a physics perspective. The potential for long-term exploitation and leveraging fully the capabilities of the LHC and other facilities is considerable, demanding continued support and development.
In the longer term, CERN is exploring the potential construction of the FCC via a dedicated feasibility study that has just delivered a mid-term report – a summary of which was presented at Chamonix. The initiative is accompanied by R&D on key accelerator technologies. The physics case for FCC-ee was well made for an audience of mostly non-particle physicists, concluding that the FCC is the only proposed collider that covers each key area in the field – electroweak, QCD, flavour, Higgs and searches for phenomena beyond the Standard Model – in paradigm-shifting depth.
Environmental consciousness
Sustainability was another focus of the Chamonix workshop. Building and operating future facilities with environmental consciousness is a top priority, and full life-cycle analyses will be performed for any options to help ensure a low-carbon future.
Interesting times, lots to do. To quote former CERN Director-General Herwig Schopper from 1983: “It is therefore clear that, for some time to come, there will be interesting work to do and I doubt whether accelerator experts will find themselves without a job.”
The seventh workshop on Medical Applications of Spectroscopic X-ray Detectors was held at CERN from 15 to 18 April. This year’s workshop brought together more than 100 experts in medical imaging, radiology, physics and engineering. The workshop focused on the latest advancements in spectroscopic X-ray detectors and their applications in medical diagnostics and treatment. Such detectors, whose origins are found in detector R&D for high-energy physics, are now experiencing a breakthrough moment in medical practice.
Spectroscopic X-ray detectors represent a significant advancement in medical imaging. Unlike traditional X-ray detectors that measure only the intensity of X-rays, these advanced detectors can differentiate the energies of X-ray photons. This enables enhanced tissue differentiation, improved tumour detection and advanced material characterisation, which may lead in certain cases to functional imaging without the need for radioactive tracers.
The technology has its roots in the 1980s and 1990s when the high-energy-physics community centred around CERN developed a combination of segmented silicon sensors and very large-scale integration (VLSI) readout circuits to enable precision measurements at unprecedented event rates, leading to the development of hybrid pixel detectors (see p37). In the context of the Medipix collaborations, CERN has coordinated research on spectroscopic X-ray detectors including the development of photon-counting detectors and new semiconductor materials that offer higher sensitivity and energy resolution. By the late 1990s, several groups had proofs of concept, and by 2008, pre-clinical spectral photon-counting computed-tomography (CT) systems were under investigation.
Spectroscopic X-ray detectors offer unparalleled diagnostic capabilities, enabling more detailed imaging and earlier and precise disease detection
In 2011, leading researchers in the field decided to bring together engineers, physicists and clinicians to help address the scientific, medical and engineering challenges associated with guiding the technology toward clinical adoption. In 2021, the FDA approval of Siemens Healthineers’ photon-counting CT scanner marked a significant milestone in the field of medical imaging, validating the clinical benefits of spectroscopic X-ray detectors. The mobile CT scanner OmniTom Elite from NeuroLogica, approved in March 2022, also integrates photon-counting-detector (PCD) technology. The 3D colour X-ray scanner developed by MARS Bioimaging in collaboration with CERN, based on Medipix3 technology, has already shown significant promise in pre-clinical and clinical trials. Clinical trials of MARS scanners have demonstrated applications in detecting acute fractures, evaluating fracture healing and assessing osseous integration at the bone–metal interface in fracture fixations and joint replacements. With more than 300 million CT scans performed annually around the world, the potential impact of spectroscopic X-ray imaging is enormous, but technical and medical challenges remain, and the need for this highly specialised workshop continues.
The scientific presentations in the 2024 workshop covered the integration of spectroscopic CT in clinical workflows, addressed technical challenges in photon-counting detector technology and explored new semiconductor materials for X-ray detectors. The technical sessions on detector physics and technology discussed new methodologies for manufacturing high-purity cadmium–zinc–telluride semiconductor crystals and techniques to enhance the quantum efficiency of current detectors. Sessions on clinical applications and imaging techniques included case studies demonstrating the benefits of multi-energy CT in cardiology and neurology, and advances in using spectroscopic detectors for enhanced contrast agent differentiation. The sessions on computational methods and data processing covered the implementation of AI algorithms to improve image reconstruction and analysis, and efficient storage and retrieval systems for large-scale spectral imaging datasets. The sessions on regulatory and safety aspects focused on the regulatory pathway for new spectroscopic X-ray detectors, ensuring patient and operator safety with high-energy X-ray systems.
Enhancing patient outcomes
The field of spectroscopic X-ray detectors is rapidly evolving. Continued research, collaboration and innovation to enhance medical diagnostics and treatment outcomes will be essential. Spectroscopic X-ray detectors offer unparalleled diagnostic capabilities, enabling more detailed imaging and earlier and precise disease detection, which improves patient outcomes. To stay competitive and meet the demand for precision medicine, medical institutions are increasingly adopting advanced imaging technologies. Continued collaboration among researchers, physicists and industry leaders will drive innovation, benefiting patients, healthcare providers and research institutions.
Neutrino physics requires baselines both big and small, and neutrinos both artificial and astrophysical. One of the most prominent experiments of the past two decades is Tokai-to-Kamioka (T2K), which observes electron–neutrino appearance in an accelerator-produced muon–neutrino “superbeam” travelling coast to coast across Japan. To squeeze systematics in their hunt for leptonic CP violation, the collaboration recently brought online an upgraded near detector.
“The upgraded detectors are precision detectors for a precision-physics era,” says international co-spokesperson Kendall Mahn (Michigan State). “Our current systematic constraint is at the level of a few percent. To make progress we need to be able to probe regions we’ve not probed before.”
T2K studies the oscillations of 600 MeV neutrinos that have travelled 295 km from the J-PARC accelerator complex in Tokai to Super-Kamiokande – a 50 kton gadolinium-doped water-Cherenkov detector in Kamioka that has also been used to perform seminal measurements of atmospheric neutrino oscillations and constrain proton decay. Since the start of data taking in 2010, the collaboration has made the first observation of the appearance of a neutrino flavour due to quantum-mechanical oscillations and the most precise measurement of the θ₂₃ parameter in the neutrino mixing matrix. As well as placing limits on sterile-neutrino oscillation parameters, the collaboration has constrained a wide range of the parameters that describe neutrino interactions with matter. The uncertainties of such measurements typically limit the precision of fits to the fundamental parameters of the three-neutrino paradigm, and constraining neutrino-interaction systematics is the main purpose of near detectors in superbeam experiments such as T2K and NOvA, and the future Hyper-Kamiokande and DUNE.
T2K’s near-detector upgrade improves the acceptance and precision of particle reconstruction for neutrino interactions. A new fine-grained “SuperFGD” detector (see pink rectangle, left, on “New and improved” image) serves as the target for neutrino interactions in the new experimental phase. Comprising two million 1 cm³ cubes of scintillator strung with optical fibres, SuperFGD lowers the detection threshold for protons ejected from nuclei to 300 MeV/c, improving the reconstruction of neutrino energy. Two new time-projection chambers flank it above and below to more closely mimic the isotropic reconstruction of Super-Kamiokande. Finally, six new scintillator planes suppress particle backgrounds from outside the detector by measuring time of flight.
Following construction and testing at CERN’s neutrino platform, the new detectors were successfully integrated in the experiment’s global DAQ and slow-control system. The first neutrino-beam data with the fully upgraded detector was collected in June, with the collaboration also benefitting from an upgraded neutrino beam with 50% greater intensity. Beam intensity is set to increase further in the coming years, in preparation for commissioning the new 260 kton Hyper-Kamiokande water Cherenkov detector. Cavern excavation is underway in Kamioka, with first data-taking planned for 2027.
But much can already be accomplished in the new phase of the T2K experiment, says the team. As well as improving precision on θ₂₃ and another key mixing parameter, Δm²₂₃, and refining the theoretical models used in neutrino generators, T2K will improve its fit to δCP, the fundamental parameter describing CP violation in the leptonic sector. Measuring its value could shed light on the question of why the universe is dominated by matter.
“T2K’s current best fit to δCP is –1.97,” says Mahn. “We expect to be able to observe leptonic CP violation at 3σ significance if the true value of δCP is –π/2.”
Metal cavities are at the heart of the vast majority of the world’s 30,000 or so particle accelerators. Excited by microwaves, these resonant structures are finely tuned to generate oscillating electric fields that accelerate particles over many metres. But what if similar energies could be delivered 100 times more rapidly in structures a few tens of microns wide or less?
The key is to reduce the wavelength of the radiation powering the structure down to the optical scale of lasers. By combining solid-state lasers and modern nanofabrication, accelerating structures can be as small as a single micron wide. Though miniaturisation will never allow bunch charges as large as in today’s science accelerators, field strengths can be much higher before structure damage sets in. The trick is to replace highly conductive structures with dielectrics like silicon, fused silica and diamond, which have a much higher damage threshold at optical wavelengths. The length of accelerators can thereby be reduced by orders of magnitude, with millions to billions of particle pulses accelerated per second, depending on the repetition rate of the laser.
Recent progress with “on chip” accelerators promises powerful, high-energy and high-repetition-rate particle sources that are accessible to academic laboratories. Applications may range from localised particle or X-ray irradiation in medical facilities to quantum communication and computation using ultrasmall bunches of electrons as qubits.
Laser focused
The inspiration for on-chip accelerators dates back to 1962, when Koichi Shimoda of the University of Tokyo proposed using early lasers – then called optical masers – as a way to accelerate charged particles. The first experiments were conducted by shining light onto an open metal grating, generating an optical surface mode that could accelerate electrons passing above the surface. This technique was proposed by Yasutugu Takeda and Isao Matsui in 1968 and experimentally demonstrated by Koichi Mizuno in 1987 using terahertz radiation. In the 1980s, accelerator physicist Robert Palmer of Brookhaven National Laboratory proposed using rows of free-standing pillars of subwavelength separation illuminated by a laser – an idea that has propagated to modern devices.
In the 1990s, the groups of John Rosenzweig and Claudio Pellegrini at UCLA and Robert Byer at Stanford began to use dielectric materials, which offer low power absorption at optical frequencies. For femtosecond laser pulses, a simple dielectric such as silica glass can withstand optical field strengths exceeding 10 GV/m. It became clear that combining lasers with on-chip fabrication using dielectric materials could subject particles to accelerating forces 10 to 100 times higher than in conventional accelerators.
In the intervening decades, the dream of realising a laser-driven micro-accelerator has been enabled by major technological advances in the silicon-microchip industry and solid-state lasers. These industrial technologies have paved the way to fabricate and test particle accelerators made from silicon and other dielectric materials driven by ultrashort pulses of laser light. The dielectric laser accelerator (DLA) has been born.
Accelerator on a chip
Colloquially called an accelerator on a chip, a DLA is a miniature microwave accelerator reinvented at the micron scale using the methods of optical photonics rather than microwave engineering. In both cases, the wavelength of the driving field determines the typical transverse structure dimensions: centimetres for today’s microwave accelerators, but between one and 10 μm for optically powered devices.
Other laser-based approaches to miniaturisation are available. In plasma-wakefield accelerators, particles gain energy from electromagnetic fields excited in an ionised gas by a high-power drive laser (CERN Courier May/June 2024 p25). But the details are starkly different. DLAs are powered by lasers with thousands to millions of times lower peak energy. They operate with more than a million times lower electron charges, but at millions of pulses per second. And unlike plasma accelerators, but similarly to their microwave counterparts, DLAs use a solid material structure with a vacuum channel in which an electromagnetic mode continuously imparts energy to the accelerated particles.
This mode can be created by a single laser pulse perpendicular to the electron trajectory, two pulses from opposite sides, or a single pulse directed downwards into the plane of the chip. The latter two options offer better field symmetry.
As the laser impinges on the structure, its electrons experience an electromagnetic force that oscillates at the laser frequency. Particles that are correctly matched in phase and velocity experience a forward accelerating force (see “Continuous acceleration” image). Just as the imparted force begins to change sign, the particles enter the next accelerating cycle, leading to continuous energy gain.
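In quantitative terms, this phase-velocity matching fixes the structure geometry. For a drive wavelength λ and particle velocity βc, the structure period Λ (for the first spatial harmonic) and the energy gained over a length L at average gradient G and synchronous phase φ are, schematically:

```latex
\Lambda = \beta\,\lambda, \qquad \Delta E = e\,G\,L\,\cos\varphi
```

This is why sub-relativistic structures must be chirped: Λ grows as the electrons speed up.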
In 2013, two early experiments attracted international attention by demonstrating the acceleration of electrons using structured dielectric devices. Peter Hommelhoff’s group in Germany accelerated 28 keV electrons inside a modified electron microscope using a single-sided glass grating (see “Evolution” image, left panel). In parallel, at SLAC, the groups of Robert Byer and Joel England accelerated relativistic 60 MeV electrons using a dual-sided grating structure, achieving an acceleration gradient of 310 MeV/m and 120 keV of energy gain (see “Evolution” image, middle panel).
Teaming up
Encouraged by the experimental demonstration of accelerating gradients of hundreds of MeV/m, and the power efficiency and compactness of modern solid-state fibre lasers, in 2015 the Gordon and Betty Moore Foundation funded an international collaboration of six universities, three government laboratories and two industry partners to form the Accelerator on a Chip International Program (ACHIP). The central goal is to demonstrate a compact tabletop accelerator based on DLA technology. ACHIP has since developed “shoebox” accelerators on both sides of the Atlantic and used them to demonstrate nanophotonics-based particle control, staging, bunching, focusing and full on-chip electron acceleration by laser-driven microchip devices.
Silicon’s compatibility with established nanofabrication processes makes it convenient, but reaching gradients of GeV/m requires materials with higher damage thresholds such as fused silica or diamond. In 2018, ACHIP research at UCLA accelerated electrons from a conventional microwave linac in a dual-sided fused silica structure powered by ultrashort (45 fs) pulses of 800 nm wavelength laser light. The result was an average energy gain of 850 MeV/m and accelerating fields up to 1.8 GV/m – more than double the prior world best in a DLA, and still a world record.
Since DLA structures are non-resonant, the interaction time and energy gain of the particles are limited by the duration of the laser pulse. However, by tilting the laser’s pulse front, the interaction time can be arbitrarily increased. In a separate experiment at UCLA, using a laser pulse tilted by 45°, the interaction distance was increased to more than 700 µm – or 877 structure periods – with an energy gain of 0.315 MeV. The UCLA group has further extended this approach using a spatial light modulator to “imprint” the phase information onto the laser pulse, achieving more than 3 mm of interaction at 800 nm, or 3761 structure periods.
Under ACHIP, the structure design has evolved in several directions, from single-sided and double-sided gratings etched onto substrates to more recent designs with colonnades of free-standing silicon pillars forming the sides of the accelerating channel, as originally proposed by Robert Palmer some 30 years earlier. At present, these dual-pillar structures (see “Evolution” image, right panel) have proven to be the optimal trade-off between cleanroom fabrication complexity and experimental technicalities. However, due to the lower damage threshold of silicon as compared with fused silica, researchers have yet to demonstrate gradients above 350 MeV/m in silicon-based devices.
With the dual-pillar colonnade chosen as the fundamental nanophotonic building block, research has turned to making DLAs into viable accelerators with much longer acceleration lengths. To achieve this, the beam must be controlled and manipulated in space and time; otherwise, electrons quickly diverge inside the narrow acceleration channel and are lost on impact with the accelerating structure. The ACHIP collaboration has made substantial progress here in recent years.
Focusing on nanophotonics
In conventional accelerators, quadrupole magnets focus electron beams in near-perfect analogy to the way concave and convex lens arrays transport beams of light in optics. In laser-driven nanostructures, it is instead necessary to harness the intrinsic focusing forces already present in the accelerating field itself.
On-chip accelerators promise powerful, high-energy and high-repetition-rate particle sources that are accessible to academic laboratories
In 2021, the Hommelhoff group guided an electron pulse through a 200 nm-wide and 80 µm-long structure based on a theoretical lattice designed by ACHIP colleagues at TU Darmstadt three years earlier. The lattice’s alternating-phase focusing (APF) periodically exchanges an electron bunch’s phase-space volume between the transverse dimension across the narrow width of the accelerating channel and the longitudinal dimension along the propagation direction of the electron pulse. In principle this technique could allow electrons to be guided through arbitrarily long structures.
Guiding is achieved by adding gaps between repeating sets of dual-pillar building blocks (see “Beam control” image). Combined guiding and acceleration has been demonstrated within the past year. To achieve this, we select a design gradient and optimise the position of each pillar pair relative to the expected electron energy at that position in the structure. Initial electron energies are up to 30 keV in the Hommelhoff group, supplied by electron microscopes, and from 60 to 90 keV in the Byer group, using laser-assisted field emission from silicon nanotips. During acceleration, the electrons’ velocities change dramatically, from 0.3 to 0.7 times the speed of light or higher, requiring the periodicity of the structure to change by tens of nanometres to match the velocity of the accelerating wave to the speed of the particles.
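To see why this taper is needed, the synchronous period can be computed directly from the electron kinetic energy via $\Lambda = \beta\lambda$. The sketch below is illustrative only; the 2 µm drive wavelength is an assumed round number, not a quoted experimental parameter:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def beta(kinetic_kev: float) -> float:
    """Electron speed in units of c, from relativistic kinematics."""
    gamma = 1.0 + kinetic_kev / ME_C2_KEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

LASER_NM = 2000.0  # assumed ~2 um drive wavelength (illustration only)

for t_kev in (30, 60, 90):
    b = beta(t_kev)
    print(f"{t_kev} keV: beta = {b:.3f}, period = {b * LASER_NM:.0f} nm")
```

Across this energy range the synchronous period grows from roughly 660 nm to over 1000 nm, so each successive cell of the structure must be made slightly longer than the last.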
Although focusing in the narrow dimension of the channel is the most critical requirement, an extension of this method has been proposed to focus beams in the vertical dimension, out of the plane of the chip, by varying the geometry of the pillars along that dimension. Without it, the natural divergence of the beam in the vertical direction eventually becomes dominant. This approach awaits experimental realisation.
Acceleration gradients can be improved by optimising material choice, pillar dimensions, peak optical field strength and the duration of the laser pulses. In recent demonstrations, both the Byer and Hommelhoff groups have kept pillar dimensions constant to ease difficulties in uniformly etching the structures during nanofabrication. The complete structure is then a series of APF cells with tapered cell lengths and tapered dual-pillar periodicity. The combination of tapers accommodates both the changing size of the electron beam and the phase matching required due to the increasing electron energy.
In these proof-of-principle experiments, the Hommelhoff group designed a nanophotonic dielectric laser accelerator for an injection energy of 28.4 keV and an average acceleration gradient of at least 22.7 MeV/m, demonstrating a 43% energy increase over a 500 µm-long structure. The Byer group recently demonstrated the acceleration of a 96 keV beam at average gradients of 35 to 50 MeV/m, reaching a 25% energy increase over 708 µm. The APF periods were in the range of tens of microns and were tapered to follow the energy-gain design curve. The beams were not bunched, and by design only 4% of the electrons were captured and accelerated.
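The quoted gradients follow directly from the energy gains and structure lengths; a minimal check of the first set of numbers:

```python
# 43% energy increase on a 28.4 keV beam over a 500 um-long structure.
gain_mev = 0.43 * 28.4e-3      # energy gain, ~0.0122 MeV
gradient = gain_mev / 500e-6   # average gradient in MeV/m
print(f"average gradient ~ {gradient:.1f} MeV/m")  # ~24 MeV/m, above 22.7 MeV/m
```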
One final experimental point has important implications for the future use of DLAs as compact tabletop tools for ultrafast science. Upon interaction with the DLA, electron pulses have been observed to form trains of evenly spaced sub-wavelength attosecond-scale bunches. This effect was shown experimentally by both groups in 2019, with electron bunches measured down to 270 attoseconds, or roughly 4% of the optical cycle.
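The quoted fraction pins down the underlying time scales (a back-of-the-envelope check, assuming the 4% refers to the full optical period):

```python
# 270 as at ~4% of an optical cycle implies a ~6.8 fs period,
# corresponding to a drive wavelength of roughly 2 um.
C = 2.998e8               # speed of light in m/s
cycle_s = 270e-18 / 0.04  # optical period in seconds
print(f"cycle ~ {cycle_s * 1e15:.1f} fs, wavelength ~ {C * cycle_s * 1e6:.1f} um")
```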
From demonstration to application
To date, researchers have demonstrated high-gradient (GeV/m) acceleration, compatible nanotip electron sources, laser-driven focusing, interaction lengths up to several millimetres, the staging of multiple structures, and attosecond-level control and manipulation of electrons in nanophotonic accelerators. The most recent experiments combine these techniques, capturing an accelerated electron bunch with net acceleration and precise control of electron dynamics for the first time.
These milestone experiments demonstrate the viability of the nanophotonic dielectric electron accelerator as a scalable technology that can be extended to arbitrarily long structures and ever higher energy gains. But for most applications, beam currents need to increase.
A compelling idea proposes to “copy and paste” the accelerator design in the cleanroom and make a series of parallel accelerating channels on one chip. Another option is to increase the repetition rate of the driving laser by orders of magnitude to produce more electron pulses per second. Optimising the electron sources used by DLAs would also allow for more electrons per pulse, and parallel arrays of emitters on multi-channel devices promise tremendous advantages. Eventually, active nanophotonics can be employed to integrate the laser and electron sources on a single chip.
Once laser and electron sources are combined, we expect on-chip accelerators to become ubiquitous devices with wide-ranging and unexpected applications, much like the laser itself. Future applications will range from medical treatment tools to electron probes for ultrafast science. According to International Atomic Energy Agency statistics, 13% of major accelerator facilities around the world power light sources. On-chip accelerators may follow a similar path.
Illuminating concepts
A concept has been proposed for a dielectric laser-driven undulator (DLU) which uses laser light to generate deflecting forces that wiggle the electrons so that they emit coherent light. Combining a DLA and a DLU could take advantage of the unique time structure of DLA electrons to produce ultrafast pulses of coherent radiation (see “Compact light source” image). Such compact new light sources – small enough to be accessible to individual universities – could generate extremely short flashes of light in ultraviolet or even X-ray wavelength ranges, enabling tabletop instruments for the study of material dynamics on ultrafast time scales. Pulse trains of attosecond electron bunches generated by a DLA could provide excellent probes of transient molecular electronic structure.
The generation of intriguing quantum states of light might also be possible with nanophotonic devices. This quantum light results from shaping electron wavepackets inside the accelerator and making them radiate, perhaps even leading to on-chip quantum-communication light sources.
In the realm of medicine, an ultracompact self-contained multi-MeV electron source based on integrated photonic particle accelerators could enable minimally invasive cancer treatments with improved dose control.
One day, instruments relying on high-energy electrons produced by DLA technology may bring the science of large facilities into academic-scale laboratories, making novel scientific endeavours accessible to researchers across many disciplines and minimally invasive medical treatments available to those in need. These visionary applications may take decades to be fully realised, but we should expect developments to continue to be rapid. The biggest challenges will be increasing beam power and transporting beams across greater energy gains; these must be addressed to meet the stringent beam-quality and machine requirements of longer-term, higher-energy applications.
In 1968, deep underground in the Homestake gold mine in South Dakota, Ray Davis Jr. observed too few electron neutrinos emerging from the Sun. The reason, we now know, is that many had changed flavour in flight, thanks to tiny unforeseen masses.
At the same time, Steven Weinberg and Abdus Salam were carrying out major construction work on what would become the Standard Model of particle physics, building the Higgs mechanism into Sheldon Glashow’s unification of the electromagnetic and weak interactions. The Standard Model is still bulletproof today, with one proven exception: the nonzero neutrino masses for which Davis’s observations were in hindsight the first experimental evidence.
Today, neutrinos are still one of the most promising windows into physics beyond the Standard Model, with the potential to impact many open questions in fundamental science (CERN Courier May/June 2024 p29). One of the most ambitious experiments to study them is currently taking shape in the same gold mine as Davis’s experiment more than half a century before.
Deep underground
In February this year, the international Deep Underground Neutrino Experiment (DUNE) completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility (SURF) in the Homestake mine. Some 800,000 tonnes of rock were excavated over two years to reveal an underground campus the size of eight soccer fields, ready to house four 17,500-tonne liquid-argon time-projection chambers (LArTPCs). As part of a diverse scientific programme, the new experiment will tightly constrain the working model of three massive neutrinos, and possibly even disprove it.
DUNE will measure the disappearance of muon neutrinos and the appearance of electron neutrinos over 1300 km and a broad spectrum of energies. Given the long journey of its accelerator-produced neutrinos from the Long Baseline Neutrino Facility (LBNF) at Fermilab in Illinois to SURF in South Dakota, DUNE will be uniquely sensitive to asymmetries between the appearance of electron neutrinos and antineutrinos. One predicted asymmetry will be caused by the presence of electrons and the absence of positrons in the Earth’s crust. This asymmetry will probe neutrino mass ordering – the still unknown ordering of narrow and broad mass splittings between the three tiny neutrino masses. In its first phase of operation, DUNE will definitively establish the neutrino mass ordering regardless of other parameters.
If CP symmetry is violated, DUNE will then observe a second asymmetry between electron neutrinos and antineutrinos, which by experimental design is not degenerate with the first asymmetry. Potentially the first evidence for CP violation by leptons, this measurement will be an important experimental input to the fundamental question of how a matter–antimatter asymmetry developed in the early universe.
If CP violation is near maximal, DUNE will observe it at 3σ (99.7% confidence) in its first phase. In DUNE and LBNF’s recently reconceptualised second phase, which was strongly endorsed by the US Department of Energy’s Particle Physics Project Prioritization Panel (P5) in December (CERN Courier January/February 2024 p7), 3σ sensitivity to CP violation will be extended to more than 75% of possible values of δCP, the complex phase that parameterises this effect in the three-massive-neutrino paradigm.
Combining DUNE’s measurements with those of its fellow next-generation experiments JUNO and Hyper-Kamiokande will test the three-flavour paradigm itself. In this paradigm, the three massive neutrinos are rotated into the mixtures that interact with charged leptons by the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which features three mixing angles in addition to δCP.
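For reference, in the standard parameterisation the PMNS matrix factorises into three rotations, with $\delta_{CP}$ entering alongside $\theta_{13}$ (writing $c_{ij} = \cos\theta_{ij}$ and $s_{ij} = \sin\theta_{ij}$):

$$U_{\mathrm{PMNS}} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta_{CP}} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta_{CP}} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$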
As well as promising world-leading resolution on the PMNS angle θ₂₃, DUNE’s measurements of θ₁₃ and the Δm²₃₂ mass splitting will be different from and complementary to those of JUNO in ways that could be sensitive to new physics. JUNO, which is currently under construction in China, will operate in the vicinity of a flux of lower-energy electron antineutrinos from nuclear reactors. DUNE and Hyper-Kamiokande, which is currently under construction in Japan, will both study accelerator-produced sources of muon neutrinos and antineutrinos, though using radically different baselines, energy spectra and detector designs.
Innovative and impressive
DUNE’s detector technology is innovative and impressive, promising millimetre-scale precision in imaging the interactions of neutrinos from accelerator and astrophysical sources (see “Millimetre precision” image). The argon target provides unique sensitivity to low-energy electron neutrinos from supernova bursts, while the detectors’ imaging capabilities will be pivotal when searching for beyond-the-Standard-Model physics such as dark matter, sterile-neutrino mixing and non-standard neutrino interactions.
First proposed by Nobel laureate Carlo Rubbia in 1977, LArTPC technology demonstrated its effectiveness for neutrino detection in the ICARUS T600 detector at Gran Sasso more than a decade ago, and more recently in the MicroBooNE experiment at Fermilab. Fermilab’s short-baseline neutrino programme now includes ICARUS and the new Short-Baseline Near Detector, which is due to begin taking neutrino data this year.
The first phase of DUNE will construct one LArTPC in each of the two detector caverns, with the second phase adding an additional detector in each. A central utility cavern between the north and south caverns will house infrastructure to support the operation of the detectors.
Following excavation by Thyssen Mining, final concrete work has been completed in all the underground caverns and drifts, and the installation of power, lighting, plumbing, heating, ventilation and air conditioning is underway. Some 90% of the subcontracts for the installation of the civil infrastructure have already been awarded, and LBNF and DUNE’s economic impact in Illinois and South Dakota is estimated to be $4.3 billion across fiscal years 2022 to 2030.
Once the caverns are prepared, two large membrane cryostats will be installed to house the detectors and their liquid argon. Shipment of material for the first of the two cryostats being provided by CERN is underway, with the first of approximately 2000 components having arrived at SURF in January; the remainder of the steel for the first cryostat was due to have been shipped from its port in Spain by the end of May. The manufacture of the second cryostat by Horta Coslada is ongoing (see “Cryostat creation” image).
Procedures for lifting and manipulating the components will be tested in South Dakota in spring 2025, allowing the collaboration to ensure that it can safely and efficiently handle bulky components with challenging weight distributions in an environment where clearances can be as little as 3 inches (about 8 cm) on either side. Lowering the detector components down the Homestake mine’s Ross shaft will take four months.
Two configurations
The two far-detector modules needed for phase one of the DUNE experiment will use the same LArTPC technology, though with different anode and high-voltage configurations. A “horizontal-drift” far detector will use 150 anode plane assemblies (APAs), each measuring 6 m by 2.3 m. Each APA will be wound with 4000 copper-beryllium wires, 150 μm in diameter, to collect ionisation signals from neutrino interactions with the argon.
A second “vertical-drift” far detector will instead use charge-readout planes (CRPs) – printed circuit boards perforated with an array of holes to capture the ionisation signals. Here, a horizontal cathode plane will divide the detector into two vertically stacked volumes. This design yields a slightly larger instrumented volume and is highly modular, as well as simpler and more cost-effective to construct and install. A small amount of xenon doping will significantly enhance photon detection, allowing more light to be collected beyond a drift length of 4 m.
The construction of the horizontal-drift APAs is well underway at STFC Daresbury Laboratory in the UK and at the University of Chicago in the US. Each APA takes several weeks to produce, motivating the parallelisation of production across five machines in Daresbury and one in Chicago. Each machine automates the winding of 24 km of wire onto each APA (see “Wind it up” image). Technicians then solder thousands of joints and use a laser system to ensure the wires are all wound to the required tension.
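Those figures are self-consistent: a minimal check, assuming each of the 4000 wires spans the full 6 m length of an APA:

```python
# 4000 wires, each running the ~6 m length of an APA.
wires, wire_length_m = 4000, 6.0
print(f"total wire per APA ~ {wires * wire_length_m / 1e3:.0f} km")  # ~24 km
```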
Two large ProtoDUNE detectors at CERN are an essential part of developing and validating DUNE’s detector design. Four APAs are currently installed in a horizontal-drift prototype that will take data this summer as a final validation of the design of the full detector. A vertical-drift prototype (see “Vertical drift” image) will then validate the production of CRP anodes and optimise their electronics. A full-scale test of vertical-drift-detector installation will take place at CERN later this year.
Phase transition
Alongside the deployment of two additional far-detector modules, phase two of the DUNE experiment will include an increase in beam power beyond 2 MW and the deployment of a more capable near detector (MCND) featuring a magnetised high-pressure gaseous-argon TPC. These enhancements are aimed at increased statistics, lower energy thresholds, better energy resolution and lower intrinsic backgrounds. They are key to DUNE’s measurement of the parameters governing long-baseline neutrino oscillations, and will expand the experiment’s physics scope to include searches for anomalous tau-neutrino appearance, long-lived particles, low-mass dark matter and solar neutrinos.
Phase-one vertical-drift technology is the starting point for phase-two far-detector R&D – a global programme under ECFA in Europe and CPAD in the US that seeks to reduce costs and improve performance. Charge-readout R&D includes improving charge-readout strips, 3D pixel readout and 3D readout using high-performance fast cameras. Light-readout R&D seeks to maximise light coverage by integrating bare silicon photomultipliers and photoconductors into the detector’s field-cage structure.
A water-based liquid scintillator module capable of separately measuring scintillation and Cherenkov light is currently being explored as a possible alternative technology for the fourth “module of opportunity”. This would require modifications to the near detector to include corresponding non-argon targets.
Intense work
At Fermilab, site preparation work is already underway for LBNF, and construction will begin in 2025. The project will produce the world’s most intense beam of neutrinos. Its wide-band beam will cover more than one oscillation period, allowing unique access to the shape of the oscillation pattern in a long-baseline accelerator-neutrino experiment.
LBNF will need modest upgrades to the beamline to handle the 2 MW beam power from the upgrade to the Fermilab accelerator complex, which was recently endorsed by P5. The bigger challenge to the facility will be the proton-target upgrades needed for operation at this beam power. R&D is now taking place at Fermilab and at the Rutherford Appleton Laboratory in the UK, where DUNE’s phase-one 1.2 MW target is being designed and built.
The next generation of big neutrino experiments promises to bring new insights into the nature of our universe
DUNE highlights the international and collaborative nature of modern particle physics, with the collaboration boasting more than 1400 scientists and engineers from 209 institutions in 37 countries. A milestone was achieved late last year when the international community came together to sign the first major multi-institutional memorandum of understanding with the US Department of Energy, affirming commitments to the construction of detector components for DUNE and pushing the project to its next stage. US contributions are expected to cover roughly half of what is needed for the far detectors and the MCND, with the international community contributing the other half, including the cryostat for the third far detector.
DUNE is now accelerating into its construction phase. Data taking is due to start towards the end of this decade, with the goal of having the first far-detector module operational before the end of 2028.
The next generation of big neutrino experiments promises to bring new insights into the nature of our universe – whether it is another step towards understanding the preponderance of matter, the nature of the supernovae explosions that produced the stardust of which we are all made, or even possible signatures of dark matter… or something wholly unexpected!