Topics

Armin Hermann 1933–2024

Armin Hermann

Within CERN circles, Armin Hermann is mainly known as one of the co-editors of the authoritative History of CERN volumes covering the period from the beginnings of the Organization up to 1965. But he did so much more in the field of the history of science.

Armin Hermann was born on 17 June 1933 in Vernon, British Columbia, Canada and grew up in Upper Bavaria in Germany. He studied physics at Ludwig Maximilian University in Munich and obtained his doctorate in theoretical physics in 1963 with a dissertation on the “Mott effect for elementary particles and nuclei of electromagnetic structure”. He worked for a few years at DESY and performed synchrotron-oscillation calculations with an IBM 650 computer. Subsequently, Hermann decided to change his focus from physics proper to its history, which had preoccupied him since his student days.

Hermann was the first to occupy a chair in the history of science and technology at the University of Stuttgart – a chair situated not in a science or mathematics faculty but among general historians. During his 30-year tenure, he authored important monographs on quantum theory, quantum mechanics and elementary particle theory. He wrote books on the history of atomic physics titled Weltreich der Physik: Von Galilei bis Heisenberg, The New Physics: The Route into the Atomic Age, and How Science Lost its Innocence, alongside numerous biographies (including Planck, Heisenberg, Einstein and Wirtz) and historical studies of companies, notably the German optics firm Carl Zeiss. All became very popular within the physics community.

Meanwhile at CERN, the attitude among physicists towards studies in the history of science was rather negative – the mantra was “We don’t care about history, we make history”. However, in 1980, the advisory committee for the CERN History Project examined a feasibility study conducted by Hermann and decided to establish a European study team to write the history of CERN from its early beginnings until at least 1963, with an overview of later years. The project was to be completed within five years and financed outside the CERN budget. Hermann was asked by CERN Council to assume responsibility for the project, and from 1982 to 1985 he was freed from teaching obligations in Stuttgart to conduct research at CERN. He became co-editor of the first two volumes on the history of CERN: Launching the European Organization for Nuclear Research and Building and Running the Laboratory, 1954–1965. A third volume, covering the history of CERN from the mid-1960s to the late 1970s, appeared under the editorship of John Krige in 1996.

Armin passed away in February 2024 at his home in Oberstarz near Miesbach, nestled among the alpine hills to which he had always felt attached – the main reason he declined several tempting calls to other renowned universities. His wife Steffi, his companion of many decades, was by his side to the very end. Many historians of physics, science and technology in Germany and abroad mourn the loss of this influential pioneer in the history of science.

Electroweak precision at the LHC

The Standard Model – an inconspicuous name for one of the great human inventions. It describes all known elementary particles and their interactions, except for gravity. About 19 free parameters tune its behaviour. To the best of our knowledge, they could in principle take any value, and no underlying theory yet conceived can predict their values. They include particle masses, interaction strengths, important technical numbers such as mixing angles and phases, and the vacuum strength of the Higgs field, which theorists believe has, alone among fundamental fields, permeated every cubic attometre of the universe since almost the beginning of time. Measuring these parameters is the most fundamental experimental task available to modern science.

The basic constituents of matter interact through forces which are mediated by virtual particles that ping back and forth, delivering momentum and quantum numbers. The gluon mediates the strong interaction, the photon mediates the electromagnetic interaction, and the W and Z bosons mediate the weak interaction. Although the electromagnetic and weak forces operate very differently to each other in everyday life, in the Standard Model they are two manifestations of the broken electroweak interaction – an interaction that broke when the Higgs field switched on throughout the universe, giving mass to matter particles, the W and Z bosons, and the Higgs boson itself, via the Brout–Englert–Higgs (BEH) mechanism. The electroweak theory has been extraordinarily successful in describing experimental results, but it remains mysterious – and the BEH mechanism is the origin of some of those free parameters. The best way to test the electroweak model is to over-constrain its free parameters using precision measurements and try to find a breaking point.

An artist’s visualisation of a proton

Ever since the late 1960s, when Steven Weinberg, Sheldon Glashow and Abdus Salam unified the electromagnetic and weak forces using the BEH mechanism, CERN has had an intimate experimental relationship with the electroweak theory. In 1973 the Z boson was indirectly discovered by observing “neutral current” events in the Gargamelle bubble chamber, using a neutrino beam from the Proton Synchrotron. The W boson was discovered in 1983 at the Super Proton Synchrotron collider, followed soon after by the direct observation of the Z boson at the same machine. The 1990s witnessed a decade of exquisite electroweak precision measurements at the Large Electron–Positron (LEP) collider at CERN and the Stanford Linear Collider (SLC) at SLAC National Accelerator Laboratory in the US, before the crown jewel of the electroweak sector, the Higgs boson, was discovered by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC) in 2012 – a remarkable success that delivered the last-observed, and arguably most mysterious, missing piece of the Standard Model.

What was not expected was that the ATLAS, CMS and LHCb experiments at the LHC would go on to make electroweak measurements that rival in precision those made at lepton colliders.

Discovery or precision?

Studying the electroweak interaction requires a supply of W and Z bosons. For that, you need a collider. Electrons and positrons are ideally suited for the task as they interact exclusively via the electroweak interaction. By precisely tuning the energy of electron–positron collisions, experiments at LEP and the SLC tested the electroweak sector with an unprecedented 0.1% accuracy at the energy scale of the Z-boson mass (mZ).

The ATLAS detector

Hadron colliders like the LHC have different strengths and weaknesses. Equipped to copiously produce all known Standard Model particles – and perhaps also hypothetical new ones – they are the ultimate instruments for probing the high-energy frontier of our understanding of the microscopic world. The protons they collide are not elementary, but a haze of constituent quarks and gluons that bubble and fizz with quantum fluctuations. Each constituent “parton” carries an unpredictable fraction of the proton’s energy. This injects unavoidable uncertainty into studies of hadron collisions that physicists attempt to encode in probabilistic parton distribution functions. What’s more, when a pair of partons from the two opposing protons interact in an interesting way, the result is overlaid by numerous background particles originating from the remaining partons that were untouched by the original collision – a complexity that is exacerbated by the difficult-to-model strong force which governs the behaviour of quarks and gluons. As a result, hadron colliders have a reputation for being discovery machines with limited precision.
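
The statement that each parton carries an unpredictable fraction of the proton’s energy has a standard mathematical expression. In the factorisation picture (quoted here in a schematic leading-order form, not the full machinery used in practice), a hadron-collider cross section is obtained by weighting the calculable parton-level cross section σ̂ with the parton distribution functions f, summed over parton flavours a and b and integrated over their momentum fractions x₁ and x₂ at a resolution scale Q²:

```latex
\[
\sigma_{pp \to X} \;=\; \sum_{a,b} \int_0^1 \mathrm{d}x_1\, \mathrm{d}x_2\;
   f_a(x_1, Q^2)\, f_b(x_2, Q^2)\;
   \hat{\sigma}_{ab \to X}(x_1 x_2 s,\, Q^2) ,
\]
```

where s is the squared proton–proton collision energy. Uncertainty in the functions f propagates directly into every prediction built on them.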

The LHCb detector

The LHC has collided protons at the energy frontier since 2010, delivering far more collisions than comparable previous machines such as the Tevatron at Fermilab in the US. This has enabled a comprehensive search and measurement programme. Following the discovery of the Higgs boson in 2012, measurements have so far verified its place in the electroweak sector of the Standard Model, although the relative precisions of many measurements are currently far lower than those achieved for the W and Z bosons at LEP. But in defiance of expectations, the capabilities of the LHC experiments and the ingenuity of analysts have also enabled many of the world’s most precise measurements of the electroweak interaction. Here, we highlight five.

1. Producing W and Z bosons

When two streams of objects meet, how many strike each other depends on their cross-sectional area. Though quarks and other partons are thought to be fundamental objects with zero extent, particle physicists borrow this logic for particle beams, and extend it by subdividing the metaphorical cross section according to the resulting interactions. The range of processes used to study W and Z bosons at the LHC spans a remarkable eight orders of magnitude in cross section.
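
In practice the metaphor is made quantitative through the collider’s integrated luminosity: the expected number of events for any given process is its cross section multiplied by the integrated luminosity (before detector acceptance and efficiency are folded in), so an eight-orders-of-magnitude span in cross section is also an eight-orders-of-magnitude span in how often each process occurs. Schematically:

```latex
\[
N_{\mathrm{events}} \;=\; \sigma_{\mathrm{process}} \times \int \mathcal{L}\,\mathrm{d}t ,
\qquad
\sigma_{\mathrm{total}} \;=\; \sum_i \sigma_i .
\]
```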

WW, WZ and ZZ cross sections as a function of centre-of-mass energy

The most common interaction is the production of single W and Z bosons through the annihilation of a quark and an antiquark in the colliding protons. Measurements with single W and Z boson events have now reached a precision well below 1% thanks to the excellent calibration of the detector performance. They are a prodigious tool for testing and improving the modelling of the underlying process, for example using parton distribution functions.

The second most common interaction is the simultaneous production of two bosons. Measurements of “diboson” processes now routinely reach a precision better than 5%. Since the start of LHC operation, the accelerator has run at several collision energies, allowing the experiments to map diboson cross sections as a function of energy. Measurements of the cross sections for creating WW, WZ and ZZ pairs exhibit remarkable agreement with state-of-the-art Standard Model predictions (see “Diboson production” figure).

The large amount of collected data at the LHC has recently allowed us to move the frontier to the observation of extremely infrequent “triboson” processes with three W or Z bosons, or photons, produced simultaneously – the first step towards confirming the existence of the quartic self-interaction between the electroweak bosons.

2. The weak mixing angle

The Higgs potential is famously thought to resemble a Mexican hat. The Higgs field that permeates space could in principle exist with a strength corresponding to any point on its surface. Theorists believe it settled somewhere in the brim a picosecond or so after the Big Bang, breaking the perfect symmetry of the hat’s apex, where its value was zero. This switched the Higgs field on throughout the universe – and the massless gauge bosons of the unified electroweak theory mixed to form the photon and W and Z boson mass eigenstates that mediate the broken electroweak interaction today. The weak mixing angle θW is the free parameter of the Standard Model which defines that mixing.

Measurements of the effective weak mixing angle

The θW angle can be studied using a beautifully simple interaction: the annihilation of a quark and its antiquark to create an electron and a positron or a muon and an antimuon. When the pair has an invariant mass in the vicinity of mZ, there is a small preference for the negatively charged lepton to be produced in the same direction as the initial quark. This arises due to quantum interference between the Z boson’s vector and axial-vector couplings, whose relative strengths depend on θW.
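
The size of that preference is usually quoted as a forward–backward asymmetry, and the standard Z-pole expressions (leading-order textbook formulas, not the full analysis machinery used by the experiments) make the dependence on the mixing angle explicit:

```latex
\[
A_{\mathrm{FB}} \;\simeq\; \tfrac{3}{4}\, A_q A_\ell ,
\qquad
A_f \;=\; \frac{2\, g_V^f\, g_A^f}{(g_V^f)^2 + (g_A^f)^2},
\qquad
\frac{g_V^f}{g_A^f} \;=\; 1 - 4\,|Q_f|\,\sin^2\theta_{\mathrm{eff}}^{f} ,
\]
```

where Q_f is the electric charge of the fermion and θ_eff is the effective mixing angle discussed below.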

The unique challenge at a proton–proton collider like the LHC is that the initial directions of the quark and the antiquark can only be inferred using our limited knowledge of parton distribution functions. These systematic uncertainties currently dominate the total uncertainty, although they can be reduced somewhat by using information on lepton pairs produced away from the Z resonance. The CMS and LHCb collaborations have recently released new measurements consistent with the Standard Model prediction with a precision comparable to that of the LEP and SLC experiments (see “Weak mixing angle” figure).

Quantum physics effects play an interesting role here. In practice, it is not possible to experimentally isolate “tree level” properties like θW, which describe the simplest interactions that can be drawn on a Feynman diagram. Measurements are in fact sensitive to the effective weak mixing angle, which includes the effect of quantum interference from higher-order diagrams.

A crucial prediction of electroweak theory is that the masses of the W and Z bosons are, at leading order, related by the electroweak mixing angle: sin²θW = 1 − mW²/mZ², where mW and mZ are the masses of the W and Z bosons. This relationship is modified by quantum loops involving the Higgs boson, the top quark and possibly new particles. Measuring the parameters of the electroweak theory precisely, therefore, allows us to test for any gaps in our understanding of nature.

Surprisingly, combining this relationship with the mZ measurement from LEP and the CMS measurement of θW also allows a competitive measurement of mW. A measurement of sin²θW with a precision of 0.0003 translates into a prediction of mW with 15 MeV precision, which is comparable to the best direct measurements.
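
The arithmetic behind that statement is simple error propagation through the leading-order mass relation. A minimal sketch with illustrative inputs (an on-shell value of sin²θW implied by the measured masses; the published analyses include the full set of radiative corrections):

```python
import math

m_Z = 91.1876          # Z-boson mass in GeV (world-average value)
sin2_theta_W = 0.2232  # illustrative on-shell value of sin^2(theta_W)
delta_sin2 = 0.0003    # assumed precision of the sin^2(theta_W) measurement

# Leading-order relation: m_W = m_Z * sqrt(1 - sin^2(theta_W))
m_W = m_Z * math.sqrt(1.0 - sin2_theta_W)

# Error propagation: |dm_W / d(sin^2)| = m_Z / (2 * sqrt(1 - sin^2))
delta_m_W = m_Z / (2.0 * math.sqrt(1.0 - sin2_theta_W)) * delta_sin2

print(f"m_W  = {m_W:.2f} GeV")               # ~80.4 GeV
print(f"dm_W = {delta_m_W * 1000:.1f} MeV")  # ~15 MeV, as quoted above
```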

3. The mass and width of the W boson

Precisely measuring the mass of the W boson is of paramount importance to efforts to further constrain the relationships between the parameters of the electroweak theory, and probe possible beyond-the-Standard Model contributions. Particle lifetimes also offer a sensitive test of the electroweak theory. Because of their large masses and numerous decay channels, the W and Z bosons have mean lifetimes of less than 10⁻²⁴ s. Though this is an impossibly brief time interval to measure directly, Heisenberg’s uncertainty principle smudges a particle’s observed mass by a certain “width” when it is produced in a collider. This width can be measured by fitting the mass distribution of many virtual particles. It is reciprocally related to the particle’s lifetime.
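
The reciprocal relation is just the uncertainty principle, Γτ ≈ ħ. As an illustrative worked example, taking a width of roughly 2.5 GeV (close to the measured Z-boson value):

```latex
\[
\tau \;=\; \frac{\hbar}{\Gamma}
  \;\approx\; \frac{6.6 \times 10^{-25}\ \mathrm{GeV\,s}}{2.5\ \mathrm{GeV}}
  \;\approx\; 2.6 \times 10^{-25}\ \mathrm{s} ,
\]
```

comfortably below the 10⁻²⁴ s quoted above.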

Measurement of the W boson’s mass and width

While lepton-collider measurements of the properties of the Z boson were extensive and achieved remarkable precision, the same is not quite true for the W boson. The mass of the Z boson was measured with a precision of 0.002%, but the mass of the W boson was measured with a precision of only 0.04% – a factor 20 worse. The reason is that while single Z bosons were copiously produced at LEP and SLC, W bosons could not be produced singly, due to charge conservation. W⁺W⁻ pairs were produced, though only at low rates at LEP energies.

In contrast to LEP, hadron colliders produce large quantities of single W bosons through quark–antiquark annihilation. The LHC produces more single W bosons in a minute than all the W-boson pairs produced in the entire lifetime of LEP. Even when only considering decays to electrons or muons and their respective neutrinos – the most precise measurements – the LHC experiments have recorded billions of W-boson events.

But there are obstacles to overcome. The neutrino in the final state escapes undetected. Its transverse momentum with respect to the beam direction can only be measured indirectly, by measuring all other products of the collision – a major experimental challenge in an environment with not just one, but up to 60 simultaneous proton–proton collisions. Its longitudinal momentum cannot be measured at all. And as the W bosons are not produced at rest, extensive theoretical calculations and ancillary measurements are needed to model their momenta, incurring uncertainties from parton distribution functions.
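
One standard workaround is worth spelling out: because the neutrino’s longitudinal momentum is lost, W-boson analyses rely on purely transverse quantities, most notably the transverse mass built from the charged lepton’s transverse momentum and the missing transverse momentum attributed to the neutrino (a textbook definition, not specific to any one analysis):

```latex
\[
m_T \;=\; \sqrt{\,2\, p_T^{\ell}\, p_T^{\nu}\,\bigl(1 - \cos\Delta\phi_{\ell\nu}\bigr)\,} .
\]
```

Its distribution falls sharply near mW, which is what makes it a sensitive observable for the mass fits.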

Despite these challenges, the latest measurement of the W boson’s mass by the ATLAS collaboration achieved a precision of roughly 0.02% (see “Mass and width” figure, top). The LHCb collaboration also recently produced its first measurement of the W-boson mass using W bosons produced close to the beam line with a precision at the 0.04% level, dominated for now by the size of the data sample. Owing to the complementary detector coverage of the LHCb experiment with respect to the ATLAS and CMS experiments, several uncertainties are reduced when these measurements are combined.

The Tevatron experiments CDF and D0 also made precise W-boson measurements using proton–antiproton collisions at a lower centre-of-mass energy. The single most precise mass measurement, at the 0.01% level, comes from CDF. It is in stark disagreement with the Standard Model prediction and disagrees with the combination of other measurements.

A highly anticipated measurement by the CMS collaboration may soon weigh in decisively in favour of either the CDF measurement or the Standard Model. The CMS measurement will combine innovative analysis techniques using the Z boson with a larger 13 TeV data set than the 7 TeV data used by the recent ATLAS measurement, enabling more powerful validation samples and thereby greater power to reduce systematic uncertainties.

Measurements of the W boson’s width are not yet sufficiently precise to constrain the Standard Model significantly, though the strongest constraint so far comes from the ATLAS collaboration (see “Mass and width” figure, bottom). Further measurements are a promising avenue to test the Standard Model. If the W boson decays into any hitherto undiscovered particles, its lifetime should be shorter than predicted, and its width greater, potentially indicating the presence of new physics.

4. Couplings of the W boson to leptons

Within the Standard Model, the W and Z bosons have equal couplings to leptons of each of the three generations – a property known as lepton flavour universality (LFU). Any experimental deviation from LFU would indicate new physics.

Ratios of branching fractions for the W boson

As with the mass and width, lepton colliders’ precision was better for the Z boson than for the W boson. LEP confirmed LFU in leptonic Z-boson decays to about 0.3%. Comparing the three branching fractions of the W boson in the electron, muon and tau–lepton decay channels, the combination of the four LEP experiments reached a precision of only about 2%.

At the LHC, the large cross section for producing top quark–antiquark pairs, each decaying into a W boson and a bottom quark, offers a unique sample of W-boson pairs for high-precision studies of their decays. The resulting measurements are the most precise tests of LFU for all three possible comparisons of the coupling of the lepton flavours to the W boson (see “Couplings to leptons” figure).

Regarding the tau lepton to muon ratio, the ATLAS collaboration observed 0.992 ± 0.013 decays to a tau for every one decay to a muon. This result favours LFU and is twice as precise as the corresponding LEP result of 1.066 ± 0.025, which exhibits a deviation of 2.6 standard deviations from unity. Because of the relatively long tau lifetime, ATLAS was able to separate muons produced in the decay of tau leptons from those produced promptly by observing the tau decay length of the order of 2 mm.
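
A quick back-of-the-envelope check of the numbers quoted above (treating the uncertainties as Gaussian):

```python
# ATLAS and LEP measurements of B(W -> tau nu) / B(W -> mu nu), as quoted above
atlas_ratio, atlas_err = 0.992, 0.013
lep_ratio,   lep_err   = 1.066, 0.025

# Deviation of the LEP result from lepton-flavour universality (ratio = 1)
print(f"LEP deviation from unity: {(lep_ratio - 1.0) / lep_err:.1f} sigma")  # ~2.6 sigma

# Relative precision of the two results
print(f"ATLAS is {lep_err / atlas_err:.1f}x more precise than LEP")          # ~1.9x
```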

The best tau to electron measurement is provided by a simultaneous CMS measurement of all the leptonic and hadronic decay branching fractions of the W boson. The analysis splits the top quark–antiquark pair events based on the multiplicity and flavour of reconstructed leptons, the number of jets, and the number of jets identified as originating from the hadronisation of b quarks. All CMS ratios are consistent with the LFU hypothesis and reduce tension with the Standard Model prediction.

Regarding the muon to electron ratio, measurements have been performed by several LHC and Tevatron experiments. The observed results are consistent with LFU, with the most precise measurement from the ATLAS experiment boasting a precision better than 0.5%.

5. The invisible width of the Z boson

A groundbreaking measurement at LEP deduced how often a particle that cannot be directly observed decays to particles that cannot be detected. The particle in question is the Z boson. By scanning the energy of electron–positron collisions and measuring the broadness of the “lineshape” of the smudged bump in interactions around the mass of the Z, LEP physicists precisely measured its width. As previously noted, a particle’s width is reciprocal to its lifetime and therefore proportional to its decay rate – something that can also be measured by directly accounting for the observed rate of decays to visible particles of all types. The difference between the two numbers is due to Z-boson decays to so-called invisible particles that cannot be reconstructed in the detector. A seminal measurement concluded that exactly three species of light neutrino couple to the Z boson.
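
Schematically, the measurement works by subtraction. A minimal sketch with illustrative values close to the LEP averages (not the official combination): the total width minus everything seen as hadrons or charged leptons is the invisible width, and dividing by the Standard Model width per light neutrino species counts the species.

```python
# Illustrative values close to the LEP averages (MeV); not the official combination
gamma_Z     = 2495.2   # total Z width
gamma_had   = 1744.4   # hadronic partial width
gamma_ll    = 83.98    # partial width per charged-lepton flavour
gamma_nu_SM = 167.2    # Standard Model width per light neutrino species

# Whatever is not seen as hadrons or charged leptons is "invisible"
gamma_inv = gamma_Z - gamma_had - 3 * gamma_ll
n_nu = gamma_inv / gamma_nu_SM

print(f"Invisible width: {gamma_inv:.0f} MeV")   # ~499 MeV
print(f"Light neutrino species: {n_nu:.2f}")     # ~2.98, i.e. three
```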

Invisible width measurements

The LEP experiments also measured the invisible width of the Z boson using an ingenious method that searched for solitary “recoils”. Here, the trick was to look for the rare occasion when the colliding electron or positron emitted a photon just before creating a virtual Z boson that decayed invisibly. Such events would yield nothing more than a single photon recoiling from an otherwise invisible Z-boson decay.

The ATLAS and CMS collaborations recently performed similar measurements, requiring the invisibly decaying Z boson to be produced alongside a highly energetic jet in place of a recoil photon. By taking the ratio with equivalent recoil decays to electrons and muons, they achieved remarkable uncertainties of around 2%, equivalent to LEP, despite the much more challenging environment (see “Invisible width” figure). The results are consistent with the Standard Model’s three generations of light neutrinos.

Future outlook

Building on these achievements, the LHC experiments are now readying themselves for an even more ambitious experimental programme, which is yet to begin. Following the ongoing run of the LHC, a high-luminosity upgrade (HL-LHC) is scheduled to operate throughout the 2030s, delivering a total integrated luminosity of 3 ab⁻¹ to both ATLAS and CMS. The LHCb experiment also foresees a major upgrade to collect an integrated luminosity of more than 300 fb⁻¹ by the end of LHC operations. A tenfold data set, upgraded detectors and experimental methods, and improvements to theoretical modelling will greatly extend both experimental precision and the reach of direct and indirect searches for new physics. Unprecedented energy scales will be probed and anomalies with respect to the Standard Model may become apparent.

The Large Hadron Collider

Despite the significant challenges posed by systematic uncertainties, there are good prospects to further improve uncertainties in precision electroweak observables such as the mass of the W boson and the effective weak mixing angle, thanks to the larger angular acceptances of the new inner tracking devices currently under production by ATLAS and CMS. A possible programme of high-precision measurements in electron–proton collisions, the LHeC, could deliver crucial input to reduce uncertainties such as from parton distribution functions. The LHeC has been proposed to run concurrently with the HL-LHC by adding an electron beam to the LHC.

Beyond the HL-LHC programme, several proposals for future particle colliders have captured the imagination of the global particle-physics community – and not least the two phases of the Future Circular Collider (FCC) being studied at CERN. With a circumference three to four times greater than that of the LEP/LHC tunnel, electron–positron collisions could be delivered with very high luminosity and centre-of-mass energies from 90 to 365 GeV in the initial FCC-ee phase. The FCC-ee would facilitate an impressive leap in the precision of most electroweak observables. Projections estimate a factor of 10 improvement for Z-boson measurements and up to 100 for W-boson measurements. For the first time, the top quark could be produced in an environment where it is not colour-connected to initial hadrons, in some cases reducing uncertainties by a factor of 10 or more.

The LHC collaborations have made remarkable strides forward in probing the electroweak theory – a theory of great beauty and consequence for the universe. But its most fundamental workings are subtle and elusive. Our exploration is only just beginning.

Homing in on the Higgs self-interaction

Non-resonant and resonant processes driving di-Higgs production at the LHC

The simplest possible interaction in nature is when three identical particle lines, with the same quantum numbers, meet at a single vertex. The Higgs boson is the only known elementary particle that can exhibit such behaviour. More importantly, the strength of the coupling between three or even four Higgs bosons will reveal the first picture of the shape of the Brout–Englert–Higgs potential, responsible for the evolution of the universe in its first moments as well as possibly its fate.

Since the discovery of the Higgs boson at the LHC in 2012, the ATLAS and CMS collaborations have measured its properties and interactions with increasing precision. This includes its couplings to the gauge bosons and to third-generation fermions, its production cross sections, mass and width. So far, the boson appears as the Standard Model (SM) says it should. But the picture is still fuzzy, and many more measurements are needed. After all, the Higgs boson may interact with new particles suggested by theories beyond the SM, potentially shedding light on mysteries including the nature of the electroweak phase transition.

Line of attack

“The Higgs self-coupling is the next big thing since the Higgs discovery, and di-Higgs production is our main line of attack,” says Jana Schaarschmidt of ATLAS. “The experiments are making tremendous progress towards measuring Higgs-boson pair production at the LHC – far more than was imagined would be possible 12 years ago – thanks to improvements in analysis techniques and machine learning in particular.”

The dominant process for di-Higgs production at the LHC, gluon–gluon fusion, proceeds via a box or triangle diagram, the latter offering access to the trilinear Higgs coupling constant λ (see figure). Destructive interference between the two processes makes di-Higgs production extremely rare, with a cross section at the LHC about 1000 times smaller than that for single-Higgs production. Many different decay channels are available to ATLAS and CMS. Channels with a high probability to occur are chosen if they can also be cleanly distinguished from backgrounds. The most sensitive channels are those with one Higgs boson decaying to a b-quark pair and the other decaying either to a pair of photons, τ leptons or b quarks.

During this year’s Rencontres de Moriond, ATLAS presented new results in the HH → bbbb and HH → multileptons channels and CMS in the HH → γγττ channel. In May, ATLAS released a combination of searches for HH production in five channels using the complete LHC Run 2 dataset. The combination provides the best expected sensitivities to HH production (excluding values more than 2.4 times the SM prediction) and to the Higgs boson self-coupling. A combination of HH searches published by CMS in 2022 obtains a similar sensitivity to the di-Higgs cross-section limits. “In late 2023 we put out a preliminary result combining single-Higgs and di-Higgs analyses to constrain the Higgs self-coupling, and further work on combining all the latest analyses is ongoing,” explains Nadjieh Jafari of CMS.

Considerable improvements are expected with the LHC Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets. Based on extrapolations of early subsets of its Run 2 analyses, ATLAS expects to detect SM di-Higgs production with a significance of 3.2σ (4.6σ) with (without) systematic uncertainties by the end of the HL-LHC era. With similar progress at CMS, a di-Higgs observation is expected to be possible at the HL-LHC even with current analysis techniques, along with improved knowledge of λ. ATLAS, for example, expects to be able to constrain λ to be between 0.5 and 1.6 times the SM expectation at the level of 1σ.

Testing the foundations

Physicists are also starting to place limits on possible new-physics contributions to HH production, which can originate either from loop corrections involving new particles or from non-standard couplings between the Higgs boson and other SM particles. Several theories beyond the SM, including two-Higgs-doublet and composite-Higgs models, also predict the existence of heavy scalar particles that can decay resonantly into a pair of Higgs bosons. “Large anomalous values of λ are already excluded, and the window of possible values continues to shrink towards the SM as the sensitivity grows,” says Schaarschmidt. “Furthermore, in recent di-Higgs analyses ATLAS and CMS have been able to establish a strong constraint on the coupling between two Higgs bosons and two vector bosons.”

For Christophe Grojean of the DESY theory group, the principal interest in di-Higgs production is to test the foundations of quantum field theory: “The basic principles of the SM are telling us that the way the Higgs boson interacts with itself is mostly dictated by its expectation value (linked to the Fermi constant, i.e. the muon and neutron lifetimes) and its mass. Verifying this prediction experimentally is therefore of prime importance.”
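
Grojean’s statement can be written down explicitly. In the SM the scalar potential has a single quartic coupling, so after electroweak symmetry breaking the self-interactions of the physical Higgs boson h are fixed entirely by the measured mass mH ≈ 125 GeV and the vacuum expectation value v, itself set by the Fermi constant GF (a leading-order relation; quantum corrections shift it slightly):

```latex
\[
V(h) \;=\; \tfrac{1}{2} m_H^2\, h^2 \;+\; \lambda v\, h^3 \;+\; \tfrac{1}{4}\lambda\, h^4 ,
\qquad
\lambda \;=\; \frac{m_H^2}{2 v^2} \;\approx\; 0.13 ,
\qquad
v \;=\; (\sqrt{2}\, G_F)^{-1/2} \;\approx\; 246\ \mathrm{GeV} .
\]
```

Any significant departure of the measured trilinear coupling from this prediction would signal physics beyond the SM.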

Acceleration, but not as we know it

Metal cavities are at the heart of the vast majority of the world’s 30,000 or so particle accelerators. Excited by microwaves, these resonant structures are finely tuned to generate oscillating electric fields that accelerate particles over many metres. But what if similar energies could be delivered 100 times more rapidly in structures a few tens of microns wide or less?

The key is to reduce the wavelength of the radiation powering the structure down to the optical scale of lasers. By combining solid-state lasers and modern nanofabrication, accelerating structures can be as small as a single micron wide. Though miniaturisation will never allow bunch charges as large as in today’s science accelerators, field strengths can be much higher before structure damage sets in. The trick is to replace highly conductive structures with dielectrics like silicon, fused silica and diamond, which have a much higher damage threshold at optical wavelengths. The length of accelerators can thereby be reduced by orders of magnitude, with millions to billions of particle pulses accelerated per second, depending on the repetition rate of the laser.

Recent progress with “on chip” accelerators promises powerful, high-energy and high-repetition-rate particle sources that are accessible to academic laboratories. Applications may range from localised particle or X-ray irradiation in medical facilities to quantum communication and computation using ultrasmall bunches of electrons as qubits.

Laser focused

The inspiration for on-chip accelerators dates back to 1962, when Koichi Shimoda of the University of Tokyo proposed using early lasers – then called optical masers – as a way to accelerate charged particles. The first experiments were conducted by shining light onto an open metal grating, generating an optical surface mode that could accelerate electrons passing above the surface. This technique was proposed by Yasutugu Takeda and Isao Matsui in 1968 and experimentally demonstrated by Koichi Mizuno in 1987 using terahertz radiation. In the 1980s, accelerator physicist Robert Palmer of Brookhaven National Laboratory proposed using rows of free-standing pillars of subwavelength separation illuminated by a laser – an idea that has propagated to modern devices.

The longitudinal electric field in a dual-pillar colonnade illuminated by a laser

In the 1990s, the groups of John Rosenzweig and Claudio Pellegrini at UCLA and Robert Byer at Stanford began to use dielectric materials, which offer low power absorption at optical frequencies. For femtosecond laser pulses, a simple dielectric such as silica glass can withstand optical field strengths exceeding 10 GV/m. It became clear that combining lasers with on-chip fabrication using dielectric materials could subject particles to accelerating forces 10 to 100 times higher than in conventional accelerators.

In the intervening decades, the dream of realising a laser-driven micro-accelerator has been enabled by major technological advances in the silicon-microchip industry and solid-state lasers. These industrial technologies have paved the way to fabricate and test particle accelerators made from silicon and other dielectric materials driven by ultrashort pulses of laser light. The dielectric laser accelerator (DLA) has been born.

Accelerator on a chip

Colloquially called an accelerator on a chip, a DLA is a miniature microwave accelerator reinvented at the micron scale using the methods of optical photonics rather than microwave engineering. In both cases, the wavelength of the driving field determines the typical transverse structure dimensions: centimetres for today’s microwave accelerators, but between one and 10 μm for optically powered devices.

Other laser-based approaches to miniaturisation are available. In plasma-wakefield accelerators, particles gain energy from electromagnetic fields excited in an ionised gas by a high-power drive laser (CERN Courier May/June 2024 p25). But the details are starkly different. DLAs are powered by lasers with thousands to millions of times lower peak energy. They operate with more than a million times lower electron charges, but at millions of pulses per second. And unlike plasma accelerators, but similarly to their microwave counterparts, DLAs use a solid material structure with a vacuum channel in which an electromagnetic mode continuously imparts energy to the accelerated particles.

Dielectric structures

This mode can be created by a single laser pulse perpendicular to the electron trajectory, two pulses from opposite sides, or a single pulse directed downwards into the plane of the chip. The latter two options offer better field symmetry.

As the laser impinges on the structure, its electrons experience an electromagnetic force that oscillates at the laser frequency. Particles that are correctly matched in phase and velocity experience a forward accelerating force (see “Continuous acceleration” image). Just as the imparted force begins to change sign, the particles enter the next accelerating cycle, leading to continuous energy gain.

In 2013, two early experiments attracted international attention by demonstrating the acceleration of electrons using structured dielectric devices. Peter Hommelhoff’s group in Germany accelerated 28 keV electrons inside a modified electron microscope using a single-sided glass grating (see “Evolution” image, left panel). In parallel, at SLAC, the groups of Robert Byer and Joel England accelerated relativistic 60 MeV electrons using a dual-sided grating structure, achieving an acceleration gradient of 310 MeV/m and 120 keV of energy gain (see “Evolution” image, middle panel).

Teaming up

Encouraged by the experimental demonstration of accelerating gradients of hundreds of MeV/m, and the power efficiency and compactness of modern solid-state fibre lasers, in 2015 the Gordon and Betty Moore Foundation funded an international collaboration of six universities, three government laboratories and two industry partners to form the Accelerator on a Chip International Program (ACHIP). The central goal is to demonstrate a compact tabletop accelerator based on DLA technology. ACHIP has since developed “shoebox” accelerators on both sides of the Atlantic and used them to demonstrate nanophotonics-based particle control, staging, bunching, focusing and full on-chip electron acceleration by laser-driven microchip devices.

Silicon’s compatibility with established nanofabrication processes makes it convenient, but reaching gradients of GeV/m requires materials with higher damage thresholds such as fused silica or diamond. In 2018, ACHIP research at UCLA accelerated electrons from a conventional microwave linac in a dual-sided fused silica structure powered by ultrashort (45 fs) pulses of 800 nm wavelength laser light. The result was an average accelerating gradient of 850 MeV/m and peak accelerating fields up to 1.8 GV/m – more than double the prior world best in a DLA, and still a world record.

Longitudinal and transverse beam control

Since DLA structures are non-resonant, the interaction time and energy gain of the particles are limited by the duration of the laser pulse. However, by tilting the laser’s pulse front, the interaction time can be arbitrarily increased. In a separate experiment at UCLA, using a laser pulse tilted by 45°, the interaction distance was increased to more than 700 µm – or 877 structure periods – with an energy gain of 0.315 MeV. The UCLA group has further extended this approach using a spatial light modulator to “imprint” the phase information onto the laser pulse, achieving more than 3 mm of interaction at 800 nm, or 3761 structure periods.

Under ACHIP, the structure design has evolved in several directions, from single-sided and double-sided gratings etched onto substrates to more recent designs with colonnades of free-standing silicon pillars forming the sides of the accelerating channel, as originally proposed by Robert Palmer some 30 years earlier. At present, these dual-pillar structures (see “Evolution” image, right panel) have proven to be the optimal trade-off between cleanroom fabrication complexity and experimental technicalities. However, due to the lower damage threshold of silicon as compared with fused silica, researchers have yet to demonstrate gradients above 350 MeV/m in silicon-based devices.

With the dual-pillar colonnade chosen as the fundamental nanophotonic building block, research has turned to making DLAs into viable accelerators with much longer acceleration lengths. To achieve this, we need to be able to control the beam and manipulate it in space and time, or electrons quickly diverge inside the narrow acceleration channel and are lost on impact with the accelerating structure. The ACHIP collaboration has made substantial progress here in recent years.

Focusing on nanophotonics

In conventional accelerators, quadrupole magnets focus electron beams in a near perfect analogy to how concave and convex lens arrays transport beams of light in optics. In laser-driven nanostructures it is necessary to harness the intrinsic focusing forces that are already present in the accelerating field itself.

In 2021, the Hommelhoff group guided an electron pulse through a 200 nm-wide and 80 µm-long structure based on a theoretical lattice designed by ACHIP colleagues at TU Darmstadt three years earlier. The lattice’s alternating-phase focusing (APF) periodically exchanges an electron bunch’s phase-space volume between the transverse dimension across the narrow width of the accelerating channel and the longitudinal dimension along the propagation direction of the electron pulse. In principle this technique could allow electrons to be guided through arbitrarily long structures.

Guiding is achieved by adding gaps between repeating sets of dual-pillar building-blocks (see “Beam control” image). Combined guiding and acceleration has been demonstrated within the past year. To achieve this, we select a design gradient and optimise the position of each pillar pair relative to the expected electron energy at that position in the structure. Initial electron energies are up to 30 keV in the Hommelhoff group, supplied by electron microscopes, and from 60 to 90 keV in the Byer group, using laser-assisted field emission from silicon nanotips. When accelerated, the electrons’ velocities change dramatically from 0.3 to 0.7 times the speed of light or higher, requiring the periodicity of the structure to change by tens of nanometres to match the velocity of the accelerating wave to the speed of the particles.
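
The matching condition itself is simple: for a drive wavelength λ, the local structure period must equal βλ, the distance a particle of velocity βc travels in one optical cycle (for the first spatial harmonic). A short sketch under that assumption, with the velocity computed relativistically from the kinetic energy; the 2 µm drive wavelength is an assumed, illustrative value:

```python
import math

def beta(kinetic_energy_keV, rest_mass_keV=511.0):
    """Relativistic velocity (in units of c) for an electron of given kinetic energy."""
    gamma = 1.0 + kinetic_energy_keV / rest_mass_keV
    return math.sqrt(1.0 - 1.0 / gamma**2)

laser_wavelength_um = 2.0  # assumed drive wavelength for a sub-relativistic DLA (illustrative)

for energy_keV in (30.0, 90.0, 200.0):       # spanning the energies discussed above
    b = beta(energy_keV)
    period_um = b * laser_wavelength_um      # synchronicity condition: period = beta * lambda
    print(f"{energy_keV:5.0f} keV: beta = {b:.2f}, structure period ~ {period_um:.2f} um")

# Output: beta ~ 0.33, 0.53, 0.70 -- the 0.3-0.7c range quoted above. For the relativistic
# UCLA/SLAC experiments (beta ~ 1, 800 nm drive), the period is simply ~0.8 um, consistent
# with the 877 periods over ~700 um mentioned earlier.
```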

On-chip accelerator light source

Although focusing in the narrow dimension of the channel is the most critical requirement, an extension of this method has been proposed that varies the geometry of the pillars along the out-of-plane dimension in order to focus beams in the transverse vertical direction, out of the plane of the chip. Without it, the natural divergence of the beam in the vertical direction eventually becomes dominant. This approach is awaiting experimental realisation.

Acceleration gradients can be improved by optimising material choice, pillar dimensions, peak optical field strength and the duration of the laser pulses. In recent demonstrations, both the Byer and Hommelhoff groups have kept pillar dimensions constant to ease difficulties in uniformly etching the structures during nanofabrication. The complete structure is then a series of APF cells with tapered cell lengths and tapered dual-pillar periodicity. The combination of tapers accommodates both the changing size of the electron beam and the phase matching required due to the increasing electron energy.

In these proof-of-principle experiments, the Hommelhoff group has designed a nanophotonic dielectric laser accelerator for an injection energy of 28.4 keV and an average acceleration gradient of at least 22.7 MeV/m, demonstrating a 43% energy increase over a 500 µm-long structure. The Byer group recently demonstrated the acceleration of a 96 keV beam at average gradients of 35 to 50 MeV/m, reaching a 25% energy increase over 708 µm. The APF periods were in the range of tens of microns and were tapered along with the energy-gain design curve. The beams were not bunched, and by design only 4% of the electrons were captured and accelerated.

One final experimental point has important implications for the future use of DLAs as compact tabletop tools for ultrafast science. Upon interaction with the DLA, electron pulses have been observed to form trains of evenly spaced sub-wavelength attosecond-scale bunches. This effect was shown experimentally by both groups in 2019, with electron bunches measured down to 270 attoseconds, or roughly 4% of the optical cycle.
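
The quoted fraction pins down the timescale. A rough consistency check, assuming the bunch spacing is set by the optical cycle of the drive laser (the implied wavelength is an inference, not a number given in the text):

```python
bunch_duration_s = 270e-18   # shortest measured bunch duration quoted above (270 attoseconds)
fraction_of_cycle = 0.04     # "roughly 4% of the optical cycle"

optical_period_s = bunch_duration_s / fraction_of_cycle  # ~6.8 fs
implied_wavelength_um = 3.0e8 * optical_period_s * 1e6   # lambda = c * T

print(f"optical period   ~ {optical_period_s * 1e15:.1f} fs")  # ~6.8 fs
print(f"drive wavelength ~ {implied_wavelength_um:.1f} um")    # ~2 um-class laser (an inference)
```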

From demonstration to application

To date, researchers have demonstrated high gradient (GeV/m) acceleration, compatible nanotip electron sources, laser-driven focusing, interaction lengths up to several millimetres, the staging of multiple structures, and attosecond-level control and manipulation of electrons in nanophotonic accelerators. The most recent experiments combine these techniques, allowing the capture of an accelerated electron bunch with net acceleration and precise control of electron dynamics for the first time.

These milestone experiments demonstrate the viability of the nanophotonic dielectric electron accelerator as a scalable technology that can be extended to arbitrarily long structures and ever higher energy gains. But for most applications, beam currents need to increase.

A compelling idea proposes to “copy and paste” the accelerator design in the cleanroom and make a series of parallel accelerating channels on one chip. Another option is to increase the repetition rate of the driving laser by orders of magnitude to produce more electron pulses per second. Optimising the electron sources used by DLAs would also allow for more electrons per pulse, and parallel arrays of emitters on multi-channel devices promise tremendous advantages. Eventually, active nanophotonics can be employed to integrate the laser and electron sources on a single chip.

Once laser and electron sources are combined, we expect on-chip accelerators to become ubiquitous devices with wide-ranging and unexpected applications, much like the laser itself. Future applications will range from medical treatment tools to electron probes for ultrafast science. According to International Atomic Energy Agency statistics, 13% of major accelerator facilities around the world power light sources. On-chip accelerators may follow a similar path.

Illuminating concepts

A concept has been proposed for a dielectric laser-driven undulator (DLU) which uses laser light to generate deflecting forces that wiggle the electrons so that they emit coherent light. Combining a DLA and a DLU could take advantage of the unique time structure of DLA electrons to produce ultrafast pulses of coherent radiation (see “Compact light source” image). Such compact new light sources – small enough to be accessible to individual universities – could generate extremely short flashes of light in ultraviolet or even X-ray wavelength ranges, enabling tabletop instruments for the study of material dynamics on ultrafast time scales. Pulse trains of attosecond electron bunches generated by a DLA could provide excellent probes of transient molecular electronic structure.

The generation of intriguing quantum states of light might also be possible with nanophotonic devices. This quantum light results from shaping electron wavepackets inside the accelerator and making them radiate, perhaps even leading to on-chip quantum-communication light sources.

In the realm of medicine, an ultracompact self-contained multi-MeV electron source based on integrated photonic particle accelerators could enable minimally invasive cancer treatments with improved dose control.

One day, instruments relying on high-energy electrons produced by DLA technology may bring the science of large facilities into academic-scale laboratories, making novel science endeavours accessible to researchers across various disciplines and minimally invasive medical treatments available to those in need. These visionary applications may take decades to be fully realised, but we should expect developments to continue to be rapid. The biggest challenges will be increasing beam power and transporting beams across greater energy gains. These need to be addressed to reach the stringent beam quality and machine requirements of longer term and higher energy applications.

Six rare decays at the energy frontier

Thanks to its 13.6 TeV collisions, the LHC directly explores distance scales as short as 5 × 10⁻²⁰ m. But the energy frontier can also be probed indirectly. By studying rare decays, distance scales as small as a zeptometre (10⁻²¹ m) can be resolved, probing the existence of new particles with masses as high as 100 TeV. Such particles are out of the reach of any high-energy collider that could be built in this century.
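
The translation between mass scale and distance is the usual quantum relation d ∼ ħc/E. As a rough order-of-magnitude estimate, with ħc ≈ 0.197 GeV fm:

```latex
\[
d \;\sim\; \frac{\hbar c}{E}
  \;\approx\; \frac{0.197\ \mathrm{GeV\,fm}}{10^{5}\ \mathrm{GeV}}
  \;\approx\; 2 \times 10^{-21}\ \mathrm{m}
  \qquad \text{for } E \approx 100\ \mathrm{TeV} ,
\]
```

which is the zeptometre scale quoted above.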

The key concept is the quantum fluctuation. Just because a collision doesn’t have enough energy to bring a new particle into existence does not mean that a very heavy new particle cannot inform us about its existence. Thanks to Heisenberg’s uncertainty principle, new particles could be virtually exchanged between the other particles involved in the collisions, modifying the probabilities for the processes we observe in our detectors. The effect of massive new particles could be unmistakable, giving physicists a powerful tool for exploring more deeply into the unknown than accelerator technology and economic considerations allow direct searches to go.

The search for new particles and forces beyond those of the Standard Model is strongly motivated by the need to explain dark matter, the huge range of particle masses from the tiny neutrino to the massive top quark, and the asymmetry between matter and antimatter that is responsible for our very existence. As direct searches at the LHC have not yet provided any clue as to what these new particles and forces might be, indirect searches are growing in importance. Studying very rare processes could allow us to see imprints of new particles and forces acting at much shorter distance scales than it is possible to explore at current and future colliders.

Anticipating the November Revolution

The charm quark is a good example. The story of its direct discovery unfolded 50 years ago, in November 1974, when teams at SLAC and MIT simultaneously discovered a charm–anticharm meson in particle collisions. But four years earlier, Sheldon Glashow, John Iliopoulos and Luciano Maiani had already predicted the existence of the charm quark thanks to the surprising suppression of the neutral kaon’s decay into two muons.

Neutral kaons are made up of a strange quark and a down antiquark, or vice versa. In the Standard Model, their decay to two muons can proceed most simply through the virtual exchange of two W bosons, one virtual up quark and a virtual neutrino. The trouble was that the rate for the neutral kaon decay to two muons predicted in this manner turned out to be many orders of magnitude larger than observed experimentally.

NA62 experiment

Glashow, Iliopoulos and Maiani (GIM) proposed a simple solution. With visionary insight, they hypothesised a new quark, the charm quark, which would totally cancel the contribution of the up quark to this decay if their masses were equal to each other. As the rate was non-vanishing and the charm quark had not yet been observed experimentally, they concluded that the mass of the charm quark must be significantly larger than that of the up quark.

Their hunch was correct. In early 1974, months before its direct discovery, Mary K Gaillard and Benjamin Lee predicted the charm quark’s mass by analysing another highly suppressed quantity, the mass difference in K⁰–K̄⁰ mixing.

As modifications to the GIM mechanism by new heavy particles are still a hot prospect for discovering new physics in the 2020s, the details merit a closer look. Years earlier, Nicola Cabibbo had correctly guessed that weak interactions act between up quarks and a mixture (d cos θ + s sin θ) of the down and strange quarks. We now know that charm quarks interact with the mixture (–d sin θ + s cos θ). This is just a rotation of the down and strange quarks through this Cabibbo angle. The minus sign causes the destructive interference observed in the GIM mechanism.
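
The cancellation can be seen directly in the couplings. Writing the amplitude for the internal up-type quarks schematically, with F(m) a generic loop function of the quark mass (a leading-order sketch in the two-generation Cabibbo picture):

```latex
\[
\mathcal{A} \;\propto\; \cos\theta\,\sin\theta\, F(m_u) \;-\; \sin\theta\,\cos\theta\, F(m_c)
  \;=\; \sin\theta\,\cos\theta\,\bigl[F(m_u) - F(m_c)\bigr] ,
\]
```

which vanishes exactly if the up and charm masses are equal, and is strongly suppressed – but not zero – because they differ.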

With the discovery of a third generation of quarks, quark mixing is now described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix – a unitary three-dimensional rotation with complex phases that parameterise CP violation. Understanding its parameters may prove central to our ability to discover new physics this decade.

On to the 1980s

The story of indirect discoveries continued in the late 1980s, when the magnitude of B⁰d–B̄⁰d mixing implied the existence of a heavy top quark, which was confirmed in 1995, completing the third generation of quarks. The W, Z and Higgs bosons were also predicted well in advance of their discoveries. It’s only natural to expect that indirect searches for new physics will be successful at even shorter distance scales.

Belle II experiment at KEK

Rare weak decays of kaons and B mesons that are strongly suppressed by the GIM mechanism are expected to play a crucial role. Many channels of interest are predicted by the Standard Model to have branching ratios as low as 10⁻¹¹, often being further suppressed by small elements of the CKM matrix. If the GIM mechanism is violated by new-physics contributions, these branching ratios – the fraction of times a particle decays that way – could be much larger.

Measuring suppressed branching ratios with respectable precision this decade is therefore an exciting prospect. Correlations between different branching ratios can be particularly sensitive to new physics and could provide the first hints of physics beyond the Standard Model. A good example is the search for the violation of lepton-flavour universality (CERN Courier May/June 2019 p33). Though hints of departures from muon–electron universality seem to be receding, hints that muon–tau universality may be violated still remain, and the measured branching ratios for B → K(K*)µ⁺µ⁻ differ visibly from Standard Model predictions.

The first step in this indirect strategy is to search for discrepancies between theoretical predictions and experimental observables. The main challenge for experimentalists is the low branching ratios for the rare decays in question. However, there are very good prospects for measuring many of these highly suppressed branching ratios in the coming years.

Six channels for the 2020s

Six channels stand out today for their superb potential to observe new physics this decade. If their decay rates defy expectations, the nature of any new physics could be identified by studying the correlations between these six decays and others.

The first two channels are kaon decays: the measurements of K⁺ → π⁺νν̄ by the NA62 collaboration at CERN (see “Needle in a haystack” image), and the measurement of KL → π⁰νν̄ by the KOTO collaboration at J-PARC in Japan. The branching ratios for these decays are predicted to be in the ballpark of 8 × 10⁻¹¹ and 3 × 10⁻¹¹, respectively.

Independent observables

The second two are measurements of B → Kνν̄ and B → K*νν̄ by the Belle II collaboration at KEK in Japan. Branching ratios for these decays are expected to be much higher, in the ballpark of 10⁻⁵.

The final two channels, which are only accessible at the LHC, are measurements of the dimuon decays Bs → µ⁺µ⁻ and Bd → µ⁺µ⁻ by the LHCb, CMS and ATLAS collaborations. Their branching ratios are about 4 × 10⁻⁹ and 10⁻¹⁰ in the Standard Model. Though the decays B → K(K*)µ⁺µ⁻ are also promising, they are less theoretically clean than these six.

The main challenge for theorists is to control quantum-chromodynamics (QCD) effects, both below 10⁻¹⁶ m, where strong interactions weaken, and in the non-perturbative region at distance scales of about 10⁻¹⁵ m, where quarks are confined in hadrons and calculations become particularly tricky. While satisfactory precision has been achieved at short-distance scales over the past three decades, the situation for non-perturbative computations is expected to improve significantly in the coming years, thanks to lattice QCD and analytic approaches such as dual QCD and chiral perturbation theory for kaon decays, and heavy-quark effective field theory for B decays.

Another challenge is that Standard Model predictions for the branching ratios require values for four CKM parameters that are not predicted by the Standard Model, and which must be measured using kaon and B-meson decays. These are the magnitude of the up-strange (Vus) and charm-bottom (Vcb) couplings and the CP-violating phases β and γ. The current precision on measurements of Vus and β is fully satisfactory, and the error on γ = (63.8 ± 3.5)° should be reduced to 1° by LHCb and Belle II in the coming years. The stumbling block is Vcb, where measurements currently disagree. Though experimental problems have not been excluded, the tension is thought to originate in QCD calculations. While measurements of exclusive decays to specific channels yield 39.21(62) × 10⁻³, inclusive measurements integrated over final states yield 41.96(50) × 10⁻³. This discrepancy makes the predicted branching ratios differ by 16% for the four B-meson decays, and by 25% and 35% for K⁺ → π⁺νν̄ and KL → π⁰νν̄. These discrepancies are a disaster for the theorists who have succeeded, over many years of work, in reducing the QCD uncertainties in these decays to the level of a few percent.
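
The size of those spreads can be understood from a rough power-law scaling. The sketch below uses assumed illustrative exponents; the exact dependence also involves the CKM phases, which is why it only roughly reproduces the 16–35% figures quoted above:

```python
v_cb_incl = 41.96e-3        # inclusive determination of |Vcb| quoted above
v_cb_excl = 39.21e-3        # exclusive determination of |Vcb| quoted above
r = v_cb_incl / v_cb_excl   # ~1.07, i.e. a ~7% difference

# Assumed illustrative power-law dependences of the predicted branching ratios on |Vcb|
assumed_scalings = {
    "B -> K(*) nu nubar (~|Vcb|^2)": 2,
    "K+ -> pi+ nu nubar (~|Vcb|^3)": 3,
    "KL -> pi0 nu nubar (~|Vcb|^4)": 4,
}
for channel, power in assumed_scalings.items():
    shift = (r**power - 1.0) * 100.0
    print(f"{channel}: prediction shifts by ~{shift:.0f}%")   # ~15%, ~23%, ~31%
```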

One solution is to replace the CKM dependence of the branching ratios with observables where QCD uncertainties are under good control, for example: the mass differences in B⁰s–B̄⁰s and B⁰d–B̄⁰d mixing (∆Ms and ∆Md); a parameter that measures CP violation in K⁰–K̄⁰ mixing (εK); and the CP asymmetry that yields the angle β. Fitting these observables to the experimental data avoids having to choose between inclusive and exclusive values for the charm-bottom coupling, and avoids the 3.5° uncertainty on γ, which in this strategy is reduced to 1.6°. Uncertainty on the predicted branching ratios is thereby reduced to 6% and 9% for B → Kνν̄ and B → K*νν̄, to 5% for the two kaon decays, and to 4% for Bs → µ⁺µ⁻ and Bd → µ⁺µ⁻.

So what is the current experimental situation for the six channels? The latest NA62 measurement of K⁺ → π⁺νν̄ is 25% larger than the Standard Model prediction. Its 36% uncertainty signals full compatibility at present, and precludes any conclusions about the size of new physics contributing to this decay. Next year, when the full analysis has been completed, it may become possible to draw such conclusions. It is unfortunate that the HIKE proposal was not adopted (CERN Courier May/June 2024 p7), as NA62’s expected precision of 15% could have been reduced to 5%. This could turn out to be crucial for the discovery of new physics in this decay.

The present upper bound on KL → π⁰νν from KOTO is still two orders of magnitude above the Standard Model prediction. This bound should be lowered by at least one order of magnitude in the coming years. As this decay is fully governed by CP violation, one may expect that new physics will impact it significantly more than CP-conserving decays such as K⁺ → π⁺νν.

Branching out from Belle

At present, the most interesting result concerns a 2023 update from Belle II to the measured branching ratio for B⁺ → K⁺νν (see “Interesting excess” image). The resulting central value from Belle II and BaBar is currently a factor of 2.6 above the Standard Model prediction. This has sparked many theoretical analyses around the world, but the experimental error of 30% once again does not allow for firm conclusions. Measurements of other charge and spin configurations of this decay are pending.

Finally, both dimuon B-meson decays are at present consistent with Standard Model predictions, but significant improvements in experimental precision could still reveal new physics at work, especially in the case of Bd.

Hypothetical future measurements of branching ratios

It will take a few years to conclude if new physics contributions are evident in these six branching ratios, but the fact that all are now predicted accurately means that we can expect to observe or exclude new physics in them before the end of the decade. This would be much harder if measurements of the Vcb coupling were involved.

So far, so good. But what if the observables that replaced Vcb and γ are themselves affected by new physics? How can they be trusted to make predictions against which rare decay rates can be tested?

Here comes some surprisingly good news: new physics does not appear to be required to describe them. In the new basis of observables ΔMd, εK and ΔMs, the three constraints intersect at a single point in the Vcb–γ plane (see “No new physics” figure). This analysis favours the inclusive determination of Vcb and yields a value for γ that is consistent with the experimental world average and a factor of two more accurate. It’s important to stress, though, that non-perturbative four-flavour lattice-QCD calculations of ∆Ms and ∆Md by the HPQCD lattice collaboration played a key role here. It is crucial that another lattice-QCD collaboration repeat these calculations, as the three curves cross at different points in three-flavour calculations that exclude charm.


In this context, one realises the advantages of Vcb–γ plots compared to the usual unitarity-triangle plots, where Vcb is not seen and 1° improvements in the determination of γ are difficult to appreciate. In the late 2020s, determining Vcb and γ from tree-level decays will be a central issue, and a combination of Vcb-independent and Vcb-dependent approaches will be needed to identify any concrete model of new physics.

We should therefore hope that the tension between inclusive and exclusive determinations of Vcb will soon be conclusively resolved. Forthcoming measurements of our six rare decays may then reveal new physics at the energy frontier (see “New physics” figure). With a 1° precision measurement of γ on the horizon, and many Vcb-independent ratios available, interesting years are ahead in the field of indirect searches for new physics.

In 1676 Antonie van Leeuwenhoek discovered a microuniverse populated by bacteria, which he called animalcula, or little animals. Let us hope that we will, in this decade, discover new animalcula on our flavour expedition to the zeptouniverse.

How to democratise radiation therapy

How important is radiation therapy to clinical outcomes today?

Manjit Dosanjh

Manjit Fifty to 60% of cancer patients can benefit from radiation therapy for cure or palliation. Pain relief is also critical in low- and middle-income countries (LMICs) because by the time tumours are discovered it is often too late to cure them. Radiation therapy typically accounts for 10% of the cost of cancer treatment, but more than half of the cure, so it’s relatively inexpensive compared to chemotherapy, surgery or immunotherapy. Radiation therapy will be tremendously important for the foreseeable future.

What is the state of the art?

Manjit The most precise thing we have at the moment is hadron therapy with carbon ions, because the Bragg peak is very sharp. But there are only 14 facilities in the whole world. It’s also hugely expensive, with each machine costing around $150 million (M). Proton therapy is also attractive, with each proton delivering about a third of the radiobiological effect of a carbon ion. The first proton patient was treated at Berkeley in September 1954, in the same month CERN was founded. Seventy years later, we have about 130 machines and we’ve treated 350,000 patients. But the reality is that we have to make the machines more affordable and more widely available. Particle therapy with protons and carbon ions probably accounts for less than 1% of radiation-therapy treatments whereas roughly 90 to 95% of patients are treated using electron linacs. These machines are much less expensive, costing between $1M and $5M, depending on the model and how good you are at negotiating.

Most radiation therapy in the developing world is delivered by cobalt-60 machines. How do they work?

Manjit A cobalt-60 machine treats patients using a radioactive source. Cobalt-60 has a half-life of just over five years, so as the source ages patients have to be treated for longer and longer to receive the same dose, which is a hardship for them and reduces the number of patients who can be treated. Linacs are superior because you can take advantage of advanced treatment options that target the tumour using focusing, multi-beams and imaging. You come in from different directions and energies, and you can paint the tumour with precision. To the best extent possible, you can avoid damaging healthy tissue. And the other thing about linacs is that once you turn them off there’s no radiation anymore, whereas cobalt machines present a security risk. One reason we’ve got funding from the US Department of Energy (DOE) is that our work supports their goal of reducing global reliance on high-activity radioactive sources through the promotion of non-radioisotopic technologies. The problem was highlighted by the ART (access to radiotherapy technologies) study I led for the International Cancer Expert Corps (ICEC) on the state of radiation therapy in former Soviet Union countries. There, the legacy has always been cobalt. Only three of the 11 countries we studied have had the resources and knowledge to be able to go totally to linacs. Most still deliver more than 50% of their radiation therapy with cobalt.

The kick-off meeting for STELLA took place at CERN from 29 to 30 May. How will the project work?

Manjit STELLA stands for Smart Technology to Extend Lives with Linear Accelerators. We are an international collaboration working to increase access to radiation therapy in LMICs, and in rural regions in high-income countries. We’re working to develop a linac that is less expensive, more robust and, in time, less costly to operate, service and maintain than currently available options.

Steinar Stapnes

Steinar $1.75M in funding from the DOE has launched an 18-month “pre-design” study. ICEC and CERN will collaborate with the universities of Oxford, Cambridge and Lancaster, and a network of 28 LMICs that advise and guide us, providing vital input on their needs. We’re not going to build a radiation-therapy machine, but we will specify it to such a level that we can have informed discussions with industry partners, foundations, NGOs and governments who are interested in investing in developing lower-cost and more robust solutions. The next steps, including prototype construction, will require a lot more funding.

What motivates the project?

Steinar The basic problem is that access to radiation therapy in LMICs is embarrassingly limited. Most technical developments are directed towards high-income countries, ultimately profiting the rich people in the world – in other words, ourselves. At present, only 10% of patients in LMICs have access to radiation therapy.


Manjit The basic design of the linac hasn’t changed much in 70 years. Despite that, prices are going up, and the cost of service contracts and software upgrades is very high. Currently, we have around 420 machines in Africa, many of which are down for long intervals, which often impacts treatment outcomes. Often, a hospital can buy the linac but they can’t afford the service contract or repairs, or they don’t have staff with the skills to maintain them. I was born in a small village with no gas, electricity or water. I wasn’t supposed to go to school because girls didn’t. I was fortunate to have got an education that enabled me to have a better life with access to the healthcare treatments that I need. I look at this question from the perspective of how we can make radiation therapy available around the world in places such as where I’m originally from.

What’s your vision for the STELLA machine?

Steinar We want to get rid of the cobalt machines because they are not as effective as linacs for cancer treatment and they are a security risk. Hadron-therapy machines are more costly, but they are more precise, so we need to make them more affordable in the future. As Manjit said, globally 90 or 95% of radiation treatments are given by an electron linac, most often running at 6 MeV. Such linacs are mature technology and are no longer developing very fast. Our challenge is to make them more reliable and serviceable. We want to develop a workhorse radiation-therapy system that can do high-quality treatment. The other, perhaps more important, key parts are imaging and software. CERN has valuable experience here because we build and integrate a lot of detector systems, including readout and data analysis. From a certain perspective, STELLA will be an advanced detector system with an integrated linac.

Are any technical challenges common to both STELLA and to projects in fundamental physics?

Steinar The early and remote prediction of faults is one. This area is developing rapidly, and it would be very interesting for us to deploy this on a number of accelerators. On the detector and sensor side, we would like to make STELLA easily upgradeable, and some of these upgrades could be very much linked to what we want to do for our future detectors. This can increase the industrial base for developing these types of detectors as the medical market is very large. Software can also be interesting, for example for distributed monitoring and learning.

Where are the biggest challenges in bringing STELLA to market?

Steinar We must make medical linacs open in terms of hardware. Hospitals with local experts must be able to improve and repair the system. It must have a long lifetime. It needs to be upgradeable, particularly with regard to imaging, because detector R&D and imaging software are moving quickly. We want it to be open in terms of software, so that we can monitor the performance of the system, predict faults, and do treatment planning off site using artificial intelligence. Our biggest contribution will be to write a specification for a system where we “enforce” this type of open hardware and open software. Everything we do in our field relies on that open approach, which allows us to integrate the expertise of the community. That’s something we’re good at at CERN and in our community. A challenge for STELLA is to build in openness while ensuring that the machines can remain medically qualified and operational at all times.

How will STELLA disrupt the model of expensive service contracts and lower the cost of linacs?

Steinar This is quite a complex area, and we don’t know the solution yet. We need to develop a radically different service model so that developing countries can afford to maintain their machines. Deployment might also need a different approach. One of the work packages of this project is to look at different models and bring in expertise on new ideas. The challenges are not unique to radiation therapy. In the next 18 months we’ll get input from people who’ve done similar things.

A medical linac at the Genolier Clinic

Manjit Gavi, the global alliance for vaccines, was set up 24 years ago to save the millions of children who died every year from vaccine-preventable diseases such as measles, TB, tetanus and rubella, using vaccines that were not available to millions of children in poorer parts of the world, especially Africa. Before, people were dying of these diseases; now they get a vaccination and live. Vaccines and radiation therapy are totally different technologies, but we may need to think that way to really make a critical difference.

Steinar There are differences with respect to vaccine development. A vaccine is relatively cheap, whereas a linac costs millions of dollars. The diseases addressed by vaccines affect a lot of children, more so than cancer, so the patients have a different demographic. But nonetheless, the fact is that there was a group of countries and organisations who took this on as a challenge, and we can learn from their experiences.

Manjit We would like to work with the UN on their efforts to get rid of the disparities and focus on making radiation therapy available to the 70% of the world that doesn’t have access. To accomplish that, we need global buy-in, especially from the countries who are really suffering, and we need governmental, private and philanthropic support to do so.

What’s your message to policymakers reading this who say that they don’t have the resources to increase global access to radiation therapy?

Steinar Our message is that this is a solvable problem. The world needs roughly 5000 machines at $5M or less each. On a global scale this is absolutely solvable. We have to find a way to spread out the technology and make it available for the whole world. The problem is very concrete. And the solution is clear from a technical standpoint.

Manjit The International Atomic Energy Agency (IAEA) have said that the world needs one of these machines for every 200 to 250 thousand people. Globally, we have a population of 8 billion. This is therefore a huge opportunity for businesses and a huge opportunity for governments to improve the productivity of their workforces. If patients are sick they are not productive. Particularly in developing countries, patients are often of a working economic age. If you don’t have good machines and early treatment options for these people, not only are they not producing, but they’re going to have to be taken care of. That’s an economic burden on the health service and there is a knock-on effect on agriculture, food, the economy and the welfare of children. One example is cervical cancer. Nine out of 10 deaths from cervical cancer are in developing countries. For every 100 women affected, 20 to 30 children die because they don’t have family support.

How can you make STELLA attractive to investors?

Steinar Our goal is to be able to discuss the project with potential investor partners – and not only in industry but also governments and NGOs, because the next natural step will be to actually build a prototype. Ultimately, this has to be done by industry partners. We likely cannot rely on them to completely fund this out of their own pockets, because it’s a high-risk project from a business point of view. So we need to develop a good business model and find government and private partners who are willing to invest. The dream is to go into a five-year project after that.


Manjit It’s important to remember that this opportunity is not only linked to low-income countries. One in two UK citizens will get cancer in their lifetime, but according to a study that came out in February, only 25 to 28% of UK citizens have adequate access to radiation therapy. This is also an opportunity for young people to join an industrial system that could actually solve this problem. Radiation therapy is one of the most multidisciplinary fields there is, all the way from accelerators to radio-oncology and everything in between. The young generation is altruistic. This will capture their spirit and imagination.

Can STELLA help close the radiation-therapy gap?

Manjit When the IAEA first visualised radiation-therapy inequalities in 2012, it raised awareness, but it didn’t move the needle. That’s because it’s not enough to just train people. We also need more affordable and robust machines. If in 10 or 20 years people start getting treatment because they are sick, not because they’re dying, that would be a major achievement. We need to give people hope that they can recover from cancer.

A gold mine for neutrino physics

In 1968, deep underground in the Homestake gold mine in South Dakota, Ray Davis Jr. observed too few electron neutrinos emerging from the Sun. The reason, we now know, is that many had changed flavour in flight, thanks to tiny unforeseen masses.

At the same time, Steven Weinberg and Abdus Salam were carrying out major construction work on what would become the Standard Model of particle physics, building the Higgs mechanism into Sheldon Glashow’s unification of the electromagnetic and weak interactions. The Standard Model is still bulletproof today, with one proven exception: the nonzero neutrino masses for which Davis’s observations were in hindsight the first experimental evidence.

Today, neutrinos are still one of the most promising windows into physics beyond the Standard Model, with the potential to impact many open questions in fundamental science (CERN Courier May/June 2024 p29). One of the most ambitious experiments to study them is currently taking shape in the same gold mine as Davis’s experiment more than half a century before.

Deep underground

In February this year, the international Deep Underground Neutrino Experiment (DUNE) completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility (SURF) in the Homestake mine. Over two years, 800,000 tonnes of rock were excavated to reveal an underground campus the size of eight soccer fields, ready to house four 17,500-tonne liquid–argon time-projection chambers (LArTPCs). As part of a diverse scientific programme, the new experiment will tightly constrain the working model of three massive neutrinos, and possibly even disprove it.

DUNE will measure the disappearance of muon neutrinos and the appearance of electron neutrinos over 1300 km and a broad spectrum of energies. Given the long journey of its accelerator-produced neutrinos from the Long Baseline Neutrino Facility (LBNF) at Fermilab in Illinois to SURF in South Dakota, DUNE will be uniquely sensitive to asymmetries between the appearance of electron neutrinos and antineutrinos. One predicted asymmetry will be caused by the presence of electrons and the absence of positrons in the Earth’s crust. This asymmetry will probe neutrino mass ordering – the still unknown ordering of narrow and broad mass splittings between the three tiny neutrino masses. In its first phase of operation, DUNE will definitively establish the neutrino mass ordering regardless of other parameters.
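As a rough guide to why a 1300 km baseline pairs with a few-GeV beam, the sketch below evaluates only the leading vacuum term of the νμ → νe appearance probability with representative oscillation parameters; matter effects and CP violation, the very asymmetries DUNE is designed to measure, are deliberately left out, so the numbers are illustrative only.

```python
import math

# Simplified, vacuum-only estimate of nu_mu -> nu_e appearance at a 1300 km
# baseline, using the leading term of the three-flavour formula and
# representative oscillation parameters (assumed values, not DUNE's inputs).

L_KM = 1300.0          # baseline (km)
DM2_31 = 2.5e-3        # |Delta m^2_31| in eV^2 (representative value)
SIN2_THETA23 = 0.5     # sin^2(theta_23)
SIN2_2THETA13 = 0.088  # sin^2(2 theta_13)

def p_mue(energy_gev: float) -> float:
    """Leading-order nu_mu -> nu_e appearance probability in vacuum."""
    phase = 1.267 * DM2_31 * L_KM / energy_gev   # oscillation phase (rad)
    return SIN2_THETA23 * SIN2_2THETA13 * math.sin(phase) ** 2

for e in (1.0, 2.5, 4.0):   # GeV, spanning a wide-band beam
    print(f"E = {e:.1f} GeV: P(nu_mu -> nu_e) ~ {p_mue(e):.3f}")
```

The first oscillation maximum of this expression sits near 2.5 GeV for 1300 km, which is why a wide-band beam peaked at a few GeV covers the oscillation pattern so well.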

The field cage of a prototype liquid–argon time-projection chamber

If CP symmetry is violated, DUNE will then observe a second asymmetry between electron neutrinos and antineutrinos, which by experimental design is not degenerate with the first asymmetry. Potentially the first evidence for CP violation by leptons, this measurement will be an important experimental input to the fundamental question of how a matter–antimatter asymmetry developed in the early universe.

If CP violation is near maximal, DUNE will observe it at 3σ (99.7% confidence) in its first phase. In DUNE and LBNF’s recently reconceptualised second phase, which was strongly endorsed by the US Department of Energy’s Particle Physics Project Prioritization Panel (P5) in December (CERN Courier January/February 2024 p7), 3σ sensitivity to CP violation will be extended to more than 75% of possible values of δCP, the complex phase that parameterises this effect in the three-massive-neutrino paradigm.

Combining DUNE’s measurements with those by fellow next-generation experiments JUNO and Hyper-Kamiokande will test the three-flavour paradigm itself. This paradigm rotates three massive neutrinos into the mixtures that interact with charged leptons via the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which features three angles in addition to δCP.
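For reference, the standard parameterisation of the PMNS matrix in terms of the three mixing angles and δCP (with cij = cos θij and sij = sin θij) is shown below; possible Majorana phases are omitted, as they do not affect oscillations.

```latex
U_{\mathrm{PMNS}} =
\begin{pmatrix}
 c_{12} c_{13} & s_{12} c_{13} & s_{13}\, e^{-i\delta_{CP}} \\
 -s_{12} c_{23} - c_{12} s_{23} s_{13}\, e^{i\delta_{CP}} &
  c_{12} c_{23} - s_{12} s_{23} s_{13}\, e^{i\delta_{CP}} & s_{23} c_{13} \\
 s_{12} s_{23} - c_{12} c_{23} s_{13}\, e^{i\delta_{CP}} &
 -c_{12} s_{23} - s_{12} c_{23} s_{13}\, e^{i\delta_{CP}} & c_{23} c_{13}
\end{pmatrix},
\qquad c_{ij} \equiv \cos\theta_{ij},\; s_{ij} \equiv \sin\theta_{ij}.
```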

As well as promising world-leading resolution on the PMNS angle θ23, DUNE’s measurements of θ13 and the Δm²₃₂ mass splitting will be different and complementary to those of JUNO in ways that could be sensitive to new physics. JUNO, which is currently under construction in China, will operate in the vicinity of a flux of lower-energy electron antineutrinos from nuclear reactors. DUNE and Hyper-Kamiokande, which is currently under construction in Japan, will both study accelerator-produced sources of muon neutrinos and antineutrinos, though using radically different baselines, energy spectra and detector designs.

Innovative and impressive

DUNE’s detector technology is innovative and impressive, promising millimetre-scale precision in imaging the interactions of neutrinos from accelerator and astrophysical sources (see “Millimetre precision” image). The argon target provides unique sensitivity to low-energy electron neutrinos from supernova bursts, while the detectors’ imaging capabilities will be pivotal when searching for beyond-the-Standard-Model physics such as dark matter, sterile-neutrino mixing and non-standard neutrino interactions.

First proposed by Nobel laureate Carlo Rubbia in 1977, LArTPC technology demonstrated its effectiveness as a neutrino detector at Gran Sasso’s ICARUS T600 detector more than a decade ago, and also more recently in the MicroBooNE experiment at Fermilab. Fermilab’s short-baseline neutrino programme now includes ICARUS and the new Short Baseline Neutrino Detector, which is due to begin taking neutrino data this year.

A charged pion ejects a proton

The first phase of DUNE will construct one LArTPC in each of the two detector caverns, with the second phase adding an additional detector in each. A central utility cavern between the north and south caverns will house infrastructure to support the operation of the detectors.

Following excavation by Thyssen Mining, final concrete work was completed in all the underground caverns and drifts, and the installation of power, lighting, plumbing, heating, ventilation and air conditioning is underway. Some 90% of the subcontracts for the installation of the civil infrastructure have already been awarded, with LBNF and DUNE’s economic impact in Illinois and South Dakota estimated to be $4.3 billion through fiscal years 2022 to 2030.

Once the caverns are prepared, two large membrane cryostats will be installed to house the detectors and their liquid argon. Shipment of material for the first of the two cryostats being provided by CERN is underway, with the first of approximately 2000 components having arrived at SURF in January; the remainder of the steel for the first cryostat was due to have been shipped from its port in Spain by the end of May. The manufacture of the second cryostat by Horta Coslada is ongoing (see “Cryostat creation” image).

Procedures for lifting and manipulating the components will be tested in South Dakota in spring 2025, allowing the collaboration to ensure that it can safely and efficiently handle bulky components with challenging weight distributions in an environment where clearances can reach as little as 3 inches on either side. Lowering detector components down the Homestake mine’s Ross shaft will take four months.

Two configurations

The two far-detector modules needed for phase one of the DUNE experiment will use the same LArTPC technology, though with different anode and high-voltage configurations. A “horizontal-drift” far detector will use 150 anode plane assemblies (APAs), each measuring 6 m by 2.3 m. Each will be wound with 4000 copper-beryllium wires, 150 μm in diameter, to collect ionisation signals from neutrino interactions with the argon.

A section of the second cryostat for DUNE

A second “vertical-drift” far detector will instead use charge readout planes (CRPs) – printed circuit boards perforated with an array of holes to capture the ionisation signals. Here, a horizontal cathode plane will divide the detector into two vertically stacked volumes. This design yields a slightly larger instrumented volume, is highly modular, and is simpler and more cost-effective to construct and install. A small amount of xenon doping will significantly enhance photon detection, allowing more light to be collected beyond a drift length of 4 m.

The construction of the horizontal-drift APAs is well underway at STFC Daresbury Laboratory in the UK and at the University of Chicago in the US. Each APA takes several weeks to produce, motivating the parallelisation of production across five machines in Daresbury and one in Chicago. Each machine automates the winding of 24 km of wire onto each APA (see “Wind it up” image). Technicians then solder thousands of joints and use a laser system to ensure the wires are all wound to the required tension.

Two large ProtoDUNE detectors at CERN are an essential part of developing and validating DUNE’s detector design. Four APAs are currently installed in a horizontal-drift prototype that will take data this summer as a final validation of the design of the full detector. A vertical-drift prototype (see “Vertical drift” image) will then validate the production of CRP anodes and optimise their electronics. A full-scale test of vertical-drift-detector installation will take place at CERN later this year.

Phase transition

Alongside the deployment of two additional far-detector modules, phase two of the DUNE experiment will include an increase in beam power beyond 2 MW and the deployment of a more capable near detector (MCND) featuring a magnetised high-pressure gaseous-argon TPC. These enhancements pursue increased statistics, lower energy thresholds, better energy resolution and lower intrinsic backgrounds. They are key to DUNE’s measurement of the parameters governing long-baseline neutrino oscillations, and will expand the experiment’s physics scope, including searches for anomalous tau-neutrino appearance, long-lived particles, low-mass dark matter and solar neutrinos.

A winding machine producing a ProtoDUNE anode plane assembly

Phase-one vertical-drift technology is the starting point for phase-two far-detector R&D – a global programme under ECFA in Europe and CPAD in the US that seeks to reduce costs and improve performance. Charge-readout R&D includes improving charge-readout strips, 3D pixel readout and 3D readout using high-performance fast cameras. Light-readout R&D seeks to maximise light coverage by integrating bare silicon photomultipliers and photoconductors into the detector’s field-cage structure.

A water-based liquid scintillator module capable of separately measuring scintillation and Cherenkov light is currently being explored as a possible alternative technology for the fourth “module of opportunity”. This would require modifications to the near detector to include corresponding non-argon targets.

Intense work

At Fermilab, site preparation work is already underway for LBNF, and construction will begin in 2025. The project will produce the world’s most intense beam of neutrinos. Its wide-band beam will cover more than one oscillation period, allowing unique access to the shape of the oscillation pattern in a long-baseline accelerator-neutrino experiment.

LBNF will need modest upgrades to the beamline to handle the 2 MW beam power from the upgrade to the Fermilab accelerator complex, which was recently endorsed by P5. The bigger challenge to the facility will be the proton-target upgrades needed for operation at this beam power. R&D is now taking place at Fermilab and at the Rutherford Appleton Laboratory in the UK, where DUNE’s phase-one 1.2 MW target is being designed and built.


DUNE highlights the international and collaborative nature of modern particle physics, with the collaboration boasting more than 1400 scientists and engineers from 209 institutions in 37 countries. A milestone was achieved late last year when the international community came together to sign the first major multi-institutional memorandum of understanding with the US Department of Energy, affirming commitments to the construction of detector components for DUNE and pushing the project to its next stage. US contributions are expected to cover roughly half of what is needed for the far detectors and the MCND, with the international community contributing the other half, including the cryostat for the third far detector.

DUNE is now accelerating into its construction phase. Data taking is due to start towards the end of this decade, with the goal of having the first far-detector module operational before the end of 2028.

The next generation of big neutrino experiments promises to bring new insights into the nature of our universe – whether it is another step towards understanding the preponderance of matter, the nature of the supernova explosions that produced the stardust of which we are all made, or even possible signatures of dark matter… or something wholly unexpected!

Tabletop experiment constrains neutrino size

The BeEST experiment

How big is a neutrino? Though the answer depends on the physical process that created it, knowledge of the size of neutrino wave packets is at present so wildly unconstrained that every measurement counts. New results from the Beryllium Electron capture in Superconducting Tunnel junctions (BeEST) experiment at TRIUMF in Canada set new lower limits on the size of the neutrino’s wave packet in terrestrial experiments – though theorists are at odds over how to interpret the data.

Neutrinos are created as a mixture of mass eigenstates. Each eigenstate is a wave packet with a unique group velocity. If the wave packets are too narrow, they eventually stop overlapping as the wave evolves, and quantum interference is lost. If the wave packets are too broad, a single mass eigenstate is resolved by Heisenberg’s uncertainty principle, and quantum interference is also lost. No quantum interference means no neutrino oscillations.

“Coherence conditions constrain the lengths of neutrino wave packets both from below and above,” explains theorist Evgeny Akhmedov of MPI-K Heidelberg. “For neutrinos, these constraints are compatible, and the allowed window is very large because neutrinos are very light. This also hints at an answer to the frequently asked question of why charged leptons don’t oscillate.”

The spatial extent of the neutrino wave packet has so far only been constrained to within 13 orders of magnitude by reactor-neutrino oscillations, say the BeEST team. If wave-packet sizes were at the experimental lower limit set by the world’s oscillation data, they could impact future oscillation experiments such as the Jiangmen Underground Neutrino Observatory (JUNO), which is currently under construction in China.

“This could have destroyed JUNO’s ability to probe the neutrino mass ordering,” says Akhmedov, “however, we expect the actual sizes to be at least six orders of magnitude larger than the lowest limit from the world’s oscillation data. We have no hope of probing them in terrestrial oscillation experiments, in my opinion, though the situation may be different for astrophysical and cosmological neutrinos.”

BeEST uses a novel method to constrain the size of the neutrino wave packet. The group creates electron neutrinos via electron capture on unstable ⁷Be nuclei produced at the TRIUMF–ISAC facility in Vancouver. In the final state there are only two products: the electron neutrino and a newly transmuted ⁷Li daughter atom that receives a tiny energy “kick” by emitting the neutrino. By embedding the ⁷Be isotopes in superconducting quantum sensors at 0.1 K, the collaboration can measure this low-energy recoil to high precision. Via the uncertainty principle, the team infers a limit on the spatial localisation of the entire final-state system of 6.2 pm – more than 1000 times larger than the nucleus itself.
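The logic can be illustrated with a back-of-the-envelope version of the argument (not the collaboration’s actual analysis). In the dominant decay branch the neutrino carries roughly 862 keV, so the ⁷Li daughter recoils with about 57 eV of kinetic energy; an assumed detector energy resolution of around 2 eV – an illustrative number, not the published figure – then translates, via the uncertainty principle, into a localisation limit of a few picometres.

```python
# Rough illustration (not the BeEST analysis itself) of how a precise
# low-energy recoil measurement becomes a localisation limit via the
# uncertainty principle. The ~57 eV recoil follows from the ~862 keV
# neutrino emitted in 7Be electron capture; the ~2 eV energy resolution
# below is an illustrative assumption, not the published detector figure.

HBAR_C_EV_PM = 197_327.0       # hbar*c in eV*pm
M_LI7_EV = 7.016 * 931.494e6   # 7Li mass in eV/c^2

p_recoil = 862e3                           # recoil momentum ~ neutrino energy (eV/c)
t_recoil = p_recoil**2 / (2 * M_LI7_EV)    # ~57 eV recoil kinetic energy

sigma_t = 2.0                              # assumed energy resolution (eV)
sigma_p = sigma_t * M_LI7_EV / p_recoil    # momentum spread, from dT = (p/m) dp

delta_x = HBAR_C_EV_PM / (2 * sigma_p)     # Heisenberg-limited localisation (pm)
print(f"recoil energy ~ {t_recoil:.0f} eV, localisation limit ~ {delta_x:.1f} pm")
```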

Consensus has not been reached on how to infer the new lower limit on the size of the neutrino wave packet, with the preprint quoting two lower limits in the vicinity of 10⁻¹¹ m and 10⁻⁸ m based on different theoretical assumptions. Although they differ dramatically, even the weaker limit improves upon all previous reactor oscillation data by more than an order of magnitude, and is enough to rule out decoherence effects as an explanation for sterile-neutrino anomalies, says the collaboration.

“I think the more stringent limit is correct,” says Akhmedov, who points out that this is only about 1.5 orders of magnitude lower than some theoretical predictions. “I am not an experimentalist and therefore cannot judge whether an improvement of 1.5 orders of magnitude can be achieved in the foreseeable future, but I very much hope that this is possible.”

In defiance of cosmic-ray power laws

The Calorimetric Electron Telescope

In a series of daring balloon flights in 1912, Victor Hess discovered radiation that intensified with altitude, implying extra-terrestrial origins. A century later, experiments with cosmic rays have reached low-Earth orbit, but physicists are still puzzled. Cosmic-ray spectra are difficult to explain using conventional models of galactic acceleration and propagation. Hypotheses for their sources range from supernova remnants, active galactic nuclei and pulsars to physics beyond the Standard Model. The study of cosmic rays in the 1940s and 1950s gave rise to particle physics as we know it. Could these cosmic messengers be about to unlock new secrets, potentially clarifying the nature of dark matter?

The cosmic-ray spectrum extends well into the EeV regime, far beyond what can be reached by particle colliders. For many decades, the spectrum was assumed to be broken into intervals, each following a power law, as Enrico Fermi had historically predicted. The junctures between intervals include: a steepening decline at about 3 × 10⁶ GeV known as the knee; a flattening at about 4 × 10⁹ GeV known as the ankle; and a further steepening at the supposed end of the spectrum somewhere above 10¹⁰ GeV (10 EeV).

The Calorimetric Electron Telescope detector

While the cosmic-ray population at EeV energies may include contributions from extra-galactic cosmic rays, and the end of the spectrum may be determined by collisions with relic cosmic-microwave-background photons – the Greisen–Zatsepin–Kuzmin cutoff – the knee is still controversial as the relative abundance of protons and other nuclei is largely unknown. What’s more, recent direct measurements by space-borne instruments have discovered “spectral curvatures” below the knee. These significant deviations from a pure power law range from a few hundred GeV to a few tens of TeV. Intriguing anomalies in the spectra of cosmic-ray electrons and positrons have also been observed below the knee.

Electron origins

The Calorimetric Electron Telescope (CALET; see “Calorimetric telescope” figure) on board the International Space Station (ISS) provides the highest-energy direct measurements of the spectrum of cosmic-ray electrons and positrons. Its goal is to observe discrete sources of high-energy particle acceleration in the local region of our galaxy. Led by the Japan Aerospace Exploration Agency, with the participation of the Italian Space Agency and NASA, CALET was launched from the Tanegashima Space Center in August 2015, becoming the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During 2017 a third experiment, ISS-CREAM, joined AMS-02 and CALET, but its observation time ended prematurely.

A candidate electron event in CALET

As a result of radiative losses in space, high-energy cosmic-ray electrons are expected to originate just a few thousand light-years away, relatively close to Earth. CALET’s homogeneous calorimeter (fully active, with no absorbers) is optimised to reconstruct such particles (see “Energetic electron” figure). With the exception of the highest energies, anisotropies in their arrival direction are typically small due to deflections by turbulent interstellar magnetic fields.

Energy spectra also contain crucial information as to where and how cosmic-ray electrons are accelerated. And they could provide possible signatures of dark matter. For example, the presence of a peak in the spectrum could be a sign of dark-matter decay, or dark-matter annihilation into an electron–positron pair, with a detected electron or positron in the final state.

Direct measurements of the energy spectra of charged cosmic rays have recently achieved unprecedented precision thanks to long-term observations of electrons and positrons of cosmic origin, as well as of individual elements from hydrogen to nickel, and even beyond. Space-borne instruments such as CALET directly identify cosmic nuclei by measuring their electric charge. Ground-based experiments must do so indirectly by observing the showers they generate in the atmosphere, incurring large systematic uncertainties. Either way, hadronic cosmic rays can be assumed to be fully stripped of atomic electrons in their high-temperature regions of origin.

A rich phenomenology

The past decade has seen the discovery of unexpected features in the differential energy spectra of both leptonic and hadronic cosmic rays. The observation by PAMELA and AMS of an excess of positrons above 10 GeV has generated widespread interest and still calls for an unambiguous explanation (CERN Courier December 2016 p26). Possibilities include pair production in pulsars, in addition to the well known interactions with the interstellar gas, and the annihilation of dark matter into electron–positron pairs.

Combined electron and positron flux measurements as a function of kinetic energy

Regarding cosmic-ray nuclei, significant deviations of the fluxes from pure power-law spectra have been observed by several instruments in flight, including by CREAM on balloon launches from Antarctica, by PAMELA and DAMPE aboard satellites in low-Earth orbit, and by AMS-02 and CALET on the ISS. Direct measurements have also shown that the energy spectra of “primary” cosmic rays are different from those of “secondary” cosmic rays created by collisions of primaries with the interstellar medium. This rich phenomenology, which encodes information on cosmic-ray acceleration processes and the history of their propagation in the galaxy, is the subject of multiple theoretical models.

An unexpected discovery by PAMELA, which had been anticipated by CREAM and was later measured with greater precision by AMS-02, DAMPE and CALET, was the observation of a flattening of the differential energy spectra of protons and helium. Starting from energies of a few hundred GeV, the proton flux shows a smooth and progressive hardening of the spectrum – a flattening of its slope – that continues up to around 10 TeV, above which a completely different regime is established. A turning point was the subsequent discovery by CALET and DAMPE of an unexpected softening of proton and helium fluxes above about 10 TeV/Z, where the atomic number Z is one for protons and two for helium. The presence of a second break challenges the conventional “standard model” of cosmic-ray spectra and calls for a further extension of the observed energy range, currently limited to a few hundred TeV.

At present, only two experiments in low-Earth orbit have an energy reach beyond 100 TeV: CALET and DAMPE. They rely on a purely calorimetric measurement of the energy, while space-borne magnetic spectrometers are limited to a maximum magnetic “rigidity” – a particle’s momentum divided by its charge – of a few teravolts. Since the end of PAMELA’s operations in 2016, AMS-02 is now the only instrument in orbit with the ability to discriminate the sign of the charge. This allows separate measurements of the high-energy spectra of positrons and antiprotons – an important input to the observation of final states containing antiparticles for dark-matter searches. AMS-02 is also now preparing for an upgrade: an additional silicon tracker layer will be deployed at the top of the instrument to enable a significant increase in its acceptance and energy reach (CERN Courier March/April 2024 p7).

Pioneering observations

CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers, enabling measurements of electrons up to 20 TeV and measurements of hadrons up to 1 PeV. As an all-calorimetric instrument with no magnetic field, its main science goal is to perform precision measurements of the detailed shape of the inclusive spectra of electrons and positrons.

The Vela Pulsar

Thanks to its advanced imaging calorimeter, CALET can measure the kinetic energy of incident particles well into TeV energies, maintaining excellent proton–electron discrimination throughout. CALET’s homogeneous calorimeter has a total thickness of 30 radiation lengths, allowing for a full containment of electron showers. It is preceded by a high-granularity pre-shower detector with imaging capabilities that provide a redundant measurement of charge via multiple energy-loss measurements. The calibration of the two instruments is the key to controlling the energy scale, motivating beam tests at CERN before launch.

A first important deviation from a scale-invariant power-law spectrum was found for electrons near 1 TeV. Here, CALET and DAMPE observed a significant flux reduction, as expected from the large radiative losses of electrons during their travel in space. CALET has now published a high-statistics update up to 7.5 TeV, reporting the presence of candidate electrons above the 1 TeV spectral break (see “Electron break” figure).

This unexplored region may hold some surprises. For example, the detection of even higher energy electrons, such as the 12 TeV candidate recently found by CALET, may indicate the contribution of young and nearby sources such as the Vela supernova remnant, which is known to host a pulsar (see “Pulsar home” image).


A second unexpected finding is the observation of a significant reduction in the proton flux around 10 TeV. This bump-and-dip structure was also observed by DAMPE and anticipated by CREAM, albeit with low statistics (see “Proton bump” figure). A precise measurement of the flux has allowed CALET to fit the spectrum with a double-broken power law: after a spectral hardening starting at a few hundred GeV – also observed by AMS-02 and PAMELA – that progressively increases above 500 GeV, a steep softening takes place above 10 TeV.
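The kind of shape being fitted can be sketched with a generic smoothly broken power law, shown below with placeholder parameter values (an index of about –2.8, a hardening near 500 GeV and a softening near 10 TeV); this illustrates the functional form only and is not CALET’s published fit.

```python
import math

# Generic smoothly double-broken power law of the kind used to describe the
# proton spectrum: a hardening near ~500 GeV and a softening near ~10 TeV.
# The functional form and every parameter value below are illustrative
# placeholders, not CALET's published fit.

def broken_power_law(e_gev, norm=1.0, gamma=2.8,
                     e_b1=500.0, d_gamma1=+0.25, s1=5.0,
                     e_b2=10_000.0, d_gamma2=-0.35, s2=5.0):
    """Flux ~ E^-gamma with two smooth changes of spectral index."""
    flux = norm * e_gev ** (-gamma)
    flux *= (1.0 + (e_gev / e_b1) ** s1) ** (d_gamma1 / s1)  # hardening
    flux *= (1.0 + (e_gev / e_b2) ** s2) ** (d_gamma2 / s2)  # softening
    return flux

def local_index(e_gev, eps=0.01):
    """Numerical estimate of d(log flux)/d(log E) at e_gev."""
    return (math.log(broken_power_law(e_gev * (1 + eps)))
            - math.log(broken_power_law(e_gev))) / math.log(1 + eps)

for e in (100.0, 1_000.0, 30_000.0):
    print(f"E = {e:>8.0f} GeV: local spectral index ~ {local_index(e):.2f}")
```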

Proton flux measurements as a function of the kinetic energy

A similar bump and dip have been observed in the helium flux. These spectral features may result from a single physical process that generates a bump in the cosmic-ray spectrum. Theoretical models include an anomalous diffusive regime near the acceleration sources, the dominance of one or more nearby supernova remnants, the gradual release of cosmic rays from the source, and the presence of additional sources.

CALET is also a powerful hunter of heavier cosmic rays. Measurements of the spectra of boron, carbon and oxygen ions have been extended in energy reach and precision, providing evidence of a progressive spectral hardening for most of the primary elements above a few hundred GeV per nucleon. The boron-to-carbon flux ratio is an important input for understanding cosmic-ray propagation. This is because diffusion through the interstellar medium causes an additional softening of the flux of secondary cosmic rays such as boron with respect to primary cosmic rays such as carbon (see “Break in B/C?” figure). The collaboration also recently published the first high-resolution flux measurement of nickel (Z = 28), revealing the element to have a very similar spectrum to iron, suggesting similar acceleration and propagation behaviour.

CALET is also studying the spectra of sub-iron elements, which are poorly known above 10 GeV per nucleon, and ultra-heavy galactic cosmic rays such as zinc (Z = 30), which are quite rare. CALET studies abundances up to Z = 40 using a special trigger with a large acceptance, so far revealing an excellent match with previous measurements from ACE-CRIS (a satellite-based detector), SuperTIGER (a balloon-borne detector) and HEAO-3 (a satellite-based detector decommissioned in the 1980s). Ultra-heavy galactic cosmic rays provide insights into cosmic-ray production and acceleration in some of the most energetic processes in our galaxy, such as supernovae and binary-neutron-star mergers.

Gravitational-wave counterparts

In addition to charged particles, CALET can detect gamma rays with energies between 1 GeV and 10 TeV, and study the diffuse photon background as well as individual sources. To study electromagnetic transients related to complex phenomena such as gamma-ray bursts and neutron-star mergers, CALET is equipped with a dedicated monitor that to date has detected more than 300 gamma-ray bursts, 10% of which are short bursts in the energy range 7 keV to 20 MeV. The search for electromagnetic counterparts to gravitational waves proceeds around the clock by following alerts from LIGO, VIRGO and KAGRA. No X-ray or gamma-ray counterparts to gravitational waves have been detected so far.

CALET measurements of the boron to carbon flux ratio

On the low-energy side of cosmic-ray spectra, CALET has contributed a thorough study of the effect of solar activity on galactic cosmic rays, revealing a charge-sign dependence tied to the polarity of the Sun’s magnetic field, due to the different paths taken by electrons and protons in the heliosphere. The instrument’s large-area charge detector has also proven to be ideal for space-weather studies of relativistic electron precipitation from the Van Allen belts in Earth’s magnetosphere.

The spectacular recent experimental advances in cosmic-ray research, and the powerful theoretical efforts that they are driving, are moving us closer to a solution to the century-old puzzle of cosmic rays. With more than four billion cosmic rays observed so far, and a planned extension of the mission to the nominal end of ISS operations in 2030, CALET is expected to continue its campaign of direct measurements in space, contributing sharper and perhaps unexpected pictures of their complex phenomenology.

Super-massive black holes quickly repoint their jets

Two galaxy clusters observed by the Chandra X-ray Observatory

With masses up to 10¹⁵ times greater than that of the Sun, galaxy clusters are the largest concentrations of matter in the universe. Within these objects, the space between the galaxies is filled with a gravitationally bound hot plasma. Given time, this plasma accretes onto the galaxies, cools down and eventually forms stars. However, observations indicate that the rate of star formation is slower than expected, suggesting that processes are at play that prevent the gas from accreting. Violent bursts and jets coming from super-massive black holes in the centres of galaxy clusters are thought to quench star formation. A new study indicates that these jets rapidly change their directions.

Super-massive black holes sit at the centres of galaxies, including our own, and can undergo periods of activity during which powerful jets are emitted along their spin axes. In the case of galaxy clusters, these bursts can be spotted in real time by looking at their radio emission, while their histories can be traced using X-ray observations. As the jets are emitted, they crash into the intra-cluster plasma, sweeping up material and leaving behind bubbles, or cavities, in the plasma. As the plasma emits in the X-ray region, these bubbles reveal themselves as voids when viewed with X-ray detectors. After their creation, they continue to move through the plasma and remain visible long after the original jet has disappeared (see image).

Francesco Ubertosi of the University of Bologna and co-workers studied a sample of about 60 clusters observed using the Very Long Baseline Array, which produces highly detailed radio information, and the Chandra X-ray telescope. The team measured the angle between the cavities and the current radio jet and found that most cavities are aligned with the jet, indicating that the current jet points in the same direction as the jets responsible for the cavities produced in the past. However, around one third of the studied objects show significant angles, some as large as 90°.


This study therefore shows that the source of the jet, the super-massive black hole, appears to be able to reorient itself over time. More importantly, by dating the cavities the team showed that this can happen on time scales of just one million years. To get an idea of the rapidity of this change, consider that the solar system takes about 225 million years to complete one orbit around the centre of the Milky Way, which hosts a super-massive black hole, just as Earth takes 365 days for one revolution around the Sun. If the Milky Way’s super-massive black hole altered its spin axis on the timescale of one million years, it would be as if the Sun were to change its spin axis in a matter of a few days.
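A one-line check of that analogy, using the figures quoted above (a roughly 225-million-year galactic orbit and a one-million-year reorientation time):

```python
# Quick check of the analogy: a ~1 Myr reorientation time compared with the
# ~225 Myr galactic orbit corresponds, for Earth's 365-day orbit, to a few days.
reorientation_myr = 1.0
galactic_orbit_myr = 225.0
earth_orbit_days = 365.0

equivalent_days = earth_orbit_days * reorientation_myr / galactic_orbit_myr
print(f"equivalent timescale: ~{equivalent_days:.1f} days")   # ~1.6 days
```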

These observations raise the question of how the reorientation of jets from super-massive black holes takes place. The authors find that the results are unlikely to be due to projection effects, or to perturbations that significantly shift the positions of the cavities. Instead, the most plausible explanation is that the spin axis of the super-massive black hole tilts significantly, likely affected by complex accretion flows. The results therefore reveal important information about the accretion dynamics of super-massive black holes. They also offer important insights into how stars form in these clusters, as the reorientation would further suppress star formation.
