Homing in on the Higgs self-interaction

Non-resonant and resonant processes driving di-Higgs production at the LHC

The simplest possible interaction in nature is when three identical particle lines, with the same quantum numbers, meet at a single vertex. The Higgs boson is the only known elementary particle that can exhibit such behaviour. More importantly, the strength of the coupling between three or even four Higgs bosons will reveal the first picture of the shape of the Brout–Englert–Higgs potential, responsible for the evolution of the universe in its first moments as well as possibly its fate.

Since the discovery of the Higgs boson at the LHC in 2012, the ATLAS and CMS collaborations have measured its properties and interactions with increasing precision. This includes its couplings to the gauge bosons and to third-generation fermions, its production cross sections, mass and width. So far, the boson appears as the Standard Model (SM) says it should. But the picture is still fuzzy, and many more measurements are needed. After all, the Higgs boson may interact with new particles suggested by theories beyond the SM, potentially shedding light on mysteries such as the nature of the electroweak phase transition.

Line of attack

“The Higgs self-coupling is the next big thing since the Higgs discovery, and di-Higgs production is our main line of attack,” says Jana Schaarschmidt of ATLAS. “The experiments are making tremendous progress towards measuring Higgs-boson pair production at the LHC – far more than was imagined would be possible 12 years ago – thanks to improvements in analysis techniques and machine learning in particular.”

The dominant process for di-Higgs production at the LHC, gluon–gluon fusion, proceeds via a box or triangle diagram, the latter offering access to the trilinear Higgs coupling constant λ (see figure). Destructive interference between the two processes makes di-Higgs production extremely rare, with a cross section at the LHC about 1000 times smaller than that for single-Higgs production. Many different decay channels are available to ATLAS and CMS. The preferred channels are those that occur with relatively high probability and can be cleanly distinguished from backgrounds. The most sensitive are those with one Higgs boson decaying to a b-quark pair and the other decaying either to a pair of photons, τ leptons or b quarks.
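
As a rough illustration of this interference, the following Python sketch treats the box and triangle contributions as real amplitudes of opposite sign, with the triangle term scaling linearly with a self-coupling modifier κλ. The amplitude values are illustrative placeholders, not ATLAS or CMS fit results.

```python
# Toy sketch of destructive triangle-box interference in gg -> HH.
# The triangle amplitude scales linearly with the self-coupling modifier
# kappa_lambda; the box amplitude does not. The relative size and sign
# below are illustrative placeholders, not fitted values, and in reality
# the two loop amplitudes have different kinematic dependence, so the
# cross section never vanishes completely.

TRIANGLE = 1.0   # placeholder triangle amplitude (per unit kappa_lambda)
BOX = -1.6       # placeholder box amplitude, opposite sign -> destructive

def xsec_ratio(kappa_lambda):
    """sigma(HH)/sigma_SM in this toy model, normalised to kappa_lambda = 1."""
    amp = kappa_lambda * TRIANGLE + BOX
    amp_sm = 1.0 * TRIANGLE + BOX
    return (amp / amp_sm) ** 2

for kl in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"kappa_lambda = {kl:3.1f}  ->  sigma/sigma_SM ~ {xsec_ratio(kl):6.2f}")
```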

During this year’s Rencontres de Moriond, ATLAS presented new results in the HH → bbbb and HH → multileptons channels and CMS in the HH → γγττ channel. In May, ATLAS released a combination of searches for HH production in five channels using the complete LHC Run 2 dataset. The combination provides the best expected sensitivities to HH production (excluding values more than 2.4 times the SM prediction) and to the Higgs boson self-coupling. A combination of HH searches published by CMS in 2022 obtains a similar sensitivity to the di-Higgs cross-section limits. “In late 2023 we put out a preliminary result combining single-Higgs and di-Higgs analyses to constrain the Higgs self-coupling, and further work on combining all the latest analyses is ongoing,” explains Nadjieh Jafari of CMS.

The Higgs self-coupling is the next big thing since the Higgs discovery

Considerable improvements are expected with the LHC Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets. Based on extrapolations of early subsets of its Run 2 analyses, ATLAS expects to detect SM di-Higgs production with a significance of 3.2σ (4.6σ) with (without) systematic uncertainties by the end of the HL-LHC era. With similar progress at CMS, a di-Higgs observation is expected to be possible at the HL-LHC even with current analysis techniques, along with improved knowledge of λ. ATLAS, for example, expects to be able to constrain λ to be between 0.5 and 1.6 times the SM expectation at the level of 1σ.
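
The quoted projections fold in detailed detector and systematic effects, but the broad trend can be sketched with a naive statistics-only scaling, in which each channel's significance grows as the square root of the integrated luminosity and roughly independent channels combine in quadrature. The per-channel inputs below are hypothetical.

```python
import math

# Naive back-of-the-envelope scaling: in the statistics-limited regime a
# channel's expected significance grows roughly as sqrt(integrated
# luminosity), and roughly independent channels combine in quadrature.
# The per-channel significances below are illustrative inputs, not ATLAS
# or CMS results.

def scale_significance(z_now, lumi_now_fb, lumi_future_fb):
    """Scale a significance assuming Z ~ sqrt(L) (statistics-only)."""
    return z_now * math.sqrt(lumi_future_fb / lumi_now_fb)

def combine(significances):
    """Combine independent channels in quadrature."""
    return math.sqrt(sum(z * z for z in significances))

# Hypothetical per-channel significances with Run 2 (~139 fb^-1),
# scaled to an HL-LHC dataset of 3000 fb^-1.
run2 = {"bbgammagamma": 0.5, "bbtautau": 0.7, "bbbb": 0.4}
hl_lhc = [scale_significance(z, 139, 3000) for z in run2.values()]
print(f"Run 2 combination (toy) : {combine(run2.values()):.1f} sigma")
print(f"HL-LHC (naive scaling)  : {combine(hl_lhc):.1f} sigma")
```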

Testing the foundations

Physicists are also starting to place limits on possible new-physics contributions to HH production, which can originate either from loop corrections involving new particles or from non-standard couplings between the Higgs boson and other SM particles. Several theories beyond the SM, including two-Higgs-doublet and composite-Higgs models, also predict the existence of heavy scalar particles that can decay resonantly into a pair of Higgs bosons. “Large anomalous values of λ are already excluded, and the window of possible values continues to shrink towards the SM as the sensitivity grows,” says Schaarschmidt. “Furthermore, in recent di-Higgs analyses ATLAS and CMS have been able to establish a strong constraint on the coupling between two Higgs bosons and two vector bosons.”

For Christophe Grojean of the DESY theory group, the principal interest in di-Higgs production is to test the foundations of quantum field theory: “The basic principles of the SM are telling us that the way the Higgs boson interacts with itself is mostly dictated by its expectation value (linked to the Fermi constant, i.e. the muon and neutron lifetimes) and its mass. Verifying this prediction experimentally is therefore of prime importance.”

Acceleration, but not as we know it

Metal cavities are at the heart of the vast majority of the world’s 30,000 or so particle accelerators. Excited by microwaves, these resonant structures are finely tuned to generate oscillating electric fields that accelerate particles over many metres. But what if similar energies could be delivered 100 times more rapidly in structures a few tens of microns wide or less?

The key is to reduce the wavelength of the radiation powering the structure down to the optical scale of lasers. By combining solid-state lasers and modern nanofabrication, accelerating structures can be as small as a single micron wide. Though miniaturisation will never allow bunch charges as large as in today’s science accelerators, field strengths can be much higher before structure damage sets in. The trick is to replace highly conductive structures with dielectrics like silicon, fused silica and diamond, which have a much higher damage threshold at optical wavelengths. The length of accelerators can thereby be reduced by orders of magnitude, with millions to billions of particle pulses accelerated per second, depending on the repetition rate of the laser.

Recent progress with “on chip” accelerators promises powerful, high-energy and high-repetition-rate particle sources that are accessible to academic laboratories. Applications may range from localised particle or X-ray irradiation in medical facilities to quantum communication and computation using ultrasmall bunches of electrons as qubits.

Laser focused

The inspiration for on-chip accelerators dates back to 1962, when Koichi Shimoda of the University of Tokyo proposed using early lasers – then called optical masers – as a way to accelerate charged particles. The first experiments were conducted by shining light onto an open metal grating, generating an optical surface mode that could accelerate electrons passing above the surface. This technique was proposed by Yasutugu Takeda and Isao Matsui in 1968 and experimentally demonstrated by Koichi Mizuno in 1987 using terahertz radiation. In the 1980s, accelerator physicist Robert Palmer of Brookhaven National Laboratory proposed using rows of free-standing pillars of subwavelength separation illuminated by a laser – an idea that has propagated to modern devices.

The longitudinal electric field in a dual-pillar colonnade illuminated by a laser

In the 1990s, the groups of John Rosenzweig and Claudio Pellegrini at UCLA and Robert Byer at Stanford began to use dielectric materials, which offer low power absorption at optical frequencies. For femtosecond laser pulses, a simple dielectric such as silica glass can withstand optical field strengths exceeding 10 GV/m. It became clear that combining lasers with on-chip fabrication using dielectric materials could subject particles to accelerating forces 10 to 100 times higher than in conventional accelerators.

In the intervening decades, the dream of realising a laser-driven micro-accelerator has been enabled by major technological advances in the silicon-microchip industry and solid-state lasers. These industrial technologies have paved the way to fabricate and test particle accelerators made from silicon and other dielectric materials driven by ultrashort pulses of laser light. The dielectric laser accelerator (DLA) has been born.

Accelerator on a chip

Colloquially called an accelerator on a chip, a DLA is a miniature microwave accelerator reinvented at the micron scale using the methods of optical photonics rather than microwave engineering. In both cases, the wavelength of the driving field determines the typical transverse structure dimensions: centimetres for today’s microwave accelerators, but between one and 10 μm for optically powered devices.

Other laser-based approaches to miniaturisation are available. In plasma-wakefield accelerators, particles gain energy from electromagnetic fields excited in an ionised gas by a high-power drive laser (CERN Courier May/June 2024 p25). But the details are starkly different. DLAs are powered by lasers with thousands to millions of times lower peak energy. They operate with more than a million times lower electron charges, but at millions of pulses per second. And unlike plasma accelerators, but similarly to their microwave counterparts, DLAs use a solid material structure with a vacuum channel in which an electromagnetic mode continuously imparts energy to the accelerated particles.

Dielectric structures

This mode can be created by a single laser pulse perpendicular to the electron trajectory, two pulses from opposite sides, or a single pulse directed downwards into the plane of the chip. The latter two options offer better field symmetry.

As the laser impinges on the structure, its electrons experience an electromagnetic force that oscillates at the laser frequency. Particles that are correctly matched in phase and velocity experience a forward accelerating force (see “Continuous acceleration” image). Just as the imparted force begins to change sign, the particles enter the next accelerating cycle, leading to continuous energy gain.

In 2013, two early experiments attracted international attention by demonstrating the acceleration of electrons using structured dielectric devices. Peter Hommelhoff’s group in Germany accelerated 28 keV electrons inside a modified electron microscope using a single-sided glass grating (see “Evolution” image, left panel). In parallel, at SLAC, the groups of Robert Byer and Joel England accelerated relativistic 60 MeV electrons using a dual-sided grating structure, achieving an acceleration gradient of 310 MeV/m and 120 keV of energy gain (see “Evolution” image, middle panel).

Teaming up

Encouraged by the experimental demonstration of accelerating gradients of hundreds of MeV/m, and the power efficiency and compactness of modern solid-state fibre lasers, in 2015 the Gordon and Betty Moore Foundation funded an international collaboration of six universities, three government laboratories and two industry partners to form the Accelerator on a Chip International Program (ACHIP). The central goal is to demonstrate a compact tabletop accelerator based on DLA technology. ACHIP has since developed “shoebox” accelerators on both sides of the Atlantic and used them to demonstrate nanophotonics-based particle control, staging, bunching, focusing and full on-chip electron acceleration by laser-driven microchip devices.

Silicon’s compatibility with established nanofabrication processes makes it convenient, but reaching gradients of GeV/m requires materials with higher damage thresholds such as fused silica or diamond. In 2018, ACHIP research at UCLA accelerated electrons from a conventional microwave linac in a dual-sided fused silica structure powered by ultrashort (45 fs) pulses of 800 nm wavelength laser light. The result was an average acceleration gradient of 850 MeV/m and peak accelerating fields up to 1.8 GV/m – more than double the prior world best in a DLA, and still a world record.

Longitudinal and transverse beam control

Since DLA structures are non-resonant, the interaction time and energy gain of the particles are limited by the duration of the laser pulse. However, by tilting the laser’s pulse front, the interaction time can be arbitrarily increased. In a separate experiment at UCLA, using a laser pulse tilted by 45°, the interaction distance was increased to more than 700 µm – or 877 structure periods – with an energy gain of 0.315 MeV. The UCLA group has further extended this approach using a spatial light modulator to “imprint” the phase information onto the laser pulse, achieving more than 3 mm of interaction at 800 nm, or 3761 structure periods.
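
A quick back-of-the-envelope check of these numbers, assuming relativistic electrons as in the earlier UCLA experiments:

```python
# Quick consistency check of the pulse-front-tilt experiment quoted above:
# 0.315 MeV of energy gain over roughly 700 um (877 structure periods),
# driven by an 800 nm laser. For relativistic electrons (beta ~ 1) the
# synchronous structure period should be close to the laser wavelength.

energy_gain_MeV = 0.315
length_m = 700e-6
n_periods = 877
laser_wavelength_nm = 800

avg_gradient_MeV_per_m = energy_gain_MeV / length_m
structure_period_nm = length_m / n_periods * 1e9

print(f"average gradient ~ {avg_gradient_MeV_per_m:.0f} MeV/m")
print(f"structure period ~ {structure_period_nm:.0f} nm "
      f"(laser wavelength {laser_wavelength_nm} nm)")
```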

Under ACHIP, the structure design has evolved in several directions, from single-sided and double-sided gratings etched onto substrates to more recent designs with colonnades of free-standing silicon pillars forming the sides of the accelerating channel, as originally proposed by Robert Palmer some 30 years earlier. At present, these dual-pillar structures (see “Evolution” image, right panel) have proven to be the optimal trade-off between cleanroom fabrication complexity and experimental technicalities. However, due to the lower damage threshold of silicon as compared with fused silica, researchers have yet to demonstrate gradients above 350 MeV/m in silicon-based devices.

With the dual-pillar colonnade chosen as the fundamental nanophotonic building block, research has turned to making DLAs into viable accelerators with much longer acceleration lengths. To achieve this, we need to control the beam and manipulate it in space and time; otherwise electrons quickly diverge inside the narrow acceleration channel and are lost on impact with the accelerating structure. The ACHIP collaboration has made substantial progress here in recent years.

Focusing on nanophotonics

In conventional accelerators, quadrupole magnets focus electron beams in a near perfect analogy to how concave and convex lens arrays transport beams of light in optics. In laser-driven nanostructures it is necessary to harness the intrinsic focusing forces that are already present in the accelerating field itself.

On-chip accelerators promise powerful, high-energy and high-repetition-rate particle sources that are accessible to academic laboratories

In 2021, the Hommelhoff group guided an electron pulse through a 200 nm-wide and 80 µm-long structure based on a theoretical lattice designed by ACHIP colleagues at TU Darmstadt three years earlier. The lattice’s alternating-phase focusing (APF) periodically exchanges an electron bunch’s phase-space volume between the transverse dimension across the narrow width of the accelerating channel and the longitudinal dimension along the propagation direction of the electron pulse. In principle this technique could allow electrons to be guided through arbitrarily long structures.

Guiding is achieved by adding gaps between repeating sets of dual-pillar building-blocks (see “Beam control” image). Combined guiding and acceleration has been demonstrated within the past year. To achieve this, we select a design gradient and optimise the position of each pillar pair relative to the expected electron energy at that position in the structure. Initial electron energies are up to 30 keV in the Hommelhoff group, supplied by electron microscopes, and from 60 to 90 keV in the Byer group, using laser-assisted field emission from silicon nanotips. When accelerated, the electrons’ velocities change dramatically from 0.3 to 0.7 times the speed of light or higher, requiring the periodicity of the structure to change by tens of nanometres to match the velocity of the accelerating wave to the speed of the particles.
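
The synchronicity condition behind this retuning can be sketched as follows: for a grating-type structure driven at its first spatial harmonic, the period must track βλ, where β is the electron velocity in units of the speed of light and λ the drive wavelength. The roughly 2 µm wavelength used below is an assumption for illustration; the exact wavelength differs between experiments.

```python
import math

# Sketch of the synchronicity condition for a grating-type DLA: to stay in
# phase with the accelerating mode, the structure period must track the
# electron velocity, period ~ beta * lambda_laser (first spatial harmonic).
# The ~2 um drive wavelength below is an assumption for illustration.

ELECTRON_REST_ENERGY_keV = 511.0

def beta_from_kinetic_energy(T_keV):
    """Relativistic beta for a given kinetic energy in keV."""
    gamma = 1.0 + T_keV / ELECTRON_REST_ENERGY_keV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def synchronous_period_nm(T_keV, laser_wavelength_nm=2000.0):
    return beta_from_kinetic_energy(T_keV) * laser_wavelength_nm

for T in (30, 60, 96, 200, 500):
    print(f"{T:4d} keV electrons: beta = {beta_from_kinetic_energy(T):.2f}, "
          f"period ~ {synchronous_period_nm(T):.0f} nm")
```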

On-chip accelerator light source

Although focusing in the narrow dimension of the channel is the most critical requirement, an extension of this method has been proposed that focuses beams in the vertical dimension, out of the plane of the chip, by varying the geometry of the pillars along that dimension. Without it, the natural divergence of the beam in the vertical direction eventually becomes dominant. This approach is awaiting experimental realisation.

Acceleration gradients can be improved by optimising material choice, pillar dimensions, peak optical field strength and the duration of the laser pulses. In recent demonstrations, both the Byer and Hommelhoff groups have kept pillar dimensions constant to ease difficulties in uniformly etching the structures during nanofabrication. The complete structure is then a series of APF cells with tapered cell lengths and tapered dual-pillar periodicity. The combination of tapers accommodates both the changing size of the electron beam and the phase matching required due to the increasing electron energy.

In these proof-of-principle experiments, the Hommelhoff group has designed a nanophotonic dielectric laser accelerator for an injection energy of 28.4 keV and an average acceleration gradient of at least 22.7 MeV/m, demonstrating a 43% energy increase over a 500 µm-long structure. The Byer group recently demonstrated the acceleration of a 96 keV beam at average gradients of 35 to 50 MeV/m, reaching a 25% energy increase over 708 µm. The APF periods were in the range of tens of microns and were tapered along with the energy-gain design curve. The beams were not bunched, and by design only 4% of the electrons were captured and accelerated.

One final experimental point has important implications for the future use of DLAs as compact tabletop tools for ultrafast science. Upon interaction with the DLA, electron pulses have been observed to form trains of evenly spaced sub-wavelength attosecond-scale bunches. This effect was shown experimentally by both groups in 2019, with electron bunches measured down to 270 attoseconds, or roughly 4% of the optical cycle.

From demonstration to application

To date, researchers have demonstrated high gradient (GeV/m) acceleration, compatible nanotip electron sources, laser-driven focusing, interaction lengths up to several millimetres, the staging of multiple structures, and attosecond-level control and manipulation of electrons in nanophotonic accelerators. The most recent experiments combine these techniques, allowing the capture of an accelerated electron bunch with net acceleration and precise control of electron dynamics for the first time.

These milestone experiments demonstrate the viability of the nanophotonic dielectric electron accelerator as a scalable technology that can be extended to arbitrarily long structures and ever higher energy gains. But for most applications, beam currents need to increase.

A compelling idea proposes to “copy and paste” the accelerator design in the cleanroom and make a series of parallel accelerating channels on one chip. Another option is to increase the repetition rate of the driving laser by orders of magnitude to produce more electron pulses per second. Optimising the electron sources used by DLAs would also allow for more electrons per pulse, and parallel arrays of emitters on multi-channel devices promise tremendous advantages. Eventually, active nanophotonics can be employed to integrate the laser and electron sources on a single chip.
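
The arithmetic behind these scaling arguments is simple: the average current is just the bunch charge multiplied by the repetition rate and the number of parallel channels. The values below are illustrative assumptions, not measured beam parameters.

```python
# Rough sketch of how average beam current scales for a DLA: charge per
# bunch times bunches per second times the number of parallel channels.
# All numbers below are illustrative assumptions.

E_CHARGE = 1.602e-19  # elementary charge in coulombs

def average_current_nA(electrons_per_bunch, rep_rate_hz, n_channels=1):
    return electrons_per_bunch * E_CHARGE * rep_rate_hz * n_channels * 1e9

# e.g. a few tens of electrons per bunch at a 10 MHz laser repetition rate
print(f"{average_current_nA(30, 10e6):.3f} nA per channel")
print(f"{average_current_nA(30, 10e6, n_channels=100):.1f} nA "
      f"with 100 parallel channels")
```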

Once laser and electron sources are combined, we expect on-chip accelerators to become ubiquitous devices with wide-ranging and unexpected applications, much like the laser itself. Future applications will range from medical treatment tools to electron probes for ultrafast science. According to International Atomic Energy Agency statistics, 13% of major accelerator facilities around the world power light sources. On-chip accelerators may follow a similar path.

Illuminating concepts

A concept has been proposed for a dielectric laser-driven undulator (DLU) which uses laser light to generate deflecting forces that wiggle the electrons so that they emit coherent light. Combining a DLA and a DLU could take advantage of the unique time structure of DLA electrons to produce ultrafast pulses of coherent radiation (see “Compact light source” image). Such compact new light sources – small enough to be accessible to individual universities – could generate extremely short flashes of light in ultraviolet or even X-ray wavelength ranges, enabling tabletop instruments for the study of material dynamics on ultrafast time scales. Pulse trains of attosecond electron bunches generated by a DLA could provide excellent probes of transient molecular electronic structure.

The generation of intriguing quantum states of light might also be possible with nanophotonic devices

The generation of intriguing quantum states of light might also be possible with nanophotonic devices. This quantum light results from shaping electron wavepackets inside the accelerator and making them radiate, perhaps even leading to on-chip quantum-communication light sources.

In the realm of medicine, an ultracompact self-contained multi-MeV electron source based on integrated photonic particle accelerators could enable minimally invasive cancer treatments with improved dose control.

One day, instruments relying on high-energy electrons produced by DLA technology may bring the science of large facilities into academic-scale laboratories, making novel science endeavours accessible to researchers across various disciplines and minimally invasive medical treatments available to those in need. These visionary applications may take decades to be fully realised, but we should expect developments to continue to be rapid. The biggest challenges will be increasing beam power and transporting beams across greater energy gains. These need to be addressed to reach the stringent beam-quality and machine requirements of longer-term, higher-energy applications.

Six rare decays at the energy frontier

Thanks to its 13.6 TeV collisions, the LHC directly explores distance scales as short as 5 × 10⁻²⁰ m. But the energy frontier can also be probed indirectly. By studying rare decays, distance scales as small as a zeptometre (10⁻²¹ m) can be resolved, probing the existence of new particles with masses as high as 100 TeV. Such particles are out of the reach of any high-energy collider that could be built in this century.
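
These distance scales follow from the uncertainty principle: an energy scale E resolves structure down to roughly ħc/E. A minimal sketch, with the effective momentum transfers chosen for illustration (the full 13.6 TeV is shared among the colliding partons):

```python
# Order-of-magnitude sketch: an energy scale E resolves distances of order
# hbar*c / E (hbar*c ~ 197 MeV fm). The effective momentum transfer in an
# LHC collision is a few TeV rather than the full 13.6 TeV; the energies
# below are chosen for illustration.

HBARC_MEV_FM = 197.327

def resolved_distance_m(energy_TeV):
    energy_MeV = energy_TeV * 1e6
    return HBARC_MEV_FM / energy_MeV * 1e-15   # convert fm to m

for E in (4, 13.6, 100):
    print(f"E ~ {E:6.1f} TeV  ->  distance ~ {resolved_distance_m(E):.1e} m")
```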

The key concept is the quantum fluctuation. Just because a collision doesn’t have enough energy to bring a new particle into existence does not mean that a very heavy new particle cannot make its presence felt. Thanks to Heisenberg’s uncertainty principle, new particles could be virtually exchanged between the other particles involved in the collisions, modifying the probabilities for the processes we observe in our detectors. The effect of massive new particles could be unmistakable, giving physicists a powerful tool for exploring more deeply into the unknown than accelerator technology and economic considerations allow direct searches to go.

The effect of massive new particles could be unmistakable

The search for new particles and forces beyond those of the Standard Model is strongly motivated by the need to explain dark matter, the huge range of particle masses from the tiny neutrino to the massive top quark, and the asymmetry between matter and antimatter that is responsible for our very existence. As direct searches at the LHC have not yet provided any clue as to what these new particles and forces might be, indirect searches are growing in importance. Studying very rare processes could allow us to see imprints of new particles and forces acting at much shorter distance scales than it is possible to explore at current and future colliders.

Anticipating the November Revolution

The charm quark is a good example. The story of its direct discovery unfolded 50 years ago, in November 1974, when teams at SLAC and MIT simultaneously discovered a charm–anticharm meson in particle collisions. But four years earlier, Sheldon Glashow, John Iliopoulos and Luciano Maiani had already predicted the existence of the charm quark thanks to the surprising suppression of the neutral kaon’s decay into two muons.

Neutral kaons are made up of a strange quark and a down antiquark, or vice versa. In the Standard Model, their decay to two muons can proceed most simply through the virtual exchange of two W bosons, one virtual up quark and a virtual neutrino. The trouble was that the rate for the neutral kaon decay to two muons predicted in this manner turned out to be many orders of magnitude larger than observed experimentally.

NA62 experiment

Glashow, Iliopoulos and Maiani (GIM) proposed a simple solution. With visionary insight, they hypothesised a new quark, the charm quark, which would totally cancel the contribution of the up quark to this decay if their masses were equal to each other. As the rate was non-vanishing and the charm quark had not yet been observed experimentally, they concluded that the mass of the charm quark must be significantly larger than that of the up quark.

Their hunch was correct. In early 1974, months before its direct discovery, Mary K Gaillard and Benjamin Lee predicted the charm quark’s mass by analysing another highly suppressed quantity, the mass difference in K⁰–K̄⁰ mixing.

As modifications to the GIM mechanism by new heavy particles are still a hot prospect for discovering new physics in the 2020s, the details merit a closer look. Years earlier, Nicola Cabibbo had correctly guessed that weak interactions act between up quarks and a mixture (d cos θ + s sin θ) of the down and strange quarks. We now know that charm quarks interact with the mixture (–d sin θ + s cos θ). This is just a rotation of the down and strange quarks through this Cabibbo angle. The minus sign causes the destructive interference observed in the GIM mechanism.
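
Schematically, and suppressing the loop-function details and the third generation, the amplitude behind the GIM cancellation can be written as below; F is a loop function of the internal quark mass and θC the Cabibbo angle.

```latex
% Schematic GIM cancellation in s -> d transitions (e.g. the neutral-kaon
% decay to two muons), with loop-function details and the third
% generation suppressed.
\[
  \mathcal{A} \;\propto\; \sin\theta_C\,\cos\theta_C\,
  \left[\, F\!\left(m_c^2/M_W^2\right) - F\!\left(m_u^2/M_W^2\right) \right]
  \;\longrightarrow\; 0 \qquad (m_c \to m_u),
\]
```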

With the discovery of a third generation of quarks, quark mixing is now described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix – a unitary three-dimensional rotation that includes a complex phase parameterising CP violation. Understanding its parameters may prove central to our ability to discover new physics this decade.

On to the 1980s

The story of indirect discoveries continued in the late 1980s, when the magnitude of B⁰d–B̄⁰d mixing implied the existence of a heavy top quark, which was confirmed in 1995, completing the third generation of quarks. The W, Z and Higgs bosons were also predicted well in advance of their discoveries. It’s only natural to expect that indirect searches for new physics will be successful at even shorter distance scales.

Belle II experiment at KEK

Rare weak decays of kaons and B mesons that are strongly suppressed by the GIM mechanism are expected to play a crucial role. Many channels of interest are predicted by the Standard Model to have branching ratios as low as 10⁻¹¹, often being further suppressed by small elements of the CKM matrix. If the GIM mechanism is violated by new-physics contributions, these branching ratios – the fraction of times a particle decays that way – could be much larger.

Measuring suppressed branching ratios with respectable precision this decade is therefore an exciting prospect. Correlations between different branching ratios can be particularly sensitive to new physics and could provide the first hints of physics beyond the Standard Model. A good example is the search for the violation of lepton-flavour universality (CERN Courier May/June 2019 p33). Though hints of departures from muon–electron universality seem to be receding, hints that muon–tau universality may be violated still remain, and the measured branching ratios for B → K(K*)µ⁺µ⁻ differ visibly from Standard Model predictions.

The first step in this indirect strategy is to search for discrepancies between theoretical predictions and experimental observables. The main challenge for experimentalists is the low branching ratios for the rare decays in question. However, there are very good prospects for measuring many of these highly suppressed branching ratios in the coming years.

Six channels for the 2020s

Six channels stand out today for their superb potential to observe new physics this decade. If their decay rates defy expectations, the nature of any new physics could be identified by studying the correlations between these six decays and others.

The first two channels are kaon decays: the measurements of K⁺ → π⁺νν̄ by the NA62 collaboration at CERN (see “Needle in a haystack” image), and the measurement of KL → π⁰νν̄ by the KOTO collaboration at J-PARC in Japan. The branching ratios for these decays are predicted to be in the ballpark of 8 × 10⁻¹¹ and 3 × 10⁻¹¹, respectively.

Independent observables

The second two are measurements of B → Kνν̄ and B → K*νν̄ by the Belle II collaboration at KEK in Japan. Branching ratios for these decays are expected to be much higher, in the ballpark of 10⁻⁵.

The final two channels, which are only accessible at the LHC, are measurements of the dimuon decays Bs → µ⁺µ⁻ and Bd → µ⁺µ⁻ by the LHCb, CMS and ATLAS collaborations. Their branching ratios are about 4 × 10⁻⁹ and 10⁻¹⁰ in the Standard Model. Though the decays B → K(K*)µ⁺µ⁻ are also promising, they are less theoretically clean than these six.

The main challenge for theorists is to control quantum-chromodynamics (QCD) effects, both below 10⁻¹⁶ m, where strong interactions weaken, and in the non-perturbative region at distance scales of about 10⁻¹⁵ m, where quarks are confined in hadrons and calculations become particularly tricky. While satisfactory precision has been achieved at short-distance scales over the past three decades, the situation for non-perturbative computations is expected to improve significantly in the coming years, thanks to lattice QCD and analytic approaches such as dual QCD and chiral perturbation theory for kaon decays, and heavy-quark effective field theory for B decays.

Another challenge is that Standard Model predictions for the branching ratios require values for four CKM parameters that are not predicted by the Standard Model, and which must be measured using kaon and B-meson decays. These are the magnitude of the up-strange (Vus) and charm-bottom (Vcb) couplings and the CP-violating phases β and γ. The current precision on measurements of Vus and β is fully satisfactory, and the error on γ = (63.8 ± 3.5)° should be reduced to 1° by LHCb and Belle II in the coming years. The stumbling block is Vcb, where measurements currently disagree. Though experimental problems have not been excluded, the tension is thought to originate in QCD calculations. While measurements of exclusive decays to specific channels yield 39.21(62) × 10⁻³, inclusive measurements integrated over final states yield 41.96(50) × 10⁻³. This discrepancy makes the predicted branching ratios differ by 16% for the four B-meson decays, and by 25% and 35% for K⁺ → π⁺νν̄ and KL → π⁰νν̄. These discrepancies are a disaster for the theorists who, over many years of work, had succeeded in reducing the QCD uncertainties in these decays to the level of a few per cent.
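
Since the branching ratios scale roughly as powers of |Vcb|, the quoted shifts can be translated into effective exponents using only the two Vcb values above; the back-of-the-envelope sketch below does just that.

```python
import math

# The quoted Vcb tension propagates into the predicted branching ratios
# roughly as a power law, BR ~ |Vcb|^n. Given the two Vcb determinations
# and the percentage shifts quoted above, this back-of-the-envelope sketch
# extracts the effective exponent n for each class of decay.

vcb_exclusive = 39.21e-3
vcb_inclusive = 41.96e-3
ratio = vcb_inclusive / vcb_exclusive

quoted_shifts = {            # fractional change in the predicted BR
    "B -> K(*) nu nu and B(s,d) -> mu mu": 0.16,
    "K+ -> pi+ nu nu": 0.25,
    "KL -> pi0 nu nu": 0.35,
}

print(f"Vcb ratio (inclusive/exclusive): {ratio:.3f}")
for channel, shift in quoted_shifts.items():
    n_eff = math.log(1.0 + shift) / math.log(ratio)
    print(f"{channel:38s} effective exponent n ~ {n_eff:.1f}")
```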

One solution is to replace the CKM dependence of the branching ratios with observables where QCD uncertainties are under good control, for example: the mass differences in B⁰s–B̄⁰s and B⁰d–B̄⁰d mixing (ΔMs and ΔMd); a parameter that measures CP violation in K⁰–K̄⁰ mixing (εK); and the CP asymmetry that yields the angle β. Fitting these observables to the experimental data avoids having to choose between inclusive and exclusive values for the charm-bottom coupling, and avoids the 3.5° uncertainty on γ, which in this strategy is reduced to 1.6°. Uncertainty on the predicted branching ratios is thereby reduced to 6% and 9% for B → Kνν̄ and B → K*νν̄, to 5% for the two kaon decays, and to 4% for Bs → µ⁺µ⁻ and Bd → µ⁺µ⁻.

So what is the current experimental situation for the six channels? The latest NA62 measurement of K⁺ → π⁺νν̄ is 25% larger than the Standard Model prediction. Its 36% uncertainty signals full compatibility at present, and precludes any conclusions about the size of new physics contributing to this decay. Next year, when the full analysis has been completed, such conclusions could become possible. It is unfortunate that the HIKE proposal was not adopted (CERN Courier May/June 2024 p7), as NA62’s expected precision of 15% could have been reduced to 5%. This could turn out to be crucial for the discovery of new physics in this decay.

The present upper bound on KL → π⁰νν̄ from KOTO is still two orders of magnitude above the Standard Model prediction. This bound should be lowered by at least one order of magnitude in the coming years. As this decay is fully governed by CP violation, one may expect that new physics will impact it significantly more than CP-conserving decays such as K⁺ → π⁺νν̄.

Branching out from Belle

At present, the most interesting result concerns a 2023 update from Belle II to the measured branching ratio for B⁺ → K⁺νν̄ (see “Interesting excess” image). The resulting central value from Belle II and BaBar is currently a factor of 2.6 above the Standard Model prediction. This has sparked many theoretical analyses around the world, but the experimental error of 30% once again does not allow for firm conclusions. Measurements of other charge and spin configurations of this decay are pending.

Finally, both dimuon B-meson decays are at present consistent with Standard Model predictions, but significant improvements in experimental precision could still reveal new physics at work, especially in the case of Bd.

Hypothetical future measurements of branching ratios

It will take a few years to conclude if new physics contributions are evident in these six branching ratios, but the fact that all are now predicted accurately means that we can expect to observe or exclude new physics in them before the end of the decade. This would be much harder if measurements of the Vcb coupling were involved.

So far, so good. But what if the observables that replaced Vcb and γ are themselves affected by new physics? How can they be trusted to make predictions against which rare decay rates can be tested?

Here comes some surprisingly good news: new physics does not appear to be required to simultaneously fit them using our new basis of observables ΔMd, εK and ΔMs, as they intersect at a single point in the Vcb–γ plane (see “No new physics” figure). This analysis favours the inclusive determination of Vcb and yields a value for γ that is consistent with the experimental world average and a factor of two more accurate. It’s important to stress, though, that non-perturbative four-flavour lattice-QCD calculations of ΔMs and ΔMd by the HPQCD lattice collaboration played a key role here. It is crucial that another lattice-QCD collaboration repeat these calculations, as the three curves cross at different points in three-flavour calculations that exclude charm.

Interesting years are ahead in the field of indirect searches for new physics

In this context, one realises the advantages of Vcb–γ plots compared to the usual unitarity-triangle plots, where Vcb is not seen and 1° improvements in the determination of γ are difficult to appreciate. In the late 2020s, determining Vcb and γ from tree-level decays will be a central issue, and a combination of Vcb-independent and Vcb-dependent approaches will be needed to identify any concrete model of new physics.

We should therefore hope that the tension between inclusive and exclusive determinations of Vcb will soon be conclusively resolved. Forthcoming measurements of our six rare decays may then reveal new physics at the energy frontier (see “New physics” figure). With a 1° precision measurement of γ on the horizon, and many Vcb-independent ratios available, interesting years are ahead in the field of indirect searches for new physics.

In 1676 Antonie van Leeuwenhoek discovered a microuniverse populated by bacteria, which he called animalcula, or little animals. Let us hope that we will, in this decade, discover new animalcula on our flavour expedition to the zeptouniverse.

How to democratise radiation therapy

How important is radiation therapy to clinical outcomes today?

Manjit Dosanjh

Manjit Fifty to 60% of cancer patients can benefit from radiation therapy for cure or palliation. Pain relief is also critical in low- and middle-income countries (LMICs) because by the time tumours are discovered it is often too late to cure them. Radiation therapy typically accounts for 10% of the cost of cancer treatment, but more than half of the cure, so it’s relatively inexpensive compared to chemotherapy, surgery or immunotherapy. Radiation therapy will be tremendously important for the foreseeable future.

What is the state of the art?

Manjit The most precise thing we have at the moment is hadron therapy with carbon ions, because the Bragg peak is very sharp. But there are only 14 facilities in the whole world. It’s also hugely expensive, with each machine costing around $150 million (M). Proton therapy is also attractive, with each proton delivering about a third of the radiobiological effect of a carbon ion. The first proton patient was treated at Berkeley in September 1954, in the same month CERN was founded. Seventy years later, we have about 130 machines and we’ve treated 350,000 patients. But the reality is that we have to make the machines more affordable and more widely available. Particle therapy with protons and hadrons probably accounts for less than 1% of radiation-therapy treatments whereas roughly 90 to 95% of patients are treated using electron linacs. These machines are much less expensive, costing between $1M and $5M, depending on the model and how good you are at negotiating.

Most radiation therapy in the developing world is delivered by cobalt-60 machines. How do they work?

Manjit A cobalt-60 machine treats patients using a radioactive source. Cobalt-60 has a half-life of just over five years, so as the source ages patients have to be treated for longer and longer to receive the same dose – a hardship for them that also reduces the number of patients who can be treated. Linacs are superior because you can take advantage of advanced treatment options that target the tumour using focusing, multi-beams and imaging. You come in from different directions and energies, and you can paint the tumour with precision. To the best extent possible, you can avoid damaging healthy tissue. And the other thing about linacs is that once you turn them off there’s no radiation anymore, whereas cobalt machines present a security risk. One reason we’ve got funding from the US Department of Energy (DOE) is because our work supports their goal of reducing global reliance on high-activity radioactive sources through the promotion of non-radioisotopic technologies. The problem was highlighted by the ART (access to radiotherapy technologies) study I led for the International Cancer Expert Corps (ICEC) on the state of radiation therapy in former Soviet Union countries. There, the legacy has always been cobalt. Only three of the 11 countries we studied have had the resources and knowledge to be able to go totally to linacs. Most still have more than 50% cobalt radiation therapy.
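
The effect of an ageing source is simple radioactive-decay arithmetic: with a half-life of about 5.27 years, the beam-on time needed to deliver a fixed dose grows exponentially, as the sketch below illustrates. In practice, clinics also work around this with scheduled source replacement.

```python
# Sketch of why an ageing cobalt-60 source stretches treatment times: the
# dose rate falls exponentially with the ~5.27 year half-life, so delivering
# a fixed dose takes correspondingly longer (purely the physics of decay;
# clinical practice also involves source-replacement schedules).

HALF_LIFE_YEARS = 5.27

def relative_treatment_time(years_since_installation):
    """Factor by which beam-on time grows to deliver the same dose."""
    return 2.0 ** (years_since_installation / HALF_LIFE_YEARS)

for years in (0, 2, 5, 8, 10):
    print(f"after {years:2d} years: treatment time x {relative_treatment_time(years):.2f}")
```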

The kick-off meeting for STELLA took place at CERN from 29 to 30 May. How will the project work?

Manjit STELLA stands for Smart Technology to Extend Lives with Linear Accelerators. We are an international collaboration working to increase access to radiation therapy in LMICs, and in rural regions in high-income countries. We’re working to develop a linac that is less expensive, more robust and, in time, less costly to operate, service and maintain than currently available options.

Steinar Stapnes

Steinar $1.75M funding from the DOE has launched an 18 month “pre-design” study. ICEC and CERN will collaborate with the universities of Oxford, Cambridge and Lancaster, and a network of 28 LMICs who advise and guide us, providing vital input on their needs. We’re not going to build a radiation-therapy machine, but we will specify it to such a level that we can have informed discussions with industry partners, foundations, NGOs and governments who are interested in investing in developing lower cost and more robust solutions. The next steps, including prototype construction, will require a lot more funding.

What motivates the project?

Steinar The basic problem is that access to radiation therapy in LMICs is embarrassingly limited. Most technical developments are directed towards high-income countries, ultimately profiting the rich people in the world – in other words, ourselves. At present, only 10% of patients in LMICs have access to radiation therapy.

We’re working to develop a linac that is less expensive, more robust and less costly to operate, service and maintain than currently available options

Manjit The basic design of the linac hasn’t changed much in 70 years. Despite that, prices are going up, and the cost of service contracts and software upgrades is very high. Currently, we have around 420 machines in Africa, many of which are down for long intervals, which often impacts treatment outcomes. Often, a hospital can buy the linac but they can’t afford the service contract or repairs, or they don’t have staff with the skills to maintain them. I was born in a small village with no gas, electricity or water. I wasn’t supposed to go to school because girls didn’t. I was fortunate to have got an education that enabled me to have a better life with access to the healthcare treatments that I need. I look at this question from the perspective of how we can make radiation therapy available around the world in places such as where I’m originally from.

What’s your vision for the STELLA machine?

Steinar We want to get rid of the cobalt machines because they are not as effective as linacs for cancer treatment and they are a security risk. Hadron-therapy machines are more costly, but they are more precise, so we need to make them more affordable in the future. As Manjit said, globally 90 or 95% of radiation treatments are given by an electron linac, most often running at 6 MeV. In a modern radiation therapy facility today, such linacs are not developing so fast. Our challenge is to make them more reliable and serviceable. We want to develop a workhorse radiation therapy system that can do high-quality treatment. The other, perhaps more important, key parts are imaging and software. CERN has valuable experience here because we build and integrate a lot of detector systems including readout and data-analysis. From a certain perspective, STELLA will be an advanced detector system with an integrated linac.

Are any technical challenges common to both STELLA and to projects in fundamental physics?

Steinar The early and remote prediction of faults is one. This area is developing rapidly, and it would be very interesting for us to deploy this on a number of accelerators. On the detector and sensor side, we would like to make STELLA easily upgradeable, and some of these upgrades could be very much linked to what we want to do for our future detectors. This can increase the industrial base for developing these types of detectors as the medical market is very large. Software can also be interesting, for example for distributed monitoring and learning.

Where are the biggest challenges in bringing STELLA to market?

Steinar We must make medical linacs open in terms of hardware. Hospitals with local experts must be able to improve and repair the system. It must have a long lifetime. It needs to be upgradeable, particularly with regard to imaging, because detector R&D and imaging software are moving quickly. We want it to be open in terms of software, so that we can monitor the performance of the system, predict faults, and do treatment planning off site using artificial intelligence. Our biggest contribution will be to write a specification for a system where we “enforce” this type of open hardware and open software. Everything we do in our field relies on that open approach, which allows us to integrate the expertise of the community. That’s something we’re good at at CERN and in our community. A challenge for STELLA is to build in openness while ensuring that the machines can remain medically qualified and operational at all times.

How will STELLA disrupt the model of expensive service contracts and lower the cost of linacs?

Steinar This is quite a complex area, and we don’t know the solution yet. We need to develop a radically different service model so that developing countries can afford to maintain their machines. Deployment might also need a different approach. One of the work packages of this project is to look at different models and bring in expertise on new ideas. The challenges are not unique to radiation therapy. In the next 18 months we’ll get input from people who’ve done similar things.

A medical linac at the Genolier Clinic

Manjit Gavi, the global alliance for vaccines, was set up 24 years ago to save the millions of children who were dying every year from vaccine-preventable diseases such as measles, TB, tetanus and rubella – diseases for which vaccinations were not available to millions of children in poorer parts of the world, especially Africa. Before, people were dying of these diseases; now they get a vaccination and live. Vaccines and radiation therapy are totally different technologies, but we may need to think that way to really make a critical difference.

Steinar There are differences with respect to vaccine development. A vaccine is relatively cheap, whereas a linac costs millions of dollars. The diseases addressed by vaccines affect a lot of children, more so than cancer, so the patients have a different demographic. But nonetheless, the fact is that there was a group of countries and organisations who took this on as a challenge, and we can learn from their experiences.

Manjit We would like to work with the UN on their efforts to get rid of the disparities and focus on making radiation therapy available to the 70% of the world that doesn’t have access. To accomplish that, we need global buy-in, especially from the countries who are really suffering, and we need governmental, private and philanthropic support to do so.

What’s your message to policymakers reading this who say that they don’t have the resources to increase global access to radiation therapy?

Steinar Our message is that this is a solvable problem. The world needs roughly 5000 machines at $5M or less each. On a global scale this is absolutely solvable. We have to find a way to spread out the technology and make it available for the whole world. The problem is very concrete. And the solution is clear from a technical standpoint.

Manjit The International Atomic Energy Agency (IAEA) have said that the world needs one of these machines for every 200,000 to 250,000 people. Globally, we have a population of 8 billion. This is therefore a huge opportunity for businesses and a huge opportunity for governments to improve the productivity of their workforces. If patients are sick they are not productive. Particularly in developing countries, patients are often of a working economic age. If you don’t have good machines and early treatment options for these people, not only are they not producing, but they’re going to have to be taken care of. That’s an economic burden on the health service and there is a knock-on effect on agriculture, food, the economy and the welfare of children. One example is cervical cancer. Nine out of 10 deaths from cervical cancer are in developing countries. For every 100 women affected, 20 to 30 children die because they don’t have family support.

How can you make STELLA attractive to investors?

Steinar Our goal is to be able to discuss the project with potential investor partners – and not only in industry but also governments and NGOs, because the next natural step will be to actually build a prototype. Ultimately, this has to be done by industry partners. We likely cannot rely on them to completely fund this out of their own pockets, because it’s a high-risk project from a business point of view. So we need to develop a good business model and find government and private partners who are willing to invest. The dream is to go into a five-year project after that.

We need to develop a good business model and find government and private partners who are willing to invest

Manjit It’s important to remember that this opportunity is not only linked to low-income countries. One in two UK citizens will get cancer in their lifetime, but according to a study that came out in February, only 25 to 28% of UK citizens have adequate access to radiation therapy. This is also an opportunity for young people to join an industrial system that could actually solve this problem. Radiation therapy is one of the most multidisciplinary fields there is, all the way from accelerators to radio-oncology and everything in between. The young generation is altruistic. This will capture their spirit and imagination.

Can STELLA help close the radiation-therapy gap?

Manjit When the IAEA first visualised radiation-therapy inequalities in 2012, it raised awareness, but it didn’t move the needle. That’s because it’s not enough to just train people. We also need more affordable and robust machines. If in 10 or 20 years people start getting treatment because they are sick, not because they’re dying, that would be a major achievement. We need to give people hope that they can recover from cancer.

A gold mine for neutrino physics

In 1968, deep underground in the Homestake gold mine in South Dakota, Ray Davis Jr. observed too few electron neutrinos emerging from the Sun. The reason, we now know, is that many had changed flavour in flight, thanks to tiny unforeseen masses.

At the same time, Steven Weinberg and Abdus Salam were carrying out major construction work on what would become the Standard Model of particle physics, building the Higgs mechanism into Sheldon Glashow’s unification of the electromagnetic and weak interactions. The Standard Model is still bulletproof today, with one proven exception: the nonzero neutrino masses for which Davis’s observations were in hindsight the first experimental evidence.

Today, neutrinos are still one of the most promising windows into physics beyond the Standard Model, with the potential to impact many open questions in fundamental science (CERN Courier May/June 2024 p29). One of the most ambitious experiments to study them is currently taking shape in the same gold mine as Davis’s experiment more than half a century before.

Deep underground

In February this year, the international Deep Underground Neutrino Experiment (DUNE) completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility (SURF) in the Homestake mine. Over two years, 800,000 tonnes of rock were excavated to reveal an underground campus the size of eight soccer fields, ready to house four 17,500 tonne liquid–argon time-projection chambers (LArTPCs). As part of a diverse scientific programme, the new experiment will tightly constrain the working model of three massive neutrinos, and possibly even disprove it.

DUNE will measure the disappearance of muon neutrinos and the appearance of electron neutrinos over 1300 km and a broad spectrum of energies. Given the long journey of its accelerator-produced neutrinos from the Long Baseline Neutrino Facility (LBNF) at Fermilab in Illinois to SURF in South Dakota, DUNE will be uniquely sensitive to asymmetries between the appearance of electron neutrinos and antineutrinos. One predicted asymmetry will be caused by the presence of electrons and the absence of positrons in the Earth’s crust. This asymmetry will probe neutrino mass ordering – the still unknown ordering of narrow and broad mass splittings between the three tiny neutrino masses. In its first phase of operation, DUNE will definitively establish the neutrino mass ordering regardless of other parameters.
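
In the vacuum approximation, and ignoring the matter effects described above, the energies at which DUNE's oscillation maxima fall can be estimated from the standard oscillation phase; the sketch below assumes Δm² ≈ 2.5 × 10⁻³ eV² for illustration.

```python
import math

# Back-of-the-envelope sketch (vacuum approximation, ignoring the matter
# effects discussed above) of where the oscillation maxima fall for DUNE's
# 1300 km baseline. Delta m^2 is taken as ~2.5e-3 eV^2 for illustration.

L_KM = 1300.0
DM2_EV2 = 2.5e-3

def oscillation_maximum_energy_GeV(n):
    """Energy of the n-th maximum, from 1.267 * dm2 * L / E = (2n-1) * pi/2."""
    return 1.267 * DM2_EV2 * L_KM / ((2 * n - 1) * math.pi / 2)

for n in (1, 2):
    print(f"oscillation maximum {n}: E ~ {oscillation_maximum_energy_GeV(n):.1f} GeV")
```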

The field cage of a prototype liquid–argon time-projection chamber

If CP symmetry is violated, DUNE will then observe a second asymmetry between electron neutrinos and antineutrinos, which by experimental design is not degenerate with the first asymmetry. Potentially the first evidence for CP violation by leptons, this measurement will be an important experimental input to the fundamental question of how a matter–antimatter asymmetry developed in the early universe.

If CP violation is near maximal, DUNE will observe it at 3σ (99.7% confidence) in its first phase. In DUNE and LBNF’s recently reconceptualised second phase, which was strongly endorsed by the US Department of Energy’s Particle Physics Project Prioritization Panel (P5) in December (CERN Courier January/February 2024 p7), 3σ sensitivity to CP violation will be extended to more than 75% of possible values of δCP, the complex phase that parameterises this effect in the three-massive-neutrino paradigm.

Combining DUNE’s measurements with those by fellow next-generation experiments JUNO and Hyper-Kamiokande will test the three-flavour paradigm itself. This paradigm rotates three massive neutrinos into the mixtures that interact with charged leptons via the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which features three angles in addition to δCP.

As well as promising world-leading resolution on the PMNS angle θ₂₃, DUNE’s measurements of θ₁₃ and the Δm²₃₂ mass splitting will be different and complementary to those of JUNO in ways that could be sensitive to new physics. JUNO, which is currently under construction in China, will operate in the vicinity of a flux of lower-energy electron antineutrinos from nuclear reactors. DUNE and Hyper-Kamiokande, which is currently under construction in Japan, will both study accelerator-produced sources of muon neutrinos and antineutrinos, though using radically different baselines, energy spectra and detector designs.

Innovative and impressive

DUNE’s detector technology is innovative and impressive, promising millimetre-scale precision in imaging the interactions of neutrinos from accelerator and astrophysical sources (see “Millimetre precision” image). The argon target provides unique sensitivity to low-energy electron neutrinos from supernova bursts, while the detectors’ imaging capabilities will be pivotal when searching for beyond-the-Standard-Model physics such as dark matter, sterile-neutrino mixing and non-standard neutrino interactions.

First proposed by Nobel laureate Carlo Rubbia in 1977, LArTPC technology demonstrated its effectiveness as a neutrino detector at Gran Sasso’s ICARUS T600 detector more than a decade ago, and also more recently in the MicroBooNE experiment at Fermilab. Fermilab’s short-baseline neutrino programme now includes ICARUS and the new Short Baseline Neutrino Detector, which is due to begin taking neutrino data this year.

A charged pion ejects a proton

The first phase of DUNE will construct one LArTPC in each of the two detector caverns, with the second phase adding an additional detector in each. A central utility cavern between the north and south caverns will house infrastructure to support the operation of the detectors.

Following excavation by Thyssen Mining, final concrete work was completed in all the underground caverns and drifts, and the installation of power, lighting, plumbing, heating, ventilation and air conditioning is underway. 90% of the subcontracts for the installation of the civil infrastructure have already been awarded, with LBNF and DUNE’s economic impact in Illinois and South Dakota estimated to be $4.3 billion through fiscal years 2022 to 2030.

Once the caverns are prepared, two large membrane cryostats will be installed to house the detectors and their liquid argon. Shipment of material for the first of the two cryostats being provided by CERN is underway, with the first of approximately 2000 components having arrived at SURF in January; the remainder of the steel for the first cryostat was due to have been shipped from its port in Spain by the end of May. The manufacture of the second cryostat by Horta Coslada is ongoing (see “Cryostat creation” image).

Procedures for lifting and manipulating the components will be tested in South Dakota in spring 2025, allowing the collaboration to ensure that it can safely and efficiently handle bulky components with challenging weight distributions in an environment where clearances can reach as little as 3 inches on either side. Lowering detector components down the Homestake mine’s Ross shaft will take four months.

Two configurations

The two far-detector modules needed for phase one of the DUNE experiment will use the same LArTPC technology, though with different anode and high-voltage configurations. A “horizontal-drift” far detector will use 150 anode plane assemblies (APAs), each measuring 6 m by 2.3 m. Each APA will be wound with 4000 copper-beryllium wires, 150 μm in diameter, to collect ionisation signals from neutrino interactions with the argon.

A section of the second cryostat for DUNE

A second “vertical-drift” far detector will instead use charge readout planes (CRPs) – printed circuit boards perforated with an array of holes to capture the ionisation signals. Here, a horizontal cathode plane divides the detector into two vertically stacked volumes. This design yields a slightly larger instrumented volume, is highly modular, and is simpler and more cost-effective to construct and install. A small amount of xenon doping will significantly enhance photon detection, allowing more light to be collected beyond a drift length of 4 m.

The construction of the horizontal-drift APAs is well underway at STFC Daresbury Laboratory in the UK and at the University of Chicago in the US. Each APA takes several weeks to produce, motivating the parallelisation of production across five machines in Daresbury and one in Chicago. Each machine automates the winding of 24 km of wire onto each APA (see “Wind it up” image). Technicians then solder thousands of joints and use a laser system to ensure the wires are all wound to the required tension.

Two large ProtoDUNE detectors at CERN are an essential part of developing and validating DUNE’s detector design. Four APAs are currently installed in a horizontal-drift prototype that will take data this summer as a final validation of the design of the full detector. A vertical-drift prototype (see “Vertical drift” image) will then validate the production of CRP anodes and optimise their electronics. A full-scale test of vertical-drift-detector installation will take place at CERN later this year.

Phase transition

Alongside the deployment of two additional far-detector modules, phase two of the DUNE experiment will include an increase in beam power beyond 2 MW and the deployment of a more capable near detector (MCND) featuring a magnetised high-pressure gaseous-argon TPC. These enhancements pursue increased statistics, lower energy thresholds, better energy resolution and lower intrinsic backgrounds. They are key to DUNE’s measurement of the parameters governing long-baseline neutrino oscillations, and will expand the experiment’s physics scope, including searches for anomalous tau-neutrino appearance, long-lived particles, low-mass dark matter and solar neutrinos.

A winding machine producing a ProtoDUNE anode plane assembly

Phase-one vertical-drift technology is the starting point for phase-two far-detector R&D – a global programme under ECFA in Europe and CPAD in the US that seeks to reduce costs and improve performance. Charge-readout R&D includes improving charge-readout strips, 3D pixel readout and 3D readout using high-performance fast cameras. Light-readout R&D seeks to maximise light coverage by integrating bare silicon photomultipliers and photoconductors into the detector’s field-cage structure.

A water-based liquid scintillator module capable of separately measuring scintillation and Cherenkov light is currently being explored as a possible alternative technology for the fourth “module of opportunity”. This would require modifications to the near detector to include corresponding non-argon targets.

Intense work

At Fermilab, site preparation work is already underway for LBNF, and construction will begin in 2025. The project will produce the world’s most intense beam of neutrinos. Its wide-band beam will cover more than one oscillation period, allowing unique access to the shape of the oscillation pattern in a long-baseline accelerator-neutrino experiment.

LBNF will need modest upgrades to the beamline to handle the 2 MW beam power from the upgrade to the Fermilab accelerator complex, which was recently endorsed by P5. The bigger challenge to the facility will be the proton-target upgrades needed for operation at this beam power. R&D is now taking place at Fermilab and at the Rutherford Appleton Laboratory in the UK, where DUNE’s phase-one 1.2 MW target is being designed and built.

The next generation of big neutrino experiments promises to bring new insights into the nature of our universe

DUNE highlights the international and collaborative nature of modern particle physics, with the collaboration boasting more than 1400 scientists and engineers from 209 institutions in 37 countries. A milestone was achieved late last year when the international community came together to sign the first major multi-institutional memorandum of understanding with the US Department of Energy, affirming commitments to the construction of detector components for DUNE and pushing the project to its next stage. US contributions are expected to cover roughly half of what is needed for the far detectors and the MCND, with the international community contributing the other half, including the cryostat for the third far detector.

DUNE is now accelerating into its construction phase. Data taking is due to start towards the end of this decade, with the goal of having the first far-detector module operational before the end of 2028.

The next generation of big neutrino experiments promises to bring new insights into the nature of our universe – whether it is another step towards understanding the preponderance of matter, the nature of the supernova explosions that produced the stardust of which we are all made, or even possible signatures of dark matter… or something wholly unexpected!

Tabletop experiment constrains neutrino size

The BeEST experiment

How big is a neutrino? Though the answer depends on the physical process that created it, knowledge of the size of neutrino wave packets is at present so wildly unconstrained that every measurement counts. New results from the Beryllium Electron capture in Superconducting Tunnel junctions (BeEST) experiment at TRIUMF in Canada set new lower limits on the size of the neutrino’s wave packet in terrestrial experiments – though theorists are at odds over how to interpret the data.

Neutrinos are created as a mixture of mass eigenstates. Each eigenstate is a wave packet with a unique group velocity. If the wave packets are too narrow, they eventually stop overlapping as the wave evolves, and quantum interference is lost. If the wave packets are too broad, a single mass eigenstate is resolved by Heisenberg’s uncertainty principle, and quantum interference is also lost. No quantum interference means no neutrino oscillations.

“Coherence conditions constrain the lengths of neutrino wave packets both from below and above,” explains theorist Evgeny Akhmedov of MPI-K Heidelberg. “For neutrinos, these constraints are compatible, and the allowed window is very large because neutrinos are very light. This also hints at an answer to the frequently asked question of why charged leptons don’t oscillate.”
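
For readers who want the quantitative version, the two conditions can be written down with standard formulas from the oscillation literature (a sketch assuming Gaussian wave packets; the symbols σx for the packet’s spatial width, L for the baseline, E for the neutrino energy and Δm² for the mass-squared splitting are introduced here purely for illustration):

\[
L \;\lesssim\; L_{\rm coh} \simeq \frac{2\sqrt{2}\,E^{2}}{|\Delta m^{2}|}\,\sigma_x ,
\qquad\qquad
\sigma_x \;\ll\; L_{\rm osc} = \frac{4\pi E}{|\Delta m^{2}|} .
\]

The first relation says the mass eigenstates must still overlap after travelling a distance L; the second says the packet must be short enough that no single mass eigenstate is picked out. Because E²/|Δm²| is enormous for such light particles, the window between the two bounds is vast – which is the point Akhmedov is making.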

The spatial extent of the neutrino wave packet has so far been constrained only to within 13 orders of magnitude by reactor-neutrino oscillations, say the BeEST team. If wave-packet sizes were at the experimental lower limit set by the world’s oscillation data, this could impact future oscillation experiments such as the Jiangmen Underground Neutrino Observatory (JUNO), which is currently under construction in China.

“This could have destroyed JUNO’s ability to probe the neutrino mass ordering,” says Akhmedov, “however, we expect the actual sizes to be at least six orders of magnitude larger than the lowest limit from the world’s oscillation data. We have no hope of probing them in terrestrial oscillation experiments, in my opinion, though the situation may be different for astrophysical and cosmological neutrinos.”

BeEST uses a novel method to constrain the size of the neutrino wave packet. The group creates electron neutrinos via electron capture on unstable ⁷Be nuclei produced at the TRIUMF–ISAC facility in Vancouver. In the final state there are only two products: the electron neutrino and a newly transmuted ⁷Li daughter atom that receives a tiny energy “kick” from emitting the neutrino. By embedding the ⁷Be isotopes in superconducting quantum sensors at 0.1 K, the collaboration can measure this low-energy recoil to high precision. Via the uncertainty principle, the team infers a limit on the spatial localisation of the entire final-state system of 6.2 pm – more than 1000 times larger than the nucleus itself.
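
The logic of that last step is just Heisenberg’s position–momentum uncertainty relation: a measured momentum spread σp of the recoiling final-state system bounds its spatial localisation from below,

\[
\sigma_x \;\ge\; \frac{\hbar}{2\,\sigma_p} ,
\]

with the quoted 6.2 pm obtained by inserting the momentum spread extracted from the measured recoil-energy distribution (a schematic statement of the argument; the detailed extraction is described in the collaboration’s preprint).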

Consensus has not been reached on how to infer the new lower limit on the size of the neutrino wave packet, with the preprint quoting two lower limits in the vicinity of 10⁻¹¹ m and 10⁻⁸ m based on different theoretical assumptions. Although they differ dramatically, even the weaker limit improves upon all previous reactor-oscillation data by more than an order of magnitude, and is enough to rule out decoherence effects as an explanation for sterile-neutrino anomalies, says the collaboration.

“I think the more stringent limit is correct,” says Akhmedov, who points out that this is only about 1.5 orders of magnitude lower than some theoretical predictions. “I am not an experimentalist and therefore cannot judge whether an improvement of 1.5 orders of magnitude can be achieved in the foreseeable future, but I very much hope that this is possible.”

In defiance of cosmic-ray power laws

The Calorimetric Electron Telescope

In a series of daring balloon flights in 1912, Victor Hess discovered radiation that intensified with altitude, implying extra-terrestrial origins. A century later, experiments with cosmic rays have reached low-Earth orbit, but physicists are still puzzled. Cosmic-ray spectra are difficult to explain using conventional models of galactic acceleration and propagation. Hypotheses for their sources range from supernova remnants, active galactic nuclei and pulsars to physics beyond the Standard Model. The study of cosmic rays in the 1940s and 1950s gave rise to particle physics as we know it. Could these cosmic messengers be about to unlock new secrets, potentially clarifying the nature of dark matter?

The cosmic-ray spectrum extends well into the EeV regime, far beyond what can be reached by particle colliders. For many decades the spectrum was assumed to be broken into intervals, each following a power law, as Enrico Fermi predicted. The junctures between intervals include a steepening at about 3 × 10⁶ GeV known as the knee, a flattening at about 4 × 10⁹ GeV known as the ankle, and a further steepening at the supposed end of the spectrum somewhere above 10¹⁰ GeV (10 EeV).
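
Schematically, each interval of the all-particle spectrum is described by a single power law in energy,

\[
\frac{{\rm d}N}{{\rm d}E} \;\propto\; E^{-\gamma},
\]

with the index γ changing at each juncture – roughly γ ≈ 2.7 below the knee and γ ≈ 3.0–3.1 between the knee and the ankle (approximate values commonly quoted in the literature, given here only to illustrate the broken-power-law picture).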

The Calorimetric Electron Telescope detector

While the cosmic-ray population at EeV energies may include contributions from extra-galactic cosmic rays, and the end of the spectrum may be determined by collisions with relic cosmic-microwave-background photons – the Greisen–Zatsepin–Kuzmin cutoff – the knee is still controversial as the relative abundance of protons and other nuclei is largely unknown. What’s more, recent direct measurements by space-borne instruments have discovered “spectral curvatures” below the knee. These significant deviations from a pure power law range from a few hundred GeV to a few tens of TeV. Intriguing anomalies in the spectra of cosmic-ray electrons and positrons have also been observed below the knee.

Electron origins

The Calorimetric Electron Telescope (CALET; see “Calorimetric telescope” figure) on board the International Space Station (ISS) provides the highest-energy direct measurements of the spectrum of cosmic-ray electrons and positrons. Its goal is to observe discrete sources of high-energy particle acceleration in the local region of our galaxy. Led by the Japan Aerospace Exploration Agency, with the participation of the Italian Space Agency and NASA, CALET was launched from the Tanegashima Space Center in August 2015, becoming the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During 2017 a third experiment, ISS-CREAM, joined AMS-02 and CALET, but its observation time ended prematurely.

A candidate electron event in CALET

As a result of radiative losses in space, high-energy cosmic-ray electrons are expected to originate just a few thousand light-years away, relatively close to Earth. CALET’s homogeneous calorimeter (fully active, with no absorbers) is optimised to reconstruct such particles (see “Energetic electron” figure). With the exception of the highest energies, anisotropies in their arrival direction are typically small due to deflections by turbulent interstellar magnetic fields.

Energy spectra also contain crucial information as to where and how cosmic-ray electrons are accelerated. And they could provide possible signatures of dark matter. For example, the presence of a peak in the spectrum could be a sign of dark-matter decay, or dark-matter annihilation into an electron–positron pair, with a detected electron or positron in the final state.

Direct measurements of the energy spectra of charged cosmic rays have recently achieved unprecedented precision thanks to long-term observations of electrons and positrons of cosmic origin, as well as of individual elements from hydrogen to nickel, and even beyond. Space-borne instruments such as CALET directly identify cosmic nuclei by measuring their electric charge. Ground-based experiments must do so indirectly by observing the showers they generate in the atmosphere, incurring large systematic uncertainties. Either way, hadronic cosmic rays can be assumed to be fully stripped of atomic electrons in their high-temperature regions of origin.

A rich phenomenology

The past decade has seen the discovery of unexpected features in the differential energy spectra of both leptonic and hadronic cosmic rays. The observation by PAMELA and AMS of an excess of positrons above 10 GeV has generated widespread interest and still calls for an unambiguous explanation (CERN Courier December 2016 p26). Possibilities include pair production in pulsars, in addition to the well known interactions with the interstellar gas, and the annihilation of dark matter into electron–positron pairs.

Combined electron and positron flux measurements as a function of kinetic energy

Regarding cosmic-ray nuclei, significant deviations of the fluxes from pure power-law spectra have been observed by several instruments in flight, including by CREAM on balloon launches from Antarctica, by PAMELA and DAMPE aboard satellites in low-Earth orbit, and by AMS-02 and CALET on the ISS. Direct measurements have also shown that the energy spectra of “primary” cosmic rays are different from those of “secondary” cosmic rays created by collisions of primaries with the interstellar medium. This rich phenomenology, which encodes information on cosmic-ray acceleration processes and the history of their propagation in the galaxy, is the subject of multiple theoretical models.

An unexpected discovery by PAMELA, which had been anticipated by CREAM and was later measured with greater precision by AMS-02, DAMPE and CALET, was the observation of a flattening of the differential energy spectra of protons and helium. Starting from energies of a few hundred GeV, the proton flux shows a smooth and progressive hardening of the spectrum (a flattening of the power-law slope) that continues up to around 10 TeV, above which a completely different regime is established. A turning point was the subsequent discovery by CALET and DAMPE of an unexpected softening of the proton and helium fluxes above about 10 TeV/Z, where the atomic number Z is one for protons and two for helium. The presence of a second break challenges the conventional “standard model” of cosmic-ray spectra and calls for a further extension of the observed energy range, currently limited to a few hundred TeV.

At present, only two experiments in low-Earth orbit have an energy reach beyond 100 TeV: CALET and DAMPE. They rely on a purely calorimetric measurement of the energy, while space-borne magnetic spectrometers are limited to a maximum magnetic “rigidity” – a particle’s momentum divided by its charge – of a few teravolts. Since the end of PAMELA’s operations in 2016, AMS-02 is now the only instrument in orbit with the ability to discriminate the sign of the charge. This allows separate measurements of the high-energy spectra of positrons and antiprotons – an important input to the observation of final states containing antiparticles for dark-matter searches. AMS-02 is also now preparing for an upgrade: an additional silicon tracker layer will be deployed at the top of the instrument to enable a significant increase in its acceptance and energy reach (CERN Courier March/April 2024 p7).
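
For reference, the rigidity mentioned above is defined as

\[
R \;=\; \frac{pc}{Ze} ,
\]

expressed in volts, so a spectrometer limited to a few teravolts can momentum-analyse singly charged particles only up to a few TeV/c – the limit that purely calorimetric instruments such as CALET and DAMPE sidestep.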

Pioneering observations

CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers, enabling measurements of electrons up to 20 TeV and measurements of hadrons up to 1 PeV. As an all-calorimetric instrument with no magnetic field, its main science goal is to perform precision measurements of the detailed shape of the inclusive spectra of electrons and positrons.

The Vela Pulsar

Thanks to its advanced imaging calorimeter, CALET can measure the kinetic energy of incident particles well into TeV energies, maintaining excellent proton–electron discrimination throughout. CALET’s homogeneous calorimeter has a total thickness of 30 radiation lengths, allowing for a full containment of electron showers. It is preceded by a high-granularity pre-shower detector with imaging capabilities that provide a redundant measurement of charge via multiple energy-loss measurements. The calibration of the two instruments is the key to controlling the energy scale, motivating beam tests at CERN before launch.

A first important deviation from a scale-invariant power-law spectrum was found for electrons near 1 TeV. Here, CALET and DAMPE observed a significant flux reduction, as expected from the large radiative losses of electrons during their travel in space. CALET has now published a high-statistics update up to 7.5 TeV, reporting the presence of candidate electrons above the 1 TeV spectral break (see “Electron break” figure).

This unexplored region may hold some surprises. For example, the detection of even higher energy electrons, such as the 12 TeV candidate recently found by CALET, may indicate the contribution of young and nearby sources such as the Vela supernova remnant, which is known to host a pulsar (see “Pulsar home” image).

CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers

A second unexpected finding is the observation of a significant reduction in the proton flux around 10 TeV. This bump and dip were also observed by DAMPE and anticipated by CREAM, albeit with low statistics (see “Proton bump” figure). A precise measurement of the flux has allowed CALET to fit the spectrum with a double-broken power law: after a spectral hardening starting at a few hundred GeV, which is also observed by AMS-02 and PAMELA, and which progressively increases above 500 GeV, a steep softening takes place above 10 TeV.
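
As an illustration of what such a fit involves, the sketch below implements a generic smoothly double-broken power law of the kind used to describe a hardening followed by a softening. The functional form, parameter names and numerical values are illustrative assumptions, not CALET’s published parameterisation; only the approximate break positions (a few hundred GeV and about 10 TeV) are taken from the text above.

```python
import numpy as np

def double_broken_power_law(E, norm, g1, g2, g3, E_b1, E_b2, s=5.0):
    """Generic smoothly double-broken power law for a cosmic-ray flux.

    E          : energy, in the same units as the break energies E_b1 and E_b2
    g1, g2, g3 : spectral indices below the first break, between the breaks,
                 and above the second break
    s          : sharpness of the transitions (larger = more abrupt)
    """
    flux = norm * E**g1
    flux *= (1.0 + (E / E_b1)**s) ** ((g2 - g1) / s)   # hardening at E_b1
    flux *= (1.0 + (E / E_b2)**s) ** ((g3 - g2) / s)   # softening at E_b2
    return flux

# Hypothetical parameter values, chosen only to mimic the qualitative shape
# described in the text (hardening above ~500 GeV, softening above ~10 TeV):
E = np.logspace(1.7, 5, 200)                              # 50 GeV to 100 TeV
phi = double_broken_power_law(E, norm=1.0, g1=-2.85, g2=-2.60, g3=-2.90,
                              E_b1=500.0, E_b2=1.0e4)     # energies in GeV
```

Fitting a function of this kind to the measured flux (for example with scipy.optimize.curve_fit) returns the break energies and the change in spectral index across each break, which is how statements such as “a steep softening above 10 TeV” are quantified.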

Proton flux measurements as a function of the kinetic energy

A similar bump and dip have been observed in the helium flux. These spectral features may result from a single physical process that generates a bump in the cosmic-ray spectrum. Theoretical models include an anomalous diffusive regime near the acceleration sources, the dominance of one or more nearby supernova remnants, the gradual release of cosmic rays from the source, and the presence of additional sources.

CALET is also a powerful hunter of heavier cosmic rays. Measurements of the spectra of boron, carbon and oxygen ions have been extended in energy reach and precision, providing evidence of a progressive spectral hardening for most of the primary elements above a few hundred GeV per nucleon. The boron-to-carbon flux ratio is an important input for understanding cosmic-ray propagation. This is because diffusion through the interstellar medium causes an additional softening of the flux of secondary cosmic rays such as boron with respect to primary cosmic rays such as carbon (see “Break in B/C?” figure). The collaboration also recently published the first high-resolution flux measurement of nickel (Z = 28), revealing the element to have a very similar spectrum to iron, suggesting similar acceleration and propagation behaviour.
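
The expectation behind this, in the simplest diffusion picture (a schematic statement rather than a CALET result), is that secondary-to-primary ratios fall as a power of energy governed by the diffusion coefficient D(E) ∝ E^δ:

\[
\frac{\rm B}{\rm C} \;\propto\; E^{-\delta}, \qquad \delta \approx 0.3\text{–}0.6 ,
\]

so any break or flattening of B/C at high energy would point to a change in how cosmic rays diffuse through the galaxy.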

CALET is also studying the spectra of sub-iron elements, which are poorly known above 10 GeV per nucleon, and ultra-heavy galactic cosmic rays such as zinc (Z = 30), which are quite rare. CALET studies abundances up to Z = 40 using a special trigger with a large acceptance, so far revealing an excellent match with previous measurements from ACE-CRIS (a satellite-based detector), SuperTIGER (a balloon-borne detector) and HEAO-3 (a satellite-based detector decommissioned in the 1980s). Ultra-heavy galactic cosmic rays provide insights into cosmic-ray production and acceleration in some of the most energetic processes in our galaxy, such as supernovae and binary-neutron-star mergers.

Gravitational-wave counterparts

In addition to charged particles, CALET can detect gamma rays with energies between 1 GeV and 10 TeV, and study the diffuse photon background as well as individual sources. To study electromagnetic transients related to complex phenomena such as gamma-ray bursts and neutron-star mergers, CALET is equipped with a dedicated burst monitor that has to date detected more than 300 gamma-ray bursts in the energy range 7 keV to 20 MeV, around 10% of which are short bursts. The search for electromagnetic counterparts to gravitational waves proceeds around the clock by following alerts from LIGO, Virgo and KAGRA. No X-ray or gamma-ray counterparts to gravitational waves have been detected so far.

CALET measurements of the boron to carbon flux ratio

On the low-energy side of cosmic-ray spectra, CALET has contributed a thorough study of the effect of solar activity on galactic cosmic rays, revealing a dependence on the particles’ charge sign and on the polarity of the Sun’s magnetic field, due to the different paths taken by electrons and protons in the heliosphere. The instrument’s large-area charge detector has also proven ideal for space-weather studies of relativistic electron precipitation from the Van Allen belts in Earth’s magnetosphere.

The spectacular recent experimental advances in cosmic-ray research, and the powerful theoretical efforts they are driving, are moving us closer to a solution to the century-old puzzle of cosmic rays. With more than four billion cosmic rays observed so far, and a planned extension of the mission to the nominal end of ISS operations in 2030, CALET is expected to continue its campaign of direct measurements in space, contributing sharper and perhaps unexpected pictures of their complex phenomenology.

Super-massive black holes quickly repoint their jets

Two galaxy clusters observed by the Chandra X-ray Observatory

With masses up to 10¹⁵ times that of the Sun, galaxy clusters are the largest concentrations of matter in the universe. Within these objects, the space between the galaxies is filled with a gravitationally bound hot plasma. Given time, this plasma accretes onto the galaxies, cools down and eventually forms stars. However, observations indicate that the rate of star formation is slower than expected, suggesting that processes are at play that prevent the gas from accreting. Violent bursts and jets from the super-massive black holes at the centres of galaxy clusters are thought to quench star formation. A new study indicates that these jets rapidly change direction.

Super-massive black holes sit at the centres of galaxies, including our own, and can undergo periods of activity during which powerful jets are emitted along their spin axes. In the case of galaxy clusters, these bursts can be spotted in real time by looking at their radio emission, while their histories can be traced using X-ray observations. As the jets are emitted, they crash into the intra-cluster plasma, sweeping up material and leaving behind bubbles, or cavities, in the plasma. Because the plasma emits in the X-ray band, these bubbles reveal themselves as voids when viewed with X-ray detectors. After their creation, they continue to move through the plasma and remain visible long after the original jet has disappeared (see image).

Francesco Ubertosi of the University of Bologna and co-workers studied a sample of about 60 clusters observed using the Very Long Baseline Array, which produces highly detailed radio information, and the Chandra X-ray telescope. The team measured the angle between the cavities and the current radio jet, and found that in most clusters the two are aligned, indicating that the current jet points in the same direction as the jets that carved the cavities in the past. However, around one third of the studied objects show significant misalignments, some as large as 90°.

Violent bursts and jets are thought to quench star formation

This study therefore shows that the source of the jet, the super-massive black hole, appears able to reorient itself over time. More importantly, by dating the cavities the team showed that this can happen on timescales of just one million years. To get an idea of how rapid that is, consider that the solar system takes about 225 million years to revolve around the centre of the Milky Way, home to its own super-massive black hole, while Earth takes 365 days for one revolution around the Sun. If the Milky Way’s super-massive black hole altered its spin axis on a timescale of one million years, it would therefore be as if the Sun were to change its spin axis in a matter of a few days.
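
The scaling behind that comparison is straightforward:

\[
365~\text{days} \times \frac{1~\text{Myr}}{225~\text{Myr}} \;\approx\; 1.6~\text{days} ,
\]

i.e. one million years is well under one per cent of a galactic orbit, just as a couple of days is well under one per cent of a year.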

These observations raise the question of how the re-orientation of jets from super-massive black holes takes place. The authors find that the results are unlikely to be due to projection effects, or perturbations that significantly shift the position of the cavities. Instead, the most plausible explanation is that the spin axes of the super-massive black hole tilt significantly, likely affected by complex accretion flows. The results therefore reveal important information about the accretion dynamics of super-massive black holes. They also offer important insights into how stars form in these clusters, as the reorientation would further suppress star formation.

Moriond’s electroweak delights

Moriond 2024

Packed sessions, more than 100 talks and lively discussions at Rencontres de Moriond electroweak, held from 24 to 31 March in La Thuile, Italy, captured the latest thinking in the field. The Standard Model (SM) emerged intact, while new paths of enquiry were illuminated.

Twelve years after the discovery of the Higgs boson, H, a wide variety of analyses by ATLAS and CMS are bringing the new scalar into sharper focus. This includes its mass, for which CMS has reported the most precise single measurement using the H → ZZ → 4ℓ channel: 125.04 ± 0.11 (stat) ± 0.05 (syst) GeV. A Run 2 legacy mass measurement combining ATLAS and CMS results is under way, while projections for the HL-LHC indicate that an uncertainty at the 10–20 MeV level is attainable. For the H width, which is potentially highly sensitive to new physics but notoriously difficult to measure at a hadron collider, the experiments constrain its value to be less than three times the SM width at 95% confidence level using an indirect method with reasonable assumptions. A precision of about 20% is expected from the full HL-LHC dataset.

New generation

The measured H cross sections in all channels continue to support the simplest incarnation of the SM H sector, with a new result from CMS testing the bbH production mode in the ττ and WW channels. Now that the H couplings to the most massive particles are well established, the focus is moving to the second-generation fermions. Directly probing the shape of the Brout–Englert–Higgs potential, and sensitive to new-physics contributions, the H self-coupling is another key target. HH production has yet to be observed at the LHC due to its very low cross section (the combined ATLAS and CMS limit is currently 2.5–3 times the SM value), but an extensive measurement programme utilising multiple channels is under way and Moriond saw new results presented based on HH → bbbb and HH → γγττ decays (see “Homing in on the Higgs self-interaction“).

Searches for exotic H decays, or for additional low-mass scalar bosons as predicted by two-Higgs-doublet extensions to the SM, were a Moriond highlight. A wide range of new searches for additional Higgs bosons (a, A) has been released by ATLAS and CMS, including a new CMS search for H → aa → muons in the mass range 0.2–60 GeV and, at higher masses, new limits on H/A → tt from ATLAS and on A → ZH → ℓℓtt from CMS. Although none show significant deviations from the SM, most of the searches are statistically limited and a large amount of phase space remains available for extended H sectors. Generating much conversation in the corridors was a new-physics interpretation of ATLAS and CMS data in terms of a Higgs-triplet model, based on results in the HH → γγ channel and top-quark differential distributions.

The LHC experiments are making stunning progress in precision electroweak measurements, as exemplified by a new CMS measurement of the effective leptonic electroweak mixing angle, sin²θeff = 0.23157 ± 0.00031, the first LHC measurement of the W-boson width by ATLAS, and precise measurements of the W and Z cross sections at 13.6 TeV. ATLAS announced at Moriond the most precise single-experiment test of lepton-flavour universality in comparisons between W-boson decays to muons and electrons. A wide-ranging presentation of electroweak results based on two-photon collisions at the LHC described recent attempts by CMS to extract the anomalous magnetic moment of the tau lepton. And LHCb showcased its capabilities in providing an independent measurement of the W-boson mass and the Z-boson cross section. Participants heard about the increasing relevance of lattice QCD in precision electroweak measurements, for example in determining the running of the electromagnetic coupling α and the weak mixing angle. A tension exists between the predictions from lattice QCD and those from more traditional dispersive approaches, with a similar origin to that for the anomalous magnetic moment of the muon.

Following the recent observation of entanglement in top-quark pairs by ATLAS and CMS, a presentation addressing the intriguing ability of colliders to carry out fundamental tests of quantum mechanics generated much discussion. Offering full access to spin information, collider experiments can study quantum correlations, wavefunction collapse and decoherence at unprecedented energies, possibly enabling a Bell measurement at the HL-LHC and the first observation of toponium.

Seeking signals from beyond

Searches for long-lived particles by ATLAS, CMS and LHCb – including the first at LHC Run 3 by CMS – were high on the Moriond agenda. Heavy gauge and scalar bosons, left–right gauge boson masses and heavy neutral leptons are among other new-physics scenarios being constrained. Casting the net as wide as possible, the LHC experiments are developing AI anomaly-detection algorithms, while the power of effective field theory (EFT) in parameterising the effect of heavy new particles on LHC measurements continues to grow via a diverse range of analyses. Even at dimension six, the SMEFT contains no fewer than 59 Wilson coefficients, each related to different underlying physics processes, that need to be measured.

Neutrinoless double-beta decay, which would be an unambiguous sign of new physics, continues to be hunted by a host of experiments

Tensions between theory and experiment remain in some processes involving b → s or b → c quark transitions. Moriond saw much discussion of such processes, including new results from Belle II on the branching ratio of the highly suppressed decay B → Kνν. Participants heard about the need for progress in theory, as has recently been achieved with impressive calculations of b → sγ. Predictions for b → sμμ – which show a tension with experiment and are independent of the R(K) parameters clocking the relative rates of B → Kμ⁺μ⁻ and B → Ke⁺e⁻ – are excellent ways to probe new physics. Concerning b → c transitions, updates on R(D*) from Belle II and on R(D*) and R(D) from LHCb based on the muonic decay of the tau lepton take the world-average tension to 3.17σ. The stability of the SM prediction of R(D*) was also questioned.

New flavours

The flavour sector is awash with new results. LHCb presented fresh analyses exploring mixing and CP violation in the charm sector – a unique gateway to the flavour structure of up-type quarks – while CMS presented a new measurement of CP violation in Bs → J/ψ K⁺K⁻ decays. In ultra-rare kaon decays, KOTO presented a new upper limit on the branching ratio of K⁰L → π⁰νν (< 2 × 10⁻⁹ at 90% confidence level) and projects a sensitivity below 10⁻¹³ with the proposed KOTO II upgrade. NA62 presented a preliminary measurement of the branching ratio of the very rare decay π⁰ → e⁺e⁻ ((5.86 ± 0.37) × 10⁻⁸), in agreement with the SM, and results for K⁺ → π⁺γγ, the latter offering the first evidence that second-order terms must be included in chiral perturbation theory. Belle and Belle II showed new radiative and electroweak penguin results concerning processes such as B⁰ → γγ, and BESIII presented a precise measurement of the CKM matrix element Vcs. A sweeping theory perspective on the mysterious flavour structure of the SM introduced participants to “flavour modular symmetries” – a promising new game in town for a potential theory of flavour based on modular forms, which are well known in mathematics and were used in the proof of Fermat’s last theorem.

The final sessions of Moriond electroweak turned to neutrinos, dark matter and astroparticle physics. KATRIN is soon to release an update on the neutrino-mass limit based on six times more data, with an expected sensitivity of mν < 0.5 eV, and is undertaking R&D towards a proposed upgrade (KATRIN++) that would use new technology to push the mass limit down further. The collaboration is also stepping up its search for new physics via high-precision spectroscopy and is working towards an upgrade called TRISTAN that will soon home in on the sterile-neutrino hypothesis.

Rencontre at Moriond

In Japan, the T2K facility has undergone an extensive renewal period, including first operation with the upgraded ND280 near detector in August 2023, which increased the acceptance. Designed to explore the neutrino mass ordering and leptonic CP violation, T2K data so far show a slight preference for the “normal” mass ordering, while admitting CP-conserving values of the phase only at around the 2σ level. However, a joint analysis between T2K and NOvA, a neutrino-oscillation experiment in the US with a longer baseline and complementary sensitivity, prefers a more degenerate parameter space in which either CP conservation or the inverted ordering remain acceptable solutions. The combined data place a strong constraint on Δm²₃₂.

Neutrinoless double-beta decay (NDBD), which would reveal the neutrino to be a Majorana particle and be an unambiguous sign of new physics, continues to be hunted by a host of experiments. LEGEND-200’s first physics data were shown, with the ultimate goal of setting a lower limit on the NDBD half-life of 10²⁸ years for ⁷⁶Ge. Also located at Gran Sasso, CUORE, which has been collecting data since 2019, will operate for one more year before a planned upgrade. In parallel, designs for a next-generation tonne-scale upgrade, CUPID, are being finalised. Neutrino aficionados were also treated to scotogenic three-loop models, in which neutrinos gain a Dirac mass term from radiative corrections, and to the latest results from FASER at the LHC, including the first emulsion-detector measurements of the νe and νμ cross sections at TeV energies, and a search for axion-like particles.

IceCube, which studies the resonant disappearance of antineutrinos due to matter effects, showed intriguing results that delve into new-physics territory. Adding sterile neutrinos improves global fits by 7σ, participants heard, but brings inconsistencies too. Generating much interest, the global p-value for the null hypothesis of the sterile neutrino in the muon-disappearance channel is 3.1%, in tension with MINOS. The IceCube Upgrade will increase the number of strings within the DeepCore region of the observatory, while the more significant Gen-2 upgrade will expand its overall area. A theory overview of the status of sterile neutrinos, taking into account recent results from MiniBooNE, MicroBooNE, PROSPECT, STEREO, GALLEX, SAGE, BEST and others, concluded that experimental evidence for such a fourth neutrino state is fading but not excluded. The so-called reactor anomaly is probably explained by a smaller uranium contribution than previously accounted for, while the upgraded Neutrino-4 experiment will shed light on tensions with PROSPECT and STEREO.

Cosmological constraints

The status of dark photons was also reviewed. Constraints are being placed from many sources, including colliders, astrophysical and cosmological bounds, haloscopes and, most recently, radio telescopes, the James Webb Space Telescope and beam-dump experiments. PandaX-4T, which seeks to constrain WIMP dark matter and NDBD, is about to restart data-taking. LZ, another large liquid-xenon detector, has placed record limits on dark matter based on its first 60 days of data-taking. Results from the first observing run of LIDA, a novel kind of laser-interferometric detector designed to observe axion-like particles in the galactic halo, are promising.

No particle-physics conference would be complete without the anomalous magnetic moment of the muon

The latest supersymmetry and dark-matter searches at ATLAS and CMS were also presented, including a new result on R-parity-violating supersymmetry and fresh limits on the chargino mass. BESIII reported on exotic searches for massive dark photons, muon-philic particles, glueballs and the QCD axion. Searches for axion-like particles are multiplying in many shapes and forms; among flavour probes of axions, the strongest bounds come from NA62. Less conventionally, probing ultralight dark matter by searching for oscillatory behaviour in gravitational waves is gaining traction, though recent NANOGrav data show no sign of such a signal.

All eyes on the muon

No contemporary particle-physics conference would be complete without the anomalous magnetic moment of the muon – a powerful quantity that receives contributions from all known, and any unknown, particles, and whose measured value is in significant tension with the SM prediction. As the Fermilab Muon g-2 experiment continues to improve the experimental precision (currently 0.2 ppm), all eyes are on how the SM calculation is performed – specifically the systematic uncertainty associated with a process called hadronic vacuum polarisation. A huge amount of work is going into understanding this quantity, both in terms of the calculational machinery and the underlying data used. When computed using lattice QCD, the tension between experiment and theory is significantly reduced. However, the calculations are so complex that few groups have been able to execute them. That is set to change this year, Moriond participants heard, as new lattice calculations are unblinded ahead of the Lattice 2024 meeting in August, followed by a decision on whether to include such results in the official SM prediction at the seventh plenary workshop of the Muon g-2 Theory Initiative at KEK in September.
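
For reference, the quantity at stake is the muon anomaly

\[
a_\mu \;\equiv\; \frac{g_\mu - 2}{2} ,
\]

to which every particle that couples to the muon – known or otherwise – contributes through quantum loops. The 0.2 ppm quoted above is the relative precision of the Fermilab measurement of aμ, and the debated hadronic-vacuum-polarisation term is one of the loop contributions entering its SM prediction.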

Experimentally and theoretically, all tools are being thrown at the SM in an attempt to find an explanation for dark matter, the cosmological baryon asymmetry, neutrino masses and other outstanding mysteries. The many high-quality talks at this year’s Moriond electroweak session, including an impressive batch of flash talks in dedicated young-researcher sessions, covered all aspects of the adventure and set the standard for future analyses. An incredible interplay between astrophysical, cosmological, collider and other experimental measurements is rapidly eating into the available parameter space for new physics. Ten years ago, the Moriond theory-summary speaker remarked “new physics must be around the corner, but we see no corner”. While the same could be said today, physicists have a much clearer view of the road ahead.

High time for holographic cosmology

On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers. Thomas Hertog, who was a student and collaborator of Hawking, suggests that it may be viewed as the next book the famous scientist would have written if he were still alive. While addressing fundamental questions about the origin of the cosmos, Hertog sprinkles the text with anecdotes from his interactions with Hawking, easing up on the otherwise intense barrage of ideas and concepts. But despite its relaxed and popular style, the book will be most useful for physicists with a basic education in relativity and quantum theory.

Expanding universes

The book starts with an exhaustive journey through the history of cosmology. It reviews the ancient idea of an eternal mathematical universe, passes through the ages of Copernicus and Newton, and then enters the modern era of Einstein’s universe. Hertog thoroughly explores static and expanding universes, Hoyle’s steady-state cosmos, Hartle and Hawking’s no-boundary universe, Guth’s inflationary universe and Linde’s multiverse with eternal inflation. Everything culminates in the proposal for holographic quantum cosmology that the author developed together with the late Hawking.

What makes the book especially interesting is its philosophical reflections on the historical evolution of the various underlying scientific paradigms. For example, the ancient Greeks developed the Platonic view that the workings of the world should be governed by eternal mathematical laws. This laid the groundwork for the reductionist worldview that many scientists – especially particle physicists – subscribe to today.

Hertog argues that this way of thinking is flawed, especially when confronted with a Big Bang followed by a burst of inflation. Given the supremely fine-tuned structure of our universe, as is necessitated by the existence of atoms, galaxies and ultimately us, how could the universe “know” back at the time of the Big Bang that this fine-tuned world would emerge after inflation and phase transitions?

On the Origin of Time: Stephen Hawking’s Final Theory

The quest to scientifically understand this apparent intelligent design has led to physical scenarios such as eternal inflation, which produces an infinite collection of pocket universes with their own laws. These ideas blend the anthropic principle – that only a life-friendly universe can be observed – into the narrative of a multiverse.

However, for anthropic reasoning to make sense, one needs to specify what a typical observer would be, observes Hertog, because otherwise the statement is circular. Instead, he argues that one should interpret the history of the universe as an evolutionary process. Not only would physical objects continuously evolve, but also the laws that govern them, thereby building up an enormous chain of frozen accidents analogous to the evolutionary tree of biological species on Earth.

This represents a major paradigm shift as it introduces a retrospective element: one can only understand evolution by looking at it backwards in time. Deterministic and causal explanations apply only at a crude, coarse-grained level, while the precise way that structures and laws play out is governed by accumulated accidents. Essentially the question “how did everything start?” is superseded by the question “how did our universe become as it is today?” This may be seen as adopting a top-down view (into the past) instead of a bottom-up view (from the past).

Hawking criticised traditional cosmology for hiding certain assumptions, in particular the separation of the fundamental laws from initial boundary conditions and from the role of the observer. Instead, one should view the universe, at its most fundamental level, as a quantum superposition of many possible spacetimes, of which the observer is an intrinsic part.

From this Everettian viewpoint, wavefunctions behave like separate branches of reality. A measurement is like a fork in the road, where history divides into different outcomes. This line of thought has significant consequences. The author presents an illuminating analogy with the delayed-choice double-slit experiment, first conceived by John Archibald Wheeler. Here the measurement that determines whether an electron behaves as a particle or a wave is delayed until after the electron has already passed through the slits. This demonstrates that the act of observation introduces a retroactive component which, in a sense, creates the past history of the electron.

The fifth dimension 

Further ingredients are needed to transform this collection of ideas to a concrete proposal, argues Hertog. In short, these are quantum entanglement and holography. Holography has been recognised as a key property of quantum gravity, following Maldacena’s work on quantum black holes. It posits that all the information about the interior of a black hole is encoded at its horizon, which acts like a holographic screen. Inside, a fictitious fifth dimension emerges that plays the role of an energy scale.

A holographic universe would be the polar opposite of a Platonic universe with eternal laws

In Hawking and Hertog’s holographic quantum universe, one considers a Euclidean universe where the role of the holographic screen is played by the surface of our observations. The main idea is that the emergent dimension is time itself! In essence, the observed universe, with all its complexity, is like a holographic screen whose quantum bits encode its past history. Moving from the screen to the interior is equivalent to going back in time, from a highly entangled complex universe to a gradually less structured universe with fading physical laws and less entangled qubits. Eventually no entangled qubits remain. This is the origin of time as well as of the physical laws. Such a holographic universe would be the polar opposite of a Platonic universe with eternal laws.

Could these ideas be tested? Hertog argues that an observable imprint in the spectrum of primordial gravitational waves could be discovered in the future. For now, On the Origin of Time is delightful food for thought.
