In the past two decades, it has become clear that three-quark baryons and quark–antiquark mesons cannot describe the full spectrum of hadrons. Dozens of exotic states have been observed in the charm sector alone. These states are interpreted either as compact objects with four or five valence quarks or as hadronic molecules; however, their inner structure remains uncertain due to the complexity of calculations in quantum chromodynamics (QCD) and the lack of direct experimental measurements of the residual strong interaction between charm and light hadrons. A new femtoscopy measurement by the ALICE collaboration challenges theoretical expectations and the current understanding of QCD.
Femtoscopy is a well-established method for studying the strong interactions between hadrons. Experimentally, this is achieved by studying particle pairs with small relative momentum. In high-energy collisions of protons at the LHC, the distance between such hadrons at the time of production is about one femtometre, which is within the range of the strong nuclear force. From the momentum correlations of particle pairs, one extracts the scattering length, a0, which quantifies the final-state strong interaction between the two hadrons. By studying the momentum correlations of emitted particle pairs, it is possible to access the final-state interactions of even short-lived hadrons such as D mesons.
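In schematic terms (a textbook sketch rather than the collaboration's full analysis), the correlation function is the ratio of the relative-momentum distribution of pairs from the same event to that of uncorrelated pairs built by event mixing,

$$ C(k^*) = \frac{\mathrm{d}N_{\mathrm{same}}/\mathrm{d}k^*}{\mathrm{d}N_{\mathrm{mixed}}/\mathrm{d}k^*} \;\longrightarrow\; 1 \quad \text{for large } k^*, $$

and deviations from unity at small relative momentum k* encode the final-state interaction. At low energies the scattering is characterised by a0 through the effective-range expansion of the s-wave phase shift δ0,

$$ k^* \cot\delta_0(k^*) = \frac{1}{a_0} + \frac{1}{2}\, r_{\mathrm{eff}}\, k^{*2} + \dots $$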
The scattering lengths are significantly smaller than the theoretical predictions
The ALICE collaboration has now, for the first time, measured the interaction of open-charm mesons (D+ and D*+) with charged pions and kaons for all charge combinations. The momentum correlation functions of each system were measured in proton–proton collisions at the LHC at a centre-of-mass energy of 13 TeV. As predicted by heavy-quark spin symmetry, the scattering lengths of Dπ and D*π agree with each other, but they are found to be significantly smaller than the theoretical predictions (figure 1). This implies that the interaction between these mesons can be fully explained by the Coulomb force, with the contribution from strong interactions negligible within experimental precision. The small measured values of the scattering length challenge our understanding of the residual strong force between heavy-flavour hadrons in the non-perturbative limit of QCD.
These results also have an important impact on the study of the quark–gluon plasma (QGP) – a deconfined state of matter created in ultra-relativistic heavy-ion collisions. The rescattering of D mesons with the other hadrons (mostly pions and kaons) created in such collisions was thought to modify the D-meson spectra, in addition to the modification expected from the QGP formation. The present ALICE measurement demonstrates, however, that the effect of rescattering is expected to be very small.
More precise and systematic studies of charm–hadron interactions will be carried out with the upgraded ALICE detector in the upcoming years.
Rare radiative b-hadron decays are powerful probes of the Standard Model (SM), sensitive to small deviations caused by potential new physics in virtual loops. One such process is the decay B0s→ μ+μ–γ. The dimuon decay of the B0s meson is known to be extremely rare and has been measured with unprecedented precision by LHCb and CMS. While performing this measurement, LHCb also studied the B0s→ μ+μ–γ decay, partially reconstructed due to the missing photon, as a background component of the B0s→ μ+μ– process, and set the first upper limit on its branching fraction, 2.0 × 10⁻⁹ at 95% CL (red arrow in figure 1). However, this search was limited to the high-dimuon-mass region, whereas several theoretical extensions of the SM could manifest themselves in lower regions of the dimuon-mass spectrum. Reconstructing the photon is therefore essential to explore the spectrum thoroughly and probe a wide range of physics scenarios.
The LHCb collaboration now reports the first search for the B0s→ μ+μ–γ decay with a reconstructed photon, exploring the full dimuon-mass spectrum. Photon reconstruction poses additional experimental challenges, such as degrading the mass resolution of the B0s candidate and introducing additional background contributions. To meet the demands of this ambitious search, machine-learning algorithms and new variables were specifically designed to discriminate the signal from background processes with similar signatures. The analysis is performed separately in three dimuon-mass ranges to exploit differences along the spectrum, such as the contribution of the ϕ(1020) meson in the low-invariant-mass region. The μ+μ–γ invariant-mass distributions of the selected candidates are fitted, including all background contributions and the B0s→ μ+μ–γ signal component. Figure 2 shows the fit for the lowest dimuon-mass region.
No significant B0s→ μ+μ–γ signal is found in any of the three dimuon-mass regions, consistent with the background-only hypothesis. Upper bounds on the branching fraction are set, shown as the black arrows in figure 1. The mass fit is also performed on the combined candidates of the three dimuon-mass regions, setting a combined upper limit on the branching fraction of 2.8 × 10⁻⁸ at 95% CL.
SM theoretical predictions for b decays become particularly difficult to calculate when a photon is involved, and they have large uncertainties due to the B0s→ γ local form factors. The B0s→ μ+μ–γ decay provides a unique opportunity to validate the different theoretical approaches, which do not agree with each other, as shown by the coloured bands in figure 1. Theoretical calculations of the branching fraction currently lie below the experimental limits. The upgraded LHCb detector and the increased luminosity of the LHC's Run 3 are now providing the conditions to study rare radiative b-hadron decays with greater precision and, eventually, to find evidence for the B0s→ μ+μ–γ decay.
Steven Weinberg was a logical freight train – for many, the greatest theorist of the second half of the 20th century. It is timely to reflect on his legacy, the scientific component of which is laid out in a new collection of his publications selected by theoretical physicist Michael Duff (Imperial College).
Six chapters cover Weinberg’s most consequential contributions to effective field theory, the Standard Model, symmetries, gravity, cosmology and short-form popular science writing. I can’t identify any notable omissions and I doubt many others would, though some may raise an eyebrow at the exclusion of his paper deriving the Lee–Weinberg bound. Duff brings each chapter to life with first-hand anecdotes and details that will delight those of us furthest removed from the historical events. I am relatively young, and had only one meaningful interaction with Steven Weinberg. Though my contemporaries and I inhabit a scientific world whose core concepts are interwoven with, if not formed by, his scientific legacy, unlike Michael Duff we are poorly qualified to comment on the historical ecosystem in which this legacy grew, or on aspects of his personality. This makes Duff’s commentary particularly valuable to younger readers.
I can envisage three distinct audiences for this new collection. The first is the lay theorist – those who are widely enough read to recognise the depth of Weinberg’s impact in theoretical physics and would like to know more. Such readers will find Duff’s introductions to be insightful and entertaining – helpful preparation for the more technical aspects of the papers, though expertise is required to fully grapple with many of them. There are also a few hand-picked non-technical articles one would otherwise not encounter without some serious investigative effort, including some accessible articles on quantum field theory, effective field theory and life in the multiverse, in addition to the dedicated section on popular articles. These will delight any theory aficionado.
The second audience is practising theorists. If you’re going to invest in a printed collection of publications, then Weinberg is an obvious protagonist. Particle theorists consult his articles so often that they may as well have them close at hand. This collection contains those most often revisited and ought to be useful in this respect. Duff’s introductions also expose technical interconnections between the articles that might otherwise be missed.
The third audience I have in mind are beginning graduate students in particle theory, cosmology and beyond. It would not be a mistake to put this collection on recommended reading lists. In due course, most students should read many of these papers multiple times, so why not get on with it from the get-go? The section on effective field theories (EFTs) contains many valuable key ideas and perspectives. Plenty of those core concepts are still commonly encountered more by osmosis than with any rigour, and this can lead to confused notions around the general approach of EFT. Perhaps an incomplete introduction to EFT could be avoided for graduate students by cutting straight to the fundamentals contained here? The cosmology section also reveals many important modern concepts alongside lucid and fearless wrestling with big questions. The papers on gravity detail techniques that are frequently encountered in any first foray into modern amplitudology, as well as strategies to infer general lessons in quantum field theory from symmetries and self-consistency alone.
In my view, however, the most important section for beginning graduate students is that on the construction of the Standard Model (SM). It may be said that a collective amnesia has emerged regarding the scientific spirit that drove its development. The SM was built by model builders. I don’t say this facetiously. They made educated guesses about the structure of the “ultraviolet” (microscopic) world based on the “infrared” (long-distance) breadcrumbs embedded within low-energy experimental observations. Decades after this swashbuckling era came to an end, there is a growing tendency to view the SM as something rigid, providentially bestowed and permanent. The academic bravery and risk-taking that was required to take the necessary leaps forward then, and which may be required now, is nowhere better demonstrated than in “A Model of Leptons”. All young theorists should read it multiple times. The paper shows that Steven Weinberg was not only an unstoppable force of logic, but also a plucky risk-taker. It’s inspirational that its final paragraph, which laid out the structure of nature at the electroweak scale, ends with doubt and speculation: “And if this model is renormalisable, then what happens when we extend it to include the couplings of A and B to the hadrons?” By working their way through this collection, graduate students may be inspired to similar levels of ambition and jeopardy.
Amongst the greatest scientists of the last century
In the weeks that followed the passing of Steven Weinberg, I sensed amongst colleagues of all generations some moods that I could have anticipated: the loss not only of a bona fide truth-seeker, but also of a leader, frequently the leader. I also perceived a feeling that transcended the scientific realm alone: that here was someone whose creative genius ought to be recognised amongst the greatest scientists, musicians and artists of the last century. How can we productively reflect on that? I imagine we would all do well not only to learn Weinberg’s important individual scientific insights, but also to attempt to absorb his overall methodology: identifying interesting questions, breaking new trails in fundamental physics, and pursuing logic and clarity wherever they may take you. This collection is not a bad place to start.
As the harvest of data from the LHC experiments continues to increase, so does the required number of simulated collisions. This is a resource-intensive task as hundreds of particles must be tracked through complex detector geometries for each simulated physics collision – and Monte Carlo statistics must typically exceed experimental statistics by a factor of 10 or more, to minimise uncertainties when measured distributions are compared with theoretical predictions. To support data taking in Run 3 (2022–2025), the ATLAS collaboration therefore developed, evaluated and deployed a wide array of detailed optimisations to its detector-simulation software.
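A rough way to see where the factor of ten comes from (an illustrative estimate, not a statement of ATLAS policy): when a prediction normalised from N_MC simulated events is compared with N_data measured events, the two statistical uncertainties add in quadrature,

$$ \frac{\sigma}{N} \;\simeq\; \sqrt{\frac{1}{N_{\mathrm{data}}} + \frac{1}{N_{\mathrm{MC}}}}, $$

so choosing N_MC = 10 N_data inflates the purely data-driven error by only a factor √1.1 ≈ 1.05, i.e. about 5%.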
The production of simulated data begins with the generation of particles produced within the LHC’s proton–proton or heavy-ion collisions, followed by the simulation of their propagation through the detector and the modelling of the electronics signals from the active detection layers. Considerable computing resources are consumed when hadrons, photons and electrons enter the electromagnetic calorimeters and produce showers with many secondary particles whose trajectories and interactions with the detector material must be computed. The complex accordion geometry of the ATLAS electromagnetic calorimeter makes the Geant4 simulation of the shower development in the calorimeter system particularly compute-intensive, accounting for about 80% of the total simulation time for a typical collision event.
Since computing costs money and consumes electrical power, it is highly desirable to speed up the simulation of collision events without compromising accuracy. For example, considerable CPU resources were previously spent in the transportation of photons and neutrons; this has been mitigated by randomly removing 90% of the photons (neutrons) with energy below 0.5 (2) MeV and scaling up the energy deposited from the remaining 10% of low-energy particles. The simulation of photons in the finely segmented electromagnetic calorimeter took considerable time because the probabilities for each possible interaction process were calculated every time photons crossed a material boundary. That calculation time has been greatly reduced by using a uniform geometry with no photon transport boundaries and by determining the position of simulated interactions using the ratio of the cross sections in the various material layers. The combined effect of the optimisations brings an average speed gain of almost a factor of two.
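The photon and neutron culling described above is a form of “Russian roulette”, a standard variance-reduction trick in detector simulation. The following minimal Python sketch (illustrative only, not ATLAS or Geant4 code; the thresholds are taken from the text) keeps one in ten sub-threshold particles and reweights the survivors so the average deposited energy is preserved:

```python
import random

# Illustrative sketch of "Russian roulette" culling: most low-energy
# particles are discarded and the survivors are reweighted so that the
# expected energy deposit is unchanged.

SURVIVAL_PROB = 0.1                            # keep 10% of low-energy particles
THRESHOLD_MEV = {"photon": 0.5, "neutron": 2.0}  # energy thresholds quoted in the text

def russian_roulette(particle):
    """Return (keep, weight) for a candidate particle.

    Particles above threshold are always tracked with weight 1; below
    threshold, 90% are dropped and the rest carry weight 1/0.1 = 10.
    """
    if particle["energy"] >= THRESHOLD_MEV.get(particle["type"], 0.0):
        return True, 1.0
    if random.random() < SURVIVAL_PROB:
        return True, 1.0 / SURVIVAL_PROB
    return False, 0.0

# Example: a 0.3 MeV photon is kept only 10% of the time, with weight 10.
keep, weight = russian_roulette({"type": "photon", "energy": 0.3})
```

Because the surviving particles carry a weight of ten, the expected energy deposit is unbiased, at the cost of slightly larger statistical fluctuations in the soft, low-energy tails.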
ATLAS has also successfully used fast-simulation algorithms to leverage the available computational resources. Fast simulation aims at avoiding the compute-expensive Geant4 simulation of calorimeter showers by using parameterised models that are significantly faster and retain most of the physics performance of the more detailed simulation. However, one of the major limitations of the fast simulation employed by ATLAS during Run 2 was the insufficiently accurate modelling of physics observables such as the detailed description of the substructure of jets reconstructed with large-radius clustering algorithms.
For Run 3, ATLAS has developed a completely redesigned fast simulation toolkit, known as AtlFast3, which performs the simulation of the entire ATLAS detector. While the tracking systems continue to be simulated using Geant4, the energy response in the calorimeters is simulated using a hybrid approach that combines two new tools: FastCaloSim and FastCaloGAN.
FastCaloSim parametrises the longitudinal and lateral development of electromagnetic and hadronic showers, while the simulated energy response from FastCaloGAN is based on generative adversarial neural networks that are trained on pre-simulated Geant4 showers. AtlFast3 effectively combines the strengths of both approaches by selecting the most appropriate algorithm depending on the properties of the shower-initiating particles, tuned to optimise the performance of reconstructed observables, including those exploiting jet substructure. As an example, figure 1 shows that the hybrid AtlFast3 approach accurately reproduces the number of constituents of reconstructed jets as simulated with Geant4.
With its significantly improved physics performance and a speedup between a factor of 3 (for Z → ee events) and 15 (for high-pT di-jet events), AtlFast3 will play a crucial role in delivering high-precision physics simulations of ATLAS for Run 3 and beyond, while meeting the collaboration’s budgetary compute constraints.
Since their first direct detection in 2015, gravitational waves (GWs) have become pivotal in our quest to understand the universe. The ultra-high-frequency (UHF) band offers a window to discover new physics beyond the Standard Model (CERN Courier March/April 2022 p22). Unleashing this potential requires theoretical work to investigate possible GW sources and experiments with far greater sensitivities than those achieved today.
A workshop at CERN from 4 to 8 December 2023 leveraged impressive experimental progress in a range of fields. Attended by nearly 100 international scientists – a noteworthy increase from the 40 experts who attended the first workshop at ICTP Trieste in 2019 – the workshop showcased the field’s expanded research interest and collaborative efforts. Concretely, about 10 novel detector concepts have been developed since the first workshop.
One can look for GWs in a few different ways: observing changes in the space between detector components, exciting vibrations in detectors, and converting GWs into electromagnetic radiation in strong magnetic fields. Substantial progress has been made in all three experimental directions.
Levitating concepts
The leading concepts for the first approach involve optically levitated sensors such as high-aspect-ratio sodium–yttrium–fluoride prisms, and semi-levitated sensors such as thin silicon or silicon–nitride nanomembranes in long optical resonators. These technologies are currently under study by various groups in the Levitated Sensor Detectors collaboration and at DESY.
For the second approach, the main focus is on millimetre-scale quartz cavities similar to those used in precision clocks. A network of such detectors, known as GOLDEN, is being planned, involving collaborations among UC Davis, University College London and Northwestern University. Superconducting radio-frequency cavities also present a promising technology. A joint effort between Fermilab and DESY is leveraging the existing MAGO prototype to gain insights and design further optimised cavities.
Regarding the third approach, a prominent example is optical high-precision interferometry combined with a series of accelerator dipole magnets similar to those used in the light-shining-through-a-wall axion-search experiment ALPS II (Any Light Particle Search II), the axion helioscope CAST and its planned successor IAXO. In fact, ALPS II is anticipated to commence a dedicated GW search in 2028. Additionally, other notable concepts inspired by axion dark-matter searches involve toroidal magnets, exemplified by experiments like ABRACADABRA, or solenoidal magnets such as BASE or MADMAX.
All three approaches stand to benefit from burgeoning advances in quantum sensing, which promise to enhance sensitivities by orders of magnitude. In this landscape, axion dark-matter searches and UHF GW detection are poised to work in close collaboration, leveraging quantum sensing to achieve unprecedented results. Concepts that demonstrate synergies with axion-physics searches are crucial at this stage, and can be facilitated by incremental investments. Such collaboration builds awareness within the scientific community and presents UHF searches as an additional, compelling science case for their construction.
The workshop showcased the field’s expanded research interest and collaborative efforts
Cross-disciplinary research is also crucial to understand cosmological sources and constraints on UHF GWs. For the former, our understanding of primordial black holes has significantly matured, transitioning from preliminary estimates to a robust framework. Additional sources, such as parabolic encounters and exotic compact objects, are also gaining clarity. For the latter, the workshop highlighted how strong magnetic fields in the universe, such as those in extragalactic voids and planetary magnetospheres, can help set limits on the conversion between electromagnetic and gravitational waves.
Despite much progress, the sensitivity needed to detect UHF GWs remains a visionary goal, requiring the constant pursuit of inventive new ideas. To aid this, the community is taking steps to be more inclusive. The living review produced after the first workshop (arXiv:2011.12414) will be revised to be more accessible for people outside our community, breaking down detector concepts into fundamental building blocks for easier understanding. Plans are also underway to establish a comprehensive research repository and standardise data formats. These initiatives are crucial for fostering a culture of open innovation and expanding the potential for future breakthroughs in UHF GW research. Finally, a new, fully customisable and flexible GW plotter including the UHF frequency range is being developed to benefit the entire GW community.
The journey towards detecting UHF GWs is just beginning. While current sensitivities are not yet sufficient, the community’s commitment to developing innovative ideas is unwavering. With the collective efforts of a dedicated scientific community, the next leap in gravitational-wave research is on the horizon. Limits exist to be surpassed!
New Challenges and Opportunities in Physics Education presents itself as a guidebook for high-school physics educators who are navigating modern challenges in physics education. But whether you’re teaching the next generation of physicists, exploring the particles of the universe, or simply interested in the evolution of physics education, this book promises valuable insights. It doesn’t aim to cater to all equally, but rather to offer a spark of inspiration to a broad spectrum of readers.
The book is structured in two distinctive sections on modern physics topics and the latest information and communication technologies (ICTs) for classrooms. The editors bring together a diverse blend of expertise in modern physics, physics education and interdisciplinary approaches. Marilena Streit-Bianchi and Walter Bonivento are well-known names in high-energy physics, with long and successful careers at CERN. In parallel, Marisa Michelini and Matteo Tuveri are pushing the limits of physics education with modern educational approaches and contemporary topics. All four are committed to making physics education engaging and relevant to today’s students.
The first part presents the core concepts of contemporary physics through a variety of narrative techniques, from historical recounting to imaginary dialogues, providing educators with a toolbox of resources to engage students in various learning scenarios. Does the teacher want to “flip the classroom” and assign some reading? They can read Salvatore Esposito’s account of the scientific contributions of Enrico Fermi. Does the teacher want to encourage discussions? Mariano Cadoni and Mauro Dorato have them covered with a unique piece, “Gravity between Physics and Philosophy”, which can support interdisciplinary classroom discussions.
The second half of the book starts with an overview of ICT resources and classical physics examples on how to use them in a classroom setting. The authors then explore the skills that teachers and students need to effectively use ICTs. The transition to ICT feels a bit too long, and the book struggles to weave the two sections into a cohesive narrative, but the second half nevertheless captures the title of the book perfectly – ICTs are the epitome of new opportunities in physics education. While much has been said about them in other works, this book offers a cherry-picked but well rounded collection of ideas for enhancing educational experiences.
The authors emphasise not only modern physics and technology, but also another very important characteristic of modern science: collaboration. This is an important message to convey to students, as historical examples from classical physics can present an elitist view of the field. Throughout the book, lone-genius narratives are explicitly replaced by a collaborative understanding of breakthroughs.
The book would not be complete without input from actual teachers. One notable contribution comes from Michael Gregory, a particle-physics educator who, together with Steve Goldfarb, the former IPPOG co-chair, shares his experiences with distance learning. During the pandemic, he used online tools to convey physics concepts not only to his own students, but to students and teachers around the world. His successful virtual science camps and online particle-physics courses reached frequently overlooked audiences in remote locations.
Overall, New Challenges and Opportunities in Physics Education emerges as a valuable resource for a diverse audience. It is a guidebook for educators searching for innovative strategies to spice up their physics teachings or to better weave modern science into their lessons. Although it might fall short of flawlessly joining the modern-physics content with educational elements in the second half, its value is undeniable. The first part, in particular, serves as a treasure trove not only for educators but also for science communicators and even particle physicists seeking to engage with the public, using the common ground of high-school physics knowledge.
After all these years, neutrinos remain extraordinary – and somewhat deceptive. The experimental success of the three-massive-neutrino paradigm over the past 25 years makes it easy to forget that massive neutrinos are not part of the Standard Model (SM) of particle physics.
The problem lies with how neutrinos acquire mass. Nonzero neutrino masses are not possible without the existence of new fundamental fields, beyond those that are part of the SM. And we know virtually nothing about the particles associated with them. They could be bosons or fermions, light or heavy, charged or neutral, and experimentally accessible or hopelessly out of reach.
This is the neutrino mass puzzle. At its heart is the particle’s uniquely elusive nature, which is both the source of the problem and the main challenge in resolving it.
Mysterious and elusive
Despite outnumbering other known massive particles in the universe by 10 orders of magnitude, neutrinos are the least understood of the matter particles. Unlike electrons, they do not participate in electromagnetic interactions. Unlike quarks, they do not participate in the strong interactions that bind protons and neutrons together. Neutrinos participate only in aptly named weak interactions. Out of the trillions of neutrinos that the Sun beams through you each second, only a handful will interact with your body during your lifetime.
Neutrino physics has therefore had a rather tortuous and slow history. The existence of neutrinos was postulated in 1930 but only confirmed in the 1950s. The hypothesis that there are different types of neutrinos was first raised in the 1940s but only confirmed in the 1960s. And the third neutrino type, postulated when the tau lepton was discovered in the 1970s, was only directly observed in the year 2000. Nonetheless, over the years neutrino experiments have played a decisive role in the development of the most successful theory in modern physics: the SM. And at the turn of the 21st century, neutrino experiments revealed that there is something missing in its description of particle physics.
Neutrinos are fermions with spin one-half that interact with the charged leptons (the electron, muon and tau lepton) and the particles that mediate the weak interactions (the W and Z bosons). There are three neutrino types, or flavours: electron-type (νe), muon-type (νμ) and tau-type (ντ), and each interacts exclusively with its namesake charged lepton. One of the predictions of the SM is that neutrino masses are exactly zero, but a little over 25 years ago, neutrino experiments revealed that this is not exactly true. Neutrinos have tiny but undeniably nonzero masses.
Mixing it up
The search for neutrino masses is almost as old as Pauli’s 93-year-old postulate that neutrinos exist. They were ultimately discovered around the turn of the millennium through the observation of neutrino flavour oscillations. It turns out that we can produce one of the neutrino flavours (for example νμ) and later detect it as a different flavour (for example νe) so long as we are willing to wait for the neutrino flavour to change. The probability associated with this phenomenon oscillates in spacetime with a characteristic distance that is inversely proportional to the differences of the squares of the neutrino masses. Given the tininess of neutrino masses and mass splittings, these distances are frequently measured in hundreds of kilometres in particle-physics experiments.
Neutrino oscillations also require the leptons to mix. This means that the neutrino flavour states are not particles with a well defined mass but are quantum superpositions of different neutrino states with well defined masses. The three mass eigenstates are related to the three flavour eigenstates via a three-dimensional mixing matrix, which is usually parameterised in terms of mixing angles and complex phases.
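Both statements can be made concrete with textbook formulae (quoted here for orientation; the experiments fit the full three-flavour framework). The flavour states are superpositions of the mass states,

$$ |\nu_\alpha\rangle = \sum_{i=1}^{3} U_{\alpha i}^{*}\, |\nu_i\rangle, \qquad \alpha = e, \mu, \tau, $$

with U the PMNS mixing matrix, and in the two-flavour limit the oscillation probability reads

$$ P(\nu_\mu \to \nu_e) \simeq \sin^2 2\theta \; \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right). $$

For Δm² ≈ 2.5 × 10⁻³ eV² and a 1 GeV beam, the first oscillation maximum sits near L ≈ 500 km, hence the baselines of hundreds of kilometres mentioned above.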
In the last few decades, precision measurements of neutrinos produced in the Sun, in the atmosphere, in nuclear reactors and in particle accelerators in different parts of the world have measured the mixing parameters at the several percent level. Assuming the mixing matrix is unitary, all but one have been shown to be nonzero. The measurements have revealed that the three neutrino mass eigenvalues are separated by two different mass-squared differences: a small one of order 10⁻⁴ eV² and a large one of order 10⁻³ eV². Data therefore reveal that at least two of the neutrino masses are different from zero. At least one of the neutrino masses is above 0.05 eV, and the second lightest is at least 0.008 eV. While neutrino oscillation experiments cannot measure the neutrino masses directly, precise measurements of beta-decay spectra and constraints from the large-scale structure of the universe offer complementary upper limits. The nonzero neutrino masses are constrained to be less than roughly 0.1 eV.
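The quoted lower bounds follow directly from the splittings (using representative values, Δm²_atm ≈ 2.5 × 10⁻³ eV² and Δm²_sol ≈ 7 × 10⁻⁵ eV²):

$$ m_{\mathrm{heaviest}} \geq \sqrt{\Delta m^2_{\mathrm{atm}}} \approx \sqrt{2.5\times10^{-3}\ \mathrm{eV}^2} \approx 0.05\ \mathrm{eV}, \qquad m_{\mathrm{second}} \geq \sqrt{\Delta m^2_{\mathrm{sol}}} \approx \sqrt{7\times10^{-5}\ \mathrm{eV}^2} \approx 0.008\ \mathrm{eV}. $$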
These masses are tiny when compared to the masses of all the other particles (see “Chasm” figure). The mass of the lightest charged fermion, the electron, is of order 10⁶ eV. The mass of the heaviest fermion, the top quark, is of order 10¹¹ eV, as are the masses of the W, Z and Higgs bosons. These particle masses are all at least seven orders of magnitude heavier than those of the neutrinos. No one knows why neutrino masses are dramatically smaller than those of all other massive particles.
The Standard Model and mass
To understand why the SM predicts neutrino masses to be zero, it is necessary to appreciate that particle masses are complicated in this theory. The reason is as follows. The SM is a quantum field theory. Interactions between the fields are strictly governed by their properties: spin, various “local” charges, which are conserved in interactions, and – for fermions like the neutrinos, charged leptons and quarks – another quantum number called chirality.
In quantum field theories, mass is the interaction between a right-chiral and a different left-chiral field. A naive picture is that the mass-interaction constantly converts left-chiral states into right-chiral ones (and vice versa) and the end result is a particle with a nonzero mass. It turns out, however, that for all known fermions, the left-chiral and right-chiral fermions have different charges. The immediate consequence of this is that you can’t turn one into the other without violating the conservation of some charge so none of the fermions are allowed to have mass: the SM naively predicts that all fermion masses are zero!
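In field-theory language (a standard textbook expression, included here for concreteness), a mass term has the form

$$ \mathcal{L}_{\mathrm{mass}} = -\,m\left( \bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L \right), $$

which manifestly couples a left-chiral field to a right-chiral one. If ψL and ψR carry different charges, every such term is forbidden by charge conservation.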
The Higgs field was invented to fix this shortcoming. It is charged in such a way that some right-chiral and left-chiral fermions are allowed to interact with one another plus the Higgs field which, uniquely among all known fields, is thought to have been turned on everywhere since the phase transition that triggered electroweak symmetry breaking very early in the history of the universe. In other words, so long as the vacuum configuration of the Higgs field is not trivial, fermions acquire a mass thanks to these interactions.
This is not only a great idea; it is also at least mostly correct, as spectacularly confirmed by the discovery of the Higgs boson a little over a decade ago. It has many verifiable consequences. One is that the strength with which the Higgs boson couples to different particles is proportional to the particle’s mass – the Higgs prefers to interact with the top quark or the Z or W bosons over the electron or the light quarks. Another consequence is that all masses are proportional to the value of the Higgs field in the vacuum (10¹¹ eV) and, in the SM, we naively expect all particle masses to be similar.
Neutrino masses are predicted to be zero because, in the SM, there are no right-chiral neutrino fields and hence none for the left-chiral neutrinos – the ones we know about – to “pair up” with. Neutrino masses therefore require the existence of new fields, and hence new particles, beyond those in the SM.
Wanted: new fields
The list of candidate new fields is long and diverse. For example, the new fields that allow for nonzero neutrino masses could be fermions or bosons; they could be neutral or charged under SM interactions, and they could be related to a new mass scale other than the vacuum value of the SM Higgs field (10¹¹ eV), which could be either much smaller or much larger. Finally, while these new fields might be “easy” to discover with the current and near-future generation of experiments, they might equally turn out to be impossible to probe directly in any particle-physics experiment in the foreseeable future.
Though there are too many possibilities to list, they can be classified into three very broad categories: neutrinos acquire mass by interacting with the same Higgs field that gives mass to the charged fermions; by interacting with a similar Higgs field with different properties; or through a different mechanism entirely.
At first glance, the simplest idea is to postulate the existence of right-chiral neutrino fields and further assume they interact with the Higgs field and the left-chiral neutrinos, just like right-chiral and left-chiral charged leptons and quarks. There is, however, something special about right-chiral neutrino fields: they are completely neutral relative to all local SM charges. Returning to the rules of quantum field theory, completely neutral chiral fermions are allowed to interact “amongst themselves” independent of whether there are other right-chiral or left-chiral fields around. This means the right-chiral neutrino fields should come along with a different mass that is independent of the vacuum value of the Higgs field of 10¹¹ eV.
To prevent this from happening, the right-chiral neutrinos must possess some kind of conserved charge that is shared with the left-chiral neutrinos. If this scenario is realised, there is some new, unknown fundamental conserved charge out there. This hypothetical new charge is called lepton number: electrons, muons, tau leptons and neutrinos are assigned charge plus one, while positrons, antimuons, tau antileptons and antineutrinos have charge minus one. A prediction of this scenario is that the neutrino and the antineutrino are different particles since they have different lepton numbers. In more technical terms, the neutrinos are massive Dirac fermions, like the charged leptons and the quarks. In this scenario, there are new particles associated with the right-chiral neutrino field, and a new conservation law in nature.
Accidental conservation
As of today, there is no experimental evidence that lepton number is not conserved, and readers may question if this really is a new conservation law. In the SM, however, the conservation of lepton number is merely “accidental” – once all other symmetries and constraints are taken into account, the theory happens to possess this symmetry. But lepton number conservation is no longer an accidental symmetry when right-chiral neutrinos are added, and these chargeless and apparently undetectable particles should have completely different properties if it is not imposed.
If lepton number conservation is imposed as a new symmetry of nature, making neutrinos pure Dirac fermions, there appears to be no observable consequence other than nonzero neutrino masses. Given the tiny neutrino masses, the strength of the interaction between the Higgs boson and the neutrinos is predicted to be at least seven orders of magnitude smaller than all other Higgs couplings to fermions. Various ideas have been proposed to explain this remarkable chasm between the strength of the neutrino’s interaction with the Higgs field relative to that of all other fermions. They involve a plurality of theoretical concepts including extra-dimensions of space, mirror copies of our universe and dark sectors.
A second possibility is that there are more Higgs fields in nature and that the neutrinos acquire a mass by interacting with a Higgs field that is different from the one that gives a mass to the charged fermions. Since the neutrino mass is proportional to the vacuum value of a different Higgs field, the fact that the neutrino masses are so small is easy to tolerate: they are simply proportional to a different mass scale that could be much smaller than 10¹¹ eV. Here, there are no right-chiral neutrino fields and the neutrino masses are interactions of the left-chiral neutrino fields amongst themselves. This is possible because, while the neutrinos possess weak-force charge, they have no electric charge. In the presence of the nontrivial vacuum of the Higgs fields, the weak-force charge is effectively not conserved and these interactions may be allowed. The fact that the Higgs particle discovered at the LHC – associated with the SM Higgs field – does not allow for this possibility is a consequence of its charges. Different Higgs fields can have different weak-force charges and end up doing different things. In this scenario, the neutrino and the antineutrino are, in fact, the same particle. In more technical terms: the neutrinos are massive Majorana fermions.
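The two scenarios correspond to two different mass terms (standard notation, shown here for contrast):

$$ \mathcal{L}_{\mathrm{Dirac}} = -\,m_D\, \bar{\nu}_L \nu_R + \mathrm{h.c.}, \qquad \mathcal{L}_{\mathrm{Majorana}} = -\,\tfrac{1}{2}\, m_M\, \overline{\nu_L^{\,c}}\, \nu_L + \mathrm{h.c.} $$

The Majorana term is built from the left-chiral field and its own charge conjugate, so it needs no right-chiral partner, but it changes lepton number by two units.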
Neutrino masses require the existence of new fields, and hence new particles, beyond those in the Standard Model
One way to think about this is as follows: the mass interaction transforms left-chiral objects into right-chiral objects. For electrons, for example, the mass converts left-chiral electrons into right-chiral electrons. It turns out that the antiparticle of a left-chiral object is right-chiral and vice versa, and it is tempting to ask whether a mass interaction could convert a left-chiral electron into a right-chiral positron. The answer is no: electrons and positrons are different objects and converting one into the other would violate the conservation of electric charge. But this is no barrier for the neutrino, and we can contemplate the possibility of converting a left-chiral neutrino into its right-chiral antiparticle without violating any known law of physics. If this hypothesis is correct, the hypothetical lepton-number charge, discussed earlier, cannot be conserved. This hypothesis is experimentally neither confirmed nor contradicted but could soon be confirmed with the observation of neutrinoless double-beta decays – nuclear decays which can only occur if lepton-number symmetry is violated. There is an ongoing worldwide campaign to search for the neutrinoless double-beta decay of various nuclei.
A new source of mass
In the third category, there is a source of mass different from the vacuum value of the Higgs field, and the neutrino masses are an amalgam of the vacuum value of the Higgs field and this new source of mass. A very low new mass scale might be discovered in oscillation experiments, while consequences of heavier ones may be detected in other types of particle-physics experiments, including measurements of beta and meson decays, charged-lepton properties, or the hunt for new particles at high-energy colliders. Searches for neutrinoless double-beta decay can reveal different sources for lepton-number violation, while ultraheavy particles can leave indelible footprints in the structure of the universe through cosmic collisions. The new physics responsible for nonzero neutrino masses might also be related to grand-unified theories or the origin of the matter–antimatter asymmetry of the universe, through a process referred to as leptogenesis. The range of possibilities spans 22 orders of magnitude (see “eV to ZeV” figure).
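The canonical example is the type-I seesaw (the numbers below are illustrative, not measured): a Dirac mass m_D, proportional to the Higgs vacuum value, combines with a heavy Majorana scale M to give

$$ m_\nu \approx \frac{m_D^2}{M}, \qquad \text{e.g.}\quad m_D \approx 100\ \mathrm{GeV},\ M \approx 2\times10^{14}\ \mathrm{GeV} \;\Rightarrow\; m_\nu \approx 0.05\ \mathrm{eV}, $$

showing how an enormous new mass scale naturally yields tiny neutrino masses.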
Challenging scenarios
Since the origin of the neutrino masses here is qualitatively different from that of all other particles, the values of the neutrino masses are expected to be qualitatively different. Experimentally, we know that neutrino masses are much smaller than all charged-fermion masses, so many physicists believe that the tiny neutrino masses are strong indirect evidence for a source of mass beyond the vacuum value of the Higgs field. In most of these scenarios, the neutrinos are also massive Majorana fermions. The challenge here is that if a new mass scale exists in fundamental physics, we know close to nothing about it. It could be within direct reach of particle-physics experiments, or it could be astronomically high, perhaps as large as 10¹² times the vacuum value of the SM’s Higgs field.
Searching for neutrinoless double-beta decay is the most promising avenue to reveal whether neutrinos are Majorana or Dirac fermions
How do we hope to learn more? We need more experimental input. There are many outstanding questions that can only be answered with oscillation experiments. These could provide evidence for new neutrino-like particles or new neutrino interactions and properties. Meanwhile, searching for neutrinoless double-beta decay is the most promising avenue to experimentally reveal whether neutrinos are Majorana or Dirac fermions. Other activities include high-energy collider searches for new Higgs bosons that like to talk to neutrinos and new heavy neutrino-like particles that could be related to the mechanism of neutrino mass generation. Charged-lepton probes, including measurements of the anomalous magnetic moment of muons and searches for lepton-flavour violation, may provide invaluable clues, while surveys of the cosmic microwave background and the distribution of galaxies could also reveal footprints of the neutrino masses in the structure of the universe.
We still know very little about the new physics uncovered by neutrino oscillations. Only a diverse experimental programme will reveal the nature of the new physics behind the neutrino mass puzzle.
This book provides a rich glimpse into written science communication throughout a century that introduced many new and abstract concepts in physics. It begins with Einstein’s 1905 paper “On the Electrodynamics of Moving Bodies”, in which he introduced special relativity. Atypically, the paper starts with a thought experiment that helps the reader follow a complex and novel physical mechanism. Authors Harmon and Gross analyse and explain the text and its terminology, and bring further perspective by adding comments made by other scientists and science writers of the time. They follow this analytical style throughout the book, covering science from the smallest to the largest scales and addressing the controversies surrounding atomic weapons.
The only exception to these written evaluations of scientific papers is the chapter “Astronomical value”, in which the authors revisit the times of great astronomers such as Galileo Galilei and the Herschel siblings William and Caroline. Even back then, researchers needed sponsors and supporters to fund their research. Galilei regularly presented his findings to the Medici family, fuelling his patrons’ fascination so that he could continue his work.
Gross, a rhetoric and communications professor, died unexpectedly while the book was being written, leaving Harmon, a science writer and editor in communications at Argonne National Laboratory, to complete the work.
Although the style is somewhat repetitive, readers can pick a topic from the contents and see how scientists and communicators interacted with their audiences. In-depth scientific knowledge is not required, but the book is best suited to those familiar with the basics of physics who want to gain new perspectives on some of the most important breakthroughs of the past century and beyond. Indeed, by casting well-known texts in a communication context, the book offers analogies and explanations that can be used by anyone involved in public engagement.
A laser ionises rubidium vapour, turning it into plasma. A proton bunch plunges inside, evolving into millimetre-long microbunches. The microbunches pull the plasma’s electrons, forming wakes in the plasma, like a speedboat displacing water. Crests and troughs of the plasma’s electric field trail the proton microbunches at almost the speed of light. If injected at just the right moment, relativistic electrons surf on the accelerating phase of the field over a distance of metres, gaining energy up to 1000 times faster than is possible in conventional accelerators.
Plasma wakefield acceleration is a cutting-edge technology that promises to revolutionise the field of particle acceleration by paving the way for smaller and more cost-effective linear accelerators. The technique traces back to a seminal paper published in 1979 by Toshiki Tajima and John Dawson which laid the foundations for subsequent breakthroughs. At its core, the principle involves using a driver to generate wakefields in a plasma, upon which a witness beam surfs to undergo acceleration. Since the publication of the first paper, the field has demonstrated remarkable success in achieving large accelerating gradients.
Traditionally, only laser pulses and electron bunches have been used as drive beams. However, since 2016 the Advanced Wakefield Experiment (AWAKE) at CERN has used proton bunches from the Super Proton Synchrotron (SPS) as drive beams – an innovative approach with profound implications. Thanks to their high stored energy, proton bunches enable AWAKE to accelerate an electron bunch to energies relevant for high-energy physics in a single plasma, circumventing the need for the multiple accelerating stages that are required when using lasers or electron bunches.
Bridging the divide
Relevant to any accelerator concept based on plasma wakefields, AWAKE technology promises to bridge the gap between global developments at small scales and possible future electron–positron colliders. The experiment is therefore an integral component of the plasma roadmap of the European strategy for particle physics, aiming to advance the concept to a level of technological maturity that would allow its application to particle-physics experiments. An international collaboration of approximately 100 people across 22 institutes worldwide, AWAKE has already published more than 90 papers, many in high-impact journals, alongside significant efforts to train the next generation, culminating in the completion of over 28 doctoral theses to date.
In the experiment, a 400 GeV proton bunch from the SPS is sent into a 10 m-long plasma source containing rubidium vapour at a temperature of around 200 °C (see “Rubidium source” figure). A laser pulse accompanies the proton bunch, ionising the vapour and transforming it into a plasma.
To induce the necessary wakefields, the drive bunch length must be of the order of the plasma wavelength, which corresponds to the natural oscillation period of the plasma. However, the length of the SPS proton bunch is around 6 cm, significantly longer than the 1 mm plasma wavelength in AWAKE, and short wavelengths are required to reach large accelerating gradients.
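These numbers can be checked with the textbook cold-plasma formulae. The sketch below assumes a plasma electron density of about 7 × 10¹⁴ cm⁻³, a value typical of AWAKE rather than one quoted in the text:

```python
import math

# Back-of-envelope check (not AWAKE code) of the plasma wavelength,
# assuming an electron density of ~7e14 cm^-3.
E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m
M_E = 9.109e-31        # electron mass, kg
C_LIGHT = 2.998e8      # speed of light, m/s

n_e = 7e14 * 1e6       # electron density converted to m^-3

omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))  # plasma frequency, rad/s
lambda_p = 2 * math.pi * C_LIGHT / omega_p             # plasma wavelength, m

print(f"f_p ~ {omega_p / (2 * math.pi) / 1e9:.0f} GHz")  # ~237 GHz
print(f"l_p ~ {lambda_p * 1e3:.1f} mm")                  # ~1.3 mm
```

The same estimate gives a plasma frequency near 237 GHz, consistent with the roughly 235 GHz figure quoted later for the accelerator plasma.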
The solution is to take advantage of a beam-plasma instability, which transforms long particle bunches into microbunches with the period of the plasma through a process known as self-modulation. In other words, as the long proton bunch traverses the plasma, it can be coaxed into splitting into a train of shorter “microbunches”. The bunch train resonantly excites the plasma wave, like a pendulum or a child on a swing, being pushed with small kicks at its natural oscillation interval or resonant frequency. If applied at the right time, each kick increases the oscillation amplitude or height of the wave. When the amplitude is sufficiently high, a witness electron bunch from an external source is injected into the plasma wakefields, to ride the wakefields and gain energy.
The first phase of AWAKE (Run 1, from 2016 to 2018) served as a proof-of-concept demonstration of the acceleration scheme. First, it was shown that a plasma can be used as a compact device to self-modulate a highly relativistic and highly energetic proton bunch (see “Self-modulation” figure). Second, it was shown that the resulting bunch train resonantly excites strong wakefields. Third – the most direct demonstration – it was shown that externally injected electrons can be captured, focused and accelerated to GeV energies by the wakefields. The addition of a percent-level positive gradient in density along the plasma led to 20% boosts in the energy gained by the accelerated electrons.
Based on these proof-of-principle experimental results and expertise at CERN and in the collaboration, AWAKE developed a well-defined programme for Run 2, which launched in 2021 following Long Shutdown 2 and will run for several more years. The goal is to achieve electron acceleration with GeV/m energy gain and beam quality corresponding to a normalised emittance of around 10 mm mrad and a relative energy spread of a few per cent. In parallel, scalable plasma sources are being developed that can be extended up to hundreds of metres in length (see “Helicon plasma source” and “Discharge source” figures). Once these goals are reached, the concepts of AWAKE could be used in particle-physics applications, for example using electron beams with energies between 40 and 200 GeV impinging on a fixed target to search for new phenomena related to dark matter.
Controlled instability
The first Run 2 milestone, on track for completion by the end of the year, is to complete the self-modulator – the plasma that transforms the long proton bunch into a train of microbunches. The demonstration has been staged in two experimental phases.
The first phase was completed in 2022. The results prove that wakefields driven by a full proton bunch can have a reproducible and tunable timing. This is not at all a trivial demonstration given that the experiment is based on an instability!
Techniques to tune the instability are similar to those used with free-electron lasers: provide a controlled initial signal for the instability to grow from and operate in the saturated regime, for example. In AWAKE, the self-modulation instability is initiated by the wakefields driven by an electron bunch placed ahead of the proton bunch. The wakefields from the electron bunch imprint themselves on the proton bunch right from the start, leading to a well defined bunch train. This electron bunch is distinct from the witness bunches, which are later accelerated.
The second experimental phase for the completion of the self-modulator is to demonstrate that high-amplitude wakefields can be maintained over long distances. Numerical simulations predict that self-modulation can be optimised by tailoring the plasma’s density profile. For example, introducing a step in the plasma density should lead to higher accelerating fields that can be maintained over long distances. First measurements are very encouraging, with density steps already leading to increased energy gains for externally injected electrons. Work is ongoing to globally optimise the self-modulator.
AWAKE technology promises to bridge the gap between global developments at small scales and possible future electron–positron colliders
The second experimental milestone of Run 2 will be the acceleration of an electron bunch while demonstrating its sustained beam quality. The experimental setup designed to reach this milestone includes two plasmas: a self-modulator that prepares the proton bunch train, and a second “accelerator plasma” into which an external electron bunch is injected (see “Modulation and acceleration” figure). To make space for the installation of the additional equipment, CERN will dismantle, in 2025 and 2026, the CNGS (CERN Neutrinos to Gran Sasso) target area installed in a 100 m-long tunnel cavern downstream of the AWAKE experimental facility.
Accelerate ahead
Two enabling technologies are needed to achieve high-quality electron acceleration. The first is a source and transport line to inject the electron bunch on-axis into the accelerator plasma. A radio-frequency (RF) injector source was chosen because of the maturity of the technology, though the combination of S-band and X-band structures is novel, and forms a compact accelerator with possible medical applications. It is followed by a transport line that preserves the parameters of the 150 MeV, 100 pC bunch and allows for its tight focusing (5 to 10 µm) at the entrance of the accelerator plasma. External injection into plasma-based accelerators is challenging because of the high frequency (about 235 GHz in AWAKE) and thus small structure size (roughly 200 µm) at which they operate. The main goal is to demonstrate that the electron bunch can be accelerated to 4 to 10 GeV, with a relative energy spread of 5 to 8%, and emerge with approximately the same normalised emittance as at the entrance of the plasma (2–30 mm mrad).
For these experiments, rubidium vapour sources will be used for both the self-modulator and accelerator plasmas, as they provide the uniformity, tunability and reproducibility required for the acceleration process. However, the laser-ionisation process of the rubidium vapour does not scale to lengths beyond 20 m. The alternative enabling technology is therefore a plasma source whose length can be scaled to the 50 to 100 metres required for the bunch to reach 50–100 GeV energies. To achieve this, a laboratory to develop discharge and helicon-plasma sources has been set up at CERN (see “Discharge source” figure). Multiple units can in principle be stacked to reach the desired plasma length. The challenge with such sources is to demonstrate that they can deliver the required plasma parameters, not just the required length.
The third and final experimental milestone for Run 2 will then be to replace the 10 m-long accelerator plasma with a longer source and achieve proportionally larger energy gains. The AWAKE acceleration concept will then essentially be mature to propose particle-physics experiments, for example with bunches of a billion or so 50 GeV electrons.
The Physics Beyond Colliders (PBC) initiative has diversified the landscape of experiments at CERN by supporting smaller experiments and showcasing their capabilities. Its fifth annual workshop convened around 175 physicists from 25 to 27 March to provide updates on the ongoing projects and to explore new proposals to tackle the open questions of the Standard Model and beyond.
This year, the PBC initiative has significantly strengthened CERN’s dark-sector searches, explained Mike Lamont and Joachim Mnich, directors for accelerators and technology, and research and computing, respectively. In particular, the newly approved SHiP proton beam-dump experiment (see SHiP to chart hidden sector) will complement the searches for light dark-sector particles that are presently conducted with NA64’s versatile setup, which is suitable for electron, positron, muon and hadron beams.
First-phase success
The FASER and SND experiments, now taking data in the LHC tunnel, are two of the successes of the PBC initiative’s first phase. Both search for new physics and study high-energy neutrinos along the LHC collision axis. FASER’s successor, FASER2, promises a 10,000-fold increase in sensitivity to beyond-the-Standard Model physics, said Jonathan Feng (UC Irvine). With the potential to detect thousands of TeV-scale neutrinos a day, it could also measure parton distribution functions and thereby enhance the physics reach of the high-luminosity LHC (HL-LHC). FASER2 may form part of the proposed Forward Physics Facility, set to be located 620 m away, along a tangent from the HL-LHC’s interaction point 1. A report on the facility’s technical infrastructure is scheduled for mid-2024, with a letter of intent foreseen in early 2025. By contrast, the CODEX-b and ANUBIS experiments are being designed to search for feebly interacting particles transverse to LHCb and ATLAS, respectively. In all these endeavours, the Feebly Interacting Particle Physics Centre will act as a hub for exchanges between experiment and theory.
Francesco Terranova (Milano-Bicocca) and Marc Andre Jebramcik (CERN) explained how ENUBET and NuTAG have been combined to optimise a “tagged” neutrino beam for cross-section measurements, in which the neutrino flavour is known from the decay process of its parent hadron. In the realm of quantum chromodynamics, SPS experiments with lead ions (the new NA60+ experiment) and light ions (NA61/SHINE) are aiming to decode the phases of nuclear matter in the non-perturbative regime. Meanwhile, AMBER is proposing to determine the charge radii of kaons and pions, and to perform meson spectroscopy, in particular with kaons.
The LHCspin collaboration presented a plan to open a new frontier of spin physics at the LHC, building upon the successful operation of the SMOG2 gas cell upstream of the LHCb detector. Studying collective phenomena at the LHC in this way could probe the structure of the nucleon in a so-far little-explored kinematic domain and make use of new probes such as charm mesons, said Pasquale Di Nezza (INFN Frascati).
Measuring moments
The TWOCRYST collaboration aims to demonstrate the feasibility and performance of a possible fixed-target experiment at the LHC to measure the electric and magnetic dipole moments (EDMs and MDMs) of charmed baryons, offering a complementary probe in searches for CP violation in the Standard Model. The technique would use two bent crystals, explained Pascal Hermes (CERN): the first deflects protons from the beam halo onto a target, and the resulting charm baryons are then deflected by the second “precession” crystal onto a detector such as LHCb, while their spins precess in the strong electric and magnetic fields of the deformed crystal lattice.
New ideas ranged from the measurement of molecular electric dipole moments at ISOLDE to measuring the gravitational field of the LHC beam
Several projects to detect axion-like particles were discussed, including a dedicated superconducting cavity for heterodyne detection being jointly developed by PBC and CERN’s Quantum Technology Initiative. Atom interferometry is another subject of common interest, with PBC demonstrating the technical feasibility of installing an atom interferometer with a baseline of 100 m in one of the LHC’s access shafts. Other new ideas ranged from the measurement of molecular EDMs at ISOLDE to measuring the gravitational field of the LHC beam.
Many fruitful discussions testified to the community’s continued determination to fully exploit the scientific potential of the CERN accelerator complex and infrastructure for projects complementary to high-energy-frontier colliders, and the annual meeting concluded as a resounding success. The PBC community ended the workshop by thanking co-founder Claude Vallée (CPPM Marseille), who retired as a PBC convener after almost a decade of integral work, and welcoming Gunar Schnell (Ikerbasque and UPV/EHU Bilbao), who will take over as convener.