
Jacques Haïssinski 1935–2024

Jacques Haïssinski, who played an important role in major particle-physics experiments, passed away on 25 March 2024 at the age of 89. His father Moïse worked with Marie Curie and was a long-time collaborator of her daughter Irène Joliot-Curie.

Jacques entered the École Normale Supérieure in 1954 and later went to Stanford, where he worked under Burton Richter on the pioneering Colliding Beam machine to collide electrons in flight using two storage rings. After his military service, Jacques joined the Laboratoire de l’Accélérateur Linéaire in Orsay to undertake a doctorate on the AdA (Anello di Accumulazione) ring. Built in Frascati from an idea of Bruno Touschek to collide in-flight electrons and positrons stored in the same vacuum chamber, AdA had been brought to Orsay by Pierre Marin to take advantage of the high intensity of the linac beams. Jacques mastered all aspects of the ground-breaking experiment and succeeded in detecting in-flight collisions for the very first time in 1963.

In accelerator physics, following a discovery on the ACO ring at Orsay, Jacques published in 1967 a seminal paper on the longitudinal equilibrium of particles in a storage ring, which contained the now widely used “Haïssinski equation”. He also collaborated with Stanford on the commissioning of SPEAR and later the SLC, the very first and so-far only linear collider. In phenomenology, following Touschek, he led a programme on radiative corrections and later lectured on this subject in preparation for LEP at the École de Gif in 1989.

But the main scientific activity of Jacques Haïssinski was experimental particle physics. He took part in many experiments, supervised theses at Orsay on ACO, and was spokesperson of the CELLO experiment at DESY. During the construction of LEP, Jacques served as chairperson of the LEP committee at CERN.

After LEP, Jacques turned his interests to astroparticle physics and cosmology, notably giving courses on the subject and collaborating on the EROS experiment and the Planck mission. During that time, he also took on responsibilities in the management of Paris-Sud University (at Orsay), and later held leadership roles at IN2P3 and in the Saclay laboratory DAPNIA (now IRFU). His leadership was greatly appreciated by the French high-energy physics community.

An outstanding teacher, Jacques also campaigned for the dissemination of knowledge to the public. He was a great humanist who was deeply concerned with social injustice and criminal wars. He presented his views publicly and believed that other physicists should do so too. Generous with his precious time, he was always available to pass on his knowledge and vast scientific culture. He influenced and inspired several generations of particle and accelerator physicists.

Mats Lindroos 1961–2024


Mats Lindroos, who made major contributions to accelerator technology, passed away on 2 May 2024 aged just 62.

Mats received his PhD in subatomic physics from Chalmers University of Technology in Gothenburg, Sweden in 1993 under the supervision of Björn Jonson. As a PhD student he studied decay properties and hyperfine interactions from oriented nuclei, making use of the low-temperature nuclear orientation facilities at ISOLDE, Daresbury and Studsvik. He joined CERN as a research fellow in 1993 and became a staff member in 1995.

While at CERN, Mats filled a number of diverse roles including being responsible for PS Booster operation and the technical coordination of the ISOLDE facility. He was one of the driving forces behind the HIE-ISOLDE project that commenced construction in 2009 and is now one of the major accelerated radioactive beam facilities worldwide. While at CERN he also played leading roles in several European Union-supported design studies for future conceptual accelerator facilities: the nuclear-physics radioactive beam facility EURISOL and the beta-beam neutrino factory. 

In 2009, when Sweden and Denmark were selected to be the host countries for the European Spallation Source (ESS), Mats returned to his roots in Sweden on secondment from CERN, formally joining the ESS in 2015. As one of the earliest members of the ESS organisation, he was responsible for establishing the nascent accelerator organisation as well as the accelerator collaboration, set up along CERN-like lines between major European accelerator laboratories across 10 countries, to undertake the technical design of this important part of the facility. Mats led the technical design of the 5 MW proton linac of the ESS and, from 2013, as head of the 100-strong accelerator division, he led the linac project that is now in the late stages of construction and installation. Even after stepping down from his leadership roles because of illness, he enthusiastically accepted a new one advising the ESS management. He remained fully engaged, and would undoubtedly have been instrumental in guiding the future evolution of the facility.

He set up a CERN-like collaboration between major European accelerator laboratories across 10 countries

As a globally recognised expert on accelerator technology, Mats served in an advisory role on many committees, such as the IJCLab strategic advisory board (France), the IN2P3 scientific committee (France), the J-PARC technical advisory committee (Japan), the PIP-II technical advisory committee at Fermilab (US) and CERN’s Scientific Policy Committee. As an adjunct professor at Lund University he enjoyed teaching and supervising students in addition to his numerous research, management and committee roles. Despite all these work activities, Mats found time to oversee, together with his partner Anette, the construction of a house on the south Swedish coast, where they enjoyed walking, gardening and being active in the local community.

Mats has touched all our lives with his energy and passion for research, his creativity for new ideas, his worldly knowledge, his sense of humour, and most importantly, his humanity and kindness. He will be greatly missed by all of us who had the privilege to count him as a friend and colleague.

Shy charm mesons confound predictions

ALICE figure 1

In the past two decades, it has become clear that three-quark baryons and quark–antiquark mesons cannot describe the full spectrum of hadrons. Dozens of exotic states have been observed in the charm sector alone. These states are interpreted either as compact objects with four or five valence quarks or as hadron molecules; however, their inner structures remain uncertain due to the complexity of calculations in quantum chromodynamics (QCD) and the lack of direct experimental measurements of the residual strong interaction between charm and light hadrons. A new femtoscopy measurement by the ALICE collaboration challenges theoretical expectations and the current understanding of QCD.

Femtoscopy is a well-established method for studying the strong interactions between hadrons. Experimentally, this is achieved by studying particle pairs with small relative momentum. In high-energy collisions of protons at the LHC, the distance between such hadrons at the time of production is about one femtometre, which is within the range of the strong nuclear force. From the momentum correlations of particle pairs, one extracts the scattering length, a0, which quantifies the final-state strong interaction between the two hadrons. By studying the momentum correlations of emitted particle pairs, it is possible to access the final-state interactions of even short-lived hadrons such as D mesons.
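
As a sketch of the formalism behind this statement (standard femtoscopy relations, not spelled out in the article), the measured correlation function is a convolution of the emission source with the pair wave function, and the scattering length a₀ enters through the low-momentum expansion of the scattering amplitude, quoted here in one common sign convention:

\[
C(k^*) = \int \mathrm{d}^3 r \; S(r)\, \bigl|\psi(k^*, r)\bigr|^2 , \qquad
f(k^*) \simeq \left( \frac{1}{a_0} + \frac{1}{2}\, r_0\, k^{*2} - i k^* \right)^{-1},
\]

where k* is the relative momentum in the pair rest frame, S(r) is the source function, r₀ is the effective range, and C(k*) tends to unity at large k*, where final-state interactions vanish.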

The scattering lengths are significantly smaller than the theoretical predictions

The ALICE collaboration has now, for the first time, measured the interaction of open-charm mesons (D+ and D*+) with charged pions and kaons for all charge combinations. The momentum correlation functions of each system were measured in proton–proton collisions at the LHC at a centre-of-mass energy of 13 TeV. As predicted by heavy-quark spin symmetry, the Dπ and D*π scattering lengths agree with each other, but both are significantly smaller than the theoretical predictions (figure 1). This implies that the measured interaction can be fully explained by the Coulomb force, the contribution from the strong interaction being negligible within the experimental precision. The small measured values of the scattering lengths challenge our understanding of the residual strong force of heavy-flavour hadrons in the non-perturbative limit of QCD.

These results also have an important impact on the study of the quark–gluon plasma (QGP) – a deconfined state of matter created in ultra-relativistic heavy-ion collisions. The rescattering of D mesons with the other hadrons (mostly pions and kaons) created in such collisions was thought to modify the D-meson spectra, in addition to the modification expected from the QGP formation. The present ALICE measurement demonstrates, however, that the effect of rescattering is expected to be very small.

More precise and systematic studies of charm–hadron interactions will be carried out with the upgraded ALICE detector in the coming years.

LHCb targets rare radiative decay

LHCb figure 1

Rare radiative b-hadron decays are powerful probes of the Standard Model (SM), sensitive to small deviations caused by potential new physics in virtual loops. One such process is the decay B⁰s → μ⁺μ⁻γ. The dimuon decay of the B⁰s meson is known to be extremely rare and has been measured with unprecedented precision by LHCb and CMS. While performing this measurement, LHCb also studied the B⁰s → μ⁺μ⁻γ decay, partially reconstructed owing to the missing photon, as a background component of the B⁰s → μ⁺μ⁻ process, and set a first upper limit on its branching fraction of 2.0 × 10⁻⁹ at 95% CL (red arrow in figure 1). However, this search was limited to the high-dimuon-mass region, whereas several theoretical extensions of the SM could manifest themselves in lower regions of the dimuon-mass spectrum. Reconstructing the photon is therefore essential to explore the spectrum thoroughly and probe a wide range of physics scenarios.

The LHCb collaboration now reports the first search for the B⁰s → μ⁺μ⁻γ decay with a reconstructed photon, exploring the full dimuon-mass spectrum. Photon reconstruction poses additional experimental challenges, such as degrading the mass resolution of the B⁰s candidate and introducing additional background contributions. To meet these challenges, machine-learning algorithms and new variables have been specifically designed to discriminate the signal from background processes with similar signatures. The analysis is performed separately in three dimuon-mass ranges to exploit differences along the spectrum, such as the ϕ(1020)-meson contribution in the low-invariant-mass region. The μ⁺μ⁻γ invariant-mass distributions of the selected candidates are fitted, including all background contributions and the B⁰s → μ⁺μ⁻γ signal component. Figure 2 shows the fit for the lowest dimuon-mass region.

LHCb figure 2

No significant B⁰s → μ⁺μ⁻γ signal is found in any of the three dimuon-mass regions, consistent with the background-only hypothesis. Upper bounds on the branching fraction are set, shown as black arrows in figure 1. The mass fit is also performed on the combined candidates of the three dimuon-mass regions, setting a combined upper limit on the branching fraction of 2.8 × 10⁻⁸ at 95% CL.

SM theoretical predictions for b decays become particularly difficult to calculate when a photon is involved, and they carry large uncertainties due to the B⁰s → γ local form factors. The B⁰s → μ⁺μ⁻γ decay provides a unique opportunity to validate the different theoretical approaches, which do not agree with each other, as shown by the coloured bands in figure 1. Theoretical calculations of the branching fraction currently lie below the experimental limits. The upgraded LHCb detector and the increased luminosity of the LHC’s Run 3 are providing the conditions to study rare radiative b-hadron decays with greater precision and, eventually, to find evidence for the B⁰s → μ⁺μ⁻γ decay.

A logical freight train

Steven Weinberg was a logical freight train – for many, the greatest theorist of the second half of the 20th century. It is timely to reflect on his legacy, the scientific component of which is laid out in a new collection of his publications selected by theoretical physicist Michael Duff (Imperial College).

Six chapters cover Weinberg’s most consequential contributions to effective field theory, the Standard Model, symmetries, gravity, cosmology and short-form popular science writing. I can’t identify any notable omissions and I doubt many others would, though some may raise an eyebrow at the exclusion of his paper deriving the Lee–Weinberg bound. Duff brings each chapter to life with first-hand anecdotes and details that will delight those of us furthest removed from the historical events. I am relatively young, and had only one meaningful interaction with Steven Weinberg. Though my contemporaries and I inhabit a scientific world whose core concepts are interwoven with, if not formed by, Steven Weinberg’s scientific legacy, unlike Michael Duff we are poorly qualified to comment on the ecosystem in which this legacy grew, or on aspects of his personality. This makes his commentary particularly valuable to younger readers.

I can envisage three distinct audiences for this new collection. The first is the lay theorist – those who are widely enough read to recognise the depth of Weinberg’s impact in theoretical physics and would like to know more. Such readers will find Duff’s introductions insightful and entertaining – helpful preparation for the more technical aspects of the papers, though expertise is required to fully grapple with many of them. There are also a few hand-picked non-technical articles that one would be unlikely to encounter without some serious investigative effort, including accessible articles on quantum field theory, effective field theory and life in the multiverse, in addition to the dedicated section on popular articles. These will delight any theory aficionado.

The second audience is practising theorists. If you’re going to invest in a printed collection of publications, then Weinberg is an obvious protagonist. Particle theorists consult his articles so often that they may as well have them close at hand. This collection contains those most often revisited and ought to be useful in this respect. Duff’s introductions also expose technical interconnections between the articles that might otherwise be missed.

Steven Weinberg: Selected Papers

The third audience I have in mind are beginning graduate students in particle theory, cosmology and beyond. It would not be a mistake to put this collection on recommended reading lists. In due course, most students should read many of these papers multiple times, so why not get on with it from the get-go? The section on effective field theories (EFTs) contains many valuable key ideas and perspectives. Plenty of those core concepts are still commonly encountered more by osmosis than with any rigour, and this can lead to confused notions around the general approach of EFT. Perhaps an incomplete introduction to EFT could be avoided for graduate students by cutting straight to the fundamentals contained here? The cosmology section also reveals many important modern concepts alongside lucid and fearless wrestling with big questions. The papers on gravity detail techniques that are frequently encountered in any first foray into modern amplitudology, as well as strategies to infer general lessons in quantum field theory from symmetries and self-consistency alone.

In my view, however, the most important section for beginning graduate students is that on the construction of the Standard Model (SM). It may be said that a collective amnesia has emerged regarding the scientific spirit that drove its development. The SM was built by model builders. I don’t say this facetiously. They made educated guesses about the structure of the “ultraviolet” (microscopic) world based on the “infrared” (long-distance) breadcrumbs embedded within low-energy experimental observations. Decades after this swashbuckling era came to an end, there is a growing tendency to view the SM as something rigid, providentially bestowed and permanent. The academic bravery and risk-taking that was required to take the necessary leaps forward then, and which may be required now, is no better demonstrated than in “A Model of Leptons”. All young theorists should read it multiple times. It exemplifies that Steven Weinberg was not only an unstoppable force of logic, but also a plucky risk-taker. It’s inspirational that its final paragraph, which laid out the structure of nature at the electroweak scale, ends with doubt and speculation: “And if this model is renormalisable, then what happens when we extend it to include the couplings of A and B to the hadrons?” By working their way through this collection, graduate students may be inspired to similar levels of ambition and jeopardy.

Amongst the greatest scientists of the last century

In the weeks that followed the passing of Steven Weinberg, I sensed amongst colleagues of all generations some moods that I could have anticipated: the loss not only of a bona fide truth-seeker, but also of a leader, frequently the leader. I also perceived a feeling that transcended the scientific realm alone, of someone whose creative genius ought to be recognised amongst the greatest scientists, musicians, artists and human beings of the last century. How can we productively reflect on that? I imagine we would all do well to learn not only of Weinberg’s important individual scientific insights, but also to attempt to absorb his overall methodology in identifying interesting questions, in breaking new trails in fundamental physics, and in pursuing logic and clarity wherever they may lead. This collection is not a bad place to start.

ATLAS turbocharges event simulation

ATLAS figure 1

As the harvest of data from the LHC experiments continues to increase, so does the required number of simulated collisions. This is a resource-intensive task as hundreds of particles must be tracked through complex detector geometries for each simulated physics collision – and Monte Carlo statistics must typically exceed experimental statistics by a factor of 10 or more, to minimise uncertainties when measured distributions are compared with theoretical predictions. To support data taking in Run 3 (2022–2025), the ATLAS collaboration therefore developed, evaluated and deployed a wide array of detailed optimisations to its detector-simulation software.

The production of simulated data begins with the generation of particles produced within the LHC’s proton–proton or heavy-ion collisions, followed by the simulation of their propagation through the detector and the modelling of the electronics signals from the active detection layers. Considerable computing resources are consumed when hadrons, photons and electrons enter the electromagnetic calorimeters and produce showers with many secondary particles whose trajectories and interactions with the detector material must be computed. The complex accordion geometry of the ATLAS electromagnetic calorimeter makes the Geant4 simulation of the shower development in the calorimeter system particularly compute-intensive, accounting for about 80% of the total simulation time for a typical collision event.

Since computing costs money and consumes electrical power, it is highly desirable to speed up the simulation of collision events without compromising accuracy. For example, considerable CPU resources were previously spent in the transportation of photons and neutrons; this has been mitigated by randomly removing 90% of the photons (neutrons) with energy below 0.5 (2) MeV and scaling up the energy deposited from the remaining 10% of low-energy particles. The simulation of photons in the finely segmented electromagnetic calorimeter took considerable time because the probabilities for each possible interaction process were calculated every time photons crossed a material boundary. That calculation time has been greatly reduced by using a uniform geometry with no photon transport boundaries and by determining the position of simulated interactions using the ratio of the cross sections in the various material layers. The combined effect of the optimisations brings an average speed gain of almost a factor of two.
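
The photon and neutron treatment described above is a form of “Russian roulette” biasing. The Python sketch below illustrates the idea only; it is not ATLAS code, and the function and field names are hypothetical:

```python
import random

# Energy thresholds below which particles become candidates for removal
# (values quoted in the text: 0.5 MeV for photons, 2 MeV for neutrons).
THRESHOLDS_MEV = {"photon": 0.5, "neutron": 2.0}
SURVIVAL_PROBABILITY = 0.1  # keep only 10% of the low-energy particles

def russian_roulette(particles, rng=random.random):
    """Drop 90% of low-energy photons and neutrons at random and re-weight
    the survivors so that the average deposited energy is preserved."""
    survivors = []
    for p in particles:  # each p is a dict: {"type", "energy_mev", "weight"}
        threshold = THRESHOLDS_MEV.get(p["type"])
        if threshold is not None and p["energy_mev"] < threshold:
            if rng() < SURVIVAL_PROBABILITY:
                # survivor: weight scaled up by a factor of 10
                p = dict(p, weight=p["weight"] / SURVIVAL_PROBABILITY)
            else:
                continue  # removed: not transported any further
        survivors.append(p)
    return survivors
```

Because each surviving low-energy particle carries ten times its original weight, the expected energy deposit is unchanged on average, while the number of particles to transport, and hence the CPU time, drops sharply.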

ATLAS has also successfully used fast-simulation algorithms to leverage the available computational resources. Fast simulation aims to avoid the compute-expensive Geant4 simulation of calorimeter showers by using parameterised models that are significantly faster and retain most of the physics performance of the more detailed simulation. However, one of the major limitations of the fast simulation employed by ATLAS during Run 2 was the insufficiently accurate modelling of physics observables such as the substructure of jets reconstructed with large-radius clustering algorithms.

AtlFast3 offers fast, high-precision physics simulations

For Run 3, ATLAS has developed a completely redesigned fast simulation toolkit, known as AtlFast3, which performs the simulation of the entire ATLAS detector. While the tracking systems continue to be simulated using Geant4, the energy response in the calorimeters is simulated using a hybrid approach that combines two new tools: FastCaloSim and FastCaloGAN.

FastCaloSim parametrises the longitudinal and lateral development of electromagnetic and hadronic showers, while the simulated energy response from FastCaloGAN is based on generative adversarial neural networks that are trained on pre-simulated Geant4 showers. AtlFast3 effectively combines the strengths of both approaches by selecting the most appropriate algorithm depending on the properties of the shower-initiating particles, tuned to optimise the performance of reconstructed observables, including those exploiting jet substructure. As an example, figure 1 shows that the hybrid AtlFast3 approach accurately reproduces the number of jet constituents obtained from the detailed Geant4 simulation.
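
In spirit, the hybrid selection can be pictured as a simple dispatcher over the shower-initiating particle, as in the Python sketch below; the stand-in functions and the selection rules are purely illustrative assumptions, not the actual AtlFast3 interfaces or tuning:

```python
# Illustrative stand-ins for the three simulation back-ends.
def fastcalosim(particle, energy_gev):
    return f"parametrised shower for a {energy_gev} GeV {particle}"

def fastcalogan(particle, energy_gev):
    return f"GAN-generated shower for a {energy_gev} GeV {particle}"

def geant4_full(particle, energy_gev):
    return f"full Geant4 shower for a {energy_gev} GeV {particle}"

def simulate_calorimeter_shower(particle, energy_gev):
    """Hypothetical dispatcher: pick a tool according to the
    shower-initiating particle. AtlFast3's real selection also depends on
    other properties and is tuned on reconstructed observables."""
    if particle in ("electron", "photon"):
        return fastcalosim(particle, energy_gev)
    if particle in ("pion", "kaon", "proton"):
        return fastcalogan(particle, energy_gev)
    return geant4_full(particle, energy_gev)
```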

With its significantly improved physics performance and a speedup between a factor of 3 (for Z → ee events) and 15 (for high-pT di-jet events), AtlFast3 will play a crucial role in delivering high-precision physics simulations of ATLAS for Run 3 and beyond, while meeting the collaboration’s budgetary compute constraints.

The inventive pursuit of UHF gravitational waves

Since their first direct detection in 2015, gravitational waves (GWs) have become pivotal in our quest to understand the universe. The ultra-high-frequency (UHF) band offers a window to discover new physics beyond the Standard Model (CERN Courier March/April 2022 p22). Unleashing this potential requires theoretical work to investigate possible GW sources and experiments with far greater sensitivities than those achieved today.

A workshop at CERN from 4 to 8 December 2023 leveraged impressive experimental progress in a range of fields. Attended by nearly 100 international scientists – a noteworthy increase from the 40 experts who attended the first workshop at ICTP Trieste in 2019 – the workshop showcased the field’s expanded research interest and collaborative efforts. Concretely, about 10 novel detector concepts have been developed since the first workshop.

GWs can be searched for in a few different ways: by observing changes in the separation between detector components, by detecting the vibrations they excite in resonant detectors, and by converting them into electromagnetic radiation in strong magnetic fields. Substantial progress has been made in all three experimental directions.

Levitating concepts

The leading concepts for the first approach involve optically levitated sensors, such as high-aspect-ratio sodium–yttrium–fluoride prisms, and semi-levitated sensors, such as thin silicon or silicon–nitride nanomembranes in long optical resonators. These technologies are currently under study by various groups in the Levitated Sensor Detectors collaboration and at DESY.

For the second approach, the main focus is on millimetre-scale quartz cavities similar to those used in precision clocks. A network of such detectors, known as GOLDEN, is being planned, involving collaborations among UC Davis, University College London and Northwestern University. Superconducting radio-frequency cavities also present a promising technology. A joint effort between Fermilab and DESY is leveraging the existing MAGO prototype to gain insights and design further optimised cavities.

Regarding the third approach, a prominent example is optical high-precision interferometry, combined with a series of accelerator dipole magnets similar to those used in the light-shining-through-a-wall axion-search experiment, ALPS II (Any Light Particle Search II) or the axion helioscope CAST and its planned successor IAXO. In fact, ALPS II is anticipated to commence a dedicated GW search in 2028. Additionally, other notable concepts inspired by axion dark-matter searches involve toroidal magnets, exemplified by experiments like ABRACADABRA, or solenoidal magnets such as BASE or MADMAX.

All three approaches stand to benefit from burgeoning advances in quantum sensing, which promise to enhance sensitivities by orders of magnitude. In this landscape, axion dark-matter searches and UHF GW detection are poised to work in close collaboration, leveraging quantum sensing to achieve unprecedented results. Concepts that demonstrate synergies with axion-physics searches are crucial at this stage, and can be facilitated by incremental investments. Such collaboration builds awareness within the scientific community and presents UHF searches as an additional, compelling science case for their construction.

The workshop showcased the field’s expanded research interest and collaborative efforts

Cross-disciplinary research is also crucial to understand cosmological sources and constraints on UHF GWs. For the former, our understanding of primordial black holes has significantly matured, transitioning from preliminary estimates to a robust framework. Additional sources, such as parabolic encounters and exotic compact objects, are also gaining clarity. For the latter, the workshop highlighted how strong magnetic fields in the universe, such as those in extragalactic voids and planetary magnetospheres, can help set limits on the conversion between electromagnetic and gravitational waves.

Despite much progress, the sensitivity needed to detect UHF GWs remains a visionary goal, requiring the constant pursuit of inventive new ideas. To aid this, the community is taking steps to be more inclusive. The living review produced after the first workshop (arXiv:2011.12414) will be revised to be more accessible for people outside our community, breaking down detector concepts into fundamental building blocks for easier understanding. Plans are also underway to establish a comprehensive research repository and standardise data formats. These initiatives are crucial for fostering a culture of open innovation and expanding the potential for future breakthroughs in UHF GW research. Finally, a new, fully customisable and flexible GW plotter including the UHF frequency range is being developed to benefit the entire GW community.

The journey towards detecting UHF GWs is just beginning. While current sensitivities are not yet sufficient, the community’s commitment to developing innovative ideas is unwavering. With the collective efforts of a dedicated scientific community, the next leap in gravitational-wave research is on the horizon. Limits exist to be surpassed!

New Challenges and Opportunities in Physics Education


New Challenges and Opportunities in Physics Education presents itself as a guidebook for high-school physics educators who are navigating modern challenges in physics education. But whether you’re teaching the next generation of physicists, exploring the particles of the universe, or are simply interested in the evolution of physics education, this book promises valuable insights. It doesn’t aim to cater to all equally, but rather to offer a spark of inspiration to a broad spectrum of readers.

The book is structured in two distinct sections: modern physics topics, and the latest information and communication technologies (ICTs) for classrooms. The editors bring together a diverse blend of expertise in modern physics, physics education and interdisciplinary approaches. Marilena Streit-Bianchi and Walter Bonivento are well-known names in high-energy physics, with long and successful careers at CERN. In parallel, Marisa Michelini and Matteo Tuveri are pushing the limits of physics education with modern educational approaches and contemporary topics. All four are committed to making physics education engaging and relevant to today’s students.

The first part presents the core concepts of contemporary physics through a variety of narrative techniques, from historical recounting to imaginary dialogues, providing educators with a toolbox of resources to engage students in various learning scenarios. Does the teacher want to “flip the classroom” and assign some reading? They can turn to Salvatore Esposito’s account of the scientific contributions of Enrico Fermi. Do they want to encourage discussion? Mariano Cadoni and Mauro Dorato have them covered with a unique piece, “Gravity between Physics and Philosophy”, which can support interdisciplinary classroom discussions.

The second half of the book starts with an overview of ICT resources and classical-physics examples of how to use them in a classroom setting. The authors then explore the skills that teachers and students need to use ICTs effectively. The transition to ICT feels a little long, and the book struggles to weave the two sections into a cohesive narrative, but the second half nevertheless captures the title of the book perfectly – ICTs are the epitome of new opportunities in physics education. While much has been said about them in other works, this book offers a cherry-picked but well-rounded collection of ideas for enhancing educational experiences.

The authors not only emphasise modern physics and technology, but also another very important characteristic of modern science: collaboration. This is an important message to convey to students, as purely historical examples from classical physics sometimes present an elitist view of physics. Lone-genius narratives are often explicitly replaced by a collaborative understanding of breakthroughs.

The book would not be complete without input from actual teachers. One notable contribution comes from Michael Gregory, a particle-physics educator, who shares his experiences with distance learning together with Steve Goldfarb, the former IPPOG co-chair. During the pandemic, he used online tools to convey physics concepts not only to his own students, but to students and teachers around the world. His successful virtual science camps and online particle-physics courses reached frequently overlooked audiences in remote locations.

Overall, New Challenges and Opportunities in Physics Education emerges as a valuable resource for a diverse audience. It is a guidebook for educators searching for innovative strategies to spice up their physics teaching or to better weave modern science into their lessons. Although it might fall short of flawlessly joining the modern-physics content with the educational elements in the second half, its value is undeniable. The first part, in particular, serves as a treasure trove not only for educators but also for science communicators and even particle physicists seeking to engage with the public, using the common ground of high-school physics knowledge.

The neutrino mass puzzle

After all these years, neutrinos remain extraordinary – and somewhat deceptive. The experimental success of the three-massive-neutrino paradigm over the past 25 years makes it easy to forget that massive neutrinos are not part of the Standard Model (SM) of particle physics.

The problem lies with how neutrinos acquire mass. Nonzero neutrino masses are not possible without the existence of new fundamental fields, beyond those that are part of the SM. And we know virtually nothing about the particles associated with them. They could be bosons or fermions, light or heavy, charged or neutral, and experimentally accessible or hopelessly out of reach.

This is the neutrino mass puzzle. At its heart is the particle’s uniquely elusive nature, which is both the source of the problem and the main challenge in resolving it.

Mysterious and elusive

Despite outnumbering other known massive particles in the universe by 10 orders of magnitude, neutrinos are the least understood of the matter particles. Unlike electrons, they do not participate in electromagnetic interactions. Unlike quarks, they do not participate in the strong interactions that bind protons and neutrons together. Neutrinos participate only in aptly named weak interactions. Out of the trillions of neutrinos that the Sun beams through you each second, only a handful will interact with your body during your lifetime.


Neutrino physics has therefore had a rather tortuous and slow history. The existence of neutrinos was postulated in 1930 but only confirmed in the 1950s. The hypothesis that there are different types of neutrinos was first raised in the 1940s but only confirmed in the 1960s. And the third neutrino type, postulated when the tau lepton was discovered in the 1970s, was only directly observed in the year 2000. Nonetheless, over the years neutrino experiments have played a decisive role in the development of the most successful theory in modern physics: the SM. And at the turn of the 21st century, neutrino experiments revealed that there is something missing in its description of particle physics.

Neutrinos are fermions with spin one-half that interact with the charged leptons (the electron, muon and tau lepton) and the particles that mediate the weak interactions (the W and Z bosons). There are three neutrino types, or flavours: electron-type (νe), muon-type (νμ) and tau-type (ντ), and each interacts exclusively with its namesake charged lepton. One of the predictions of the SM is that neutrino masses are exactly zero, but a little over 25 years ago, neutrino experiments revealed that this is not exactly true. Neutrinos have tiny but undeniably nonzero masses.

Mixing it up

The search for neutrino masses is almost as old as Pauli’s 93-year-old postulate that neutrinos exist. They were ultimately discovered around the turn of the millennium through the observation of neutrino flavour oscillations. It turns out that we can produce one of the neutrino flavours (for example νμ) and later detect it as a different flavour (for example νe) so long as we are willing to wait for the neutrino flavour to change. The probability associated with this phenomenon oscillates in spacetime with a characteristic distance that is inversely proportional to the differences of the squares of the neutrino masses. Given the tininess of neutrino masses and mass splittings, these distances are frequently measured in hundreds of kilometres in particle-physics experiments.
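
For two flavours, the textbook oscillation probability (quoted here for orientation; it is not written out in the article) makes this scaling explicit:

\[
P(\nu_\mu \to \nu_e) = \sin^2 2\theta \, \sin^2\!\left( \frac{\Delta m^2 L}{4E} \right)
\approx \sin^2 2\theta \, \sin^2\!\left( 1.27\, \frac{\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right),
\]

so for Δm² of order 10⁻³ eV² and GeV-scale accelerator neutrinos, the first oscillation maximum indeed lies several hundred kilometres from the source.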

Neutrino oscillations also require the leptons to mix. This means that the neutrino flavour states are not particles with a well defined mass but are quantum superpositions of different neutrino states with well defined masses. The three mass eigenstates are related to the three flavour eigenstates via a three-dimensional mixing matrix, which is usually parameterised in terms of mixing angles and complex phases.
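
Explicitly, in the standard notation (not written out in the article), the flavour states are superpositions of the mass states through the PMNS matrix U, conventionally factorised into three rotations and a CP-violating phase δ (with two further phases if neutrinos are Majorana particles):

\[
|\nu_\alpha\rangle = \sum_{i=1}^{3} U^*_{\alpha i}\, |\nu_i\rangle , \qquad
U =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13} e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13} e^{i\delta} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]

where c_{ij} = cos θ_{ij}, s_{ij} = sin θ_{ij} and α = e, μ, τ.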

The masses of all known matter particles

In the last few decades, experiments studying neutrinos produced in the Sun, in the atmosphere, in nuclear reactors and in particle accelerators in different parts of the world have measured the mixing parameters at the several-per-cent level. Assuming the mixing matrix is unitary, all but one have been shown to be nonzero. The measurements have revealed that the three neutrino mass eigenvalues are separated by two different mass-squared differences: a small one of order 10⁻⁴ eV² and a large one of order 10⁻³ eV². Data therefore reveal that at least two of the neutrino masses are different from zero. At least one of the neutrino masses is above 0.05 eV, and the second lightest is at least 0.008 eV. While neutrino oscillation experiments cannot measure the neutrino masses directly, precise measurements of beta-decay spectra and constraints from the large-scale structure of the universe offer complementary upper limits. The nonzero neutrino masses are constrained to be less than roughly 0.1 eV.
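
The quoted lower bounds follow directly from the splittings. Taking representative values of roughly 7 × 10⁻⁵ eV² and 2.5 × 10⁻³ eV² (assumed here for illustration) and letting the lightest mass go to zero in the normal ordering,

\[
m_{\rm heaviest} \ge \sqrt{\Delta m^2_{\rm large}} \approx \sqrt{2.5\times 10^{-3}\ \mathrm{eV}^2} \approx 0.05\ \mathrm{eV},
\qquad
m_{\rm second} \ge \sqrt{\Delta m^2_{\rm small}} \approx \sqrt{7\times 10^{-5}\ \mathrm{eV}^2} \approx 0.008\ \mathrm{eV}.
\]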

These masses are tiny when compared to the masses of all the other particles (see “Chasm” figure). The mass of the lightest charged fermion, the electron, is of order 10⁶ eV. The mass of the heaviest fermion, the top quark, is of order 10¹¹ eV, as are the masses of the W, Z and Higgs bosons. These particle masses are all at least seven orders of magnitude heavier than those of the neutrinos. No one knows why neutrino masses are dramatically smaller than those of all other massive particles.

The Standard Model and mass

To understand why the SM predicts neutrino masses to be zero, it is necessary to appreciate that particle masses are complicated in this theory. The reason is as follows. The SM is a quantum field theory. Interactions between the fields are strictly governed by their properties: spin, various “local” charges, which are conserved in interactions, and – for fermions like the neutrinos, charged leptons and quarks – another quantum number called chirality.

In quantum field theories, mass is an interaction between a right-chiral and a left-chiral field. A naive picture is that the mass interaction constantly converts left-chiral states into right-chiral ones (and vice versa), and the end result is a particle with a nonzero mass. It turns out, however, that for all known fermions the left-chiral and right-chiral fields carry different charges. The immediate consequence is that you can’t turn one into the other without violating the conservation of some charge, so none of the fermions is allowed to have a mass: the SM naively predicts that all fermion masses are zero!
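
In equations (a standard field-theory sketch, added here for concreteness), a fermion mass term couples the two chiralities, which is why it must respect their charges:

\[
\mathcal{L}_{\rm mass} = -\, m \left( \bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L \right),
\]

and such a term is forbidden whenever ψ_L and ψ_R carry different conserved charges.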

The Higgs field was invented to fix this shortcoming. It is charged in such a way that some right-chiral and left-chiral fermions are allowed to interact with one another and with the Higgs field, which, uniquely among all known fields, is thought to have been turned on everywhere since the phase transition that triggered electroweak symmetry breaking very early in the history of the universe. In other words, so long as the vacuum configuration of the Higgs field is not trivial, fermions acquire a mass thanks to these interactions.

This is not only a great idea; it is also at least mostly correct, as spectacularly confirmed by the discovery of the Higgs boson a little over a decade ago. It has many verifiable consequences. One is that the strength with which the Higgs boson couples to different particles is proportional to the particle’s mass – the Higgs prefers to interact with the top quark or the Z and W bosons rather than with the electron or the light quarks. Another consequence is that all masses are proportional to the value of the Higgs field in the vacuum (10¹¹ eV) and, in the SM, we naively expect all particle masses to be similar.
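
Schematically (a textbook relation, consistent with the numbers quoted in the article), replacing the forbidden direct mass term by an interaction with the Higgs field H gives each fermion a mass proportional to its coupling y_f and to the vacuum value v of the Higgs field:

\[
\mathcal{L}_{\rm Yukawa} = -\, y_f\, \bar{\psi}_L H \psi_R + \text{h.c.}
\quad\longrightarrow\quad
m_f = \frac{y_f\, v}{\sqrt{2}}, \qquad v \approx 2.5 \times 10^{11}\ \mathrm{eV},
\]

so the coupling of the Higgs boson to each fermion, y_f = √2 m_f / v, is indeed proportional to its mass.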

Neutrino masses are predicted to be zero because, in the SM, there are no right-chiral neutrino fields and hence none for the left-chiral neutrinos – the ones we know about – to “pair up” with. Neutrino masses therefore require the existence of new fields, and hence new particles, beyond those in the SM.

Wanted: new fields

The list of candidate new fields is long and diverse. For example, the new fields that allow for nonzero neutrino masses could be fermions or bosons; they could be neutral or charged under SM interactions, and they could be related to a new mass scale other than the vacuum value of the SM Higgs field (10¹¹ eV), which could be either much smaller or much larger. Finally, while these new fields might be “easy” to discover with the current and near-future generation of experiments, they might equally turn out to be impossible to probe directly in any particle-physics experiment in the foreseeable future.

Though there are too many possibilities to list, they can be classified into three very broad categories: neutrinos acquire mass by interacting with the same Higgs field that gives mass to the charged fermions; by interacting with a similar Higgs field with different properties; or through a different mechanism entirely.


At first glance, the simplest idea is to postulate the existence of right-chiral neutrino fields and further assume they interact with the Higgs field and the left-chiral neutrinos, just like right-chiral and left-chiral charged leptons and quarks. There is, however, something special about right-chiral neutrino fields: they are completely neutral relative to all local SM charges. Returning to the rules of quantum field theory, completely neutral chiral fermions are allowed to interact “amongst themselves” independent of whether there are other right-chiral or left-chiral fields around. This means that the right-chiral neutrino fields should come along with a different mass, one that is independent of the 10¹¹ eV vacuum value of the Higgs field.
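
Concretely (a standard sketch, not spelled out in the article), once a right-chiral neutrino field ν_R is introduced, the rules of the theory allow two kinds of mass terms, and nothing ties the second, “Majorana” mass M to the vacuum value of the Higgs field:

\[
\mathcal{L} \supset -\, y_\nu\, \bar{L}\, \tilde{H}\, \nu_R \;-\; \frac{M}{2}\, \overline{\nu_R^{\,c}}\, \nu_R \;+\; \text{h.c.},
\]

where L is the lepton doublet; the first term is the Higgs-type (Dirac) mass and the second is allowed precisely because ν_R carries no SM charge.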

To prevent this from happening, the right-chiral neutrinos must possess some kind of conserved charge that is shared with the left-chiral neutrinos. If this scenario is realised, there is some new, unknown fundamental conserved charge out there. This hypothetical new charge is called lepton number: electrons, muons, tau leptons and neutrinos are assigned charge plus one, while positrons, antimuons, tau antileptons and antineutrinos have charge minus one. A prediction of this scenario is that the neutrino and the antineutrino are different particles since they have different lepton numbers. In more technical terms, the neutrinos are massive Dirac fermions, like the charged leptons and the quarks. In this scenario, there are new particles associated with the right-chiral neutrino field, and a new conservation law in nature.

Accidental conservation

As of today, there is no experimental evidence that lepton number is not conserved, and readers may question whether this really is a new conservation law. In the SM, however, the conservation of lepton number is merely “accidental”: once all other symmetries and constraints are taken into account, the theory happens to possess this symmetry. But lepton-number conservation is no longer an accidental symmetry when right-chiral neutrinos are added, and these chargeless, apparently undetectable particles would have completely different properties if it were not imposed.

If lepton number conservation is imposed as a new symmetry of nature, making neutrinos pure Dirac fermions, there appears to be no observable consequence other than nonzero neutrino masses. Given the tiny neutrino masses, the strength of the interaction between the Higgs boson and the neutrinos is predicted to be at least seven orders of magnitude smaller than that of all other Higgs couplings to fermions. Various ideas have been proposed to explain this remarkable chasm between the strength of the neutrino’s interaction with the Higgs field relative to that of all other fermions. They involve a plurality of theoretical concepts, including extra dimensions of space, mirror copies of our universe and dark sectors.
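
The size of the required coupling follows from the same relation that fixes the other fermion masses. Taking m_ν ≈ 0.05 eV as a representative value (an illustrative number consistent with the bounds quoted above),

\[
y_\nu = \frac{\sqrt{2}\, m_\nu}{v} \approx \frac{\sqrt{2} \times 0.05\ \mathrm{eV}}{2.5 \times 10^{11}\ \mathrm{eV}} \approx 3 \times 10^{-13},
\]

roughly seven orders of magnitude below the electron Yukawa coupling of about 3 × 10⁻⁶.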

Nonzero neutrino masses

A second possibility is that there are more Higgs fields in nature and that the neutrinos acquire a mass by interacting with a Higgs field that is different from the one that gives a mass to the charged fermions. Since the neutrino mass is proportional to the vacuum value of a different Higgs field, the fact that the neutrino masses are so small is easy to tolerate: they are simply proportional to a different mass scale that could be much smaller than 10¹¹ eV. Here, there are no right-chiral neutrino fields and the neutrino masses are interactions of the left-chiral neutrino fields amongst themselves. This is possible because, while the neutrinos possess weak-force charge, they have no electric charge. In the presence of the nontrivial vacuum of the Higgs fields, the weak-force charge is effectively not conserved and these interactions may be allowed. The fact that the Higgs particle discovered at the LHC – associated with the SM Higgs field – does not allow for this possibility is a consequence of its charges. Different Higgs fields can have different weak-force charges and end up doing different things. In this scenario, the neutrino and the antineutrino are, in fact, the same particle. In more technical terms: the neutrinos are massive Majorana fermions.
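
One representative realisation of this idea (an example chosen for illustration; the article does not single out any specific model) is a Higgs triplet Δ whose small vacuum value v_Δ directly sets the scale of a Majorana mass term for the left-chiral neutrinos:

\[
\mathcal{L} \supset -\, y_\Delta\, \overline{L^c}\, \Delta\, L + \text{h.c.}
\quad\Rightarrow\quad
m_\nu = y_\Delta\, v_\Delta , \qquad v_\Delta \ll v ,
\]

so sub-eV neutrino masses arise naturally if v_Δ lies far below the electroweak scale.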

Neutrino masses require the existence of new fields, and hence new particles, beyond those in the Standard Model

One way to think about this is as follows: the mass interaction transforms left-chiral objects into right-chiral objects. For electrons, for example, the mass converts left-chiral electrons into right-chiral electrons. It turns out that the antiparticle of a left-chiral object is right-chiral and vice versa, and it is tempting to ask whether a mass interaction could convert a left-chiral electron into a right-chiral positron. The answer is no: electrons and positrons are different objects and converting one into the other would violate the conservation of electric charge. But this is no barrier for the neutrino, and we can contemplate the possibility of converting a left-chiral neutrino into its right-chiral antiparticle without violating any known law of physics. If this hypothesis is correct, the hypothetical lepton-number charge, discussed earlier, cannot be conserved. This hypothesis is experimentally neither confirmed nor refuted, but it could soon be tested with the observation of neutrinoless double-beta decays – nuclear decays that can only occur if lepton-number symmetry is violated. There is an ongoing worldwide campaign to search for the neutrinoless double-beta decay of various nuclei.

A new source of mass

In the third category, there is a source of mass different from the vacuum value of the Higgs field, and the neutrino masses are an amalgam of the vacuum value of the Higgs field and this new source of mass. A very low new mass scale might be discovered in oscillation experiments, while consequences of heavier ones may be detected in other types of particle-physics experiments, including measurements of beta and meson decays, charged-lepton properties, or the hunt for new particles at high-energy colliders. Searches for neutrinoless double-beta decay can reveal different sources for lepton-number violation, while ultraheavy particles can leave indelible footprints in the structure of the universe through cosmic collisions. The new physics responsible for nonzero neutrino masses might also be related to grand-unified theories or the origin of the matter–antimatter asymmetry of the universe, through a process referred to as leptogenesis. The range of possibilities spans 22 orders of magnitude (see “eV to ZeV” figure).
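
A widely studied benchmark in this category (an illustration consistent with, but not explicit in, the text) is the type-I seesaw, in which the Higgs-induced Dirac mass m_D = y_ν v/√2 combines with a heavy new scale M:

\[
m_\nu \simeq \frac{m_D^2}{M} = \frac{y_\nu^2\, v^2}{2M},
\]

so that m_ν ≈ 0.05 eV with y_ν of order one points to M around 10¹⁴–10¹⁵ GeV, while smaller couplings bring M down towards experimentally accessible scales, spanning the enormous range of possibilities shown in the figure.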

Challenging scenarios

Since the origin of the neutrino masses here is qualitatively different from that of all other particles, the values of the neutrino masses are expected to be qualitatively different. Experimentally, we know that neutrino masses are much smaller than all charged-fermion masses, so many physicists believe that the tiny neutrino masses are strong indirect evidence for a source of mass beyond the vacuum value of the Higgs field. In most of these scenarios, the neutrinos are also massive Majorana fermions. The challenge here is that if a new mass scale exists in fundamental physics, we know close to nothing about it. It could be within direct reach of particle-physics experiments, or it could be astronomically high, perhaps as large as 10¹² times the vacuum value of the SM’s Higgs field.

Searching for neutrinoless double-beta decay is the most promising avenue to reveal whether neutrinos are Majorana or Dirac fermions

How do we hope to learn more? We need more experimental input. There are many outstanding questions that can only be answered with oscillation experiments. These could provide evidence for new neutrino-like particles or new neutrino interactions and properties. Meanwhile, searching for neutrinoless double-beta decay is the most promising avenue to experimentally reveal whether neutrinos are Majorana or Dirac fermions. Other activities include high-energy collider searches for new Higgs bosons that like to talk to neutrinos and new heavy neutrino-like particles that could be related to the mechanism of neutrino mass generation. Charged-lepton probes, including measurements of the anomalous magnetic moment of muons and searches for lepton-flavour violation, may provide invaluable clues, while surveys of the cosmic microwave background and the distribution of galaxies could also reveal footprints of the neutrino masses in the structure of the universe.

We still know very little about the new physics uncovered by neutrino oscillations. Only a diverse experimental programme will reveal the nature of the new physics behind the neutrino mass puzzle.

The Many Voices of Modern Physics


This book provides a rich glimpse into written science communication throughout a century that introduced many new and abstract concepts in physics. It begins with Einstein’s 1905 paper “On the Electrodynamics of Moving Bodies”, in which he introduced special relativity. Atypically, the paper starts with a thought experiment that helps the reader follow a complex and novel physical mechanism. Authors Harmon and Gross analyse and explain the text and its terminology, and bring further perspective by adding comments made by other scientists and science writers of the time. They follow this style of analysis throughout the book, covering science from the smallest to the largest scales and addressing the controversies surrounding atomic weapons.

The only exception to these written evaluations of scientific papers is the chapter “Astronomical value”, in which the authors revisit the times of great astronomers such as Galileo Galilei and the Herschel siblings William and Caroline. Even back then, researchers needed sponsors and supporters to fund their research. Galileo, for instance, regularly presented his findings to the Medici family and fuelled fascination in his patrons so that he was able to continue his work.

While the book was being written, Gross, a professor of rhetoric and communications, died unexpectedly, leaving Harmon, a science writer and editor in communications at Argonne National Laboratory, to complete the work.

Though the style is somewhat repetitive, readers can pick a topic from the contents and see how scientists and communicators interacted with their audiences. While in-depth scientific knowledge is not required, the book is best suited to those familiar with the basics of physics who want to gain new perspectives on some of the most important breakthroughs of the past century and beyond. Indeed, by casting well-known texts in a communication context, the book offers analogies and explanations that can be used by anyone involved in public engagement.
