Eugène Cremmer: 1942-2019

Eugène Cremmer

Theorist Eugène Cremmer, who passed away in October 2019, left his mark on superstring and supergravity theory. He will be remembered across the world as a brilliant colleague, as original as he was likeable.

Eugène was born in Paris in 1942 to parents who ran a bookstore. The neighbourhood children were firmly oriented towards vocational schools, and Eugène was trained in woodworking. He was eventually spotted by a mathematics teacher, obtained a technical Baccalauréat degree and entered the École Normale Supérieure (ENS) in Paris in 1962 to pursue mathematics. In 1968–1969, following a surge of research into dual models triggered by Daniele Amati and Martinus Veltman, Eugène began to compute higher-loop diagrams in a remarkable series of technically impressive papers. The first was written with André Neveu, and others with Joël Scherk in 1971–1972, while Eugène was a postdoc at CERN. At that time, CERN was an important cradle of string theory, with groups from different countries forming a critical mass.

In late 1974 Eugène returned to ENS with a small group of pilgrims from the theoretical-physics group at Orsay. He worked with Jean-Loup Gervais on string field theory and later collaborated with Scherk, the author, and several visitors on supersymmetry, supergravity and applications to string theory. His revolutionary 1976 paper with Scherk introduced the linking number of a cyclic dimension by a closed string. This would turn out to be crucial for heterotic string models, T-duality and mirror symmetry, and for the so-called Scherk–Schwarz compactification, and was soon applied to branes. The 1977 proposal with Scherk of spontaneous compactification of the six extra dimensions of space remains central in modern string theory. In 1978–1979 his pioneering papers on 11D supergravity and 4D N = 8 supergravity made the 11th dimension inescapable and exhibited exceptional (now widely used) duality symmetries. For these works, Eugène received the CNRS silver medal in 1983. Some 15 years later, duality symmetries were extended to higher-degree forms.

The successes of Eugène’s work led to many invitations abroad. Though he chose to remain in France, he maintained collaborations and activities at a high level, and was director of the ENS theoretical-physics laboratory from 2002 to 2005. Eugène was as regular as clockwork, arriving and leaving the lab at the same time every day – the only exception I witnessed was due to Peter van Nieuwenhuizen’s work addiction, which he enjoyably inflicted upon us for a while. At 12:18 p.m. Eugène would always gather all available colleagues to go to lunch, leading Guido Altarelli to observe: “Were Eugène to disappear, the whole lab would starve to death!” Eugène kept his papers in a cryptic, pre-computer order, and nobody could understand how he was able to extract any needed reference in no time, always remembering most of the content. He cultivated his inner energy by walking quickly while absorbed in thought. We have lost a role model and a modest, full-time physicist.

Tuning in to neutrinos

DUNE’s dual-phase prototype detector

In traditional Balinese music, instruments are made in pairs, with one tuned slightly higher in frequency than its twin. The notes are indistinguishable to the human ear when played together, but the sound recedes and swells a couple of times each second, encouraging meditation. This is a beating effect: fast oscillations at the mean frequency inside a slowly oscillating envelope. Similar physics is at play in neutrino oscillations. Rather than sound intensity, it’s the probability to observe a neutrino with its initial flavour that oscillates. The difference is how long it takes for the interference to make itself felt. When Balinese musicians strike a pair of metallophones, the notes take just a handful of periods to drift out of phase. By contrast, it takes more than 10²⁰ de Broglie wavelengths and hundreds of kilometres for neutrinos to oscillate in experiments like the planned mega-projects Hyper-Kamiokande and DUNE.

The zeitgeist began to shift to artificially produced neutrinos

Neutrino oscillations revealed a rare chink in the armour of the Standard Model: neutrinos are not massless, but are evolving superpositions of at least three mass eigenstates with distinct energies. A neutrino is therefore like three notes played together: frequencies so close, given the as-yet immeasurably small masses involved, that they are not just indistinguishable to the ear, but inseparable according to the uncertainty principle. As neutrinos are always ultra-relativistic, the energies of the mass eigenstates differ only by tiny mass contributions of m²/2E. As the mass eigenstates propagate, phase differences develop between them proportional to the squared-mass splittings Δm². The sought-after oscillation lengths range from a few metres to the diameter of the Earth.
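In the textbook two-flavour approximation (a simplification of the full three-flavour treatment the experiments actually use), the survival probability makes this beat explicit:

\[
P(\nu_\alpha \to \nu_\alpha) = 1 - \sin^2 2\theta \,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \approx 1 - \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV^2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

so each splitting Δm² sets a characteristic L/E at which the initial flavour fades and returns.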

Orthogonal mixtures

The neutrino physics of the latter third of the 20th century was bookended by two anomalies that uncloaked these effects. In 1968 Ray Davis’s observation of a deficit of solar neutrinos prompted Bruno Pontecorvo to make public his conjecture that neutrinos might oscillate. Thirty years later, the Super-Kamiokande collaboration’s analysis of a deficit of atmospheric muon neutrinos from the other side of the planet posthumously vindicated the visionary Italian, and later Soviet, theorist’s speculation. Subsequent observations have revealed that electron, muon and tau neutrinos are orthogonal mixtures of mass eigenstates ν₁ and ν₂, separated by a small so-called solar splitting Δm²₂₁, and ν₃, which is separated from that pair by a larger “atmospheric” splitting usually quantified by Δm²₃₂ (see “Little and large” figure). It is not yet known if ν₃ is the lightest or the heaviest of the trio. This is called the mass-hierarchy problem.

A narrow splitting between neutrino mass eigenstates

“In the first two decades of the 21st century we have achieved a rather accurate picture of neutrino masses and mixings,” says theorist Pilar Hernández of the University of Valencia, “but the ordering of the neutrino states is unknown, the mass of the lightest state is unknown and we still do not know if the neutrino mixing matrix has imaginary entries, which could signal the breaking of CP symmetry,” she explains. “The very different mixing patterns in quarks and leptons could hint at a symmetry relating families, and a more accurate exploration of the lepton-mixing pattern and the neutrino ordering in future experiments will be essential to reveal any such symmetry pattern.”

Today, experiments designed to constrain neutrino mixing tend to dispense with astrophysical neutrinos in favour of more controllable accelerator and reactor sources. The experiments span more than four orders of magnitude in size and energy and fall into three groups (see “Not natural” figure). Much of the limelight is taken by experiments that are sensitive to the large mass splitting Δm²₃₂, which include both a cluster of current (such as T2K) and future (such as DUNE) accelerator-neutrino experiments with long baselines and high energies, and a high-performing trio of reactor-neutrino experiments (Daya Bay, RENO and Double Chooz) with a baseline of about a kilometre, operating just above the threshold for inverse beta decay. The second group is a beautiful pair of long-baseline reactor-neutrino experiments (KamLAND and the soon-to-be-commissioned JUNO), which join experiments with solar neutrinos in having sensitivity to the smaller squared-mass splitting Δm²₂₁. Finally, the third group is a host of short-baseline accelerator-neutrino experiments and very-short-baseline reactor-neutrino experiments that are chasing tantalising hints of a fourth “sterile” neutrino (with no Standard-Model gauge interactions), which is split from the others by a squared-mass splitting of the order of 1 eV².

Neutrino-oscillation experiments

Artificial sources

Experiments with artificial sources of neutrinos have a storied history, dating from the 1950s, when physicists toyed with the idea of detecting neutrinos created in the explosion of a nuclear bomb, and eventually observed them streaming from nuclear reactors. The 1960s saw the invention of the accelerator neutrino. Here, proton beams smashed into fixed targets to create a decaying debris of charged pions and their concomitant muon neutrinos. The 1970s transformed these neutrinos into beams by focusing the charged pions with magnetic horns, leading to the discovery of weak neutral currents and insights into the structure of nucleons. It was not until the turn of the century, however, that the zeitgeist of neutrino-oscillation studies began to shift from naturally to artificially produced neutrinos. Just a year after the publication of the Super-Kamiokande collaboration’s seminal 1998 paper on atmospheric–neutrino oscillations, Japanese experimenters trained a new accelerator-neutrino beam on the detector.

Operating from 1999 to 2006, the KEK-to-Kamioka (K2K) experiment sent a beam of muon neutrinos from the KEK laboratory in Tsukuba to the Super-Kamiokande detector, 250 km away under Mount Ikeno on the other side of Honshu. K2K confirmed that muon neutrinos “disappear” as a function of propagation distance over energy. The experiments together supported the hypothesis of an oscillation to tau neutrinos, which could not be directly detected at that energy. By increasing the beam energy well above the tau-lepton mass, the CERN Neutrinos to Gran Sasso (CNGS) project, which ran from 2006 to 2012, confirmed the oscillation to tau neutrinos by directly observing tau leptons in the OPERA detector. Meanwhile, the Main Injector Neutrino Oscillation Search (MINOS), which sent muon neutrinos from Fermilab to northern Minnesota from 2005 to 2012, made world-leading measurements of the parameters describing the oscillation.

With νμ→ ντ oscillations established, the next generation of experiments innovated in search of a subtler effect. T2K (K2K’s successor, with the beam now originating at J-PARC in Tokai) and NOvA (which analyses oscillations over the longer baseline of 810 km between Fermilab and Ash River, Minnesota) both have far detectors offset by a few degrees from the direction of the peak flux of the beams. This squeezes the phase space for the pion decays, resulting in an almost mono-energetic flux of neutrinos. Here, a quirk of the mixing conspires to make the musical analogy of a pair of metallophones particularly strong: to a good approximation, the muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield an almost perfect disappearance of muon neutrinos – and maximum sensitivity to the appearance of electron neutrinos.
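The off-axis trick follows from two-body pion-decay kinematics (a standard textbook relation, quoted here for orientation): for a relativistic pion of energy E_π and Lorentz factor γ decaying at a small angle θ to the detector direction, the neutrino energy is approximately

\[
E_\nu \simeq \frac{\left(1 - m_\mu^2/m_\pi^2\right) E_\pi}{1 + \gamma^2\theta^2} \approx \frac{0.43\,E_\pi}{1 + \gamma^2\theta^2},
\]

so at a fixed angle of a few degrees E_ν depends only weakly on E_π, which is what produces the narrow-band flux.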

Testing CP symmetry

The three neutrino mass eigenstates mix to make electron, muon and tau neutrinos according to the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which describes three rotations and a complex phase δCP that can cause charge–parity (CP) violation – a question of paramount importance in the field due to its relevance to the unknown origin of the matter–antimatter asymmetry in the universe. Whatever the value of the complex phase, leptonic CP violation can only be observed if all three of the angles in the PMNS matrix are non-zero. Experiments with atmospheric and solar neutrinos demonstrated this for two of the angles. At the beginning of the last decade, short-baseline reactor-neutrino experiments in China (Daya Bay), Korea (RENO) and France (Double Chooz) were in a race with T2K to establish if the third angle, which leads to a coupling between ν₃ and electrons, was also non-zero. In the reactor experiments this would be seen as a small deficit of electron antineutrinos a kilometre or so from the reactors; in T2K the smoking gun would be the appearance of a small number of electron neutrinos not present in the initial muon-neutrino-dominated beam.
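In the standard parametrisation (written here without possible Majorana phases, which play no role in oscillations), the PMNS matrix is the product of those three rotations, with the phase δCP attached to the smallest angle, θ13:

\[
U_{\mathrm{PMNS}} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta_{CP}} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta_{CP}} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]

where c_ij = cos θ_ij and s_ij = sin θ_ij. The phase only enters observables multiplied by all three mixing angles, which is why every angle must be non-zero for leptonic CP violation to show up.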

After data taking was cut short by the great Sendai earthquake and tsunami of March 2011, T2K published evidence for the appearance of six electron-neutrino events, over an expected background of 1.5 ± 0.3 in the case of no coupling. Alongside a single tau-neutrino candidate in OPERA, these were the first neutrinos seen to appear in a detector with a new flavour, as previous signals had always registered a deficit of an expected flavour. In the closing days of the year, Double Chooz published evidence for 4121 electron-antineutrino events, compared with an expected tally of 4344 ± 165 for no coupling, reinforcing T2K’s 2.5σ indication. Daya Bay and RENO put the matter to bed the following spring, with 5σ evidence apiece that the ν₃–electron coupling was indeed non-zero. The key innovation for the reactor experiments was to minimise troublesome flux and interaction systematics by also placing detectors close to the reactors.

A visualisation of the Hyper-Kamiokande detector

Since then, T2K and NOvA, which began taking data in 2014, have been chasing leptonic CP violation – an analysis that is out of the reach of reactor experiments, as δCP does not affect disappearance probabilities. By switching the polarity of the magnetic horn, the experiments can directly compare the probabilities for the CP-mirror oscillations νμ → νe and ν̄μ → ν̄e. NOvA data are inconclusive at present, while T2K data currently point towards near-maximal CP violation in the vicinity of δCP = –π/2. The latest analysis, published in April, disfavours leptonic CP conservation (δCP = 0, ±π) at 2σ significance for all possible values of the mixing parameters. Statistical uncertainty is the biggest limiting factor.
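The target of these comparisons can be summarised (in vacuum, and to leading order) by the asymmetry

\[
A_{CP} = \frac{P(\nu_\mu \to \nu_e) - P(\bar\nu_\mu \to \bar\nu_e)}{P(\nu_\mu \to \nu_e) + P(\bar\nu_\mu \to \bar\nu_e)} \propto \sin\delta_{CP},
\]

which vanishes for δCP = 0 or ±π and is largest in magnitude near δCP = ±π/2, the region favoured by the current T2K data; in practice, matter effects in the Earth also shift the two probabilities and must be disentangled.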

Major upgrades planned for T2K next year target statistical, interaction-model and detector uncertainties. A substantial increase in beam intensity will be accompanied by a new fine-grained scintillating target for the ND280 near-detector complex, which will lower the energy threshold to reconstruct tracks. New transverse TPCs will improve ND280’s acceptance at high angles, yielding a better cancellation of systematic errors with the far detector, Super-Kamiokande, which is being upgraded by loading 0.01% gadolinium salts into the otherwise ultrapure water. As in reactor-neutrino detectors, this will provide a tag for antineutrino events, to improve sample purities in the search for leptonic CP violation.

T2K and NOvA both plan to roughly double their current data sets, and are working together on a joint fit, in a bid to better understand correlations between systematic uncertainties, and break degeneracies between measurements of CP violation and the mass hierarchy. If the CP-violating phase is indeed maximal, as suggested by the recent T2K result, the experiments may be able to exclude CP conservation with more than 99% confidence. “At this point we will be in a transition from a statistics-dominated to a systematics-dominated result,” says T2K spokesperson Atsuko Ichikawa of the University of Kyoto. “It is difficult to say, but our sensitivity will likely be limited at this stage by a convolution of neutrino-interaction and flux systematics.”

The next generation

Two long-baseline accelerator-neutrino experiments roughly an order of magnitude larger in cost and detector mass than T2K and NOvA have received green lights from the Japanese and US governments: Hyper-Kamiokande and DUNE. One of their primary missions is to resolve the question of leptonic CP violation.

Hyper-Kamiokande will adopt the same approach as T2K, but will benefit from major upgrades to the beam and the near and far detectors in addition to those currently underway in the present T2K upgrade. To improve the treatment of systematic errors, the suite of near detectors will be complemented by an ingenious new gadolinium-loaded water-Cherenkov detector at an intermediate baseline: by spanning a range of off-axis angles, it will drive down interaction-model systematics by exploiting previously neglected information on how the flux varies as a function of the angle relative to the centre of the beam. Hyper-Kamiokande’s increased statistical reach will also be impressive. The power of the Japan Proton Accelerator Research Complex (J-PARC) beam will be increased from its current value of 0.5 MW up to 1.3 MW, and the new far detector will be filled with 260,000 tonnes of ultrapure water, yielding a fiducial volume 8.4 times larger than that of Super-Kamiokande. Procurement of the photomultiplier tubes will begin this year, and the five-year-long excavation of the cavern has already begun. Data taking is scheduled to commence in 2027. “The expected precision on δCP is 10–20 degrees, depending on its true value,” says Hyper-Kamiokande international co-spokesperson Francesca di Lodovico of King’s College London.

In the US, the Deep Underground Neutrino Experiment (DUNE) will exploit the liquid-argon–TPC technology first deployed on a large scale by ICARUS – OPERA’s sister detector in the CNGS project. The idea for the technology dates back to 1977, when Carlo Rubbia proposed using liquid rather than gaseous argon as a drift medium for ionisation electrons. Given liquid-argon’s higher density, such detectors can serve as both target and tracker, providing high-resolution 3D images of the interactions – an invaluable tool for reducing systematics related to the murky world of neutrino–nucleus interactions.

Spectacular performance

The technology is currently being developed in two prototype detectors at CERN. The first hones ICARUS’s single-phase approach. “The performance of the prototype has been absolutely spectacular, exceeding everyone’s expectations,” says DUNE co-spokesperson Ed Blucher of the University of Chicago. “After almost two years of operation, we are confident that the liquid–argon technology is ready to be deployed at the huge scale of the DUNE detectors.” In parallel, the second prototype is testing a newer dual-phase concept. In this design, ionisation charges drift through an additional layer of gaseous argon before reaching the readout plane. The signal can be amplified here, potentially easing noise requirements for the readout electronics, and increasing the maximum size of the detector. The dual-phase prototype was filled with argon in summer 2019 and is now recording tracks.

The evolution of the fraction of each flavour in the wavefunction of electron antineutrinos

The final detectors will have about twice the height and 10 to 20 times the footprint. Following the construction of an initial single-phase unit, the DUNE collaboration will likely pick a mix of liquid-argon technologies to complete their roster of four 10 kton far-detector modules, set to be installed a kilometre underground at the Sanford Underground Research Laboratory in Lead, South Dakota. Site preparation and pre-excavation activities began in 2017, and full excavation work is expected to begin soon, with the goal that data-taking begin during the second half of this decade. Work on the near-detector site and the “PIP-II” upgrade to Fermilab’s accelerator complex began last year.

Though similar to Hyper-Kamiokande at first glance, DUNE’s approach is distinct and complementary. With beam energy and baseline both four times greater, DUNE will have greater sensitivity to flavour-dependent coherent-forward-scattering with electrons in Earth’s crust – an effect that modifies oscillation probabilities differently depending on the mass hierarchy. With the Fermilab beam directed straight at the detector rather than off-axis, a broader range of neutrino energies will allow DUNE to observe the oscillation pattern from the first to the second oscillation maximum, and simultaneously fit all but the solar mixing parameters. And with detector, flux and interaction uncertainties all distinct, a joint analysis of both experiments’ data could break degeneracies and drive down systematics.

“If CP violation is maximal and the experiments collect data as anticipated, DUNE and Hyper-Kamiokande should both approach 5σ significance for the exclusion of leptonic CP conservation in about five years,” estimates DUNE co-spokesperson Stefan Söldner-Rembold of the University of Manchester, noting that the experiments will also be highly complementary for non-accelerator topics. The most striking example is supernova-burst neutrinos, he says, referring to a genre of neutrinos only observed once so far, during 15 seconds in 1987, when neutrinos from a supernova in the Large Magellanic Cloud passed through the Earth. “While DUNE is primarily sensitive to electron neutrinos, Hyper-Kamiokande will be sensitive to electron antineutrinos. The difference between the timing distributions of these samples encodes key information about the dynamics of the supernova explosion.” Hyper-Kamiokande spokesperson Masato Shiozawa of ICRR Tokyo also emphasises the broad scope of the physics programmes. “Our studies will also encompass proton decay, high-precision measurements of solar neutrinos, supernova-relic neutrinos, dark-matter searches, the possible detection of solar-flare neutrinos and neutrino geophysics.”

JUNO energy resolution

Half a century since Ray Davis and two co-authors published evidence for a 60% deficit in the flux of solar neutrinos compared to John Bahcall’s prediction, DUNE already boasts more than a thousand collaborators, and Hyper-Kamiokande’s detector mass is set to be 500 times greater than Davis’s tank of liquid tetrachloroethylene. If Ray Davis was the conductor who set the orchestra in motion, then these large experiments fill out the massed ranks of the violin section, poised to deliver what may well be the most stirring passage of the neutrino-oscillation symphony. But other sections of the orchestra also have important parts to play.

Mass hierarchy

The question of the neutrino mass hierarchy will soon be addressed by the Jiangmen Underground Neutrino Observatory (JUNO) experiment, which is currently under construction in China. The project is an evolution of the Daya Bay experiment, and will seek to measure a deficit of electron antineutrinos 53 km from the Yangjiang and Taishan nuclear-power plants. As the reactor neutrinos travel, the small kilometre-scale oscillation observed by Daya Bay will continue to undulate with the same wavelength, revealed in JUNO as “fast” oscillations on a slower and deeper first oscillation maximum due to the smaller solar mass splitting Δm²₂₁ (see “An oscillation within an oscillation” figure).
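A minimal numerical sketch (in Python, with illustrative parameter values that are assumptions for this example rather than JUNO analysis inputs) shows how the two frequencies coexist in the electron-antineutrino survival probability:

```python
import numpy as np

# Sketch of the vacuum survival probability P(anti-nu_e -> anti-nu_e),
# using illustrative, roughly current best-fit oscillation parameters.
SIN2_2TH12 = 0.85          # sin^2(2*theta_12), solar mixing
SIN2_2TH13 = 0.085         # sin^2(2*theta_13), reactor mixing
COS2_TH12  = 0.70          # cos^2(theta_12)
DM2_21 = 7.5e-5            # eV^2, solar splitting
DM2_31 = 2.5e-3            # eV^2, atmospheric splitting (normal ordering assumed)
DM2_32 = DM2_31 - DM2_21   # eV^2

def survival_probability(L_km, E_GeV):
    """Survival probability at baseline L (km) and energy E (GeV)."""
    # Oscillation phases: the factor 1.267 converts eV^2 * km / GeV to radians
    d21 = 1.267 * DM2_21 * L_km / E_GeV
    d31 = 1.267 * DM2_31 * L_km / E_GeV
    d32 = 1.267 * DM2_32 * L_km / E_GeV
    cos4_th13 = 1.0 - SIN2_2TH13 / 2.0   # good approximation for small theta_13
    return (1.0
            - cos4_th13 * SIN2_2TH12 * np.sin(d21) ** 2            # slow, deep "solar" dip
            - SIN2_2TH13 * (COS2_TH12 * np.sin(d31) ** 2           # fast "atmospheric" ripples
                            + (1.0 - COS2_TH12) * np.sin(d32) ** 2))

# At JUNO's ~53 km baseline, scanning reactor energies of 2-8 MeV shows the
# fast wiggles (from dm2_31, dm2_32) riding on the slow dip (from dm2_21).
energies = np.linspace(0.002, 0.008, 1000)   # GeV
p = survival_probability(53.0, energies)
print(p.min(), p.max())
```

Plotting the returned probabilities against energy reproduces the qualitative shape of the figure: a deep solar dip carrying fast atmospheric ripples whose exact phase depends on the ordering.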

“JUNO can determine the neutrino mass hierarchy in an unambiguous and definite way, independent from the CP phase and matter effects, unlike other experiments using accelerator or atmospheric neutrinos,” says spokesperson Yifang Wang of the Chinese Academy of Sciences in Beijing. “In six years of data taking, the statistical significance will be higher than 3σ.”

JUNO has completed most of the digging of the underground laboratory, and equipment for the production and purification of liquid scintillator is being fabricated. A total of 18,000 20-inch photomultiplier tubes and 26,000 3-inch photomultiplier tubes have been delivered, and most of them have been tested and accepted, explains Wang. The installation of the detector is scheduled to begin next year. JUNO will arguably be at the vanguard of a precision era for the physics of neutrino oscillations, equipped to measure the mass splittings and the solar mixing parameters to better than 1% precision – an improvement of about one order of magnitude over previous results, and even better than the quark sector, claims Wang, somewhat provocatively. “JUNO’s capabilities for supernova-burst neutrinos, diffuse supernova neutrinos and geoneutrinos are unprecedented, and it can be upgraded to be a world-best double-beta-decay detector once the mass hierarchy is measured.”

Excavation of the cavern for the JUNO experiment

With JUNO, Hyper-Kamiokande and DUNE now joining a growing ensemble of experiments, the unresolved leitmotifs of the three-neutrino paradigm may find resolution this decade, or soon after. But theory and experiment both hint, quite independently, that nature may have a scherzo twist in store before the grand finale.

A rich programme of short-baseline experiments promises to bolster or exclude the hints, which have dogged the field since the late 1990s, of a fourth, sterile neutrino with a relatively large mixing with the electron neutrino. Four anomalies stack up as more or less consistent among themselves. The first, which emerged in the mid-1990s at Los Alamos’s Liquid Scintillator Neutrino Detector (LSND), is an excess of electron antineutrinos that is potentially consistent with oscillations involving a sterile neutrino at a mass splitting Δm² ~ 1 eV². Two other quite disparate anomalies since then – a few-percent deficit in the expected flux from nuclear reactors, and a deficit in the number of electron neutrinos from radioactive decays in liquid-gallium solar-neutrino detectors – could be explained in the same way. The fourth anomaly, from Fermilab’s MiniBooNE experiment, which sought to replicate the LSND effect at a longer baseline and a higher energy, is the most recent: a sizeable excess of both electron neutrinos and antineutrinos, though at a lower energy than expected. It’s important to note, however, that experiments including KARMEN, MINOS+ and IceCube have reported null results in searches for sterile neutrinos that fit the required description. Such a particle would also stand in tension with cosmology, notes phenomenologist Silvia Pascoli of Durham University, as models predict it would make too large a contribution to hot dark matter in the universe today, unless non-standard scenarios are invoked.

Three different types of experiment covering three orders of magnitude in baseline are now seeking to settle the sterile-neutrino question in the next decade. A smattering of reactor-neutrino experiments a mere 10 metres or so from the source will directly probe the reactor anomaly at Δm² ~ 1 eV². The data reported so far are intriguing. Korea’s NEOS experiment and Russia’s DANSS experiment report siren signals between 1 and 2 eV², and NEUTRINO-4, also based in Russia, reports a seemingly outlandish signal, indicative of very large mixing, at 7 eV². In parallel, J-PARC’s JSNS² experiment is gearing up to try to reproduce the LSND effect using accelerator neutrinos at the same energy and baseline. Finally, Fermilab’s short-baseline programme will thoroughly address a notable weakness of both LSND and MiniBooNE: the lack of a near detector.

MiniBooNE detector

The Fermilab programme will combine three liquid-argon TPCs – a bespoke new short-baseline detector (SBND), the existing MicroBooNE detector, and the refurbished ICARUS detector – to resolve the LSND anomaly once and for all. SBND is currently under construction, MicroBooNE is operational, and ICARUS, removed from its berth at Gran Sasso and shipped to the US in 2017, has been installed at Fermilab, following work on the detector at CERN. “The short-baseline neutrino programme at Fermilab has made tremendous technical progress in the past year,” says ICARUS spokesperson and Nobel laureate Carlo Rubbia, noting that the detector will be commissioned as soon as circumstances allow, given the coronavirus pandemic. “Once both ICARUS and SBND are in operation, it will take less than three years with the nominal beam intensity to settle the question of whether neutrinos have an even more mysterious character than we thought.”

Muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield almost perfect disappearance

Outside of the purview of oscillation experiments with artificially produced neutrinos, astrophysical observatories will scale a staggering energy range, from the PeV-scale neutrinos reported by IceCube at the South Pole, down, perhaps, to the few-hundred-μeV cosmic neutrino background sought by experiments such as PTOLEMY in the US. Meanwhile, the KATRIN experiment in Germany is zeroing in on the edges of beta-decay distributions to set an absolute scale for the mass of the peculiar mixture of mass eigenstates that make up an electron antineutrino (CERN Courier January/February 2020 p28). At the same time, a host of experiments are searching for neutrinoless double-beta decay – a process that can only occur if the neutrino is its own antiparticle. Discovering such a Majorana nature for the neutrino would turn the Standard Model on its head, and offer grist for the mill of theorists seeking to explain the tininess of neutrino masses, by balancing them against still-to-be-discovered heavy neutral leptons.

Indispensable input

According to Mikhail Shaposhnikov of the Swiss Federal Institute of Technology in Lausanne, current and future reactor- and accelerator-neutrino experiments will provide an indispensable input for understanding neutrino physics. And not in isolation. “To reach a complete picture, we also need to know the mechanism for neutrino-mass generation and its energy scale, and the most important question here is the scale of masses of new neutrino states: if lighter than a few GeV, these particles can be searched for at new experiments at the intensity frontier, such as SHiP, and at precision experiments looking for rare decays of mesons, such as Belle II, LHCb and NA62, while the heavier states may be accessible at ATLAS and CMS, and at future circular colliders,” explains Shaposhnikov. “These new particles can be the key in solving all the observational problems of the Standard Model, and require a consolidated effort of neutrino experiments, accelerator-based experiments and cosmological observations. Of course, it remains to be seen if this dream scenario can indeed be realised in the coming 20 years.”

 

• This article was updated on 6 July, to reflect results presented at Neutrino 2020

The search for leptonic CP violation

An electron anti-neutrino

Luckily for us, there is presently almost no antimatter in the universe. This makes it possible for us – made of matter – to live without being annihilated in matter–antimatter encounters. However, cosmology tells us that just after the cosmic Big Bang, the universe contained equal amounts of matter and antimatter. Obviously, for the universe to have evolved from that early state to the present one, which contains quite unequal amounts of matter and antimatter, the two must behave differently. This implies that the symmetry CP (charge conjugation × parity) must be violated. That is, there must be physical systems whose behaviour changes if we replace every particle by its antiparticle, and interchange left and right.

In 1964, Cronin, Fitch and colleagues discovered that CP is indeed violated, in the decays of neutral kaons to pions – a phenomenon that later became understood in terms of the behaviour of quarks. By now, we have observed quark CP violation in the strange sector, the beauty sector and most recently in the charm sector (CERN Courier May/June 2019 p7). The observations of CP violation in B (beauty) meson decays have been particularly illuminating. Everything we know about quark CP violation is consistent with the hypothesis that this violation arises from a single complex phase in the quark mixing matrix. This matrix gives the amplitude for any particular negatively-charged quark, whether down, strange or bottom, to convert via a weak interaction into any particular positively-charged quark, be it up, charm or top. Just two parameters in the quark mixing matrix, ρ and η, whose relative size determines the complex phase, account very successfully for numerous quark phenomena, including both CP-violating ones and others. This is impressively demonstrated by a plot of all the experimental constraints on these two parameters (figure 1). All the constraints intersect at a common point.
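For reference, the parametrisation in question expands the quark mixing (CKM) matrix in the small parameter λ ≈ 0.22, with the complex phase carried by ρ and η:

\[
V_{\mathrm{CKM}} \simeq
\begin{pmatrix}
1-\lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\
-\lambda & 1-\lambda^2/2 & A\lambda^2 \\
A\lambda^3(1-\rho - i\eta) & -A\lambda^2 & 1
\end{pmatrix} + \mathcal{O}(\lambda^4),
\]

so a non-zero η is precisely what makes the matrix complex and allows CP violation.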

Of course, precisely which (ρ, η) point is consistent with all the data is not important. Lincoln Wolfenstein, who created the quark-mixing-matrix parametrisation that includes ρ and η, was known to say: “Look, I invented ρ and η, and I don’t care what their values are, so why should you?”

Figure 1

Having observed CP violation among quarks in numerous laboratory experiments of today, we might be tempted to think that we understand how CP violation in the early universe could have changed the world from one with equal quantities of matter and antimatter to one in which matter dominates very heavily over antimatter. However, scenarios that tie early-universe CP violation to that seen among the quarks today, and do not add new physics to the Standard Model of the elementary particles, yield too small a present-day matter–antimatter asymmetry. This leads one to wonder whether early-universe CP violation involving leptons, rather than quarks, might have led to the present dominance of matter over antimatter. This possibility is envisaged by leptogenesis, a scenario in which heavy neutral leptons that were their own antiparticles lived briefly in the early universe, but then underwent CP-asymmetric decays, creating a world with unequal numbers of particles and antiparticles. Such heavy neutral leptons are predicted by “see-saw” models, which explain the extreme lightness of the known neutrinos in terms of the extreme heaviness of the postulated heavy neutral leptons. Leptogenesis can successfully account for the observed size of the present matter–antimatter asymmetry.
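The see-saw logic can be written in one line: if the light neutrinos get their masses through mixing with heavy neutral leptons of mass M via a Dirac-type mass m_D comparable to the other fermion masses, then roughly

\[
m_\nu \sim \frac{m_D^2}{M},
\]

so that, to pick illustrative numbers, m_D near the electroweak scale (~100 GeV) and M of order 10¹⁴ GeV give m_ν of order 0.1 eV, in the ballpark suggested by oscillation data.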

Deniable plausibility

In the straightforward version of this picture, the heavy neutral leptons are too massive to be observable at the LHC or any foreseen collider. However, since leptogenesis requires leptonic CP violation, observing this violation in the behaviour of the currently observed leptons would make it more plausible that leptogenesis was indeed the mechanism through which the present matter–antimatter asymmetry of the universe arose. Needless to say, observing leptonic CP violation would also reveal that the breaking of CP symmetry, which before 1964 one might have imagined to be an unbroken, fundamental symmetry of nature, is not something special to the quarks, but is shared by all the constituents of matter.

Figure 2

To find out if leptons violate CP, we are searching for what is traditionally described as a difference between the behaviour of neutrinos and that of antineutrinos. This description is fine if neutrinos are Dirac particles – that is, particles that are distinct from their antiparticles. However, many theorists strongly suspect that neutrinos are actually Majorana particles – that is, particles that are identical to their antiparticles. In that case, the traditional description of the search for leptonic CP violation is clearly inapplicable, since then the neutrinos and the antineutrinos are the same objects. However, the actual experimental approach that is being pursued is a perfectly valid probe of leptonic CP violation regardless of whether neutrinos are of Dirac or of Majorana character. In fact, this approach is completely insensitive to which of these two possibilities nature has chosen.

Through a glass darkly

The pursuit of leptonic CP violation is based on comparing the rates for two CP mirror-image processes (figure 2). In process A, the initial state is a π⁺ and an undisturbed detector. The final state consists of a μ⁺, an e⁻, and a nucleus in the detector that has been struck by an intermediate-state neutrino beam particle that travelled a long distance from its source to the detector. Since the neutrino was born together with a muon, but produced an electron in the detector, and the probability for this to have happened oscillates as a function of the distance the neutrino travels divided by its energy, the process is commonly referred to as muon-neutrino to electron-neutrino oscillation.

Leptogenesis can account for the matter–antimatter asymmetry

In process B, the initial and final states are the same as in process A, but with every particle replaced by its antiparticle. In addition, owing to the character of the weak interactions, the helicity (the projection of the spin along the momentum) of every fermion is reversed, so that left and right are interchanged. Thus, regardless of whether neutrinos are identical to their antiparticles, processes A and B are CP mirror images, so if their rates are unequal, CP invariance is violated. Moreover, since the probability of a neutrino oscillation involves the weak interactions of leptons, but not those of quarks, this violation of CP invariance must come from the weak interactions of leptons.

Of course, we cannot employ an anti-detector in process B in practice. However, the experiment can legitimately use the same detector in both processes. To do that, it must take into account the difference between the cross sections for the beam particles in processes A and B to interact in this detector. Once that is done, the comparison of the rates for processes A and B remains a valid probe of CP non-invariance.

The matrix reloaded

Just as quark CP violation arises from a complex phase in the quark mixing matrix, so leptonic CP violation in neutrino oscillation can arise from a complex phase, δCP, in the leptonic mixing matrix, which is the leptonic analogue of the quark mixing matrix. However, if, as suggested by several short-baseline oscillation experiments, there exist not only the three well-established neutrinos, but also additional so-called “sterile” neutrinos that do not participate in Standard Model weak interactions, then the leptonic mixing matrix is larger than the quark one. As a result, while the quark mixing matrix is permitted to contain just one complex phase, its leptonic analogue may contain multiple complex phases that can contribute to CP violation in neutrino oscillations.

Stack of scintillating cells

Leptonic CP violation is being sought by two current neutrino-oscillation experiments. The NOvA experiment in the US has reported results that are consistent with either the presence or absence of CP violation. The T2K experiment in Japan reports that the complete absence of CP violation is excluded at 95% confidence. Assuming that the leptonic mixing matrix is the same size as the quark one, so that it may contain only one complex phase relevant to neutrino oscillations, the T2K data show a preference for values of that phase, δCP, that correspond to near maximal CP violation. Of course, as Lincoln Wolfenstein would doubtless point out, the precise value of δCP is not important. What counts is the extremely interesting experimental finding that the behaviour of leptons may very well violate CP. In the future, the oscillation experiments Hyper-Kamiokande in Japan and DUNE in the US will probe leptonic CP violation with greater sensitivity, and should be capable of observing it even if it should prove to be fairly small (see Tuning in to neutrinos).

By searching for leptonic CP violation, we hope to find out whether the breaking of CP symmetry occurs among all the constituents of matter, including both the leptons and the quarks, or whether it is a feature that is special to the quarks. If leptonic CP violation should be definitively shown to exist, this violation might be related to the reason that the universe contains matter, but almost no antimatter, so that life is possible.

Neutron sources join the fight against COVID-19

The LADI instrument at the ILL

The global scientific community has mobilised at an unprecedented rate in response to the COVID-19 pandemic – and not only its pharmaceutical and medical researchers. The world’s most powerful analytical tools, including neutron sources, harbour the unique ability to reveal the invisible, structural workings of the virus – which will be essential to developing effective treatments. Since the outbreak of the pandemic, researchers worldwide have been using large-scale research infrastructures such as synchrotron X-ray radiation sources (CERN Courier May/June 2020 p29), as well as cryogenic electron microscopy (cryo-EM) and nuclear magnetic resonance (NMR) facilities, to determine the 3D structures of proteins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19 respiratory disease, and to identify potential drugs that can bind to these proteins in order to disable the viral machinery. In a remarkably short time, this effort has already delivered a large number of structures and improved our understanding of what potential drug candidates might look like, with the number growing each week.

COVID-19 impacted the operation of all advanced neutron sources worldwide. With one exception (ANSTO in Australia, which continued the production of radioisotopes) all of them were shut down in the context of national lockdowns aimed at reducing the spread of the disease. The neutron community, however, lost no time in preparing for the resumption of activities. Some facilities like Oak Ridge National Laboratory (ORNL) in the US have now restarted operation of their sources exclusively for COVID-19 studies. Here in Europe, while waiting (impatiently) for the restart of neutron facilities such as the Institut Laue-Langevin (ILL) in Grenoble, which is scheduled to be operational by mid-August, scientists have been actively pursuing SARS-CoV-2-related projects. Special research teams on the ILL site have been preparing for experiments using a range of neutron-scattering techniques including diffraction, small-angle neutron scattering, reflectometry and spectroscopy. Neutrons bring to the table what other probes cannot, and are set to make an important contribution to the fight against SARS-CoV-2.

Unique characteristics

Discovered almost 90 years ago, the neutron has been put to a multitude of uses to help researchers understand the structure and behaviour of condensed matter. These applications include a steadily growing number of investigations into biological systems. For the reasons explained below, these investigations are complementary to the use of X-rays, NMR and cryo-EM. The necessary infrastructure for neutron-scattering experiments is provided to the academic and industrial user communities by a global network of advanced neutron sources. Leading European neutron facilities include the ILL in Grenoble, France, MLZ in Garching, Germany, ISIS in Didcot, UK, and PSI in Villigen, Switzerland. The new European flagship neutron source – the European Spallation Source (ESS) – is under construction in Lund, Sweden.

Structural power

Determining the biological structures that make up a virus such as SARS-CoV-2 (pictured) allows scientists to see what they look like in three dimensions and to understand better how they function, speeding up the design of more effective anti-viral drugs. Knowledge of the structures highlights which parts are the most important: for example, once researchers know what the active site in an enzyme looks like, they can try to design drugs that fit well into the active site – the classic “lock-and-key” analogy. This is also useful in the development of vaccines. Knowledge of the structural components that make up a virus is important since vaccines are often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins.

Neutrons are a particularly powerful tool for the study of biological macromolecules in solutions, crystals and partially ordered systems. Their neutrality means neutrons can penetrate deep into matter without damaging the samples, so that experiments can be performed at room temperature, much closer to physiological temperatures. Furthermore, in contrast to X-rays, which are scattered by electrons, neutrons are scattered by atomic nuclei, and so neutron-scattering lengths show no correlation with the number of electrons, but rather depend on nuclear forces, which can even vary between different isotopes. As such, while hydrogen (H) scatters X-rays very weakly, and protons (H+) do not scatter X-rays at all, with neutrons hydrogen scatters at a similar level to the other common elements (C, N, O, S, P) of biological macromolecules, allowing them to be located. Moreover, since hydrogen and its isotope deuterium (2H/D) have scattering lengths that differ in both magnitude and sign, this difference can be exploited in neutron studies to enhance the visibility of specific structural features by substituting one isotope for the other. Examples of this include small-angle neutron scattering (SANS) studies of macromolecular structures, which provide low-resolution 3D information on molecular shape without the need for crystallisation, and neutron-crystallography studies that provide high-resolution structures of proteins, including the locations of individual hydrogen atoms that have been exchanged for deuterium to make them particularly visible. Indeed, neutron crystallography can provide unique information on the chemistry occurring within biological macromolecules, such as enzymes, as recent studies on HIV-1 protease, an enzyme essential for the life-cycle of the HIV virus, illustrate.
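As a rough numerical illustration of that contrast (using commonly tabulated bound coherent scattering lengths and densities as assumed inputs), the following Python sketch compares the neutron scattering-length density of ordinary and heavy water, the standard knobs for contrast variation:

```python
# Indicative comparison of neutron scattering-length densities (SLD) of
# H2O and D2O, illustrating the H/D contrast exploited in SANS and
# reflectometry.  Scattering lengths and densities are approximate values.
AVOGADRO = 6.022e23          # mol^-1

B_H = -3.74e-13              # cm, bound coherent scattering length of 1H (note the sign)
B_D = +6.67e-13              # cm, deuterium
B_O = +5.80e-13              # cm, oxygen

def sld(total_b_cm, density_g_cm3, molar_mass_g_mol):
    """SLD in cm^-2: scattering length per molecule times molecular number density."""
    number_density = density_g_cm3 * AVOGADRO / molar_mass_g_mol
    return total_b_cm * number_density

sld_h2o = sld(2 * B_H + B_O, 1.00, 18.0)
sld_d2o = sld(2 * B_D + B_O, 1.11, 20.0)

# 1 A^-2 = 1e16 cm^-2, so divide by 1e16 to quote in the usual units
print(f"SLD(H2O) ~ {sld_h2o / 1e16:+.2e} A^-2")   # roughly -0.6e-6 A^-2
print(f"SLD(D2O) ~ {sld_d2o / 1e16:+.2e} A^-2")   # roughly +6.4e-6 A^-2
```

The two values differ in sign as well as magnitude, which is why mixing H2O and D2O, or deuterating the biomolecule itself, lets experimenters highlight or hide chosen components.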

Treating and stopping COVID-19

Proteases are like biological scissors that cleave polypeptide chains – the primary structure of proteins – at precise locations. If the cleavage is inhibited, for example, by appropriate anti-viral drugs, then so-called poly-proteins remain in their original state and the machinery of virus replication is blocked. For the treatment to be efficient, this inhibition has to be robust – that is, the drug occupying the active site should be strongly bound, ideally to atoms in the main chain of the protease. This will increase the likelihood that treatments are effective in the long run, despite mutations of the enzyme, since mutations occur only within the side chains of the enzyme. Neutron research, therefore, provides essential input into the long-term development of pharmaceuticals. This role will be further enhanced in the context of advanced computer-aided drug development that will rely on an orchestrated combination of high-power computing, artificial intelligence and broad-band experimental data on structures.

A neutron Laue diffraction pattern

Neutron crystallography data add supplementary structural information to X-ray data by providing key details regarding hydrogen atoms and protons, which are critical players in the binding of such drugs to their target enzyme through hydrogen bonding, and revealing important details of protein chemistry that help researchers decipher the exact enzyme catalytic pathway. In this way, neutron crystallography data can be hugely beneficial towards understanding how these enzymes function and the design of more effective medications to target them. For example, in the study of complexes between HIV-1 protease – the enzyme responsible for maturation of virus particles into infectious HIV virions – and drug molecules, neutrons can reveal hydrogen-bonding interactions that offer ways to enhance drug-binding and reduce drug-resistance of anti-retroviral therapies.

More than half of the SARS-CoV-2-related structures determined thus far are high-resolution X-ray structures of the virus’s main protease, with the majority of these bound to potential inhibitors. One of the main challenges for performing neutron crystallography is that larger crystals are required than for comparable X-ray crystallography studies, owing to the lower flux of neutron beams relative to X-ray beam intensities. Nevertheless, given the benefits provided by the visualisation of hydrogen-bonding networks for understanding drug-binding, scientists have been optimising crystallisation conditions for the growth of larger crystals, in combination with the production of fully deuterated protein in preparation for neutron crystallography experiments in the near future. Currently, teams at ORNL, ILL and the DEMAX facility in Sweden are growing crystals for SARS-CoV-2 investigations.

Proteases are, however, not the only proteins where neutron crystallography can provide essential information. For example, the spike protein (S-protein) of SARS-CoV-2, which is responsible for mediating attachment to and entry into human cells, is of great relevance for developing therapeutic defence strategies against the virus. Here, neutron crystallography can potentially provide unique information about the specific domain of the S-protein where the virus binds to human cell receptors. Comparison of the structure of this region in different coronaviruses (SARS-CoV-2 and SARS-CoV), obtained using X-rays, suggests that small alterations to the amino-acid sequence may enhance the binding affinity of the S-protein to the human receptor hACE2, making SARS-CoV-2 more infectious. Neutron studies will provide further insight into this binding, which is crucial for the attachment of the virus. These experiments are scheduled to take place, for example at ILL and ORNL (and possibly MLZ), as soon as large enough crystals have been grown.

The big picture

Biological systems have a hierarchy of structures: molecules assemble into structures such as proteins; these form complexes which, as supramolecular arrangements like membranes, are the building blocks of cells – which are, in turn, the building blocks of our bodies. Every part of this huge machinery is subject to continuous reorganisation. To understand the functioning – or, in the case of a disease, the malfunctioning – of a biological system, we must therefore gain insight into the biological mechanisms on all of these different length scales.

The ILL reactor

When it comes to studying the function of larger biological complexes such as assembled viruses, SANS becomes an important analytical tool. The technique’s capacity to distinguish specific regions (RNA, proteins and lipids) of the virus – thanks to advanced deuteration methods – enables researchers to map out the arrangement of the various components, contributing invaluable information to structural studies of SARS-CoV-2. While other analytical techniques provide the detailed atomic-resolution structure of small biological assemblies, neutron scattering allows researchers to pan back to see the larger picture of full molecular complexes, at lower resolution. Neutron scattering is also uniquely suited to determining the structure of functional membrane proteins in physiological conditions. Neutron scattering will therefore make it possible to map out the structure of the complex formed by the S-protein and the hACE2 receptor.

Neutrons can penetrate deep into matter without damaging the samples

Last but not least, a full understanding of the virus’s life cycle requires the study of the interaction of the virus with the cell membrane, and the mechanism it uses to penetrate the host cell. Like HIV, SARS-CoV-2 possesses a viral envelope composed of lipids, proteins and sugars. By providing information on its molecular structure and composition, the technique of neutron reflectometry – whereby highly collimated neutrons are incident on a flat surface and the intensity of reflected radiation is measured as a function of angle or neutron wavelength – helps to elucidate the precise mechanism the virus uses to penetrate the cell. As with SANS, the strength of neutron reflectometry relies on the fact that it provides a different contrast to X-rays, and that this contrast can be varied via deuteration, making it possible, for example, to distinguish a protein inserted into a membrane from the membrane itself. For SARS-CoV-2, this means that neutron reflectometry can provide detailed structural information on the interaction of small protein fragments, so-called peptides, that mimic the S-protein and that are believed to be responsible for binding to the receptor of the host cell. Defining this mechanism, which is decisive for the infection, will be essential to controlling the virus and its potential future mutations in the long term.

Tool of choice

And we should not forget that viruses in their physiological environments are highly dynamic systems. Knowing how they move, deform and cluster is essential for optimising diagnostic and therapeutic treatments. Neutron spectroscopy, which is ideally suited to follow the motion of matter from small chemical groups to large macromolecular assemblies, is the tool of choice to provide this information.

The League of Advanced European Neutron Sources (CERN Courier May/June 2020 p49) has rapidly mobilised to conduct all relevant experiments. We are equally in close contact with our international partners, some of whom have reopened, or are just in the process of reopening, their facilities. Scientists have to make sure that each research subject is provided with the best-suited analytical tool – in other words, those that have the samples will be given the necessary beam time. Neutron facilities are adapting fast: special access channels to beam time have been implemented to allow the scientific community to respond without delay to the challenge posed by COVID-19.

Sensing a passage through the unknown

Since the inception of the Standard Model (SM) of particle physics half a century ago, experiments of all shapes and sizes have put it to increasingly stringent tests. The largest and most well-known are collider experiments, which in particular have enabled the direct discovery of various SM particles. Another approach utilises the tools of atomic physics. The relentless improvement in the precision of the tools and techniques of atomic physics, both experimental and theoretical, has led to the verification of the SM’s predictions with ever greater accuracy. Examples include measurements of atomic parity violation that reveal the effects of the Z boson on atomic states, and measurements of atomic energy levels that verify the predictions of quantum electrodynamics (QED). Precision atomic-physics experiments also include a vast array of searches for effects predicted by theories beyond the SM (BSM), such as fifth forces and permanent electric dipole moments that violate parity and time-reversal symmetries. These tests probe potentially subtle yet constant (or controllable) changes of atomic properties that can be revealed by averaging away noise and controlling systematic errors.

GNOME

But what if the glimpses of BSM physics that atomic spectroscopists have so painstakingly searched for over the past decades are not effects that persist over the many weeks or months of a typical measurement campaign, but rather transient events that occur only sporadically? For example, might not cataclysmic astrophysical events such as black-hole mergers or supernova explosions produce hypothetical ultralight bosonic fields impossible to generate in the laboratory? Or might not Earth occasionally pass through some invisible “cloud” of a substance (such as dark matter) produced in the early universe? Such transient phenomena could easily be missed by experimenters when data are averaged over long times to increase the signal-to-noise ratio.

Transient phenomena

Detecting such unconventional events presents several challenges. If a transient signal heralding new physics were observed with a single detector, it would be exceedingly difficult to confidently distinguish the exotic-physics signal from the many sources of noise that plague precision atomic-physics measurements. However, if transient interactions occur on a global scale, a network of detectors geographically distributed over Earth could search for specific patterns in the timing and amplitude of the signals that would be unlikely to occur randomly. By correlating the readouts of many detectors, local effects can be filtered away and exotic physics could be distinguished from mundane physics.
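As a toy illustration of the coincidence idea (a sketch only, not the collaboration’s actual analysis pipeline), one can cross-correlate the readouts of two widely separated sensors: a genuine global transient appears at a consistent relative delay in every pair of stations, whereas local disturbances average away:

```python
import numpy as np

# Toy coincidence detection: two stations record independent local noise
# plus a shared transient; cross-correlating their readouts picks out the
# common event at its relative delay.
rng = np.random.default_rng(0)

n = 5000
pulse = np.exp(-0.5 * ((np.arange(200) - 100) / 20.0) ** 2)   # Gaussian transient

station_a = rng.normal(0.0, 1.0, n)
station_b = rng.normal(0.0, 1.0, n)
station_a[2000:2200] += 5.0 * pulse
station_b[2050:2250] += 5.0 * pulse        # same event, arriving 50 samples later

# A genuine global transient gives a correlation peak at a consistent lag for
# every pair of stations, while purely local disturbances do not.
xcorr = np.correlate(station_a - station_a.mean(),
                     station_b - station_b.mean(), mode="full")
lags = np.arange(-n + 1, n)
print("most correlated lag:", lags[np.argmax(xcorr)], "samples")   # close to -50
```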

This idea forms the basis for the Global Network of Optical Magnetometers to search for Exotic physics (GNOME), an international collaboration involving 14 institutions from all over the world (see “Correlated” figure). Such an idea, like so many others in physics, is not entirely new. The same concept is at the heart of the worldwide network of interferometers used to observe gravitational waves (LIGO, Virgo, GEO, KAGRA, TAMA, CLIO), and the global network of proton-precession magnetometers used to monitor geomagnetic and solar activity. What distinguishes GNOME from other global sensor networks is that it is specifically dedicated to searching for signals from BSM physics that have evaded detection in earlier experiments.

Optical atomic magnetometer

GNOME is a growing network of more than a dozen optical atomic magnetometers, with stations in Europe, North America, Asia and Australia. The project was proposed in 2012 by a team of physicists from the University of California at Berkeley, Jagiellonian University, California State University – East Bay, and the Perimeter Institute. The network started taking preliminary data in 2013, with the first dedicated science-run beginning in 2017. With more data on the way, the GNOME collaboration, consisting of more than 50 scientists from around the world, is presently combing the data for signs of the unexpected, with its first results expected later this year.

Exotic-physics detectors

Optical atomic magnetometers (OAMs) are among the most sensitive devices for measuring magnetic fields. However, the atomic vapours at the heart of GNOME’s OAMs are placed inside multi-layer shielding systems, reducing the effects of external magnetic fields by a factor of more than a million. Thus, in spite of using extremely sensitive magnetometers, GNOME sensors are largely insensitive to magnetic signals. The reasoning is that many BSM theories predict the existence of exotic fields that couple to atomic spins and would penetrate magnetic shields largely unaffected. Since the OAM signal is proportional to the spin-dependent energy shift regardless of whether or not a magnetic field causes that shift, OAMs – even enclosed within magnetic shields – are sensitive to a broad class of exotic fields.

The OAM setup

The basic principle behind OAM operation (see “Optical rotation” figure) involves optically measuring spin-dependent energy shifts by controlling and monitoring an ensemble of atomic spins via angular-momentum exchange between the atoms and light. The high efficiency of optical pumping and probing of atomic spin ensembles, along with a wide array of clever techniques to minimise atomic spin relaxation (even at high atomic vapour densities), has enabled OAMs to achieve sensitivities to spin-dependent energy shifts at levels well below 10⁻²⁰ eV after only one second of integration. One of the 14 OAM installations, at California State University – East Bay, is shown in the “Benchtop physics” image.
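
To give a sense of scale, this energy-shift sensitivity can be translated into an equivalent magnetic-field sensitivity with a back-of-the-envelope estimate (our own illustration, assuming a simple Zeeman shift ΔE ≈ μBB with a g-factor of order unity):

```latex
% Illustrative conversion only: assumes \Delta E \approx \mu_B B (g-factor of order unity)
\[
B \sim \frac{\Delta E}{\mu_B}
  \approx \frac{10^{-20}\ \mathrm{eV}}{5.8\times10^{-5}\ \mathrm{eV\,T^{-1}}}
  \approx 2\times10^{-16}\ \mathrm{T},
\]
```

i.e. a small fraction of a femtotesla after one second of integration, which is the regime in which shielded OAMs operate.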

However, one might wonder: do any of the theoretical scenarios suggesting the existence of exotic fields predict signals detectable by a magnetometer network while also evading all existing astrophysical and laboratory constraints? This is not a trivial requirement, since previous high-precision atomic spectroscopy experiments have established stringent limits on BSM physics. In fact, OAM techniques have been used by a number of research groups (including our own) over the past several decades to search for spin-dependent energy shifts caused by exotic fields sourced by nearby masses or polarised spins. Closely related work has ruled out vast areas of BSM parameter space by comparing measurements of hyperfine structure in simple hydrogen-like atoms to QED calculations. Furthermore, if exotic fields do exist and couple strongly enough to atomic spins, they could cause noticeable cooling of stars and affect the dynamics of supernovae. So far, all laboratory experiments have produced null results and all astrophysical observations are consistent with the SM. Thus if such exotic fields exist, their coupling to atomic spins must be extremely feeble.

Despite these constraints and requirements, theoretical scenarios do exist that are consistent with current limits and nevertheless predict effects measurable with GNOME. Prime examples, and the present targets of the GNOME collaboration’s search efforts, are ultralight bosonic fields. A canonical example of an ultralight boson is the axion. The axion emerged from an elegant solution, proposed by Roberto Peccei and Helen Quinn in the late 1970s, to the strong-CP problem. The Peccei–Quinn mechanism explains why the strong interaction respects the combined CP symmetry to the highest precision we can measure, even though quantum chromodynamics naturally accommodates CP violation at a level ten orders of magnitude larger than present constraints allow. If CP violation in the strong interaction is described not by a constant term but rather by a dynamical (axion) field, it can be significantly suppressed by spontaneous symmetry breaking at a high energy scale. If the symmetry-breaking scale is at the grand-unification-theory (GUT) scale (~10¹⁶ GeV), the axion mass is around 10⁻¹⁰ eV, and at the Planck scale (~10¹⁹ GeV) around 10⁻¹³ eV – both many orders of magnitude less massive than even neutrinos. Searching for ultralight axions therefore offers the exciting possibility of probing physics at the GUT and Planck scales, far beyond the direct reach of any existing collider.
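
As a rough cross-check of these numbers (our own arithmetic, using the standard QCD-axion mass relation; nothing here is specific to GNOME), the axion mass scales inversely with the symmetry-breaking scale fa:

```latex
% Standard QCD-axion relation, quoted to order of magnitude
\[
m_a \approx 5.7\,\mu\mathrm{eV}\times\frac{10^{12}\ \mathrm{GeV}}{f_a}
\quad\Rightarrow\quad
m_a \sim 6\times10^{-10}\ \mathrm{eV}\ \text{at}\ f_a \sim 10^{16}\ \mathrm{GeV},\qquad
m_a \sim 6\times10^{-13}\ \mathrm{eV}\ \text{at}\ f_a \sim 10^{19}\ \mathrm{GeV}.
\]
```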

Beyond the Standard Model

In addition to the axion, there is a wide range of other hypothetical ultralight bosons that couple to atomic spins and could generate signals potentially detectable with GNOME. Many theories predict the existence of spin-0 bosons with properties similar to the axion (so-called axion-like particles, ALPs). A prominent example is the relaxion, proposed by Peter Graham, David Kaplan and Surjeet Rajendran to explain the hierarchy problem: the mystery of why the electroweak force is about 24 orders of magnitude stronger than the gravitational force. In 2010, Asimina Arvanitaki and colleagues found that string theory suggests the existence of many ALPs with widely varying masses, from 10⁻³³ eV to 10⁻¹⁰ eV. From the perspective of BSM theories, ultralight bosons are ubiquitous. Some predict ALPs such as “familons”, “majorons” and “arions”. Others predict new ultralight spin-1 bosons such as dark and hidden photons. There is even the possibility of exotic spin-0 or spin-1 gravitons: while the graviton in a quantum theory of gravity matching general relativity must be spin-2, alternative gravity theories (for example torsion gravity and scalar–vector–tensor gravity) predict additional spin-0 and/or spin-1 gravitons.

Earth passing through a topological defect

It also turns out that such ultralight bosons could explain dark matter. Most searches for ultralight bosonic dark matter assume the bosons to be approximately uniformly distributed throughout the dark matter halo that envelops the Milky Way. However, in some theoretical scenarios the ultralight bosons can clump together into bosonic “stars” due to self-interactions. In other scenarios, due to a non-trivial vacuum energy landscape, the ultralight bosons could take the form of “topological” defects, such as domain walls that separate regions of space with different vacuum states of the bosonic field (see “New domains” figure). In either case, the mass-energy associated with ultralight bosonic dark matter would be concentrated in large composite structures that Earth might only occasionally encounter, leading to the sort of transient signals that GNOME is designed to search for.

Magnetic field deviation

Yet another possibility is that intense bursts of ultralight bosonic fields might be generated by cataclysmic astrophysical events such as black-hole mergers. Much of the underlying physics of coalescing singularities is unknown, possibly involving quantum-gravity effects far beyond the reach of high-energy experiments on Earth, and it turns out that quantum gravity theories generically predict the existence of ultralight bosons. Furthermore, if ultralight bosons exist, they may tend to condense in gravitationally bound halos around black holes. In these scenarios, a sizable fraction of the energy released when black holes merge could plausibly be emitted in the form of ultralight bosonic fields. If the energy density of the ultralight bosonic field is large enough, networks of atomic sensors like GNOME might be able to detect a signal.

In order to use OAMs to search for exotic fields, the effects of environmental magnetic noise must be reduced, controlled, or cancelled. Even though the GNOME magnetometers are enclosed in multi-layer magnetic shields so that signals from external electromagnetic fields are significantly suppressed, there is a wide variety of phenomena that can mimic the sorts of signals one would expect from ultralight bosonic fields. These include vibrations, laser instabilities, and noise in the circuitry used for data acquisition. To combat these spurious signals, each GNOME station uses auxiliary sensors to monitor electromagnetic fields outside the shields (which could leak inside the shields at a far-reduced level), accelerations and rotations of the apparatus, and overall magnetometer performance. If the auxiliary sensors indicate data may be suspect, the data are flagged and ignored in the analysis (see “Spurious signals” figure).
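
Schematically, this veto amounts to comparing each auxiliary channel against a threshold and discarding the corresponding stretch of magnetometer data. The minimal sketch below is our own illustration rather than GNOME’s actual pipeline; the channel names and thresholds are hypothetical placeholders.

```python
# Minimal data-quality veto sketch (illustrative only; not GNOME's actual pipeline).
# A sample is kept only if every auxiliary sensor stays within its allowed range.
import numpy as np

def quality_mask(aux_channels: dict, thresholds: dict) -> np.ndarray:
    """Return a boolean mask over samples: True = keep, False = flagged as suspect."""
    n = len(next(iter(aux_channels.values())))
    keep = np.ones(n, dtype=bool)
    for name, series in aux_channels.items():
        keep &= np.abs(np.asarray(series)) <= thresholds[name]
    return keep

# Hypothetical example: external-field probe and accelerometer read out alongside the OAM
rng = np.random.default_rng(0)
aux = {"external_field_pT": rng.normal(0.0, 1.0, 10_000),
       "acceleration_mg":   rng.normal(0.0, 1.0, 10_000)}
mask = quality_mask(aux, thresholds={"external_field_pT": 5.0, "acceleration_mg": 5.0})
print(f"fraction of data retained: {mask.mean():.4f}")
```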

GNOME data that have passed this initial quality check can then be scanned for signals matching the patterns expected from various exotic-physics hypotheses. For example, to test the hypothesis that dark matter takes the form of ALP domain walls, one searches for the signal pattern resulting from the passage of Earth through an astronomically sized plane whose finite thickness is given by the ALP’s Compton wavelength. The relative velocity between the domain wall and Earth is unknown, but can be assumed to be randomly drawn from the velocity distribution of virialised dark matter, which has an average speed of about one thousandth the speed of light. The relative timing of signals appearing in different GNOME magnetometers should be consistent with a single velocity v: stations closer to the incoming wall (along its direction of propagation) should register signals earlier than stations farther away, and the delays should occur in a sensible sequence. The energy shift that could produce a detectable signal in GNOME magnetometers arises from an interaction of the domain-wall field φ with the atomic spin S whose strength is proportional to the scalar product of the spin with the gradient of the field, S∙∇φ. Since the gradient ∇φ points along the wall’s velocity relative to Earth, the signal amplitudes in different GNOME magnetometers are proportional to S∙v. Both the signal-timing pattern and the signal-amplitude pattern should be consistent with a single value of v; signals inconsistent with such a pattern can be rejected as noise.
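
The expected pattern is simple to write down. The sketch below is our own illustration of the geometry described above, not the collaboration’s search code; the station coordinates, spin-sensitivity axes and wall velocity are hypothetical placeholders.

```python
# Illustrative domain-wall crossing pattern for a small magnetometer network
# (our own sketch; station positions, axes and wall velocity are placeholders).
import numpy as np

# name: (position in an Earth-centred frame, km; unit vector of spin-sensitive axis)
stations = {
    "station A": (np.array([4163.0,   635.0,  4767.0]), np.array([0.0, 0.0, 1.0])),
    "station B": (np.array([-2691.0, -4262.0, 3894.0]), np.array([1.0, 0.0, 0.0])),
    "station C": (np.array([1120.0,  -4824.0, 3988.0]), np.array([0.0, 1.0, 0.0])),
}

v = 300.0 * np.array([0.3, -0.8, 0.52])   # wall velocity, km/s (~1e-3 c), assumed
n = v / np.linalg.norm(v)                 # wall normal = direction of propagation

for name, (r, s) in stations.items():
    delay = n @ r / np.linalg.norm(v)     # crossing time relative to Earth's centre, s
    amplitude = s @ n                     # relative signal size, since S.grad(phi) is along v
    print(f"{name}:  delay = {delay:+8.3f} s,  relative amplitude = {amplitude:+.2f}")
```

A candidate event is retained only if the measured delays and amplitudes across the network fit a single value of v in this way.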

If such exotic fields exist, their coupling to atomic spins must be extremely feeble

To claim discovery of a signal heralding BSM physics, detections must be compared to the background rate of spurious false-positive events that are consistent with the expected signal pattern but not generated by exotic physics. The false-positive rate can be estimated by analysing time-shifted data: the data stream from each GNOME magnetometer is shifted in time relative to the others by an amount much larger than any delay resulting from the propagation of ultralight bosonic fields through Earth. Such time-shifted data can be assumed to be free of exotic-physics signals, so any detections are necessarily false positives: merely random coincidences due to noise. When the GNOME data are analysed without time shifts, a signal must surpass the 5σ threshold relative to the background determined from the time-shifted data to be regarded as an indication of BSM physics. This means that, for a year-long data set, an event due to noise coincidentally matching the assumed signal pattern throughout the network would occur only once every 3.5 million years.
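
The quoted false-alarm rate follows from straightforward Gaussian statistics (our own arithmetic, assuming a one-sided 5σ threshold):

```latex
% One-sided Gaussian tail probability at 5 sigma
\[
p(>5\sigma) \approx 2.9\times10^{-7}
\quad\Rightarrow\quad
\frac{1}{p} \approx 3.5\times10^{6}\ \text{year-long data sets per chance coincidence}.
\]
```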

Inspiring efforts

Having already collected over a year of data, and with more on the way, the GNOME collaboration is presently combing the data for signs of BSM physics. New results based on recent GNOME science runs are expected in 2020, representing the first ever search for such transient exotic spin-dependent effects. Improvements in magnetometer sensitivity, signal characterisation, and data-analysis techniques are expected to improve on these initial results over the next several years. Significantly, GNOME has inspired similar efforts using other networks of precision quantum sensors: atomic clocks, interferometers, cavities, superconducting gravimeters, and so on. In fact, the results of searches for exotic transient signals using clock networks have already been reported in the literature, constraining significant parameter space for various BSM scenarios. We would suggest that all experimentalists seriously consider accurately time-stamping, storing, and sharing their data so that searches for correlated signals due to exotic physics can be conducted a posteriori. One never knows what nature might be hiding just beyond the frontier of the precision of past measurements.

Vector-boson scattering probes quartic coupling

Figure 1

The electroweak (EW) sector of the Standard Model (SM) predicts self-interactions between W and Z gauge bosons through triple and quartic gauge couplings. Following first measurements at LEP and at the Tevatron during the 1990s, these interactions are now a core part of the LHC physics programme, as they offer key insights into EW symmetry breaking, which, in the case of the SM, causes the W and Z bosons to acquire mass as a result of the Brout–Englert–Higgs mechanism. The quartic coupling can be probed at colliders via rare processes such as tri-boson production, which the CMS collaboration observed for the first time earlier this year, and vector-boson scattering (VBS).

The scattering of longitudinally polarised W and Z bosons is a particularly interesting probe of the SM, as its tree-level amplitudes would violate unitarity at high energies without delicate cancellations between quartic gauge couplings and Higgs-boson contributions. The study of VBS processes therefore provides key insight into the quartic gauge couplings as well as the Higgs sector. These processes are also sensitive to enhancements predicted by models of physics beyond the SM that modify the Higgs sector, for example through additional Higgs bosons contributing to VBS.

Vector-boson scattering is characterised by the presence of two forward jets with a large di-jet invariant mass and a large rapidity separation. CMS previously reported the first observation of same-sign W±W± production using the data collected in 2016. The same-sign W±W± process is chosen because the background yield from other SM processes is smaller than for the opposite-sign W±W∓ process. The collaboration has now updated this analysis and performed new studies of the EW production of two jets in association with WZ and ZZ boson pairs, using data collected between 2016 and 2018 at a centre-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 137 fb⁻¹. Vector-boson pairs were selected via their decays to electrons and muons. The W±W± and WZ production modes were studied by simultaneously measuring their production cross sections using several kinematical observables. The measured total cross section for W±W± production, 3.98 ± 0.45 (± 0.37 stat. only) fb, is the most precise to date, with an uncertainty of roughly 10%. No deviation from the SM predictions is evident.
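
To make the event topology concrete, a VBS-style selection can be reduced to a pair of requirements on the two highest-pT (“tagging”) jets. The sketch below is our own illustration, not the CMS analysis code, and the threshold values are hypothetical placeholders:

```python
# Illustrative VBS-style tagging-jet selection (not the CMS analysis; cuts are placeholders).
import math

def passes_vbs_selection(jets, min_mjj=500.0, min_delta_y=2.5):
    """jets: list of dicts with pt [GeV], rapidity y, azimuth phi, mass m [GeV].
    Keep the event if the two leading jets have a large invariant mass and a
    large rapidity separation."""
    if len(jets) < 2:
        return False
    j1, j2 = sorted(jets, key=lambda j: j["pt"], reverse=True)[:2]

    def p4(j):  # four-momentum (E, px, py, pz) from pt, rapidity, phi, mass
        mt = math.hypot(j["m"], j["pt"])           # transverse mass
        return (mt * math.cosh(j["y"]),
                j["pt"] * math.cos(j["phi"]),
                j["pt"] * math.sin(j["phi"]),
                mt * math.sinh(j["y"]))

    e, px, py, pz = (a + b for a, b in zip(p4(j1), p4(j2)))
    mjj = math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))
    return mjj > min_mjj and abs(j1["y"] - j2["y"]) > min_delta_y
```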

Though the contribution from background processes induced by the strong interaction is considerably larger in the WZ and ZZ final states, the scattering centre-of-mass energy and the polarisation of the final-state bosons can be measured as these final states can be more fully reconstructed than in W±W± production. To optimally isolate signal from background, the kinematical information of the WZ and ZZ candidate events is exploited with a boosted decision tree and matrix element likelihood techniques, respectively (see figure). The observed statistical significances for the WZ and ZZ processes are 6.8 and 4.0 standard deviations, respectively, in line with the expected SM significances of 5.3 and 3.5 standard deviations. The possible presence of anomalous quartic gauge couplings could result in an excess of events with respect to the SM predictions. Strong new constraints on the structure of quartic gauge couplings have been set within the framework of dimension-eight effective-field-theory operators.

The observation of the EW production of W±W±, WZ and ZZ boson pairs is an essential milestone towards precision tests of VBS at the LHC, and there is much more to be learned from the future LHC Run-3 data. The High-Luminosity LHC should allow for very precise investigations of VBS, including finding evidence for the scattering of longitudinally polarised W bosons.

Common baryon source found in proton collisions

Figure 1

High-energy hadronic collisions, such as those delivered by the LHC, result in the production of a large number of particles. Particle pairs produced close together in both coordinate and momentum space are subject to final-state effects, such as quantum statistics, Coulomb forces and, in the case of hadrons, strong interactions. Femtoscopy uses the correlation of such pairs in momentum space to gain insights into the interaction potential and the spatial extent of an effective particle-emitting source.

Abundantly produced pion pairs are used to assess the size and evolution of the high-density, strongly interacting quark–gluon plasma formed in heavy-ion collisions. Recently, high-multiplicity pp collisions at the LHC have raised the possibility of observing collective effects similar to those seen in heavy-ion collisions, motivating detailed investigations of the particle source in such systems as well. A universal description of the emission source for all baryon species, independent of the specific quark composition, would open new possibilities to study the baryon–baryon interaction, and would impose strong constraints on particle-production models.

The ALICE collaboration has recently used p–p and p–Λ pairs to perform the first study of the particle-emitting source for baryons produced in pp collisions. The chosen data sample isolates the 1.7 permille highest-multiplicity collisions in the 13 TeV data set, yielding events with 30 to 40 charged particles reconstructed, on average, per unit of rapidity. The yields of protons and Λ baryons are dominated by contributions from short-lived resonances, accounting for about two thirds of all produced particles. A basic thermal model (the statistical hadronisation model) was used to estimate the number and composition of these resonances, indicating that the average lifetime of those feeding to protons (1.7 fm) is significantly shorter than of those feeding to Λ baryons (4.7 fm) – this would have led to a substantial broadening of the source shape if not properly accounted for. An explicit treatment of the effect of short-lived resonances was developed by assuming that all primordial particles and resonances are emitted from a common core source with a Gaussian shape. The core source was then folded with the exponential tails introduced by the resonance decays. The resulting root-mean-square width of the Gaussian core decreases from 1.3 fm to 0.85 fm as the pair’s transverse mass (mT) increases from 1.1 to 2.2 GeV, for both p–p and p–Λ pairs (see figure). The transverse mass of a particle is its total energy in a coordinate system in which its velocity is zero along the beam axis. The two systems exhibit a common scaling of the source size, indicating a common emission source for all baryons. The observed scaling of the source size with mT is very similar to that observed in heavy-ion collisions, where the effect is attributed to the collective evolution of the system.
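
For reference, this verbal definition corresponds to the standard expression (our notation; it is not spelled out in the article):

```latex
% Transverse mass: the particle's energy evaluated at zero longitudinal momentum
\[
m_T = \sqrt{m^2 + p_T^2} = E\,\big|_{p_z = 0}.
\]
```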

This result is a milestone in the field of correlation studies, as it directly relates to important topics in physics. The common source size observed for p–p and p–Λ pairs implies that the spatio-temporal properties of the hadronisation process are independent of the particle species. This observation can be exploited by coalescence models studying the production of light nuclei, such as deuterons or ³He, in hadronic collisions. Moreover, the femtoscopy formalism relates the emission source to the interaction potential between pairs of particles, enabling the study of the strong nuclear force between hadrons, such as p–K, p–Ξ, p–Ω and ΛΛ, with unprecedented precision.

A price worth paying

The LHC

Science, from the immutable logic of its mathematical underpinnings to the more fluid realms of the social sciences, has carried us from our humble origins to an understanding of such esoteric notions as gravitation and quantum mechanics. This knowledge has been applied to develop devices such as GPS trackers and smartphones – a story repeated in countless domains for a century or more – and it has delivered new tools for basic research along the way in a virtuous circle.

While it is undeniable that science has led us to a better world than that inhabited by our ancestors, and that it will continue to deliver intellectual, utilitarian and economic progress, advancement is not always linear. Research has led us up blind alleys and taken wrong turnings, yet its strength is its ability to process data, to self-correct and to make choices based on the best available evidence. The current coronavirus pandemic could prove to be a great educator in the methods of science, demonstrating how the right course of action evolves as the evidence accumulates. We’ve seen all too clearly how badly things can go wrong when individuals and governments fail to grasp the importance of evidence-based decision making.

Fundamental science has to make its case not only on the basis of cultural wealth, but also in terms of socioeconomic benefit. In particle physics, we also have no shortage of examples. These go well beyond the web, although an economic impact assessment of that particular invention is one that I would be very interested in seeing. As of 2014, there were some 42,200 particle accelerators worldwide, 64% of which were used in industry, a third for medical purposes and just 3% in research – not bad for a technology invented for fundamental exploration. It’s a similar story for techniques developed for particle detection, which have found their way into numerous applications, especially in medicine and biology.

The benefits of Big Science for economic prosperity become more pertinent if we consider the cumulative contributions to the 21st-century knowledge economy, which relies heavily on research and innovation. In 2018, more than 40% of the CERN budget was returned to industry in its member-state countries through the procurement of supplies and services, generating corollary benefits such as opening new markets. Increasing efforts, for example by the European Commission, to require research infrastructures to estimate their socioeconomic impact are a welcome opportunity to quantify and demonstrate our impact.

CERN has been subject to economic impact assessments since the 1970s, with one recent cost–benefit analysis of the LHC, conducted by economists at the University of Milan, concluding with 92% probability that benefits exceed costs, even when attaching the very conservative figure of zero to the value of the organisation’s scientific discoveries. More recent studies (CERN Courier September 2018 p51) by the Milan group, focusing on the High-Luminosity LHC, revealed a quantifiable return to society well in excess of the project’s costs, again, not including its scientific output. Extrapolating these results, the authors show that future colliders at CERN would bring similar societal benefits on an even bigger scale.

Across physics more broadly, a 2019 report commissioned by the European Physical Society found that physics-based industries generate more than 16% of total turnover and 12% of overall employment in Europe – representing a net annual contribution of at least €1.45 trillion, and topping contributions from the financial services and retail sectors (CERN Courier January/February 2020 p9).

Of course, there are some who feel that limited resources for science should be deployed in areas such as addressing climate change, rather than blue-sky research. These views can be persuasive, but are misleading. Fundamental research is every bit as important as directed research, and through the virtuous circle of science, they are mutually dependent. The open questions and mind-bending concepts explored by particle physics and astronomy also serve to draw bright young minds into science, even if individuals go on to work in other areas. Surveys of the career paths taken by PhD students working on CERN experiments fully bear this out (CERN Courier April 2019 p55).

In April 2020, as a curtain-raiser to the update of the European Strategy for Particle Physics, Nature Physics published a series of articles about potential future directions for CERN. An editorial pointed out the strong scientific and utilitarian case for future colliders, concluding that: “Even if the associated price tag may seem high – roughly as high as that of the Tokyo Olympic Games – it is one worth paying.” This is precisely the kind of argument that we as a community should be prepared to make if we are to ensure continuing exploration of fundamental physics in the 21st century and beyond.

CLOUD clarifies cause of urban smog

Urban flow patterns

Urban particle pollution ranks fifth in the risk factors for mortality worldwide, and is a growing problem in many built-up areas. In a result that could help shape policies for reducing such pollution, the CLOUD collaboration at CERN has uncovered a new mechanism that drives winter smog episodes in cities.

Winter urban smog episodes occur when new particles form in polluted air trapped below a temperature inversion: warm air above the inversion inhibits convection, causing pollution to build up near the ground. However, how additional aerosol particles form and grow in this highly polluted air has puzzled researchers because they should be rapidly lost through scavenging by pre-existing aerosol particles. CLOUD, which uses an ultraclean cloud chamber situated in a beamline at CERN’s Proton Synchrotron to study the formation of aerosol particles and their effect on clouds and climate, has found that ammonia and nitric acid can provide the answer.

Deriving in cities mainly from vehicle emissions, ammonia and nitric acid were previously thought to play a passive role in particle formation, simply exchanging with ammonium nitrate in the particles. However, the new CLOUD study finds that small inhomogeneities in the concentrations of ammonia and nitric acid can drive the growth rates of newly formed particles up to more than 100 times faster than seen before, but only in short spurts that have previously escaped detection. These ultrafast growth rates are sufficient to rapidly transform the newly formed particles to larger sizes, where they are less prone to being lost through scavenging, leading to a dense smog episode with a high number of particles.

“Although the emission of nitrogen oxides is regulated, ammonia emissions are not and may even be increasing with the latest catalytic converters used in gasoline and diesel vehicles,” explains CLOUD spokesperson Jasper Kirkby. “Our study shows that regulating ammonia emissions from vehicles could contribute to reducing urban smog.”

Lofty thinking

Jasper Kirkby

What, in a nutshell, is CLOUD?

It’s basically a cloud chamber, but not a conventional one as used in particle physics. We realistically simulate selected atmospheric environments in an ultraclean chamber and study the formation of aerosol particles from trace vapours, and how they grow to become the seeds for cloud droplets. We can precisely control all the conditions found throughout the atmosphere such as gas concentrations, temperature, ultraviolet illumination and “cosmic ray” intensity with a beam from CERN’s Proton Synchrotron (PS). The aerosol processes we study in CLOUD are poorly known yet climatically important because they create the seeds for more than 50% of global cloud droplets.

We have 22 institutes and the crème de la crème of European and US atmospheric and aerosol scientists. It’s a fabulous mixture of physicists and chemists, and the skills we’ve learned from particle physics in terms of cooperating and pooling resources have been incredibly important for the success of CLOUD. It’s the CERN model, the CERN culture that we’ve conveyed to another discipline. We implemented the best of CERN’s know-how in ultra-clean materials and built the cleanest atmospheric chamber in the world.

How did CLOUD get off the ground?

The idea came to me in 1997 during a lecture at CERN given by Nigel Calder, a former editor of New Scientist magazine, who pointed out a new result from satellite data about possible links between cosmic rays and cloud formation. That Christmas, while we visited relatives in Paris, I read a lot of related papers and came up with the idea to test the cosmic ray–cloud link at CERN with an experiment I named CLOUD. I did not want to ride into another field telling those guys how to do their stuff, so I wrote a note of my ideas and started to make contact with the atmospheric community in Europe and build support from lab directors in particle physics. I managed to assemble a dream team to propose the experiment to CERN. The hard part was convincing CERN that they should do this crazy experiment. We proposed it in 2000 and it was finally approved in 2006, which I think is a record for CERN to approve an experiment. There were some people in the climate community who were against the idea that cosmic rays could influence clouds. But we persevered and, once approved, things went very fast. We started taking data in 2009 and have been in discovery mode ever since.

Do you consider yourself a particle physicist or an atmospheric scientist?

An experimental physicist! My training and my love is particle physics, but judging by the papers I write and review, I am now an atmospheric scientist. It was not difficult to make this transition. It was a case of going back to my undergraduate physics and high-school chemistry and learning on the job. It’s also very rewarding. We do experiments, like we all do at CERN, on a 24/7 basis, but with CLOUD I can calculate things in my notebook and see the science that we are doing, so we know immediately what the new stuff is and we can adapt our experiments continuously during our run.

On the other hand, in particle physics the detectors are running all the time but we really don’t know what is in the data without years of very careful analysis afterwards, so there is this decoupling of the result from the actual measurement. Also, in CLOUD we don’t need a separate discipline to tell us about the underlying theory or beauty of what we are doing. In CLOUD you’re the theorist and the experimentalist at the same time – like it was in the early days of particle physics.

How would you compare the Standard Model to state-of-the-art climate models?

It’s night and day. The Standard Model (SM) is such a well-formed and quantitatively precise theory that we can see incredibly subtle signals in detectors against a background of something that is extremely well understood. Climate models, on the other hand, are trying to simulate a very complex system describing what’s happening on Earth’s surface, involving energy exchanges between the atmosphere, the oceans, the biosphere, the cryosphere … and the influence of human beings. The models involve many parameters that are poorly understood, so modellers have to make plausible yet uncertain choices. As a result, there is much more flexibility in climate models, whereas there is almost none in the SM. Unfortunately, this flexibility means that the predictive power of such models is much weaker than it is in particle physics.

The CLOUD detector

There are skills such as the handling of data, statistics and software optimisation where particle physics is probably the leading science in the world, so I would love to see CERN sponsor a workshop where the two communities could exchange ideas and perhaps even begin to collaborate. This is what CLOUD has done. It’s politically correct to talk about the power of interdisciplinary research, but it’s very difficult in practical terms – especially when it comes to funding because experiments often fall into the cracks between funding agencies.

How has CLOUD’s focus evolved during a decade of running?

CLOUD was designed to explore whether variations of cosmic rays in the atmosphere affect clouds and climate, and that’s still a major goal. What I didn’t realise at the beginning is how important aerosol–particle formation is for climate and health, and just how much is not yet understood. The largest uncertainty facing predictions of global warming is not due to a lack of understanding about greenhouse gases, but about how much aerosols and clouds have increased since pre-industrial times from human activities. Aerosol changes have offset some of the warming from greenhouse gases but we don’t know by how much – it could have offset almost nothing, or as much as one half of the warming effect. Consequently, when we project forwards, we don’t know how much Earth will warm later this century to better than a factor of three.

Many of our experiments are now aimed at reducing the aerosol uncertainties in anthropogenic climate change. Since all CLOUD experiments are performed under different ionisation conditions, we are also able to quantify the effect of cosmic rays on the process under study. A third major focus concerns the formation of smog under polluted urban conditions.

What have CLOUD’s biggest contributions been?

We have made several major discoveries and it’s hard to rank them. Our latest result (CLOUD clarifies cause of urban smog) on the role of ammonia and nitric acid in urban environments is very important for human health. We have found that ammonia and nitric acid can drive the growth rates of newly formed particles up to more than 100 times faster than seen before, but only in short spurts that have previously escaped detection. This can explain the puzzling observation of bursts of new particles that form and grow under highly polluted urban conditions, producing winter smog episodes. An earlier CLOUD result, also in Nature, showed that a few parts-per-trillion of amine vapours lead to extremely rapid formation of sulphuric acid particles, limited only by the kinetic collision rate. We had a huge fight with one of the referees of this paper, who claimed that it couldn’t be atmospherically important because no-one had previously observed it. Finally, a paper appeared in Science last year showing that sulphuric acid–amine nucleation is the key process driving new particle formation in Chinese megacities.

In CLOUD you’re the theorist and the experimentalist at the same time – like it was in the early days of particle physics

A big result from the point of view of climate change came in 2016 when we showed that trees alone are capable of producing abundant particles and thus cloud seeds. Prior to that it was thought that sulphuric acid was essential to form aerosol particles. Since sulphuric acid was five times lower in the pre-industrial atmosphere, climate models assumed that clouds were fewer and thinner back then. This is important because the pre-industrial era is the baseline aerosol state from which we assess anthropogenic impacts. The fact that biogenic vapours make lots of aerosols and cloud droplets reduces the contrast in cloud coverage (and thus the amount of cooling offset) between then and now. The formation rate of these pure biogenic particles is enhanced by up to a factor 100 by galactic cosmic rays, so the pristine pre-industrial atmosphere was more sensitive to cosmic rays than today’s polluted atmosphere.

There was an important result the very first week we turned on CLOUD, when we saw that sulphuric acid does not nucleate on its own but requires ammonia. Before CLOUD started, people were measuring particles but they weren’t able to measure the molecular composition, so many experiments were being fooled by unknown contaminants.

Have CLOUD results impacted climate policy?

The global climate models that inform the Intergovernmental Panel on Climate Change (IPCC) have begun to incorporate CLOUD aerosol parameterisations, and they are impacting estimates of Earth’s climate sensitivity. The IPCC assessments are hugely impressive works of the highest scientific quality. Yet, there is something of a disconnect between what climate modellers do and what we do in the experimental and observational world. The modellers tend to work in national centres and connect with experiments through the latter’s publications, at the end of the chain. I would like to see much closer linkage between the models and the measurements, as we do in particle physics where there is a fluid connection between theory, experiment and modelling. We do this already in CLOUD, where we have several institutes who are primarily working on regional and global aerosol-cloud models.

What’s next on CLOUD’s horizon?

The East Hall at the PS is being completely rebuilt during CERN’s current long shutdown, but the CLOUD chamber itself is pretty much the only item that is untouched. When the East Area is rebuilt there will be a new beamline and a new experimental zone for CLOUD. We think we have a 10-year programme ahead to address the questions we want to and to settle the cosmic ray–cloud–climate question. That will take me up to just over 80 years old!

Will humanity succeed in preventing catastrophic climate change?

I am an optimist, so I believe there is always a way out of everything. It’s very understandable that people want to freeze the exact temperature of Earth as it is now because we don’t want to see a flood or desert in our back garden. But I’m afraid that’s not how Earth is, even without the anthropogenic influence. Earth has gone through much larger natural climate oscillations, even on the recent timescale of Homo sapiens. That being said, I think Earth’s climate is fundamentally stable. Oceans cover two thirds of Earth’s surface and their latent heat of vaporisation is a huge stabiliser of climate – they have never evaporated nor completely frozen over. Also, only around 2% of CO2 is in the atmosphere and most of the rest is dissolved in the oceans, so eventually, over the course of several centuries, CO2 in the atmosphere will equilibrate at near pre-industrial levels. The current warming is an important change – and some argue it could produce a climate tipping point – but Earth has gone through larger changes in the past and life has continued. So we should not be too pessimistic about Earth’s future. And we shouldn’t conflate pollution and climate change. Reducing pollution is an absolute no-brainer, but environmental pollution is a separate issue from climate change and should be treated as such.
