Max Zolotorev: 1941–2020

Max Zolotorev

Max Samuilovich Zolotorev, a pioneer of experimental studies of atomic parity violation, passed away on 1 April in his home in Oregon, US.

Max was born in Petrovsk, a small town not far from the Russian city of Saratov, to which his mother had been evacuated ahead of the advancing German army. Despite showing unusual talent and ability from an early age, upon graduating from secondary school he was not admitted to an institute or even a vocational school because he was Jewish. After eventually securing a position at the Novosibirsk Electrotechnical Institute in Siberia, where he demonstrated outstanding academic performance, he was able to transfer to the newly founded Novosibirsk State University. He graduated in 1966, before obtaining his first and second doctoral degrees in 1974 and 1979 at the Institute of Nuclear Physics in Novosibirsk Akademgorodok.

Max started out working on measurements of the hyperon magnetic moments. In the early 1970s, however, he was drawn into studying fundamental physics using the methods of atomic, molecular and optical physics. Together with his mentor and colleague Lev Barkov, he made the first observation of parity violation in atoms by measuring the optical rotation of the plane of polarisation of light propagating through a bismuth vapour.

The 1978 measurement came at a crucial time in the development of the Standard Model. While observations of high-energy neutrino scattering on nuclei at CERN in 1973 provided evidence of neutral weak currents, there was no evidence that the neutral weak current violated parity as predicted by the Glashow–Weinberg–Salam (GWS) model. Furthermore, earlier atomic parity violation experiments had produced null results, in contradiction with theoretical predictions. The observation of parity violation in bismuth, followed later by measurements of parity-violating electron scattering at SLAC, was crucial evidence that the GWS model was indeed the correct description of the weak interaction.

Max and his colleagues also established the foundation for some of today’s most sensitive magnetometers with their measurements in the late 1980s of nonlinear Faraday rotation, clearly identifying the crucial role of quantum coherences. In 1989 Max emigrated to the US and took up a research position at SLAC, later moving to Lawrence Berkeley National Laboratory, where he worked until his retirement in 2018. At SLAC, Max and colleagues proposed using lasers to cool hadrons in colliders as a variation on van der Meer’s stochastic cooling method. The “optical stochastic cooling” concept will soon be tested at Fermilab by a group led by a former student of Max’s. Another of his co-inventions is the so-called “slicing method” to produce ultrashort pulses of X-rays essential for time-resolved studies of the properties of condensed matter.

Max Zolotorev was an inspiring mentor and teacher who always set the highest expectations for his students. His ability to find “weak spots” in one’s scientific logic was legendary. One of Max’s great insights was that, as physicists, we should never design our experiments around what was sitting in our labs or in our heads. Instead, we should choose deep and important problems, think hard about them and develop the cleverest way to approach them that we can, learn new subjects, build new apparatus, and push our boundaries and limits. Max’s work exemplified the curiosity, creativity and rigour of physics at its best.

The LHC as a photon collider

ATLAS Forward Proton Spectrometer

Protons accelerated by the LHC generate a large flux of quasi-real high-energy photons that can interact to produce particles at the electroweak scale. Using the LHC as a photon collider, the ATLAS collaboration announced a set of landmark results at the 40th International Conference on High Energy Physics last week, among which is the first observation of the photo-production of W-boson pairs.

As it proceeds via trilinear and quartic gauge-boson vertices involving two W bosons and either one or two photons, the production of a pair of W bosons from two photons (γγ → WW) tests a longstanding prediction of the Standard Model (SM). This process is extremely rare but predicted precisely by electroweak theory, such that any observed deviation would suggest that new physics is at play. The measurement relies on the large 139 fb⁻¹ dataset of proton–proton collisions recorded by ATLAS in LHC Run 2.

A sample of γγ → WW interactions

In photon collisions the protons usually remain intact or are excited into a higher-energy state, with the products of any subsequent decay not reaching the innermost components of the ATLAS detector. In these cases, the electron and muon from the decays of the W bosons – an event topology chosen to avoid the large background from same-flavour lepton pairs – are the only particles detected in the vicinity. However, if charged particles arise from nearby proton–proton collisions, the clean γγ → WW signal can be missed. The main background is W-boson pairs produced in head-on proton–proton collisions in which particles from the break-up of the protons go undetected due to imperfect detector coverage or reconstruction (figure 1). A total of 127 background events were predicted, compared with 307 events observed in the data, corresponding to a signal excess of 8.4 standard deviations. This establishes the existence of light transforming into particles with weak-scale masses – a remarkable and previously unobserved phenomenon.
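The scale of the excess can be checked with a simple counting estimate. This is only a back-of-the-envelope sketch: the quoted 8.4σ comes from a full profile-likelihood fit including systematic uncertainties, which the naive formula below ignores, and the 15% background uncertainty used for illustration is an assumption, not an ATLAS number.

```python
import math

n_obs = 307   # events observed in data (from the text above)
n_bkg = 127   # predicted background events

signal = n_obs - n_bkg  # 180 candidate signal events

# Naive Poisson significance, ignoring systematic uncertainties:
z_naive = signal / math.sqrt(n_bkg)  # roughly 16 sigma

# Folding an assumed background uncertainty into the denominator
# illustrates why the published significance is lower:
def z_with_syst(s, b, sigma_b):
    return s / math.sqrt(b + sigma_b**2)

z_syst = z_with_syst(signal, n_bkg, 0.15 * n_bkg)  # assumed 15% uncertainty
```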

Innovation

Precisely testing SM predictions for photon collisions requires accurate knowledge of the rate at which protons remain intact relative to the rate at which they break apart. This is challenging to predict theoretically, and probing these rates unambiguously requires directly detecting the intact protons. The ATLAS forward-proton spectrometer (AFP) is becoming increasingly indispensable for this task. Among the newest additions to the ATLAS experiment, and located a few millimetres from the beam 210 m on either side of the collision point, the AFP can detect protons that have been scattered in photon–photon collisions but have nevertheless remained focused by the LHC's magnets. Its pioneering results so far analyse a standard-candle process in which a proton is scattered in photon collisions that produce electron or muon pairs (γγ → ℓℓ). For these signals, the measured proton energy loss is equal to that predicted from the lepton pairs measured in the main ATLAS detector (figure 2). ATLAS reported 180 events with a proton whose kinematics match the lepton pair, with an expected background of about 20 events. This corresponds to a significance exceeding nine standard deviations for both lepton flavours, establishing the presence of the signal and the successful operation of the AFP spectrometer in high-luminosity data. The detectors were sufficiently well understood to measure the cross sections of these processes.
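The kinematic match exploited here follows from exclusive two-photon kinematics: the fractional energy loss ξ of each scattered proton is fixed by the invariant mass and rapidity of the central lepton pair, ξ± = (m/√s)e^(±y). A minimal sketch of this standard relation (the event values below are illustrative, not ATLAS data):

```python
import math

SQRT_S = 13000.0  # Run-2 proton-proton centre-of-mass energy in GeV

def proton_xi(m_central, y_central, sqrt_s=SQRT_S):
    """Fractional energy losses of the two outgoing protons, predicted
    from the central system's invariant mass (GeV) and rapidity."""
    xi_plus = (m_central / sqrt_s) * math.exp(+y_central)
    xi_minus = (m_central / sqrt_s) * math.exp(-y_central)
    return xi_plus, xi_minus

# Illustrative dilepton event: m_ll = 100 GeV at rapidity y = 1.
xi_p, xi_m = proton_xi(100.0, 1.0)
# Comparing these predictions with the energy loss measured in the AFP
# stations on each side tags the event as exclusive photon-photon production.
```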

A sample of γγ → ℓℓ events

The observations of γγ → WW and of scattered protons in γγ → ℓℓ interactions are long-awaited milestones in an emerging experimental programme studying photon collisions. They complement recent heavy-ion results in which ATLAS measured muon pairs from photon collisions and the kinematic properties of light-by-light scattering – a very rare process predicted by quantum electrodynamics. Interestingly, the latter was also used to search for the axion-like particles predicted by certain extensions of the SM.

The techniques developed to study γγ → WW and γγ → ℓℓ interactions lay the groundwork for future, more detailed tests of the SM. Further results using the AFP spectrometer can improve theoretical understanding of photon collisions that will also benefit future measurements of γγ → WW production. These landmark experimental feats will only become more interesting with the increased dataset of Run 3 and the high-luminosity LHC.

Ulrich Becker 1938–2020

Ulrich Becker. Credit: MIT

Ulrich J Becker, professor emeritus at MIT, passed away on 10 March at the age of 81. He was a major contributor to the L3 experiment, the Alpha Magnetic Spectrometer and the advancement of international collaborations in high-energy physics.

Becker was born in Dortmund, Germany, on 17 December 1938 – the day that nuclear fission was discovered in Berlin. As a young man he worked ably as an electrician and a coal miner, and even in steel smelting, but he was more drawn to physics. He studied at the University of Marburg and obtained his PhD in Hamburg, focusing on the photo-production and leptonic decays of vector mesons.

In late 1965 Becker met Sam Ting, who took him into his group at DESY, which was using the 6 GeV synchrotron to measure the size of the electron. It was a complementary match: Becker was a dogged researcher with detector and hardware acumen, and Ting was a master of scientific organisation and politics. They presented their results at the XIIIth International Conference on High Energy Physics at Berkeley in 1966, showing that electrons have no measurable size, which contradicted earlier results.

In 1970 Becker joined the MIT faculty, where he found mentors including Victor Weisskopf and Martin Deutsch. He was promoted to associate professor in 1973, and the following year he began designing a precision spectrometer for Brookhaven National Laboratory. He joined a group led by Ting which used the spectrometer to search for heavy particles produced when protons were smashed into a fixed target of beryllium. Instead, the team recorded an unexpected bump in the data corresponding to the production of a heavy particle with a lifetime that was about a thousand times longer than predicted.

Meanwhile, MIT alumnus Burton Richter was reviewing data from the Stanford Linear Accelerator Center when he too found what looked like a long-lived heavy resonance. Ting flew to Stanford in November, and he and Richter quickly organised a lab seminar. They presented their discovery of the J/ψ particle, a bound state of a charm quark and antiquark, on 11 November 1974, sparking rapid changes in high-energy physics. One of Becker’s favourite stories was of going to Munich in 1975 to share the finding, when Werner Heisenberg interrupted to comment: “Whenever they don’t know what it is, they invent a new quark.” To which Becker replied: “Look, Professor Heisenberg, I’m not arguing whether this is charm or not charm. I’m telling you it’s a particle which doesn’t go away.” A deadly silence followed before Heisenberg replied: “Accepted”. Ting and Richter shared the 1976 Nobel Prize in Physics for the J/ψ discovery. Had MIT’s group alone discovered the particle, it is likely that Becker would also have shared in the prize.

Becker, who was made a full professor at MIT in 1977, developed several other major instruments that catalysed discoveries. His large-area drift chambers provided large acceptance coverage for experiments, and his drift tubes enabled physicists to measure particles near the interaction point. Those developments led Becker to design and build the huge muon detectors for the MARK-J experiment at DESY, which resulted in the discovery of the three-jet pattern from gluon radiation. Becker then led hundreds of colleagues in designing the muon detector for the L3 experiment at LEP. He also made important contributions to advancing international collaboration in high-energy physics, for example involving China.

In 1993 Becker started to work with MIT’s team on building the Alpha Magnetic Spectrometer (AMS) – another Ting project, born when he and Becker were on a coffee break while working on L3. The first AMS detector flew in the Space Shuttle in June 1998 and gathered about 100 hours of cosmic-ray data. Becker then went on to help design the transition radiation detector for AMS-02, which has so far collected more than 150 billion cosmic-ray events from its position on the International Space Station.

He enjoyed reviving broken and abandoned mechanical items. One of his biggest renovations was MIT’s cyclotron, which he converted into one of the biggest functioning magnets in the country, with a strength of up to 1 T. He used it to develop particle detectors for the International Linear Collider, and to characterise gas mixtures for the design of drift and other gas detectors in different magnetic and electric fields.

Becker was a mentor to many great physicists, and invested much to ensure his students received an excellent education. In 2013 he transitioned to emeritus status, but still he came in every day to mentor students. At the age of 81, he even picked up Python to continue his craft. His friendly approach and deep understanding of physics made him a superb teacher, even if his style was highly individual.

Our community has lost an excellent researcher and teacher, and a wonderful colleague and human being. Ulrich Becker is survived by his wife Gerda, his three children and two grandchildren.

Double digits for ultra-rare kaon decay

CERN’s NA62 collaboration has presented its latest progress in the search for K+→π+νν̄ – a “golden decay” with exceptional sensitivity to physics beyond the Standard Model. The new analysis, which includes the full dataset collected up to 2018, provides the strongest evidence yet for the existence of this ultra-rare process, at 3.5σ significance. Presenting the result today during the penultimate plenary session of the 40th International Conference on High-Energy Physics, which is being hosted virtually from Prague, lead analyst Giuseppe Ruggiero of Lancaster University described the result as a great achievement. “After several years of a very challenging analysis, battling ten orders of magnitude of background over the signal, we are proud to have achieved the first statistically significant evidence for a process which has great sensitivity to new physics,” he says.

A flavour-changing, neutral-current process, K+→π+νν̄ is highly suppressed in the Standard Model, with contributions from Z-penguin and box diagrams with W, top-quark and charm exchanges. The measured branching fraction of 110 +40 −35 per trillion K+→π+νν̄ decays is in agreement with the Standard Model prediction of 84 ± 10 per trillion (JHEP 11 033). “A particular and very important virtue of K+→π+νν̄ is its clean theoretical character, which can only be matched among meson decays by KL→π0νν̄, and possibly Bs,d→μ+μ−,” says Andrzej Buras of the Institute for Advanced Study in Garching, Germany. “This is related to the fact that the low-energy hadronic matrix elements are just those of the quark currents between the hadronic states, which can be extracted from the leading semileptonic decay K+→π0e+ν,” he explains, noting that higher-order QCD and electroweak corrections are already well known, and lattice QCD calculations should soon tackle the small, “long-distance” contributions to the amplitude.

Historical measurements and predictions of the branching fraction for K+→π+νν̄

NA62 observes the 6% of positively charged kaons that are produced when 450 GeV protons from the Super Proton Synchrotron strike a beryllium target. The analysis is challenging because of the tiny branching fraction and the presence of a neutrino pair in the final state. Pioneering the technique of observing kaon decays in flight, the collaboration measures the kinematics of both the initial kaon and the final-state pion to isolate the kinematic signature of K+→π+νν̄, before then suppressing other decay modes by a further eight orders of magnitude using particle-identification techniques.
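The kinematic signature in question is conventionally the squared missing mass formed from the kaon and pion four-momenta: two-body background decays peak at fixed values of this quantity, while the three-body signal is smeared by the unseen neutrino pair. A sketch of the idea, using PDG masses and, for simplicity, a kaon decaying at rest (the numbers are illustrative, not NA62 data):

```python
def m2_miss(p_K, p_pi):
    """Squared missing mass in GeV^2 from two four-momenta (E, px, py, pz)."""
    E, px, py, pz = (k - p for k, p in zip(p_K, p_pi))
    return E**2 - px**2 - py**2 - pz**2

# Check against the dominant two-body background K+ -> pi+ pi0,
# with the kaon at rest:
m_K, m_pip, m_pi0 = 0.493677, 0.139570, 0.134977  # PDG masses in GeV
E_pi = (m_K**2 + m_pip**2 - m_pi0**2) / (2 * m_K)  # two-body pion energy
p_pi = (E_pi**2 - m_pip**2) ** 0.5
m2 = m2_miss((m_K, 0.0, 0.0, 0.0), (E_pi, 0.0, 0.0, p_pi))
# m2 equals m_pi0**2, so this background forms a sharp peak that
# kinematic cuts can reject; the signal's neutrino pair does not.
```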

The collaboration’s new result adds a further 17 events to its previous analysis (arXiv:2007.08218, submitted to JHEP), wherein three events observed in 2016 and 2017 yielded an estimated branching fraction of 47 +72 −47 decays per trillion. The previous best measurement was by Brookhaven National Laboratory’s E787 and E949 experiments in the 2000s, which together inferred a branching fraction of 173 +115 −105 per trillion (Phys. Rev. Lett. 101 191802).

Meanwhile in Japan

The NA62 result is expected soon to be complemented by a measurement of the related CP-violating KL→π0νν̄ decay by the KOTO collaboration at the J-PARC research facility in Tokai, Japan. This even rarer process has a predicted Standard Model branching fraction of just 34 ± 6 per trillion. KOTO’s 2015 data yielded no event candidates and a 90% confidence upper limit on the branching fraction of 3.0 per billion (Phys. Rev. Lett. 122 021802). The collaboration is now finalising its results from the 2016–2018 run, and plans to improve its sensitivity to less than 0.1 per billion by increasing the beam intensity and upgrading the KOTO detectors.

As experimental uncertainties are expected to approach the theoretical precision in coming years, explains Buras, K+→π+νν̄ and KL→π0νν̄ decays can probe scales as high as a few hundred TeV – beyond the reach of most B-meson decays. “K+→π+νν̄ is most sensitive to hypothetical Z′ gauge bosons, vector-like quark models, supersymmetry and some leptoquark models,” he says. “LHCb studies of KS→μ+μ− and Belle II studies of B→K(K*)νν̄ will also have a part to play, allowing a global analysis to test not only the concept of minimal flavour violation, but also probe new CP-violating phases and right-handed currents.”

Theorists expect to reach an accuracy of 5% on the predicted K+→π+νν̄ branching ratio towards the end of the decade. In the same period, the NA62 team is seeking to sharpen the precision of its measurement from the current 30% down to 10%. The collaboration will resume data taking in 2021, following upgrades to both beam and detector taking place during the ongoing second long shutdown of CERN’s accelerator complex.

“The horizon of a new-physics programme with a sensitivity to decay rates well below the 10⁻¹¹ level is now in sight,” says NA62 spokesperson Cristina Lazzeroni of the University of Birmingham, UK. “The instruments and techniques developed for the NA62 experiment will lead to the next generation of rare-kaon-decay experiments. For the longer-term future, a high-intensity kaon-beam programme is starting to take shape at CERN, with prospects to measure the K+→π+νν̄ decay to a few per cent, address the analogous decay of the neutral kaon, and reach extreme sensitivities to a large variety of rare kaon decays that are complementary to investigations in the beauty-quark sector.”

Neutrino 2020 zooms into virtual reality

Some 4,350 people from every continent, including Antarctica, participated from 22 June to 2 July in the XXIX International Conference on Neutrino Physics and Astrophysics, which was hosted online by Fermilab and the University of Minnesota. Originally planned as a five-day, in-person June meeting at a large hotel in Chicago city centre, the conference was quickly re-organised in March, due to COVID-19, into an online programme of eight half days over two weeks, four poster sessions with both web-based and virtual-reality displays, and the use of the Slack platform for speaker questions and ongoing discussions.

A highlight of the conference was the first observation of solar CNO neutrinos by the Borexino collaboration, which operates a 280-tonne liquid-scintillator detector in Italy’s Gran Sasso Laboratory. Dominant in stars more than 1.3 times the mass of the sun, the CNO cycle accounts for about 1% of the sun’s energy and generates a difficult-to-detect neutrino flux similar to backgrounds from decays in the detector of ²¹⁰Bi and its daughter nucleus ²¹⁰Po. Gioacchino Ranucci (INFN, Milano) explained that the spectral fit to the observed data returns only the sum of the CNO and ²¹⁰Bi contributions. “The quest for CNO is turned into the quest for ²¹⁰Bi through ²¹⁰Po,” he emphasised. “With this outcome, Borexino has completely unravelled the two processes powering the Sun – the pp chain and the CNO cycle.” The final data analysis yielded a 5.1σ significance against the hypothesis of no CNO neutrinos, and a CNO flux at the Earth of 7.0 +2.9 −1.9 × 10⁸ cm⁻² s⁻¹.

Another highlight from Gran Sasso was the report from the Gerda collaboration on the search for neutrinoless double-beta decay. If observed, this process would confirm the long-suspected Majorana, rather than Dirac, nature of neutrinos – a beyond-the-Standard-Model feature with intriguing implications for why the neutrino mass is so small. Since Neutrino 2018, Gerda has nearly doubled its Phase 2 exposure and added a liquid-argon veto and a new detector string. The now complete Phase 2 result is a 90% confidence-level half-life limit of >1.8 × 10²⁶ years according to a frequentist analysis, or >1.4 × 10²⁶ years according to a Bayesian analysis with additional prior assumptions. Talks describing a half-dozen other double-beta-decay experiments displayed the high level of interest in this field.

Sterile neutrinos

Searches for additional “sterile” neutrinos with no Standard-Model gauge interactions were also featured. Takasumi Maruyama (KEK) described the liquid-scintillator JSNS2 experiment as a direct test of the controversial LSND result, first reported about 25 years ago. JSNS2 collected its first data during the three weeks before Neutrino 2020. Adrien Hourlier (MIT) reported on the now complete analysis of the data collected by MiniBooNE over the past 17 years. Combining neutrino and anti-neutrino modes, MiniBooNE reports a 4.8σ excess. Hourlier presented soon-to-be-published detailed distributions which the collaboration hopes “will guide theorists to explain our data”. Minerba Betancourt (Fermilab) then described the Fermilab Short-Baseline Neutrino (SBN) programme, which will use three detectors to obtain a definitive result on neutrino oscillations at an LSND- and MiniBooNE-like ratio of oscillation distance to energy of ~1 m/MeV. The beam neutrino energy peaks at 700 MeV. A new liquid-argon near detector (SBND) will be placed 110 m from the target; the existing MicroBooNE is located at 470 m, and the ICARUS detector, moved from Gran Sasso, is installed at 600 m. Thomas Carroll (Wisconsin) reported on sterile-neutrino limits from muon disappearance determined by the now completed long-baseline MINOS/MINOS+ collaboration. These limits are in tension with the appearance data from both LSND and MiniBooNE when analysed as evidence for sterile neutrinos.

Two talks described the world’s hundred-kilometre-scale neutrino-oscillation experiments, NOvA and T2K. The degeneracy of mass difference, mixing angle, hierarchy and possible CP violation makes interpretation of these experiments’ results quite complex. Interestingly, there is mild tension, albeit only at the 1σ level, between the NOvA and T2K results regarding leptonic CP conservation and the neutrino mass hierarchy. The two collaborations are now working together on a combined analysis. Several talks discussed future initiatives. Lia Merminga (Fermilab) reported on LBNF and PIP-II, which will result in a new neutrino beam from Fermilab to the Sanford Laboratory in South Dakota for the DUNE experiment. Combined, these two projects will deliver a beam power of 2.4 MW, more than three times the intensity of the current NuMI beam. Michael Mooney (Colorado State) reported on the enormous progress of the DUNE project, with two successful prototype detectors operating at CERN and pre-excavation work progressing at the Sanford Laboratory. Complementary to the liquid-argon technology of DUNE is the recently approved Hyper-Kamiokande water-Cherenkov detector, which was described by Masaki Ishitsuka (Tokyo University of Science). Hyper-K will have a total mass of 260 kilotonnes and 8.4 times the fiducial volume of the current Super-Kamiokande detector.

While much of Neutrino 2020 was modelled after the usual features of an in-person conference, the virtual-reality (VR) poster presentation was novel. Marco Del Tutto (Fermilab) created multiple virtual “rooms” for five posters each, along with additional rooms for topical discussions, sightseeing in Chicago and visiting Fermilab. The VR’s most valuable feature was that it facilitated dialogue: participants’ avatars could move around the space and speak with one another, so that a group of avatars clustered around a poster could discuss it together. The VR feature attracted 3,409 participants, and was supplemented by two-minute videos from presenters, which drew 5,800 YouTube views and 60,600 web displays.

In closing remarks, the organisers acknowledged the challenges of an online conference, but also emphasised the strengths of this novel approach. The exciting physics of Neutrino 2020 was made available to an extensive and diverse audience, including many scientists who would not have been able to attend an in-person conference because of funding, visas, family concerns or other issues. About 60% of participants were students or postdocs, and the conference reached participants from 67 countries. The Slack discussions and posts on social media indicated widespread praise that the online format worked as well as it did. Some aspects of Neutrino 2020 may well affect the planning and organisation of future in-person and online conferences.

Eugène Cremmer: 1942–2019

Eugène Cremmer

Theorist Eugène Cremmer, who passed away in October 2019, left his mark in superstring and supergravity theory. He will be remembered across the world as a brilliant colleague, as original as he was likeable.

Eugène was born in Paris in 1942 to parents who ran a bookstore. The neighbourhood children were firmly oriented towards vocational schools, and Eugène was trained in woodworking. He was eventually spotted by a mathematics teacher, obtained a technical Baccalauréat degree and then, in 1962, went on to study mathematics at the École Normale Supérieure (ENS) in Paris. In 1968–1969, following the burst of research into dual models triggered by Daniele Amati and Martinus Veltman, Eugène began to compute higher-loop diagrams in a remarkable series of technically impressive papers. The first was written with André Neveu, and others with Joël Scherk in 1971–1972 while a postdoc at CERN. At that time, CERN was an important cradle of string theory, with groups from different countries forming a critical mass.

In late 1974 Eugène returned to ENS with a small group of pilgrims from the theoretical-physics group at Orsay. He worked with Jean-Loup Gervais on string field theory and later collaborated with Scherk, the author, and several visitors on supersymmetry, supergravity and applications to string theory. His revolutionary 1976 paper with Scherk introduced the winding of a closed string around a cyclic dimension. This would turn out to be crucial for heterotic string models, T duality and mirror symmetry, and for the so-called Scherk–Schwarz compactification, and was soon applied to branes. The 1977 proposal with Scherk of spontaneous compactification of the six extra dimensions of space remains central in modern string theory. In 1978–1979 his pioneering papers on 11D supergravity and 4D N = 8 supergravity made the 11th dimension inescapable and exhibited exceptional (now widely used) duality symmetries. For these works, Eugène received the CNRS silver medal in 1983. Some 15 years later, duality symmetries were extended to higher-degree forms.

The successes of Eugène’s work led to many invitations from abroad. Though he chose to remain in France, he maintained collaborations and activities at a high level, and was director of the ENS theoretical-physics laboratory in 2002–2005. Eugène was as regular as clockwork, arriving and leaving the lab at the same time every day – the only exception I witnessed was due to Peter van Nieuwenhuizen’s work addiction, which he enjoyably inflicted upon us for a while. At 12:18 p.m. Eugène would always gather all available colleagues to go to lunch, which led Guido Altarelli to observe: “Were Eugène to disappear, the whole lab would starve to death!” Eugène kept his papers in a cryptic pre-computer order, and nobody could understand how he was able to extract any needed reference in no time, always remembering most of the content. He cultivated his inner energy by walking quickly while absorbed in thought. We have lost a role model and a modest, full-time physicist.

Tuning in to neutrinos

DUNE’s dual-phase prototype detector

In traditional Balinese music, instruments are made in pairs, with one tuned slightly higher in frequency than its twin. The notes are indistinguishable to the human ear when played together, but the sound recedes and swells a couple of times each second, encouraging meditation. This is a beating effect: fast oscillations at the mean frequency inside a slowly oscillating envelope. Similar physics is at play in neutrino oscillations. Rather than sound intensity, it’s the probability to observe a neutrino with its initial flavour that oscillates. The difference is how long it takes for the interference to make itself felt. When Balinese musicians strike a pair of metallophones, the notes take just a handful of periods to drift out of phase. By contrast, it takes more than 10²⁰ de Broglie wavelengths and hundreds of kilometres for neutrinos to oscillate in experiments like the planned mega-projects Hyper-Kamiokande and DUNE.
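The beat analogy rests on a trigonometric identity: the sum of two equal-amplitude tones at nearby frequencies equals a carrier at the mean frequency modulated by a slow envelope. A quick numerical check (the 440 and 442 Hz tones are illustrative, chosen to beat twice per second as described):

```python
import math

f1, f2 = 440.0, 442.0        # two close tones in Hz; beats at |f2 - f1| = 2 Hz
f_mean = (f1 + f2) / 2       # fast oscillation at the mean frequency...
f_env = (f2 - f1) / 2        # ...inside a slowly oscillating envelope

for t in (0.0, 0.1, 0.25, 0.7):
    total = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
    factored = 2 * math.cos(2 * math.pi * f_env * t) * math.cos(2 * math.pi * f_mean * t)
    assert abs(total - factored) < 1e-9  # identity: sum = envelope x carrier
```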

Neutrino oscillations revealed a rare chink in the armour of the Standard Model: neutrinos are not massless, but are evolving superpositions of at least three mass eigenstates with distinct energies. A neutrino is therefore like three notes played together: frequencies so close, given the as-yet immeasurably small masses involved, that they are not just indistinguishable to the ear, but inseparable according to the uncertainty principle. As neutrinos are always ultra-relativistic, the energies of the mass eigenstates differ only due to tiny mass contributions of m²/2E. As the mass eigenstates propagate, phase differences develop between them proportional to squared-mass splittings Δm². The sought-after oscillations range from a few metres to the diameter of Earth.
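In the two-flavour approximation these phase differences give the textbook survival probability P = 1 − sin²2θ sin²(1.267 Δm²[eV²] L[km]/E[GeV]). A sketch with illustrative atmospheric-sector values (the specific numbers are assumptions, not quoted results) shows why baselines of hundreds of kilometres are needed:

```python
import math

def survival_prob(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavour survival probability; 1.267 converts the units."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative atmospheric-sector values:
dm2 = 2.5e-3          # eV^2, atmospheric squared-mass splitting
sin2_2theta = 0.99    # near-maximal mixing
E = 0.6               # GeV, a T2K-like beam energy

# First oscillation maximum, where the phase reaches pi/2:
L_max = (math.pi / 2) * E / (1.267 * dm2)          # roughly 300 km
p_min = survival_prob(L_max, E, dm2, sin2_2theta)  # dips to 1 - sin^2(2 theta)
```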

Orthogonal mixtures

The neutrino physics of the latter third of the 20th century was bookended by two anomalies that uncloaked these effects. In 1968 Ray Davis’s observation of a deficit of solar neutrinos prompted Bruno Pontecorvo to make public his conjecture that neutrinos might oscillate. Thirty years later, the Super-Kamiokande collaboration’s analysis of a deficit of atmospheric muon neutrinos from the other side of the planet posthumously vindicated the visionary Italian, and later Soviet, theorist’s speculation. Subsequent observations have revealed that electron, muon and tau neutrinos are orthogonal mixtures of mass eigenstates ν1 and ν2, separated by a small so-called solar splitting Δm²₂₁, and ν3, which is separated from that pair by a larger “atmospheric” splitting usually quantified by Δm²₃₂ (see “Little and large” figure). It is not yet known if ν3 is the lightest or the heaviest of the trio. This is called the mass-hierarchy problem.

A narrow splitting between neutrino mass eigenstates

“In the first two decades of the 21st century we have achieved a rather accurate picture of neutrino masses and mixings,” says theorist Pilar Hernández of the University of Valencia, “but the ordering of the neutrino states is unknown, the mass of the lightest state is unknown and we still do not know if the neutrino mixing matrix has imaginary entries, which could signal the breaking of CP symmetry,” she explains. “The very different mixing patterns in quarks and leptons could hint at a symmetry relating families, and a more accurate exploration of the lepton-mixing pattern and the neutrino ordering in future experiments will be essential to reveal any such symmetry pattern.”

Today, experiments designed to constrain neutrino mixing tend to dispense with astrophysical neutrinos in favour of more controllable accelerator and reactor sources. The experiments span more than four orders of magnitude in size and energy and fall into three groups (see “Not natural” figure). Much of the limelight is taken by experiments that are sensitive to the large mass splitting Δm²₃₂, which include both a cluster of current (such as T2K) and future (such as DUNE) accelerator-neutrino experiments with long baselines and high energies, and a high-performing trio of reactor-neutrino experiments (Daya Bay, RENO and Double Chooz) with a baseline of about a kilometre, operating just above the threshold for inverse beta decay. The second group is a beautiful pair of long-baseline reactor-neutrino experiments (KamLAND and the soon-to-be-commissioned JUNO), which join experiments with solar neutrinos in having sensitivity to the smaller squared-mass splitting Δm²₂₁. Finally, the third group is a host of short-baseline accelerator-neutrino experiments and very-short-baseline reactor-neutrino experiments that are chasing tantalising hints of a fourth “sterile” neutrino (with no Standard-Model gauge interactions), which is split from the others by a squared-mass splitting of the order of 1 eV².

Neutrino-oscillation experiments

Artificial sources

Experiments with artificial sources of neutrinos have a storied history, dating from the 1950s, when physicists toyed with the idea of detecting neutrinos created in the explosion of a nuclear bomb, and eventually observed them streaming from nuclear reactors. The 1960s saw the invention of the accelerator neutrino. Here, proton beams smashed into fixed targets to create a decaying debris of charged pions and their concomitant muon neutrinos. The 1970s transformed these neutrinos into beams by focusing the charged pions with magnetic horns, leading to the discovery of weak neutral currents and insights into the structure of nucleons. It was not until the turn of the century, however, that the zeitgeist of neutrino-oscillation studies began to shift from naturally to artificially produced neutrinos. Just a year after the publication of the Super-Kamiokande collaboration’s seminal 1998 paper on atmospheric–neutrino oscillations, Japanese experimenters trained a new accelerator-neutrino beam on the detector.

Operating from 1999 to 2006, the KEK-to-Kamioka (K2K) experiment sent a beam of muon neutrinos from the KEK laboratory in Tsukuba to the Super-Kamiokande detector, 250 km away under Mount Ikeno on the other side of Honshu. K2K confirmed that muon neutrinos “disappear” as a function of propagation distance over energy, and, together with the atmospheric-neutrino data, supported the hypothesis of an oscillation to tau neutrinos, which could not be directly detected at that energy. By increasing the beam energy well above the tau-lepton mass, the CERN Neutrinos to Gran Sasso (CNGS) project, which ran from 2006 to 2012, confirmed the oscillation to tau neutrinos by directly observing tau leptons in the OPERA detector. Meanwhile, the Main Injector Neutrino Oscillation Search (MINOS), which sent muon neutrinos from Fermilab to northern Minnesota from 2005 to 2012, made world-leading measurements of the parameters describing the oscillation.

With νμ→ ντ oscillations established, the next generation of experiments innovated in search of a subtler effect. T2K (K2K’s successor, with the beam now originating at J-PARC in Tokai) and NOvA (which analyses oscillations over the longer baseline of 810 km between Fermilab and Ash River, Minnesota) both have far detectors offset by a few degrees from the direction of the peak flux of the beams. This squeezes the phase space for the pion decays, resulting in an almost mono-energetic flux of neutrinos. Here, a quirk of the mixing conspires to make the musical analogy of a pair of metallophones particularly strong: to a good approximation, the muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield an almost perfect disappearance of muon neutrinos – and maximum sensitivity to the appearance of electron neutrinos.
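
The “two notes of equal amplitude” picture can be sketched numerically. A hedged toy model, assuming exactly maximal mixing so that each of the two interfering mass eigenstates carries weight 1/2 (an idealisation of the full three-flavour case):

```python
import cmath
import math

def numu_survival(phase):
    """Survival probability for a muon neutrino treated as an equal
    superposition of two mass eigenstates; `phase` is the relative phase
    (roughly 1.27 Δm²₃₂ L / E in experimentalist's units) accumulated in flight."""
    # amplitude = sum over eigenstates of weight × e^{iφ}; equal 1/2 weights here
    amp = 0.5 * cmath.exp(0j) + 0.5 * cmath.exp(1j * phase)
    return abs(amp) ** 2

# At a relative phase of π the two "notes" are exactly out of step:
# the muon neutrino disappears completely.
p_min = numu_survival(math.pi)
```

Off-axis experiments such as T2K and NOvA tune their narrow-band beam energy so that the far detector sits near this point of complete destructive interference.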

Testing CP symmetry

The three neutrino mass eigenstates mix to make electron, muon and tau neutrinos according to the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which describes three rotations and a complex phase δCP that can cause charge–parity (CP) violation – a question of paramount importance in the field due to its relevance to the unknown origin of the matter–antimatter asymmetry in the universe. Whatever the value of the complex phase, leptonic CP violation can only be observed if all three of the angles in the PMNS matrix are non-zero. Experiments with atmospheric and solar neutrinos demonstrated this for two of the angles. At the beginning of the last decade, short-baseline reactor-neutrino experiments in China (Daya Bay), Korea (RENO) and France (Double Chooz) were in a race with T2K to establish if the third angle, which leads to a coupling between ν₃ and electrons, was also non-zero. In the reactor experiments this would be seen as a small deficit of electron antineutrinos a kilometre or so from the reactors; in T2K the smoking gun would be the appearance of a small number of electron neutrinos not present in the initial muon-neutrino-dominated beam.

After data taking was cut short by the great Sendai earthquake and tsunami of March 2011, T2K published evidence for the appearance of six electron-neutrino events, over the expected background of 1.5 ± 0.3 in the case of no coupling. Alongside a single tau-neutrino candidate in OPERA, these were the first neutrinos seen to appear in a detector with a new flavour, as previous signals had always registered a deficit of an expected flavour. In the closing days of the year, Double Chooz published evidence for 4121 electron–antineutrino events, under the expected tally for no coupling of 4344 ± 165, reinforcing T2K’s 2.5σ indication. Daya Bay and RENO put the matter to bed the following spring, with 5σ evidence apiece that the ν3-electron coupling was indeed non-zero. The key innovation for the reactor experiments was to minimise troublesome flux and interaction systematics by also placing detectors close to the reactors.

A visualisation of the Hyper-Kamiokande detector

Since then, T2K and NOvA, which began taking data in 2014, have been chasing leptonic CP violation – an analysis that is out of the reach of reactor experiments, as δCP does not affect disappearance probabilities. By switching the polarity of the magnetic horn, the experiments can directly compare the probabilities for the CP-mirror oscillations νμ → νe and ν̄μ → ν̄e. NOvA data are inconclusive at present, while T2K data currently favour near-maximal CP violation in the vicinity of δCP = –π/2. The latest analysis, published in April, disfavours leptonic CP conservation (δCP = 0, ±π) at 2σ significance for all possible values of the mixing parameters. Statistical uncertainty is the biggest limiting factor.

Major upgrades planned for T2K next year target statistical, interaction-model and detector uncertainties. A substantial increase in beam intensity will be accompanied by a new fine-grained scintillating target for the ND280 near-detector complex, which will lower the energy threshold to reconstruct tracks. New transverse TPCs will improve ND280’s acceptance at high angles, yielding a better cancellation of systematic errors with the far detector, Super-Kamiokande, which is being upgraded by loading 0.01% gadolinium salts into the otherwise ultrapure water. As in reactor-neutrino detectors, this will provide a tag for antineutrino events, to improve sample purities in the search for leptonic CP violation.

T2K and NOvA both plan to roughly double their current data sets, and are working together on a joint fit, in a bid to better understand correlations between systematic uncertainties, and break degeneracies between measurements of CP violation and the mass hierarchy. If the CP-violating phase is indeed maximal, as suggested by the recent T2K result, the experiments may be able to exclude CP conservation with more than 99% confidence. “At this point we will be in a transition from a statistics-dominated to a systematics-dominated result,” says T2K spokesperson Atsuko Ichikawa of the University of Kyoto. “It is difficult to say, but our sensitivity will likely be limited at this stage by a convolution of neutrino-interaction and flux systematics.”

The next generation

Two long-baseline accelerator-neutrino experiments roughly an order of magnitude larger in cost and detector mass than T2K and NOvA have received green lights from the Japanese and US governments: Hyper-Kamiokande and DUNE. One of their primary missions is to resolve the question of leptonic CP violation.

Hyper-Kamiokande will adopt the same approach as T2K, but will benefit from major upgrades to the beam and the near and far detectors in addition to those currently underway in the present T2K upgrade. To improve the treatment of systematic errors, the suite of near detectors will be complemented by an ingenious new gadolinium-loaded water-Cherenkov detector at an intermediate baseline: by spanning a range of off-axis angles, it will drive down interaction-model systematics by exploiting previously neglected information on how the flux varies as a function of the angle relative to the centre of the beam. Hyper-Kamiokande’s increased statistical reach will also be impressive. The power of the Japan Proton Accelerator Research Complex (J-PARC) beam will be increased from its current value of 0.5 MW up to 1.3 MW, and the new far detector will be filled with 260,000 tonnes of ultrapure water, yielding a fiducial volume 8.4 times larger than that of Super-Kamiokande. Procurement of the photomultiplier tubes will begin this year, and the five-year-long excavation of the cavern has already begun. Data taking is scheduled to commence in 2027. “The expected precision on δCP is 10–20 degrees, depending on its true value,” says Hyper-Kamiokande international co-spokesperson Francesca di Lodovico of King’s College London.

In the US, the Deep Underground Neutrino Experiment (DUNE) will exploit the liquid-argon–TPC technology first deployed on a large scale by ICARUS – OPERA’s sister detector in the CNGS project. The idea for the technology dates back to 1977, when Carlo Rubbia proposed using liquid rather than gaseous argon as a drift medium for ionisation electrons. Given liquid argon’s higher density, such detectors can serve as both target and tracker, providing high-resolution 3D images of the interactions – an invaluable tool for reducing systematics related to the murky world of neutrino–nucleus interactions.

Spectacular performance

The technology is currently being developed in two prototype detectors at CERN. The first hones ICARUS’s single-phase approach. “The performance of the prototype has been absolutely spectacular, exceeding everyone’s expectations,” says DUNE co-spokesperson Ed Blucher of the University of Chicago. “After almost two years of operation, we are confident that the liquid-argon technology is ready to be deployed at the huge scale of the DUNE detectors.” In parallel, the second prototype is testing a newer dual-phase concept. In this design, ionisation charges drift through an additional layer of gaseous argon before reaching the readout plane. The signal can be amplified here, potentially easing noise requirements for the readout electronics, and increasing the maximum size of the detector. The dual-phase prototype was filled with argon in summer 2019 and is now recording tracks.

The evolution of the fraction of each flavour in the wavefunction of electron antineutrinos

The final detectors will have about twice the height and 10 to 20 times the footprint. Following the construction of an initial single-phase unit, the DUNE collaboration will likely pick a mix of liquid-argon technologies to complete their roster of four 10 kton far-detector modules, set to be installed a kilometre underground at the Sanford Underground Research Laboratory in Lead, South Dakota. Site preparation and pre-excavation activities began in 2017, and full excavation work is expected to begin soon, with the goal of beginning data taking during the second half of this decade. Work on the near-detector site and the “PIP-II” upgrade to Fermilab’s accelerator complex began last year.

Though similar to Hyper-Kamiokande at first glance, DUNE’s approach is distinct and complementary. With beam energy and baseline both four times greater, DUNE will have greater sensitivity to flavour-dependent coherent forward scattering on electrons in Earth’s crust – an effect that modifies oscillation probabilities differently depending on the mass hierarchy. With the Fermilab beam directed straight at the detector rather than off-axis, a broader range of neutrino energies will allow DUNE to observe the oscillation pattern from the first to the second oscillation maximum, and simultaneously fit all but the solar mixing parameters. And with detector, flux and interaction uncertainties all distinct, a joint analysis of both experiments’ data could break degeneracies and drive down systematics.

“If CP violation is maximal and the experiments collect data as anticipated, DUNE and Hyper-Kamiokande should both approach 5σ significance for the exclusion of leptonic CP conservation in about five years,” estimates DUNE co-spokesperson Stefan Söldner-Rembold of the University of Manchester, noting that the experiments will also be highly complementary for non-accelerator topics. The most striking example is supernova-burst neutrinos, he says, referring to a genre of neutrinos only observed once so far, during 15 seconds in 1987, when neutrinos from a supernova in the Large Magellanic Cloud passed through the Earth. “While DUNE is primarily sensitive to electron neutrinos, Hyper-Kamiokande will be sensitive to electron antineutrinos. The difference between the timing distributions of these samples encodes key information about the dynamics of the supernova explosion.” Hyper-Kamiokande spokesperson Masato Shiozawa of ICRR Tokyo also emphasises the broad scope of the physics programmes. “Our studies will also encompass proton decay, high-precision measurements of solar neutrinos, supernova-relic neutrinos, dark-matter searches, the possible detection of solar-flare neutrinos and neutrino geophysics.”

JUNO energy resolution

Half a century since Ray Davis and two co-authors published evidence for a 60% deficit in the flux of solar neutrinos compared to John Bahcall’s prediction, DUNE already boasts more than a thousand collaborators, and Hyper-Kamiokande’s detector mass is set to be 500 times greater than Davis’s tank of liquid tetrachloroethylene. If Ray Davis was the conductor who set the orchestra in motion, then these large experiments fill out the massed ranks of the violin section, poised to deliver what may well be the most stirring passage of the neutrino-oscillation symphony. But other sections of the orchestra also have important parts to play.

Mass hierarchy

The question of the neutrino mass hierarchy will soon be addressed by the Jiangmen Underground Neutrino Observatory (JUNO) experiment, which is currently under construction in China. The project is an evolution of the Daya Bay experiment, and will seek to measure a deficit of electron antineutrinos 53 km from the Yangjiang and Taishan nuclear-power plants. As the reactor neutrinos travel, the small kilometre-scale oscillation observed by Daya Bay will continue to undulate with the same wavelength, revealed in JUNO as “fast” oscillations on a slower and deeper first oscillation maximum due to the smaller solar mass splitting Δm²₂₁ (see “An oscillation within an oscillation” figure).
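
The shape JUNO will measure follows from the standard three-flavour survival probability in vacuum. A hedged sketch, using representative (not official) mixing parameters, in which a fast wiggle driven by the atmospheric splitting rides on the slow, deep solar dip:

```python
import math

# Representative mixing parameters (illustrative values, not official fits):
S12, S13 = 0.307, 0.022      # sin²θ12, sin²θ13
DM21, DM31 = 7.5e-5, 2.5e-3  # squared-mass splittings in eV²

def pee(L_km, E_GeV):
    """Approximate electron-antineutrino survival probability in vacuum,
    neglecting the tiny difference between Δm²₃₁ and Δm²₃₂."""
    d21 = 1.27 * DM21 * L_km / E_GeV
    d31 = 1.27 * DM31 * L_km / E_GeV
    sin2_2t12 = 4 * S12 * (1 - S12)
    sin2_2t13 = 4 * S13 * (1 - S13)
    return (1 - sin2_2t13 * math.sin(d31) ** 2            # fast, shallow wiggle
              - (1 - S13) ** 2 * sin2_2t12 * math.sin(d21) ** 2)  # slow, deep dip

# JUNO-like numbers: 53 km baseline, 4 MeV reactor antineutrinos
p = pee(53.0, 0.004)
```

Daya Bay, a kilometre or so from its reactors, sees only the first term; at 53 km the solar term dominates, with the atmospheric wiggle superimposed on it.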

“JUNO can determine the neutrino mass hierarchy in an unambiguous and definite way, independent from the CP phase and matter effects, unlike other experiments using accelerator or atmospheric neutrinos,” says spokesperson Yifang Wang of the Chinese Academy of Sciences in Beijing. “In six years of data taking, the statistical significance will be higher than 3σ.”

JUNO has completed most of the digging of the underground laboratory, and equipment for the production and purification of liquid scintillator is being fabricated. A total of 18,000 20-inch photomultiplier tubes and 26,000 3-inch photomultiplier tubes have been delivered, and most of them have been tested and accepted, explains Wang. The installation of the detector is scheduled to begin next year. JUNO will arguably be at the vanguard of a precision era for the physics of neutrino oscillations, equipped to measure the mass splittings and the solar mixing parameters to better than 1% precision – an improvement of about one order of magnitude over previous results, and even better than the quark sector, claims Wang, somewhat provocatively. “JUNO’s capabilities for supernova-burst neutrinos, diffuse supernova neutrinos and geoneutrinos are unprecedented, and it can be upgraded to be a world-best double-beta-decay detector once the mass hierarchy is measured.”

Excavation of the cavern for the JUNO experiment

With JUNO, Hyper-Kamiokande and DUNE now joining a growing ensemble of experiments, the unresolved leitmotifs of the three-neutrino paradigm may find resolution this decade, or soon after. But theory and experiment both hint, quite independently, that nature may have a scherzo twist in store before the grand finale.

A rich programme of short-baseline experiments promises to bolster or exclude experimental hints of a fourth sterile neutrino with a relatively large mixing with the electron neutrino that have dogged the field since the late 1990s. Four anomalies stack up as more or less consistent among themselves. The first, which emerged in the mid-1990s at Los Alamos’s Liquid Scintillator Neutrino Detector (LSND), is an excess of electron antineutrinos that is potentially consistent with oscillations involving a sterile neutrino at a mass splitting Δm² of around 1 eV². Two other quite disparate anomalies since then – a few-percent deficit in the expected flux from nuclear reactors, and a deficit in the number of electron neutrinos from radioactive decays in liquid-gallium solar-neutrino detectors – could be explained in the same way. The fourth anomaly, from Fermilab’s MiniBooNE experiment, which sought to replicate the LSND effect at a longer baseline and a higher energy, is the most recent: a sizeable excess of both electron neutrinos and antineutrinos, though at a lower energy than expected. It is important to note, however, that experiments including KARMEN, MINOS+ and IceCube have reported null results in searches for sterile neutrinos that fit the required description. Such a particle would also stand in tension with cosmology, notes phenomenologist Silvia Pascoli of Durham University, as models predict it would make too large a contribution to hot dark matter in the universe today, unless non-standard scenarios are invoked.

Three different types of experiment covering three orders of magnitude in baseline are now seeking to settle the sterile-neutrino question in the next decade. A smattering of reactor-neutrino experiments a mere 10 metres or so from the source will directly probe the reactor anomaly at Δm² ≈ 1 eV². The data reported so far are intriguing. Korea’s NEOS experiment and Russia’s DANSS experiment report siren signals between 1 and 2 eV², and NEUTRINO-4, also based in Russia, reports a seemingly outlandish signal, indicative of very large mixing, at 7 eV². In parallel, J-PARC’s JSNS² experiment is gearing up to try to reproduce the LSND effect using accelerator neutrinos at the same energy and baseline. Finally, Fermilab’s short-baseline programme will thoroughly address a notable weakness of both LSND and MiniBooNE: the lack of a near detector.

MiniBooNE detector

The Fermilab programme will combine three liquid-argon TPCs – a bespoke new short-baseline detector (SBND), the existing MicroBooNE detector, and the refurbished ICARUS detector – to resolve the LSND anomaly once and for all. SBND is currently under construction, MicroBooNE is operational, and ICARUS, removed from its berth at Gran Sasso and shipped to the US in 2017, has been installed at Fermilab, following work on the detector at CERN. “The short-baseline neutrino programme at Fermilab has made tremendous technical progress in the past year,” says ICARUS spokesperson and Nobel laureate Carlo Rubbia, noting that the detector will be commissioned as soon as circumstances allow, given the coronavirus pandemic. “Once both ICARUS and SBND are in operation, it will take less than three years with the nominal beam intensity to settle the question of whether neutrinos have an even more mysterious character than we thought.”

Muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield almost perfect disappearance

Outside of the purview of oscillation experiments with artificially produced neutrinos, astrophysical observatories will scale a staggering energy range, from the PeV-scale neutrinos reported by IceCube at the South Pole, down, perhaps, to the few-hundred-μeV cosmic neutrino background sought by experiments such as PTOLEMY in the US. Meanwhile, the KATRIN experiment in Germany is zeroing in on the edges of beta-decay distributions to set an absolute scale for the mass of the peculiar mixture of mass eigenstates that make up an electron antineutrino (CERN Courier January/February 2020 p28). At the same time, a host of experiments are searching for neutrinoless double-beta decay – a process that can only occur if the neutrino is its own antiparticle. Discovering such a Majorana nature for the neutrino would turn the Standard Model on its head, and offer grist for the mill of theorists seeking to explain the tininess of neutrino masses, by balancing them against still-to-be-discovered heavy neutral leptons.

Indispensable input

According to Mikhail Shaposhnikov of the Swiss Federal Institute of Technology in Lausanne, current and future reactor- and accelerator-neutrino experiments will provide an indispensable input for understanding neutrino physics. And not in isolation. “To reach a complete picture, we also need to know the mechanism for neutrino-mass generation and its energy scale, and the most important question here is the scale of masses of new neutrino states: if lighter than a few GeV, these particles can be searched for at new experiments at the intensity frontier, such as SHiP, and at precision experiments looking for rare decays of mesons, such as Belle II, LHCb and NA62, while the heavier states may be accessible at ATLAS and CMS, and at future circular colliders,” explains Shaposhnikov. “These new particles can be the key in solving all the observational problems of the Standard Model, and require a consolidated effort of neutrino experiments, accelerator-based experiments and cosmological observations. Of course, it remains to be seen if this dream scenario can indeed be realised in the coming 20 years.”

 

• This article was updated on 6 July, to reflect results presented at Neutrino 2020

The search for leptonic CP violation

An electron anti-neutrino

Luckily for us, there is presently almost no antimatter in the universe. This makes it possible for us – made of matter – to live without being annihilated in matter–antimatter encounters. However, cosmology tells us that just after the cosmic Big Bang, the universe contained equal amounts of matter and antimatter. Obviously, for the universe to have evolved from that early state to the present one, which contains quite unequal amounts of matter and antimatter, the two must behave differently. This implies that the symmetry CP (charge conjugation × parity) must be violated. That is, there must be physical systems whose behaviour changes if we replace every particle by its antiparticle, and interchange left and right.

In 1964, Cronin, Fitch and colleagues discovered that CP is indeed violated, in the decays of neutral kaons to pions – a phenomenon that later became understood in terms of the behaviour of quarks. By now, we have observed quark CP violation in the strange sector, the beauty sector and most recently in the charm sector (CERN Courier May/June 2019 p7). The observations of CP violation in B (beauty) meson decays have been particularly illuminating. Everything we know about quark CP violation is consistent with the hypothesis that this violation arises from a single complex phase in the quark mixing matrix. This matrix gives the amplitude for any particular negatively-charged quark, whether down, strange or bottom, to convert via a weak interaction into any particular positively-charged quark, be it up, charm or top. Just two parameters in the quark mixing matrix, ρ and η, whose relative size determines the complex phase, account very successfully for numerous quark phenomena, including both CP-violating ones and others. This is impressively demonstrated by a plot of all the experimental constraints on these two parameters (figure 1). All the constraints intersect at a common point.

Of course, precisely which (ρ, η) point is consistent with all the data is not important. Lincoln Wolfenstein, who created the quark-mixing-matrix parametrisation that includes ρ and η, was known to say: “Look, I invented ρ and η, and I don’t care what their values are, so why should you?”

Figure 1

Having observed CP violation among quarks in numerous laboratory experiments of today, we might be tempted to think that we understand how CP violation in the early universe could have changed the world from one with equal quantities of matter and antimatter to one in which matter dominates very heavily over antimatter. However, scenarios that tie early-universe CP violation to that seen among the quarks today, and do not add new physics to the Standard Model of the elementary particles, yield too small a present-day matter–antimatter asymmetry. This leads one to wonder whether early-universe CP violation involving leptons, rather than quarks, might have led to the present dominance of matter over antimatter. This possibility is envisaged by leptogenesis, a scenario in which heavy neutral leptons that were their own antiparticles lived briefly in the early universe, but then underwent CP-asymmetric decays, creating a world with unequal numbers of particles and antiparticles. Such heavy neutral leptons are predicted by “see-saw” models, which explain the extreme lightness of the known neutrinos in terms of the extreme heaviness of the postulated heavy neutral leptons. Leptogenesis can successfully account for the observed size of the present matter–antimatter asymmetry.

Deniable plausibility

In the straightforward version of this picture, the heavy neutral leptons are too massive to be observable at the LHC or any foreseen collider. However, since leptogenesis requires leptonic CP violation, observing this violation in the behaviour of the currently observed leptons would make it more plausible that leptogenesis was indeed the mechanism through which the present matter–antimatter asymmetry of the universe arose. Needless to say, observing leptonic CP violation would also reveal that the breaking of CP symmetry, which before 1964 one might have imagined to be an unbroken, fundamental symmetry of nature, is not something special to the quarks, but is participated in by all the constituents of matter.

Figure 2

To find out if leptons violate CP, we are searching for what is traditionally described as a difference between the behaviour of neutrinos and that of antineutrinos. This description is fine if neutrinos are Dirac particles – that is, particles that are distinct from their antiparticles. However, many theorists strongly suspect that neutrinos are actually Majorana particles – that is, particles that are identical to their antiparticles. In that case, the traditional description of the search for leptonic CP violation is clearly inapplicable, since then the neutrinos and the antineutrinos are the same objects. However, the actual experimental approach that is being pursued is a perfectly valid probe of leptonic CP violation regardless of whether neutrinos are of Dirac or of Majorana character. In fact, this approach is completely insensitive to which of these two possibilities nature has chosen.

Through a glass darkly

The pursuit of leptonic CP violation is based on comparing the rates for two CP mirror-image processes (figure 2). In process A, the initial state is a π+ and an undisturbed detector. The final state consists of a μ+, an e−, and a nucleus in the detector that has been struck by an intermediate-state neutrino beam particle that travelled a long distance from its source to the detector. Since the neutrino was born together with a muon, but produced an electron in the detector, and the probability for this to have happened oscillates as a function of the distance the neutrino travels divided by its energy, the process is commonly referred to as muon-neutrino to electron-neutrino oscillation.

Leptogenesis can account for the matter–antimatter asymmetry

In process B, the initial and final states are the same as in process A, but with every particle replaced by its antiparticle. In addition, owing to the character of the weak interactions, the helicity (the projection of the spin along the momentum) of every fermion is reversed, so that left and right are interchanged. Thus, regardless of whether neutrinos are identical to their antiparticles, processes A and B are CP mirror images, so if their rates are unequal, CP invariance is violated. Moreover, since the probability of a neutrino oscillation involves the weak interactions of leptons, but not those of quarks, this violation of CP invariance must come from the weak interactions of leptons.

Of course, we cannot employ an anti-detector in process B in practice. However, the experiment can legitimately use the same detector in both processes. To do that, it must take into account the difference between the cross sections for the beam particles in processes A and B to interact in this detector. Once that is done, the comparison of the rates for processes A and B remains a valid probe of CP non-invariance.

The matrix reloaded

Just as quark CP violation arises from a complex phase in the quark mixing matrix, so leptonic CP violation in neutrino oscillation can arise from a complex phase, δCP, in the leptonic mixing matrix, which is the leptonic analogue of the quark mixing matrix. However, if, as suggested by several short-baseline oscillation experiments, there exist not only the three well-established neutrinos, but also additional so-called “sterile” neutrinos that do not participate in Standard Model weak interactions, then the leptonic mixing matrix is larger than the quark one. As a result, while the quark mixing matrix is permitted to contain just one complex phase, its leptonic analogue may contain multiple complex phases that can contribute to CP violation in neutrino oscillations.
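
In the three-neutrino case, the standard parametrisation builds the PMNS matrix from three rotations and the single phase δCP. A sketch with illustrative angle values (the helper function is ours); δCP = –π/2 is the near-maximal value hinted at by T2K:

```python
import cmath
import math

def pmns(t12, t23, t13, dcp):
    """Standard-parametrisation PMNS matrix: three mixing angles (radians)
    and one CP-violating phase. Returns a 3x3 list of complex entries."""
    s12, c12 = math.sin(t12), math.cos(t12)
    s23, c23 = math.sin(t23), math.cos(t23)
    s13, c13 = math.sin(t13), math.cos(t13)
    ep, em = cmath.exp(1j * dcp), cmath.exp(-1j * dcp)
    return [
        [c12 * c13,                        s12 * c13,                        s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep, c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ]

# Illustrative angles (roughly global-fit magnitudes) and near-maximal δCP
U = pmns(0.59, 0.84, 0.15, -math.pi / 2)
```

With δCP ≠ 0 or ±π the matrix acquires genuinely imaginary entries, which is exactly the condition for CP violation in oscillations; a sterile neutrino would enlarge this matrix and admit further phases.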

Stack of scintillating cells

Leptonic CP violation is being sought by two current neutrino-oscillation experiments. The NOvA experiment in the US has reported results that are consistent with either the presence or absence of CP violation. The T2K experiment in Japan reports that the complete absence of CP violation is excluded at 95% confidence. Assuming that the leptonic mixing matrix is the same size as the quark one, so that it may contain only one complex phase relevant to neutrino oscillations, the T2K data show a preference for values of that phase, δCP, that correspond to near maximal CP violation. Of course, as Lincoln Wolfenstein would doubtless point out, the precise value of δCP is not important. What counts is the extremely interesting experimental finding that the behaviour of leptons may very well violate CP. In the future, the oscillation experiments Hyper-Kamiokande in Japan and DUNE in the US will probe leptonic CP violation with greater sensitivity, and should be capable of observing it even if it should prove to be fairly small (see Tuning in to neutrinos).

By searching for leptonic CP violation, we hope to find out whether the breaking of CP symmetry occurs among all the constituents of matter, including both the leptons and the quarks, or whether it is a feature that is special to the quarks. If leptonic CP violation should be definitively shown to exist, this violation might be related to the reason that the universe contains matter, but almost no antimatter, so that life is possible.

Neutron sources join the fight against COVID-19

The LADI instrument at the ILL

The global scientific community has mobilised at an unprecedented rate in response to the COVID-19 pandemic, beyond just pharmaceutical and medical researchers. The world’s most powerful analytical tools, including neutron sources, have the unique ability to reveal the invisible structural workings of the virus – which will be essential to developing effective treatments. Since the outbreak of the pandemic, researchers worldwide have been using large-scale research infrastructures such as synchrotron X-ray radiation sources (CERN Courier May/June 2020 p29), as well as cryogenic electron microscopy (cryo-EM) and nuclear magnetic resonance (NMR) facilities, to determine the 3D structures of proteins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19 respiratory disease, and to identify potential drugs that can bind to these proteins in order to disable the viral machinery. In a remarkably short time, this effort has already delivered a large number of structures – with more arriving each week – and improved our understanding of what potential drug candidates might look like.

COVID-19 has impacted the operation of all advanced neutron sources worldwide. With one exception (ANSTO in Australia, which continued the production of radioisotopes), all of them were shut down as part of national lockdowns aimed at reducing the spread of the disease. The neutron community, however, lost no time in preparing for the resumption of activities. Some facilities, such as Oak Ridge National Laboratory (ORNL) in the US, have now restarted operation of their sources exclusively for COVID-19 studies. Here in Europe, while waiting (impatiently) for the restart of neutron facilities such as the Institut Laue-Langevin (ILL) in Grenoble, which is scheduled to be operational by mid-August, scientists have been actively pursuing SARS-CoV-2-related projects. Special research teams on the ILL site have been preparing for experiments using a range of neutron-scattering techniques including diffraction, small-angle neutron scattering, reflectometry and spectroscopy. Neutrons bring to the table what other probes cannot, and are set to make an important contribution to the fight against SARS-CoV-2.

Unique characteristics

Discovered almost 90 years ago, the neutron has been put to a multitude of uses to help researchers understand the structure and behaviour of condensed matter. These applications include a steadily growing number of investigations into biological systems. For the reasons explained below, these investigations are complementary to the use of X-rays, NMR and cryo-EM. The necessary infrastructure for neutron-scattering experiments is provided to the academic and industrial user communities by a global network of advanced neutron sources. Leading European neutron facilities include the ILL in Grenoble, France, MLZ in Garching, Germany, ISIS in Didcot, UK, and PSI in Villigen, Switzerland. The new European flagship neutron source – the European Spallation Source (ESS) – is under construction in Lund, Sweden.

Structural power

Determining the biological structures that make up a virus such as SARS-CoV-2 (pictured) allows scientists to see what they look like in three dimensions and to understand better how they function, speeding up the design of more effective anti-viral drugs. Knowledge of the structures highlights which parts are the most important: for example, once researchers know what the active site in an enzyme looks like, they can try to design drugs that fit well into the active site – the classic “lock-and-key” analogy. This is also useful in the development of vaccines: knowledge of the structural components that make up a virus is important since vaccines are often made from weakened or killed forms of the microbe, its toxins, or one of its surface proteins.

Neutrons are a particularly powerful tool for the study of biological macromolecules in solutions, crystals and partially ordered systems. Their neutrality means neutrons can penetrate deep into matter without damaging the samples, so that experiments can be performed at room temperature, much closer to physiological temperatures. Furthermore, in contrast to X-rays, which are scattered by electrons, neutrons are scattered by atomic nuclei, and so neutron-scattering lengths show no correlation with the number of electrons, but rather depend on nuclear forces, and can even vary between different isotopes. As such, while hydrogen (H) scatters X-rays very weakly, and protons (H+) do not scatter X-rays at all, with neutrons hydrogen scatters at a similar level to the other common elements (C, N, O, S, P) of biological macromolecules, allowing them to be located. Moreover, hydrogen and its isotope deuterium (2H/D) have scattering lengths that differ in both magnitude and sign, which can be exploited in neutron studies to enhance the visibility of specific structural features by substituting one isotope for the other. Examples of this include small-angle neutron scattering (SANS) studies of macromolecular structures that provide low-resolution 3D information on molecular shape without the need for crystallisation, and neutron-crystallography studies of proteins that provide high-resolution structures of proteins, including the locations of individual hydrogen atoms that have been exchanged for deuterium to make them particularly visible. Indeed, neutron crystallography can provide unique information on the chemistry occurring within biological macromolecules, such as enzymes, as recent studies on HIV-1 protease, an enzyme essential for the life-cycle of the HIV virus, illustrate.
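The H/D contrast at the core of these techniques can be made concrete with a small calculation. The sketch below uses tabulated coherent neutron scattering lengths (approximate values from standard neutron data tables) and an approximate molecular volume for water to show how swapping H2O for D2O flips the sign and greatly changes the magnitude of the solvent's scattering-length density:

```python
# Coherent neutron scattering lengths in femtometres (fm), from
# standard tables (approximate values; note the negative sign for H).
B_COH = {"H": -3.739, "D": 6.671, "O": 5.803}

def sld(formula: dict, volume_A3: float) -> float:
    """Scattering-length density in units of 10^-6 / Angstrom^2 for a
    molecule of given composition and molecular volume (Angstrom^3)."""
    total_fm = sum(B_COH[el] * n for el, n in formula.items())
    # 1 fm = 1e-5 Angstrom, so (fm / A^3) equals 10 in units of 1e-6 A^-2.
    return total_fm / volume_A3 * 10.0

V_WATER = 30.0  # approximate molecular volume of water, Angstrom^3

print(round(sld({"H": 2, "O": 1}, V_WATER), 2))  # H2O: about -0.56
print(round(sld({"D": 2, "O": 1}, V_WATER), 2))  # D2O: about +6.38
```

The large gap between the two values is what lets experimenters tune the solvent composition to "match out" one component of a complex while highlighting another.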

Treating and stopping COVID-19

Proteases are like biological scissors that cleave polypeptide chains – the primary structure of proteins – at precise locations. If the cleavage is inhibited, for example by appropriate anti-viral drugs, then so-called poly-proteins remain in their original state and the machinery of virus replication is blocked. For the treatment to be efficient, this inhibition has to be robust: the drug occupying the active site should be strongly bound, ideally to atoms in the main chain of the protease. This will increase the likelihood that treatments remain effective in the long run, despite mutations of the enzyme, since mutations occur only within its side chains. Neutron research, therefore, provides essential input into the long-term development of pharmaceuticals. This role will be further enhanced in the context of advanced computer-aided drug development that will rely on an orchestrated combination of high-power computing, artificial intelligence and broad-band experimental data on structures.

A neutron Laue diffraction pattern

Neutron crystallography data add supplementary structural information to X-ray data by providing key details regarding hydrogen atoms and protons, which are critical players in the binding of such drugs to their target enzyme through hydrogen bonding, and revealing important details of protein chemistry that help researchers decipher the exact enzyme catalytic pathway. In this way, neutron crystallography data can be hugely beneficial towards understanding how these enzymes function and the design of more effective medications to target them. For example, in the study of complexes between HIV-1 protease – the enzyme responsible for maturation of virus particles into infectious HIV virions – and drug molecules, neutrons can reveal hydrogen-bonding interactions that offer ways to enhance drug-binding and reduce drug-resistance of anti-retroviral therapies.

More than half of the SARS-CoV-2-related structures determined thus far are high-resolution X-ray structures of the virus’s main protease, with the majority of these bound to potential inhibitors. One of the main challenges for performing neutron crystallography is that larger crystals are required than for comparable X-ray crystallography studies, owing to the lower flux of neutron beams relative to X-ray beam intensities. Nevertheless, given the benefits provided by the visualisation of hydrogen-bonding networks for understanding drug-binding, scientists have been optimising crystallisation conditions for the growth of larger crystals, in combination with the production of fully deuterated protein in preparation for neutron crystallography experiments in the near future. Currently, teams at ORNL, ILL and the DEMAX facility in Sweden are growing crystals for SARS-CoV-2 investigations.

Proteases are, however, not the only proteins where neutron crystallography can provide essential information. For example, the spike protein (S-protein) of SARS-CoV-2, which is responsible for mediating attachment and entry into human cells, is of great relevance for developing therapeutic defence strategies against the virus. Here, neutron crystallography can potentially provide unique information about the specific domain of the S-protein where the virus binds to human cell receptors. Comparison of the structure of this region between related coronaviruses (SARS-CoV-2 and SARS-CoV), obtained using X-rays, suggests that small alterations to the amino-acid sequence may enhance the binding affinity of the S-protein to the human receptor hACE2, making SARS-CoV-2 more infectious. Neutron studies will provide further insight into this binding, which is crucial for the attachment of the virus. These experiments are scheduled to take place at ILL and ORNL (and possibly MLZ), for example, as soon as sufficiently large crystals have been grown.

The big picture

Biological systems have a hierarchy of structures: molecules assemble into macromolecules such as proteins; these form complexes that, as supramolecular arrangements such as membranes, are the building blocks of cells – which are, of course, the building blocks of our bodies. Every part of this huge machinery is subject to continuous reorganisation. To understand the functioning, or in the case of a disease the malfunctioning, of a biological system, we therefore must gain insight into the biological mechanisms on all of these different length scales.

The ILL reactor

When it comes to studying the function of larger biological complexes such as assembled viruses, SANS becomes an important analytical tool. The technique’s capacity to distinguish specific regions (RNA, proteins and lipids) of the virus – thanks to advanced deuteration methods – enables researchers to map out the arrangement of the various components, contributing invaluable information to structural studies of SARS-CoV-2. While other analytical techniques provide the detailed atomic-resolution structure of small biological assemblies, neutron scattering allows researchers to pan back to see the larger picture of full molecular complexes, at lower resolution. Neutron scattering is also uniquely suited to determining the structure of functional membrane proteins in physiological conditions. Neutron scattering will therefore make it possible to map out the structure of the complex formed by the S-protein and the hACE2 receptor.

Neutrons can penetrate deep into matter without damaging the samples

Last but not least, a full understanding of the virus’s life cycle requires the study of the interaction of the virus with the cell membrane, and the mechanism it uses to penetrate the host cell. SARS-CoV-2, like HIV, is a virus that possesses a viral envelope composed of lipids, proteins and sugars. By providing information on its molecular structure and composition, the technique of neutron reflectometry – whereby highly collimated neutrons are incident on a flat surface and the intensity of reflected radiation is measured as a function of angle or neutron wavelength – helps to elucidate the precise mechanism the virus uses to penetrate the cell. As with SANS, the strength of neutron reflectometry relies on the fact that it provides a different contrast to X-rays, and that this contrast can be varied via deuteration, allowing researchers, for example, to distinguish a protein inserted into the membrane from the membrane itself. Regarding SARS-CoV-2, this implies that neutron reflectometry can provide detailed structural information on the interaction of small protein fragments, so-called peptides, that mimic the S-protein and that are believed to be responsible for binding with the receptor of the host cell. Defining this mechanism, which is decisive for the infection, will be essential to controlling the virus and its potential future mutations in the long term.

Tool of choice

And we should not forget that viruses in their physiological environments are highly dynamic systems. Knowing how they move, deform and cluster is essential for optimising diagnostic and therapeutic treatments. Neutron spectroscopy, which is ideally suited to follow the motion of matter from small chemical groups to large macromolecular assemblies, is the tool of choice to provide this information.

The League of Advanced European Neutron Sources (CERN Courier May/June 2020 p49) has rapidly mobilised to conduct all relevant experiments. We are also in close contact with our international partners, some of whom have reopened, or are just in the process of reopening, their facilities. Scientists have to make sure that each research subject is provided with the best-suited analytical tool – in other words, those that have the samples will be given the necessary beam time. Neutron facilities are adapting fast, with special access channels to beam time having been implemented to allow the scientific community to respond without delay to the challenge posed by COVID-19.

Sensing a passage through the unknown

Since the inception of the Standard Model (SM) of particle physics half a century ago, experiments of all shapes and sizes have put it to increasingly stringent tests. The largest and best known are collider experiments, which in particular have enabled the direct discovery of various SM particles. Another approach utilises the tools of atomic physics. The relentless improvement in the precision of atomic-physics tools and techniques, both experimental and theoretical, has led to the verification of the SM’s predictions with ever greater accuracy. Examples include measurements of atomic parity violation that reveal the effects of the Z boson on atomic states, and measurements of atomic energy levels that verify the predictions of quantum electrodynamics (QED). Precision atomic-physics experiments also include a vast array of searches for effects predicted by theories beyond the SM (BSM), such as fifth forces and permanent electric dipole moments that violate parity and time-reversal symmetries. These tests probe potentially subtle yet constant (or controllable) changes of atomic properties that can be revealed by averaging away noise and controlling systematic errors.

GNOME

But what if the glimpses of BSM physics that atomic spectroscopists have so painstakingly searched for over the past decades are not effects that persist over the many weeks or months of a typical measurement campaign, but rather transient events that occur only sporadically? For example, might not cataclysmic astrophysical events such as black-hole mergers or supernova explosions produce hypothetical ultralight bosonic fields impossible to generate in the laboratory? Or might not Earth occasionally pass through some invisible “cloud” of a substance (such as dark matter) produced in the early universe? Such transient phenomena could easily be missed by experimenters when data are averaged over long times to increase the signal-to-noise ratio.

Transient phenomena

Detecting such unconventional events presents several challenges. If a transient signal heralding new physics were observed with a single detector, it would be exceedingly difficult to confidently distinguish the exotic-physics signal from the many sources of noise that plague precision atomic-physics measurements. However, if transient interactions occur on a global scale, a network of such detectors geographically distributed over Earth could search for specific patterns in the timing and amplitude of the signals that would be unlikely to occur randomly. By correlating the readouts of many detectors, local effects can be filtered away and exotic physics can be distinguished from mundane physics.

This idea forms the basis for the Global Network of Optical Magnetometers to search for Exotic physics (GNOME), an international collaboration involving 14 institutions from all over the world (see “Correlated” figure). Such an idea, like so many others in physics, is not entirely new. The same concept is at the heart of the worldwide network of interferometers used to observe gravitational waves (LIGO, Virgo, GEO, KAGRA, TAMA, CLIO), and the global network of proton-precession magnetometers used to monitor geomagnetic and solar activity. What distinguishes GNOME from other global sensor networks is that it is specifically dedicated to searching for signals from BSM physics that have evaded detection in earlier experiments.

Optical atomic magnetometer

GNOME is a growing network of more than a dozen optical atomic magnetometers, with stations in Europe, North America, Asia and Australia. The project was proposed in 2012 by a team of physicists from the University of California at Berkeley, Jagiellonian University, California State University – East Bay, and the Perimeter Institute. The network started taking preliminary data in 2013, with the first dedicated science-run beginning in 2017. With more data on the way, the GNOME collaboration, consisting of more than 50 scientists from around the world, is presently combing the data for signs of the unexpected, with its first results expected later this year.

Exotic-physics detectors

Optical atomic magnetometers (OAMs) are among the most sensitive devices for measuring magnetic fields. However, the atomic vapours at the heart of GNOME’s OAMs are placed inside multi-layer shielding systems, reducing the effects of external magnetic fields by a factor of more than a million. Thus, in spite of using extremely sensitive magnetometers, GNOME sensors are largely insensitive to magnetic signals. The reasoning is that many BSM theories predict the existence of exotic fields that couple to atomic spins and would penetrate magnetic shields largely unaffected. Since the OAM signal is proportional to the spin-dependent energy shift regardless of whether or not a magnetic field causes the shift, OAMs – even enclosed within magnetic shields – are sensitive to a broad class of exotic fields.

The OAM setup

The basic principle behind OAM operation (see “Optical rotation” figure) involves optically measuring spin-dependent energy shifts by controlling and monitoring an ensemble of atomic spins via angular-momentum exchange between the atoms and light. The high efficiency of optical pumping and probing of atomic spin ensembles, along with a wide array of clever techniques to minimise atomic spin relaxation (even at high atomic vapour densities), has enabled OAMs to achieve sensitivities to spin-dependent energy shifts at levels well below 10⁻²⁰ eV after only one second of integration. One of the 14 OAM installations, at California State University – East Bay, is shown in the “Benchtop physics” image.
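To put that energy-shift sensitivity in context, a rough back-of-the-envelope conversion can express it as an equivalent magnetic field and frequency. This is an illustration only, assuming a Zeeman shift of one Bohr magneton per tesla (real atomic g-factors differ by factors of order one):

```python
# Convert an energy-shift sensitivity into an equivalent magnetic field
# and frequency. Assumes a Zeeman shift of ~1 Bohr magneton per tesla,
# a simplification for illustration.

MU_B = 5.788e-5   # Bohr magneton, eV/T
H_PLANCK = 4.136e-15  # Planck constant, eV*s

delta_E = 1e-20   # energy-shift sensitivity, eV

b_equiv = delta_E / MU_B       # equivalent field in tesla
f_equiv = delta_E / H_PLANCK   # equivalent frequency in Hz

print(f"{b_equiv:.1e} T")   # about 1.7e-16 T, i.e. sub-femtotesla
print(f"{f_equiv:.1e} Hz")  # about 2.4e-06 Hz
```

Sub-femtotesla field resolution is consistent with the state of the art for shielded optical magnetometers, which is why such devices can hunt for exceedingly feeble spin couplings.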

However, one might wonder: do any of the theoretical scenarios suggesting the existence of exotic fields predict signals detectable by a magnetometer network while also evading all existing astrophysical and laboratory constraints? This is not a trivial requirement, since previous high-precision atomic spectroscopy experiments have established stringent limits on BSM physics. In fact, OAM techniques have been used by a number of research groups (including our own) over the past several decades to search for spin-dependent energy shifts caused by exotic fields sourced by nearby masses or polarised spins. Closely related work has ruled out vast areas of BSM parameter space by comparing measurements of hyperfine structure in simple hydrogen-like atoms to QED calculations. Furthermore, if exotic fields do exist and couple strongly enough to atomic spins, they could cause noticeable cooling of stars and affect the dynamics of supernovae. So far, all laboratory experiments have produced null results and all astrophysical observations are consistent with the SM. Thus if such exotic fields exist, their coupling to atomic spins must be extremely feeble.

Despite these constraints and requirements, theoretical scenarios exist that are consistent with existing limits and yet predict effects measurable with GNOME. Prime examples, and the present targets of the GNOME collaboration’s search efforts, are ultralight bosonic fields. A canonical example of an ultralight boson is the axion. The axion emerged from an elegant solution, proposed by Roberto Peccei and Helen Quinn in the late 1970s, to the strong-CP problem. The Peccei–Quinn mechanism explains the mystery of why the strong interaction, to the highest precision we can measure, respects the combined CP symmetry, whereas quantum chromodynamics naturally accommodates CP violation at a level ten orders of magnitude larger than present constraints. If CP violation in the strong interaction is described not by a constant term but rather by a dynamical (axion) field, it can be significantly suppressed by spontaneous symmetry breaking at a high energy scale. If the symmetry-breaking scale is at the grand-unification-theory (GUT) scale (~10¹⁶ GeV), the axion mass is around 10⁻¹⁰ eV, and at the Planck scale (~10¹⁹ GeV) around 10⁻¹³ eV – both many orders of magnitude less massive than even neutrinos. Searching for ultralight axions therefore offers the exciting possibility of probing physics at the GUT and Planck scales, far beyond the direct reach of any existing collider.
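These mass scales follow from the inverse scaling of the axion mass with the symmetry-breaking (decay-constant) scale f_a. Using the commonly quoted QCD-axion benchmark relation (a standard result from the literature, not stated in the original text):

```latex
m_a \simeq 5.7\,\mu\mathrm{eV}\times\frac{10^{12}\,\mathrm{GeV}}{f_a}
\;\;\Longrightarrow\;\;
m_a\bigl(f_a \sim 10^{16}\,\mathrm{GeV}\bigr) \sim 6\times10^{-10}\,\mathrm{eV},
\qquad
m_a\bigl(f_a \sim 10^{19}\,\mathrm{GeV}\bigr) \sim 6\times10^{-13}\,\mathrm{eV}.
```

Raising the symmetry-breaking scale by three orders of magnitude thus lowers the axion mass by the same factor, which is how GUT- and Planck-scale physics maps onto the ultralight mass range.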

Beyond the Standard Model

In addition to the axion, there is a wide range of other hypothetical ultralight bosons that couple to atomic spins and could generate signals potentially detectable with GNOME. Many theories predict the existence of spin-0 bosons with properties similar to the axion (so-called axion-like particles, ALPs). A prominent example is the relaxion, proposed by Peter Graham, David Kaplan and Surjeet Rajendran to explain the hierarchy problem: the mystery of why the electroweak force is about 24 orders of magnitude stronger than the gravitational force. In 2010, Asimina Arvanitaki and colleagues found that string theory suggests the existence of many ALPs of widely varying masses, from 10⁻³³ eV to 10⁻¹⁰ eV. From the perspective of BSM theories, ultralight bosons are ubiquitous. Some predict ALPs such as “familons”, “majorons” and “arions”. Others predict new ultralight spin-1 bosons such as dark and hidden photons. There is even the possibility of exotic spin-0 or spin-1 gravitons: while the graviton for a quantum theory of gravity matching general relativity must be spin-2, alternative gravity theories (for example torsion gravity and scalar–vector–tensor gravity) predict additional spin-0 and/or spin-1 gravitons.

Earth passing through a topological defect

It also turns out that such ultralight bosons could explain dark matter. Most searches for ultralight bosonic dark matter assume the bosons to be approximately uniformly distributed throughout the dark matter halo that envelopes the Milky Way. However, in some theoretical scenarios, the ultralight bosons can clump together into bosonic “stars” due to self-interactions. In other scenarios, due to a non-trivial vacuum energy landscape, the ultralight bosons could take the form of “topological” defects, such as domain walls that separate regions of space with different vacuum states of the bosonic field (see “New domains” figure). In either of these cases, the mass-energy associated with ultralight bosonic dark matter would be concentrated in large composite structures that Earth might only occasionally encounter, leading to the sort of transient signals that GNOME is designed to search for.

Magnetic field deviation

Yet another possibility is that intense bursts of ultralight bosonic fields might be generated by cataclysmic astrophysical events such as black-hole mergers. Much of the underlying physics of coalescing singularities is unknown, possibly involving quantum-gravity effects far beyond the reach of high-energy experiments on Earth, and it turns out that quantum gravity theories generically predict the existence of ultralight bosons. Furthermore, if ultralight bosons exist, they may tend to condense in gravitationally bound halos around black holes. In these scenarios, a sizable fraction of the energy released when black holes merge could plausibly be emitted in the form of ultralight bosonic fields. If the energy density of the ultralight bosonic field is large enough, networks of atomic sensors like GNOME might be able to detect a signal.

In order to use OAMs to search for exotic fields, the effects of environmental magnetic noise must be reduced, controlled, or cancelled. Even though the GNOME magnetometers are enclosed in multi-layer magnetic shields so that signals from external electromagnetic fields are significantly suppressed, there is a wide variety of phenomena that can mimic the sorts of signals one would expect from ultralight bosonic fields. These include vibrations, laser instabilities, and noise in the circuitry used for data acquisition. To combat these spurious signals, each GNOME station uses auxiliary sensors to monitor electromagnetic fields outside the shields (which could leak inside the shields at a far-reduced level), accelerations and rotations of the apparatus, and overall magnetometer performance. If the auxiliary sensors indicate data may be suspect, the data are flagged and ignored in the analysis (see “Spurious signals” figure).

GNOME data that have passed this initial quality check can then be scanned for signals matching the patterns expected under various exotic-physics hypotheses. For example, to test the hypothesis that dark matter takes the form of ALP domain walls, one searches for the signal pattern resulting from the passage of Earth through an astronomical-sized plane with a finite thickness given by the ALP’s Compton wavelength. The relative velocity between the domain wall and Earth is unknown, but can be assumed to be randomly drawn from the velocity distribution of virialised dark matter, which has an average speed of about one thousandth the speed of light. The relative timing of signals appearing in different GNOME magnetometers should be consistent with a single velocity v: stations close together along the direction of wall propagation should detect signals with small relative delays, stations far apart should detect signals with larger delays, and the delays should occur in a sensible sequence. The energy shift that could lead to a detectable signal in GNOME magnetometers is caused by an interaction of the domain-wall field φ with the atomic spin S whose strength is proportional to the scalar product of the spin with the gradient of the field, S∙∇φ. The gradient ∇φ is proportional to the wall’s momentum relative to S, and hence the signal amplitudes in different GNOME magnetometers are proportional to S∙v. Both the signal-timing pattern and the signal-amplitude pattern should be consistent with a single value of v; signals inconsistent with such a pattern can be rejected as noise.
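The timing part of this consistency test is simple geometry: a plane wall with unit normal n moving at speed v reaches a station at position r with delay (n·r)/v relative to a reference point at Earth's centre. A minimal sketch, with purely hypothetical station coordinates:

```python
import numpy as np

def predicted_delays(positions_km, n_hat, speed_km_s):
    """Arrival-time delays (s) of a plane domain wall at each station,
    relative to a reference at Earth's centre."""
    return positions_km @ n_hat / speed_km_s

# Hypothetical station positions in an Earth-centred frame (km).
stations = np.array([
    [6371.0, 0.0, 0.0],
    [0.0, 6371.0, 0.0],
    [-4500.0, 4500.0, 0.0],
    [0.0, 0.0, 6371.0],
])

# Virialised dark matter moves at roughly 1e-3 c, i.e. ~300 km/s.
v_wall = 300.0                       # km/s
n_hat = np.array([1.0, 0.0, 0.0])    # assumed direction of wall motion

t_pred = predicted_delays(stations, n_hat, v_wall)
print(np.round(t_pred, 1))  # delays in s; the network-wide spread is
                            # of order Earth radius / v, i.e. ~20 s
```

A candidate event is accepted only if the measured delays (and amplitudes, via S·v) can be fitted by a single (n, v) pair within the measurement uncertainties.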

If such exotic fields exist, their coupling to atomic spins must be extremely feeble

To claim discovery of a signal heralding BSM physics, detections must be compared to the background rate of spurious false-positive events that are consistent with the expected signal pattern but not generated by exotic physics. The false-positive rate can be estimated by analysing time-shifted data: the data stream from each GNOME magnetometer is shifted in time relative to the others by an amount much larger than any delay resulting from the propagation of ultralight bosonic fields through Earth. Such time-shifted data can be assumed to be free of exotic-physics signals, so any detections are necessarily false positives: merely random coincidences due to noise. When the GNOME data are analysed without time shifts, a signal must surpass the 5σ threshold relative to the background determined from the time-shifted data to be regarded as an indication of BSM physics. This means that, for a year-long data set, an event due to noise coincidentally matching the assumed signal pattern throughout the network would occur only once every 3.5 million years.
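The logic of the time-shift background estimate can be sketched in a few lines. Here, noise-only streams from several simulated detectors are rolled relative to one another; any network-wide "coincidence" surviving in the shifted data is by construction a chance alignment of noise. The detector count, thresholds and sample sizes are illustrative, not GNOME's actual analysis parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n_det, n_samples = 4, 100_000
threshold = 3.0  # per-detector trigger, in units of the noise sigma

# Gaussian noise standing in for magnetometer data with no real signal.
data = rng.normal(0.0, 1.0, size=(n_det, n_samples))

def coincidences(streams, thresh):
    """Count samples where every detector simultaneously exceeds thresh."""
    return int(np.all(np.abs(streams) > thresh, axis=0).sum())

# Shift each stream by a different large offset, destroying any genuine
# cross-detector correlation while preserving each stream's statistics.
shifted = np.stack([np.roll(s, 1000 * i) for i, s in enumerate(data)])
background = coincidences(shifted, threshold)

# Requiring all four detectors to fire at once makes chance coincidences
# extremely rare: the expected count here is far below one.
print(background)
```

In the real analysis the same machinery runs over many independent shifts to build up a background distribution, against which an unshifted candidate must stand out at 5σ.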

Inspiring efforts

Having already collected over a year of data, and with more on the way, the GNOME collaboration is presently combing the data for signs of BSM physics. New results based on recent GNOME science runs are expected in 2020; these would represent the first-ever search for such transient exotic spin-dependent effects. Improvements in magnetometer sensitivity, signal characterisation and data-analysis techniques are expected to improve on these initial results over the next several years. Significantly, GNOME has inspired similar efforts using other networks of precision quantum sensors: atomic clocks, interferometers, cavities, superconducting gravimeters, etc. In fact, the results of searches for exotic transient signals using clock networks have already been reported in the literature, constraining significant parameter space for various BSM scenarios. We would suggest that all experimentalists seriously consider accurately time-stamping, storing and sharing their data so that searches for correlated signals due to exotic physics can be conducted a posteriori. One never knows what nature might be hiding just beyond the frontier of the precision of past measurements.
