
A voyage to the heart of the neutrino

On 11 June 2018, a tense silence filled the large lecture hall of the Karlsruhe Institute of Technology (KIT) in Germany. In front of an audience of more than 250 people, 15 red buttons were pressed simultaneously by a panel of senior figures including recent Nobel laureates Takaaki Kajita and Art McDonald. At the same time, operators in the control room of the Karlsruhe Tritium Neutrino (KATRIN) experiment lowered the retardation voltage of the apparatus so that the first beta electrons were able to pass into KATRIN’s giant spectrometer vessel. Great applause erupted when the first beta electrons hit the detector.

In the long history of measuring the tritium beta-decay spectrum to determine the neutrino mass, the ensuing weeks of KATRIN’s first data-taking opened a new chapter. Everything worked as expected, and KATRIN’s initial measurements have already propelled it into the top ranks of neutrino experiments. The aim of this ultra-high-precision beta-decay spectroscope, more than 15 years in the making, is to determine, by the mid-2020s, the absolute mass of the neutrino.

Massive discovery

Since the discovery of the oscillation of atmospheric neutrinos by the Super-Kamiokande experiment in 1998, and of the flavour transitions of solar neutrinos by the SNO experiment shortly afterwards, it has been clear that neutrino masses are not zero, but large enough to cause interference between distinct mass eigenstates as a neutrino wavepacket evolves in time. We now know that the three neutrino flavour states observed in experiments – νe, νμ and ντ – are mixtures of three neutrino mass states.

Though not massless, neutrinos are exceedingly light. Previous experiments in Mainz and Troitsk designed to measure the neutrino-mass scale directly produced an upper limit of 2 eV – at least a factor of 250,000 below the mass of the electron, the next-lightest massive elementary particle. Nevertheless, neutrino masses are extremely important for cosmology as well as for particle physics. With a number density of around 336 cm⁻³, relic neutrinos are the most abundant particles in the universe besides photons, and therefore play a distinct role in the formation of cosmic structure. Combining data from the Planck satellite with galaxy-survey data (baryonic acoustic oscillations) and simulations of structure formation yields an upper limit on the sum of the three neutrino masses of 0.12 eV at 95% confidence within the standard Lambda cold dark matter (ΛCDM) cosmological model.
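
For orientation, the cosmological bound can be translated into an energy density using the standard relation between the relic-neutrino number density quoted above and the mass sum (a textbook relation, added here for context rather than taken from the article):

```latex
% Relic-neutrino energy density in terms of the mass sum, assuming the
% standard number density of 112 cm^-3 per species (336 cm^-3 in total):
\Omega_\nu h^2 \simeq \frac{\sum_i m_{\nu_i}}{93.14\,\mathrm{eV}},
\qquad
\sum_i m_{\nu_i} < 0.12\,\mathrm{eV} \;\Rightarrow\; \Omega_\nu h^2 \lesssim 1.3\times 10^{-3}.
```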

Considerations of “naturalness” lead most theorists to speculate that the exceedingly tiny neutrino masses do not arise from standard Yukawa couplings to the Higgs boson, as per the other fermions, but are generated by a different mass mechanism. Since neutrinos are electrically neutral, they could be identical to their antiparticles, making them Majorana particles. Via the so-called seesaw mechanism, this interesting scenario would require a new and very high particle mass scale to balance the smallness of the neutrino masses, which would be unreachable with present accelerators.
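
The balancing of scales at work here can be made concrete with the type-I seesaw relation, a standard formula not spelled out in the text; the numerical values below are purely illustrative:

```latex
% Type-I seesaw: a light neutrino mass generated from a Dirac mass m_D of
% electroweak size and a heavy Majorana mass M. Illustrative values only.
m_\nu \sim \frac{m_D^2}{M},
\qquad
m_D \sim 100\,\mathrm{GeV},\; M \sim 10^{14}\,\mathrm{GeV}
\;\Rightarrow\; m_\nu \sim 0.1\,\mathrm{eV}.
```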

KATRIN’s main spectrometer

As neutrino oscillations arise due to interference between mass eigenstates, neutrino-oscillation experiments are only able to determine splittings between the squares of the neutrino mass eigenstates. Three experimental avenues are currently being pursued to determine the neutrino mass. The most stringent upper limit is currently the model-dependent bound set by cosmological data, as already mentioned, which is valid within the ΛCDM model. A second approach is to search for neutrinoless double-beta decay, which allows a statement to be made about the size of the neutrino masses but presupposes the Majorana nature of neutrinos. The third approach – the one adopted by KATRIN – is the direct determination of the neutrino mass from the kinematics of a weak process such as beta decay, which is completely model-independent and depends only on the principle of energy and momentum conservation.
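
Why oscillations are blind to the absolute mass scale can be read off the standard two-flavour vacuum oscillation probability, in which only the mass-squared splitting appears (a textbook expression, included for context):

```latex
% Two-flavour oscillation probability in vacuum: only the splitting
% Delta m^2 = m_2^2 - m_1^2 enters, never the absolute masses.
P(\nu_\alpha \to \nu_\beta) = \sin^2 2\theta\,
\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right).
```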

Figure 1

The direct determination of the neutrino mass relies on the precise measurement of the shape of the beta-electron spectrum near the endpoint, which is governed by the available phase space (figure 1). This spectral shape is altered by the neutrino-mass value: the smaller the mass, the smaller the spectral modification. One would expect to see three modifications, one for each neutrino mass eigenstate. However, due to the tiny neutrino-mass differences, a weighted sum is observed. This “average electron neutrino mass” is formed by the incoherent sum of the squares of the three neutrino mass values, weighted by how strongly each mass eigenstate contributes to the electron neutrino according to the PMNS neutrino-mixing matrix. The super-heavy hydrogen isotope tritium is ideal for this purpose because it combines a very low endpoint energy, E0, of 18.6 keV and a short half-life of 12.3 years with a simple nuclear and atomic structure.
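
In formulae, the quantity that governs the spectral distortion is the effective electron (anti)neutrino mass squared, the incoherent sum described above (a standard definition):

```latex
% Effective electron-neutrino mass squared probed by beta-decay kinematics:
% an incoherent sum over the mass eigenstates, weighted by the PMNS elements U_ei.
m_\beta^2 \equiv \sum_{i=1}^{3} |U_{ei}|^2\, m_i^2 .
```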

KATRIN is born

Around the turn of the millennium, motivated by the neutrino oscillation results, Ernst Otten of the University of Mainz and Vladimir Lobashev of INR Troitsk proposed a new, much more sensitive experiment to measure the neutrino mass from tritium beta decay. To this end, the best methods from the previous experiments in Mainz, Troitsk and Los Alamos were to be combined and upscaled by up to two orders of magnitude in size and precision. Together with new technologies and ideas, such as laser Raman spectroscopy or active background reduction methods, the apparatus would increase the sensitivity to the observable in beta decay (the square of the electron antineutrino mass) by a factor of 100, resulting in a neutrino-mass sensitivity of 0.2 eV. Accordingly, the entire experiment was designed to the limits of what was feasible and even beyond (see “Technology transfer delivers ultimate precision” box).

Technology transfer delivers ultimate precision

The electron transport and tritium retention system

Many technologies had to be pushed to the limits of what was feasible or even beyond. KATRIN became a CERN-recognised experiment (RE14) in 2007 and the collaboration worked with CERN experts in many areas to achieve this. The KATRIN main spectrometer is the largest ultra-high-vacuum vessel in the world, with a residual gas pressure in the range of 10⁻¹¹ mbar – a pressure that is otherwise only found in large volumes inside the LHC ring – equivalent to the pressure recorded at the lunar surface.

Even though the inner surface was instrumented with a complex dual-layer wire electrode system for background suppression and electric-field shaping, this extreme vacuum was made possible by rigorous material selection and treatment in addition to non-evaporable getter technology developed at CERN. KATRIN’s almost 40 m-long chain of superconducting magnets with two large chicanes was put into operation with the help of former CERN experts, and a 223Ra source was produced at ISOLDE for background studies at KATRIN. A series of 83mKr conversion electron sources based on implanted 83Rb for calibration purposes was initially produced at ISOLDE. At present these are produced by KATRIN collaborators and further developed with regard to line stability.

Conversely, the KATRIN collaboration has returned its knowledge and methods to the community. For example, the ISOLDE high-voltage system was calibrated twice with the ppm-accuracy KATRIN voltage dividers, and the magnetic and electrical field calculation and tracking programme KASSIOPEIA developed by KATRIN was published as open source and has become the standard for low-energy precision experiments. The fast and precise laser Raman spectroscopy developed for KATRIN is also being applied to fusion technology.

KIT was soon identified as the best place for such an experiment, as it had the necessary experience and infrastructure with the Tritium Laboratory Karlsruhe. The KIT board of directors quickly took up this proposal and a small international working group started to develop the project. At a workshop at Bad Liebenzell in the Black Forest in January 2001, the project received so much international support that KIT, together with nearly all the groups from the previous neutrino-mass experiments, founded the KATRIN collaboration. Currently, the 150-strong KATRIN collaboration comprises 20 institutes from six countries.

It took almost 16 years from the first design to complete KATRIN, largely because many new technologies had to be developed, such as a novel concept to limit the temperature fluctuations of the huge tritium source to the mK scale at 30 K, or the high-voltage stabilisation and calibration to the 10 mV scale at 18.6 kV. The experiment’s two most important and also most complex components are the gaseous, windowless molecular tritium source (WGTS) and the very large spectrometer. In the WGTS, tritium gas is introduced at the midpoint of the 10 m-long beam tube, from where it flows out to both sides to be pumped out again by turbomolecular pumps. After being partially cleaned it is re-injected, yielding a closed tritium cycle. This results in an almost opaque column density with a total decay rate of 10¹¹ per second. The beta electrons are guided adiabatically to a tandem of a pre- and a main spectrometer by superconducting magnets of up to 6 T. Along the way, differential and cryogenic pumping sections including geometric chicanes reduce the tritium flow by more than 14 orders of magnitude to keep the spectrometers free of tritium (figure 2).

Filtration

Figure 2

The KATRIN spectrometers operate as so-called MAC-E filters, whereby electrons are guided by two superconducting solenoids at either end and their momenta are collimated by the magnetic field gradient. This “magnetic bottle” effect transforms almost all kinetic energy into longitudinal energy, which is filtered by an electrostatic retardation potential so that only electrons with enough energy to overcome the barrier are able to pass through. The smaller pre-spectrometer blocks the low-energy part of the beta spectrum (which carries no information on the neutrino mass), while the 10 m-diameter main spectrometer provides a much sharper filter width due to its huge size.
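
The filter width of a MAC-E spectrometer is set by the ratio of the magnetic field in the analysing plane to the maximum field, which is why the huge main spectrometer gives such a sharp filter. With values close to KATRIN's design parameters (quoted here for illustration, not taken from the text above), the width near the tritium endpoint is below 1 eV:

```latex
% MAC-E filter energy resolution from adiabatic collimation:
\frac{\Delta E}{E} = \frac{B_\mathrm{min}}{B_\mathrm{max}},
\qquad
\frac{3\times 10^{-4}\,\mathrm{T}}{6\,\mathrm{T}} \times 18.6\,\mathrm{keV} \approx 0.9\,\mathrm{eV}.
```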

The transmitted electrons are detected by a high-resolution segmented silicon detector. By varying the retarding potential of the main spectrometer, a narrow region of the beta spectrum several tens of eV below the endpoint is scanned, where the imprint of a non-zero neutrino mass is maximal. Since the relative fraction of the tritium beta spectrum in the last 1 eV below the endpoint amounts to just 2 × 10⁻¹³, KATRIN demands a tritium source of the highest intensity. Of equal importance is the high precision needed to understand the measured beta spectrum. KATRIN therefore possesses a complex calibration and monitoring system to determine all systematics with the highest precision in situ, e.g. the source strength, the inelastic scattering of beta electrons in the tritium source, the retardation voltage and the work functions of the tritium source and the main spectrometer.
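
The tiny usable fraction quoted above follows from simple phase-space counting: near the endpoint the decay rate falls off roughly as (E0 − E)², so the fraction of decays in the last ΔE scales as (ΔE/E0)³. A minimal back-of-the-envelope sketch (ignoring the Fermi function, molecular final states and all other corrections) reproduces the order of magnitude:

```python
# Rough estimate of the fraction of tritium beta decays that fall in the
# last dE below the endpoint, assuming a pure (E0 - E)^2 phase-space
# falloff and neglecting all spectral corrections.
E0 = 18_600.0   # tritium endpoint energy in eV
dE = 1.0        # analysis window below the endpoint in eV

fraction = (dE / E0) ** 3   # normalised integral of (E0 - E)^2 over the last dE
print(f"~{fraction:.1e} of all decays occur in the last {dE:.0f} eV")
# prints ~1.6e-13, the same order of magnitude as the 2e-13 quoted in the text
```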

Start-up and beyond

After intense periods of commissioning during 2018, the tritium-source activity was increased from its initial value of 0.5 GBq (used for the inauguration measurements) to 25 GBq (approximately 22% of nominal activity) in spring 2019. By April, the first KATRIN science run had begun and everything went like clockwork. The decisive source parameters – temperature, inlet pressure and tritium content – allowed excellent data to be taken, and the collaboration worked in several independent teams to analyse these data. The critical systematic uncertainties were determined both by Monte Carlo propagation and with the covariance-matrix method, and the analyses were blinded so as not to introduce bias. The excitement during the un-blinding was huge within the KATRIN collaboration, which gathered for this special event, and relief spread when the result became known: the squared neutrino mass turned out to be compatible with zero within its uncertainties. The model fits the data very well (figure 3) and the fitted endpoint is compatible with the mass difference between 3He and tritium measured in Penning traps. The new results were presented at the international TAUP 2019 conference in Toyama, Japan, and have recently been published.

Figure 3

This first result shows that all aspects of the KATRIN experiment, from hardware to data acquisition to analysis, work as expected. The statistical uncertainty of the first KATRIN result is already smaller by a factor of two than that of previous experiments, and the systematic uncertainties are smaller by a factor of six. A neutrino mass could not yet be extracted from these first four weeks of data, but an upper limit on the neutrino mass of 1.1 eV (90% confidence) can be drawn, catapulting KATRIN directly to the top of the field of direct neutrino-mass experiments. In the mass region around 1 eV, the limit corresponds to the quasi-degenerate neutrino-mass range, where the mass splittings implied by neutrino-oscillation experiments are negligible compared to the absolute masses.

The neutrino-mass result from KATRIN is complementary to results obtained from searches for neutrinoless double beta decay, which are sensitive to the “coherent sum” mββ of all neutrino mass eigenstates contributing to the electron neutrino. Apart from additional phases that can lead to possible cancellations in this sum, the values of the nuclear matrix elements that need to be calculated to connect the neutrino mass mββ with the observable (the half-life) still possess uncertainties of a factor two. Therefore, the result from a direct neutrino-mass determination is more closely connected to results from cosmological data, which give (model-dependent) access to the neutrino-mass sum.
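
In contrast to the incoherent sum probed kinematically, the quantity relevant to neutrinoless double-beta decay is a coherent sum (a standard definition), which is why the additional Majorana phases can cause partial cancellations:

```latex
% Effective Majorana mass in neutrinoless double-beta decay: a coherent sum,
% so the complex PMNS elements (including Majorana phases) can partially cancel.
m_{\beta\beta} \equiv \Big|\sum_{i=1}^{3} U_{ei}^{2}\, m_i\Big| .
```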

A sizeable influence

Currently, KATRIN is taking more data and has already increased the source activity by a factor of four, to close to its design value. The background rate is still a challenge. Various measures, such as baking out the spectrometer and installing liquid-nitrogen-cooled baffles in front of the getter pumps, have already reduced the background by a factor of 10, and more will be implemented in the next few years. For the final KATRIN sensitivity of 0.2 eV (90% confidence) on the absolute neutrino-mass scale, a total of 1000 days of data are required. With this sensitivity KATRIN will either find the neutrino mass or set a stringent upper limit. The former would confront standard cosmology, while the latter would exclude quasi-degenerate neutrino masses and a sizeable influence of neutrinos on the formation of structure in the universe. This will be augmented by searches for physics beyond the Standard Model, such as for sterile-neutrino admixtures with masses from the eV to the keV scale.

Operators in the KATRIN control room

Neutrino-oscillation results imply a lower limit on the effective electron-neutrino mass accessible to direct neutrino-mass experiments of about 10 meV (50 meV) for the normal (inverted) mass ordering. Many plans therefore exist to cover this region in the future. At KATRIN, there is a strong R&D programme to upgrade the MAC-E filter principle from the current integral to a differential read-out, which would allow a factor-of-two improvement in sensitivity to the neutrino mass. New approaches to determine the absolute neutrino-mass scale are also being developed: Project 8, a radio-spectroscopy method to eventually be applied to an atomic tritium source; and the electron-capture experiments ECHo and HOLMES, which intend to deploy large arrays of cryogenic bolometers with the implanted isotope 163Ho. In parallel, the next generation of neutrinoless double-beta-decay experiments such as LEGEND, CUPID and nEXO (as well as future xenon-based dark-matter experiments) aim to cover the full range of the inverted neutrino-mass ordering. Finally, refined cosmological data should allow us to probe the same mass region (and beyond) within the next decades, while oscillation experiments such as JUNO, DUNE and Hyper-Kamiokande will probe which neutrino-mass ordering is implemented in nature. As a result of this broad programme for the 2020s, the elusive neutrino should finally yield some of its secrets and inner properties beyond mixing.

Who ordered all of that?

Masses of quarks and leptons

The origin of the three families of quarks and leptons and their extreme range of masses is a central mystery of particle physics. According to the Standard Model (SM), quarks and leptons come in complete families that interact identically with the gauge forces, leading to a remarkably successful quantitative theory describing practically all data at the quantum level. The various quark and lepton masses are described by having different interaction strengths with the Higgs doublet (figure 1, left), also leading to quark mixing and charge-parity (CP) violating transitions involving strange, bottom and charm quarks. However, the SM provides no understanding of the bizarre pattern of quark and lepton masses, quark mixing or CP violation.

In 1998 the SM suffered its strongest challenge to date with the decisive discovery of neutrino oscillations resolving the atmospheric neutrino anomaly and the long-standing problem of the low flux of electron neutrinos from the Sun. The observed neutrino oscillations require at least two non-zero but extremely small neutrino masses, around one ten millionth of the electron mass or so, and three sizeable mixing angles. However, since the minimal SM assumes massless neutrinos, the origin and nature of neutrino masses (i.e. whether they are Dirac or Majorana particles, the latter requiring the neutrino and antineutrino to be related by CP conjugation) and mixing is unclear, and many possible SM extensions have been proposed.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore, with the fermion mass hierarchy now spanning at least 12 orders of magnitude, from the neutrino to the top quark. However, it is not only the fermion mass hierarchy that is unsettling. There are now 28 free parameters in a Majorana-extended SM, including a whopping 22 associated with flavour, surely too many for a fundamental theory of nature. To restate Isidor Isaac Rabi’s famous question following the discovery of the muon in 1936: who ordered all of that?

A theory of flavour

Figure 1

There have been many attempts to formulate a theory beyond the SM that can address the flavour puzzles. Most attempt to enlarge the group structure of the SM describing the strong, weak and electromagnetic gauge forces: SU(3)C × SU(2)L × U(1)Y (see “A taste of flavour in elementary particle physics” panel). The basic premise is that, unlike in the SM, the three families are distinguished by new quantum numbers associated with a family or flavour symmetry group, Gfl, which is tacked onto the SM gauge group, enlarging the structure to Gfl × SU(3)C × SU(2)L × U(1)Y. The earliest ideas, dating back to the 1970s, include radiative fermion-mass generation, first proposed by Weinberg in 1972, who supposed that some Yukawa couplings might be forbidden at tree level by a flavour symmetry but generated effectively via loop diagrams. Alternatively, the Froggatt–Nielsen (FN) mechanism of 1979 assumed an additional U(1)fl symmetry under which the quarks and leptons carry various charges.

To account for family replication and to address the question of large lepton mixing, theorists have explored a larger non-Abelian family symmetry, SU(3)fl, where the three families are analogous to the three quark colours in quantum chromodynamics (QCD). Many other examples have been proposed based on subgroups of SU(3)fl, including discrete symmetries (figure 2, right). More recently, theorists have considered extra-dimensional models in which the Higgs field is located on a 4D brane, while the fermions are free to roam over the extra dimension, overlapping with the Higgs field in such a way as to yield hierarchical Yukawa couplings. Still other ideas include partial compositeness, in which fermions may get hierarchical masses from mixing between an elementary sector and a composite one. The possibilities are seemingly endless. However, all such theories share one common question: what is the scale (or scales), Mfl, of new physics associated with flavour?

Since experiments at CERN and elsewhere have thoroughly probed the electroweak scale, all we can say for sure is that, unless the new physics is extremely weakly coupled, Mfl can be anywhere from the Planck scale (10¹⁹ GeV), where gravity becomes important, to the electroweak scale at the mass of the W boson (80 GeV). Thus the flavour scale is very unconstrained.

 

A taste of flavour in elementary particle physics

I I Rabi

The origin of flavour can be traced back to the discovery of the electron – the first elementary fermion – in 1897. Following the discovery of relativity and quantum mechanics, the electron and the photon became the subject of the most successful theory of all time: quantum electrodynamics (QED). However, the smallness of the electron mass (me = 0.511 MeV) compared to the mass of an atom has always intrigued physicists.

The mystery of the electron mass was compounded by the discovery in 1936 of the muon with a mass of 207 me but otherwise seemingly identical properties to the electron. This led Isidor Isaac Rabi to quip “who ordered that?”. Four decades later, an even heavier version of the electron was discovered, the tau lepton, with mass mτ = 17 mμ. Yet the seemingly arbitrary values of the masses of the charged leptons are only part of the story. It soon became clear that hadrons were made from quarks that come in three colour charges mediated by gluons under a SU(3)C gauge theory, quantum chromodynamics (QCD). The up and down quarks of the first family have intrinsic masses mu= 4 me and md = 10 me, accompanied by the charm and strange quarks (mc = 12 mμ and ms = 0.9 mμ) of a second family and the heavyweight top and bottom quarks (mt = 97 mτ and mb = 2.4 mτ) of a third family.

It was also realised that the different quark “flavours”, a term invented by Gell-Mann and Fritzsch, could undergo mixing transitions. For example, at the quark level the radioactive decay of a nucleus is explained by the transformation of a down quark into an up quark plus an electron and an electron antineutrino. Shortly after Pauli hypothesised the neutrino in 1930, Fermi proposed a theory of weak interactions based on a contact interaction between the four fermions, with a coupling strength given by a dimensionful constant GF, whose scale was later identified with the mass of the W boson: GF ∝ 1/mW².

After decades of painstaking observation, including the discovery of parity violation, whereby only left-handed particles experience the weak interaction, Fermi’s theory of weak interactions and QED were merged into an electroweak theory based on SU(2)L × U(1)Y gauge theory. The left-handed (L) electron and neutrino form a doublet under SU(2)L, while the right-handed electron is a singlet, with the doublet and singlet carrying hypercharge U(1)Y and the pattern repeating for the second and third lepton families. Similarly, the left-handed up and down quarks form doublets, and so on. The electroweak SU(2)L× U(1)Y symmetry is spontaneously broken to U(1)QED by the vacuum expectation value of the neutral component of a new doublet of complex scalar boson fields called the Higgs doublet. After spontaneous symmetry breaking, this results in massive charged W and neutral Z gauge bosons, and a massive neutral scalar Higgs boson – a picture triumphantly confirmed by experiments at CERN.

To truly shed light on the Standard Model’s flavour puzzle, theorists have explored higher and more complex symmetry groups than the Standard Model. The most promising approaches all involve a spontaneously broken family or flavour symmetry. But the flavour-breaking scale may lie anywhere from the Planck scale to the electroweak scale, with grand unified theories suggesting a high flavour scale, while recent hints of anomalies from LHCb and other experiments suggest a low flavour scale.

To illustrate the unknown magnitude of the flavour scale, consider for example the FN mechanism, where Mfl is associated with the breaking of the U(1)fl symmetry. In the SM the top-quark mass of 173 GeV is given by a Yukawa coupling times the Higgs vacuum expectation value of 246 GeV divided by the square root of two. This implies a top-quark Yukawa coupling close to unity. The exact value is not important, what matters is that the top Yukawa coupling is of order unity. From this point of view, the top quark mass is not at all puzzling – it is the other fermion masses associated with much smaller Yukawa couplings that require explanation. According to FN, the fermions are assigned various U(1)fl charges and small Yukawa couplings are forbidden due to a U(1)fl symmetry. The symmetry is broken by the vacuum expectation value of a new “flavon” field <φ>, where φ is a neutral scalar under the SM but carries one unit of U(1)fl charge. Small Yukawa couplings then originate from an operator (figure 1, right) suppressed by powers of the small ratio <φ>/Mfl (where Mfl acts as a cut-off scale of the contact interaction).

For example, suppose that the ratio <φ>/Mfl is identified with the Wolfenstein parameter λ = sinθC = 0.225 (where θC is the Cabibbo angle appearing in the CKM quark-mixing matrix). Then the fermion mass hierarchies can be explained by powers of this ratio, controlled by the assigned U(1)fl charges: me/mτ ∼ λ⁵, mμ/mτ ∼ λ², md/mb ∼ λ⁴, ms/mb ∼ λ², mu/mt ∼ λ⁸ and mc/mt ∼ λ⁴. This shows how fermion masses spanning many orders of magnitude may be interpreted as arising from integer U(1)fl charge assignments of less than 10. However, in this approach, Mfl may lie anywhere from the Planck scale to the electroweak scale by adjusting <φ> such that the ratio λ = <φ>/Mfl is held fixed.
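
As a quick numerical illustration of this power counting (an order-of-magnitude sketch only; the mass values used below are rough and not taken from the article), one can compare the quoted powers of λ with measured mass ratios:

```python
# Order-of-magnitude check of the Froggatt-Nielsen power counting: compare
# powers of lambda = 0.225 (the Wolfenstein parameter) with rough fermion
# mass ratios. Mass inputs are approximate, for illustration only.
lam = 0.225

ratios = {
    "m_e/m_tau  ~ lam^5": (0.511 / 1777.0, lam**5),
    "m_mu/m_tau ~ lam^2": (105.7 / 1777.0, lam**2),
    "m_d/m_b    ~ lam^4": (4.7e-3 / 4.18,  lam**4),
    "m_s/m_b    ~ lam^2": (0.093  / 4.18,  lam**2),
    "m_u/m_t    ~ lam^8": (2.2e-3 / 173.0, lam**8),
    "m_c/m_t    ~ lam^4": (1.27   / 173.0, lam**4),
}

for label, (measured, power_of_lambda) in ratios.items():
    print(f"{label}: measured {measured:.1e}, lambda power {power_of_lambda:.1e}")
```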

One possibility for Mfl, reviewed by Kaladi Babu at Oklahoma State University in 2009, is that it is not too far from the scale of grand unified theories (GUTs), of order 1016 GeV, which is the scale at which the gauge couplings associated with the SM gauge group unify into a single gauge group. The simplest unifying group, SU(5)GUT, was proposed by Georgi and Glashow in 1974, following the work of Pati and Salam based on SU(4)C× SU(2)L× SU(2)R. Both these gauge groups can result from SO(10)GUT, which was discovered by Fritzsch and Minkowski (and independently by Georgi), while many other GUT groups and subgroups have also been studied (figure 2, left). However, GUT groups by themselves only unify quarks and leptons within a given family, and while they may provide an explanation for why mb= 2.4 mτ, as discussed by Babu, they do not account for the fermion mass hierarchies.

Broken symmetries

Figure 2

A way around this, first suggested by Ramond in 1979, is to combine GUTs with a family symmetry based on the product group GGUT × Gfl, with the symmetries acting in the specific directions shown in the figure “Family affair”. In order not to spoil the unification of the gauge couplings, the flavour-symmetry breaking scale is often assumed to be close to the GUT breaking scale. This also enables the dynamics of whatever breaks the GUT symmetry, be it Higgs fields or some mechanism associated with compactification of extra dimensions, to be applied to the flavour breaking. Thus, in such theories, the GUT and flavour/family symmetries are both broken at or around Mfl ≈ MGUT ≈ 10¹⁶ GeV, as widely discussed by many authors. In this case, it would be impossible with known technology to directly access experimentally the underlying theory responsible for unification and flavour. Instead, we would need to rely on indirect probes such as proton decay (a generic prediction of GUTs and hence of these enlarged SM structures proposed to explain flavour) and/or charged-lepton flavour-violating processes such as μ → eγ (see CERN Courier May/June 2019 p45).

New ideas for addressing the flavour problem continue to be developed. For example, motivated by string theory, Ferruccio Feruglio of the University of Padova suggested in 2017 that neutrino masses might be complex analytic functions called modular forms. The starting point of this novel idea is that non-Abelian discrete family symmetries may arise from superstring theory in compactified extra dimensions, as a finite subgroup of the modular symmetry of such theories (i.e. the symmetry associated with the non-unique choice of basis vectors spanning a given extra-dimensional lattice). It follows that the 4D effective Lagrangian must respect modular symmetry. This, Feruglio observed, implies that Yukawa couplings may be modular forms. So if the leptons transform as triplets under some finite subgroup of the modular symmetry, then the Yukawa couplings themselves must transform also as triplets, but with a well defined structure depending on only one free parameter: the complex modulus field. At a stroke, this removes the need for flavon fields and ad hoc vacuum alignments to break the family symmetry, and potentially greatly simplifies the particle content of the theory.

Compactification

Although this approach is currently actively being considered, it is still unclear to what extent it may shed light on the entire flavour problem including all quark and lepton mass hierarchies. Alternative string-theory motivated ideas for addressing the flavour problem are also being developed, including the idea that flavons can arise from the components of extra-dimensional gauge fields and that their vacuum alignment may be achieved as a consequence of the compactification mechanism.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore

Recently, there have been some experimental observations concerning charged lepton flavour universality violation which hint that the flavour scale might not be associated with the GUT scale, but might instead be just around the corner at the TeV scale (CERN Courier May/June 2019 p33). Recall that in the SM the charged leptons e, μ and τ interact identically with the gauge forces, and differ only in their masses, which result from having different Yukawa couplings to the Higgs doublet. This charged lepton flavour universality has been the subject of intense experimental scrutiny over the years and has passed all the tests – until now. In recent years, anomalies have appeared associated with violations of charged lepton flavour universality in the final states associated with the quark transitions b → c and b → s.

Puzzle solving

In the case of b → c transitions, the final states involving τ leptons appear to violate charged-lepton universality. In particular, B → D(*)ℓν decays in which the charged lepton ℓ is a τ have been shown by BaBar and LHCb to occur at rates somewhat higher than those predicted by the SM (the ratios of such final states to those involving electrons and muons are denoted RD and RD*). This is quite puzzling, since all three types of charged leptons are predicted to couple to the W boson equally, and the decay is dominated by tree-level W exchange. Any new-physics contribution, such as the exchange of a new charged Higgs boson, a new W′ or a leptoquark, would have to compete with tree-level W exchange. However, the most recent measurements by Belle, reported at the beginning of 2019 (CERN Courier May/June 2019 p9), find RD and RD* to be closer to the SM prediction.

In the case of b → s transitions, the LHCb collaboration and other experiments have reported a number of anomalies in B → K(*)ℓ+ℓ− decays, such as the RK and RK* ratios of final states containing μ+μ− versus e+e−, which are measured to deviate from the SM by about 2.5 standard deviations. Such anomalies, if they persist, may be accounted for by a new contact operator coupling the four fermions b̄LsLμ̄LμL, suppressed by a dimensionful coefficient 1/Mnew², where Mnew ~ 30 TeV according to a general operator analysis. This hints that there may be new physics arising from the non-universal couplings of a leptoquark and/or a new Z′ whose mass is typically a few TeV in order to generate such an operator (the 30 TeV scale is reduced to just a few TeV once mixing angles are taken into account). However, the introduction of these new particles increases the SM parameter count still further, and only serves to make the flavour problem of the SM worse.
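
Schematically, the operator invoked in such analyses takes the form below; this is a generic parametrisation of the left-handed current–current structure usually quoted, not the result of any specific fit:

```latex
% Generic four-fermion contact operator for the b -> s mu+ mu- anomalies,
% suppressed by a new-physics scale M_new of order tens of TeV before
% mixing angles are taken into account.
\mathcal{L}_\mathrm{eff} \supset \frac{1}{M_\mathrm{new}^2}\,
(\bar b_L \gamma^\mu s_L)\,(\bar\mu_L \gamma_\mu \mu_L) + \mathrm{h.c.}
```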

Link-up

Figure 3

Motivated by such considerations, it is tempting to speculate that these recent empirical hints of flavour non-universality may be linked to a possible theory of flavour. Several authors have hinted at such a connection, for example Riccardo Barbieri of Scuola Normale Superiore, Pisa, and collaborators have related these observations to a U(2)5 flavour symmetry in an effective theory framework. In addition, concrete models have recently been constructed that directly relate the effective Yukawa couplings to the effective leptoquark and/or Z′ couplings. In such models the scale of new physics associated with the mass of the leptoquark and/or a new Z′ may be identified with the flavour scale Mfl defined earlier, except that it should be not too far from the TeV scale in order to explain the anomalies. To achieve the desired link, the effective leptoquark and/or Z′ couplings may be generated by the same kinds of operators responsible for the effective Higgs Yukawa couplings (figure 3).

In such a model the couplings of leptoquarks and/or Z′ bosons may be related to the Higgs Yukawa couplings, with all couplings arising effectively from mixing with a vector-like fourth family. The model predicts, apart from the TeV-scale leptoquark and/or Z′ and a slightly heavier fourth family, extra flavour-changing processes such as τ → μμμ. The model in its current form does not have any family symmetry, and explains the hierarchy of the quark masses in terms of the vector-like fourth-family masses, which are free parameters. Crucially, the required TeV-scale Z′ mass is given by MZ′ ~ <φ> ~ TeV, which would fix the flavour scale at Mfl ~ a few TeV. In other words, if the hints of flavour anomalies hold up as further data are collected by LHCb, Belle II and other experiments, the origin of flavour may be right around the corner.

Beauty baryons strike again

The spectrum of the difference in invariant mass between the Ξb0K− combination and the Ξb0 candidate. The fitted masses of the four peaks are: 6315.64±0.31±0.07±0.50 MeV, 6330.30±0.28±0.07±0.50 MeV, 6339.71±0.26±0.05±0.50 MeV and 6349.88±0.35±0.05±0.50 MeV, where the uncertainties are statistical, systematic, and due to the uncertainty on the world-average Ξb0 mass of 5791.9 ± 0.5 MeV. Credit: LHCb

The LHCb experiment has observed new beauty-baryon states, consistent with theoretical expectations for excited Ωb (bss) baryons. The Ωb (first observed a decade ago at the Tevatron) is a higher mass partner of the Ω (sss), the 1964 discovery of which famously validated the quark model of hadrons. The new LHCb finding will help to test models of hadronic states, including some that predict exotic structures such as pentaquarks.

The LHCb collaboration has uncovered numerous new baryons and mesons during the past eight years, bringing a wealth of information to the field of hadron spectroscopy. Critical to the search for new hadrons is the unique capability of the experiment to trigger on fully hadronic beauty and charm decays of b baryons, distinguish protons, kaons and pions from one another using ring-imaging Cherenkov detectors, and reconstruct secondary and tertiary decay vertices with a silicon vertex detector.

LHCb physicists searched for excited Ωb states via strong decays to Ξb0 K−, where the Ξb0 (bsu), in turn, decays weakly through Ξb0 → Ξc+ π− and Ξc+ → p K− π+. Using the full data sample collected during LHC Run 1 and Run 2, a very large and clean sample of about 19,000 Ξb0 signal decays was obtained. Those Ξb0 candidates were then combined with a K− candidate coming from the same primary interaction. Combinations with the wrong sign (Ξb0 K+), where no Ωb states are expected, were used to study the background. This control sample was used to tune particle-identification requirements to reject misidentified pions, reducing the background by a factor of 2.5 while keeping an efficiency of 85% on simulated signal decays.

The search used the difference in invariant mass, δM = M(Ξb0 K−) – M(Ξb0), with the δM resolution determined to be approximately 0.7 MeV using simulated signal decays. (For comparison, the resolution is about 15 MeV for the Ξb0 decay.) Several peaks can be seen by eye (see figure), but to measure their properties a fit is needed. To help constrain the background shape, the wrong-sign δM spectrum (not shown) is fitted simultaneously with the signal mode. The peaks are each described by a relativistic Breit–Wigner function convolved with a resolution function.

The width of the Ωb(6350) shows the most significant deviation from zero

Four peaks, corresponding to four excited Ωb states, were included in the fit. Following the usual convention, the new states were named according to their approximate mass: Ωb(6316), Ωb(6330), Ωb(6340) and Ωb(6350). Each mass was measured with a precision well below 1 MeV, with the errors dominated by the uncertainty on the world-average Ξb0 mass. All four peaks are narrow. The width of the Ωb(6350) shows the most significant deviation from zero, with a central value of 1.4 +1.0 −0.8 ± 0.1 MeV. The two lower-mass peaks have significances below three standard deviations (2.1σ and 2.6σ) and so are not considered conclusive observations. But the two higher-mass peaks have significances of 6.7σ and 6.2σ, above the 5σ threshold for discovery.

The new states seen by LHCb follow a similar pattern to the five narrow peaks observed by the collaboration in the Ξc+ K− invariant-mass spectrum in 2017. It has proven difficult to obtain a satisfactory explanation of all five as excited Ωc0 (css) states, raising the possibility that at least one of the Ξc+ K− peaks is a pentaquark or a molecular state. Since the Ξc+ K− and Ξb0 K− final states differ only by the replacement of a c quark with a b quark, the two analyses together should provide strong constraints on any models that aim to explain the structures in these mass spectra.

 

Rekindled Atomki anomaly merits closer scrutiny

A large discrepancy in nuclear decay rates spotted four years ago in an experiment in Hungary has received new experimental support, generating media headlines about the possible existence of a fifth force of nature.

In 2015, researchers at the Institute of Nuclear Research (“Atomki”) in Debrecen, Hungary, reported a large excess in the angular distribution of e+e− pairs created during nuclear transitions of excited 8Be nuclei to their ground state (8Be* → 8Be γ; γ → e+e−). A significant peak-like enhancement was observed at large opening angles between the e+e− pairs, corresponding to a 6.8σ surplus over the expected e+e− pair creation from known processes. The excess was soon interpreted by theorists as being due to the possible emission of a new boson X with a mass of 16.7 MeV decaying into e+e− pairs.
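
The reason an excess at a particular opening angle maps onto a definite boson mass is two-body kinematics: for relativistic leptons the invariant mass of the pair grows with the opening angle between them (a textbook relation, added here for context):

```latex
% Invariant mass of an e+e- pair in terms of the lepton energies and the
% opening angle theta (electron masses neglected):
m_{e^+e^-}^2 \simeq 2\,E_+ E_-\,(1 - \cos\theta).
```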

In a preprint published in October 2019, the Atomki team has now reported a similar excess of events from the electromagnetically forbidden “M0” transition in 4He nuclei. The anomaly has a statistical significance of 7.2σ and is likely, claim the authors, to be due to the same “X17” particle proposed to explain the earlier 8Be excess.

Quality control

“We were all very happy when we saw this,” says lead author Attila Krasznahorkay. “After the analysis of the data a really significant effect could be observed.” Although not a fully blinded analysis, Krasznahorkay says the team has taken several precautions against bias and carried out numerous cross-checks of its result. These include checks for the effect in the angular correlation of e+e− pairs in different regions of the energy distribution, and assuming different beam and target positions. The paper does not go into the details of systematic errors, for instance due to possible nuclear-modelling uncertainties, but Krasznahorkay says that, overall, the result is in “full agreement” with the results of the Monte Carlo simulations performed for the X17 decay.

The Atomki team with the apparatus used for the latest beryllium and helium results, which detects electron-positron pairs from the de-excitation of nuclei produced by firing protons at different targets. Credit: Atomki

While it cannot yet be ruled out, the existence of an X boson is not naively expected, say theorists. For one, such a particle would have to “know” about the distinction between up and down quarks and thus electroweak symmetry breaking. Being a vector boson, the X17 would constitute a new force. It could also be related to the dark-matter problem, write Krasznahorkay and co-workers, and could help resolve the discrepancy between measured and predicted values of the muon magnetic moment.

Last year, the NA64 collaboration at CERN reported results from a direct search for the X boson via the bremsstrahlung reaction eZ → eZX, with the absence of a signal placing the first exclusion limits on the X–e coupling in the range (1.3–4.2) × 10⁻⁴. “The Atomki anomaly could be an experimental effect, a nuclear-physics effect or something completely new,” comments NA64 spokesperson Sergei Gninenko. “Our results so far exclude only a fraction of the allowed parameter space for the X boson, so I’m really interested in seeing how this story, which is only just beginning, will unfold.” Last year, researchers used data from the BESIII experiment in China to search for direct X-boson production in electron–positron collisions and indirect production in J/ψ decays – finding no signal. Krasznahorkay and colleagues also point to the potential of beam-dump experiments such as PADME in Frascati, and to the upcoming DarkLight experiment at Jefferson Laboratory, which will search for 10–100 MeV dark photons.

I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect

Jonathan Feng

Theorist Jonathan Feng of the University of California at Irvine, whose group proposed the X-boson hypothesis in 2016, says that the new 4He results from Atomki support the previous 8Be evidence for a new particle – particularly since the excess is observed at a slightly different e+e− opening angle in 4He (115°) than in 8Be (135°). “If it is an experimental error or some nuclear-physics effect, there is no reason for the excess to shift to different angles, but if it is a new particle, this is exactly what is expected,” says Feng. “I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect.”

Data details

In 2017, theorists Gerald Miller at the University of Washington and Xilin Zhang at Ohio State University concluded that, if the Atomki data are correct, the original 8Be excess cannot be explained by nuclear-physics modelling uncertainties. But they also noted that a direct comparison to the e+e− data is not feasible due to “missing public information” about the experimental detector efficiency. “Tuning the normalisation of our results reduces the confidence level of the anomaly by at least one standard deviation,” says Miller. As for the latest Atomki result, the nuclear physics of 4He is more complicated than that of 8Be because two nuclear levels are involved, explains Miller, making it difficult to carry out an analysis analogous to the 8Be one. “For 4He there is also a background pair-production mechanism and interference effect that is not mentioned in the paper, much of which is devoted to the theory and other future experiments,” he says. “I think the authors would have been better served if they presented a fuller account of their data because, ultimately, this is an experimental issue. Confirming or refuting this discovery by future nuclear experiments would be extremely important. A monumental discovery could be possible.”

A monumental discovery could be possible

Gerald Miller

The Hungarian team is now planning to repeat the measurement with a new gamma-ray coincidence spectrometer at Atomki (see main image), which they say might help to distinguish between the vector and pseudoscalar interpretations of the X17. Meanwhile, a project called New JEDI will enable an independent verification of the 8Be anomaly at the ARAMIS-SCALP facility in Orsay, France, during 2020, followed by direct searches by the same group for the X boson, in particular in other light quantum systems, at the GANIL-SPIRAL2 facility in Caen, France.

“Many people are sceptical that this is a new particle,” says Feng, who too was doubtful at first. “But at this point, what we need are new ideas about what can cause this anomaly. The Atomki group has now found the effect in two different decays. It would be most helpful for other groups to step forward to confirm or refute their results.”

Hypertriton lifetime puzzle nears resolution

Fig. 1.

Hypernuclei are bound states of nucleons and hyperons. Studying their properties is one of the best ways to investigate hyperon–nucleon interactions and offers insights into the high-density inner cores of neutron stars, which favour the creation of the exotic nuclear states. Constraining such astrophysical models requires detailed knowledge of hyperon–nucleon and three-body hyperon–nucleon–nucleon interactions. The strengths of these interactions can be determined in collider experiments by precisely measuring the lifetimes of hypernuclei.

Hypernuclei are produced in significant quantities in heavy-ion collisions at LHC energies. The lightest, the hypertriton, is a bound state of a proton, a neutron and a Λ. With a Λ-separation energy of only ~130 keV, the average distance between the Λ and the deuteron core is 10.6 fm. This relatively large separation implies only a small perturbation to the Λ wavefunction inside the hypernucleus, and therefore a hypertriton lifetime close to that of a free Λ, 263.2 ± 2.0 ps. Most calculations predict the hypertriton lifetime to be in the range 213 to 256 ps.

The measured lifetimes were systematically below theoretical predictions

The first measurements of the hypertriton lifetime were performed in the 1960s and 1970s with imaging techniques such as photographic emulsions and bubble chambers, and were based on very small event samples, leading to large statistical uncertainties. In the last decade, however, measurements have been performed using the larger data samples of heavy-ion collisions. Though compatible with theory, the measured lifetimes were systematically below theoretical predictions: thus the so-called “lifetime puzzle”.

The ALICE collaboration has recently reported a new measurement of the hypertriton lifetime using Pb–Pb collisions at √sNN = 5.02 TeV, collected in 2015. The lifetime of the (anti-)hypertriton is determined by reconstructing the two-body decay channel with a charged pion, namely 3ΛH → 3He + π− (and the charge-conjugate decay for the anti-hypertriton). The branching ratio of this decay channel, taken from theoretical calculations, is 25%. The measured lifetime is 242 +34 −38 (stat) ± 17 (syst) ps. This result has improved statistical precision and reduced systematic uncertainty compared to previous measurements and is currently the most precise measurement. It is also in agreement with both theoretical predictions and the free-Λ lifetime, even within the statistical uncertainty alone. Combining this ALICE result with previous measurements gives a weighted average of 206 +15 −13 ps (figure 1).

This result represents an important step forward in solving the longstanding hypertriton lifetime puzzle, since it is the first measurement based on a large data sample that lies close to theoretical expectations. Larger and more precise data sets are expected to be collected during LHC Runs 3 and 4, following the ongoing major upgrade of ALICE. This will allow a significant improvement in the quality of the present lifetime measurement, and the determination of the Λ binding energy with high precision. The combination of these two measurements has the potential to constrain the branching ratio for this decay, which cannot be determined directly without access to the neutral and non-mesonic decay channels. This will be a crucial step towards establishing whether the theoretical description of the hypertriton, now partially confirmed, finally resolves the puzzle.

Flavour heavyweights converge on Ljubljana

The international conference devoted to b-hadron physics at frontier machines, Beauty 2019, was held in Ljubljana, Slovenia, from 30 September to 4 October. The aims of the conference series are to review the latest results in heavy-flavour physics and discuss future directions. This year’s edition, the 18th in the series, attracted around 80 scientists and 65 invited talks, of which 13 were theory based.

The study of hadrons containing beauty quarks, and other heavy flavours, offers a powerful way to probe for physics beyond the Standard Model, as highlighted in the inspiring opening talk by Chris Quigg (Fermilab). In the last few years much attention has been focused on b-physics results that do not show perfect agreement with the predictions of the theory. In particular, studies by Belle, BaBar and LHCb of the processes B+ → K+ℓ+ℓ− and B0 → K*0ℓ+ℓ− (where ℓ± indicates a charged lepton) in specific kinematic regions have yielded different decay rates for muon pairs and electron pairs, apparently violating lepton universality. For both processes the significance of the effect is around 2.5σ. Popular models to explain this and related effects include leptoquarks and new Z′ bosons; however, no firm conclusions can be drawn until more precise measurements are available, which should be the case by the time the next Beauty meeting occurs.

Indications that φs is nonzero are starting to emerge

The B system is an ideal laboratory for the study of CP violation, and recent results were presented by the LHC experiments for φs – the phase associated with time-dependent measurements of Bs meson decays to CP eigenstates. Indications that φs is nonzero are starting to emerge, which is remarkable given that its magnitude in the Standard Model is less than 0.1 radians. This is great encouragement for Run 3 of the LHC, and beyond.

Heavy-flavour experiments are also well suited to the study of hadron spectroscopy. Many very recent results were shown at the conference, including the discovery of the X(3842), a charmonium resonance above the open-charm threshold, and new excited resonances seen in the Λbππ final state, which help map out the relatively unexplored world of b-baryons. The ATLAS collaboration presented, for the first time, an analysis of Λb→J/ψpK− decays in which a structure is observed that is compatible with the LHCb pentaquark discovery of 2015, providing the first confirmation by another experiment of these highly exotic states.

Beyond beauty

The Beauty conference welcomes reports on flavour studies beyond b-physics, and a highlight of the week was the first presentation at a conference of new results on the measurement of the branching ratio of the ultra-rare decay K+→π+νν̄, by the NA62 collaboration. The impressive background suppression that the experiment has achieved left the audience in no doubt as to the sensitivity of the result that can be expected when the full data set is accumulated and analysed. Comparing the measurement with the predicted branching fraction of ~10⁻¹⁰ will be a critical test of the Standard Model in the flavour domain.

Flavour physics has a bright future. Several talks presented the first signals and results from the early running of the Belle II experiment, and precise and exciting measurements can be expected when the next meeting in the Beauty series takes place. In parallel, studies with increasing sensitivity will continue to emerge from the LHC. The meeting was updated about progress on the LHCb upgrade, which is currently being installed ready for Run 3, and will allow for an order of magnitude increase in b-hadron samples. The conference was summarised by Patrick Koppenburg (Nikhef), who emphasised the enormous potential of b-hadron studies for uncovering signs of new physics beyond the Standard Model.

The next edition of Beauty will take place in Japan, hosted by Kavli IPMU, University of Tokyo, in autumn 2020.

Debut for baryons in flavour puzzle

LHCb has launched a new offensive in the exploration of lepton-flavour universality – the principle that the weak interaction couples to electrons, muons and tau leptons equally. Following previous results hinting that e+e− pairs might be produced at a greater rate than μ+μ− pairs in B-meson decays involving the b→sℓ+ℓ− transition (ℓ = e, μ), the study brings b-baryon decays to bear on the subject for the first time.

“LHCb certainly deserves to be congratulated on this nontrivial measurement,” said Jure Zupan of the University of Cincinnati in the US. “It is very important that LHCb is trying to measure the same quark-level transition b→sℓ+ℓ− with as many hadronic probes as possible. Though baryon decays are more difficult to interpret, the Standard Model prediction of equal rates is very clean and any significant deviation would mean the discovery of new physics.”

We are living in exciting but somewhat confusing times

Jure Zupan

The current intrigue began in 2014, when LHCb observed the ratio of B+→K+μ+μ− to B+→K+e+e− decays to be 2.6σ below unity – the so-called RK anomaly. The measurement was updated this year to be closer to unity, but with reduced errors the significance of the deviation – either a muon deficit or an electron surplus – remains almost unchanged at 2.5σ. The puzzle deepened in 2017 when LHCb measured the rate of B0→K*0μ+μ− relative to B0→K*0e+e− to be more than 2σ below unity in two adjacent kinematic bins – the RK* anomaly. In the same period, measurements of decays to D mesons by LHCb and the B-factory experiments BaBar and Belle consistently hinted that the b→cℓν̄ transition might occur at a greater rate for tau leptons relative to electrons and muons than expected in the Standard Model.

Baryons enter the fray

Now, in a preprint published on 18 December, the LHCb collaboration reports a measurement of the ratio of branching fractions for the highly suppressed baryonic decays Λb0→pK−e+e− and Λb0→pK−μ+μ− to be RpK⁻¹ = 1.17 +0.18 −0.16 (stat) ± 0.07 (syst). Defined as the reciprocal of the ratio reported for the B-meson decays, the measurement is consistent with previous LHCb measurements in that it errs on the side of fewer b→sμ+μ− than b→se+e− transitions, though with no statistical significance for that hypothesis at the present time. The blind analysis was performed for an invariant mass squared of the lepton pairs ranging from 0.1 to 6.0 GeV² – well below contributions from the resonant decay J/ψ→ℓ+ℓ−, with observations of the latter used to drive down systematics related to the different experimental treatment of muons and electrons. J/ψ meson decays to μ+μ− and e+e− pairs are known to respect lepton universality at the 0.4% level.
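
The role of the J/ψ modes can be made explicit: lepton-universality ratios of this kind are usually constructed as double ratios, so that most of the muon-versus-electron efficiency differences cancel. The expression below is a schematic version of that construction; LHCb's exact definition and binning may differ:

```latex
% Schematic double-ratio construction: each rare mode is normalised to the
% corresponding resonant J/psi mode, cancelling most mu-vs-e efficiency effects.
R_{pK}^{-1} \simeq
\frac{\mathcal{B}(\Lambda_b^0 \to p K^- e^+ e^-)}
     {\mathcal{B}(\Lambda_b^0 \to p K^- J/\psi(\to e^+ e^-))}
\Bigg/
\frac{\mathcal{B}(\Lambda_b^0 \to p K^- \mu^+ \mu^-)}
     {\mathcal{B}(\Lambda_b^0 \to p K^- J/\psi(\to \mu^+ \mu^-))} .
```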

“It’s very satisfying to have been able to make this lepton-flavour universality test with baryons – having access to the Run 2 data was key,” said analyst Yasmine Amhis of the Laboratoire de l’Accélérateur Linéaire in Orsay. The analysis, which also constitutes the first observation of the decay Λb0→pK−e+e−, exploits an integrated luminosity of 4.7 fb⁻¹ collected at 7, 8 and 13 TeV. “LHCb is also working on other tests of the flavour anomalies, such as an angular analysis of B0→K*0μ+μ−, and updates of the lepton-flavour universality tests of RK and RK* to the full Run 2 dataset,” continued Amhis. “We’re excited to find out whether the pattern of anomalies stays or fades away.”

We’re excited to find out whether the pattern of anomalies stays or fades away

Yasmine Amhis

An important verification of the B-meson anomalies will be performed by the recently launched Belle II experiment, though it is not expected to weigh in on Λb0 decays, says Zupan. “I think it is fair to say that it is only after both Belle II and LHCb are able to confirm the anomalies that new physics will be established,” he says. “Right now, we are living in exciting but somewhat confusing times: is the neutral-current b→sℓ+ anomaly real? Is the charged-current b→cℓν̄ anomaly real? Are they connected? Only time will tell.”

Zooming in on top quarks

Fig. 1.

As the heaviest known particle, the top quark plays a unique role in the Standard Model (SM), making its presence felt in corrections to the masses of the W and Higgs bosons, and also, perhaps, in as-yet unseen physics beyond the SM. During Run 2 of the Large Hadron Collider (LHC), high-luminosity proton beams were collided at a centre-of-mass energy of 13 TeV. This allowed ATLAS to record and study an unprecedented number of collisions producing top–antitop pairs, providing ATLAS physicists with a unique opportunity to gain insights into the top quark’s properties.

ATLAS has measured the top–antitop production cross-section using events where one top quark decays to an electron, a neutrino and a bottom quark, and the other to a muon, a neutrino and a bottom quark. The striking eμ signature gives a clean and almost background-free sample, leading to a result with an uncertainty of only 2.4%, which is the most precise top-quark pair-production measurement to date. The measurement provides information on the top quark’s mass, and can be used to improve our knowledge of the parton distribution functions describing the internal structure of the proton. The kinematic distributions of the leptons produced in top-quark decays have also been precisely measured, providing a benchmark to test programs that model top-quark production and decay at the LHC (figure 1).
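Setting aside the refinements of the real analysis (ATLAS fits the event counts with one and two b-tagged jets simultaneously, which constrains the b-tagging efficiency in situ), a simple counting measurement of this kind would extract the cross-section, schematically, as

\sigma_{t\bar{t}} \simeq \frac{N_{e\mu}^{\rm data} - N_{e\mu}^{\rm bkg}}{\epsilon_{e\mu}\, L_{\rm int}},

where N_{e\mu} is the number of selected eμ events, \epsilon_{e\mu} the acceptance times efficiency of the selection and L_{\rm int} the integrated luminosity; the 2.4% total uncertainty therefore reflects how well the backgrounds, efficiencies and luminosity are known, as much as the size of the data sample.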

Fig. 2.

The mass of the top quark is a fundamental parameter of the SM that enters precision calculations of quantum corrections. It can be measured kinematically through the reconstruction of the top quark’s decay products. The top quark decays via the weak interaction as a free particle, but the resulting bottom quark interacts with other particles produced in the collision and eventually emerges as a collimated “b-jet” of hadrons. Modelling this process and calibrating the jet measurement in the detector limit the precision of many top-quark mass measurements. However, 20% of b-jets contain a muon that carries information about the parent bottom quark. By combining this muon with an isolated lepton from a W boson originating from the same top-quark decay, ATLAS has made a new measurement of the top-quark mass with a much-reduced dependence on jet modelling and calibration. The result, 174.48 ± 0.78 GeV, is ATLAS’s most precise individual top-quark mass measurement to date.

Higher-order QCD diagrams translate this imbalance into the charge asymmetry

At the LHC, top and antitop quarks are not produced fully symmetrically with respect to the proton-beam direction: top antiquarks are produced slightly more often at large angles to the beam, while top quarks, which tend to receive more momentum from the colliding protons, emerge closer to the beam axis. Higher-order QCD diagrams translate this imbalance into the so-called charge asymmetry, which the SM predicts to be small (~0.6%), but which could be enhanced, or even suppressed, by new-physics processes interfering with the known production modes. Using its full Run 2 data sample, ATLAS finds evidence of a charge asymmetry in top-quark pair events with a significance of four standard deviations, confidently showing that the asymmetry is indeed non-zero. The measured charge asymmetry of 0.0060 ± 0.0015 is compatible with the latest SM predictions. ATLAS also measured the charge asymmetry as a function of the mass of the top–antitop system, further probing the SM (figure 2).
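In its standard LHC form the charge asymmetry is defined in terms of the difference of the absolute rapidities of the top quark and antiquark, \Delta|y| = |y_t| - |y_{\bar t}|:

A_C = \frac{N(\Delta|y| > 0) - N(\Delta|y| < 0)}{N(\Delta|y| > 0) + N(\Delta|y| < 0)},

so that a positive A_C corresponds to top quarks being emitted closer to the beam axis than top antiquarks, as described above.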

ALICE probes extreme electromagnetic fields

When two lead nuclei collide in the LHC at an energy of a few TeV per nucleon, an extremely strong magnetic field of the order of 10¹⁴–10¹⁵ T is generated by the spectator protons – those that pass by the collision zone without undergoing inelastic collisions. This field, the strongest yet probed experimentally, and in particular the rate at which it decays, is interesting to study because it is sensitive to unexplored properties of the quark–gluon plasma (QGP), such as its electric conductivity. In addition, chiral phenomena such as the chiral magnetic effect are expected to be induced by such strong fields. One observable that is directly sensitive to the electromagnetic fields is the left–right asymmetry in the production of negatively and positively charged particles relative to the collision reaction plane. This asymmetry, quantified by the directed flow (v1), is governed by two main competing effects: the Lorentz force experienced by charged particles (quarks) propagating in the magnetic field, and the Faraday effect – the quark current induced by the rapidly decreasing magnetic field. Charm quarks are produced in the early stages of heavy-ion collisions and are therefore more strongly affected by the electromagnetic fields than lighter quarks.

An extremely strong magnetic field of the order of 10¹⁴–10¹⁵ T is generated
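Directed flow is the first harmonic coefficient in the Fourier expansion of the azimuthal distribution of produced particles about the reaction plane; in the usual notation,

\frac{{\rm d}N}{{\rm d}\varphi} \propto 1 + 2\sum_{n\ge 1} v_n \cos\big[n(\varphi - \Psi_n)\big], \qquad v_1 = \langle \cos(\varphi - \Psi_1) \rangle,

and the electromagnetic effect is isolated by taking the difference Δv1 between positively and negatively charged particles, or between particles and antiparticles in the case of the D0 mesons.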

The ALICE collaboration has recently probed this effect by measuring the directed flow, v1, of charged hadrons and of D0/D̄0 mesons as a function of pseudorapidity (η) in mid-central lead–lead collisions at √sNN = 5.02 TeV. Head-on (most central) collisions were excluded from the analyses because they leave very few spectator nucleons (almost all nucleons interact inelastically), which results in a weaker magnetic field.

Figure: directed flow v1 and the difference Δv1 versus pseudorapidity, for charged hadrons (left panels) and D0/D̄0 mesons (right panels).

The top-left panel of the figure shows the η dependence of v1 for charged hadrons (centrality class 5–40%). The difference Δv1 between positively and negatively charged hadrons is shown in the bottom-left panel. Its η slope is found to be dΔv1/dη = [1.68 ± 0.49 (stat) ± 0.41 (syst)] × 10⁻⁴, positive with 2.6σ significance. This is of a similar order of magnitude to recent model calculations of the expected effect for charged pions, but of the opposite sign.

The right-hand panels show the same analysis for the neutral charmed mesons D0 (cū) and D̄0 (c̄u) (centrality class 10–40%). The measured directed flows are about three orders of magnitude larger than those of the charged hadrons, reflecting the stronger fields experienced immediately after the collision, when the charm quarks are created. The slopes, positive for D0 and negative for D̄0, are opposite in sign to, and larger than, those of the model calculations. The slope of the difference of the directed flows is dΔv1/dη = [4.9 ± 1.7 (stat) ± 0.6 (syst)] × 10⁻¹, positive with 2.7σ significance (lower-right panel). In this case, too, the sign of the observed slope is opposite to that of the model calculations, suggesting that the relative contributions of the Lorentz and Faraday effects in those calculations are not correct.
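As a quick arithmetic cross-check of the quoted significances, the central values can be divided by the quadrature sum of the statistical and systematic uncertainties – a simplification that treats the two as independent and Gaussian, which the full analysis need not assume. A minimal Python sketch:

import math

def naive_significance(value, stat, syst):
    """Central value divided by the quadrature-combined uncertainty."""
    return value / math.hypot(stat, syst)

# Charged hadrons: dDeltaV1/deta = [1.68 +/- 0.49 (stat) +/- 0.41 (syst)] x 10^-4
print(f"{naive_significance(1.68, 0.49, 0.41):.1f} sigma")  # ~2.6 sigma

# D0/anti-D0 mesons: dDeltaV1/deta = [4.9 +/- 1.7 (stat) +/- 0.6 (syst)] x 10^-1
print(f"{naive_significance(4.9, 1.7, 0.6):.1f} sigma")  # ~2.7 sigma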

Together with recent observations at RHIC, these LHC measurements provide an intriguing first sign of the effect of the large magnetic fields experienced in heavy-ion collisions on final-state particles. Measurements with larger data samples in Run 3 will have a precision sufficient to allow the contributions of the Lorentz force and the Faraday effect to be separated.

CMS goes scouting for dark photons

Fig. 1.

One of the best strategies for searching for new physics in the TeV regime is to look for the decays of new particles. The CMS collaboration has searched the dilepton channel for particles with masses above a few hundred GeV since the start of LHC data-taking. Thanks to newly developed triggers, these searches are now being extended to the more difficult lower range of masses. A promising possible addition to the Standard Model (SM) that could exist in this mass range is the dark photon (ZD). Its coupling to SM particles, and hence its production rate, depends on the value of a kinetic-mixing coefficient ε, and the resulting strength of the interaction of the ZD with ordinary matter may be several orders of magnitude weaker than the electroweak interaction.
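In one commonly used convention the mixing enters the Lagrangian through a term coupling the field strengths of the ordinary photon and the dark photon,

\mathcal{L} \supset -\frac{\varepsilon}{2}\, F_{\mu\nu} F'^{\mu\nu},

and diagonalising the kinetic terms leaves the ZD coupled to the electromagnetic current with strength εe. Production cross-sections and decay rates to SM fermions therefore scale as ε², which is why the limits described below are quoted on ε² rather than on ε.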

The CMS collaboration has recently presented results of a search for a narrow resonance decaying to a pair of muons in the mass range from 11.5 to 200 GeV. This search looks for a strikingly sharp peak on top of a smooth dimuon mass spectrum that arises mainly from the Drell–Yan process. At masses below approximately 40 GeV, conventional triggers are the main limitation for this analysis as the thresholds on the muon transverse momenta (pT), which are applied online to reduce the rate of events saved for offline analysis, introduce a significant kinematic acceptance loss, as evident from the red curve in figure 1.

Fig. 2.

A dedicated set of high-rate dimuon “scouting” triggers, with significantly lower muon pT thresholds and some additional kinematic constraints on the dimuon system, was deployed during Run 2 to overcome this limitation. Only a minimal amount of high-level information from the online reconstruction is stored for the selected events. The reduced event size permits trigger rates up to two orders of magnitude higher than those of the standard muon triggers. The green curve in figure 1 shows the dimuon invariant-mass distribution obtained from data collected with the scouting triggers; the gain in kinematic acceptance at low masses is clearly visible.

The full data sets collected with the muon scouting and standard dimuon triggers during Run 2 are used to probe masses below 45 GeV, and between 45 and 200 GeV, respectively, excluding the mass range from 75 to 110 GeV where Z-boson production dominates. No significant resonant peaks are observed, and limits are set on ε² at 90% confidence as a function of the ZD mass (figure 2). These are among the world’s most stringent constraints on dark photons in this mass range.
