The space–time symmetries of physics demand that experiments yield identical results under continuous Lorentz transformations – rotations and boosts – and under the discrete CPT transformation (the combination of charge conjugation, parity inversion and time reversal). The Standard-Model Extension (SME) provides a framework for testing these symmetries by including all operators that break them in an effective field theory. The first CPT and Lorentz Symmetry meeting, in Bloomington, Indiana, in 1998, featured the first limits on SME coefficients. Last year’s event, the 8th in the triennial series, brought 100 researchers together from 12 to 16 May 2019 at the Indiana University Center for Spacetime Symmetries, to sample a smorgasbord of ongoing SME studies.
Most physics is described by operators of mass dimension three or four that are quadratic in the conventional fields. The Dirac lagrangian, for example, contains the operator ψ̄ ∂̸ ψ (mass dimension 3/2 + 1 + 3/2 = 4) and the operator ψ̄ψ (mass dimension 3/2 + 3/2 = 3), the latter controlled by an additional mass coefficient. The search for fundamental symmetry violations, however, may need to employ operators of higher mass dimension and higher order in the fields. One example is the Lorentz-breaking lagrangian-density term (kVV)μν(ψ̄γμ ψ)(ψ̄γν ψ), which is quartic in the fermion field ψ. This operator has mass dimension six, and its coefficient kVV carries units of GeV–2. Searches for Lorentz-symmetry breaking seek nonzero values for coefficients like kVV. In the 21 years since the first CPT meeting, theoretical studies have uncovered how to write down the myriad operators that describe hypothetical Lorentz violations in both flat and curved space–times. Meanwhile, experiments in particle physics, atomic physics, astrophysics and gravitational physics continue to place exquisitely tight bounds on the SME coefficients, motivated by the intriguing prospect of finding a crack in the Lorentz symmetry of nature.
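The dimension counting above can be made mechanical. The sketch below encodes the standard 4D power-counting rules (fermion fields carry mass dimension 3/2, derivatives carry 1, and a lagrangian density must have dimension 4); the function names are illustrative, not from any SME software.

```python
from fractions import Fraction

# Standard QFT power-counting rules in four dimensions (units of GeV):
# each fermion field contributes 3/2, each derivative contributes 1.
DIM = {"fermion": Fraction(3, 2), "derivative": Fraction(1)}

def operator_dimension(n_fermions: int, n_derivatives: int) -> Fraction:
    """Mass dimension of an operator built from fermion fields and derivatives."""
    return n_fermions * DIM["fermion"] + n_derivatives * DIM["derivative"]

def coefficient_dimension(op_dim: Fraction) -> Fraction:
    """A lagrangian density has dimension 4; the coefficient makes up the rest."""
    return Fraction(4) - op_dim

# Kinetic term (two fermion fields, one derivative): dimensionless coefficient.
assert operator_dimension(2, 1) == 4
# Mass term (two fermion fields): its coefficient has dimension 1, i.e. a mass.
assert coefficient_dimension(operator_dimension(2, 0)) == 1
# The quartic SME term (four fermion fields) has dimension six,
# so its coefficient kVV carries dimension -2, i.e. units of GeV^-2.
print(coefficient_dimension(operator_dimension(4, 0)))  # -2
```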
Comparisons between matter and antimatter offer rich prospects for testing Lorentz symmetry, because individual SME coefficients can be isolated. The AEgIS, ALPHA, ASACUSA, ATRAP, BASE and GBAR collaborations at CERN, as well as groups at other institutions, are working to develop the challenging technology for such tests. Several presenters discussed Penning traps – devices that confine charged particles in a static electromagnetic field – for storing and mixing the ingredients for antihydrogen, the production of antihydrogen, spectroscopy for the hyperfine and 1S–2S transitions, and the prospects for interferometric measurements of antimatter acceleration. The commissioning of ELENA, CERN’s 30 m-circumference antiproton deceleration ring, promises larger quantities of relatively slow-moving antiprotons in support of this work.
Lorentz violation can occur independently in each sector of the particle world, and participants discussed existing and future limits on SME coefficients based on the muon g-2 experiment at Fermilab, neutrino oscillations at Daya Bay in China, kaon oscillations in Frascati, and on positronium decay using the Jagiellonian PET detector, to name a few. Dozens of Lorentz-symmetry tests have probed the photon sector of the SME with table-top devices such as atomic clocks and resonant cavities, and with astrophysical polarisation measurements of sources such as active galactic nuclei, which leverage vast distances to limit cumulative effects such as the rotation of a polarisation angle. In the gravity sector, SME coefficient bounds were presented from the 2015 gravitational-wave detection by the LIGO collaboration, as well as from observations of pulsars, cosmic rays and other phenomena with signals that are proportional to the travel distance. Symmetry-breaking signals are also sought in matter–gravity interactions with test masses, and here CPT’19 included discussions of short-range spin-dependent gravity and neutron-interferometry physics.
The SME has revealed uncharted territory that requires theoretical and experimental expertise to navigate. CPT’19 showed that there is no shortage of physicists with the adventurous spirit to explore this frontier further.
The sixth edition of Prospects in Neutrino Physics (NuPhys19) attracted almost 100 participants to the Cavendish Conference Centre in London from 16 to 18 December. Jointly organised by King’s College London and the Institute for Particle Physics Phenomenology at Durham University, the conference provides a much-needed snapshot of the fast-moving field of neutrino physics.
The neutrino community’s current challenge is to understand the origin of neutrino masses and lepton mixing. This means establishing whether neutrinos are Dirac or Majorana fermions, determining their absolute mass scale, fixing the order of the measured mass splittings (the neutrino mass ordering), discovering whether there is leptonic CP violation, pinning down the precise values of the other parameters in the neutrino mixing matrix, and, finally, finding out whether there is any indication of physics beyond the standard three-neutrino paradigm, for example through the detection of sterile neutrinos.
2015 Nobel laureate Takaaki Kajita (University of Tokyo) opened the conference by confirming that construction of the Hyper-Kamiokande experiment will begin in 2020, following the allocation of a supplementary budget by the Japanese government on 13 December. Hyper-Kamiokande will be a water-Cherenkov detector with a total mass of 260 kton – almost an order of magnitude larger than its famous predecessor Super-Kamiokande, where atmospheric neutrino oscillations were discovered, and far larger than Kamiokande, which observed solar neutrinos and supernova SN1987A. Hyper-Kamiokande will eventually replace Super-Kamiokande as the far detector for the upgraded J-PARC neutrino beam on the far side of Japan – essentially a comprehensive upgrade of the T2K experiment – with the aim of measuring CP violation in the leptonic sector. It will also provide high statistics for proton-decay searches, supernova neutrino bursts, atmospheric and solar neutrinos, and indirect searches for dark matter. Hyper-Kamiokande will therefore soon join DUNE in the US as a next-generation long-baseline neutrino-oscillation experiment under construction. Together the detectors will provide far wider coverage of physics signals than either could manage alone.
Critical mass
News of KATRIN’s record-breaking new upper limit on the electron-antineutrino mass was complemented by a report by Joseph Formaggio (MIT) on the successful “Project 8” demonstration in the US of a new approach to directly measuring the neutrino mass, wherein the energies of beta-decay electrons are determined from the frequency of the cyclotron radiation they emit as they spiral in a magnetic field. This work will be complemented by the JUNO experiment in China, which will begin to constrain the ordering of the neutrino-mass eigenvalues in 2021.
The search for neutrinoless double-beta decay also has the potential to provide information on neutrino masses. A potentially unambiguous indication of lepton-number violation and of the postulated Majorana nature of neutrinos, the process is being pursued aggressively as experiments compete to reduce backgrounds and increase detector masses to the ton-scale. Several talks emphasised complementary progress by the theory community in better estimating nuclear effects, reducing the errors arising from differences between nuclear models and between isotopes. Such calculations are equally important for NOvA and T2K, the latter of which is now beginning to probe leptonic CP conservation at the 3σ level.
Current and future cosmological constraints on neutrino properties were reviewed by Eleonora Di Valentino (Manchester), whose recent work with Alessandro Melchiorri and Joe Silk reinterprets Planck-satellite data to favour a closed universe at more than 99% significance – an inference that, if accepted by the community, could lead to the current cosmological upper limit on the sum of neutrino masses being relaxed upwards. Conversely, neutrinos are themselves powerful tools for studying astrophysical objects. One key development in this field is the doping of Super-Kamiokande with gadolinium, currently underway in Japan, which will soon give the detector sensitivity to the diffuse supernova-neutrino background.
The next edition of NuPhys will take place in London from 16 to 18 December 2020.
Planck data on the cosmic microwave background (CMB) have been reinterpreted to favour a closed universe at more than 99% confidence, in contradiction with the flat universe favoured by the established ΛCDM model of cosmology. In their new fit to Planck’s 2018 data release, Eleonora Di Valentino (Manchester), Alessandro Melchiorri (La Sapienza) and Joe Silk (Oxford) exchanged an anomalously large lensing amplitude (a phenomenological parameter that rescales the gravitational-lensing potential in the CMB power spectrum) for a higher energy density.
In addition to the lensing anomaly, which leads to inconsistencies between large and small scales, the flat interpretation is already plagued by a 4.4σ tension with the latest determination of the Hubble constant based on Cepheid-calibrated distance measurements – a tension that grows to 5.4σ in a closed universe.
The inconsistencies between data sets signal “a possible crisis for cosmology”, argue the authors.
Hard-scattering processes in hadronic collisions generate parton showers – highly collimated collections of quarks and gluons that subsequently fragment into hadrons, producing jets. In ultra-relativistic nuclear collisions, the parton shower evolves in a hot and dense quark–gluon plasma (QGP) created by the collision. Interactions of the partons with the plasma lead to reduced parton and jet energies, and modified properties. This phenomenon, known as jet quenching, results in the suppression of jet yields – a suppression that is hypothesised to depend on the structure of the jet. The plasma is thought to have a characteristic angular scale: high-momentum shower components separated by more than this angle are resolved by the medium and lose energy independently, whereas components at smaller angles interact with the plasma as a single partonic fragment.
Using 5.02 TeV lead–lead collision data taken at the LHC in 2018 and corresponding pp data collected in 2017, ATLAS has measured large-radius jets by clustering smaller-radius jets with transverse momenta pT > 35 GeV. (This procedure suppresses contributions from the underlying event and excludes soft radiation, so that the focus remains on hard partonic splittings.) The sub-jets are further re-clustered in order to obtain the splitting scale, √d12, which represents the transverse momentum scale for the hardest splitting in the jet – a measure of the angular separation between the high-momentum components.
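The splitting scale can be sketched numerically. For the two proto-jets merged in the final clustering step, the standard exclusive-kt definition gives √d12 = min(pT1, pT2) × ΔR12; this is the textbook form, and the exact convention used in the analysis (e.g. any factor of the jet radius R) may differ.

```python
import math

def splitting_scale(pt1, pt2, eta1, phi1, eta2, phi2):
    """kt splitting scale sqrt(d12) = min(pT1, pT2) * dR12 between the two
    proto-jets merged in the last clustering step (textbook definition;
    analysis-specific conventions may include an extra 1/R factor)."""
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))  # wrap to (-pi, pi]
    dr = math.hypot(eta1 - eta2, dphi)
    return min(pt1, pt2) * dr

# Two hypothetical sub-jets of 60 and 40 GeV separated by dR = 0.5:
print(round(splitting_scale(60.0, 40.0, 0.0, 0.0, 0.3, 0.4), 1))  # 20.0 (GeV)
```

A large √d12 thus flags a hard, wide-angle splitting, which is exactly the configuration the plasma is expected to resolve.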
ATLAS has investigated the effect of the splitting scale on jet quenching using the nuclear modification factor (RAA), which is the ratio between the jet yields measured in lead–lead and pp collisions, scaled by the estimated average number of binary nucleon–nucleon collisions. An RAA value of unity indicates no suppression in the QGP, whereas a value below one indicates a suppressed jet yield. The measurement is corrected for background fluctuations and instrumental resolution via an unfolding procedure.
The figure shows RAA for large-radius jets as a function of the average number of participating nucleons – a measure of the centrality of the collision, as glancing collisions involve only a handful of nucleons, whereas head-on collisions involve a large fraction of the 207 or so nucleons in each lead nucleus. RAA is presented separately for large-radius jets with a single isolated high-momentum sub-jet and for those with multiple sub-jets in three intervals of the splitting scale √d12. As expected, jets are increasingly suppressed for more head-on collisions (figure 1). More pertinently to this analysis, and for all centralities, yields of large-radius jets that consist of several sub-jets are found to be significantly more suppressed than those that consist of a single small-radius jet. This observation is qualitatively consistent with the hypothesis that jets with hard internal splittings lose more energy, and provides a new perspective on the role of jet structure in jet suppression. Further progress will require comparison with theoretical models.
There is a longstanding puzzle concerning the value of the Cabibbo–Kobayashi–Maskawa matrix element |Vcb|, which describes the coupling between charm and beauty quarks in W± interactions. This fundamental parameter of the Standard Model has been measured with two complementary methods. One uses the inclusive rate of b-hadron decays into final states containing a c hadron and a charged lepton; the other measures the rate of a specific (exclusive) semileptonic B decay, e.g. B0 → D*–μ+νμ. The world average of results using the inclusive approach, |Vcb|incl = (42.19 ± 0.78) × 10–3, differs from the average of results using the exclusive approach, |Vcb|excl = (39.25 ± 0.56) × 10–3, by approximately three standard deviations.
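The size of the discrepancy quoted above follows directly from the two world averages, if one assumes the uncertainties are uncorrelated (a simplification; the full averages carry correlations):

```python
import math

# World averages quoted in the text, in units of 1e-3.
incl, sig_incl = 42.19, 0.78
excl, sig_excl = 39.25, 0.56

# Naive significance of the difference, assuming uncorrelated uncertainties.
tension = (incl - excl) / math.hypot(sig_incl, sig_excl)
print(f"{tension:.1f} sigma")  # -> 3.1 sigma
```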
So far, exclusive determinations have been carried out only at e+e– colliders, using B0 and B+ decays. At the ϒ(4S) resonance the full decay kinematics can be determined, despite the undetected neutrino, and the total number of B mesons produced – needed to measure |Vcb| – is known precisely. The situation is more challenging at a hadron collider – but the LHCb collaboration has just completed an exclusive measurement of |Vcb| based, for the first time, on Bs0 decays.
The exclusive determination of |Vcb| relies on the description of strong-interaction effects for the b and c quarks bound in mesons, the so-called form factors (FF). These are functions of the recoil momentum of the c meson in the b-meson rest frame, and are calculated using non-perturbative QCD techniques, such as lattice QCD or QCD sum rules. A key advantage of semileptonic Bs0 decays, compared to B0/+ decays, is that their FF can be more precisely computed. Recently, the FF parametrisation used in the exclusive determination has been considered to be a possible origin of the inclusive–exclusive discrepancy, and comparisons between the results for |Vcb| obtained using different parametrisations, such as that by Caprini, Lellouch and Neubert (CLN) and that by Boyd, Grinstein and Lebed (BGL), are considered a key check.
Both parametrisations are employed by LHCb in a new analysis of Bs0 → Ds(*)–μ+νμ decays, using a novel method that does not require the momentum of particles other than Ds– and μ+ to be estimated. The analysis also uses B0 → D(*)–μ+νμ as a normalisation mode, which has the key advantage that many systematic effects cancel in the ratio. With the form factors and relative efficiency-corrected yields in hand, obtaining |Vcb| requires only a few more inputs: branching fractions that were well measured at the B-factories, and the ratio of Bs0 and B0 production fractions measured at LHCb.
The values of |Vcb| obtained are (41.4 ± 1.6) × 10–3 and (42.3 ± 1.7) × 10–3 in the CLN and BGL parametrisations, respectively. These results are compatible with each other and agree with previous measurements with exclusive decays, as well as with the inclusive determination (figure 1). This new technique can also be applied to B0 decays, giving excellent prospects for new |Vcb| measurements at LHCb. They will also benefit from expected improvements at Belle II to a key external input, the B0 → D(*)–μ+νμ branching fraction. Belle II’s own measurement of |Vcb| is also expected to have reduced systematic uncertainties. In addition, new lattice QCD calculations for the full range of the D*– recoil momentum are expected soon and should give valuable constraints on the form factors. This synergy between theoretical advances, Belle II and LHCb (and its upgrade, due to start in 2021) will very likely say the final word on the |Vcb| puzzle.
Though a free parameter in the Standard Model, the mass of the Higgs boson is important for both theoretical and experimental reasons. Most peculiarly from a theoretical standpoint, our current knowledge of the masses of the Higgs boson and the top quark implies that the quartic coupling of the Higgs vanishes and becomes negative tantalisingly close to, but just before, the Planck scale. There is no established reason for the Standard Model to perch near this boundary. The implication is that the vacuum is almost but not quite stable, and that on a timescale substantially longer than the age of the universe, some point in space will tunnel to a lower energy state and a bubble of true vacuum will expand to fill the universe. Meanwhile, from an experimental perspective, it is important to continually improve measurements so that the uncertainty on the mass of the Higgs boson eventually rivals the value of its width. At that point, measuring the Higgs-boson mass can provide an independent method to determine the Higgs-boson width, which is sensitive to the existence of possible undiscovered particles and is expected to be a few MeV according to the Standard Model.
The CMS collaboration recently announced the most precise measurement of the Higgs-boson mass achieved thus far, at 125.35 ± 0.15 GeV – a precision of roughly 0.1%. This very high precision was achieved thanks to an enormous amount of work over many years to carefully calibrate and model the CMS detector when it measures the energy and momenta of the electrons, muons and photons necessary for the measurement.
The most recent contribution to this work was a measurement of the mass in the di-photon channel using data collected at the LHC by the CMS collaboration in 2016 (figure 1). This measurement was made using the lead–tungstate crystal calorimeter, which uses approximately 76,000 crystals, each weighing about 1.1 kg, to measure the energy of the photons. A critical step of this analysis was a precise calibration of each crystal’s response using electrons from Z-boson decay, and accounting for the tiny difference between the electron and photon showers in the crystals.
This new result was combined with earlier results obtained with data collected between 2011 and 2016. One measurement was in the decay channel to two Z bosons, which subsequently decay into electron or muon pairs, and another was a measurement in the di-photon channel made with earlier data. The 2011 and 2012 data combined yield 125.06 ± 0.29 GeV. The 2016 data yield 125.46 ± 0.17 GeV. Combining these yields CMS’s current best precision of 125.35 ± 0.15 GeV (figure 2). This new precise measurement of the Higgs-boson mass will not, at least not on its own, lead us in a new direction of physics, but it is an indispensable piece of the puzzle of the Standard Model – and one fruit of the increasing technical mastery of the LHC detectors.
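A simple inverse-variance weighting of the two numbers quoted above roughly reproduces the combined result; the actual CMS combination accounts for correlated systematic uncertainties between the data sets, which this sketch ignores.

```python
import math

def combine(measurements):
    """Inverse-variance weighted average, assuming uncorrelated measurements."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * value for (value, _), w in zip(measurements, weights)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# The 2011-12 and 2016 results quoted in the text (GeV):
m, e = combine([(125.06, 0.29), (125.46, 0.17)])
print(f"{m:.2f} +- {e:.2f} GeV")  # close to the quoted combination 125.35 +- 0.15
```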
Anomalies, which I take to mean data that disagree with the scientific paradigm of the day, are the bread and butter of phenomenologists working on physics beyond the Standard Model (SM). Are they a mere blip or the first sign of new physics? A keen understanding of statistics is necessary to help decide which “bumps” to work on.
Take the excess in the rate of di-photon production at a mass of around 750 GeV spotted in 2015 by the ATLAS and CMS experiments. ATLAS had a 4σ peak with respect to background, which CMS seemed to confirm, although its signal was less clear. Theorists produced an avalanche of papers speculating on what the signal might mean but, in the end, the signal was not confirmed in new data. In fact, as is so often the case, the putative signal stimulated some very fruitful work. For example, it was realised that ultra-peripheral collisions between lead ions could produce photon-photon resonances, leading to an innovative and unexpected search programme in heavy-ion physics. Other authors proposed using such collisions to measure the anomalous magnetic moment of the tau lepton, which is expected to be especially sensitive to new physics, and in 2018 ATLAS and CMS found the first evidence for (non-anomalous) high-energy light-by-light scattering in lead-lead ultra-peripheral collisions.
Some anomalies have disappeared during the past decade not primarily because they were statistical fluctuations, but because of an improved understanding of theory. One example is the forward-backward asymmetry (AFB) of top–antitop production at the Tevatron. At large top–antitop invariant masses, AFB was measured to be much larger than SM predictions, which were at next-to-leading order in QCD with some partial next-to-next-to-leading order (NNLO) corrections. The complete NNLO corrections, calculated in a Herculean effort, proved to contribute much more than was previously thought, faithfully describing top–antitop production both at the Tevatron and at the LHC.
Other anomalies are still alive and kicking. Arguably, chief among them is the long-standing oddity in the measurement of the anomalous magnetic moment of the muon, which is about 4σ discrepant with the SM predictions. In the 20 years since it was first spotted, many papers have attempted to explain it, invoking contributions ranging from supersymmetric particles to leptoquarks. A similarly long-standing anomaly is a 3.8σ excess in the number of electron antineutrinos emerging from a muon–antineutrino beam observed by the LSND experiment and backed up more recently by MiniBooNE. Again, numerous papers attempting to explain the excess, e.g. in terms of the existence of a fourth “sterile” neutrino, have been written, but the jury is still out.
Some anomalies are more recent, and unexpected. The so-called “X17” anomaly reported at a nuclear physics experiment in Hungary, for instance, shows a significant excess in the rate of certain nuclear decays of 8Be and 4He nuclei (see Rekindled Atomki anomaly merits closer scrutiny) which has been interpreted as being due to the creation of a new particle of mass 17 MeV. Though possible theoretically, one needs to work hard to make this new particle not fall afoul of other experimental constraints; confirmation from an independent experiment is also needed. Personally, I am not pursuing this: I think that the best new-physics ideas have already been had by other authors.
When working on an anomaly, beyond-the-SM phenomenologists hypothesise a new particle and/or interaction to explain it, check to see if it works quantitatively, check to see if any other measurements rule the explanation out, then provide new ways in which the idea can be tested. After this, they usually check where the new physics might fit into a larger theoretical structure, which might explain some other mysteries. For example, there are currently many anomalies in measurements of B meson decays, none of which is individually very significant (typically 2–3σ away from the SM) but which taken together form a coherent picture with a higher significance. The exchange of hypothesised Z′ or leptoquark quanta provides working explanations, the larger structure also shedding light on the pattern of masses of SM fermions, and most of my research time is currently devoted to studying them.
The coming decade will presumably sort several current anomalies into discoveries and those that “went away”. Belle II and future LHCb measurements should settle the B anomalies, while the anomalous muon magnetic moment may even be settled this year by the g-2 experiment at Fermilab. Of course, we hope that new anomalies will appear and stick. One anomaly from the late 1990s – the unexpected dimness of type Ia supernovae at large redshifts, implying an accelerating expansion – turned out to reveal the existence of dark energy and produced the dominant paradigm of cosmology today. This reminds us that all surprising discoveries were anomalies at some stage.
On 11 June 2018, a tense silence filled the large lecture hall of the Karlsruhe Institute of Technology (KIT) in Germany. In front of an audience of more than 250 people, 15 red buttons were pressed simultaneously by a panel of senior figures including recent Nobel laureates Takaaki Kajita and Art McDonald. At the same time, operators in the control room of the Karlsruhe Tritium Neutrino (KATRIN) experiment lowered the retardation voltage of the apparatus so that the first beta electrons were able to pass into KATRIN’s giant spectrometer vessel. Great applause erupted when the first beta electrons hit the detector.
In the long history of measuring the tritium beta-decay spectrum to determine the neutrino mass, the ensuing weeks of KATRIN’s first data-taking opened a new chapter. Everything worked as expected, and KATRIN’s initial measurements have already propelled it into the top ranks of neutrino experiments. The aim of this ultra-high-precision beta-decay spectrometer, more than 15 years in the making, is to determine, by the mid-2020s, the absolute mass of the neutrino.
Massive discovery
The discovery of the oscillation of atmospheric neutrinos by the Super-Kamiokande experiment in 1998, and of the flavour transitions of solar neutrinos by the SNO experiment shortly afterwards, strongly implied that neutrino masses are not zero, but are big enough to cause interference between distinct mass eigenstates as a neutrino wavepacket evolves in time. We know now that the three neutrino flavour states we observe in experiments – νe, νμ and ντ – are mixtures of three neutrino mass states.
Though not massless, neutrinos are exceedingly light. Previous experiments in Mainz and Troitsk designed to directly measure the scale of neutrino masses produced an upper limit of 2 eV for the neutrino mass – some 250,000 times smaller than the mass of the next-lightest massive elementary particle, the electron. Nevertheless, neutrino masses are extremely important for cosmology as well as for particle physics. With a number density of around 336 cm–3, neutrinos are the most abundant particles in the universe besides photons, and they therefore play a distinct role in the formation of cosmic structure. Comparing data from the Planck satellite and from galaxy surveys (baryonic acoustic oscillations) with simulations of the evolution of structure yields an upper limit on the sum of all three neutrino masses of 0.12 eV at 95% confidence within the framework of the standard Lambda cold dark matter (ΛCDM) cosmological model.
Considerations of “naturalness” lead most theorists to speculate that the exceedingly tiny neutrino masses do not arise from standard Yukawa couplings to the Higgs boson, as per the other fermions, but are generated by a different mass mechanism. Since neutrinos are electrically neutral, they could be identical to their antiparticles, making them Majorana particles. Via the so-called seesaw mechanism, this interesting scenario would require a new and very high particle mass scale to balance the smallness of the neutrino masses, which would be unreachable with present accelerators.
As neutrino oscillations arise due to interference between mass eigenstates, neutrino-oscillation experiments are only able to determine splittings between the squares of the neutrino mass eigenstates. Three experimental avenues are currently being pursued to determine the neutrino mass. The most stringent upper limit is currently the model-dependent bound set by cosmological data, as already mentioned, which is valid within the ΛCDM model. A second approach is to search for neutrinoless double-beta decay, which allows a statement to be made about the size of the neutrino masses but presupposes the Majorana nature of neutrinos. The third approach – the one adopted by KATRIN – is the direct determination of the neutrino mass from the kinematics of a weak process such as beta decay, which is completely model-independent and depends only on the principle of energy and momentum conservation.
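The point that oscillations fix only mass-squared splittings can be made quantitative: assuming a massless lightest state and illustrative splitting values (the numbers below are typical global-fit values, not taken from the text), the splittings already imply a lower bound on the sum of the three masses.

```python
import math

# Illustrative mass-squared splittings in eV^2 (assumed values, typical of
# global oscillation fits; not quoted in the text).
dm21_sq = 7.4e-5    # solar splitting
dm31_sq = 2.5e-3    # atmospheric splitting (normal ordering)

# Normal ordering with a massless lightest state: m1 = 0, and m2, m3
# follow directly from the splittings.
m1 = 0.0
m2 = math.sqrt(dm21_sq)
m3 = math.sqrt(dm31_sq)
total = m1 + m2 + m3
print(f"minimal sum of neutrino masses: {total:.3f} eV")  # ~0.059 eV
```

This minimal sum sits below the 0.12 eV cosmological bound, which is why oscillation data alone cannot pin down the absolute mass scale.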
The direct determination of the neutrino mass relies on the precise measurement of the shape of the beta electron spectrum near the endpoint, which is governed by the available phase space (figure 1). This spectral shape is altered by the neutrino mass value: the smaller the mass, the smaller the spectral modification. One would expect to see three modifications, one for each neutrino mass eigenstate. However, due to the tiny neutrino mass differences, a weighted sum is observed. This “average electron neutrino mass” is formed by the incoherent sum of the squares of the three neutrino mass eigenstates, which contribute to the electron neutrino according to the PMNS neutrino-mixing matrix. The super-heavy hydrogen isotope tritium is ideal for this purpose because it combines a very low endpoint energy, E0, of 18.6 keV and a short half-life of 12.3 years with a simple nuclear and atomic structure.
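The near-endpoint behaviour described above can be written schematically (phase-space factor only, omitting the Fermi function and other slowly varying terms):

```latex
% Schematic near-endpoint shape of the beta spectrum for a single
% neutrino mass eigenstate m_i:
\frac{\mathrm{d}\Gamma}{\mathrm{d}E} \;\propto\;
  (E_0 - E)\,\sqrt{(E_0 - E)^2 - m_i^2}\;\Theta(E_0 - E - m_i)
% The measured spectrum is the incoherent sum over mass eigenstates,
% weighted by the PMNS elements |U_{ei}|^2; near the endpoint this
% reduces to a single effective parameter
%   m^2(\nu_e) = \sum_i |U_{ei}|^2\, m_i^2 .
```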
KATRIN is born
Around the turn of the millennium, motivated by the neutrino oscillation results, Ernst Otten of the University of Mainz and Vladimir Lobashev of INR Troitsk proposed a new, much more sensitive experiment to measure the neutrino mass from tritium beta decay. To this end, the best methods from the previous experiments in Mainz, Troitsk and Los Alamos were to be combined and upscaled by up to two orders of magnitude in size and precision. Together with new technologies and ideas, such as laser Raman spectroscopy or active background reduction methods, the apparatus would increase the sensitivity to the observable in beta decay (the square of the electron antineutrino mass) by a factor of 100, resulting in a neutrino-mass sensitivity of 0.2 eV. Accordingly, the entire experiment was designed to the limits of what was feasible and even beyond (see “Technology transfer delivers ultimate precision” box).
Many technologies had to be pushed to the limits of what was feasible or even beyond. KATRIN became a CERN-recognised experiment (RE14) in 2007 and the collaboration worked with CERN experts in many areas to achieve this. The KATRIN main spectrometer is the largest ultra-high vacuum vessel in the world, with a residual gas pressure in the range of 10–11 mbar – a pressure that is otherwise only found in large volumes inside the LHC ring – equivalent to the pressure recorded at the lunar surface.
Even though the inner surface was instrumented with a complex dual-layer wire electrode system for background suppression and electric-field shaping, this extreme vacuum was made possible by rigorous material selection and treatment in addition to non-evaporable getter technology developed at CERN. KATRIN’s almost 40 m-long chain of superconducting magnets with two large chicanes was put into operation with the help of former CERN experts, and a 223Ra source was produced at ISOLDE for background studies at KATRIN. A series of 83mKr conversion electron sources based on implanted 83Rb for calibration purposes was initially produced at ISOLDE. At present these are produced by KATRIN collaborators and further developed with regard to line stability.
Conversely, the KATRIN collaboration has returned its knowledge and methods to the community. For example, the ISOLDE high-voltage system was calibrated twice with the ppm-accuracy KATRIN voltage dividers, and the magnetic and electrical field calculation and tracking programme KASSIOPEIA developed by KATRIN was published as open source and has become the standard for low-energy precision experiments. The fast and precise laser Raman spectroscopy developed for KATRIN is also being applied to fusion technology.
KIT was soon identified as the best place for such an experiment, as it had the necessary experience and infrastructure with the Tritium Laboratory Karlsruhe. The KIT board of directors quickly took up this proposal and a small international working group started to develop the project. At a workshop at Bad Liebenzell in the Black Forest in January 2001, the project received so much international support that KIT, together with nearly all the groups from the previous neutrino-mass experiments, founded the KATRIN collaboration. Currently, the 150-strong KATRIN collaboration comprises 20 institutes from six countries.
It took almost 16 years from the first design to complete KATRIN, largely because many new technologies had to be developed, such as a novel concept to limit the temperature fluctuations of the huge tritium source to the mK scale at 30 K, or the high-voltage stabilisation and calibration to the 10 mV scale at 18.6 kV. The experiment’s two most important and also most complex components are the gaseous, windowless molecular tritium source (WGTS) and the very large spectrometer. In the WGTS, tritium gas is introduced at the midpoint of the 10 m-long beam tube, from where it flows out to both sides to be pumped out again by turbomolecular pumps. After being partially cleaned it is re-injected, yielding a closed tritium cycle. This results in an almost opaque column density with a total decay rate of 10¹¹ per second. The beta electrons are guided adiabatically to a tandem of a pre- and a main spectrometer by superconducting magnets of up to 6 T. Along the way, differential and cryogenic pumping sections including geometric chicanes reduce the tritium flow by more than 14 orders of magnitude to keep the spectrometers free of tritium (figure 2).
The KATRIN spectrometers operate as so-called MAC-E filters, whereby electrons are guided by two superconducting solenoids at either end and their momenta are collimated by the magnetic field gradient. This “magnetic bottle” effect transforms almost all kinetic energy into longitudinal energy, which is filtered by an electrostatic retardation potential so that only electrons with enough energy to overcome the barrier are able to pass through. The smaller pre-spectrometer blocks the low-energy part of the beta spectrum (which carries no information on the neutrino mass), while the 10 m-diameter main spectrometer provides a much sharper filter width due to its huge size.
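The ideal MAC-E filter obeys a simple relation: the residual transverse energy at the analysing plane, and hence the filter width, scales with the ratio of the analysing-plane field to the maximum field, which is why a huge spectrometer (low analysing field) gives a sharp filter. A minimal numerical sketch, with representative rather than official KATRIN field values:

```python
# Sketch of the ideal MAC-E filter relation. Field values below are
# representative, not official KATRIN design parameters.
# Adiabatic invariance keeps E_perp / B constant, so the residual
# transverse energy at the analysing plane sets the filter width:
#   width = E_kin * B_analysing / B_max

def mace_filter_width(E_kin_eV, B_max_T, B_analysing_T):
    """Energy resolution (filter width) of an ideal MAC-E filter."""
    return E_kin_eV * B_analysing_T / B_max_T

def transmitted(E_kin_eV, qU_eV, width_eV):
    """Worst-case transmission: an electron starting with maximal
    pitch angle passes only if its longitudinal energy, reduced by
    the residual transverse energy, exceeds the retardation barrier qU."""
    return E_kin_eV - width_eV > qU_eV

E0 = 18575.0  # tritium beta endpoint energy in eV
width = mace_filter_width(E0, B_max_T=6.0, B_analysing_T=3e-4)
print(f"filter width ~ {width:.2f} eV")  # sub-eV for a large spectrometer
```

Lowering `B_analysing` in the sketch reproduces the trend the text describes: a 10 m-diameter vessel lets the magnetic flux tube expand, reducing the analysing-plane field and with it the filter width.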
The transmitted electrons are detected by a high-resolution segmented silicon detector. By varying the retarding potential of the main spectrometer, a narrow region of the beta spectrum of several tens of eV below the endpoint is scanned, where the imprint of a non-zero neutrino mass is maximal. Since the relative fraction of the tritium beta spectrum in the last 1 eV below the endpoint amounts to just 2 × 10⁻¹³, KATRIN demands a tritium source of the highest intensity. Of equal importance is the high precision needed to understand the measured beta spectrum. Therefore, KATRIN possesses a complex calibration and monitoring system to determine all systematics with the highest precision in situ, e.g. the source strength, the inelastic scattering of beta electrons in the tritium source, the retardation voltage and the work functions of the tritium source and the main spectrometer.
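The quoted 2 × 10⁻¹³ fraction can be checked on the back of an envelope: for massless neutrinos the rate just below the endpoint E0 falls as (E0 − E)², so the fraction of decays in the last ΔE is of order (ΔE/E0)³, a crude normalisation that ignores the detailed spectral shape at lower energies:

```python
# Back-of-the-envelope estimate of the fraction of tritium decays in
# the last 1 eV below the endpoint, using the (E0 - E)^2 behaviour of
# the spectrum near the endpoint and a crude normalisation by E0^3.
E0 = 18575.0  # tritium endpoint in eV
dE = 1.0      # last 1 eV below the endpoint
fraction = (dE / E0) ** 3
print(f"fraction ~ {fraction:.1e}")  # ~1.6e-13, same order as the quoted 2e-13
```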
Start-up and beyond
After intense periods of commissioning during 2018, the tritium-source activity was increased from its initial value of 0.5 GBq (which was used for the inauguration measurements) to 25 GBq (approximately 22% of nominal activity) in spring 2019. By April, the first KATRIN science run had begun and everything went like clockwork. The decisive source parameters – temperature, inlet pressure and tritium content – allowed excellent data to be taken, and the collaboration worked in several independent teams to analyse these data. The critical systematic uncertainties were determined both by Monte Carlo propagation and with the covariance-matrix method, and the analyses were blinded so as not to introduce bias. The excitement during the un-blinding process was huge within the KATRIN collaboration, which gathered for this special event, and relief spread when the result became known. The squared neutrino mass turned out to be compatible with zero within its uncertainty budget. The model fits the data very well (figure 3) and the fitted endpoint turned out to be compatible with the mass difference between ³He and tritium measured in Penning traps. The new results were presented at the international TAUP 2019 conference in Toyama, Japan, and have recently been published.
This first result shows that all aspects of the KATRIN experiment, from hardware to data acquisition to analysis, work as expected. The statistical uncertainty of the first KATRIN result is already smaller by a factor of two compared to previous experiments, and systematic uncertainties have gone down by a factor of six. A neutrino mass was not yet extracted with these first four weeks of data, but an upper limit for the neutrino mass of 1.1 eV (90% confidence) can be drawn, catapulting KATRIN directly to the top of the world of direct neutrino-mass experiments. In the mass region around 1 eV, the limit corresponds to the quasi-degenerate neutrino-mass range, where the mass splittings implied by neutrino-oscillation experiments are negligible compared to the absolute masses.
The neutrino-mass result from KATRIN is complementary to results obtained from searches for neutrinoless double beta decay, which are sensitive to the “coherent sum” mββ of all neutrino mass eigenstates contributing to the electron neutrino. Apart from additional phases that can lead to possible cancellations in this sum, the values of the nuclear matrix elements that need to be calculated to connect the neutrino mass mββ with the observable (the half-life) still possess uncertainties of a factor two. Therefore, the result from a direct neutrino-mass determination is more closely connected to results from cosmological data, which give (model-dependent) access to the neutrino-mass sum.
A sizeable influence
Currently, KATRIN is taking more data and has already increased the source activity by a factor of four, to close to its design value. The background rate is still a challenge. Various measures, such as out-baking and using liquid-nitrogen-cooled baffles in front of the getter pumps, have already yielded a background reduction by a factor of 10, and more will be implemented in the next few years. For the final KATRIN sensitivity of 0.2 eV (90% confidence) on the absolute neutrino-mass scale, a total of 1000 days of data are required. With this sensitivity KATRIN will either find the neutrino mass or will set a stringent upper limit. The former would confront standard cosmology, while the latter would exclude quasi-degenerate neutrino masses and a sizeable influence of neutrinos on the formation of structure in the universe. This will be augmented by searches for physics beyond the Standard Model, such as for sterile-neutrino admixtures with masses from the eV to the keV scale.
Neutrino-oscillation results yield a lower limit of about 10 meV (50 meV), for normal (inverted) mass ordering, on the effective electron-neutrino mass that must manifest in direct neutrino-mass experiments. Therefore, many plans exist to cover this region in the future. At KATRIN, there is a strong R&D programme to upgrade the MAC-E filter principle from the current integral to a differential read-out, which will allow a factor-of-two improvement in sensitivity on the neutrino mass. New approaches to determine the absolute neutrino-mass scale are also being developed: Project 8, a radio-spectroscopy method to eventually be applied to an atomic tritium source; and the electron-capture experiments ECHo and HOLMES, which intend to deploy large arrays of cryogenic bolometers with the implanted isotope ¹⁶³Ho. In parallel, the next generation of neutrinoless double-beta-decay experiments, such as LEGEND, CUPID and nEXO (as well as future xenon-based dark-matter experiments), aim to cover the full range of the inverted neutrino-mass ordering. Finally, refined cosmological data should allow us to probe the same mass region (and beyond) within the next decades, while neutrino-oscillation experiments such as JUNO, DUNE and Hyper-Kamiokande will probe the neutrino-mass ordering implemented in nature. As a result of this broad programme for the 2020s, the elusive neutrino should finally yield some of its secrets and inner properties beyond mixing.
The origin of the three families of quarks and leptons and their extreme range of masses is a central mystery of particle physics. According to the Standard Model (SM), quarks and leptons come in complete families that interact identically with the gauge forces, leading to a remarkably successful quantitative theory describing practically all data at the quantum level. The various quark and lepton masses are described by having different interaction strengths with the Higgs doublet (figure 1, left), also leading to quark mixing and charge-parity (CP) violating transitions involving strange, bottom and charm quarks. However, the SM provides no understanding of the bizarre pattern of quark and lepton masses, quark mixing or CP violation.
In 1998 the SM suffered its strongest challenge to date with the decisive discovery of neutrino oscillations resolving the atmospheric neutrino anomaly and the long-standing problem of the low flux of electron neutrinos from the Sun. The observed neutrino oscillations require at least two non-zero but extremely small neutrino masses, around one ten millionth of the electron mass or so, and three sizeable mixing angles. However, since the minimal SM assumes massless neutrinos, the origin and nature of neutrino masses (i.e. whether they are Dirac or Majorana particles, the latter requiring the neutrino and antineutrino to be related by CP conjugation) and mixing is unclear, and many possible SM extensions have been proposed.
The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore, with the fermion mass hierarchy now spanning at least 12 orders of magnitude, from the neutrino to the top quark. However, it is not only the fermion mass hierarchy that is unsettling. There are now 28 free parameters in a Majorana-extended SM, including a whopping 22 associated with flavour, surely too many for a fundamental theory of nature. To restate Isidor Isaac Rabi’s famous question following the discovery of the muon in 1936: who ordered all of that?
There have been many attempts to formulate a theory beyond the SM that can address the flavour puzzles. Most attempt to enlarge the group structure of the SM describing the strong, weak and electromagnetic gauge forces: SU(3)C × SU(2)L × U(1)Y (see “A taste of flavour in elementary particle physics” panel). The basic premise is that, unlike in the SM, the three families are distinguished by some new quantum numbers associated with a new family or flavour symmetry group, Gfl, which is tacked onto the SM gauge group, enlarging the structure to Gfl × SU(3)C × SU(2)L × U(1)Y. The earliest ideas date back to the 1970s and include radiative fermion-mass generation, first proposed by Weinberg in 1972, in which some Yukawa couplings are forbidden at tree level by a flavour symmetry but generated effectively via loop diagrams. Alternatively, the Froggatt–Nielsen (FN) mechanism of 1979 assumed an additional U(1)fl symmetry under which the quarks and leptons carry various charges.
To account for family replication and to address the question of large lepton mixing, theorists have explored a larger non-Abelian family symmetry, SU(3)fl, where the three families are analogous to the three quark colours in quantum chromodynamics (QCD). Many other examples have been proposed based on subgroups of SU(3)fl, including discrete symmetries (figure 2, right). More recently, theorists have considered extra-dimensional models in which the Higgs field is located at a 4D brane, while the fermions are free to roam over the extra dimension, overlapping with the Higgs field in such a way as to result in hierarchical Yukawa couplings. Still other ideas include partial compositeness in which fermions may get hierarchical masses from the mixing between an elementary sector and a composite one. The possibilities are seemingly endless. However, all such theories share one common question: what is the scale, Mfl, (or scales) of new physics associated with flavour?
Since experiments at CERN and elsewhere have thoroughly probed the electroweak scale, all we can say for sure is that, unless the new physics is extremely weakly coupled, Mfl can be anywhere from the Planck scale (10¹⁹ GeV), where gravity becomes important, to the electroweak scale at the mass of the W boson (80 GeV). Thus the flavour scale is very unconstrained.
The origin of flavour can be traced back to the discovery of the electron – the first elementary fermion – in 1897. Following the discovery of relativity and quantum mechanics, the electron and the photon became the subject of the most successful theory of all time: quantum electrodynamics (QED). However, the smallness of the electron mass (me = 0.511 MeV) compared to the mass of an atom has always intrigued physicists.
The mystery of the electron mass was compounded by the discovery in 1936 of the muon, with a mass of 207 me but otherwise seemingly identical properties to the electron. This led Isidor Isaac Rabi to quip “who ordered that?”. Four decades later, an even heavier version of the electron was discovered, the tau lepton, with mass mτ = 17 mμ. Yet the seemingly arbitrary values of the masses of the charged leptons are only part of the story. It soon became clear that hadrons were made from quarks that come in three colour charges, mediated by gluons under an SU(3)C gauge theory, quantum chromodynamics (QCD). The up and down quarks of the first family have intrinsic masses mu = 4 me and md = 10 me, accompanied by the charm and strange quarks (mc = 12 mμ and ms = 0.9 mμ) of a second family and the heavyweight top and bottom quarks (mt = 97 mτ and mb = 2.4 mτ) of a third family.
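These ratios can be cross-checked against approximate PDG central values (in MeV); light-quark masses are scheme-dependent, so only rough agreement with the quoted integers is expected:

```python
# Rough cross-check of the quoted mass ratios; masses in MeV are
# approximate PDG central values (quark masses are scheme-dependent).
m = {
    "e": 0.511, "mu": 105.7, "tau": 1777.0,
    "u": 2.2, "d": 4.7, "s": 95.0,
    "c": 1270.0, "b": 4180.0, "t": 172_800.0,
}
print(f"m_u = {m['u'] / m['e']:.1f} m_e")      # ~4
print(f"m_d = {m['d'] / m['e']:.1f} m_e")      # ~9-10
print(f"m_c = {m['c'] / m['mu']:.1f} m_mu")    # ~12
print(f"m_s = {m['s'] / m['mu']:.1f} m_mu")    # ~0.9
print(f"m_t = {m['t'] / m['tau']:.1f} m_tau")  # ~97
print(f"m_b = {m['b'] / m['tau']:.1f} m_tau")  # ~2.4
```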
It was also realised that the different quark “flavours”, a term invented by Gell-Mann and Fritzsch, could undergo mixing transitions. For example, at the quark level the radioactive decay of a nucleus is explained by the transformation of a down quark into an up quark plus an electron and an electron antineutrino. Shortly after Pauli hypothesised the neutrino in 1930, Fermi proposed a theory of weak interactions based on a contact interaction between the four fermions, with a coupling strength given by a dimensionful constant GF, whose scale was later identified with the mass of the W boson: GF ∝ 1/mW².
After decades of painstaking observation, including the discovery of parity violation, whereby only left-handed particles experience the weak interaction, Fermi’s theory of weak interactions and QED were merged into an electroweak theory based on SU(2)L× U(1)Y gauge theory. The left-handed (L) electron and neutrino form a doublet under SU(2)L, while the right-handed electron is a singlet, with the doublet and singlet carrying hypercharge U(1)Y and the pattern repeating for the second and third lepton families. Similarly, the left-handed up and down quarks form doublets, and so on. The electroweak SU(2)L× U(1)Y symmetry is spontaneously broken to U(1)QED by the vacuum expectation value of the neutral component of a new doublet of complex scalar boson fields called the Higgs doublet. After spontaneous symmetry breaking, this results in massive charged W and neutral Z gauge bosons, and a massive neutral scalar Higgs boson – a picture triumphantly confirmed by experiments at CERN.
To truly shed light on the Standard Model’s flavour puzzle, theorists have explored symmetry groups higher and more complex than that of the SM itself. The most promising approaches all involve a spontaneously broken family or flavour symmetry. But the flavour-breaking scale may lie anywhere from the Planck scale to the electroweak scale, with grand unified theories suggesting a high flavour scale, while recent hints of anomalies from LHCb and other experiments suggest a low flavour scale.
To illustrate the unknown magnitude of the flavour scale, consider for example the FN mechanism, where Mfl is associated with the breaking of the U(1)fl symmetry. In the SM, the top-quark mass of 173 GeV is given by a Yukawa coupling times the Higgs vacuum expectation value of 246 GeV, divided by the square root of two. This implies a top-quark Yukawa coupling close to unity. The exact value is not important; what matters is that the top Yukawa coupling is of order unity. From this point of view, the top-quark mass is not at all puzzling – it is the other fermion masses, associated with much smaller Yukawa couplings, that require explanation. According to FN, the fermions are assigned various U(1)fl charges, and small Yukawa couplings are forbidden by the U(1)fl symmetry. The symmetry is broken by the vacuum expectation value of a new “flavon” field <φ>, where φ is a scalar that is neutral under the SM but carries one unit of U(1)fl charge. Small Yukawa couplings then originate from an operator (figure 1, right) suppressed by powers of the small ratio <φ>/Mfl (where Mfl acts as a cut-off scale of the contact interaction).
For example, suppose that the ratio <φ>/Mfl is identified with the Wolfenstein parameter λ = sinθC = 0.225 (where θC is the Cabibbo angle appearing in the CKM quark-mixing matrix). Then the fermion mass hierarchies can be explained by powers of this ratio, controlled by the assigned U(1)fl charges: me/mτ ∼ λ⁵, mμ/mτ ∼ λ², md/mb ∼ λ⁴, ms/mb ∼ λ², mu/mt ∼ λ⁸ and mc/mt ∼ λ⁴. This shows how fermion masses spanning many orders of magnitude may be interpreted as arising from integer U(1)fl charge assignments of less than 10. However, in this approach, Mfl may be anywhere from the Planck scale to the electroweak scale by adjusting <φ> such that the ratio λ = <φ>/Mfl is held fixed.
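The suppression pattern above is easy to tabulate; the exponents below are the ones quoted in the text, fixed by the assigned U(1)fl charges:

```python
# Froggatt-Nielsen-style suppression: mass ratios as powers of the
# Wolfenstein parameter lambda = 0.225 (exponents as quoted in the text).
lam = 0.225

powers = {
    "m_e/m_tau": 5, "m_mu/m_tau": 2,
    "m_d/m_b": 4, "m_s/m_b": 2,
    "m_u/m_t": 8, "m_c/m_t": 4,
}

for name, n in powers.items():
    print(f"{name} ~ lambda^{n} = {lam**n:.1e}")
# From lambda^2 ~ 5e-2 down to lambda^8 ~ 7e-6: several orders of
# magnitude in mass ratios from integer charges below 10.
```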
One possibility for Mfl, reviewed by Kaladi Babu at Oklahoma State University in 2009, is that it is not too far from the scale of grand unified theories (GUTs), of order 10¹⁶ GeV, which is the scale at which the gauge couplings associated with the SM gauge group unify into a single gauge group. The simplest unifying group, SU(5)GUT, was proposed by Georgi and Glashow in 1974, following the work of Pati and Salam based on SU(4)C × SU(2)L × SU(2)R. Both these gauge groups can result from SO(10)GUT, which was discovered by Fritzsch and Minkowski (and independently by Georgi), while many other GUT groups and subgroups have also been studied (figure 2, left). However, GUT groups by themselves only unify quarks and leptons within a given family, and while they may provide an explanation for why mb = 2.4 mτ, as discussed by Babu, they do not account for the fermion mass hierarchies.
A way around this, first suggested by Ramond in 1979, is to combine GUTs with family symmetry based on the product group GGUT × Gfl, with symmetries acting in the specific directions shown in the figure “Family affair”. In order not to spoil the unification of the gauge couplings, the flavour-symmetry breaking scale is often assumed to be close to the GUT breaking scale. This also enables the dynamics of whatever breaks the GUT symmetry, be it Higgs fields or some mechanism associated with compactification of extra dimensions, to be applied to the flavour breaking. Thus, in such theories, the GUT and flavour/family symmetries are both broken at or around Mfl ∼ MGUT ∼ 10¹⁶ GeV, as widely discussed by many authors. In this case, it would be impossible with known technology to access experimentally the underlying theory responsible for unification and flavour. Instead, we would need to rely on indirect probes such as proton decay (a generic prediction of GUTs and hence of these enlarged SM structures proposed to explain flavour) and/or charged-lepton flavour-violating processes such as μ → eγ (see CERN Courier May/June 2019 p45).
New ideas for addressing the flavour problem continue to be developed. For example, motivated by string theory, Ferruccio Feruglio of the University of Padova suggested in 2017 that neutrino masses might be complex analytic functions called modular forms. The starting point of this novel idea is that non-Abelian discrete family symmetries may arise from superstring theory in compactified extra dimensions, as a finite subgroup of the modular symmetry of such theories (i.e. the symmetry associated with the non-unique choice of basis vectors spanning a given extra-dimensional lattice). It follows that the 4D effective Lagrangian must respect modular symmetry. This, Feruglio observed, implies that Yukawa couplings may be modular forms. So if the leptons transform as triplets under some finite subgroup of the modular symmetry, then the Yukawa couplings themselves must transform also as triplets, but with a well defined structure depending on only one free parameter: the complex modulus field. At a stroke, this removes the need for flavon fields and ad hoc vacuum alignments to break the family symmetry, and potentially greatly simplifies the particle content of the theory.
Compactification
Although this approach is currently actively being considered, it is still unclear to what extent it may shed light on the entire flavour problem including all quark and lepton mass hierarchies. Alternative string-theory motivated ideas for addressing the flavour problem are also being developed, including the idea that flavons can arise from the components of extra-dimensional gauge fields and that their vacuum alignment may be achieved as a consequence of the compactification mechanism.
Recently, there have been some experimental observations concerning charged lepton flavour universality violation which hint that the flavour scale might not be associated with the GUT scale, but might instead be just around the corner at the TeV scale (CERN Courier May/June 2019 p33). Recall that in the SM the charged leptons e, μ and τ interact identically with the gauge forces, and differ only in their masses, which result from having different Yukawa couplings to the Higgs doublet. This charged lepton flavour universality has been the subject of intense experimental scrutiny over the years and has passed all the tests – until now. In recent years, anomalies have appeared associated with violations of charged lepton flavour universality in the final states associated with the quark transitions b → c and b → s.
Puzzle solving
In the case of b → c transitions, the final states involving τ leptons appear to violate charged-lepton universality. In particular, B → D(*) ℓνℓ decays where the charged lepton ℓ is identified with τ have been shown by BaBar and LHCb to occur at rates somewhat higher than those predicted by the SM (the ratios of such final states to those involving electrons and muons being denoted RD and RD*). This is quite puzzling since all three types of charged leptons are predicted to couple to the W boson equally, and the decay is dominated by tree-level W exchange. Any new-physics contribution, such as the exchange of a new charged Higgs boson, a new W′ or a leptoquark, would have to compete with tree-level W exchange. However, the most recent measurements by Belle, reported at the beginning of 2019 (CERN Courier May/June 2019 p9), find RD and RD* to be closer to the SM prediction.
In the case of b → s transitions, the LHCb collaboration and other experiments have reported a number of anomalies in B → K(*)ℓ+ℓ– decays, such as the RK and RK* ratios of final states containing μ+μ– versus e+e–, which are measured to deviate from the SM by about 2.5 standard deviations. Such anomalies, if they persist, may be accounted for by a new contact operator coupling the four fermions bLsLμLμL, suppressed by a dimensionful coefficient 1/M²new where Mnew ~ 30 TeV, according to a general operator analysis. This hints that there may be new physics arising from the non-universal couplings of a leptoquark and/or a new Z′ whose mass is typically a few TeV in order to generate such an operator (where the 30 TeV scale is reduced to just a few TeV after mixing angles are taken into account). However, the introduction of these new particles increases the SM parameter count still further, and only serves to make the flavour problem of the SM worse.
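The relation between the 30 TeV contact scale and a few-TeV mediator is just dimensional analysis: integrating out a mediator of mass M with couplings g1 and g2 gives an operator coefficient g1g2/M², to be matched to 1/M²new. A sketch with purely illustrative coupling values:

```python
import math

# Matching a tree-level mediator to a four-fermion contact operator:
#   g1 * g2 / M^2 = 1 / M_eff^2  =>  M = M_eff * sqrt(g1 * g2)
# Coupling and mixing values below are purely illustrative.

def mediator_mass(M_eff_TeV, g1, g2):
    """Mediator mass that reproduces a contact scale M_eff."""
    return M_eff_TeV * math.sqrt(g1 * g2)

# A flavour-suppressed b-s coupling of ~0.01 with an O(1) muon coupling
# brings the 30 TeV contact scale down to a few TeV:
print(f"{mediator_mass(30.0, 0.01, 1.0):.1f} TeV")  # 3.0 TeV
```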
Motivated by such considerations, it is tempting to speculate that these recent empirical hints of flavour non-universality may be linked to a possible theory of flavour. Several authors have hinted at such a connection, for example Riccardo Barbieri of Scuola Normale Superiore, Pisa, and collaborators have related these observations to a U(2)5 flavour symmetry in an effective theory framework. In addition, concrete models have recently been constructed that directly relate the effective Yukawa couplings to the effective leptoquark and/or Z′ couplings. In such models the scale of new physics associated with the mass of the leptoquark and/or a new Z′ may be identified with the flavour scale Mfl defined earlier, except that it should be not too far from the TeV scale in order to explain the anomalies. To achieve the desired link, the effective leptoquark and/or Z′ couplings may be generated by the same kinds of operators responsible for the effective Higgs Yukawa couplings (figure 3).
In such a model the couplings of leptoquarks and/or Z′ bosons may be related to the Higgs Yukawa couplings, with all couplings arising effectively from mixing with a vector-like fourth family. The considered model predicts, apart from the TeV-scale leptoquark and/or Z′ and a slightly heavier fourth family, extra flavour-changing processes such as τ → μμμ. The model in its current form does not have any family symmetry, and explains the hierarchy of the quark masses in terms of the vector-like fourth-family masses, which are free parameters. Crucially, the required TeV-scale Z′ mass is given by MZ′ ~ <φ> ~ TeV, which would fix the flavour scale Mfl ~ few TeV. In other words, if the hints of flavour anomalies hold up as further data are collected by LHCb, Belle II and other experiments, the origin of flavour may be right around the corner.
The LHCb experiment has observed new beauty-baryon states, consistent with theoretical expectations for excited Ωb− (bss) baryons. The Ωb− (first observed a decade ago at the Tevatron) is a higher mass partner of the Ω− (sss), the 1964 discovery of which famously validated the quark model of hadrons. The new LHCb finding will help to test models of hadronic states, including some that predict exotic structures such as pentaquarks.
The LHCb collaboration has uncovered numerous new baryons and mesons during the past eight years, bringing a wealth of information to the field of hadron spectroscopy. Critical to the search for new hadrons is the unique capability of the experiment to trigger on fully hadronic beauty and charm decays of b baryons, distinguish protons, kaons and pions from one another using ring-imaging Cherenkov detectors, and reconstruct secondary and tertiary decay vertices with a silicon vertex detector.
LHCb physicists searched for excited Ωb− states via strong decays to Ξb0 K−, where the Ξb0 (bsu), in turn, decays weakly through Ξb0 → Ξc+ π− and Ξc+ → pK− π+. Using the full data sample collected during LHC Run 1 and Run 2, a very large and clean sample of about 19,000 Ξb0 signal decays was collected. Those Ξb0 candidates were then combined with a K− candidate coming from the same primary interaction. Combinations with the wrong sign (Ξb0 K+), where no Ωb− states are expected, were used to study the background. This control sample was used to tune particle-identification requirements to reject misidentified pions, reducing the background by a factor of 2.5 while keeping an efficiency of 85% on simulated signal decays.
The search used the difference in invariant mass, δM = M(Ξb0 K−) – M(Ξb0), determining the δM resolution to be approximately 0.7 MeV using simulated signal decays. (For comparison, the resolution is about 15 MeV for the Ξb0 decay.) Several peaks can be seen by eye (see figure), but to measure their properties a fit is needed. To help constrain the background shape, the wrong-sign δM spectrum (not shown) is fitted simultaneously with the signal mode. The peaks are each described by a relativistic Breit-Wigner convolved with a resolution function.
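The signal shape described here (a relativistic Breit-Wigner convolved with a resolution function) can be sketched numerically. The 0.7 MeV resolution is the value quoted above; the peak position and width below are illustrative, chosen near where a state at 6350 MeV would appear in δM given a Ξb0 mass of about 5792 MeV:

```python
import numpy as np

# Relativistic Breit-Wigner line shape numerically convolved with a
# Gaussian resolution function. Numbers are illustrative, not the
# actual LHCb fit model or results.

def rel_breit_wigner(m, m0, gamma):
    """Unnormalised relativistic Breit-Wigner, unit height at m = m0."""
    return m0**2 * gamma**2 / ((m**2 - m0**2) ** 2 + m0**2 * gamma**2)

def smeared_peak(m_grid, m0, gamma, sigma):
    """Convolve the Breit-Wigner with a Gaussian of width sigma."""
    dm = m_grid[1] - m_grid[0]
    bw = rel_breit_wigner(m_grid, m0, gamma)
    kx = np.arange(-5 * sigma, 5 * sigma + dm, dm)
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    kernel /= kernel.sum()  # normalise so the convolution preserves area
    return np.convolve(bw, kernel, mode="same")

# A narrow state at deltaM ~ 558 MeV (roughly 6350 minus the ~5792 MeV
# Xi_b0 mass), with an illustrative 1.4 MeV width and 0.7 MeV resolution:
grid = np.arange(540.0, 576.0, 0.05)
shape = smeared_peak(grid, m0=558.0, gamma=1.4, sigma=0.7)
print(f"peak at deltaM = {grid[np.argmax(shape)]:.2f} MeV")
```

Because the resolution is comparable to such a narrow natural width, the fitted width of a peak like this is strongly entangled with the resolution model, which is why narrow widths carry large uncertainties.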
Four peaks, corresponding to four excited Ωb− states, were included in the fit. Following the usual convention, the new states were named according to their approximate mass: Ωb(6316)−, Ωb(6330)−, Ωb(6340)− and Ωb(6350)−. Each mass was measured with a precision well below 1 MeV, with the errors dominated by the uncertainty on the world-average Ξb0 mass. All four peaks are narrow. The width of the Ωb(6350)− shows the most significant deviation from zero, with a central value of 1.4 +1.0 −0.8 ± 0.1 MeV. The two lower-mass peaks have significances below three standard deviations (2.1σ and 2.6σ) and so are not considered conclusive observations. But the two higher-mass peaks have significances of 6.7σ and 6.2σ, above the 5σ threshold for discovery.
The new states seen by LHCb follow a similar pattern to the five narrow peaks observed in the Ξc+ K− invariant-mass spectrum by the collaboration in 2017. It has proven difficult to obtain a satisfactory explanation of all five as excited Ωc0 (css) states, raising the possibility that at least one of the Ξc+ K− peaks is a pentaquark or a molecular state. Since the Ξc+ K− and Ξb0 K− final states differ only by replacing a c quark with a b quark, the two analyses together should provide strong constraints on any models that aim to explain the structures in these mass spectra.