
Opening gambit for LHCb in the Vcb puzzle

Figure 1

There is a longstanding puzzle concerning the value of the Cabibbo–Kobayashi–Maskawa matrix element |Vcb|, which describes the coupling between charm and beauty quarks in W± interactions. This fundamental parameter of the Standard Model has been measured with two complementary methods. One uses the inclusive rate of b-hadron decays into final states containing a c hadron and a charged lepton; the other measures the rate of a specific (exclusive) semileptonic B decay, e.g. B0 → D*μ+νμ. The world average of results using the inclusive approach, |Vcb|incl = (42.19 ± 0.78) × 10–3, differs from the average of results using the exclusive approach, |Vcb|excl = (39.25 ± 0.56) × 10–3, by approximately three standard deviations.
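For readers who want to check the arithmetic, a minimal sketch in Python, assuming the two world averages are uncorrelated Gaussian measurements (they are not strictly independent, so this is only indicative):

```python
from math import hypot

# Inclusive vs exclusive |Vcb| world averages quoted above (units of 1e-3)
incl, sig_incl = 42.19, 0.78
excl, sig_excl = 39.25, 0.56

# Naive significance of the difference, assuming uncorrelated Gaussian errors
z = (incl - excl) / hypot(sig_incl, sig_excl)
print(f"discrepancy: {z:.1f} sigma")  # ~3.1 sigma
```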

So far, exclusive determinations have been carried out only at e+e– colliders, using B0 and B+ decays. At the ϒ(4S) resonance the full decay kinematics can be reconstructed, despite the undetected neutrino, and the total number of B mesons produced, needed to measure |Vcb|, is known precisely. The situation is more challenging at a hadron collider – but the LHCb collaboration has just completed an exclusive measurement of |Vcb| based, for the first time, on Bs0 decays.

The exclusive determination of |Vcb| relies on the description of strong-interaction effects for the b and c quarks bound in mesons, the so-called form factors (FF). These are functions of the recoil momentum of the c meson in the b-meson rest frame, and are calculated using non-perturbative QCD techniques, such as lattice QCD or QCD sum rules. A key advantage of semileptonic Bs0 decays, compared to B0/+ decays, is that their FF can be more precisely computed. Recently, the FF parametrisation used in the exclusive determination has been considered to be a possible origin of the inclusive–exclusive discrepancy, and comparisons between the results for |Vcb| obtained using different parametrisations, such as that by Caprini, Lellouch and Neubert (CLN) and that by Boyd, Grinstein and Lebed (BGL), are considered a key check.
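Schematically, the BGL approach expands each form factor in a small conformal variable z, with unitarity bounding the coefficients. The sketch below shows the generic form; the Blaschke factor P, outer function φ, truncation order N and coefficients an are all choices specific to a given analysis:

```latex
% Schematic BGL-type expansion of a form factor F in the conformal
% variable z(w), where w is the recoil variable:
z(w) = \frac{\sqrt{w+1}-\sqrt{2}}{\sqrt{w+1}+\sqrt{2}}, \qquad
F(z) = \frac{1}{P(z)\,\phi(z)} \sum_{n=0}^{N} a_n\, z^n
```

The CLN parametrisation instead correlates the expansion coefficients using heavy-quark symmetry relations, which is why comparing results obtained with the two is a useful robustness check.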

Both parametrisations are employed by LHCb in a new analysis of Bs0 → Ds(*)μ+νμ decays, using a novel method that does not require the momentum of particles other than Ds and μ+ to be estimated. The analysis also uses B0 → D(*)μ+νμ as a normalisation mode, which has the key advantage that many systematic effects cancel in the ratio. With the form factors and relative efficiency-corrected yields in hand, obtaining |Vcb| requires only a few more inputs: branching fractions that were well measured at the B-factories, and the ratio of Bs0 and B0 production fractions measured at LHCb.
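To make the logic concrete, here is a deliberately simplified sketch of the arithmetic, with every number either approximate or invented for illustration; in particular ratio_yields and gamma_tilde are hypothetical placeholders, not LHCb values:

```python
from math import sqrt

# Purely illustrative sketch of how |Vcb| follows from the measured inputs.
ratio_yields = 0.11     # efficiency-corrected yield ratio Bs/B0 (hypothetical)
fs_over_fd   = 0.25     # Bs/B0 production-fraction ratio from LHCb (approximate)
br_b0        = 2.3e-2   # B0 -> D mu nu branching fraction, B factories (approximate)
tau_bs       = 1.52e-12 # Bs lifetime in seconds (approximate)
gamma_tilde  = 3.9e12   # reduced rate with |Vcb|^2 stripped off, from the
                        # form factors (hypothetical value, in s^-1)

br_bs = ratio_yields * br_b0 / fs_over_fd      # Bs branching fraction
vcb = sqrt(br_bs / (tau_bs * gamma_tilde))     # Gamma = BR/tau = |Vcb|^2 * gamma_tilde
print(f"|Vcb| ~ {vcb*1e3:.1f}e-3")             # ~41e-3, the right ballpark
```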

The values of |Vcb| obtained are (41.4 ± 1.6) × 10–3 and (42.3 ± 1.7) × 10–3 in the CLN and BGL parametrisations, respectively. These results are compatible with each other and agree with previous measurements with exclusive decays, as well as the inclusive determination (figure 1).  This new technique can also be applied to B0 decays, giving excellent prospects for new |Vcb| measurements at LHCb. They will also benefit from expected improvements at Belle II to a key external input, the B0 → D(*)μ+νμ branching fraction. Belle II’s own measurement of |Vcb| is also expected to have reduced systematic uncertainties. In addition, new lattice QCD calculations for the full range of the D* recoil momentum are expected soon and should give valuable constraints on the form factors. This synergy between theoretical advances, Belle II and LHCb (and its upgrade, due to start in 2021) will very likely say the final word on the |Vcb| puzzle.

Crystal calorimeter hones Higgs mass

Figure 1

Though a free parameter in the Standard Model, the mass of the Higgs boson is important for both theoretical and experimental reasons. Most peculiarly from a theoretical standpoint, our current knowledge of the masses of the Higgs boson and the top quark implies that the quartic coupling of the Higgs vanishes and becomes negative tantalisingly close to, but just before, the Planck scale. There is no established reason for the Standard Model to perch near this boundary. The implication is that the vacuum is almost but not quite stable, and that on a timescale substantially longer than the age of the universe, some point in space will tunnel to a lower energy state and a bubble of true vacuum will expand to fill the universe. Meanwhile, from an experimental perspective, it is important to continually improve measurements so that the uncertainty on the mass of the Higgs boson eventually rivals the value of its width. At that point, measuring the Higgs-boson mass can provide an independent method to determine the Higgs-boson width, which is sensitive to the existence of possible undiscovered particles and is expected to be a few MeV according to the Standard Model.

The CMS collaboration recently announced the most precise measurement of the Higgs-boson mass achieved thus far, at 125.35 ± 0.15 GeV – a precision of roughly 0.1%. This very high precision was achieved thanks to an enormous amount of work over many years to carefully calibrate and model the CMS detector when it measures the energy and momenta of the electrons, muons and photons necessary for the measurement.

The most recent contribution to this work was a measurement of the mass in the di-photon channel using data collected at the LHC by the CMS collaboration in 2016 (figure 1). This measurement was made using the lead–tungstate crystal calorimeter, which uses approximately 76,000 crystals, each weighing about 1.1 kg, to measure the energy of the photons. A critical step of this analysis was a precise calibration of each crystal’s response using electrons from Z-boson decay, and accounting for the tiny difference between the electron and photon showers in the crystals.

Figure 2

This new result was combined with earlier results obtained with data collected between 2011 and 2016. One measurement was in the decay channel to two Z bosons, which subsequently decay into electron or muon pairs, and another was a measurement in the di-photon channel made with earlier data. The 2011 and 2012 data combined yield 125.06 ± 0.29 GeV. The 2016 data yield 125.46 ± 0.17 GeV. Combining these yields CMS’s current best precision of 125.35 ± 0.15 GeV (figure 2). This new precise measurement of the Higgs-boson mass will not, at least not on its own, lead us in a new direction of physics, but it is an indispensable piece of the puzzle of the Standard Model – and one fruit of the increasing technical mastery of the LHC detectors.
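As a rough consistency check, a naive inverse-variance combination of the two numbers quoted above, ignoring the correlations that the full CMS combination treats properly, lands close to the published result:

```python
# Naive inverse-variance combination of the two CMS results quoted above.
# The published combination accounts for correlated systematics, so the
# agreement here is only approximate.
measurements = [(125.06, 0.29), (125.46, 0.17)]  # (mass in GeV, uncertainty)

weights = [1 / s**2 for _, s in measurements]
mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
sigma = sum(weights) ** -0.5
print(f"combined: {mean:.2f} +/- {sigma:.2f} GeV")  # ~125.36 +/- 0.15 GeV
```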

The Higgs, supersymmetry and all that

John Ellis

What would you say were the best and the worst of times in your half-century-long career as a theorist?

The two best times, in chronological order, were the 1979 discovery of the gluon in three-jet events at DESY, which Mary Gaillard, Graham Ross and I had proposed three years earlier, and the discovery of the Higgs boson at CERN in 2012, in particular because one of the most distinctive signatures for the Higgs, its decay to two photons, was something Gaillard, Dimitri Nanopoulos and I had calculated in 1975. There was a big build-up to the Higgs and it was a really emotional moment. The first of the two worst times was in 2000 with the closure of LEP, because maybe there was a glimpse of the Higgs boson. In fact, in retrospect the decision was correct because the Higgs wasn’t there. The other time was in September 2008 when there was the electrical accident in the LHC soon after it started up. No theoretical missing factor-of-two could be so tragic.

Your 1975 work on the phenomenology of the Higgs boson was the starting point for the Higgs hunt. When did you realise that the particle was more likely than not to exist?

Our paper, published in 1976, helped people think about how to look for the Higgs boson, but it didn’t move to the top of the physics agenda until after the discovery of the W and Z bosons in 1983. When we wrote the paper, things like spontaneous symmetry breaking were regarded as speculative hypotheses by the distinguished grey-haired scientists of the day. Then, in the early 1990s, precision measurements at LEP enabled us to look at the radiative corrections induced by the Higgs and they painted a consistent picture that suggested the Higgs would be relatively light (less than about 300 GeV). I was sort of morally convinced beforehand that the Higgs had to exist, but by the early 1990s it was clear that, indirectly, we had seen it. Before that there were alternative models of electroweak symmetry breaking but LEP killed most of them off.

To what extent does the Higgs boson represent a “portal” to new physics?

The Higgs boson is often presented as completing the Standard Model (SM) and solving lots of problems. Actually, it opens up a whole bunch of new ones. We know now that there is at least one particle that looks like an effective elementary scalar field. It’s an entirely new type of object that we’ve never encountered before, and every single aspect of the Higgs is problematic from a theoretical point of view. Its mass: we know that in the SM it is subject to quadratic corrections that make the hierarchy of mass scales unstable.

Every single aspect of the Higgs is problematic from a theoretical point of view

Its couplings to fermions: those are what produce the mixing of quarks, which is a complete mystery. The quartic term of the Higgs potential in the SM goes negative if you extrapolate it to high energies, the theory becomes unstable and the universe is doomed. And, in principle, you can add a constant term to the Higgs potential, which is the infamous cosmological constant that we know exists in the universe today but that is much, much smaller than would seem natural from the point of view of Higgs theory. Presumably some new physics comes in to fix these problems, and that makes the Higgs sector of the SM Lagrangian look like the obvious portal to that new physics.

In what sense do you feel an emotional connection to theory?

The Higgs discovery is testament to the power of mathematics to describe nature. People often talk about beauty as being a guide to theory, but I am always a bit sceptical about that because it depends on how you define beauty. For me, a piece of engineering can be beautiful even if it looks ugly. The LHC is a beautiful machine from that point of view, and the SM is a beautiful theoretical machine that is driven by mathematics. At the end of the day, mathematics is nothing but logic taken as far as you can.

Do you recall the moment you first encountered supersymmetry (SUSY), and what convinced you of its potential?

I guess it must have been around 1980. Of course I knew that Julius Wess and Bruno Zumino had discovered SUSY as a theoretical framework, but their motivations didn’t convince me. Then people like Luciano Maiani, Ed Witten and others pointed out that SUSY could help stabilise the hierarchy of mass scales that we find in physics, such as the electroweak, Planck and grand-unification scales. For me, the first phenomenological indication that SUSY could be related to reality was our realisation in 1983 that SUSY offered a great candidate for dark matter in the form of the lightest supersymmetric particle. The second was a few years later when LEP provided very precise measurements of the electroweak mixing angle, which were in perfect agreement with supersymmetric (but not non-supersymmetric) grand unified theories. The third indication was around 1991 when we calculated the mass of the lightest supersymmetric Higgs boson to be up to about 130 GeV, which LEP data indicated as a very plausible value, and which agrees with the experimental value.

There was great excitement about SUSY ahead of the LHC start-up. In hindsight, does the non-discovery so far make the idea less likely?

Certainly it’s disappointing. And I have to face the possibility that even if SUSY is there, I might not live to meet her. But I don’t think it’s necessarily a problem for the underlying theory. There are certainly scenarios that can provide the dark matter even if the supersymmetric particles are rather heavier than we originally thought, and such models are still consistent with the mass of the Higgs boson. The information you get from unification of the couplings at high energies also doesn’t exclude SUSY particles weighing 10 TeV or so. Clearly, as the masses of the sparticles increase, you have to do more fine tuning to solve the electroweak hierarchy problem. On the other hand, the amount of fine tuning is still many, many orders of magnitude less than what you’d have to postulate without it! It’s a question of how much resistance to pain you have. That said, to my mind the LHC has actually provided three additional reasons for loving SUSY. One is the correct prediction for the Higgs mass. Another is that SUSY stabilises the electroweak vacuum (without it, SM calculations show that the vacuum is metastable). The third is that in a SUSY model, the Higgs couplings to other particles, while not exactly the same as in the SM, should be pretty close – and of course that’s consistent with what has been measured so far.

To what extent is SUSY driving considerations for the next collider?

I still think it’s a relatively clear-cut and well-motivated scenario for physics at the multi-TeV scale. But obviously its importance is less than it was in the early 1990s when we were proposing the LHC. That said, if you want a specific benchmark scenario for new physics at a future collider, SUSY would still be my go-to model, because you can calculate accurate predictions. As for new physics beyond the Higgs and more generally the precision measurements that you can make in the electroweak sector, the next topic that comes to my mind is dark matter. If dark matter is made of weakly-interacting massive particles (WIMPs), a high-energy Future Circular Collider should be able to discover it. You can look at SUSY at various different levels. One is that you just add in these new particles and make sure they have the right couplings to fix the hierarchy problem. But at a more fundamental level you can write down a Lagrangian, postulate this boson-fermion symmetry and follow the mathematics through. Then there is a deeper picture, which is to talk about additional fermionic (or quantum) dimensions of space–time. If SUSY were to be discovered, that would be one of the most profound insights into the nature of reality that we could get.

If SUSY is not a symmetry of nature, what would be the implications for attempts to go beyond the SM, e.g. quantum gravity?

We are never going to know that SUSY is not there. String theorists could probably live with very heavy SUSY particles. When I first started thinking about SUSY in the 1980s there was this motivation related to fine tuning, but there weren’t many other reasons why SUSY should show up at low energies. More arguments came later, for example dark matter, which are nice but a matter of taste. Long after I and my grandchildren have passed on, humans could still be exploring physics way below the Planck scale, and string theorists could still be cool with that.

How high do the masses of the super-partners need to go before SUSY ceases to offer a compelling solution for the hierarchy problem and dark matter?

Beyond about 10 TeV it is difficult to see how it can provide the dark matter unless you change the early expansion history of the universe – which of course is quite possible, because we have no idea what the universe was doing when the temperature was above an MeV. Indeed, many of my string colleagues have been arguing that the expansion history could be rather different from the conventional adiabatic smooth expansion that people tend to use as the default. In this case supersymmetric particles could weigh 10 or even 30 TeV and still provide the dark matter. As for the hierarchy problem, obviously things get tougher to bear.

What can we infer about SUSY as a theory of fundamental particles from its recent “avatars” in lasers and condensed-matter systems?

I don’t know. It’s not really clear to me that the word “SUSY” is being used in the same sense that I would use it. Supersymmetric quantum mechanics was taken as a motivation for the laser setup (CERN Courier March/April 2019 p10), but whether the deeper mathematics of SUSY has much to do with the way this setup works I’m not sure. The case of topological condensed-matter systems is potentially a more interesting place to explore what this particular face of SUSY actually looks like, as you can study more of its properties under controlled conditions. The danger is that, when people bandy around the idea of SUSY, often they just have in mind this fermion–boson partnership. The real essence of SUSY goes beyond that and includes the couplings of these particles, and it’s not clear to me that in these effective-SUSY systems one can talk in a meaningful way about what the couplings look like.

Has the LHC new-physics no-show so far impacted what theorists work on?

In general, I think that members of the theoretical community have diversified their interests and are thinking about alternative dark-matter scenarios, and about alternative ways to stabilise the hierarchy problem. People are certainly exploring new theoretical avenues, which is very healthy and, in a way, there is much more freedom for young theorists today than there might have been in the past. Personally, I would be rather reluctant at this time to propose to a PhD student a thesis that was based solely on SUSY – the people who are hiring are quite likely to want them to be not just working on SUSY and maybe even not working on SUSY at all. I would regard that as a bit unfair, but there are always fashions in theoretical physics.

Following a long and highly successful period of theory-led research, culminating in the completion of the SM, what signposts does theory offer experimentalists from here?

I would broaden your question. In particle physics, yes, we have the SM, which over the past 50 years has been the dominant paradigm. But there is also a paradigm in cosmology and gravitation – general relativity and the idea of a big bang – initiated a century ago by Einstein. The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity, and that community now finds itself in the same fix that we do, in that they have this theory-led paradigm that doesn’t indicate where to go next.

The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity

Gravitational waves are going to tell us a lot about astrophysics, but whether they will tell us about quantum gravity is not so obvious. The Higgs boson, meanwhile, tells us that we have a theory that works fantastically well but leaves many mysteries – such as dark matter, the origin of matter, neutrino masses, cosmological inflation, etc – still standing. These are a mixture of theoretical, phenomenological and experimental problems suggesting life beyond the SM. But we don’t have any clear signposts today. The theoretical cats are wandering off in all directions, and that’s good because maybe one of the cats will find something interesting. But there is still a dialogue going on between theory and experiment, and it’s a dialogue that is maybe less of a monologue than it was during the rise of the SM and general relativity. The problems we face in going beyond the current paradigms in fundamental physics are the hardest we’ve faced yet, and we are going to need all the dialogue we can muster between theorists, experimentalists, astrophysicists and cosmologists.


Learning to love anomalies

All surprising discoveries were anomalies at some stage

Anomalies, which I take to mean data that disagree with the scientific paradigm of the day, are the bread and butter of phenomenologists working on physics beyond the Standard Model (SM). Are they a mere blip or the first sign of new physics? A keen understanding of statistics is necessary to help decide which “bumps” to work on.
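One piece of that statistical toolkit is the “look-elsewhere” effect: a bump’s local significance overstates the case when many mass windows were searched. A minimal sketch, with the number of independent windows (100) chosen purely for illustration:

```python
from statistics import NormalDist

nd = NormalDist()
z_local = 4.0
p_local = 1 - nd.cdf(z_local)          # one-sided local p-value (~3.2e-5)
p_global = 1 - (1 - p_local) ** 100    # naive trials correction for 100 windows
z_global = nd.inv_cdf(1 - p_global)    # back to a global significance
print(f"local {z_local:.1f} sigma -> global {z_global:.1f} sigma")  # ~2.7 sigma
```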

Take the excess in the rate of di-photon production at a mass of around 750 GeV spotted in 2015 by the ATLAS and CMS experiments. ATLAS had a 4σ peak with respect to background, which CMS seemed to confirm, although its signal was less clear. Theorists produced an avalanche of papers speculating on what the signal might mean but, in the end, the signal was not confirmed in new data. In fact, as is so often the case, the putative signal stimulated some very fruitful work. For example, it was realised that ultra-peripheral collisions between lead ions could produce photon-photon resonances, leading to an innovative and unexpected search programme in heavy-ion physics. Other authors proposed using such collisions to measure the anomalous magnetic moment of the tau lepton, which is expected to be especially sensitive to new physics, and in 2018 ATLAS and CMS found the first evidence for (non-anomalous) high-energy light-by-light scattering in lead-lead ultra-peripheral collisions.

Some anomalies have disappeared during the past decade not primarily because they were statistical fluctuations, but because of an improved understanding of theory. One example is the forward-backward asymmetry (AFB) of top–antitop production at the Tevatron. At large transverse momentum, AFB was measured to be much too large compared to SM predictions, which were at next-to-leading order in QCD with some partial next-to-next-to-leading-order (NNLO) corrections. The complete NNLO corrections, calculated in a Herculean effort, proved to contribute much more than previously thought, faithfully describing top–antitop production both at the Tevatron and at the LHC.

Ben Allanach

Other anomalies are still alive and kicking. Arguably chief among them is the long-standing oddity in the measurement of the anomalous magnetic moment of the muon, which is about 4σ discrepant with the SM prediction. Since it was spotted 20 years ago, many papers have been written in an attempt to explain it, with contributions ranging from supersymmetric particles to leptoquarks. A similarly long-standing anomaly is a 3.8σ excess in the number of electron antineutrinos emerging from a muon–antineutrino beam observed by the LSND experiment and backed up more recently by MiniBooNE. Again, numerous papers attempting to explain the excess, e.g. in terms of the existence of a fourth “sterile” neutrino, have been written, but the jury is still out.

Some anomalies are more recent, and unexpected. The so-called “X17” anomaly reported by a nuclear-physics experiment in Hungary, for instance, shows a significant excess in the rate of certain nuclear decays of 8Be and 4He nuclei (see Rekindled Atomki anomaly merits closer scrutiny), which has been interpreted as being due to the creation of a new particle of mass 17 MeV. Though possible theoretically, one needs to work hard to ensure this new particle does not fall foul of other experimental constraints; confirmation from an independent experiment is also needed. Personally, I am not pursuing this: I think that the best new-physics ideas have already been had by other authors.

When working on an anomaly, beyond-the-SM phenomenologists hypothesise a new particle and/or interaction to explain it, check to see if it works quantitatively, check to see if any other measurements rule the explanation out, then provide new ways in which the idea can be tested. After this, they usually check where the new physics might fit into a larger theoretical structure, which might explain some other mysteries. For example, there are currently many anomalies in measurements of B-meson decays, each of which is not particularly statistically significant on its own (typically 2–3σ away from the SM), but taken together they form a coherent picture with a higher significance. The exchange of hypothesised Z′ or leptoquark quanta provides a working explanation, with the larger structure also shedding light on the pattern of masses of SM fermions, and most of my research time is currently devoted to studying them.
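As a rough illustration of why several modest deviations can matter collectively, a naive Stouffer combination of invented z-scores is sketched below; real global fits to the B anomalies must also account for correlations and shared theory uncertainties:

```python
from math import sqrt

# Hypothetical per-measurement significances, each individually unexciting
z_scores = [2.5, 2.2, 3.0, 2.1]

# Stouffer's method for independent measurements pulling the same way
z_combined = sum(z_scores) / sqrt(len(z_scores))
print(f"combined: {z_combined:.1f} sigma")  # ~4.9 sigma
```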

The coming decade will presumably sort several current anomalies into discoveries and those that “went away”. Belle II and future LHCb measurements should settle the B anomalies, while the anomalous muon magnetic moment may even be settled this year by the g-2 experiment at Fermilab. Of course, we hope that new anomalies will appear and stick. One anomaly from the late 1990s – that type Ia supernovae show an anomalous acceleration at large redshifts – turned out to reveal the existence of dark energy and to produce the dominant paradigm of cosmology today. This reminds us that all surprising discoveries were anomalies at some stage.

Engaging with the invisible

3D models of bar magnets with tactile bumps

We have many fantastic achievements in our wonderful field of research, most recently the completion of the Large Hadron Collider (LHC) and its discovery of a new form of matter, the Higgs boson. The field is now preparing to face the next set of challenges, in whatever direction the European Strategy for Particle Physics recommends. With ambitious goals, this strategy update is the right time to ask: “How do we make ourselves as good as we need to be to succeed?”

Big science has brought more than fundamental knowledge: it has taught us that we can achieve more when we collaborate, and to do this we need to communicate both within and beyond the community. We need to communicate to our funders and, most importantly of all, we need to communicate with wider society to give everyone an opportunity to engage in or become a part of the scientific process. Yet some of the audiences we could and should be reaching are below the radar.

Reaching high-science capital people – those who will attend a laboratory open day, watch a new documentary on dark energy or read a newspaper article about medical accelerators – is a vital part of our work, and we do it well. But many audiences have barriers to traditional modes of outreach and engagement. For example, groups or families with an inherently low science background, perhaps linked to socio-economic grouping, will not read articles in the science-literate mainstream press as they feel, incorrectly, that science is not for them. Large potential audiences with physical or mental disabilities will be put off coming to events for practical reasons such as accessibility or perhaps being unable to read or understand printed or visual media. In the UK alone, millions of people are registered as visually impaired (VI) to some degree. To reach these and other “invisible” audiences, we need to enter their space.

Inspired by the LHC

Interactive magnet exhibits on display

When it comes to science engagement, which is a predominantly visual interaction, the VI audience is underserved. Tactile Collider is a communication project aimed at addressing this gap. The idea came in 2014 when a major LHC exhibition came to Manchester, UK. During panel discussions at a launch party held at the Museum of Science and Industry, it became clear that the accessibility of the exhibition could be improved. Spurring us further into action, we also had a request from a local VI couple for an adapted tour. I gathered together some pieces of the ATLAS forward detector and some radio-frequency cavity models, both of which had a pleasing weight and plenty of features to feel with fingers, and gave the couple a bespoke tour of the exhibition. The feedback was fantastic, and making this tactile, interactive and bespoke form of engagement available to more people, whether sighted or VI, was a challenge we accepted.

With the help of the museum staff, we developed the idea further and were soon put in touch with Kirin Saeed, an independent consultant on accessibility and visually impaired herself. Together, we formulated a potential project to a stage where we could approach funders. The UK’s Science and Technology Facilities Council (STFC) recognised and supported our vision for a UK-wide project and funded the nascent Tactile Collider for two years through a £100,000 public-engagement grant.

We wanted to design a project without preconceptions about the techniques and methods of communication and delivery. With co-leaders Chris Edmonds of the University of Liverpool and Robyn Watson, a teacher of VI students, our team spent one year listening to and talking with audiences before we even considered producing materials or defining an approach. We spent time in focus groups, in classrooms across the north of England, visiting museums with VI people, and looking at the varied ways of learning and accessing information for VI groups of all ages. Training in skills such as audio description and tactile-map production was crucial, as were the PhD students who got involved to design materials and deliver Tactile Collider events.

Early on, we focused on a science message based around four key themes: the universe is made of particles; we accelerate these particles using electric fields in cavities; we control particle beams using magnets; and we collide particle beams to make the Higgs boson. The first significant event took place in Liverpool in 2017 and since then the exhibition has toured UK schools, science festivals and, in 2019, joined the CERN Open Days for our first Geneva-region event.

Content development

A key aspect of Tactile Collider is content developed specifically for a VI audience, along with training the delivery team in how to sight-guide and educating them about the large range of visual impairments. As an example, take the magnetic field of a dipole – the first step to understanding how magnets are used to control and manipulate charged particle beams. The idea of a bar magnet having a north pole and a south pole, and magnetic field lines connecting the two, is simple enough to convey using pencil and paper. To communicate with VI audiences, by contrast, the magnet station of Tactile Collider contains a 3D model of a bar magnet with tactile bumps for north and south poles, partnered with tactile diagrams. In some areas of Tactile Collider, 3D sound is employed to give students a choice in how to interact.

Tactile Collider’s Chris Edmonds, Rob Appleby and Robyn Watson

The lessons learned during the project’s development and delivery led us to a set of principles for engagement, which work for all audiences regardless of any particular needs. We found that all science engagement should strive to be authentic, with no dumbing down of the science message, delivered by practising scientists striving to involve the audience as equals. Alongside this authentic message, VI learners require close interaction with a scientist-presenter in a group no bigger than four. The scientist should also be trained in VI-audience awareness, sighted guiding, audio description and the presentation of a tactile narrative linked to the learning outcomes. Coupled with this is the need to train presenters to be able to use the differing materials with diverse audience groups.

Tactile Collider toured the UK in 2017 and 2018, visiting many mainstream and specialist schools, and meeting many motivated and enthusiastic students. We have also spent time at music festivals, with a focus on raising awareness of VI issues and giving people a chance to learn about the LHC using senses other than their eyes. One legacy of Tactile Collider is educating our community, and we are planning “VI in science” training events in 2020 in addition to a third community meeting bringing together scientists and communication professionals.

There is now real interest in, and understanding of, the importance of reaching underrepresented audiences in particle physics. Tactile Collider is a step towards this, and we are working to share the skills and insights we have gained on our journey so far. The idea has also appeared in astronomy: Tactile Universe, based at the Institute of Cosmology and Gravitation at the University of Portsmouth, engages the VI community with astrophysics research, for example by creating 3D-printed tactile images of galaxies for use in schools and at public events. The first joint Tactile Collider/Universe event will take place in London in 2020 and we have already jointly hosted two community workshops. The Tactile Collider team is happy to discuss bringing the exhibition to any event, lab or venue.

Fundamental science is a humbling and levelling endeavour. When we consider the Higgs boson and supernovae, none of us can directly engage with the very small, the very distant or the very massive. Using all of our senses shows us science in a new and fascinating way.

A voyage to the heart of the neutrino

On 11 June 2018, a tense silence filled the large lecture hall of the Karlsruhe Institute of Technology (KIT) in Germany. In front of an audience of more than 250 people, 15 red buttons were pressed simultaneously by a panel of senior figures including recent Nobel laureates Takaaki Kajita and Art McDonald. At the same time, operators in the control room of the Karlsruhe Tritium Neutrino (KATRIN) experiment lowered the retardation voltage of the apparatus so that the first beta electrons were able to pass into KATRIN’s giant spectrometer vessel. Great applause erupted when the first beta electrons hit the detector.

In the long history of measuring the tritium beta-decay spectrum to determine the neutrino mass, the ensuing weeks of KATRIN’s first data-taking opened a new chapter. Everything worked as expected, and KATRIN’s initial measurements have already propelled it into the top ranks of neutrino experiments. The aim of this ultra-high-precision beta-decay spectroscope, more than 15 years in the making, is to determine, by the mid-2020s, the absolute mass of the neutrino.

Massive discovery

Since the discovery of the oscillation of atmospheric neutrinos by the Super-Kamiokande experiment in 1998, and of the flavour transitions of solar neutrinos by the SNO experiment shortly afterwards, it has been clear that neutrino masses are not zero, but big enough to cause interference between distinct mass eigenstates as a neutrino wavepacket evolves in time. We know now that the three neutrino flavour states we observe in experiments – νe, νμ and ντ – are mixtures of three neutrino mass states.

Though not massless, neutrinos are exceedingly light. Previous experiments designed to directly measure the scale of neutrino masses, in Mainz and Troitsk, produced an upper limit of 2 eV for the neutrino mass – some 250,000 times smaller than the mass of the next-lightest massive elementary particle, the electron. Nevertheless, neutrino masses are extremely important for cosmology as well as for particle physics. Neutrinos have a number density of around 336 cm–3, making them the most abundant particles in the universe besides photons, and they therefore play a distinct role in the formation of cosmic structure. Comparing data from the Planck satellite and from galaxy surveys (baryonic acoustic oscillations) with simulations of the evolution of structure yields an upper limit on the sum of all three neutrino masses of 0.12 eV at 95% confidence within the framework of the standard Lambda cold dark matter (ΛCDM) cosmological model.
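The link between the mass sum and the cosmic energy budget can be made concrete with the textbook relation Ωνh2 = Σmν/93.14 eV, which follows from the relic number density quoted above. A one-line check, assuming that standard relation:

```python
# Contribution of relic neutrinos to the cosmic energy budget:
# Omega_nu h^2 = sum(m_nu) / 93.14 eV (standard relation for 336 cm^-3)
sum_m_nu = 0.12  # eV, the Planck + BAO upper limit quoted in the text
omega_nu_h2 = sum_m_nu / 93.14
print(f"Omega_nu h^2 < {omega_nu_h2:.1e}")  # ~1.3e-3
```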

Considerations of “naturalness” lead most theorists to speculate that the exceedingly tiny neutrino masses do not arise from standard Yukawa couplings to the Higgs boson, as per the other fermions, but are generated by a different mass mechanism. Since neutrinos are electrically neutral, they could be identical to their antiparticles, making them Majorana particles. Via the so-called seesaw mechanism, this interesting scenario would require a new and very high particle mass scale to balance the smallness of the neutrino masses, which would be unreachable with present accelerators.

KATRIN’s main spectrometer

As neutrino oscillations arise due to interference between mass eigenstates, neutrino-oscillation experiments are only able to determine splittings between the squares of the neutrino mass eigenstates. Three experimental avenues are currently being pursued to determine the neutrino mass. The most stringent upper limit is currently the model-dependent bound set by cosmological data, as already mentioned, which is valid within the ΛCDM model. A second approach is to search for neutrinoless double-beta decay, which allows a statement to be made about the size of the neutrino masses but presupposes the Majorana nature of neutrinos. The third approach – the one adopted by KATRIN – is the direct determination of the neutrino mass from the kinematics of a weak process such as beta decay, which is completely model-independent and depends only on the principle of energy and momentum conservation.

Figure 1

The direct determination of the neutrino mass relies on the precise measurement of the shape of the beta electron spectrum near the endpoint, which is governed by the available phase space (figure 1). This spectral shape is altered by the neutrino mass value: the smaller the mass, the smaller the spectral modification. One would expect to see three modifications, one for each neutrino mass eigenstate. However, due to the tiny neutrino mass differences, a weighted sum is observed. This “average electron neutrino mass” is formed by the incoherent sum of the squares of the three neutrino mass eigenstates, which contribute to the electron neutrino according to the PMNS neutrino-mixing matrix. The super-heavy hydrogen isotope tritium is ideal for this purpose because it combines a very low endpoint energy, E0, of 18.6 keV and a short half-life of 12.3 years with a simple nuclear and atomic structure.
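Schematically, omitting the Fermi function and other slowly varying factors, the spectrum near the endpoint and the effective mass it probes take the form:

```latex
% Near-endpoint shape of the tritium beta spectrum, with electron energy E,
% endpoint E_0 and effective mass m_nu; Theta enforces energy conservation.
\frac{\mathrm{d}\Gamma}{\mathrm{d}E} \propto (E_0 - E)\,
\sqrt{(E_0 - E)^2 - m_\nu^2}\;\Theta(E_0 - E - m_\nu),
\qquad
m_\nu^2 = \sum_{i=1}^{3} |U_{ei}|^2\, m_i^2
```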

KATRIN is born

Around the turn of the millennium, motivated by the neutrino oscillation results, Ernst Otten of the University of Mainz and Vladimir Lobashev of INR Troitsk proposed a new, much more sensitive experiment to measure the neutrino mass from tritium beta decay. To this end, the best methods from the previous experiments in Mainz, Troitsk and Los Alamos were to be combined and upscaled by up to two orders of magnitude in size and precision. Together with new technologies and ideas, such as laser Raman spectroscopy or active background reduction methods, the apparatus would increase the sensitivity to the observable in beta decay (the square of the electron antineutrino mass) by a factor of 100, resulting in a neutrino-mass sensitivity of 0.2 eV. Accordingly, the entire experiment was designed to the limits of what was feasible and even beyond (see “Technology transfer delivers ultimate precision” box).

Technology transfer delivers ultimate precision

The electron transport and tritium retention system

Many technologies had to be pushed to the limits of what was feasible or even beyond. KATRIN became a CERN-recognised experiment (RE14) in 2007 and the collaboration worked with CERN experts in many areas to achieve this. The KATRIN main spectrometer is the largest ultra-high vacuum vessel in the world, with a residual gas pressure in the range of 10–11 mbar – a pressure that is otherwise only found in large volumes inside the LHC ring – equivalent to the pressure recorded at the lunar surface.

Even though the inner surface was instrumented with a complex dual-layer wire electrode system for background suppression and electric-field shaping, this extreme vacuum was made possible by rigorous material selection and treatment in addition to non-evaporable getter technology developed at CERN. KATRIN’s almost 40 m-long chain of superconducting magnets with two large chicanes was put into operation with the help of former CERN experts, and a 223Ra source was produced at ISOLDE for background studies at KATRIN. A series of 83mKr conversion electron sources based on implanted 83Rb for calibration purposes was initially produced at ISOLDE. At present these are produced by KATRIN collaborators and further developed with regard to line stability.

Conversely, the KATRIN collaboration has returned its knowledge and methods to the community. For example, the ISOLDE high-voltage system was calibrated twice with the ppm-accuracy KATRIN voltage dividers, and the magnetic and electrical field calculation and tracking programme KASSIOPEIA developed by KATRIN was published as open source and has become the standard for low-energy precision experiments. The fast and precise laser Raman spectroscopy developed for KATRIN is also being applied to fusion technology.

KIT was soon identified as the best place for such an experiment, as it had the necessary experience and infrastructure with the Tritium Laboratory Karlsruhe. The KIT board of directors quickly took up this proposal and a small international working group started to develop the project. At a workshop at Bad Liebenzell in the Black Forest in January 2001, the project received so much international support that KIT, together with nearly all the groups from the previous neutrino-mass experiments, founded the KATRIN collaboration. Currently, the 150-strong KATRIN collaboration comprises 20 institutes from six countries.

It took almost 16 years from the first design to complete KATRIN, largely because many new technologies had to be developed, such as a novel concept to limit the temperature fluctuations of the huge tritium source to the mK scale at 30 K or the high-voltage stabilisation and calibration to the 10 mV scale at 18.6 kV. The experiment’s two most important and also most complex components are the gaseous, windowless molecular tritium source (WGTS) and the very large spectrometer. In the WGTS, tritium gas is introduced in the midpoint of the 10 m-long beam tube, where it flows out to both sides to be pumped out again by turbomolecular pumps. After being partially cleaned it is re-injected, yielding a closed tritium cycle. This results in an almost opaque column density with a total decay rate of 1011 per second. The beta electrons are guided adiabatically to a tandem of a pre- and a main spectrometer by superconducting magnets of up to 6 T. Along the way, differential and cryogenic pumping sections including geometric chicanes reduce the tritium flow by more than 14 orders of magnitude to keep the spectrometers free of tritium (figure 2).

Filtration

Figure 2

The KATRIN spectrometers operate as so-called MAC-E filters, whereby electrons are guided by two superconducting solenoids at either end and their momenta are collimated by the magnetic field gradient. This “magnetic bottle” effect transforms almost all kinetic energy into longitudinal energy, which is filtered by an electrostatic retardation potential so that only electrons with enough energy to overcome the barrier are able to pass through. The smaller pre-spectrometer blocks the low-energy part of the beta spectrum (which carries no information on the neutrino mass), while the 10 m-diameter main spectrometer provides a much sharper filter width due to its huge size.
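The resulting filter width follows from the adiabatic collimation: ΔE = E·Bmin/Bmax, the fraction of transverse energy that survives. A one-line estimate, with an illustrative analysing-plane field rather than an official KATRIN specification:

```python
# Filter width of a MAC-E filter: Delta E = E * B_min / B_max
E = 18_600.0   # eV, tritium endpoint energy
B_max = 6.0    # T, strongest field along the beamline
B_min = 3e-4   # T, field in the spectrometer's analysing plane (illustrative)
print(f"filter width: {E * B_min / B_max:.2f} eV")  # ~0.93 eV
```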

The transmitted electrons are detected by a high-resolution segmented silicon detector. By varying the retarding potential of the main spectrometer, a narrow region of the beta spectrum of several tens of eV below the endpoint is scanned, where the imprint of a non-zero neutrino mass is maximal. Since the relative fraction of the tritium beta spectrum in the last 1 eV below the endpoint amounts to just 2 × 10–13, KATRIN demands a tritium source of the highest intensity. Of equal importance is the high precision needed to understand the measured beta spectrum. Therefore, KATRIN possesses a complex calibration and monitoring system to determine all systematics with the highest precision in situ, e.g. the source strength, the inelastic scattering of beta electrons in the tritium source, the retardation voltage and the work functions of the tritium source and the main spectrometer.

Start-up and beyond

After intense periods of commissioning during 2018, the tritium source activity was increased from its initial value of 0.5 GBq (which was used for the inauguration measurements) to 25 GBq (approximately 22% of nominal activity) in spring 2019. By April, the first KATRIN science run had begun and everything went like clockwork. The decisive source parameters – temperature, inlet pressure and tritium content – allowed excellent data to be taken, and the collaboration worked in several independent teams to analyse these data. The critical systematic uncertainties were determined both by Monte Carlo propagation and with the covariance-matrix method, and the analyses were also blinded so as not to generate bias. The excitement during the unblinding process was huge within the KATRIN collaboration, which gathered for this special event, and relief spread when the result became known. The neutrino-mass squared turned out to be compatible with zero within its uncertainty budget. The model fits the data very well (figure 3) and the fitted endpoint turned out to be compatible with the mass difference between 3He and tritium measured in Penning traps. The new results were presented at the international TAUP 2019 conference in Toyama, Japan, and have recently been published.

Figure 3

This first result shows that all aspects of the KATRIN experiment, from hardware to data acquisition to analysis, work as expected. The statistical uncertainty of the first KATRIN result is already smaller by a factor of two compared to previous experiments, and systematic uncertainties have gone down by a factor of six. A neutrino mass was not yet extracted from these first four weeks of data, but an upper limit for the neutrino mass of 1.1 eV (90% confidence) can be drawn, catapulting KATRIN directly to the top of the world of direct neutrino-mass experiments. In the mass region around 1 eV, the limit corresponds to the quasi-degenerate neutrino-mass range, where the mass splittings implied by neutrino-oscillation experiments are negligible compared to the absolute masses.

The neutrino-mass result from KATRIN is complementary to results obtained from searches for neutrinoless double beta decay, which are sensitive to the “coherent sum” mββ of all neutrino mass eigenstates contributing to the electron neutrino. Apart from additional phases that can lead to possible cancellations in this sum, the values of the nuclear matrix elements that need to be calculated to connect the neutrino mass mββ with the observable (the half-life) still carry uncertainties of a factor of two. Therefore, the result from a direct neutrino-mass determination is more closely connected to results from cosmological data, which give (model-dependent) access to the neutrino-mass sum.

A sizeable influence

Currently, KATRIN is taking more data and has already increased the source activity by a factor of four, to close to its design value. The background rate is still a challenge. Various measures, such as baking out the spectrometer and using liquid-nitrogen-cooled baffles in front of the getter pumps, have already yielded a background reduction by a factor of 10, and more will be implemented in the next few years. For the final KATRIN sensitivity of 0.2 eV (90% confidence) on the absolute neutrino-mass scale, a total of 1000 days of data are required. With this sensitivity KATRIN will either find the neutrino mass or set a stringent upper limit. The former would confront standard cosmology, while the latter would exclude quasi-degenerate neutrino masses and a sizeable influence of neutrinos on the formation of structure in the universe. This will be augmented by searches for physics beyond the Standard Model, such as for sterile-neutrino admixtures with masses from the eV to the keV scale.

Operators in the KATRIN control room

Neutrino-oscillation results yield a lower limit of about 10 meV (50 meV) for normal (inverted) mass ordering on the effective electron-neutrino mass that will manifest itself in direct neutrino-mass experiments. Therefore, many plans exist to cover this region in the future. At KATRIN, there is a strong R&D programme to upgrade the MAC-E filter principle from the current integral to a differential read-out, which will allow a factor-of-two improvement in sensitivity on the neutrino mass. New approaches to determine the absolute neutrino-mass scale are also being developed: Project 8, a radio-spectroscopy method to eventually be applied to an atomic tritium source; and the electron-capture experiments ECHo and HOLMES, which intend to deploy large arrays of cryogenic bolometers with the implanted isotope 163Ho. In parallel, the next generation of neutrinoless double-beta-decay experiments such as LEGEND, CUPID and nEXO (as well as future xenon-based dark-matter experiments) aim to cover the full range of the inverted neutrino-mass ordering. Finally, refined cosmological data should allow us to probe the same mass region (and beyond) within the next decades, while long-baseline neutrino-oscillation experiments, such as JUNO, DUNE and Hyper-Kamiokande, will probe the neutrino-mass ordering realised in nature. As a result of this broad programme for the 2020s, the elusive neutrino should finally yield some of its secrets and inner properties beyond mixing.

Who ordered all of that?

Masses of quarks and leptons

The origin of the three families of quarks and leptons and their extreme range of masses is a central mystery of particle physics. According to the Standard Model (SM), quarks and leptons come in complete families that interact identically with the gauge forces, leading to a remarkably successful quantitative theory describing practically all data at the quantum level. The various quark and lepton masses are described by having different interaction strengths with the Higgs doublet (figure 1, left), also leading to quark mixing and charge-parity (CP) violating transitions involving strange, bottom and charm quarks. However, the SM provides no understanding of the bizarre pattern of quark and lepton masses, quark mixing or CP violation.

In 1998 the SM suffered its strongest challenge to date with the decisive discovery of neutrino oscillations resolving the atmospheric neutrino anomaly and the long-standing problem of the low flux of electron neutrinos from the Sun. The observed neutrino oscillations require at least two non-zero but extremely small neutrino masses, around one ten millionth of the electron mass or so, and three sizeable mixing angles. However, since the minimal SM assumes massless neutrinos, the origin and nature of neutrino masses (i.e. whether they are Dirac or Majorana particles, the latter requiring the neutrino and antineutrino to be related by CP conjugation) and mixing is unclear, and many possible SM extensions have been proposed.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore, with the fermion mass hierarchy now spanning at least 12 orders of magnitude, from the neutrino to the top quark. However, it is not only the fermion mass hierarchy that is unsettling. There are now 28 free parameters in a Majorana-extended SM, including a whopping 22 associated with flavour, surely too many for a fundamental theory of nature. To restate Isidor Isaac Rabi’s famous question following the discovery of the muon in 1936: who ordered all of that?

A theory of flavour

Figure 1

There have been many attempts to formulate a theory beyond the SM that can address the flavour puzzles. Most attempt to enlarge the group structure of the SM describing the strong, weak and electromagnetic gauge forces: SU(3)C × SU(2)L × U(1)Y (see “A taste of flavour in elementary particle physics” panel). The basic premise is that, unlike in the SM, the three families are distinguished by some new quantum numbers associated with a new family or flavour symmetry group, Gfl, which is tacked onto the SM gauge group, enlarging the structure to Gfl × SU(3)C × SU(2)L × U(1)Y. The earliest ideas, dating back to the 1970s, include radiative fermion-mass generation, first proposed by Weinberg in 1972, who supposed that some Yukawa couplings might be forbidden at tree level by a flavour symmetry but generated effectively via loop diagrams. Alternatively, the Froggatt–Nielsen (FN) mechanism of 1979 assumed an additional U(1)fl symmetry under which the quarks and leptons carry various charges.

To account for family replication and to address the question of large lepton mixing, theorists have explored a larger non-Abelian family symmetry, SU(3)fl, where the three families are analogous to the three quark colours in quantum chromodynamics (QCD). Many other examples have been proposed based on subgroups of SU(3)fl, including discrete symmetries (figure 2, right). More recently, theorists have considered extra-dimensional models in which the Higgs field is located at a 4D brane, while the fermions are free to roam over the extra dimension, overlapping with the Higgs field in such a way as to result in hierarchical Yukawa couplings. Still other ideas include partial compositeness in which fermions may get hierarchical masses from the mixing between an elementary sector and a composite one. The possibilities are seemingly endless. However, all such theories share one common question: what is the scale, Mfl, (or scales) of new physics associated with flavour?

Since experiments at CERN and elsewhere have thoroughly probed the electroweak scale, all we can say for sure is that, unless the new physics is extremely weakly coupled, Mfl can be anywhere from the Planck scale (1019 GeV), where gravity becomes important, to the electroweak scale at the mass of the W boson (80 GeV). Thus the flavour scale is very unconstrained.

 

A taste of flavour in elementary particle physics

I I Rabi

The origin of flavour can be traced back to the discovery of the electron – the first elementary fermion – in 1897. Following the discovery of relativity and quantum mechanics, the electron and the photon became the subject of the most successful theory of all time: quantum electrodynamics (QED). However, the smallness of the electron mass (me = 0.511 MeV) compared to the mass of an atom has always intrigued physicists.

The mystery of the electron mass was compounded by the discovery in 1936 of the muon with a mass of 207 me but otherwise seemingly identical properties to the electron. This led Isidor Isaac Rabi to quip “who ordered that?”. Four decades later, an even heavier version of the electron was discovered, the tau lepton, with mass mτ = 17 mμ. Yet the seemingly arbitrary values of the masses of the charged leptons are only part of the story. It soon became clear that hadrons were made from quarks that come in three colour charges mediated by gluons under a SU(3)C gauge theory, quantum chromodynamics (QCD). The up and down quarks of the first family have intrinsic masses mu = 4 me and md = 10 me, accompanied by the charm and strange quarks (mc = 12 mμ and ms = 0.9 mμ) of a second family and the heavyweight top and bottom quarks (mt = 97 mτ and mb = 2.4 mτ) of a third family.

It was also realised that the different quark “flavours”, a term invented by Gell-Mann and Fritzsch, could undergo mixing transitions. For example, at the quark level the radioactive decay of a nucleus is explained by the transformation of a down quark into an up quark plus an electron and an electron antineutrino. Shortly after Pauli hypothesised the neutrino in 1930, Fermi proposed a theory of weak interactions based on a contact interaction between the four fermions, with a coupling strength given by a dimensionful constant GF, whose scale was later identified with the mass of the W boson: GF ∼ 1/mW2.

After decades of painstaking observation, including the discovery of parity violation, whereby only left-handed particles experience the weak interaction, Fermi’s theory of weak interactions and QED were merged into an electroweak theory based on SU(2)L × U(1)Y gauge theory. The left-handed (L) electron and neutrino form a doublet under SU(2)L, while the right-handed electron is a singlet, with the doublet and singlet carrying hypercharge U(1)Y and the pattern repeating for the second and third lepton families. Similarly, the left-handed up and down quarks form doublets, and so on. The electroweak SU(2)L× U(1)Y symmetry is spontaneously broken to U(1)QED by the vacuum expectation value of the neutral component of a new doublet of complex scalar boson fields called the Higgs doublet. After spontaneous symmetry breaking, this results in massive charged W and neutral Z gauge bosons, and a massive neutral scalar Higgs boson – a picture triumphantly confirmed by experiments at CERN.

To truly shed light on the flavour puzzle, theorists have explored symmetry groups larger than that of the Standard Model. The most promising approaches all involve a spontaneously broken family, or flavour, symmetry. But the flavour-breaking scale may lie anywhere from the Planck scale to the electroweak scale, with grand unified theories suggesting a high flavour scale, while recent hints of anomalies from LHCb and other experiments suggest a low one.

To illustrate the unknown magnitude of the flavour scale, consider for example the Froggatt–Nielsen (FN) mechanism, where Mfl is associated with the breaking of a U(1)fl symmetry. In the SM the top-quark mass of 173 GeV is given by a Yukawa coupling times the Higgs vacuum expectation value of 246 GeV divided by the square root of two. This implies a top-quark Yukawa coupling close to unity. The exact value is not important; what matters is that the top Yukawa coupling is of order unity. From this point of view, the top-quark mass is not at all puzzling – it is the other fermion masses, associated with much smaller Yukawa couplings, that require explanation. According to FN, the fermions are assigned various U(1)fl charges, and small Yukawa couplings are forbidden by the U(1)fl symmetry. The symmetry is broken by the vacuum expectation value of a new “flavon” field ⟨φ⟩, where φ is neutral under the SM but carries one unit of U(1)fl charge. Small Yukawa couplings then originate from an operator (figure 1, right) suppressed by powers of the small ratio ⟨φ⟩/Mfl (where Mfl acts as a cut-off scale of the contact interaction).
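The order-unity statement is easy to verify, as in this minimal sketch using only the two numbers quoted above:

```python
import math

m_top = 173.0  # top-quark mass in GeV
v = 246.0      # Higgs vacuum expectation value in GeV

# In the SM, m_t = y_t * v / sqrt(2); invert for the Yukawa coupling
y_top = math.sqrt(2) * m_top / v
print(f"y_top = {y_top:.3f}")  # ~0.995, i.e. of order unity
```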

For example, suppose that the ratio ⟨φ⟩/Mfl is identified with the Wolfenstein parameter λ = sinθC = 0.225 (where θC is the Cabibbo angle appearing in the CKM quark-mixing matrix). Then the fermion mass hierarchies can be explained by powers of this ratio, controlled by the assigned U(1)fl charges: me/mτ ∼ λ⁵, mμ/mτ ∼ λ², md/mb ∼ λ⁴, ms/mb ∼ λ², mu/mt ∼ λ⁸ and mc/mt ∼ λ⁴. This shows how fermion masses spanning many orders of magnitude may be interpreted as arising from integer U(1)fl charge assignments of less than 10. However, in this approach Mfl may lie anywhere from the Planck scale to the electroweak scale, by adjusting ⟨φ⟩ such that the ratio λ = ⟨φ⟩/Mfl is held fixed.
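To see how well these integer powers work, the short sketch below compares each power of λ with the corresponding measured mass ratio (approximate PDG masses; agreement is expected only up to order-one coefficients, which the FN mechanism does not predict):

```python
lam = 0.225  # Wolfenstein parameter, sin(theta_C)
masses = {   # approximate PDG masses in MeV
    "e": 0.511, "mu": 105.7, "tau": 1777.0,
    "u": 2.2, "d": 4.7, "s": 95.0,
    "c": 1270.0, "b": 4180.0, "t": 173000.0,
}
# (numerator, denominator, assigned power of lambda)
relations = [("e", "tau", 5), ("mu", "tau", 2), ("d", "b", 4),
             ("s", "b", 2), ("u", "t", 8), ("c", "t", 4)]

for num, den, n in relations:
    ratio = masses[num] / masses[den]
    print(f"m_{num}/m_{den} = {ratio:.1e}  vs  lambda^{n} = {lam**n:.1e}")
```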

One possibility for Mfl, reviewed by Kaladi Babu of Oklahoma State University in 2009, is that it is not too far from the scale of grand unified theories (GUTs), of order 10¹⁶ GeV, the scale at which the gauge couplings associated with the SM gauge group unify into a single gauge group. The simplest unifying group, SU(5)GUT, was proposed by Georgi and Glashow in 1974, following the work of Pati and Salam based on SU(4)C × SU(2)L × SU(2)R. Both these gauge groups can result from SO(10)GUT, identified by Fritzsch and Minkowski (and independently by Georgi), while many other GUT groups and subgroups have also been studied (figure 2, left). However, GUT groups by themselves only unify quarks and leptons within a given family, and while they may provide an explanation for why mb = 2.4 mτ, as discussed by Babu, they do not account for the fermion mass hierarchies.

Broken symmetries

Figure 2

A way around this, first suggested by Ramond in 1979, is to combine GUTs with family symmetry based on the product group GGUT × Gfl, with the symmetries acting in the specific directions shown in the figure “Family affair”. In order not to spoil the unification of the gauge couplings, the flavour-symmetry breaking scale is often assumed to be close to the GUT breaking scale. This also enables the dynamics of whatever breaks the GUT symmetry, be it Higgs fields or some mechanism associated with the compactification of extra dimensions, to be applied to the flavour breaking. Thus, in such theories, the GUT and flavour/family symmetries are both broken at or around Mfl ∼ MGUT ∼ 10¹⁶ GeV, as widely discussed by many authors. In this case, it would be impossible with known technology to access experimentally the underlying theory responsible for unification and flavour. Instead, we would need to rely on indirect probes such as proton decay (a generic prediction of GUTs, and hence of these enlarged SM structures proposed to explain flavour) and/or charged-lepton flavour-violating processes such as μ → eγ (see CERN Courier May/June 2019 p45).

New ideas for addressing the flavour problem continue to be developed. For example, motivated by string theory, Ferruccio Feruglio of the University of Padova suggested in 2017 that neutrino masses might be complex analytic functions called modular forms. The starting point of this novel idea is that non-Abelian discrete family symmetries may arise from superstring theory in compactified extra dimensions, as a finite subgroup of the modular symmetry of such theories (i.e. the symmetry associated with the non-unique choice of basis vectors spanning a given extra-dimensional lattice). It follows that the 4D effective Lagrangian must respect modular symmetry. This, Feruglio observed, implies that Yukawa couplings may be modular forms. So if the leptons transform as triplets under some finite subgroup of the modular symmetry, then the Yukawa couplings must also transform as triplets, but with a well-defined structure depending on only one free parameter: the complex modulus field. At a stroke, this removes the need for flavon fields and ad hoc vacuum alignments to break the family symmetry, and potentially greatly simplifies the particle content of the theory.

Compactification

Although this approach is being actively pursued, it is still unclear to what extent it may shed light on the entire flavour problem, including all the quark and lepton mass hierarchies. Alternative string-theory-motivated ideas for addressing the flavour problem are also being developed, including the idea that flavons can arise from the components of extra-dimensional gauge fields, and that their vacuum alignment may be achieved as a consequence of the compactification mechanism.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore

Recently, there have been experimental observations of possible violations of charged-lepton flavour universality, which hint that the flavour scale might not be associated with the GUT scale but might instead be just around the corner at the TeV scale (CERN Courier May/June 2019 p33). Recall that in the SM the charged leptons e, μ and τ interact identically with the gauge forces, and differ only in their masses, which result from having different Yukawa couplings to the Higgs doublet. This charged-lepton flavour universality has been the subject of intense experimental scrutiny over the years and has passed all the tests – until now. In recent years, anomalies have appeared associated with violations of charged-lepton flavour universality in final states associated with the quark transitions b → c and b → s.

Puzzle solving

In the case of b → c transitions, the final states involving τ leptons appear to violate charged-lepton universality. In particular, B → D(*)ℓν decays in which the charged lepton ℓ is a τ have been shown by BaBar and LHCb to occur at rates somewhat higher than those predicted by the SM (the ratios of such final states to those involving electrons and muons being denoted RD and RD*). This is quite puzzling, since all three types of charged lepton are predicted to couple to the W boson equally, and the decay is dominated by tree-level W exchange. Any new-physics contribution, such as the exchange of a new charged Higgs boson, a new W′ or a leptoquark, would have to compete with tree-level W exchange. However, the most recent measurements by Belle, reported at the beginning of 2019 (CERN Courier May/June 2019 p9), find RD and RD* to be closer to the SM prediction.

In the case of b → s transitions, the LHCb collaboration and other experiments have reported a number of anomalies in B → K(*)ℓ+ℓ− decays, such as the RK and RK* ratios of final states containing μ+μ− versus e+e−, which are measured to deviate from the SM by about 2.5 standard deviations. Such anomalies, if they persist, may be accounted for by a new contact operator coupling the four fermions bLsLμLμL, suppressed by a dimensionful coefficient 1/Mnew², where Mnew ~ 30 TeV according to a general operator analysis. This hints that there may be new physics arising from the non-universal couplings of a leptoquark and/or a new Z′, whose mass is typically a few TeV in order to generate such an operator (the 30 TeV scale being reduced to just a few TeV once mixing angles are taken into account). However, the introduction of these new particles increases the SM parameter count still further, and only serves to make the flavour problem of the SM worse.
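The quoted reduction from 30 TeV to a few TeV can be sketched with one line of arithmetic. In the sketch below the mediator's muon coupling is taken to be of order one and its flavour-changing quark coupling is suppressed by a CKM-like factor of order |Vts| ≈ 0.04; both numbers are illustrative assumptions, not values from the operator analysis itself:

```python
import math

M_eff = 30.0   # TeV: effective scale assuming unit couplings
g_mumu = 1.0   # assumed order-one coupling to muons (illustrative)
g_bs = 0.04    # assumed flavour-changing coupling ~ |V_ts| (illustrative)

# The Wilson coefficient g_bs * g_mumu / M^2 must equal 1 / M_eff^2,
# so the actual mediator mass is lighter by sqrt(g_bs * g_mumu):
M = M_eff * math.sqrt(g_bs * g_mumu)
print(f"M = {M:.0f} TeV")  # ~6 TeV: a few-TeV leptoquark or Z' suffices
```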

Link-up

Figure 3

Motivated by such considerations, it is tempting to speculate that these recent empirical hints of flavour non-universality may be linked to a possible theory of flavour. Several authors have explored such a connection: Riccardo Barbieri of Scuola Normale Superiore, Pisa, and collaborators, for example, have related these observations to a U(2)⁵ flavour symmetry in an effective-theory framework. In addition, concrete models have recently been constructed that directly relate the effective Yukawa couplings to the effective leptoquark and/or Z′ couplings. In such models the scale of new physics associated with the mass of the leptoquark and/or Z′ may be identified with the flavour scale Mfl defined earlier, except that it should be not too far from the TeV scale in order to explain the anomalies. To achieve the desired link, the effective leptoquark and/or Z′ couplings may be generated by the same kinds of operators responsible for the effective Higgs Yukawa couplings (figure 3).

In such a model the couplings of the leptoquarks and/or Z′ bosons may be related to the Higgs Yukawa couplings, with all couplings arising effectively from mixing with a vector-like fourth family. The model predicts, apart from the TeV-scale leptoquark and/or Z′ and a slightly heavier fourth family, extra flavour-changing processes such as τ → μμμ. The model in its current form does not have any family symmetry, and explains the hierarchy of the quark masses in terms of the vector-like fourth-family masses, which are free parameters. Crucially, the required TeV-scale Z′ mass is given by MZ′ ~ ⟨φ⟩ ~ TeV, which would fix the flavour scale at Mfl ~ a few TeV. In other words, if the hints of flavour anomalies hold up as further data are collected by LHCb, Belle II and other experiments, the origin of flavour may be right around the corner.

Leading physicists back future circular collider

An artist’s impression of a particle collision taking place at the Future Circular Collider. Credit: CERN.

The next major European project after the LHC should be a 100 km-circumference circular collider, argue more than 50 senior particle physicists in a preprint posted on arXiv at the end of last year. The authors – who include two previous CERN Council presidents, two former CERN Directors-General, leading members of the LHC experiments and high-energy theorists – say that the sequential electron–positron and hadron–hadron programme of the CERN-led Future Circular Collider (FCC) offers the most promising way to explore the Higgs sector in full detail and to extend substantially the reach for new physics, and is the best option for maintaining Europe’s place at the high-energy frontier during the coming decades.

“The combination of FCC-ee and FCC-hh will provide a forefront scientific programme for CERN for many decades, just as the combination of LEP and LHC has done,” says coauthor and former CERN Council president Michel Spiro. “We consider FCC to be a visionary programme for the future of CERN.”

“We consider FCC to be a visionary programme for the future of CERN”

The 15-page preprint comes as the update of the European strategy for particle physics enters its final stages, and notes that several important new facts have emerged during the past year: the FCC conceptual design reports were published in January; in March, Japan postponed the decision about an International Linear Collider to an indefinite date; in May, Europe discussed its particle-physics strategy at an open symposium in Granada, where several high-energy options were presented; and in September, the European Strategy Group (ESG) published a Physics Briefing Book and prepared a supporting note that included five possible scenarios for major new accelerator facilities and raised a number of important issues. In Europe the options for a post-LHC collider are the FCC and Compact Linear Collider (CLIC) projects, both proposed to be located at CERN. “The supporting note had the cardinal virtue of posing directly the central question: linear or circular?” write the authors. “We summarise our view on the key issues, which contain the answer to this question.”

The estimated physics reach of both machines is explored in detail. For an initial-stage 380 GeV CLIC or 365 GeV FCC-ee, the report finds that both machines address, in comparable ways, the number-one priority of the particle-physics community: a fuller exploration of the Higgs sector, together with top-quark physics. FCC compares favourably with CLIC on the expected accuracy of the Higgs couplings, it claims, and its much higher luminosity means it can operate as a “tera-Z” and WW facility, providing a new generation of precision electroweak measurements. FCC-ee combines several new accelerator technologies, but “will be built using the vast experience accumulated with previous circular electron-positron colliders,” notes the report. CLIC would require “a vertical beam size in the collision region at the nanometre level”, and the authors raise concerns that CLIC would be restricted to electron–positron collisions with only a single interaction point and one experimental facility.

The differences between the physics reach of a linear and a circular machine become sharper for “stage 2”: a 1–3 TeV CLIC or a 100 TeV FCC-hh. Here, the authors conclude that CLIC would have very interesting capabilities for physics exploration, such as double-Higgs production, assuming that the design performance is achieved, whereas a 100 TeV FCC-hh opens a new energy regime, provided the 16 T magnet technology can be mastered technically and cost-effectively. FCC would have the last word on Standard Model measurements, and “an unrivalled discovery potential, with an increased reach for direct discovery at the highest masses”.

Whichever project is chosen, the necessary time and resources will require a new style for CERN

Both CLIC and FCC require a new scale of investment, the report notes, and success in this formidable task may be achieved “only if the particle physics community at large shows overwhelming support for the recommended programme”. The authors note that the integrated FCC-ee and FCC-hh programme is estimated to be a factor of 1.5 more expensive than a 3 TeV CLIC, but would provide a greater range of research opportunities for a larger physics community over a longer time span. The costs should also be seen in the perspective of the long timeframes of these programmes, each of which would extend over several decades, as well as of the expected physics advances.

Whichever project is chosen, conclude the authors, the necessary time and resources will require a new style for CERN, for the particle-physics community – including innovative ways of guiding the careers of young researchers – and for the interaction between politics and society. “The FCC programme that we support will keep particle physics at the high-energy frontier vibrant, but it will require a deep and lasting commitment by society to fundamental research, which the high-energy community must strive to merit and justify,” says Spiro.

The next step in the European strategy update is the ESG drafting session taking place in Bad Honnef, Germany, on 20–24 January. Recommendations to CERN will be formally presented at an event in Budapest in May.

Beauty baryons strike again

The spectrum of the difference in invariant mass between the Ξb0K− combination and the Ξb0 candidate. The fitted masses of the four peaks are 6315.64 ± 0.31 ± 0.07 ± 0.50 MeV, 6330.30 ± 0.28 ± 0.07 ± 0.50 MeV, 6339.71 ± 0.26 ± 0.05 ± 0.50 MeV and 6349.88 ± 0.35 ± 0.05 ± 0.50 MeV, where the uncertainties are statistical, systematic, and due to the uncertainty on the world-average Ξb0 mass of 5791.9 ± 0.5 MeV. Credit: LHCb

The LHCb experiment has observed new beauty-baryon states, consistent with theoretical expectations for excited Ωb− (bss) baryons. The Ωb− (first observed a decade ago at the Tevatron) is a higher-mass partner of the Ω− (sss), the 1964 discovery of which famously validated the quark model of hadrons. The new LHCb finding will help to test models of hadronic states, including some that predict exotic structures such as pentaquarks.

The LHCb collaboration has uncovered numerous new baryons and mesons during the past eight years, bringing a wealth of information to the field of hadron spectroscopy. Critical to the search for new hadrons is the unique capability of the experiment to trigger on fully hadronic beauty- and charm-hadron decays, distinguish protons, kaons and pions from one another using ring-imaging Cherenkov detectors, and reconstruct secondary and tertiary decay vertices with a silicon vertex detector.

LHCb physicists searched for excited Ωb− states via strong decays to Ξb0K−, where the Ξb0 (bsu) in turn decays weakly through Ξb0 → Ξc+π− and Ξc+ → pK−π+. Using the full data sample collected during LHC Run 1 and Run 2, a very large and clean sample of about 19,000 Ξb0 signal decays was obtained. These Ξb0 candidates were then combined with a K− candidate coming from the same primary interaction. Combinations with the wrong sign (Ξb0K+), where no Ωb states are expected, were used to study the background. This control sample was used to tune particle-identification requirements to reject misidentified pions, reducing the background by a factor of 2.5 while keeping an efficiency of 85% on simulated signal decays.

The search used the difference in invariant mass, δM = M(Ξb0K−) − M(Ξb0), with the δM resolution determined to be approximately 0.7 MeV using simulated signal decays. (For comparison, the resolution is about 15 MeV for the Ξb0 decay.) Several peaks can be seen by eye (see figure), but a fit is needed to measure their properties. To help constrain the background shape, the wrong-sign δM spectrum (not shown) is fitted simultaneously with the signal mode. Each peak is described by a relativistic Breit–Wigner convolved with a resolution function.
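As a rough illustration of such a fit model (not the LHCb analysis code, and with illustrative peak parameters), the following Python sketch convolves a relativistic Breit–Wigner with a 0.7 MeV Gaussian resolution:

```python
import numpy as np

def rel_breit_wigner(m, m0, gamma):
    """Relativistic Breit-Wigner line shape (unnormalised)."""
    return 1.0 / ((m**2 - m0**2)**2 + (m0 * gamma)**2)

# deltaM grid in MeV; the peak position and width are illustrative only
dm = np.linspace(540.0, 580.0, 4001)
step = dm[1] - dm[0]
signal = rel_breit_wigner(dm, m0=558.0, gamma=1.4)

# Gaussian resolution function with sigma = 0.7 MeV, as quoted in the text
sigma = 0.7
kx = np.arange(-5 * sigma, 5 * sigma + step, step)
kernel = np.exp(-0.5 * (kx / sigma) ** 2)
kernel /= kernel.sum()

smeared = np.convolve(signal, kernel, mode="same")
print(f"smeared peak at deltaM = {dm[np.argmax(smeared)]:.2f} MeV")
```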

The width of the Ωb(6350) shows the most significant deviation from zero

Four peaks, corresponding to four excited Ωb states, were included in the fit. Following the usual convention, the new states were named according to their approximate masses: Ωb(6316), Ωb(6330), Ωb(6340) and Ωb(6350). Each mass was measured with a precision well below 1 MeV, with the errors dominated by the uncertainty on the world-average Ξb0 mass. All four peaks are narrow. The width of the Ωb(6350) shows the most significant deviation from zero, with a central value of 1.4 +1.0 −0.8 ± 0.1 MeV. The two lower-mass peaks have significances below three standard deviations (2.1σ and 2.6σ) and so are not considered conclusive observations. But the two higher-mass peaks have significances of 6.7σ and 6.2σ, above the 5σ threshold for discovery.

The new states seen by LHCb follow a similar pattern to the five narrow peaks observed in the Ξc+K− invariant-mass spectrum by the collaboration in 2017. It has proven difficult to explain all five as excited Ωc0 (css) states, raising the possibility that at least one of the Ξc+K− peaks is a pentaquark or a molecular state. Since the Ξc+K− and Ξb0K− final states differ only by replacing a c quark with a b quark, the two analyses together should provide strong constraints on any models that aim to explain the structures in these mass spectra.

 

Rekindled Atomki anomaly merits closer scrutiny

A large discrepancy in nuclear decay rates spotted four years ago in an experiment in Hungary has received new experimental support, generating media headlines about the possible existence of a fifth force of nature.

In 2015, researchers at the Institute of Nuclear Research (“Atomki”) in Debrecen, Hungary, reported a large excess in the angular distribution of e+e− pairs created during nuclear transitions of excited 8Be nuclei to their ground state (8Be* → 8Be γ; γ → e+e−). A significant peak-like enhancement was observed at large angles between the e+e− pairs, corresponding to a 6.8σ surplus over the expected e+e− pair creation from known processes. The excess was soon interpreted by theorists as being due to the possible emission of a new boson, X, with a mass of 16.7 MeV decaying into e+e− pairs.
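The 16.7 MeV figure follows from simple pair kinematics: neglecting the electron mass, the invariant mass of the pair obeys mX² = 2E+E−(1 − cosθ), so a fixed mX produces a bump at a characteristic opening angle θ. The sketch below assumes the pair shares the transition energy symmetrically (a simplification) and uses approximate transition energies of 18.15 MeV for 8Be and 20.2 MeV for 4He:

```python
import math

def opening_angle_deg(m_x, e_total):
    """e+e- opening angle for a boson of mass m_x (MeV), assuming a
    symmetric energy split and ultra-relativistic electrons."""
    e = e_total / 2.0
    cos_theta = 1.0 - m_x**2 / (2.0 * e * e)
    return math.degrees(math.acos(cos_theta))

m_x = 16.7  # MeV: proposed X17 mass
print(f"8Be (18.15 MeV): {opening_angle_deg(m_x, 18.15):.0f} deg")  # ~134
print(f"4He (20.2 MeV):  {opening_angle_deg(m_x, 20.2):.0f} deg")   # ~112
```

Note how the same mX gives a smaller angle for the more energetic 4He transition – the angle shift Feng highlights below.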

In a preprint published in October 2019, the Atomki team has now reported a similar excess of events from the electromagnetically forbidden “M0” transition in 4He nuclei. The anomaly has a statistical significance of 7.2σ and is likely, claim the authors, to be due to the same “X17” particle proposed to explain the earlier 8Be excess.

Quality control

“We were all very happy when we saw this,” says lead author Attila Krasznahorkay. “After the analysis of the data a really significant effect could be observed.” Although not a fully blinded analysis, Krasznahorkay says the team took several precautions against bias and carried out numerous cross-checks of its result. These include checks for the effect in the angular correlation of e+e− pairs in different regions of the energy distribution, and assuming different beam and target positions. The paper does not go into the details of systematic errors, for instance due to possible nuclear-modelling uncertainties, but Krasznahorkay says that, overall, the result is in “full agreement” with the Monte Carlo simulations performed for the X17 decay.

The Atomki team with the apparatus used for the latest beryllium and helium results, which detects electron-positron pairs from the de-excitation of nuclei produced by firing protons at different targets. Credit: Atomki

While it cannot yet be ruled out, the existence of an X boson is not naively expected, say theorists. For one, such a particle would have to “know” about the distinction between up and down quarks, and thus about electroweak symmetry breaking. Being a vector boson, the X17 would constitute a new force. It could also be related to the dark-matter problem, write Krasznahorkay and co-workers, and could help resolve the discrepancy between measured and predicted values of the muon magnetic moment.

Last year, the NA64 collaboration at CERN reported results from a direct search for the X boson via the bremsstrahlung reaction eZ → eZX, the absence of a signal placing the first exclusion limits on the X–e coupling in the range (1.3–4.2) × 10⁻⁴. “The Atomki anomaly could be an experimental effect, a nuclear-physics effect or something completely new,” comments NA64 spokesperson Sergei Gninenko. “Our results so far exclude only a fraction of the allowed parameter space for the X boson, so I’m really interested in seeing how this story, which is only just beginning, will unfold.” Last year, researchers used data from the BESIII experiment in China to search for direct X-boson production in electron–positron collisions and indirect production in J/ψ decays – finding no signal. Krasznahorkay and colleagues also point to the potential of beam-dump experiments such as PADME in Frascati, and to the upcoming DarkLight experiment at Jefferson Laboratory, which will search for 10–100 MeV dark photons.

I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect

Jonathan Feng

Theorist Jonathan Feng of the University of California at Irvine, whose group proposed the X-boson hypothesis in 2016, says that the new 4He results from Atomki support the previous 8Be evidence for a new particle – particularly since the excess is observed at a slightly different e+e− opening angle in 4He (115°) than in 8Be (135°). “If it is an experimental error or some nuclear-physics effect, there is no reason for the excess to shift to different angles, but if it is a new particle, this is exactly what is expected,” says Feng. “I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect.”

Data details

In 2017, theorists Gerald Miller at the University of Washington and Xilin Zhang at Ohio State University concluded that, if the Atomki data are correct, the original 8Be excess cannot be explained by nuclear-physics modelling uncertainties. But they also noted that a direct comparison to the e+e− data is not feasible owing to “missing public information” about the experimental detector efficiency. “Tuning the normalisation of our results reduces the confidence level of the anomaly by at least one standard deviation,” says Miller. As for the latest Atomki result, the nuclear physics in 4He is more complicated than in 8Be because two nuclear levels are involved, explains Miller, making it difficult to carry out an analysis analogous to the 8Be one. “For 4He there is also a background pair-production mechanism and interference effect that is not mentioned in the paper, much of which is devoted to the theory and other future experiments,” he says. “I think the authors would have been better served if they presented a fuller account of their data because, ultimately, this is an experimental issue. Confirming or refuting this discovery by future nuclear experiments would be extremely important. A monumental discovery could be possible.”

A monumental discovery could be possible

Gerald Miller

The Hungarian team is now planning to repeat the measurement with a new gamma-ray coincidence spectrometer at Atomki (see main image), which they say might help to distinguish between the vector and pseudoscalar interpretations of the X17. Meanwhile, a project called New JEDI will enable an independent verification of the 8Be anomaly at the ARAMIS-SCALP facility in Orsay, France, during 2020, followed by direct searches by the same group for the X boson, in particular in other light quantum systems, at the GANIL-SPIRAL2 facility in Caen, France.

“Many people are sceptical that this is a new particle,” says Feng, who was himself doubtful at first. “But at this point, what we need are new ideas about what can cause this anomaly. The Atomki group has now found the effect in two different decays. It would be most helpful for other groups to step forward to confirm or refute their results.”
