

New constraints on charm–quark hadronisation

One of the most useful ways to understand the properties of the quark–gluon plasma (QGP) formed in relativistic heavy-ion collisions is to study how various probes interact when propagating through it. Heavier quarks, such as charm, can provide unique insights as they are produced early in the collisions, and their interactions with the QGP differ from those of their lighter cousins. One important input to these studies is a detailed understanding of hadronisation, the process by which quarks form experimentally detectable mesons and baryons.

The lightest charm baryon and meson are the Λ+c (udc) and the D0 (cu̅). In proton–proton (pp) collisions, charm hadrons are formed by fragmentation, in which charm quarks and antiquarks move away from each other and combine with newly generated quarks. In heavy-ion collisions, hadron production can also occur via “coalescence”, whereby charm quarks combine with other quarks while traversing the QGP. The contribution of coalescence depends strongly on the transverse momentum (pT) of the hadrons, and is expected to be much more significant for charm baryons than for charm mesons, as baryons contain more quarks.

The CMS experiment has recently determined the Λ+c/D0 yield ratio over a broad range of pT using the Λ+c → pK−π+ and D0 → K−π+ decay channels in both pp and lead–lead (PbPb) collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. Comparing the behaviour of the Λ+c/D0 ratio in different collision systems allows physicists to study the relative contributions of fragmentation and coalescence.

The measured Λ+c/D0 production cross-section ratio in pp collisions (figure 1) is found to be significantly larger than that calculated with the standard version of the popular Monte Carlo event generator PYTHIA, while the inclusion of an improved description of the fragmentation (“PYTHIA8+CR”) better describes the CMS data. The data can also be reasonably described by a different model that includes Λ+c baryons produced in the decays of excited charm baryons (dashed line). However, an attempt to incorporate the coalescence process characteristic of hadron production in heavy-ion collisions (solid line) fails to reproduce the pp-collision measurements.

The CMS collaboration also measured Λ+c production in PbPb collisions. The Λ+c/D0 production ratio for pT > 10 GeV/c is found to be consistent with that from pp collisions. This similarity suggests that the coalescence process does not contribute significantly to charm-hadron production in this pT range for PbPb collisions. These are the first measurements of the ratios at high pT for both the pp and PbPb systems at a nucleon–nucleon centre-of-mass energy of 5.02 TeV.

In late 2018, CMS collected data corresponding to about 10 times more PbPb collisions than were used in the current measurement. These will shed new light on the interplay between the different processes in charm–quark hadronisation in heavy-ion collisions. In the meantime, the current results highlight the lack of understanding of charm–quark hadronisation in pp collisions, a subject that requires further experimental measurements and theoretical studies.

Studying neutron stars in the laboratory

A report from the ALICE experiment

Neutron stars consist of extremely dense nuclear matter. Their maximum size and mass are determined by their equation of state, which in turn depends on the interaction potentials between nucleons. Due to the high density, not only neutrons but also heavier strange baryons may play a role.

The main experimental information on the interaction potentials between nucleons and strange baryons comes from bubble-chamber scattering experiments with strange-hadron beams undertaken at CERN in the 1960s, and is limited in precision due to the short lifetimes (< 200 ps) of the hadrons. The ALICE collaboration is now using the scattering between particles produced in collisions at the LHC to constrain interaction potentials in a new way. So far, pK, pΛ, pΣ0, pΞ and pΩ interactions have been investigated. Recent data have already yielded the first evidence for a strong attractive interaction between the proton and the Ξ baryon.

Strong final-state interactions between pairs of particles make their momenta more parallel to each other in the case of an attractive interaction, and increase the opening angle between them in the case of a repulsive interaction. The attractive potential of the p-Ξ interaction was observed by measuring the correlation of pairs of protons and Ξ particles as a function of their relative momentum (the correlation function) and comparing it with theoretical calculations based on different interaction potentials. This technique is referred to as “femtoscopy” since it simultaneously measures the size of the region in which particles are produced and the interaction potential between them.
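In practice the correlation function is built as the ratio of the relative-momentum (k*) distribution of pairs from the same event to that of “mixed” pairs drawn from different events, which carry no final-state interaction. The toy sketch below is not ALICE's actual analysis; the Gaussian momenta, the non-relativistic k* definition and the low-k* enhancement are purely illustrative assumptions used to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def k_star(p1, p2):
    # Simplified relative momentum of a pair: half the magnitude of the
    # momentum difference (a non-relativistic stand-in for the real k*).
    return 0.5 * np.linalg.norm(p1 - p2, axis=-1)

# Toy "events": one proton and one Xi per event, momenta in GeV/c.
n_events = 50000
protons = rng.normal(0.0, 0.5, size=(n_events, 3))
cascades = rng.normal(0.0, 0.5, size=(n_events, 3))

# Same-event pairs; mimic an attractive final-state interaction by
# duplicating low-k* pairs (an entirely artificial enhancement).
k_same = k_star(protons, cascades)
k_same = np.concatenate([k_same, k_same[k_same < 0.2]])

# Mixed-event pairs: proton from one event, Xi from another, so no
# final-state interaction is present by construction.
k_mixed = k_star(protons, np.roll(cascades, 1, axis=0))

# C(k*) = N_same(k*) / N_mixed(k*), normalised to 1 at large k*.
bins = np.linspace(0.0, 1.0, 11)
same_hist, _ = np.histogram(k_same, bins=bins)
mixed_hist, _ = np.histogram(k_mixed, bins=bins)
norm = same_hist[-5:].sum() / mixed_hist[-5:].sum()
C = same_hist / (norm * np.maximum(mixed_hist, 1))
```

An attractive interaction shows up as C(k*) > 1 at low relative momentum and a repulsive one as a depletion; comparing the measured shape with calculations for different potentials is what constrains the interaction.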

Data from proton–lead collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV show that p-Ξ pairs are produced at very small distances (~1.4 fm); the measured correlation is therefore sensitive to the short-range strong interaction. The measured p-Ξ correlations were found to be stronger than theoretical correlation functions with only a Coulomb interaction, whereas the prediction obtained by including both the Coulomb and strong interactions (as calculated by the HAL-QCD collaboration) agrees with the data (figure 1).

As a first step towards evaluating the impact of these results on models of neutron-star matter, the HAL-QCD interaction potential was used to compute the single-particle potential of Ξ within neutron-rich matter. A slightly repulsive interaction was inferred (of the order of 6 MeV, compared to the 1322 MeV mass of the Ξ), leading to better constraints on the equation of state for dense hadronic systems that contain Ξ particles. This is an important step towards determining the equation of state for dense and cold nuclear matter with strange hadrons.

Three-body B+ decays violate CP

New sources of CP violation (CPV) are needed to explain the absence of antimatter in our matter-dominated universe. The LHCb collaboration has reported new results describing CPV in B+ → π+K+K− and B+ → π+π+π− decays. Until very recently, all observations of CPV in B mesons were made in two-body and quasi-two-body decays; however, it has long been conjectured that the complex dynamics of multi-body decays could give rise to other manifestations. For CPV to occur in B decays, competing decay amplitudes with different weak phases (which change sign under CP) and strong phases (which do not) are required. The weak-phase differences are tied to fundamental parameters of the Standard Model (SM), but the strong-phase differences can arise from loop-diagram contributions, final-state re-scattering effects, and phases associated with intermediate resonant structure.
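The role of the two kinds of phase can be illustrated with a toy model of two interfering amplitudes, in which only the weak phase flips sign under CP; the asymmetry between the B+ and B− rates then vanishes unless both phase differences are non-zero. A minimal sketch, in which all magnitudes and phases are illustrative choices rather than fitted values:

```python
import cmath

def rate(a1, a2, phi, delta):
    # |A|^2 for two interfering amplitudes: phi is the weak phase
    # (flips sign under CP), delta the strong phase (does not).
    A = a1 + a2 * cmath.exp(1j * (phi + delta))
    return abs(A) ** 2

def a_cp(a1, a2, phi, delta):
    # CP asymmetry between the B- and B+ rates.
    gamma = rate(a1, a2, phi, delta)        # B+ decay rate
    gamma_bar = rate(a1, a2, -phi, delta)   # B-: only the weak phase flips
    return (gamma_bar - gamma) / (gamma_bar + gamma)

print(a_cp(1.0, 0.5, 0.8, 1.2))  # both phase differences non-zero: sizeable asymmetry
print(a_cp(1.0, 0.5, 0.8, 0.0))  # no strong-phase difference: asymmetry vanishes
print(a_cp(1.0, 0.5, 0.0, 1.2))  # no weak-phase difference: asymmetry vanishes
```

In this two-amplitude model the numerator works out to be proportional to sin(phi)·sin(delta), which makes explicit why both a weak-phase and a strong-phase difference are needed.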

The three-body B decays under study proceed mainly via various intermediate resonances – effectively, a cascade of two-body decays – but also include contributions from non-resonant three-body interactions. The phase space is two-dimensional (it can be fully described by two kinematic variables) and its size allows a rich tapestry of resonant structures to emerge, bringing quantum-mechanical interference into play. Much as in Young’s double-slit experiment, the total amplitude comprises the sum of all possible decay paths. The interference pattern and its phase variation could contribute to CPV in regions where resonances overlap.

One of the most intriguing LHCb results was the 2014 observation of large CPV effects in certain phase-space regions of B+ → π+K+K− and B+ → π+π+π− decays. In the new analysis, these effects are described with explicit amplitude models for the first time (figure 1). A crucial step in the phenomenological description of these amplitudes is to include unitarity-conserving couplings between final states, most notably ππ ↔ KK. Accounting for these couplings is essential to accurately model the complex S-wave component of the decays – the configuration with no relative angular momentum between a pair of oppositely charged final-state particles – which contains broad resonances that are difficult to model. Three complementary approaches were deployed to describe this complicated spin-0 component of the B+ → π+π+π− decay: the classical isobar model, which explicitly associates a line shape with a clear physical interpretation to each contribution in the phase space; the K-matrix method, which takes data from scattering experiments as an input; and a quasi-model-independent approach, in which the S-wave magnitude and phase are extracted directly from the data.

LHCb’s amplitude analyses of these decays are based on data from Run 1 of the LHC and contain several groundbreaking results, including the largest CP asymmetry observed in a single component of an amplitude analysis, found in the ππ ↔ KK re-scattering amplitude; the first observation of CPV in the interference between intermediate states, seen in the overlap between the dominant spin-1 ρ(770)0 resonance and the π+π− S-wave; and the first observation of CPV involving a spin-2 resonance of any kind, found in the decay B+ → f2(1270)π+. These results provide significant new insights into how CPV in the SM manifests in practice, and motivate further study, particularly of the strong-phase-generating QCD processes that govern CP violation.

The difficult work begins

The open symposium of the European Strategy for Particle Physics (ESPP) update, which drew to a close last week in Granada, Spain, was a moment for physicists to take stock of their field’s status and future. A week of high-quality presentations and focused discussions proved how far things have moved on since the previous strategy update concluded in 2013. In the past few years the LHC has proved the existence of the Higgs boson and so far suggested that there are no new particles beyond the SM at the electroweak scale. Spectacular progress has been made with neutrinos, dark-matter searches, flavour and electroweak physics, and gravitational-wave astronomy is beginning to take off. The deepest puzzles of the standard models of particle physics and cosmology remain at large, however, and large colliders are one of the best tools to address them.

Recommendations from the ESPP are due early next year. Dominating discussions at the open symposium last week was the question of which project should succeed the LHC after its operations cease in the 2030s. The decision has significant consequences for the next generation of particle physicists, not just in Europe but internationally. Perspectives from Asia and the Americas, in addition to national views and inputs from the astroparticle- and nuclear-physics communities, brought into sharp focus the global nature of modern high-energy physics and the need for greater coordination at all levels.

The 130 or so talks and discussion sessions in Granada revealed a community united in its desire for a post-LHC collider, but less so in its choice of that collider’s form. Enormous efforts have gone into weighing up the physics reach of the various projects under study, a task complicated by the complexity of future accelerator technologies, detectors and analyses. Stimulating some heated exchanges, the ESPP saw the International Linear Collider (ILC) in Japan, a Compact Linear Collider (CLIC) or future circular electron–positron collider (FCC-ee) at CERN and a Circular Electron Positron Collider in China (CEPC) pitted against each other and against expectations from the high-luminosity LHC in terms of their potential in key areas such as Higgs physics.

Summary sessions

Summing up the situation for beyond-SM (BSM) physics, Gian Giudice of CERN said that the remaining BSM-physics space is “huge”, and pointed to four big questions for colliders: To what extent can we tell whether the Higgs is fundamental or composite? Are there new interactions or new particles around or above the electroweak scale? Which cases of thermal-relic WIMPs are still unprobed and can be fully covered by future collider searches? And to what extent can current or future accelerators probe feebly interacting sectors?

Neutrinos, the least well known of all the SM particles, were the subject of numerous presentations. The ESPP audience was reminded that neutrino masses, as established by neutrino oscillations, are the first particle-physics evidence for BSM phenomena. A vibrant programme is under way to fully measure the neutrino mixing matrix, and in particular the neutrino mass ordering and CP-violating phase. Other experiments are probing the neutrino’s absolute mass scale and testing whether neutrinos are Dirac or Majorana particles. Along with gravitational waves, neutrinos play a powerful role in multi-messenger astronomy.

Around a fifth of the 160 input documents to the ESPP were linked to flavour physics, covering topics such as lepton-flavour universality, electric-dipole moments and heavy-flavour studies.

Flavour physics is crucial for BSM searches since it is potentially sensitive to effects at scales as high as 10⁵ TeV, said Antonio Zoccoli of INFN in his summary. There is also much complementarity between low-energy physics, the high-energy frontier and searches for feebly interacting particles, he said. Oddities in b decays seen by the LHCb collaboration are of particular interest. “Flavour is a major legacy of LHC,” Zoccoli concluded. “Charged hadron particle-ID should be mandatory for a full physics programme at future colliders.”

Summarising ESPP sessions on dark-matter and dark-sector physics, Shoji Asai of the University of Tokyo drew attention to a shift in sociology that is taking place. In the old view, dark-matter solutions arose as a byproduct of “top-down” approaches (such as supersymmetry) to solving the SM’s problems. The “new sociology” holds that dark matter needs an explanation of its own, and it is considered a bonus if such a solution also elucidates important issues such as the strong-CP problem or baryogenesis. Among the “big questions” identified in this sector at the ESPP update were: What are the main differences between light hidden-sector dark matter and WIMPs? How broad is the parameter space for the QCD axion? How do we compare the results of different experiments in a more model-independent way? And how will direct and indirect dark-matter detection experiments inform and guide accelerator searches, and vice versa? Asai said that consensus has emerged on the need for more coordination and support between accelerator-based, direct-detection and indirect-detection dark-sector searches, as exemplified by the new European Center for AstroParticle Theory.

In summarising interests in the strong sector, Jorgen D’Hondt of Vrije Universiteit Brussel listed the many dedicated experiments in this area and the open questions identified at the ESPP symposium: “What are the experimental and theoretical prerequisites to reach an adequate precision of perturbative and non-perturbative QCD predictions at the highest energies? What can be learned from beams-on-target experiments at current and potential future accelerators? How to probe the quark–gluon plasma equation of state and to establish whether there is a first-order phase transition at high baryon density? What is known about the make-up of the proton (mass, radius, spin, etc) and how to extract it? And what is the role of strong interactions at very low and very high (up to astrophysical) energies?”

Electroweak sparks

Of all the scientific themes of the week, electroweak physics generated the most lively discussions, especially concerning how well the Higgs boson’s couplings to fermions, gauge bosons and to itself can be probed at current and future colliders. Summary speaker Beate Heinemann of DESY cautioned that such quantitative estimates should be treated with a degree of flexibility at this time, though a few things stand out: one is the impressive estimated performance from the HL-LHC in the next 15 or so years; another is that a long-term physics programme based on successive machines in a 100 km-circumference tunnel offers the largest overall physics reach on the Higgs boson and other key parameters. The long timescales required to master the technology for the next hadron collider were well noted. There is broad agreement that the next major collider after the LHC should collide electrons and positrons to fully explore the Higgs boson and make precision measurements of other electroweak parameters that are sensitive to phenomena at higher energy scales. Whether that machine is circular or linear, and built in Asia or Europe, are the billion-dollar questions facing the community now.

The closer involvement of particle physics with astroparticle physics, in particular following the discovery of gravitational waves, was the running theme of the open symposium. It was argued that, in terms of technology, next-generation gravitational-wave detectors such as the Einstein Telescope are essentially “accelerators without beams” and that CERN’s expertise in vacuum and cryogenic technologies (a result of the lab’s continual pursuit and execution of big-collider projects) would help to make such facilities a reality.

The closing discussion of the symposium offered a final hour for physicists to air their views, many of which were met with applause. Proponents of circular machines highlighted the high flexibility and exploratory potential of projects such as FCC-ee, pointing out that it would serve as an electroweak as well as a Higgs factory. Linear-minded participants cited factors such as the extendable nature of linacs, and the independence of their tunnels from a subsequent hadron collider. For others, the priority for CERN should be to enter negotiations as soon as possible for a 100 km tunnel in the Geneva region, buying time to decide which physics option should be installed. Warm applause followed a remark that CERN decides for itself what its next project should be, without relying on other labs. But there were reminders from others that high-energy physics is an international field and that, in times of scarce resources, all options should be considered.

The high-energy physics community has risen to the occasion of the ESPP update. New thinking, from basic theory to instrumentation, computing, analysis and global organisation, is clearly required to sustain the recent rate of progress. Now that the open symposium is over, the European Strategy Group (ESG) will start to prepare a briefing book. Further input can be submitted to the strategy secretariat during the next months, and at a special session organised by the European Committee for Future Accelerators on 14 July 2019 during the European Physical Society Conference on High Energy Physics in Ghent, Belgium. An ESG drafting session will take place on 20–24 January 2020 in Bad Honnef, Germany, and the update of the ESPP is due to be completed and approved by the CERN Council in May 2020.

Multi-messenger adventures

Recent years have seen enormous progress in astroparticle physics, with the detection of gravitational waves, very-high-energy neutrinos, combined neutrino–gamma observation and the discovery of a binary neutron-star merger, which was seen across the electromagnetic spectrum by some 70 observatories. These important advances opened a new and fascinating era for multi-messenger astronomy, which is the study of astronomical phenomena based on the coordinated observation and interpretation of disparate “messenger” signals.

This book, first published in 2015, has now been released in a revised edition that incorporates these recent discoveries and describes present lines of research.

The Standard Model (SM) of particle physics and the lambda-cold-dark-matter theory, also referred to as the SM of cosmology, have both proved to be tremendously successful. However, they leave a few important unsolved puzzles. One issue is that we are still missing a description of the main ingredients of the universe from an energy-budget perspective. This volume provides a clear and updated description of the field, preparing and possibly inspiring students towards a solution to these puzzles.

The book introduces particle physics together with astrophysics and cosmology, starting from experiments and observations. Written by experimentalists actively working on astroparticle physics and with extensive experience in sub-nuclear physics, it provides a unified view of these fields, reflecting the very rapid advances that are being made.

The first eight chapters are devoted to the construction of the SM of particle physics, beginning from the Rutherford experiment up to the discovery of the Higgs particle and the study of its decay channels. The next chapter describes the SM of cosmology and the dark universe. Starting from the observational pillars of cosmology (the expansion of the universe, the cosmic microwave background and primordial nucleosynthesis), it moves on to a discussion about the origins and the future of our universe. Astrophysical evidence for dark matter is presented and its possible constituents and their detection are discussed. A separate chapter is devoted to neutrinos, covering natural and man-made sources; it presents the state of the art and the future prospects in a detailed way. Next, the “messengers from the high-energy universe”, such as high-energy charged cosmic rays, gamma rays, neutrinos and gravitational waves, are explored. A final chapter is devoted to astrobiology and the relations between fundamental physics and life.

This book offers a well-balanced introduction to particle and astroparticle physics, requiring only a basic background in classical and quantum physics. It is certainly a valuable resource that can be used as a self-study book, a reference or a textbook. In the preface, the authors suggest how different parts of the book can serve as introductory courses on particle physics and astrophysics, and for advanced classes on high-energy astroparticle physics. Its 700+ pages allow for a detailed and clear presentation of the material, contain many useful references and include proposed exercises.

DESY’s astroparticle aspirations

What is your definition of astroparticle physics?

There is no general definition, but let me try nevertheless. Astroparticle physics addresses astrophysical questions through particle-physics experimental methods and, vice versa, questions from particle physics are addressed via astronomical methods. This approach has enabled many scientific breakthroughs and opened new windows to the universe in recent years. In Germany, what drives us is the question of the influence of neutrinos and high-energy processes in the development of our universe, and the direct search for dark matter. There are differences to particle physics both in the physics questions and in the approach: we observe high-energy radiation from our cosmos or rare events in underground laboratories. But there are also many similarities between the two fields of research that make a fruitful exchange possible.

What was your path into the astroparticle field?

I grew up in particle physics: I did my PhD on b-physics at the OPAL experiment at CERN’s LEP collider and then worked for a few years on the HERA-B experiment at DESY. I was fascinated not only by particle physics, but also by the international cooperation at CERN and DESY. Particle physics and astroparticle physics overcome borders, and this is a feat that is particularly important again today. Around 20 years ago I switched to ground-based gamma-ray astronomy. I became fascinated by how nature manages to accelerate particles to the enormous energies we see in cosmic rays, and by the role these particles play in the development of our universe. I experienced at close hand how astroparticle physics developed into an independent field. Seven years ago, I became head of the DESY site in Zeuthen near Berlin. My task is to develop DESY, and in particular the Zeuthen site, into an international centre for astroparticle physics. The new research division is also a recognition of the work of the people in Zeuthen and an important step for the future.

What are DESY’s strengths in astroparticle research?

Astroparticle physics began in Zeuthen with neutrino astronomy around 20 years ago. It has evolved from humble beginnings, from a small stake in the Lake Baikal experiment to a major role in the km³-sized IceCube array deep in the Antarctic ice. Having entered high-energy gamma-ray astronomy only a few years ago, the Zeuthen site is now a driving force behind the next-generation gamma-ray observatory, the Cherenkov Telescope Array (CTA). The campus in Zeuthen will host the CTA Science Data Management Centre, and we are participating in almost all currently operating major gamma-ray experiments to prepare for the CTA science harvest. A growing theory group supports all experimental activities. The combination of high-energy neutrinos and gamma rays offers unique opportunities to study processes at energies far beyond those reachable by human-made particle accelerators.

Why did DESY establish a dedicated division?

A dedicated research division underlines the importance of astroparticle physics in general, and in DESY’s scientific programme in particular, and offers promising opportunities for the future. Astroparticle physics with cosmic messengers has experienced tremendous development in recent years. The discovery of a large number of gamma-ray sources, the observation of cosmic neutrinos in 2013, the direct detection of gravitational waves in 2015, the observation in August 2017 of the merger of two neutron stars by more than 40 observatories worldwide, triggered by its gravitational waves, and the simultaneous observation of neutrinos and high-energy gamma radiation from the direction of a blazar the following month are just a few prominent examples. We are on the threshold of a golden age of multi-messenger astronomy, with gamma rays, neutrinos, gravitational waves and cosmic rays together promising completely new insights into the origins and evolution of our universe.

What are the division’s scale and plans?

The next few years will be exciting for us. We have just completed an architectural competition, new buildings will be built and the entire campus will be redesigned in the coming years. We expect well over 350 people to work on the Zeuthen campus, and hosting the CTA data centre will make us a contact point for astroparticle physicists globally. In addition to the growth through CTA, we are expanding our scientific portfolio to include radio detection of high-energy neutrinos and increased activities in astronomical-transient-event follow-up. We are also establishing close cooperation with other partners. Together with the Weizmann Institute in Israel, the University of Potsdam and the Humboldt University in Berlin, we are currently establishing an international doctoral school for multi-messenger astronomy funded by the Helmholtz Association.

How can we realise the full potential of multi-messenger astronomy?

Our potential lies primarily in committed scientists who use their creativity and ideas to take advantage of existing opportunities. For years we have seen large numbers of young people moving into astroparticle physics. We need new, highly sensitive instruments, and a whole series of outstanding project proposals is waiting to be implemented. CTA is being built, the upgrade of the Pierre Auger Observatory is progressing and the first steps towards the further upgrade of IceCube have been taken. The funding for the next generation of gravitational-wave experiments, the Einstein Telescope in Europe, is not yet secured. We are currently discussing a possible participation of DESY in gravitational-wave astronomy. Multi-messenger astronomy promises a breathtaking number of new discoveries. However, these will only be possible if, in addition to the instruments, the data are made available in a form that allows scientists to jointly analyse the information from the various instruments. DESY will play an important role in all these tasks – from the construction of instruments to the training of young scientists. But we will also be involved in the development of the research-data infrastructure required for multi-messenger astronomy.


How would you describe the astroparticle physics landscape?

The community in Europe is growing, not only in terms of the number of scientists but also in the size and variety of experiments. In many areas, European astroparticle physics is in transition from medium-sized experiments to large research infrastructures. CTA is the outstanding example of this. The large number of new scientists and the ideas for new research infrastructures show the great appeal of astroparticle physics as a young and exciting field. The proposed Einstein Telescope will cross the threshold of projects requiring investments of more than one billion euros, which will demand coordination at the European and international level. With the Astroparticle Physics European Consortium (APPEC) we have taken a step towards improved coordination. DESY is one of the founding members of APPEC and I have been elected vice-chairman of the APPEC general assembly for the next two years. In this area, too, we can learn something from particle physics, and we are very pleased that CERN is an associate member of APPEC.

What implications does the update of the European strategy for particle physics have for your field?

European astroparticle physics provides a wide range of input to the European Strategy for Particle Physics, from concrete proposals for experiments to contributions from national committees for astroparticle physics. The contribution on the construction of the Einstein Telescope deserves special attention, and my personal wish is that CERN will coordinate the Einstein Telescope, as suggested in that contribution. With the LHC, CERN has again demonstrated in an outstanding way that it can successfully implement major research projects. With the first gravitational-wave events, we saw only the first flashes of a completely unknown part of our universe. The Einstein Telescope would revolutionise this new view of the universe.

The proton laid bare

Every student of physics learns that the nucleus was discovered by firing alpha particles at atoms. The results of this famous experiment by Rutherford in 1911 indicated the existence of a hard-scattering core of positive charge, and, within a few years, led to his discovery of the proton (see Rutherford, transmutation and the proton). Decades later, similar experiments with electrons revealed point-like scattering centres inside the proton itself. Today we know these to be quarks, antiquarks and gluons, but the glorious complexity of the proton is often swept under the carpet. Undergraduate physicists are more often introduced to quarks as objects with flavour quantum numbers that build up mesons and baryons in bound states of twos and threes. Indeed, in the 1960s, many people regarded quarks simply as a useful book-keeping device to classify the many new “elementary” particles that had been discovered in cosmic rays and bubble-chamber experiments. Few people were aware of the inelastic-scattering experiments at SLAC with 20 GeV electrons, which were beginning to reveal a much richer picture of the proton.

The results of these experiments in the 1960s and early 1970s were remarkable. Elastic scattering by the point-like electrons revealed the spatial distribution of the proton’s charge, and cross sections had to be modified by form factors as a result. These varied strongly depending on how hard the proton was struck – a hardness called the scale of the process, Q², defined as the negative squared four-momentum transfer between the incoming and outgoing electrons. At high enough scales the proton broke up, a phenomenon that can be quantified by x, a kinematic variable related to the inelasticity of the interaction. Both the scale and the inelasticity could be determined from the kinematics of the outgoing electron. Physicists anticipated a complicated dependence on both variables. Studies of scattering at ever higher and lower scales continue to bear fruit to this day.
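For fixed-target electron–proton scattering of this kind, both variables follow from the beam energy E and the scattered electron's energy E′ and angle θ: Q² = 4EE′sin²(θ/2) and x = Q²/(2Mν), with ν = E − E′ the energy transfer. A quick numerical sketch (the particular beam energy and angle are illustrative, not taken from a specific SLAC run):

```python
import math

M_P = 0.938  # proton mass in GeV

def dis_kinematics(E, E_prime, theta):
    """Q^2 and Bjorken x for fixed-target electron-proton scattering
    (electron mass neglected). Energies in GeV, theta in radians."""
    Q2 = 4.0 * E * E_prime * math.sin(theta / 2.0) ** 2
    nu = E - E_prime                 # energy transferred to the proton
    x = Q2 / (2.0 * M_P * nu)        # Bjorken scaling variable
    return Q2, x

# A SLAC-like configuration: 20 GeV beam, electron scattered at 6 degrees
# with 12 GeV remaining.
Q2, x = dis_kinematics(20.0, 12.0, math.radians(6.0))
```

In the elastic limit, where the proton stays intact, the same formulas give x = 1, which is one way to see that x measures the inelasticity of the collision.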

A surprise at SLAC

The big surprise from the SLAC experiments was that the cross section did not depend strongly on Q², a phenomenon called “scaling”. The only explanation for scaling was that the electrons were scattering from point-like centres within the proton. Feynman worked out the formalism to understand this by picturing the electron as hitting a point-like “parton” inside the proton. With elegant simplicity, he deduced that the partons each carried a fraction x of the proton’s longitudinal momentum.

Gell-Mann and Zweig had proposed the existence of quarks in 1964, but at first it was by no means obvious that they were partons. The SLAC experiments established that the scattering centres had spin ½, as required by the quark model, but there were two problems. On the one hand, there appeared to be not just three but many scattering centres. On the other, Feynman’s formalism required the partons to be “free” and independent of each other, yet they could hardly be independent if they remained confined in the proton.

Painting a picture

The picture became even more interesting in the late 1970s and 1980s when scattering experiments started to use neutrinos and antineutrinos as probes. Since neutrinos and antineutrinos have a definite handedness, or helicity, such that their spin is aligned against their direction of motion for neutrinos and with it for antineutrinos, their weak interaction with quarks and antiquarks gives different angular distributions. This showed that there must be antiquarks as well as quarks within the proton. In fact, it led to a picture in which the flavour properties of the proton are governed by three valence quarks immersed in a sea of quark–antiquark pairs. But this is not all: the same experiments indicated that the total momentum carried by the valence quarks and the sea still amounts to only around half of that of the proton. This missing momentum was termed an energy crisis, and was solved by the existence of gluons with spin 1, which bind the quarks together and confine them inside the proton.

In fact, the SLAC experiments had been lucky to be making measurements in the kinematic region where scaling holds almost perfectly – where the cross section is independent of Q². The quark–parton model had to be extended, and became the field theory of quantum chromodynamics (QCD), in which the gluons are field carriers, just like photons in quantum electrodynamics (QED). Formulated in 1973, QCD has a much richer structure than QED. There are eight kinds of gluons that are characterised in terms of a new quantum number called colour, which is carried by both quarks and the gluons themselves, in contrast to QED, where the field carrier is uncharged. The gluon can thus interact with itself as well as with quarks.

From the 1980s onwards, a series of experiments probed increasingly deeply into the proton. Deep-inelastic-scattering experiments using neutrino and muon beams were performed at CERN and Fermilab, before the HERA electron–proton collider at DESY made definitive measurements from 1992 to 2007 (figure 1). The aim was to test the predictions of QCD as much as to investigate the structure of the proton, the goal being not just to list the constituents of the proton, but also to understand the forces between them.

Meanwhile, the EMC experiment at CERN had unearthed a mystery concerning the origin of the proton’s spin (see “The proton spin crisis”), while elsewhere, entirely different experiments were placing increasingly tough limits on the proton’s lifetime (see “The pursuit of proton decay”).

The proton spin crisis

Among the many misconceptions in the description of the proton presented in undergraduate physics lectures is the origin of the proton’s spin. When we tell students about the three quarks in a proton, we usually say that its spin of one half comes from the arithmetic of three spin-½ quarks that align themselves such that two point “up” and one points “down”. However, as shown by measurements of the spin carried by quarks in deep-inelastic-scattering experiments in which both the lepton beam and the proton target are polarised, this is not the case. Rather, as first revealed in results from the European Muon Collaboration in CERN’s North Area in 1987, the quarks account for less than a third of the total proton spin. This was nicknamed the proton’s “spin crisis”, and attempts to fully resolve it remain the goal of experiments today.

Physicists had to develop cleverer experiments, for example looking at semi-inclusive measurements of fast pions and kaons in the final state, and using polarised proton–proton scattering, to determine where the missing spin comes from. It is now established that about 30% of the proton spin is in the valence quarks. Intriguingly, this is made up of +65% from up-valence and –35% from down-valence quarks. The sea seems to be unpolarised, and about 20% of the proton’s spin is in gluon polarisation, though it is not possible to measure this accurately across a wide kinematic range. Nevertheless, it seems unlikely that all of the missing spin is in gluons, and the puzzle is not yet solved.
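Tallying the rough percentages quoted above makes the arithmetic of the missing spin explicit (a back-of-the-envelope sketch, not a precise decomposition):

```python
# Rough proton spin budget, using the approximate percentages quoted in the
# text (illustrative numbers, not a precise decomposition).
up_valence = 0.65     # up-valence quark polarisation contribution
down_valence = -0.35  # down-valence quark polarisation contribution
sea = 0.0             # the sea appears to be unpolarised
gluons = 0.20         # gluon polarisation (poorly constrained)

quark_total = up_valence + down_valence  # ~30% of the proton's spin
accounted = quark_total + sea + gluons   # ~50% accounted for
missing = 1.0 - accounted                # ~50% left for orbital angular momentum
```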

What could be the origin of the remaining ~50% of the proton’s spin? The answer may lie in the orbital angular momentum of both the quarks and the gluons, but this is difficult to measure directly. Orbital angular momentum is certainly connected to the transverse structure of the proton: the partons’ transverse momentum, their transverse position, and their transverse (as opposed to longitudinal) spin must all be considered. Multi-dimensional measurements of transverse-momentum distributions and generalised parton distributions can give access to orbital angular momentum. Such measurements are underway at Jefferson Laboratory, and are also a core part of the future Electron-Ion Collider programme.

Amanda Cooper-Sarkar, University of Oxford.

Quantum considerations

As with all quantum phenomena, what is in a proton depends on how you look at it. A more energetic probe has a smaller wavelength and therefore can reveal smaller structures, but it also injects energy into the system, and this allows the creation of new particles. The question then is whether we regard these particles as having been inside the proton in the first place. At higher scales quarks radiate gluons that then split into quark–antiquark pairs, which again radiate gluons: and the gluons themselves can also radiate gluons. The valence quarks thus lose momentum, distributing it between the sea quarks and gluons – increasingly many, with smaller and smaller amounts of momentum. A proton at rest is therefore very different to a proton, say, circulating in the Large Hadron Collider (LHC) at an energy of 7 TeV.

The deep-inelastic-scattering data from muon, neutrino and electron collisions established that QCD was the correct theory of the strong interaction. Experiments found that the structure functions which describe the scattering cross sections are not completely independent of scale, but depend on it logarithmically – in exactly the way that QCD predicts. This allowed the determination of the strong coupling “constant” αs, in analogy with the fine structure constant of QED, and it is now understood that both parameters vary with the scale of the process. In contrast with QED, the strong-coupling constant varies very quickly, from αs ~1 at low energy to ~0.1 at the energy scale of the mass of the Z boson. Thus the quarks become “asymptotically free” when examined at high energy, but are strongly confined at low energy – an insight leading to the award of the 2004 Nobel Prize in Physics to Gross, Politzer and Wilczek.
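The logarithmic running can be illustrated with the standard one-loop formula. The reference value αs(MZ) = 0.118 and a fixed five-flavour count are simplifying assumptions here:

```python
import math

def alpha_s_one_loop(Q, alpha_s_mz=0.118, mz=91.19, nf=5):
    """One-loop running of the strong coupling from the Z mass to scale Q
    (GeV), with b0 = (33 - 2*nf)/(12*pi). Keeping nf = 5 fixed is a
    simplification; the formula also breaks down as Q approaches the
    QCD scale Lambda_QCD."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz * math.log(Q ** 2 / mz ** 2))

# Asymptotic freedom: the coupling shrinks as the scale grows.
low, ref, high = alpha_s_one_loop(2.0), alpha_s_one_loop(91.19), alpha_s_one_loop(1000.0)
```

At 2 GeV the one-loop coupling is around 0.26, at the Z mass it is 0.118 by construction, and at 1 TeV it has fallen below 0.09.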

Once QCD had emerged as the definitive theory, the focus turned to measuring the momentum distributions of the partons, dubbed parton distribution functions (PDFs, figure 2). Several groups work on these determinations using both deep-inelastic-scattering data and related scattering processes, and presently there is agreement between theory and experiment within a few percent across a very wide range of x and Q² values. However, this is not quite good enough. Today, knowledge of PDFs is increasingly vital for discovery physics at the LHC. Predictions of all cross sections measured at the LHC – whether Standard Model or beyond – need to use input PDFs. After all, when we are colliding protons it is actually the partons inside the proton that are having hard collisions and the rates of these collisions can only be predicted if we know the PDFs in the proton very accurately.

The dominant uncertainty on predictions for the direct production of particles in theories beyond the Standard Model now comes from the limited precision of the high-x gluon PDF. Indirect searches for new physics are also affected: precision measurements of Standard Model parameters, such as the mass of the W boson and the weak mixing angle sin²θW, are limited by PDF uncertainties even in the kinematic regions where the PDFs are best known.

The pursuit of proton decay

When Rutherford discovered the proton in 1919, the only other basic constituent of matter known was the electron. There was no way that the proton could decay without violating charge conservation. Ten years later, Hermann Weyl went further, proposing the first version of what would become a law of baryon conservation. Even after the discoveries of the positron, and of positive muons and pions – all lighter than the proton – there was little reason to question the proton’s stability. As Maurice Goldhaber famously pointed out, were the proton lifetime to be less than 10¹⁶ years we should feel it in our bones, because our bodies would be lethally radioactive. In 1954 he improved on this estimate. Arguing that the disappearance of a nucleon would leave a nucleus in an excited state that could lead to fission, he used the observed absence of spontaneous fission in ²³²Th to calculate a lifetime for bound nucleons of > 10²⁰ years, which Georgy Flerov soon extended to > 3 × 10²³ years.

Goldhaber also teamed up with Fred Reines and Clyde Cowan to test the possibility of directly observing proton decay using a 500 l tank of liquid scintillator surrounded by 90 photomultiplier tubes (PMTs) that was designed originally to detect reactor neutrinos. They found no signal, indicating that free protons must live for > 10²¹ years and bound nucleons for > 10²² years. By 1974, in a cosmic-ray experiment based on 20 tonnes of liquid scintillator, Reines and other colleagues had pushed the proton lifetime to > 10³⁰ years.

Meanwhile, in 1966, Andrei Sakharov had set out conditions that could yield the observed particle–antiparticle asymmetry of the universe. One of these was that baryon conservation is only approximate and could have been violated during the expansion phase of the early universe. The interactions that could violate baryon conservation would allow the proton to decay, but Sakharov’s suggested proton lifetime of > 10⁵⁰ years provided little encouragement for experimenters. This all changed around 1974, when proposals for grand unified theories (GUTs) came along. GUTs not only unified the strong, weak and electromagnetic forces, but also closely linked quarks and leptons, allowing for non-conservation of baryon number. In particular, the minimal SU(5) theory of Howard Georgi and Sheldon Glashow led to predicted lifetimes for the decay p → e⁺π⁰ in the region of 10³¹±¹ years – not so far beyond the observed lower limit of around 10³⁰ years.

This provided the justification for dedicated proton-decay experiments. By 1981 seven such experiments installed deep underground were using either totally active water-Cherenkov detectors or sampling calorimeters to monitor large numbers of protons. These included the Irvine–Michigan–Brookhaven (IMB) detector, based on 3300 tonnes of water and 2048 5-inch PMTs, and KamiokaNDE in Japan, with 1000 tonnes of water and 1000 20-inch PMTs. These experiments were able to push the lower limits on the proton lifetime to > 10³² years and so discount the viability of minimal SU(5) GUTs.

However, in 1987 IMB and Kamiokande II achieved greater fame by each detecting a handful of neutrinos from the supernova SN1987a. Kamiokande II was already studying solar and atmospheric neutrinos, but it was its successor, Super-Kamiokande, that went on to make pioneering observations of atmospheric and solar neutrino oscillations. And it is Super-Kamiokande that currently holds the highest lower limit on the proton lifetime: 1.6 × 10³⁴ years for the decay to e⁺π⁰.

Today, the theoretical development of GUTs continues, with predictions in some models of proton lifetimes up to around 10³⁶ years. Future large neutrino experiments – such as DUNE, Hyper-Kamiokande and JUNO – feature proton decay among their goals, with the possibility of extending the limits on the proton lifetime to 10³⁵ years. So the study of proton stability goes on, continuing the symbiosis with neutrino research.

Chris Sutton, former CERN Courier editor.

Strange sightings at the LHC

Standard Model processes at the LHC are now able to contribute to our knowledge of the proton. As well as reducing the uncertainty on PDFs, however, the LHC data have led to a surprise: there seem to be more strange quark–antiquark pairs in the proton than we had thought (CERN Courier April 2017 p11). A recent study of the potential of the High-Luminosity LHC suggests that we can improve the present uncertainty on the gluon PDF by more than a factor of two by studying jet production, direct photon production and top quark–antiquark pair production. Measurements of the W-boson mass or the weak mixing angle will be improved by precision measurements of W and Z-boson production in previously unexplored kinematic regions, and strangeness can be further probed by measurements of these bosons in association with heavy quarks. We also look forward to possible future developments such as a Large Hadron-Electron Collider or a Future Circular Electron Hadron Collider – not least because new kinematic ranges continue to reveal more about the structure of QCD in the high-density regime.

In fact the HERA data already give hints that we may be entering a new phase of QCD at very low x, where the gluon density is very large (figure 3). Such large densities could lead to nonlinear effects in which gluons recombine. When the rate of recombination equals the rate of gluon splitting we may get gluon saturation. This state of matter has been described as a colour glass condensate (CGC) and has been further probed in heavy-ion experiments at the LHC and at RHIC at Brookhaven National Laboratory. The higher gluon densities involved in experiments with heavy nuclei enhance the impact of nonlinear gluon interactions. Interpretations of the data are consistent with the CGC but not definitive. A future electron–ion collider, such as that currently proposed in the US (CERN Courier October 2018, p31), will go further, enabling complete tomographic information about the proton and allowing us to directly connect fundamental partonic behaviour to the proton’s “bulk” properties such as its mass, charge and spin. Meanwhile, table-top spectroscopy experiments are shedding new light on a seemingly mundane yet key property of the proton: its radius (see “Solving the proton-radius puzzle”).

Together with the neutron, the proton constitutes practically all of the mass of the visible matter in the universe. A hundred years on from Rutherford’s discovery, it is clear that much remains to be learnt about the structure of this complex and ubiquitous particle.

Solving the proton-radius puzzle

How big is a proton? Experiments during the past decade have called well-established measurements of the proton’s radius into question – even prompting somewhat outlandish suggestions that new physics might be at play. Soon-to-be-published results promise to settle the proton-radius puzzle once and for all.

Contrary to popular depictions, the proton does not have a hard physical boundary like a snooker ball. Its radius was traditionally deduced from its charge distribution via electron-scattering experiments. Scattering from a charge distribution differs from scattering from a point-like charge: the extended charge distribution modifies the differential cross section by a form factor (the Fourier transform of the charge distribution). For the proton, the form factor is well described by a dipole in the scale of the interaction, corresponding to an exponentially decaying charge distribution as a function of distance from the proton’s centre. Scattering experiments found the root-mean-square (RMS) radius to be about 0.88 fm.
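A short sketch of how an RMS radius follows from a form factor, using the textbook dipole parametrisation (the value Λ² = 0.71 GeV² below is the conventional dipole parameter, assumed here for illustration):

```python
import math

HBARC = 0.1973  # GeV*fm, converts natural units to femtometres

def rms_radius_dipole(lambda2=0.71):
    """RMS charge radius implied by a dipole form factor
    G(Q2) = (1 + Q2/lambda2)**-2, for which
    <r2> = -6 dG/dQ2 at Q2 = 0, i.e. 12/lambda2 in GeV^-2.
    lambda2 = 0.71 GeV^2 is the textbook dipole parameter (an
    assumption for illustration, not a fitted value)."""
    return HBARC * math.sqrt(12.0 / lambda2)  # radius in fm

radius = rms_radius_dipole()
```

The simple dipole gives roughly 0.81 fm; the ~0.88 fm quoted above comes from more detailed fits to the scattering data, which is one illustration of how sensitive the radius is to the treatment of the form factor at low Q².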

Since the turn of the millennium, a modest increase in precision on the proton radius was made possible by comparing measurements of transitions in hydrogen with QED calculations. Since atomic energy levels are slightly shifted by the overlap of the electron’s wavefunction with the proton’s extended charge distribution, precise measurements of the transition frequencies provide a handle on the proton’s radius. A combination of these measurements yielded the most recent CODATA value of 0.8751(61) fm.

The surprise came in 2010, when the CREMA collaboration at the Paul Scherrer Institute (PSI) in Switzerland achieved a 10-fold improvement in precision via the Lamb shift (the 2S–2P transition) in muonic hydrogen, the bound state of a muon orbiting a proton. As the muon is around 200 times heavier than the electron, its Bohr radius is around 200 times smaller, and the finite-size correction from the overlap of its wavefunction with the proton’s charge distribution is far more substantial. CREMA observed an RMS proton radius of 0.8418(7) fm, five sigma below the world average, giving rise to the so-called “proton radius puzzle”. The team confirmed the measurement in 2013, reporting a radius of 0.8409(4) fm. These observations appeared to call into question the cherished principle of lepton universality.
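The shrinkage factor can be checked from the reduced masses of the two bound states (the masses below are rounded PDG values; the factor of 200 quoted above is itself a round number):

```python
# Rounded particle masses in GeV (PDG values, rounded for illustration).
M_E, M_MU, M_P = 0.000511, 0.10566, 0.93827

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

# The Bohr radius scales as 1/(reduced mass of the orbiting lepton), so the
# ratio of reduced masses gives the shrinkage of muonic hydrogen relative to
# ordinary hydrogen.
shrink = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
```

Accounting for the proton's recoil via the reduced mass, the actual shrinkage comes out around 186, close to the rounded factor of 200.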

More recent measurements have reinforced the proton’s slimmed-down nature. In 2016 CREMA reported a radius of 0.8356(20) fm by measuring the Lamb shift in muonic deuterium (the bound state of a muon orbiting a proton and a neutron). Most interestingly, in 2017 Axel Beyer of the Max Planck Institute of Quantum Optics in Garching and collaborators reported a similarly lithe radius of 0.8335(95) fm from observations of the 2S–4P transition in ordinary hydrogen. This low value is confirmed by soon-to-be-published measurements of the 1S–3S transition by the same group, and of the 2S–2P transition by Eric Hessels of York University, Canada, and colleagues. “We can no longer speak about a discrepancy between measurements of the proton radius in muonic and electronic spectroscopy,” says Krzysztof Pachucki of CODATA TGFC and the University of Warsaw.

But what of the discrepancy between spectroscopic and scattering experiments? The calculation of the RMS proton radius using scattering data is tricky due to the proton’s recoil, and analyses must extrapolate the form factor to a scale of Q² = 0. Model uncertainties can therefore be reduced by performing scattering experiments at increasingly low scales. Measurements may now be aligning with a lower value consistent with the latest results in electronic and muonic spectroscopy. In 2017 Miha Mihovilovic of the University of Mainz and colleagues reported an interestingly low value of 0.810(82) fm using the Mainz Microtron, and results due from the Proton Radius Experiment (PRad) at Jefferson Lab will access a similarly low scale with even smaller uncertainties. Preliminary PRad results presented in October 2018 at the 5th Joint Meeting of the APS Division of Nuclear Physics and the Physical Society of Japan in Hawaii indicate a proton radius of 0.830(20) fm. These electron-scattering results will be complemented by muon-scattering results from the COMPASS experiment at CERN, and the MUSE experiment at PSI.

For now, says Pachucki, the latest CODATA recommendations published in 2016 list the higher value obtained from electron scattering and pre-2015 hydrogen-spectroscopy experiments. If the latest experiments continue to line up with the slimmed-down radius of CREMA’s 2010 result, however, the proton radius puzzle may soon be solved, and the world average revised downwards.

Mark Rayner, CERN.

The flavour of new physics

In 1971, at a Baskin-Robbins ice-cream store in Pasadena, California, Murray Gell-Mann and his student Harald Fritzsch came up with the term “flavour” to describe the different types of quarks. From the three types known at the time – up, down and strange – the list of quark flavours grew to six. A similar picture evolved for the leptons: the electron and the muon were joined by the unexpected discovery of the tau lepton at SLAC in 1975 and completed with the three corresponding neutrinos. These 12 elementary fermions are grouped into three generations of increasing mass.

The three flavours of charged leptons – electron, muon and tau – are the same in many respects. This “flavour universality” is deeply ingrained in the symmetry structure of the Standard Model (SM) and applies to both the electroweak and strong forces (though the latter is irrelevant for leptons). It directly follows from the assumption that the SM gauge group, SU(3) × SU(2) × U(1), is one and the same for all three generations of fermions. The Higgs field, on the other hand, distinguishes between fermions of different flavours and endows them with different masses – sometimes strikingly so. In other words, the gauge forces, such as the electroweak force, are flavour-universal in the SM, while the exchange of a Higgs particle is not.

Today, flavour physics is a major field of activity. A quick look at the Particle Data Group (PDG) booklet, with its long lists of the decays of B mesons, D mesons, kaons and other hadrons, gives an impression of the breadth and depth of the field. Even in the condensed version of the PDG booklet, such listings run to more than 170 pages. Still, the results can be summarised succinctly: all the measured decays agree with SM predictions, with the exception of measurements that probe lepton-flavour universality (LFU) in two quark-level transitions: b → cτν̅τ and b → sμ⁺μ⁻.

Oddities in decays to D mesons

In the SM the b → cτν̅τ process proceeds via the tree-level exchange of a virtual W boson (figure 1, left). Because the W boson is much heavier than the energy released in the decay of the b quark, it is virtual: rather than materialising as a particle, it leaves its imprint as a very short-range potential that changes one quark (a b quark) into a different one (a c quark) with the simultaneous emission of a charged lepton and an antineutrino.

Flavour universality is probed by measuring the ratio of branching fractions: RD(*) = Br(B → D(*)τν̅τ)/Br(B → D(*)lν̅l), where l = e, μ. Two ratios can be measured, since the charm quark is either bound inside a D meson or its excited version, the D*, and the two ratios, RD and RD*, have the very welcome property that they can be precisely predicted in the SM. Importantly, since the hadronic inputs that describe the b → c transition do not depend on which lepton flavour is in the final state, the induced uncertainties mostly cancel in the ratios. Currently, the SM prediction is roughly three standard deviations away from the global average of results from the LHCb, BaBar and Belle experiments (figure 2).

A possible explanation for this discrepancy is that there is an additional contribution to the decay rate, due to the exchange of a new virtual particle. For coupling strengths that are of order unity, such that they are appreciably large yet small enough to keep our calculations reliable, the mass of such a new particle needs to be about 3 TeV to explain the reported hints for the increased b → cτν̅τ rates. This is light enough that the new particle could even be produced directly at the LHC. Even better, the options for what this new particle could be are quite restricted.

There are two main possibilities. One is a colour singlet that does not feel the strong force, for which candidates include a new charged Higgs boson or a new vector boson commonly denoted W′ (figure 1, middle). However, both of these options are essentially excluded by other measurements that do agree with the SM: the lifetime of the Bc meson; searches at the LHC for anomalous signals with tau leptons in the final state; leptonic decays of the W and Z bosons; and Bs mixing and B → Kνν̅ decays.

The second possible type of new particle is a leptoquark, which couples to one quark and one lepton at each vertex (figure 1, right). Typically, the constraints from other measurements are less severe for leptoquarks than they are for new colour-singlet bosons, making them the preferred explanation for the b → cτν̅τ anomaly. For instance, they contribute to Bs mixing only at the one-loop level, making the resulting effect smaller than the present uncertainties. Since leptoquarks are charged under the strong force, in the same way as quarks are, they can be copiously produced at the LHC via strong interactions. Searches for pair- or singly-produced leptoquarks at the future high-luminosity LHC and at a proposed high-energy LHC will cover most of the available parameter space of current models.

Oddities in decays to kaons

The other decay showing interesting flavour deviations (b → sμ⁺μ⁻) is probed via the ratios RK(*) = Br(B → K(*)μ⁺μ⁻)/Br(B → K(*)e⁺e⁻), which test whether the rate for the b → sμ⁺μ⁻ quark-level transition equals that for b → se⁺e⁻. The SM very precisely predicts RK(*) = 1, up to small corrections due to the very different masses of the muon and the electron. Measurements from LHCb, on the other hand, are consistently below 1, with statistical significances of about 2.5 standard deviations, while less precise measurements from Belle are consistent with both LHCb and the SM (figure 3). Further support for these discrepancies comes from other observables, for which theoretical predictions are more uncertain. These include the branching ratios for decays induced by the b → sμ⁺μ⁻ quark-level transition, and the distributions of the final-state particles.

In contrast to the tree-level b → cτν̅τ process underlying the semileptonic B decays to D mesons, the b → sμ⁺μ⁻ decay is induced via quantum corrections at the one-loop level (figure 4, left) and is therefore highly suppressed in the SM. Potential new-physics contributions, on the other hand, can enter either at tree level or at one-loop level. This means that there is quite a lot of freedom in what kind of new physics could explain the b → sμ⁺μ⁻ anomaly. The possible tree-level mediators are a Z′ boson or leptoquarks, with masses of about 30 TeV or lighter if the couplings are smaller. For loop-induced models the new particles are necessarily light, with masses in the TeV range or below. This means that searches for direct production of new particles at the LHC can probe a significant range of explanations for the LHCb anomalies. However, for many of the possibilities the high-energy upgrade to the LHC or a future circular collider with much higher energy would be required for the new particles to be discovered or ruled out.

Taking stock

Could the two anomalies be due to a single new lepton non-universal force? Interestingly, a leptoquark dubbed U1 – a spin-one particle that is a colour triplet, charged under hypercharge but not weak isospin – can explain both anomalies. With some effort it can be embedded in consistent theoretical constructions, albeit those with very non-trivial flavour structures. These models are based on modified versions of grand unified theories (GUTs) from the 1980s. Since GUTs unify the leptons and quarks, some of the force carriers can change quarks to leptons and vice versa, i.e. some of the force carriers are leptoquarks. The U1 leptoquark could be one such force carrier, coupling predominantly to the third generation of fermions. In all cases the U1 leptoquark is accompanied by many other particles with masses not much above the mass of U1.

While intriguing, the two sets of B-physics anomalies are by no means confirmed. None of the measurements has separately reached the five standard deviations needed to claim a discovery and, indeed, most are hovering around the 1–3 sigma mark. Taken together, however, they form a consistent picture suggesting that something is potentially going on. We are in the lucky position that new measurements are expected soon, some in a few months, others in a few years.

First of all, the observables showing the discrepancy from the SM, RD(*) and RK(*), will be measured more precisely at LHCb and at Belle II, which is currently ramping up at KEK in Japan. In addition, there are many related measurements that are planned, both at Belle II as well as at LHCb, and also at ATLAS and CMS. For instance, measuring the same transitions, but with different initial- and final-state hadrons, should give further insights into the structure of new-physics contributions. If the anomalies are confirmed, this would then set a clear target for the next collider such as the high-energy LHC or the proposed proton–proton Future Circular Collider, since the new particles cannot be arbitrarily heavy.

If this exciting scenario plays out, it would not be the first time that indirect searches foretold the existence of new physics at the next energy scale. Nuclear beta decay and other weak transitions prognosticated the electroweak W and Z gauge bosons; the rare kaon decay KL → μ⁺μ⁻ pointed to the existence of the charm quark, with kaon mixing even predicting its mass; and B-meson mixing and measurements of electroweak corrections accurately predicted the top-quark mass before it was discovered. Finally, the measurement of CP violation in kaons led to the prediction of the third generation of fermions. If the present flavour anomalies stand firm, they will become another important item on this historic list, offering a view of a new energy scale to explore.

Neutrino connoisseurs talk stats at CERN

PHYSTAT-nu 2019 was held at CERN from 22 to 25 January. Counted among the 130 participants were LHC physicists and professional statisticians as well as neutrino physicists from across the globe. The inaugural meeting took place at CERN in 2000 and PHYSTAT has gone from strength to strength since, with meetings devoted to specific topics in data analysis in particle physics. The latest PHYSTAT-nu event is the third of the series to focus on statistical issues in neutrino experiments. The workshop focused on the statistical tools used in data analyses, rather than experimental details and results.

Modern neutrino physics is geared towards understanding the nature and mixing of the three neutrinos’ mass and flavour eigenstates. This mixing can be inferred by observing “oscillations” between flavours as neutrinos travel through space. Neutrino experiments come in many different types and scales, but they tend to have one calculation in common: whether the neutrinos are created in an accelerator, a nuclear reactor, or by any number of astrophysical sources, the number of events expected in the detector is the product of the neutrino flux and the interaction cross section. Given the ghostly nature of the neutrino, this calculation presents subtle statistical challenges. To cancel common systematics, many facilities have two or more detectors at different distances from the neutrino source. However, as was shown for the NOvA and T2K experiments, which compete to observe CP violation using accelerator-neutrino beams, it is difficult to correlate the neutrino yields in the near and far detectors. A full cancellation of the systematic uncertainties is complicated by the different detector acceptances, possible variations in the detector technologies, and the compositions of different neutrino interaction modes. In the coming years these two experiments plan to combine their data in a global analysis to increase their discovery power – lessons can be learnt from the LHC experience.

The problem of modelling the interactions of neutrinos with nuclei – essentially the problem of calculating the cross section in the detector – forces researchers to face the thorny statistical challenge of producing distributions that are unadulterated by detector effects. Such “unfolding” corrects kinematic observables for the effects of detector acceptance and smearing, but these corrections can introduce huge uncertainties. To counter this, strong “regularisation” is often applied, biasing the results towards the smooth spectra of Monte Carlo simulations. PHYSTAT-nu attendees agreed on the desirability of publishing unregularised results alongside unfolded measurements. “Response matrices” may also be released, allowing physicists outside an experimental collaboration to smear their own models and compare them to detector-level data. Another major issue in modelling neutrino–nucleus interactions is the “unknown unknowns”. As Kevin McFarland of the University of Rochester reflected in his summary talk, it is important not to estimate your uncertainty from a survey of theory models. “It’s like trying to measure the width of a valley from the variance of the position of sheep grazing on it. That has an obvious failure mode: sheep read each other’s papers.”
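The forward-folding approach that released response matrices enable can be illustrated with a toy sketch: rather than unfolding the data, a theorist smears a model prediction into detector space and compares it there. The matrix and spectra below are hypothetical.

```python
# Toy "forward folding": smear a true-level model through a response matrix
# R[i][j] = P(reconstructed bin i | true bin j), then compare the folded
# prediction with detector-level data. All inputs here are hypothetical.

def fold(response, truth):
    """Apply the response matrix to a true-level spectrum."""
    return [sum(response[i][j] * truth[j] for j in range(len(truth)))
            for i in range(len(response))]

def chi2(folded, data, variances):
    """Simple uncorrelated chi-square between a folded model and data."""
    return sum((f - d) ** 2 / v for f, d, v in zip(folded, data, variances))

# Three reconstructed bins, modest bin-to-bin smearing (columns sum to 1):
R = [[0.8, 0.1, 0.0],
     [0.2, 0.8, 0.2],
     [0.0, 0.1, 0.8]]
truth_model = [100.0, 200.0, 150.0]

reco = fold(R, truth_model)
print(reco)  # the smeared, detector-level prediction
```

Comparing models in detector space this way avoids the regularisation bias of unfolding entirely, at the cost of requiring the experiment to publish its response and detector-level spectra.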

An important step for current and future neutrino experiments could be to set up a statistics committee, as was done at the Tevatron and, more recently, at the LHC experiments. This PHYSTAT-nu workshop could be the first real step towards that scenario.

The next PHYSTAT workshop will be held at Stockholm University from 31 July to 2 August on the subject of statistical issues in direct-detection dark-matter experiments.

Standard Model stands strong at Moriond

The 54th Rencontres de Moriond, held in La Thuile, Italy, took place from 16 to 30 March, with the first week devoted to electroweak interactions and unified theories, and the second to QCD and high-energy interactions. More than 200 physicists took part, presenting new results ranging from precision Standard Model (SM) measurements to new exotic quark states, flavour physics and the dark sector.

A major theme of the electroweak session was flavour physics, and the star of the show was LHCb’s observation of CP violation in charm decays (see LHCb observes CP violation in charm decays). The collaboration showed several other new results concerning charm- and B-meson decays. One much-anticipated result was an update on RK, the ratio of the rates of rare B+ decays to a kaon with a muon pair and to a kaon with an electron pair, using data taken at energies of 7, 8 and 13 TeV. These decays are predicted to occur at the same rate to within 1%; previously collected data are consistent with this prediction but favour a lower value, and the latest LHCb results continue to support this picture. Together with other measurements, these results paint an intriguing picture of possible new physics (p33) that was explored in several talks by theorists.

Run-2 results

The LHC experiments presented many new results based on data collected during Run 2. ATLAS and CMS have measured most of the Higgs boson’s main production and decay modes with high statistical significance and carried out searches for new, additional Higgs bosons. From a combination of all Higgs-boson measurements, ATLAS obtained new constraints on the important Higgs self-coupling, while CMS presented updated results on the Higgs decay to two Z bosons and its coupling to top quarks.

Precision SM studies continued with first evidence from ATLAS for the simultaneous production of three W or Z bosons, while CMS presented first evidence for the production of two W bosons in two simultaneous interactions between colliding partons. The very large new dataset has also allowed ATLAS and CMS to expand their searches for new physics, setting stronger lower limits on the allowed mass ranges of supersymmetric and other hypothetical particles (see Boosting searches for fourth-generation quarks and Pushing the limits on supersymmetry). These also include new limits from CMS on the parameters describing slowly moving heavy particles, and constraints from both collaborations on the production rate of Z bosons. Using lead–lead collision data taken in 2018, ATLAS also reported the observation of light-by-light scattering – a very rare process that is absent in classical electrodynamics.

New results and prospects in the neutrino sector were communicated, including Daya Bay and the reactor antineutrino flux anomaly, searches for neutrinoless double-beta decay, and the reach of T2K and NOvA in tackling the neutrino mass hierarchy and leptonic CP violation. Dark matter, axions and cosmology also featured prominently. New results from experiments such as XENON1T, ABRACADABRA, SuperCDMS and ATLAS and CMS illustrate the power of multi-prong dark-matter searches – not just for WIMPs but also very light or exotic candidates. Cosmologist Lisa Randall gave a broad-reaching talk about “post-modern cosmology”, in which she argued that – as in particle physics – the easy times are probably over and that astronomers need to look at more subtle effects to break the impasse.

Moriond electroweak also introduced a new session: “feeble interactions”, which was designed to reflect the growing interest in very weak processes at the LHC and future experiments.

LHCb continued to enjoy the limelight during Moriond’s QCD session, announcing the discovery of a new five-quark hadron, named Pc(4312)+, which decays to a proton and a J/ψ and is a lighter companion of the pentaquark structures revealed by LHCb in 2015 (p15). The result is expected to motivate deeper studies of the structure of these and other exotic hadrons. Another powerful way to delve into the depths of QCD, addressed during the second week of the conference, is via the Bc meson family. Following the observation of the Bc(2S) by ATLAS in 2014, CMS reported the existence of a two-peak feature in data corresponding to the Bc(2S) and the Bc*(2S) – supported by new results from LHCb based on its full 2011–2018 data sample. Independent measurements of CP violation in the Bs system reported by ATLAS and LHCb during the electroweak session were also combined to yield the most precise measurement yet, which is consistent with the small value predicted by the SM.

A charmed life

In the heavy-ion arena, ALICE highlighted its observation that baryons containing charm quarks are produced more often in proton–proton collisions than in electron–positron collisions. Initial measurements in lead–lead collisions suggest an even higher production rate for charm baryons, similar to what has been observed for strange baryons. These results indicate that the presence of quarks in the colliding beams affects the hadron-production rate. The collaboration also presented the first measurement of the triangular flow of J/ψ particles in lead–lead collisions, showing that even heavy quarks are affected by the quarks and gluons in the quark–gluon plasma and retain some memory of the collisions’ initial geometry.

The SM still stands strong after Moriond 2019, and the observation of CP violation in D mesons represents another victory, concluded Shahram Rahatlou of Sapienza University of Rome in the experimental summary. “But the flavour anomaly is still there to be pursued at low and high mass.”
