Big science meets industry in Copenhagen

Big science equals big business, whether it is manufacturing giant superconducting magnets for particle colliders or perfecting mirror coatings for space telescopes. The Big Science Business Forum (BSBF), held in Copenhagen, Denmark, on 26–28 February, saw more than 1000 delegates from more than 500 companies and organisations spanning 30 countries discuss opportunities in the current big-science landscape.

Nine of the world’s largest research facilities – CERN, EMBL, ESA, ESO, ESRF, ESS, European XFEL, F4E and ILL – offered insights into procurement opportunities and orders totalling more than €12 billion for European companies in the coming years. These range from advisory engineering work and architectural tasks to advanced technical equipment, construction projects and radiation-resistant materials. A further nine organisations also joined the conference programme: ALBA, DESY, ELI-NP, ENEA, FAIR, MAX IV, SCK•CEN – MYRRHA, PSI and SKA, thereby gathering 18 of the world’s most advanced big-science organisations under one roof.

The big-science market is currently fragmented by the varying quality standards and procurement procedures of the different laboratories, delegates heard. BSBF aspired to offer a space to discuss the entry challenges for businesses and suppliers – including small- and medium-sized enterprises – who can be valuable business partners for big-science projects.

“The vision behind BSBF is to provide an important stepping stone towards establishing a stronger, more transparent and efficient big-science market in Europe and we hope that this will be the first of a series of BSBFs in different European cities,” said Agnete Gersing of the Danish ministry for higher education and science during the opening address.

Around 700 one-to-one business meetings took place, and delegates also visited the European Spallation Source and MAX IV facility just across the border in Lund, Sweden. Parallel sessions covered big science as a business area, addressing topics such as the investment potential and best practices of Europe’s big-science market.

“Much of the most advanced research takes place at big-science facilities, and their need for high-tech solutions provides great innovation and growth opportunities for private companies,” said Danish minister for higher education and science, Søren Pind.

Call for input to European strategy update

The European strategy for particle physics, which is due to be updated by May 2020, will guide the direction of the field to the mid-2020s and beyond. To inform this vital process, the secretariat of the European Strategy Group (ESG) is calling upon the particle-physics community across universities, laboratories and national institutes to submit written input by 18 December 2018.

The update of the European strategy got under way in September when the CERN Council established a strategy secretariat (CERN Courier November 2017 p37). Chaired by Halina Abramowicz, former chair of the European Committee for Future Accelerators (ECFA), the secretariat includes Keith Ellis (chair of CERN’s Scientific Policy Committee), Jorgen D’Hondt (current ECFA chair) and Lenny Rivkin (chair of the European Laboratory Directors group).

The ESG secretariat, which has been assigned the task of organising the update process, proposes broadly to follow the steps of the previous two strategy processes, concluded in 2006 and 2013. An open symposium, which in previous editions took place in Orsay (France) and Kraków (Poland), will be held in the second half of May 2019, at which the community will be invited to debate scientific input to the strategy update. With the event expected to attract around 500 participants, the secretariat proposes to hold it over a period of four days.

To prepare for the open symposium, the location of which is expected to be decided by the summer, ESG calls for written contributions towards the end of the year. Input should be submitted via a portal on the strategy-update website, which will be available from the beginning of October once the update has been formally launched by the CERN Council. The link will appear on the CERN Council’s web pages (https://council.web.cern.ch/en) and will be widely communicated closer to the time.

A “briefing book” based on the discussions will then be prepared by a physics preparatory group and submitted to the ESG for consideration during a five-day-long drafting session in the second half of January 2020. A special ECFA session on 14 July 2019 during the European Physical Society conference on high-energy physics in Ghent, Belgium, will provide another important opportunity for the community to feed into the ESG’s drafting session.

Global perspective

The European strategy update takes into account the worldwide particle-physics landscape and developments in related fields, and was initiated to coordinate activities across a large, international and fast-moving community. The third update comes as the scale of particle-physics facilities is leading to increased globalisation of the field and as its research direction evolves.

Understanding the properties of the Higgs boson (which was discovered at CERN just before the previous strategy update) remains a key focus of analysis at the LHC and future colliders, as do precision measurements of other Standard Model (SM) parameters and searches for new physics beyond the SM.

Neutrino physics is another key area of interest, with much experimental activity taking place since the last update. A “physics beyond colliders” programme has also been established by CERN to explore projects complementary to high-energy colliders and projects of national laboratories. The European astroparticle and nuclear-physics communities, meanwhile, recently launched their own strategies (CERN Courier September 2017 p6; March 2018 p7), which will also feed into the ESG update.

“After the discovery of the Higgs boson, the field is presented with a number of challenges and opportunities,” says Abramowicz. “Guided by the input from the community, the European strategy will determine which of these opportunities will be pursued.”

Oddball antics in proton–proton collisions

The TOTEM collaboration at CERN has uncovered possible evidence for a subatomic three-gluon compound called an odderon, first predicted in 1973. The result derives from precise measurements of the probability of proton–proton collisions at high energies, and has implications for our understanding of data produced by the LHC and future colliders.

In addition to probing the proton structure, TOTEM is designed to measure the total cross section of proton–proton collisions. Physically, it is by far the longest experiment at the LHC, comprising two detectors located 220 m on either side of the CMS experiment. While most proton–proton interactions at the LHC cause the protons to break into their constituent quarks and gluons, TOTEM detects the roughly 25% of collisions that are elastic, leaving the protons intact. Such collisions merely cause the paths of the protons to deviate, by around a millimetre over a distance of 200 m.
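
As a quick back-of-envelope check, a transverse deflection of about a millimetre accumulated over 200 m corresponds to a scattering angle of only a few microradians:

\theta \approx \Delta x / L \approx 10^{-3}\,\mathrm{m} / 200\,\mathrm{m} = 5 \times 10^{-6}\,\mathrm{rad},

which is why the TOTEM detectors must sit so far downstream of CMS and so close to the beam.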

Elastic scattering at low-momentum transfer and high energies has long been successfully explained by the exchange of a pomeron – a colour-neutral state made up of an even number of gluons – between the incoming protons. But TOTEM’s latest results seem to be incompatible with this traditional picture.

The discrepancy came to light via measurements of a parameter called ρ, which represents the ratio of the real and imaginary parts of the nuclear elastic-scattering amplitude when there is minimal gluon exchange between the colliding protons and thus almost no deviation in their trajectories (corresponding to a vanishing squared four-momentum transfer, t). TOTEM measured the differential elastic proton–proton scattering cross section down to |t| = 8 × 10⁻⁴ GeV² at an energy of 13 TeV during a special LHC run with “β* = 2.5 km” optics and, exploiting Coulomb–nuclear interference, determined ρ with unprecedented precision: 0.09 ± 0.01.
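
Written out, ρ is the t → 0 limit of the ratio of the real to the imaginary part of the nuclear elastic amplitude:

\rho = \left. \frac{\mathrm{Re}\,A^{\mathrm{N}}(s,t)}{\mathrm{Im}\,A^{\mathrm{N}}(s,t)} \right|_{t \to 0},

a standard definition in elastic-scattering analyses; the precisely calculable Coulomb amplitude interferes with the nuclear one at very small |t|, providing the reference that makes ρ experimentally accessible.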

While conventional models based on various pomeron exchanges and related “even-under-crossing” scattering amplitudes can describe ρ and the total proton–proton cross-section in the energy range 0.01–8 TeV, none can describe simultaneously TOTEM’s latest ρ measurement (which is lower than predicted by conventional models) and TOTEM’s total cross-section measurements ranging from 2.76 to 13 TeV (see figure). Combining the two measurements, TOTEM finds better agreement with models that indicate the exchange of three aggregated gluons.

The odderon started out in the early 1970s as a purely mathematical concept. After the advent of QCD, however, theorists showed that the theory not only allowed but required the existence of such a three-gluon compound.

Although the new data favour the existence of the odderon, the TOTEM collaboration prefers to emphasise all the possible meanings and consequences its results might have – in particular concerning the behaviour of the total proton–proton cross section at high energies. If it turns out that the odderon is not entirely responsible for the observed decrease in ρ at 13 TeV, then it could be the first observation that the proton–proton cross-section growth slows down at energies beyond this. Either way, claims the TOTEM team, the results would constitute an important discovery.

“The TOTEM result is in a reasonable agreement with what is expected within the QCD picture, and the inclusion of the odderon certainly improves our description of the existing data on the high-energy elastic proton–proton scattering,” says theorist and QCD expert Valery Khoze of Durham University in the UK. “Conservatively, I would say that this is a strong indication in favour of the experimental observation of a long-awaited but so far experimentally elusive object predicted by QCD.”

Basarab Nicolescu of Babes-Bolyai University in Romania – who co-invented the odderon with the late Leszek Lukaszuk – and Evgenij Martynov of the Bogolyubov Institute for Theoretical Physics in Ukraine go further. In a paper published shortly after the TOTEM result, they write that the new data “can be considered as the first experimental discovery of the odderon”.

TOTEM researchers say they will continue to refine their measurements of ρ and explore how this ratio of scattering amplitudes evolves as a function of the squared four-momentum transfer. A similar “forward” experiment at the LHC called ALFA, which is part of the ATLAS experiment, is also taking part in such t-channel studies of the proton–proton cross section.

However, if a three-gluon compound is being produced in proton–proton collisions, it should also appear in other scattering experiments via direct s-channel production. Such a signature of the odderon could be detected, for example, by the LHCb experiment and also the COMPASS experiment at CERN.

“The discovery of the odderon would signal another bright manifestation of the predictive power of the QCD theory and confirm again that perturbative QCD allows for quite fair predictions in the experimentally available domain,” says Khoze.

Antiprotons to hit the road

A project carried out at the Technische Universität (TU) Darmstadt in Germany, funded by the European Research Council, aims to build a magnetic trap that allows antiprotons to be transported from one location to another. Launched in January, the ultimate goal of the PUMA (antiProton Unstable Matter Annihilation) project is to transfer antiprotons from CERN’s Antiproton Decelerator (AD) to the nearby ISOLDE facility to study exotic nuclear phenomena.

One of PUMA’s physics goals is to explore the occurrence of neutron halos and neutron skins in very neutron-rich radioactive nuclei. By measuring pions emitted after the capture of low-energy antiprotons by nuclei, researchers will be able to determine how often the antiprotons annihilate with the constituent nucleons and therefore deduce their relative densities at the surface of the nucleus. It would be the first time that such effects were investigated in medium-mass nuclei, contributing to a better understanding of the complex nature of nuclei and related astrophysical processes. In the future, PUMA might also allow the spectroscopy of single-particle states in heavy nuclei with atomic numbers above 100, offering new insight into the unknown shell structure at the top of the nuclear landscape.

To make such studies possible, PUMA must trap antiprotons for long enough to be transported by truck for use in nuclear experiments at the ISOLDE facility, located a few hundred metres away from the AD. Keeping the antiprotons from annihilating with ordinary matter during this process is no easy task. The idea is to develop a double-zone trap inside a one-tonne superconducting solenoid magnet and keep it under an extremely high vacuum (10⁻¹⁷ mbar) and at a temperature of 4 K. One region of the trap will confine the antiprotons, while a second zone will host collisions between antiprotons and radioactive nuclei that are produced at ISOLDE but decay too rapidly to be transported and studied elsewhere.
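
A back-of-envelope ideal-gas estimate shows why so extreme a vacuum is specified: at 10⁻¹⁷ mbar (10⁻¹⁵ Pa) and 4 K, the residual-gas density is only

n = \frac{P}{k_{\mathrm{B}} T} = \frac{10^{-15}\,\mathrm{Pa}}{(1.38 \times 10^{-23}\,\mathrm{J\,K^{-1}})(4\,\mathrm{K})} \approx 2 \times 10^{7}\,\mathrm{m^{-3}},

that is, around 20 molecules per cubic centimetre, leaving the trapped antiprotons very little matter to annihilate with over a period of weeks.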

PUMA will eventually trap a record one billion antiprotons at CERN’s GBAR experiment, which is currently being hooked up to the ELENA facility at the AD (CERN Courier December 2016 p16), and keep them for several weeks to allow the measurements to be made. The team plans to build and develop the solenoid, trap and detection apparatus in the next two years, targeting 2022 for first collisions at ISOLDE.

Today, CERN is the only place in the world where low-energy antiprotons are produced, but “this project might lead to the democratisation of the use of antimatter,” says project leader Alexandre Obertelli of TU Darmstadt, who was awarded a €2.55 million five-year grant from the European Research Council. Along with researchers from RIKEN in Japan, CEA Saclay and IPN Orsay in France, Obertelli has submitted a letter of intent to CERN’s experiment committee concerning the future ELENA and ISOLDE activities. The PUMA apparatus could also, at a later stage, provide antiprotons to experiments beyond CERN. “For example, to universities or nuclear-physics laboratories where specific nuclei can be produced, such as the new SPIRAL2 facilities in Caen, France,” says Obertelli.

Taking top physics forward

Measurements of top-quark production at high rapidity in LHC proton–proton collisions provide a unique probe of the Standard Model of particle physics (SM). In this kinematic region, top-pair production is characterised by sizeable rates of quark–antiquark and quark–gluon scattering processes (in addition to gluon–gluon fusion), potentially enhancing sensitivity to physics beyond the SM. Precision measurements at high rapidity can also be used to probe the inner structure of the proton, constraining parton distribution functions at high “Bjorken-x” values and reducing uncertainties on the background process rates in other measurements. Such a “forward” region is uniquely covered with full instrumentation at the LHC by the LHCb detector.

LHCb has now made its first measurement of top-quark production using Run 2 data collected in proton–proton collisions at an energy of 13 TeV. This is the third measurement from LHCb in the sector of top physics, and the first from the collaboration to study the dilepton channel.

The measurement was performed by reconstructing dilepton decays of the top-pair system, looking for high-momentum electrons, muons and b-jets in the acceptance of the LHCb detector, using data recorded in 2015 and 2016. About 87% of selected events correspond to the signal process, making this the highest purity measurement of top physics at LHCb to date. Within the region covered by LHCb, the production cross-section of top-quark pairs (multiplied by the branching fraction to the measured final state) was determined to be 126 fb with a precision of about 20%, with the uncertainty dominated by statistical effects. The measurement is compatible with the SM predictions.
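
Schematically, such a fiducial measurement extracts the product of cross section and branching fraction from the background-subtracted signal yield as

\sigma \times \mathcal{B} = \frac{N_{\mathrm{sig}}}{\varepsilon \, \mathcal{L}_{\mathrm{int}}},

where ε denotes the total efficiency for signal events within the LHCb acceptance and 𝓛_int the integrated luminosity of the 2015–2016 dataset; since the yield enters linearly, the roughly 20% statistical uncertainty on N_sig propagates directly to the quoted 126 fb.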

Such measurements are only now possible at LHCb owing to the increased proton collision energy (13 TeV) of LHC Run 2. While the overall cross section for top-pair production at the LHC has increased by roughly a factor of three with respect to the 8 TeV proton–proton collisions recorded in Run 1, the cross section within the forward coverage of LHCb has increased by about one order of magnitude.

LHCb expects to accumulate, by the end of Run 2, four times more data than that used in the present analysis. With future runs and the upcoming and planned detector upgrades, LHCb will enter a new era of precision studies of forward top physics.

ALICE puts limits on jet quenching in p–Pb collisions

What are the essential requirements for the formation of a quark–gluon plasma (QGP)? Do only the most violent, head-on lead–lead (Pb–Pb) interactions at the LHC provide such conditions? The answer to such questions will provide key insights into the mechanisms driving the QGP towards equilibration, converting kinetic collision energy into a hot and strongly interacting medium.

Recent measurements of proton–lead (p–Pb) and proton–proton (pp) collisions at the LHC have shown intriguing hints of QGP-like behaviour in such systems, which were initially thought to be too small for QGP formation. Experimentalists classify p–Pb collisions by a parameter called the event activity (EA), which is characterised by particle or energy production in the forward Pb-going direction; the most violent p–Pb collisions, with the largest EA, exhibit correlations that are characteristic of the collective flow of the QGP. Verification of this picture requires measurements of other QGP signals, notably the “quenching” of energetic quark and gluon jets as they propagate through the dense QCD medium.

Jets arise from the scattering of quarks and gluons in the incoming projectiles, and are produced predominantly in azimuthally back-to-back pairs. The first jet-quenching measurements in p–Pb collisions looked for suppression of the inclusive production rate of high-momentum hadrons and jets by counting all such objects and comparing them to a reference rate from proton–proton (pp) collisions. Some inclusive suppression measurements indicate significant jet suppression in the highest-EA p–Pb collisions. Quantitative comparison to the pp reference spectrum requires the assumption that high EA is correlated with central p–Pb collisions, in which the proton ploughs through the centre of the Pb nucleus. However, the relation between the forward particle and energy production used to measure EA and the geometry of a p–Pb collision may be modified in events containing jets, complicating the interpretation. An approach to jet quenching in p–Pb collisions that does not invoke this assumption is therefore needed.

For this purpose, the ALICE collaboration has reported measurements of the semi-inclusive distribution of jets recoiling from a high-momentum hadron trigger (h+jet) in p–Pb collisions, as a function of EA. The h+jet distribution is self-normalising, owing to the back-to-back nature of jet-pair production: jet quenching would be observed as a reduction in the jet rate per trigger, without comparison to a pp reference spectrum or the assumption that high EA corresponds to central p–Pb collisions. The analysis applies a data-driven statistical approach to correct for the large uncorrelated background, enabling the accurate measurement of recoil jets over a broad phase space in the complex LHC environment.
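
In ALICE’s published h+jet analyses, this observable is constructed as the difference between per-trigger recoil-jet yields for two exclusive trigger-hadron momentum classes, a signal class TTsig and a reference class TTref:

\Delta_{\mathrm{recoil}} = \frac{1}{N_{\mathrm{trig}}} \frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T,jet}}} \bigg|_{\mathrm{TT_{sig}}} - c_{\mathrm{ref}} \, \frac{1}{N_{\mathrm{trig}}} \frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T,jet}}} \bigg|_{\mathrm{TT_{ref}}},

where c_ref is a scale factor close to unity; jets uncorrelated with the trigger hadron contribute equally to both terms and cancel in the difference.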

The upper panel of the figure shows distributions of this observable, Δrecoil, for p–Pb collisions with high and low EA. Jet quenching would transport energy out of the jet cone, thereby suppressing Δrecoil at high EA. The ratio of the high-EA to low-EA distributions is, however, consistent with unity at all jet energies, indicating negligible jet-quenching effects within the uncertainties.

These data provide a limit on the magnitude of medium-induced energy transport to large angles due to jet quenching: for events with high EA, medium-induced charged-energy transport out of the jet cone is less than 0.4 GeV/c (90% confidence level). This limit is a factor of 20 smaller than the magnitude of jet quenching measured using this observable in Pb–Pb collisions, and stands in contrast to some of the current inclusive jet-suppression measurements in p–Pb collisions. The result challenges theoretical models that predicted strong jet quenching in p–Pb collisions. Comparison of these data with the surviving models promises new insight into QGP formation in small systems, and into the fundamental processes of equilibration in QCD.

ATLAS illuminates the Higgs boson at 13 TeV

The ATLAS collaboration has released a comprehensive set of results that illuminate the properties of the Higgs boson with improved precision, using its decay into two photons in LHC collisions recorded at a centre-of-mass energy of 13 TeV.

The Higgs-to-two-photons decay played a crucial role in the discovery of the Higgs boson in 2012 owing to the excellent mass resolution and well-modelled backgrounds in this channel. Following the discovery, the properties of the Higgs boson can be probed more precisely using the large 13 TeV dataset.

One major result of the new study is the measurement of the signal strength μ, defined as the ratio of the number of observed to expected Higgs-boson events. The signal strength is measured to be μ = 0.99 +0.15/−0.14, in good agreement with the Standard Model expectation. The precision is improved by a factor of two with respect to the previous measurements at energies of 7 and 8 TeV. The precision of signal-strength measurements of individual Higgs-boson production modes is also improved significantly thanks to a better understanding of the ATLAS detector, the increased rate of Higgs production at 13 TeV and the extended use of machine-learning techniques to identify specific production processes.
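
In terms of rates, the signal strength can equivalently be written as

\mu = \frac{(\sigma \times \mathcal{B})_{\mathrm{obs}}}{(\sigma \times \mathcal{B})_{\mathrm{SM}}},

so that μ = 1 corresponds exactly to the SM expectation; the measured value of 0.99 is consistent with unity well within one standard deviation.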

Another key result of the present study is the measurement of nine simplified template cross sections (STXS), which refer to the cross sections of specific Higgs-production channels measured in different kinematic regions. Measurements of STXS are corrected for the impact of the Higgs-boson decay and incorporate the acceptance of the experiment, so that they can be combined across Higgs-boson channels and experiments (see figure, left).

The properties of the Higgs boson are further investigated by measuring 20 differential and two double-differential cross sections. The measured quantities include the Higgs-boson transverse momentum (figure, right) and rapidity, the number and properties of jets produced in association with the Higgs boson, and several angular variables that probe its spin and CP quantum numbers. Five of these distributions are used to search for new CP-even and CP-odd couplings between the Higgs boson and vector bosons or gluons. No significant deviations from the Standard Model predictions are observed.

Collectively, this new set of results at the highest LHC energies sheds light on the fundamental properties of the Higgs boson and extends our knowledge obtained from the first running period of the LHC.

CMS searches for third-generation leptoquarks

Anomalies in decays of B mesons, in which a bottom quark changes flavour to become a charm quark, reported by the LHCb, Belle and BaBar collaborations, have triggered considerable excitement in the particle-physics community (see “Beauty quarks test lepton universality”). The combined results of these experiments suggest that the decay rates of B → D τ ν and B → D* τ ν differ by more than four standard deviations from the Standard Model (SM) predictions.

Several phenomenological studies have suggested that these differences could be explained by the existence of hypothetical new particles called leptoquarks (LQs), which couple to both leptons and quarks. Such particles appear naturally in several scenarios of new physics, including models inspired by grand unified theories or Higgs-compositeness models. Leptoquarks that couple to the third generation of SM fermions (top and bottom quarks, and the tau lepton and its associated neutrino) are considered to be of particular interest to explain these flavour anomalies.

Leptoquarks coupling to fermions of the first and also the second generation of the SM have been the target of many searches by collider experiments at the successive energy frontiers (SPS, LEP, HERA, Tevatron). The most sensitive searches have been performed at the LHC, resulting in the exclusion of LQs with masses below 1.1 TeV. Searches for third-generation LQs were first performed at the Tevatron, and the baton has now been passed to the LHC.

The first investigation by the CMS collaboration used events recorded at an energy of 8 TeV during LHC Run 1, and targeted LQ pair production via the strong interaction in the decay channel of the LQ to a top quark and a tau lepton. The result of this search, reported by CMS in 2015, was that third-generation LQs with masses below 0.685 TeV were excluded. These early results have now been extended using the 2016 dataset at 13 TeV, employing more sophisticated analysis methods. The new search investigates final states containing an electron or a muon, one or two hadronically decaying tau leptons, and additional jets. To achieve sensitivity to the largest possible range of LQ masses, the analysis uses several event categories in which deviations from the SM predictions are searched for. The SM backgrounds mainly consist of top-quark pair production and W + jets events, whose contributions are derived from the data rather than from simulation.

No significant indication of the existence of third-generation LQs has yet been found in any of the categories studied (see left-hand figure). The collaboration was therefore able to place exclusion limits on the product of the production cross section and branching fraction as small as 0.01 pb, which translate into lower limits on LQ masses extending above 1 TeV.

Combining this result with a search for the pair production of supersymmetric bottom squarks, which can be reinterpreted as a search for LQs decaying to a bottom quark and a tau neutrino, yields limits that probe the TeV mass range over all possible LQ branching ratios (see figure, right). Another recent search targets different LQs that decay into a bottom quark and a tau lepton. Using a smaller dataset at 13 TeV, this search excludes masses below 0.85 TeV for a unity branching fraction.

This is the first time that searches at the LHC have achieved sufficient sensitivity to explore the mass range favoured by phenomenological analyses of LQs and the current flavour anomalies. No hints of these states have been found, but analyses are under way using larger datasets and including additional signatures.

Spotting the first extragalactic planets

Three decades after astronomers first detected planets outside our solar system, exoplanets are now being discovered at a rate of hundreds per year. Although it is reasonable to assume that galaxies other than our own contain planets, no direct detections of such objects have been made owing to their small size and their large distances from Earth.

Now, however, radiation emitted around a distant black hole has revealed the existence of extragalactic planets in a galaxy 3.8 billion light years away, located between the black hole and us. The planets, which have no way of being directly detected using any kind of existing telescope, are visible thanks to the small gravitational distortions they inflict on X-rays emanating from the more distant black hole.

The discovery was made by Xinyu Dai and Eduardo Guerras from the University of Oklahoma in the US using data from the Chandra X-ray Observatory. The distant black hole in question, which forms the supermassive centre of the quasar RX J1131-1231, is surrounded by an accretion disk that heats up as it orbits and emits radiation at X-ray wavelengths. Thanks to a fortunate cosmic alignment, this radiation is amplified by gravitational lensing and can therefore be studied accurately. The lensing galaxy positioned between Earth and the quasar causes light from RX J1131-1231 to bend around it, so that the quasar appears to us not as a normal point source but as a ring with four bright spots (see figure). The spots are the result of radiation emitted from the same region of the quasar that followed different paths around the lensing galaxy before reaching Earth.

Dai and Guerras focused on a strong iron emission line that reveals details of the accretion disk, and found that this line is not only shifted in energy but that the size of the shift varies with time. Although a shift in the frequency of such a line is common, for example due to relative velocities between observer and source, its position is generally very stable with time when studying a specific object. Based on the 38 occasions on which RX J1131-1231 had been observed by the Chandra satellite during the past decade, the Oklahoma duo found that the energy varied significantly between observations in all four bright points of the ring.

This feature can be explained using microlensing. The intermediate lensing galaxy is not a uniform mass but rather consists of small point masses, mainly stars and planets. As the relatively small objects within the lensing galaxy move, the light from the quasar passing through it is deflected in slightly different ways, causing different parts of the accretion disk to be amplified at different levels over time. As the different parts of the disk appear to emit at different energies, the measured variations in the energy of this emission line can be explained by the movement of objects within the lensing galaxy. The question is: what objects could cause such changes over time scales of several years?
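
The mass dependence of the variability can be read off the standard point-lens formulae, in which the angular Einstein radius and the time to cross it scale with the square root of the lens mass M:

\theta_E = \sqrt{\frac{4GM}{c^2} \, \frac{D_{LS}}{D_L D_S}}, \qquad t_E = \frac{\theta_E D_L}{v_\perp} \propto \sqrt{M},

where D_L, D_S and D_LS are the distances to the lens, to the source and between the two, and v⊥ is the transverse lens velocity. A Jupiter-mass lens (about 10⁻³ solar masses) therefore varies roughly 30 times faster than a solar-mass star in the same geometry, which is why planet-mass objects are favoured for variability on timescales of years.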

Stars, being so numerous and massive, are one good candidate explanation. But Dai and Guerras calculated that the chance of a star causing such short-term variations is very small. A better candidate, suggested by fits to analytical models, is unbound planets, which do not orbit a star. The Chandra data were best described by a model in which, for each star, there are more than 2000 unbound planets with masses between that of the Moon and Jupiter. Although the exact population of such planets is not well known even for our own galaxy, this number is well within existing constraints. These observations thus form the best evidence yet for the existence of extragalactic planets and, by also providing the number of such planets in that galaxy, teach us something about the number of unbound planets we can expect in our own.

Beauty quarks test lepton universality

Of all the puzzling features of the Standard Model of particle physics (SM), one of the most vexing is the arrangement of the elementary particles into families or generations. Each pair of fermions comes in three, and apparently only three, copies: the electron, muon and tau leptons and their associated neutrinos, and three pairs of quarks. The only known difference between the generations is the different strengths of their interactions with the Higgs field, known as the Yukawa couplings. This results in different masses for each particle, giving a wide range of experimental signatures.

In the case of the charged leptons (electrons, muons and taus), this pattern also results in one simple post-diction, known as lepton universality (LU): other than effects related to their different masses, all the SM interactions treat the three charged leptons identically. During the past couple of decades, LU has been tested to sub-percent precision in interactions of photons and weak bosons, and in transitions between light quarks. These measurements were made, for example, at the Large Electron–Positron (LEP) collider at CERN in decays of W and Z bosons, by the PIENU and NA62 fixed-target experiments in decays of pions and kaons, and in J/ψ decays by the BES-III, CLEO and KEDR collaborations. However, LU has never been established to such a degree of precision in decays of heavy quarks.

Measurements from Run 1 of decays of beauty hadrons at the LHCb experiment, in addition to earlier results from the B-factories Belle at KEKB and BaBar at PEP-II, have hinted at potential deviations from LU. None is statistically significant on its own but, taken together, the results have led to speculation that non-SM forces, or phenomena that treat leptons differently depending on their flavour, may be at play. If a deviation from LU were to be confirmed, it would be clear evidence for physics processes beyond the SM and perhaps a sign that we are finally moving towards an understanding of the structure of the fermions.

Two classes

The results so far concern two classes of transitions in b-quark hadron decays, exemplified in figure 1. Measurements of highly suppressed flavour-changing neutral-current (FCNC) decays, b → sℓ⁺ℓ⁻, hint at a difference involving muons and electrons, while measurements of the more frequent leading-order or tree-level decays, b → cℓ⁺νℓ, hint at a difference between muons and taus. These two classes of decays present very different challenges, both experimentally and theoretically. The latter, semi-leptonic, decays of b-quark hadrons proceed through tree-level diagrams in which a virtual W boson decays into a lepton–neutrino pair. Measurements of decays involving electrons and muons show no deviations with respect to the SM within the current level of precision. In contrast, measurements of decays involving τ leptons are only marginally in agreement with the SM expectation. The quantity that is experimentally measured is the ratio of branching fractions RD(*) = BF(B → D(*)τ⁺ντ)/BF(B → D(*)ℓ⁺νℓ), with ℓ = e or μ. This ratio is precisely predicted in the SM owing to the cancellation of the leading uncertainty that stems from the knowledge of the decay form factors.

Interest in these decay modes was heightened in 2012 when the BaBar collaboration found values for RD and RD* above the SM prediction. This was followed in 2015 by results from the Belle collaboration that were also consistently high. Experimentally, such semi-tauonic beauty decays are extremely difficult to measure because taus are not reconstructed directly and at least two undetected neutrinos are present in the final state. To get around this, the BaBar and Belle experiments used both B mesons produced in Υ(4S) decays. By reconstructing the decay of one B meson in the event, the teams were able to infer the recoil of the other, “signal”, B decay. This tagging technique, based on the known momentum of the initial-state electron–positron pair and therefore that of the Υ(4S), allows the determination of the momentum of the signal B, the reconstruction of its decay under the assumption that only neutrinos escape detection, and the separation of signal and background.
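
Schematically, the tagging exploits momentum conservation:

\vec{p}_{B_{\mathrm{sig}}} = \vec{p}_{\Upsilon(4S)} - \vec{p}_{B_{\mathrm{tag}}},

so once the tag-side B is fully reconstructed, the signal-B momentum is fixed and the mass of the undetected neutrino system can be computed event by event.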

The study of beauty-hadron decays to final states involving τ leptons was deemed not to be feasible at hadron colliders such as the LHC. This is a result of the unknown momentum of the colliding partons and the significantly more complex environment with respect to electron–positron B-factories in terms of particle densities, detector occupancy, and trigger and detection efficiencies. However, due to the significant Lorentz boost and the excellent performance of the LHCb vertex locator, the decay vertices of the b-hadrons produced at the LHC are well separated from the proton–proton interaction point. This enables the collaboration to approximate the b-hadron momentum and its decay kinematics with sufficient resolution to preserve the discrimination between signal and background.

Exploiting the tau

The first measurement of RD* at a hadron collider was performed by LHCb researchers in 2015 using the decays of the τ lepton into a muon and two neutrinos. This measurement again came out higher than the SM prediction, thus strengthening the tension between theory and experiment raised by Belle and BaBar.

In 2017, LHCb reported another RD* measurement by exploiting the decay of the τ lepton into three charged pions and a neutrino. This measurement was considered to be even more difficult than the previous one due to the large backgrounds from B decays and the apparent lack of discriminating variables. Nevertheless, the presence of a τ decay vertex significantly detached from the b-hadron decay vertex allows the most abundant backgrounds to be suppressed. The residual background, due to b-hadrons decaying to a D* and another charm meson that subsequently gives three pions in a detached vertex topology, is reduced by exploiting the different resonant structure of the three-pion system. The resulting measurement of RD* is larger than, although compatible with, the SM prediction, and consistent with previous determinations.

The combined world average of the RD* and RD measurements, known to precisions of 5% and 10%, respectively, remains in tension with the SM prediction at the level of four standard deviations (figure 2). This provides solid motivation for further LU tests in semi-tauonic decays of B hadrons. In the coming years, the LHCb collaboration will therefore extend the RD* measurement to the datasets collected in Run 2 and continue to study semi-tauonic decays of other b-quark hadrons.

In early 2018 the first measurement of RJ/ψ was performed, probing LU in the Bc sector. While the result was higher than the SM prediction, the current uncertainty is large and the prediction itself is not yet firm. However, it promises to be an interesting test in the future. An important extension of this rich physics programme, already being explored by Belle, will consider observables other than branching fractions, such as the polarisation and angular distributions of the final-state particles. This will provide crucial insight when interpreting the current anomalies in terms of new-physics models.

The plot thickens

The results described above concern tree-level semi-leptonic decays. In contrast, the other relevant class of transitions for testing LU, b → sℓ⁺ℓ⁻, is highly suppressed because there are no tree-level FCNCs in the SM. This increases the sensitivity to the possible existence of new physics. The presence of new particles contributing to these processes could lead to a sizeable increase or decrease in the rate of particular decays, or change the angular distribution of the final-state particles. Tests of LU in these decays involve measurements of the ratio of branching fractions between muon and electron decay modes RK(*) = BF(B → K(*)μ⁺μ⁻)/BF(B → K(*)e⁺e⁻).

These modes represent a considerable challenge because the highly energetic LHC environment causes electrons to emit a large amount of bremsstrahlung radiation as they traverse the material of the LHCb detector. This effect complicates the analysis procedure, for example making it more difficult to separate the signal and backgrounds where one or more particles have not been reconstructed. Fortunately, there are several control samples in the data that can be used to study electron reconstruction effects, such as the resonant decays B → K(*)(J/ψ → e⁺e⁻), and ultimately the precision is dominated by the statistical uncertainty of the decays involving electrons. Despite this, the LHCb measurements dominate the world precision.

Three measurements of RK(*) have been performed by the LHCb experiment with the Run 1 data: two in the B⁰ → K*⁰ℓ⁺ℓ⁻ decay mode (RK*) and one in the B⁺ → K⁺ℓ⁺ℓ⁻ decay mode (RK). The results are more precise than those performed at previous experiments, and all have a tendency to sit below the SM predictions (figure 3). The BaBar and Belle experiments have also measured these LU ratios and found them to be consistent with the SM, albeit with a larger uncertainty.

Assuming that these deviations arise from new physics rather than being statistical fluctuations, one can ask: what is driving the RK and RK* anomalies? Is the electron decay rate being enhanced or the muon rate suppressed, or both? One could get an answer by looking at the differential branching fractions of the decays B⁺ → K⁺μ⁺μ⁻, B⁰ → K*⁰μ⁺μ⁻ and Bs⁰ → φμ⁺μ⁻. Although with limited statistical significance, all these branching fractions consistently sit below the SM predictions, indicating that something could be destructively interfering with the muonic decay amplitude. If a new particle were really contributing to the B-decay amplitude, then one would naturally expect it to also influence the angular distribution of the decay products. Intriguingly, by studying the angular distribution of B⁰ → K*⁰μ⁺μ⁻ decays one observes discrepancies that can be interpreted as being compatible with the expectation based on the central values of RK and RK*.

Can we conclude that the discrepancies are due to new physics? Unfortunately not. Quantities such as branching fractions and angular observables are affected by non-perturbative QCD effects. In principle these can be controlled, but there is an open question about whether the interference of fully hadronic decays such as B⁰ → K*⁰J/ψ could mimic some of the discrepancies seen. This contribution is very hard to calculate and will most likely need to be controlled directly using data.

All the results so far probing LU at LHCb are based on LHC Run 1 data recorded at centre-of-mass energies of 7 and 8 TeV. Measurements of the RK and RK* ratios can be significantly improved in the coming years with the analysis of the full Run 2 data at an energy of 13 TeV. LHCb will also broaden its search for LU violation to other types of FCNC decays, such as Bs⁰ → φμ⁺μ⁻. Another interesting avenue, recently taken up by Belle, is to compare the angular distributions of the decays B⁰ → K*⁰μ⁺μ⁻ and B⁰ → K*⁰e⁺e⁻. If LU were indeed violated, then one would expect to see differences between the angular distributions of muons and electrons as well as in the decay rates.

Potential explanations

It is possible that the anomalies seen in tree-level and FCNC decays are related. The tree-level decays are sensitive to new physics at the TeV scale, whereas the FCNC decays are sensitive to scales of order 10 TeV on account of the SM suppression of loop-level decays. A single model explaining both anomalies must therefore have a suppressed contribution to b → sℓ⁺ℓ⁻ decays compared to b → cτ⁺ντ decays. This can be achieved either by forbidding FCNC processes at tree level, as in the SM, or by a hierarchical flavour structure in which the coupling to third-generation leptons is enhanced with respect to muons. Amongst several speculations, the most promising model in this regard introduces the well-known concept of leptoquarks, particles that carry both lepton and quark quantum numbers (figure 4). The mass scale for such a leptoquark could be around 1 TeV, which is clearly very interesting for direct searches at the LHC.

The theoretical options open up if one would like to explain only one set of anomalies. For example, the loop-level anomalies can be explained with a Z′ boson of a few TeV in mass, although the allowed parameter space for such a model competes with the constraints imposed by Bs matter–antimatter oscillations. Overall, many models have been proposed that can explain one or both of these anomalies, and differentiating between them would become an exciting challenge if the anomalies were to be confirmed.

In any case, the amount of data analysed for the measurements described here corresponds to just one-third of what will be available by the end of 2018 at LHCb. Meanwhile, following a major overhaul of the KEK accelerator, the Belle-II experiment is about to start operations in Japan and is expected to collect data until 2025 (CERN Courier September 2016 p32). The two experiments are designed for the study of heavy-flavour physics, and their complementary characteristics will allow researchers to perform ultra-precise measurements of decays of b-quark hadrons. Hence, the prospects for continuing to test lepton universality in the next decade and beyond are excellent.
