Shining light on the precision frontier

The 30th International Symposium on Lepton Photon Interactions at High Energies, hosted online by the University of Manchester from 10 to 14 January, saw more than 500 physicists from around the world engage in a broad science programme. The Lepton Photon series dates back to the 1960s and takes place every two years. This edition marked the conference's first return to the UK in more than 50 years, with its original August slot moved to January because of Covid-19 restrictions. The agenda was spread across the day to improve accessibility in different time zones, posters were presented via pre-recorded videos, and three poster prizes were awarded following a public vote.

With 2022 marking the ten-year anniversary of the Higgs-boson discovery, it was appropriate that the conference kicked off with an experimental Higgs-summary talk. Both the ATLAS and CMS collaborations showcased their latest high-precision measurements of Higgs-boson properties and searches for physics beyond the Standard Model using the Higgs boson as a portal. ATLAS presented a new combination of the Higgs total and differential cross-section measurements in the two-photon and four-lepton channels, while CMS shared the first full Run-2 search for resonant di-Higgs production in several multi-lepton final states.

The LHC experiments continue to demonstrate the power of hadron colliders to test the electroweak sector. Notable new results included the first observation of opposite-charge WWjj production at CMS, the first tri-boson (WWW) observation at ATLAS, and LHCb entering the game of W-boson mass measurements. A highlight of the talks covering QCD topics was a combined fit of the parton distribution functions of the proton to differential cross-section measurements from ATLAS and HERA data. A wide range of new-physics searches was presented, including a dark-photon search from ATLAS with the full Run-2 data, and a CMS search for new scalars decaying into final states with Higgs bosons.

In flavour physics, the pattern of anomalies in rare leptonic and semi-leptonic processes continues to intrigue. Highlights in this area included new tests of lepton universality from LHCb in Λb0 → Λc+ℓνℓ decays (ℓ = e, μ, τ), where the decay involving a τ lepton was observed for the first time, and from Belle in Ωc0 → Ω−ℓ+νℓ decays, where the ratio of the e and μ final-state branching ratios was found to agree with the expectation of unity and where the μ mode was measured for the first time. Similar studies of rare leptonic decays are now also taking place in the charm sector, with the BESIII collaboration testing e–μ universality in a second decay mode and confirming agreement with the Standard Model. Participants also heard about the latest searches for the ultra-rare decays K → πνν: from KOTO, searching for the neutral-kaon decay mode, and from NA62, which now has 3.4σ evidence for the charged-kaon decay mode.

With the 2021 update on muon g-2 from Fermilab, and with the MEG-II, DeeMe and Mu3e experiments getting ready to search for muon-to-electron transitions, there is much excitement about charged-lepton physics. CP violation in beauty and charm remains a hot topic, with updates from LHCb, Belle and BESIII on D0 and Bs oscillations and the CKM angle γ. In all these areas, the theoretical community continues to push the boundaries to make improved predictions. Among other things, theorists presented the latest global fits of Wilson coefficients, and several welcome developments in lattice QCD.

The highlights from the neutrino sector included the low-energy excess search by MicroBooNE and the observation of the CNO cycle of solar neutrinos by Borexino. The latest results from the long-baseline experiments – T2K and, more recently, NOvA – are starting to hint at large CP-violating effects in neutrino oscillations.

A series of talks on dark-matter searches spanned collider experiments, direct detection and astrophysical signatures. Some interesting anomalies persist, such as the DAMA annual modulation and the XENON1T low-energy excess. These will be challenged by a suite of next-generation detectors, such as PandaX-4T, XENONnT, LZ and DarkSide-20k.

The conference also included a rich programme of talks covering astrophysics, with an emphasis on gravitational waves and multi-messenger astronomy. Hot off the press was a combined search for spatial correlations between neutrinos and ultra-high-energy cosmic rays, using data from the ANTARES, IceCube, Auger and TA collaborations, with no sign yet of a connection.

As well as many new results from experiments in operation, the conference included sessions devoted to R&D in accelerators, detectors, software and computing, covering both collider and non-collider experiments. With many new facilities proposed for the medium and long term, the technological challenges, which include power consumption, data rates and radiation tolerance, are immense and demand significant effort in harnessing promising avenues such as high-temperature superconductors, quantum sensors and specialised computing accelerators. Common to all areas is the need to train and retain highly skilled people to lead these efforts in future.

A firm part of the Lepton Photon plenary programme is the discussion of diversity, inclusion and outreach. A lively panel discussion covered many aspects of the former two topics and ended with a key message to the whole community: be an ally and take an active stance in support of minorities. The conference ended with traditional reports from the IUPAP commission on particles and fields and from ICFA, followed by strategy updates from Snowmass and the African Strategy for Fundamental and Applied Physics. While Snowmass is an established process for regular updates of the US strategy for the field, based on widespread community input both from the US and internationally, the African strategy is the first of its kind and is testament to the continent's ambition and growing importance in physics research. The next conference will take place in Melbourne in July 2023.

Snowmass promises bright future

Every seven to ten years, the US high-energy physics community comes together to re-evaluate and update its vision of the field. These wide-ranging exercises, organised since 1982 by the Division of Particles and Fields (DPF) of the American Physical Society (APS), are now known as the Snowmass Community Studies on account of the final drafting having historically taken place in Snowmass, Colorado. They include all related disciplines that contribute to elementary particle physics and welcome the participation of physicists from outside the US.

Snowmass exists to identify the physics issues that should be addressed and possible approaches to pursuing them, but does not seek to specify which projects should be carried out. That task is accomplished by a Particle Physics Project Prioritization Panel (P5), a subpanel of the US High Energy Physics Advisory Panel (HEPAP), which uses the Snowmass output to develop programmatic priorities based on specific budget scenarios and provides recommendations to US funding agencies. Snowmass 2013 and the subsequent 2014 P5 roadmap recommended a suite of new projects, including: the HL-LHC upgrade; DUNE/LBNF; a short-baseline neutrino programme; the PIP-II proton source upgrade; the Mu2e experiment; the LSST camera and DESI; the LUX-ZEPLIN and CDMS dark-matter searches; preparation for a new cosmic-microwave-background explorer; and strong investment in R&D for future accelerators. With many of these projects now under construction, it is vital to prepare the next round of compelling US particle-physics initiatives.

In April 2020 we kicked off a new Snowmass study. Initially scheduled to conclude with a workshop at the University of Washington in Seattle in July 2021, the process was paused due to COVID-19. On 24 September, at a virtual “Snowmass Day” meeting, we declared the Snowmass process officially resumed, with the Seattle workshop scheduled for 17 to 26 July.

White papers describing ideas, proposals and projects are due by 15 March for discussion

The Snowmass 2021 study is divided into 10 “frontiers”: energy; neutrino physics; rare processes and precision measurements; cosmic; theory; accelerator; instrumentation; computation; underground facilities; and community engagement. Each frontier is led by two or three conveners and is divided into between six and 11 topical groups – with community development, demographics, and diversity and inclusion addressed across all frontiers. A Snowmass early-career organisation has also been formed to assist young physicists in contributing to the process. The whole exercise is overseen by a steering group, which includes the DPF chair line, and international representation is provided by an advisory group chosen by national and regional physics societies.

Informing Snowmass 2021 are many recent results: Higgs-boson properties obtained by ATLAS and CMS; the measurement of the angle θ13 in the neutrino mixing matrix; evidence for anomalies in B-meson decays from LHCb; and the tension between Fermilab’s measurement of muon g-2 and the Standard Model prediction. These topics will continue to be explored in current experiments. Snowmass 2021 and the latest European strategy update focus on what comes next.

Collider matters

In the Snowmass process, we collect all ideas, whether they are large or small, expensive or less so, require international collaboration or not, and are hosted in the US or elsewhere. One topic of intense interest worldwide is the next generation of colliders, both to study the Higgs boson with sub-percent level precision and to directly search for new phenomena in the multi-TeV regime. The proposed Higgs factories require some final development that could be completed in a few years, which would enable a decision on which machine to build, and the start of negotiations to fund it, as an international project. Machines to explore the multi-TeV terrain require significantly more R&D to develop and industrialise the necessary new technologies. We expect this Snowmass/P5 process to set the direction for US participation in this R&D effort and future construction projects. We also look forward to new experiments and upgrades to existing experiments in neutrino physics, rare decays and astrophysics, along with new R&D initiatives in detectors, computing, accelerators and theory.

White papers describing ideas, proposals and projects are due by 15 March 2022 for discussion at the Seattle meeting, where a draft report will be produced and then submitted to HEPAP and the APS in the fall. With hard work and good will, we expect to emerge from the Snowmass/P5 process with a grand vision for a vibrant US high-energy physics programme over the 10 years starting from 2025 and with a roadmap for large new initiatives that will come to fruition in the 2030s. Please join us and contribute your ideas to shaping our future!

Exploring the early universe with gravitational waves

Seven years after the direct detection of gravitational waves (GWs), particle physicists around the world are preparing for the next milestone in GW astronomy: the search for a cosmological stochastic GW background. Current and planned GW observatories roughly cover 12 orders of magnitude from the nanohertz to kilohertz regimes, in which astrophysical models predict sizeable GW signals from the mergers of compact objects such as black holes and neutron stars, as observed by the LIGO/Virgo collaborations. It is also expected that the universe contains a randomly distributed GW background, which is yet to be detected. This could be the result of various known and unknown astrophysical signals that are too weak to be resolved individually, or could be due to hypothetical processes in the very early universe, such as phase transitions at high temperatures. The most promising region to search for the latter is arguably the ultra-high-frequency (UHF) regime encompassing megahertz and gigahertz GWs, which is beyond the reach of current detectors. The detection of such a stochastic GW background could therefore offer a powerful probe of the early universe and of physics beyond the Standard Model.

On 12-15 October a virtual workshop hosted by CERN explored theoretical models and detector concepts targeting the UHF GW regime. Following an initial meeting at ICTP Trieste in 2019 and the publication of a Living Review on UHF GWs, the workshop aimed to bring together theorists and experimentalists to discuss feasibility studies and prototypes of existing detector concepts, and to review more recent proposals.

The wide range of detector concepts discussed demonstrates the rapid evolution of this field and shows the difficulty in choosing the optimal strategy. Tailoring "light shining through wall" experiments for GWs is one promising approach. In the presence of a static magnetic field, general relativity in conjunction with electrodynamics allows GWs to generate electromagnetic radiation at the same frequency, similar to the conversion of the hypothetical axion into photons. In this case, the bounds placed on axion-photon couplings, for example those determined by the CAST and OSQAR experiments at CERN or the ALPS experiments at DESY, can be recast as GW bounds.

The sheer variety of systems offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field 

Another approach, echoing that of the very first GW searches in the late 1960s, is to detect the mechanical deformation induced by GWs, the principle behind resonant-bar detectors, which can be implemented in the UHF regime using centimetre-sized bulk acoustic wave devices common in radio-frequency engineering. Resonant microwave cavities are another approach to detect interactions between GWs and electromagnetism; they have been explored in the past, for example by the MAGO collaboration at CERN (2004-2007), and proposed as a modified version of the ADMX experiment at the University of Washington. Further proposals include the precise measurement of optically levitated nanoparticles, transitions in Bose-Einstein condensates, mesoscopic quantum systems, cosmological detectors and magnon systems. The sheer variety of systems, the majority of which are much smaller and less costly than long-baseline interferometric detectors, offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field. Working groups set up during the workshop will investigate some of the most promising ideas in more detail in the coming months.

Complementing the discussion about detector concepts, theorists presented BSM models that predict violent processes in the early universe, which could source strong GW signals. These arise, for example, in some models of cosmic inflation, during the transition between cosmic inflation and the radiation-dominated universe, or from spontaneous symmetry breaking. Since these processes occur isotropically everywhere in the universe, the expected signal is a diffuse GW background. Moreover, some relics of these processes, such as topological defects and primordial black holes, may have survived into the late universe and may still be actively emitting GWs.

The current sensitivity of all proposed and existing detector concepts is several orders of magnitude away from the expected cosmological GW signals. Given that the first laser-interferometer GW detectors built in the 1970s were eight orders of magnitude below the sensitivity of the currently operating LIGO/Virgo/KAGRA observatories, however, there is every reason to think that the search for UHF GWs is the beginning and not the end of a story.  

LHCb probes lepton universality with taus

The LHCb collaboration has made the first observation of the semileptonic baryon decay Λb0→ Λc+τντ, and used it to carry out a new test of lepton-flavour universality. Presented on 10 January at the 30th Lepton Photon conference organised by the University of Manchester, the result brings a further tool to understand the flavour anomalies reported by LHCb and other experiments in recent years.

Lepton-flavour universality (LFU) is the principle that the weak interaction couples to electrons, muons and tau leptons equally. Decays of hadrons to electrons, muons and tau leptons are therefore predicted to occur at the same rate, once differences in the lepton masses are taken into account.

During the past few years, physicists have seen hints that some processes might not respect LFU. One of the strongest comes from b→cℓν (ℓ=μ,τ) transitions in B-meson decays, as quantified by the parameter R(D), which measures the ratio of the branching fractions of B→Dτντ and B→Dℓνℓ. The combined deviation from precise Standard Model predictions of R(D*) and R(D) as measured by the BaBar, Belle and LHCb collaborations amounts to around 3.4σ. R(J/ψ), which concerns the branching ratios of Bc+→J/ψτ+ντ and Bc+→J/ψμ+νμ, was also found by LHCb to be larger than expected, but only at the level of around 2σ. Another key test of LFU involves the flavour-changing neutral-current (FCNC) quark transition b→sℓ+ℓ−, for which several channels suggest that electrons are produced at a greater rate than muons. The largest effect comes from the ratio of B+→K+μ+μ− to B+→K+e+e− decays, for which LHCb finds R(K) to lie 3.1σ from the Standard Model expectation.

Taken individually, none of the measurements are significant. But together they present an intriguing pattern. New-physics models based on leptoquarks have been proposed as possible explanations for the anomalies observed in semileptonic B-meson decays and in FCNC reactions.

Baryons entered the fray in late 2019, when LHCb compared the rates of Λb0→pK−e+e− and Λb0→pK−μ+μ− decays. Although R(pK) also erred on the side of fewer muons than electrons, it was found to be in agreement with the Standard Model within the limited statistics. The latest LHCb analysis, which compared the branching ratio of Λb0→Λc+τντ from a sample of around 350 events selected from LHC Run 1 to that of Λb0→Λc+μνμ measured by the former DELPHI experiment at LEP, found R(Λc+) = 0.242±0.026(stat)±0.040(syst)±0.059(ext), in good agreement (approximately 1σ) with the Standard Model prediction of 0.324±0.004.
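As a quick illustration (not part of the LHCb analysis itself), the quoted agreement at roughly the 1σ level follows from adding the published uncertainties in quadrature, assuming they are independent and Gaussian:

```python
import math

# Quoted LHCb result and Standard Model prediction for R(Lambda_c+)
measured, sm = 0.242, 0.324
stat, syst, ext = 0.026, 0.040, 0.059   # measurement uncertainties
sm_err = 0.004                          # prediction uncertainty

# Combine all uncertainties in quadrature
total_err = math.sqrt(stat**2 + syst**2 + ext**2 + sm_err**2)

# Tension between measurement and prediction, in standard deviations
tension = abs(measured - sm) / total_err
print(f"total uncertainty = {total_err:.3f}, tension = {tension:.1f} sigma")
```

The external uncertainty from the DELPHI normalisation dominates, and the resulting tension of about 1.1σ matches the "approximately 1σ" agreement quoted above.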

R(D*) can be large and R(Λc+) small in one new-physics scenario, or R(D*) large and R(Λc+) even larger in another

Guy Wormser

Baryon decays provide constraints on potential violations of LFU that are complementary to those from meson decays, due to the different spin of the initial state. This allows constraints to be placed on possible new-physics scenarios, explains Guy Wormser of IJCLab, who led the LHCb analysis: "R(D*) can be large and R(Λc+) small in one new-physics scenario, or R(D*) large and R(Λc+) even larger in another. The spin of the accompanying hadron changes the way new physics couples into the reaction, and it depends also on the spin of the particle present in the new-physics model, usually a leptoquark, which can be a scalar, pseudoscalar, vector, axial vector or tensor. Our result excludes phase-space regions in some of these scenarios. In the future, a combined measurement of LFU violation — if it is confirmed — in mesons and baryons can therefore help to pin down the characteristics of the new-physics mediator."

The latest LHCb result concerning R(Λc+) is likely to trigger intensive discussions among theorists, says the collaboration, with future determinations of this and other "R" ratios using Run-2 and Run-3 data keenly anticipated.

Exotic flavours at the FCC

Half a century after its construction, the Standard Model of particle physics (SM) still reigns supreme as the most accurate mathematical description of the visible matter in the universe and its interactions. It was placed upon its throne by the many precise measurements made at the Large Electron Positron collider (LEP), in particular, and its rule was confirmed by the discovery of the Higgs boson at the Large Hadron Collider (LHC). CERN’s LEP/LHC success story, in which a hadron collider provided direct evidence for a new particle (the Higgs boson) whose properties were already partially established at a lepton collider, can serve as a blueprint for physics discoveries at a proposed Future Circular Collider (FCC) operating at CERN after the end of the LHC. 

Back in the late 1970s and early 1980s when the LEP/LHC programme was first proposed, the W and Z bosons mediating the weak interactions had not yet been observed, the top quark was considered a possible discovery, and the Higgs boson was regarded as a distant speculation. Precise studies of the W and Z, which were discovered in 1983 at the SPS proton–antiproton collider at CERN, were key items in LEP’s physics programme along with direct searches for the top quark, the Higgs boson and possible unknown particles. Even though the LEP experiments did not reveal any new particles beyond the W and Z, the unprecedented precision of its measurements revealed indirect effects (via quantum fluctuations) of the top and the Higgs, thereby providing indirect evidence for the SM mechanism of electroweak symmetry breaking. When the top quark was discovered at the Tevatron proton–antiproton collider at Fermilab in 1995, and the Higgs boson at the LHC in 2012, their masses were within the ranges indicated by precision measurements made at lepton colliders. 

Layout of the Future Circular Collider at CERN

Nowadays, the hope is that the proposed FCC programme – comprising an electron–positron collider followed by a high-energy proton–proton collider in the same ~100 km tunnel – will repeat the LEP/LHC success story at an even higher level of precision and energy. The e+e− FCC stage would reproduce the entire LEP sample of Z bosons within a couple of minutes, yielding around 5 × 10^12 Z bosons after four years of operation. In addition to allowing an incredibly accurate determination of the Z-boson's properties, Z decays would also provide unprecedented samples of bottom quarks (1.5 × 10^12) and tau leptons (3 × 10^11). Potential increases in the FCC-ee centre-of-mass energy would also produce unparalleled numbers of W+W− and top–antitop pairs close to their respective thresholds, which are important inputs to the global electroweak fit, as well as more Higgs bosons than promised by other proposed e+e− Higgs factories.

Probing beyond the Standard Model

Analyses of FCC-ee data, combined with results from previous experiments at the LHC and elsewhere, would not only push our understanding of the SM to the next level but would also provide powerful indirect probes of possible physics beyond the SM, with sensitivities to masses an order of magnitude greater than those of the LHC. A possible subsequent proton–proton FCC stage (FCC-hh) operating at a centre-of-mass energy of at least 100 TeV would then provide unequalled opportunities to discover this new physics directly, just as the LHC made possible the discovery of the Higgs boson following the indirect hints from high-precision LEP data. Whereas the combination of LEP and the LHC explores the TeV scale both indirectly and directly, the combination of FCC-ee and FCC-hh will carry the search for new physics to 30 TeV and beyond.

The e+e− stage of FCC would reproduce the entire LEP sample of Z bosons within a couple of minutes

However, for this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach. While the existence of dark matter and neutrino masses already prove that the SM cannot be complete (and there is no shortage of theoretical ideas as to what extensions of the SM could account for them), these observations can be explained by new particles within a very wide mass range – possibly well beyond the reach of FCC-hh. Fortunately, intriguing hints for new physics in the flavour sector have accumulated in recent years that point towards beyond-the-SM physics that should be accessible to FCC.

B-decay anomalies

Within the SM, the charged leptons – electrons, muons and taus – all have very similar properties. They interact with the photon as well as the W and Z bosons in the same way, and differ only in their masses, which in the SM are represented as Yukawa couplings to the Higgs boson. It is therefore said that the SM (approximately) respects lepton-flavour universality (LFU), despite the seemingly large differences in charged-lepton lifetimes originating from phase-space effects. 

Flavour observables (i.e. processes resulting from rare transitions among the different generations of quarks and leptons), and observables measuring LFU in particular, are especially promising tests of the SM because they are strongly suppressed in the SM and thus very sensitive to new physics. In recent years, a coherent pattern of anomalies, all pointing towards the violation of LFU, has emerged. Two classes of fundamental processes giving rise to decays of B mesons – b → sℓ+ℓ− and b → cτν – show deviations from the SM predictions.

In the flavour-changing neutral-current process b → sℓ+ℓ−, a heavy bottom quark undergoes a transition to a strange quark and a pair of oppositely charged leptons, which can be either electrons or muons. The ratios RK = Br(B → Kμ+μ−)/Br(B → Ke+e−) and RK* = Br(B → K*μ+μ−)/Br(B → K*e+e−), measured most precisely by the LHCb collaboration, are particularly interesting because their SM predictions are very clean. Since the muon and electron masses are negligible compared to the B-meson mass, the ratio of muon to electron decays should be close to unity according to the SM. Intriguingly, however, LHCb has observed values significantly lower than one, and recently reported first evidence for LFU violation in RK. These hints of new physics are supported by measurements of the angular observable P5′ in B0 → K*0μ+μ− decays and of the rate of Bs → φμ+μ− decays. Importantly, all these observations can potentially be explained by the same new-physics interactions and are consistent with all other available measurements of processes involving b → sℓ+ℓ− transitions. In fact, global fits of all available b → sℓ+ℓ− data find a preference for new physics over the SM hypothesis that hints at a possible discovery.
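The global fits mentioned above are conventionally phrased in terms of an effective Hamiltonian whose Wilson coefficients are fitted to data; as an illustration (standard notation, not the output of any particular fit), the semileptonic operators most relevant to the anomalies are:

```latex
% Effective Hamiltonian for b -> s l+ l- transitions (standard convention)
\mathcal{H}_{\mathrm{eff}} \supset -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*}\,
  \frac{e^{2}}{16\pi^{2}} \sum_{i} C_i\, \mathcal{O}_i + \mathrm{h.c.},
\qquad
\mathcal{O}_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell),
\quad
\mathcal{O}_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \gamma_5 \ell).
```

Violation of LFU then corresponds to the coefficients C9 and C10 taking different values for muons and electrons.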

Anomalous correlations

The second class of anomalies involves the charged-current process b → cτν, which is already mediated at tree level in the SM. The corresponding B-meson decays therefore have much higher probabilities to occur, and thus larger branching ratios. However, the non-negligible tau mass leads to imperfect cancellations of the form factors in the ratio to electron or muon final states, so the resulting SM prediction is not as precise as those for RK and RK*. The most prominent examples of observables involving b → cτν transitions are the ratios RD = Br(B → Dτντ)/Br(B → Dℓνℓ) and RD* = Br(B → D*τντ)/Br(B → D*ℓνℓ). Here, the measurements of Belle, BaBar and LHCb consistently lie above the SM predictions, resulting in a combined tension of around 3σ. Importantly, as these processes occur quite frequently in the SM, a sizeable new-physics effect would be required to account for the corresponding anomaly.

With the FCC-ee capable of producing 1.5 × 10^12 b quarks, the b anomalies could clearly be verified further within a short period of running, assuming that LHCb, Belle II and possibly other experiments confirm them. The large data sample would also allow physicists to study complementary modes that bear upon LFU but are more difficult for LHCb to measure, such as other "R" ratios involving neutral kaons. These measurements would be invaluable for pinning down the mechanism responsible for any violation of lepton universality.

Other possible anomalies

The B anomalies are just one exciting avenue that a "Tera-Z factory" like FCC-ee could explore further. The anomalous magnetic moment of the muon, aμ, can also be viewed as an exciting hint of new physics in the lepton sector. The Dirac equation predicts the muon's gyromagnetic factor to be exactly two; the physical value is slightly higher due to fluctuations at the quantum level. The very high precision of both the calculation and the measurement therefore make aμ a powerful observable with which to search for new physics. A tension between the measured and predicted values of aμ has persisted since Brookhaven published its final result in 2006, and was recently strengthened by the Muon g-2 experiment at Fermilab, yielding an overall significance of 4.2σ when combined with the earlier Brookhaven data.
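In symbols, the anomaly is the fractional shift of the gyromagnetic factor away from the Dirac value; the 4.2σ tension quoted above corresponds to the 2021 combined experiment-minus-theory difference:

```latex
% Muon magnetic moment, its anomaly, and the 2021 combined tension
\vec{\mu}_\mu = g_\mu \,\frac{e}{2 m_\mu}\, \vec{S},
\qquad
a_\mu \equiv \frac{g_\mu - 2}{2},
\qquad
a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}} = (251 \pm 59)\times 10^{-11}
\;\; (4.2\sigma).
```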

Effects of new physics on precision electroweak measurements

Various models have been proposed to explain the g-2 anomaly. They include leptoquarks – scalar or vector particles that carry colour, couple directly to a quark and a lepton, and arise in models with extended gauge groups – and supersymmetry. Such leptoquarks could have masses anywhere between the lower LHC limit of 1.5 TeV and about 10 TeV, thus being within the reach of FCC-hh, whereas a supersymmetric explanation would require a couple of new particles with masses of a few hundred GeV, possibly even within reach of FCC-ee. Importantly, any explanation involving heavy new particles would also lead to effects in Z → μ+μ−, as both observables are sensitive to interactions with sizeable coupling strength to muons. FCC-ee's large Z-boson sample could therefore reveal deviations from the SM predictions at the suggested level. Leptoquarks could also modify the SM prediction for the H → μ+μ− decay, which will be measured very accurately at FCC-hh (see "Anomalous correlations" figure).

CKM under scrutiny

As the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes flavour violation in the quark sector, is unitary, the sum of the squares of the elements in each row and in each column must add up to unity. This unitarity relation can be used to check the consistency of different determinations of CKM elements (within the SM) and thus also to search for new physics. Interestingly, a deficit in the first-row unitarity relation exists at the 3σ level. This can be traced back to the fact that the value of the element Vud, extracted from super-allowed beta decays, is not compatible with the value of Vus, determined from kaon and tau decays, given CKM unitarity. Notably, this deviation can also be interpreted as a sign of LFU violation, since beta decays involve electrons while the most precise determination of Vus comes from decays with final-state muons.
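Concretely, the first-row condition and the quantity commonly used to express the deficit read:

```latex
% First-row CKM unitarity and the deficit
|V_{ud}|^{2} + |V_{us}|^{2} + |V_{ub}|^{2} = 1,
\qquad
\Delta_{\mathrm{CKM}} \equiv |V_{ud}|^{2} + |V_{us}|^{2} + |V_{ub}|^{2} - 1 .
```

Current determinations give a non-zero ΔCKM at about the 3σ level; since |Vub|^2 is tiny (of order 10^-5), the tension is essentially between the Vud and Vus determinations.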

Here, a new-physics effect at a relative sub-per-mille level compared to the SM would suffice to explain the anomaly. This could be achieved by a heavy new lepton or a massive gauge boson affecting the determination of the Fermi constant that parametrises the strength of the weak interactions. As the Fermi constant can also be determined from the global electroweak fit, for which Z decays are crucial inputs, FCC-ee would again be the perfect machine to investigate this anomaly, as it could improve the precision by a large factor (see "High precision" figure). Indeed, the Fermi constant may be determined directly to one part in 10^5 from the enormous sample (> 10^11) of Z decays to tau leptons.

For this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach

FCC-ee's extraordinarily large dataset will also enable scrutiny of a long-standing anomaly in the forward-backward asymmetry of Z → bb decays. The LEP measurement of AFB, which arises because the Z boson couples with different strengths to left- and right-handed chiral states, lies 2–3σ below the SM prediction. Although not significant, this anomaly may also be linked to new physics entering in b → s transitions.

Finally, a possible difference in the decay asymmetries of B → D*μν vs B → D*eν was recently reported by an analysis of Belle data. As in the case of RK, the SM prediction that the difference between the muon and the electron asymmetries should be zero is very clean and, like RD and RD*, this observable points towards new physics in b → c transitions and could be related via leptoquarks to g-2 of the muon. Once more, the great number of b quarks to be produced at FCC-ee, together with the clean environment of a lepton collider, would allow this observable to be determined with unprecedented accuracy.

Since all these anomalies point, to varying degrees, towards the existence of LFU-violating new physics, the question arises of whether a common explanation exists. There are several particularly interesting possibilities, including leptoquarks, new scalars and fermions (as arise in supersymmetric extensions of the SM), new vector bosons (W′ and Z′) and new heavy fermions. In the overwhelming majority of such scenarios, a direct discovery of a new particle is possible at FCC-hh. For example, it could discover leptoquarks with masses up to 10 TeV and Z′ bosons with masses up to 40 TeV, covering most of the mass ranges expected in such models.

Anomalies point to possible violations of lepton-flavour universality

A return to the Z pole and beyond

The LEP programme was extremely successful in determining the mechanism of electroweak symmetry breaking, in particular by measuring the properties and decays of the Z boson very precisely from a 17 million-strong sample. This allowed for a prediction of a range for the Higgs mass within which it was later discovered at the LHC. The flavour anomalies could lead to a similar situation in the near future. In this case, the roughly 5 × 10^12 Z bosons that the FCC-ee is designed to collect would not only be able to test the effects of new particles in precision electroweak observables, but also, via Z decays into bottom quarks and tau leptons, provide a unique testing ground for flavour physics. As noted earlier, FCC-ee’s Z-pole run is also envisaged to be the first step in a broader electroweak programme encompassing large statistics at the WW and tt thresholds, in addition to its key role as a precision Higgs factory. 

Looking much further ahead to the energy frontier, FCC-hh would be able, in the overwhelming number of scenarios motivated by the flavour anomalies, to directly discover a new particle. Furthermore, FCC-hh would allow for a precise determination of rare Higgs decays and the Higgs potential, probing new-physics effects related to this sector, such as leptoquark explanations of the anomalous magnetic moment of the muon.

Pending the outcome of the FCC feasibility study recommended by the 2020 update of the European strategy for particle physics, the hope that the LEP/LHC success story could be repeated by FCC-ee/FCC-hh is well justified. While FCC-ee could be used to indirectly pin down the parameters of the model(s) of new physics explaining the flavour anomalies via precision electroweak and flavour measurements, FCC-hh would be capable of searching for the predicted particles directly. 

How the Sun and stars shine

Staring at the Sun

Each second, fusion reactions in the Sun’s core fling approximately 60 billion neutrinos onto every square centimetre of the Earth. In the early 1990s, the Borexino experiment at Gran Sasso National Laboratory in Italy was conceived to measure these neutrinos right down to a few tens of keV, where the bulk of the flux lies. The detector’s name means “little Borex” and refers to an earlier idea for a large experiment with a boron-loaded liquid scintillator, which was shelved in favour of the present, smaller and more ambitious detector. Rather than studying rare but high-energy 8B neutrinos from a little-followed branch of the proton–proton (pp) fusion chain, Borexino would target the far more numerous but lower energy neutrinos produced in the Sun by electron captures on 7Be.

The fusion reactions generating the Sun’s energy

Three decades after its conception, Borexino has far exceeded this goal thanks to the exceptional radiopurity of the experimental apparatus (see “Detector design” panel). Special care taken in construction and commissioning achieved a radiopurity about three orders of magnitude better than predicted, and 10 to 12 orders of magnitude below natural radioactivity. This has allowed the collaboration to probe the entire solar-neutrino spectrum, including not only the pp chain but also the carbon–nitrogen–oxygen (CNO) cycle. This mechanism plays a minor role in the Sun but becomes important for more massive stars, dominating their energy production and the production of elements heavier than helium in the universe at large.

The heart of the Sun

The pp-chain generates 99% of the energy in the Sun: it begins when two protons fuse to produce a deuteron, a positron and an electron neutrino – the so-called pp neutrino (see “Chain and cycle” figure). Subsequent reactions produce light elements, such as 3He, 4He, 7Be, 7Li and 8B, and more electron neutrinos. In Borexino, the sensitivity to pp neutrinos depends on the amount of 14C in the liquid scintillator: with an end-point energy of 0.156 MeV, compared with a maximum visible energy for pp neutrinos of 0.264 MeV, the 14C → 14N + β + ν beta decay sets the detection threshold and the feasibility of probing pp neutrinos. The Borexino scintillator was therefore made using petroleum from very old and deep geological layers, to ensure a low content of 14C.
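The competition that sets the threshold can be summarised in two reactions (endpoint values as quoted above):

```latex
p + p \to {}^{2}\mathrm{H} + e^{+} + \nu_e \quad (E^{\mathrm{vis}}_{\max} \simeq 0.264~\mathrm{MeV}), \qquad
{}^{14}\mathrm{C} \to {}^{14}\mathrm{N} + e^{-} + \bar{\nu}_e \quad (Q \simeq 0.156~\mathrm{MeV}).
```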

Detector design

Like many particle-physics detectors, Borexino has an onion-like design, with the innermost layers having the highest radiopurity. The detector’s active core consists of 278 tonnes of pseudocumene (C9H12) scintillator. Into this is dissolved 2,5-diphenyloxazole (PPO) at a concentration of 1.5 grams per litre, which shifts the emission light to 400 nm, where photomultiplier sensitivity peaks. The scintillator is contained within a 125 μm-thick nylon inner vessel (IV) with a 4.5 m radius – made thin to reduce radiation emission from the nylon. In addition, the IV stops radon diffusing towards the core of the detector. 

Borexino design

The IV is contained within a 7 m-radius stainless-steel sphere (SSS) that supports 2212 photomultipliers (PMTs) and contains 1000 tonnes of pseudocumene as a high-radio-purity liquid shield against radioactivity from the PMTs and the SSS itself. Between the SSS and the IV, a second nylon balloon acts as a barrier preventing radon and its progeny from reaching the scintillator. The SSS is contained in a 2400-tonne tank of highly purified water which, together with Borexino’s underground location, shields the detector from environmental radioactivity. The tank is also instrumented as a muon detector to tag particles crossing the detector. 

When a neutrino interacts in the target volume, energy deposited by the decelerating electron is registered by a handful of PMTs. The neutrino’s energy can be obtained from the total charge, and the hit-time distribution is used to infer the location of the event’s vertex. Recoiling electrons are used to tag electron neutrinos, and the combination of a positron annihilation and a neutron capture on hydrogen (an inverse beta decay) is used to tag electron antineutrinos.

Due to the impossibility of discriminating individual solar-neutrino events from the backgrounds, the greatest challenge has been the reduction of natural radioactivity to unprecedented levels. In the early 1990s, Borexino developed innovative techniques such as under-vacuum distillation, water extraction, ultrafiltration and nitrogen sparging with ultra-high radiopurity nitrogen to reduce radioactive impurities in the scintillator to 10^–10 Bq/kg or better. An initial detector called the Counting Test Facility was developed as a means to demonstrate such claims, publishing results for the key uranium, thorium and krypton backgrounds in 1995. Full data taking at Borexino began in 2007. 

Since data-taking began in 2007, Borexino has measured, for the first time, all the individual fluxes produced in the pp-chain. In 2014 the collaboration made the first definitive observation of pp neutrinos, using a comparison with the predicted energy spectrum. In 2018 the collaboration performed, with the same apparatus, a measurement of all the pp-chain components (pp, 7Be, pep and 8B neutrinos), demonstrating the large-scale energy-generation mechanism in the Sun for the first time (see “Energy spectrum” figure). This spectral fit allowed the collaboration to directly determine the ratio between the interaction rate of 3He + 3He fusions and that of 3He + 4He fusions – a crucial parameter for characterising the pp chain and its energy production.

The simultaneous measurement of pp-chain neutrino fluxes also gave Borexino a unique window onto the famous “vacuum–matter” transition, whereby coherent virtual W-boson interactions with electrons modify neutrino-oscillation probabilities as neutrinos propagate through matter, enhancing the oscillation probability as a function of energy. In 2018 Borexino measured the solar electron–neutrino survival probability, Pee, in the energy range from a few tens of keV up to 15 MeV (see “Survival probability” figure). This was the first direct observation of the transition from a low-energy vacuum regime (Pee ≈ 0.55) to a higher energy matter regime where neutrino propagation is dominantly affected by the solar interior (Pee ≈ 0.32). The transition was measured by Borexino at 98% confidence.
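In the standard adiabatic MSW picture, the two plateaux follow from the solar mixing angle (taking sin²θ₁₂ ≈ 0.31, a typical global-fit value, as an illustrative input):

```latex
P_{ee}^{\mathrm{vac}} \simeq 1 - \tfrac{1}{2}\sin^2 2\theta_{12} \approx 0.55 \quad (\text{low } E_\nu), \qquad
P_{ee}^{\mathrm{mat}} \simeq \sin^2\theta_{12} \approx 0.31 \quad (\text{high } E_\nu).
```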

CNO cycle

A different way to burn hydrogen, the CNO cycle, was hypothesised independently by Carl Friedrich von Weizsäcker and Hans Albrecht Bethe between 1937 and 1939. Here, 12C acts as a catalyst, and electron neutrinos are produced by the beta decays of 13N and 15O, with a small contribution from 17F. The maximum energy of CNO neutrinos is about 1.7 MeV. In addition to making an important contribution to the production of elements heavier than helium, this cycle is important for the nucleosynthesis of 16O and 17O. In massive stars it extends into more complex reactions producing 18F, 18O, 19F, 18Ne and 20Ne.
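The main CN branch of the cycle, with its two dominant neutrino-producing beta decays, runs as follows (endpoint energies are standard values):

```latex
{}^{12}\mathrm{C}(p,\gamma){}^{13}\mathrm{N}, \qquad
{}^{13}\mathrm{N} \to {}^{13}\mathrm{C} + e^{+} + \nu_e \;(E_\nu \lesssim 1.20~\mathrm{MeV}), \qquad
{}^{13}\mathrm{C}(p,\gamma){}^{14}\mathrm{N},
```
```latex
{}^{14}\mathrm{N}(p,\gamma){}^{15}\mathrm{O}, \qquad
{}^{15}\mathrm{O} \to {}^{15}\mathrm{N} + e^{+} + \nu_e \;(E_\nu \lesssim 1.73~\mathrm{MeV}), \qquad
{}^{15}\mathrm{N}(p,\alpha){}^{12}\mathrm{C}.
```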

Solar neutrinos and residual backgrounds

The sensitivity to CNO neutrinos in Borexino mainly comes from events in the energy range from 0.8 to 1 MeV. In this region, the dominant background comes from 210Bi, which is produced in the slow decay chain 210Pb (22 y) → 210Bi + β + ν, followed by 210Bi (5 d) → 210Po + β + ν and 210Po (138 d) → 206Pb (stable) + α. The 210Bi activity can be inferred from 210Po, which can be efficiently tagged using pulse-shape discrimination. However, convective currents in the liquid scintillator carry into the central fiducial volume 210Po produced by 210Pb embedded in the nylon containment vessel. In order to reduce convection currents, a passive insulation system and a temperature-control system were installed in 2016, significantly reducing the effect of seasonal temperature variations. 
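The link between the long-lived 210Pb supply and its daughters’ activities is the usual secular-equilibrium argument, sketched below with the two-member Bateman solution. The half-lives are the standard rounded values; this is an illustration, not part of the Borexino analysis:

```python
import math

# 210Pb -> 210Bi -> 210Po decay chain: illustrative secular-equilibrium sketch.
T_PB = 22.3 * 365.25   # 210Pb half-life in days (~22 y)
T_BI = 5.01            # 210Bi half-life in days (~5 d)
LAM_PB = math.log(2) / T_PB
LAM_BI = math.log(2) / T_BI

def bi210_activity(t_days, a_pb0=1.0):
    """210Bi activity from an initially pure 210Pb source of activity a_pb0
    (two-member Bateman solution, daughter initially absent)."""
    return a_pb0 * LAM_BI / (LAM_BI - LAM_PB) * (
        math.exp(-LAM_PB * t_days) - math.exp(-LAM_BI * t_days))

# After a few 210Bi half-lives the daughter activity tracks the parent's,
# which is what allows one decay rate to be inferred from another:
ratio = bi210_activity(50.0)   # ~0.995 of the 210Pb activity
```

The same logic extends along the chain to 210Po, whose alpha decays Borexino tags by pulse shape.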

Thanks to these and other efforts, in 2020 Borexino rejected the null hypothesis of no CNO reactions by more than five standard deviations, providing the first direct proof of the process. The energy production as a fraction of the solar luminosity was measured to be 1.0 +0.4 −0.3 %, in agreement with the Solar Standard Model (SSM) prediction of roughly 0.6 ± 0.1% (which assumes the solar surface has a high metallicity – a topic discussed in more detail later). Given that luminosity scales as M^4 and number density as M^–2.5 for stars between one and 10 solar masses, the CNO cycle is thought to be the most important source of energy in massive hydrogen-burning stars. Borexino has provided the first experimental evidence for this hypothesis.

Probing solar metallicity using CNO neutrinos is of the utmost importance, and Borexino is hard at work on the problem

But, returning to the confines of our solar system, it’s important to remember that the SSM is not a closed book. Borexino’s results are thus far in agreement with its assumption of a protostar that had a uniform composition throughout its entire volume when fusion began (“zero-age homogeneity”). However, thanks to the ability of neutrinos to peek into the heart of the Sun, the experiment now has the potential to explore this assumption and weigh in on one of the most intriguing controversies in astrophysics.

The solar-abundance controversy

As stars evolve, the distribution of elements within them changes thanks to fusion reactions and convection currents. But the composition of the surface is thought to remain very nearly the same as that of the protostar, as it is not hot enough there for fusion to occur. Measuring the abundance of elements on a star’s surface therefore gives an idea of the protostar’s composition and is a powerful way to constrain the SSM. 

Solar-neutrino measurements

Currently, the best method to determine the surface abundance of elements heavier than helium (“metallicity”) uses measurements of photo-absorption lines. Since 2005, improved hydrodynamic calculations (which are needed to model atomic-line formation, and the radiative and collisional processes that contribute to excitation and ionisation) have indicated a much lower surface metallicity than previously accepted. However, when the sound-speed profile, surface-helium abundance and the depth of the convective envelope are taken into account, helioseismology observables differ by roughly five standard deviations from SSM predictions that use the new surface metallicity to infer the protostar’s composition. Helioseismology implies that the zero-age Sun’s core was richer in metallicity than the present surface composition, suggesting a violation of zero-age homogeneity and a break with the SSM. This is the solar-abundance controversy, which emerged in 2005.

One possible explanation is that a late “dilution” of the Sun’s convective zone occurred due to a deposition of elements during the formation of the solar system. An accretion of dust and gas from the proto-planetary disc onto the central star during the evolution of the star–planet system could have changed the initial metallicity of the Sun’s surface – a hypothesis backed up by recent simulations showing that metal-poor accretion could produce the present surface metallicity. 

As they are an excellent probe of metallicity, CNO neutrinos have an important role to play in settling the solar-abundance controversy. If Borexino were to measure the Sun’s present core metallicity, and by running simulations backwards prove that its surface metallicity must have been diluted right from its birth, this would violate one of the basic assumptions of the SSM. Probing solar metallicity using CNO neutrinos is, therefore, of the utmost importance, and Borexino is hard at work on the problem. Initial results favour the high-metallicity hypothesis with a significance of 2.1 standard deviations – a tentative first hint from Borexino that zero-age homogeneity may indeed be false.

The ancient question of why and how the Sun and stars shine finally has a comprehensive answer from Borexino, which has succeeded thanks to the detector’s extreme and unprecedented radio-purity – the hard work of hundreds of researchers over almost three decades.

Space-based data probe neutron lifetime

Recent measurements of the neutron lifetime

The neutron lifetime is key to a range of fields, not least astrophysics and cosmology, where it enters the modelling of the synthesis of helium and heavier elements in the early universe. Its value, however, is uncertain. In recent years, discrepancies of up to 4σ between measurements of the neutron lifetime using different methods have presented a puzzle that particle physicists, nuclear physicists and cosmologists are increasingly eager to solve. 

A recent measurement by the UCNτ experiment at the Los Alamos Neutron Science Center, the most constraining of the lifetime to date, further strengthens the discrepancy. The latest result, achieved using the so-called “bottle” method, is a neutron lifetime of 877.75 ± 0.28 (stat) +0.22 –0.16 (syst) s, whereas measurements using the “beam” method have consistently resulted in longer lifetimes (see figure). While the beam method determines the lifetime by measuring the decay products of the neutron, the bottle method instead stores ultracold neutrons for a certain time before counting the remaining ones by direct detection. If not the result of some unknown systematic error, the discrepancy could be a sign of exotic physics whereby the longer lifetime in the beam method stems from an unmeasured second decay channel. 
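The exotic-physics reading of the discrepancy is simple branching-ratio arithmetic: the bottle method measures the total width, while the beam method, counting decay protons, measures only the β-decay channel. Taking the UCNτ value and a ≈888 s beam average (the latter used here purely for illustration):

```latex
\frac{1}{\tau_{\mathrm{bottle}}} = \Gamma_{\mathrm{tot}}, \qquad
\frac{1}{\tau_{\mathrm{beam}}} = \Gamma_{\beta} = B(n \to p\, e^- \bar\nu_e)\,\Gamma_{\mathrm{tot}}
\;\Rightarrow\;
B \simeq \frac{\tau_{\mathrm{bottle}}}{\tau_{\mathrm{beam}}} \approx \frac{877.75}{888} \approx 0.99,
```

i.e. an unobserved decay channel at the per-cent level would reconcile the two methods.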

Escape detection

Astrophysics brings a third, independent measurement into play, based on the bombardment of planetary surfaces by galactic cosmic rays. This continual process liberates large numbers of high-energy neutrons, some of which escape into space while others approach thermal equilibrium with surface and atmospheric material, a proportion subsequently escaping into space where at some point they will decay. The neutron lifetime can therefore be inferred by counting the neutrons remaining at different distances from their production location, using detectors positioned hundreds to thousands of kilometres above the surface. As the escaped neutron flux depends on a planet’s particular elemental composition at depths corresponding to the neutron mean-free path (typically around 10 cm), neutron spectrometers have already been installed on several missions to explore planetary surface compositions.
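The idea can be quantified with a deliberately crude sketch: a straight-line flight at a typical thermal speed, ignoring lunar gravity and curved trajectories, which a real analysis must model. All numbers here are illustrative:

```python
import math

# Fraction of thermal neutrons surviving a straight-line flight from the
# surface to a detector; ignores gravity and orbital effects (illustration).
V_THERMAL = 2.2    # km/s, typical thermal-neutron speed
TAU = 880.0        # s, round-number neutron lifetime

def surviving_fraction(distance_km, tau_s=TAU, v_km_s=V_THERMAL):
    """exp(-t/tau) with flight time t = distance / speed."""
    return math.exp(-distance_km / (v_km_s * tau_s))

f = surviving_fraction(700.0)   # roughly 70% survive a ~700 km flight
# Shifting tau between the bottle and beam values changes the flux at this
# altitude by only a few per mille, so the flux model must be accurate at
# that level for the method to discriminate between them:
shift = (surviving_fraction(700.0, 887.0) - surviving_fraction(700.0, 877.75)) \
        / surviving_fraction(700.0, 877.75)
```

Because the flight time (a few hundred seconds) is comparable to the lifetime itself, the flux altitude-dependence carries real sensitivity to τ, which is the essence of the space-based method.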

A dedicated instrument on a future lunar mission could bring a crucial third independent tool to tackle the neutron lifetime puzzle

In 2020, using neutrons produced through interactions of cosmic rays with Venus and Mercury, a team from the Johns Hopkins Applied Physics Laboratory and Durham University demonstrated the feasibility of such a neutron-lifetime measurement. Now, using data from a lunar mission, the same team has provided the first results with uncertainties approaching those coming from lab-based experiments. Importantly, since it also relies on direct detection, the result from space should produce the same lifetime as the bottle experiments.

For this latest study, the researchers used data from NASA’s Lunar Prospector taken during several elliptical orbits around the Moon in 1998. The orbiter carried two neutron detectors: one with a cadmium shield, making it insensitive to slow or thermal neutrons, and one with a tin shield that allows it to measure thermal as well as higher-energy neutrons. The difference between the two count rates then provides the thermal neutron flux. Combining this with the spacecraft position, the group deduced the thermal neutron flux at various positions and distances from the Moon and fitted the data against a model that includes the production and propagation of thermal neutrons originating from interactions of cosmic rays with the lunar surface.

Surface studies

The highly detailed models account for neutron production from cosmic-ray interactions with the different elements of the lunar surface, and also for the varying composition of the surface in different regions. Thermal neutrons were used for the lifetime measurement because their lower velocities (a few km/s) make their flux as a function of the distance to the surface (typically several hundred kilometres) more sensitive to the lifetime. The higher sensitivity comes at the cost of greater model complexity, however. For example, thermal neutrons cannot simply be modelled as travelling in straight lines: they are affected by lunar gravity, meaning that they not only come directly from the surface but can also enter the detector from behind as they follow elliptical orbits. 

The study found a lifetime of 887 ± 14 (stat) +7 –3 (syst) s. The systematic error stems mainly from uncertainties in the surface composition and its variations, from the lack of modelling of the temperature variation of the Moon’s surface, which affects the thermalisation process, and from uncertainties in the ephemerides (position) of the spacecraft. In future dedicated missions the latter two issues can be mitigated, while knowledge of the surface composition can be improved with additional studies. Indeed, the large statistical error reflects the fact that this was not a dedicated mission: the small data sample used was not even part of the original mission’s science data. The results are therefore highly promising, as they show that a dedicated instrument on a future lunar mission would bring a crucial third independent tool to tackle the neutron lifetime puzzle.

Unrivalled precision on Z invisible width

The three regions used to extract the Z-boson invisible width

The LHC was built in the 27 km tunnel originally excavated for LEP, the highest energy electron–positron collider ever built. Designed to study the carriers of the weak force, LEP’s greatest legacy is the accuracy with which it pinned down the properties of the Z boson. Among the highlights is the measurement of the Z boson’s invisible width and decay branching fraction, which was used to deduce that there are three, and only three, species of light neutrinos that couple to the Z boson. This measurement of the Z-boson invisible width from LEP has remained the most precise for two decades.

This precise measurement of the Z-boson invisible width is the first of its kind at a hadron collider

In a bid to provide an independent and complementary test of the Standard Model (SM) at a new energy regime, CMS has performed a precise measurement of the Z-boson invisible width – the first of its kind at a hadron collider. The analysis uses the experimental signature of a very energetic jet accompanied by large missing transverse momentum to select events where the Z boson decays predominantly to neutrinos. The invisible width is then extracted from the well-known relationship between the Z-boson coupling to neutrinos and its coupling to muons and electrons. 

While the production of a pair of neutrinos occurs through a pure Z interaction, the production of a pair of charged leptons can also occur through a virtual photon. The contribution of virtual-photon exchange and the interference between photon and Z-boson exchange were determined to be less than 2% for a dilepton invariant mass range of 71–111 GeV, and were accounted for to allow the collaboration to compare the results directly to the Z’s decay to neutrinos. 
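Schematically, the extraction rests on the ratio of yields in the invisible and charged-lepton channels and on the SM coupling relation. The following is a sketch rather than the full likelihood, taking the SM partial width per neutrino species to be ≈167 MeV:

```latex
\Gamma_{\mathrm{inv}} \simeq
\frac{N(Z \to \nu\bar\nu + \text{jet})}{N(Z \to \ell\ell + \text{jet})}
\times \Gamma_{\ell\ell}
\quad \text{(after acceptance and efficiency corrections)}, \qquad
N_\nu = \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\nu\bar\nu}^{\mathrm{SM}}}
\approx \frac{523~\mathrm{MeV}}{167~\mathrm{MeV}} \approx 3.1.
```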

Figure 1 shows the missing transverse momentum distribution for the three key regions contributing to this measurement: the jets-plus-missing-transverse-momentum region; the dimuon-plus-jets region; and the dielectron-plus-jets region. For the dilepton regions, selected muons and electrons are not included in the calculation of the missing transverse momentum. The dominant background in the jets-plus-missing-transverse-momentum region comes from a W boson decaying leptonically, and accounts for 35% of the events. Estimating this background with high accuracy is one of the key aspects of the measurement, and was performed by studying several exclusive regions in data designed to be kinematically very similar to the signal region, but statistically independent. 

The invisible width of the Z boson was extracted from a simultaneous likelihood fit and measured to be 523 ± 3 (stat) ± 16 (syst) MeV. This 3.2% uncertainty in the final result is dominated by systematic uncertainties, with the largest contributions coming from the uncertainty in the efficiencies of selecting muons and electrons. In a fitting tribute to its predecessor and testament to the LHC entering a precision era of physics, this measurement from CMS is competitive with the LEP combined result of 503 ± 16 MeV and is currently the world’s most precise single direct measurement.

Charm-strange mesons probe hadronisation

Pb–Pb collision data

The ALICE collaboration has reported a new measurement of the production of Ds+ mesons, which contain a charm and an anti-strange quark, in Pb–Pb collisions collected in 2018 at a centre-of-mass energy per nucleon pair of 5.02 TeV. The large data sample and the use of machine-learning techniques for the selection of particle candidates led to increased precision on this important quantity. 

D-meson measurements probe the interaction between charm quarks and the quark–gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions. Charm quarks are produced in the early stages of the nucleus–nucleus collision and thus experience the whole system evolution, losing part of their energy via scattering processes and gluon radiation. The presence of the QGP medium also affects the charm-quark hadronisation and, in addition to the fragmentation mechanism, a competing process based on charm–quark recombination with light quarks of the medium might occur. Given that strange quark–antiquark pairs are abundantly produced in the QGP, the recombination mechanism could enhance the yield of Ds+ mesons in Pb–Pb collisions with respect to that of D0 mesons, which do not contain strange quarks. 

ALICE investigated this possibility using the ratio of the yields of Ds+ and D0 mesons. The figure displays the Ds+/D0 yield ratio in central (0–10%) Pb–Pb collisions divided by the ratio in pp collisions, showing that the values of the ratio in the 2 < pT < 8 GeV/c interval are higher in central Pb–Pb collisions by about 2.3σ. The measured Ds+/D0 double ratio also hints at a peak at pT ≈ 5–6 GeV/c. Its origin could be related to the different D-meson masses and to the collective radial expansion of the system with a common flow-velocity profile. In addition, hadronisation via fragmentation becomes dominant at high transverse momenta, and consequently, the values of the Ds+/D0 ratio become similar between Pb–Pb and pp collisions.
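The quantity shown in the figure is the double ratio, in which several common systematic uncertainties cancel:

```latex
R(p_{\mathrm{T}}) =
\frac{\left(N_{D_s^{+}} / N_{D^{0}}\right)_{\mathrm{Pb\text{-}Pb},\,0\text{--}10\%}}
     {\left(N_{D_s^{+}} / N_{D^{0}}\right)_{pp}},
```

so that R > 1 signals an enhancement of strange-charm hadronisation in the QGP relative to pp collisions.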

The measurement was compared with theoretical calculations based on charm–quark transport in a hydrodynamically expanding QGP (LGR, TAMU, Catania and PHSD), which implement the strangeness enhancement and the hadronisation of charm quarks via recombination in addition to the fragmentation in the vacuum. The Catania and PHSD models predict a ratio almost flat in pT, while TAMU and LGR describe the peak at pT ≈ 3–5 GeV/c. 

Complementary information was obtained by comparing the elliptic flow coefficient v2 of Ds+ and non-strange D mesons (D0, D+ and D*+) in semi-central (30–50%) Pb–Pb collisions. The Ds+-meson v2 is positive in the 2 < pT < 8 GeV/c interval with a significance of 6.4σ, and is compatible within uncertainties with that of non-strange D mesons. These features of the data are described by model calculations that include recombination of charm and strange quarks.

The freshly completed upgrade of the detectors and the harvest of Pb–Pb collision data expected in Run 3 will allow the ALICE collaboration to further improve the measurements, deepening our understanding of heavy-quark interaction and hadronisation in the QGP.

Searching for Higgs compositeness

The mass range excluded in the search for the pair production of vector-like top quarks

Since the discovery of the Higgs boson at the LHC in 2012, physicists have a more complete understanding of the Standard Model (SM) and the origin of elementary particle mass. However, theoretical questions such as why the Higgs boson is so light remain. An attractive candidate explanation postulates that the Higgs boson is not a fundamental particle, but instead is a composite state of a new, strongly-interacting sector – similar to the pion in ordinary strong interactions. In such composite-Higgs scenarios, new partners of the top and bottom quarks of the SM could be produced and observed at the LHC. 

If they exist, VLQs could be very heavy, with masses at the TeV scale, and could be produced either singly or in pairs at the LHC.

Ordinary SM quarks come in left-handed and right-handed varieties, which behave differently in weak interactions. The hypothetical new quark partners, however, behave the same way in weak interactions, whether they are left- or right-handed. Composite-Higgs models, and several other theories beyond the SM, predict the existence of such “vector-like quarks” (VLQs). Searching for them is therefore an exciting opportunity for the LHC experiments. 

If they exist, VLQs could be very heavy, with masses at the TeV scale, and could be produced either singly or in pairs at the LHC. Furthermore, VLQs could decay into regular top or bottom quarks in combination with a W, Z or Higgs boson. This rich phenomenology warrants a varied range of complementary searches to provide optimal coverage. 

The ATLAS collaboration has recently carried out two VLQ searches based on the full Run-2 dataset (139 fb^–1) at 13 TeV. The first analysis targets pair-production of VLQs, focusing on the possibility that most VLQs decay to a Z boson and a top quark. To help identify likely signal events, leptonically decaying Z bosons were tagged in events with pairs of electrons or muons. To maximise the discriminating power between the VLQ signal and the SM background, machine-learning techniques using a deep neural network were employed to identify the hadronic decays of top quarks, Z, W or Higgs bosons, and categorise events into 19 distinct regions. 

The second analysis targets the single production of VLQs. While the rate of pair production of VLQs through regular strong interactions only depends on their mass, their single production also depends on their coupling to SM electroweak bosons. As a result, depending on the model under consideration, VLQs heavier than approximately 1 TeV might predominantly be produced singly, and a measurement would therefore uniquely allow insight into this coupling strength.

The analysis was optimised for VLQ decays to top quarks in combination with either a Higgs or a Z boson. Events with a single lepton and multiple jets were selected, and tagging algorithms were used to identify the boosted leptonic and hadronic decays of top quarks, and the hadronic decays of Higgs and Z bosons. The presence of a forward jet, characteristic of the single VLQ production mode, was used (along with the multiplicity of jets, b-jets and reconstructed boosted objects) to categorise the analysed events into 24 regions.

The largest excluded mass for the single production of a vector-like top quark for a range of models

The observations from both analyses are consistent with SM predictions, which allows ATLAS to set the strongest constraints to date on VLQ production. Together, the pair- and single-production analyses exclude VLQs with masses up to 1.6 TeV (see figure 1) and 2.0 TeV (see figure 2), respectively, depending on the assumed model. These two analyses are part of a broader suite of searches for VLQs underway in ATLAS. The combination of these searches will provide the greatest potential for the discovery of VLQs, and ATLAS therefore looks forward to the upcoming Run-3 data.
