Popular representations of the Standard Model (SM) often hide its beautiful weirdness, for example slotting quarks and leptons into boxes and arranging them like a low-grade Mendeleev, or contriving a dartboard arrangement. The “double simplex” scheme invented in 2005 by US theorist Chris Quigg, which was recently given a flashy makeover by Quanta magazine (see image), is much richer (arXiv:hep-ph/0509037).
Jogesh Pati and Abdus Salam’s suggestion, in their 1974 shot at a grand unified theory, that lepton number be regarded as a fourth colour inspired Quigg to place the leptons at the fourth point of an SU(4) tetrahedron. The additional edges therefore represent possible leptoquark transitions. Left-handed fermion doublets (left) are reflected in the broken mirror of parity to reveal right-handed fermion singlets (right), though Quanta, unlike Quigg, omits possible right-handed neutrinos, perhaps favouring a purely left-handed Majorana mass term.
A final distinction is that Quigg chooses to superimpose the left and right simplexes – a term for a generalised triangle or tetrahedron in an arbitrary number of dimensions – while Quanta elects to separate the tetrahedra, and label couplings to the Higgs boson with sweeping loops. This obscures a beautiful feature of Quigg’s design, whereby the Yukawa couplings hypothesised by the SM, which couple the left- and right-handed incarnations of massive fermions in interactions with the Higgs field, link opposite corners of the superimposed double simplex, placing the Higgs boson at the centre of the picture. Quigg, who intended that the double simplex precipitate questions, also points out that the corners of the superimposed tetrahedra define a cube, whose edges suggest a possible new category of feeble interactions yet to be discovered.
Evidence for the decay of the Higgs boson to a photon and a low-mass electron or muon pair, mediated predominantly by a virtual photon (γ*), H → γ*γ → ℓℓγ (where ℓ = e or μ), has been obtained at the LHC. At an LHC seminar today, the ATLAS collaboration reported a 3.2σ excess over background of H → ℓℓγ decay candidates with dilepton mass mℓℓ < 30 GeV.
The measurement of rare decays of the Higgs boson is a crucial component of the Higgs-boson physics programme at the LHC, since they probe potential new interactions with the Higgs boson introduced by possible extensions of the Standard Model. The H → ℓℓγ decay is particularly interesting in this respect as it is a loop process and the three-body final state allows the CP structure of the Higgs boson to be probed. However, the small expected signal-to-background ratio and the typically low dilepton invariant mass make the search for H → ℓℓγ highly challenging.
The analysis performed by ATLAS searched for H → e+e–γ and H → μ+μ–γ decays. Special treatment was needed in particular for the electron channel: a dedicated electron trigger was developed, as well as a specific identification algorithm. The predicted mℓℓ spectrum rises steeply towards lower values, with a kinematic cutoff at twice the final-state lepton mass. At such low electron–positron invariant masses, and given the large transverse momentum of their system, the electromagnetic showers induced by the electron and the positron in the ATLAS calorimeter can merge, requiring a specially developed reconstruction. Furthermore, a dedicated identification algorithm was developed for these topologies, and its efficiency was measured in data using photons from Z → ℓℓγ events that convert into an electron–positron pair in the detector material at low radius.
The signal extraction is performed by searching the ℓℓγ invariant-mass (mℓℓγ) range between 110 and 160 GeV for a narrow signal peak over a smooth background at the mass of the Higgs boson. The sensitivity to the H → ℓℓγ signal was increased by separating events into mutually exclusive categories based on lepton types and event topologies. ATLAS reports evidence for a H → ℓℓγ signal emerging over the background with a significance of 3.2σ (see figure). The Higgs-boson production cross section times the H → ℓℓγ branching fraction, measured for mℓℓ < 30 GeV, amounts to 8.7 +2.8/−2.7 fb. This corresponds to a signal strength – the ratio of the measured cross section times branching fraction to the Standard Model prediction – of 1.5 ± 0.5. With this, ATLAS has also extended the invariant-mass range of the lepton pair for the related Higgs-boson decay into a photon and a Z boson to lower masses, opening the door to future studies of three-body Higgs-boson decays and investigations of its underlying CP structure.
Looking back on the great discoveries in particle physics, one can see two classes. The discovery of the Ω– in 1964 and of the top quark in 1995 were the final pieces of a puzzle – they completed an existing mathematical structure. In contrast, the discovery of CP violation in 1964 and of the J/ψ in 1974 opened up new vistas on the microscopic world. Paradoxically, although the Higgs boson was slated for discovery for almost half a century following the papers of Brout, Englert, Higgs, Weinberg and others, its discovery belongs in the second class. It constitutes a novel departure in the same way as the J/ψ and the discovery of CP violation, rather than the completion of a paradigm as represented by the discoveries of the Ω– and the top quark.
The novelty of the Higgs boson derives largely from its apparently scalar nature. It is the only fundamental particle without spin. Additionally, it is the only fundamental particle with a self-coupling (gluons also couple to other gluons, but only to those with different colour combinations). Measurements of the couplings of the Higgs boson to the W and Z bosons at the LHC have confirmed its role in the generation of their masses, likewise for the charged third-generation fermions. Despite this great success, the Higgs boson is connected to many of the most troublesome aspects of the Standard Model (see “Connecting the Higgs to Standard Model enigmas” panel). It is for this reason that the recently concluded update of the European strategy for particle physics advocated an electron–positron Higgs factory as the highest priority collider after the LHC, to allow detailed study of this novel and unique particle.
Circular vs linear
The discovery of the Higgs boson at the relatively light mass of 125 GeV, announced by the ATLAS and CMS collaborations in 2012, had two important consequences for experiment. The first was the large number of potentially observable branching fractions available. The second was that circular, as well as linear, e+e– machines could serve as Higgs factories. The two basic mechanisms for Higgs-boson production at such colliders are associated production, e+e–→ ZH, and vector-boson fusion. The former process is dominant at the low-energy first stage of the various Higgs factories under consideration, with vector-boson fusion becoming more important with increasing energy (see “Channeling the Higgs” figure). About a quarter of a million Higgs bosons would be produced per inverse attobarn of data, leading to substantial numbers of recorded events even after the branching ratios to observable modes are taken into account.
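The arithmetic behind that "quarter of a million" estimate is simple. A rough sketch, assuming an illustrative ZH cross section of about 240 fb at 250 GeV (the precise value depends on the collider and any beam polarisation):

```python
# Rough event-yield estimate for e+e- -> ZH at a Higgs factory.
# The ~240 fb cross section is an assumed illustrative value.
sigma_zh_fb = 240.0      # assumed ZH cross section at ~250 GeV, in femtobarns
lumi_ab_inv = 1.0        # integrated luminosity, in inverse attobarns
fb_per_ab_inv = 1000.0   # unit conversion: 1 ab^-1 = 1000 fb^-1

n_higgs = sigma_zh_fb * fb_per_ab_inv * lumi_ab_inv
print(f"{n_higgs:.0f} Higgs bosons per ab^-1")  # 240000, about a quarter of a million
```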
Four Higgs-factory designs are presently being considered. Two are linear accelerators, namely the International Linear Collider (ILC) under consideration in Japan and the Compact Linear Collider (CLIC) at CERN, while the other two are circular: the Future Circular Collider (FCC-ee) at CERN and the Circular Electron Positron Collider (CEPC) in China.
The beams in circular colliders continuously lose energy to synchrotron radiation, causing the luminosity to fall with beam energy roughly as Eb–3.5. The advantage of circular colliders is their high instantaneous luminosity, in particular at the centre-of-mass energy relevant for the Higgs-physics programme (250 GeV), but even more so at lower energies such as those corresponding to the Z-boson mass (91 GeV). Electron and positron beams in a circular machine naturally achieve transverse polarisation, which can be exploited to make precise measurements of the beam energy via the electron and positron spin-precession frequencies.
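To illustrate the scaling law quoted above – a sketch only, since the –3.5 exponent is an approximate rule of thumb rather than an exact machine parameter:

```python
# Relative luminosity of a circular collider at the Z pole versus
# Higgs-factory running, using the approximate scaling L ~ Eb^-3.5.
eb_z = 91.0 / 2.0    # beam energy at the Z pole, GeV
eb_h = 250.0 / 2.0   # beam energy for ZH running, GeV

ratio = (eb_z / eb_h) ** -3.5  # luminosity gain at the lower energy
print(f"~{ratio:.0f}x higher luminosity at the Z pole than at 250 GeV")
```

This factor of roughly 30 is one reason the circular designs can contemplate collecting enormous Z-pole datasets.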
In contrast, for linear colliders the luminosity increases roughly linearly with the beam energy. The advantages of linear accelerators are that they can be extended to higher energies, and the beams can be polarised longitudinally. The ZH associated cross section can be increased by 40% with longitudinal polarisations of –80% and 30% for electrons and positrons, respectively. This increase, coupled with the ability to isolate certain components of Higgs-boson production by tuning the polarisation, enables a linear machine to achieve similar precisions on Higgs-boson measurements with half the integrated luminosity of a circular machine.
FCC-ee, CEPC and ILC are foreseen to run for several years at a centre-of-mass energy of around 250 GeV, where the ZH production cross section is largest. CLIC, in contrast, plans to run its first stage at 380 GeV, where both WW fusion and ZH production contribute, and tt production is possible. The circular colliders FCC-ee and CEPC envisage running at the Z pole and at the WW production threshold for long enough to collect of the order of 10¹² Z bosons and 10⁸ WW pairs, enabling powerful electroweak and flavour-physics programmes (see “Compare and contrast” table). To achieve design luminosity, all proposed e+e– colliders need beams focused to a very small size in one direction (30–70 nm for FCC-ee, 3–8 nm for ILC and 1–3 nm for CLIC), smaller than anything achieved so far at existing facilities.
Evolving designs
The proposed circular colliders are based on a combination of concepts proven in previous and present colliders (LEP, SLC, PEP-II, KEKB, SuperKEKB, DAFNE). In Higgs-production mode the beam lifetime is limited by Bhabha scattering to about 30 minutes, requiring quasi-continuous “top-up” injection as used at the B-factories. Since each of the main concepts and parameters has been demonstrated in a previous machine, the designs are considered mature. The total FCC-ee construction cost is estimated to be 10.5 billion CHF for energies up to 240 GeV, with an additional 1.1 billion CHF to reach the tt threshold. This includes 5.4 billion CHF for the tunnel, which could be reused later for a hadron collider. The CEPC cost has been estimated at $5 billion, including $1.3 billion for the tunnel. With the present design, the FCC-ee power consumption is 260–340 MW for the various energy stages (compared to 150 MW for the LHC).
The ILC was proposed in the late 1990s and a technical design report published in 2012. It uses superconducting RF cavities for the acceleration, as used in the currently operating European XFEL facility in Germany, to aim for gradients of 35 MV/m. The cost of the first energy stage (250 GeV) was estimated as $4.8–5.3 billion, with a power consumption of 130–200 MW, and an expression of interest to host the ILC as a global project is being considered in Japan. The CLIC accelerator uses a second beam, termed a drive-beam, to accelerate the primary beam, aiming for gradients in excess of 100 MV/m. This concept has been demonstrated with electron beams at the CLIC test facility, CTF3. The cost of the first energy stage of CLIC is estimated as 5.9 billion CHF with a power consumption of 170 MW, rising to 590 MW for final-stage operation at 3 TeV.
Another important difference between the proposed linear and circular colliders concerns the number of detectors they can host. Collisions at linear machines only occur at one interaction point, while in circular colliders at least two interaction points are proposed, doubling the luminosity available for analyses. Two detectors also offer the dual benefits of scientific competition and the cross-checking of results. At the ILC two detectors are proposed but they cannot run concurrently since they use the same interaction point.
FCC-ee and CLIC have both been proposed as CERN-hosted international projects, similar to the LHC or high-luminosity LHC (HL-LHC). At present, as recommended by the 2020 update of the European strategy for particle physics, a feasibility study for the FCC (including its post-FCC-ee hadron-collider stage, FCC-hh) is ongoing, with the goal of presenting an updated conceptual design report by the next strategy update in 2026. Among the e+e– colliders, CLIC has the greatest capacity to be extended to the multi-TeV energy range. In its low-energy incarnation it could be realised either with drive-beam or with conventional technology. CEPC is conceptually and technologically similar to FCC-ee and has also presented a conceptual design report. Nearly all statements about FCC-ee also hold for CEPC, except that CEPC’s design luminosity is about a factor of two lower, and it therefore takes longer to acquire the same integrated luminosity. At circular colliders, the multi-TeV regime (at least 100 TeV in the case of FCC-hh) would be reached by using proton beams, similar to what was done with the LHC following LEP.
In addition to the vacuum expectation value of the Higgs field and the mass of the Higgs boson, the discovery of the Higgs boson introduces a large number of parameters into the Standard Model. Among them are the Yukawa couplings of the nine charged fermions (in contrast, the gauge sector of the SM has only three free parameters). The Yukawa forces, of which only three have been discovered corresponding to the couplings to the charged third-generation fermions, are completely new. They are of disparate strengths and, unlike the other forces, are not subject to the constraint of local gauge invariance. They provide a parameterisation of the theory of flavour, rather than an explanation. It is of primary importance to discover, bound and characterise the Yukawa forces. In particular, the discovery of CP violation in the Yukawa couplings would go beyond the confines of the Standard Model.
Famously, because of its scalar nature, the quantum corrections to the Higgs boson mass are only bounded by the cut-off on the theory, demanding large renormalisations to maintain the mass at 125 GeV as measured. This issue is not so much a problem for the Standard Model per se. However, in the context of a more complete theory that aims to supersede and encompass the Standard Model, it becomes much more troubling. In effect, the degree of cancellation necessary to maintain the Higgs mass at 125 GeV effectively sabotages the predictive power of any more complete theory. This sabotage becomes deadly as the scale of the new physics is pushed to higher and higher energies.
The electroweak potential is another area of importance in which our current knowledge is fragmentary. Within the confines of the Standard Model the potential is completely specified by the position of its minimum (the vacuum expectation value) and the second derivative of the potential at the minimum (the mass of the Higgs boson, or equivalently its self-coupling). We have no direct knowledge of the behaviour of the potential at field values further from the minimum. In addition, extrapolation of the currently understood Higgs potential to higher energy reveals a world teetering between stability and instability. Further information about the behaviour of the potential could help us to interpret the meaning of this result. A modified electroweak potential might also give rise to a first-order phase transition at high temperature, rather than the smooth crossover expected for the Standard Model Higgs potential. This would fulfil one of the three Sakharov conditions necessary to generate an asymmetry between matter and antimatter in our universe.
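The statement that two numbers fix the whole potential can be made concrete. A minimal sketch in standard textbook notation (the specific form below is the usual SM doublet potential, not something derived in this article):

```latex
V(\Phi) = -\mu^2\,\Phi^\dagger\Phi + \lambda\,(\Phi^\dagger\Phi)^2 ,
\qquad
\langle \Phi^\dagger\Phi \rangle = \frac{v^2}{2} = \frac{\mu^2}{2\lambda} ,
\qquad
m_H^2 = 2\lambda v^2 .
% With v \simeq 246~\mathrm{GeV} and m_H \simeq 125~\mathrm{GeV},
% the self-coupling is \lambda = m_H^2/(2v^2) \simeq 0.13.
```

Measuring v and mH pins down both parameters, but only characterises the potential near its minimum; the shape far from the minimum remains unconstrained.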
To quantify the scientific reach of the proposed colliders compared to current knowledge or the expectations for the HL-LHC, it is necessary to define figures-of-merit for the observables that will be measured. For the Higgs boson the focus is on the coupling strengths to the Standard Model bosons and fermions, as well as the couplings to any new particles. The strength with which the Higgs boson couples to the various particles, i, is denoted by κi, defined such that κi = 1 corresponds to the Standard Model. Non-standard phenomena are included in this “kappa” framework by introducing two new quantities: the branching ratio into invisible particles (determined by measuring the missing energy in identified Higgs events), and the branching ratio to untagged particles (determined by measuring the contributions to the total width accounted for by the observed modes, or by directly searching for anomalous decays).
Higgs-boson observables
At hadron colliders, only ratios of κi parameters can be measured, since a precise measurement of the total width of the Higgs boson is lacking (the expected total width of the Higgs boson in the Standard Model is 4.2 MeV, far too small to be resolved experimentally). To determine the absolute κi values at a hadron collider a further assumption needs to be made, either on the decay rates of the Higgs boson to new particles or on one of the κi values. An assumption that is often made, and that is valid in many beyond-the-Standard-Model theories, is that κZ ≤ 1.
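The degeneracy can be made explicit with a toy version of the kappa framework (a sketch; the branching fractions below are illustrative placeholders, not measured SM values): scaling every κi by a common factor and assigning the extra width to untagged decays leaves every visible rate unchanged.

```python
def signal_strength(k_prod, k_dec, kappas, br_sm, br_untagged):
    """sigma x BR relative to the SM: k_prod^2 * k_dec^2 / (Gamma_tot / Gamma_tot_SM)."""
    width_ratio = sum(k * k * br for k, br in zip(kappas, br_sm)) / (1.0 - br_untagged)
    return k_prod ** 2 * k_dec ** 2 / width_ratio

br_sm = [0.58, 0.22, 0.20]  # illustrative SM branching fractions (sum to 1)

# Pure SM point: all kappas 1, no untagged decays.
mu_sm = signal_strength(1.0, 1.0, [1.0, 1.0, 1.0], br_sm, 0.0)

# Scale every kappa by a common factor c and hide the extra width
# in an untagged branching fraction of 1 - 1/c^2 ...
c = 1.2
mu_scaled = signal_strength(c, c, [c, c, c], br_sm, 1.0 - 1.0 / c**2)

# ... and the visible rate is identical: absolute kappas need an
# extra assumption such as kappa_Z <= 1.
print(mu_sm, mu_scaled)
```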
The kappa framework, however, by construction, does not parameterise possible effects coming from different Lorentz structures and/or the energy dependence of the Higgs couplings. Such effects could generically arise from the existence of new physics at higher scales and could lead not only to changes in the predicted rates, but also in distributions. Deviations of κi from 1 indicate a departure from the Standard Model, but do not provide a tool to diagnose its cause. This shortcoming is remedied in so-called effective-operator formalisms by including operators of mass dimension greater than four.
At e+e– colliders a Higgs boson produced via e+e–→ ZH can be identified without observing its decay products. This measurement, of primary importance, is unique to e+e– colliders. By measuring the Z decay products and with the precise knowledge of the momenta of the incoming e– and e+ beams, the presence of the Higgs boson in ZH events can be inferred based on energy and momentum conservation alone, without actually tagging the Higgs boson. In this way one directly measures the coupling between the Higgs and Z bosons. In combination with the Higgs branching ratio to Z pairs it can be interpreted as a measurement of the Higgs-boson width. The first-stage e+e– Higgs factories all constrain the total width at about the 2% level.
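The recoil technique described above can be sketched in a few lines, assuming idealised beams (the mass values are approximate and the "event" is generated from two-body kinematics rather than real data):

```python
import math

sqrt_s = 250.0  # centre-of-mass energy, GeV
m_z = 91.19     # Z-boson mass, GeV
m_h = 125.0     # Higgs-boson mass, GeV (used here only to build the toy event)

# Energy of the Z in an ideal e+e- -> ZH event, from two-body kinematics:
e_z = (sqrt_s**2 + m_z**2 - m_h**2) / (2.0 * sqrt_s)

# The recoil mass uses only the measured Z and the known beam energies --
# the Higgs decay products are never looked at:
m_recoil = math.sqrt(sqrt_s**2 + m_z**2 - 2.0 * sqrt_s * e_z)
print(f"recoil mass = {m_recoil:.2f} GeV")  # peaks at the Higgs mass, 125 GeV
```

In real data the recoil-mass distribution shows a peak at mH on top of background, and the number of events in the peak measures the ZH coupling independently of how the Higgs decays.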
LHC and HL-LHC
To assess the potential impact of the e+e– Higgs factories it is important to examine the point of departure provided by the LHC and HL-LHC. Since its startup in 2010 the LHC has made a monumental impact on our understanding of the Higgs sector. After the Higgs discovery in 2012, a measurement programme started and now, with nearly 150 fb–1 of data analysed by ATLAS and CMS, much has been learned. The Higgs-boson mass has been measured with a precision of < 0.2%, its spin and parity confirmed as expected in the Standard Model, and its coupling to bosons and to third-generation charged fermions established with a precision of 5–10%.
With the HL-LHC and its experiments planned to operate from 2027, the precision on the coupling parameters and the branching ratios to new particles will be increased by a factor of 5–10 in all cases, typically resulting in a sensitivity of a few % (see “Kappa couplings” figure). The HL-LHC will also enable measurements of the very rare μ+μ– decay, the first evidence for which was recently reported by CMS and ATLAS, and thus show whether the Higgs boson also generates the mass of a second-generation fermion. With the full HL-LHC dataset, corresponding to 3000 fb–1 for each of ATLAS and CMS, it is expected that di-Higgs production will be established with a significance of four standard deviations. This will allow a determination of the Higgs-boson’s coupling to itself with a precision of 50%.
The LHC has also made enormous progress in the direct searches for new particles at high energies. With more than 1000 papers published on this topic, hunting down particles predicted by dozens of theoretical ideas, and no firm sign of a new particle anywhere, it is clear that the new physics is either heavier, or more weakly coupled, or has other features that hide it in the LHC data. The LHC is also a precision machine for electroweak physics, having measured the W-boson mass and the top-quark mass with uncertainties of 0.02% and 0.3%, respectively. In addition, a large number of relevant cross-section measurements of multi-boson production have been made, probing the trilinear and quartic interactions of the gauge bosons with each other.
Higgs-factory impact
In terms of the measurement precision on the Higgs-boson couplings, the proposed Higgs factories are expected to bring a major improvement with respect to HL-LHC in most cases (see “Relative precision” figure). Only for the rare decays to muons, photons and Zγ, and for the very massive top quark, is this not the case. The highest precision (0.2% in the case of FCC-ee) is achieved on κZ since the main Higgs production mode, ZH, depends directly on it, regardless of the decay mode. For other Standard Model particles, improvement factors of two to four are typical. For the invisible and untagged decays, the constraints are improved to around 0.2% and 1%, respectively, for some of the Higgs factories. A new measurement, not possible at the LHC, is that of the charm–quark coupling, κc.
None of the initial stages of the proposed Higgs factories will be able to directly probe the self-coupling of the Higgs boson beyond the 50% expected from the HL-LHC, since the cross-sections for the relevant processes (e+e–→ ZHH and e+e–→ HHνν) are negligible at centre-of-mass energies below 400 GeV. The Higgs self-coupling, however, enters through loops also in single-Higgs production and indirect effects might therefore be observable, for instance as a small (< 1%) deviation in measurements of the inclusive ZH cross section. Measurements of the Higgs self-coupling exploiting the di-Higgs production process can only be performed at higher energy colliders. The ILC and CLIC project uncertainties of around 30% at their intermediate energies and around 10% at their ultimate energies, while FCC-hh projects a precision of around 5%. Similarly, for the Higgs coupling to the top quark, the HL-LHC precision of 3.2% will not be improved by the initial stages of any of the Higgs factories.
The proposed Higgs factories also have a rich physics programme at lower energies, particularly at the Z pole. FCC-ee, for instance, plans to run for four years at the Z pole to accumulate a total of more than 10¹² Z bosons – 100,000 times more than at LEP. This will enable a rich and unprecedented electroweak physics programme, constraining so-called oblique parameters (which are sensitive to violations of weak isospin) at the per-mille level, 100 times better than today. It will also enable a B-physics programme, complementary to that at Belle II and LHCb. At CEPC a similar programme is possible, while at ILC and CLIC the luminosity when running at the Z pole is much lower: the typical number of Z bosons that can be accumulated there is 10⁹ – 100 times more than at LEP, but not at the level of the circular colliders. FCC-ee’s electroweak programme also foresees a run at the WW threshold to enable a high-precision measurement of the W mass.
Concerning the large top-quark mass, measurements at the LHC suffer from uncertainties associated with renormalisation schemes, and the precision is unlikely to improve significantly at the HL-LHC beyond the currently achieved 400 MeV. At an e+e– collider operating at the tt threshold (~350 GeV), a measurement of the top mass with a total uncertainty of around 50 MeV, and with full control of the issues associated with the renormalisation scheme, is possible. In addition to its importance as a fundamental parameter of the Standard Model, the top mass is the dominant term in the evolution of the Higgs potential with energy, which determines vacuum stability (see “Connecting the Higgs to Standard Model enigmas” panel).
In short, a Higgs factory promises to expand our knowledge of nature at the smallest scales. The ZH cross-section measurement alone will probe fine tuning at the level of a few per mille, about 30 times better than what we know today. This provides indirect sensitivity to new particles with masses up to 10–30 TeV, depending on their coupling strength, and could point to a new energy scale in nature.
But most of all the Higgs boson has not exhausted its ability to surprise. The rest of the Standard Model is a compact structure, exquisitely tested, and ruled by local gauge invariance and other symmetries. Compared to this, the Lagrangian of the Higgs sector is the wild west, where the final laws have yet to be written. Does the Higgs boson have a significant rate of invisible decays, which could be a key component in understanding the nature of dark matter in our universe? Does the Higgs boson act as a portal to other scalar degrees of freedom? Does the Higgs boson provide a source of CP violation? An electron–positron Higgs factory provides a tool to address these questions with unique clarity, when deviations between the measured and predicted values of observables are detected. Building on the data from the HL-LHC, it will be the perfect tool to elucidate the underlying laws of physics.
The ability to accelerate charged particles using the “wakefields” of plasma density waves offers the promise of high-energy particle accelerators that are more compact than those based on radio-frequency cavities. Proposed in 1979, the idea is to create a wave inside a plasma upon which electrons can “surf” and gain energy over short distances. Although highly complex, wakefield acceleration (WFA) driven by laser pulses or electron beams has been successfully used to accelerate electron beams to tens of GeV within distances of less than a metre, and the AWAKE experiment at CERN is attempting to achieve higher energy gains by using protons as drive beams. Recent studies suggest that WFA may also occur naturally, potentially offering an explanation for some of the highest energy cosmic rays ever observed.
So-called Fermi acceleration, first conceived by the eponymous Italian in 1949, is considered to be the main mechanism responsible for high-energy cosmic rays. In this process, charged particles are accelerated due to relativistic shockwaves occurring within jets emitted by black-hole binaries, active galactic nuclei or gamma-ray bursts, to name just a few sources. As a charged particle travels within the jet it gets accelerated each time it passes through the shock wave, allowing it to gain energy until the magnetic field in the environment can no longer contain it. This process predicts the observed power-law spectrum of cosmic rays quite well, at least up to energies of around 10¹⁹ eV. Beyond this energy, however, Fermi acceleration becomes less efficient as the particles start to lose energy due to collisions and/or synchrotron radiation. The existence of ultra-high-energy cosmic rays (UHECRs), which have been observed up to energies of 10²¹ eV, indicates that a different acceleration mechanism could be at play in that energy domain. Thanks to its very high efficiency, WFA could provide such a mechanism.
Although there are clearly no laser beams in astrophysical objects, plasmas that can support waves are found in many astrophysical settings. For example, in theories developed by Toshiki Tajima of the University of California at Irvine (UCI), one of the inventors of WFA technology, waves could be produced by instabilities in the accretion disks around compact objects such as black holes. These accretion disks can periodically transition from a highly magnetised to a weakly magnetised state, emitting electromagnetic waves that can propagate into the disk’s jets in the form of Alfvén waves. As these waves continue to propagate along the jets they transform back into electromagnetic waves that can accelerate electrons on the front of the plasma’s “bow wake” and protons on the back of it.
Clear predictions
The energies that are theoretically achievable in cosmic-ray WFA depend on the mass of the compact object, as do the periodicities with which such waves can be produced. This allows clear predictions to be made for a range of different objects, which can be tested against observational data.
Groups based at UCI and at RIKEN in Japan recently tested these predictions on a range of astrophysical objects spanning 1 to 10⁹ solar masses. Although not conclusive, these first comparisons between theory and observations indicate several interesting features that require further investigation. For example, WFA models predict periodic emission of UHECRs – the protons from the back of the bow wake – in coincidence with electromagnetic radiation produced by the electrons from the front of the bow wake. Due to interactions with the intergalactic medium, UHECRs are also expected to produce secondary particles, including neutrinos. WFA could thereby also explain periodic outbursts of neutrinos in coincidence with gamma rays from, for example, blazars, for which evidence was recently found by the IceCube experiment in collaboration with a range of electromagnetic instruments. Additionally, WFA could explain the non-uniformity of the UHECR sky, such as that recently reported by the Pierre Auger Observatory (see CERN Courier December 2017 p15), as it allows cosmic rays with energies up to 10²⁴ eV to be produced in objects that lie in the direction of the observed hot-spot.
In concert with future space-based UHECR detectors such as JEM-EUSO and POEMMA, further analysis of existing data should definitively answer the question of whether WFA does indeed occur in space. The clear predictions relating to periodicity, and the coincident emission of neutrinos, gamma-rays and other electromagnetic radiation, make it an ideal subject to study within the multi-messenger frameworks that are currently being set up.
Charm and beauty quarks are excellent probes of the hot and dense state of deconfined quarks and gluons (quark–gluon plasma, QGP) which is created in high-energy heavy-ion collisions. These heavy quarks are produced in hard-scattering processes at the early stages of the collisions, and interact with the constituents of the newly created QGP through both elastic and inelastic processes. These quarks, which can be studied through their decays into leptons, lose energy while propagating through the QGP medium. Consequently, different production yields are observed at large momenta in nucleus–nucleus collisions compared to proton–proton collisions. This effect can be quantified using the nuclear modification factor, RAA, which is the ratio of nucleus–nucleus and proton–proton particle yields, scaled by the average number of binary nucleon–nucleon collisions. Comparing measurements in different collision systems sheds light on heavy-quark energy-loss mechanisms, and provides high-precision tomography of the QGP.
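The definition of RAA can be illustrated with a toy calculation (the yields and ⟨Ncoll⟩ below are invented numbers, chosen to give a suppression by a factor of 2.5):

```python
def nuclear_modification_factor(yield_aa, yield_pp, n_coll):
    """R_AA = (AA yield) / (<Ncoll> x pp yield) in a given pT interval."""
    return yield_aa / (n_coll * yield_pp)

# Invented illustrative inputs for a single pT bin:
yield_pp = 2.0e-4  # per-event yield in proton-proton collisions
n_coll = 1600.0    # average number of binary nucleon-nucleon collisions
yield_aa = 0.128   # per-event yield in nucleus-nucleus collisions

r_aa = nuclear_modification_factor(yield_aa, yield_pp, n_coll)
print(r_aa)  # 0.4: yields suppressed by a factor 2.5 relative to binary scaling
```

RAA = 1 would mean the nucleus–nucleus collision behaves like an incoherent superposition of nucleon–nucleon collisions; values below 1 at high pT signal in-medium energy loss.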
A new analysis by the ALICE collaboration compares the production of leptons from heavy-flavour hadron decays in Pb–Pb and Xe–Xe collisions at √sNN = 5.02 and 5.44 TeV, respectively. The measurements use the muon and electron decay channels at forward rapidity and mid-rapidity. The results show that collision geometry plays an important role in heavy-quark energy loss.
Remarkable agreement
A remarkable agreement is observed between the muon yields in head-on Xe–Xe collisions and slightly offset Pb–Pb collisions (figure 1, left). Given the larger size of the lead nucleus, these collision centrality classes – 0–10% and 10–20%, respectively – give rise to similar charged-particle multiplicities, and thus suggest the creation of similar QGP densities and sizes in the colliding systems.
In both cases, the production of muons from heavy-flavour hadron decays is suppressed by up to a factor of about 2.5 for 5 GeV < pT < 6 GeV. This suppression is successfully reproduced by the MC@sHQ+EPOS2 model, which considers both elastic and inelastic energy-loss processes of heavy quarks in the QGP, but is underestimated by the PHSD model, which includes only elastic processes. The analysis also achieved ALICE’s first sensitivity down to pT = 0.2 GeV, thanks to a lower magnetic field (0.2 T) in the solenoid magnet (figure 1, right). The suppression patterns for muons and electrons from heavy-flavour hadron decays are similar at forward and mid-rapidity, respectively, indicating that heavy quarks interact strongly with the medium over a wide rapidity interval. The suppression is smaller in these “glancing” semi-central collisions than in the head-on collisions discussed above, which is compatible with the hypothesis that the in-medium energy loss depends on the energy density and on the size of the system created in the collision.
The precision of the measurements brings new insights into the nature of parton energy loss, and new constraints on the modelling of its dependence on the size of the QGP medium in transport-model calculations. Further constraints will be set by future higher-precision measurements during Run 3, when ALICE will measure leptons from charm and beauty decays separately, at both central and forward rapidity. A short run with the much smaller oxygen–oxygen system may also be scheduled, and would contribute to a deeper understanding of the dependence of in-medium energy loss on system size for heavy quarks.
The Cabibbo–Kobayashi–Maskawa (CKM) matrix element Vub describes the coupling between u and b quarks in the weak interaction, and is one of the fundamental parameters of the Standard Model (SM). Though it was first observed to be non-zero 30 years ago, its value is still debated. |Vub| determines the length of the least well-known side of the corresponding unitarity triangle, and is therefore a key ingredient for testing the consistency of the SM in the flavour sector. LHCb has recently published a new result on |Vub| using the first ever measurement of the Bs0 → K–μ+νμ decay.
|Vub| and |Vcb| are the focus of a longstanding puzzle: the world-average values derived from inclusive and exclusive B-meson decays disagree by more than three standard deviations, for both |Vub| and |Vcb|. Traditionally, the exclusive |Vub| determination requires the reconstruction of the semileptonic b → u decay B0 → π–μ+νμ. LHCb also has access to Bs0-meson and b-baryon decays, but the missing neutrino makes it difficult to isolate the signal from the copious background. Defying expectations, however, in 2015 LHCb managed to observe the Λb0 → pμ–νμ decay, and used the normalisation channel Λb0 → Λc+μ–νμ to determine |Vub|/|Vcb|. The main difficulty in this type of analysis is that only two charged particles are reconstructed in decays such as Bs0 → K–μ+νμ and Λb0 → pμ–νμ, so a huge background from other sources dominates the selected data sample. Machine-learning algorithms are therefore used to separate the signal from the various background categories, which consist of decays with additional charged and/or neutral particles in the final state. The remaining irreducible background is modelled using both simulation and control samples extracted from data.
This is the first experimental test of the form-factor calculations
First observation
In a recent paper, the LHCb collaboration presented the first observation of the decay Bs0 → K–μ+νμ. The decay Bs0 → Ds–μ+νμ is used as a normalisation channel to minimise experimental systematic uncertainties. The study was performed in two regions of q2, the squared invariant mass (or momentum transfer) of the muon–neutrino system: below and above 7 GeV2. The observed total yield was about 13,000 events, corresponding to a branching fraction of (1.06 ± 0.10) × 10–4, of which about one third stemmed from the low-q2 range (figure 1).
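The logic of such a normalisation-channel measurement can be sketched as follows. The yields, efficiencies and normalisation branching fraction below are invented purely to illustrate the structure of the formula, and are not the values used by LHCb:

```python
def branching_fraction(n_sig, n_norm, eff_sig, eff_norm, bf_norm):
    """Signal branching fraction relative to a normalisation channel:

        BF_sig = (N_sig / N_norm) * (eff_norm / eff_sig) * BF_norm

    Normalising to a channel reconstructed in the same data cancels common
    systematic uncertainties (luminosity, b-quark production rate, ...).
    """
    return (n_sig / n_norm) * (eff_norm / eff_sig) * bf_norm

# Purely illustrative inputs (hypothetical efficiencies and yields):
bf = branching_fraction(n_sig=13_000, n_norm=250_000,
                        eff_sig=0.010, eff_norm=0.005,
                        bf_norm=2.4e-2)
print(f"{bf:.2e}")
```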
The extraction of the ratio |Vub|/|Vcb| requires external knowledge of the form factors describing the strong Bs0 → K– and Bs0 → Ds– transitions, to account for the interactions of the quarks bound in mesons. These vary with the momentum transfer and are calculated using non-perturbative techniques, such as lattice QCD (LQCD) and light-cone sum rules (LCSR). As LQCD and LCSR calculations are more accurate at high and low q2, respectively, they are used in the corresponding q2 regions. The obtained value of |Vub|/|Vcb| = 0.095 ± 0.008 in the high q2 interval shows agreement with the world average of exclusive measurements, and with the LHCb result using Λb0 → pμ–νμ decays, while in the low q2 region, |Vub|/|Vcb| = 0.061 ± 0.004 is significantly lower (figure 2). This is the first experimental test of the form-factor calculations, and new results are expected in the near future. These will help settle the exclusive versus inclusive debate surrounding the values of |Vub| and |Vcb|, and provide further constraints on the unitarity triangle.
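Taking the two quoted values at face value, the size of the low- versus high-q2 discrepancy can be estimated with a naive pull; this ignores any correlation between the two intervals, so it is only a rough gauge:

```python
import math

def pull(x1, s1, x2, s2):
    """Naive significance of the difference between two independent
    measurements x1 +/- s1 and x2 +/- s2."""
    return abs(x1 - x2) / math.hypot(s1, s2)

# Values quoted for |Vub|/|Vcb| in the two q2 regions:
#   high q2: 0.095 +/- 0.008
#   low  q2: 0.061 +/- 0.004
print(round(pull(0.095, 0.008, 0.061, 0.004), 1))  # 3.8 (sigma, neglecting correlations)
```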
Searches for new physics at high-energy colliders traditionally target heavy new particles with short lifetimes, and these searches have shaped detector design, data acquisition and analysis methods. However, there could be new long-lived particles (LLPs) that travel through the detectors without decaying, either because they are light or because they have small couplings. Searches for LLPs have been conducted at the LHC since the start of data taking, and at previous colliders, but they have attracted increasing interest in recent years, particularly in light of the absence of new particles in more mainstream searches.
Detecting LLPs at the LHC experiments requires a paradigm shift with respect to the usual data-analysis and trigger strategies. To that end, more than 200 experimentalists and theorists met online from 16 to 19 November for the eighth workshop of the LHC LLP community.
Dark quarks would undergo fragmentation and hadronisation, resulting in “dark showers”
Strong theoretical motivations underpin searches for LLPs. For example, dark matter could be part of a larger dark sector, parallel to the Standard Model (SM), with new particles and interactions. If dark quarks could be produced at the LHC, they would undergo fragmentation and hadronisation in the dark sector resulting in characteristic “dark showers” — one of the focuses of the workshop. Collider signatures for dark showers depend on the fraction of unstable particles they contain and their lifetime, with a range of categories presenting their own analysis challenges: QCD-like jets, semi-visible jets, emerging jets, and displaced vertices with missing transverse energy. Delegates agreed on the importance of connecting collider-level searches for dark showers with astrophysical and cosmological scales. In a similar spirit of collaboration across communities, a joint session with the HEP Software Foundation focused on triggering and reconstruction software for dedicated LLP detectors.
Heavy neutral leptons
The discovery of heavy neutral leptons (HNLs) could address several open questions of the SM. For example, neutrinos are left-handed and massless in the SM, but are observed to oscillate between flavours as their wavefunction evolves, providing evidence for masses that are non-zero yet too small to have been measured directly. One way to fix this problem is to complete the field pattern of the SM with right-handed HNLs. The number and other characteristics of HNLs depend on the model considered, but in many cases HNLs are long-lived and connect to other important questions of the SM, such as dark matter and the baryon asymmetry of the universe. There are many ongoing searches for HNLs at the LHC, and many more are proposed elsewhere. During the November workshop the discussion touched on different models and simulations, reviewing what is available and what is needed for the different signal benchmarks.
Another focus was the reinterpretation of previous LLP searches. Recasting public results is common practice at the LHC and a good way to increase physics impact, but reinterpreting LLP searches is more difficult than prompt searches due to the use of non-standard selections and analysis-specific objects.
The latest results from CERN experiments were presented. ATLAS reported the first LHC search for sleptons using displaced-lepton final states, greatly improving sensitivity compared to LEP. CMS presented a search for strongly interacting massive particles with trackless jets, and a search for long-lived particles decaying to jets with displaced vertices. LHCb reported searches for low-mass di-muon resonances and a search for heavy neutrinos in the decay of a W boson into two muons and a jet, and the NA62 experiment at CERN’s SPS presented a search for π0 decays to invisible particles. These results bring important new constraints on the properties and parameters of LLP models.
Dedicated detectors
A series of dedicated LLP detectors at CERN — including the Forward Physics Facility for the HL-LHC, the CMS forward detector, FASER, CODEX-b and CODEX-β, milliQan, MoEDAL-MAPP, MATHUSLA, ANUBIS, SND@LHC and FORMOSA — are in different stages between proposal and operation. These additional detectors, located at various distances from the LHC experiments, have diverse strengths: some, like milliQan, look for specific particles (milli-charged particles, in that case), whereas others, like MATHUSLA, offer a very low-background environment in which to search for neutral LLPs. These complementary efforts will, in the near future, provide all the different pieces needed to build the most complete picture possible of a variety of LLP searches, from axion-like particles to exotic Higgs decays, potentially opening the door to a dark sector.
ATLAS reported the first LHC search for sleptons using displaced-lepton final states
The workshop featured a dedicated session on future colliders for the first time. Designing these experiments with LLPs in mind would radically boost discovery chances. Key considerations will be tracking and the tracking volume, timing information, trigger and DAQ, as well as potential additional instrumentation in tunnels or using the experimental caverns.
Together with the range of new results presented and many more in the pipeline, the 2020 LLP workshop was representative of a vibrant research community, constantly pushing the “lifetime frontier”.
The international conference devoted to b-hadron physics at frontier machines, Beauty 2020, took place from 21 to 24 September, hosted virtually by Kavli IPMU, University of Tokyo. This year’s edition, the 19th in the series, attracted around 350 registrants, significantly more than have attended physical Beauty conferences in the past. Two days were devoted to parallel sessions, a change in approach necessitated by the online format, stimulating lively discussion. There were 64 invited talks, of which 13 were overviews given by theorists.
Studies of beauty hadrons have great sensitivity to possible physics beyond the Standard Model (SM), as was stressed by Gino Isidori (University of Zurich) in the opening talk of the conference. Possible lepton-universality anomalies that have emerged from analyses of decays into pairs of leptons and accompanying hadrons are particularly tantalising, as they show significant deviations from the SM in a manner that could be explained by the existence of new particles such as leptoquarks or Z′ bosons. We will know much more when LHCb releases measurements from the updated analysis of the full Run-2 data set. In the meantime, the combined results from ATLAS, CMS and LHCb for the branching ratio of the ultra-rare decay Bs→ μ+μ– generated much discussion. This final state is produced only a few times every billion Bs decays, but is now measured to a remarkable precision of 13%. Intriguingly, the observed value of the branching ratio lies two standard deviations below the SM prediction (see “Ultra-rare” figure) – an effect that some commentators have noted could be driven by the same new particles invoked to explain the other flavour anomalies.
Recent impressive results were shown in the field of CP violation. LHCb presented the first ever observation of time-dependent CP violation in the Bs system – a phenomenon that had eluded previous experiments on account of the very fast Bs oscillations (about 3 × 10¹² Hz) and inadequate sample sizes. In addition, new LHCb results were shown for the CP-violating phase γ. The most precise of these comes from an analysis that isolates B → DK decays followed by D → KSπ+π– decays, in which the distributions of the final-state particles are compared depending on whether they originate from B– or B+ mesons. This analysis is based on the full Run 1 and Run 2 data sets and constrains γ to a precision of five degrees – from this single analysis alone, around a four-fold improvement on the precision available when the LHC began operation. Further improvements are expected over the coming years.
Participants were eager to learn about the progress of the SuperKEKB accelerator and the Belle II experiment. SuperKEKB is now operating at a higher luminosity than any previous electron–positron machine, and the data set collected by Belle II (of the order of 100 fb–1) is already sufficient to demonstrate the capabilities of the detector and to allow important early physics studies, which were shown during the week. Belle II has superior performance to the first-generation B-factory experiments, BaBar and Belle, in areas such as flavour tagging and proper-time resolution, and will collect around 50 times their integrated luminosity. By the end of the decade Belle II will have accumulated 50 ab–1 of data, from which many precise and exciting physics measurements are expected.
Recent impressive results were shown in the field of CP violation
Studies of kaon decays provide important insights into flavour physics that are complementary to those obtained from b-hadrons. The NA62 collaboration presented its updated branching ratio for the ultra-rare decay K+ → π+νν̄, which is predicted to be around 10⁻¹⁰ in the SM. The data set is now sufficiently large to see a signal with a significance of more than three standard deviations. Future running is planned to allow a measurement with 10–20% precision, which will provide a powerful test of the SM prediction (CERN Courier September/October 2020 p9).
The concluding plenary session focused on the future of flavour physics. The LHCb experiment is currently being upgraded, and a further upgrade is foreseen at the end of the decade. In parallel, the upgrades of ATLAS and CMS will increase their capabilities for beauty studies. In the electron–positron domain, Belle II will continue to accumulate data, and there is the exciting possibility of a super-tau-charm factory, situated in either China or Russia, which will collect very large data sets at lower energies. These prospects were surveyed by Phillip Urquijo (University of Melbourne) in the summary talk of the conference, who stressed the importance of exploiting these ongoing and future facilities to the maximum. Flavour studies have a bright future, and they are sure to retain a central role in our search for physics beyond the SM.
Since the discovery of the Higgs boson in 2012, great progress has been made in our understanding of the Standard Model (SM) and the prospects for the discovery of new physics beyond it. Despite excellent advances in Higgs-sector measurements, searches for WIMP dark matter and exploration of very rare processes in the flavour realm, however, no unambiguous signals of new fundamental physics have been seen. This is the reason behind the explosion of interest in feebly interacting particles (FIPs) over the past decade or so.
The inaugural FIPs 2020 workshop, hosted online by CERN from 31 August to 4 September, convened almost 200 physicists from around the world. Structured around the four “portals” that may link SM particles and fields to a rich dark sector – axions, dark photons, dark scalars and heavy neutral leptons – the workshop highlighted the synergies and complementarities among a great variety of experimental facilities, and called for close collaboration across different physics communities.
Today, conventional experimental efforts are driven by arguments based on the naturalness of the electroweak scale, resulting in searches for new particles with sizeable couplings to the SM and masses near the electroweak scale. FIPs represent an alternative paradigm to the traditional beyond-the-SM physics explored at the LHC. With masses below the electroweak scale, FIPs could belong to a rich dark sector and answer many open questions in particle physics (see “Four portals” figure). Diverse searches using proton beams (CERN and Fermilab), kaon beams (CERN and J-PARC), neutrino beams (J-PARC and Fermilab) and muon beams (PSI) today join more idiosyncratic experiments across the globe in a worldwide search for FIPs.
FIPs can arise from the presence of feeble couplings in the interactions of new physics with SM particles and fields. These may be due to a dimensionless coupling constant or to a “dimensionful” scale, larger than that of the process being studied, which is defined by a higher dimension operator that mediates the interaction. The smallness of these couplings can be due to the presence of an approximate symmetry that is only slightly broken, or to the presence of a large mass hierarchy between particles, as the absence of new-physics signals from direct and indirect searches seems to suggest.
Take the axion, for example. This is the particle that may be responsible for the conservation of charge–parity symmetry in strong interactions. It may also constitute a fraction or the entirety of dark matter, or explain the hierarchical masses and mixings of the SM fermions – the flavour puzzle.
Or take dark photons or dark Z′ bosons, both examples of new vector gauge bosons. Such particles are associated with extensions of the SM gauge group, and, in addition to indicating new forces beyond the four we know, could lead to evidence of dark-matter candidates with thermal origins and masses in the MeV to GeV range.
Exotic Higgs bosons could also have been responsible for cosmological inflation
Then there are exotic Higgs bosons. Light dark scalar or pseudoscalar particles related to the SM Higgs may provide novel ways of addressing the hierarchy problem, in which the Higgs mass can be stabilised dynamically via the time evolution of a so-called “relaxion” field. They could also have been responsible for cosmological inflation.
Finally, consider right-handed neutrinos, often referred to as sterile neutrinos or heavy neutral leptons, which could account for the origin of the tiny, nearly-degenerate masses of the neutrinos of the SM and their oscillations, as well as providing a mechanism for our universe’s matter–antimatter asymmetry.
Scientific diversity
No single experimental approach can cover the large parameter space of masses and couplings that FIPs models allow. The interconnections between open questions require that we construct a diverse research programme incorporating accelerator physics, dark-matter direct detection, cosmology, astrophysics, and precision atomic experiments, with a strong theoretical involvement. The breadth of searches for axions or axion-like particles (ALPs) is a good indication of the growing interest in FIPs (see “Scaling the ALPs” figure). Experimental efforts here span particle and astroparticle physics. In the coming years, helioscopes, which aim to detect solar axions by their conversion into photons (X-rays) in a strong magnetic field, will improve the sensitivity by more than 10 orders of magnitude in mass in the sub-eV range. Haloscopes, which work by converting axions into visible photons inside a resonant microwave cavity placed inside a strong magnetic field, will complement this quest by increasing the sensitivity for small couplings by six orders of magnitude (down to the theoretically motivated gold band in a mass region where the axions can be a dark-matter candidate). Accelerator-based experiments, meanwhile, can probe the strongly motivated QCD scale (MeV–GeV) and beyond for larger couplings. All these results will be complemented by a lively theoretical activity aimed at interpreting astrophysical signals within axion and ALP models.
FIPs 2020 triggered lively discussions that will continue in the coming months via topical meetings on different subjects. Topics that motivated particular interest between communities included possible ways of comparing results from direct-detection dark-matter experiments in the MeV–GeV range against those obtained at extracted beam line and collider experiments; the connection between right-handed neutrino properties and active neutrino parameters; and the interpretation of astrophysical and cosmological bounds, which often overwhelm the interpretation of each of the four portals.
The next FIPs workshop will take place at CERN next year.
On 17 January 1957, a few months after Chien-Shiung Wu’s discovery of parity violation, Wolfgang Pauli wrote to Victor Weisskopf: “Ich glaube aber nicht, daß der Herrgott ein schwacher Linkshänder ist” (I cannot believe that God is a weak left-hander). But maximal parity violation is now well established within the Standard Model (SM). The weak interaction only couples to left-handed particles, as dramatically seen in the continuing absence of experimental evidence for right-handed neutrinos. In the same way, the polarisation of photons originating from transitions that involve the weak interaction is expected to be completely left-handed.
The LHCb collaboration recently tested the handedness of photons emitted in rare flavour-changing transitions from a b-quark to an s-quark. These are mediated by the bosons of the weak interaction according to the SM – but what if new virtual particles contribute too? Their presence could be clearly signalled by a right-handed contribution to the photon polarisation.
New virtual particles could be clearly signalled by a right-handed contribution to the photon polarisation
The b → sγ transition is rare. Fewer than one in a thousand b-quarks transform into an s-quark and a photon. This process has been studied for almost 30 years at particle colliders around the world. By precise measurements of its properties, physicists hope to detect hints of new heavy particles that current colliders are not powerful enough to produce.
The probability of this b-quark decay has been measured in previous experiments with a precision of about 5%, and found to agree with the SM prediction, which bears a similar theoretical uncertainty. A promising way to go further is to study the polarisation of the emitted photon. Measuring the b → sγ polarisation is not easy though. The emitted photons are too energetic to be analysed by a polarimeter and physicists must find innovative ways to probe them indirectly. For example, a right-handed polarisation contribution could induce a charge-parity asymmetry in the B0→ KSπ0γ or Bs0→ φγ decays. It could also contribute to the total rate of radiative b → sγ decays, containing any strange meson, B → Xsγ.
The LHCb collaboration has pioneered a new method to perform this measurement using virtual photons and the largest sample of the very rare B0→ K*0e+e– decay ever collected. First, the sub-sample of decays that come from B0→ K*0γ with a virtual photon that materialises in an electron–positron pair is isolated. The angular distributions of the B0→ K*0e+e– decay products are then used as a polarimeter to measure the handedness of the photon. The number of decays with a virtual photon is small compared to the decays with a real photon, but these latter decays cannot be used as the information on the polarisation is lost.
The size of the right-handed contribution to b → sγ is encoded in the magnitude of the complex parameter C′7/C7. This is a ratio of the right- and left-handed Wilson coefficients that are used in the effective description of b → s transitions. The new B0→ K*0e+e– analysis by the LHCb collaboration constrains the value of C′7/C7, and thus the photon polarisation, with unprecedented precision (figure 1). The measurement is compatible with the SM prediction.
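The link between C′7/C7 and the net photon polarisation can be sketched with the standard leading-order relation λγ = (|C7|² − |C′7|²)/(|C7|² + |C′7|²); sign conventions and small corrections, such as those of order ms/mb, are glossed over here:

```python
def photon_polarisation(r):
    """Net photon polarisation lambda_gamma in b -> s gamma, given the
    ratio r = C7'/C7 of the right- and left-handed Wilson coefficients
    (leading-order relation; small corrections neglected):

        lambda_gamma = (1 - |r|^2) / (1 + |r|^2)
    """
    return (1 - abs(r) ** 2) / (1 + abs(r) ** 2)

# In the SM limit r -> 0 the emitted photons are (almost) fully left-handed:
print(photon_polarisation(0.0))            # 1.0
# A hypothetical 30% right-handed admixture would dilute the polarisation:
print(round(photon_polarisation(0.3), 2))  # 0.83
```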
This result showcases the exceptional capability of the LHCb experiment to study b → sγ transitions. The uncertainty is currently dominated by the data sample size, and thus more accurate studies are foreseen with the large data sample expected in Run 3 of the LHC. More precise measurements may yet unravel a small right-handed polarisation.