Hooman Davoudiasl and Peter Denton of Brookhaven National Laboratory have used the recent Event Horizon Telescope image of supermassive black hole M87* to disfavour “fuzzy” models of ultra-light boson dark matter with masses of the order of a few 10–21 eV (Phys. Rev. Lett. 123 021102). The inferred mass, spin and age of the black hole are incompatible with the existence of such fuzzy dark matter given the principle of superradiance, whereby quantum fluctuations deplete the angular momentum of a rotating black hole by populating a cloud of bosons around it. The effect depends only on the bosons’ mass, and does not presuppose any non-gravitational interactions. Future measurements of M87* and other spinning supermassive black holes have the potential to exclude the entire parameter space for fuzzy dark matter.
An intriguing alternative to cold dark matter, fuzzy dark matter could address the “core-cusp problem”, wherein observations of an approximately constant dark-matter density in the inner parts of galaxies conflict with the steep, power-law-like central density profiles predicted by cosmological simulations of cold dark matter. The particles’ long de Broglie wavelengths, of the order of a kiloparsec, would suppress structure on this scale.
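The kiloparsec scale follows directly from the de Broglie relation λ = h/(mv). A minimal sketch, assuming a fuzzy-dark-matter benchmark mass of 10–22 eV and a typical galactic velocity of about 200 km/s (both illustrative values, not taken from the analysis above):

```python
# de Broglie wavelength of an ultra-light boson (sketch).
# m ~ 1e-22 eV and v ~ 200 km/s are illustrative assumptions.
h = 6.626e-34         # Planck constant, J s
eV_to_kg = 1.783e-36  # mass of 1 eV/c^2 in kg
pc = 3.086e16         # one parsec in metres

m = 1e-22 * eV_to_kg  # boson mass in kg
v = 2.0e5             # galactic velocity in m/s

lam = h / (m * v)     # de Broglie wavelength in metres
print(f"lambda ~ {lam / (1e3 * pc):.2f} kpc")
```

The result comes out at a fraction of a kiloparsec, confirming the order-of-magnitude quoted in the text; heavier bosons of a few 10–21 eV give proportionally shorter wavelengths.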
On 10 July, CERN and the Astroparticle Physics European Consortium (APPEC) founded a new research centre for astroparticle physics theory called EuCAPT. Led by an international steering committee comprising 12 theorists from institutes in France, Portugal, Spain, Sweden, Germany, the Netherlands, Italy, Switzerland and the UK, and from CERN, EuCAPT aims to coordinate and promote theoretical physics in the fields of astroparticle physics and cosmology in Europe.
Astroparticle physics is undergoing a phase of profound transformation, explains inaugural EuCAPT director Gianfranco Bertone, who is spokesperson of the Centre for Gravitation and Astroparticle Physics at the University of Amsterdam. “We have recently obtained extraordinary results such as the discovery of high-energy cosmic neutrinos with IceCube, the direct detection of gravitational waves with LIGO and Virgo, and we have witnessed the birth of multi-messenger astrophysics. Yet we have formidable challenges ahead of us: understanding the nature of dark matter and dark energy, elucidating the origin of cosmic rays, understanding the matter-antimatter asymmetry problem, and so on. These are highly interdisciplinary problems that have ramifications in cosmology, particle, and astroparticle physics, and that are best addressed by a strong and diverse community of scientists.”
The construction of experimental astroparticle facilities is coordinated by APPEC, but until now there was no Europe-wide coordination of theoretical activities, says Bertone. “We want to be open and inclusive, and we hope that all interested scientists will feel welcome to join this new initiative.” On a practical level, EuCAPT aims to coordinate scientific and training activities, help researchers attract adequate resources for their projects, and promote a stimulating and open environment in which young scientists can thrive. CERN will act as the central hub of the consortium for the first five years.
It is not a coincidence that CERN has been chosen as the central hub of EuCAPT, says Gian Giudice, head of CERN’s theory department. “The research that we are doing at CERN-TH is an exploration of the possible links between physics at the smallest and largest scales. Creating a collaborative network among European research centres in astroparticle physics and cosmology will boost activities in these fields and foster dialogue with particle physics,” he says. “Dark matter, dark energy, inflation and the origin of large-scale structures are big questions regarding the universe. But there are good hints that suggest that their explanation has to be looked for in the domain of particle physics.”
The Bs meson is a bound state of a strange quark and a beauty antiquark – as such it possesses both beauty and strangeness. For many years the search for its extremely rare decay to a μ+μ– pair was a holy grail of particle physics, because of its sensitivity to theories that extend the Standard Model (SM). The SM predicts the decay rate for Bs→μ+μ– to be only about 3.6 parts per billion (ppb). Its lighter cousin, the B0, which is made from a down quark and a beauty antiquark, has an even lower predicted branching fraction for decays to a μ+μ– pair of 0.1 ppb. If beyond-the-SM particles exist, however, the predictions could be modified by their presence, giving the decays sensitivity to new physics that rivals and might even exceed that of direct searches.
It took more than a quarter of a century of extensive effort to establish Bs→μ+μ–, and the first observation was presented in 2013, in a joint publication by the CMS and LHCb collaborations based on LHC Run 1 data. The same paper reported evidence for B0→μ+μ– with a significance of three standard deviations; however, this signal has not subsequently been confirmed by CMS, LHCb or ATLAS analyses. A new CMS Run 2 analysis now looks set to bolster interest in these intriguing decays.
The CMS collaboration has updated its 2013 analysis with higher centre-of-mass-energy Run 2 data from 2016, permitting an observation of Bs→μ+μ– with a significance of 5.6 standard deviations (figure 1). The results are consistent with the latest results from ATLAS and LHCb, and while no significant deviation from the SM is observed by any of the experiments, all three decay rates are found to lie slightly below the SM prediction. The slight deficit is not significant, but the trend is intriguing because it could be related to so-called flavour anomalies recently observed by the LHCb experiment in other rare decays of B mesons (CERN Courier May/June p9). This makes the new CMS measurement even more exciting. The new analysis showed no sign of B0→μ+μ–, and a stringent upper limit of 0.36 ppb was set on its branching fraction at 95% confidence.
CMS also managed to measure the effective lifetime of the Bs meson using the several dozen Bs→μ+μ– decay events that were observed. The interest in measuring this lifetime is that, just as for the branching fraction, new physics might alter its value from the SM expectation. This measurement yielded a lifetime of about 1.7 ps, consistent with the SM. The measured CMS value is also consistent with the only other such lifetime measurement, performed by LHCb.
With three times more Run 2 data yet to be analysed by CMS, the next update – based on the full Run 1 and Run 2 datasets – may shed more light on this fascinating corner of physics, and move us closer to the ultimate goal, which is the observation of the B0→μ+μ– decays.
The LHC completed its Run 2 operations in December 2018, delivering a large dataset of proton–proton collisions at a centre-of-mass energy of 13 TeV. The ATLAS detector maintained a high level of readiness and performance throughout Run 2, resulting in 139 fb–1 of data for physics analyses.
An increasingly consistent picture of the properties of the Higgs boson is being drawn in light of the Run 2 data. This is thanks to a wide range of measurements, and particularly through the establishment of its couplings with third-generation quarks following the observation of the H → bb decay and associated ttH production.
The H → γγ and H → ZZ* → 4ℓ final states, where 4ℓ denotes 4e, 2e2μ or 4μ, provide clean experimental signatures that played a leading role in the discovery of the Higgs boson, and are ideal for precision measurements that could reveal subtle effects from new physics. ATLAS presented updated results for these two channels using the full Run 2 dataset at the 2019 summer conferences.
Using improved identification and energy calibration of leptons, photons and jets, and new analysis techniques, a sample of about 210 H → ZZ* → 4ℓ signal events (figure 1) and 6550 H → γγ signal events were selected to perform a series of measurements. The properties of the Higgs boson are investigated by measuring inclusive, differential and per-production-mode cross sections that are sensitive to different modelling aspects.
In the 4ℓ channel, differential cross-section measurements are performed as a function of the transverse momentum of the Higgs boson and the number of jets produced in association with it. The different production mechanisms of the Higgs boson are measured inclusively and in various regions of kinematic phase space, which are cleanly separated by neural networks.
In the high-statistics γγ channel, differential cross sections are measured for a set of variables related to the Higgs boson kinematics, as well as the kinematics and multiplicity of jets produced in association with the Higgs boson. The measured distributions are used to constrain modified interactions of the Higgs boson with SM particles.
The measurements in both channels are found to be well described by the SM predictions. Their combination yields a total Higgs-production cross section of 55.4 ± 4.3 pb, in agreement with the SM prediction of 55.6 ± 2.5 pb. The combined measurement of the transverse-momentum differential cross section (figure 2) has significantly improved in precision compared to earlier results. It is sensitive to the virtual processes governing the dominant Higgs-boson production through gluon fusion and to direct contributions from new physics.
Achieving 8% precision on the Higgs cross section is a significant step towards studying the electroweak symmetry breaking mechanism. Numerous additional measurements are being pursued by ATLAS in the Higgs-boson sector with the full Run 2 dataset to perform detailed tests of SM predictions and hunt for new phenomena.
The Humboldt Kolleg conference Discoveries and Open Puzzles in Particle Physics and Gravitation took place at Kitzbühel in the Austrian Alps from 24 to 28 June, bringing Humboldt prize winners, professors and research-fellow alumni together with prospective future fellows. The meeting was sponsored by the Humboldt Foundation, based in Bonn, whose mission is to promote cooperation between scientists in Germany and elsewhere. The programme focused on connections between particle physics and the large-scale cosmological structure of the universe.
The most recent LHC experimental results were presented by Karl Jakobs (Freiburg and ATLAS spokesperson), confirming the status of the Standard Model (SM). A key discussion topic raised by Fred Jegerlehner (DESY-Zeuthen) was whether the SM’s symmetries might be “emergent” at the relatively low energies of current experiments: in contrast to unification models that exhibit maximal symmetry at the highest energies, the gauge symmetries could emerge in the infrared, but “dissolve” in the extreme ultraviolet. Consider the analogy of a carpet: it looks flat and invariant under translations when viewed from a distance, but this smoothness dissolves when we look at it close up, e.g. as perceived by an ant crawling on it. A critical system close to the Planck scale – the scale where quantum-gravity effects should be important – could behave similarly: the only modes that can exist as long-range correlations, e.g. light-mass particles, self-organise into multiplets with a small number of particles, just as they do in the SM. The vector modes become the gauge bosons of U(1), SU(2) and SU(3); low-energy symmetries such as baryon- and lepton-number conservation would all be violated close to the Planck scale.
Ideas connecting particle physics and quantum computing were also discussed by Peter Zoller (Innsbruck) and Erez Zohar (MPQ, Munich). Here, one takes a lattice field theory that is theoretically difficult to solve and maps it onto a fully controllable quantum system such as an optical lattice that can be programmed in experiments to do calculations – a quantum simulator. First promising results with up to 20 qubits have been obtained for the Schwinger model (QED in 1+1 dimensions). This model exhibits dynamical mass generation and is a first prototype before looking at more complicated theories like QCD.
A key puzzle concerns the hierarchies of scales: the small ratio of the Higgs-boson mass to the Planck scale plus the very small cosmological constant that drives the accelerating expansion of the universe. Might these be related? The cosmological constant is related to the vacuum energy density, which is in turn connected to possible phase transitions in the early universe. Future gravitational-wave experiments with LISA were discussed by Stefano Vitale (Trento) and are expected to be sensitive to the effects of these phase transitions.
A main purpose of Humboldt Kolleg is the promotion of young scientists from the central European region. Student poster prizes sponsored by the Kitzbühel mayor Klaus Winkler were awarded to Janina Krzysiak (IFJ PAN, Krakow) and Jui-Lin Kuo (HEPHY, Vienna).
The XVIII International Conference on Strangeness in Quark Matter (SQM 2019) was held from 10 to 15 June in Bari, Italy. With 270 delegates from 32 countries, the largest participation ever for the SQM series, the conference focused on the role of strange and heavy-flavour quarks in heavy-ion collisions and astrophysics. The scientific programme consisted of 50 invited plenary talks, 76 contributed parallel talks and a rich poster session with more than 60 contributions.
A state-of-the-art session opened the conference, also including a tribute to the late Roy Glauber entitled “The Glauber model in high-energy nucleus–nucleus collisions”. Subsequent sessions were dedicated to highlights from theory and experiment, and included reports on results from low- and high-energy collisions, as well as on hyperon interactions in lattice QCD and thermal models. Representatives from all major collaborations at CERN’s LHC and SPS, Brookhaven’s RHIC, the Heavy Ion Synchrotron SIS at GSI Darmstadt and the NICA project at JINR Dubna made special efforts to release new results at SQM 2019.
Among the highlights were reports that particle-yield measurements are close to determining where phenomena such as strangeness enhancement are localised in phase space. Collective behaviour in small systems was also a much-discussed topic, with new results from the PHENIX experiment showing that p-Au, d-Au and 3He-Au collisions exhibit elliptic flow coefficients consistent with expectations regarding their initial collision geometry. Results from ALICE, CMS and STAR consistently corroborate the presence of elliptic flow in small systems.
There is also increasing interest in transverse-momentum differential baryon-to-meson ratios in the heavy-flavour sector. Recent results from pp and Pb-Pb collisions from both ALICE and CMS suggest that the same dynamics observed in the ratio Λ/K0S may be present in Λc/D, despite the fact that strange and charm quarks are thought to be created in different stages of the system’s evolution. Further studies and future measurements may be needed.
A promising new perspective for the LHC data is to use high-energy pp and p-Pb collisions as factories of identified hadrons created by a source of finite radius, and then to measure the ensuing interactions between these hadrons using femtoscopy. This technique has allowed the ALICE collaboration to probe interactions that had never been measured before, for instance the p-Ξ and p-Ω interaction potentials. These results provide fundamental constraints for the QCD community and are significant in the context of astrophysics.
New results on the onset of deconfinement were shown by the NA61/SHINE collaboration. First results on strangeness production at low energy from HADES and BM@N also enriched the discussion at SQM 2019.
Presentations at the final session showed good prospects for future measurements at FAIR (GSI Darmstadt), NICA (JINR Dubna), the Heavy-Ion Project (J-PARC), and at CERN, given ongoing detector upgrades, the high-luminosity programme, and possible next-generation colliders. Perspectives for QCD measurements at future electron–ion colliders were also presented. On the theory side, new developments and strong research efforts are bringing a better understanding of strangeness production and open heavy-flavour dynamics in heavy-ion collisions.
Young scientist prizes sponsored by the Nuclear Physics European Collaboration Committee were awarded to Bong-Hwi Lim of Pusan National University, Korea, and to Olga Soloveva of Goethe University, Frankfurt, for their poster contributions. The inaugural Andre Mischke Award (established at SQM 2019) for the young scientist with the best experimental parallel talk was given to Erin Frances Gauger of the University of Texas, Austin.
The next edition of SQM will take place in Busan, Korea, in May 2021.
Processes where the flavour of charged leptons is not conserved are undetectably rare in the Standard Model (SM). For neutral leptons, flavour violation is known to occur in neutrino oscillations, but charged-lepton-flavour violation (CLFV) is so suppressed that, if observed, it would provide indisputable evidence of physics beyond the SM.
The LHCb collaboration recently reported the results of searches for two CLFV decays, B+→ K+μ± e∓ and B(s)0→ τ±μ∓, using 3 fb–1 of data collected in 2011 and 2012. The two decays provide complementary information as their final states involve charged leptons from different families, and both represent experimental challenges for LHCb. While the detector performance is excellent for muons, it is more difficult to reconstruct electrons and taus. The difficulty with electrons is related to energy losses via bremsstrahlung radiation. Meanwhile, the short-lived tau leptons are always reconstructed from their decay products, which include at least one neutrino, and thus part of the tau’s energy is unavoidably lost. In both cases, the analyses are able to recover some of the lost information and improve the resolution by exploiting constraints on the kinematics and topology of the decay.
Neither search found a signal (figure 1), but thanks to these reconstruction techniques and the large quantity of B-meson decays recorded by the detector, LHCb has established the most stringent upper limits on the branching fractions of these decays: 9.5 × 10–9 for B+→ K+μ− e+, 8.8 × 10–9 for B+→ K+μ+ e–, 1.4 × 10–5 for B0→ τ±μ∓, and 4.2 × 10–5 for Bs0→ τ±μ∓ (all at the 95% confidence level). The latter is also the first ever limit on Bs0→ τ±μ∓.
CLFV decays of B-mesons are particularly interesting in light of recent flavour anomalies, whereby LHCb found hints that the decay rates for b → sμ+μ– and b → se+e– are not equal (CERN Courier May/June 2019 p33). While the anomalies are most suggestive of the violation of lepton-flavour universality, several proposed extensions to the SM that address them also predict CLFV, with branching ratios for B+→ K+μ± e∓ and B(s)0→ τ±μ∓ that are within LHCb’s reach. The latest LHCb results therefore impose strong new constraints on beyond-SM models. The analyses also open the door to further LHCb tests of CLFV by demonstrating the feasibility of searches for rare processes with final-state electrons and taus.
At the 22nd edition of the Planck conference series, which took place in Granada, Spain, from 3–7 June, 170 particle physicists and cosmologists discussed the latest in beyond the Standard Model (BSM) physics and ultraviolet completions of the SM within theories that unify the fundamental interactions.
Several speakers addressed the serious model-building restrictions in supersymmetry and Higgs compositeness that are imposed by the negative results of direct searches for BSM particles at ATLAS and CMS. Particular emphasis was put on the (extended) Higgs sector of the SM, where precision measurements might detect signals of BSM physics. Updates from LHCb and Belle on the flavour anomalies were also eagerly discussed, with proposed explanations including leptoquarks and additional U(1) gauge symmetries with exotic vector-like quarks. However, not all were convinced that the results signal BSM physics. On the cosmological side, delegates learned of the latest attempts to build models of WIMPs, axions, magnetic relics and dark radiation, which also include mechanisms for baryogenesis and inflation in the early universe.
Given the absence of new BSM particles so far at the LHC, theorists talk of a “desert” between the weak and Planck scales containing nothing but SM particles. Several speakers reported that phase transitions between non-trivial Higgs vacua could lead to violent phenomena in the early universe that might be tested by future gravitational-wave detectors. Within the inflationary universe these phenomena might also lead to the production of primordial black holes that could explain dark matter.
Discussions of ultraviolet (i.e. high-energy) completions of the SM encompassed the grand unification of fundamental interactions, the origin of neutrino masses, flavour symmetries and the so-called “swampland conjectures”, which characterise theories that might not be compatible with a consistent theory of quantum gravity. One might therefore hope that healthy signals of BSM physics will appear somewhere between the desert and the swampland.
Planck 2020 will be held from 8–12 June in Durham, UK.
In the early 1970s the term “Standard Model” did not yet exist – physicists used “Weinberg–Salam model” instead. But the discovery of the weak neutral current in Gargamelle at CERN in 1973, followed by the prediction and observation of particles composed of charm quarks at Brookhaven and SLAC, quickly shifted the focus of particle physicists from the strong to the electroweak interactions – a sector in which trailblazing theoretical work had quietly taken place in the previous years. Plans for an electron–positron collider at CERN were soon born, with the machine first named LEP (Large Electron Positron collider) in a 1976 CERN yellow report authored by a distinguished study group featuring, among others, John Ellis, Burt Richter, Carlo Rubbia and Jack Steinberger.
LEP’s size – four times larger than anything before it – was dictated by the need to observe W-pair production, and to check that its cross section did not diverge as a function of energy. The phenomenology of the Z-boson’s decay was to come under similar scrutiny. At the time, the number of fermion families was unknown, and it was even possible that there were so many neutrino families that the Z lineshape would be washed out. LEP’s other physics targets included the possibility of producing Higgs bosons, whose mass was completely unknown and could have been anywhere from around zero to 1 TeV.
The CERN Council approved LEP in October 1981 for centre-of-mass energies up to 110 GeV. It was a remarkable vote of confidence in the Standard Model (SM), given that the W and Z bosons had not yet been directly observed. A frantic period followed, with the ALEPH, DELPHI, L3 and OPAL detectors approved in 1983. Based on similar geometric principles, they included drift chambers or TPCs for the main trackers, BGO crystals, lead–glass or lead–gas sandwich electromagnetic calorimeters, and, in most cases, an instrumented return yoke for hadron calorimetry and muon filtering. The underground caverns were finished in 1988 and the detectors were in various stages of installation by the end of spring 1989, by which time the storage ring had been installed in the 27 km-circumference tunnel (see The greatest lepton collider).
Expedition to the Z pole
The first destination was the Z pole at an energy of around 90 GeV. Its location was then known to ±300 MeV from measurements of proton–antiproton collisions at Fermilab’s Tevatron. The priority was to establish the number of light neutrino families, a number that not only closely relates to the number of elementary fermions but also impacts the chemical composition and large-scale structure of the universe. By 1989 the existence of the νe, νμ and ντ neutrinos was well established. Several model-dependent measurements from astrophysics and collider physics at the time had pointed to the number of light active neutrinos (Nν) being less than five, but the SM could, in principle, accommodate any higher number.
The initial plan to measure Nν using the total width of the Z resonance was quickly discarded in favour of the visible peak cross section, where the effect was far more prominent – and, to first approximation, insensitive to possible new detectable channels. The LEP experiments were therefore thrown in at the deep end, needing to make an absolute cross-section measurement with completely new detectors in an unfamiliar environment that demanded triggers, tracking, calorimetry and luminosity monitors all to work and acquire data in synchronisation.
On the evening of 13 August, during a first low-luminosity pilot run just one month after LEP achieved first turns, OPAL reported the first observation of a Z decay (see OPAL fruits). Each experiment quickly observed a handful more. The first Z-production run took place from 18 September to 9 October, with the four experiments accumulating about 3000 visible Z decays each. They took data at the Z peak and at 1 and 2 GeV either side, improving the precision on the Z mass and allowing a measurement of the peak cross section. The results, including those from the Mark II collaboration at SLAC’s linear electron–positron SLC collider, were published and presented in CERN’s overflowing main auditorium on 13 October.
After only three weeks of data taking and 10,000 Z decays, the number of neutrinos was found to be three. In the following years, some 17 million Z decays were accumulated, and cross-section measurement uncertainties fell to the per-mille level. And while the final LEP number – Nν = 2.9840 ± 0.0082 – may appear to be a needlessly precise measurement of the number three (figure 1a), it today serves as by far the best high-energy constraint on the unitarity of the neutrino mixing matrix. LEP’s stash of a million clean tau pairs from Z → τ+τ– decays also allowed the universality of the lepton–neutrino couplings to the weak charged current to be tested with unprecedented precision. The present averages are still dominated by the LEP numbers: gτ/gμ = 1.0010 ± 0.0015 and gτ/ge = 1.0029 ± 0.0015.
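The arithmetic behind Nν can be illustrated with the complementary invisible-width method: subtract the measured hadronic and charged-leptonic partial widths from the total Z width, and divide by the SM prediction for a single neutrino species. A sketch using representative partial-width values in MeV (illustrative inputs, not the article’s own; LEP’s quoted precision came from the peak-cross-section method described above):

```python
# N_nu from the Z invisible width (illustrative values, MeV).
Gamma_Z   = 2495.2   # total Z width
Gamma_had = 1744.4   # hadronic partial width
Gamma_lep = 83.98    # width per charged-lepton species
Gamma_nu  = 167.2    # SM width per neutrino species

Gamma_inv = Gamma_Z - Gamma_had - 3 * Gamma_lep  # invisible width
N_nu = Gamma_inv / Gamma_nu
print(f"N_nu ~ {N_nu:.2f}")
```

With these inputs the ratio lands just below three, as the precision measurement demands.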
LEP continued to carry out Z-lineshape scans until 1991, and repeated them in 1993 and 1995. Two thirds of the total luminosity was recorded at the Z pole. As statistical uncertainties on the Z’s parameters went down, the experiments were challenged to control systematic uncertainties, especially in the experimental acceptance and luminosity. Monte Carlo modelling of fragmentation and hadronisation was gradually improved by tuning to measurements in data. On the luminosity front it soon became clear that dedicated monitors would be needed to measure small-angle Bhabha scattering (e+e–→ e+e–), which proceeds at a much higher rate than Z production. The trick was to design a compact electromagnetic calorimeter with sufficient position resolution to define the geometric acceptance, and to compare this to calculations of the Bhabha cross section.
The final ingredient for LEP’s extraordinary precision was a detailed knowledge of the beam energy, which required the four experiments to work closely with accelerator experts. Curiously, the first energy calibration was performed in 1990 by circulating protons in the LEP ring – the first protons to orbit in what would eventually become the LHC tunnel, but at a meagre energy of 20 GeV. The speed of the protons was inferred by comparing the radio-frequency electric field needed to keep protons and electrons circulating at 20 GeV on the same orbit, allowing a measurement of the total magnetic bending field on which the beam energy depends. This gave a 20 MeV uncertainty on the Z mass. To reduce this to 1.7 MeV for the final Z-pole measurement, however, required the use of resonant depolarisation routinely during data taking. First achieved in 1991, this technique uses the natural transverse spin polarisation of the beams to yield an instantaneous measurement of the beam energy to a precision of ±0.1 MeV – so precise that it revealed minute effects caused, for example, by Earth’s tides and the passage of local trains (see Tidal forces, melting ice and the TGV to Paris). The final precision was more than 10 times better than had been anticipated in pre-LEP studies.
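The extraordinary precision of resonant depolarisation rests on the relation between the spin tune νs (the number of spin precessions per turn) and the beam energy, E = νs × me c²/ae, where ae is the electron’s anomalous magnetic moment. A sketch with a hypothetical spin-tune value near the Z pole (the constants are standard; the spin tune is an assumed example, not a measured LEP number):

```python
# Beam energy from a measured spin tune (resonant depolarisation).
m_e = 0.51099895e-3   # electron mass, GeV
a_e = 1.15965218e-3   # electron anomalous magnetic moment

nu_s = 103.5          # spin tune (hypothetical example value)
E_beam = nu_s * m_e / a_e
print(f"E_beam ~ {E_beam:.3f} GeV")
```

Since νs could be pinned down to a small fraction of an integer, the beam energy followed to the 0.1 MeV level quoted above.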
Electroweak working group
The LEP electroweak working group saw the ALEPH, DELPHI, L3 and OPAL collaborations work closely on combined cross-section and other key measurements – in particular the forward-backward asymmetry in lepton and b-quark production – at each energy point. By 1994, results from the SLD collaboration at SLAC were also included. Detailed negotiations were sometimes needed to agree on a common treatment of statistical correlations and systematic uncertainties, setting a precedent for future inter-experiment cooperation. Many tests of the SM were performed, including tests of lepton universality (figure 1b), adding to the tau lepton results already mentioned. Analyses also demonstrated that the couplings of leptons and quarks are consistent with the SM predictions.
The combined electroweak measurements were used to make stunning predictions of the top-quark and Higgs-boson masses, mt and mH. After the 1993 Z-pole scan, the LEP experiments were able to produce a combined measurement of the Z width with a precision of 3 MeV in time for the 1994 winter conferences, allowing the prediction mt = 177 ± 13 ± 19 GeV where the first error is experimental and the second is due to mH not being known. A month later the CDF collaboration at the Tevatron announced the possible existence of a top quark with a mass of 176 ± 16 GeV. Both CDF and its companion experiment D0 reached 5σ “discovery” significance a year later. It is a measure of the complexity of the Z-boson analyses (in particular the beam-energy measurement) that the final Z-pole results were published a full 11 years later, constraining the Higgs mass to be less than 285 GeV at 95% confidence level (figure 1c), with a best fit at 129 GeV.
From QCD to the W boson
LEP’s fame in the field tends to concern its electroweak breakthroughs. But, with several million recorded hadronic Z decays, the LEP experiments also made big advances in quantum chromodynamics (QCD). These results significantly increased knowledge of hadron production and quark and gluon dynamics, and drove theoretical and experimental methods that are still used extensively today. LEP’s advantage as a lepton collider was to have an initial state that was independent of nucleon structure functions, allowing the measurement of a single, energy-scale-dependent coupling constant. The strong coupling constant αs was determined to be 0.1195 ± 0.0034 at the Z pole, and to vary with energy – the highlight of LEP’s QCD measurements. This so-called running of αs was verified over a large energy range, from the tau mass up to 206 GeV, yielding additional experimental confirmation of QCD’s core property of asymptotic freedom (figure 2a).
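The running observed at LEP follows, at leading order, the one-loop renormalisation-group solution αs(Q) = αs(MZ)/[1 + b0 αs(MZ) ln(Q²/MZ²)], with b0 = (33 − 2nf)/(12π). A sketch evolving the quoted αs(MZ) = 0.1195 up to LEP2’s top energy of 206 GeV (one-loop only with nf = 5 flavours – a simplification of the full analyses):

```python
import math

def alpha_s(Q, alpha_mz=0.1195, mz=91.19, nf=5):
    """One-loop running of the strong coupling (illustrative)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + b0 * alpha_mz * math.log(Q**2 / mz**2))

print(f"alpha_s(206 GeV) ~ {alpha_s(206):.4f}")
```

The coupling decreases by roughly ten per cent between the Z pole and 206 GeV – the asymptotic-freedom trend of figure 2a.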
Many other important QCD measurements were performed, such as the gluon self-coupling, studies of differences between quark and gluon jets, verification of the running b-quark mass, studies of hadronisation models, measurements of Bose–Einstein correlations and detailed studies of hadronic systems in two-photon scattering processes. The full set of measurements established QCD as a consistent theory that accurately describes the phenomenology of the strong interaction.
Following successful Z operations during the “LEP1” phase in 1989–1995, a second LEP era devoted to accurate studies of W-boson pair production at centre-of-mass energies above 160 GeV got under way. Away from the Z resonance, the electron–positron annihilation cross section decreases sharply; as soon as the centre-of-mass energy reaches twice the W and Z boson masses, the WW, then ZZ, production diagrams open up (figure 2b). Accessing the WW threshold required the development of superconducting radio-frequency cavities, the first of which were already installed in 1994, and they enabled a gradual increase in the centre-of-mass energy up to a maximum of 209 GeV in 2000.
The “LEP2” phase allowed the experiments to perform a signature analysis that dated back to the machine’s first conception: the measurement of the WW-boson cross section. Would it diverge, or would electroweak diagrams interfere to suppress it? The precise measurement of the WW cross section as a function of the centre-of-mass energy was a very important test of the SM, since it showed that the sum and interference of three four-fermion diagrams were indeed at work in WW production: the t-channel ν exchange, and the s-channel γ and Z exchange (figure 2c). LEP data proved that the γWW and ZWW triple gauge vertices are indeed present and interfere destructively with the t-channel diagram, suppressing the cross section and preventing it from diverging.
The second key LEP2 electroweak measurement was of the mass and total decay width of the W boson, which were determined by directly reconstructing the decay products of the two W bosons in the fully hadronic (W+W–→ qqqq) and semi-leptonic (W+W–→ qqℓνℓ) decay channels. The combined LEP W-mass measurement from direct reconstruction data alone is 80.375 ± 0.025(stat) ± 0.022(syst) GeV, with the largest contribution to the systematic uncertainty originating from fragmentation and hadronisation effects. The relation between the Z-pole observables, mt and mW, provides a stringent test of the SM and constrains the Higgs mass.
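When the statistical and systematic components quoted above are uncorrelated, the total uncertainty follows by adding them in quadrature. A minimal sketch (the helper name is illustrative, not the collaboration's code):

```python
import math

def combine_in_quadrature(*errors):
    """Total uncertainty from independent error components."""
    return math.sqrt(sum(e**2 for e in errors))

# The quoted LEP W-mass uncertainties: 0.025 GeV (stat) and 0.022 GeV (syst)
total = combine_in_quadrature(0.025, 0.022)
print(f"m_W = 80.375 ± {total:.3f} GeV")  # total uncertainty ≈ 0.033 GeV
```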
To the Higgs and beyond
Before LEP started, the mass of the Higgs boson was basically unknown. In the simplest version of the SM, involving a single Higgs boson, the only robust constraints were its non-observation in nuclear decays (forbidding masses below 14 MeV) and the need to maintain a sensible, calculable theory (ruling out masses above 1 TeV). In 1990, soon after the first LEP data-taking period, the full Higgs-boson mass range below 24 GeV was excluded at 95% confidence level by the LEP experiments. Above this mass, the dominant decay of the Higgs boson, occurring about 80% of the time, was predicted to be into b quark–antiquark pairs, followed by pairs of tau leptons, charm quarks or gluons, while the WW* decay mode starts to contribute at the maximum reachable masses of approximately 115 GeV. The main production process is Higgs-strahlung, whereby a Higgs is emitted by a virtual Z boson.
The combined electroweak measurements were used to make stunning predictions of the top quark and Higgs boson masses
During the full lifetime of LEP, the four experiments kept searching for neutral and charged Higgs bosons in several models and exclusion limits continued to improve. In its last year of data taking, when the centre-of-mass energy reached 209 GeV, ALEPH reported an excess of four-jet events. It was consistent with a 114 GeV Higgs boson and had a significance that varied as the data were accumulated, peaking at an instantaneous significance of around 3.9 standard deviations. The other three experiments carefully scrutinised their data to confirm or disprove ALEPH’s suggestion, but none observed any long-lasting excess in that mass region. Following many discussions, the LEP run was extended until 8 November 2000. However, it was decided not to keep running the following year so as not to impact the LHC schedule. The final LEP-wide combination excluded, at 95% confidence level, a SM Higgs boson with mass below 114.4 GeV.
The four LEP experiments carried out many other searches for novel physics that set limits on the existence of new particles. Notable cases are the searches for additional Higgs bosons in two-Higgs-doublet models and their minimal supersymmetric incarnation. Neutral scalar and pseudoscalar Higgs bosons lighter than the Z boson, and charged Higgs bosons up to the kinematic limit of their pair production, were also excluded. Supersymmetric particles suffered a similar fate, under the theoretically attractive assumption of R-parity conservation. The existence of sleptons and charginos was excluded in the largest part of the parameter space for masses below 70–100 GeV, near the kinematic limit for their pair production. Neutralinos with masses below approximately half the Z-boson mass were also excluded in a large part of the parameter space. The LEP exclusions for several of these electroweak-produced supersymmetric particles are still the most stringent and most model-independent limits ever obtained.
It is hard now to recall how little we knew before LEP, and how great a step forward it represented. It was often said that LEP discovered electroweak radiative corrections at the level of 5σ, opening up a precision era in particle physics that continues to set the standard today and to offer guidance on the elusive new physics beyond the SM.
The annual “g-2 physics week”, which took place on Elba Island in Italy from 27 May to 1 June, saw almost 100 physicists discuss the latest progress at the muon g−2 experiment at Fermilab. The muon magnetic anomaly, aμ, is one of the few cases where there is a hint of a discrepancy between a Standard Model (SM) prediction and an experimental measurement. Almost 20 years ago, in a sequence of increasingly precise measurements, the E821 collaboration at Brookhaven National Laboratory (BNL) determined aμ = (g–2)/2 with a relative precision of 0.54 parts per million (ppm), providing a rigorous test of the SM. Impressive as it was, the result was limited by statistical uncertainties.
A new muon g−2 experiment currently taking data at Fermilab, called E989, aims to improve the experimental error on aμ by a factor of four. The collaboration took its first dataset in 2018, integrating 40% more statistics than the BNL experiment, and is now coming to the end of a second run that will yield a combined dataset more than three times larger.
A thorough review of the many analysis efforts during the first data run has been conducted. The muon magnetic anomaly is determined from the ratio of the muon and proton precession frequencies in the same magnetic field. The ultimate aim of experiment E989 is to measure both of these frequencies with a precision of 0.1 ppm by employing techniques and expertise from particle-physics experimentation (straw tracking detectors and calorimetry), nuclear physics (nuclear magnetic resonance) and accelerator science. These frequencies are independently measured by several analysis groups with different methodologies and different susceptibilities to systematic effects.
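Schematically, the anomaly follows from the ratio of the two measured frequencies together with an externally measured magnetic-moment ratio. The sketch below is a simplified illustration of that relation, not the collaboration's analysis code; the value of λ and the example ratio R are assumptions chosen for illustration only.

```python
def a_mu_from_frequency_ratio(R, lam=3.183345142):
    """Schematic extraction of the muon anomaly from frequency data.

    R   : ratio of the anomalous muon precession frequency to the proton
          NMR precession frequency measured in the same magnetic field
    lam : muon-to-proton magnetic-moment ratio mu_mu/mu_p, taken from
          external measurements (value here is assumed for illustration)
    """
    return R / (lam - R)

# Illustrative input chosen to land near the known scale of a_mu (~1.166e-3)
R_example = 0.0037072
print(a_mu_from_frequency_ratio(R_example))
```

The division of labour is the point: the experiment measures the two frequencies blind, while λ comes from independent external measurements.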
A recent relative unblinding of a subset of the data with a statistical precision of 1.3 ppm showed excellent agreement across the analyses in both frequencies. The absolute values of the two frequencies are still subject to a ~25 ppm hardware blinding offset, so no physics conclusion can yet be drawn. But the exercise has shown that the collaboration is well on the way to publishing its first result with a precision better than E821 towards the end of the year.