LEP’s electroweak leap

Trailblazing events

In the early 1970s the term “Standard Model” did not yet exist – physicists used “Weinberg–Salam model” instead. But the discovery of the weak neutral current in Gargamelle at CERN in 1973, followed by the prediction and observation of particles composed of charm quarks at Brookhaven and SLAC, quickly shifted the focus of particle physicists from the strong to the electroweak interactions – a sector in which trailblazing theoretical work had quietly taken place in the previous years. Plans for an electron–positron collider at CERN were soon born, with the machine first named LEP (Large Electron Positron collider) in a 1976 CERN yellow report authored by a distinguished study group featuring, among others, John Ellis, Burt Richter, Carlo Rubbia and Jack Steinberger.

LEP’s size – four times larger than anything before it – was dictated by the need to observe W-pair production, and to check that its cross section did not diverge as a function of energy. The phenomenology of the Z-boson’s decay was to come under similar scrutiny. At the time, the number of fermion families was unknown, and it was even possible that there were so many neutrino families that the Z lineshape would be washed out. LEP’s other physics targets included the possibility of producing Higgs bosons, whose mass was then completely unknown and could have been anywhere from around zero to 1 TeV.

The CERN Council approved LEP in October 1981 for centre-of-mass energies up to 110 GeV. It was a remarkable vote of confidence in the Standard Model (SM), given that the W and Z bosons had not yet been directly observed. A frantic period followed, with the ALEPH, DELPHI, L3 and OPAL detectors approved in 1983. Based on similar geometric principles, they included drift chambers or TPCs as the main trackers; BGO-crystal, lead–glass or lead–gas sandwich electromagnetic calorimeters; and, in most cases, an instrumented return yoke for hadron calorimetry and muon filtering. The underground caverns were finished in 1988 and the detectors were in various stages of installation by the end of spring 1989, by which time the storage ring had been installed in the 27 km-circumference tunnel (see The greatest lepton collider).

Expedition to the Z pole

The first destination was the Z pole at an energy of around 90 GeV. Its location was then known to ±300 MeV from measurements of proton–antiproton collisions at Fermilab’s Tevatron. The priority was to establish the number of light neutrino families, a number that not only closely relates to the number of elementary fermions but also impacts the chemical composition and large-scale structure of the universe. By 1989 the existence of the νe, νμ and ντ neutrinos was well established. Several model-dependent measurements from astrophysics and collider physics at the time had pointed to the number of light active neutrinos (Nν) being less than five, but the SM could, in principle, accommodate any higher number.

The OPAL logbook entry for the first Z boson seen at LEP

The initial plan to measure Nν using the total width of the Z resonance was quickly discarded in favour of the visible peak cross section, where the effect was far more prominent and, to first approximation, insensitive to possible new detectable channels. The LEP experiments were therefore thrown in at the deep end, needing to make an absolute cross-section measurement with completely new detectors in an unfamiliar environment that demanded that triggers, tracking, calorimetry and luminosity monitors all work and acquire data in synchronisation.
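
To see why the visible peak cross section is so sensitive to Nν, note that every additional neutrino species inflates the total Z width and so depresses the Breit–Wigner peak. A rough illustration using approximate SM partial widths (values assumed here for the sketch, not quoted from the LEP publications):

```python
import math

# Approximate SM partial widths of the Z in GeV (illustrative values only)
M_Z       = 91.19     # Z mass
GAMMA_EE  = 0.0839    # width to one charged-lepton pair (taken equal for e, mu, tau)
GAMMA_HAD = 1.744     # width to hadrons
GAMMA_NU  = 0.1672    # width per light-neutrino species
GEV2_TO_NB = 389379.0 # conversion: 1 GeV^-2 = 389379 nb

def peak_cross_section(n_nu):
    """Hadronic peak cross section sigma0 = 12*pi/M_Z^2 * Gee*Ghad/GZ^2, in nb."""
    gamma_Z = GAMMA_HAD + 3 * GAMMA_EE + n_nu * GAMMA_NU
    return 12 * math.pi / M_Z**2 * GAMMA_EE * GAMMA_HAD / gamma_Z**2 * GEV2_TO_NB

for n in (2, 3, 4):
    print(f"N_nu = {n}: peak hadronic cross section ~ {peak_cross_section(n):.1f} nb")
# Each additional neutrino species lowers the peak by roughly 10%,
# far larger than the per-mille precision eventually reached.
```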

On the evening of 13 August, during a first low-luminosity pilot run just one month after LEP achieved first turns, OPAL reported the first observation of a Z decay (see OPAL fruits). Each experiment quickly observed a handful more. The first Z-production run took place from 18 September to 9 October, with the four experiments accumulating about 3000 visible Z decays each. They took data at the Z peak and at 1 and 2 GeV either side, improving the precision on the Z mass and allowing a measurement of the peak cross section. The results, including those from the Mark II collaboration at the SLC, SLAC’s linear electron–positron collider, were published and presented in CERN’s overflowing main auditorium on 13 October.

After only three weeks of data taking and 10,000 Z decays, the number of neutrinos was found to be three. In the following years, some 17 million Z decays were accumulated, and cross-section measurement uncertainties fell to the per-mille level. And while the final LEP number – Nν = 2.9840 ± 0.0082 – may appear to be a needlessly precise measurement of the number three (figure 1a), it today serves as by far the best high-energy constraint on the unitarity of the neutrino mixing matrix. LEP’s stash of a million clean tau pairs from Z → τ+ τ– decays also allowed the universality of the lepton–neutrino couplings to the weak charged current to be tested with unprecedented precision. The present averages are still dominated by the LEP numbers: gτ/gμ = 1.0010 ± 0.0015 and gτ/ge = 1.0029 ± 0.0015.

Diagrams showing measurements at LEP

LEP continued to carry out Z-lineshape scans until 1991, and repeated them in 1993 and 1995. Two thirds of the total luminosity was recorded at the Z pole. As statistical uncertainties on the Z’s parameters went down, the experiments were challenged to control systematic uncertainties, especially in the experimental acceptance and luminosity. Monte Carlo modelling of fragmentation and hadronisation was gradually improved by tuning to measurements in data. On the luminosity front it soon became clear that dedicated monitors would be needed to measure small-angle Bhabha scattering (e+e– → e+e–), which proceeds at a much higher rate than Z production. The trick was to design a compact electromagnetic calorimeter with sufficient position resolution to define the geometric acceptance, and to compare this to calculations of the Bhabha cross section.
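
The principle can be sketched at leading order, where small-angle Bhabha scattering is dominated by t-channel photon exchange; the acceptance angles and event counts below are made-up illustrative numbers, not the actual LEP luminometer parameters:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
GEV2_TO_NB = 389379.0        # conversion: 1 GeV^-2 = 389379 nb
S = 91.19**2                 # squared centre-of-mass energy at the Z pole (GeV^2)

def bhabha_cross_section(theta_min, theta_max):
    """Leading-order small-angle Bhabha cross section (t-channel photon only):
    sigma = 16*pi*alpha^2/s * (1/theta_min^2 - 1/theta_max^2), angles in radians, result in nb."""
    return 16 * math.pi * ALPHA**2 / S * (1 / theta_min**2 - 1 / theta_max**2) * GEV2_TO_NB

sigma_lumi = bhabha_cross_section(0.030, 0.050)   # hypothetical 30-50 mrad acceptance
print(f"accepted Bhabha cross section ~ {sigma_lumi:.0f} nb")  # roughly 90 nb, above the Z rate

# The integrated luminosity follows from the counted Bhabha events,
# and any Z cross section from N_Z divided by that luminosity:
n_bhabha, n_Z = 2.5e6, 1.0e6                      # made-up event counts
luminosity = n_bhabha / sigma_lumi                # in nb^-1
print(f"sigma_Z ~ {n_Z / luminosity:.1f} nb")
```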

The final ingredient for LEP’s extraordinary precision was a detailed knowledge of the beam energy, which required the four experiments to work closely with accelerator experts. Curiously, the first energy calibration was performed in 1990 by circulating protons in the LEP ring – the first protons to orbit in what would eventually become the LHC tunnel, but at a meagre energy of 20 GeV. The speed of the protons was inferred by comparing the radio-frequency electric field needed to keep protons and electrons circulating at 20 GeV on the same orbit, allowing a measurement of the total magnetic bending field on which the beam energy depends. This gave a 20 MeV uncertainty on the Z mass. To reduce this to 1.7 MeV for the final Z-pole measurement, however, required the use of resonant depolarisation routinely during data taking. First achieved in 1991, this technique uses the natural transverse spin polarisation of the beams to yield an instantaneous measurement of the beam energy to a precision of ±0.1 MeV – so precise that it revealed minute effects caused, for example, by Earth’s tides and the passage of local trains (see Tidal forces, melting ice and the TGV to Paris). The final precision was more than 10 times better than had been anticipated in pre-LEP studies.
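
The lever arm behind resonant depolarisation is the spin tune, the number of spin precessions per turn, which is directly proportional to the beam energy. A minimal sketch of the relation (the numerical constants are standard values assumed here, not taken from the article):

```python
# Spin tune nu_s = a_e * E_beam / m_e, so E_beam = nu_s * (m_e / a_e) ~ nu_s * 0.4406486 GeV
A_E = 1.159652e-3      # electron anomalous magnetic moment (g-2)/2
M_E = 0.51099895e-3    # electron mass in GeV

def beam_energy_from_spin_tune(nu_s):
    """Beam energy in GeV from a measured spin tune (integer part known from the machine)."""
    return nu_s * M_E / A_E

# A depolarising RF kick finds resonance at, say, nu_s = 103.48 near the Z pole
print(f"E_beam ~ {beam_energy_from_spin_tune(103.48):.3f} GeV")
```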

Electroweak working group

The LEP electroweak working group saw the ALEPH, DELPHI, L3 and OPAL collaborations work closely on combined cross-section and other key measurements – in particular the forward-backward asymmetry in lepton and b-quark production – at each energy point. By 1994, results from the SLD collaboration at SLAC were also included. Detailed negotiations were sometimes needed to agree on a common treatment of statistical correlations and systematic uncertainties, setting a precedent for future inter-experiment cooperation. Many tests of the SM were performed, including tests of lepton universality (figure 1b), adding to the tau lepton results already mentioned. Analyses also demonstrated that the couplings of leptons and quarks are consistent with the SM predictions.

The combined electroweak measurements were used to make stunning predictions of the top-quark and Higgs-boson masses, mt and mH. After the 1993 Z-pole scan, the LEP experiments were able to produce a combined measurement of the Z width with a precision of 3 MeV in time for the 1994 winter conferences, allowing the prediction mt = 177 ± 13 ± 19 GeV, where the first error is experimental and the second reflects the unknown value of mH. A month later the CDF collaboration at the Tevatron announced the possible existence of a top quark with a mass of 176 ± 16 GeV. Both CDF and its companion experiment D0 reached 5σ “discovery” significance a year later. It is a measure of the complexity of the Z-boson analyses (in particular the beam-energy measurement) that the final Z-pole results were published a full 11 years later, constraining the Higgs mass to be less than 285 GeV at 95% confidence level (figure 1c), with a best fit at 129 GeV.

From QCD to the W boson

LEP’s fame in the field tends to concern its electroweak breakthroughs. But, with several million recorded hadronic Z decays, the LEP experiments also made big advances in quantum chromodynamics (QCD). These results significantly increased knowledge of hadron production and quark and gluon dynamics, and drove theoretical and experimental methods that are still used extensively today. LEP’s advantage as a lepton collider was to have an initial state that was independent of nucleon structure functions, allowing the measurement of a single, energy-scale-dependent coupling constant. The strong coupling constant αs was determined to be 0.1195 ± 0.0034 at the Z pole, and to vary with energy – the highlight of LEP’s QCD measurements. This so-called running of αs was verified over a large energy range, from the tau mass up to 206 GeV, yielding additional experimental confirmation of QCD’s core property of asymptotic freedom (figure 2a).
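
The running quoted here follows, at leading order, from the QCD beta function. A rough one-loop sketch starting from the LEP value above (five active flavours assumed throughout, so it is only indicative at the lowest scales):

```python
import math

ALPHA_S_MZ = 0.1195   # strong coupling at the Z pole, as quoted in the text
M_Z = 91.19           # GeV
N_F = 5               # active quark flavours (kept fixed in this sketch)

def alpha_s(q):
    """One-loop running: alpha_s(Q) = alpha_s(mZ) / (1 + b0*alpha_s(mZ)*ln(Q^2/mZ^2))."""
    b0 = (33 - 2 * N_F) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + b0 * ALPHA_S_MZ * math.log(q**2 / M_Z**2))

for q in (10.0, 91.19, 206.0):
    print(f"alpha_s({q:6.2f} GeV) ~ {alpha_s(q):.4f}")
# The coupling decreases with energy (asymptotic freedom), falling to roughly 0.107
# at the highest LEP2 energies.
```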

Diagrams showing LEP results

Many other important QCD measurements were performed, such as the gluon self-coupling, studies of differences between quark and gluon jets, verification of the running b-quark mass, studies of hadronisation models, measurements of Bose–Einstein correlations and detailed studies of hadronic systems in two-photon scattering processes. The full set of measurements established QCD as a consistent theory that accurately describes the phenomenology of the strong interaction.

Following successful Z operations during the “LEP1” phase in 1989–1995, a second LEP era devoted to accurate studies of W-boson pair production at centre-of-mass energies above 160 GeV got under way. Away from the Z resonance, the electron–positron annihilation cross section decreases sharply; as soon as the centre-of-mass energy reaches twice the W-boson mass, and then twice the Z-boson mass, the WW and subsequently ZZ production channels open up (figure 2b). Accessing the WW threshold required the development of superconducting radio-frequency cavities, the first of which were installed as early as 1994, enabling a gradual increase in the centre-of-mass energy up to a maximum of 209 GeV in 2000.

The “LEP2” phase allowed the experiments to perform a signature analysis dating back to the first conception of the machine: the measurement of the WW production cross section. Would it diverge or would electroweak diagrams interfere to suppress it? The precise measurement of the WW cross section as a function of the centre-of-mass energy was a very important test of the SM, since it showed that the sum and interference of three four-fermion diagrams were indeed at work in WW production: the t-channel ν exchange, and the s-channel γ and Z exchange (figure 2c). LEP data proved that the γWW and ZWW triple gauge vertices are indeed present and interfere destructively with the t-channel diagram, suppressing the cross section and stopping it from diverging.

The second key LEP2 electroweak measurement was of the mass and total decay width of the W boson, which were determined by directly reconstructing the decay products of the two W bosons in the fully hadronic (W+W– → qqqq) and semi-leptonic (W+W– → qqℓν) decay channels. The combined LEP W-mass measurement from direct reconstruction data alone is 80.375 ± 0.025(stat) ± 0.022(syst) GeV, with the largest contribution to the systematic uncertainty originating from fragmentation and hadronisation modelling. The relation between the Z-pole observables, mt and mW, provides a stringent test of the SM and constrains the Higgs mass.
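
Treating the quoted statistical and systematic components as independent, the combined uncertainty follows from adding them in quadrature; a quick check of the headline number:

```python
import math

m_w, stat, syst = 80.375, 0.025, 0.022        # GeV, as quoted above
total = math.sqrt(stat**2 + syst**2)          # quadrature sum of independent errors
print(f"m_W = {m_w} +/- {total:.3f} GeV")     # ~ +/- 0.033 GeV overall
```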

To the Higgs and beyond

Before LEP started, the mass of the Higgs boson was basically unknown. In the simplest version of the SM, involving a single Higgs boson, the only robust constraints were its non-observation in nuclear decays (forbidding masses below 14 MeV) and the need to maintain a sensible, calculable theory (ruling out masses above 1 TeV). In 1990, soon after the first LEP data-taking period, the full Higgs-boson mass range below 24 GeV was excluded at 95% confidence level by the LEP experiments. Above this mass the dominant decay of the Higgs boson, occurring about 80% of the time, was predicted to be into b quark–antiquark pairs, followed by pairs of tau leptons, charm quarks or gluons, while the WW* decay mode starts to contribute at the maximum reachable masses of approximately 115 GeV. The main production process is Higgs-strahlung, whereby a Higgs boson is emitted by a virtual Z boson.
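
The “maximum reachable mass” mentioned above is fixed by Higgs-strahlung kinematics: in e+e– → ZH the Higgs boson can be no heavier than the collision energy minus the Z mass. A back-of-the-envelope check with the top LEP2 energies quoted elsewhere in this article:

```python
M_Z = 91.19                       # GeV
for sqrt_s in (206.0, 209.0):     # highest LEP2 centre-of-mass energies
    print(f"sqrt(s) = {sqrt_s:.0f} GeV  ->  maximum m_H ~ {sqrt_s - M_Z:.0f} GeV")
# Roughly 115-118 GeV, which is why a Higgs boson near 114 GeV sat right at the
# edge of LEP's reach.
```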

During the full lifetime of LEP, the four experiments kept searching for neutral and charged Higgs bosons in several models and exclusion limits continued to improve. In its last year of data taking, when the centre-of-mass energy reached 209 GeV, ALEPH reported an excess of four-jet events. It was consistent with a 114 GeV Higgs boson and had a significance that varied as the data were accumulated, peaking at an instantaneous significance of around 3.9 standard deviations. The other three experiments carefully scrutinised their data to confirm or disprove ALEPH’s suggestion, but none observed any long-lasting excess in that mass region. Following many discussions, the LEP run was extended until 8 November 2000. However, it was decided not to keep running the following year so as not to impact the LHC schedule. The final LEP-wide combination excluded, at 95% confidence level, a SM Higgs boson with mass below 114.4 GeV.

The four LEP experiments carried out many other searches for novel physics that set limits on the existence of new particles. Notable cases are the searches for additional Higgs bosons in two-Higgs-doublet models and their minimal supersymmetric incarnation. Neutral scalar and pseudoscalar Higgs bosons lighter than the Z boson, and charged Higgs bosons up to the kinematic limit of their pair production, were also excluded. Supersymmetric particles suffered a similar fate, under the theoretically attractive assumption of R-parity conservation. The existence of sleptons and charginos was excluded over most of the parameter space for masses below 70–100 GeV, near the kinematic limit for their pair production. Neutralinos with masses below approximately half the Z-boson mass were also excluded in a large part of the parameter space. The LEP exclusions for several of these electroweak-produced supersymmetric particles are still the most stringent and most model-independent limits ever obtained.

It is easy to forget how little we knew before LEP and what a giant step LEP represented. It was often said that LEP discovered electroweak radiative corrections at the level of 5σ, opening up a precision era in particle physics that continues to set the standard today and to offer guidance on the elusive new physics beyond the SM.

Muon g−2 collaboration prepares for first results

The muon g−2 collaboration

The annual “g-2 physics week”, which took place on Elba Island in Italy from 27 May to 1 June, saw almost 100 physicists discuss the latest progress at the muon g−2 experiment at Fermilab. The muon magnetic anomaly, aμ, is one of the few cases where there is a hint of a discrepancy between a Standard Model (SM) prediction and an experimental measurement. Almost 20 years ago, in a sequence of increasingly precise measurements, the E821 collaboration at Brookhaven National Laboratory (BNL) determined aμ = (g–2)/2 with a relative precision of 0.54 parts per million (ppm), providing a rigorous test of the SM. Impressive as it was, the result was limited by statistical uncertainties.

A new muon g−2 experiment currently taking data at Fermilab, called E989, aims to improve the experimental error on aμ by a factor of four. The collaboration took its first dataset in 2018, integrating 40% more statistics than the BNL experiment, and is now coming to the end of a second run that will yield a combined dataset more than three times larger.

A thorough review of the many analysis efforts during the first data run has been conducted. The muon magnetic anomaly is determined from the ratio of the muon and proton precession frequencies in the same magnetic field. The ultimate aim of experiment E989 is to measure both of these frequencies with a precision of 0.1 ppm by employing techniques and expertise from particle-physics experimentation (straw tracking detectors and calorimetry), nuclear physics (nuclear magnetic resonance) and accelerator science. These frequencies are independently measured by several analysis groups with different methodologies and different susceptibilities to systematic effects.
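
In practice the anomaly is commonly extracted from the ratio R of the muon anomalous precession frequency to the shielded-proton precession frequency, together with the externally measured muon-to-proton magnetic-moment ratio λ, as aμ = R/(λ − R). A minimal sketch with an illustrative R value (the real analyses remain blinded, so this number is made up):

```python
LAMBDA = 3.183345142   # muon-to-proton magnetic-moment ratio (external input)

def a_mu(r):
    """Muon anomaly from the measured frequency ratio R = omega_a / omega_p."""
    return r / (LAMBDA - r)

r_example = 0.0037072  # illustrative value only, not a measurement
print(f"a_mu ~ {a_mu(r_example):.7f}")
```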

A recent relative unblinding of a subset of the data with a statistical precision of 1.3 ppm showed excellent agreement across the analyses in both frequencies. The absolute values of the two frequencies are still subject to a ~25 ppm hardware blinding offset, so no physics conclusion can yet be drawn. But the exercise has shown that the collaboration is well on the way to publishing its first result with a precision better than E821 towards the end of the year.

Bottomonium elliptic-flow no-show

Diagram of elliptic flow

High-energy heavy-ion collisions at the LHC give rise to a deconfined system of quarks and gluons called the quark–gluon plasma (QGP). One of its most striking features is the emergence of collective motion due to pressure gradients that develop at the centre. Direct experimental evidence for this collective motion is the observation of anisotropic flow, which translates the asymmetry of the initial geometry into a final-state momentum anisotropy. Its magnitude is quantified by harmonic coefficients vn in a Fourier decomposition of the azimuthal distribution of particles. As a result of the almond-shaped geometry of the interaction volume, the largest contribution to the asymmetry is the second coefficient, or “elliptic flow”, v2.
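
For reference, the decomposition behind the vn coefficients can be written as (standard convention, with Ψn the nth-harmonic symmetry-plane angle):

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big[n(\varphi - \Psi_n)\big],
\qquad v_2 = \langle \cos 2(\varphi - \Psi_2) \rangle .
```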

A positive v2 has been measured for a large variety of particles, from pions, protons and strange hadrons up to the heavier J/ψ meson. The latter is a curious case as quarkonia such as the J/ψ are bound states of a heavy quark (charm or bottom) and its antiquark (CERN Courier December 2017 p11). Quarkonia constitute interesting probes of the QGP because heavy-quark pairs are produced early and experience the full evolution of the collision. In heavy-ion collisions at the LHC, charmonia, such as the J/ψ, dissociate due to screening from free colour charges in the QGP, and regenerate by the recombination of thermalised charm quarks. The ϒ(1S) bottomonium state is more massive and more tightly bound than charmonium, so its dissociation is expected to be limited to the early stage of the collision, when the temperature of the surrounding QGP medium is highest. Its regeneration is not expected to be significant because of the small number of available bottom quarks.

The ALICE collaboration recently reported the first measurement of the elliptic flow of the ϒ(1S) meson in lead–lead (Pb–Pb) collisions using the full Pb–Pb data set of LHC Run 2 (figure 1). The measured values of the ϒ(1S) v2 are small and consistent with zero, making bottomonia the first hadrons that do not seem to flow in heavy-ion collisions at the LHC. Compared to the measured v2 of inclusive J/ψ in the same centrality and pT intervals, the v2 of ϒ(1S) is lower by 2.6 standard deviations. The results are also consistent with the small, positive values predicted by models that include no or small regeneration of bottomonia by the recombination of bottom quarks interacting in the QGP.

These observations, in combination with earlier measurements of the suppression of ϒ(1S) and J/ψ, support the scenario in which charmonia dissociate and reform in the QGP, while bottomonia are dominantly dissociated at early stages of the collisions. Future datasets, to be collected during LHC runs 3 and 4 after a major upgrade of the ALICE detector, will significantly improve the quality of the present measurements.

Grappling with dark energy

Adam Riess of Johns Hopkins University

Could you tell us a few words about the discovery that won you a share of the 2011 Nobel Prize in Physics?

Back in the 1990s, the assumption was that we live in a dense universe governed by baryonic and dark matter, but astronomers could only account for 30% of matter. We wanted to measure the expected deceleration of the universe at larger scales, in the hope that we would find evidence for some kind of extra matter that theorists predicted could be out there. So, from 1994 we started a campaign to measure the distances and redshifts of type-Ia supernova explosions. The shift in a supernova’s spectrum due to the expansion of space gives its redshift, and the relation between redshift and distance is used to determine the expansion rate of the universe. By comparing the expansion rate at two different epochs of the universe, we can determine how it changes over time. We made this comparison in 1998 and, to our surprise, we found that instead of decreasing, the expansion rate was speeding up. A stronger confirmation came after combining our measurements with those of the High-z Supernova Search Team. The result could only be interpreted as the expansion of the universe speeding up rather than decelerating.
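
The comparison described here rests on the distance–redshift relation: for a given expansion history, the luminosity distance to a supernova at redshift z is an integral over 1/H(z), and an accelerating universe pushes supernovae farther away (so they appear fainter) than a decelerating one. A minimal flat-universe sketch (the cosmological parameter values are illustrative assumptions, not those of the 1998 analyses):

```python
import math

C_KM_S = 299792.458   # speed of light in km/s

def luminosity_distance(z, h0=70.0, omega_m=0.3, n_steps=1000):
    """Luminosity distance in Mpc for a flat universe with matter plus a cosmological constant."""
    omega_lambda = 1.0 - omega_m
    dz = z / n_steps
    integral = 0.0
    for i in range(n_steps):                     # midpoint integration of dz / E(z)
        zi = (i + 0.5) * dz
        e_of_z = math.sqrt(omega_m * (1 + zi)**3 + omega_lambda)
        integral += dz / e_of_z
    return (1 + z) * C_KM_S / h0 * integral

# A z = 0.5 supernova is farther away (hence fainter) in an accelerating universe
# than in a matter-only, decelerating one:
print(f"accelerating (Omega_m = 0.3): {luminosity_distance(0.5, omega_m=0.3):.0f} Mpc")
print(f"decelerating (Omega_m = 1.0): {luminosity_distance(0.5, omega_m=1.0):.0f} Mpc")
```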

What was the reaction from your colleagues when you announced your findings?

That our result was wrong! There were understandably different reactions but the fact that two independent teams were measuring an accelerating expansion rate, plus the independent confirmation from measurements of the Cosmic Microwave Background (CMB), made it clear that the universe is accelerating. We reviewed all possible sources of errors including the presence of some yet unknown astronomical process, but nothing came out. Barring a series of unrelated mistakes, we were looking at a new feature of the universe.

There were other puzzles at that time in cosmology that the idea of an accelerating universe could also solve. The so-called “age crisis” (many stars looked older than the age of the universe) was one of them. This meant that either the stellar ages are too high or that there is something wrong with the age of the universe and its expansion. This discrepancy could be resolved by accounting for an accelerated expansion.

What is driving the accelerated expansion?

One idea is that the cosmological constant, initially introduced by Einstein so that general relativity could accommodate a static universe, is linked to the vacuum energy. Today we know that the vacuum energy can’t be the final answer because summing the contributions from the presumed quantum states in the universe produces an enormous number for the expansion rate that is about 120 orders of magnitude higher than observed. This rate is so high that it would have ripped apart galaxies, stars, planets, before any structure was formed.

The accelerating expansion can be due to what we broadly refer to as dark energy, but its source and its physics remain unknown. It is an ongoing area of research. Today we are making further supernovae observations to measure even more precisely the expansion rate, which will help us to understand the physics behind it.

By which other methods can we determine the source of the acceleration?

Today there is a vast range of approaches, using both space and ground experiments. A lot of work is ongoing on identifying more supernovae and measuring their distances and redshifts with higher precision. Other experiments are looking at baryon acoustic oscillations, which provide a standard ruler for measuring cosmological distances in the universe. There are proposals to use weak gravitational lensing, which is extremely sensitive to the parameters describing dark energy as well as the shape and history of the universe. Redshift space distortions due to the peculiar velocities of galaxies can also tell us something. We may be able to learn something from these different types of observations in a few years. The hope is to be able to measure the equation of state of dark energy with 1% precision, and its variation over time with about 10% precision. This will offer a better understanding of whether dark energy is the cosmological constant or perhaps some form of energy temporarily stored in a scalar field that could change over time.
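
The quoted precision targets refer to the dark-energy equation of state w = p/ρ and its possible evolution, often expressed through a two-parameter form such as (a common parametrisation, assumed here rather than named in the interview):

```latex
w(a) = w_0 + w_a\,(1 - a), \qquad a = \frac{1}{1+z},
```

where w0 = −1 and wa = 0 would correspond to a pure cosmological constant.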

Is this one of the topics that you are currently involved with?

Yes, among other things. I am also working on improving the precision of the measurements of the Hubble constant, H0, which characterises the present state and expansion rate of our universe. Refined measurements of H0 could point to potential discrepancies in the cosmological model.

What’s wrong with our current determination of the Hubble constant?

The problem is that even when we account for dark energy (factoring in any uncertainties we are aware of) we get a discrepancy of about 9% when comparing the expansion rate predicted from CMB data using the standard “ΛCDM” cosmological model with the expansion rate measured today. The uncertainty in this measurement has now gone below 2%, leading to a significance of more than 5σ, while future observations from the SH0ES programme would likely reduce it to 1.5%.

There is something more profound in the disagreement of these two measurements. One measures how fast the universe is expanding today, while the other is based on the physics of the early universe – taking into account a specific model – and measuring how fast it should have been expanding. If these values don’t agree, there is a very strong likelihood that we are missing something in our cosmological model that connects the two epochs in the history of our universe. A new feature in the dark sector of the universe appears in my view increasingly necessary to explain the present tension.

When did the seriousness of the H0 discrepancy become clear?

It is hard to pinpoint a date, but it was between the publication of the first results from Planck in 2013, which predicted the value of H0 based on precise CMB measurements, and the publication of our 2016 paper that confirmed the H0 measurement. Since then, the tension has been growing. Various people became convinced along the way as new data came in, while others are still not convinced. This diversity of opinions is a healthy sign for science: we should take into account alternative viewpoints and continuously reassess the evidence that we have without taking anything for granted.

How can the Hubble discrepancy be interpreted?

The standard cosmological model, which contains just six free parameters, allows us to extrapolate the evolution from the Big Bang to the present cosmos, a period of almost 14 billion years. The model is based on certain assumptions: that space in the early universe was flat; that there are three neutrinos; that dark matter is very nonreactive; that dark energy is similar to the cosmological constant; and that there is no more complex physics. So one, or perhaps a combination, of these could be wrong. Knowing the original content of the universe and the physics, we should be able to measure how the universe was expanding in the past and what its present expansion rate should be. The fact that there is a discrepancy means that we don’t have the right understanding.

We think that the phenomenon that we call inflation is similar to what we call dark energy, and it is possible that there was another expansion episode in the history of the universe just after the recombination period. Certain theories predict that a form of “early dark energy” becomes significant at that time, giving a boost to the expansion that matches our current observations. Another option is the presence of dark radiation: a term that could account for a new type of neutrino or for another relativistic particle present in the early history of the universe. The presence of dark radiation would change the estimate of the expansion rate before the recombination period and give us a way to address the current Hubble-constant problem. Future measurements could tell us if other predictions of this theory are correct or not.

Does particle physics have a complementary role to play?

Oh definitely. Both collider and astrophysics experiments could potentially reveal either the properties of dark matter or a new relativistic particle or something new that could change the cosmological calculations. There is an overlap concerning the contributions of these fields in understanding the early universe, a lot of cross-talk and blurring of the lines – and in my view, that’s healthy.

What has it been like to win a Nobel prize at the relatively early age of 42?

It has been a great honour. You can choose whether you want to do science or not, as long as this choice is available. So certainly, the Nobel is not a curse. Our team is continually trying to refine the supernova measurements, and this is a growing community. Hopefully, if you come back in a couple of years, we will have more answers to your questions.

Galaxies thrive on new physics

This supercomputer-generated image of a galaxy suggests that general relativity might not be the only way to explain how gravity works. Theorists at Durham University in the UK simulated the universe using hydrodynamical simulations based on “f(R) gravity” – in which a scalar field enhances gravitational forces in low-density regions (such as the outer parts of a galaxy) but is screened by the so-called chameleon mechanism in high-density environments such as our solar system (see C Arnold et al. Nature Astronomy; arXiv:1907.02977).
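
For context, f(R) gravity modifies the Einstein–Hilbert action by adding a function of the Ricci scalar R (standard definition, not spelled out in the summary above):

```latex
S = \frac{1}{16\pi G}\int \mathrm{d}^4x\,\sqrt{-g}\,\bigl[R + f(R)\bigr] + S_{\mathrm{matter}},
```

with the derivative fR = df/dR acting as the extra scalar degree of freedom shown on the left of the image: it mediates an enhanced gravitational force where it is large, and is driven towards zero, screening the modification, in high-density environments.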

The left-hand side of the image shows the scalar field of the theory: bright-yellow regions correspond to large scalar-field values, while dark-blue regions correspond to very small scalar-field values, i.e. regions where screening is active and the theory behaves like general relativity. The right-hand side of the image shows the gas density with stars overplotted. The study, based on a total of 12 simulations with different model parameters and resolutions and requiring a total runtime of about 2.5 million core-hours, shows that spiral galaxies like our Milky Way could still form even with different laws of gravity.

“Our research definitely does not mean that general relativity is wrong, but it does show that it does not have to be the only way to explain gravity’s role in the evolution of the universe,” says lead author Christian Arnold of Durham University’s Institute for Computational Cosmology.

Interdisciplinary physics at the AEDGE

Frequency niche

Following the discovery of gravitational waves by the LIGO and Virgo collaborations, there is great interest in observing other parts of the gravitational-wave spectrum and seeing what they can tell us about astrophysics, particle physics and cosmology. The European Space Agency (ESA) has approved the LISA space experiment that is designed to observe gravitational waves in a lower frequency band than LIGO and Virgo, while the KAGRA experiment in Japan, the INDIGO experiment in India and the proposed Einstein Telescope (ET) will reinforce LIGO and Virgo. However, there is a gap in observational capability in the intermediate-frequency band where there may be signals from the mergers of massive black holes weighing between 100 and 100,000 solar masses, and from a first-order phase transition or cosmic strings in the early universe.

This was the motivation for a workshop held at CERN on 22 and 23 July that brought experts from the cold-atom community together with particle physicists and representatives of the gravitational-wave community. Experiments using cold atoms as clocks and in interferometers offer interesting prospects for detecting some candidates for ultralight dark matter as well as gravitational waves in the mid-frequency gap. In particular, a possible space experiment called AEDGE could complement the observations by LIGO, Virgo, LISA and other approved experiments.

The workshop shared information about long-baseline terrestrial cold-atom experiments that are already funded and under construction, such as MAGIS in the US, MIGA in France and ZAIGA in China, as well as ideas for future terrestrial experiments such as MAGIA-advanced in Italy, AION in the UK and ELGAR in France. Delegates also heard about experiments using cold atoms in space and microgravity: the CACES (China) and CAL (NASA) space experiments and the MAIUS (Germany) sounding-rocket mission.

ESA has recently issued a call for white papers for its Voyage 2050 long-term science programme, and a suggestion for an atom interferometer using a pair of satellites is being put forward by the AEDGE team (in parallel with a related suggestion called STE-QUEST) to build upon the experience with prior experiments. AEDGE was the focus of the CERN workshop, and would have unique capabilities to probe the assembly of the supermassive black holes known to power active galactic nuclei, physics beyond the Standard Model in the early universe and ultralight dark matter. AEDGE would be a uniquely interdisciplinary space mission, harnessing cold-atom technologies to address key issues in fundamental physics, astrophysics and cosmology.

Higgs hunters still hungry in Paris

Participants at Higgs Hunting 2019

The 10th Higgs Hunting workshop took place in Orsay and Paris from 29–31 July, attracting 110 physicists for lively discussions about recent results in the Higgs sector. The ATLAS and CMS collaborations presented Run 2 analyses with up to 140 fb–1 of data collected at a centre-of-mass energy of 13 TeV. The statistical uncertainty on some Higgs properties, such as the production cross-section, has now been reduced by a factor of three compared to Run 1. This puts some Higgs studies on the verge of being dominated by systematic uncertainties. By the end of the LHC’s programme, measurements of the Higgs couplings to the photon, W, Z, gluon, tau lepton and top and bottom quarks are all expected to be dominated by theoretical rather than statistical or experimental uncertainties.

Several searches for additional Higgs bosons were presented. The general recipe here is to postulate a new field in addition to the Standard Model (SM) Higgs doublet, which in the minimal case yields a lone physical Higgs universally associated with the particle discovered at the LHC with a mass of 125 GeV in 2012. Adding a hypothetical additional Higgs doublet, however, as in the two Higgs doublet model, would yield five physical states: CP-even neutral Higgs bosons h and H, the CP-odd pseudoscalar A, and two charged Higgs bosons H±; the model would also bequeath three additional free parameters. Other models discussed at Higgs Hunting 2019 include the minimal and next-to-minimal supersymmetric SMs and extra Higgs states with doubly charged Higgs bosons. Anna Kaczmarska from ATLAS and Suzanne Gascon-Shotkin from CMS described direct searches for such additional Higgs bosons decaying to SM particles or Higgs bosons. Loan Truong from ATLAS and Yuri Gershtein from CMS described studies of rare – and potentially beyond-SM – decays of the 125 GeV Higgs boson. No significant excesses were reported, but hope remains for Run 3, which will begin in 2021.

Nobel laureate Gerard ’t Hooft gave a historical talk on the role of the Higgs in the renormalisation of electroweak theory, recalling the debt his Utrecht group, where the work was done almost 50 years ago, owed to pioneers like Faddeev and Popov. Seven years after the particle’s discovery, we now know it to be spin-0 with mainly CP-even interactions with bosons, remarked Fabio Cerutti of Berkeley in the experimental summary. With precision on the Higgs mass now better than two parts per mille, all of the SM’s free parameters are known with high precision, he continued, and all but three of them are linked to Higgs-boson interactions.

Hunting season may now be over, Cerutti concluded, but the time to study Higgs anatomy and exploit the 95% of LHC data still to come is close at hand. Giulia Zanderighi’s theory summary had a similar message: Higgs studies are still in their infancy and the discovery of what seems to be a very SM-like Higgs at 125 GeV allows us to explore a new sector with a broad experimental programme that will extend over decades. She concluded with a quote from Abraham Lincoln: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”

The next Higgs Hunting workshop will be held in Orsay and/or Paris from 7–9 September 2020.
