Evidence for the decay of the Higgs boson to a photon and a low-mass electron or muon pair, mediated predominantly by a virtual photon (γ*), H → γ*γ → ℓℓγ (where ℓ = e or μ), has been obtained at the LHC. At an LHC seminar today, the ATLAS collaboration reported a 3.2σ excess over background of H → ℓℓγ decay candidates with dilepton mass mℓℓ < 30 GeV.
The measurement of rare decays of the Higgs boson is a crucial component of the Higgs-boson physics programme at the LHC, since they probe potential new interactions with the Higgs boson introduced by possible extensions of the Standard Model. The H → ℓℓγ decay is particularly interesting in this respect as it is a loop process and the three-body final state allows the CP structure of the Higgs boson to be probed. However, the small expected signal-to-background ratio and the typically low dilepton invariant mass make the search for H → ℓℓγ highly challenging.
The analysis performed by ATLAS searched for H → e+e–γ and H → μ+μ–γ decays. The electron channel in particular required special treatment, for which a dedicated electron trigger was developed. The predicted mℓℓ spectrum rises steeply towards lower values, with a kinematic cutoff at twice the final-state lepton mass. At such low electron–positron invariant masses, and given the large transverse momentum of their system, the electromagnetic showers induced by the electron and the positron in the ATLAS calorimeter can merge, requiring a specially developed reconstruction. Furthermore, a dedicated identification algorithm was developed for these topologies, and its efficiency was measured in data using photons from Z → ℓℓγ events that convert at low radius in the detector material into an electron–positron pair.
The signal extraction is performed by searching the ℓℓγ invariant-mass (mℓℓγ) range between 110 and 160 GeV for a narrow signal peak over a smooth background at the mass of the Higgs boson. The sensitivity to the H → ℓℓγ signal was increased by separating events into mutually exclusive categories based on lepton type and event topology. ATLAS reports evidence for a H → ℓℓγ signal emerging over the background with a significance of 3.2σ (see figure). The Higgs-boson production cross section times the H → ℓℓγ branching fraction, measured for mℓℓ < 30 GeV, amounts to 8.7 +2.8/−2.7 fb. This corresponds to a signal strength – the ratio of the measured cross section times branching fraction to the Standard Model prediction – of 1.5 ± 0.5. With this, ATLAS has also extended the invariant-mass range of the lepton pair for the related Higgs-boson decay into a photon and a Z boson to lower masses, opening the door to future studies of three-body Higgs-boson decays and investigations of its underlying CP structure.
H Frederick Dylla is a “Sputnik kid” whose curiosity and ingenuity led him through a successful 50-year career in physics, from plasma physics to accelerators to leading the American Institute of Physics. His debut book, Scientific Journeys: A Physicist Explores the Culture, History and Personalities of Science, is a collection of essays that offers a multidisciplinary historical perspective on the actors and events that shaped the world of science and scholarly publishing. Through geopolitical and economic context and a rich record of key events, he highlights innovations that have found their way into social and business applications. Those cited as having contributed to global technological progress range from the web and smartphones to medical imaging and renewable energy.
The book is divided into five chapters: “signposts” (in the form of key people and events in scientific history); mentors and milestones in his life; science policy; communicating science; and finally a brief insight into the relationship between science and art. He begins with the story of medieval German abbess, mystic, composer and medicinal botanist Hildegard of Bingen: “a bright signpost of scholarship”. Dylla goes on to explore the idea that a single individual at the right time and place can change the course of history. Bounding through the centuries, he highlights the importance of science policy and science communication, the funding of big and small science alike, and the contemporary challenges linked to research, teaching science and scholarly publishing. Examples among these, says Dylla, are the protection of scientific integrity, new practices of distance learning and the weaknesses of the open-access model. The book ends bang up to date with a thought on the coronavirus pandemic and science’s key role in overcoming it.
Intended for teachers, science historians and students from high school to graduate school, Dylla’s book puts a face on scientific inventions. The weightiest chapter, mentors and milestones, focuses on personalities who have played an important role in his scientific voyage. Among the many named, however, Mildred Dresselhaus – the “queen of carbon” – is the only female scientist featured in the book besides Hildegard. By beginning the book with a brilliant, if at best scientifically adjacent, abbess who preceded Galileo by four centuries, Dylla tacitly acknowledges the importance of representing diversity; yet the book unintentionally makes it discomfortingly clear how scarce role models for women can be in the white-male-dominated world of science. The lack of a discussion of diversity is a missed opportunity in an otherwise excellent book.
The International Linear Collider (ILC) is a proposed electron–positron linear collider with a Higgs factory operating at a centre-of-mass energy of 250 GeV (ILC250) as a first stage. Its electron and positron beams can be longitudinally polarised, and the accelerator may be extended to operate at 500 GeV up to 1 TeV, and possibly beyond. In addition, the unique time structure of the ILC beams (which would collide in short bursts of 1312 bunches with 554 ns spacing, repeated at a frequency of 5 Hz) places much less stringent requirements on readout speed and radiation hardness than conditions at the LHC detectors. This allows the use of low-mass tracking and high-granularity sensors in the ILC detectors, giving unprecedented resolution in jet-energy measurements. It also results in an expected data rate of just a few GB/s, allowing collisions to be recorded without a trigger.
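The trigger-free readout claim can be checked with simple arithmetic (a back-of-the-envelope sketch; the per-event size used below is an illustrative assumption, not an ILC design figure):

```python
# ILC time structure: trains of 1312 bunches, repeated at 5 Hz.
bunches_per_train = 1312
train_rate_hz = 5
crossing_rate_hz = bunches_per_train * train_rate_hz  # 6560 crossings/s

# The LHC, by contrast, has bunch crossings every 25 ns:
lhc_crossing_rate_hz = 40e6
print(lhc_crossing_rate_hz / crossing_rate_hz)  # ~6000x more crossings at the LHC

# Even at an assumed (hypothetical) ~1 MB per recorded crossing, the raw
# rate stays at a few GB/s -- small enough to record every crossing:
data_rate_gb_s = crossing_rate_hz * 1e6 / 1e9
print(data_rate_gb_s)  # ~6.6 GB/s
```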
ILC250 primarily targets precision measurements of the Higgs boson (see Targeting a Higgs factory). However, fully exploiting these measurements demands substantial improvement in our knowledge about many other Standard Model (SM) observables. Here, ILC250 opens three avenues: the study of gauge-boson pair-production and fermion pair-production at 250 GeV; fermion-pair production at effective centre-of-mass energies lowered to about 91.2 GeV by prior emission of photons (radiative returns to the Z pole); and operation of the collider at both the Z pole and the WW threshold. In all of these cases, the polarisation of the electron and positron beams (at polarisations up to 80% and 30%–60%, respectively) boosts the statistical power of many measurements by factors between 2.5 (for Higgs measurements) and 10 (at the Z pole), thanks to the ability to exploit observables such as left–right asymmetries of production cross-sections. These additional polarisation-dependent observables are also essential to disentangle the unavoidable interference between Z and γ exchange in fermion pair-production at energies above the Z pole, enabling access to the chiral couplings of fermions to the Z and the photon. Broadly speaking, the polarised beams and the high luminosity of ILC250 will lead to at least one order of magnitude improvement over the current knowledge for many SM precision observables.
Other important inputs when interpreting Higgs measurements are charged triple-gauge couplings (TGCs), which are also probes of physics beyond the SM. ILC250 will measure these 100 times more precisely than LEP, with a further factor-of-two improvement possible at the higher-energy stage ILC500. These numbers refer to the case of extracting simultaneously all three TGCs relevant in SM effective field theory, which is currently the most favoured framework for the interpretation of precision Higgs-boson data, whereas TGC results from the LHC assume that only one of these couplings deviates from its SM value at a time. With both beams polarised and with full control over the orientation of the polarisation vectors, all 28 TGC parameters that exist in the most general case can potentially be determined simultaneously at the ILC.
Z-pole physics
Classic electroweak precision observables refer to the Z pole. ILC250 will produce about 90 million visible Z events via radiative return, which is about five times more than at LEP and 100 times more than at SLC. Thanks to the polarised beams, these data will allow a direct measurement of the asymmetry Ae between the couplings of left- and right-handed electrons to the Z boson with 10 times better accuracy than today, and enable the asymmetries Af of the final-state fermions to be directly extracted. This is quite different from the case of unpolarised beams, where only the product Ae Af can be accessed. Compared to LEP/SLC results, the Z-pole asymmetries can be improved by typically a factor of 20 using only the radiative returns to the Z at ILC250. This would settle beyond doubt the long-standing question of whether the 3σ tension between the weak mixing-angle extractions from SLC and LEP originates from physics beyond the SM. With a few minor modifications, the ILC can also operate directly at the Z pole, improving the fermion asymmetries by another factor of 6 to 25 with respect to the radiative-return results.
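The advantage of polarisation can be made concrete with the textbook asymmetry formulas (the cross-section numbers below are invented purely for illustration):

```python
# With longitudinally polarised beams, the left-right asymmetry gives A_e directly:
#   A_LR = (sigma_L - sigma_R) / (sigma_L + sigma_R) / P_eff
def a_e_from_alr(sigma_l, sigma_r, p_eff):
    return (sigma_l - sigma_r) / (sigma_l + sigma_r) / p_eff

# With unpolarised beams, only the product A_e * A_f is accessible, through the
# forward-backward asymmetry A_FB = (3/4) * A_e * A_f:
def a_f_from_afb(a_fb, a_e):
    return 4.0 * a_fb / (3.0 * a_e)

# Invented cross sections for left/right-polarised electrons, with P_eff = 0.8:
a_e = a_e_from_alr(sigma_l=1.12, sigma_r=0.88, p_eff=0.8)
print(a_e)                      # ~0.15: A_e measured on its own
print(a_f_from_afb(0.10, a_e))  # the polarised A_e then unlocks A_f itself
```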
At energies above the Z pole, di-fermion production is sensitive to hypothetical, heavy siblings of the Z boson (so-called Z′ bosons) and to four-fermion operators, i.e. contact-interaction-like parametrisations of yet unknown interactions. ILC250 could indirectly discover Z′ particles with masses up to 6 TeV, while ILC1000 could extend the reach to 18 TeV. For contact interactions, depending on the details of the assumed model, compositeness scales of up to 160 TeV can be probed at ILC250, and up to nearly 400 TeV at ILC1000.
Direct searches for new physics
At first glance, it might seem that direct searches at ILC250 offer only a marginal improvement over LEP, which attained a collision energy of 209 GeV. Nevertheless, the higher integrated luminosity of the ILC (about 2000 times higher than LEP’s above the WW threshold), its polarised beams, much-improved detectors and triggerless readout will provide new opportunities to search for physics beyond the SM. For example, ILC250 will improve on LEP searches for a new scalar particle produced in association with the Z boson by over an order of magnitude. Another example of a rate-limited search at LEP is that for the supersymmetric partner of the tau lepton, the tau slepton. In the most general case, tau-slepton masses above 26.3 GeV are not excluded, and here no improvement from the HL-LHC is expected. The ILC, with its highly granular detectors covering angles down to 6 mrad with respect to the collision axis, can cover masses up to nearly the kinematic limit of half the collision energy, even in the experimentally most difficult parts of the parameter space.
The absence of discoveries of new high-mass states at the LHC has led to increased interest in fermionic “Z-portal” models, with masses of dark-matter particles below the electroweak scale. A dark photon, for example, could be detected via its mixing with SM photons. In searching for such phenomena, ILC250 could cover the region between the reach of the B-factories, which is limited to below 10 GeV, and the LHC experiments, which start searching in a range above 150 GeV.
The ILC’s Higgs-factory stage will require only about 40% of the tunnel length available at the Kitakami Mountains in northern Japan, which is capable of housing a linear collider at least 50 km long. This is sufficient to reach a centre-of-mass energy of 1 TeV with current technology by extending the linacs and augmenting power and cryogenics. The upgrade to ILC500 is expected to cost approximately 60% of the ILC250 cost, while going to 1 TeV would require an estimated 100% of the ILC250 cost, assuming a modest increase of the accelerating gradient over what has been achieved (CERN Courier November/December 2020 p35). These upgrades offer the opportunity to optimise the exact energies of the post-Higgs-factory stages according to physics needs and technological advances.
ILC at higher energies
ILC500 targets the energy range 500–600 GeV, which would improve the precision on Higgs-boson couplings typically by a factor of two compared to ILC250, and on charged triple-gauge couplings by a factor of three to four. It would also offer optimal sensitivity in three important measurements. The first is the electroweak couplings of the top quark, for which a variety of new-physics models predict deviations, for instance in its coupling to the Z (see “Model sensitivity” figure). The second is the Higgs self-coupling λ from double Higgs-strahlung (e+e− → ZHH): while ILC500 could reach a precision of 27% on λ, at 1 TeV a measurement based on vector-boson fusion (VBF) reaches 10%. These numbers assume that λ takes the value predicted by the SM. However, the situation can be quite different if λ is larger, as is typically required by models of baryogenesis, and only the combination of double Higgs-strahlung and VBF-based measurements can guarantee a precision of at least 10–20% for any value of λ (see “Higgs self-coupling” figure). A third physics target is the top-quark Yukawa coupling, for which a precision of 6.3% is projected at ILC500, 3.2% at 550 GeV and 1.6% at 1 TeV.
While ILC250 has interesting discovery potential in various rate-limited searches, ILC500 extends the kinematic reach significantly beyond LEP. For instance, in models of supersymmetry that adhere to naturalness, the supersymmetric partners of the Higgs boson (the higgsinos) must have masses that are not too far from the Z or Higgs bosons, typically around 100 to 300 GeV. While the lower range of these particles is already accessible at ILC250, the higher energy stages of the ILC will be able to cover the remainder of this search space. The ILC is also able to reconstruct decay chains when the mass differences among higgsinos are small, which is a challenging signature for the HL-LHC.
The ILC is the only future collider that is currently being discussed at the government level, by Japan, the US and various countries in Europe. It is also the most technologically established proposal, with its cutting-edge radio-frequency cavities already in operation at the European XFEL. The 2020 update of the European strategy for particle physics also noted that, should an ILC in Japan go ahead, the European particle-physics community would wish to collaborate. Recently, an ILC international development team was established to prepare for the creation of the ILC pre-laboratory, which will make all necessary technical preparations before construction can begin. If intergovernmental negotiations are successful, the ILC could undergo commissioning as early as the mid-2030s.
The ability to collide high-energy beams of hadrons under controlled conditions transformed the field of particle physics. Until the late 1960s, the high-energy frontier was dominated by the great proton synchrotrons. The Cosmotron at Brookhaven National Laboratory and the Bevatron at Lawrence Berkeley National Laboratory were soon followed by CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, and later by the Proton Synchrotron at Serpukhov near Moscow. In these machines protons were directed to internal or external targets in which secondary particles were produced.
The kinematical inefficiency of this process, whereby the centre-of-mass energy increases only as the square root of the beam energy, was recognised from the outset. In 1943, Norwegian engineer Rolf Widerøe proposed the idea of colliding beams, keeping the centre of mass at rest in order to exploit the full energy for the production of new particles. One of the main problems was to achieve colliding-beam intensities high enough for a useful event rate. In the 1950s the prolific Midwestern Universities Research Association (MURA) group, based at the University of Wisconsin and led by Donald Kerst, worked on the problem of “stacking” particles, whereby successive pulses from an injector synchrotron are superposed to increase the beam intensity. They concentrated mainly on protons, to which Liouville’s theorem (which states that for a continuous fluid under the action of conservative forces the density of phase space cannot be increased) was thought to apply. Only much later were ways found to beat Liouville and increase the beam density. At the 1956 International Accelerator Conference at CERN, Kerst made the first proposal to use stacking to produce colliding beams (not yet storage rings) of sufficient intensity.
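Widerøe’s point can be quantified: for a beam of energy E striking a proton at rest, s = 2·E·mp + 2·mp², so √s grows only like √E, whereas two equal colliding beams give √s = 2E. A quick illustration (ISR-scale numbers, used only for orientation):

```python
import math

m_p = 0.938  # proton mass in GeV

def sqrt_s_fixed_target(e_beam):
    # s = 2*E*m + 2*m^2 for a beam of energy E on a proton at rest
    return math.sqrt(2 * e_beam * m_p + 2 * m_p**2)

def sqrt_s_collider(e_beam):
    # two equal-energy beams colliding head-on
    return 2 * e_beam

# The ISR collided 31.4 GeV proton beams, giving sqrt(s) ~ 63 GeV:
print(sqrt_s_collider(31.4))      # 62.8 GeV
print(sqrt_s_fixed_target(31.4))  # only ~7.8 GeV on a fixed target

# A fixed-target machine would need a ~2 TeV beam for the same sqrt(s):
e_equiv = (sqrt_s_collider(31.4)**2 - 2 * m_p**2) / (2 * m_p)
print(round(e_equiv))             # roughly 2100 GeV
```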
At that same conference, Gerry O’Neill from Princeton presented a paper proposing that colliding electron beams could be achieved in storage rings by making use of the natural damping of particle amplitudes by synchrotron-radiation emission. A design for the 500 MeV Princeton–Stanford colliding beam experiment was published in 1958 and construction started that same year. At the same time, the Budker Institute of Nuclear Physics in Novosibirsk started work on VEP-1, a pair of rings designed to collide electrons at 140 MeV. Then, in March 1960, Bruno Touschek gave a seminar at Laboratori Nazionali di Frascati in Italy where he first proposed a single-ring, 0.6 m-circumference 250 MeV electron–positron collider. “AdA” produced the first stored electron and positron beams less than one year later – a far cry from the time it takes today’s machines to go from conception to operation! From these trailblazers evolved the production machines, beginning with ADONE at Frascati and SPEAR at SLAC. However, it was always clear that the gift of synchrotron-radiation damping would become a hindrance to achieving very high-energy collisions in a circular electron–positron collider, because the power radiated increases as the fourth power of the beam energy and the inverse fourth power of the mass, and so is negligible for protons compared with electrons.
A step into the unknown
Meanwhile, in the early 1960s, discussion raged at CERN about the next best step for particle physics. Opinion was sharply divided between two camps, one pushing a very high-energy proton synchrotron for fixed-target physics and the other using the technique proposed at MURA to build an innovative colliding beam proton machine with about the same centre-of-mass energy as a conventional proton synchrotron of much larger dimensions. In order to resolve the conflict, in February 1964, 50 physicists from among Europe’s best met at CERN. From that meeting emerged a new committee, the European Committee for Future Accelerators, under the chairmanship of one of CERN’s founding fathers, Edoardo Amaldi. After about two years of deliberation, consensus was formed. The storage ring gained most support, although a high-energy proton synchrotron, the Super Proton Synchrotron (SPS), was built some years later and would go on to play an essential role in the development of hadron storage rings. On 15 December 1965, with the strong support of Amaldi, the CERN Council unanimously approved the construction of the Intersecting Storage Rings (ISR), launching the era of hadron colliders.
First collisions
Construction of the ISR began in 1966 and first collisions were observed on 27 January 1971. The machine, which needed to store beams for many hours without the help of synchrotron-radiation damping to combat inevitable magnetic-field errors and instabilities, pushed the boundaries of accelerator science on all fronts. Several respected scientists doubted that it would ever work. In fact, the ISR worked beautifully, exceeding its design luminosity by an order of magnitude and providing an essential step in the development of the next generation of hadron colliders. A key element was the performance of its ultra-high-vacuum system, which was a source of continuous improvement throughout the 13-year lifetime of the machine.
For the experimentalists, the ISR’s collisions (which reached an energy of 63 GeV) opened an exciting adventure at the energy frontier. But they were also learning what kind of detectors to build to fully exploit the potential of the machine – a task made harder by the lack of clear physics benchmarks known at the time in the ISR energy regime. The concept of general-purpose instruments built by large collaborations, as we know them today, was not in the culture of the time. Instead, many small collaborations built experiments with relatively short lifecycles, which constituted a fruitful learning ground for what was to come at the next generation of hadron colliders.
There was initially a broad belief that the physics action at a hadron collider would be in the forward directions. This led to the Split Field Magnet facility as one of the first detectors at the ISR, providing a high magnetic field in the forward directions but a negligible one at large angles with respect to the colliding beams (the transverse direction, so important nowadays). It was with subsequent detectors featuring transverse spectrometer arms over limited solid angles that physicists observed a large excess of high-transverse-momentum particles above low-energy extrapolations. With these first observations of point-like parton scattering, the ISR made a fundamental contribution to strong-interaction physics. Solid angles were initially too limited, and single-particle triggers too biased, to fully appreciate the hadronic jet structure. That feat required third-generation detectors, notably the Axial Field Spectrometer (AFS) at the end of the ISR era, offering full azimuthal central calorimeter coverage. The experiment provided evidence for the back-to-back two-jet structure of hard parton scattering.
For the detector builders, the original AFS concept was interesting as it provided an unobstructed, phi-symmetric magnetic field in the centre of the detector, albeit at the price of massive Helmholtz-coil pole tips obscuring the forward directions. Indeed, the ISR enabled the development of many original experimental ideas. A very important one was the measurement of the total cross section using very forward detectors in close proximity to the beam. These “Roman Pots”, named for their inventors, made their appearance in all later hadron colliders, confirming that the total pp cross section rises with energy.
It is easy to say after the fact, still with regret, that had more complete and selective second- and third-generation experiments (with electron-trigger capability) been available earlier at the ISR, CERN would not have been left a spectator during the famous November revolution of 1974, with the J/ψ discoveries at Brookhaven and SLAC. These, and the ϒ resonances discovered at Fermilab three years later, were clearly observed in the later-generation ISR experiments.
SPS opens new era
However, events were unfolding at CERN that would pave the way to the completion of the Standard Model. At the ISR in 1972, the phenomenon of Schottky noise (density fluctuations due to the granular nature of the beam in a storage ring) was first observed. It was this very same noise that Simon van der Meer had speculated, in a paper a few years earlier, could be used for what he called “stochastic cooling” of a proton beam, beating Liouville’s theorem through the fact that a beam of particles is not a continuous fluid. Although it is unrealistic to detect the motion of individual particles and damp each one to the nominal orbit, van der Meer showed that by continuously correcting the mean transverse motion of a sample of particles, and as long as the statistical nature of the Schottky signal was continuously regenerated, it would in principle be possible to reduce the beam size and increase its density. With the bandwidth of the electronics available at the time, van der Meer concluded that the cooling time would be too long to be of practical importance. But the challenge was taken up by Wolfgang Schnell, who built a state-of-the-art feedback system that demonstrated stochastic cooling of a proton beam for the first time. This opened the door to the idea of stacking and cooling antiprotons, which later led to the SPS being converted into a proton–antiproton collider.
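The bandwidth limitation van der Meer identified shows up in the standard order-of-magnitude estimate for the stochastic-cooling time, τ ≈ N/(2W) for N stored particles and feedback bandwidth W (the numbers below are purely illustrative, not the actual ISR or Antiproton Accumulator parameters):

```python
# Textbook order-of-magnitude estimate for the stochastic-cooling time:
#   tau ~ N / (2 * W), with N stored particles and feedback bandwidth W.
def cooling_time_s(n_particles, bandwidth_hz):
    return n_particles / (2.0 * bandwidth_hz)

# With an assumed ~1e12 stored protons and ~100 MHz electronics, cooling
# takes hours -- van der Meer's "too long to be of practical importance":
print(cooling_time_s(1e12, 100e6) / 3600)  # ~1.4 hours

# A much smaller (hypothetical) antiproton stack and wider-band electronics
# bring the cooling time down dramatically:
print(cooling_time_s(1e7, 500e6))          # 0.01 s
```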
Another important step towards the next generation of hadron colliders occurred in 1973 when the collaboration working on the Gargamelle heavy-liquid bubble chamber published two papers revealing the first evidence for weak neutral currents. These were important observations in support of the unified theory of electromagnetic and weak interactions, for which Sheldon Glashow, Abdus Salam and Steven Weinberg were to receive the Nobel Prize in Physics in 1979. The electroweak theory predicted the existence and approximate masses of two vector bosons, the W and the Z, which were too high to be produced in any existing machine. However, Carlo Rubbia and collaborators proposed that, if the SPS could be converted into a collider with protons and antiprotons circulating in opposite directions, there would be enough energy to create them.
To achieve this, the SPS would need to be converted into a storage ring like the ISR, but this time the beam would have to be kept “bunched”, with the radio-frequency (RF) system working continuously to achieve a high enough luminosity (unlike at the ISR, where the beams were allowed to de-bunch all around the ring). The challenges here were twofold. Noise in the RF system causes particles to diffuse rapidly out of the bunch; this was solved by a dedicated feedback system. It was also predicted that the beam–beam interaction would limit the performance of a bunched-beam machine with no synchrotron-radiation damping, owing to the strongly nonlinear interaction between a particle in one beam and the global electromagnetic field of the other beam.
A much bigger challenge was to build an accumulator ring in which antiprotons could be stored and cooled by stochastic cooling until a sufficient intensity was available to transfer into the SPS, accelerate to around 300 GeV and collide with protons. This was done in two stages. First, a proof of principle was needed to show that the ideas developed at the ISR transferred to a dedicated accumulator ring specially designed for stochastic cooling. This ring, called the Initial Cooling Experiment (ICE), operated at CERN in 1977–1978. In ICE, transverse cooling was applied to reduce the beam size, and a new technique for reducing the momentum spread of the beam was developed. The experiment proved a big success, and the theory of stochastic cooling was refined to the point where a real accumulator ring (the Antiproton Accumulator) could be designed to accumulate and store antiprotons produced at 3.5 GeV by the proton beam from the 26 GeV Proton Synchrotron. First collisions of protons and antiprotons at 270 GeV were observed on the night of 10 July 1981, signalling the start of a new era in colliding-beam physics.
A clear physics goal, namely the discovery of the W and Z intermediate vector bosons, drove the concepts for the two main SppS experiments, UA1 and UA2 (in addition to a few smaller, specialised experiments). It was no coincidence that the leaders of both collaborations were pioneers of ISR experiments, and many lessons from the ISR were taken on board. UA1 pioneered the concept of a hermetic detector, covering as much of the full solid angle around the interaction region as possible with calorimetry and tracking. This allowed measurements of the missing transverse energy/momentum signalling the escaping neutrino in leptonic W decays. Both electrons and muons were measured, with tracking in a state-of-the-art drift chamber that provided bubble-chamber-like pictures of the interactions. The magnetic field was provided by a dipole-magnet configuration, an approach not favoured in later-generation experiments because of its inherent lack of azimuthal symmetry. UA2 featured an (at the time) highly segmented electromagnetic and hadronic calorimeter in the central part (down to 40 degrees with respect to the beam axis), with 240 cells pointing to the interaction region. But it had no muon detection and, in its initial phase, only limited electromagnetic coverage in the forward regions. There was no magnetic field except in the forward cones, where toroids probed the W polarisation.
In 1983 the SppS experiments made history with the direct discoveries of the W and Z. Many other results were obtained, including the first evidence of neutral B-meson particle–antiparticle mixing at UA1, thanks to its tracking and muon detection. The calorimetry of UA2 provided immediate, unambiguous evidence for a two-jet structure in events with large transverse energy. Both UA1 and UA2 pushed QCD studies far ahead. The lack of hermeticity in UA2’s forward regions motivated a major upgrade (UA2′) for the second phase of the collider, complementing the central part with new, fully hermetic calorimetry (both electromagnetic and hadronic) and inserting a new tracking cylinder employing novel technologies (fibre tracking and silicon pad detectors). This enabled the experiment to improve searches for top quarks and supersymmetric particles, as well as to make the first, almost background-free, precision measurements of the W mass.
Meanwhile in America
At the time the SppS was driving new studies at CERN, the first large superconducting synchrotron (the Tevatron, with a design energy close to 1 TeV) was under construction at Fermilab. In view of the success of the stochastic cooling experiments, there was a strong lobby at the time to halt the construction of the Tevatron and to divert effort instead to emulate the SPS as a proton–antiproton collider using the Fermilab Main Ring. Wisely this proposal was rejected and construction of the Tevatron continued. It came into operation as a fixed-target synchrotron in 1984. Two years later it was also converted into a proton–antiproton collider and operated at the high-energy frontier until its closure in September 2011.
A huge step was made with the detector concepts for the Tevatron experiments, in terms of the physics signatures addressed and the sophistication and granularity of the detector components. This opened new and continuously evolving avenues in analysis methods at hadron colliders. Already the initial CDF and DØ detectors for Run I (which lasted until 1996) were designed with cylindrical concepts, characteristic of what we now call general-purpose collider experiments, although DØ was still without a central magnetic field, in contrast to CDF’s 1.4 T solenoid. In 1995 the experiments delivered the first Tevatron highlight: the discovery of the top quark. Both detectors underwent major upgrades for Run II (2001–2011) – a theme now seen for the LHC experiments – which had a great impact on the Tevatron’s physics results. CDF was equipped with a new tracker, a silicon vertex detector, new forward calorimeters and muon detectors, while DØ added a 1.9 T central solenoid, vertexing and fibre tracking, and new forward muon detectors. Alongside the instrumentation came a breathtaking evolution in real-time event selection (triggering) and data acquisition to keep up with the increasing luminosity of the collider.
The physics harvest of the Tevatron experiments during Run II was impressive, including a wealth of QCD measurements and major inroads in top-quark physics, heavy-flavour physics and searches for phenomena beyond the Standard Model. Still standing strong are its precision measurements of the W and top masses and of the electroweak mixing angle sin2θW. The story ended in around 2012 with a glimpse of the Higgs boson in associated production with a vector boson. The CDF and DØ experience influenced the LHC era in many ways: for example they were able to extract the very rare single-top production cross-section with sophisticated multivariate algorithms, and they demonstrated the power of combining mature single-experiment measurements in common analyses to achieve ultimate precision and sensitivity.
For the machine builders, the pioneering role of the Tevatron as the first large superconducting machine was also essential for further progress. Two other machines – the Relativistic Heavy Ion Collider at Brookhaven and the electron–proton collider HERA at DESY – derived directly from the experience of building the Tevatron. Lessons learned from that machine and from the SppS were also integrated into the design of the most powerful hadron collider yet built: the LHC.
The Large Hadron Collider
The LHC had a difficult birth. Although the idea of a large proton–proton collider at CERN had been around since at least 1977, the approval of the Superconducting Super Collider (SSC) in the US in 1987 put the whole project into doubt. The SSC, with a centre-of-mass energy of 40 TeV, was almost three times more powerful than what could ever be built using the existing infrastructure at CERN. It was only the resilience and conviction of Carlo Rubbia, who shared the 1984 Nobel Prize in Physics with van der Meer for the project leading to the discovery of the W and Z bosons, that kept the project alive. Rubbia, who became Director-General of CERN in 1989, argued that, in spite of its lower energy, the LHC could be competitive with the SSC by having a luminosity an order of magnitude higher, and at a fraction of the cost. He also argued that the LHC would be more versatile: as well as colliding protons, it would be able to accelerate heavy ions to record energies at little extra cost.
The SSC was eventually cancelled in 1993. This made the case for the LHC even stronger, but the financial climate in Europe at the time was not conducive to the approval of a large project. For example, CERN’s largest contributor, Germany, was struggling with the cost of reunification and many other countries were getting to grips with the introduction of the single European currency. In December 1993 a plan was presented to the CERN Council to build the machine over a 10-year period by reducing the other experimental programmes at CERN to the absolute minimum, with the exception of the full exploitation of the flagship Large Electron Positron (LEP) collider. Although the plan was generally well received, it became clear that Germany and the UK were unlikely to agree to the budget increase required. On the positive side, after the demise of the SSC, a US panel on the future of particle physics recommended that “the government should declare its intentions to join other nations in constructing the LHC”. Positive signals were also being received from India, Japan and Russia.
In June 1994 the proposal to build the LHC was made once more. However, approval was blocked by Germany and the UK, which demanded substantial additional contributions from the two host states, France and Switzerland. This forced CERN to propose a “missing magnet” machine where only two thirds of the dipole magnets would be installed in a first stage, allowing operation at reduced energy for a number of years. Although costing more in the long run, the plan would save some 300 million Swiss Francs in the first phase. This proposal was put to Council in December 1994 by the new Director-General Christopher Llewellyn Smith and, after a round of intense discussions, the project was finally approved for two-stage construction, to be reviewed in 1997 after non-Member States had made known their contributions. The first country to do so was Japan in 1995, followed by India, Russia and Canada the next year. A final sting in the tail came in June 1996 when Germany unilaterally announced that it intended to reduce its CERN subscription by between 8% and 9%, prompting the UK to demand a similar reduction and forcing CERN to take out loans. At the same time, the two-stage plan was dropped and, after a shaky start, the construction of the full LHC was given the green light.
The fact that the LHC was to be built at CERN, making full use of the existing infrastructure to reduce cost, imposed a number of strong constraints. The first was the 27 km-circumference of the LEP tunnel in which the machine was to be housed. For the LHC to achieve its design energy of 7 TeV per beam, its bending magnets would need to operate at a field of 8.3 T, about 60% higher than ever achieved in previous machines. This could only be done using affordable superconducting material by reducing the temperature of the liquid-helium coolant from its normal boiling point of 4.2 K to 1.9 K – where helium exists in a macroscopic quantum state with the loss of viscosity and a very large thermal conductivity. A second major constraint was the small (3.8 m) tunnel diameter, which made it impossible to house two independent rings like the ISR. Instead, a novel and elegant magnet design, first proposed by Bob Palmer at Brookhaven, with the two rings separated by only 19 cm in a common yoke and cryostat was developed. This also considerably reduced the cost.
At precisely 09:30 on 10 September 2008, almost 15 years after the project’s approval, the first beam was injected into the LHC, amid global media attention. In the days that followed good progress was made until disaster struck: during a ramp to full energy, one of the 10,000 superconducting joints between the magnets failed, causing extensive damage which took more than a year to recover from. Following repairs and consolidation, on 29 November 2009 beam was once more circulating and full commissioning and operation could start. Rapid progress in ramping up the luminosity followed, and the LHC physics programme, at an initial energy of 3.5 TeV per beam, began in earnest in March 2010.
LHC experiments
The LHC detectors realised a whole new level of sophistication compared with those at previous colliders. The priority benchmark for the designs of the general-purpose detectors ATLAS and CMS was to unambiguously discover (or rule out) the Standard Model Higgs boson for all possible masses up to 1 TeV, which demanded the ability to measure a variety of final states. The challenges for the Higgs search also guaranteed the detectors’ potential for all kinds of searches for physics beyond the Standard Model, which was the other driving physics motivation at the energy frontier. These two very ambitious LHC detector designs integrated all the lessons learned from the experiments at the three predecessor machines, as well as further technology advances in other large experiments, most notably at HERA and LEP.
Just a few simple numbers illustrate the giant leap from the Tevatron to the LHC detectors. CDF and DØ, in their upgraded versions operating at a luminosity of up to 4 × 10³² cm⁻²s⁻¹, typically had around a million channels and a triggered event rate of 100 Hz, with event sizes of 500 kB. The collaborations were each about 600 strong. By contrast, ATLAS and CMS operated during LHC Run 2 at a luminosity of 2 × 10³⁴ cm⁻²s⁻¹ with typically 100 million readout channels, and an event rate and size of 500 Hz and 1500 kB. Their publications have close to 3000 authors.
For many major LHC-detector components, complementary technologies were selected. This is most visible for the superconducting magnet systems, with an elegant and unique large 4 T solenoid in CMS serving both the muon and inner tracking measurements, and an air-core toroid system for the muon spectrometer in ATLAS together with a 2 T solenoid around the inner tracking cylinder. These choices drove the layout of the active detector components, for instance the electromagnetic calorimetry. Here again, different technologies were implemented: a novel-configuration liquid-argon sampling calorimeter for ATLAS and lead-tungstate crystals for CMS.
From the outset, the LHC was conceived as a highly versatile collider facility, not only for the exploration of high transverse-momentum physics. With its huge production of b and c quarks, it offered the possibility of a very fruitful programme in flavour physics, exploited with great success by the purposely designed LHCb experiment. Furthermore, in special runs the LHC provides heavy-ion collisions for studies of the quark–gluon plasma – the field of action for the ALICE experiment.
As the general-purpose experiments learned from the history of experiments in their field, the concepts of both LHCb and ALICE also evolved from a previous generation of experiments in their respective fields – a lineage that would be interesting to trace. One remark is due: the designs of all four main detectors at the LHC have turned out to be so flexible that there are no strict boundaries between these three physics fields. All of them have learned to use features of their instruments to contribute at least in part to the full physics spectrum offered by the LHC, the highlight of which so far was the July 2012 announcement of the discovery of the Higgs boson by the ATLAS and CMS collaborations. The following year the collaborations were named in the citation for the 2013 Nobel Prize in Physics awarded to François Englert and Peter Higgs.
Since then, the LHC has exceeded its design luminosity by a factor of two and delivered an integrated luminosity of almost 200 fb⁻¹ in proton–proton collisions, while its beam energy was increased to 6.5 TeV in 2015. The machine has also delivered heavy-ion (lead–lead) and even lead–proton collisions. But the LHC still has a long way to go before its estimated end of operations in the mid-to-late 2030s. To this end, the machine was shut down in November 2018 for a major upgrade of the whole of the CERN injector complex as well as the detectors to prepare for operation at high luminosities, ultimately up to a “levelled” luminosity of 7 × 10³⁴ cm⁻²s⁻¹. The High Luminosity LHC (HL-LHC) upgrade is pushing the boundaries of superconducting magnet technology to the limit, particularly around the experiments where the present focusing elements will be replaced by new magnets built from high-performance Nb3Sn superconductor. The eventual objective is to accumulate 3000 fb⁻¹ of integrated luminosity.
In parallel, the LHC-experiment collaborations are preparing and implementing major upgrades to their detectors using state-of-the-art technologies and revolutionary approaches to data collection to exploit the tenfold data volume promised by the HL-LHC. Hadron-collider detector concepts have come a long way in sophistication over the past 50 years. However, behind the scenes are other factors paramount to their success. These include an equally spectacular evolution in data-flow architectures, software, computing approaches and analysis methods – all of which have been driven into new territory by the extraordinary demands of dealing with rare events within the huge backgrounds of ordinary collisions at hadron colliders. Worthy of particular mention in the success of all LHC physics results is the Worldwide LHC Computing Grid. This journey is now poised to continue as we look ahead to what a general-purpose detector at a future 100 TeV hadron collider might look like.
Beyond the LHC
Although the LHC has at least 15 years of operations ahead of it, the question now arises, as it did in 1964: what is the next step for the field? The CERN Council has recently approved the recommendations of the 2020 update of the European strategy for particle physics, which includes, among other things, a thorough study of a very high-energy hadron collider to succeed the LHC. A technical and financial feasibility study for a 100 km circular collider at CERN with a collision energy of at least 100 TeV is now under way. While a decision to proceed with such a facility is to come later this decade, one thing is certain: lessons learned from 50 years of experience with hadron colliders and their detectors will be crucial to the success of our next step into the unknown.
Looking back on the great discoveries in particle physics, one can see two classes. The discovery of the Ω– in 1964 and of the top quark in 1995 were the final pieces of a puzzle – they completed an existing mathematical structure. In contrast, the discovery of CP violation in 1964 and of the J/ψ in 1974 opened up new vistas on the microscopic world. Paradoxically, although the Higgs boson was slated for discovery for almost half a century following the papers of Brout, Englert, Higgs, Weinberg and others, its discovery belongs in the second class. It constitutes a novel departure in the same way as the J/ψ and the discovery of CP violation, rather than the completion of a paradigm as represented by the discoveries of the Ω– and the top quark.
The novelty of the Higgs boson derives largely from its apparently scalar nature. It is the only fundamental particle without spin. Additionally, it is the only fundamental particle with a self-coupling (gluons also couple to other gluons, but only to those with different colour combinations). Measurements of the couplings of the Higgs boson to the W and Z bosons at the LHC have confirmed its role in the generation of their masses, likewise for the charged third-generation fermions. Despite this great success, the Higgs boson is connected to many of the most troublesome aspects of the Standard Model (see “Connecting the Higgs to Standard Model enigmas” panel). It is for this reason that the recently concluded update of the European strategy for particle physics advocated an electron–positron Higgs factory as the highest priority collider after the LHC, to allow detailed study of this novel and unique particle.
Circular vs linear
The discovery of the Higgs boson at the relatively light mass of 125 GeV, announced by the ATLAS and CMS collaborations in 2012, had two important consequences for experiment. The first was the large number of potentially observable branching fractions available. The second was that circular, as well as linear, e+e– machines could serve as Higgs factories. The two basic mechanisms for Higgs-boson production at such colliders are associated production, e+e–→ ZH, and vector-boson fusion. The former process is dominant at the low-energy first stage of the various Higgs factories under consideration, with vector-boson fusion becoming more important with increasing energy (see “Channeling the Higgs” figure). About a quarter of a million Higgs bosons would be produced per inverse attobarn of data, leading to substantial numbers of recorded events even after the branching ratios to observable modes are taken into account.
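The quoted yield follows from simple arithmetic. As a minimal sketch – the σ(e+e− → ZH) ≈ 240 fb value at 250 GeV is an approximate figure assumed here for illustration, not a number from this article:

```python
# Rough event-yield estimate for an e+e- Higgs factory (illustrative).
# Assumed input: sigma(e+e- -> ZH) ~ 240 fb at sqrt(s) = 250 GeV.
sigma_zh_fb = 240.0   # assumed ZH cross section, femtobarns
lumi_ab = 1.0         # integrated luminosity, inverse attobarns
fb_per_ab = 1000.0    # 1 ab^-1 = 1000 fb^-1

n_higgs = sigma_zh_fb * lumi_ab * fb_per_ab
print(f"ZH events per ab^-1: {n_higgs:.0f}")  # about a quarter of a million
```

Multiplying by the branching ratio of any given decay mode then gives the number of usable events in that channel.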
Four Higgs-factory designs are presently being considered. Two are linear accelerators, namely the International Linear Collider (ILC) under consideration in Japan and the Compact Linear Collider (CLIC) at CERN, while the other two are circular: the Future Circular Collider (FCC-ee) at CERN and the Circular Electron Positron Collider (CEPC) in China.
The beams in circular colliders continuously lose energy to synchrotron radiation, which causes the achievable luminosity to decrease with beam energy roughly as Eb–3.5. The advantage of circular colliders is their high instantaneous luminosity, in particular at the centre-of-mass energy relevant for the Higgs-physics programme (250 GeV), but even more so at lower energies such as those corresponding to the Z-boson mass (91 GeV). Electron and positron beams in a circular machine naturally acquire transverse polarisation, which can be exploited to make precise measurements of the beam energy via the electron and positron spin-precession frequencies.
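The Eb–3.5 scaling quoted in the text can be put into numbers; the exponent is the only input taken from the article, and the beam energies below are round illustrative values:

```python
# Illustrative scaling of circular-collider luminosity with beam energy,
# using the rough L ~ E_b^-3.5 behaviour quoted in the text.
def relative_luminosity(e_beam_gev, e_ref_gev, exponent=-3.5):
    """Luminosity at e_beam relative to the luminosity at e_ref."""
    return (e_beam_gev / e_ref_gev) ** exponent

# Beam energies: Z pole (45.6 GeV/beam) vs Higgs running (120 GeV/beam)
gain_at_z = relative_luminosity(45.6, 120.0)
print(f"Z-pole luminosity gain vs 240 GeV running: ~{gain_at_z:.0f}x")
```

This order-of-magnitude gain at low energy is what makes the circular machines' Z-pole programmes so powerful.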
In contrast, for linear colliders the luminosity increases roughly linearly with the beam energy. The advantages of linear accelerators are that they can be extended to higher energies, and the beams can be polarised longitudinally. The ZH associated cross section can be increased by 40% with longitudinal polarisations of –80% and 30% for electrons and positrons, respectively. This increase, coupled with the ability to isolate certain components of Higgs-boson production by tuning the polarisation, enables a linear machine to achieve similar precisions on Higgs-boson measurements with half the integrated luminosity of a circular machine.
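The quoted 40% gain can be reproduced with a standard polarised cross-section formula; the convention and the left–right asymmetry value A_LR ≈ 0.15 for ZH are assumptions for illustration, not taken from the article:

```python
# Sketch of the polarisation enhancement of the e+e- -> ZH cross section.
# Assumed convention: sigma scales as
#   1 - P(e-)P(e+) + A_LR * (P(e+) - P(e-)),
# with left-right asymmetry A_LR ~ 0.15 for ZH (approximate).
def zh_enhancement(p_electron, p_positron, a_lr=0.15):
    return 1.0 - p_electron * p_positron + a_lr * (p_positron - p_electron)

# Polarisations quoted in the text: -80% electrons, +30% positrons
factor = zh_enhancement(p_electron=-0.80, p_positron=+0.30)
print(f"Cross-section enhancement factor: {factor:.3f}")  # about 1.4
```

Flipping the polarisation signs suppresses ZH instead, which is how individual production components can be isolated.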
FCC-ee, CEPC and the ILC are foreseen to run for several years at a centre-of-mass energy of around 250 GeV, where the ZH production cross section is largest. CLIC, by contrast, plans to run its first stage at 380 GeV, where both WW fusion and ZH production contribute and tt production is possible. The circular colliders FCC-ee and CEPC envisage running at the Z-pole and at the WW production threshold for long enough to collect of the order of 10¹² Z bosons and 10⁸ WW pairs, enabling powerful electroweak and flavour-physics programmes (see “Compare and contrast” table). To achieve design luminosity, all proposed e+e– colliders need beams focused to a very small size in one direction (30–70 nm for FCC-ee, 3–8 nm for ILC and 1–3 nm for CLIC) – all below the values achieved so far at existing facilities.
Evolving designs
The proposed circular colliders are based on a combination of concepts that have been proven in previous and present colliders (LEP, SLC, PEP-II, KEKB, SuperKEKB, DAFNE). In Higgs-production mode the beam lifetime is limited by Bhabha scattering to about 30 minutes, so quasi-continuous “top-up” injection, as used at the B-factories, is required. Each of the main concepts and parameters of the circular colliders has been demonstrated in a previous machine, and the designs are thus considered mature. The total FCC-ee construction cost is estimated to be 10.5 billion CHF for energies up to 240 GeV, with an additional 1.1 billion CHF to go to the tt threshold. This includes 5.4 billion CHF for the tunnel, which could be reused later for a hadron collider. The CEPC cost has been estimated at $5 billion, including $1.3 billion for the tunnel. With the present design, the FCC-ee power consumption is 260–340 MW for the various energy stages (compared to 150 MW for the LHC).
The ILC was proposed in the late 1990s and a technical design report published in 2012. It uses superconducting RF cavities for the acceleration, as used in the currently operating European XFEL facility in Germany, to aim for gradients of 35 MV/m. The cost of the first energy stage (250 GeV) was estimated as $4.8–5.3 billion, with a power consumption of 130–200 MW, and an expression of interest to host the ILC as a global project is being considered in Japan. The CLIC accelerator uses a second beam, termed a drive-beam, to accelerate the primary beam, aiming for gradients in excess of 100 MV/m. This concept has been demonstrated with electron beams at the CLIC test facility, CTF3. The cost of the first energy stage of CLIC is estimated as 5.9 billion CHF with a power consumption of 170 MW, rising to 590 MW for final-stage operation at 3 TeV.
Another important difference between the proposed linear and circular colliders concerns the number of detectors they can host. Collisions at linear machines occur at only one interaction point, while at circular colliders at least two interaction points are proposed, doubling the luminosity available for analyses. Two detectors also offer the dual benefits of scientific competition and the cross-checking of results. At the ILC two detectors are proposed, but they cannot run concurrently since they share the same interaction point.
FCC-ee and CLIC have both been proposed as CERN-hosted international projects, similar to the LHC or high-luminosity LHC (HL-LHC). At present, as recommended by the 2020 update of the European strategy for particle physics, a feasibility study for the FCC (including its post-FCC-ee hadron-collider stage, FCC-hh) is ongoing, with the goal of presenting an updated conceptual design report by the next strategy update in 2026. Among the e+e– colliders, CLIC has the greatest capacity to be extended to the multi-TeV energy range. In its low-energy incarnation it could be realised with either drive-beam or conventional technology. CEPC is conceptually and technologically similar to FCC-ee and has also presented a conceptual design report. Nearly all statements about FCC-ee also hold for CEPC, except that CEPC’s design luminosity is about a factor of two lower, and it thus takes longer to acquire the same integrated luminosity. At circular colliders, the multi-TeV regime (at least 100 TeV in the case of FCC-hh) would be reached using proton beams, similar to what was done with the LHC following LEP.
In addition to the vacuum expectation value of the Higgs field and the mass of the Higgs boson, the discovery of the Higgs boson introduces a large number of parameters into the Standard Model. Among them are the Yukawa couplings of the nine charged fermions (in contrast, the gauge sector of the SM has only three free parameters). The Yukawa forces, of which only three have been discovered corresponding to the couplings to the charged third-generation fermions, are completely new. They are of disparate strengths and, unlike the other forces, are not subject to the constraint of local gauge invariance. They provide a parameterisation of the theory of flavour, rather than an explanation. It is of primary importance to discover, bound and characterise the Yukawa forces. In particular, the discovery of CP violation in the Yukawa couplings would go beyond the confines of the Standard Model.
Famously, because the Higgs boson is a scalar, the quantum corrections to its mass are bounded only by the cut-off of the theory, demanding large renormalisations to maintain the mass at the measured 125 GeV. This issue is not so much a problem for the Standard Model per se. In the context of a more complete theory that aims to supersede and encompass the Standard Model, however, it becomes much more troubling: the degree of cancellation necessary to maintain the Higgs mass at 125 GeV sabotages the predictive power of any such theory, and the sabotage becomes deadly as the scale of the new physics is pushed to higher and higher energies.
The electroweak potential is another area of importance in which our current knowledge is fragmentary. Within the confines of the Standard Model the potential is completely specified by the position of its minimum (the vacuum expectation value) and by the second derivative of the potential at the minimum (the mass of the Higgs boson, or equivalently its self-coupling). We have no direct knowledge of the behaviour of the potential at field values further from the minimum. In addition, extrapolation of the currently understood Higgs potential to higher energy reveals a world teetering between stability and instability. Further information about the behaviour of the potential could help us to interpret the meaning of this result. A modified electroweak potential might also give rise to a first-order phase transition at high temperature, rather than the smooth crossover expected for the Standard Model Higgs potential. This would fulfil one of the three Sakharov conditions necessary to generate an asymmetry between matter and antimatter in our universe.
To quantify the scientific reach of the proposed colliders compared to current knowledge or the expectations for the HL-LHC, it is necessary to define figures-of-merit for the observables that will be measured. For the Higgs boson the focus is on the coupling strengths to the Standard Model bosons and fermions, as well as the couplings to any new particles. The strength with which the Higgs boson couples to the various particles, i, is denoted by κi, defined such that κi = 1 corresponds to the Standard Model. Non-standard phenomena are included in this “kappa” framework by introducing two new quantities: the branching ratio into invisible particles (determined by measuring the missing energy in identified Higgs events), and the branching ratio to untagged particles (determined by measuring the contributions to the total width accounted for by the observed modes, or by directly searching for anomalous decays).
Higgs-boson observables
At hadron colliders, only ratios of κi parameters can be measured, since a precise measurement of the total width of the Higgs boson is lacking (the expected total width in the Standard Model is 4.2 MeV, far too small to be resolved experimentally). To determine absolute κi values at a hadron collider, a further assumption needs to be made, either on the decay rates of the Higgs boson to new particles or on one of the κi values. An assumption that is often made, and valid in many beyond-the-Standard-Model theories, is that κZ ≤ 1.
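Why only ratios are accessible can be shown with a minimal numerical sketch of the kappa framework. The rate expression and the numbers below are illustrative assumptions, not measured values:

```python
# Sketch of the flat direction in a hadron-collider kappa fit.
# Signal strength for production via particle p and decay to f:
#   mu = kappa_p^2 * kappa_f^2 / (Gamma_tot / Gamma_SM)
def signal_strength(kappa_p, kappa_f, width_ratio):
    return kappa_p**2 * kappa_f**2 / width_ratio

# Standard Model point: all kappas equal 1, no extra width.
mu_sm = signal_strength(1.0, 1.0, 1.0)

# Scale every coupling by c and add an untagged partial width chosen so
# that Gamma_tot/Gamma_SM = c^4: every observable rate is unchanged.
c = 1.2
mu_scaled = signal_strength(c, c, c**4)

print(mu_sm, mu_scaled)  # equal -> only coupling ratios are measurable
```

Pinning down the absolute scale requires either an external width determination or an assumption such as κZ ≤ 1.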
The kappa framework, however, by construction, does not parameterise possible effects coming from different Lorentz structures and/or the energy dependence of the Higgs couplings. Such effects could generically arise from the existence of new physics at higher scales and could lead not only to changes in the predicted rates, but also in distributions. Deviations of κi from 1 indicate a departure from the Standard Model, but do not provide a tool to diagnose its cause. This shortcoming is remedied in so-called effective-operator formalisms by including operators of mass dimension greater than four.
At e+e– colliders a Higgs boson produced via e+e–→ ZH can be identified without observing its decay products. This measurement, of primary importance, is unique to e+e– colliders. By measuring the Z decay products and with the precise knowledge of the momenta of the incoming e– and e+ beams, the presence of the Higgs boson in ZH events can be inferred based on energy and momentum conservation alone, without actually tagging the Higgs boson. In this way one directly measures the coupling between the Higgs and Z bosons. In combination with the Higgs branching ratio to Z pairs it can be interpreted as a measurement of the Higgs-boson width. The first-stage e+e– Higgs factories all constrain the total width at about the 2% level.
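This inference uses the standard recoil-mass technique; a sketch with round illustrative masses and energies (beam-energy spread and radiation effects are ignored here):

```python
import math

# Recoil-mass sketch: in e+e- -> ZH the Higgs can be "seen" from the Z
# alone, using energy-momentum conservation:
#   m_recoil^2 = s + m_Z^2 - 2*sqrt(s)*E_Z
SQRT_S = 250.0   # centre-of-mass energy, GeV (illustrative)
M_Z = 91.19      # Z-boson mass, GeV

def recoil_mass(e_z, sqrt_s=SQRT_S, m_z=M_Z):
    """Mass of the system recoiling against a Z of energy e_z."""
    m2 = sqrt_s**2 + m_z**2 - 2.0 * sqrt_s * e_z
    return math.sqrt(m2)

# For two-body ZH kinematics the Z energy is fixed by the masses:
M_H = 125.0
e_z = (SQRT_S**2 + M_Z**2 - M_H**2) / (2.0 * SQRT_S)
print(f"Z energy: {e_z:.1f} GeV, recoil mass: {recoil_mass(e_z):.1f} GeV")
```

Since no assumption about the Higgs decay enters, the rate of events in the recoil-mass peak measures the HZZ coupling directly.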
LHC and HL-LHC
To assess the potential impact of the e+e– Higgs factories it is important to examine the point of departure provided by the LHC and HL-LHC. Since its startup in 2010 the LHC has made a monumental impact on our understanding of the Higgs sector. After the Higgs discovery in 2012, a measurement programme started and now, with nearly 150 fb⁻¹ of data analysed by ATLAS and CMS, much has been learned. The Higgs-boson mass has been measured with a precision of < 0.2%, its spin and parity confirmed as expected in the Standard Model, and its couplings to bosons and to third-generation charged fermions established with a precision of 5–10%.
With the HL-LHC and its experiments planned to operate from 2027, the precision on the coupling parameters and on the branching ratios to new particles will be improved by a factor of 5–10 in all cases, typically resulting in a sensitivity of a few % (see “Kappa couplings” figure). The HL-LHC will also enable measurements of the very rare μ+μ– decay, the first evidence for which was recently reported by CMS and ATLAS, and thus show whether the Higgs boson also generates the mass of a second-generation fermion. With the full HL-LHC dataset, corresponding to 3000 fb⁻¹ for each of ATLAS and CMS, it is expected that di-Higgs production will be established with a significance of four standard deviations. This will allow a determination of the Higgs boson’s coupling to itself with a precision of 50%.
The LHC has also made enormous progress in direct searches for new particles at high energies. With more than 1000 papers published on this topic, hunting down particles predicted by dozens of theoretical ideas, and no firm sign of a new particle anywhere, it is clear that any new physics is either heavier, or more weakly coupled, or has other features that hide it in the LHC data. The LHC is also a precision machine for electroweak physics, having measured the W-boson mass and the top-quark mass with uncertainties of 0.02% and 0.3%, respectively. In addition, a large number of relevant cross-section measurements of multi-boson production have been made, probing the trilinear and quartic interactions of the gauge bosons with each other.
Higgs-factory impact
In terms of the measurement precision on the Higgs-boson couplings, the proposed Higgs factories are expected to bring a major improvement with respect to HL-LHC in most cases (see “Relative precision” figure). Only for the rare decays to muons, photons and Zγ, and for the very massive top quark, is this not the case. The highest precision (0.2% in the case of FCC-ee) is achieved on κZ since the main Higgs production mode, ZH, depends directly on it, regardless of the decay mode. For other Standard Model particles, improvement factors of two to four are typical. For the invisible and untagged decays, the constraints are improved to around 0.2% and 1%, respectively, for some of the Higgs factories. A new measurement, not possible at the LHC, is that of the charm–quark coupling, κc.
None of the initial stages of the proposed Higgs factories will be able to directly probe the self-coupling of the Higgs boson beyond the 50% precision expected from the HL-LHC, since the cross sections for the relevant processes (e+e–→ ZHH and e+e–→ HHνν) are negligible at centre-of-mass energies below 400 GeV. The Higgs self-coupling does, however, also enter single-Higgs production through loops, and indirect effects might therefore be observable, for instance as a small (< 1%) deviation in measurements of the inclusive ZH cross section. Measurements of the Higgs self-coupling exploiting the di-Higgs production process can only be performed at higher-energy colliders. The ILC and CLIC project uncertainties of around 30% at their intermediate energies and around 10% at their ultimate energies, while FCC-hh projects a precision of around 5%. Similarly, the HL-LHC precision of 3.2% on the Higgs coupling to the top quark will not be improved by the initial stages of any of the Higgs factories.
The proposed Higgs factories also have a rich physics programme at lower energies, particularly at the Z pole. FCC-ee, for instance, plans to run for four years at the Z pole to accumulate a total of more than 10¹² Z bosons – 100,000 times more than at LEP. This will enable a rich and unprecedented electroweak physics programme, constraining so-called oblique parameters (which are sensitive to violations of weak isospin) at the per-mille level, 100 times better than today. It will also enable a B-physics programme complementary to that at Belle II and LHCb. At CEPC a similar programme is possible, while at the ILC and CLIC the luminosity when running at the Z pole is much lower: the typical number of Z bosons that can be accumulated is 10⁹ – 100 times more than at LEP, but not at the level of the circular colliders. FCC-ee’s electroweak programme also foresees a run at the WW threshold to enable a high-precision measurement of the W mass.
Concerning the large top-quark mass, measurements at the LHC suffer from uncertainties associated with renormalisation schemes, and the currently achieved precision of 400 MeV is unlikely to improve significantly at the HL-LHC. At an e+e– collider operating at the tt threshold (~350 GeV), the top mass could be measured with a total uncertainty of around 50 MeV and with full control of the renormalisation-scheme issues. In addition to its importance as a fundamental parameter of the Standard Model, the top mass is the dominant input to the evolution of the Higgs potential with energy, which determines vacuum stability (see “Connecting the Higgs to Standard Model enigmas” panel).
In short, a Higgs factory promises to expand our knowledge of nature at the smallest scales. The ZH cross-section measurement alone will probe fine-tuning at the level of a few per mille, about 30 times better than what we know today. This provides indirect sensitivity to new particles with masses up to 10–30 TeV, depending on their coupling strength, and could point to a new energy scale in nature.
But most of all, the Higgs boson has not exhausted its ability to surprise. The rest of the Standard Model is a compact structure, exquisitely tested and ruled by local gauge invariance and other symmetries. Compared to this, the Lagrangian of the Higgs sector is the wild west, where the final laws have yet to be written. Does the Higgs boson have a significant rate of invisible decays, which could be a key component in understanding the nature of dark matter in our universe? Does the Higgs boson act as a portal to other scalar degrees of freedom? Does the Higgs boson provide a source of CP violation? An electron–positron Higgs factory provides a tool to address these questions with unique clarity should deviations between the measured and predicted values of observables be detected. Building on the data from the HL-LHC, it will be the perfect tool to elucidate the underlying laws of physics.
The proposed 100 km-circumference Future Circular Collider (FCC) at CERN features, as a first stage, an electron–positron Higgs and electroweak factory (FCC-ee) operating at centre-of-mass energies from 91 GeV (the Z mass) to a maximum of 365 GeV (above the tt production threshold). The same tunnel is then planned to host a hadron collider (FCC-hh) operating at the highest possible energies, at least 100 TeV. The complete FCC programme, whose financial and technical feasibility is currently under study, offers unprecedented potential in terms of the reach on phenomena beyond the Standard Model (SM). The proposed Circular Electron Positron Collider project in China adopts the scheme envisioned for the FCC-ee, with a somewhat less ambitious overall physics programme.
While the primary goal of a future lepton collider is the precise study of the interactions of the scalar boson discovered in 2012 at the LHC, seeking answers to the open questions in particle physics also requires high-precision measurements of the other three heaviest SM particles: the W and Z electroweak bosons and the top quark. Beyond the exploration of the Higgs sector, FCC-ee offers a rich range of opportunities to discover new phenomena, both indirectly and directly.
Studies of Higgs-boson interactions are prime tests of the dynamics of electroweak symmetry breaking and of the generation of elementary-particle masses. At FCC-ee, the Higgs boson will dominantly be produced by radiation off a Z boson. With around one million such e+e–→ ZH events recorded in three years of operation, a per-mille precision is targeted on the cross-section measurement. This corresponds to probing phenomena coupled to the scalar SM sector at energy scales approaching 10 TeV. The Higgsstrahlung process is, however, sensitive to gauge interactions beyond those of the Higgs boson (see “Higgs production” figure), which can themselves be affected by new physics. A robust test of the SM’s consistency will require independent experimental determination of these interactions. The precision available today is insufficient, calling for new electroweak measurements to be performed.
Electroweak and top-quark precision
FCC-ee will provide these missing pieces, and much more. An unprecedented number (5 × 10¹²) of Z bosons will be produced with an exquisite knowledge of the centre-of-mass energy (100 keV or lower, thanks to the availability of transverse polarisation of the beams), thereby surpassing the precision of all previous measurements at LEP and SLC by several orders of magnitude. Uncertainties of the order of 100 keV on the Z-boson’s mass and 25 keV on its width can be achieved, as well as precisions of around 10⁻⁵ on the various charged-fermion couplings, and of 3 × 10⁻⁵ on the QED coupling strength αQED(mZ). Impressive numbers of pairs of tau leptons (1.7 × 10¹¹) and 10¹² each of c and b quarks will be produced in Z decays, allowing order-of-magnitude improvements on tau and heavy-flavour observables compared to other planned facilities.
At the WW threshold, with 10⁸ W bosons collected at a centre-of-mass energy of 161 GeV and threshold scans with an energy uncertainty of about 300 keV, a unique W-boson mass precision of 0.5 MeV will be reached. Meticulous measurements of di-boson production will be essential for the Higgs programme, given the gauge-symmetry relations between triple-gauge-boson and Higgs–gauge-boson interactions. Hadronic W and Z decays will also provide measurements of the QCD coupling strength with per-mille uncertainties – a factor of 10 better than the current world average.
Stepping up to a centre-of-mass energy of 350 GeV, e+e–→ tt measurements would deliver an impressive determination of the top-quark mass with 10 MeV statistical uncertainty, thanks to energy scans with a 4 MeV precision. At the highest FCC-ee energies, the determination of the top quark’s electroweak couplings, which affect Higgs processes, can be performed to sub-percent precision.
These high-precision FCC-ee measurements in the Higgs, electroweak and top-quark sectors will be sensitive to a large variety of new-physics scenarios. High-mass physics with SM couplings, for example, can be tested up to scales of the order of 50 TeV. Regardless of mass scale, mixing of new particles with known ones at the level of a few tens of ppm will also produce visible effects.
Probing new physics at the Z pole
Given that new light particles are constrained to be feebly coupled to the SM, large e+e– luminosities are needed to search for them. By examining an astounding number of Z-boson decays, FCC-ee will explore uncharted territories in direct searches for feebly coupled light states, such as heavy neutral leptons and axion-like particles. If not directly produced, the former are also probed indirectly through precision electroweak measurements.
Heavy neutral leptons (N) are sterile particles, such as those invoked in neutrino mass-generation mechanisms. The mixing of these states with neutrinos would induce interactions with electroweak bosons and charged leptons, for example NℓW, NνZ or NνH. Heavy neutral leptons can have a wide range of masses and be searched for at FCC-ee, both directly and indirectly, with unparalleled reach. When heavier than the muon and mixing with either the e or µ flavours, they lower the µ → eν̄eνµ decay rate and affect the extraction of the Fermi constant, leading to deviations from the SM in many precision electroweak observables. When lighter than the Z boson, they could be produced in Z → νN decays. FCC-ee will bring order-of-magnitude improvements over LEP bounds in both regimes (see “Heavy neutral leptons” figure). The direct sensitivity improves even more dramatically than the indirect one: in the parameter space where N have sizeable lifetimes, displaced vertices provide a spectacular, background-free signature (see “Discovery potential” image). This region of great interest corresponds to weak-scale leptogenesis, in which right-handed neutrinos participate in the generation of the baryon asymmetry of the universe.
Axion-like particles (ALPs) are pseudoscalar singlets with derivative couplings to the SM, which may be generated in the breaking of global symmetries at high scales. They could contribute to the dark-matter relic abundance and, in a specific range of parameter space, provide a dynamical explanation for the absence of CP violation in the strong interaction. Having symmetry-protected masses, ALPs can be naturally light. For masses smaller than twice that of the electron, they can only decay visibly to photons. Their couplings to the SM, suppressed by a potentially large scale, may be tiny, and ALP lifetimes could thus be long. A coupling to either hypercharge or weak isospin would allow them to be produced in Z-boson decays together with a photon and to decay to photon pairs. Searching for this signature, FCC-ee will probe couplings more than an order of magnitude smaller than those accessible at the LHC (see “Axion-like particles” figure). Pairs of ALPs could possibly also be produced in the decay of the Higgs boson, whose small width enhances branching fractions and allows small couplings to be probed. Producing Higgs bosons in larger numbers, hadron colliders are, however, more efficient at probing such interactions.
Towards a new frontier
The physics potential of FCC-ee clearly extends well beyond its original purpose as a Higgs and electroweak factory. Upgrading the facility to FCC-hh will require a new machine based on high-field superconducting magnets, although key parts of the FCC-ee infrastructure would be usable at both colliders. Compared to the LHC, FCC-hh will collect about 10 times more integrated luminosity and increase the direct discovery reach for high-mass particles – such as Z′ or W′ gauge bosons, gluinos and squarks, and even WIMP dark matter – by a factor of around 10, up to scales of about 50 TeV. It would also serve as a giga-Higgs factory, producing more than 10¹⁰ Higgs bosons during its planned 25 years of data taking, albeit not in the ultraclean collision environment of FCC-ee.
Beyond exquisite precision on Higgs-boson couplings to other SM particles, a 100 TeV proton–proton collider comes to the fore in revealing how the Higgs boson couples to itself, which is connected to the electroweak phase transition in the early universe and ultimately to the stability of the vacuum. The rate of Higgs pair-production events, which partly proceed through the Higgs self-interaction, would grow by a factor of 40 at FCC-hh with respect to the LHC and enable this unique property of the Higgs boson to be measured with a statistical accuracy reaching ±2%. Such a measurement would comprehensively explore classes of models that rely on modifying the Higgs potential to drive a strong first-order phase transition at the time of electroweak symmetry breaking, a necessary condition to induce baryogenesis.
Following the highly successful model of LEP and its successor, the LHC, the integrated FCC programme offers a far-reaching particle-physics programme at the limits of known technology to significantly push the frontier of our knowledge of the fundamental particles and interactions. A conceptual design report was published in 2019, estimating that operations could begin as soon as 2040 for FCC-ee and 2065 for FCC-hh. Exploring the financial and technical feasibility of this visionary project is one of the highest-priority recommendations of the 2020 update of the European strategy for particle physics, with a decision on whether or not to proceed expected by the next strategy update towards the middle of the decade.
The Compact Linear Collider (CLIC) is conceived in its first stage to be an 11 km-long electron–positron collider operating at a centre-of-mass energy of 380 GeV. Unlike other Higgs-factory proposals that start around 240 GeV, CLIC benefits at the initial stage not only from top-quark production, but also from two Higgs-boson production modes – Higgsstrahlung (e+e–→ HZ) and WW fusion – giving extra complementary input for global interpretations of the data.
A defining feature of a linear collider is that its collision energy can be raised by extending its length. While the European strategy update recommended a circular hadron collider at the energy frontier as a long-term ambition, CLIC represents a compelling alternative were a circular machine to prove unfeasible. CLIC has the potential to be extended in several stages up to 50 km and a maximum energy of 3 TeV, giving access to a wide range of physics processes (see “Multichannel” figure). Some important processes, such as Higgsstrahlung production, fall with energy; others, such as double-Higgs production, require higher energies; and processes occurring through vector-boson fusion grow with energy. In general, the beyond-Standard-Model (BSM) sensitivity of scattering processes such as ZH, WW and two-fermion (including top-pair) production rises strongly with energy, so the higher-energy stages bring further sensitivity to potential new physics, both indirectly and directly.
In contrast to the ILC (see ILC: beyond the Higgs), CLIC operates via a novel two-beam scheme, whereby radio-frequency power extracted from a high-current, low-energy drive beam is used to accelerate the colliding beams. Were a decision made to upgrade CLIC from 380 GeV to 1.5 TeV, the main linacs would have to be extended to 29 km, and accelerator modules moved and added. Going from 1.5 to 3 TeV would require further lengthening of the main linacs and the addition of a second drive-beam complex. CLIC’s combination of Higgs- and top-factory running, and multi-TeV extension potential, makes it illuminating to study the physics prospects of the initial stage in parallel with those of the ultimate energy.
Higgs physics
At 380 GeV, with 1 ab⁻¹ of integrated luminosity, CLIC would produce around 160,000 Higgs bosons. This stage would enable precision determinations well beyond the HL-LHC, for example of the single-Higgs couplings to WW, ZZ, bb and cc. Thanks to the known kinematic constraints of the collision environment, it also allows an absolute determination of the Higgs couplings, as opposed to the ratios accessible at the LHC. The precision on Higgs-coupling measurements is increased considerably by the enhanced statistics at 1.5 TeV, where CLIC could produce 1 million Higgs bosons with an integrated luminosity of 2.5 ab⁻¹, as well as opening sensitivity to other processes. A linear collider like CLIC also provides considerable flexibility: for example, collecting 1 ab⁻¹ at 380 GeV in 8 years, or 4 ab⁻¹ in 13 years as studied recently, before a possible jump to 1.5 TeV.
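As a consistency check on these figures (illustrative arithmetic only, using the yields quoted in the text and the standard conversion 1 ab⁻¹ = 1000 fb⁻¹), the 160,000 Higgs bosons expected in 1 ab⁻¹ imply an effective Higgs production cross-section of about 160 fb at 380 GeV:

```python
# Event yield N = sigma * integrated luminosity; 1 ab^-1 = 1000 fb^-1.
lumi_fb = 1_000.0   # 1 ab^-1 expressed in fb^-1
n_higgs = 160_000   # yield quoted for CLIC at 380 GeV

sigma_fb = n_higgs / lumi_fb
print(f"implied cross-section: {sigma_fb:.0f} fb")  # 160 fb

# Scaling to the 4 ab^-1 running scenario mentioned in the text:
print(f"4 ab^-1 yield: {sigma_fb * 4_000:.0f} events")  # 640000 events
```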
The 1.5 TeV energy stage gives access to two double-Higgs production mechanisms: double-Higgsstrahlung (e+e–→ ZHH) and vector-boson fusion (e+e–→ HHνeνe). Such production of Higgs-boson pairs allows the Higgs self-coupling to be probed directly. While the 1.5 TeV stage could reach a precision of –29%/+67% using a rate-only analysis, at 3 TeV an ultimate Higgs self-coupling precision of –8%/+11% is expected, also exploiting differential information. Furthermore, the ability to measure both the ZHH and HHνeνe processes allows for an unambiguous determination of the Higgs self-coupling even if it is far from its Standard Model value. Unlike indirect determinations from ZH measurements at lower Higgs-factory energies, the precision of CLIC’s direct Higgs-self-coupling measurement is largely preserved in global fits. CLIC could thus robustly verify that the Higgs self-coupling assumes the value predicted by the Standard Model, or uniquely identify the new-physics effects responsible for potential tensions with the Standard Model in Higgs observables.
Top-quark physics
CLIC is unique among the proposed electron–positron colliders in producing top-quark pairs at its initial energy stage. Electroweak couplings to third-generation fermions such as the top are particularly relevant in many BSM scenarios. Operating at the top-quark pair-production threshold of around 350 GeV would allow precise measurements of the top-quark mass and width, while cross-section and asymmetry measurements would probe the top-quark interactions. However, comprehensive exploration of top-quark couplings requires several energy stages, and spacing them widely as the CLIC baseline envisages enhances energy-dependent effects.
Electron-beam longitudinal polarisation at ±80% plays an important role in the precision programme at CLIC. Generally the polarisation significantly enhances WW-fusion processes, for example single- and double-Higgs production at higher energies; we make use of this in the baseline scenario by taking more data with left-handed electrons at the later stages. In the interpretation of Standard Model measurements, polarisation also helps to disentangle different contributions. The coupling of the top quark to the Z boson and the photon is one such example.
Indirect searches
Many observables such as cross-sections and differential distributions for WW and two-fermion production, in addition to measurements from the Higgs-boson and top-quark sectors, can be used to constrain potential new physics in the framework of effective field theory. Here, the Standard Model Lagrangian is supplemented by interaction operators of higher dimension that describe the effects of new particles. These particles could be too heavy to be produced at CLIC, but can still be probed through the effects they induce, indirectly, on CLIC observables.
For many new-physics operators, CLIC is projected to bring an order-of-magnitude increase in sensitivity over the HL-LHC. The 380 GeV stage already significantly enhances our knowledge of operators relating to modifications of the Higgs couplings, as well as of electroweak observables such as triple-gauge couplings. The higher-energy stages are then particularly effective in probing operators that induce corrections to Standard Model predictions which grow with energy. Sensitivity to these operators allows a wide range of new-physics scenarios to be probed without reference to particular models. Comparisons performed for the 2020 update of the European strategy for particle physics show, for example, that sensitivities derived in this way to four-fermion or two-fermion–two-boson contact interactions rise very steeply with the centre-of-mass energy of a lepton collider, allowing CLIC to probe scales up to 100 TeV and beyond.
Precision measurements of Standard Model processes can also be interpreted in the context of particular BSM models, such as the broad classes of composite-Higgs and composite-top models, or extra-dimension models, giving CLIC strong new-physics reach. For example, a 3 TeV CLIC has sensitivity to Higgs compositeness up to a scale of around 18 TeV for all values of the compositeness-sector coupling strength (see “Sensitivity” figure, left), and can reach beyond 40 TeV in particularly favourable scenarios – in all cases well beyond what the HL-LHC can exclude. At high masses, a multi-TeV lepton collider such as CLIC also provides the best possible sensitivity to new vector bosons such as the Y-universal Z′, whose couplings to quarks and leptons are comparable (see figure, right).
As a further example, the very high energy of CLIC, and therefore the high propagator virtuality in two-fermion production, means that high-precision differential cross-sections could reveal deviations from Standard Model predictions owing to the presence of new particles in loops. This would allow discovery or exclusion of new states, for example dark-matter candidates, with a variety of possible quantum numbers and masses in the range of several TeV.
Direct searches
Direct searches for new physics at CLIC benefit from the relatively clean collision environment and from triggerless detector readout, both of which allow searches for elusive signatures that are difficult at a hadron collider. Mono-photon final states are an example of such a signature. In simplified dark-matter models containing a dark-matter particle and a mediator, dark-matter particles can be pair-produced in association with a photon, which is observed in the detector. In the case of a scalar mediator, lepton colliders are particularly sensitive and CLIC’s reach for the mediator can exceed its centre-of-mass energy significantly. In the case where the couplings to electrons and quarks are different, e+e– and proton colliders provide complementary sensitivities.
Lepton colliders can in general explore much closer to kinematic limits than hadron colliders, and this was recently verified in several examples of pair production, including simplified supersymmetric models and doubly charged Higgs production. Supersymmetric models where the higgsino multiplet is decoupled from all other supersymmetric states can lead to charginos decaying to slightly lighter neutralinos and leaving a “disappearing track stub” signature in the detector. CLIC at 3 TeV would be sensitive to such a higgsino up to masses beyond 1.1 TeV – the mass required for the higgsino to account for the observed dark-matter relic density.
All the above approaches can be combined to illuminate the electroweak phase transition in the early universe. Models of electroweak baryogenesis can contain new scalar particles to facilitate a strong first-order phase transition, during which the electroweak symmetry is broken. Such scalar-singlet extensions of the Higgs sector can be searched for directly, and indirectly through a universal scaling of all Higgs couplings.
Having both the precision capacity of a lepton collider and also the high-energy reach of multi-TeV collisions, CLIC has strong potential beyond a Higgs factory as a discovery machine. Over the next five years CERN will maintain a level of R&D in key CLIC technologies, which are also being adapted for medical applications, such that the project could be realised in a timely way after the HL-LHC if the international community decides to take this route.
The European strategy for particle physics (ESPP), updated by the CERN Council in June 2020, lays the foundations for a bright future for accelerator-based particle physics. Its 20 recommendations – covering the components of a compelling scientific programme for the short, medium and long terms, as well as the societal and environmental impact of the field, public engagement and support for early-career scientists – set out an ambitious but prudent approach to realise the post-LHC future in Europe within the worldwide context.
Full exploitation of the LHC and its high-luminosity upgrade is a major priority, both in terms of its physics potential and its role as a springboard to a future energy-frontier machine. The ESPP identified an electron–positron Higgs factory as the highest priority next collider. It also recommended that Europe, together with its international partners, investigate the technical and financial feasibility of a future hadron collider at CERN with a centre-of-mass energy of at least 100 TeV, with an electron–positron Higgs and electroweak factory as a possible first stage. Reinforced R&D on a range of accelerator technologies is another ESPP priority, as is continued support for a diverse scientific programme.
Implementation starts now
It is CERN’s role, in strong collaboration with other laboratories and institutions in Europe and beyond, to help translate the visionary scientific objectives of the ESPP update into reality. CERN’s recently approved medium-term plan (MTP), which covers the period 2021–2025, provides a first implementation of the ESPP vision.
Starting this year, CERN will deploy efforts on the feasibility study for a Future Circular Collider (FCC) as recommended by the ESPP update. One of the first goals is to verify that there are no showstoppers to building a 100 km tunnel in the Geneva region, and to gather pledges for the necessary funds to build it. The estimated FCC cost cannot be met only from CERN’s budget, and special contributions from non-Member States as well as new funding mechanisms will be required. Concerning the enabling technologies, the first priority is to demonstrate that the superconducting high-field magnets needed for 100 TeV (or more) proton–proton collisions in a 100 km tunnel can be made available on the mid-century time scale. To this end CERN is implementing a reinforced magnet R&D programme in partnership with industry and other institutions in Europe and beyond. Fresh resources will be used to explore low- and high-temperature superconducting materials, to develop magnet models towards industrialisation and cost reduction, and to build the needed test infrastructure. These studies will also have vast applications outside the field. Minimising the environmental impact of the tunnel, the colliders and detectors will be another major focus, as well as maximising the benefits to society from the transfer of FCC-related technologies.
The 2020 MTP includes resources to continue R&D on key technologies for the Compact Linear Collider and for the establishment of an international design study for a muon collider. Further advanced accelerator technologies will be pursued, as well as detector R&D and a new initiative on quantum technologies.
Scientific diversity is an important pillar of CERN’s programme and will continue to be supported. Resources for the CERN-hosted Physics Beyond Colliders study have been increased in the 2020 MTP and developments for long-baseline neutrino experiments in the US and Japan will continue at an intense pace via the CERN Neutrino Platform.
Immense impact
The discovery of the Higgs boson, a particle with unprecedented characteristics, has contributed to turning the focus of particle physics towards deep structural questions. Furthermore, many of the open questions in the microscopic world are increasingly intertwined with the universe at large. Continued progress on this rich and ambitious path of fundamental exploration requires a courageous, global experimental venture involving all the tools at our disposal: high-energy colliders, low-energy precision tests, observational cosmology, cosmic rays, dark-matter searches, gravitational waves, neutrinos, and many more. High-energy colliders, in particular, will continue to be an indispensable and irreplaceable tool to scrutinise nature at the smallest scales. If the FCC can be realised, its impact will be immense, not only on CERN’s future, but also on humanity’s knowledge.
You started out studying electrical engineering. Why the switch to physics, and what have been your main research interests?
Actually, I studied them both in parallel, having started out in electrical engineering and then attending physics courses after I found myself getting a bit bored. I graduated with a Masters in electrical engineering, and then pursued a PhD in particle physics, working on the MARK-J experiment at DESY studying muon pairs, which allowed us to make estimates of the Z mass and sin²θW. To some level at MARK-J we could already test electroweak theory. Afterwards, I did a postdoc at CERN for two years on the L3 experiment, and ended up staying on L3 for 12 years. My background in engineering has helped several times during my career. For example, I acted as an interface between the physicists at CERN and the engineers in Aachen who designed and built the complicated L3 readout electronics, as they couldn’t always speak the same language.
How do you remember your LEP days?
It was a marvellous time, certainly some of the best years of my life. For the first few years at L3 I didn’t do any physics analysis – I was down in the tunnel dealing with the readout electronics. After a few years I was able to pick up physics again, going back to electroweak physics, and becoming the coordinator of the line-shape group that was in charge of measurements of Z parameters. I later became L3 analysis coordinator. I was there for essentially the whole duration of LEP, leaving CERN at the end of 1999 and joining the CMS group at Aachen University.
What are your key achievements since becoming DESY’s director for particle and astroparticle physics in 2009?
I came to DESY shortly before the experiments at HERA stopped and became director as the analyses were ramping down and LHC activities were ramping up. Certainly, one of the biggest achievements during this time was helping DESY transition from having local experiments onsite to a laboratory that now plays a key role in the CMS and ATLAS experiments. DESY became one of the largest Tier-2 data centres of the worldwide LHC computing grid, plus it had a lot of experts on proton structure and detector operation who were warmly welcomed by the LHC experiments. DESY joined the LHC relatively late, in 2008, but now has a very strong involvement in the ATLAS and CMS trackers, for example, and has set up a large infrastructure to build one end-cap tracker for ATLAS and one for CMS. DESY also joined the Belle experiment at KEK, and continues to be one of the leading labs in detector R&D for future colliders. Smaller-scale experiments at DESY also picked up speed, in particular axion searches. Recently the 24th dipole for the ALPS-II experiment was installed, which is really impressive. The motivation for astroparticle physics was always more concentrated at DESY’s Zeuthen site, and two years ago it was decided to create an independent division for astroparticle physics to give it more visibility.
How has the transition from collider physics to X-ray science changed life at DESY?
Well, there is no longer the burden at DESY to operate large accelerators and other facilities for particle physics, so those resources are now all directed towards photon science, such as the operation of the PETRA light source, the FLASH facility and the European XFEL. On the other hand, the laboratory has also grown over the last decade, to the benefit of photon science. However, if you count the number of DESY authors in ATLAS and CMS, it is still the second or third largest laboratory, so DESY is still very significant in particle physics.
How would you sum up the state of high-energy physics today?
I’m optimistic, otherwise I wouldn’t be here! Often when I talk to students, I tell them that the best is yet to come in particle physics. Yes, it’s true that we do not currently have a scenario like we had for the LHC, or for the SppS, with clear targets to discover new particles; but if you look back in history, this hasn’t been the case very often. We would not have built several machines, including LEP, if it had been. Discovery doesn’t necessarily have to mean new particles. So that’s why I am optimistic about the future of the field: we now have the Higgs boson, which is a very special particle. It’s the first of its kind – not another quark or lepton. Studying the Higgs in detail might be the key to new insights into fundamental physics. This is also the central theme of the recent European strategy update.
What do you see as your main opportunities and challenges during the next five years?
CERN is a very complicated thing. I have been away for 20 years now, so I am still in a learning phase. It is very clear what our challenges are, though. We have to make the next LHC run a success, and we also need to prepare for the HL-LHC. The world is looking to us for that. The second most important thing is the implementation of the European strategy update and, in particular, the preparation for the longer-term future of CERN. We have to prepare a convincing plan for the post-LHC collider, to be ready for decision at the next strategy update at the latest.
What is in store for computing?
Computing will remain a major challenge. LHC Run 3 will start soon and we have to prepare for it now, including securing the necessary funds. On the horizon there is the high-luminosity LHC, with an enormous increase in data volumes that would by far exceed the available capacities in a flat-budget scenario. We will have to work in close collaboration with the experiments and our international partners to address this challenge and be open to new ideas and emerging technologies. I believe that the new Prévessin Computing Centre will be instrumental and enhance collaboration among the experiments and the IT department.
What involvement did you have in the European strategy update?
I was a member of the European strategy group in my capacity as research director for particle physics at DESY. The strategy group contained the scientific delegates to council, plus about a dozen people from the national laboratories. I was in Bad Honnef in January 2020 for the final drafting session – it was an interesting time. If you had asked me on the Monday of that week what the result at the end would be, I would have said there was no way that we could reach consensus on a strategy. But we did, even if deciding on the specific facility to be built was beyond the ESPP mandate.
Should a post-LHC electron–positron Higgs factory be linear or circular?
Its shape is not my principal concern – I want one to be built, preferably at CERN. However, if we can get additional resources from outside the field to have one built in Japan or China, then we should grab the opportunity and try a global collaboration. I think even for the next project at CERN, we also need support from outside Europe. I don’t think the question of linear vs circular is a technology one – I think we have already mastered both technologies. We have pros and cons for both types of machine, but for me it is important that we get support for one of them, and the feasibility study that has been requested for a large circular tunnel in the Geneva area is an important step.
Young people ask me which horse will win the race – I don’t know. I consider it as my task as CERN’s director for research and computing to unite the community behind the next collider because that will be vital for our success. The next collider will be a Higgs factory and there are so many things in common between the various proposals if you consider the detectors or the physics. People should come together and try to push the idea of a Higgs factory in whatever topology. Look, I am a scientist. At DESY I have been working on linear colliders. And in the European XFEL we essentially already have a prototype for the International Linear Collider. But if CERN or China build a circular collider, I will be the first one who signs up for an experiment! I think many others think like me.
What are the main challenges in getting the next collider off the ground?
We have competition now – very severe competition. I see that in Germany everybody is now speaking about life science and biology because of the pandemic, and there are other key societal challenges such as climate and energy. These are topics that also have an interesting story to tell, and one which might be easier to understand. If someone asks me what the applications of the Higgs boson are, I reply that I don’t know. However, I am convinced that in 50 or 100 years from now, people will know. As particle physicists we have to continue to point out our role in society to motivate the investments and resources for our future plans, not just in science, but also in technology and in our impact on society. If you look at the first accelerators, they were not built with other applications in mind – they were built to understand what the core of matter is. But look at the applications of accelerators, detectors and computing that have spun off from this. X-ray science is one very strong, unforeseen example.
Would a lack of consensus for the next collider risk making physicists appear unsure about their ambitions?
Of course, there will be people who think that. However, there are also politicians, some of whom I know in the US for instance, who are very supportive of the field. Compare us to the synchrotron field: there are dozens of light-source facilities around the world. That discipline has the benefit of not having to converge on only one – each country can essentially build its own facility. We have the challenge of needing a global consensus. I think many politicians understand this. While it is true that particle physics is not a decisive topic in elections, we have a duty to share our scientific adventure and results with the public. We are very fortunate in Germany to have had a scientist as chancellor for the past 15 years, which I think is one of the main reasons Germany is flourishing.
What would be the implication for European particle physics were Japan or China to proceed with a Higgs factory?
I do not have a “gold-plated” answer for this. It really depends on things that are beyond our direct control as physicists. It could be an opportunity for CERN. One of the things that the strategy update confirms is that Europe is the leader of the field scientifically and also technologically, thanks mainly to the LHC. One of the arguments that CERN could profit from is the fact that Europe should want to remain the leader, or at least “a leader” in the field. That might be very helpful for CERN to also get a future project on track. Being the leader in the field is something that CERN, and Europe, can build upon.
What is your philosophy for successful scientific management?
I believe in flat hierarchies. Science is about competition for the best ideas, and the capital of research laboratories like CERN are the people, their motivation and their creativity. Therefore, I intend to foster the CERN spirit of fruitful collaboration in our laboratory but also with all our partners in Europe and the rest of the world.
The recently completed European strategy for particle physics (ESPP) outlines a coherent and fascinating vision for an effective and efficient exploration of the most fundamental laws of physics. Scientific recommendations for the field provide concrete guidance and priorities on future research facilities and efforts to expand our current knowledge. The depth with which we can address open mysteries about the universe depends heavily on our ability to innovate instrumentation and research infrastructures.
The ESPP calls upon the European Committee for Future Accelerators (ECFA) to develop a global detector R&D roadmap to support proposals at European and national levels. That roadmap will define the backbone of the detector R&D needed to implement the community’s vision for both the short and long term. At its plenary meeting in November, ECFA initiated a roadmap panel to develop and organise the process to realise the ESPP goals in a timely fashion. In addition to listing the targeted R&D projects required, the roadmap will also consider transformational, blue-sky R&D relevant to the ESPP.
Six technology-oriented task forces will capture each of the major components of detector instrumentation: gaseous detectors; liquid detectors; solid-state detectors; photon detection and particle identification; calorimetry; and quantum and emerging technologies. Along with three cross-cutting task forces devoted to electronics, integration and training, these efforts will proceed via in-depth consultation with the research community. An open symposium for each task force, due to be held in March or April 2021, will inform discussions that will eventually culminate in a roadmap document in the summer. To identify synergies and opportunities with adjacent research fields, an advisory panel will also be established, comprising representatives from the nuclear- and astrophysics fields, the photon- and neutron-physics communities, and those working in fusion and space research.
In parallel, with a view to stepping up accelerator R&D, the European Laboratory Directors Group is developing an accelerator R&D roadmap as a work-plan for this decade. Technologies under consideration include high-field magnets, high-temperature superconductors, plasma-wakefield acceleration and other high-gradient accelerating structures, bright muon beams, and energy-recovery linacs. The roadmap, to be completed on a similar timeline as that for detectors, will set the course for R&D and technology demonstrators to enable future facilities that support the scientific objectives of the ESPP.
Gathering for a Higgs factory
The global ambition for the next-generation accelerator beyond the HL-LHC is an electron–positron Higgs factory, which could include an electroweak and top-quark factory in its programme. Pending the outcome of the technical and financial feasibility study for a future FCC-like hadron collider at CERN, the community has at this stage not concluded which type of Higgs factory should emerge as the priority. The International Linear Collider (ILC) in Japan and the Future Circular Collider (FCC-ee) at CERN are listed, with the Compact Linear Collider (CLIC) as a possible backup.
It goes without saying, and it is within ECFA’s mandate to explore, that the duplication of similar accelerators should be avoided and that international cooperation in creating these facilities should be encouraged wherever it is essential and efficient for achieving the ESPP goals. At this point, coordination of R&D activities is crucial to maximise scientific results and to make the most efficient use of resources.
Recognising the need for the experimental and theoretical communities involved in physics studies, experiment designs and detector technologies at future Higgs factories to gather, ECFA supports a series of workshops from 2021 to share challenges and expertise, and to respond coherently to this ESPP priority. An international advisory committee will soon be formed to further identify synergies both in detector R&D and physics-analysis methods to make efforts applicable or transferable across Higgs factories. Concrete collaborative research programmes are to emerge to pursue these synergies. With the strategy discussion behind us, we now need to focus on getting things done together.