The family of charged leptons comprises the electron, the muon (μ) and the tau lepton (τ). According to the Standard Model (SM), these particles differ only in their mass: the muon is heavier than the electron, and the tau is heavier than the muon. A remarkable feature of the SM is that each lepton flavour is equally likely to interact with a W boson. This is known as lepton flavour universality.
In a new ATLAS measurement reported this week at the LHCP conference, a novel technique using events with top-quark pairs has been exploited to test the ratio of the probabilities for tau leptons and muons to be produced in W boson decays, R(τ/μ). In the SM, R(τ/μ) is expected to be unity, but a longstanding tension with this prediction has existed since the LEP era in the 1990s: a combination of the four LEP experiments measured R(τ/μ) = 1.070 ± 0.026, deviating from the SM expectation by 2.7σ. This strongly motivated new measurements with higher precision: if the LEP result were confirmed, it would correspond to an unambiguous discovery of physics beyond the SM.
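As a quick check of the quoted tension, treating the LEP uncertainty as Gaussian:

\[
\frac{R(\tau/\mu)_{\mathrm{LEP}} - 1}{\sigma_{\mathrm{LEP}}} = \frac{1.070 - 1.000}{0.026} \approx 2.7 .
\]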
Tag and probe
To conclusively prove either that the LEP discrepancy is real or that it was just a statistical fluctuation, a precision of at least 1–2% is required – something previously not thought possible at a hadron collider like the LHC, where inclusively produced W bosons, though abundant, suffer from large backgrounds and kinematic biases due to the online selection in the trigger. The key to achieving this is to obtain a sample of muons and tau leptons from W boson decays that is as insensitive as possible to the details of the trigger and object reconstruction used to select them. ATLAS has achieved this by exploiting both the LHC’s large sample of over 100 million top-quark pairs produced in the latest run, and the fact that top quarks decay almost exclusively to a W boson and a b quark. In a tag-and-probe approach, one W boson is used to select the events and the other is used, independently of the first, to measure the fractions of decays to tau leptons and muons.
The analysis focuses on tau-lepton decays to a muon, rather than hadronic tau decays, which are more complicated to reconstruct, thus reducing the systematic uncertainties associated with the object reconstruction. The tau lepton’s finite lifetime and the lower momentum of its decay products are exploited, using the precise muon reconstruction of the ATLAS detector, to distinguish muons from tau-lepton decays from muons produced directly in W decays (so-called prompt muons). Specifically, the absolute distance of closest approach of the muon track in the plane perpendicular to the beam line, |d0μ| (figure 1), and the transverse momentum of the muon, pTμ, are used to isolate these contributions. These variables, in particular |d0μ|, are calibrated using a pure sample of prompt muons from Z→μμ data.
The extraction of R(τ/μ) is performed with a fit to the |d0μ| and pTμ distributions, in which several systematic uncertainties largely cancel because they are correlated between the prompt-μ and τ→μ contributions. These include, for example, uncertainties related to jet reconstruction, flavour tagging and trigger efficiencies. As a result, the measurement achieves very high precision, surpassing that of the previous LEP measurement.
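The statistical technique can be illustrated with a minimal toy (not the ATLAS code): a binned maximum-likelihood fit of two fixed |d0| template shapes – prompt and τ→μ – to pseudo-data, with the τ→μ fraction as a free parameter. All shapes and numbers below are invented for illustration only.

```python
# Toy sketch of a binned template fit for the tau->mu fraction in a |d0| spectrum.
# Hypothetical shapes and yields; not the ATLAS analysis code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Invented |d0| templates (20 bins up to 0.5 mm): prompt muons peak at small |d0|,
# muons from tau decays have a displaced tail.
bins = np.linspace(0.0, 0.5, 21)
prompt_shape, _ = np.histogram(np.abs(rng.normal(0.0, 0.02, 200_000)), bins=bins)
tau_shape, _ = np.histogram(rng.exponential(0.08, 200_000), bins=bins)
prompt_shape = prompt_shape / prompt_shape.sum()
tau_shape = tau_shape / tau_shape.sum()

# Pseudo-data generated with an assumed "true" tau->mu fraction of 0.25.
n_events, true_frac = 50_000, 0.25
data = rng.poisson(n_events * ((1 - true_frac) * prompt_shape + true_frac * tau_shape))

def nll(params):
    """Binned Poisson negative log-likelihood for (total yield, tau->mu fraction)."""
    n_tot, f_tau = params
    mu = n_tot * ((1 - f_tau) * prompt_shape + f_tau * tau_shape)
    mu = np.clip(mu, 1e-9, None)
    return np.sum(mu - data * np.log(mu))

result = minimize(nll, x0=[data.sum(), 0.5], method="Nelder-Mead")
n_fit, f_fit = result.x
print(f"fitted tau->mu fraction: {f_fit:.3f} (true {true_frac})")
```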
The measured value is R(τ/μ) = 0.992 ± 0.013 [± 0.007 (stat) ± 0.011 (syst)], making it the most precise measurement of this ratio, with an uncertainty half the size of that from the combination of LEP results (figure 2). It agrees with the Standard Model expectation and suggests that the previous LEP discrepancy may have been due to a fluctuation.
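Assuming the statistical and systematic components are uncorrelated, the total uncertainty is their sum in quadrature:

\[
\sqrt{0.007^{2} + 0.011^{2}} \approx 0.013 .
\]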
Though surviving this latest test, the principle of lepton flavour universality will not quite be out of the woods until the anomalies in B-meson decays recorded by the LHCb experiment (CERN Courier May/June 2020 p10) have also been definitively probed.
The ALPHA collaboration at CERN has reported the first measurements of fine-structure effects and the Lamb shift in antihydrogen atoms. The results, published in Nature in February, bring further scrutiny to comparisons between antimatter and ordinary matter, which, if found to behave differently, would challenge CPT symmetry and shake the foundations of the Standard Model.
In 1947, US physicist Willis Lamb and his colleagues observed an incredibly small shift in the n = 2 energy levels of hydrogen in a vacuum. According to the accepted theory of the day, the Dirac equation, these states should have the same energy and the Lamb shift should not exist. The discovery spurred the development of quantum electrodynamics (QED), which explains the discrepancy as being due to interactions of the atom’s constituents with vacuum-energy fluctuations, and won Lamb the Nobel Prize in Physics in 1955.
Antimatter spectroscopy
The ALPHA team creates antihydrogen atoms by binding antiprotons delivered by CERN’s Antiproton Decelerator (AD) with positrons. The antiatoms are then confined in a magnetic trap in an ultra-high vacuum, and illuminated with a laser to measure their spectral response. This technique enables the measurement of known quantum effects such as the fine structure and the Lamb shift, which have now been measured in the antihydrogen atom for the first time. The ALPHA team previously used this approach to measure other quantum effects in antihydrogen, the most recent being a measurement of the Lyman–alpha (1S–2P) transition in 2018.
The fine-structure splitting of the n = 2 energy level of hydrogen is the separation between the 2P3/2 and 2P1/2 levels in the absence of a magnetic field, and is caused by the interaction between the electron’s spin and its orbital angular momentum. The classic Lamb shift is the splitting between the 2S1/2 and 2P1/2 levels, also in the absence of a magnetic field, and results from the effect on the electron of quantum fluctuations associated with virtual photons.
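For reference, in ordinary hydrogen at zero field these splittings are small but well known (approximate values):

\[
\nu(2P_{3/2} - 2P_{1/2}) \approx 10.97\ \mathrm{GHz}, \qquad
\nu(2S_{1/2} - 2P_{1/2}) \approx 1.06\ \mathrm{GHz},
\]

compared with roughly 2466 THz for the 1S–2S interval.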
The work confirms that a key portion of QED holds up in both matter and antimatter
Jeffrey Hangst
In its new study, the ALPHA team determined the fine-structure splitting and the Lamb shift by inducing transitions between the lowest (n = 1) energy level of antihydrogen and the 2P3/2 and 2P1/2 levels in the presence of a 1 T magnetic field. Using the value of the frequency of a previously measured transition (1S–2S), the team was able to infer the values of the fine-structure splitting and the Lamb shift. The results were found to be consistent with theoretical predictions of the splittings in normal hydrogen, within the experimental uncertainties of 2% for the fine-structure splitting and 11% for the Lamb shift. “The work confirms that a key portion of QED holds up in both matter and antimatter, and probes aspects of antimatter interaction – such as the Lamb shift – that we have long looked forward to addressing,” says ALPHA spokesperson Jeffrey Hangst.
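Schematically, and setting aside the Zeeman structure introduced by the 1 T trapping field, the two splittings follow from differences of measured transition frequencies:

\[
\nu_{\mathrm{FS}} = \nu(1S\text{–}2P_{3/2}) - \nu(1S\text{–}2P_{1/2}), \qquad
\nu_{\mathrm{Lamb}} = \nu(1S\text{–}2S_{1/2}) - \nu(1S\text{–}2P_{1/2}),
\]

which is why the previously measured 1S–2S frequency enters the extraction.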
The seminal measurements of antihydrogen’s spectral structure that are now possible follow more than 30 years of effort by the low-energy antimatter community at CERN. The first antihydrogen atoms were observed at CERN’s LEAR facility in 1995 and, in 2002, the ATHENA and ATRAP collaborations produced cold (trappable) antihydrogen at the AD, opening the way to precision measurements of antihydrogen’s atomic spectra. In addition to spectral measurements, the charge-to-mass ratios for the proton and antiproton have been shown to agree to 69 parts per trillion by the BASE experiment, and the antiproton-to-electron mass ratio has been measured to agree with its proton counterpart to a level of 0.8 parts per billion by the ASACUSA experiment. The newly completed ELENA facility at the AD will increase the number of available antiprotons by up to two orders of magnitude.
Next for the ALPHA team is chilling large samples of antihydrogen using state-of-the-art laser cooling techniques. “These techniques will transform antimatter studies and will allow unprecedentedly high-precision comparisons between matter and antimatter,” says Hangst.
Supersymmetry is an attractive extension of the Standard Model that aims to answer some of the most fundamental open questions in modern particle physics. For example: why is the Higgs boson so light? What is dark matter, and how does it fit into our understanding of the universe? Do the electroweak and strong forces unify at smaller distances?
Supersymmetry predicts a new partner for each elementary particle, including the heaviest particle ever observed – the top quark. If the partner of the top quark (the top squark, or “stop”) were not too heavy, the quantum corrections to the Higgs boson mass would largely cancel, thereby stabilising its small value of 125 GeV. Moreover, the lightest supersymmetric particle (LSP) may be stable and weakly interacting, providing a dark-matter candidate. Signs of the top squark, and thus supersymmetry, may yet be lurking in the enormous number of proton–proton collisions provided by the LHC.
Two new searches
The ATLAS collaboration recently released two new searches, each looking to detect pairs of top squarks by exploring the full LHC dataset corresponding to an integrated luminosity of 139 fb–1 recorded during Run 2. Each top squark decays to a top quark and an LSP that escapes the detector without interacting. Thus, our experimental signature is an event that is energetically unbalanced, with two sets of top-quark remnants and a large amount of missing energy.
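Concretely, the missing transverse momentum referred to here is the magnitude of the negative vector sum of the transverse momenta of all visible reconstructed objects:

\[
\vec{E}_T^{\,\mathrm{miss}} = -\sum_{\mathrm{visible}\ i} \vec{p}_{T,i}, \qquad
E_T^{\mathrm{miss}} = \bigl|\vec{E}_T^{\,\mathrm{miss}}\bigr| ,
\]

so a pair of invisible LSPs recoiling against the top-quark remnants produces a large value of this quantity.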
A challenge for such searches is that the masses of the supersymmetric particles are unknown, leaving a large range of possibilities to explore. Depending on the mass difference between the top squark and the LSP, the final decay products can be (very) soft or (very) energetic, calling for different reconstruction techniques and sparking the development of new approaches. For example, novel “soft b-tagging” techniques, based on either pure secondary-vertex information or jets built from tracks, were implemented for the first time in these analyses to extend the sensitivity to lower kinematic regimes. This allowed the searches to probe small top squark–LSP mass differences down to 5 GeV for the first time.
Leptoquark decays would exhibit a similar experimental signature to top-squark decays
Other sophisticated analysis strategies, including the use of machine-learning techniques, improved the discrimination between the signal and Standard-Model background and maximised the sensitivity of the analysis. Furthermore, these two searches are designed in such a way as to fully complement one another. Together they greatly extend the reach in the top squark mass versus LSP mass plane, including the challenging region where the top squark masses are very close to the top mass (figure 1). No evidence of new physics was found in any of these searches.
Beyond supersymmetry, these search results are intriguing for other new-physics scenarios. For example, the decay of a hypothetical particle coupling to a top quark and a neutrino, called a leptoquark, would exhibit an experimental signature similar to a top-squark decay. The results also constrain non-supersymmetric models in which dark matter is produced together with a pair of top quarks.
Weighing in at 180 times the mass of the proton, the top quark is the heaviest elementary particle discovered so far. Because of its large mass, it is the only quark that does not form bound states with other quarks but decays immediately after it has been produced. Despite its short lifetime, its existence has far-reaching consequences. It governs the stability of the electroweak vacuum, gives large contributions to the mass of the W boson, and influences many other important observables through quantum-loop corrections. An accurate knowledge of its mass is important for our understanding of fundamental interactions.
The top quark governs the stability of the electroweak vacuum
The LHC’s high centre-of-mass energy makes it an ideal laboratory to study the properties of the top quark with unprecedented precision. Such studies demand that jets originating from light and bottom quarks are measured very accurately. Even then, subtleties remain, as exact calculations are not possible for low-energy quarks and gluons once they start to form bound states. In this regime, our approximations become inaccurate, because the mass of the bound states becomes as large as the energy of the underlying process. An exciting way to overcome these difficulties is to measure top quarks that have been produced with very high transverse momenta and thus large Lorentz boosts. In these topologies, the decay products are highly collimated and can be clearly assigned to a decaying top quark. Effects from the formation of hadrons play a minor role in boosted topologies, as the top quarks, produced in top quark–antiquark pairs, move apart from each other fast enough that their decays can be considered to happen independently.
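A common rule of thumb makes the collimation quantitative: the decay products of a particle of mass m and transverse momentum pT are contained within a cone of angular radius

\[
\Delta R \approx \frac{2m}{p_T},
\]

so a top quark with pT ≈ 400 GeV fits inside ΔR ≈ 0.9, comfortably within a single large-radius jet.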
Boosted precision
By reconstructing a boosted top quark in a single jet, a measurement of the jet mass can be translated into one of the top-quark mass. The CMS collaboration has carried out such a measurement using the √s = 13 TeV data collected in 2016, reconstructing the top-quark jets with the novel XCone algorithm to obtain a top-quark mass of 172.6 ± 2.5 GeV (figure 1). Thanks to this new way of reconstructing jets, the precision improves by more than a factor of three relative to an earlier measurement at √s = 8 TeV. Although the uncertainty is larger than for direct measurements, where top quarks are reconstructed from multiple jets or leptons and missing transverse momentum (which currently yield a world average of 172.9 ± 0.4 GeV from a combination of CMS, ATLAS and Tevatron measurements), this new result shows for the first time the potential of using boosted top quarks for precision measurements.
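The connection exploited here is essentially the invariant mass of the jet built from its constituents,

\[
m_{\mathrm{jet}} = \sqrt{\Bigl(\sum_i E_i\Bigr)^{2} - \Bigl|\sum_i \vec{p}_i\Bigr|^{2}} ,
\]

which, for a jet that fully contains the top-quark decay products, peaks near the top-quark mass, smeared and shifted by out-of-cone radiation, the underlying event and detector resolution.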
The jet mass can be translated into the top-quark mass
Measuring the properties of the top quark at high momenta enables detailed studies of a theoretically compelling kinematic regime that has not been accessible before. Its dynamics is governed by different effects, such as the collinear radiation of gluons and quarks, than top-quark production at low energies. Exploiting the full Run-2 dataset should allow CMS to extend this measurement to higher boosts, and establish the boosted regime for a number of precision measurements in the top-quark sector in Run 3 and at the high-luminosity LHC.
Almost 90 years since Pauli postulated its existence, much remains to be learnt about the neutrino. The observation in 1998 of neutrino oscillations revealed that the particle’s flavour and mass eigenstates mix and oscillate. At least two must be massive, like the other known fermions, though with far smaller masses. The need for a mechanism to generate such small masses strongly hints at the existence of new physics beyond the Standard Model. Faced with such compelling questions, neutrino experiments are springing up at an unprecedented rate, from a plethora of searches for neutrinoless double-beta decay to gigantic astrophysical–neutrino detectors at the South Pole (IceCube) and soon in the Mediterranean Sea (KM3NeT), and two projects of enormous scope on the horizon in DUNE and Hyper-Kamiokande. Now, then, is a timely moment for the publication of a tutorial for graduate students and young researchers who are entering this fast-moving field.
Access all areas
Edited by Antonio Ereditato, former spokesperson of the OPERA experiment, The State of the Art of Neutrino Physics provides an historical account and introduction to basic concepts, reviews the various subfields where neutrinos play a significant role, and gives a detailed account of the data produced by experiments currently in operation. An extremely valuable compilation of topical articles, the book covers essentially all areas of research in experimental neutrino physics, from astrophysical, solar and atmospheric neutrinos to accelerator and reactor neutrinos. The large majority of the articles are written in a didactic style by leading experts, allowing young researchers to acquaint themselves with the diverse research in the field. In particular, the chapter describing the formalism of neutrino oscillations should be required reading for all aspiring neutrino physicists. In all cases special attention is given to experimental challenges.
From the theory side, chapters cover the low-energy interactions of neutrinos with nuclei as measured at neutrino experiments (a key way to reduce systematic uncertainties), the phenomenology and consequences of the yet-to-be-determined neutrino-mass hierarchy, and the possibility of CP violation in the lepton sector. A very detailed account of solar neutrinos and matter effects in the Sun is written by Alexei Smirnov, one of the inventors of the celebrated Mikheyev–Smirnov–Wolfenstein effect, which describes how weak interactions with electrons modify the oscillation probabilities of the various neutrino flavours. More speculative scenarios, for example the possible existence of sterile neutrinos, are discussed as well.
For a book like this, which has the ambition to address a broad palette of neutrino questions, it is always difficult to be totally complete, but it comes close. Some topics have evolved in the details since 2016, when the material upon which the book is based was written, but that doesn’t take away from the book’s value as a tutorial. I recommend it very highly to young and not-so-young aspiring neutrino aficionados alike.
The steady increase in the energy of colliders during the past 40 years, which has fuelled some of the greatest discoveries in particle physics, was possible thanks to progress in superconducting materials and accelerator magnets. The highest particle energies have been reached by proton–proton colliders, where beams of high rigidity travelling on a piecewise circular trajectory require magnetic fields largely in excess of those that can be produced using resistive electromagnets. Starting from the Tevatron in 1983, through HERA in 1991, RHIC in 2000 and finally the LHC in 2008, all large-scale hadron colliders were built using superconducting magnets.
Large superconducting magnets for detectors are just as important to high-energy physics experiments as beamline magnets are to particle accelerators. In fact, detector magnets are where superconductivity first gained a stronghold, right from the infancy of the technology in the 1960s, with major installations such as the large bubble-chamber solenoid at Argonne National Laboratory, followed by the giant BEBC solenoid at CERN, which held the record for the highest stored energy for many years. A long line of superconducting magnets has provided the magnetic fields for the detectors of all large-scale high-energy physics colliders, the most recent and largest realisations being the LHC experiments CMS and ATLAS.
Optimisation
All past accelerator and detector magnets had one thing in common: they were built using composite Nb–Ti/Cu wires and cables. Nb–Ti is a ductile alloy with a critical field of 14.5 T and critical temperature of 9.2 K, made from almost equal parts of the two constituents. It was discovered to be superconducting in 1962 and its performance, quality and cost have been optimised over more than half a century of research, development and large-scale industrial production. Indeed, it is unlikely that the performance of the LHC dipole magnets, operated so far at 7.7 T and expected to reach nominal conditions at 8.33 T, can be surpassed using the same superconducting material, or any foreseeable improvement of this alloy.
And yet, approved projects and studies for future circular machines are all calling for the development of superconducting magnets that produce fields beyond those of the LHC. These include the High-Luminosity LHC (HL-LHC), which is currently taking shape, and the Future Circular Collider (FCC) design study, both at CERN, together with studies and programmes outside Europe, such as the Super proton–proton Collider (SppC) in China or the past studies of a Very Large Hadron Collider at Fermilab and the US–DOE Muon Accelerator Program (see HL-LHC quadrupole successfully tested). This requires that we turn to other superconducting materials and novel magnet technology.
The HL-LHC springboard
To reach its main objective – increasing the levelled LHC luminosity at ATLAS and CMS, and the integrated luminosity by a factor of 10 – the HL-LHC requires very large-aperture quadrupoles in the interaction regions, with field levels at the coil in the range of 12 T. These quadrupoles, currently being built and tested at CERN and Fermilab (see HL-LHC quadrupole successfully tested), are the main fruit of the 10-year US-DOE LHC Accelerator Research Program (US–LARP) – a joint venture between CERN, Brookhaven National Laboratory, Fermilab and Lawrence Berkeley National Laboratory. In addition, the increased beam intensity calls for collimators to be inserted in locations within the LHC “dispersion suppressor”, the portion of the accelerator where the regular magnet lattice is modified to ensure that off-momentum particles are centred at the interaction points. To gain the required space, standard arc dipoles will be substituted by dipoles of shorter length and higher field, approximately 11 T. As described earlier, such fields require the use of new materials. For the HL-LHC, the material of choice is the intermetallic compound of niobium and tin, Nb3Sn, which was discovered in 1954. Nb3Sn has a critical field of about 30 T and a critical temperature of about 18 K, outperforming Nb–Ti by a factor of two. Though discovered before Nb–Ti, and exhibiting better performance, Nb3Sn has not been used for accelerator magnets so far because in its final form it is brittle and cannot withstand large stress and strain without special precautions.
The HL-LHC is the springboard to the future of high-field accelerator magnets
In fact, Nb3Sn was one of the candidate materials considered for the LHC in the late 1980s and mid 1990s. Already at that time it was demonstrated that accelerator magnets could be built with Nb3Sn, but it was also clear that the technology was complex, with a number of critical steps, and not ripe for large-scale production. A good 20 years of progress in basic material performance, cable development, magnet engineering and industrial process control was necessary to reach the present state, during which time the successful production of Nb3Sn for the ITER fusion experiment gave confidence in the credibility of this material for large-scale applications. As a result, magnet experts are now convinced that Nb3Sn technology is sufficiently mature to satisfy the challenging field levels required by the HL-LHC.
A difficult recipe
The present manufacturing recipe for Nb3Sn accelerator magnets consists of winding the magnet coil with glass-fibre insulated cables made of multi-filamentary wires that contain Nb and Sn precursors in a Cu matrix. In this form the cables can be handled and plastically deformed without breakage. The coils then undergo heat treatment, typically at a temperature of around 650 °C, during which the precursor elements react chemically and form the desired Nb3Sn superconducting phase. At this stage, the reacted coil is extremely fragile and needs to be protected from any mechanical action. This is done by injecting a polymer, which fills the interstitial spaces among cables, and is subsequently cured to become a matrix of hardened plastic providing cohesion and support to the cables.
The above process, though conceptually simple, involves a number of technical difficulties that call for top-of-the-line engineering and production control. To give some examples: the texture of the electrical insulation, consisting of a few tenths of a millimetre of glass fibre, needs to withstand the high-temperature heat-treatment step, but also to retain its dielectric and mechanical properties at liquid-helium temperatures roughly 1000 °C lower. The superconducting wire also changes its dimensions by a few percent, which is orders of magnitude larger than the dimensional accuracy required for field quality and must therefore be predicted and accommodated by appropriate magnet and tooling design. The finished coil, even if made solid by the polymer cast, remains sensitive to stress and strain. The stress that can be tolerated without breakage can be up to 150 MPa, to be compared with the electromagnetic stress in optimised magnets operating at 12 T, which can reach levels in the range of 100 MPa. This does not leave much headroom for engineering margins and manufacturing tolerances. Finally, protecting high-field magnets from quenches, with their large stored energy, requires a protection system with a very fast reaction – three times faster than at the LHC – and excellent noise rejection to avoid false trips related to flux jumps in the large Nb3Sn filaments.
The next jump
The CERN magnet group, in collaboration with the US–DOE laboratories participating in the LHC Accelerator Upgrade Project, is in the process of addressing these and other challenges, finding solutions suitable for magnet production on the scale required for the HL-LHC. A total of six 11 T dipoles (each about 6 m long) and 20 inner triplet quadrupoles (up to 7.5 m long) are in production at CERN and in the US, and the first magnets have been tested (see “Power couple” image). And yet, it is clear that we are not ready to extrapolate such production to a much larger scale, i.e. to the thousands of magnets required for a possible future hadron collider such as FCC-hh. This is exactly why the HL-LHC is so critical to the development of high-field magnets for future accelerators: not only will it be the first demonstration of Nb3Sn magnets in operation, steering and colliding beams, but by building it on a scale that can be managed at the laboratory level we have a unique opportunity to identify all the areas of necessary development, and the open technology issues, that must be addressed to allow the next jump. Beyond its prime physics objective, the HL-LHC is therefore the springboard to the future of high-field accelerator magnets.
Climb to higher peak fields
For future circular colliders, the target dipole field has been set at 16 T for FCC-hh, allowing proton–proton collisions at an energy of 100 TeV, while China’s proposed pp collider (SppC) aims at a 12 T dipole field, to be followed by a 20 T dipole. Are these field levels realistic? And based on which technology?
Looking at the dipole fields produced by Nb3Sn development magnets during the past 40 years (figure 1), fields up to 16 T have been achieved in R&D demonstrators, suggesting that the FCC target can be reached. In 2018 “FRESCA2” – a large-aperture (100 mm) dipole developed over the past decade through a collaboration between CERN and CEA-Saclay in the framework of the European Union project EuCARD – attained a record field of 14.6 T at 1.9 K (13.9 T at 4.5 K). Another very recent result, obtained in June 2019, is the successful test at Fermilab by the US Magnet Development Programme (MDP) of a “cos-theta” dipole with an aperture of 60 mm called MDPCT1 (see “Cos-theta 1” image), which reached a field of 14.1 T at 4.5 K (CERN Courier September/October 2019 p7). In February this year, the CERN magnet group set a new Nb3Sn record with an enhanced racetrack model coil (eRMC), developed in the framework of the FCC study. The setup, which consists of two racetrack coils assembled without a mid-plane gap (see “Racetrack demo” image), produced a 16.36 T central field at 1.9 K and a 16.5 T peak field on the coil, the highest ever reached for a magnet of this configuration. The magnet was also tested at 4.5 K and reached a field of about 16.3 T (see HL-LHC quadrupole successfully tested). These results send a positive signal for the feasibility of next-generation hadron colliders.
A field of 16 T seems to be the upper limit that can be reached with a Nb3Sn accelerator magnet. Indeed, though the conductor performance can still be improved, as demonstrated by recent results obtained at the National High Magnetic Field Laboratory (NHMFL), Ohio State University and Fermilab within the scope of the US-MDP, this is the point at which the material itself will run out of steam. As for any other superconductor, the critical current density drops as the field grows, requiring an increasing amount of material to carry a given current. The effect becomes dramatic when approaching a significant fraction of the critical field. Akin to Nb-Ti in the region of 8 T, a further field increase with Nb3Sn beyond 16 T would require an exceedingly large coil and an impractical amount of conductor. Reaching the ultimate performance of Nb3Sn, which will be situated between the present 12 T and the expected maximum of 16 T, still requires much work. The technology issues identified by the ongoing work on the HL-LHC magnets are exacerbated by the increase in field, electromagnetic force and stored energy. Innovative industrial solutions will be needed, and the conductor itself brought to a level of maturity comparable to Nb–Ti in terms of performance, quality and cost. This work is the core of the ongoing FCC magnet-development programme that CERN is pursuing in collaboration with laboratories, universities and industries worldwide.
As the limit of Nb3Sn comes into view, we see history repeating itself: the only way to push beyond it to higher fields will be to resort to new materials. Since Nb3Sn is technically the low-temperature superconductor (LTS) with the highest performance, this will require a shift to high-temperature superconductors.
High-temperature superconductivity (HTS), discovered in 1986, is of great relevance in the quest for high fields. When operated at low temperature (the same liquid-helium range as LTS), HTS materials have exceedingly large critical fields in the range of 100 T and above. And yet, only recently has the material and magnet engineering reached the point where HTS materials can generate magnetic fields in excess of LTS ones. The first user applications coming to fruition are ultra-high-field NMR magnets, as recently delivered by Bruker Biospin, and the intense magnetic fields required by materials science, for example the 32 T all-superconducting user facility built at NHMFL.
As for their application in accelerator magnets, the potential of HTS to make a quantum leap is enormous. But it is also clear that the tough challenges that needed to be solved for Nb3Sn will escalate to a formidable level in HTS accelerator magnets. The magnetic force scales with the square of the field produced by the magnet, and for HTS the problem will no longer be whether the material can carry the super-currents, but rather how to manage stresses approaching structural material limits. Stored energy has the same square-dependence on the field, and quench detection and protection in large HTS magnets are still a spectacular challenge. In fact, HTS magnet engineering will probably differ so much from the LTS paradigm that it is fair to say that we do not yet know whether we have identified all the issues that need to be solved. HTS is the most exciting class of material to work with; the new world for brave explorers. But it is still too early to count on practical applications, not least because the production cost for this rather complex class of ceramic materials is about two orders of magnitude higher than that of good-old Nb–Ti.
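The quadratic scaling can be made concrete with an illustrative estimate: the equivalent magnetic pressure (equal to the stored energy density) is

\[
p_{\mathrm{mag}} = \frac{B^{2}}{2\mu_0} \approx
\begin{cases}
28\ \mathrm{MPa} & (B = 8.33\ \mathrm{T})\\
57\ \mathrm{MPa} & (B = 12\ \mathrm{T})\\
102\ \mathrm{MPa} & (B = 16\ \mathrm{T}),
\end{cases}
\]

and peak stresses in a real coil are higher still, because the forces accumulate across the winding.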
It is thus logical to expect the near future to be based mainly on Nb3Sn. With the first demonstration to come imminently in the LHC, we need to consolidate the technology and bring it to the maturity necessary for large-scale production. This will likely take place in steps – exploring 12 T territory first, while seeking solutions to the challenges of ultimate Nb3Sn performance towards 16 T – and could take as long as a decade. For China’s SppC, iron-based HTS has been suggested as a route to 20 T dipoles. This technology study is interesting from the point of view of the material, but the magnet technology for iron-based superconductors is still rather far away.
Meanwhile, nurtured by novel ideas and innovative solutions, HTS could grow from the present state of a material of great potential to its first applications. The LHC already uses HTS tapes (based on Bi-2223) for the superconducting part of the current leads. The HL-LHC will go further, by pioneering the use of MgB2 to transport the large currents required to power the new magnets over considerable distances (thereby shielding power converters and making maintenance much easier). The grand challenges posed by HTS will likely require a revolution rather than an evolution of magnet technology, and significant technology advancement leading to large-scale application in accelerators can only be imagined on the 25-year horizon.
Road to the future
There are two important messages to retain from this rather simplified perspective on high-field magnets for accelerators. Firstly, given the long lead times of this technology, and even in times of uncertainty, it is important to maintain a healthy and ambitious programme so that the next step in technology is at hand when critical decisions on the accelerators of the future are due. Secondly, with such long development cycles and very specific technology, it is not realistic to rely on the private sector to advance and sustain the specific demands of high-energy physics. In fact, the business model of high-energy physics is very peculiar, involving long investment times followed by short production bursts, and is not sustainable by present industry standards. So, without taking the place of industry, it is crucial to secure critical know-how and infrastructure within the field to meet development needs and ensure the long-term future of our accelerators, present and to come.
After the discovery of the long‑sought Higgs boson at a mass of 125 GeV, a major question in particle physics is whether the electroweak symmetry breaking sector is indeed as simple as the one implemented in the Standard Model (SM), or whether there are additional Higgs bosons. Additional Higgs bosons would occur, for example, in the presence of a second Higgs field, as realised in two‑Higgs doublet models, among which is the well‑known minimal supersymmetric extension of the SM (MSSM). The discovery of additional Higgs bosons could therefore be a gateway to new symmetries in nature.
ATLAS has recently released results of a search for heavy Higgs bosons decaying into a pair of tau leptons using the complete LHC Run 2 dataset (139 fb–1 of 13 TeV proton–proton data). The new analysis provides a considerable increase in sensitivity to MSSM scenarios compared to previous results.
The MSSM features five Higgs bosons
The MSSM features five Higgs bosons, of which the observed Higgs boson can be the lightest. The couplings of the heavy Higgs bosons to down-type leptons and quarks, such as the tau lepton and bottom quark, are enhanced for large values of tan β – the ratio of the vacuum expectation values of the two Higgs doublets, and one of the key parameters of the model. The heavy neutral Higgs bosons A (CP odd) and H (CP even) are produced mainly via gluon–gluon interactions or in association with bottom quarks. Their branching fractions to tau leptons can reach sizeable values across a large part of the model-parameter space, making this channel particularly sensitive to a wide range of MSSM scenarios.
The new ATLAS search requires the presence of two oppositely charged tau-lepton candidates, one of which is identified as a hadronic tau decay and the other as either a hadronic or a leptonic decay. To profit from the enhanced production of signal events in association with bottom quarks at large tan β values (for example, when the heavy Higgs boson is radiated by a b-quark produced in the collision of two gluons), the data are further categorised based on the presence or absence of additional b-jets. One of the challenges of the analysis is the background in which hadronic jets are misidentified as tau candidates. This background is estimated from data by measuring the misidentification probabilities and applying them to events in control regions representative of the event selection. The final discriminant is the quantity mTtot, which is built from a combination of the transverse masses of the two tau-lepton decay products (figure 1).
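The total transverse mass is typically constructed from the two tau candidates (τ1, τ2) and the missing transverse momentum as

\[
m_T^{\mathrm{tot}} = \sqrt{\,m_T^{2}(\tau_1,\tau_2) + m_T^{2}(\tau_1, E_T^{\mathrm{miss}}) + m_T^{2}(\tau_2, E_T^{\mathrm{miss}})\,},
\qquad
m_T(a,b) = \sqrt{2\,p_T^{a}\,p_T^{b}\,\bigl(1 - \cos\Delta\phi_{ab}\bigr)} ,
\]

a quantity that grows with the mass of the resonance producing the tau pair.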
The data agree with the prediction assuming no additional Higgs bosons, apart from a small, non-significant excess around a putative signal mass of 400 GeV. The measurement places limits on the production cross section that can be translated into constraints on MSSM parameters. One realisation of the MSSM is the hMSSM scenario, in which knowledge of the observed Higgs-boson mass is used to reduce the number of parameters. The A/H → ττ exclusion limit dominates over large parts of the parameter space (figure 2), but still leaves room for possible discoveries at masses above the top quark–antiquark production threshold. ATLAS continues to refine this analysis and to conduct further searches for heavy Higgs bosons in various final states.
The coupling between quarks and gluons depends strongly on the energy scale of the process. The same is true for the masses of the quarks. This effect – the so‑called “running” of the strong coupling constant and the quark masses – is described by the renormalisation group equations (RGEs) of quantum chromodynamics (QCD). The experimental verification of the RGEs is both an important test of the validity of QCD and an indirect search for unknown physics, as physics beyond the Standard Model could modify the RGEs at scales probed by the Large Hadron Collider. The running of the strong coupling constant has been established at many experiments in the past, and, over the past 20 years, evidence for the running of the masses of the charm and bottom quarks was demonstrated using data from LEP, SLC and HERA, though the running of the top‑quark mass has hitherto proven elusive.
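At leading order, the solution of the RGE for a quark mass can be written compactly in terms of the running coupling:

\[
m(\mu) = m(\mu_0)\left[\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\right]^{12/(33-2n_f)} ,
\]

where n_f is the number of active quark flavours (the exponent is 4/7 for n_f = 6). Since α_s decreases with increasing scale, the running mass decreases as well.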
CMS has probed the running of the mass of the top quark for the first time
The CMS collaboration has now, for the first time, probed the running of the mass of the top quark. The measurement was performed using proton–proton collision data at a centre‑of‑mass energy of 13 TeV, recorded by the CMS detector in 2016. The top quark’s mass was determined as a function of the invariant mass of the top quark–antiquark system (the energy scale of the process), by comparing differential measurements of the system’s production cross section with theoretical predictions. In the vast majority of the cases, top quarks decay into a W boson and a bottom quark. In this analysis, candidate events are selected in the final state where one W boson decays into an electron and a neutrino, and the other decays into a muon and a neutrino.
One-loop agreement
The cross section was determined using a maximum-likelihood fit to multi-differential distributions of final-state observables, allowing the precision of the measurement to be significantly improved compared with standard methods (figure 1). The measured cross section was then used to extract the value of the top-quark mass as a function of the energy scale. The running was determined with respect to an arbitrary reference scale. The measured points agree with the one-loop solution of the RGE within 1.1 standard deviations, and a hypothetical no-running scenario is excluded at above 95% confidence level.
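For orientation, the expected size of the effect can be sketched with a few lines of code implementing only the one-loop running. This is an illustrative toy, not the CMS analysis: it ignores flavour thresholds and higher orders, and the reference value of the MSbar top mass is an assumed round number.

```python
# One-loop running of alpha_s and the top-quark MSbar mass with n_f = 6,
# ignoring flavour thresholds; illustrative numbers only.
import numpy as np

N_F = 6
BETA0 = 11.0 - 2.0 * N_F / 3.0          # one-loop QCD beta-function coefficient
GAMMA_EXP = 12.0 / (33.0 - 2.0 * N_F)   # LO mass-running exponent (= 4/7 for n_f = 6)

ALPHA_S_MZ, M_Z = 0.118, 91.19          # reference coupling at the Z mass (GeV)
M_T_REF, MU_REF = 163.0, 163.0          # assumed MSbar top mass at its own scale (GeV)

def alpha_s(mu):
    """One-loop running coupling evolved from alpha_s(M_Z)."""
    return ALPHA_S_MZ / (1.0 + BETA0 * ALPHA_S_MZ / (2.0 * np.pi) * np.log(mu / M_Z))

def m_top(mu):
    """Leading-order running top mass m_t(mu) evolved from the reference scale."""
    return M_T_REF * (alpha_s(mu) / alpha_s(MU_REF)) ** GAMMA_EXP

for mu in (400.0, 600.0, 1000.0):
    print(f"mu = {mu:6.0f} GeV  alpha_s = {alpha_s(mu):.4f}  m_t(mu) = {m_top(mu):6.1f} GeV")
```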
This novel result supports the validity of the RGEs up to a scale of the order of 1 TeV. Its precision is limited by systematic uncertainties related to experimental calibrations and the modelling of the top‑quark production in the simulation. Further progress will not only require a significant effort in improving the calibrations of the final‑state objects, but also substantial theoretical developments.
Jets are the most abundant high‑energy objects produced in collisions at the LHC, and often contaminate searches for new physics. In heavy‑ion collisions, however, these collimated showers of hadrons are not a background but one of the main tools to probe the deconfined state of strongly interacting matter known as the quark‑gluon plasma.
There are many open questions about the structure of the quark-gluon plasma: What are the relevant degrees of freedom? How do high-energy quarks and gluons interact with the hot QCD medium? Do factorisation and universality hold in this extreme environment? To answer these questions, experiments study how jets are modified in heavy-ion collisions, where, unlike in proton-proton collisions, they may interact with the constituents of the quark-gluon plasma. Since jet production and interactions can be computed in perturbative QCD, comparing theoretical calculations with measurements can provide insight into the properties of the quark-gluon plasma.
Soft power
In this spirit, the ALICE collaboration has measured the inclusive jet production yield in both Pb-Pb and proton–proton (pp) collisions at a centre-of-mass energy of 5.02 TeV. Jets were reconstructed from a combination of information from the ALICE tracking detectors and electromagnetic calorimeter for a variety of jet radii R. The detectors’ excellent performance with soft tracks was exploited to allow the measurements to cover the lowest jet transverse momentum (pT,jet) region measured at the LHC, where jet-modification effects are predicted to be strongest. The measured jet yields in Pb-Pb collisions exhibit strong suppression compared to pp collisions, consistent with theoretical expectations that jets lose energy as they propagate through the quark-gluon plasma (figure 1). For relatively narrow R = 0.2 jets, the data show stronger suppression at lower pT,jet than at higher pT,jet, suggesting that lower-pT,jet jets lose a larger fraction of their energy. Additionally, the data show no significant R dependence of the suppression within the uncertainties of the measurement, which places constraints on the angular distribution of the “lost” energy.
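The suppression is conventionally quantified with the nuclear modification factor,

\[
R_{AA} = \frac{1}{\langle T_{AA}\rangle}\,
\frac{\mathrm{d}N_{\mathrm{jet}}/\mathrm{d}p_{T}\big|_{\mathrm{Pb\text{-}Pb}}}
{\mathrm{d}\sigma_{\mathrm{jet}}/\mathrm{d}p_{T}\big|_{pp}} ,
\]

where ⟨T_AA⟩ is the average nuclear overlap function; R_AA = 1 corresponds to no modification, while values below unity indicate suppression.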
Several theoretical models, spanning a range of physics approximations from weak to strong jet–medium coupling, were compared with the data. The models generally describe the trends in the data, but several exhibit hints of disagreement with the measurements. These data complement existing jet measurements from ATLAS and CMS, and take advantage of ALICE’s high-precision tracking system to provide additional constraints on jet-quenching models in heavy-ion collisions at low pT. Moreover, these measurements can be used in combination with other jet observables to extract properties of the medium, such as the transverse-momentum diffusion parameter, which describes the angular broadening of jets as they traverse the quark–gluon plasma, as a function of the medium temperature and the jet pT.
The “reference” measurements in pp collisions contain important QCD physics themselves. This new set of measurements was performed systematically from R = 0.1 to R = 0.6, in order to span from small R, where hadronisation effects are large, to large R, where underlying-event effects are large. These data can be used to constrain the perturbative structure of the inclusive jet cross section, as well as hadronisation and underlying-event effects, which are of broad interest to the high-energy physics community.
Going forward, ALICE is actively working to further constrain theoretical predictions in both pp and Pb‑Pb collisions by exploring complementary jet measurements, including jet substructure, heavy‑flavour jets, and more. With a nearly 10 times larger Pb‑Pb data sample collected in 2018, upcoming analyses of the data will be important for connecting observed jet modifications to properties of the quark‑gluon plasma.
Physics beyond the Standard Model must exist to account for dark matter, the smallness of neutrino masses and the dominance of matter over antimatter in the universe; but we have no real clue of its energy scale. It is also widely recognised that new and more precise tools will be needed to be certain that the 125 GeV boson discovered in 2012 is indeed the particle postulated by Brout, Englert, Higgs and others to have modified the base potential of the whole universe through its coupling to itself, liberating the energy that gives the W and Z bosons their masses.
To tackle these big questions, and others, the Future Circular Collider (FCC) study, launched in 2014, proposed the construction of a new 100 km circular tunnel to first host an intensity-frontier 90 to 365 GeV e+e– collider (FCC-ee), and then an energy-frontier (> 100 TeV) hadron collider, which could potentially also allow electron–hadron collisions. Potentially following the High-Luminosity LHC in the late 2030s, FCC-ee would provide 5 × 10¹² Z decays – over five orders of magnitude more than the full LEP era – followed by 10⁸ W pairs, 10⁶ Higgs bosons (ZH events) and 10⁶ top-quark pairs. In addition to providing the highest parton centre-of-mass energies foreseeable today (up to 40 TeV), FCC-hh would also produce more than 10¹³ top quarks and W bosons, and 50 billion Higgs bosons per experiment.
Rising to the challenge
Following the publication of the four-volume conceptual design report and submissions to the European strategy discussions, the third FCC Physics and Experiments Workshop was held at CERN from 13 to 17 January, gathering more than 250 participants for 115 presentations, and establishing a considerable programme of work for the coming years. Special emphasis was placed on the feasibility of theory calculations matching the experimental precision of FCC-ee. The theory community is rising to the challenge. To reach the required precision at the Z-pole, three-loop calculations of quantum electroweak corrections must include all the heavy Standard Model particles (W±, Z, H, t).
In parallel, a significant focus of the meeting was on detector designs for FCC-ee, with the aim of forming experimental proto-collaborations by 2025. The design of the interaction region allows for a beam vacuum tube of 1 cm radius in the experiments – a very promising condition for vertexing, lifetime measurements and the separation of bottom and charm quarks from light-quark and gluon jets. Elegant solutions have been found to bring the final-focus magnets close to the interaction point, using either standard quadrupoles or a novel magnet design using a superposition of off-axis (“canted”) solenoids. Delegates discussed solutions for vertexing, tracking and calorimetry during a Z-pole run at FCC-ee, where data acquisition and trigger electronics would be confronted with visible Z decays at 70 kHz, all of which would have to be recorded in full detail. A new subject was π/K/p identification at energies from 100 MeV to 40 GeV – a consequence of the strategy process, during which considerable interest was expressed in the flavour-physics programme at FCC-ee.
Physicists cannot refrain from investigating improvements
The January meeting showed that physicists cannot refrain from investigating improvements, in spite of the impressive statistics offered by the baseline design of FCC-ee. Increasing the number of interaction points from two to four is a promising way to nearly double the total delivered luminosity for little extra power consumption, but construction costs and compatibility with a possible subsequent hadron collider must be determined. A bolder idea discussed at the workshop aims to improve both luminosity (by a factor of 10) and energy reach (perhaps up to 600 GeV) by turning FCC-ee into a 100 km energy-recovery linac. The cost, and how well this would actually work, are yet to be established. Finally, a tantalising possibility is to produce the Higgs boson directly in the s-channel, e+e– → H, by sitting exactly at a centre-of-mass energy equal to the Higgs-boson mass. This would allow unique access to the tiny coupling of the Higgs boson to the electron. As the Higgs width (4.2 MeV in the Standard Model) is more than 20 times smaller than the natural energy spread of the beam, this would require a beam manipulation called monochromatisation and a careful running procedure, which a task force was nominated to study.
The ability to precisely probe the self-coupling of the Higgs boson is the keystone of the FCC physics programme. As noted above, this self-interaction is the key to the electroweak phase transition, and could have important cosmological implications. Building on the solid foundation of precise and model-independent measurements of Higgs couplings at FCC-ee, FCC-hh would be able to access the Hμμ, Hγγ, HZγ and Htt couplings at sub-percent precision. Further study of double-Higgs production at FCC-hh shows that the Higgs self-coupling could be measured with a statistical precision of a couple of percent with the full statistics – which is to say that after the first few years of running the precision will already have been reduced to below 10%. This is much faster than previously realised, and definitely constituted the highlight of the workshop.
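The quoted improvement follows from simple counting statistics. Taking, purely for illustration, a full-statistics precision of about 3%:

\[
\delta_{\mathrm{stat}}(L) = \delta_{\mathrm{full}}\sqrt{\frac{L_{\mathrm{full}}}{L}}
\;\Rightarrow\;
\delta_{\mathrm{stat}} \approx 10\% \ \text{already at}\ L \approx \left(\tfrac{3}{10}\right)^{2} L_{\mathrm{full}} \approx 0.1\,L_{\mathrm{full}} ,
\]

i.e. roughly a tenth of the final dataset.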