
Taming the superconductors of tomorrow

The steady increase in the energy of colliders during the past 40 years, which has fuelled some of the greatest discoveries in particle physics, was possible thanks to progress in superconducting materials and accelerator magnets. The highest particle energies have been reached by proton–proton colliders, where high-rigidity beams travelling on a piecewise circular trajectory require magnetic fields well in excess of those that can be produced using resistive electromagnets. Starting with the Tevatron in 1983, through HERA in 1991, RHIC in 2000 and finally the LHC in 2008, all large-scale hadron colliders have been built using superconducting magnets.
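The need for high fields follows directly from beam rigidity: for a singly charged particle, momentum, dipole field and bending radius are tied by p [GeV/c] ≈ 0.2998 · B [T] · ρ [m]. A quick sanity check of this scaling (the bending radii below are approximate, illustrative values, not official machine parameters):

```python
def beam_momentum_gev(b_field_t: float, bend_radius_m: float) -> float:
    """Beam rigidity: p [GeV/c] ~= 0.2998 * B [T] * rho [m] for unit charge."""
    return 0.2998 * b_field_t * bend_radius_m

# LHC: 8.33 T dipoles on a ~2804 m bending radius -> ~7 TeV per beam
print(beam_momentum_gev(8.33, 2804.0))

# FCC-hh target: 16 T dipoles and a ~10.4 km bending radius -> ~50 TeV per beam
print(beam_momentum_gev(16.0, 10400.0))
```

At fixed tunnel size, the only lever left on beam energy is the dipole field, which is why the field record drives the energy record.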

A Nb3Sn cable

Large superconducting magnets for detectors are just as important to high-energy physics experiments as beamline magnets are to particle accelerators. In fact, detector magnets are where superconductivity took its stronghold, right from the infancy of the technology in the 1960s, with major installations such as the large bubble-chamber solenoid at Argonne National Laboratory, followed by the giant BEBC solenoid at CERN, which held the record for the highest stored energy for many years. A long line of superconducting magnets has provided the magnetic fields for detectors of all large-scale high-energy physics colliders, with the most recent and largest realisation being the LHC experiments, CMS and ATLAS.

Optimisation

All past accelerator and detector magnets had one thing in common: they were built using composite Nb–Ti/Cu wires and cables. Nb–Ti is a ductile alloy with a critical field of 14.5 T and critical temperature of 9.2 K, made from almost equal parts of the two constituents. It was discovered to be superconducting in 1962 and its performance, quality and cost have been optimised over more than half a century of research, development and large-scale industrial production. Indeed, it is unlikely that the performance of the LHC dipole magnets, operated so far at 7.7 T and expected to reach nominal conditions at 8.33 T, can be surpassed using the same superconducting material, or any foreseeable improvement of this alloy.

One of the 11 T niobium-tin dipoles for the HL-LHC

And yet, approved projects and studies for future circular machines are all calling for the development of superconducting magnets that produce fields beyond those produced for the LHC. These include the High-Luminosity LHC (HL-LHC), which is currently taking shape, and the Future Circular Collider design study (FCC), both at CERN, together with studies and programmes outside Europe, such as the Super proton–proton Collider in China (SppC) or the past studies of a Very Large Hadron Collider at Fermilab and the US–DOE Muon Accelerator Program (see HL-LHC quadrupole successfully tested). This requires that we turn to other superconducting materials and novel magnet technology.

The HL-LHC springboard

To reach its main objective, to increase the levelled LHC luminosity at ATLAS and CMS, and the integrated luminosity by a factor of 10, the HL-LHC requires very large-aperture quadrupoles, with field levels at the coil in the range of 12 T in the interaction regions. These quadrupoles, currently being built and tested at CERN and Fermilab (see HL-LHC quadrupole successfully tested), are the main fruit of the 10-year US-DOE LHC Accelerator Research Program (US–LARP) – a joint venture between CERN, Brookhaven National Laboratory, Fermilab and Lawrence Berkeley National Laboratory. In addition, the increased beam intensity calls for collimators to be inserted in locations within the LHC “dispersion suppressor”, the portion of the accelerator where the regular magnet lattice is modified to ensure that off-momentum particles are centred in the interaction points. To gain the required space, standard arc dipoles will be substituted by dipoles of shorter length and higher field, approximately 11 T. As described earlier, such fields require the use of new materials. For the HL-LHC, the material of choice is Nb3Sn, an intermetallic compound of niobium and tin discovered in 1954. Nb3Sn has a critical field of about 30 T and a critical temperature of about 18 K, outperforming Nb–Ti by a factor of two. Though discovered before Nb–Ti, and exhibiting better performance, Nb3Sn has not been used for accelerator magnets so far because in its final form it is brittle and cannot withstand large stress and strain without special precautions.

The HL-LHC is the springboard to the future of high-field accelerator magnets

In fact, Nb3Sn was one of the candidate materials considered for the LHC in the late 1980s and mid 1990s. Already at that time it was demonstrated that accelerator magnets could be built with Nb3Sn, but it was also clear that the technology was complex, with a number of critical steps, and not ripe for large-scale production. A good 20 years of progress in basic material performance, cable development, magnet engineering and industrial process control was necessary to reach the present state, during which time the success of the production of Nb3Sn for the ITER fusion experiment has given confidence in the credibility of this material for large-scale applications. As a result, magnet experts are now convinced that Nb3Sn technology is sufficiently mature to satisfy the challenging field levels required by the HL-LHC.

A difficult recipe

The present manufacturing recipe for Nb3Sn accelerator magnets consists of winding the magnet coil with glass-fibre insulated cables made of multi-filamentary wires that contain Nb and Sn precursors in a Cu matrix. In this form the cables can be handled and plastically deformed without breakage. The coils then undergo heat treatment, typically at a temperature of around 650 °C, during which the precursor elements react chemically and form the desired Nb3Sn superconducting phase. At this stage, the reacted coil is extremely fragile and needs to be protected from any mechanical action. This is done by injecting a polymer, which fills the interstitial spaces among cables, and is subsequently cured to become a matrix of hardened plastic providing cohesion and support to the cables.

Nb3Sn 11 T dipoles for the HL-LHC

The above process, though conceptually simple, has a number of technical difficulties that call for top-of-the-line engineering and production control. To give some examples, the texture of the electrical insulation, consisting of a few tenths of mm of glass fibre, needs to be able to withstand the high-temperature heat-treatment step, but also retain dielectric and mechanical properties at liquid-helium temperatures 1000 °C lower. The superconducting wire also changes its dimensions by a few percent, which is orders of magnitude larger than the dimensional accuracy requested for field quality and therefore must be predicted and accommodated for by appropriate magnet and tooling design. The finished coil, even if it is made solid by the polymer cast, still remains stress and strain sensitive. The level of stress that can be tolerated without breakage can be up to 150 MPa, to be compared to the electromagnetic stress of optimised magnets operating at 12 T that can reach levels in the range of 100 MPa. This does not leave much headroom for engineering margins and manufacturing tolerances. Finally, protecting high-field magnets from quenches, with their large stored energy, requires that the protection system has a very fast reaction – three times faster than at the LHC – and excellent noise rejection to avoid false trips related to flux jumps in the large Nb3Sn filaments.

The next jump

The CERN magnet group, in collaboration with the US–DOE laboratories participating in the LHC Accelerator Upgrade Project, is in the process of addressing these and other challenges, finding solutions suitable for a magnet production on the scale required for the HL-LHC. A total of six 11 T dipoles (each about 6 m long) and 20 inner triplet quadrupoles (up to 7.5 m long) are in production at CERN and in the US, and the first magnets have been tested (see “Power couple” image). And yet, it is clear that we are not ready to extrapolate such production on a much larger scale, i.e. to the thousands of magnets required for a possible future hadron collider such as FCC-hh. This is exactly why the HL-LHC is so critical to the development of high-field magnets for future accelerators: not only will it be the first demonstration of Nb3Sn magnets in operation, steering and colliding beams, but by building it on a scale that can be managed at the laboratory level we have a unique opportunity to identify all the areas of necessary development, and the open technology issues, to allow the next jump. Beyond its prime physics objective, the HL-LHC is therefore the springboard to the future of high-field accelerator magnets.

Climb to higher peak fields

For future circular colliders, the target dipole field has been set at 16 T for FCC-hh, allowing proton–proton collisions at an energy of 100 TeV, while China’s proposed pp collider (SppC) aims at a 12 T dipole field, to be followed by a 20 T dipole. Are these field levels realistic? And based on which technology?

The MDP “cos-theta 1” dipole accelerator magnet at Fermilab

Looking at the dipole fields produced by Nb3Sn development magnets during the past 40 years (figure 1), fields up to 16 T have been achieved in R&D demonstrators, suggesting that the FCC target can be reached. In 2018 “FRESCA2” – a large-aperture (100 mm) dipole developed over the past decade through a collaboration between CERN and CEA-Saclay in the framework of the European Union project EuCARD – attained a record field of 14.6 T at 1.9 K (13.9 T at 4.5 K). Another very recent result, obtained in June 2019, is the successful test at Fermilab by the US Magnet Development Programme (MDP) of a “cos-theta” dipole with an aperture of 60 mm called MDPCT1 (see “Cos-theta 1” image), which reached a field of 14.1 T at 4.5 K (CERN Courier September/October 2019 p7). In February this year, the CERN magnet group set a new Nb3Sn record with an enhanced racetrack model coil (eRMC), developed in the framework of the FCC study. The setup, which consists of two racetrack coils assembled without mid-plane gap (see “Racetrack demo” image), produced a 16.36 T central field at 1.9 K and a 16.5 T peak field on the coil, which is the highest ever reached for a magnet of this configuration. The magnet was also tested at 4.5 K and reached a field of about 16.3 T (see HL-LHC quadrupole successfully tested). These results send a positive signal for the feasibility of next-generation hadron colliders.

A field of 16 T seems to be the upper limit that can be reached with a Nb3Sn accelerator magnet. Indeed, though the conductor performance can still be improved, as demonstrated by recent results obtained at the National High Magnetic Field Laboratory (NHMFL), Ohio State University and Fermilab within the scope of the US-MDP, this is the point at which the material itself will run out of steam. As for any other superconductor, the critical current density drops as the field grows, requiring an increasing amount of material to carry a given current. The effect becomes dramatic when approaching a significant fraction of the critical field. Akin to Nb–Ti in the region of 8 T, a further field increase with Nb3Sn beyond 16 T would require an exceedingly large coil and an impractical amount of conductor. Reaching the ultimate performance of Nb3Sn, which will be situated between the present 12 T and the expected maximum of 16 T, still requires much work. The technology issues identified by the ongoing work on the HL-LHC magnets are exacerbated by the increase in field, electromagnetic force and stored energy. Innovative industrial solutions will be needed, and the conductor itself brought to a level of maturity comparable to Nb–Ti in terms of performance, quality and cost. This work is the core of the ongoing FCC magnet-development programme that CERN is pursuing in collaboration with laboratories, universities and industries worldwide.
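The "run out of steam" argument can be made concrete with a deliberately crude model. The sketch below assumes a thin-shell cos-theta dipole (bore field B = μ0·J·w/2 for overall current density J and coil width w) and an invented linear fall of critical current density with field; none of the numbers are real conductor specifications, but the divergence of the required coil width as B approaches the critical field is generic:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def coil_width_mm(b_target_t, jc0=3000.0, bc2=27.0, fill=0.3):
    """Toy estimate of the coil width needed to reach a given bore field.
    Assumes a thin-shell cos-theta dipole, B = mu0 * J * w / 2, with an
    illustrative linear critical-current density jc0*(1 - B/bc2) in A/mm^2
    and an engineering fill factor. Not a real conductor parameterisation."""
    j_eff = fill * jc0 * 1e6 * max(0.0, 1.0 - b_target_t / bc2)  # A/m^2 at field
    if j_eff <= 0.0:
        return float("inf")  # above the critical field no current flows
    return 2.0 * b_target_t / (MU0 * j_eff) * 1e3  # metres -> mm

for b in (8.0, 12.0, 16.0, 20.0):
    print(f"{b:4.1f} T -> {coil_width_mm(b):6.1f} mm of coil")
```

With these toy inputs the coil width more than triples between 8 T and 16 T and grows much faster thereafter, which is the economic wall the text describes.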

As the limit of Nb3Sn comes into view, we see history repeating itself: the only way to push beyond it to higher fields will be to resort to new materials. Since Nb3Sn is technically the low-temperature superconductor (LTS) with the highest performance, this will require a shift to high-temperature superconductors.

Figure 1

High-temperature superconductivity (HTS), discovered in 1986, is of great relevance in the quest for high fields. When operated at low temperature (the same liquid-helium range as LTS), HTS materials have exceedingly large critical fields in the range of 100 T and above. And yet, only recently has the material and magnet engineering reached the point where HTS materials can generate magnetic fields in excess of LTS ones. The first user applications coming to fruition are ultra-high-field NMR magnets, as recently delivered by Bruker Biospin, and the intense magnetic fields required by materials science, for example the 32 T all-superconducting user facility built at NHMFL.

As for their application in accelerator magnets, the potential of HTS to make a quantum leap is enormous. But it is also clear that the tough challenges that needed to be solved for Nb3Sn will escalate to a formidable level in HTS accelerator magnets. The magnetic force scales with the square of the field produced by the magnet, and for HTS the problem will no longer be whether the material can carry the super-currents, but rather how to manage stresses approaching structural material limits. Stored energy has the same square-dependence on the field, and quench detection and protection in large HTS magnets are still a spectacular challenge. In fact, HTS magnet engineering will probably differ so much from the LTS paradigm that it is fair to say that we do not yet know whether we have identified all the issues that need to be solved. HTS is the most exciting class of material to work with; the new world for brave explorers. But it is still too early to count on practical applications, not least because the production cost for this rather complex class of ceramic materials is about two orders of magnitude higher than that of good-old Nb–Ti.
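The square-law scalings quoted above follow from the magnetic energy density u = B²/(2μ0), which a few lines make explicit (the field values are taken from the article; the calculation is a standard textbook formula, not a magnet design number):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def energy_density_mj_per_m3(b_tesla: float) -> float:
    """Magnetic energy density u = B^2 / (2*mu0), in MJ per cubic metre."""
    return b_tesla**2 / (2.0 * MU0) / 1e6

for b in (8.33, 12.0, 16.0):
    print(f"{b:5.2f} T -> {energy_density_mj_per_m3(b):6.1f} MJ/m^3")
```

Going from the LHC's 8.33 T to 16 T raises the stored energy density by a factor of (16/8.33)² ≈ 3.7, and the electromagnetic forces scale the same way, which is why stress management and quench protection dominate the high-field challenge.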

A Nb3Sn demonstrator racetrack dipole magnet

It is thus logical to expect the near future to be based mainly on Nb3Sn. With the first demonstration to come imminently in the LHC, we need to consolidate the technology and bring it to the maturity necessary for large-scale production. This will likely take place in steps – exploring 12 T territory first, while seeking solutions to the challenges of ultimate Nb3Sn performance towards 16 T – and could take as long as a decade. For China’s SppC, iron-based HTS has been suggested as a route to 20 T dipoles. This study is interesting from the point of view of the material, but the magnet technology for iron-based superconductors is still a long way off.

Meanwhile, nurtured by novel ideas and innovative solutions, HTS could grow from the present state of a material of great potential to its first applications. The LHC already uses HTS tapes (based on Bi-2223) for the superconducting part of the current leads. The HL-LHC will go further, by pioneering the use of MgB2 to transport the large currents required to power the new magnets over considerable distances (thereby shielding power converters and making maintenance much easier). The grand challenges posed by HTS will likely require a revolution rather than an evolution of magnet technology, and significant technology advancement leading to large-scale application in accelerators can only be imagined on the 25-year horizon.

Road to the future

There are two important messages to retain from this rather simplified perspective on high-field magnets for accelerators. Firstly, given the long lead times of this technology, and even in times of uncertainty, it is important to maintain a healthy and ambitious programme so that the next step in technology is at hand when critical decisions on the accelerators of the future are due. The second message is that with such long development cycles and very specific technology, it is not realistic to rely on the private sector to advance and sustain the specific demands of HEP. In fact, the business model of high-energy physics is very peculiar, involving long investment times followed by short production bursts, and not sustainable by present industry standards. So, without taking the place of industry, it is crucial to secure critical know-how and infrastructure within the field to meet development needs and ensure the long-term future of our accelerators, present and to come.

Tau pairs speed search for heavy Higgs bosons

Figure 1

After the discovery of the long‑sought Higgs boson at a mass of 125 GeV, a major question in particle physics is whether the electroweak symmetry breaking sector is indeed as simple as the one implemented in the Standard Model (SM), or whether there are additional Higgs bosons. Additional Higgs bosons would occur, for example, in the presence of a second Higgs field, as realised in two‑Higgs doublet models, among which is the well‑known minimal supersymmetric extension of the SM (MSSM). The discovery of additional Higgs bosons could therefore be a gateway to new symmetries in nature.

ATLAS has recently released results of a search for heavy Higgs bosons decaying into a pair of tau leptons using the complete LHC Run 2 dataset (139 fb⁻¹ of 13 TeV proton–proton data). The new analysis provides a considerable increase in sensitivity to MSSM scenarios compared to previous results.

The MSSM features five Higgs bosons

The MSSM features five Higgs bosons, among which, the observed Higgs boson can be the lightest one. The couplings of the heavy Higgs bosons to down‑type leptons and quarks, such as the tau lepton and bottom quark, are enhanced for large values of tan β – the ratio of the vacuum expectation values of the two Higgs doublets, and one of the key parameters of the model. The heavy neutral Higgs bosons A (CP odd) and H (CP even) are produced mainly via gluon–gluon interactions or in association with bottom quarks. Their branching fractions to tau leptons can reach sizeable values across a large part of the model‑parameter space, making this channel particularly sensitive to a wide range of MSSM scenarios.

Figure 2

New search

The new ATLAS search requires the presence of two oppositely charged tau-lepton candidates, one of which is identified as a hadronic tau decay, and the other as either a hadronic or a leptonic decay. To profit from the enhancement of the production of signal events in association with bottom quarks at large tan β values (for example when the heavy Higgs boson is radiated by a b-quark produced in the collision of two gluons), the data are further categorised based on the presence or absence of additional b-jets. One of the challenges of the analysis is the misidentification of backgrounds with hadronic jets as tau candidates. These backgrounds are estimated from data by measuring the misidentification probabilities and applying them to events in control regions representative of the event selection. The final discriminant is the quantity mTtot, which is built from the combination of the transverse masses of the two tau-lepton decay products (figure 1).
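The discriminant can be written down compactly. The sketch below uses the quadrature sum of the three pairwise transverse masses of the two tau candidates and the missing transverse momentum, the convention commonly used in di-tau searches; the kinematic values are invented for illustration, not ATLAS data:

```python
import math

def mt(pt1, phi1, pt2, phi2):
    """Transverse mass of two massless transverse vectors, in GeV:
    mT = sqrt(2 * pT1 * pT2 * (1 - cos(dphi)))."""
    return math.sqrt(2.0 * pt1 * pt2 * (1.0 - math.cos(phi1 - phi2)))

def mt_tot(pt_t1, phi_t1, pt_t2, phi_t2, met, phi_met):
    """Total transverse mass from both tau candidates and missing ET."""
    return math.sqrt(mt(pt_t1, phi_t1, pt_t2, phi_t2) ** 2
                     + mt(pt_t1, phi_t1, met, phi_met) ** 2
                     + mt(pt_t2, phi_t2, met, phi_met) ** 2)

# back-to-back tau candidates with the missing ET along one of them
print(round(mt_tot(150.0, 0.0, 120.0, math.pi, 60.0, math.pi), 1))
```

Because the neutrinos from tau decays carry away energy, mTtot peaks well below the parent mass but still separates a heavy resonance from the steeply falling backgrounds.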

The data agree with the prediction assuming no additional Higgs bosons, apart from a small, non-significant excess around a putative signal mass of 400 GeV. The measurement places limits on the production cross section that can be translated into constraints on MSSM parameters. One realisation of the MSSM is the hMSSM scenario, in which knowledge of the observed Higgs-boson mass is used to reduce the number of parameters. The A/H → ττ exclusion limit dominates over large parts of the parameter space (figure 2), but still leaves room for possible discoveries at masses above the top–antitop production threshold. ATLAS continues to refine this analysis and to conduct further searches for heavy Higgs bosons in various final states.

First sight of the running of the top-quark mass

Figure 1

The coupling between quarks and gluons depends strongly on the energy scale of the process. The same is true for the masses of the quarks. This effect – the so‑called “running” of the strong coupling constant and the quark masses – is described by the renormalisation group equations (RGEs) of quantum chromodynamics (QCD). The experimental verification of the RGEs is both an important test of the validity of QCD and an indirect search for unknown physics, as physics beyond the Standard Model could modify the RGEs at scales probed by the Large Hadron Collider. The running of the strong coupling constant has been established at many experiments in the past, and, over the past 20 years, evidence for the running of the masses of the charm and bottom quarks was demonstrated using data from LEP, SLC and HERA, though the running of the top‑quark mass has hitherto proven elusive.

CMS has probed the running of the mass of the top quark for the first time

The CMS collaboration has now, for the first time, probed the running of the mass of the top quark. The measurement was performed using proton–proton collision data at a centre‑of‑mass energy of 13 TeV, recorded by the CMS detector in 2016. The top quark’s mass was determined as a function of the invariant mass of the top quark–antiquark system (the energy scale of the process), by comparing differential measurements of the system’s production cross section with theoretical predictions. In the vast majority of the cases, top quarks decay into a W boson and a bottom quark. In this analysis, candidate events are selected in the final state where one W boson decays into an electron and a neutrino, and the other decays into a muon and a neutrino.

One-loop agreement

The cross section was determined using a maximum‑likelihood fit to multi‑differential distributions of final‑state observables, allowing the precision of the measurement to be significantly improved by comparison to standard methods (figure 1). The measured cross section was then used to extract the value of the top‑quark mass as a function of the energy scale. The running was determined with respect to an arbitrary reference scale. The measured points are in good agreement with the one‑loop solution of the RGE, within 1.1 standard deviations, and a hypothetical no‑running scenario is excluded at above 95% confidence level.
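The one-loop running that the measurement is compared to can be sketched numerically. The snippet below solves the one-loop RGEs for six active flavours; the reference value αs ≈ 0.108 at 173 GeV is an assumed illustrative input, not the CMS fit result:

```python
import math

def alpha_s(mu_gev, alpha_ref=0.108, mu_ref=173.0, nf=6):
    """One-loop running strong coupling above the top threshold.
    alpha_ref at mu_ref is an assumed illustrative input value."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * beta0 / (2.0 * math.pi)
                        * math.log(mu_gev / mu_ref))

def mass_ratio(mu_gev, mu_ref=173.0, nf=6):
    """One-loop RGE solution for the running quark mass:
    m(mu)/m(mu_ref) = [alpha_s(mu)/alpha_s(mu_ref)]^(12/(33 - 2*nf))."""
    return (alpha_s(mu_gev) / alpha_s(mu_ref)) ** (12.0 / (33.0 - 2.0 * nf))

# the running top mass falls by roughly 10% between mt and 1 TeV
print(round(mass_ratio(1000.0), 3))
```

The roughly 10% decrease of the running mass between the top-quark mass scale and 1 TeV is the size of the effect the CMS analysis is sensitive to.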

This novel result supports the validity of the RGEs up to a scale of the order of 1 TeV. Its precision is limited by systematic uncertainties related to experimental calibrations and the modelling of the top‑quark production in the simulation. Further progress will not only require a significant effort in improving the calibrations of the final‑state objects, but also substantial theoretical developments.

ALICE extends quenching studies to softer jets

Figure 1

Jets are the most abundant high‑energy objects produced in collisions at the LHC, and often contaminate searches for new physics. In heavy‑ion collisions, however, these collimated showers of hadrons are not a background but one of the main tools to probe the deconfined state of strongly interacting matter known as the quark‑gluon plasma.

There are many open questions about the structure of the quark-gluon plasma: What are the relevant degrees of freedom? How do high-energy quarks and gluons interact with the hot QCD medium? Do factorisation and universality hold in this extreme environment? To answer these questions, experiments study how jets are modified in heavy-ion collisions, where, unlike in proton-proton collisions, they may interact with the constituents of the quark-gluon plasma. Since jet production and interactions can be computed in perturbative QCD, comparing theoretical calculations to measurements can provide insight into the properties of the quark-gluon plasma.

Soft power

In this spirit, the ALICE collaboration has measured the inclusive jet production yield in both Pb-Pb and proton–proton (pp) collisions at a centre-of-mass energy of 5.02 TeV. Jets were reconstructed from a combination of information from the ALICE tracking detectors and electromagnetic calorimeter for a variety of jet radii R. The detectors’ excellent performance with soft tracks was exploited to allow the measurements to cover the lowest jet transverse momentum (pT,jet) region measured at the LHC, where jet modification effects are predicted to be strongest. The measured jet yields in Pb-Pb collisions exhibit strong suppression compared to pp collisions, consistent with theoretical expectations that jets lose energy as they propagate through the quark-gluon plasma (figure 1). For relatively narrow R = 0.2 jets, the data show stronger suppression at lower pT,jet than at higher pT,jet, suggesting that lower pT,jet jets lose a larger fraction of their energy. Additionally, the data show no significant R dependence of the suppression within the uncertainties of the measurement, which places constraints on the angular distribution of the “lost” energy.
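Why a modest energy loss produces a strong suppression is a simple consequence of the steeply falling jet spectrum. In a toy picture (not the ALICE analysis), if every jet in a power-law spectrum dN/dpT ∝ pT^(−n) loses a fixed fraction ε of its energy, the yield ratio between quenched and unquenched spectra works out to (1 − ε)^(n−1):

```python
def r_aa(eps: float, n: float) -> float:
    """Toy suppression factor for a power-law spectrum dN/dpT ~ pT^(-n)
    when every jet loses the energy fraction eps: R_AA = (1 - eps)^(n - 1)."""
    return (1.0 - eps) ** (n - 1.0)

# a ~10% energy loss on a steep spectrum (n ~ 5-6) suppresses yields by ~35-40%
for n in (5.0, 6.0):
    print(n, round(r_aa(0.10, n), 2))
```

The steeper the spectrum at low pT,jet, the more dramatically a given fractional energy loss depletes the measured yield, which is why the low-pT region probed by ALICE is so sensitive.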

Several theoretical models, spanning a range of physics approximations from weak to strong jet–medium coupling, were compared to the data. The models generally describe the trends of the data, but several exhibit hints of disagreement with the measurements. These data complement existing jet measurements from ATLAS and CMS, and take advantage of ALICE’s high-precision tracking system to provide additional constraints on jet-quenching models in heavy-ion collisions at low pT. Moreover, these measurements can be used in combination with other jet observables to extract properties of the medium such as the transverse momentum diffusion parameter, which describes the angular broadening of jets as they traverse the quark–gluon plasma, as a function of the medium temperature and the jet pT.

The “reference” measurements in pp collisions contain important QCD physics themselves. This new set of measurements was performed systematically from R = 0.1 to R = 0.6, in order to span from small R, where hadronisation effects are large, to large R, where underlying-event effects are large. These data can be used to constrain the perturbative structure of the inclusive jet cross section, as well as hadronisation and underlying-event effects, which are of broad interest to the high-energy physics community.

Going forward, ALICE is actively working to further constrain theoretical predictions in both pp and Pb‑Pb collisions by exploring complementary jet measurements, including jet substructure, heavy‑flavour jets, and more. With a nearly 10 times larger Pb‑Pb data sample collected in 2018, upcoming analyses of the data will be important for connecting observed jet modifications to properties of the quark‑gluon plasma.

Circular colliders eye Higgs self-coupling

Coupling correlations

Physics beyond the Standard Model must exist, to account for dark matter, the smallness of neutrino masses and the dominance of matter over antimatter in the universe; but we have no real clue of its energy scale. It is also widely recognised that new and more precise tools will be needed to be certain that the 125 GeV boson discovered in 2012 is indeed the particle postulated by Brout, Englert, Higgs and others to have modified the vacuum potential of the whole universe, thanks to its coupling to itself, liberating energy for the masses of the W and Z bosons.

To tackle these big questions, and others, the Future Circular Collider (FCC) study, launched in 2014, proposed the construction of a new 100 km circular tunnel to first host an intensity-frontier 90 to 365 GeV e⁺e⁻ collider (FCC-ee), and then an energy-frontier (> 100 TeV) hadron collider (FCC-hh), which could potentially also allow electron–hadron collisions. Potentially following the High-Luminosity LHC in the late 2030s, FCC-ee would provide 5 × 10¹² Z decays – over five orders of magnitude more than the full LEP era – followed by 10⁸ W pairs, 10⁶ Higgs bosons (ZH events) and 10⁶ top-quark pairs. In addition to providing the highest parton centre-of-mass energies foreseeable today (up to 40 TeV), FCC-hh would also produce more than 10¹³ top quarks and W bosons, and 50 billion Higgs bosons per experiment.

Rising to the challenge

Following the publication of the four-volume conceptual design report and submissions to the European strategy discussions, the third FCC Physics and Experiments Workshop was held at CERN from 13 to 17 January, gathering more than 250 participants for 115 presentations, and establishing a considerable programme of work for the coming years. Special emphasis was placed on the feasibility of theory calculations matching the experimental precision of FCC-ee. The theory community is rising to the challenge. To reach the required precision at the Z-pole, three-loop calculations of quantum electroweak corrections must include all the heavy Standard Model particles (W±, Z, H, t).

In parallel, a significant focus of the meeting was on detector designs for FCC-ee, with the aim of forming experimental proto-collaborations by 2025. The design of the interaction region allows for a beam vacuum tube of 1 cm radius in the experiments – a very promising condition for vertexing, lifetime measurements and the separation of bottom and charm quarks from light-quark and gluon jets. Elegant solutions have been found to bring the final-focus magnets close to the interaction point, using either standard quadrupoles or a novel magnet design using a superposition of off-axis (“canted”) solenoids. Delegates discussed solutions for vertexing, tracking and calorimetry during a Z-pole run at FCC-ee, where data acquisition and trigger electronics would be confronted with visible Z decays at 70 kHz, all of which would have to be recorded in full detail. A new subject was π/K/p identification at energies from 100 MeV to 40 GeV – a consequence of the strategy process, during which considerable interest was expressed in the flavour-physics programme at FCC-ee.

Physicists cannot refrain from investigating improvements

The January meeting showed that physicists cannot refrain from investigating improvements, in spite of the impressive statistics offered by the baseline design of FCC-ee. Increasing the number of interaction points from two to four is a promising way to nearly double the total delivered luminosity for little extra power consumption, but construction costs and compatibility with a possible subsequent hadron collider must be determined. A bolder idea discussed at the workshop aims to improve both luminosity (by a factor of 10) and energy reach (perhaps up to 600 GeV), by turning FCC-ee into a 100 km energy-recovery linac. The cost, and how well this would actually work, are yet to be established. Finally, a tantalising possibility is to produce the Higgs boson directly in the s-channel, e⁺e⁻ → H, sitting exactly at a centre-of-mass energy equal to the Higgs-boson mass. This would allow unique access to the tiny coupling of the Higgs boson to the electron. As the Higgs width (4.2 MeV in the Standard Model) is more than 20 times smaller than the natural energy spread of the beam, this would require a beam manipulation called monochromatisation and a careful running procedure, which a task force was nominated to study.
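The size of the monochromatisation challenge can be estimated with a back-of-the-envelope convolution. For a resonance much narrower than a Gaussian spread in collision energy, the on-peak cross section is diluted by roughly (π/2)·Γ/(√(2π)·σ_W); the ~100 MeV spread assumed below is an illustrative value, not an FCC design number:

```python
import math

def schannel_dilution(gamma_mev: float, spread_mev: float) -> float:
    """Fraction of the on-peak cross section surviving when a narrow
    Breit-Wigner of width gamma is smeared by a Gaussian collision-energy
    spread: approximately (pi/2) * gamma / (sqrt(2*pi) * spread)."""
    return (math.pi / 2.0) * gamma_mev / (math.sqrt(2.0 * math.pi) * spread_mev)

# SM Higgs width 4.2 MeV against an assumed ~100 MeV collision-energy spread
print(round(schannel_dilution(4.2, 100.0), 3))
```

With such a spread only a few percent of the resonant cross section survives, which is why reducing σ_W via monochromatisation is essential for the e⁺e⁻ → H idea.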

The ability to precisely probe the self-coupling of the Higgs boson is the keystone of the FCC physics programme. As noted above, this self-interaction is the key to the electroweak phase transition, and could have important cosmological implications. Building on the solid foundation of precise and model-independent measurements of Higgs couplings at FCC-ee, FCC-hh would be able to access the Hμμ, Hγγ, HZγ and Htt couplings at sub-percent precision. Further study of double Higgs production at FCC-hh shows that the Higgs self-coupling could be measured with a statistical precision of a couple of percent with the full statistics – which is to say that after the first few years of running the precision will already have been reduced to below 10%. This is much faster than previously realised, and definitely constituted the highlight of the workshop.

Sketching out a muon collider

The machine–detector interface for a muon collider

High-energy particle colliders have proved to be indispensable tools in the investigation of the nature of the fundamental forces. The LHC, at which the discovery of the Higgs boson was made in 2012, is a prime recent example. Several major projects have been proposed to push our understanding of the universe once the LHC reaches the end of its operations in the late 2030s. These have been the focus of discussions for the soon-to-conclude update of the European strategy for particle physics. An electron–positron Higgs factory that allows precision measurements of the Higgs boson’s couplings and the Higgs potential seems to have garnered consensus as the best machine for the near future. The question is: what type will it be?

Today, mature options for electron–positron colliders exist: the Future Circular Collider (FCC-ee) and the Compact Linear Collider (CLIC) proposals at CERN; the International Linear Collider (ILC) in Japan; and the Circular Electron–Positron Collider (CEPC) in China. FCC-ee offers very high luminosities at the required centre-of-mass energies. However, the maximum energy that can be reached is limited by the emission of synchrotron radiation in the collider ring, and corresponds to a centre-of-mass energy of 365 GeV for a 100 km-circumference machine. Linear colliders accelerate particles without the emission of synchrotron radiation, and hence can reach higher energies. The ILC would initially operate at 250 GeV, extendable to 1 TeV, while the highest-energy proposal, CLIC, has been designed to reach 3 TeV. However, two principal challenges must be overcome to go to higher energies with a linear machine: first, the beam has to be accelerated to full energy in a single passage through the main linac; and, second, it can only be used once, in a single collision. At higher energies the linac has to be longer (around 50 km for a 1 TeV ILC and a 3 TeV CLIC) and is therefore more costly, while the single collision of the beam also limits the luminosity that can be achieved for a reasonable power consumption.
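The synchrotron-radiation wall can be made concrete with the textbook scaling for electrons, U₀ ≈ 8.85×10⁻⁵ E⁴/ρ (U₀ in GeV, beam energy E in GeV, bending radius ρ in metres). A rough sketch, with an assumed FCC-ee-like bending radius of about 10.7 km (an illustrative figure, not from the article):

```python
# Energy an electron radiates per turn on a circular orbit:
# U0 [GeV] ~ 8.85e-5 * E^4 / rho. The E^4 growth is what caps a
# 100 km ring at ~365 GeV centre-of-mass energy.
def sr_loss_per_turn_gev(beam_energy_gev, bend_radius_m):
    return 8.85e-5 * beam_energy_gev**4 / bend_radius_m

# Assumed numbers: 182.5 GeV beams (365 GeV c.o.m.), ~10.7 km bending radius.
print(f"~{sr_loss_per_turn_gev(182.5, 10_700):.1f} GeV lost per turn")
# Doubling the beam energy would multiply this loss by 16.
```

At this energy each beam already radiates several GeV per turn, which must be restored by the RF system every revolution; pushing much higher in the same tunnel quickly becomes prohibitive.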

Beating the lifetime 

An ingenious solution to overcome these issues is to replace the electrons and positrons with muons and anti-muons. In a muon collider, fundamental particles that are not constituents of ordinary matter would collide for the first time. Being 200 times heavier than the electron, the muon emits about two billion times less synchrotron radiation. Rings can therefore be used to accelerate muon beams efficiently and to bring them into collision repeatedly. Also, more than one experiment can be served simultaneously to increase the amount of data collected. Provided the technology can be mastered, it appears possible to reach a ratio of luminosity to beam power that increases with energy. The catch is that muons live on average for 2.2 μs, which leads to a reduction in the number of muons produced by about an order of magnitude before they enter the storage ring. One therefore has to be rather quick in producing, accelerating and colliding the muons; this rapid handling provides the main challenges of such a project.
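The two headline numbers in this paragraph follow from simple scaling laws; here is a back-of-envelope check using standard particle-data values (assumed here, not quoted in the article):

```python
# Synchrotron power at fixed energy and bending radius scales as 1/m^4,
# so the muon/electron mass ratio enters to the fourth power; the muon's
# lab-frame lifetime is stretched by the Lorentz factor gamma = E/m.
M_MU_MEV, M_E_MEV = 105.66, 0.511   # lepton masses, MeV (assumed values)
TAU_MU_S = 2.2e-6                    # muon rest-frame lifetime, seconds

sr_suppression = (M_MU_MEV / M_E_MEV) ** 4
print(f"radiation suppressed by ~{sr_suppression:.1e}")  # ~1.8e9: "two billion"

gamma = 5.0e6 / M_MU_MEV             # illustrative 5 TeV muon beam
print(f"lab-frame lifetime ~{gamma * TAU_MU_S * 1e3:.0f} ms")  # ~0.1 s at 5 TeV
```

Even with the relativistic stretch, a tenth of a second leaves no room for leisurely beam handling, which is why production, cooling and acceleration must all be fast.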

Precision and discovery

Two muon-collider concepts

The development of a muon collider is not as advanced as the other lepton-collider options that were submitted to the European strategy process. Therefore the unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility. Extensive studies submitted to the strategy update show that a muon collider in the multi-TeV energy range would be competitive both as a precision and as a discovery machine, and that a full effort by the community could demonstrate that a muon collider operating at a few TeV can be ready on a time scale of about 20 years. While the full physics capabilities at high energies remain to be quantified, and provided the beam energy and detector resolutions at a muon collider can be maintained at the parts-per-mille level, the number of Higgs bosons produced would allow the Higgs couplings to fermions and bosons to be measured with extraordinary precision. A muon collider operating at lower energies, such as those of the proposed FCC-ee (240 and 365 GeV) or stage-one CLIC (380 GeV) machines, has not been studied in detail, since the beam-induced background would be harsher and careful optimisation of the machine parameters would be required to reach the needed luminosity. Moreover, a muon collider generating a centre-of-mass energy of 10 TeV or more, with a luminosity of the order of 10³⁵ cm⁻² s⁻¹, would allow a direct measurement of the trilinear and quadrilinear self-couplings of the Higgs boson, enabling a precise determination of the shape of the Higgs potential. While the precision on Higgs measurements achievable at muon colliders has not yet been evaluated thoroughly enough for a comparison with other future colliders, theorists have recently shown that a muon collider is competitive in measuring the trilinear Higgs coupling, and that it could allow a determination of the quartic self-coupling that is significantly better than what is currently considered attainable at other future colliders.
Owing to the muon’s greater mass, its coupling to the Higgs boson is about 200 times that of the electron, which enhances the s-channel Higgs-production rate by a factor of roughly 4 × 10⁴. To exploit this, previous studies have also investigated a muon collider operating at a centre-of-mass energy of 126 GeV (the Higgs pole) to measure the Higgs-boson line-shape. The specifications for such a machine are demanding, as it requires knowledge of the beam-energy spread at the level of a few parts in 10⁵.
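The size of the enhancement traces back to the Higgs Yukawa coupling being proportional to the lepton mass, with the s-channel production rate scaling as the coupling squared. A quick check with standard lepton masses (assumed values, not from the article):

```python
# Yukawa couplings scale with fermion mass, so mu vs e gives a factor ~200
# in the coupling and its square, ~4e4, in the s-channel production rate.
M_MU_MEV, M_E_MEV = 105.66, 0.511

coupling_ratio = M_MU_MEV / M_E_MEV
rate_ratio = coupling_ratio ** 2
print(f"coupling x{coupling_ratio:.0f}, s-channel rate x{rate_ratio:.1e}")
# ~207 in coupling, ~4.3e4 in rate
```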

Half a century of ideas

A sketch of the MICE apparatus

The idea of a muon collider was first introduced 50 years ago by Gersh Budker and then developed by Alexander Skrinsky and David Neuffer until the Muon Collider Collaboration became a formal entity in 1997, with more than 100 physicists from 20 institutions in the US and a few more from Russia, Japan and Europe. Brookhaven’s Bob Palmer was a key figure in driving the concept forward, leading the outline of a “complete scheme” for a muon collider in 2007. Exploratory work towards a muon collider and neutrino factory was also carried out at CERN around the turn of the millennium. It was only when the Muon Accelerator Program (MAP), directed by Mark Palmer of Brookhaven, was formally approved in 2011 in the US that a systematic effort started to develop and demonstrate the concepts and critical technologies required to produce, capture, condition, accelerate and store intense beams of muons for a muon collider on the Fermilab site. Although MAP was wound down in 2014, it generated a reservoir of expertise and enthusiasm that the current international effort on physics, machine and detector studies cannot do without.

So far, two concepts have been proposed for a muon collider (figure 1). The first design, developed by MAP, is to shoot a proton beam into a target to produce pions, many of which decay into muons. This cloud of muons (with positive and negative charge) is captured, and an ionisation cooling system of a type first imagined by Budker rapidly cools the muons from the showers to obtain a dense beam. The muons are cooled in a chain of low-Z absorbers in which they lose energy by ionising the material, reducing their phase-space volume; the lost energy would then be replaced by acceleration. This is so far the only concept that can achieve cooling within the timeframe of the muon lifetime. The beams would be accelerated in a sequence of linacs and rings, and injected at full energy into the collider ring. A fully integrated conceptual design for the MAP concept remains to be developed.

The unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility

The alternative approach to a muon collider, proposed in 2013 by Mario Antonelli of INFN-LNF and Pantaleo Raimondi of the ESRF, avoids a specific cooling apparatus. Instead, the Low Emittance Muon Accelerator (LEMMA) scheme would send 45 GeV positrons into a target where they collide with electrons to produce muon pairs with a very small phase space (the energies of the electron and positron in the centre-of-mass frame are small, so little transverse momentum can be generated). The challenge with LEMMA is that the probability for a positron to produce a muon pair is exceedingly low, requiring an unprecedented positron-beam current and inducing a high stress in the target system. The muon beams produced would be circulated about 1000 times, limited by the muon lifetime, in a ring collecting muons produced from as many positron bunches as possible before they are accelerated and collided in a fashion similar to the proton-driven scheme of MAP. The low emittance of the LEMMA beams potentially allows the use of lower muon currents, easing the challenges of operating a muon collider due to the remnants of the decaying muons. The initial LEMMA scheme offered limited performance in terms of luminosity, and further studies are required to optimise all parameters of the source before capture and fast acceleration. With novel ideas and a dedicated expert team, LEMMA could potentially be shown to be competitive with the MAP scheme.

Results of muons that pass through MICE

The ambitious muon ionisation-cooling complex (figure 2) is the key challenge of MAP’s proton-driven muon-collider scheme, and the Muon Ionization Cooling Experiment (MICE) collaboration recently published results demonstrating the feasibility of the technique (CERN Courier March/April 2020 p7). Since muons produced from proton interactions in a target emerge in a rather undisciplined state, MICE set out to show that their transverse phase space could be cooled by passing the beam through an energy-absorbing material and accelerating structures embedded within a focusing magnetic lattice – all before the muons have time to decay. For the scheme to work, the cooling (squeezing of the beam in transverse phase space) due to ionisation energy loss must exceed the heating due to multiple Coulomb scattering within the absorber. Materials with low multiple scattering and a long radiation length, such as liquid hydrogen and lithium hydride, are therefore ideal.
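The cooling-versus-heating balance sets an equilibrium emittance below which an absorber can no longer cool the beam. A hedged sketch using the textbook expression ε_eq ≈ β⊥E_s²/(2β m_μ X₀ dE/ds), with illustrative material values (assumptions, not figures from MICE):

```python
# Equilibrium transverse emittance where multiple-scattering heating cancels
# ionisation cooling; smaller is better. E_S is the scattering constant,
# X0 the radiation length, dE/ds the ionisation energy loss.
E_S_GEV = 0.0136
M_MU_GEV = 0.10566

def equilibrium_emittance_m(beta_t_m, x0_m, dedx_gev_per_m, beta=0.9):
    return beta_t_m * E_S_GEV**2 / (2 * beta * M_MU_GEV * x0_m * dedx_gev_per_m)

# Assumed absorber properties: liquid hydrogen (X0 ~ 8.9 m, ~29 MeV/m)
# vs lithium hydride (X0 ~ 0.97 m, ~160 MeV/m); beta_t = 0.4 m focusing.
for name, x0, dedx in [("LH2", 8.9, 0.029), ("LiH", 0.97, 0.160)]:
    eps = equilibrium_emittance_m(0.4, x0, dedx)
    print(f"{name}: eps_eq ~ {eps * 1e3:.1f} mm rad")
```

The product X₀·dE/ds in the denominator is what favours low-Z materials: hydrogen’s very long radiation length more than compensates for its modest stopping power.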

MICE, which was based at the ISIS neutron and muon source at the Rutherford Appleton Laboratory in the UK, was approved in 2005. Using data collected in 2018, the MICE collaboration was able to determine the distance of a muon from the centre of the beam in 4D phase space (its so-called amplitude or “single-particle emittance”) both before and after it passed through the absorber, from which it was possible to estimate the degree of cooling that had occurred. The results (figure 3) demonstrated that ionisation cooling occurs with a liquid-hydrogen or lithium-hydride absorber in place. Data from the experiment were found to be well described by a Geant4-based simulation, validating the designs of ionisation cooling channels for an eventual muon collider. The next important step towards a muon collider would be to design and build a cooling module combining the cavities with the magnets and absorbers, and to achieve full “6D” cooling. This effort could profit from tests at Fermilab of accelerating cavities that can operate in a very high magnetic field, and also from the normal-conducting cavity R&D undertaken for the CLIC study, which pushed accelerating gradients to the limit.

Collider ring

The collider ring itself is another challenging aspect of a muon collider. Since the charge of the injected beams decreases over time due to the random decays of muons, superconducting magnets with the highest possible field are needed to minimise the ring circumference and thus maximise the average number of collisions. A larger muon energy makes it harder to bend the beam and thus requires a larger ring circumference. Fortunately, the lifetime of the muon also increases with its energy, which fully compensates for this effect. Dipole magnets with a field of 10.5 T would allow the muons to survive about 2000 turns. Such magnets, which are about 20% more powerful than those in the LHC, could be built from niobium-tin (Nb3Sn) as used in the new magnets for the HL-LHC (see Taming the superconductors of tomorrow).
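Both the quoted 2000 turns and the exact cancellation between energy and lifetime can be checked in a few lines. With bending radius ρ = p/(0.3B) and the lab-frame decay length growing as γcτ, the beam energy drops out, leaving only the field and the fraction of the ring filled with dipoles (the fill factor below is an assumption):

```python
import math

# Turns survived ~ (gamma * c * tau) / circumference. The circumference
# grows linearly with energy while gamma does too, so energy cancels.
C_TAU_M = 659.0      # c * 2.2 us: muon decay length at rest, metres
M_MU_GEV = 0.10566   # muon mass

def turns_before_decay(b_field_t, dipole_fill=0.65):
    # circumference ~ 2*pi*rho/fill with rho = p/(0.3*B); p ~ E cancels gamma
    return 0.3 * b_field_t * dipole_fill * C_TAU_M / (2 * math.pi * M_MU_GEV)

print(f"~{turns_before_decay(10.5):.0f} turns at 10.5 T")  # ~2000 turns
```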

Magnet model

The electrons and positrons produced when muons decay pose an additional challenge for the magnet design. The decay products will hit the magnets and can lead to a quench (whereby the magnet suddenly loses its superconductivity, rapidly releasing an immense amount of stored energy). It is therefore important to protect the magnets. The solutions considered include the use of large-aperture magnets in which shielding material can be placed, or designs where the magnets have no superconductor in the plane of the beam. Future magnets based on high-temperature superconductors could also help to improve the robustness of the bends against this problem since they can tolerate a higher heat load.

Other systems necessary for a muon collider are only seemingly more conventional. The ring that accelerates the beam to the collision energy is a prime example. It has to ramp the beam energy in a period of milliseconds or less, which means the beam has to circulate at very different energies through the same magnets. Several solutions are being explored. One, featuring a so-called fixed-field alternating-gradient ring, uses a complicated system of magnets that enables particles at a wider than normal range of energies to fly on different orbits that are close enough to fit into the same magnet apertures. Another possibility is to use a fast-ramping synchrotron: when the beam is injected at low energy it is kept on its orbit by operating the bending magnets at low field. The beam is then accelerated and the strength of the bends is increased accordingly until the beam can be extracted into the collider. It is very challenging to ramp superconducting magnets at the required speed, however. Normal-conducting magnets can do better, but their magnetic field is limited. As a consequence, the accelerator ring has to be larger than the collider ring, which can use superconducting magnets at full strength without the need to ramp them. Systems that combine static superconducting and fast-ramping normal-conducting bends have been explored by the MAP collaboration. In these designs, the energy in the fields of the fast-ramping bends will be very high, so it is important that the energy is recuperated for use in a subsequent accelerating cycle. This requires a very efficient energy-recovery system which extracts the energy after each cycle and reuses it for the next one. Such a system, called POPS (“power for PS”), is used to power the magnets of CERN’s Proton Synchrotron. The muon collider, however, requires more stored energy and much higher power flow, which calls for novel solutions.

High occupancy

Muon decays also induce the presence of a large amount of background in the detectors at a muon collider – a factor that must be studied in detail since it strongly depends on the beam energy at the collision point and on the design of the interaction region. The background particles reaching the detector are mainly produced by the interactions between the decay products of the muon beams and the machine elements. Their type, flux and characteristics therefore strongly depend on the machine lattice and the configuration of the interaction point, which in turn depends on the collision energy. The background particles (mainly photons, electrons and neutrons) may be produced tens of metres upstream of the interaction point. To mitigate the effects of the beam-induced background inside the detector, tungsten shielding cones, called nozzles, are proposed in this configuration and their opening angle has to be optimised for a specific beam energy, which affects the detector acceptance (see figure 4). Despite these mitigations, a large particle flux reaches the detector, causing a very high occupancy in the first layers of the tracking system, which impacts the detector performance. Since the arrival time in each sub-detector is asynchronous with respect to the beam crossing, due to the different paths taken by the beam-induced background and the muons, new-generation 4D silicon sensors that allow exploitation of the time distribution will be needed to remove a significant fraction of the background hits.

Energy expansion

It was recently demonstrated, by a team supported by INFN and Padova University in collaboration with MAP researchers, that state-of-the-art detector technology for tracking and jet reconstruction would make one of the most critical measurements at a muon collider – the vector-boson-fusion channel μ⁺μ⁻ → (W*W*)νν̄ → Hνν̄, with H → bb̄ – feasible in this harsh environment, with a high level of precision, competitive with other proposed machines (figure 5). A muon collider could in principle expand its energy reach to several TeV with good luminosity, allowing unprecedented exploration in direct searches and high-precision tests of Standard Model phenomena, in particular the Higgs self-couplings.

Muon collider Higgs-boson decay simulation

The technology for a muon collider also underpins a so-called neutrino factory, in which beams of equal numbers of electron and muon neutrinos are produced from the decay of muons circulating in a storage ring – in stark contrast to the neutrino beams used at T2K and NOvA, and envisaged for DUNE and Hyper-K, which use neutrinos from the decays of pions and kaons from proton collisions on a fixed target. In such a facility it is straightforward to tune the neutrino-beam energy because the neutrinos carry away a substantial fraction of the muon’s energy. This, combined with the excellent knowledge of the beam composition and energy spectrum that arises from the precise knowledge of muon-decay characteristics, makes a neutrino factory an attractive place to measure neutrino oscillations with great precision and to look for oscillation phenomena that are outside the standard three-neutrino-mixing paradigm. One proposal – nuSTORM, an entry-level facility proposed for precise measurements of neutrino scattering and the search for sterile neutrinos – could provide the ideal test-bed for the technologies required to deliver a muon collider.

Muon-based facilities have the potential to provide lepton–antilepton collisions at centre-of-mass energies in excess of 3 TeV and to revolutionise the production of neutrino beams. Where could such a facility be built? A 14 TeV muon collider in the 27 km-circumference LHC tunnel has recently been discussed, while another option is to use the LHC tunnel to accelerate the muons and construct a new, smaller tunnel for the actual collider. Such a facility is estimated to provide a physics reach comparable to a 100 TeV circular hadron collider, such as the proposed Future Circular Collider, FCC-hh. A LEMMA-like positron driver scheme with a potentially lower neutrino radiation could possibly extend this energy range still further. Fermilab, too, has long been considered a potential site for a muon collider, and it has been demonstrated that the footprint of a muon facility is small enough to fit in the existing Fermilab or CERN sites. However, the realistic performance and feasibility of such a machine would have to be confirmed by a detailed feasibility study identifying the required R&D to address its specific issues, especially the compatibility of existing facilities with muon decays. Minimising off-site neutrino radiation is one of the main challenges to the design and civil-engineering aspects of a high-energy muon collider because, while the interaction probability is tiny, the total flux of neutrinos is sufficiently high in a very small area in the collider plane to produce localised radiation that can reach a fraction of natural-radiation levels. Beam wobbling, whereby the lattice is modified periodically so that the neutrino flux pointing to Earth’s surface is spread out, is one of the promising solutions to alleviate the problem, although it requires further studies.

It was only when the Muon Accelerator Program was formally approved in 2011 in the US that a systematic effort started

A muon collider would be a unique lepton-collider facility at the high-energy frontier. Today, muon-collider concepts are not as mature as those for FCC-ee, CLIC, ILC or CEPC. It is now important that a programme is established to prove the feasibility of the muon collider, address the key remaining technical challenges, and provide a conceptual design that is affordable and has an acceptable power consumption. The promise of the very high-energy lepton frontier suggests that this opportunity should not be missed.

New SMOG on the horizon

Figure 1

LHCb will soon become the first LHC experiment able to run simultaneously with two separate interaction regions. As part of the ongoing major upgrade of the LHCb detector, the new SMOG2 fixed‑target system will be installed in long shutdown 2. SMOG2 will replace the previous System for Measuring the Overlap with Gas (SMOG), which injected noble gases into the vacuum vessel of LHCb’s vertex detector (VELO) at a low rate with the initial goal of calibrating luminosity measurements. The new system has several advantages, including the ability to reach effective area densities (and thus luminosities) up to two orders of magnitude higher for the same injected gas flux.

SMOG2 is a gas target confined within a 20 cm-long aluminium storage cell that is mounted at the upstream edge of the VELO, 30 cm from the main interaction point, and coaxial with the LHC beam (figure 1). The storage-cell technology allows a very limited amount of gas to be injected in a well defined volume within the LHC beam pipe, keeping the gas pressure and density profile under precise control, and ensuring that the beam-pipe vacuum level stays at least two orders of magnitude below the upper threshold set by the LHC. With beam–gas interactions occurring at roughly 4% of the proton–proton collision rate at LHCb, the lifetime of the beam will be essentially unaffected. The cell is made of two halves, attached to the VELO with an alignment precision of 200 μm. Like the VELO halves, they can be opened for safety during LHC beam injection and tuning, and closed for data-taking. The cell is sufficiently narrow that a flow as small as 10¹⁵ particles per second will yield tens of pb⁻¹ of data per year. The new injection system will be able to switch between gases within a few minutes, and in principle is capable of injecting not just noble gases, from helium up to krypton and xenon, but also several other species, including H2, D2, N2 and O2.

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC, performing precision measurements in poorly known kinematic regions. Collisions with the gas target will occur at a nucleon–nucleon centre‑of‑mass energy of 115 GeV for a proton beam of 7 TeV, and 72 GeV for a Pb beam of 2.76 TeV per nucleon. Due to the boost of the interacting system in the laboratory frame and the forward geometrical acceptance of LHCb, it will be possible to access the largely unexplored high‑x and intermediate Q2 regions.
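The quoted collision energies follow from fixed-target kinematics, √s_NN ≈ √(2E_beam m_N); a quick cross-check with an assumed nucleon mass (a standard value, not given in the article):

```python
import math

M_N_GEV = 0.9383  # nucleon mass, GeV (assumed standard value)

def sqrt_s_nn_gev(beam_energy_gev):
    # Nucleon-nucleon centre-of-mass energy for a beam on a target at rest
    return math.sqrt(2 * beam_energy_gev * M_N_GEV + 2 * M_N_GEV**2)

print(f"7 TeV protons:   {sqrt_s_nn_gev(7000):.0f} GeV")  # ~115 GeV
print(f"2.76 TeV/u lead: {sqrt_s_nn_gev(2760):.0f} GeV")  # ~72 GeV
```

The square-root scaling is why a 7 TeV beam on a stationary target reaches only ~115 GeV, far below the 14 TeV available in collider mode, but in a kinematic regime that colliders cannot easily cover.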

Combined with LHCb’s excellent particle identification capabilities and momentum resolution, the new gas target system will allow us to advance our understanding of the gluon, antiquark, and heavy‑quark components of nucleons and nuclei at large‑x. This will benefit searches for physics beyond the Standard Model at the LHC, by improving our knowledge of the parton distribution functions of both protons and nuclei, particularly at high‑x, where new particles are most often expected, and will inform the physics programmes of proposed next‑generation accelerators such as the Future Circular Collider. The gas target will also allow the dynamics and spin distributions of quarks and gluons inside unpolarised nucleons to be studied for the first time at the LHC, a decade before corresponding measurements at much higher accuracy are performed at the Electron‑Ion Collider in the US. Studying particles produced in collisions with light nuclei, such as He, and possibly N and O, will also allow LHCb to give important inputs to cosmic‑ray physics and dark‑matter searches. Last but not least, SMOG2 will allow LHCb to perform studies of heavy‑ion collisions at large rapidities, in an unexplored energy range between the SPS and RHIC, offering new insights into the QCD phase diagram.

EPS announces 2020 accelerator awards

The European Physical Society’s accelerator group (EPS-AG) has announced the winners of its 2020 prizes, which are awarded every three years for outstanding achievements in the accelerator field. The prizes will be presented on 14 May during the International Particle Accelerator Conference (IPAC), which was planned to be held at the GANIL laboratory in Caen, France, and will now take place from 11–14 May in a virtual format due to restrictions resulting from the COVID-19 pandemic.

Lucio Rossi

The EPS-AG Rolf Widerøe Prize for outstanding work in the accelerator field has been given to Lucio Rossi of CERN, who is project leader for the high-luminosity LHC. Rossi, who initially worked in plasma physics before moving into applied superconductivity for particle accelerators, was rewarded “for his pioneering role in the development of superconducting magnet technology for accelerators and experiments, its application to complex projects in high-energy physics including strongly driving industrial capability, and for his tireless effort in promoting the field of accelerator science and technology”.

Hideaki Hotchi

The Gersh Budker Prize, for a recent significant, original contribution to the accelerator field, has been awarded to Hideaki Hotchi of J-PARC in Japan. He receives the prize for his achievements “in the commissioning of the J-PARC Rapid Cycling Synchrotron, with sustained 1 MW operation at unprecedented low levels of beam loss made possible by his exceptional understanding of complex beam dynamics processes, thereby laying the foundations for future high power proton synchrotrons worldwide”.

The Frank Sacherer Prize, for an individual in the early part of his or her career, goes to Johannes Steinmann of Argonne National Laboratory for his “significant contribution to the development and demonstration of ultra-fast accelerator instrumentation using THz technology, having the potential for major impact on the field of electron bunch-by-bunch diagnostics”.

Applicants for the EPS-AG Bruno Touschek prize, which is awarded to a student or trainee accelerator physicist or engineer, will be judged on the quality of the work submitted to the IPAC conference.

The previous (2017) EPS-AG prizewinners were: Lyn Evans of CERN (Rolf Widerøe Prize); Pantaleo Raimondi of the ESRF (Gersh Budker Prize); Anna Grassellino of Fermilab (Frank Sacherer Prize); and Fabrizio Giuseppe Bisesto of INFN-LNF (Bruno Touschek Prize).

First foray into CP symmetry of top-Higgs interactions

One of the many doors to new physics that have been opened by the discovery of the Higgs boson concerns the possibility of finding charge-parity violation (CPV) in Higgs-boson interactions. Were CPV to be observed in the Higgs sector, it would be an unambiguous indication of physics beyond the Standard Model (SM), and could have important ramifications for understanding the baryon asymmetry of the universe. Recently, the ATLAS and CMS collaborations reported their first forays into this area by measuring the CP-structure of interactions between the Higgs boson and top quarks.

While CPV is well established in the weak interactions of quarks (most recently in the charm system by the LHCb collaboration), and is explained in the SM by the existence of a phase in the CKM matrix, the amount of CPV observed is many orders of magnitude too small to account for the observed cosmological matter-antimatter imbalance. Searching for additional sources of CPV is a major programme in particle physics, with a moderate-significance suggestion of CPV in lepton interactions recently announced by the T2K collaboration. It is likely that sources of CPV from phenomena beyond the scope of the SM are needed, and the detailed properties of the Higgs sector are one of several possible hiding places.

Based on the full LHC Run 2 dataset, ATLAS and CMS studied events where the Higgs boson is produced in association with one or two top quarks before decaying into two photons. The latter (ttH) process, which accounts for around 1% of the Higgs bosons produced at the LHC, was observed by both collaborations in 2018. But the tH production channel is predicted to be about six times rarer. This is due to destructive interference between higher-order diagrams involving W bosons, and makes the tH process particularly sensitive to new-physics processes.

Exploring the CP properties of these interactions is non-trivial

According to the SM, the Higgs boson is “CP-even” – that is, it is possible to rotate away any CP-odd phase from the scalar mass term. Previous probes of the interaction between the Higgs and vector bosons by CMS and ATLAS support the CP-even nature of the Higgs boson, determining its quantum numbers to be most consistent with J^PC = 0^++, though small CP-odd contributions from a more complex coupling structure are not excluded. The presence of a CP-odd component, together with the dominant CP-even one, would imply CPV, altering the kinematic properties of the ttH process and modifying tH production. Exploring the CP properties of these interactions is non-trivial, and requires the full capacities of the detectors and analysis techniques.

The collaborations employed machine-learning (boosted decision tree) algorithms to disentangle the relative fractions of the CP-even and CP-odd components of top–Higgs interactions. The CMS collaboration observed ttH production at a significance of 6.6σ, and excluded a pure CP-odd structure of the top–Higgs Yukawa coupling at 3.2σ. The ratio of the measured ttH production rate to the predicted rate was found by CMS to be 1.38, with an uncertainty of about 25%. ATLAS data also show agreement with the SM. Assuming a CP-even coupling, ATLAS observed ttH with a significance of 5.2σ. Comparing the strength of the CP-even and CP-odd components, the collaboration favours a CP-mixing angle very close to 0 (indicating no CPV) and excludes a pure CP-odd coupling at 3.9σ. ATLAS did not observe tH production, setting an upper limit on its rate of 12 times the SM expectation.

In addition to further probing the CP properties of the top–Higgs interaction with larger data samples, ATLAS and CMS are searching in other Higgs-boson interactions for signs of CPV.

Gamma-ray polarisation sharpens multi-messenger astrophysics

POLAR polarisation plot

Recent years have seen the dawn of multi-messenger astrophysics. Perhaps the most significant contributor to this new era was the 2017 detection of gravitational waves (GWs) in coincidence with a bright electromagnetic phenomenon, a gamma-ray burst (GRB). GRBs consist of intense bursts of gamma rays which, for periods ranging from hundreds of milliseconds to hundreds of seconds, outshine any other source in the universe. Although the first such event was spotted back in 1967, and typically one GRB is detected every day, the underlying astrophysical processes responsible remain a mystery. The joint GW–electromagnetic detection answered several questions about the nature of GRBs, but many others remain.

Recently, researchers made the first attempts to add gamma-ray polarisation into the mix. If successful, this could enable the next step forward within the multi-messenger field.

So far, three photon parameters – arrival time, direction and energy – have been measured extensively for a range of astrophysical objects. Yet, despite the wealth of information it contains, photon polarisation has been neglected. X-ray or gamma-ray fluxes emitted by charged particles in strong magnetic fields are highly polarised, whereas those emitted by thermal processes are typically unpolarised. Polarisation therefore allows researchers to identify the dominant emission mechanism of a particular source. GRBs are a prime example, since a consensus on where their gamma rays actually originate is still missing.

Difficult measurements

The reason that polarisation has not been measured in great detail is the difficulty of the measurements themselves. To measure the polarisation of an incoming photon, the secondary products of its interaction in a detector must be characterised. For gamma rays, for example, the azimuthal angle at which the photon scatters in the detector is correlated with its polarisation vector. This means that, in addition to detecting the photon, researchers need to track its subsequent path. Such measurements are further complicated by the need to perform them above the atmosphere on satellites, which places severe constraints on the detector design.
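The principle can be sketched numerically. In Compton polarimetry the azimuthal scattering angle φ follows a modulation curve of the generic form N(φ) ∝ 1 + μ·P·cos 2(φ − φ₀), where P is the polarisation fraction, φ₀ the polarisation angle and μ the instrument's modulation factor. The toy below (all numbers assumed for illustration) samples such angles and recovers P and φ₀ from the second Fourier moments:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_azimuths(n, amplitude, phi0):
    """Rejection-sample azimuthal scattering angles from
    N(phi) proportional to 1 + amplitude * cos(2*(phi - phi0))."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        keep = rng.uniform(0.0, 1.0 + amplitude, n) < 1.0 + amplitude * np.cos(2.0 * (phi - phi0))
        out.extend(phi[keep])
    return np.array(out[:n])

def fit_modulation(phi):
    """Recover modulation amplitude and phase from the 2nd Fourier moments."""
    c = np.mean(np.cos(2.0 * phi))
    s = np.mean(np.sin(2.0 * phi))
    return 2.0 * np.hypot(c, s), 0.5 * np.arctan2(s, c)

true_mu100 = 0.35   # modulation factor for 100% polarised flux (assumed)
true_pol = 0.6      # true polarisation fraction (assumed)
true_phi0 = 0.3     # true polarisation angle in radians (assumed)

phi = sample_azimuths(200_000, true_mu100 * true_pol, true_phi0)
mu, phase = fit_modulation(phi)
print(f"measured polarisation ~ {mu / true_mu100:.2f}, angle ~ {phase:.2f} rad")
```

The harder part in practice is knowing μ, which must come from detailed instrument simulations; that calibration, not the fit, dominates the systematic uncertainty.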


Recent progress has shown that, although challenging, polarisation measurements are possible. The most recent example comes from the POLAR mission, a Swiss, Polish and Chinese experiment fully dedicated to measuring the polarisation of GRBs. Launched to space in 2016 attached to a module for the China Space Station, POLAR took data from September 2016 to April 2017, and the team recently published its first results. Though these indicate that the emission from GRBs is likely unpolarised, the story appears to be more complex. The polarisation is found to be low when the full GRB emission is analysed, but when it is studied over short time intervals, a strong hint of high polarisation emerges, with a polarisation angle that changes rapidly during the GRB event. This rapid evolution of the polarisation angle, which has yet to be explained by the theoretical community, smears out the polarisation when the full GRB is analysed. Fully understanding this evolution, which could hint at an evolving magnetic field, will require finer time-binning and more precise measurements, and therefore more statistics.
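The smearing effect is easy to demonstrate with the same toy modulation-curve picture (numbers again assumed for illustration): summing time slices that are each highly polarised, but whose polarisation angle rotates through the burst, yields a time-integrated curve with essentially no net modulation:

```python
import numpy as np

mu_slice = 0.35 * 0.8  # modulation of each time slice: highly polarised (assumed)
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)

# Ten equal-intensity time slices whose polarisation angle rotates
# uniformly through 180 degrees over the course of the burst.
angles = np.linspace(0.0, np.pi, 10, endpoint=False)
total = sum(1.0 + mu_slice * np.cos(2.0 * (phi - a)) for a in angles)

# Residual modulation of the time-integrated curve (2nd Fourier moments):
c = np.mean(total * np.cos(2.0 * phi)) / np.mean(total)
s = np.mean(total * np.sin(2.0 * phi)) / np.mean(total)
print(f"residual modulation: {2.0 * np.hypot(c, s):.3f}")  # ~ 0
```

Each slice alone would show the full modulation of 0.28; only time-resolved analysis, which needs far more photons per bin, can recover it.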

POLAR-2

Two future instruments capable of providing such detailed measurements are currently being developed. The first, POLAR-2, is the follow-up to the POLAR mission and was recently recommended to become a CERN-recognised experiment. POLAR-2 will be an order of magnitude more sensitive (thanks to larger statistics and lower systematics) than its predecessor, and should therefore be able to answer most of the questions raised by the recent POLAR results. The experiment will also play an important role in detecting extremely weak GRBs, such as those expected from GW events. POLAR-2, which will be launched in 2024 to the under-construction China Space Station, could well be followed by a similar but slightly smaller instrument called LEAP, which recently progressed to the final stage of a NASA selection process. If successful, LEAP would join POLAR-2 in orbit in 2025, aboard the International Space Station.

Apart from dedicated GRB polarimeters, progress is also being made with other upcoming instruments, such as NASA's Imaging X-ray Polarimetry Explorer and the China–ESA enhanced X-ray Timing and Polarimetry mission, which aim to perform the first detailed polarisation measurements of a range of astrophysical objects in the X-ray band. With the first POLAR measurements now published, and more expected soon, the 2020s should see the start of a new type of astrophysics, one that adds yet another parameter to multi-messenger exploration.
