Despite it being our galactic home, many open questions remain about the origin and evolution of the Milky Way. To answer such questions, astronomers study individual stars and clusters of stars within our galaxy as well as those in others. Using data from the European Space Agency’s Gaia satellite, which is compiling the largest and most precise 3D map of our galaxy by surveying an unprecedented one per cent of the Milky Way’s 100 billion or so stars, an international group has discovered a stream of stars spread across the night sky with peculiar characteristics. The stars appear not only to be very old, but also very similar to one another, indicating a common origin.
The discovered stream of stars, called C-19, is spread over tens of thousands of light years and appears to be the remnant of a globular cluster. A globular cluster is a very dense clump of stars with a typical total mass of 10⁴ or 10⁵ solar masses, the centre of which can be so dense that stable planetary systems cannot form due to gravitational disruptions from neighbouring stars. Additionally, the clusters are typically very old: estimates based on the luminosity of dead cooling remnants (white dwarfs) reveal some to be up to 12.8 billion years old, in stark contrast to neighbouring stars in their host galaxies. How these clusters originate, form and end up in their host galaxies remains poorly understood.
The stars appear not only to be very old, but also very similar to one another, indicating a common origin
One way to discern the age of globular clusters is to study the elemental composition of the stars within them. This is often expressed as the metallicity, which is the ratio of all elements heavier than hydrogen and helium (confusingly referred to as metals in the astronomical community) to these two light elements. Hydrogen and helium were produced during the Big Bang, while heavier elements have been forged in stars, implying that the first generation of stars had zero metallicity and that metallicity increases with each generation. Until recently the lowest metallicities of stars in globular clusters were 0.2% that of the Sun. This “lower floor” in metallicity was thought to put constraints on their maximum age and size, with lower-metallicity clusters thought to be unable to survive to this day. The newly discovered stream, however, has metallicities lower than 0.05% that of the Sun, changing this perception.
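Astronomers usually quote metallicity on a logarithmic scale relative to the Sun; as a rough translation of the percentages above (a sketch using the standard definition):

$$
[\mathrm{Fe}/\mathrm{H}] \;=\; \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star} \;-\; \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot},
$$

so the previous floor of 0.2% solar corresponds to [Fe/H] ≈ log₁₀(0.002) ≈ −2.7, while the C-19 stars, at below 0.05% solar, sit at [Fe/H] ≲ −3.3.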
Captured clusters
The stars in the recently observed C-19 stream are no longer a dense cluster. Rather, they all appear to follow the same orbit within our galaxy, in a plane almost perpendicular to the galactic disk in which we orbit the galactic centre. This similarity in orbit, together with their very similar metallicity and general chemical content, indicates that they once formed a globular cluster that was absorbed by the Milky Way. The orbital dynamics further indicate that the capture took place when the potential well of the Milky Way was significantly smaller than it is now, implying that this cluster fell into our galaxy long ago. Since then, the once dense cluster has been heated and smeared out as it orbited the galactic centre, through interactions with the disk as well as with the potential dark-matter halo.
The discovery, published in Nature, does not directly answer the question of where and how globular clusters were formed. It does, however, provide us with a nearby laboratory in which to study issues such as cluster and galaxy formation, the merging of such objects and the subsequent destruction of the cluster through interactions with both baryonic and potential dark matter. This particular cluster, furthermore, consists of some of the oldest stars found, and could have formed before the re-ionisation of the universe, which is thought to have taken place between 150 million and one billion years after the Big Bang. Further information about such ancient objects can be expected soon thanks to the recently launched James Webb Space Telescope. This instrument will be able to see some of the earliest formed galaxies, and can thereby provide additional clues on the origin of the fossils now found within our own galaxy.
In the 1960s, the creators of the Standard Model made a smart choice: while all charged fermions came in pairs, with left-handed and right-handed components, neutrinos were only left-handed. This “handicap” of neutrinos allowed physicists to accommodate in the most economical way important features of the experimental data at that time. First, such left-handed-only neutrinos are naturally massless, and second, individual leptonic flavours (electron, muon and tau) are automatically conserved.
It is now well established that neutrinos have masses and that the neutrino flavours mix with each other, just as quarks do. If this had been known 55 years ago, Weinberg’s seminal 1967 work “A Model of Leptons” would have been different: in addition to the left-handed neutrinos, it would very likely also contain their right-handed counterparts. The structure of the Standard Model (SM) dictates that these new states, if they exist, are the only singlets with respect to the weak-isospin and hypercharge gauge symmetries and thus do not participate directly in electroweak interactions (see “On the other hand” figure). This makes right-handed neutrinos (also referred to as sterile neutrinos, singlet fermions or heavy neutral leptons) very special: unlike charged quarks and leptons, which get their masses from the Yukawa interaction with the Brout–Englert–Higgs field, the masses of right-handed neutrinos depend on an additional parameter – the Majorana mass – which is not related to the vacuum expectation value and which results in the violation of lepton-number conservation. As such, right-handed neutrinos are also sometimes referred to as Majorana leptons or Majorana fermions.
Leaving aside the possible signals of eV-scale neutrino states reported in recent years, all established experimental signatures of neutrino oscillations can be explained by the SM with the addition of two heavy neutral leptons (HNLs). If there were only one HNL, then two out of three SM neutrinos would be massless; with two HNLs, only one of the SM neutrinos is massless – a possibility that is not excluded experimentally. Any larger number of HNLs is also possible.
The simplest way to extend the SM in the neutrino sector is to add several HNLs and no other new particles. Already this class of theories is very rich (different numbers of HNLs and different values of their masses and couplings imply very different phenomenology), and contains several different scenarios explaining not only the observed masses and flavour oscillations of the SM neutrinos but also other phenomena that are not accommodated by the SM. The scenario in which the Majorana masses of right-handed neutrinos are much higher than the electroweak scale is known as the “type I see-saw model”, first put forward in the late 1970s. The theory with three right-handed neutrinos (the same as the number of generations in the SM) with their masses below the electroweak scale is called the neutrino minimal standard model (νMSM), and was proposed in the mid-2000s.
Would these new particles be useful for anything else besides neutrino physics? The answer is yes. The first, lightest HNL N₁ may serve as a dark-matter particle, whereas the other two HNLs N₂,₃ not only “give” masses to active neutrinos but can also lead to the matter–antimatter asymmetry of the universe. In other words, the SM extended by just three HNLs could solve the key outstanding observational problems of the SM, provided the masses and couplings of the HNLs are chosen in a specific domain.
The leptonic extension of the SM by right-handed neutrinos is quite similar to the gradual adaptation of electroweak theory to experimental data during the past 50 years. While the bosonic sector of the electroweak model has remained intact since 1967 – with the W and Z bosons discovered in 1983 and the Higgs boson in 2012 – the fermionic sector evolved from one to two to three generations, revealing the remarkable symmetry between quarks and leptons. It took about 20 years to find all the quarks and leptons of the third generation. How much time it will take to discover HNLs, if they indeed exist, depends crucially on their masses.
The value of the Majorana mass, and therefore the physical mass of an HNL, is arbitrary from a theoretical point of view and cannot be found from neutrino-oscillation experiments. The famous see-saw formula that relates the observed masses of the active neutrinos to the Majorana masses of HNLs has a degeneracy: change the Yukawa couplings of HNLs to neutrinos by a factor x and the HNL masses by a factor x², and the active neutrino masses and the physics of their oscillations remain intact. The scale of HNL masses thus can be any number from a fraction of an eV to 10¹⁵ GeV (see “Options abound” figure). Moreover, there could be several HNLs with very different masses. Indeed, even in the SM the masses of charged fermions, though they share a similar origin, differ by almost six orders of magnitude.
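In its simplest one-flavour form, the see-saw relation makes this degeneracy explicit (a schematic expression, ignoring order-one mixing-matrix factors):

$$
m_\nu \;\simeq\; \frac{(y\,v)^2}{M},
$$

where v ≈ 174 GeV is the Higgs vacuum expectation value, y the Yukawa coupling and M the Majorana mass. Rescaling y → xy together with M → x²M leaves m_ν untouched, which is why oscillation data alone cannot pin down the HNL mass scale.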
Motivated by the value of the active neutrino masses, the HNLs could be light, with masses of the order of 1 eV. Alternatively, similar to the known quarks and charged leptons, they could lie somewhere around the GeV or Fermi scale. Or they could be close to the grand-unification scale, 10¹⁵ GeV, where the strong, weak and electromagnetic interactions are thought to be unified. These possibilities have different theoretical and experimental consequences.
The case of the light sterile neutrino
The see-saw formula tells us that if the mass of HNLs is around 1 eV, their Yukawa couplings should be of the order of 10⁻¹². Such light sterile neutrinos could potentially be observed in neutrino experiments, as they can take part in oscillations together with the three active neutrino species. Several experiments – including LSND, GALLEX, SAGE, MiniBooNE and BEST – have reported anomalies in neutrino-oscillation data (the so-called short-baseline, gallium and reactor anomalies) that could be interpreted as a signal for the existence of light sterile neutrinos. However, it looks difficult, if not impossible, to reconcile the existence of these states with the recent negative results of other experiments such as MINOS+, MicroBooNE and IceCube, once additional constraints from β-decay, neutrinoless double-β decay and cosmology are taken into account.
The parameters of light sterile neutrinos required to explain the experimental anomalies are in strong tension with cosmological bounds (see “Cosmological bounds” figure). For example, their mixing angle with the ordinary neutrinos would have to be sufficiently large that these states would have been produced abundantly in the early universe, affecting its expansion rate during Big Bang nucleosynthesis and thus changing the abundances of the light elements. In addition, light sterile neutrinos would affect the formation of structure. Having been created in the hot early universe with relativistic velocities, they would have escaped from forming structures until they cooled down in much later epochs. In this so-called “hot dark matter” scenario, the smallest structures, which form first, and the larger ones, which require much more time to develop, would end up containing different amounts of dark matter. Moreover, the presence of such particles would affect baryon acoustic oscillations and therefore impact the value of the Hubble constant deduced from them.
Besides tensions between the experiments and cosmological bounds, light sterile neutrinos do not provide any solution to the outstanding problems of the SM. They cannot be dark-matter particles because they are too light, nor can they produce the baryon asymmetry of the universe as their Yukawa couplings are too small to give any substantial contribution to lepton-number violation at the temperatures (> 160 GeV) at which the anomalous electroweak processes with baryon non-conservation have a chance to convert a lepton asymmetry into a baryon asymmetry.
Three Fermi-scale heavy neutral leptons
Another possible scale for HNL masses is around a GeV, plus or minus a few orders of magnitude. Right-handed neutrinos with such masses do not interfere with active-neutrino oscillations because the corresponding length over which these oscillations may occur is far too small. As only two active-neutrino mass differences are fixed by neutrino-oscillation experiments, it is sufficient to have two HNLs N₂,₃ with appropriate Yukawa couplings to active neutrinos: to get the correct neutrino masses, these should not be smaller than ~10⁻⁸ (compared to the electron Yukawa coupling of ~10⁻⁶). These two HNLs may produce the baryon asymmetry of the universe, as we explain later, whereas the lightest singlet fermion, N₁, may interact with neutrinos much more weakly and thus can be a dark-matter particle (although unstable, its lifetime can greatly exceed the age of the universe).
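Plugging numbers into the schematic see-saw relation above shows where the ~10⁻⁸ figure comes from (an order-of-magnitude check, not a statement from the original analysis):

$$
y \;\sim\; \frac{\sqrt{m_\nu M}}{v} \;\approx\; \frac{\sqrt{0.05\ \mathrm{eV} \times 1\ \mathrm{GeV}}}{174\ \mathrm{GeV}} \;\approx\; 4\times10^{-8},
$$

taking m_ν ≈ 0.05 eV (the atmospheric mass scale) and M ≈ 1 GeV.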
Three main considerations determine the possible range of masses and couplings of the dark-matter sterile neutrino (see “Dark-matter constraints” figure). The first is cosmological production. If N₁ interacts too strongly, it would be overproduced in ℓ⁺ℓ⁻ → N₁ν reactions and make the abundance of dark matter larger than what is inferred from observations, providing an upper limit on its interaction strength. Conversely, the requirement to produce enough dark matter results in a lower bound on the mixing angle that depends on the conditions in the early universe during the epoch of N₁ production. Moreover, the lower bound completely disappears if N₁ can also be produced at very high temperatures by interactions related to gravity or at the end of cosmological inflation. The second consideration is X-ray data. Radiative N₁ → γν decays produce a narrow line that can be detected by X-ray telescopes such as XMM–Newton or Chandra, resulting in an upper limit on the mixing angle between sterile and active neutrinos. While this upper limit depends on the uncertainties in the distribution of dark matter in the Milky Way and other nearby galaxies and clusters, as well as on the modelling of the diffuse X-ray background, it is possible to marginalise over these to obtain very robust constraints.
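Two standard relations, not spelled out in the text but useful for orientation: in the two-body decay N₁ → γν of a non-relativistic sterile neutrino the photon carries half the mass, and the radiative width grows steeply with mass and mixing,

$$
E_\gamma \;=\; \frac{m_{N_1}}{2}, \qquad \Gamma_{N_1 \to \gamma\nu} \;\propto\; \sin^2(2\theta)\, m_{N_1}^5,
$$

so for keV-scale N₁ the line falls squarely in the band covered by X-ray telescopes, and the non-observation of such a line translates directly into an upper limit on the mixing angle θ at each mass.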
The third consideration for the sterile neutrino’s properties is structure formation. If N₁ is too light, a very large number density of such particles is required to make up the observed halo of a small galaxy. As HNLs are fermions, however, their number density cannot exceed that of a completely degenerate Fermi gas, placing a very robust lower bound on the N₁ mass. This bound can be further improved by taking into account that light dark-matter particles remain relativistic until late epochs and therefore suppress or erase density perturbations on small scales. As a result, they would affect the inner structure of the halos of the Milky Way and other galaxies, as well as the matter distribution in the intergalactic medium, in ways that can be observed via gravitationally lensed galaxies, gaps in stellar streams in galaxies and the spectra of distant quasars.
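The phase-space argument can be sketched in one line (a Tremaine–Gunn-style estimate, ignoring order-one factors). A degenerate Fermi gas of particles with mass m and momenta below p_F ≲ mσ has number density n ≤ g p_F³/(6π²ℏ³), so demanding that m·n reach the halo density ρ gives

$$
m^4 \;\gtrsim\; \frac{6\pi^2 \hbar^3 \rho}{g\,\sigma^3},
$$

which for assumed dwarf-galaxy parameters σ ~ 10 km s⁻¹ and ρ ~ 0.1 M_⊙ pc⁻³ yields m ≳ a few hundred eV.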
Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM
The upper limits on the interaction strength of sterile neutrinos fix the overall scale of active neutrino masses in the νMSM. The dark-matter sterile neutrino effectively decouples from the see-saw formula, making the mass of one of the active neutrinos much smaller than the observed solar and atmospheric neutrino mass differences and fixing the masses of the two other active neutrinos to approximately 0.009 eV and 0.05 eV (for the normal ordering), or to the near-degenerate value 0.05 eV (for the inverted ordering).
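These numbers follow directly from the measured mass-squared splittings once the lightest mass is set to (nearly) zero; using current best-fit values (quoted here for illustration only):

$$
m_2 \simeq \sqrt{\Delta m^2_{\rm sol}} \approx \sqrt{7.4\times10^{-5}\ \mathrm{eV}^2} \approx 0.009\ \mathrm{eV}, \qquad
m_3 \simeq \sqrt{|\Delta m^2_{\rm atm}|} \approx \sqrt{2.5\times10^{-3}\ \mathrm{eV}^2} \approx 0.05\ \mathrm{eV},
$$

while in the inverted ordering both heavier states sit near √|Δm²_atm| ≈ 0.05 eV.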
HNLs at the GeV scale and beyond
Our universe is baryon-asymmetric – it does not contain antimatter in amounts comparable to matter. Though the SM satisfies all three “Sakharov conditions” necessary for the generation of a baryon asymmetry (baryon-number non-conservation, C- and CP-violation, and departure from thermal equilibrium), it cannot explain the observed baryon asymmetry. The Kobayashi–Maskawa CP-violation is too small to produce any substantial effects, and departures from thermal equilibrium are tiny at the temperatures at which the anomalous fermion-number non-conserving processes are active. This is not the case with two GeV-scale HNLs: these particles are not in thermal equilibrium at temperatures above a few tens of GeV, and CP violation in their interactions with leptons can be large. As a result, a lepton asymmetry is produced, which is converted into a baryon asymmetry by the baryon-number violating reactions of the SM.
The requirement to generate the baryon asymmetry in the νMSM puts stringent constraints on the masses and couplings of HNLs (see “Baryon-asymmetry constraints” figure). The mixing angle of these particles cannot be too large, otherwise they equilibrate and erase the baryon asymmetry; nor can it be below a certain value, as this would make the active neutrino masses too small. Their mass should be larger than that of the pion, otherwise their decays in the early universe would spoil the success of Big Bang nucleosynthesis. In addition, the masses of the two HNLs should be close to each other so as to enhance CP-violating effects. Interestingly, HNLs with these properties are within the experimental reach of existing and future accelerators, as we shall see.
The final possible choice of HNL masses is associated with the grand-unification scale, ~10¹⁵ GeV. To get the correct neutrino masses, the Yukawa couplings of a pair of these superheavy particles should be of the order of one, in which case the baryon asymmetry of the universe can be produced via thermal leptogenesis and anomalous baryon- and lepton-number non-conservation at high temperatures. The third HNL, if it interacts extremely weakly, may play the role of a dark-matter particle, as described previously. Another possibility is that there are three superheavy HNLs and one light one, to play the role of dark matter. This model, like that with HNL masses of the order of the electroweak scale, may therefore solve the most pressing problems of the SM. The only trouble is that we will never be able to test it experimentally, since the masses of N₂,₃ would be beyond the reach of any current or future experiment.
Experimental opportunities
It is very difficult to detect HNLs experimentally. Indeed, if the masses of these particles are within the reach of current and planned accelerators, they must interact orders of magnitude more weakly than via the ordinary weak interactions. As for the dark-matter sterile neutrino, the most promising route is indirect detection with X-ray space telescopes. The new X-ray spectrometer XRISM, which is planned for launch this year, has great potential to unambiguously detect a signal from dark-matter decay. Like many astrophysical observatories, however, it will not be able to determine the particle origin of this signal. Thus, complementary laboratory searches are needed. One experimental proposal that claims sufficient sensitivity to enter the cosmologically relevant region is HUNTER, based on radioactive-atom trapping and high-resolution decay-product spectrometry. Sterile neutrinos with masses of around a keV can also show up as a kink in the β-decay spectrum of radioactive nuclei, as discussed in the ambitious PTOLEMY proposal. The current generation of experiments that study β-decay spectra – KATRIN and Troitsk nu-mass – also perform searches for keV HNLs, but they are sensitive to significantly larger mixing angles than required for a dark-matter particle. Extending the KATRIN experiment with a multi-pixel silicon drift detector, TRISTAN, will significantly improve the sensitivity here.
The most promising prospects for finding the N₂,₃ responsible for neutrino masses and baryogenesis are experiments at the intensity frontier. For HNL masses below 5 GeV (the beauty threshold) the best strategy is to direct proton beams at a target to create K, D or B mesons whose decays produce HNLs, and then to search for HNL decays through “nothing → leptons and hadrons” processes in a near detector. This strategy was used previously by PS191 at CERN’s Proton Synchrotron (PS); by NOMAD, BEBC and CHARM at the Super Proton Synchrotron (SPS); and by NuTeV at Fermilab. There are several proposals for future experiments along these lines. The proposed SHiP experiment at the SPS Beam Dump Facility has the best potential, as it can cover almost all of the parameter space down to the lowest bound on coupling constants coming from neutrino masses. The SHiP collaboration has already performed detailed studies and beam tests, and the experiment is under consideration by the SPS and PS Experiments Committee. A smaller-scale proposal, SHADOWS, covers part of the interesting parameter space.
The search for HNLs can also be carried out at the near detectors of DUNE at Fermilab and T2K/T2HK in Japan, which are due to come online later this decade. The LHC experiments ATLAS, CMS, LHCb, FASER and SND, as well as the proposed CODEX-b facility, can also be used, albeit with fewer chances of entering deeply into the cosmologically interesting part of the HNL parameter space. The decays of HNLs could also be searched for at future large-scale detectors such as MATHUSLA. And, going to larger HNL masses, breakthroughs can be made at the proposed Future Circular Collider FCC-ee by studying the process Z → νN with a displaced vertex (DV) corresponding to the subsequent decay of the N to available channels (see “Electron coupling” figure).
Conclusions
Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM. But the situation is very different from that in the period preceding the discovery of the Higgs boson, when the consistency of the SM together with other experimental results allowed us to firmly conclude that either the Higgs boson had to be discovered at the LHC, or new physics beyond the SM must show up. Although we know for sure that the SM is incomplete, we have no firm prediction of where to search for new particles, nor of their masses, spins, interaction types and strengths.
Experimental guidance and historical experience suggest that the SM should be extended in the fermion sector, and the completion of the SM with three Majorana fermions solves the main observational problems of the SM at once. If this extension of the SM is correct, the only new particles to be discovered in the future are three Majorana fermions. They have remained undetected so far because of their extremely weak interactions with the rest of the world.
The second long shutdown of the CERN accelerator complex (LS2) is complete. After three years of intense work at all levels across the accelerators and experiments, beams are expected in the LHC in April. For the accelerators, the main LS2 priorities were the consolidation of essential safety elements (dipole diodes) for the LHC magnets, several interventions for the High-Luminosity LHC (HL-LHC) and associated upgrades of the injection chain via the LHC Injectors Upgrade project. Contributing to these and many other planned parallel activities, the CERN vacuum team has completed an intense period of work in the tunnels, workshops and laboratories.
Particle beams require extremely low pressure in the pipes in which they travel to ensure that their lifetime is not limited by interactions with residual gas molecules and to minimise backgrounds in the physics detectors. During LS2, all of the LHC’s arcs were vented to the air after warm-up to room temperature and all welds were leak-checked after the diode consolidation (with only one leak found among the 1796 tests performed). The vacuum team also replaced or consolidated around 150 turbomolecular pumps acting on the cryogenic insulation vacuum. In total, 2.4 km of non-evaporable-getter (NEG)-coated beampipes were also opened to the air at room temperature – an exhaustive programme of work spanning mechanical repair and upgrade (across 120 weeks), bake-out (90 weeks) and NEG activation (45 weeks). The vacuum level in these beampipes is now in the required range, with most of the pressure readings below 10⁻¹⁰ mbar.
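To get a feel for why pressures of order 10⁻¹⁰ mbar matter, a back-of-the-envelope estimate of the beam lifetime against nuclear scattering off residual gas can be coded up in a few lines; the cross-section and gas temperature below are illustrative assumptions, not values from this article:

```python
# Back-of-the-envelope estimate: beam lifetime against nuclear
# scattering off residual gas at LHC-like vacuum levels.
# The cross-section and temperature are illustrative assumptions.

K_B = 1.380649e-23  # Boltzmann constant (J/K)
C = 2.998e8         # speed of light (m/s)

def gas_density(p_mbar, temp_k=293.0):
    """Residual-gas number density (m^-3) from the ideal-gas law."""
    p_pa = p_mbar * 100.0  # 1 mbar = 100 Pa
    return p_pa / (K_B * temp_k)

def beam_lifetime(p_mbar, sigma_m2=4e-30):
    """Lifetime tau = 1/(n*sigma*c) for an ultra-relativistic beam,
    with an assumed ~40 mbarn nuclear cross-section per molecule."""
    n = gas_density(p_mbar)
    return 1.0 / (n * sigma_m2 * C)

tau = beam_lifetime(1e-10)  # pressure quoted in the text
print(f"density:  {gas_density(1e-10):.2e} m^-3")
print(f"lifetime: {tau / 3600:.0f} hours")
```

With these assumed numbers the lifetime comes out at roughly 10⁵ hours, illustrating why residual gas becomes a negligible loss channel at the quoted vacuum level.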
The vacuum control system was also significantly improved by reducing single points of failure, removing confusing architectures and, for the first time, using mobile vacuum equipment controlled and monitored wirelessly. In view of the higher LHC luminosity and the consequent higher radioactivity dose during Run 3 and beyond, the vacuum group has developed and installed new radiation-tolerant electronics controlling 100 vacuum gauges and valves in the LHC dispersion suppressors. This was the first step of a larger campaign to be implemented in the next long-shutdown, including the production of 1000 similar electronics cards for vacuum monitoring. In parallel, the control software was renewed. This included the introduction of resilient, scalable and self-healing web-based frameworks used by the biggest names in industry.
In the LHC experimental areas, the disassembly of the vacuum chambers at the beginning of LS2 required 93 interventions and 550 person-hours of work in the caverns, with the most impressive changes in vacuum hardware implemented in CMS and LHCb (see “Interaction points” images). In CMS, a new 7.3 m-long beryllium beampipe with an internal diameter of 43.4 mm was installed, and 12 new aluminium chambers were manufactured, surface-finished and NEG-coated at CERN. The mechanical installation, including alignment, pump-down and leak detection, took two months, while the bake-out and venting with ultra-pure neon required a further month. In LHCb, the vacuum team contributed to the new Vertex Locator (VELO). Its “RF box” – a delicate piece of equipment filled with silicon detectors, electronics and cooling circuits, designed to protect the VELO without affecting the beams – sits just a few mm from the beam, with an aluminium window thinned down to 150 μm by chemical etching and then NEG-coated. As the VELO encloses the RF box and the two volumes are under separate vacua, the pump-down is a critical operation: pressure differences across the thin window must stay below 10 mbar to ensure its mechanical integrity. The last planned activity for the vacuum team in LS2, the bake-out of the ATLAS beampipes, took place in February.
Vacuum challenges
From this list of successful achievements, it could be assumed that vacuum activities in LS2 went smoothly, with the team applying well-known procedures and knowledge accumulated over decades. However, as might be expected when working with several teams in parallel and at the limits of technology – with around 100 km of piping under vacuum for the LHC alone – this is far from the case. Since the beginning of LS2, CERN vacuum experts have faced several technical issues and obstacles, a few of which deserve a mention (see “Overcoming the LS2 vacuum obstacles” panel). All these headaches have challenged our regular way of working and made us reflect on procedures, communication and reporting, and technical choices.
But the real moment of truth is yet to come, when the intensity of the LHC beams reaches the new nominal value boosted by the upgraded injectors. Under the spotlight will be surface electron emission, which drives the formation of electron clouds and their consequences, including beam instabilities and heat load on the cryogenic system. The latter showed anomalously high values during Run 2, with strong inhomogeneity along the ring indicating uneven surface conditioning. The question is: what will happen to the heat load during Run 3? Thanks to the effort and achievements of a dedicated task force, the scrubbing run and the physics runs that follow will provide a detailed answer in a few months. Last year, the task force installed additional instrumentation in the cryogenic lines at selected positions and, after many months of detective work, identified the most probable culprit of the puzzling heat-load values: the formation of a non-native copper oxide layer during electron bombardment of hydroxylated copper surfaces at cryogenic temperatures. UV exposure in selected gases, local bake-out and plasma etching are among the mitigation techniques we are going to investigate.
The HL-LHC horizon
LS2 might only just have finished but we are already thinking about LS3 (2026–2028), whose leitmotif will be the finalisation of the HL-LHC project. Thanks to more focused beams at the collision points and an increased proton bunch population, the higher beam luminosity at CMS and ATLAS (peaking at a levelled value of 5 × 10³⁴ cm⁻² s⁻¹) will enable an integrated luminosity of 3000 fb⁻¹ in 12 years. For the HL-LHC vacuum systems, this requires a completely new design of the beam screens in the focusing area of the experiments, the implementation of carbon thin-film coatings in the unbaked beampipes to cope with the lower secondary-electron-yield threshold, and radiation-compatible equipment near the experiments along with radiation-tolerant electronics down to the dispersion-suppressor zones.
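As a quick consistency check of those two figures (assuming, illustratively, around 5 × 10⁶ s of equivalent time at the levelled peak per year):

$$
L_{\rm int} \;\approx\; 5\times10^{34}\ \mathrm{cm^{-2}\,s^{-1}} \times 5\times10^{6}\ \mathrm{s} \;=\; 2.5\times10^{41}\ \mathrm{cm^{-2}} \;=\; 250\ \mathrm{fb^{-1}}
$$

per year, using 1 fb⁻¹ = 10³⁹ cm⁻², which over 12 years indeed accumulates to about 3000 fb⁻¹.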
Overcoming the LS2 vacuum obstacles
Forgotten sponge
During the first beam commissioning of the PS, anomalously high proton losses were detected, generating pressure spikes and a high radioactive dose near one of the magnets. An endoscopic inspection (see image above, left) revealed the presence of an orange sponge that had been used to protect the vacuum-chamber extremities before welding (and which had been left behind due to a miscommunication between the teams involved), blocking the lower half of the beam pipe. After days of investigation with the beams and interventions by technicians, the chamber was cut open and the offending object removed.
Leaky junctions
Despite having passed all tests before installation, new corrugated thin-walled vacuum chambers installed in the Proton Synchrotron Booster to reduce eddy-current effects suffered vacuum leaks after a few days of magnet pulsing. The leaks appeared in lip-welded junctions in several chambers, indicating a systematic production issue. Additional spare chambers were produced and, as the leaks remain tolerable, a replacement is planned during the next year-end technical stop. Until then, this issue will hang like the Sword of Damocles over the heads of the vacuum teams in charge of the LHC’s injectors.
Powering mismatch
During the first magnet tests of the TT2 transfer line, a vacuum sector was suddenly air-vented. The support of the vacuum chambers was found to be broken, two bellows were destroyed (see image, middle) and the vacuum chamber was twisted. The origin of the problem was a different powering scheme of the magnet surrounding the chamber: faster magnetic pulses generated higher eddy currents and Lorentz forces that were incompatible with the beampipe design and supports. The problem was solved by inserting a thin insulation layer between vacuum flanges to interrupt the eddy current, a practice common in other parts of the injectors.
QRL quirks
The LHC’s helium transfer lines (QRL) require regular checks, especially after warm-up and cool-down. During LS2, the vacuum team installed two additional turbomolecular pumps to compensate for the rate increase of a known leak in sector B12, allowing operation until at least the next long shutdown. Another troubling leak, which opened only at helium pressures above 7 bar, was detected in a beam-screen cooling circuit. Fixing it would have required the replacement of the nearby magnet, but the leak turned out to be tolerable at cryogenic temperatures, although its on/off behaviour remains to be fully elucidated.
Damaged disks
Following the incident in sector 3–4 shortly after LHC startup, the beam vacuum in the LHC arcs has been protected against overpressure by 832 “burst disks”. Each 30 μm-thick stainless-steel disk membrane nominally breaks when the pressure in the vacuum system rises 0.5 bar above the tunnel air pressure. Despite the careful venting procedure, 19 disks were found broken or damaged before the re-pumping of the arcs. Subsequent lab tests showed no damage in spare disks cycled 30 times at 1.1 bar. The vacuum teams replaced the damaged disks and are trying to understand the cause.
Buckled fingers
Before cool-down, a 34 mm-diameter ball fitted with a 40 MHz transmitter is pushed through the LHC beam pipes to check for obstacles. The typical defect is buckling of the RF fingers in the plug-in modules (PIMs) that maintain electrical continuity as the machine thermally contracts. Unfortunately, in two cases the ball arrived damaged, and it took days to collect and identify all the broken pieces. A buckled finger was successfully found in sector 8–1, but another, in sector 2–3 (see image, right), was revealed only when the pilot beam circulated. This forced a re-warming of the arc, venting of the beampipe and the replacement of the damaged PIM, followed by additional re-cooling and aperture and electrical tests.
The first piece of vacuum equipment concerned is the “VAX”: a compact set of components, pumps, valves and gauges installed in an area of limited access and relatively high radioactivity between the last focusing magnet of the accelerator and the high-luminosity experiments. The VAX module is designed to be fully compatible with robot intervention, enabling leak detection, gasket change and complete removal of parts to be carried out remotely and safely.
Despite the massive shielding between the experiment caverns and the accelerator tunnels, secondary particles from high-energy proton collisions can reach accelerator components outside the detector area. At nominal HL-LHC luminosity, up to 3.8 kW of power will be deposited in the tunnel on each side of CMS and ATLAS, of which 1.2 kW is intercepted by the 60 m-long sequence of final focusing magnets. Such a heat load is incompatible with magnet cooling at 1.9 K and, in the long run, could cause the insulation of the superconducting cables to deteriorate. To avoid this issue, the vacuum team designed a new beam screen equipped with tungsten-alloy shielding so that at least half of the power is captured before being transmitted to the magnet cold mass.
All eyes are on the successful restart of the CERN accelerator complex and the beginning of LHC Run 3
The new HL-LHC beam screens took several years of design and manufacturing optimisation, multi-physics simulations and tests with prototypes. The most intense study concerned the mechanical integrity of this complicated object when the hosting magnet undergoes a quench, causing the current to drop from nearly 20 kA to 0 kA in a few tenths of a second. The manufacturing learning phase is now complete and the beam-screen facility will be ready this year, including the new laser-welding robot and cryogenic test benches. Carbon coating is the additional novelty of the HL-LHC beam screens, with the purpose of suppressing electron clouds (see “Beam screen” image). At the beginning of LS2 the first beam screens were successfully coated in situ, involving a small robot carrying carbon and titanium targets, and magnets for plasma confinement during deposition.
The vacuum team is also involved in the production of crab cavities, another breakthrough brought by the HL-LHC project. The surfaces of these complex-shaped niobium objects are treated by a dedicated machine that can provide rotation while chemically polishing with a mixture of nitric, hydrofluoric and phosphoric acids. The vacuum system of the cryomodules in which the cavities are cooled to 2 K was also designed at CERN.
Outlook
Vacuum technology for particle accelerators has been pioneered by CERN since its early days, with the Intersecting Storage Rings bringing the most important breakthroughs. Over the decades, the CERN vacuum group has brought together surface-physics specialists, thin-film coating experts and galvanic-treatment professionals, together with teams of designers and colleagues dedicated to the operation of large vacuum equipment. In doing so, CERN has become one of the world’s leading R&D centres for extreme vacuum technology, contributing to major existing and future accelerator projects at CERN and beyond. With the HL-LHC in direct view, the vacuum team looks forward to tackling new challenges. For now, though, all eyes are on the successful restart of the CERN accelerator complex and the beginning of LHC Run 3.
What attracted you to the position of DESY director of particle physics?
DESY is one of the largest and most important particle-physics laboratories in the world. I was born and grew up in Hamburg and took my first career steps at DESY during my university studies. I received my PhD there in 1999 and returned as a scientist in 2016, so I know the lab very well. It is a great lab and department, with many opportunities and so many excellent people. I am sure it will be fun to work with all of them and to develop a strategy for the future.
What previous management roles do you think will serve you best at DESY?
Being ATLAS deputy spokesperson from 2013 to 2017 was one of the best roles I’ve had in my career, and I benefitted hugely from the experience. I was fortunate to have an excellent spokesperson in Dave Charlton and I learned a lot from him, as well as from many others I worked with. I try to understand enough details to make educated decisions but not to micromanage. I also think motivating people, listening to them and promoting their talents is key to achieving common goals.
What are the current and upcoming experiments at DESY?
The biggest ongoing experimental activities in particle physics are the ATLAS and CMS experiments. We have large groups in both, and for each we are building a tracker end-cap based on silicon-strip detectors at our detector assembly facility, primarily together with German universities. This is a huge undertaking, currently in progress for the HL-LHC. Another important activity is building a vertex detector to be installed in 2023 at the Belle II experiment running at KEK in Japan. We also have a significant programme of local experiments covering axion searches. One of the big projects next summer will be the start of the ALPS II experiment, which will look for axion-like particles by shining an intense laser on a “wall” and seeing if any laser photons appear on the other side, having been transformed into axions by a large magnetic field. We have two other axion experiments planned: BabyIAXO, which looks for axion-like particles coming from the Sun, and for which construction is now starting; and MadMax, which looks for axions in the dark-matter halo. Axions were postulated by Peccei and Quinn to solve the strong-CP problem but are also a good candidate for dark matter if they exist. A further experiment, LUXE, which DESY theorist Andreas Ringwald and I proposed, would deliver the European XFEL’s 16.5 GeV electron beam into a high-intensity laser so that the beam electrons experience a very strong electromagnetic field in their rest frame. LUXE would reach the so-called Schwinger limit, and allow us to see what happens when QED becomes strong and transitions from the perturbative to the non-perturbative regime.
There are many accelerators at DESY, such as PETRA, where the gluon was discovered in the 1970s. Today, PETRA is one of the best synchrotron-radiation facilities in the world and is used for a wide range of science, for example imaging of small structures such as viruses. It is an application of accelerators where the impact on society is more direct and obvious than it is in particle physics.
How can we increase the visibility of particle physics to society?
This is a very important point. The knowledge we gain from particle physics today is clear, but it is less clear how we can transfer this knowledge to help solve pressing problems in society, such as climate change or a pandemic. Humankind desires to increase its knowledge, and it is important that we continue with fundamental research purely to increase our knowledge. We have already come so far in the past 5000 years. And many technical innovations were made for that purpose alone but then resulted in transformative changes. Take the idea of the accelerator. It was developed at Berkeley during the 1930s with no particular application in mind, but today is used routinely around the world to prolong life by irradiating tumours. Or the transistor, without which there would not be any computers, which was developed in the 1940s based on the then-emerging understanding of atoms. It is important to promote both targeted research that directly addresses problems as well as fundamental research, which every now and again will result in groundbreaking changes. When thinking about our projects and experiments, we need to keep in mind if and how any of our technical developments can be made in a way that addresses big societal problems.
It is important that we inspire the general public, in particular the young, about science. Educational programmes are key, such as Beamline for Schools, which is one of CERN’s flagship schemes. This was hosted by DESY during Long Shutdown 2 and a team at DESY will continue the collaboration.
CERN recently launched its Quantum Technology Initiative. Does DESY have plans in this area?
DESY received funding from the state of Brandenburg to build a centre for quantum computing, the CTQA, which is located at DESY’s Zeuthen site. Karl Jansen, one of our scientists there, has spent most of his career working on lattice-QCD calculations and is leading this effort. I myself am involved in research using quantum computing for particle tracking at the LUXE experiment. The layout of the tracker for this experiment is simpler than those of the LHC experiments, which is why we want to try it here first. We have to understand how to use quantum computers in conjunction with classical computers to solve actual problems efficiently. There is no doubt that quantum computing can solve problems that are otherwise intractable, and we also think quantum computers will be able to solve some problems more efficiently, using fewer resources than classical machines. That could also contribute to reducing the impact of computing on climate change.
What was your participation in the 2020 update of the European strategy for particle physics (ESPPU) and how have things progressed since?
It was exciting to be part of the ESPPU drafting process. I was very impressed by the sincerity and devotion of the people in the hall in Bad Honnef when the process concluded. There was a lot of respect and understanding of the different views on how to balance the scientific ambitions with the realities of funding, R&D needs and other factors.
The ESPPU recommended first and foremost to complete the HL-LHC upgrade. This is a big undertaking and demands our focus. For the future, an electron–positron Higgs factory is the highest priority, in addition to ramping up accelerator R&D. Last year an accelerator R&D roadmap was prepared following the ESPPU recommendation. Very different directions are laid out, and now the task is to understand how to prioritise and streamline them, and to ensure the relevant aspects have progressed significantly by the next update (probably in 2026). For instance, CERN’s main focus is R&D on the next generation of magnets for a new hadron machine, while DESY has a strong programme in plasma-wakefield accelerators for electron machines. But both DESY and CERN are also contributing to other aspects, and there are other labs and universities in Europe that make important contributions. At DESY we also try to exploit synergies between developing new accelerators for photon science and for high-energy physics.
What is the best machine to follow the LHC?
The next machine needs to be a collider that can measure the Higgs properties at the per-cent and even in some cases the per-mille level – a Higgs factory. In addition to the excellent scientific potential, factors to consider are timescale and cost, but also making it a “green” accelerator and considering its innovation potential. Finding a good balance there is not easy, and there are several proposals that were studied as part of the ESPPU.
What are your three most interesting open questions in particle physics?
Mine are related to the Higgs boson. One is the matter–antimatter asymmetry, because the exact form of the electroweak phase transition is closely related to the Higgs field. If the transition was smooth, it cannot explain the matter–antimatter asymmetry; if it was violent, it could potentially explain it. We should be able to learn something about this with the HL-LHC, but to know for sure we need a future collider. The second question is: why is there a muon? Flavour physics fascinates me, and the Higgs boson is the only particle that distinguishes between the electron, the muon and the tau, which is why I would like to study it extensively. The third question is: what is dark matter? One intriguing possibility is that the Higgs boson decays to dark-matter particles, and with a Higgs factory we could measure this even if it happens for only 0.3% of all Higgs bosons. The Higgs boson is so important for understanding our universe – that’s why we need a Higgs factory, although we will already learn a lot from the LHC and HL-LHC.
Today, women make up more than 30% of the scientists at DESY, whereas in 2005 it was less than 10%
Is the community doing a good job in communicating beyond the field?
It is crucial that scientists communicate scientific facts, especially now when there are “post-truth” tendencies in society. As people who are publicly funded, we have a duty to communicate our work to the public. Many people are excited about the origin of the universe and the fundamental laws of physics we are studying. Activities such as the CERN and DESY open days attract many visitors. We also see really good turnouts at public lectures as well as during our “science on tap” activity in Hamburg. At one of these events I gave a talk about the first minutes of the universe; the bar was packed and people had many questions. We should all spend some of our time communicating science. Of course, we mostly have to do the actual research, otherwise we have nothing to communicate.
You are the first female director in DESY’s 60-year history. What do you think about the situation for women in physics, for instance the “25 by ‘25” initiative?
The 25 by ‘25 initiative is good. We have been fortunate at DESY that there was a strong drive from the German government. Research funding has increased a lot during the past 10–15 years and there was dedicated funding available to attract women to large research centres. Today, women make up more than 30% of the scientists at DESY, whereas in 2005 it was less than 10%. Having special programmes unfortunately appears to be necessary, as change otherwise happens too slowly by itself. Having women in visible roles in science is important. I myself was inspired by several women in particle physics, such as Beate Naroska, the only female professor at the physics department when I was a student; Young-Kee Kim, who was spokesperson of the CDF experiment when I was a postdoc and later deputy director of Fermilab; and last but not least Fabiola Gianotti, who was spokesperson of ATLAS when I joined and is now the Director-General of CERN.
After 25 years of development, the James Webb Space Telescope (JWST) successfully launched from Europe’s spaceport in French Guiana on the morning of 25 December. Nerves were on edge as the Ariane 5 rocket blasted its $10 billion cargo through the atmosphere, aided by a velocity kick from its equatorial launch site. An equally nail-biting moment came 27 minutes later, when the telescope separated from the launch vehicle and deployed its solar array. In scenes reminiscent of those at CERN on 10 September 2008 when the first protons made their way around the LHC, the JWST command centre erupted in applause. “Go Webb, go!” cheered the ground team as the craft drifted into the darkness.
The result of an international partnership between NASA, ESA and the Canadian Space Agency, Webb took a similar time to design and build as the LHC and cost almost twice as much. Its science goals are also complementary to those of particle physics. The 6.2 tonne probe’s primary mirror – the largest ever flown in space, with a diameter of 6.5 m compared to 2.4 m for its predecessor, Hubble – will detect light, stretched to the infrared by the expansion of the universe, from the very first galaxies. In addition to shedding new light on the formation of galaxies and planets, Webb will deepen our understanding of dark matter and dark energy. “The promise of Webb is not what we know we will discover,” said NASA administrator Bill Nelson after the launch. “It’s what we don’t yet understand or can’t yet fathom about our universe. I can’t wait to see what it uncovers!”
The promise of Webb is not what we know we will discover. It’s what we don’t yet understand or can’t yet fathom about our universe
Bill Nelson
Five days after launch, Webb successfully unfurled and tensioned its 300 m² sunshield. Because the craft’s final position at Earth–Sun Lagrange point 2 (L2) keeps the Sun and Earth always on the same side of the observatory, the sunshield can block their light, keeping the four science instruments operating at 34 K. The delicate deployment procedure involved 139 release mechanisms, 70 hinge assemblies, some 400 pulleys and 90 individual cables – each of which was a potential single-point failure. Just over one week later, on 7 and 8 January, the two wings of the primary mirror, which had to be folded in for launch, were opened, involving the final four of a total of 178 release mechanisms. The ground team then began the long procedure of aligning the telescope optics via 126 actuators on the backside of the primary mirror’s 18 hexagonal segments. On 24 January, having completed a 1.51 million-km journey, the observatory successfully inserted itself into its orbit at L2, marking the end of the complex deployment process and the beginning of commissioning activities. Commissioning will take months, with Webb scheduled to return its first science images in the summer.
The 1998 discovery of the accelerating expansion of the universe, which implies that around 70% of the universe is made up of an unknown dark energy, stemmed from observations of distant type-Ia supernovae that appeared fainter than expected. While the primary evidence came from ground-based observations, Hubble helped confirm the existence of dark energy via optical and near-infrared observations of supernovae at earlier times. Uniquely, Webb will allow cosmologists to see even farther, from as early as 200 million years after the Big Bang, while also extending the observation and cross-calibration of other standard candles, such as Cepheid variables and red giants, beyond what is currently possible with Hubble. Operating in the infrared rather than optical regime also means less scattering of light from interstellar gas.
With these capabilities, the JWST should enable the local rate of expansion to be determined to a precision of 1%. This will bring important information to the current tension between the measured expansion rate at early and late times, as quantified by the Hubble constant, and possibly shed light on the nature of dark energy.
Launching Webb is a huge celebration of the international collaboration that made this mission possible
Josef Aschbacher
By measuring the motion and gravitational lensing of early objects, Webb will also survey the distribution of dark matter, and might even hint at what it’s made of. “In order to make progress in the identification of dark matter, we need observations that clearly discriminate among the tens of possible explanations that theorists have put forward in the past four decades,” explains Gianfranco Bertone, director of the European Consortium for Astroparticle Theory. “If dark matter is ‘warm’ for example – meaning that it is composed of particles moving at mildly relativistic speeds when first structures are assembled – we should be able to detect its imprint on the number density of small dark-matter halos probed by the JWST. Or, if dark matter is made of primordial black holes, as suggested in the early 1970s by Stephen Hawking, the JWST could detect the faint emission produced by the accretion of gas onto these objects in early epochs.”
On 11 February, Webb returned images of its first star in the form of 18 blurry white dots, the product of the unaligned primary-mirror segments all reflecting light from the same star back at the secondary mirror and into its near-infrared camera. Though underwhelming at first sight, this and similar images are crucial to allow operators to gradually align and focus the hexagonal mirror segments until 18 images become one. After that, Webb will start downlinking science data at a rate of about 60 GB per day.
“Launching Webb is a huge celebration of the international collaboration that made this next-generation mission possible,” said ESA director-general Josef Aschbacher. “We are close to receiving Webb’s new view of the universe and the exciting scientific discoveries that it will make.”
The world’s longest-serving heavy-ion collider, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, started its latest run in December. In addition to further probing the quark–gluon plasma, the focus of RHIC Run 22 (the 3.8 km-circumference collider’s 22nd run in as many years) is on testing innovative accelerator techniques and detector technologies for the Electron–Ion Collider (EIC) due to enter operation at Brookhaven in the early 2030s.
The EIC, which will add an electron storage ring to RHIC, will collide 5–18 GeV electrons (and possibly positrons) with ion beams of up to 275 GeV per nucleon, targeting luminosities of 10³⁴ cm⁻² s⁻¹ and a beam polarisation of up to 85%. This will enable researchers to go beyond the present one-dimensional picture of nuclei and nucleons: by correlating the longitudinal components of the quark and gluon momenta with their transverse momenta and spatial distribution inside the nucleon, the EIC will enable 3D “nuclear femtography”.
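For orientation, the centre-of-mass energy of such collisions follows from the standard formula for two collinear, ultra-relativistic beams (neglecting particle masses):

$$
\sqrt{s} \;\approx\; \sqrt{4E_e E_p} \;=\; \sqrt{4 \times 18\ \mathrm{GeV} \times 275\ \mathrm{GeV}} \;\approx\; 140\ \mathrm{GeV}
$$

at the top beam energies quoted above, with lower settings spanning the range down to a few tens of GeV.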
Unique ability
Preparations for the EIC rely on RHIC’s unique ability to collide polarised proton beams via the use of helical dipole magnets, which offers a directional frame of reference for studying hadron collisions. The last time polarised protons were collided at RHIC was in 2017. For Run 22, the accelerator team aims to accumulate proton–proton collisions at the highest possible polarisation, and also at the highest energies (255 GeV per beam). To ensure the EIC hadron beams are as tightly packed as possible, thus maximising the luminosity, the accelerator team will try a technique previously used at RHIC to accelerate heavier ions, but never before used with protons.
“We are going to split each proton bunch into two when they’re still at low energy in the Booster, and accelerate those as two separate bunches,” explains Run-22 coordinator Vincent Schoefer. “That splitting will alleviate some of the stress during low energy, and then we can merge the bunches back together to put very dense bunches into RHIC.” Such merging is challenging, he adds, because it takes around 300,000 turns in the Alternating Gradient Synchrotron (the link between the Booster and RHIC), during which the protons must be handled “very gently”.
To further reduce the spread of high-energy hadron beams, the team will explore several cooling strategies (a major challenge for high-energy hadron beams) for possible use at the EIC. One is coherent electron cooling, whereby electrons from a high-gain free-electron laser are used to attract the protons closer to a central position. In addition, the team plans to ramp up beams of helium-3 ions to develop methods for measuring the polarisation of particles other than protons. Measuring how particles in the beam scatter off a gas target is the established method, but ions such as helium-3 can complicate matters by breaking up when they strike the target. To accurately measure the polarisation of helium-3 and other beams at the EIC, it is necessary to identify when this breakup occurs. During Run 22 the RHIC team will test its ability to accurately characterise scattering products using unpolarised helium-3 beams to develop new polarimetry methods.
During the run, RHIC’s recently upgraded STAR detector will track particles emerging from collisions at a wider range of angles than ever before, covering a rapidity range of −1.5 to 4.2. The upgrades include finer-granularity sensors for the inner part of the time projection chamber, as well as two new forward-tracking detectors and electromagnetic and hadronic calorimetry at one end of the detector, which will allow better reconstruction of jets.
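For readers more at home with angles than rapidity, the acceptance translates via η = −ln tan(θ/2), where θ is the polar angle from the beam axis. A quick sketch (assuming the quoted range can be read as pseudorapidity, which coincides with rapidity for light, highly relativistic particles):

```python
from math import atan, exp, degrees

def polar_angle_deg(eta):
    """Polar angle (degrees from the beam axis) for a given pseudorapidity,
    inverting eta = -ln(tan(theta/2))."""
    return degrees(2 * atan(exp(-eta)))

print(f"eta = -1.5 -> {polar_angle_deg(-1.5):.0f} deg")  # ~155 deg (backward)
print(f"eta =  4.2 -> {polar_angle_deg(4.2):.1f} deg")   # ~1.7 deg (very forward)
```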
Detector technologies
In addition to increasing the dataset for exploring colour-charge interactions, these upgrades will give physicists crucial information about the detector technologies and the aspects of nucleon structure relevant to the EIC. RHIC’s other main detector, sPHENIX – a major upgrade of the former PHENIX experiment – is under construction and scheduled to enter operation during Run 23 next year.
“Our goal this run is basically doing EIC physics with proton–proton collisions,” says Elke-Caroline Aschenauer, who led the STAR upgrade project. “We have to verify that what you measure in electron–proton collisions at the EIC and in proton–proton events at RHIC is universal – meaning it doesn’t depend on which probe you use to measure it.”
The BASE collaboration at the CERN Antiproton Decelerator (AD) has made the most precise comparison yet between the properties of matter and antimatter. Reporting in Nature in January, following a 1.5-year-long measurement campaign, the collaboration finds the charge-to-mass ratios of protons and antiprotons to be identical within an experimental uncertainty of just 16 parts per trillion. The result is four times more precise than the previous BASE comparison in 2015 and places strong constraints on possible violations of CPT invariance in the Standard Model.
The charge-to-mass ratio is now the most precisely measured property of the antiproton
Stefan Ulmer
Invariance under the simultaneous operations of charge conjugation, parity transformation and time reversal is a pillar of quantum field theories such as the Standard Model. Direct, high-precision tests of CPT invariance are therefore powerful probes of new physics, and of the possible mechanisms through which the universe came to be matter-dominated.
“The charge-to-mass ratio is now the most precisely measured property of the antiproton,” says BASE spokesperson Stefan Ulmer of RIKEN in Japan. “To reach this precision, we made considerable upgrades to the experiment and carried out the measurements when the antimatter factory was closed down, so that they would not be affected by disturbances from the experiment’s magnetic field.” The upgrades include a rigorous re-design of the cryostage of the experiment and the development of a multi-layer shielding-coil system, which considerably reduced magnetic-field fluctuations in the central measurement trap, explains Ulmer. “Another important ingredient is the implementation of a superconducting image-current detection system with tunable resonance frequency and ultra-high non-destructive detection efficiency, which eliminates the dominant systematic shift of the previous charge-to-mass ratio comparison.”
The BASE team confined antiprotons and negatively charged hydrogen ions in a state-of-the-art Penning trap, in which a charged particle follows a cyclotron orbit whose frequency scales with the trap’s magnetic-field strength and the particle’s charge-to-mass ratio.
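Concretely, the free cyclotron frequency is νc = qB/(2πm), so any matter–antimatter difference in q/m maps directly onto a frequency difference that the trap can resolve. A minimal sketch (the field value is illustrative of the tesla-scale magnets used in Penning-trap experiments; it is not quoted in the text):

```python
from math import pi

Q_E = 1.602176634e-19   # elementary charge, C
M_P = 1.67262192e-27    # proton mass, kg
B   = 1.95              # trap magnetic field, tesla -- illustrative value only

# Free cyclotron frequency nu_c = qB/(2*pi*m): a fractional shift in q/m
# produces the same fractional shift in nu_c.
nu_c = Q_E * B / (2 * pi * M_P)
print(f"nu_c ~ {nu_c / 1e6:.1f} MHz")  # ~29.7 MHz for a (anti)proton at 1.95 T
```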
By alternately feeding antiprotons and hydrogen ions one at a time into the trap, the team was able to measure their cyclotron frequencies under the same conditions. Performed over four campaigns between December 2017 and May 2019, the measurements involved more than 24,000 cyclotron-frequency comparisons, each lasting 260 seconds. Within the experimental uncertainty, the result, −(q/m)p/(q/m)p̄ = 1.000000000003(16), demonstrates that the Standard Model respects CPT invariance at an energy scale of 1.96×10⁻²⁷ GeV at 68% confidence. It also improves knowledge of 10 coefficients in the Standard-Model Extension – a generalised, observer-independent effective field theory used for investigations of Lorentz violation.
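The final 16 parts-per-trillion figure also illustrates the power of repetition: if the 24,000 comparisons average down statistically, each individual comparison only needs to reach the parts-per-billion level. A rough sketch (assuming purely statistical, uncorrelated 1/√N scatter, which ignores the experiment’s systematic error budget):

```python
from math import sqrt

final_uncertainty = 16e-12   # 16 parts per trillion, from the text
n_comparisons = 24_000       # cyclotron-frequency comparisons, from the text

# Under pure 1/sqrt(N) averaging, the implied single-comparison scatter:
single_shot = final_uncertainty * sqrt(n_comparisons)
print(f"~{single_shot:.1e} per 260 s comparison")  # ~2.5e-9, parts-per-billion level
```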
Weak equivalence principle
The BASE team also used their data to test the weak equivalence principle, which states that different bodies in the same gravitational field undergo the same acceleration. Any difference between the gravitational interaction of protons and antiprotons, for example due to anomalous gravitational scalar or tensor couplings to antimatter, would result in a difference in the proton and antiproton cyclotron frequencies. Sampling the varying gravitational field of Earth as it orbits the Sun, BASE found no such difference, constraining the strength of anomalous antimatter/gravitational interactions to less than 1.8×10⁻⁷ and enabling the first differential test of the weak equivalence principle (WEP) using antiprotons.
“From this interpretation we constrain the differential matter–antimatter WEP-violating coefficient to less than 0.03, which is comparable to the initial precision goals of other AD experiments that aim to drop antihydrogen in the Earth’s gravitational field,” explains Ulmer. “BASE did not directly drop antimatter, but our measurement of the influence of gravity on a baryonic antimatter particle is, according to our understanding, conceptually very similar, indicating no anomalous interaction between antimatter and gravity at the achieved level of uncertainty.”
The collaboration expects to reach even higher sensitivities on both the WEP test and the proton–antiproton charge-to-mass ratio comparison by increasing the experiment’s magnetic-field strength, stability and homogeneity. Further improvements are anticipated from the use of transportable antiproton traps, such as BASE-STEP, which allow precision antiproton experiments to be moved from the fluctuating accelerator environment to a calm laboratory space.
Particle physicists are no strangers to outreach, be it giving public talks, writing popular books or taking part in science shows. But how many are brave enough to enter a career in teaching, arguably the most important science-communication activity of all? CERN alumni who have returned to the classroom reveal teaching to be one of the hardest but most rewarding things they have ever done.
“I love my job,” exclaims Octavio Dominguez, who completed his PhD in 2013 studying the appearance of electron-cloud build-up in the LHC before deciding to switch to teaching. Having personally benefitted from some excellent teachers who sparked an “unquenchable curiosity”, he says, the idea of being a teacher had been on his mind ever since he was at secondary school. “The profession is definitely not exempt of challenges. Well, in fact I can say it’s the most difficult thing I’ve ever done… But if I keep doing it, it’s because the feedback from students is absolutely priceless. It’s truly amazing seeing my students evolve into the best version of themselves.”
Job satisfaction
Despite giving as many as 25 lessons per week, including presentations and practicals, and spending long hours outside school preparing materials and marking assignments, happiness and personal satisfaction are cited as the main rewards of working as a teacher. “I particularly enjoy seeing the enthusiasm in students’ eyes – it is something that cannot be explained with words,” says Eleni Ntomari, who was a summer student at CERN in 2006, then a PhD student and postdoc working on the CMS experiment. “From the outside, teaching might not appear difficult, but in reality it is not just a profession but a ‘project’ with no timetable and a continuation of trying to learn new things in order to become more efficient and helpful for your students.” Ntomari took advantage of every teaching opportunity that academic life offered – working as a lab instructor, becoming a CERN guide and giving talks at local schools – and when a teaching opportunity in Greece arose during her postdoctoral fellowship at DESY, she seized it. “I realised teaching was highly gratifying, so I decided to continue my career as a physics teacher in secondary and high schools.”
I particularly enjoy seeing the enthusiasm in students’ eyes
Eleni Ntomari
Teachers of STEM subjects are in acute demand. In the US, physics has the most severe teacher shortage, followed by mathematics and chemistry, with large surpluses of biology and earth-science teachers, according to the Cornell physics teacher education coalition. Furthermore, around two thirds of US high-school physics teachers do not have a degree in physics or physics education. The picture is similar in Europe: a brief teacher survey carried out by the European Physical Society in 2020 revealed the overwhelming opinion that a serious problem exists, with 81% of respondents believing there is a shortage of specialist teachers in their country, of whom 87% thought that physics is being taught by non-specialists.
Initiatives such as the UN International Day of Education on 24 January help to bring visibility and recognition to the profession, says Dominguez. “Education is one of the principal means to change the world for the better, but I feel that the teaching profession is frequently disregarded by many people in our society,” he says. “I’ve spent most of my career as a teacher in schools in deprived areas of the UK, and now I’m doing my second year in one of the most affluent schools in the country. This has given me a new perspective on society and has helped me understand better why some behaviour patterns appear.”
The CERN effect
The fascinating machines and thought-provoking concepts underpinning particle physics make a research background at CERN a major bonus in the classroom, explains Alexandra Galloni, a CERN summer student in 1995 who completed her PhD at the DELPHI experiment in 1998, spent a decade in IT consultancy, and is now head of science and technology at one of the UK’s top-performing secondary schools. “I milked my PhD as much as I could – I promised a visit from Brian Cox to my first school at interview, and although I didn’t pull that one off, contacts at CERN have enriched life both at school and on many of the CERN trips I inevitably ended up running. The Liverpool LHCb team have hosted incredible ‘Particle Schools’ at CERN for students and staff from many schools almost every year since then, leading to gushing feedback from all involved.”
I love the variety, the unexpected moments and the human interaction in the classroom
Alexandra Galloni
Keeping in touch with events at CERN has also led to exciting moments for the students, she adds, such as watching the Higgs-discovery announcement in 2012, applying for Beamline for Schools in 2014, taking part in the ATLAS Open Data project and participating in Zoom calls with CERN contacts about future colliders and antimatter. “The surrounding tasks to teaching can be gruelling, and I would be lying if I said I didn’t resent the never-ending to-do list and lack of being able to plan much personal time during term-time. But I love the variety, the unexpected moments and the human interaction in the classroom.”
CERN offers many professional-development programmes for teachers to keep up-to-date with developments in particle physics and related areas, as well as dedicated experiment sessions at “S’Cool LAB”, the coordination of the highly popular Beamline for Schools competition and internships for high-school students. These efforts are also underpinned by an education-research programme that has seen five PhD theses produced during the past five years as well as 67 published articles since the programme began in 2009. “We are reaching out to all our member states and beyond to enthuse the next generations of STEM professionals and contribute to their science education,” says Sascha Schmeling, who leads the CERN teacher and student programmes. “Engaging the public with fundamental research is a vital part of CERN’s mission.”
The successful restart of Linac4 on 9 February marked the start of the final countdown to LHC Run 3. Inaugurated in May 2017 after two decades of design and construction, Linac4 was connected to the next link in the accelerator chain, the Proton Synchrotron Booster (PSB), in 2019 at the beginning of Long Shutdown 2 and operated for physics last year. The 86 m-long accelerator now replaces the long-serving Linac2 as the source of all proton beams for CERN experiments.
On 14 February, H– ions accelerated to 160 MeV in Linac4 were sent to the PSB, with beam commissioning and physics to start in ISOLDE on 7 and 28 March. Beams will be sent to the PS on 28 February, to serve, after set-up, experiments in the East Area, the Antiproton Decelerator and n_TOF. The SPS will be commissioned with beam during the week beginning 7 March, after which beams will be supplied to the AWAKE facility and to the North Area experiments, where physics operations are due to begin on 25 April.
Meanwhile, preparations at the final destination for some of those protons, the LHC, are under way. Powering tests and magnet training in the last of the LHC’s eight sectors are scheduled to start in the week of 28 February and to last for four weeks, after which the TI12 and TI18 transfer tunnels and the LHC experiments will be closed and machine checkout will begin. LHC beam commissioning with 450 GeV protons is scheduled to start on 11 April, with collisions at 450 GeV per beam expected around 10 May. Stable beams with collisions at 6.8 TeV per beam and nominal bunch population are scheduled for 15 June. An intensity ramp-up will follow, producing collisions with 1200 bunches per beam in the week beginning 18 July, on the way to more than double this number of bunches. High-energy proton–proton operations will continue for 3–4 months, before the start of a month-long run with heavy ions on 14 November. All dates are subject to change as the teams grapple with LHC operations at higher luminosities and energies than those during Run 2, following significant upgrade and consolidation work completed during LS2.
Among the highlights of Run 3 are the first runs of the neutrino experiments FASERν and SND@LHC
Among the highlights of Run 3 are the first runs of the neutrino experiments FASERν and SND@LHC, as well as the greater integrated luminosities and physics capabilities resulting from upgrades of the four main LHC experiments. A special request was made by LHCb for a SMOG2 proton–helium run in 2023 to measure the antiproton production rate and thus improve understanding of the cosmic antiproton excess reported by AMS-02. Ion runs with oxygen, including proton–oxygen and oxygen–oxygen, will commence in 2023 or 2024. The former is also long-awaited by the cosmic-ray community, to help improve models of high-energy air showers, while high-energy oxygen–oxygen collisions allow studies of the emergence of collective effects in small systems. High-β* runs, which de-focus the beams at the interaction points to lower the interaction rate for the forward experiments TOTEM and LHCf, will take place in late 2022 and early 2023.
On 28 January, CERN announced a change to the LHC schedule to allow necessary work for the High-Luminosity LHC (HL-LHC) both in the machine and in the ATLAS and CMS experiments. The new schedule foresees Long Shutdown 3 to start in 2026, one year later than in the previous schedule, and to last for three instead of 2.5 years. “Although the HL-LHC upgrade is not yet completed, a gradual intensity increase from 1.2 × 10¹¹ to 1.8 × 10¹¹ protons per bunch is foreseen for 2023,” says Rende Steerenberg, head of the operations group. “This promises exciting times and a huge amount of data for the experiments.”
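The effect of the planned intensity ramp-up can be estimated from the usual scaling of instantaneous luminosity with the square of the bunch population, L ∝ N², with everything else held fixed. A naive sketch (real operation also changes emittance, β* and the number of bunches, so this captures only the leading-order gain):

```python
# Leading-order luminosity gain from the bunch-intensity increase quoted
# in the text, assuming L ~ N^2 with all other parameters held fixed.
n_2022 = 1.2e11   # protons per bunch
n_2023 = 1.8e11   # planned bunch population for 2023

gain = (n_2023 / n_2022) ** 2
print(f"x{gain:.2f} in per-bunch luminosity")  # x2.25
```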
The COVID-19 pandemic has cost more than five million lives and disrupted countless more. Without the results of decades of curiosity-driven research, however, the situation would have been much worse. The pandemic therefore serves as a stark and brutal reminder of the links between basic science and the balanced, sustainable and inclusive development of our planet.
The International Year of Basic Sciences for Sustainable Development (IYBSSD), proclaimed by the United Nations (UN) general assembly on 2 December 2021, is a key moment of mobilisation to convince economic and political leaders, as well as the public, of the critical links between basic research and the 2030 Agenda for Sustainable Development adopted by all UN member states in 2015. Due to their evidence-based nature, universality and openness, basic sciences not only contribute to expanding knowledge and improving societal welfare, but also help to reduce societal inequality, improve inclusion and foster intercultural dialogue and peace. They are thus central in achieving the UN Agenda’s 17 Sustainable Development Goals.
Virtuous circle
Many examples of basic sciences’ transformative contribution to society are so widespread that they are taken for granted. The web was born at CERN from the needs of global particle physics; general relativity underpins the global positioning system; search engines and artificial intelligence rely on brilliant mathematics and statistical methods; mobile phones derive from the discovery of transistors; and Wi-Fi from developments in astronomy. The discovery of DNA, positron emission tomography, magnetic resonance imaging and radiotherapy have transformed medical diagnostics and treatments, while advances in basic physics, chemistry and materials science are reducing pollution and revolutionising the generation and storage of renewable energy.
Basic science, together with applied scientific research and technological applications, is thus one of the key elements of the virtuous circle that allows the sustainable development of society. Yet, basic sciences are often not as prominent as they should be in discussions concerning societal, environmental and economic development. The aims of the IYBSSD are to focus global attention on the enabling role of basic science and to improve the collaboration between basic sciences and policy-making.
Particle physics has a major role to play in making the IYBSSD a success
The IYBSSD, led by the International Union of Pure and Applied Physics – which will celebrate its centenary in 2022 – has received strong support from around 30 international science unions and organisations active in physics, mathematics, chemistry, life science and social science, along with 70 national and international academies of sciences, and 30 Nobel laureates and Fields medallists. A series of specific activities coordinated at local, national and international levels will aim to promote inclusive collaboration (with special attention paid to gender balance), enhance basic-science training and education, and encourage the full implementation of open-access publishing and open data in the basic sciences.
The IYBSSD inauguration ceremony will take place at UNESCO on 8 July, and a closing ceremony is planned at CERN in 2023, hopefully timed to coincide with the completion of the Science Gateway building. Events of all sorts – proposed by countries, territories, scientific unions, organisations and academies, and endorsed by the steering committee – will take place throughout the year.
The role of particle physics
As one of the most basic sciences of all, particle physics has a major role in making the IYBSSD a success. The high-energy physics community should use all the available opportunities in 2022 and 2023, be it through conferences, workshops, collaboration meetings or other activities, to place our field under the auspices of the IYBSSD. We need to show how this community advances science for the benefit of society, how much it re-enchants our world and therefore makes it worth sustaining, how much it contributes in its practice to openness, equity, diversity and inclusion, and to multicultural dialogue and peace. The CERN model is emblematic of these contributions. Many of the programmes of the CERN & Society foundation also promote these values in line with the IYBSSD objectives.
The need for humanity to maintain and develop high levels of interest and participation in basic sciences makes awareness-raising initiatives such as the IYBSSD critical. Following the recent international years of physics, chemistry, mathematics and astronomy, it is now time for us to get behind this unprecedented, global interdisciplinary initiative.