
Fear of a Black Universe: an outsider’s guide to the future of physics

Fear of a black universe feature

Stephon Alexander is a professor of theoretical physics at Brown University, specialising in cosmology, particle physics and quantum gravity. He is also a self-professed outsider, as the subtitle of his latest book Fear of a Black Universe suggests. His first book, The Jazz of Physics, was published in 2016. Fear of a Black Universe is a rallying cry for anyone who feels like a misfit because their identity or outside-the-box thinking doesn’t mesh with cultural norms. By interweaving historical anecdotes and personal experiences, Alexander shows how outsiders drive innovation by making connections and asking questions insiders might dismiss as trivial.

Alexander is Black and internalised his outsider sense early in his career. As a postdoc in the early 2000s, he found that his attempts to engage with other postdocs in his group were rebuffed. He eventually learned from his friend Brian Keating, who is white, the reason why: “They feel that they had to work so hard to get to the top and you got in easily, through affirmative action”. Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating: “I’ve come to realise that when you fit in, you might have to worry about maintaining your place in the proverbial club… so I eventually became comfortable being the outsider. And since I was never an insider, I didn’t have to worry that colleagues might laugh at me for my unlikely approach.”

Instead of finding his peers’ rejection limiting, Alexander reinterpreted their dismissal as liberating

Alexander argues that true breakthroughs come from “deviants”. He draws parallels between outsiders in physics and graffiti artists, who were considered vandals until the art world recognised their talent and contributions. Alexander recounts his own “deviance” in a humorous and sometimes self-deprecating manner. He recalls a talk he gave at a conference about his first independent paper, which involved reinterpreting the universe as a three-dimensional membrane orbiting a five-dimensional black hole. During the talk he was often interrupted, eventually prompting a well-respected Indian physicist to stand up and shout “Let him finish! No one ever died from theorising.”

Alexander took these words to heart, and asks his readers to do the same during the speculative discussions in the second part of his book. Here, Alexander intersperses mainstream physics with some of his self-described “strange” ideas, acknowledging that some readers might write him off as an “oddball crank”. He explores the intersection of physics with philosophy, biology, consciousness and the search for extraterrestrial life. Some sections – such as the chapter on alien quantum computers generating the effect of dark energy – feel more like science fiction than science. But Alexander reassures readers that, while many of his ideas are strange, so are many experimentally verified tenets of physics. “In fact, the likelihood that any one of us will create a new paradigm because we have violated the norms… is very slim”, he observes.

Science-wise, this book is not for the faint-hearted. While many other public-facing physics books ease readers gently into early-20th-century physics and touch on more abstract concepts only in the final chapters, part I of Fear of a Black Universe dives directly into relativity, quantum mechanics and emergence. Part II then launches into a much deeper discussion of supersymmetry, baryogenesis, quantum gravity and quantum computing. But the strength of Alexander’s new work isn’t in its retellings of Einstein’s thought experiments or even its deconstruction of today’s cosmological enigmas. More than anything, this book makes a case for cultivating diversity in science that goes beyond “gesticulations of identity politics”.

Fear of a Black Universe is both mind-bending and refreshing. It approaches physics with a childlike curiosity and allows the reader to playfully contemplate questions many have but few discuss for fear of sounding like a crank. This book will be enjoyable for scientists and science enthusiasts who can set cultural norms aside and just enjoy the ride.

Exploring the CMB like never before

To address the major questions in cosmology, the cosmic microwave background (CMB) remains the single most important phenomenon that can be observed. Not this author’s words, but those of the recent US National Academies of Sciences, Engineering, and Medicine report Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Astro2020), which recommended that the US pursue a next-generation ground-based CMB experiment, CMB-S4, to enter operation in around 2030. 

The CMB comprises the photons created in the Big Bang. These photons have therefore experienced the entire history of the universe. Everything that has happened has left an imprint on them in the form of anisotropies in their temperature and polarisation with characteristic amplitudes and angular scales. The early universe was hot enough to be completely ionised, which meant that the CMB photons constantly scattered off free electrons. During this period the primary CMB anisotropies were imprinted, tracing the overall geometry of the universe, the fraction of the energy density in baryons, the number of light-relic particles and the nature of inflation. After about 375,000 years of expansion the universe cooled enough for neutral hydrogen atoms to be stable. With the free electrons rapidly swept up by protons, the CMB photons simply free-streamed in whatever direction they were last moving in. When we observe the CMB today we therefore see a snapshot of this so-called last-scattering surface.

The continued evolution of the universe had two main effects on the CMB photons. First, its ongoing expansion stretched their wavelengths to peak at microwave frequencies today. Second, the growth of structure eventually formed galaxy clusters that changed the direction, energy and polarisation of the CMB photons that pass through them, both from gravitational lensing by their mass and from inverse Compton scattering by the hot gas that makes up the intra-cluster medium. These secondary anisotropies therefore constrain all of the parameters that this history depends on, from the moment the first stars formed to the number of light-relic particles and the masses of neutrinos.
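As a back-of-the-envelope illustration using standard values (not numbers specific to CMB-S4), the blackbody temperature of the CMB simply scales with redshift,

T(z) = T_0 (1 + z),

so with T_0 ≈ 2.725 K today and last scattering at z ≈ 1100, the photons were released from a plasma at roughly 3000 K and have since been stretched into the microwave band.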

The temperature anisotropies of the CMB

As noted by the Astro2020 report, the history of CMB research is that of continuously improving ground and balloon experiments, punctuated by comprehensive measurements from the major satellite missions COBE, WMAP and Planck. The increasing temperature and polarisation sensitivity and angular resolution of these satellites is evidenced in the depth and resolution of the maps they produced (see “Relic radiation” image). However, such maps are just our view of the CMB – one particular realisation of a random process. To derive the underlying cosmology that gave rise to them, we need to measure the amplitude of the anisotropies on various angular scales (see “Power spectra” figure). Following the serendipitous discovery of the CMB in 1965, the first measurements of the temperature anisotropy were made by COBE in 1992. The first peak in the temperature power spectrum was measured by the BOOMERanG and MAXIMA balloons in 2000, followed by the E-mode polarisation of the CMB by the DASI experiment in 2002, and the B-mode polarisation by the South Pole Telescope and POLARBEAR experiments in 2015.
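To make the map-to-spectrum step concrete, here is a minimal sketch in Python, assuming the publicly available healpy package (the map file name is hypothetical); it estimates the temperature angular power spectrum C_ell from a HEALPix map and applies the conventional D_ell = ell(ell+1)C_ell/2π scaling used in CMB plots:

    import numpy as np
    import healpy as hp

    # Read a full-sky HEALPix temperature map (hypothetical file name)
    cmb_map = hp.read_map("cmb_temperature_map.fits")

    # Pseudo-C_ell estimate of the angular power spectrum up to multipole 2500
    cl = hp.anafast(cmb_map, lmax=2500)

    # Conventional scaling used when plotting CMB power spectra
    ell = np.arange(cl.size)
    dl = ell * (ell + 1) * cl / (2.0 * np.pi)

In practice, cut-sky masks, noise bias and beam effects must also be accounted for, but the quantity compared against cosmological models is this set of angular power spectra rather than the map itself.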

CMB-S4, a joint effort supported by the US Department of Energy (DOE) and the National Science Foundation (NSF), will help write the next chapter in this fascinating adventure. Planned to comprise 21 telescopes at the South Pole and in the Chilean Atacama Desert instrumented with more than 500,000 cryogenically-cooled superconducting detectors, it will exceed the capabilities of earlier generations of experiments by more than an order of magnitude and deliver transformative discoveries in fundamental physics, cosmology, astrophysics and astronomy.

The CMB-S4 challenge 

Three major challenges must be addressed to study the CMB at such levels of precision. Firstly, the signals are extraordinarily faint, requiring massive datasets to reduce the statistical uncertainties. Secondly, we have to contend with systematic effects both from imperfect instruments and from the environment, which must be controlled to exquisite precision if they are not to swamp the signals. Finally, the signals are obscured by other sources of microwave emission, especially galactic synchrotron and dust emission. Unlike the CMB, these sources do not have a black-body spectrum, so it is possible to distinguish between CMB and non-CMB sources if observations are made at enough microwave frequencies to break the degeneracy.

Power spectra of the CMB

This third challenge actually proves to be an astrophysical blessing as well as a cosmological curse: CMB observations are also excellent legacy surveys of the millimetre-wave sky, which can be used for a host of other science goals. These range from cataloguing galaxy clusters, to studying the Milky Way, to detecting spatial and temporal transients such as gamma-ray bursts via their afterglows.

Coming together

In 2013 the US CMB community came together in the Snowmass planning process, which informs the deliberations of the decadal Particle Physics Project Prioritization Panel (P5). We realised that achieving the sensitivity needed to make the next leap in CMB science would require an experiment of such magnitude (and therefore cost) that it could only be accomplished as a community-wide endeavour, and that we would therefore need to transition from multiple competing experiments to a single collaborative one. By analogy with the US dark-energy programme, this was designated a “Stage 4” experiment, and hence became known as CMB-S4. 

In 2014 a P5 report made the critical recommendation that the DOE should support CMB science as a core piece of its programme. The following year a National Academies report identified CMB science as one of three strategic priorities for the NSF Office of Polar Programs. In 2017 the DOE, NSF and NASA established a task force to develop a conceptual design for CMB-S4, and in 2019 the DOE took “Critical Decision 0”, identifying the mission need and initiating the CMB-S4 construction project. In 2020 Berkeley Lab was appointed the lead laboratory for the project, with Argonne, Fermilab and SLAC all playing key roles. Finally, late last year, the long-awaited Astro2020 report unconditionally recommended CMB-S4 as a joint NSF and DOE project with an estimated cost of $650 million. With these recommendations in place, the CMB-S4 construction project could begin.

CMB-S4 constraints

From the outset, CMB-S4 was intended to be the first sub-orbital CMB experiment designed to reach specific critical scientific thresholds, rather than simply to maximise the science return under a particular cost cap. Furthermore, as a community-wide collaboration, CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies – including operating at the site best suited to each science goal. One third of the major questions and discovery areas identified across the six Astro2020 science panels depend on CMB observations.

The critical degrees of freedom in the design of any observation are the sky area, frequency coverage, frequency-dependent depth and angular resolution, and observing cadence. Having reviewed the requirements across the gamut of CMB science, four driving science goals have been identified for CMB-S4. 

For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds

The first is to test models of inflation via the primordial gravitational waves they naturally generate. Such gravitational waves are the only known source of a primordial B-mode polarisation signal. The size of these primordial B-modes is quantified by the ratio of the primordial tensor (gravitational-wave) power to the scalar power – the tensor-to-scalar ratio, designated r. For the largest and most popular classes of inflationary models, CMB-S4 will make a 5σ detection of r, while failure to make such a measurement will put an upper limit of r ≤ 0.001 at 95% confidence, setting a rigorous constraint on alternative models (see “Constraining inflation” figure). The large-scale B-mode polarisation signal encoding r is the faintest of all the CMB signals, requiring both the deepest measurement and the widest low-resolution frequency coverage of any CMB-S4 science case.
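For reference, r follows the standard convention of comparing the primordial tensor and scalar power spectra at a chosen pivot scale k_*:

r ≡ P_t(k_*) / P_s(k_*),

with k_* typically taken as 0.05 Mpc⁻¹ or 0.002 Mpc⁻¹.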

The second goal concerns the dark universe. Dark matter and dark energy make up 95% of the universe’s mass-energy content, and their particular form and composition impact the growth of structure and thus the small-scale CMB anisotropies. The collective influence of the three known light-relic particles (the Standard Model neutrinos) has already been observed in CMB data, but many new light species, such as axion-like particles and sterile neutrinos, are predicted by extensions of the Standard Model. CMB-S4’s goal, and the most challenging measurement in this arena, is to detect any additional light-relic species with freeze-out temperatures up to the QCD phase-transition scale. This corresponds to constraining the uncertainty on the number of light-relic species Neff to ≤ 0.06 at 95% confidence (see “Light relics” figure). Precise measurements of the small-scale temperature and E-mode polarisation signals that encode this signal require the largest sky area of any CMB-S4 science case. In addition, since the sum of the masses of the neutrinos impacts the degree of lensing of the E-mode polarisation into small-scale B-modes, CMB-S4 will be able to constrain this sum around a fiducial value of 58 meV with a 1σ uncertainty ≤ 24 meV (in conjunction with baryon acoustic oscillation measurements) and ≤ 14 meV with better measurements of the optical depth to reionisation. 
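For orientation, N_eff is conventionally defined through the radiation energy density after electron–positron annihilation,

ρ_rad = ρ_γ [ 1 + (7/8) (4/11)^(4/3) N_eff ],

where N_eff ≈ 3.044 in the Standard Model; a sensitivity of ΔN_eff ≤ 0.06 therefore probes per-cent-level additions to the cosmic radiation density.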

Current and anticipated CMB-S4 constraints

The third science goal is to understand the formation and evolution of galaxy clusters, and in particular to probe the early period of galaxy formation at redshifts z > 2. This is enabled by the Sunyaev–Zel’dovich (SZ) effect, whereby CMB photons are up-scattered by the hot, moving gas in the intra-cluster medium. This shifts the CMB photons’ frequency spectrum, resulting in a decrement at frequencies below 217 GHz and an increment at frequencies above, therefore allowing clusters to be identified by matching up the corresponding cold and hot spots. A key feature of the SZ effect is its redshift independence, allowing us to generate complete, flux-limited catalogues of clusters to the survey sensitivity. The small-scale temperature signals needed for such a catalogue require the highest angular resolution and the widest high-resolution frequency coverage of all the CMB-S4 science cases.
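The 217 GHz crossover follows from the spectral shape of the non-relativistic thermal SZ distortion, a standard result:

ΔT/T_CMB = y [ x coth(x/2) − 4 ], with x = hν/(k_B T_CMB),

where y is the Compton parameter of the cluster gas; this expression is negative below, zero at x ≈ 3.83 (ν ≈ 217 GHz) and positive above, giving the decrement and increment described above.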

Finally, CMB-S4 aims to explore the mm-wave transient sky, in particular the rate of gamma-ray bursts to help constrain their mechanisms (a few hours to days after the initial event, gamma-ray bursts are observable at longer wavelengths). CMB-S4 will be so sensitive that even its daily maps will be deep enough to detect mm-wave transient phenomena – either spatial from nearby objects moving across our field, or temporal from distant objects exploding in our field. This is the only science goal that places constraints on the survey cadence, specifically on the lag between repeated observations of the same point on the sky. Given its large field of view, CMB-S4 will be an excellent tool for serendipitous discovery of transients but less useful for follow-up observations. The plan is therefore to issue daily alerts for other teams to follow up with targeted observations.

Survey design

While it would be possible to meet all of the CMB-S4 science goals with a single survey, the result – requiring the sensitivity of the inflation survey across the area of the light-relic survey – would be prohibitively expensive. Instead, the requirements have been decoupled into an ultra-deep, small-area survey to meet the inflation goal and a deep, wide-area survey to meet the light-relic goal, the union of these providing a two-tier “wedding cake” survey for the cluster and gamma-ray-burst goals.

Having set the survey requirements, the task was to identify sites at which these observations can most efficiently be made, taking into account the associated cost, schedule and risk. Water vapour is a significant source of noise at microwave frequencies, so the first requirement on any site is that it be high and dry. A handful of locations meet this requirement, and two of them – the South Pole and the high Chilean Atacama Desert – have both exceptional atmospheric conditions and long-standing US CMB programmes. Their positions on Earth also make them ideally suited to CMB-S4’s two-survey strategy: the polar location enables us to observe a small patch of sky continuously, minimising the time needed to reach the required observation depth, and the more equatorial Chilean location enables observations over a large sky area.

CMB-S4 observatory telescopes

Finally, we know that instrumental systematics will be the limiting factor in resolving the extraordinarily faint large-scale B-mode signal. To date, the experiments that have shown the best control of such systematics have used relatively small-aperture (~0.5 m) telescopes. However, the secondary lensing of the much brighter E-mode signal to B-modes, while enabling us to measure the neutrino-mass sum, also obscures the primordial B-mode signal coming from inflation. We therefore need a detailed measurement of this medium- to small-scale lensing signal in order to be able to remove it at the necessary precision. This requires larger, higher-resolution telescopes. The ultra-deep field is therefore itself composed of coincident low- and high-resolution surveys.

A key feature of CMB-S4 is that all of the technologies are already well-proven by the ongoing Stage 3 experiments. These include CMB-S4’s “founding four” experiments, the Atacama Cosmology Telescope (ACT) and POLARBEAR/Simons Array (PB/SA) in Chile, and BICEP/Keck (BK) and the South Pole Telescope (SPT) at the South Pole, which have pairwise-merged into the Simons and South Pole Observatories (SO and SPO). The ACT, PB/SA, BK and SPT are all single-aperture, single-site experiments, while SO and SPO are dual-aperture but still single-site. CMB-S4 is therefore the first experiment able to take advantage of both apertures and both sites.

The key difference with CMB-S4 is that it will deploy these technologies on an unprecedented scale. As a result, the primary challenges for CMB-S4 are engineering ones, both in fabricating detector and readout modules in huge numbers and in deploying them in cryostats on telescopes with unprecedented systematics control. The observatory will comprise: 18 small-aperture refractors collectively fielding about 150,000 detectors across eight frequencies for measuring large angular scales; one large-aperture reflector with about 130,000 detectors across seven frequencies for measuring medium-to-small angular scales in the ultra-deep survey from the South Pole; and two large-aperture reflectors collectively fielding about 275,000 detectors across six frequencies for measuring medium-to-small angular scales in the wide-deep survey from Chile (see “Looking up” image). The final configuration maximises the use of available atmospheric windows to control for microwave foregrounds (particularly synchrotron and dust emission at low and high frequencies, respectively), and to meet the frequency-dependent depth and angular-resolution requirements of the surveys. 

CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies

Covering the frequency range 20–280 GHz, the detectors employ dichroic pixels at all but one frequency (to maximise the use of the available focal plane) using superconducting transition-edge sensors, which have become the standard in the field. A major effort is already underway to scale up the production and reduce the fabrication variance of the detectors, taking advantage of the DOE national laboratories and industrial partners. Reading out such large numbers of detectors with limited power is a significant challenge, leading CMB-S4 to adopt the conservative but well-proven time-domain multiplexing approach. The detector and readout systems will be assembled into modules that will be cryogenically cooled to 100 mK to reduce instrument noise. Each large-aperture telescope will carry an 85-tube cryostat with a single wafer per optics tube; and each small-aperture telescope will carry a single optics tube with 12 wafers per tube, with three telescopes sharing a common mount. 

Prototyping of detector and readout fabrication lines, and building up module assembly and testing capabilities, is expected to begin in earnest this year. At the same time, the telescope designs will be refined and the data acquisition and management subsystems developed. The current schedule sees a staggered commissioning of the telescopes in 2028–2030, and operations running for seven years thereafter.

Shifting paradigms

CMB-S4 represents a paradigm shift for sub-orbital CMB experiments. For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds in fundamental physics, cosmology, astrophysics and astronomy, rather than by its cost cap. CMB-S4 will span the entire range of CMB science in a single experiment, take advantage of the best of all worlds in the design of its observation and instrumentation, and make the results available to the entire CMB community. As an extremely sensitive, two-tiered, multi-wavelength, mm-wave survey, it will also play a key role in multi-messenger astrophysics and transient science. Taken together, these measurements will constitute a giant leap in our study of the history of the universe.

Charm baryons constrain hadronisation

Figure 1

Understanding the mechanisms of hadron formation represents one of the most interesting open questions in particle physics. Hadronisation is a non-perturbative process that is not calculable in quantum chromodynamics and is typically described with phenomenological models, such as the Lund string model. Ultrarelativistic nuclear collisions, where a high-density plasma of deconfined quarks and gluons, the quark–gluon plasma (QGP), is created, provide an ideal setup to test the limits of this description. In these conditions, hadrons may be formed via a combination of deconfined quarks close in phase space. This process can lead, for example, to increased production of baryons with respect to mesons in momentum ranges up to 10 GeV/c. The ALICE and CMS experiments at the LHC, and PHENIX and STAR at RHIC, have indeed observed substantial modifications of the event hadro-chemistry in heavy-ion collisions compared to proton–proton and e+e– collisions. In particular, the total abundances of light and strange hadrons were found to follow, quite remarkably, the “thermal” expectations for a deconfined medium close to equilibrium.

Measurements of heavy-flavour hadron production play a unique role in such studies. Heavy quarks are mostly produced in hard scatterings at the early stages of the collisions, well before the QGP is formed. Furthermore, their thermal production is negligible since their masses are larger than the typical QGP temperature. Due to the much better theoretical control on their production and propagation in the medium, heavy quarks provide unique constraints on the QGP properties and the nature of hadronisation mechanisms, compared to light quarks. Heavy-flavour measurements in heavy-ion collisions also test whether the transverse-momentum (pT)-integrated yields of charm hadrons are consistent with the hypothesis of statistical models, in which charm quarks are expected to reach an almost complete thermalisation in the QGP, despite being initially very far from equilibrium.

ALICE has recently made an improvement towards a quantitative understanding of hadron formation from a QGP

The ALICE experiment has recently made an improvement towards a quantitative understanding of hadron formation from a QGP by performing the first measurement of the charm baryon-to-meson ratio Λc+/D0 in central (head-on) Pb–Pb collisions at √sNN = 5.02 TeV. By exploiting its unique tracking and particle-identification capabilities, and using machine-learning techniques, ALICE has measured the ratio down to very low pT (less than 1 GeV/c), where hadronisation mechanisms via a combination of quarks are expected to dominate (figure 1, left). The measured production ratio of Λc+/D0 in central Pb–Pb collisions is found to be larger than in pp collisions at pT of 4–8 GeV/c (figure 1, right). On the other hand, the pT-integrated ratio was found to be compatible with the result of pp collisions within one standard deviation.

A comparison with theoretical calculations confirms the discrimination power of this measurement. The experimental data are well described by transport models that include mechanisms of the combination of quarks from the deconfined medium (TAMU and Catania). Given the current uncertainties, a conclusive answer on the agreement with statistical models (SHMc) cannot yet be reached. This motivates future high-precision and more differential measurements with the upgraded ALICE detector during the upcoming Run 3 Pb–Pb data-taking at the LHC. Thanks to the increased rate capabilities of the new readout systems of the time projection chamber and the new inner tracking system, ALICE will increase its acquisition rate by up to a factor of about 50 in Pb–Pb collisions and will benefit from a much better tracking resolution (by a factor of 3–6 for low-pT tracks). High-accuracy measurements performed in Runs 3 and 4 will therefore provide significant discrimination power on theoretical calculations and strong constraints on the mechanisms underlying the hadronisation of charm quarks from the QGP.

Precision Z-boson production measurements

Figure 1

The precise determination of the Z-boson parameters at e+e– colliders was crucial for the establishment of the electroweak theory of the Standard Model. Today, the Z boson has become an essential object of experimental study at the LHC. In particular, measurements of the Z boson’s production and decay properties in high-energy proton–proton collisions provide insights into the parton distribution functions (PDFs) of the proton and are an implicit test of quantum chromodynamics (QCD).

Recently, using a sample of Z → μ+μ– events, the LHCb collaboration reported the most precise measurement to date of the Z-boson production cross section in the forward region at a centre-of-mass energy of 13 TeV (see figure 1). The collaboration also reported the first measurements of the angular coefficients in Z → μ+μ– decays in the forward region, which encode key information about the QCD mechanisms underlying Z-boson production. In addition to improving knowledge of the proton PDFs, these two analyses contribute to the study of spin-momentum correlations in the proton, complementing ATLAS and CMS measurements in the central region.

In addition to the up and down valence quarks, a proton comprises a sea of quark–antiquark pairs primarily produced via gluon splitting. Given their similar masses, one would expect that the nucleon sea is flavour-symmetric for up and down quarks. However, in the early 1990s, the New Muon Collaboration at CERN found that this symmetry is violated. Later, the ratio of down antiquarks to up antiquarks in the proton was directly measured by the NA51 experiment at CERN and the NuSea/E866 experiment at Fermilab, revealing a significant asymmetry in the sea-quark PDF distributions. Recently, the SeaQuest/E906 experiment at Fermilab reported a new result on this ratio, showing different trends in the larger Bjorken-x range (x > 0.2) compared to the previous results and raising the tension with the NuSea measurement.

With a detector instrumented in the forward region, LHCb is ideally placed to study decays of highly boosted Z bosons produced by interactions between one parton with large x and another with small x. Considering that both the NuSea and SeaQuest results have large contributions from nuclear effects, the current LHCb measurement of the Z production cross section, based on a data sample of 5.1 fb⁻¹, provides important complementary constraints in the large-x region.

The measurement of the angular coefficient “A2” in Z → μ+μ– decays is sensitive to the transverse-momentum-dependent (TMD) PDFs, as it is proportional to the convolution of the so-called Boer–Mulders functions of the two initial partons. A measurement of A2 can thus provide stringent constraints on the non-perturbative partonic spin-momentum correlation within unpolarised protons. By comparing the measured A2 in different dimuon mass ranges, the LHCb measurement provides an important input for the determination of the proton TMD PDFs, which are crucial to properly describe the production of electroweak bosons at the LHC. Together with the production cross section, these results from LHCb reinforce the importance of a forward detector to complement other measurements at the LHC.
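For context, A2 is one of the coefficients in the standard decomposition of the dilepton angular distribution, commonly written in the Collins–Soper frame (schematically, keeping only the leading terms):

dσ/dΩ ∝ (1 + cos²θ) + (A0/2)(1 − 3cos²θ) + A1 sin2θ cosφ + (A2/2) sin²θ cos2φ + …,

so A2 controls the cos2φ modulation of the muon pair, which is where the Boer–Mulders correlation enters.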

Ruins of ancient star system found within our galaxy

C-19

Despite it being our galactic home, many open questions remain about the origin and evolution of the Milky Way. To answer such questions, astronomers study individual stars and clusters of stars within our galaxy as well as those in others. Using data from the European Space Agency’s Gaia satellite, which is undertaking the largest and most precise 3D map of our galaxy by surveying an unprecedented one per cent of the Milky Way’s 100 billion or so stars, an international group has discovered a stream of stars spread across the night sky with peculiar characteristics. The stars appear not only to be very old, but also very similar to one another, indicating a common origin.

The discovered stream of stars, called C-19, is spread over tens of thousands of light years, and appears to be the remnant of a globular cluster. A globular cluster is a very dense clump of stars with a typical total mass of 10⁴ or 10⁵ solar masses, the centre of which can be so dense that stable planetary systems cannot form due to gravitational disruptions from neighbouring stars. Additionally, the clusters are typically very old. Estimates based on the luminosity of dead cooling remnants (white dwarfs) reveal some to be up to 12.8 billion years old, in stark contrast to neighbouring stars in their host galaxies. The origin and formation of such clusters, and how they end up in their host galaxies, remain poorly understood.

The stars appear not only to be very old, but also very similar to one another, indicating a common origin

One way to discern the age of globular clusters is to study the elemental composition of the stars within them. This is often expressed as the metallicity, which is the ratio of all elements heavier than hydrogen and helium (confusingly referred to as metals in the astronomical community) to these two light elements. Hydrogen and helium were produced during the Big Bang, while almost everything heavier was forged later in stars, implying that the first generation of stars had essentially zero metallicity and that metallicity increases with each generation. Until recently the lowest metallicities of stars in globular clusters were 0.2% that of the Sun. This “lower floor” in metallicity was thought to put constraints on their maximum age and size, with lower-metallicity clusters thought to be unable to survive to this day. The newly discovered stream, however, has metallicities lower than 0.05% that of the Sun, changing this perception.
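In the usual logarithmic notation (a standard definition, applied here to the numbers quoted above),

[Fe/H] = log10 (N_Fe/N_H)_star − log10 (N_Fe/N_H)_Sun,

so the previous 0.2% “floor” corresponds to [Fe/H] ≈ −2.7, while the C-19 stars, at below 0.05% of the solar value, sit at [Fe/H] ≲ −3.3.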

Captured clusters

The stars in the recently observed C-19 stream are no longer a dense cluster. Rather, they all appear to follow the same orbit within our galaxy, the plane of which is almost perpendicular to the galactic disk in which we orbit its centre. This similarity in orbit, as well as their very similar metallicity and general chemical content, indicate that they once formed a globular cluster which was absorbed by the Milky Way. The orbit dynamics further indicate it was captured at a time when the potential well of the Milky Way was significantly smaller than it is now, implying that the capture of this cluster by our galaxy occurred long ago. Since then, the once dense cluster heated up and got smeared out as it orbited the galactic centre through interactions with the disk, as well as with the potential dark-matter halo.

The discovery, published in Nature, does not directly answer the question of where and how globular clusters were formed. It does however provide us with a nearby laboratory to study issues like cluster and galaxy formation, the merging of such objects and the subsequent destruction of the cluster through interactions with both baryonic as well as potential dark matter. This particular cluster furthermore consists of some of the oldest stars found, and could have been formed before the re-ionisation of the universe, which is thought to have taken place between 150 million and a billion years after the Big Bang. Further information about such ancient objects can be expected soon thanks to the recently launched James Webb Space Telescope. This instrument will be able to see some of the earliest formed galaxies, and can thereby provide additional clues on the origin of the fossils now found within our own galaxy.

Turning the screw on right-handed neutrinos

The KATRIN experiment

In the 1960s, the creators of the Standard Model made a smart choice: while all charged fermions came in pairs, with left-handed and right-handed components, neutrinos were only left-handed. This “handicap” of neutrinos allowed physicists to accommodate in the most economical way important features of the experimental data at that time. First, such left-handed-only neutrinos are naturally massless, and second, individual leptonic flavours (electron, muon and tau) are automatically conserved.

It is now well established that neutrinos have masses and that the neutrino flavours mix with each other, much as quarks do. If this were known 55 years ago, Weinberg’s seminal 1967 work “A Model of Leptons” would be different: in addition to the left-handed neutrinos, it would very likely also contain their right-handed counterparts. The structure of the Standard Model (SM) dictates that these new states, if they exist, are the only singlets with respect to the weak-isospin and hypercharge gauge symmetries and thus do not participate directly in electroweak interactions (see “On the other hand” figure). This makes right-handed neutrinos (also referred to as sterile neutrinos, singlet fermions or heavy neutral leptons) very special: unlike charged quarks and leptons, which get their masses from the Yukawa interaction with the Brout–Englert–Higgs field, the masses of right-handed neutrinos depend on an additional parameter – the Majorana mass – which is not related to the vacuum expectation value and which results in the violation of lepton-number conservation. As such, right-handed neutrinos are also sometimes referred to as Majorana leptons or Majorana fermions.
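Schematically, in a commonly used notation (not taken verbatim from any particular paper), the new terms added to the Standard Model Lagrangian for each right-handed neutrino N are

ΔL = − F_α (L̄_α H̃) N − (M/2) N̄^c N + h.c.,

where F_α are Yukawa couplings to the lepton doublets L_α, H̃ is the Higgs doublet and M is the Majorana mass, which is unrelated to the Higgs vacuum expectation value and violates lepton number by two units.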

Leaving aside the possible signals of eV-scale neutrino states reported in recent years, all established experimental signatures of neutrino oscillations can be explained by the SM with the addition of two heavy neutral leptons (HNLs). If there were only one HNL, then two out of three SM neutrinos would be massless; with two HNLs, only one of the SM neutrinos is massless – this is not excluded experimentally. Any larger number of HNLs is also possible.

Fermion content

The simplest way to extend the SM in the neutrino sector is to add several HNLs and no other new particles. Already this class of theories is very rich (different numbers of HNLs and different values of their masses and couplings imply very different phenomenology), and contains several different scenarios explaining not only the observed masses and flavour oscillations of the SM neutrinos but also other phenomena that are not accommodated by the SM. The scenario in which the Majorana masses of right-handed neutrinos are much higher than the electroweak scale is known as the “type I see-saw model”, first put forward in the late 1970s. The theory with three right-handed neutrinos (the same as the number of generations in the SM) with their masses below the electroweak scale is called the neutrino minimal standard model (νMSM), and was proposed in the mid-2000s.

Would these new particles be useful for anything else besides neutrino physics? The answer is yes. The first, lightest HNL N1 may serve as a dark-matter particle, whereas the other two HNLs N2,3 not only “give” masses to active neutrinos but can also lead to the matter–antimatter asymmetry of the universe. In other words, the SM extended by just three HNLs could solve the key outstanding observational problems of the SM, provided the masses and couplings of the HNLs are chosen in a specific domain. 

The masses of heavy neutral leptons

The leptonic extension of the SM by right-handed neutrinos is quite similar to the gradual adaptation of electroweak theory to experimental data during the past 50 years. While the bosonic sector of the electroweak model has remained intact since 1967 – its content confirmed by the discoveries of the W and Z bosons in 1983 and the Higgs boson in 2012 – the fermionic sector evolved from one to two to three generations, revealing the remarkable symmetry between quarks and leptons. It took about 20 years to find all the quarks and leptons of the third generation. How much time it will take to discover HNLs, if they indeed exist, depends crucially on their masses.

The value of the Majorana mass, and therefore the physical mass of an HNL, is arbitrary from a theoretical point of view and cannot be found from neutrino-oscillation experiments. The famous see-saw formula that relates the observed masses of the active neutrinos to the Majorana masses of HNLs has a degeneracy: change the Yukawa couplings of HNLs to neutrinos by a factor x and the HNL masses by a factor x2, and the active neutrino masses and the physics of their oscillations remain intact. The scale of HNL masses thus can be any number from a fraction of an eV to 1015 GeV (see “Options abound” figure). Moreover, there could be several HNLs with very different masses. Indeed, even in the SM the masses of charged fermions, though they share a similar origin, differ by almost six orders of magnitude. 
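The degeneracy is explicit in the type-I see-saw relation, written here schematically for the active-neutrino mass matrix:

m_ν ≃ − m_D M⁻¹ m_Dᵀ, with m_D = F v/√2,

where v ≈ 246 GeV; rescaling the Yukawa couplings F → xF and the Majorana masses M → x²M leaves m_ν, and hence all oscillation observables, unchanged.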

Motivated by the value of the active neutrino masses, the HNL could be light, with masses of the order of 1 eV. Alternatively, similar to the known quarks and charged leptons, they could be somewhere around the GeV or Fermi scale. Or they could be close to the grand unification scale, 1015 GeV, where the strong and electromagnetic interactions are thought to be unified. These possibilities have different theoretical and experimental consequences. 

The case of the light sterile neutrino

The see-saw formula tells us that if the mass of HNLs is around 1 eV, their Yukawa couplings should be of the order of 10⁻¹². Such light sterile neutrinos can potentially be observed in neutrino experiments, as they can be involved in the oscillations together with the three active neutrino species. Several experiments – including LSND, GALLEX, SAGE, MiniBooNE and BEST – have reported anomalies in neutrino-oscillation data (the so-called short-baseline, gallium and reactor anomalies) that could be interpreted as a signal for the existence of light sterile neutrinos. However, it looks difficult, if not impossible, to reconcile the existence of these states with the recent negative results of other experiments such as MINOS+, MicroBooNE and IceCube, accounting for additional constraints coming from β-decay, neutrinoless double-β decay and cosmology.
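As a rough consistency check of the quoted coupling, the see-saw relation above gives

F ∼ √(m_ν M_N) / (v/√2) ≈ √(0.1 eV × 1 eV) / 174 GeV ≈ 2 × 10⁻¹²

for an HNL mass of about 1 eV and an active-neutrino mass of about 0.1 eV.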

Cosmological bounds

The parameters of light sterile neutrinos required to explain the experimental anomalies are in strong tension with the cosmological bounds (see “Cosmological bounds” figure). For example, their mixing angle with the ordinary neutrinos should be sufficiently large that these states would have been produced abundantly in the early universe, affecting its expansion rate during Big Bang nucleosynthesis and thus changing the abundances of the light elements. In addition, light sterile neutrinos would affect the formation of structure. Having been created in the hot early universe with relativistic velocities, they would have escaped from forming structures until they cooled down in much later epochs. This so-called “hot dark matter” scenario would mean that the smallest structures, which form first, and the larger ones, which require much more time to develop, would experience different amounts of dark matter. Moreover, the presence of such particles would affect baryon acoustic oscillations and therefore impact the value of the Hubble constant deduced from them.

Besides tensions between the experiments and cosmological bounds, light sterile neutrinos do not provide any solution to the outstanding problems of the SM. They cannot be dark-matter particles because they are too light, nor can they produce the baryon asymmetry of the universe as their Yukawa couplings are too small to give any substantial contribution to lepton-number violation at the temperatures (> 160 GeV) at which the anomalous electroweak processes with baryon non-conservation have a chance to convert a lepton asymmetry into a baryon asymmetry. 

Three Fermi-scale heavy neutral leptons

Another possible scale for HNL masses is around a GeV, plus or minus a few orders of magnitude. Right-handed neutrinos with such masses do not interfere with active-neutrino oscillations because the corresponding length over which these oscillations may occur is far too small. As only two active-neutrino mass differences are fixed by neutrino-oscillation experiments, it is sufficient to have two HNLs N2,3 with appropriate Yukawa couplings to active neutrinos: to get the correct neutrino masses, they should not be smaller than ~10⁻⁸ (compared to the electron Yukawa coupling of ~10⁻⁶). These two HNLs may produce the baryon asymmetry of the universe, as we explain later, whereas the lightest singlet fermion, N1, may interact with neutrinos much more weakly and thus can be a dark-matter particle (although unstable, its lifetime can greatly exceed the age of the universe).

Three main considerations determine the possible range of masses and couplings of the dark-matter sterile neutrino (see “Dark-matter constraints” figure). The first is cosmological production. If N1 interact too strongly, they would be overproduced in ℓ+ℓ– → N1ν reactions and make the abundance of dark matter larger than what is inferred from observations, providing an upper limit on their interaction strength. Conversely, the requirement to produce enough dark matter results in a lower bound on the mixing angle that depends on the conditions in the early universe during the epoch of N1 production. Moreover, the lower bound completely disappears if N1 can also be produced at very high temperatures by interactions related to gravity or at the end of cosmological inflation. The second consideration is X-ray data. Radiative N1 → γν decays produce a narrow line that can be detected by X-ray telescopes such as XMM–Newton or Chandra, resulting in an upper limit on the mixing angle between sterile and active neutrinos. While this upper limit depends on the uncertainties in the distribution of dark matter in the Milky Way and other nearby galaxies and clusters, as well as on the modelling of the diffuse X-ray background, it is possible to marginalise these to obtain very robust constraints.
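For orientation, two standard results set the shape of these X-ray constraints: a dark-matter sterile neutrino decaying essentially at rest produces a near-monochromatic photon line at

E_γ = m_N/2, with a decay rate Γ(N1 → γν) ∝ sin²(2θ) m_N⁵,

which is why the limits are conventionally displayed in the plane of mass versus mixing angle.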

Dark-matter constraints

The third consideration for the sterile neutrino’s properties is structure formation. If N1 is too light, a very large number-density of such particles is required to make up an observed halo of a small galaxy. As HNLs are fermions, however, their number density cannot exceed that of a completely degenerate Fermi gas, placing a very robust lower bound on the N1 mass. This bound can be further improved by taking into account that light dark-matter particles remain relativistic until late epochs and therefore suppress or erase density perturbations on small scales. As a result, they would affect the inner structure of the halos of the Milky Way and other galaxies, as well as the matter distribution in the intergalactic medium, in ways that can be observed via gravitationally lensed galaxies, gaps in the stellar streams in galaxies and the spectra of distant quasars.

Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM

The upper limits on the interaction strength of sterile neutrinos fix the overall scale of active neutrino masses in the νMSM. The dark-matter sterile neutrino effectively decouples from the see-saw formula, making the mass of one of the active neutrinos much smaller than the observed solar and atmospheric neutrino-mass differences and fixing the masses of the two other active neutrinos to approximately 0.009 eV and 0.05 eV (for the normal ordering) and to the near-degenerate value 0.05 eV for the inverted ordering.
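The quoted values follow directly from the measured mass-squared splittings once the lightest state is effectively massless (using representative oscillation parameters):

m₂ ≈ √(Δm²_21) ≈ √(7.4 × 10⁻⁵ eV²) ≈ 0.009 eV and m₃ ≈ √(Δm²_31) ≈ √(2.5 × 10⁻³ eV²) ≈ 0.05 eV

for the normal ordering, with m₁ ≈ 0.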

HNLs at the GeV scale and beyond 

Our universe is baryon-asymmetric – it does not contain antimatter in amounts comparable with the matter. Though the SM satisfies all three “Sakharov conditions” necessary for baryon-asymmetry generation (baryon-number non-conservation, C- and CP-violation, and departure from thermal equilibrium), it cannot explain the observed baryon asymmetry. The Kobayashi–Maskawa CP-violation is too small to produce any substantial effects, and departures from thermal equilibrium are tiny at the temperatures at which the anomalous fermion-number non-conserving processes are active. This is not the case with two GeV-scale HNLs: these particles are not in thermal equilibrium for temperatures above a few tens of GeV, and CP violation in their interactions with leptons can be large. As a result, a lepton asymmetry is produced, which is converted into baryon asymmetry by the baryon-number violating reactions of the SM.

The requirement to generate the baryon asymmetry in the νMSM puts stringent constraints on the masses and couplings of HNLs (see “Baryon-asymmetry constraints” figure). The mixing angle of these particles cannot be too large, otherwise they equilibrate and erase the baryon asymmetry, and it cannot be below a certain value because that would make the active neutrino masses too small. We know that their masses should be larger than that of the pion, otherwise their decays in the early universe would spoil the success of Big Bang nucleosynthesis. In addition, the masses of the two HNLs should be close to each other so as to enhance CP-violating effects. Interestingly, HNLs with these properties are within the experimental reach of existing and future accelerators, as we shall see.

Baryon-asymmetry constraints

The final possible choice of HNL masses is associated with the grand unification scale, ~1015 GeV. To get the correct neutrino masses, the Yukawa couplings of a pair of these superheavy particles should be of the order of one, in which case the baryon asymmetry of the universe can be produced via thermal leptogenesis and anomalous baryon- and lepton-number non-conservation at high temperatures. The third HNL, if interacting extremely weakly, may play the role of a dark-matter particle, as described previously. Another possibility is that there are three superheavy HNLs and one light one, to play the role of dark matter. This model, as well as that with HNL masses of the order of the electroweak scale, may therefore solve the most pressing problems of the SM. The only trouble is that we will never be able to test it experimentally, since the masses of N2,3 are beyond the reach of any current or future experiment.

Experimental opportunities

It is very difficult to detect HNLs experimentally. Indeed, if the masses of these particles are within the reach of current and planned accelerators, they must interact orders of magnitude more weakly than the ordinary weak interactions. As for the dark-matter sterile neutrino, the most promising route is indirect detection with X-ray space telescopes. The new X-ray spectrometer XRISM, which is planned to be launched this year, has great potential to unambiguously detect a signal from dark-matter decay. Like many astrophysical observatories, however, it will not be able to determine the particle origin of this signal. Thus, complementary laboratory searches are needed. One experimental proposal that claims sufficient sensitivity to enter the cosmologically relevant region is HUNTER, based on radioactive atom trapping and high-resolution decay-product spectrometry. Sterile neutrinos with masses of around a keV can also show up as a kink in the β-decay spectrum of radioactive nuclei, as discussed by the ambitious PTOLEMY proposal. The current generation of experiments that study β-decay spectra – KATRIN and Troitsk nu-mass – also perform searches for keV HNLs, but they are sensitive to significantly larger mixing angles than required for a dark-matter particle. Extending the KATRIN experiment with a multi-pixel silicon drift detector, TRISTAN, will significantly improve the sensitivity here.

The most promising prospects for finding the N2,3 responsible for neutrino masses and baryogenesis lie in experiments at the intensity frontier. For HNL masses below 5 GeV (the beauty threshold) the best strategy is to direct proton beams at a target to create K, D or B mesons that decay producing HNLs, and then to search for HNL decays through “nothing → leptons and hadrons” processes in a near detector. This strategy was used previously by the PS191 experiment at CERN’s Proton Synchrotron (PS), NOMAD, BEBC and CHARM at the Super Proton Synchrotron (SPS), and NuTeV at Fermilab. There are several proposals for future experiments along these lines. The proposed SHiP experiment at the SPS Beam Dump Facility has the best potential, as it could cover almost all of the parameter space down to the lowest bound on coupling constants coming from neutrino masses. The SHiP collaboration has already performed detailed studies and beam tests, and the experiment is under consideration by the SPS and PS Experiments Committee. A smaller-scale proposal, SHADOWS, covers part of the interesting parameter space.

Electron coupling

The search for HNLs can be carried out at the near detectors of DUNE at Fermilab and T2K/T2HK in Japan, which are due to come online later this decade. The LHC experiments ATLAS, CMS, LHCb, FASER and SND, as well as the proposed CODEX-b facility, can also be used, albeit with fewer chances to enter deeply into the cosmologically interesting part of the HNL parameter space. The decays of HNLs can also be searched for at future huge detectors such as MATHUSLA. And, going to larger HNL masses, breakthroughs can be made at the proposed Future Circular Collider FCC-ee, by studying the process Z → νN with a displaced vertex (DV) corresponding to the subsequent decay of N to available channels (see “Electron coupling” figure).

Conclusions

Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM. But the situation is very different from that in the period preceding the discovery of the Higgs boson, when the consistency of the SM together with other experimental results allowed us to firmly conclude that either the Higgs boson had to be discovered at the LHC, or new physics beyond the SM must show up. Although we know for sure that the SM is incomplete, we do not have a firm prediction about where to search for new particles, nor what their masses, spins, interaction types and strengths are.

Experimental guidance and historical experience suggest that the SM should be extended in the fermion sector, and the completion of the SM with three Majorana fermions solves the main observational problems of the SM at once. If this extension of the SM is correct, the only new particles to be discovered in the future are three Majorana fermions. They have remained undetected so far because of their extremely weak interactions with the rest of the world.

The LS2 vacuum challenge

Carbon coating of a beam screen

The second long shutdown of the CERN accelerator complex (LS2) is complete. After three years of intense work at all levels across the accelerators and experiments, beams are expected in the LHC in April. For the accelerators, the main LS2 priorities were the consolidation of essential safety elements (dipole diodes) for the LHC magnets, several interventions for the High-Luminosity LHC (HL-LHC) and associated upgrades of the injection chain via the LHC Injectors Upgrade project. Contributing to the achievement of these and many other planned parallel activities, the CERN vacuum team has completed an intense period of work in the tunnels, workshops and laboratories.

Particle beams require extremely low pressure in the pipes in which they travel to ensure that their lifetime is not limited by interactions with residual gas molecules and to minimise backgrounds in the physics detectors. During LS2, all of the LHC’s arcs were vented to the air after warm-up to room temperature and all welds were leak-checked after the diode consolidation (with only one leak found among the 1796 tests performed). The vacuum team also replaced or consolidated around 150 turbomolecular pumps acting on the cryogenic insulation vacuum. In total, 2.4 km of non-evaporable-getter (NEG)-coated beampipes were also opened to the air at room temperature – an exhaustive programme of work spanning mechanical repair and upgrade (across 120 weeks), bake-out (90 weeks) and NEG activation (45 weeks). The vacuum level in these beampipes is now in the required range, with most of the pressure readings below 10⁻¹⁰ mbar.

CMS beampipe and RF box installation

The vacuum control system was also significantly improved by reducing single points of failure, removing confusing architectures and, for the first time, using mobile vacuum equipment controlled and monitored wirelessly. In view of the higher LHC luminosity and the consequent higher radioactivity dose during Run 3 and beyond, the vacuum group has developed and installed new radiation-tolerant electronics controlling 100 vacuum gauges and valves in the LHC dispersion suppressors. This was the first step of a larger campaign to be implemented in the next long-shutdown, including the production of 1000 similar electronics cards for vacuum monitoring. In parallel, the control software was renewed. This included the introduction of resilient, scalable and self-healing web-based frameworks used by the biggest names in industry.

In the LHC experimental areas, the disassembly of the vacuum chambers at the beginning of LS2 required 93 interventions and 550 person-hours of work in the caverns, with the most significant changes in vacuum hardware implemented in CMS and LHCb (see “Interaction points” images). In CMS, a new 7.3 m-long beryllium beampipe with an internal diameter of 43.4 mm was installed, and 12 new aluminium chambers were manufactured, surface-finished and NEG-coated at CERN. The mechanical installation, including alignment, pump-down and leak detection, took two months, while the bake-out and venting with ultra-pure neon required a further month. In LHCb, the vacuum team contributed to the new Vertex Locator (VELO). Its “RF box” – a delicate piece of equipment filled with silicon detectors, electronics and cooling circuits, designed to protect the VELO without affecting the beams – sits just a few mm from the beam, with an aluminium window thinned down to 150 μm by chemical etching and then NEG-coated. As the VELO encloses the RF box and the two volumes are under separate vacua, the pump-down is a critical operation: pressure differences across the thin window must remain below 10 mbar to ensure its mechanical integrity. The last planned activity for the vacuum team in LS2, the bake-out of the ATLAS beampipes, took place in February.

Vacuum challenges 

From this list of achievements, it might be assumed that the vacuum activities in LS2 went smoothly, with the team simply applying well-known procedures and knowledge accumulated over decades. However, as might be expected when working with several teams in parallel and at the limits of technology, with around 100 km of piping under vacuum for the LHC alone, this was far from the case. Since the beginning of LS2, CERN vacuum experts have encountered several technical issues and obstacles, a few of which deserve a mention (see “Overcoming the LS2 vacuum obstacles” panel). All these headaches have challenged our regular way of working and prompted us to reflect on procedures, communication and reporting, and technical choices.

SPS vacuum team

But the real moment of truth is yet to come, when the intensity of the LHC beams reaches the new nominal value made possible by the upgraded injectors. Under the spotlight will be surface electron emission, which drives the formation of electron clouds and their consequences, including beam instabilities and heat load on the cryogenic system. The latter showed anomalously high values during Run 2, with strong inhomogeneity along the ring indicating uneven surface conditioning. The question is: what will happen to the heat load during Run 3? Thanks to the effort and achievements of a dedicated task force, the scrubbing run and the physics runs that follow will provide a detailed answer in a few months. Last year, the task force installed additional instrumentation in the cryogenic lines at selected positions and, after many months of detective work, identified the most probable culprit of the puzzling heat-load values: the formation of a non-native copper-oxide layer during electron bombardment of hydroxylated copper surfaces at cryogenic temperatures. UV exposure in selected gases, local bake-out and plasma etching are among the mitigation techniques we are going to investigate.

The HL-LHC horizon 

LS2 might only just have finished but we are already thinking about LS3 (2026–2028), whose leitmotif will be the finalisation of the HL-LHC project. Thanks to more focused beams at the collision points and an increased proton bunch population, the higher beam luminosity at CMS and ATLAS (peaking at a levelled value of 5 × 10³⁴ cm⁻² s⁻¹) will enable an integrated luminosity of 3000 fb⁻¹ in 12 years. For the HL-LHC vacuum systems, this requires a completely new design of the beam screens in the focusing area of the experiments, the implementation of carbon thin-film coatings in the unbaked beampipes to cope with the lower secondary-electron-yield threshold, and radiation-compatible equipment near the experiments and radiation-tolerant electronics down to the dispersion-suppressor zones.
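As a rough cross-check of how the quoted figures fit together, the integrated luminosity is simply the levelled peak value multiplied by the effective time spent colliding. A minimal sketch, in which the effective running time per year is an illustrative assumption rather than an HL-LHC parameter:

```python
# Back-of-the-envelope integrated luminosity for the HL-LHC era.
# PEAK_LUMI_CM2_S is the levelled value quoted in the text; the effective
# running time per year is an illustrative assumption, not an HL-LHC parameter.

PEAK_LUMI_CM2_S = 5e34        # levelled peak luminosity, cm^-2 s^-1
EFFECTIVE_S_PER_YEAR = 5.0e6  # assumed seconds of levelled running per year
YEARS = 12

FB_INV_PER_CM2 = 1e-39        # 1 cm^-2 corresponds to 1e-39 fb^-1

integrated_fb = PEAK_LUMI_CM2_S * EFFECTIVE_S_PER_YEAR * YEARS * FB_INV_PER_CM2
print(f"integrated luminosity over {YEARS} years ~ {integrated_fb:.0f} fb^-1")
# -> ~3000 fb^-1, consistent with the figure quoted above
```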

Overcoming the LS2 vacuum obstacles

Unexpected interventions

Forgotten sponge 

During the first beam-commissioning of the PS, anomalously high proton losses were detected, generating pressure spikes and a high radiation dose near one of the magnets. An endoscopic inspection (see image above, left) revealed the presence of an orange sponge, used to protect the vacuum-chamber extremities before welding and left behind owing to a miscommunication between the teams involved, blocking the lower half of the beam pipe. After days of investigation with beam and interventions by technicians, the chamber was cut open and the offending object removed.

Leaky junctions 

New corrugated thin-walled vacuum chambers, installed in the Proton Synchrotron Booster to reduce eddy-current effects, passed all tests before installation but suffered vacuum leaks after a few days of magnet pulsing. The leaks appeared in lip-welded junctions in several chambers, indicating a systematic production issue. Additional spare chambers have been produced and, as the leaks remain tolerable, a replacement is planned during the next year-end technical stop. Until then, the issue will hang like the Sword of Damocles over the vacuum teams in charge of the LHC’s injectors.

Powering mismatch 

During the first magnet tests of the TT2 transfer line, a vacuum sector was suddenly air-vented. The support of the vacuum chambers was found to be broken, two bellows were destroyed (see image, middle) and the vacuum chamber twisted. The origin of the problem was a different powering scheme of the magnet housing the chamber: faster magnetic pulses generated higher eddy currents and Lorentz forces that were incompatible with the beampipe design and supports. It was solved by inserting a thin insulating layer between vacuum flanges to interrupt the eddy currents, a practice common in other parts of the injectors.

QRL quirks 

The LHC’s helium transfer lines (QRL) require regular checks, especially after warm-up and cool-down. During LS2, the vacuum team installed two additional turbomolecular pumps to compensate for the increased rate of a known leak in sector B12, allowing operation until at least the next long shutdown. Another troubling leak, which opened only at helium pressures above 7 bar, was detected in a beam-screen cooling circuit. Fixing it would have required the replacement of the nearby magnet, but the leak turned out to be tolerable at cryogenic temperatures, although its on/off behaviour remains to be fully understood.

Damaged disks 

Following the incident in sector 3–4 shortly after LHC startup, the beam vacuum in the LHC arcs was equipped with 832 “burst disks” to protect it against overpressure. Each 30 μm-thick stainless-steel disk membrane nominally breaks when the pressure in the vacuum system is 0.5 bar higher than the tunnel air pressure. Despite the careful venting procedure, 19 disks were found broken or damaged before the re-pumping of the arcs. Subsequent lab tests showed no damage in spare disks cycled 30 times at 1.1 bar. The vacuum teams replaced the damaged disks and are trying to understand the cause.

Buckled fingers 

Before cool-down, a 34 mm-diameter ball fitted with a 40 MHz transmitter is pushed through the LHC beam pipes to check for obstacles. The typical defect is a buckling of the RF fingers in the plug-in modules (PIMs) that maintain electrical continuity as the machine thermally contracts. Unfortunately, in two cases the ball arrived damaged, and it took days to collect and identify all the broken pieces. A buckled finger was successfully found in sector 8–1, but another in sector 2–3 (see image, right) was revealed only when the pilot beam circulated. This forced a re-warming of the arc, venting of the beampipe and the replacement of the damaged PIM, followed by additional re-cooling and aperture and electrical tests.

The first piece of vacuum equipment concerned is the “VAX”: a compact set of components, pumps, valves and gauges installed in an area of limited access and relatively high radioactivity between the last focusing magnet of the accelerator and the high-luminosity experiments. The VAX module is designed to be fully compatible with robot intervention, enabling leak detection, gasket change and complete removal of parts to be carried out remotely and safely. 

Despite the massive shielding between the experiment caverns and the accelerator tunnels, secondary particles from high-energy proton collisions can reach accelerator components outside the detector area. At nominal HL-LHC luminosity, up to 3.8 kW of power will be deposited in the tunnel on each side of CMS and ATLAS, of which 1.2 kW is intercepted by the 60 m-long sequence of final-focusing magnets. Such a power load is incompatible with magnet cooling at 1.9 K and, in the long run, could cause the insulation of the superconducting cables to deteriorate. To avoid this, the vacuum team designed a new beam screen equipped with tungsten-alloy shielding so that at least half of the power is captured before it reaches the magnet cold mass.

All eyes are on the successful restart of the CERN accelerator complex and the beginning of LHC Run 3

The new HL-LHC beam screens took several years of design and manufacturing optimisation, multi-physics simulations and prototype tests. The most intense study concerned the mechanical integrity of this complicated object when the hosting magnet undergoes a quench, causing the current to drop from nearly 20 kA to zero in a few tenths of a second. The manufacturing learning phase is now complete and the beam-screen facility, including a new laser-welding robot and cryogenic test benches, will be ready this year. Carbon coating, whose purpose is to suppress electron clouds, is the other novelty of the HL-LHC beam screens (see “Beam screen” image). At the beginning of LS2 the first beam screens were successfully coated in situ, using a small robot carrying carbon and titanium targets together with magnets for plasma confinement during deposition.

The vacuum team is also involved in the production of crab cavities, another breakthrough brought by the HL-LHC project. The surfaces of these complex-shaped niobium objects are treated by a dedicated machine that rotates the cavities while chemically polishing them with a mixture of nitric, hydrofluoric and phosphoric acids. The vacuum system of the cryomodules, in which the cavities are cooled to 2 K, was also designed at CERN.

Outlook

Vacuum technology for particle accelerators has been pioneered at CERN since its early days, with the Intersecting Storage Rings bringing the most important breakthroughs. Over the decades, the CERN vacuum group has brought together surface-physics specialists, thin-film coating experts and galvanic-treatment professionals, alongside teams of designers and colleagues dedicated to the operation of large vacuum systems. In doing so, CERN has become one of the world’s leading R&D centres for extreme vacuum technology, contributing to major existing and future accelerator projects at CERN and beyond. With the HL-LHC in direct view, the vacuum team looks forward to tackling new challenges. For now, though, all eyes are on the successful restart of the CERN accelerator complex and the beginning of LHC Run 3.

New directions at DESY

Beate Heinemann

What attracted you to the position of DESY director of particle physics?

DESY is one of the largest and most important particle-physics laboratories in the world. I was born and grew up in Hamburg and took my first career steps at DESY during my university studies. I received my PhD there in 1999 and returned as a scientist in 2016, so I know the lab very well. It is a great lab and department, with many opportunities and so many excellent people. I am sure it will be fun to work with all of them and to develop a strategy for the future.

What previous management roles do you think will serve you best at DESY? 

Being ATLAS deputy spokesperson from 2013 to 2017 was one of the best roles I’ve had in my career, and I benefitted hugely from the experience. I was fortunate to have an excellent spokesperson in Dave Charlton and I learned a lot from him, as well as from many others I worked with. I try to understand enough details to make educated decisions but not to micromanage. I also think motivating people, listening to them and promoting their talents is key to achieving common goals.

What are the current and upcoming experiments at DESY?

The biggest ongoing experimental activities in particle physics are the ATLAS and CMS experiments. We have large groups in both, and for each we are building a tracker end-cap based on silicon-strip detectors at our detector assembly facility, primarily together with German universities. This is a huge undertaking for the HL-LHC that is currently under way. Another important activity is to build a vertex detector to be installed in 2023 at the Belle II experiment running at KEK in Japan. We also have a significant programme of local experiments covering axion searches. One of the big projects next summer will be the start of the ALPS II experiment, which will look for axion-like particles by shining an intense laser towards a “wall” and checking whether any photons appear on the other side, having been converted into axion-like particles by a strong magnetic field and then back into photons. We have two other axion experiments planned: BabyIAXO, which looks for axion-like particles coming from the Sun and for which construction is now starting; and MadMax, which looks for axions in the dark-matter halo. Axions were postulated by Peccei and Quinn to solve the strong-CP problem but are also a good candidate for dark matter if they exist. A further experiment, LUXE, which DESY theorist Andreas Ringwald and I proposed, would deliver the European XFEL’s 16.5 GeV electron beam into a high-intensity laser so that the beam electrons experience a very strong electromagnetic field in their rest frame. LUXE would reach the so-called Schwinger limit and allow us to see what happens when QED becomes strong and transitions from the perturbative to the non-perturbative regime.

There are many accelerators at DESY, such as PETRA, where the gluon was discovered in the 1970s. Today, PETRA is one of the best synchrotron-radiation facilities in the world and is used for a wide range of science, for example imaging of small structures such as viruses. It is an application of accelerators where the impact on society is more direct and obvious than it is in particle physics. 

How can we increase the visibility of particle physics to society?

This is a very important point. The knowledge we get from particle physics today is clear, but it is less clear how we can transfer this knowledge to help solve pressing problems in society, such as climate change or a pandemic. Humankind desires to increase its knowledge, and it is important that we continue with fundamental research purely to increase our knowledge; we have already come so far in the past 5000 years. Many technical innovations were made for that purpose alone but then resulted in transformative changes. Take the idea of the accelerator. It was developed at Berkeley during the 1930s with no particular application in mind, but today is used routinely around the world to prolong life by irradiating tumours. Or the transistor, without which there would not be any computers, which was developed in the 1940s based on the understanding of atoms that emerged in the 1920s. It is important to promote both targeted research that directly addresses problems and fundamental research, which every now and again results in groundbreaking changes. When thinking about our projects and experiments we need to keep in mind whether and how any of our technical developments can be made in a way that addresses big societal problems.

It is important that we inspire the general public, in particular the young, about science. Educational programmes are key, such as Beamline for Schools, which is one of CERN’s flagship schemes. This was hosted by DESY during Long Shutdown 2 and a team at DESY will continue the collaboration.

CERN recently launched its Quantum Technology Initiative. Does DESY have plans in this area? 

DESY received funding from the state of Brandenburg to build a centre for quantum computing, the CTQA, which is located at DESY’s Zeuthen site. Karl Jansen, one of our scientists there, has spent most of his career working on lattice-QCD calculations and is leading this effort. I myself am involved in research using quantum computing for particle tracking at the LUXE experiment. The layout of the tracker for this experiment is simpler than those of the LHC experiments, which is why we want to try it here first. We have to understand how to use quantum computers in conjunction with classical computers to solve actual problems efficiently. There is no doubt that quantum computing can solve problems that are otherwise intractable, and we also think it will be able to solve some problems more efficiently, using fewer resources than classical machines. That could also contribute to reducing the impact of computing on climate change.

What was your participation in the 2020 update of the European strategy for particle physics (ESPPU) and how have things progressed since? 

It was exciting to be part of the ESPPU drafting process. I was very impressed by the sincerity and devotion of the people in the hall in Bad Honnef when the process concluded. There was a lot of respect and understanding of the different views on how to balance the scientific ambitions with the realities of funding, R&D needs and other factors.

The ALPS II experiment

The ESPPU recommended first and foremost to complete the HL-LHC upgrade. This is a big undertaking and demands our focus. For the future, an electron–positron Higgs factory is the highest priority, in addition to ramping up accelerator R&D. Last year an accelerator R&D roadmap was prepared following the ESPPU recommendation. Very different directions are laid out, and now the task is to understand how to prioritise and streamline them, and to ensure the relevant aspects are progressing significantly by the next update (probably in 2026). For instance, CERN’s main focus is R&D on the next generation of magnets for a new hadron machine, while DESY has a strong programme in plasma-wakefield accelerators for electron machines. But both DESY and CERN are also contributing to other aspects, and there are other labs and universities in Europe that make important contributions. At DESY we also try to exploit synergies between developing new accelerators for photon science and for high-energy physics.

What is the best machine to follow the LHC?

The next machine needs to be a collider that can measure the Higgs properties at the per-cent and, in some cases, even the per-mille level – a Higgs factory. In addition to the excellent scientific potential, factors to consider are timescale and cost, as well as making it a “green” accelerator and its innovation potential. Finding a good balance there is not easy, and several proposals were studied as part of the ESPPU.

What are your three most interesting open questions in particle physics? 

Mine are related to the Higgs boson. One is the matter–antimatter asymmetry, because the exact form of the electroweak phase transition is closely related to the Higgs field. If it was a smooth transition, it cannot explain the matter–antimatter asymmetry; if it was violent, it could potentially explain it. We should be able to learn something about this with the HL-LHC, but to know for sure we need a future collider. The second question is: why is there a muon? Flavour physics fascinates me, and the Higgs boson is the only particle that distinguishes between the electron, the muon and the tau, which is why I would like to study it extensively. The third question is: what is dark matter? One intriguing possibility is that the Higgs boson decays to dark-matter particles, and with a Higgs factory we could measure this even if it happens for only 0.3% of all Higgs bosons. The Higgs boson is so important for understanding our universe; that’s why we need a Higgs factory, although we will already learn a lot from the LHC and HL-LHC.

Today, women make up more than 30% of the scientists at DESY, whereas in 2005 it was less than 10%

Is the community doing a good job in communicating beyond the field? 

It is crucial that scientists communicate scientific facts, especially now when there are “post-truth” tendencies in society. We have a duty, as people who are publicly funded, to communicate our work to the public. Many people are excited about the origin of the universe and the fundamental laws of physics we are studying. Activities such as the CERN and DESY open days attract many visitors. We also see really good turnouts at public lectures as well as during our “science on tap” activity in Hamburg. During one of these events I gave a talk about the first minutes of the universe; the bar was packed and people had many questions. We should all spend some of our time communicating science. Of course, we mostly have to do the actual research, otherwise we do not have anything to communicate.

You are the first female director in DESY’s 60-year history. What do you think about the situation for women in physics, for instance the “25 by ‘25” initiative?

The 25 by ‘25 initiative is good. We have been fortunate at DESY that there was a strong drive from the German government. Research funding has increased a lot during the past 10–15 years and there was dedicated funding available to attract women to large research centres. Today, women make up more than 30% of the scientists at DESY, whereas in 2005 it was less than 10%. Having special programmes unfortunately appears to be necessary, as change otherwise happens too slowly by itself. Having women in visible roles in science is important. I myself was inspired by several women in particle physics, such as Beate Naroska, the only female professor in the physics department when I was a student; Young-Kee Kim, who was spokesperson of the CDF experiment when I was a postdoc and later deputy director of Fermilab; and, last but not least, Fabiola Gianotti, who was spokesperson of ATLAS when I joined and is now the Director-General of CERN.

Webb prepares to eye dark universe

After 25 years of development, the James Webb Space Telescope (JWST) successfully launched from Europe’s spaceport in French Guiana on the morning of 25 December. Nerves were on edge as the Ariane 5 rocket blasted its $10 billion cargo through the atmosphere, aided by a velocity kick from its equatorial launch site. An equally nail-biting moment came 27 minutes later, when the telescope separated from the launch vehicle and deployed its solar array. In scenes reminiscent of those at CERN on 10 September 2008 when the first protons made their way around the LHC, the JWST command centre erupted in applause. “Go Webb, go!” cheered the ground team as the craft drifted into the darkness.

The result of an international partnership between NASA, ESA and the Canadian Space Agency, Webb took a similar time to design and build as the LHC and cost almost twice as much. Its science goals are also complementary to particle physics. The 6.2 tonne probe’s primary mirror – the largest ever flown in space, with a diameter of 6.5 m compared to 2.4 m for its predecessor, Hubble – will detect light, stretched to the infrared by the expansion of the universe, from the very first galaxies. In addition to shedding new light on the formation of galaxies and planets, Webb will deepen our understanding of dark matter and dark energy. “The promise of Webb is not what we know we will discover,” said NASA administrator Bill Nelson after the launch. “It’s what we don’t yet understand or can’t yet fathom about our universe. I can’t wait to see what it uncovers!”

The promise of Webb is not what we know we will discover. It’s what we don’t yet understand or can’t yet fathom about our universe

Bill Nelson

Five days after launch, Webb successfully unfurled and tensioned its 300 m² sunshield. The craft’s final position, in orbit around the Earth–Sun Lagrange point 2 (L2), keeps the Sun, Earth and Moon all on the same side of the observatory, but the sunshield is still needed to keep its four science instruments operating at around 34 K. The delicate deployment procedure involved 139 release mechanisms, 70 hinge assemblies, some 400 pulleys and 90 individual cables – each of which was a potential single-point failure. Just over one week later, on 7 and 8 January, the two wings of the primary mirror, which had to be folded in for launch, were opened, involving the final four of a total of 178 release mechanisms. The ground team then began the long procedure of aligning the telescope optics via 126 actuators on the backside of the primary mirror’s 18 hexagonal segments. On 24 January, having completed a 1.51 million-km journey, the observatory successfully inserted itself into its orbit around L2, marking the end of the complex deployment process and the beginning of commissioning activities. Commissioning will take months, with Webb scheduled to return its first science images in the summer.

James Webb

The 1998 discovery of the accelerating expansion of the universe, which implies that around 70% of the universe is made up of an unknown dark energy, stemmed from observations of distant type-Ia supernovae that appeared fainter than expected. While the primary evidence came from ground-based observations, Hubble helped confirm the existence of dark energy via optical and near-infrared observations of supernovae at earlier times. Uniquely, Webb will allow cosmologists to see even farther, from as early as 200 million years after the Big Bang, while also extending the observation and cross-calibration of other standard candles, such as Cepheid variables and red giants, beyond what is currently possible with Hubble. Operating in the infrared rather than the optical regime also means less scattering and absorption of light by interstellar dust.

With these capabilities, the JWST should enable the local rate of expansion to be determined to a precision of 1%. This will bring important new information to bear on the current tension between the expansion rates measured at early and late times, as quantified by the Hubble constant, and possibly shed light on the nature of dark energy.

Launching Webb is a huge celebration of the international collaboration that made this mission possible

Josef Aschbacher

By measuring the motion and gravitational lensing of early objects, Webb will also survey the distribution of dark matter, and might even hint at what it’s made of. “In order to make progress in the identification of dark matter, we need observations that clearly discriminate among the tens of possible explanations that theorists have put forward in the past four decades,” explains Gianfranco Bertone, director of the European Consortium for Astroparticle Theory. “If dark matter is ‘warm’ for example – meaning that it is composed of particles moving at mildly relativistic speeds when first structures are assembled – we should be able to detect its imprint on the number density of small dark-matter halos probed by the JWST. Or, if dark matter is made of primordial black holes, as suggested in the early 1970s by Stephen Hawking, the JWST could detect the faint emission produced by the accretion of gas onto these objects in early epochs.”

On 11 February, Webb returned images of its first star in the form of 18 blurry white dots, the product of the unaligned primary-mirror segments all reflecting light from the same star back at the secondary mirror and into its near-infrared camera. Though underwhelming at first sight, this and similar images are crucial to allow operators to gradually align and focus the hexagonal mirror segments until 18 images become one. After that, Webb will start downlinking science data at a rate of about 60 GB per day.

“Launching Webb is a huge celebration of the international collaboration that made this next-generation mission possible,” said ESA director-general Josef Aschbacher. “We are close to receiving Webb’s new view of the universe and the exciting scientific discoveries that it will make.”

RHIC stress-tests the future EIC


The world’s longest-serving heavy-ion collider, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, started its latest run in December. In addition to further probing the quark–gluon plasma, the focus of RHIC Run 22 (the 3.8 km-circumference collider’s 22nd run in as many years) is on testing innovative accelerator techniques and detector technologies for the Electron–Ion Collider (EIC) due to enter operation at Brookhaven in the early 2030s.

The EIC, which will add an electron storage ring to RHIC, will collide 5–18 GeV electrons (and possibly positrons) with ion beams of up to 275 GeV per nucleon, targeting luminosities of 10³⁴ cm⁻² s⁻¹ and a beam polarisation of up to 85%. This will enable researchers to go beyond the present one-dimensional picture of nuclei and nucleons: by correlating the longitudinal components of the quark and gluon momenta with their transverse momenta and spatial distribution inside the nucleon, the EIC will enable 3D “nuclear femtography”.
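To get a feel for the collision energies these beam parameters imply, the centre-of-mass energy of an asymmetric collider is approximately 2√(E_e E_p) in the ultra-relativistic limit. The short sketch below applies this to the electron-beam energies quoted above colliding with 275 GeV protons; it is an illustrative kinematic estimate, not an official EIC design table.

```python
import math

# Approximate centre-of-mass energy of an asymmetric collider in the
# ultra-relativistic limit: sqrt(s) ~ 2 * sqrt(E_e * E_p), neglecting the
# particle masses and the crossing angle.
# Beam energies are those quoted in the text.

def sqrt_s_gev(e_electron_gev: float, e_hadron_gev: float) -> float:
    """Centre-of-mass energy in GeV for head-on, massless-beam kinematics."""
    return 2.0 * math.sqrt(e_electron_gev * e_hadron_gev)

for e_e in (5.0, 18.0):
    print(f"{e_e:>4.1f} GeV electrons on 275 GeV protons: "
          f"sqrt(s) ~ {sqrt_s_gev(e_e, 275.0):.0f} GeV")
# roughly 74 GeV and 141 GeV for the quoted electron-beam energies
```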

Unique ability

Preparations for the EIC rely on RHIC’s unique ability to collide polarised proton beams, achieved via helical dipole magnets, which offers a directional frame of reference for studying hadron collisions. The last time polarised protons were collided at RHIC was in 2017. For Run 22, the accelerator team aims to accumulate proton–proton collisions at the highest possible polarisation, and also at the highest energies (255 GeV per beam). To ensure the EIC hadron beams are as tightly packed as possible, thus maximising the luminosity, the accelerator team will try a technique previously used at RHIC to accelerate heavier particles, but which has never been used with protons before.

“We are going to split each proton bunch into two when they’re still at low energy in the Booster, and accelerate those as two separate bunches,” explains Run-22 coordinator Vincent Schoefer. “That splitting will alleviate some of the stress during low energy, and then we can merge the bunches back together to put very dense bunches into RHIC.” Such merging is challenging, he adds, because it takes around 300,000 turns in the Alternating Gradient Synchrotron (the link between the Booster and RHIC), during which the protons must be handled “very gently”.

To further reduce the spread of high-energy hadron beams, the team will explore several cooling strategies (a major challenge for high-energy hadron beams) for possible use at the EIC. One is coherent electron cooling, whereby electrons from a high-gain free-electron laser are used to attract the protons closer to a central position. In addition, the team plans to ramp up beams of helium-3 ions to develop methods for measuring the polarisation of particles other than protons. Measuring how particles in the beam scatter off a gas target is the established method, but ions such as helium-3 can complicate matters by breaking up when they strike the target. To accurately measure the polarisation of helium-3 and other beams at the EIC, it is necessary to identify when this breakup occurs. During Run 22 the RHIC team will test its ability to accurately characterise scattering products using unpolarised helium-3 beams to develop new polarimetry methods.

During the run, RHIC’s recently upgraded STAR detector will track particles emerging from collisions at a wider range of angles than ever before, covering a rapidity range of –1.5 to 4.2. The upgrades include more finely segmented sensors for the inner part of the time projection chamber, as well as two new forward-tracking detectors and electromagnetic and hadronic calorimetry at one end of the detector, which will allow better reconstruction of jets.

Detector technologies

In addition to increasing the dataset for exploring colour-charge interactions, these upgrades will give physicists crucial information about the detector technologies and the aspects of nucleon structure relevant to the EIC. RHIC’s other main detector, sPHENIX, the upgrade of PHENIX, is under construction and scheduled to enter operation during Run 23 next year.

“Our goal this run is basically doing EIC physics with proton–proton collisions,” says Elke-Caroline Aschenauer, who led the STAR upgrade project. “We have to verify that what you measure in electron–proton collisions at the EIC and in proton–proton events at RHIC is universal – meaning it doesn’t depend on which probe you use to measure it.”
