Dijet excess intrigues at CMS

The Standard Model (SM) has been extremely successful in describing the behaviour of elementary particles. Nevertheless, conundrums such as the nature of dark matter and the cosmological matter-antimatter asymmetry strongly suggest that the theory is incomplete. Hence, the SM is widely viewed as an effective low-energy limit of a more fundamental underlying theory which must be modified to describe particles and their interactions at higher energies.

A powerful way to discover new particles expected from physics beyond the SM is to search for high-mass dijet or multi-jet resonances, as these are expected to have large production cross-sections at hadron colliders. These searches look for a pair of jets originating from a pair of quarks or gluons, coming from the decay of a new particle “X” and appearing as a narrow bump in the invariant dijet-mass distribution. Since the energy scale of new physics is most likely high, it is natural to expect these new particles to be massive.
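The "narrow bump" is a peak in the invariant mass reconstructed from the jets' four-momenta. As an illustration (not code from the CMS analysis), a minimal sketch of that reconstruction, assuming each jet is given as an (E, px, py, pz) four-vector:

```python
import math

def invariant_mass(jets):
    """Invariant mass of a system of jets, each given as an
    (E, px, py, pz) four-vector in consistent units (e.g. TeV)."""
    E = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))  # guard against rounding below zero

# Two back-to-back massless jets of 1 TeV each give m_jj = 2 TeV:
jj = [(1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -1.0)]
print(invariant_mass(jj))  # → 2.0
```

The same function applied to all four jets gives the four-jet mass used to reconstruct the hypothetical particle Y.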

Figure 1

CMS and ATLAS have performed a suite of single-dijet-resonance searches. The next step is to look for new identical-mass particles “X” that are produced in pairs, with (resonant mode) or without (non-resonant mode) a new intermediate heavier particle “Y” being produced and decaying to pairs of X. Such processes would yield two dijet resonances and four jets in the final state: the dijet mass would correspond to particle X and the four-jet mass to particle Y.

The CMS experiment was also motivated to search for Y → XX → 4-jets by a candidate event recorded in 2017 and highlighted in a previous CMS search for dijet resonances (figure 1). This spectacular event has four high-transverse-momentum jets, forming two dijet pairs, each with an invariant mass of 1.9 TeV, and a four-jet invariant mass of 8 TeV.

Figure 2

In results presented on 14 March at the Rencontres de Moriond, the CMS collaboration reported another very similar event, found in a new search optimised for this specific Y → XX → 4-jets topology. Such events could originate from quantum-chromodynamics processes, but those are expected to be extremely rare (figure 2). The two candidate events are clearly visible at high masses, distinct from all the rest. Also shown (magenta) is a simulation of a possible new-physics signal – a diquark decaying to vector-like quarks – with a four-jet mass of 8.4 TeV and a dijet mass of 2.1 TeV, which describes these two candidates very nicely.

The hypothesis that these events originate from the SM at the observed X and Y masses is disfavoured with a local significance of 3.9σ. Taking into account the full range of possible X and Y mass values, the compatibility of the observation with the SM expectation leads to a global significance of 1.6σ.
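The drop from 3.9σ local to 1.6σ global significance reflects the "look-elsewhere effect": an excess somewhere in a wide scan of X and Y masses is far more probable than an excess at one pre-chosen mass. A toy Monte Carlo illustrates the dilution; the bin and trial counts below are purely illustrative and unrelated to the actual CMS statistical procedure:

```python
import random

def max_local_z(n_bins, rng):
    """Largest 'local significance' found when scanning n_bins independent
    background-only test statistics, each a standard Gaussian."""
    return max(rng.gauss(0.0, 1.0) for _ in range(n_bins))

rng = random.Random(42)
n_trials, n_bins = 2000, 100  # hypothetical numbers for illustration

# Fraction of background-only pseudo-experiments in which the scan
# throws up at least a 3-sigma local excess somewhere:
p_global = sum(max_local_z(n_bins, rng) >= 3.0 for _ in range(n_trials)) / n_trials

# A single pre-chosen bin fluctuates above 3 sigma with probability ~0.13%;
# scanning 100 bins inflates the false-alarm rate by nearly two orders
# of magnitude, which is why the global significance is much smaller.
print(p_global)
```

The real analysis accounts for correlated, continuous mass hypotheses, but the qualitative effect is the same.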

The upcoming LHC Run 3 and future High-Luminosity LHC runs will be crucial in telling us whether these events are statistical fluctuations of the SM expectation, or the first signs of yet another groundbreaking discovery at the LHC.

Graph neural networks boost di-Higgs search

Figure 1

Two fundamental characteristics of the Higgs boson (H) that have yet to be measured precisely are its self-coupling λ, which indicates how strongly it interacts with itself, and its quartic coupling to the vector bosons, which mediate the weak force. These couplings can be directly accessed at the LHC by studying the production of Higgs-boson pairs, which is an extremely rare process occurring about 1000 times less frequently than single-H production. However, several new-physics models predict a significant enhancement in the HH production rate compared to the Standard Model (SM) prediction, especially when the H pairs are very energetic, or boosted. Recently, the CMS collaboration developed a new strategy employing graph neural networks to search for boosted HH production in the four-bottom-quark final state, which is one of the most sensitive modes currently under examination.

H pairs are produced primarily via gluon and vector-boson fusion. The former production mode is sensitive to the self-coupling, while the latter probes the quartic coupling involving a pair of weak vector bosons and two Higgs bosons. The extracted modifiers of the coupling-strength parameters, κλ and κ2V, quantify their strengths relative to the SM expectation.

This latest CMS search targets both production modes and selects two Higgs bosons with a high Lorentz boost. When each Higgs boson decays to a pair of bottom quarks, the two quarks are reconstructed as a single large-radius jet. The main challenge is thus to identify the specific H jet while rejecting the background from light-flavour quarks and gluons. Graph neural networks, such as the ParticleNet algorithm, have been shown to distinguish successfully between real H jets and background jets. Using measured properties of the particles and secondary vertices within the jet cone, this algorithm treats each jet as an unordered set of its constituents, considers potential correlations between them, and assigns each jet a probability to originate from a Higgs-boson decay. At an H-jet selection efficiency of 60%, ParticleNet rejects background jets twice as efficiently as the previous best algorithm (known as DeepAK8). A modified version of this algorithm is also used to improve the H-jet mass resolution by nearly 40%.
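The essential property exploited here is permutation invariance: the score must not depend on the order in which the jet's constituents are listed. A minimal sketch of that idea (a Deep-Sets-style set function with random, untrained weights – not the actual ParticleNet graph architecture, whose edge convolutions additionally model inter-constituent correlations):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "network" weights: random and untrained, for illustration only.
W1 = rng.normal(size=(4, 16))   # shared per-constituent feature map
w2 = rng.normal(size=16)        # scoring head

def jet_score(constituents):
    """Permutation-invariant score for a jet given as an (N, 4) array of
    per-constituent features (e.g. pT, eta, phi, charge). Each constituent
    is embedded independently, then summed, so the result cannot depend
    on the ordering of the set."""
    h = np.maximum(constituents @ W1, 0.0)       # shared embedding + ReLU
    pooled = h.sum(axis=0)                       # order-independent pooling
    return 1.0 / (1.0 + np.exp(-pooled @ w2))    # sigmoid -> pseudo-probability

jet = rng.normal(size=(7, 4))                    # a toy 7-constituent jet
shuffled = jet[rng.permutation(7)]               # same set, different order
print(np.isclose(jet_score(jet), jet_score(shuffled)))  # → True
```

A trained classifier learns W1 and w2 (and, in ParticleNet's case, graph convolutions over neighbouring constituents) from simulated H jets and background jets.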

Using the full LHC Run-2 dataset, the new result excludes an HH production rate larger than 9 times the SM cross-section at 95% confidence level, versus an expected limit of 5. This represents an improvement by a factor of 30 compared to the previous best result for boosted HH production. The analysis yields a strong constraint on the HH production rate and κλ, and the most stringent constraint on κ2V to date, assuming all other H couplings to be at their SM values (see figure 1). For the first time, and with the assumption that the other couplings are consistent with the SM, the result excludes the κ2V = 0 scenario at over five standard deviations, confirming the existence of a quartic coupling between two vector bosons and two Higgs bosons. This search paves the way for a more extensive use of advanced machine-learning techniques, the exploration of the boosted HH production regime, and further investigation into the potentially anomalous character of the Higgs boson in Run 3 and beyond.

Extending the reach on Higgs’ self-coupling

Figure 1

The discovery of the Higgs boson and the comprehensive measurements of its properties provide a strong indication that the mechanism of electroweak symmetry breaking (EWSB) is compatible with the one predicted by Brout, Englert and Higgs (BEH) in 1964. But there remain unprobed features of EWSB, chiefly whether the form of the BEH potential follows the predicted “Mexican hat” shape. One of the parameters that determines the form of the BEH potential is the Higgs boson’s trilinear self-coupling, λ. Experimentally, this fundamental parameter can be measured via Higgs-boson pair (HH) production, where a single virtual Higgs boson splits into two Higgs bosons. However, such a measurement is very challenging as the Standard Model (SM) HH production cross-section is more than 1000 times lower than that of single Higgs-boson production.

Physics beyond the SM (BSM) with modified or new Higgs-boson couplings could lead to significantly enhanced HH production. Some BSM scenarios predict new heavy particles that may lead to resonant HH production, in contrast to the non-resonant production that proceeds via the triple-Higgs-boson coupling. New ATLAS results set tight constraints on both the non-resonant and resonant scenarios, showing that the boundaries of what can be achieved with the current and future LHC datasets can be pushed significantly.

The ATLAS collaboration recently released results of searches for HH production in three final states – bbγγ, bbττ and 4b (where one Higgs boson decays into two b-quarks and the other into two photons, two tau-leptons or two b-quarks) – and their combination, exploiting the full LHC Run-2 dataset. The first two analyses target both resonant and non-resonant HH production, while the 4b analysis targets only resonant HH production. These three channels are the most sensitive final states in each scenario. The three decay modes of the second Higgs boson provide good sensitivity in different kinematic regions, making the analyses highly complementary. The HH → bbγγ process has the lowest branching ratio but benefits from high photon trigger and reconstruction efficiency, as well as an excellent diphoton mass resolution, leading to the best sensitivity at low HH invariant masses. The HH → 4b final state has the highest branching ratio but suffers from the high transverse-momentum thresholds of the b-jet triggers, the ambiguity in the Higgs-boson reconstruction and the large multijet background; however, it provides the best sensitivity at high HH invariant masses. Finally, the HH → bbττ decay has a moderate branching ratio and moderate background contamination, giving the best sensitivity in the intermediate HH mass range.

BSM physics with new Higgs-boson couplings could lead to significantly enhanced HH production

With the latest analyses, a remarkably stringent observed (expected) upper limit of 3.1 (3.1) times the SM prediction on non-resonant HH production was obtained at 95% confidence level (CL). The trilinear self-coupling modifier κλ, the strength of the Higgs-boson self-coupling in units of its SM value, is observed (expected) to be constrained between –1.0 and 6.6 (–1.2 and 7.2) at 95% CL (see figure 1). These are the world’s tightest constraints on this process. The observed (expected) exclusion limits at 95% CL on the resonant HH production cross-section range between 1.1 and 595 fb (1.2 and 392 fb) for resonance masses between 250 and 5000 GeV.

The sensitivity of the current analyses is still limited by statistical uncertainties and is expected to improve significantly with the future luminosity increase during LHC Run 3 and the HL-LHC programme. A comparison between the current results and previous partial Run-2 dataset results has shown that an improvement by more than a factor of three on the limits is achieved. A factor of two was expected from the larger dataset, and the remaining improvements arise from better object reconstruction and identification techniques, and new analysis methods.

These latest results inspire confidence that the observation of the SM HH production and a precise measurement of the Higgs-boson trilinear self-coupling may be possible at the HL-LHC.

Exploring the CMB like never before

To address the major questions in cosmology, the cosmic microwave background (CMB) remains the single most important phenomenon that can be observed. Not this author’s words, but those of the recent US National Academies of Sciences, Engineering, and Medicine report Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Astro2020), which recommended that the US pursue a next-generation ground-based CMB experiment, CMB-S4, to enter operation in around 2030. 

The CMB comprises the photons created in the Big Bang. These photons have therefore experienced the entire history of the universe. Everything that has happened has left an imprint on them in the form of anisotropies in their temperature and polarisation with characteristic amplitudes and angular scales. The early universe was hot enough to be completely ionised, which meant that the CMB photons constantly scattered off free electrons. During this period the primary CMB anisotropies were imprinted, tracing the overall geometry of the universe, the fraction of the energy density in baryons, the number of light-relic particles and the nature of inflation. After about 375,000 years of expansion the universe cooled enough for neutral hydrogen atoms to be stable. With the free electrons rapidly swept up by protons, the CMB photons simply free-streamed in whatever direction they were last moving in. When we observe the CMB today we therefore see a snapshot of this so-called last-scattering surface.

The continued evolution of the universe had two main effects on the CMB photons. First, its ongoing expansion stretched their wavelengths to peak at microwave frequencies today. Second, the growth of structure eventually formed galaxy clusters that changed the direction, energy and polarisation of the CMB photons that pass through them, both from gravitational lensing by their mass and from inverse Compton scattering by the hot gas that makes up the intra-cluster medium. These secondary anisotropies therefore constrain all of the parameters that this history depends on, from the moment the first stars formed to the number of light-relic particles and the masses of neutrinos.

The temperature anisotropies of the CMB

As noted by the Astro2020 report, the history of CMB research is that of continuously improving ground and balloon experiments, punctuated by comprehensive measurements from the major satellite missions COBE, WMAP and Planck. The increasing temperature and polarisation sensitivity and angular resolution of these satellites is evidenced in the depth and resolution of the maps they produced (see “Relic radiation” image). However, such maps are just our view of the CMB – one particular realisation of a random process. To derive the underlying cosmology that gave rise to them, we need to measure the amplitude of the anisotropies on various angular scales (see “Power spectra” figure). Following the serendipitous discovery of the CMB in 1965, the first measurements of the temperature anisotropy were made by COBE in 1992. The first peak in the temperature power spectrum was measured by the BOOMERanG and MAXIMA balloons in 2000, followed by the E-mode polarisation of the CMB by the DASI experiment in 2002, and the B-mode polarisation by the South Pole Telescope and POLARBEAR experiments in 2015.
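The amplitude on each angular scale is summarised by the angular power spectrum Cℓ, the variance of the spherical-harmonic coefficients aℓm of the map at multipole ℓ. A minimal sketch of that estimator (the dict-based interface is an illustrative choice; production pipelines use dedicated libraries such as healpy):

```python
import numpy as np

def angular_power_spectrum(alm):
    """C_ell = sum_m |a_{ell m}|^2 / (2*ell + 1), for a dict mapping each
    multipole ell to its 2*ell + 1 complex coefficients."""
    return {ell: float(np.sum(np.abs(np.asarray(a)) ** 2)) / (2 * ell + 1)
            for ell, a in alm.items()}

# Toy example: for a statistically isotropic sky, the a_lm at a given ell
# are drawn from a single Gaussian whose variance is C_ell; the estimator
# averages |a_lm|^2 over the 2*ell + 1 values of m to recover it.
rng = np.random.default_rng(1)
alm = {2: rng.normal(size=5) + 1j * rng.normal(size=5)}
print(angular_power_spectrum(alm)[2])
```

The “Power spectra” figure referenced above is exactly this quantity, measured for temperature and polarisation across a wide range of ℓ.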

CMB-S4, a joint effort supported by the US Department of Energy (DOE) and the National Science Foundation (NSF), will help write the next chapter in this fascinating adventure. Planned to comprise 21 telescopes at the South Pole and in the Chilean Atacama Desert instrumented with more than 500,000 cryogenically-cooled superconducting detectors, it will exceed the capabilities of earlier generations of experiments by more than an order of magnitude and deliver transformative discoveries in fundamental physics, cosmology, astrophysics and astronomy.

The CMB-S4 challenge 

Three major challenges must be addressed to study the CMB at such levels of precision. Firstly, the signals are extraordinarily faint, requiring massive datasets to reduce the statistical uncertainties. Secondly, we have to contend with systematic effects both from imperfect instruments and from the environment, which must be controlled to exquisite precision if they are not to swamp the signals. Finally, the signals are obscured by other sources of microwave emission, especially galactic synchrotron and dust emission. Unlike the CMB, these sources do not have a black-body spectrum, so it is possible to distinguish between CMB and non-CMB sources if observations are made at enough microwave frequencies to break the degeneracy.

Power spectra of the CMB

This third challenge actually proves to be an astrophysical blessing as well as a cosmological curse: CMB observations are also excellent legacy surveys of the millimetre-wave sky, which can be used for a host of other science goals. These range from cataloguing galaxy clusters, to studying the Milky Way, to detecting spatial and temporal transients such as gamma-ray bursts via their afterglows.

Coming together

In 2013 the US CMB community came together in the Snowmass planning process, which informs the deliberations of the decadal Particle Physics Project Prioritization Panel (P5). We realised that achieving the sensitivity needed to make the next leap in CMB science would require an experiment of such magnitude (and therefore cost) that it could only be accomplished as a community-wide endeavour, and that we would therefore need to transition from multiple competing experiments to a single collaborative one. By analogy with the US dark-energy programme, this was designated a “Stage 4” experiment, and hence became known as CMB-S4. 

In 2014 a P5 report made the critical recommendation that the DOE should support CMB science as a core piece of its programme. The following year a National Academies report identified CMB science as one of three strategic priorities for the NSF Office of Polar Programs. In 2017 the DOE, NSF and NASA established a task force to develop a conceptual design for CMB-S4, and in 2019 the DOE took “Critical Decision 0”, identifying the mission need and initiating the CMB-S4 construction project. In 2020 Berkeley Lab was appointed the lead laboratory for the project, with Argonne, Fermilab and SLAC all playing key roles. Finally, late last year, the long-awaited Astro2020 report unconditionally recommended CMB-S4 as a joint NSF and DOE project with an estimated cost of $650 million. With these recommendations in place, the CMB-S4 construction project could begin.

CMB-S4 constraints

From the outset, CMB-S4 was intended to be the first sub-orbital CMB experiment designed to reach specific critical scientific thresholds, rather than simply to maximise the science return under a particular cost cap. Furthermore, as a community-wide collaboration, CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies – including operating at the site best suited to each science goal. One third of the major questions and discovery areas identified across the six Astro2020 science panels depend on CMB observations.

The critical degrees of freedom in the design of any observation are the sky area, frequency coverage, frequency-dependent depth and angular resolution, and observing cadence. Having reviewed the requirements across the gamut of CMB science, four driving science goals have been identified for CMB-S4. 

For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds

The first is to test models of inflation via the primordial gravitational waves they naturally generate. Such gravitational waves are the only known source of a primordial B-mode polarisation signal. The size of these primordial B-modes is quantified by the ratio of the primordial tensor (gravitational-wave) power to the scalar (density-perturbation) power – the tensor-to-scalar ratio, designated r. For the largest and most popular classes of inflationary models, CMB-S4 will make a 5σ detection of r, while failure to make such a measurement will put an upper limit of r ≤ 0.001 at 95% confidence, setting a rigorous constraint on alternative models (see “Constraining inflation” figure). The large-scale B-mode polarisation signal encoding r is the faintest of all the CMB signals, requiring both the deepest measurement and the widest low-resolution frequency coverage of any CMB-S4 science case.

The second goal concerns the dark universe. Dark matter and dark energy make up 95% of the universe’s mass-energy content, and their particular form and composition impact the growth of structure and thus the small-scale CMB anisotropies. The collective influence of the three known light-relic particles (the Standard Model neutrinos) has already been observed in CMB data, but many new light species, such as axion-like particles and sterile neutrinos, are predicted by extensions of the Standard Model. CMB-S4’s goal, and the most challenging measurement in this arena, is to detect any additional light-relic species with freeze-out temperatures up to the QCD phase-transition scale. This corresponds to constraining the uncertainty on the number of light-relic species Neff to ≤ 0.06 at 95% confidence (see “Light relics” figure). Precise measurements of the small-scale temperature and E-mode polarisation signals that encode this signal require the largest sky area of any CMB-S4 science case. In addition, since the sum of the masses of the neutrinos impacts the degree of lensing of the E-mode polarisation into small-scale B-modes, CMB-S4 will be able to constrain this sum around a fiducial value of 58 meV with a 1σ uncertainty ≤ 24 meV (in conjunction with baryon acoustic oscillation measurements) and ≤ 14 meV with better measurements of the optical depth to reionisation. 

Current and anticipated CMB-S4 constraints

The third science goal is to understand the formation and evolution of galaxy clusters, and in particular to probe the early period of galaxy formation at redshifts z > 2. This is enabled by the Sunyaev–Zel’dovich (SZ) effect, whereby CMB photons are up-scattered by the hot, moving gas in the intra-cluster medium. This shifts the CMB photons’ frequency spectrum, resulting in a decrement at frequencies below 217 GHz and an increment at frequencies above, therefore allowing clusters to be identified by matching up the corresponding cold and hot spots. A key feature of the SZ effect is its redshift independence, allowing us to generate complete, flux-limited catalogues of clusters to the survey sensitivity. The small-scale temperature signals needed for such a catalogue require the highest angular resolution and the widest high-resolution frequency coverage of all the CMB-S4 science cases.
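The 217 GHz crossover follows from the non-relativistic thermal SZ spectral function f(x) = x(eˣ + 1)/(eˣ − 1) − 4, with x = hν/kT. A short sketch (a simple bisection, as an illustration) recovers the null frequency from CODATA constants and the measured CMB temperature:

```python
import math

h, k, T_cmb = 6.62607015e-34, 1.380649e-23, 2.725  # SI units; T_cmb in K

def f(x):
    """Non-relativistic thermal SZ spectral function: negative (decrement)
    below the null, positive (increment) above it."""
    return x * (math.exp(x) + 1.0) / (math.exp(x) - 1.0) - 4.0

# Solve f(x) = 0 by bisection on the dimensionless frequency x = h*nu/(k*T).
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)

nu_null = lo * k * T_cmb / h
print(round(nu_null / 1e9))  # null frequency in GHz → 217
```

Observing on both sides of this null is what lets cluster finders match a cold spot at low frequency with a hot spot at high frequency.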

Finally, CMB-S4 aims to explore the mm-wave transient sky, in particular the rate of gamma-ray bursts to help constrain their mechanisms (a few hours to days after the initial event, gamma-ray bursts are observable at longer wavelengths). CMB-S4 will be so sensitive that even its daily maps will be deep enough to detect mm-wave transient phenomena – either spatial from nearby objects moving across our field, or temporal from distant objects exploding in our field. This is the only science goal that places constraints on the survey cadence, specifically on the lag between repeated observations of the same point on the sky. Given its large field of view, CMB-S4 will be an excellent tool for serendipitous discovery of transients but less useful for follow-up observations. The plan is therefore to issue daily alerts for other teams to follow up with targeted observations.

Survey design

While it would be possible to meet all of the CMB-S4 science goals with a single survey, the result – requiring the sensitivity of the inflation survey across the area of the light-relic survey – would be prohibitively expensive. Instead, the requirements have been decoupled into an ultra-deep, small-area survey to meet the inflation goal and a deep, wide-area survey to meet the light-relic goal, the union of these providing a two-tier “wedding cake” survey for the cluster and gamma-ray-burst goals.

Having set the survey requirements, the task was to identify sites at which these observations can most efficiently be made, taking into account the associated cost, schedule and risk. Water vapour is a significant source of noise at microwave frequencies, so the first requirement on any site is that it be high and dry. A handful of locations meet this requirement, and two of them – the South Pole and the high Chilean Atacama Desert – have both exceptional atmospheric conditions and long-standing US CMB programmes. Their positions on Earth also make them ideally suited to CMB-S4’s two-survey strategy: the polar location enables us to observe a small patch of sky continuously, minimising the time needed to reach the required observation depth, and the more equatorial Chilean location enables observations over a large sky area.

CMB-S4 observatory telescopes

Finally, we know that instrumental systematics will be the limiting factor in resolving the extraordinarily faint large-scale B-mode signal. To date, the experiments that have shown the best control of such systematics have used relatively small-aperture (~0.5 m) telescopes. However, the secondary lensing of the much brighter E-mode signal to B-modes, while enabling us to measure the neutrino-mass sum, also obscures the primordial B-mode signal coming from inflation. We therefore need a detailed measurement of this medium- to small-scale lensing signal in order to be able to remove it at the necessary precision. This requires larger, higher-resolution telescopes. The ultra-deep field is therefore itself composed of coincident low- and high-resolution surveys.

A key feature of CMB-S4 is that all of the technologies are already well-proven by the ongoing Stage 3 experiments. These include CMB-S4’s “founding four” experiments, the Atacama Cosmology Telescope (ACT) and POLARBEAR/Simons Array (PB/SA) in Chile, and BICEP/Keck (BK) and the South Pole Telescope (SPT) at the South Pole, which have pairwise-merged into the Simons and South Pole Observatories (SO and SPO). The ACT, PB/SA, BK and SPT are all single-aperture, single-site experiments, while SO and SPO are dual-aperture, single-site experiments. CMB-S4 is therefore the first experiment able to take advantage of both apertures and both sites.

The key difference with CMB-S4 is that it will deploy these technologies on an unprecedented scale. As a result, the primary challenges for CMB-S4 are engineering ones, both in fabricating detector and readout modules in huge numbers and in deploying them in cryostats on telescopes with unprecedented systematics control. The observatory will comprise: 18 small-aperture refractors collectively fielding about 150,000 detectors across eight frequencies for measuring large angular scales; one large-aperture reflector with about 130,000 detectors across seven frequencies for measuring medium-to-small angular scales in the ultra-deep survey from the South Pole; and two large-aperture reflectors collectively fielding about 275,000 detectors across six frequencies for measuring medium-to-small angular scales in the wide-deep survey from Chile (see “Looking up” image). The final configuration maximises the use of available atmospheric windows to control for microwave foregrounds (particularly synchrotron and dust emission at low and high frequencies, respectively), and to meet the frequency-dependent depth and angular-resolution requirements of the surveys. 

CMB-S4 will be able to adopt and adapt the best of all previous experiments technologies and methodologies

Covering the frequency range 20–280 GHz, the detectors employ dichroic pixels at all but one frequency (to maximise the use of the available focal plane) using superconducting transition-edge sensors, which have become the standard in the field. A major effort is already underway to scale up the production and reduce the fabrication variance of the detectors, taking advantage of the DOE national laboratories and industrial partners. Reading out such large numbers of detectors with limited power is a significant challenge, leading CMB-S4 to adopt the conservative but well-proven time-domain multiplexing approach. The detector and readout systems will be assembled into modules that will be cryogenically cooled to 100 mK to reduce instrument noise. Each large-aperture telescope will carry an 85-tube cryostat with a single wafer per optics tube; and each small-aperture telescope will carry a single optics tube with 12 wafers per tube, with three telescopes sharing a common mount. 

Prototyping of detector and readout fabrication lines, and building up module assembly and testing capabilities, is expected to begin in earnest this year. At the same time, the telescope designs will be refined and the data acquisition and management subsystems developed. The current schedule sees a staggered commissioning of the telescopes in 2028–2030, and operations running for seven years thereafter.

Shifting paradigms

CMB-S4 represents a paradigm shift for sub-orbital CMB experiments. For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds in fundamental physics, cosmology, astrophysics and astronomy, rather than by its cost cap. CMB-S4 will span the entire range of CMB science in a single experiment, take advantage of the best of all worlds in the design of its observation and instrumentation, and make the results available to the entire CMB community. As an extremely sensitive, two-tiered, multi-wavelength, mm-wave survey, it will also play a key role in multi-messenger astrophysics and transient science. Taken together, these measurements will constitute a giant leap in our study of the history of the universe.

Charm baryons constrain hadronisation

Figure 1

Understanding the mechanisms of hadron formation represents one of the most interesting open questions in particle physics. Hadronisation is a non-perturbative process that is not calculable in quantum chromodynamics and is typically described with phenomenological models, such as the Lund string model. Ultrarelativistic nuclear collisions, in which a high-density plasma of deconfined quarks and gluons, the quark–gluon plasma (QGP), is created, provide an ideal setup to test the limits of this description. In these conditions, hadrons may be formed via a combination of deconfined quarks close in phase space. This process can lead, for example, to increased production of baryons with respect to mesons in momentum ranges up to 10 GeV/c. The ALICE and CMS experiments at the LHC, and PHENIX and STAR at RHIC, have indeed observed substantial modifications of the event hadro-chemistry in heavy-ion collisions compared to proton–proton and e+e− collisions. In particular, the total abundances of light and strange hadrons were found to follow, quite remarkably, the “thermal” expectations for a deconfined medium close to equilibrium.

Measurements of heavy-flavour hadron production play a unique role in such studies. Heavy quarks are mostly produced in hard scatterings at the early stages of the collisions, well before the QGP is formed. Furthermore, their thermal production is negligible since their masses are larger than the typical QGP temperature. Thanks to the much better theoretical control over their production and propagation in the medium compared to light quarks, heavy quarks provide unique constraints on the QGP properties and the nature of hadronisation mechanisms. Heavy-flavour measurements in heavy-ion collisions also test whether the transverse-momentum-integrated (pT-integrated) yields of charm hadrons are consistent with the hypothesis of statistical models, in which charm quarks are expected to reach an almost complete thermalisation in the QGP, despite being initially very far from equilibrium.

ALICE has recently made an improvement towards a quantitative understanding of hadron formation from a QGP

The ALICE experiment has recently made an improvement towards a quantitative understanding of hadron formation from a QGP by performing the first measurement of the charm baryon-to-meson ratio Λc+/D0 in central (head-on) Pb–Pb collisions at √sNN = 5.02 TeV. By exploiting its unique tracking and particle-identification capabilities, and using machine-learning techniques, ALICE has measured the ratio down to very low pT (less than 1 GeV/c), where hadronisation via a combination of quarks is expected to dominate (figure 1, left). The measured Λc+/D0 ratio in central Pb–Pb collisions is found to be larger than in pp collisions at pT of 4–8 GeV/c (figure 1, right). On the other hand, the pT-integrated ratio was found to be compatible with the pp result within one standard deviation.

A comparison with theoretical calculations confirms the discrimination power of this measurement. The experimental data are well described by transport models that include mechanisms of the combination of quarks from the deconfined medium (TAMU and Catania). Given the current uncertainties, a conclusive answer on the agreement with statistical models (SHMc) cannot yet be reached. This motivates future high-precision and more differential measurements with the upgraded ALICE detector during the upcoming LHC Run-3 Pb–Pb runs. Thanks to the increased rate-capabilities of the new readout systems of the time projection chamber and the new inner tracking system, ALICE will increase its acquisition rate by up to a factor of about 50 in Pb–Pb collisions and will benefit from a much higher tracking resolution (by a factor 3–6 for low-pT tracks). High-accuracy measurements performed in Runs 3 and 4 will therefore provide significant discrimination power on theoretical calculations and strong constraints on the mechanisms underlying the hadronisation of charm quarks from the QGP.

Precision Z-boson production measurements

Figure 1

The precise determination of the Z-boson parameters at e+e− colliders was crucial for the establishment of the electroweak theory of the Standard Model. Today, the Z boson has become an essential object of experimental study at the LHC. In particular, measurements of the Z boson’s production and decay properties in high-energy proton–proton collisions provide insights into the parton distribution functions (PDFs) of the proton and an indirect test of quantum chromodynamics (QCD). 

Recently, using a sample of Z → μ+μ− events, the LHCb collaboration reported the most precise measurement to date of the Z-boson production cross section in the forward region at a centre-of-mass energy of 13 TeV (see figure 1). The collaboration also reported the first measurements of the angular coefficients in Z → μ+μ− decays in the forward region, which encode key information about the QCD dynamics underlying Z-boson production. In addition to improving knowledge of the proton PDFs, these two analyses contribute to the study of spin–momentum correlations in the proton, complementing ATLAS and CMS measurements in the central region.

In addition to the up and down valence quarks, a proton comprises a sea of quark–antiquark pairs primarily produced via gluon splitting. Given their similar masses, one would expect that the nucleon sea is flavour-symmetric for up and down quarks. However, in the early 1990s, the New Muon Collaboration at CERN found that this symmetry is violated. Later, the ratio of down antiquarks to up antiquarks in the proton was directly measured by the NA51 experiment at CERN and the NuSea/E866 experiment at Fermilab, revealing a significant asymmetry in the sea-quark PDF distributions. Recently, the SeaQuest/E906 experiment at Fermilab reported a new result on this ratio, showing different trends in the larger Bjorken-x range (x > 0.2) compared to the previous results and increasing the tension with the NuSea measurement. 

With a detector instrumented in the forward region, LHCb is ideally placed to study decays of highly boosted Z bosons produced by interactions between one parton with large x and another with small x. Considering that both the NuSea and SeaQuest results have large contributions from nuclear effects, the current LHCb measurement of the Z production cross section based on a data sample of 5.1 fb−1 provides important complementary constraints in the large-x region.

The measurement of the angular coefficient “A2” in Z → μ+μ− decays is sensitive to transverse-momentum-dependent (TMD) PDFs, as A2 is proportional to the convolution of the so-called Boer–Mulders functions of the two initial partons. A measurement of A2 can thus provide stringent constraints on the nonperturbative partonic spin–momentum correlations within unpolarised protons. By comparing the measured A2 in different dimuon mass ranges, the LHCb measurement provides an important input for the determination of the proton TMD PDFs, which are crucial to properly describe the production of electroweak bosons at the LHC. Together with the production cross section, these results from LHCb reinforce the importance of a forward detector to complement other measurements at the LHC.
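As an illustration (standard Drell–Yan formalism, not spelled out in the article), A2 is the coefficient of the cos 2φ modulation of the dilepton angular distribution, conventionally written in the Collins–Soper frame as

\[
\frac{d\sigma}{d\Omega} \;\propto\; (1+\cos^2\theta)
\;+\; \frac{A_0}{2}\,(1-3\cos^2\theta)
\;+\; A_1 \sin 2\theta \cos\phi
\;+\; \frac{A_2}{2}\,\sin^2\theta \cos 2\phi
\;+\;\dots
\]

A non-zero A2 at low transverse momentum signals the Boer–Mulders correlation between a quark’s transverse spin and its transverse momentum inside the unpolarised proton.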

Ruins of ancient star system found within our galaxy

C-19

Despite it being our galactic home, many open questions remain about the origin and evolution of the Milky Way. To answer such questions, astronomers study individual stars and clusters of stars within our galaxy as well as those in others. Using data from the European Space Agency’s Gaia satellite, which is undertaking the largest and most precise 3D map of our galaxy by surveying an unprecedented one per cent of the Milky Way’s 100 billion or so stars, an international group has discovered a stream of stars spread across the night sky with peculiar characteristics. The stars appear not only to be very old, but also very similar to one another, indicating a common origin.

The discovered stream of stars, called C-19, is spread over tens of thousands of light years and appears to be the remnant of a globular cluster. A globular cluster is a very dense clump of stars with a typical total mass of 10⁴ or 10⁵ solar masses, the centre of which can be so dense that stable planetary systems cannot form due to gravitational disruptions from neighbouring stars. Additionally, the clusters are typically very old. Estimates based on the luminosity of dead cooling remnants (white dwarfs) reveal some to be up to 12.8 billion years old, in stark contrast to neighbouring stars in their host galaxies. The origin, formation and reason for clusters to end up in these galaxies remain poorly understood.

The stars appear not only to be very old, but also very similar to one another, indicating a common origin

One way to discern the age of globular clusters is to study the elemental composition of the stars within them. This is often expressed as the metallicity, which is the ratio of all elements heavier than hydrogen and helium (confusingly referred to as metals in the astronomical community) to these two light elements. Hydrogen and helium were produced during the Big Bang, while anything heavier was produced in the first generation of stars, implying that the first generation of stars had zero metallicity and that the metallicity increases with each generation. Until recently the lowest metallicities of stars in globular clusters were 0.2% that of the Sun. This “lower floor” in metallicity was thought to put constraints on their maximum age and size, with lower-metallicity clusters thought to be unable to survive to this day. The newly discovered stream, however, has metallicities lower than 0.05% that of the Sun, changing this perception.
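As a rough illustration (the helper function below is ours, not from the article), the quoted percentages map onto the logarithmic [M/H] scale that astronomers commonly use, where 0 corresponds to the solar metal abundance:

```python
import math

def metallicity_dex(z_ratio):
    """Convert a metal abundance relative to solar into dex: [M/H] = log10(Z/Z_sun)."""
    return math.log10(z_ratio)

# Previous "lower floor" for globular clusters: 0.2% of the solar metallicity
print(round(metallicity_dex(0.002), 1))   # -2.7
# C-19 stream: below 0.05% of the solar metallicity
print(round(metallicity_dex(0.0005), 1))  # -3.3
```

On this scale the C-19 stars sit roughly half a dex below the previously assumed floor, which is why the stream forces a revision of the presumed metallicity limit.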

Captured clusters

The stars in the recently observed C-19 stream are no longer a dense cluster. Rather, they all appear to follow the same orbit within our galaxy, the plane of which is almost perpendicular to the galactic disk in which we orbit the galactic centre. This similarity in orbit, as well as their very similar metallicity and general chemical content, indicates that they once formed a globular cluster that was absorbed by the Milky Way. The orbital dynamics further indicate that it was captured at a time when the potential well of the Milky Way was significantly smaller than it is now, implying that the capture of this cluster by our galaxy occurred long ago. Since then, the once-dense cluster has been heated and smeared out as it orbited the galactic centre, through interactions with the disk as well as with the potential dark-matter halo.

The discovery, published in Nature, does not directly answer the question of where and how globular clusters were formed. It does, however, provide us with a nearby laboratory to study questions such as cluster and galaxy formation, the merging of such objects and the subsequent destruction of the cluster through interactions with both baryonic matter and potential dark matter. This particular cluster furthermore consists of some of the oldest stars found, and could have formed before the re-ionisation of the universe, which is thought to have taken place between 150 million and a billion years after the Big Bang. Further information about such ancient objects can be expected soon thanks to the recently launched James Webb Space Telescope. This instrument will be able to see some of the earliest formed galaxies, and can thereby provide additional clues on the origin of the fossils now found within our own galaxy.

Turning the screw on right-handed neutrinos

The KATRIN experiment

In the 1960s, the creators of the Standard Model made a smart choice: while all charged fermions came in pairs, with left-handed and right-handed components, neutrinos were only left-handed. This “handicap” of neutrinos allowed physicists to accommodate in the most economical way important features of the experimental data at that time. First, such left-handed-only neutrinos are naturally massless, and second, individual leptonic flavours (electron, muon and tau) are automatically conserved.

It is now well established that neutrinos have masses and that the neutrino flavours mix with each other, similarly to the quarks. If this had been known 55 years ago, Weinberg’s seminal 1967 work “A Model of Leptons” would have been different: in addition to the left-handed neutrinos, it would very likely also contain their right-handed counterparts. The structure of the Standard Model (SM) dictates that these new states, if they exist, are the only singlets with respect to the weak-isospin and hypercharge gauge symmetry and thus do not participate directly in electroweak interactions (see “On the other hand” figure). This makes right-handed neutrinos (also referred to as sterile neutrinos, singlet fermions or heavy neutral leptons) very special: unlike charged quarks and leptons, which get their masses from the Yukawa interaction with the Brout–Englert–Higgs field, the masses of right-handed neutrinos depend on an additional parameter – the Majorana mass – which is not related to the vacuum expectation value and which results in the violation of lepton-number conservation. As such, right-handed neutrinos are also sometimes referred to as Majorana leptons or Majorana fermions.

Leaving aside the possible signals of eV-scale neutrino states reported in recent years, all established experimental signatures of neutrino oscillations can be explained by the SM with the addition of two heavy-neutral leptons (HNLs). If there were only one HNL, then two out of three SM neutrinos would be massless; with two HNLs, only one of the SM neutrinos is massless – this is not excluded experimentally. Any larger number of HNLs is also possible.

Fermion content

The simplest way to extend the SM in the neutrino sector is to add several HNLs and no other new particles. Already this class of theories is very rich (different numbers of HNLs and different values of their masses and couplings imply very different phenomenology), and contains several different scenarios explaining not only the observed masses and flavour oscillations of the SM neutrinos but also other phenomena that are not accommodated by the SM. The scenario in which the Majorana masses of right-handed neutrinos are much higher than the electroweak scale is known as the “type I see-saw model”, first put forward in the late 1970s. The theory with three right-handed neutrinos (the same as the number of generations in the SM) with their masses below the electroweak scale is called the neutrino minimal standard model (νMSM), and was proposed in the mid-2000s.

Would these new particles be useful for anything else besides neutrino physics? The answer is yes. The first, lightest HNL N1 may serve as a dark-matter particle, whereas the other two HNLs N2,3 not only “give” masses to active neutrinos but can also lead to the matter–antimatter asymmetry of the universe. In other words, the SM extended by just three HNLs could solve the key outstanding observational problems of the SM, provided the masses and couplings of the HNLs are chosen in a specific domain. 

The masses of heavy neutral leptons

The leptonic extension of the SM by right-handed neutrinos is quite similar to the gradual adaptation of electroweak theory to experimental data during the past 50 years. While the bosonic sector of the electroweak model has remained intact since 1967 (confirmed by the discoveries of the W and Z bosons in 1983 and the Higgs boson in 2012), the fermionic sector evolved from one to two to three generations, revealing the remarkable symmetry between quarks and leptons. It took about 20 years to find all the quarks and leptons of the third generation. How much time it will take to discover HNLs, if they indeed exist, depends crucially on their masses.

The value of the Majorana mass, and therefore the physical mass of an HNL, is arbitrary from a theoretical point of view and cannot be found from neutrino-oscillation experiments. The famous see-saw formula that relates the observed masses of the active neutrinos to the Majorana masses of HNLs has a degeneracy: change the Yukawa couplings of HNLs to neutrinos by a factor x and the HNL masses by a factor x², and the active neutrino masses and the physics of their oscillations remain intact. The scale of HNL masses thus can be any number from a fraction of an eV to 10¹⁵ GeV (see “Options abound” figure). Moreover, there could be several HNLs with very different masses. Indeed, even in the SM the masses of charged fermions, though they share a similar origin, differ by almost six orders of magnitude. 
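The degeneracy can be made explicit with the schematic see-saw relation (a simplified one-flavour form, for illustration):

\[
m_\nu \;\simeq\; \frac{(Yv)^2}{M_N}, \qquad
Y \to x\,Y, \;\; M_N \to x^2 M_N
\;\;\Rightarrow\;\;
m_\nu \to \frac{(xYv)^2}{x^2 M_N} = m_\nu ,
\]

where v is the Higgs vacuum expectation value, Y the Yukawa coupling and M_N the Majorana mass. Any rescaling of this form leaves the observed active-neutrino masses unchanged, which is why oscillation data alone cannot pin down the HNL mass scale.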

Motivated by the value of the active neutrino masses, the HNL could be light, with masses of the order of 1 eV. Alternatively, similar to the known quarks and charged leptons, they could be somewhere around the GeV or Fermi scale. Or they could be close to the grand unification scale, 10¹⁵ GeV, where the strong and electromagnetic interactions are thought to be unified. These possibilities have different theoretical and experimental consequences. 

The case of the light sterile neutrino

The see-saw formula tells us that if the mass of HNLs is around 1 eV, their Yukawa couplings should be of the order of 10⁻¹². Such light sterile neutrinos can be potentially observed in neutrino experiments, as they can be involved in the oscillations together with the three active neutrino species. Several experiments – including LSND, GALLEX, SAGE, MiniBooNE and BEST – have reported anomalies in neutrino-oscillation data (the so-called short-baseline, gallium and reactor anomalies) that could be interpreted as a signal for the existence of light sterile neutrinos. However, it looks difficult, if not impossible, to reconcile the existence of these states with recent negative results of other experiments such as MINOS+, MicroBooNE and IceCube, accounting for additional constraints coming from β-decay, neutrinoless double-β decay and cosmology.

Cosmological bounds

The parameters of light sterile neutrinos required to explain the experimental anomalies are in strong tension with the cosmological bounds (see “Cosmological bounds” figure). For example, their mixing angle with the ordinary neutrinos should be sufficiently large that these states would have been produced abundantly in the early universe, affecting its expansion rate during Big Bang nucleosynthesis and thus changing the abundances of the light elements. In addition, light sterile neutrinos would affect the formation of structure. Having been created in the hot early universe with relativistic velocities, they would have escaped from forming structures until they cooled down in much later epochs. This so-called “hot dark matter” scenario would mean that the smallest structures, which form first, and the larger ones, which require much more time to develop, would experience different amounts of dark matter. Moreover, the presence of such particles would affect baryon acoustic oscillations and therefore impact the value of the Hubble constant deduced from them.

Besides tensions between the experiments and cosmological bounds, light sterile neutrinos do not provide any solution to the outstanding problems of the SM. They cannot be dark-matter particles because they are too light, nor can they produce the baryon asymmetry of the universe as their Yukawa couplings are too small to give any substantial contribution to lepton-number violation at the temperatures (> 160 GeV) at which the anomalous electroweak processes with baryon non-conservation have a chance to convert a lepton asymmetry into a baryon asymmetry. 

Three Fermi-scale heavy neutral leptons

Another possible scale for HNL masses is around a GeV, plus or minus a few orders of magnitude. Right-handed neutrinos with such masses do not interfere with active-neutrino oscillations because the corresponding length over which these oscillations may occur is far too small. As only two active-neutrino mass differences are fixed by neutrino-oscillation experiments, it is sufficient to have two HNLs N2,3 with appropriate Yukawa couplings to active neutrinos: to get the correct neutrino masses, they should not be smaller than ~10⁻⁸ (compared to the electron Yukawa coupling of ~10⁻⁶). These two HNLs may produce the baryon asymmetry of the universe, as we explain later, whereas the lightest singlet fermion, N1, may interact with neutrinos much more weakly and thus can be a dark-matter particle (although unstable, its lifetime can greatly exceed the age of the universe). 

Three main considerations determine the possible range of masses and couplings of the dark-matter sterile neutrino (see “Dark-matter constraints” figure). The first is cosmological production. If N1 interacts too strongly, it would be overproduced in ℓ+ℓ− → N1ν reactions, making the abundance of dark matter larger than that inferred from observations and providing an upper limit on its interaction strength. Conversely, the requirement to produce enough dark matter results in a lower bound on the mixing angle that depends on the conditions in the early universe during the epoch of N1 production. Moreover, the lower bound disappears completely if N1 can also be produced at very high temperatures by interactions related to gravity or at the end of cosmological inflation. The second consideration is X-ray data. Radiative N1 → γν decays produce a narrow line that can be detected by X-ray telescopes such as XMM–Newton or Chandra, resulting in an upper limit on the mixing angle between sterile and active neutrinos. While this upper limit depends on the uncertainties in the distribution of dark matter in the Milky Way and other nearby galaxies and clusters, as well as on the modelling of the diffuse X-ray background, it is possible to marginalise over these to obtain very robust constraints. 
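The narrowness of the line follows from two-body kinematics (a standard observation, not specific to this article): a dark-matter particle decaying essentially at rest, with a near-massless neutrino in the final state, emits a monochromatic photon,

\[
N_1 \to \gamma\,\nu \quad\Rightarrow\quad E_\gamma \simeq \frac{m_{N_1}}{2},
\]

broadened only by the Doppler shift from the virial motion of dark matter in the host halo. This is what makes the signal distinguishable from smooth astrophysical X-ray backgrounds.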

Dark-matter constraints

The third consideration for the sterile neutrino’s properties is structure formation. If N1 is too light, a very large number-density of such particles is required to make an observed halo of a small galaxy. As HNLs are fermions, however, their number density cannot exceed that of a completely degenerate Fermi gas, placing a very robust lower bound on the N1 mass. This bound can be further improved by taking into account that light dark-matter particles remain relativistic until late epochs and therefore suppress or erase density perturbations on small scales. As a result, they would affect the inner structure of the halos of the Milky Way and other galaxies, as well as the matter distribution in the intergalactic medium, in ways that can be observed via gravitational-lensed galaxies, gaps in the stellar streams in galaxies and the spectra of distant quasars. 
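The phase-space argument above can be sketched as an order-of-magnitude estimate (our simplified version of the Tremaine–Gunn reasoning, with g internal degrees of freedom): for a halo of dark-matter density ρ and velocity dispersion σ, the number density n = ρ/m cannot exceed that of a completely degenerate Fermi gas with momenta up to p ~ mσ,

\[
\frac{\rho}{m} \;\le\; \frac{g\,(m\sigma)^3}{6\pi^2\hbar^3}
\quad\Rightarrow\quad
m^4 \;\gtrsim\; \frac{6\pi^2\hbar^3\,\rho}{g\,\sigma^3},
\]

so the observed densities and velocity dispersions of the smallest dwarf galaxies translate into a lower limit on m of order a few hundred eV.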

Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM

The upper limits on the interaction strength of sterile neutrinos fix the overall scale of the active neutrino masses in the νMSM. The dark-matter sterile neutrino effectively decouples from the see-saw formula, making the mass of one of the active neutrinos much smaller than the observed solar and atmospheric neutrino-mass differences and fixing the masses of the two other active neutrinos to approximately 0.009 eV and 0.05 eV (for the normal ordering) and to the near-degenerate value 0.05 eV for the inverted ordering.

HNLs at the GeV scale and beyond 

Our universe is baryon-asymmetric – it does not contain antimatter in amounts comparable with that of matter. Though the SM satisfies all three “Sakharov conditions” necessary for the generation of a baryon asymmetry (baryon-number non-conservation, C- and CP-violation, and departure from thermal equilibrium), it cannot explain the observed baryon asymmetry. The Kobayashi–Maskawa CP-violation is too small to produce any substantial effects, and departures from thermal equilibrium are tiny at the temperatures at which the anomalous fermion-number non-conserving processes are active. This is not the case with two GeV-scale HNLs: these particles are not in thermal equilibrium for temperatures above a few tens of GeV, and CP violation in their interactions with leptons can be large. As a result, a lepton asymmetry is produced, which is converted into a baryon asymmetry by the baryon-number-violating reactions of the SM.

The requirement to generate the baryon asymmetry in the νMSM puts stringent constraints on the masses and couplings of the HNLs (see “Baryon-asymmetry constraints” figure). The mixing angle of these particles cannot be too large, otherwise they equilibrate and erase the baryon asymmetry; nor can it be below a certain value, otherwise the active-neutrino masses would be too small. We also know that their mass should be larger than that of the pion, otherwise their decays in the early universe would spoil the success of Big Bang nucleosynthesis. In addition, the masses of the two HNLs should be close to each other so as to enhance CP-violating effects. Interestingly, HNLs with these properties are within the experimental reach of existing and future accelerators, as we shall see.

Baryon-asymmetry constraints

The final possible choice of HNL masses is associated with the grand unification scale, ~10¹⁵ GeV. To get the correct neutrino masses, the Yukawa couplings of a pair of these superheavy particles should be of the order of one, in which case the baryon asymmetry of the universe can be produced via thermal leptogenesis and anomalous baryon- and lepton-number non-conservation at high temperatures. The third HNL, if interacting extremely weakly, may play the role of a dark-matter particle, as described previously. Another possibility is that there are three superheavy HNLs and one light one, to play the role of dark matter. This model, as well as that with HNL masses of the order of the electroweak scale, may therefore solve the most pressing problems of the SM. The only trouble is that we will never be able to test it experimentally, since the masses of N2,3 are beyond the reach of any current or future experiment.

Experimental opportunities

It is very difficult to detect HNLs experimentally. Indeed, if the masses of these particles are within the reach of current and planned accelerators, they must interact orders of magnitude more weakly than the ordinary weak interactions. As for the dark-matter sterile neutrino, the most promising route is indirect detection with X-ray space telescopes. The new X-ray spectrometer XRISM, which is planned to be launched this year, has great potential to unambiguously detect a signal from dark-matter decay. Like many astrophysical observatories, however, it will not be able to determine the particle origin of this signal. Thus, complementary laboratory searches are needed. One experimental proposal that claims sufficient sensitivity to reach into the cosmologically relevant region is HUNTER, based on radioactive-atom trapping and high-resolution decay-product spectrometry. Sterile neutrinos with masses of around a keV can also show up as a kink in the β-decay spectrum of radioactive nuclei, as discussed in the ambitious PTOLEMY proposal. The current generation of experiments that study β-decay spectra – KATRIN and Troitsk nu-mass – also perform searches for keV HNLs, but they are sensitive to significantly larger mixing angles than required for a dark-matter particle. Extending the KATRIN experiment with a multi-pixel silicon drift detector, TRISTAN, will significantly improve the sensitivity here.

The most promising prospects for finding the N2,3 responsible for neutrino masses and baryogenesis are experiments at the intensity frontier. For HNL masses below 5 GeV (the beauty threshold) the best strategy is to direct proton beams at a target to create K, D or B mesons that decay producing HNLs, and then to search for HNL decays through “nothing → leptons and hadrons” processes in a near detector. This strategy was used previously by the PS191 experiment at CERN’s Proton Synchrotron (PS), by NOMAD, BEBC and CHARM at the Super Proton Synchrotron (SPS), and by NuTeV at Fermilab. There are several proposals for future experiments along these lines. The proposed SHiP experiment at the SPS Beam Dump Facility has the best reach, as it can cover almost all of the parameter space down to the lowest bound on the coupling constants coming from neutrino masses. The SHiP collaboration has already performed detailed studies and beam tests, and the experiment is under consideration by the SPS and PS experiments committee. A smaller-scale proposal, SHADOWS, covers part of the interesting parameter space.

Electron coupling

The search for HNLs can be carried out at the near detectors of DUNE at Fermilab and T2K/T2HK in Japan, which are due to come online later this decade. The LHC experiments ATLAS, CMS, LHCb, FASER and SND, as well as the proposed CODEX-b facility, can also be used, albeit with fewer chances to enter deeply into the cosmologically interesting part of the HNL parameter space. The decays of HNLs can also be searched for at future huge detectors such as MATHUSLA. And, going to larger HNL masses, breakthroughs can be made at the proposed Future Circular Collider FCC-ee, studying the processes Z → νN with a displaced vertex (DV) corresponding to the subsequent decay of N to available channels (see “Electron coupling” figure).

Conclusions

Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM. But the situation is very different from that in the period preceding the discovery of the Higgs boson, when the consistency of the SM together with other experimental results allowed us to firmly conclude that either the Higgs boson had to be discovered at the LHC, or new physics beyond the SM must show up. Although we know for sure that the SM is incomplete, we do not have a firm prediction about where to search for new particles, nor what their masses, spins, interaction types and strengths are.

Experimental guidance and historical experience suggest that the SM should be extended in the fermion sector, and the completion of the SM with three Majorana fermions solves the main observational problems of the SM at once. If this extension of the SM is correct, the only new particles to be discovered in the future are three Majorana fermions. They have remained undetected so far because of their extremely weak interactions with the rest of the world.

Webb prepares to eye dark universe

After 25 years of development, the James Webb Space Telescope (JWST) successfully launched from Europe’s spaceport in French Guiana on the morning of 25 December. Nerves were on edge as the Ariane 5 rocket blasted its $10 billion cargo through the atmosphere, aided by a velocity kick from its equatorial launch site. An equally nail-biting moment came 27 minutes later, when the telescope separated from the launch vehicle and deployed its solar array. In scenes reminiscent of those at CERN on 10 September 2008 when the first protons made their way around the LHC, the JWST command centre erupted in applause. “Go Webb, go!” cheered the ground team as the craft drifted into the darkness.

The result of an international partnership between NASA, ESA and the Canadian Space Agency, Webb took a similar time to design and build as the LHC and cost almost twice as much. Its science goals are also complementary to those of particle physics. The 6.2 tonne probe’s primary mirror – the largest ever flown in space, with a diameter of 6.5 m compared to 2.4 m for its predecessor, Hubble – will detect light, stretched to the infrared by the expansion of the universe, from the very first galaxies. In addition to shedding new light on the formation of galaxies and planets, Webb will deepen our understanding of dark matter and dark energy. “The promise of Webb is not what we know we will discover,” said NASA administrator Bill Nelson after the launch. “It’s what we don’t yet understand or can’t yet fathom about our universe. I can’t wait to see what it uncovers!”

The promise of Webb is not what we know we will discover. It’s what we don’t yet understand or can’t yet fathom about our universe

Bill Nelson

Five days after launch, Webb successfully unfurled and tensioned its 300 m² sunshield. Although the craft’s final position at Earth–Sun Lagrange point 2 (L2) ensures that it is sheltered by Earth’s shadow, further protection from sunlight is necessary to keep its four science instruments operating at 34 K. The delicate deployment procedure involved 139 release mechanisms, 70 hinge assemblies, some 400 pulleys and 90 individual cables – each of which was a potential single-point failure. Just over one week later, on 7 and 8 January, the two wings of the primary mirror, which had to be folded in for launch, were opened, involving the final four of a total of 178 release mechanisms. The ground team then began the long procedure of aligning the telescope optics via 126 actuators on the backside of the primary mirror’s 18 hexagonal segments. On 24 January, having completed a 1.51 million-km journey, the observatory successfully inserted itself into its orbit at L2, marking the end of the complex deployment process and the beginning of commissioning activities. The process will take months, with Webb scheduled to return its first science images in the summer.

James Webb

The 1998 discovery of the accelerating expansion of the universe, which implies that around 70% of the universe is made up of an unknown dark energy, stemmed from observations of distant type-Ia supernovae that appeared fainter than expected. While the primary evidence came from ground-based observations, Hubble helped confirm the existence of dark energy via optical and near-infrared observations of supernovae at earlier times. Uniquely, Webb will allow cosmologists to see even farther, from as early as 200 million years after the Big Bang, while also extending the observation and cross-calibration of other standard candles, such as Cepheid variables and red giants, beyond what is currently possible with Hubble. Operating in the infrared rather than optical regime also means less scattering of light from interstellar gas.

With these capabilities, the JWST should enable the local rate of expansion to be determined to a precision of 1%. This will bring important information to the current tension between the measured expansion rate at early and late times, as quantified by the Hubble constant, and possibly shed light on the nature of dark energy.

Launching Webb is a huge celebration of the international collaboration that made this mission possible

Josef Aschbacher

By measuring the motion and gravitational lensing of early objects, Webb will also survey the distribution of dark matter, and might even hint at what it’s made of. “In order to make progress in the identification of dark matter, we need observations that clearly discriminate among the tens of possible explanations that theorists have put forward in the past four decades,” explains Gianfranco Bertone, director of the European Consortium for Astroparticle Theory. “If dark matter is ‘warm’ for example – meaning that it is composed of particles moving at mildly relativistic speeds when first structures are assembled – we should be able to detect its imprint on the number density of small dark-matter halos probed by the JWST. Or, if dark matter is made of primordial black holes, as suggested in the early 1970s by Stephen Hawking, the JWST could detect the faint emission produced by the accretion of gas onto these objects in early epochs.”

On 11 February, Webb returned images of its first star in the form of 18 blurry white dots, the product of the unaligned primary-mirror segments all reflecting light from the same star back at the secondary mirror and into its near-infrared camera. Though underwhelming at first sight, this and similar images are crucial to allow operators to gradually align and focus the hexagonal mirror segments until 18 images become one. After that, Webb will start downlinking science data at a rate of about 60 GB per day.

“Launching Webb is a huge celebration of the international collaboration that made this next-generation mission possible,” said ESA director-general Josef Aschbacher. “We are close to receiving Webb’s new view of the universe and the exciting scientific discoveries that it will make.”

BASE breaks new ground in matter–antimatter tests

BASE

The BASE collaboration at the CERN Antiproton Decelerator (AD) has made the most precise comparison yet between the properties of matter and antimatter. Reporting in Nature in January, following a 1.5-year-long measurement campaign, the collaboration finds the charge-to-mass ratios of protons and antiprotons to be identical within an experimental uncertainty of just 16 parts per trillion. The result is four times more precise than the previous BASE comparison in 2015 and places strong constraints on possible violations of CPT invariance in the Standard Model.

The charge-to-mass ratio is now the most precisely measured property of the antiproton

Stefan Ulmer

Invariance under the simultaneous operations of charge conjugation, parity transformation and time reversal is a pillar of quantum field theories such as the Standard Model. Direct, high-precision tests of CPT invariance are therefore powerful probes of new physics, and of the possible mechanisms through which the universe came to be matter-dominated.

“The charge-to-mass ratio is now the most precisely measured property of the antiproton,” says BASE spokesperson Stefan Ulmer of RIKEN in Japan. “To reach this precision, we made considerable upgrades to the experiment and carried out the measurements when the antimatter factory was closed down, so that they would not be affected by disturbances from the experiment’s magnetic field.” The upgrades include a rigorous re-design of the cryostage of the experiment and the development of a multi-layer shielding-coil system, which considerably reduced magnetic-field fluctuations in the central measurement trap, explains Ulmer. “Another important ingredient is the implementation of a superconducting image-current detection system with tunable resonance frequency and ultra-high non-destructive detection efficiency, which eliminates the dominant systematic shift of the previous charge-to-mass ratio comparison.”

The BASE team confined antiprotons and negatively charged hydrogen ions in a state-of-the-art Penning trap, in which charged particles follow a cyclical trajectory with a frequency that scales with the trap’s magnetic-field strength and the particle’s charge-to-mass ratio.
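The principle can be sketched with standard Penning-trap physics (a textbook relation, not a detail taken from the BASE paper): in a trap with magnetic-field strength B, a particle of charge q and mass m circles at the cyclotron frequency

```latex
\nu_c \;=\; \frac{1}{2\pi}\,\frac{q}{m}\,B ,
```

so comparing the cyclotron frequencies of two species confined in the same field gives their charge-to-mass ratios directly, with B dropping out of the ratio.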

By alternately feeding antiprotons and hydrogen ions one at a time into the trap, the team was able to measure their cyclotron frequencies under the same conditions. Performed over four campaigns between December 2017 and May 2019, the measurements involved more than 24,000 cyclotron-frequency comparisons, each lasting 260 seconds. Within the experimental uncertainty, the result, −(q/m)p̄/(q/m)p = 1.000000000003(16), demonstrates that the Standard Model respects CPT invariance at an energy scale of 1.96 × 10⁻²⁷ GeV at 68% confidence. It also improves knowledge of 10 coefficients in the Standard Model extension – a generalised, observer-independent effective field theory used for investigations of Lorentz violation.
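The quoted energy scale can be checked with a rough back-of-the-envelope calculation: it corresponds to the fractional frequency precision multiplied by the energy hν of one cyclotron quantum. The sketch below assumes a trap field of about 1.945 T (the value quoted in BASE publications, not stated in this article).

```python
import math

# Physical constants (SI, CODATA values)
q = 1.602176634e-19      # elementary charge [C]
m_p = 1.67262192369e-27  # proton mass [kg]
h = 6.62607015e-34       # Planck constant [J s]

B = 1.945                # assumed BASE trap field [T]

# Cyclotron frequency nu_c = qB / (2 pi m), roughly 30 MHz
nu_c = q * B / (2 * math.pi * m_p)

# Energy of one cyclotron quantum, converted to GeV
E_quantum_GeV = h * nu_c / q / 1e9

# Energy scale probed: fractional precision times h*nu_c
precision = 16e-12       # 16 parts per trillion
E_scale_GeV = precision * E_quantum_GeV
print(f"nu_c ~ {nu_c / 1e6:.1f} MHz, energy scale ~ {E_scale_GeV:.2e} GeV")
```

Running this reproduces the article’s figure of about 1.96 × 10⁻²⁷ GeV, which supports the reading of the bound as a limit on a CPT-violating shift of the cyclotron energy levels.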

Weak equivalence principle

The BASE team also used their data to test the weak equivalence principle, which states that different bodies in the same gravitational field undergo the same acceleration. Any difference between the gravitational interaction of protons and antiprotons, for example due to anomalous gravitational scalar or tensor couplings to antimatter, would result in a difference in the proton and antiproton cyclotron frequencies. Sampling the varying gravitational field of the Sun as the Earth orbits it, BASE found no such difference, constraining the strength of anomalous antimatter–gravity interactions to less than 1.8 × 10⁻⁷ and enabling the first differential test of the weak equivalence principle (WEP) using antiprotons.

“From this interpretation we constrain the differential matter–antimatter WEP-violating coefficient to less than 0.03, which is comparable to the initial precision goals of other AD experiments that aim to drop antihydrogen in the Earth’s gravitational field,” explains Ulmer. “BASE did not directly drop antimatter, but our measurement of the influence of gravity on a baryonic antimatter particle is, according to our understanding, conceptually very similar, indicating no anomalous interaction between antimatter and gravity at the achieved level of uncertainty.”
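The order of magnitude of the quoted differential coefficient can be recovered with a simplified estimate. As Earth’s slightly elliptical orbit carries the trap closer to and farther from the Sun, the solar gravitational potential at the experiment is modulated, and a WEP-violating coupling would modulate the antiproton-to-proton frequency ratio in step. The numbers below (orbital eccentricity, 1 AU, solar mass) are standard values, and treating the 16-ppt frequency-ratio limit as the bound on that modulation is a simplification of the paper’s actual analysis, so agreement should only be expected at the order-of-magnitude level.

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
c = 2.998e8          # speed of light [m/s]
a = 1.496e11         # mean Sun-Earth distance, 1 AU [m]
e = 0.0167           # eccentricity of Earth's orbit

# Solar gravitational potential at 1 AU, in units of c^2 (~1e-8)
phi = G * M_sun / (a * c**2)

# Peak-to-peak modulation over a year from the varying distance (~3e-10)
delta_phi = 2 * e * phi

# Simplified limit on the differential WEP-violating coefficient:
# (frequency-ratio limit) / (potential modulation)
ratio_limit = 16e-12
alpha_D = ratio_limit / delta_phi
print(f"alpha_D ~ {alpha_D:.2f}")
```

The result comes out at a few times 10⁻², the same order as the 0.03 quoted by Ulmer, which illustrates why a clock-comparison experiment at rest can act as a differential test of antimatter gravity.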

The collaboration expects to reach even higher sensitivities on both the WEP test and the proton–antiproton charge-to-mass ratio comparison by increasing the experiment’s magnetic-field strength, stability and homogeneity. Further improvements are anticipated from the use of transportable antiproton traps, such as BASE-STEP, which allow precision antiproton experiments to be moved from the fluctuating accelerator environment to a calm laboratory space.
