Since the discovery of the W boson at the SppS 40 years ago, collider experiments at CERN and elsewhere have measured its mass ever more precisely. Such measurements provide a vital test of the Standard Model’s consistency, since the W mass is closely related to the strength of the electroweak interaction and to the masses of the Z boson, top quark and Higgs boson; higher experimental precision is needed to keep up with the most recent electroweak calculations.
The latest experiment to weigh in on the W mass is ATLAS. Reanalysing a sample of 14 million W-boson candidates produced in proton–proton collisions at 7 TeV, the collaboration finds mW = 80.360 ± 0.005 (stat) ± 0.015 (syst) GeV = 80.360 ± 0.016 GeV. The value, which was presented on 23 March at the Rencontres de Moriond, is in agreement with all previous measurements except one – the latest measurement from the CDF experiment at the former Tevatron collider at Fermilab.
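The total uncertainty quoted above follows from combining the statistical and systematic components in quadrature (assuming, as is conventional, that the two are uncorrelated):

```latex
\sigma_{\mathrm{tot}} \;=\; \sqrt{\sigma_{\mathrm{stat}}^{2} + \sigma_{\mathrm{syst}}^{2}}
\;=\; \sqrt{(0.005~\mathrm{GeV})^{2} + (0.015~\mathrm{GeV})^{2}}
\;\approx\; 0.016~\mathrm{GeV}.
```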
In 2017 ATLAS released its first measurement of the W-boson mass, which was determined using data recorded in 2011 when the LHC was running at a collision energy of 7 TeV (CERN Courier January/February 2017 p10). The precise result (80.370 ± 0.019 GeV) agreed with the Standard Model prediction (80.354 ± 0.007 GeV) and all previous experimental results, including those from the LEP experiments. But last year, the CDF collaboration announced an even more precise measurement, based on an analysis of its full dataset (CERN Courier May/June 2022 p9). The result (80.434 ± 0.009 GeV) differed significantly from the Standard Model prediction and from the other experimental results (see figure), calling for more measurements to try to identify the source of the discrepancy.
In its new study, ATLAS reanalysed its 2011 data sample using a more advanced fitting technique as well as improved knowledge of the parton distribution functions that describe how the proton’s momentum is shared amongst its constituent quarks and gluons. In addition, the collaboration verified the theoretical description of the W-production process using dedicated LHC proton–proton runs. The new result is 10 MeV lower than the previous ATLAS result and 15% more precise.
“Due to an undetected neutrino in the particle’s decay, the W-mass measurement is among the most challenging precision measurements performed at hadron colliders. It requires extremely accurate calibration of the measured particle energies and momenta, and a careful assessment and excellent control of modelling uncertainties,” says ATLAS spokesperson Andreas Hoecker. “This updated result from ATLAS provides a stringent test and confirms the consistency of our theoretical understanding of electroweak interactions.”
The LHCb collaboration reported a measurement of the W mass in 2021, while the results from CMS are keenly anticipated. In the meantime, physicists from the Tevatron+LHC W-mass combination working group are calculating a combined mass value using the latest measurements from the LHC, Tevatron and LEP. This involves a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, explains CDF representative Chris Hays from the University of Oxford: “The aim is to give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no significant issues have been identified that significantly change the measurement results, the studies will shed light on their details and differences.”
Faced with the no-show of phenomena beyond the Standard Model at the high mass and energy scales explored so far by the LHC, physicists have increasingly considered the possibility that new physics hides “in plain sight”, namely at mass scales that are easily accessible but with very small coupling strengths. If this were the case, then high-intensity experiments have an advantage: thanks to the large number of events that can be generated, even the most feeble couplings, corresponding to the rarest processes, can be accessible.
Such a high-intensity experiment is NA62 at CERN’s North Area. Designed to measure the ultra-rare kaon decay K+ → π+νν̄, it has also released several results probing the existence of weakly coupled processes that could become visible in its apparatus, a prominent example being the decay of a kaon into a pion and an axion. NA62 can also probe this kind of physics in an unusual way, using a configuration that was not foreseen when the experiment was planned; the first result obtained in this mode was recently reported.
During normal NA62 operations, bunches of 400 GeV protons from the SPS are fired onto a beryllium target to generate secondary mesons, from which, using an achromat, only particles with a fixed momentum and charge are selected. These particles (among them kaons) are then propagated along a series of magnets and finally arrive at the detector 100 m downstream. In a series of studies starting in 2015, however, NA62 collaborators, with the help of phenomenologists, began to explore physics models that could be tested if the target were removed and protons fired directly into a “dump”, a configuration that can be arranged by moving the achromat collimators. They concluded that various processes exist in which new MeV-scale particles such as dark photons could be produced and detected via their decays into di-lepton final states. The challenge is to keep the muon-induced background under control, which cannot be easily understood from simulations alone.
A breakthrough came in 2018, when beam physicists in the North Area understood how the beamline magnets could be operated in such a way as to vastly reduce the background of both muons and hadrons. Instead of using the two pairs of dipoles as a beam achromat for momentum selection, the currents in the second pair are set to induce additional muon sweeping. The scheme was verified during a 2021 run lasting 10 days, during which 1.4 × 10¹⁷ protons were collected on the beam dump. The first analysis of this rapidly collected dataset – a search for dark photons decaying to a di-muon final state – has now been performed.
Hypothesised to mediate a new gauge force, dark photons, A′, could couple to the Standard Model via mixing with ordinary photons. In the modified NA62 set-up, dark photons could be produced either via bremsstrahlung or via decays of secondary mesons, the two mechanisms differing in their cross-sections and in the momentum and angular distributions of the A′. No sign of A′ → μ⁺μ⁻ was found, excluding a region of parameter space for dark-photon masses between 215 and 550 MeV at 90% confidence. A preliminary result for a search for A′ → e⁺e⁻ was also presented at the Rencontres de Moriond in March.
“This result is a milestone,” explains analysis leader Tommaso Spadaro of LNF Frascati. “It proves the capability of NA62 for studying physics in the beam-dump configuration and paves the way for upcoming analyses checking other final states.”
Type Ia supernovae play an important role in the universe, both as the main source of iron and as one of the principal tools for astronomers to measure cosmic-distance scales. They are also important for astroparticle physics, for example allowing the properties of the neutrino to be probed in an extreme environment.
Type Ia supernovae make ideal cosmic rulers because they all look very similar, with roughly equal luminosity and emission characteristics. Therefore, when a cosmic explosion that matches the properties of a type Ia supernova is detected, its known intrinsic luminosity can be compared with its observed brightness to measure the distance to its host galaxy. Despite this importance, the details surrounding the progenitors of these events are still not fully understood. Furthermore, a group of outliers, now known as type Iax events, has recently been identified, indicating that there might be more than one path towards a type Ia explosion.
The reason that typical type Ia events all have roughly equal luminosity lies in their progenitors. The general explanation for these events involves a binary system with at least one white dwarf: a very dense old star consisting mostly of oxygen and carbon that is no longer undergoing fusion, and that is prevented from collapsing into a neutron star or black hole only by electron-degeneracy pressure. As the white dwarf accumulates matter from a nearby companion, its mass increases towards a precise critical limit at which an uncontrolled thermonuclear explosion starts, resulting in the star being unbound and observed as a supernova.
Because several X-ray sources identified in the 1990s by the ROSAT mission turned out to be white dwarfs with hydrogen burning on their surfaces, the matter accumulated by the white dwarf was long thought to be hydrogen from a companion star. The flaw in this model, however, is that type Ia supernovae show no sign of any hydrogen. Helium, on the other hand, has been seen, particularly in the outlier type Iax events. These Iax events, which are predicted to make up 30% of all type Ia events, can be explained by a white dwarf accumulating helium from a companion star that has already shed all of its hydrogen. If the helium accumulates on the surface in a stable way, without intermediate explosions caused by premature ignition, it eventually reaches a mass at which it ignites violently on the surface. This in turn triggers the ignition of the core and could explain the type Iax events. Evidence of such helium-accumulating white dwarfs had, however, not been found.
Now, a group led by researchers from the Max Planck Institute for Extraterrestrial Physics (MPE) has used optical data together with X-ray data from the eROSITA and XMM-Newton missions to find the first clear evidence of such a progenitor system. The group found an object, known as [HP99] 159, located in the Large Magellanic Cloud, which shows all the characteristics of a white dwarf surrounded by an accretion disk of helium. Using historical X-ray data reaching back 50 years, the team also showed that the brightness of the source is relatively stable, indicating that it is accumulating helium at a steady rate, even though the accretion rate is lower than theoretically predicted for stable burning. This suggests that the system is working its way towards ignition in the future.
The discovery of this new X-ray source therefore demonstrates the existence of white dwarfs that accumulate helium from a companion star at a steady rate, thereby allowing them to reach the conditions needed to produce a supernova. This peculiar binary system already provides strong hints of a new type of progenitor that could explain up to 30% of all type Ia supernova events. Follow-up measurements will provide further insight into the complex physics at play in the thermonuclear explosions that produce these events, while [HP99] 159’s characteristics can be used to find similar sources.
In a 1961 book, Richard Feynman describes the great satisfaction he and Murray Gell-Mann felt in formulating a theory that explained the close equality of the Fermi constants for muon and neutron-beta decay. These two physicists and, independently, Gershtein and Zeldovich, had discovered the universality of weak interactions. It was a generalisation of the universality of electric charge and strongly suggested the existence of a common origin of the two interactions, an insight that was the basis for unified theories developed later.
Fermi’s description of neutron beta decay (n → p + e⁻ + ν̄e) involved the product of two vector currents analogous to the electromagnetic current: a nuclear current transforming the neutron into a proton and a leptonic current creating the electron–antineutrino pair. Subsequent studies of nuclear decays and the discovery of parity violation complicated the description, introducing all possible kinds of relativistically invariant interactions that could be responsible for neutron beta decay.
The decay of the muon (μ⁻ → νμ + e⁻ + ν̄e) was also found to involve the product of two vector currents, one transforming the muon into its own neutrino and the other creating the electron–antineutrino pair. What Feynman and Gell-Mann, and Gershtein and Zeldovich, had found is that the nuclear and lepton vector currents have the same strength, despite the fact that the n → p transition is affected by the strong nuclear interaction while the μ → νμ and e → νe transitions are not (we are anticipating here what was discovered only later, namely that the electron and muon each have their own neutrino).
At the end of the 1950s, simplicity finally emerged. As proposed by Sudarshan and Marshak, and by Feynman and Gell-Mann, all known beta decays are described by the products of two currents, each a combination of a vector and an axial vector current. Feynman notes: after 23 years, we are back to Fermi!
The book of 1961, however, also records Feynman’s dismay after the discovery that the Fermi constants of strange-particle beta decays, for example the lambda-hyperon beta decay Λ → p + e⁻ + ν̄e, were smaller by a factor of four or five than the theoretical prediction. In 1960 Gell-Mann, together with Maurice Lévy, had tried to solve the problem but, while taking a step in the right direction, they concluded that it was not possible to make quantitative predictions for the observed decays of the hyperons. It was up to Nicola Cabibbo, in a short article published in 1963 in Physical Review Letters, to reconcile strange-particle decays with the universality of weak interactions, paving the way to the modern unification of electromagnetic and weak interactions.
Over to Frascati
Nicola had graduated in Rome in 1958, under his tutor Bruno Touschek. Hired by Giorgio Salvini, he was the first theoretical physicist in the Electro-Synchrotron Frascati laboratories. There, Nicola met Raoul Gatto, five years his elder, who was coming back from Berkeley, and they began an extremely fruitful collaboration.
These were exciting times in Frascati: the first e⁺e⁻ collider, AdA (Anello di Accumulazione), was being realised, to be followed later by a larger machine, Adone, reaching up to 3 GeV in the centre of mass. New particles were studied at the electro-synchrotron, related to the newly discovered SU(3) flavour symmetry (e.g. the η meson). Cabibbo and Gatto authored an important article on e⁺e⁻ physics and, in 1961, they investigated the weak interactions of hadrons in the framework of the SU(3) symmetry. Gatto and Cabibbo and, at the same time, Coleman and Glashow, observed that the vector currents associated with the SU(3) symmetry by Noether’s theorem include a strangeness-changing current, V(ΔS = 1), that could be associated with strangeness-changing beta decays, in addition to the isospin current, V(ΔS = 0), responsible for strangeness-non-changing beta decays – the same current considered by Feynman and Gell-Mann. For strange-particle decays, the identification implied that the variation of strangeness in the hadronic system has to be equal to the variation of the electric charge (in short: ΔS = ΔQ). The rule is satisfied in Λ beta decay (Λ: S = –1, Q = 0; p: S = 0, Q = +1). However, it conflicted with a single event allegedly observed at Berkeley in 1962 and interpreted as Σ⁺ → n + μ⁺ + νμ, which had ΔS = –ΔQ (Σ⁺: S = –1, Q = +1; n: S = Q = 0). In addition, the problem remained of how to correctly formulate the concept of muon–hadron universality in the presence of four vector currents describing the transitions e → νe, μ → νμ, n → p and Λ → p.
Cabibbo’s angle
In his 1963 paper, written while he was working at CERN, Nicola made a few decisive steps. First, he decided to ignore the evidence of a ΔS = –ΔQ component suggested by Berkeley’s Σ⁺ → n + μ⁺ + νμ event. Nicola was a good friend of Paolo Franzini, then at Columbia University, and the fact that Paolo, with larger statistics, had not yet seen any such event provided a crucial hint. Next, to describe both ΔS = 0 and ΔS = 1 weak decays, Nicola formulated a notion of universality between each leptonic vector current (electronic or muonic) and one, and only one, hadronic vector current. He assumed this current to be a combination of the two currents determined by the SU(3) symmetry that he had studied with Gatto in Frascati (also identified by Coleman and Glashow): Vhadron = aV(ΔS = 0) + bV(ΔS = 1), with a and b numerical constants. By construction, V(ΔS = 0) and V(ΔS = 1) have the same strength as the electron and muon currents; for the hadronic current to have the same strength, one requires a² + b² = 1, that is a = cosθ, b = sinθ.
Cabibbo obtained the final expression of the hadronic weak current, adding to these hypotheses the V–A formulation of the weak interactions. The angle θ became a new constant of nature, known since then as the Cabibbo angle.
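Written out compactly (a restatement of the construction just described, with the V–A structure made explicit), the hadronic weak current takes the form:

```latex
J^{\text{hadron}}_{\mu} \;=\; \cos\theta\,\bigl(V_{\mu} - A_{\mu}\bigr)^{\Delta S = 0}
\;+\; \sin\theta\,\bigl(V_{\mu} - A_{\mu}\bigr)^{\Delta S = 1},
\qquad \cos^{2}\theta + \sin^{2}\theta = 1,
```

so that, for any value of θ, the hadronic current couples with exactly the same universal strength as each leptonic current.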
An important point is that the Cabibbo theory is based on the currents associated with the SU(3) symmetry. For one, this means that it can be applied to the beta decays of all hadrons, mesons and baryons belonging to the different SU(3) multiplets. This was not the case for the precursory Gell-Mann–Lévy theory, which also assumed a single hadronic weak current but was formulated in terms of protons and lambdas, and could not be applied to the other hyperons or to the mesons. In addition, in the limit of exact SU(3) symmetry one can prove a non-renormalisation theorem for the ΔS = 1 vector current, which is entirely analogous to the one proved by Feynman and Gell-Mann for the ΔS = 0 isospin current. The Cabibbo combination, then, guarantees the universality of the full hadronic weak current with respect to the leptonic current for any value of the Cabibbo angle, the suppression of the beta decays of strange particles being naturally explained by a small value of θ. Remarkably, a theorem derived a few years later by Ademollo and Gatto, and by Fubini, states that the non-renormalisation of the vector current’s strength is also valid to first order in SU(3) symmetry breaking.
Photons and quarks
In many instances, Nicola mentioned that a source of inspiration for his assumption for the hadron current was the passage of photons through a polarimeter, a subject he had considered in Frascati in connection with possible experiments of electron scattering through polarised crystals. Linearly polarised photons can be described via two orthogonal states, but what is transmitted is only the linear combination corresponding to the direction determined by the polarimeter. Similarly, there are two orthogonal hadron currents, V (ΔS = 0) and V (ΔS = 1), but only the Cabibbo combination couples to the weak interactions.
An interpretation closer to particle physics came with the discovery of quarks. In quark language, V(ΔS = 0) induces the transition d → u and V(ΔS = 1) the transition s → u. The Cabibbo combination then corresponds to the transition dC → u, with dC = cosθ d + sinθ s. Stated differently, the u quark is coupled by the weak interaction to one, and only one, specific superposition of the d and s quarks: the Cabibbo combination dC. This is Cabibbo mixing, reflecting the fact that in SU(3) there are two quarks with the same charge, –1/3.
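Equivalently, the mixing can be pictured as a rotation by the angle θ in the space of the two charge –1/3 quarks (a minimal restatement; the orthogonal combination sC anticipates the charm coupling of the GIM mechanism discussed below):

```latex
\begin{pmatrix} d_C \\ s_C \end{pmatrix}
=
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\qquad u \;\text{couples only to}\; d_C .
```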
A first comparison between theory and meson and hyperon beta-decay data was done by Cabibbo in his original paper, in the exact SU(3) limit. Specifically, the value of θ was obtained by comparing K+ and π+ semileptonic decays. In baryon semileptonic decays, the matrix elements of vector currents are determined by the SU(3) symmetry, while axial currents depend upon two parameters, the so-called F and D couplings. Many fits have been performed in successive years, which saw a dramatic increase in the decay modes observed, in statistics, and in precision.
Four decades after the 1963 paper, Cabibbo, with Earl Swallow and Roland Winston, performed a complete analysis of hyperon decays in the Cabibbo theory, then embedded in the three-generation Kobayashi and Maskawa theory, taking into account the momentum dependence of vector currents. In their words (and in modern notation):
“… we obtain Vus = 0.2250(27) (= sin θ). This value is of similar precision, but higher than the one derived from Kl3, and in better agreement with the unitarity requirement, |Vud|² + |Vus|² + |Vub|² = 1. We find that the Cabibbo model gives an excellent fit of the existing form-factor data on baryon beta decays (χ² = 2.96 for three degrees of freedom), with F + D = 1.2670 ± 0.0030, F – D = –0.341 ± 0.016, and no indication of flavour SU(3) breaking effects.”
The Cabibbo theory predicts a reduction of the nuclear Fermi constant with respect to the muonic one by a factor cosθ ≈ 0.97. The discrepancy had been noticed by Feynman and S Berman, one of Feynman’s students, who estimated the possible effect of electromagnetic radiative corrections. The situation is much clearer today, with precise data coming from super-allowed Fermi nuclear transitions and radiative corrections under control.
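Numerically, using the Vus value quoted above and neglecting the tiny |Vub|² contribution to the unitarity relation:

```latex
\cos\theta \;=\; \sqrt{1 - \sin^{2}\theta} \;\simeq\; \sqrt{1 - (0.225)^{2}} \;\simeq\; 0.974,
\qquad \cos^{2}\theta \;\simeq\; 0.95,
```

so the strength of nuclear beta decay is reduced by a few per cent relative to muon decay, in line with the measured Fermi constants.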
Closing up
From its very publication, the Cabibbo theory was seen as a crucial development. It indicated the correct way to embody lepton-hadron universality and it enjoyed a heartening phenomenological success, which in turn indicated that we could be on the right track for a fundamental theory of weak interactions.
The idea of quark mixing had profound consequences. It prompted the solution of the spectacular suppression of strangeness-changing neutral processes by the GIM mechanism (Glashow, Iliopoulos and Maiani), where the charm quark couples to the combination of down and strange quarks orthogonal to the Cabibbo combination. Building on Cabibbo mixing and GIM, it has been possible to extend to hadrons the unified SU(2)L ⊗ U(1) theory formulated, for leptons, by Glashow, and by Weinberg and Salam.
CP symmetry violations observed experimentally had no place in the two-generation scheme (four quarks, four leptons) but found an elegant description by Makoto Kobayashi and Toshihide Maskawa in the extension to three generations. Quark mixing introduced by Cabibbo is now described by a three-by-three unitary matrix known in the literature as the Cabibbo–Kobayashi–Maskawa (CKM) matrix. In the past 50 years the CKM scheme has been confirmed with ever increasing accuracy by a plethora of measurements and impressive theoretical predictions (see “Testing quark mixing” figure). Major achievements have been obtained in the studies of charm- and beauty-particle decays and mixing. The CKM paradigm remains a great success in predicting weak processes and in our understanding of the sources of CP violation in our universe.
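For illustration (standard textbook material rather than content of the original article), the Wolfenstein parametrisation of the CKM matrix makes the link to the Cabibbo angle explicit, with λ = sinθ ≈ 0.225:

```latex
V_{\mathrm{CKM}} \;\simeq\;
\begin{pmatrix}
1 - \tfrac{\lambda^{2}}{2} & \lambda & A\lambda^{3}(\rho - i\eta) \\[2pt]
-\lambda & 1 - \tfrac{\lambda^{2}}{2} & A\lambda^{2} \\[2pt]
A\lambda^{3}(1 - \rho - i\eta) & -A\lambda^{2} & 1
\end{pmatrix}
\;+\; \mathcal{O}(\lambda^{4}),
```

where the irreducible complex phase (the parameter η) is the source of the CP violation identified by Kobayashi and Maskawa.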
Nicola Cabibbo passed away in 2010. The authoritative book by Abraham Pais, in its chronology, cites the Cabibbo theory among the most important developments in post-war particle physics. In the History of CERN, Jean Iliopoulos writes: “There are very few articles in the scientific literature in which one does not feel the need to change a single word and Cabibbo’s is definitely one of them. With this work, he established himself as one of the leading theorists in the domain of weak interactions.”
Since their discovery 67 years ago, neutrinos from a range of sources – solar, atmospheric, reactor, geological, accelerator and astrophysical – have provided ever more powerful probes of nature. Although neutrinos are also produced abundantly in colliders, until now no neutrinos produced in such a way had been detected, their presence inferred instead via missing energy and momentum.
A new LHC experiment called FASER, which entered operations at the start of Run 3 last year, has changed this picture with the first observation of collider neutrinos. Announcing the result on 19 March at the Rencontres de Moriond, and in a paper submitted to Physical Review Letters on 24 March, the FASER collaboration reported 153 candidate muon-neutrino and antineutrino interactions reconstructed in its spectrometer, with a significance of 16 standard deviations above the background-only hypothesis. The events are consistent with the characteristics expected from neutrino interactions in terms of secondary-particle production and spatial distribution, and imply the observation of both neutrinos and antineutrinos with incident neutrino energies significantly above 200 GeV. In addition, an ongoing analysis of data from an emulsion/tungsten subdetector called FASERν revealed a first electron-neutrino interaction candidate (see image).
“FASER has directly observed the interactions of neutrinos produced at a collider for the first time,” explains co-spokesperson Jamie Boyd of CERN. “This result shows the detector worked perfectly in 2022 and opens the door for many important future studies with high-energy neutrinos at the LHC.”
The extreme luminosity of proton–proton collisions at the LHC produces a large neutrino flux in the forward direction, with energies leading to cross-sections high enough for neutrinos to be detected using a compact apparatus. FASER is one of two new forward experiments situated on either side of LHC Point 1 to detect neutrinos produced in proton–proton collisions in ATLAS. The other, SND@LHC, also reported its first results at Moriond. The team found eight muon–neutrino candidate events against an expected background of 0.2, with an evaluation of systematic uncertainties ongoing.
Covering energies between a few hundred GeV and several TeV, FASER and SND@LHC narrow the gap between fixed-target and astrophysical neutrinos. One of the unexplored physics topics to which they will contribute is the study of high-energy neutrinos from astrophysical sources. Since the production mechanism and energy of neutrinos at the LHC are similar to those of very-high-energy neutrinos produced in cosmic-ray collisions with the atmosphere, FASER and SND@LHC can be used to precisely estimate this background. Another application is to measure and compare the production rates of all three types of neutrinos, providing an important test of the Standard Model.
Beyond neutrinos, the two experiments open new searches for feebly interacting particles and other new physics. In a separate analysis, FASER presented first results from a search for dark photons decaying to an electron–positron pair. No events were seen in an almost background-free analysis, yielding new constraints on dark photons with couplings of 10⁻⁵ to 10⁻⁴ and masses between 10 and 100 MeV, in a region of parameter space motivated by dark matter.
At the recent Moriond Electroweak conference, the LHCb collaboration presented a new, high-precision measurement of charge–parity (CP) violation using a large sample of B0s → ϕϕ decays, where the ϕ mesons are reconstructed in the K⁺K⁻ final state. Proceeding via a loop transition (b → ss̄s), such “penguin” decays are highly sensitive to possible contributions from unknown particles and therefore provide excellent probes for new sources of CP violation. To date, the only known source of CP violation, which is governed by the Cabibbo–Kobayashi–Maskawa matrix in the quark sector, is insufficient to account for the huge excess of matter over antimatter in the universe; extra sources of CP violation are required.
A B0s or B̄0s meson can change its flavour, oscillating into its antiparticle at a frequency Δms/2π that has been precisely determined by the LHCb experiment. A B0s meson can thus decay either directly to the ϕϕ state or after changing its flavour to the B̄0s state. The phase difference between the two interfering amplitudes changes sign under the CP transformation and is denoted ϕs for B0s and –ϕs for B̄0s decays. A time-dependent CP asymmetry can arise if the phase difference ϕs is non-zero: the asymmetry between the decay rates of initial B0s and B̄0s mesons to the ϕϕ state as a function of the decay time follows a sine wave with amplitude sin(ϕs) and frequency Δms/2π. In the Standard Model (SM) the phase difference is predicted to be consistent with zero, ϕs(SM) = 0.00 ± 0.02 rad.
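Schematically (a standard textbook form of such a time-dependent asymmetry, written here only for illustration and neglecting the small width difference between the Bs mass eigenstates), the asymmetry described above reads:

```latex
A_{CP}(t) \;=\;
\frac{\Gamma_{\bar{B}^{0}_{s}\to\phi\phi}(t) - \Gamma_{B^{0}_{s}\to\phi\phi}(t)}
     {\Gamma_{\bar{B}^{0}_{s}\to\phi\phi}(t) + \Gamma_{B^{0}_{s}\to\phi\phi}(t)}
\;\simeq\; \sin\phi_{s}\,\sin(\Delta m_{s}\,t),
```

which vanishes at all decay times if ϕs = 0, as the SM predicts to good accuracy.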
The observed asymmetry as a function of the B0s → ϕϕ decay time and the projection of the best fit are shown in figure 1 for the Run 2 data sample. The measured asymmetry is diluted by the finite decay-time resolution and the non-zero flavour mis-identification rate of the initial B0s or B̄0s state, and is averaged over two types of linear polarisation states of the ϕϕ system that have CP asymmetries with opposite signs. Taking these effects into account, LHCb measured the CP-violating phase using the full Run 2 data sample. The result, when combined with the Run 1 measurement, is ϕs = –0.074 ± 0.069 rad, which agrees with the SM prediction and improves significantly upon the previous LHCb measurement. In addition to the increased data sample size, the new analysis benefits from improvements in the algorithms for vertex reconstruction and for the determination of the initial flavour of the B0s or B̄0s mesons.
This is the most precise single measurement to date of time-dependent CP asymmetry in any b → s transition. With no evidence for CP violation, the result can be used to derive stringent constraints on the parameter space of physics beyond the SM. Looking to the future, the upgraded LHCb experiment and a planned future phase II upgrade will offer unique opportunities to further explore new-physics effects in b → s decays, which could potentially provide insights into the fundamental origin of the puzzling matter–antimatter asymmetry.
Measurements of the production of hadrons containing heavy quarks (i.e. charm or beauty) in proton–proton (pp) collisions provide an important test of the accuracy of perturbative quantum chromodynamics (pQCD) calculations. The production of heavy quarks occurs in initial hard scatterings of quarks and gluons, whereas the production of light quarks in the underlying event is dominated by soft processes. Thus, measuring heavy-quark hadron production as a function of the charged-particle multiplicity provides insights into the interplay between soft and hard mechanisms of particle production.
Measurements in high-multiplicity pp collisions have shown features that resemble those associated with the formation of quark–gluon plasma in heavy-ion collisions, such as the enhancement of the production of particles with strangeness content and the modification of the baryon-to-meson production ratio as a function of transverse momentum (pT). These effects can be explained by two different types of models: statistical hadronisation models, which evaluate the population of hadron states according to statistical weights governed by the masses of the hadrons and a universal temperature, or models that include hadronisation via coalescence (or recombination) of quarks and gluons which are close in phase space. Both predict an enhancement of the baryon-to-meson and strange-to-non-strange hadron ratios as a function of charged-particle multiplicity.
In the charm sector, the ALICE collaboration has recently observed a multiplicity dependence of the pT-differential Λc⁺/D0 ratio, smoothly evolving from pp to lead–lead collisions, while no dependence was observed for the Ds⁺-meson production yield relative to that of the D0 meson. Measurements of these phenomena in the beauty sector are needed to shed further light on the hadronisation mechanism.
To investigate beauty-quark production as a function of multiplicity and to relate it to that of charm quarks, ALICE measured for the first time the fraction of D0 and D⁺ mesons originating from beauty-hadron decays (denoted non-prompt) as a function of transverse momentum and charged-particle multiplicity in pp collisions at 13 TeV, using the Run 2 dataset. The measurement exploits the different decay-vertex topologies of prompt and non-prompt D mesons using machine-learning classification techniques. The fractions of non-prompt D mesons were observed to increase slightly with pT, from about 5% to 10%, as expected from pQCD calculations (figure 1). Similar fractions were measured in different charged-particle multiplicity intervals, suggesting either no or only a mild multiplicity dependence. This points to a similar production mechanism for charm and beauty quarks as a function of multiplicity.
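The ALICE analysis code itself is not reproduced here; purely as an illustration of the kind of topological classification described above, the sketch below trains a gradient-boosted classifier on toy “decay length” and “impact parameter” features. All numbers, feature choices and the classifier are invented for the example and are not taken from the ALICE measurement.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 20_000

# Toy features: non-prompt D mesons (from beauty-hadron decays) tend to have a more
# displaced decay vertex and a larger impact parameter than prompt D mesons.
# The scales below are arbitrary illustration values, not ALICE parameters.
prompt_len = rng.exponential(0.1, n)            # decay-length proxy (arbitrary units)
nonprompt_len = rng.exponential(0.5, n)
prompt_ip = np.abs(rng.normal(0.0, 0.02, n))    # impact-parameter proxy
nonprompt_ip = np.abs(rng.normal(0.0, 0.08, n))

X = np.concatenate([np.column_stack([prompt_len, prompt_ip]),
                    np.column_stack([nonprompt_len, nonprompt_ip])])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = prompt, 1 = non-prompt

clf = GradientBoostingClassifier().fit(X, y)

# The classifier score could then be fitted (e.g. with templates) to extract the
# non-prompt fraction in each pT and multiplicity interval.
scores = clf.predict_proba(X)[:, 1]
print(f"mean non-prompt score, prompt toy sample:     {scores[y == 0].mean():.2f}")
print(f"mean non-prompt score, non-prompt toy sample: {scores[y == 1].mean():.2f}")
```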
The possible influence of the hadronisation mechanism was investigated by comparing the measured D-meson non-prompt fractions with predictions from Monte Carlo generators such as PYTHIA 8. Good agreement was observed with different PYTHIA tunes, with and without the inclusion of the colour-reconnection mechanism beyond the leading-colour approximation (CR-BLC), which was introduced to describe the production of charm baryons in pp collisions. Only the CR-BLC “Mode 3” tune, which predicts an increase (decrease) of hadronisation into baryons for beauty (charm) quarks at high multiplicity, is disfavoured by the current data.
The measurements of non-prompt D0 and D+ mesons represent an important test of production and hadronisation models in the charm and beauty sectors, and pave the way for future measurements of exclusive reconstructed beauty hadrons in pp collisions as a function of charged-particle multiplicity.
The CMS collaboration has been relentlessly searching for physics beyond the Standard Model (SM) since the start of the LHC. One of the most appealing new theories is supersymmetry or SUSY – a novel fermion-boson symmetry that gives rise to new particles, “naturally” leads to a Higgs boson almost as light as the W and Z bosons, and provides candidate particles for dark matter (DM).
By the end of LHC Run 2, in 2018, CMS had accumulated a high-quality data sample of proton–proton (pp) collisions at an energy of 13 TeV, corresponding to an integrated luminosity of 137 fb⁻¹. With such a large data set, it was possible to search for the production of strongly interacting SUSY particles, i.e. the partners of gluons (gluinos) and quarks (squarks), as well as for SUSY partners of the W and Z bosons (electroweakinos: winos and binos), of the Higgs boson (higgsinos), and of the leptons (sleptons). The cross sections for the direct production of SUSY electroweak particles are several orders of magnitude lower than those for gluino and squark pair production. However, if the partners of gluons and quarks are heavier than a few TeV, it could be that the SUSY electroweak sector is the only one accessible at the LHC. In the minimal SUSY extension of the SM, electroweakinos and higgsinos mix to form six mass eigenstates: two charged (charginos) and four neutral (neutralinos). The lightest neutralino is often considered to be the lightest SUSY particle (LSP) and a DM candidate.
CMS has recently reported results, based on the full Run 2 dataset, from searches for the electroweak production of sleptons, charginos and neutralinos. Decays of these particles to the LSP are expected to produce leptons, or Z, W and Higgs bosons. The Z and W bosons subsequently decay to leptons or quarks, while the Higgs boson primarily decays to b quarks. All final states have been explored with complementary channels to enhance the sensitivity to a wide range of electroweak SUSY mass hypotheses. These cover very compressed mass spectra, where the mass difference between the LSP and its parent particles is small (leading to low-momentum particles in the final state) as well as uncompressed scenarios that would instead produce highly boosted Z, W and Higgs bosons. None of the searches showed event counts that significantly deviate from the SM predictions.
The next step was to statistically combine the results of mutually exclusive search channels to set the strongest possible constraints with the Run 2 dataset and interpret the results of searches in different final states under unique SUSY-model hypotheses. For the first time, fully leptonic, semi-leptonic and fully hadronic final states from six different CMS searches were combined to explore models that differ depending on whether the next-to-lightest supersymmetric partner (NLSP) is “wino-like” or “higgsino-like”, as shown in the left and right panels of figure 1, respectively. The former are now excluded up to NLSP masses of 875 GeV, extending the constraints obtained from individual searches by up to 100 GeV, while the latter are excluded up to NLSP masses of 810 GeV.
With this effort, CMS maximised the output of the Run 2 dataset, providing its legacy reference on electroweak SUSY searches. While the same data are still being used to search for new physics in yet uncovered corners of the accessible phase-space, CMS is planning to extend its reach in the upcoming years, profiting from the extension of the data set collected during LHC Run 3 at an unprecedented centre-of-mass energy of 13.6 TeV.
Untangling the evolution of the universe, in particular the nature of dark energy and dark matter, is a central challenge of modern physics. An ambitious new mission from the European Space Agency (ESA) called Euclid is preparing to investigate the expansion history of the universe and the growth of cosmic structures over the last 10 billion years, covering the entire period over which dark energy is thought to have played a significant role in the accelerating expansion. The 2 tonne, 4.5 m tall and 3.1 m diameter probe is undergoing final tests in Cannes, France, after which it will be shipped to Cape Canaveral in Florida and inserted into the fairing of a SpaceX Falcon 9 rocket, with launch scheduled for July.
Let there be light
Euclid, which was selected by ESA for implementation in 2012 with a budget of about €600 million, has four main objectives. The first is to investigate whether dark energy is real, or whether the apparent acceleration of the universe is caused by a breakdown of general relativity on the largest scales. Second, if dark energy is real, Euclid will investigate whether it is a constant energy spread across space or a new force of nature that evolves with the expansion of the universe. A third objective is to investigate the nature of dark matter, the mass of neutrinos and whether there exist other, so-far undetected fast-moving particle species, and a fourth is to investigate the statistics and properties of the early universe that seeded large-scale structures. To meet these goals, the six-year Euclid mission will use a three-mirror system to direct light from up to a billion galaxies across more than a third of the sky towards a visual imager for photometry and a near-infrared spectrometer and photometer.
So far, the best constraints on the geometry and expansion history of the universe come from cosmic-microwave-background (CMB) surveys. Yet these missions are not the best tracers of the curvature, neutrino masses and expansion history, nor are they best suited to identifying possible exotic subcomponents of dark matter. For this, large galaxy-clustering surveys are required, and Euclid will use three methods to exploit them. The first is redshift-space distortions, which combine how fast galaxies recede from us due to the expansion of the universe with how fast they move towards a region of strong gravitational pull along our line of sight; measuring these distortions in galaxy positions allows the growth rate of structures, and gravity itself, to be investigated. The second is baryonic acoustic oscillations (BAOs), which arose when the universe was a plasma of baryons and photons and which set a characteristic scale related to the sound horizon at recombination. After recombination, photons decoupled from visible matter while baryons were pulled together by gravity and started to form larger structures, with the BAO scale imprinted in galaxy distributions. BAOs thus serve as a ruler to trace the expansion rate of the universe. The third method, weak gravitational lensing, occurs when light from a background source is bent around a massive foreground object such as a galaxy cluster, from which the distribution of dark matter can be inferred.
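As a rough illustration of the standard-ruler idea (this is not Euclid analysis code; the cosmological parameters and the roughly 147 Mpc sound horizon are typical published values, used here only to show the mechanics), the angle subtended by the BAO scale at a given redshift follows from the comoving distance in a flat ΛCDM model:

```python
import numpy as np

# Illustrative flat LambdaCDM parameters (typical published values, not Euclid results).
H0 = 67.7            # Hubble constant, km/s/Mpc
OMEGA_M = 0.31       # matter density parameter
C_KMS = 299_792.458  # speed of light, km/s
R_DRAG = 147.0       # comoving sound horizon at the drag epoch, Mpc (approximate)

def comoving_distance(z, n=10_000):
    """Comoving distance D_C = c * integral_0^z dz' / H(z'), in Mpc (flat LambdaCDM)."""
    zs = np.linspace(0.0, z, n)
    hz = H0 * np.sqrt(OMEGA_M * (1.0 + zs) ** 3 + (1.0 - OMEGA_M))
    integrand = 1.0 / hz
    dz = zs[1] - zs[0]
    return C_KMS * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dz)  # trapezoidal rule

# In a flat universe the transverse comoving distance equals D_C, so the BAO feature
# subtends an angle of roughly R_DRAG / D_M(z) in the galaxy distribution at redshift z.
for z in (0.5, 1.0, 1.5, 2.0):
    d_m = comoving_distance(z)
    theta_deg = np.degrees(R_DRAG / d_m)
    print(f"z = {z:3.1f}:  D_M ~ {d_m:6.0f} Mpc,  BAO angle ~ {theta_deg:.2f} deg")
```

Comparing such predicted angles with the BAO scale measured in the galaxy distribution at different redshifts is what turns the oscillations into a ruler for the expansion history.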
As the breadth and precision of cosmological measurements increase, so do the links with particle physics. CERN and the Euclid Consortium (which consists of more than 2000 scientists from 300 institutes in 13 European countries, the US, Canada and Japan) signed a memorandum of understanding in 2016, after Euclid gained CERN recognised-experiment status in 2015. The collaboration was motivated by technical synergies for the mission’s Science Ground Segment (SGS), which will process about 850 Gbit of compressed data per day – the largest of any ESA mission to date. CERN is contributing critical software tools and related support activities, explains CERN aerospace and environmental applications coordinator Enrico Chesta: “CernVM-FS, developed by the EP-SFT team to assist high-energy physics collaborations to deploy software on the distributed computing infrastructure used to run data-processing applications, has been integrated into Euclid SGS and will be used for software continuous deployment among the nine Euclid science data centres.”
Competitive survey
Euclid’s main scientific objectives also align closely with CERN’s physics challenges. A 2019 CERN-TH/Euclid workshop identified overlapping areas of interest and options for scientific visitor programmes, with topics of potential interest including N-body CMB simulations, redshift space distortions with relativistic effects, model selection of modified gravity, and dark-energy and neutrino-mass estimation from cosmic voids. Over the coming years, Euclid will provide researchers with data against which they can test different cosmological models. “Galaxy surveys have been happening for decades and have grown in scale, but we didn’t hear much about it because the CMB was, until now, more accurate,” says theorist Marko Simonović of CERN. “With Euclid there will be a competitive survey that is big enough to be comparable to CMB data. It is exciting to see what Euclid, and other new missions such as DESI, will tell us about cosmology. And maybe we will even discover something new.”
Like many physicists, Valeria Pettorino’s fascination with science started when she was a child. Her uncle, a physicist himself, played a major role by sharing his passion for science fiction, strings and extra dimensions. She studied physics and obtained her PhD from the University of Naples in 2005, followed by a postdoc at the University of Torino and then SISSA in Italy. In 2012 her path took her to the University of Geneva and a Marie Curie Fellowship, where she worked with theorist Martin Kunz from UNIGE/CERN – a mentor and role model ever since.
Visiting CERN was an invaluable experience that led to lifelong connections. “Meeting people who worked on particle-physics missions always piqued my interest, as they had such interesting stories and experiences to share,” Valeria explains. “I collaborated and worked alongside people from different areas in cosmology and particle physics, and I got the opportunity to connect with scientists working in different experiments.”
After the fellowship, Valeria went to the University of Heidelberg as a research group leader, and during this time she was selected for the “Science to Data Science” programme run by the AI software company Pivigo. Working on artificial intelligence and unsupervised learning to analyse healthcare data for a start-up company in London gave her the opportunity to widen her skillset.
Valeria’s career trajectory turned towards space science in 2007, when she began working on the Euclid mission of the European Space Agency (ESA), due to launch this year with the aim of measuring the geometry of the universe to study dark matter and dark energy. Currently co-lead of the Euclid theory science working group, Valeria has held a number of roles in the mission, including deputy manager of the communication group. In 2018 she became the CEA representative for Euclid–France communication and is currently director of research for the CEA astrophysics department/CosmoStat lab. She also worked on data analysis for ESA’s Planck mission from 2009 to 2018.
Mentoring and networking
In both research collaborations, Valeria worked on numerous projects that she coordinated from start to finish. While leading teams, she studied management with the goal of enabling everyone to reach their full potential. She also completed training in science diplomacy, which helped her gain valuable transferrable skills. “I decided to be proactive in developing my knowledge and started attending webinars, and then training on science diplomacy. I wanted to deepen my understanding on how science can have an impact on the world and society.” In 2022 Valeria was selected to participate in the first Science Diplomacy Immersion Programme organised by the Geneva Science and Diplomacy Anticipator (GESDA), which aims to take advantage of the ecosystem of international organisations in Geneva to anticipate, accelerate and translate emerging scientific themes into concrete actions.
Sharing experience and building connections between people have been recurring themes in Valeria’s career. Nowhere is this better illustrated than in her role, since 2015, as a mentor for the Supernova Foundation – a worldwide mentoring and networking programme for women in physics. “Networking is very important in any career path and having the opportunity to encounter people from a diverse range of backgrounds allows you to grow your network both personally and professionally. The mentoring programme is open to all career levels. There are no barriers. It is a global network of people from 53 countries and there are approximately 300 women in the programme. I am convinced that it is a growing community that will continue to thrive.” Valeria has also acted as a mentor for Femmes & Science (a French initiative by Paris-Saclay University) in 2021–2022, and was recently appointed as one of 100 mentors worldwide for #space4women, an initiative of the United Nations Office for Outer Space Affairs to support women pursuing studies in space science.
A member of the CERN Alumni Network, Valeria thoroughly enjoys staying connected with CERN. “Not only is the CERN Alumni Network excellent for CERN as it brings together a wide range of people from many career paths, but it also provides an opportunity for its members to understand and learn how science can be used outside of academia.”