
Europe targets a user facility for plasma acceleration

Electron-driven plasma wakefield acceleration

Energetic beams of particles are used to explore the fundamental forces of nature, produce known and unknown particles such as the Higgs boson at the LHC, and generate new forms of matter, for example at the future FAIR facility. Photon science also relies on particle beams: electron beams that emit pulses of intense synchrotron light, including soft and hard X-rays, in either circular or linear machines. Such light sources enable time-resolved measurements of biological, chemical and physical structures on the molecular down to the atomic scale, allowing a diverse global community of users to investigate subjects ranging from viruses and bacteria to materials science, planetary science, environmental science, nanotechnology and archaeology. Last but not least, particle beams for industry and health support many societal applications, ranging from the X-ray inspection of cargo containers to food sterilisation, and from chip manufacturing to cancer therapy.

This scientific success story has been made possible through a continuous cycle of innovation in the physics and technology of particle accelerators, driven for many decades by exploratory research in nuclear and particle physics. The invention of radio-frequency (RF) technology in the 1920s opened the path to energy gains of several tens of MeV per metre. Very-high-energy accelerators were constructed with RF technology, entering the GeV and finally the TeV energy scales at the Tevatron and the LHC. New collision schemes were developed, for example the mini “beta squeeze” in the 1970s, advancing luminosity and collision rates by orders of magnitude. The invention of stochastic cooling at CERN enabled the discovery of the W and Z bosons 40 years ago.

However, intrinsic technological and conceptual limits mean that the size and cost of RF-based particle accelerators are increasing as researchers seek higher beam energies. Colliders for particle physics have reached a circumference of 27 km at LEP/LHC and close to 100 km for next-generation facilities such as the proposed Future Circular Collider. Machines for photon science, operating in the GeV regime, occupy a footprint of up to several km and the approval of new facilities is becoming limited by physical and financial constraints. As a result, the exponential progress in maximum beam energy that has taken place during the past several decades has started to saturate (see “Levelling off” figure). For photon science, where beam-time on the most powerful facilities is heavily over-subscribed, progress in scientific research and capabilities threatens to become limited by access. It is therefore hoped that the development of innovative and compact accelerator technology will provide a practical path to more research facilities and ultimately to higher beam energies for the same investment. 

Maximum beam acceleration

At present the most successful new technology relies on the concept of plasma acceleration. Proposed in 1979, this technique promises energy gains of up to 100 GeV per metre of acceleration, up to 1000 times higher than is possible in RF accelerators. In essence, the metallic walls of an RF cavity, with their intrinsic field limitations, are replaced by a dynamic and robust plasma structure with very high fields. First, the free electrons in a neutral plasma are used to convert the transverse ponderomotive force of a laser, or the transverse space-charge force of a charged particle beam, into a longitudinal accelerating field. While the “light” electrons in the plasma column are expelled from the path of the driving force, the “heavy” plasma ions remain in place. The ions therefore establish a restoring force and re-attract the oscillating plasma electrons. A plasma cavity forms behind the drive pulse, in which the main electron beam is placed and accelerated with gradients of up to 100 GV per metre. Difficulties in the plasma-acceleration scheme arise from the small scales involved (sub-millimetre transverse diameter), the micrometre-level tolerances required and the resulting demands on stability. Different concepts include laser-driven plasma wakefield acceleration (LWFA), electron-driven plasma wakefield acceleration (PWFA) and proton-beam-driven plasma wakefield acceleration. Gains in electron energy have reached 8 GeV (BELLA, Berkeley), 42 GeV (FFTB, SLAC) and 2 GeV (AWAKE, CERN) in these three schemes, respectively.
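
The “100 GV per metre” figure can be checked against the standard cold-plasma wave-breaking estimate (a textbook scaling, not quoted in the article itself), which ties the maximum accelerating field to the plasma electron density n0:

\[ E_0\,[\mathrm{V/m}] \;\approx\; 96\,\sqrt{n_0\,[\mathrm{cm^{-3}}]}\,, \]

so that a typical plasma density of n0 = 10¹⁸ cm⁻³ gives E0 ≈ 96 GV/m, roughly a thousand times the gradient of conventional RF cavities.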

At the same time, the beam quality of plasma-acceleration schemes has advanced to the level required for free-electron lasers (FELs): linac-based facilities that produce extremely brilliant, short pulses of radiation for the study of ultrafast molecular and other processes. There have been several reports of free-electron lasing in plasma-based accelerators in recent years, one relying on LWFA by a team in China and one on PWFA by the EuPRAXIA team in Frascati, Italy. Another publication, by a French and German team, has recently demonstrated seeding of the FEL process in an LWFA plasma accelerator.

Scientific and technical progress in plasma accelerators is driven by several dozen groups and a number of major test facilities worldwide, including internationally leading programmes at CERN, STFC, CNRS, DESY, various centres and institutes in the Helmholtz Association, INFN, LBNL, RAL, Shanghai XFEL, SCAPA, SLAC, SPring-8, Tsinghua University and others. In Europe, the 2020 update of the European strategy for particle physics included plasma accelerators as one of five major themes, and a strategic analysis towards a possible plasma-based collider was published in a 2022 CERN Yellow Report on future accelerator R&D.

Enter EuPRAXIA

In 2014 researchers in Europe agreed that a combined, coordinated R&D effort should be set up to realise a larger plasma-based accelerator facility that serves as a demonstrator. The project should aim to produce high-quality 5 GeV electron beams via innovative laser- and electron-driven plasma wakefield acceleration, achieving a significant reduction in size and possible savings in cost over state-of-the-art RF accelerators. This project was named the European Plasma Research Accelerator with Excellence in Applications (EuPRAXIA), and it was agreed that it should deliver pulses of X-rays, photons, electrons and positrons to users from several disciplines. EuPRAXIA’s beams will mainly serve the fields of structural biology, chemistry, materials science, medical imaging, particle-physics detectors and archaeology. It is not a dedicated particle-physics facility but will be an important stepping stone towards any plasma-based collider.

EuPRAXIA project consortia

The EuPRAXIA project started in 2015 with a design study, which was funded under the European Union (EU) Horizon 2020 programme and culminated at the end of 2019 with the publication of the world’s first conceptual design report for a plasma-accelerator facility. The targets set out in 2014 could all be achieved in the EuPRAXIA conceptual design. In particular, it was shown that sufficiently competitive performance could be reached and that an initial reduction in facility size by a factor of two to three is indeed achievable for a 5 GeV plasma-based FEL facility. The published design includes realistic constraints on transfer lines, facility infrastructure, laser-lab space, undulator technologies, user areas and radiation shielding. Several innovative solutions were developed, including the use of magnetic chicanes for high-quality, multi-stage plasma accelerators. The EuPRAXIA conceptual design report was submitted to peer review and published in 2020.

The EuPRAXIA implementation plan proposes a distributed research infrastructure with two construction and user sites and several centres of excellence. The presently foreseen centres, in the Czech Republic, France, Germany, Hungary, Portugal and the UK, will support R&D, prototyping and the construction of machine components for the two user sites. This distributed concept will ensure international competitiveness and leverage existing investments in Europe in an optimal way. Having received official government support from Italy, Portugal, the Czech Republic, Hungary and the UK, the consortium applied in 2020 to the European Strategy Forum on Research Infrastructures (ESFRI). The proposed facility for a free-electron laser was then included in the 2021 ESFRI roadmap, which identifies those research facilities of pan-European importance that correspond to the long-term needs of European research communities. EuPRAXIA is the first ever plasma-accelerator project on the ESFRI roadmap and the first accelerator project since the placement of the High-Luminosity LHC in 2016.

Stepping stones to a user facility 

In 2023 the European plasma-accelerator community received a major boost towards the development of a user-ready plasma-accelerator facility with the funding of several multi-million-euro initiatives under the umbrella of the EuPRAXIA project. These are the EuPRAXIA preparatory phase, the EuPRAXIA doctoral network and EuPRAXIA advanced photon sources, as well as funding for the construction of one of the EuPRAXIA sites in Frascati, near Rome (see “Frascati future” image).

Proposed EuPRAXIA building

The EU, Switzerland and the UK have awarded €3.69 million to the EuPRAXIA preparatory phase, which comprises 34 participating institutes from Italy, the Czech Republic, France, Germany, Greece, Hungary, Israel, Portugal, Spain, Switzerland, the UK, the US and CERN as an international organisation. The new grant will give the consortium a unique chance to prepare the full implementation of EuPRAXIA over the next four years. The project will fund 548 person-months, including additional funding from the UK and Switzerland, and will be supported by an additional 1010 person-months in-kind. The preparatory-phase project will connect research institutions and industry from the above countries plus China, Japan, Poland and Sweden, which signed the EuPRAXIA ESFRI consortium agreement, and define the full implementation of the €569 million EuPRAXIA facility as a new, distributed research infrastructure for Europe. 

Alongside the EuPRAXIA preparatory phase, a new Marie Skłodowska-Curie doctoral network, coordinated by INFN, has also been funded by the EU and the UK. The network, which started in January 2023 and benefits from more than €3.2 million over its four-year duration, will offer 12 high-level fellowships across 10 universities, six research centres and seven industry partners that will carry out an interdisciplinary and cross-sector plasma-accelerator research and training programme. The project’s focus is on scientific and technical innovations, and on boosting the career prospects of its fellows.

EuPRAXIA at Frascati

Italy is supporting the EuPRAXIA advanced photon sources project (EuAPS) with €22 million. The project has been promoted by INFN, CNR and Tor Vergata University of Rome. EuAPS will fulfil some of the scientific goals defined in the EuPRAXIA conceptual design report by building and commissioning a distributed facility providing users with advanced photon sources: a plasma-based betatron source delivering soft X-rays, a mid-power, high-repetition-rate laser and a high-power laser. The funding comes in addition to about €120 million for construction of the beam-driven facility and the FEL facility of EuPRAXIA at Frascati. R&D activities for the beam-driven facility are currently being performed at the INFN SPARC_LAB laboratory.

EuPRAXIA is the first ever plasma-accelerator project on the ESFRI roadmap 

EuPRAXIA will be the user facility of the future for the INFN Frascati National Laboratory. The European site for the second, laser-driven leg of EuPRAXIA will be decided in 2024 as part of the preparatory-phase project. Present candidate sites include ELI-Beamlines in the Czech Republic, the future EPAC facility in the UK and CNR in Italy. With its foreseen electron energy range of 1–5 GeV, the facility will enable applications in diverse domains, for instance as a compact free-electron laser, compact sources for medical imaging and positron generation, tabletop test beams for particle detectors, and deeply penetrating X-ray and gamma-ray sources for materials testing. The first parts of EuPRAXIA are foreseen to enter operation in 2028 at Frascati and are designed to be a stepping stone for possible future plasma-based facilities, such as linear colliders at the energy frontier. The project is driven by the excellence, ingenuity and hard work of several hundred physicists, engineers, students and support staff who have worked on EuPRAXIA since 2015, connecting, at present, 54 institutes and industries from 18 countries in Europe, Asia and the US.

Back to life: LHC Run 3 recommences

LHC Run 3

Following its year-end technical stop (YETS) beginning on 28 November 2022, the LHC is springing back to life to continue Run 3 operations at the energy frontier. The restart process began on 13 February with the beam commissioning of Linac4, which was upgraded during the technical stop to allow a 30% increase in the peak current, to be taken advantage of in future runs. On 3 March the Proton Synchrotron Booster began beam commissioning, followed by the Proton Synchrotron (PS). 

In the early hours of 17 March, the PS sent protons down the transfer lines to the door of the Super Proton Synchrotron (SPS), and the meticulous process of adjusting the thousands of machine parameters began. Following a rigorous beam-based realignment campaign, and a brief interruption to allow transport and metrology experts to move selected magnets, sometimes by only a fraction of a millimetre, SPS operators re-injected the beam and quantified and validated the orbit correction ready for injection into the LHC. Right on schedule, on 28 March the first beams successfully entered the LHC. Thanks to very fast threading, both beams were circulating the same day, even producing first “splash” events in the detectors. As the Courier went to press, the intensity ramp-up was under way. Collisions in the LHC are expected to commence by the end of April, heralding the start of a relatively short but intense physics run that is scheduled to end on 30 October.

Refinements

Among many improvements to the accelerator complex made during the YETS, four modules in the SPS kicker system were upgraded to reduce the amount of heat deposited by the beam, and new instruments were installed in the LHC tunnel. These include the beam gas curtain, which will provide 2D images of the alignment of the beams to make data-taking more precise. Ten years in the making, the device was designed for the high-luminosity upgrade of the LHC (HL-LHC) as part of a collaboration between CERN, Liverpool University, the Cockcroft Institute and GSI.

“It’s a challenging year ahead, with the 2023 run length reduced by 20% for energy cost reasons,” says Rende Steerenberg, head of the operations group. “But we maintain the integrated-luminosity goal of 75 fb⁻¹ by enhancing the beam performance and maximising beam availability.”

To cope with the higher luminosities during Run 3, and to prepare for a further luminosity leap at the HL-LHC beginning in 2029, many upgrades to the four main LHC experiments took place during Long Shutdown 2 (LS2) from 2019 to 2022. While the bulk of HL-LHC upgrades for ATLAS and CMS will take place during LS3, beginning in 2026, the ALICE and LHCb detectors underwent significant transformations during LS2. In the final weeks leading to the LHC restart, the LHCb collaboration completed the last element of its Upgrade 1 – the upstream tracker. 

This advanced silicon-strip detector, located at the entrance of the LHCb bending magnet, allows fast determination of track momenta. This speeds up the LHCb trigger by a factor of three, which is vital to operate the newly installed 40 MHz fully software-based trigger. The new tracker will also improve the reconstruction efficiency of long-lived particles that decay after the vertex locator (VELO), and will provide better coverage overall, especially in the very forward regions. It is composed of 968 silicon-hybrid modules arranged in four vertical planes to handle the varying occupancy over the detector acceptance. A dedicated front-end ASIC, the “SALT chip”, provides pulse shaping with fast baseline restoration and digitisation, while nearby detector electronics implement the transformation to optical signals that are transmitted to the remote data-acquisition system in LHCb’s new data centre. Institutes from the US, Italy, Switzerland, Poland and China were involved in designing, building and testing the upstream tracker. Assembly began in 2021 and intensive work took place underground throughout the recent YETS, so the device installation was successfully completed by cavern closure on 27 March.

Under pressure

However, earlier in the year, there was an incident that affected another LHCb subdetector, the VELO. This occurred on 10 January, when there was a loss of control of the LHC primary vacuum system at the interface with the VELO. At the time, the primary and secondary vacuum volumes were filled with neon as the installation of the upstream tracker was taking place. A failure in one of the relays in the overpressure safety system not only prevented the safety system from triggering at the appropriate time, but also led to an issue with the power supply that supports some of the machine instrumentation, causing the pressure balancing system to mistakenly pump on the primary volume. The subsequent pressure build-up went beyond specification limits and led to a plastic deformation of the mechanical interface – an ultrathin aluminium shield called the “RF box” – between the LHC and detector volumes. The RF box is mechanically linked to the VELO and a change in its shape affects the degree to which the VELO can be moved and centred around the colliding beams.

To minimise any risk of impact on the other LHC experiments, the LHCb collaboration will wait until this year’s YETS to replace the RF box. In the meantime, the collaboration has been developing ways to mitigate the impact on data-taking, explains LHCb spokesperson Chris Parkes of the University of Manchester: “Initially we were very concerned that the VELO could have been damaged, but fortunately this is not the case. After much careful recovery work, we will be able to operate the system in 2023, and after the RF box is replaced, we will be back to full performance.”

ATLAS increases precision on W mass

Latest ATLAS measurement

Since the discovery of the W boson at the Spp̄S 40 years ago, collider experiments at CERN and elsewhere have measured its mass ever more precisely. Such measurements provide a vital test of the Standard Model’s consistency, since the W mass is closely related to the strength of the electroweak interaction and to the masses of the Z boson, top quark and Higgs boson; higher experimental precision is needed to keep up with the most recent electroweak calculations.

The latest experiment to weigh in on the W mass is ATLAS. Reanalysing a sample of 14 million W candidates produced in proton–proton collisions at 7 TeV, the collaboration finds mW = 80.360 ± 0.005 (stat.) ± 0.015 (syst.) GeV = 80.360 ± 0.016 GeV. The value, which was presented on 23 March at the Rencontres de Moriond, is in agreement with all previous measurements except one – the latest measurement from the CDF experiment at the former Tevatron collider at Fermilab.
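
The quoted total uncertainty is simply the statistical and systematic components added in quadrature:

\[ \sigma_{\mathrm{tot}} \;=\; \sqrt{0.005^2 + 0.015^2}\ \mathrm{GeV} \;\approx\; 0.016\ \mathrm{GeV}. \]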

In 2017 ATLAS released its first measurement of the W-boson mass, which was determined using data recorded in 2011 when the LHC was running at a collision energy of 7 TeV (CERN Courier January/February 2017 p10). The precise result (80.370 ± 0.019 GeV) agreed with the Standard Model prediction (80.354 ± 0.007 GeV) and all previous experimental results, including those from the LEP experiments. But last year, the CDF collaboration announced an even more precise measurement, based on an analysis of its full dataset (CERN Courier May/June 2022 p9). The result (80.434 ± 0.009 GeV) differed significantly from the Standard Model prediction and from the other experimental results (see figure), calling for more measurements to try to identify the source of the discrepancy.

In its new study, ATLAS reanalysed its 2011 data sample using a more advanced fitting technique as well as improved knowledge of the parton distribution functions that describe how the proton’s momentum is shared amongst its constituent quarks and gluons. In addition, the collaboration verified the theoretical description of the W-production process using dedicated LHC proton–proton runs. The new result is 10 MeV lower than the previous ATLAS result and 15% more precise. 

“Due to an undetected neutrino in the particle’s decay, the W-mass measurement is among the most challenging precision measurements performed at hadron colliders. It requires extremely accurate calibration of the measured particle energies and momenta, and a careful assessment and excellent control of modelling uncertainties,” says ATLAS spokesperson Andreas Hoecker. “This updated result from ATLAS provides a stringent test and confirms the consistency of our theoretical understanding of electroweak interactions.” 

The LHCb collaboration reported a measurement of the W mass in 2021, while the results from CMS are keenly anticipated. In the meantime, physicists from the Tevatron+LHC W-mass combination working group are calculating a combined mass value using the latest measurements from the LHC, Tevatron and LEP. This involves a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, explains CDF representative Chris Hays from the University of Oxford: “The aim is to give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no issues have been identified that significantly change the measurement results, the studies will shed light on their details and differences.”

Searching for dark photons in beam-dump mode

NA62 detector

Faced with the no-show of phenomena beyond the Standard Model at the high mass and energy scales explored so far by the LHC, physicists are increasingly considering the possibility that new physics hides “in plain sight”, namely at mass scales that are easily accessible but with very small coupling strengths. If this were the case, then high-intensity experiments have an advantage: thanks to the large number of events that can be generated, even the most feeble couplings, corresponding to the rarest processes, become accessible.

Such a high-intensity experiment is NA62 at CERN’s North Area. Designed to measure the ultra-rare kaon decay K+ → π+νν̄, it has also released several results probing the existence of weakly coupled processes that could become visible in its apparatus, a prominent example being the decay of a kaon into a pion and an axion. But there is also an unusual way in which NA62 can probe this kind of physics, using a configuration that was not foreseen when the experiment was planned, for which the first result was recently reported.

During normal NA62 operations, bunches of 400 GeV protons from the SPS are fired onto a beryllium target to generate secondary mesons, from which, using an achromat, only particles with a fixed momentum and charge are selected. These particles (among them kaons) are then propagated along a series of magnets and finally arrive at the detector 100 m downstream. In a series of studies starting in 2015, however, NA62 collaborators, with the help of phenomenologists, began to explore physics models that could be tested if the target were removed and protons were fired directly into a “dump”, an arrangement achieved by moving the achromat collimators. They concluded that various processes exist in which new MeV-scale particles such as dark photons could be produced and detected via their decays into di-lepton final states. The challenge is to keep the muon-induced background under control, which cannot be easily understood from simulations alone.

A breakthrough came in 2018 when beam physicists in the North Area understood how the beamline magnets could be operated in such a way as to vastly reduce the background of both muons and hadrons. Instead of using the two pairs of dipoles as a beam achromat for momentum selection, the currents in the second pair are set to induce additional muon sweeping. The scheme was verified during a 2021 run lasting 10 days, during which 1.4 × 10¹⁷ protons were collected on the beam dump. The first analysis of this rapidly collected dataset – a search for dark photons decaying to a di-muon final state – has now been performed.

Hypothesised to mediate a new gauge force, dark photons, A′, could couple to the Standard Model via mixing with ordinary photons. In the modified NA62 set-up, dark photons could be produced either via bremsstrahlung or via decays of secondary mesons, the two mechanisms differing in their cross-sections and in the distributions of the momenta and angles of the A′. No sign of A′ → μ+μ– was found, excluding a region of parameter space for dark-photon masses between 215 and 550 MeV at 90% confidence. A preliminary result from a search for A′ → e+e– was also presented at the Rencontres de Moriond in March.
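
In the parametrisation standard in this field (not spelled out in the article itself), the coupling arises from kinetic mixing between the dark photon A′ and the ordinary photon, with a small dimensionless strength ε:

\[ \mathcal{L} \;\supset\; -\frac{\varepsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}\,, \]

so that a fermion of electric charge q e acquires a suppressed coupling ε q e to the A′, and production and decay rates scale as ε².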

“This result is a milestone,” explains analysis leader Tommaso Spadaro of LNF Frascati. “It proves the capability of NA62 for studying physics in the beam-dump configuration and paves the way for upcoming analyses checking other final states.” 

X-ray source could reveal new class of supernovae

Large Magellanic Cloud

Type Ia supernovae play an important role in the universe, both as the main source of iron and as one of the principal tools for astronomers to measure cosmic-distance scales. They are also important for astroparticle physics, for example allowing the properties of the neutrino to be probed in an extreme environment.

Type Ia supernovae make ideal cosmic rulers because they all look very similar, with roughly equal luminosity and emission characteristics. Therefore, when a cosmic explosion that matches the properties of a type Ia supernova is detected, its luminosity can be used directly to measure the distance to its host galaxy. Despite this importance, the details surrounding the progenitors of these events are still not fully understood. Furthermore, a group of outliers, now known as type Iax events, has recently been identified, indicating that there might be more than one path towards a type Ia explosion.

Typical type Ia events all have roughly equal luminosity because of their progenitors. The general explanation for these events involves a binary system with at least one white dwarf: a very dense old star, consisting mostly of oxygen and carbon, that is no longer undergoing fusion. The star is prevented from collapsing into a neutron star or black hole only by electron-degeneracy pressure. As the white dwarf accumulates matter from a nearby companion, its mass increases to a precise critical limit at which an uncontrolled thermonuclear explosion starts, resulting in the star becoming unbound and being seen as the supernova.

This peculiar binary system provides strong hints of a new type of progenitor that can explain up to 30% of all type Ia supernova events

Because several X-ray sources identified in the 1990s by the ROSAT mission turned out to be white dwarfs with hydrogen burning on their surface, the matter accumulated by the white dwarf was long thought to be hydrogen from a companion star. The flaw in this model, however, is that type Ia supernovae show no signs of any hydrogen. On the other hand, helium has been seen, particularly in the outlier type Iax supernova events. These Iax events, which are predicted to make up 30% of all type Ia events, can be explained by a white dwarf accumulating helium from a companion star that has already shed all of its hydrogen. If the helium accumulates on the surface in a stable way, without intermediate explosions due to violent ignition of the helium, it eventually reaches a mass at which it ignites violently on the surface. This in turn triggers the ignition of the core and could explain the type Iax events. Evidence of helium-accumulating white dwarfs had, however, not been found.

Now, a group led by researchers from the Max Planck Institute for Extraterrestrial Physics (MPE) has used both optical data and X-ray data from the eROSITA and XMM-Newton missions to find the first clear evidence of such a progenitor system. The group found an object, known as [HP99] 159, located in the Large Magellanic Cloud, which shows all the characteristics of a white dwarf surrounded by an accretion disk of helium. Using historical X-ray data reaching back 50 years, the team also showed that the brightness of the source is relatively stable, indicating that it is accumulating the helium at a steady rate, even though the accumulation rate is lower than theoretically predicted for stable burning. This suggests that the system is working its way towards ignition in the future.

The discovery of this new X-ray source therefore proves the existence of white dwarfs that accumulate helium from a companion star at a steady rate, allowing them to reach the conditions needed to produce a supernova. This peculiar binary system already provides strong hints of a new type of progenitor that can explain up to 30% of all type Ia supernova events. Follow-up measurements will provide further insight into the complex physics at play in the thermonuclear explosions that produce these events, while [HP99] 159’s characteristics can be used to find similar sources.

The Cabibbo angle, 60 years later

Nicola Cabibbo

In a 1961 book, Richard Feynman describes the great satisfaction he and Murray Gell-Mann felt in formulating a theory that explained the close equality of the Fermi constants for muon and neutron-beta decay. These two physicists and, independently, Gershtein and Zeldovich, had discovered the universality of weak interactions. It was a generalisation of the universality of electric charge and strongly suggested the existence of a common origin of the two interactions, an insight that was the basis for unified theories developed later. 

Fermi’s description of neutron beta decay (n → p + e– + ν̄e) involved the product of two vector currents analogous to the electromagnetic current: a nuclear current transforming the neutron into a proton and a leptonic current creating the electron–antineutrino pair. Subsequent studies of nuclear decays and the discovery of parity violation complicated the description, introducing all possible kinds of relativistically invariant interactions that could be responsible for neutron beta decay.

The decay of the muon (μ– → νμ + e– + ν̄e) was also found to involve the product of two vector currents, one transforming the muon into its own neutrino and the other creating the electron–antineutrino pair. What Feynman and Gell-Mann, and Gershtein and Zeldovich, had found is that the nuclear and lepton vector currents have the same strength, despite the fact that the n → p transition is affected by the strong nuclear interaction while the μ → νμ and e → νe transitions are not (we are anticipating here what was discovered only later, namely that the electron and muon each have their own neutrino).

At the end of the 1950s, simplicity finally emerged. As proposed by Sudarshan and Marshak, and by Feynman and Gell-Mann, all known beta decays are described by the products of two currents, each a combination of a vector and an axial vector current. Feynman notes: after 23 years, we are back to Fermi! 

The book of 1961, however, also records Feynman’s dismay after the discovery that the Fermi constants of strange-particle beta decays, for example lambda–hyperon beta decay (Λ → p + e– + ν̄e), were smaller by a factor of four or five than the theoretical prediction. In 1960 Gell-Mann, together with Maurice Lévy, had tried to solve the problem but, while taking a step in the right direction, they concluded that it was not possible to make quantitative predictions for the observed decays of the hyperons. It was up to Nicola Cabibbo, in a short article published in 1963 in Physical Review Letters, to reconcile strange-particle decays with the universality of weak interactions, paving the way to the modern unification of electromagnetic and weak interactions.

Over to Frascati 

Nicola had graduated in Rome in 1958, under his tutor Bruno Touschek. Hired by Giorgio Salvini, he was the first theoretical physicist at the Frascati electron-synchrotron laboratory. There, Nicola met Raoul Gatto, five years his senior, who was returning from Berkeley, and they began an extremely fruitful collaboration.

These were exciting times in Frascati: the first e+e– collider, AdA (Anello di Accumulazione), was being realised, to be followed later by a larger machine, Adone, reaching up to 3 GeV in the centre of mass. New particles related to the newly discovered SU(3) flavour symmetry (e.g. the η meson) were studied at the electron synchrotron. Cabibbo and Gatto authored an important article on e+e– physics and, in 1961, they investigated the weak interactions of hadrons in the framework of the SU(3) symmetry. Gatto and Cabibbo and, at the same time, Coleman and Glashow, observed that the vector currents associated with the SU(3) symmetry by Noether’s theorem include a strangeness-changing current, V(ΔS = 1), that could be associated with strangeness-changing beta decays, in addition to the isospin current, V(ΔS = 0), responsible for strangeness-non-changing beta decays – the same current considered by Feynman and Gell-Mann. For strange-particle decays, the identification implied that the variation of strangeness in the hadronic system has to be equal to the variation of the electric charge (in short: ΔS = ΔQ). The rule is satisfied in Λ beta decay (Λ: S = –1, Q = 0; p: S = 0, Q = +1). However, it conflicted with a single event allegedly observed at Berkeley in 1962 and interpreted as Σ+ → n + μ+ + νμ, which had ΔS = –ΔQ (Σ+: S = –1, Q = +1; n: S = Q = 0). In addition, the problem remained of how to correctly formulate the concept of muon–hadron universality in the presence of four vector currents describing the transitions e → νe, μ → νμ, n → p and Λ → p.

Cabibbo’s angle

In his 1963 paper, written while he was working at CERN, Nicola made a few decisive steps. First, he decided to ignore the evidence of a ΔS = –ΔQ component suggested by Berkeley’s Σ+ → n + μ+ + νμ event. Nicola was a good friend of Paolo Franzini, then at Columbia University, and the fact that Paolo, with larger statistics, had not yet seen any such event provided a crucial hint. Next, to describe both ΔS = 0 and ΔS = 1 weak decays, Nicola formulated a notion of universality between each leptonic vector current (electronic or muonic) and one, and only one, hadronic vector current. He assumed the current to be a combination of the two currents determined by the SU(3) symmetry that he had studied with Gatto in Frascati (also identified by Coleman and Glashow): Vhadron = aV(ΔS = 0) + bV(ΔS = 1), with a and b numerical constants. By construction, V(ΔS = 0) and V(ΔS = 1) have the same strength as the electron or muon currents; for the hadronic current to have the same strength, one requires a² + b² = 1, that is a = cos θ, b = sin θ.

Cabibbo’s 1963 paper

Cabibbo obtained the final expression of the hadronic weak current, adding to these hypotheses the V–A formulation of the weak interactions. The angle θ became a new constant of nature, known since then as the Cabibbo angle. 

An important point is that the Cabibbo theory is based on the currents associated with SU(3) symmetry. For one, this means that it can be applied to the beta decays of all hadrons – mesons and baryons – belonging to the different SU(3) multiplets. This was not the case for the precursory Gell-Mann–Lévy theory, which also assumed one hadronic weak current but was formulated in terms of protons and lambdas, and could not be applied to the other hyperons or to the mesons. In addition, in the limit of exact SU(3) symmetry one can prove a non-renormalisation theorem for the ΔS = 1 vector current, entirely analogous to the one proved by Feynman and Gell-Mann for the ΔS = 0 isospin current. The Cabibbo combination, then, guarantees that the full hadronic weak current has the same strength as the leptonic current for any value of the Cabibbo angle, with the suppression of the beta decays of strange particles naturally explained by a small value of θ. Remarkably, a theorem derived by Ademollo and Gatto, and by Fubini a few years later, states that the non-renormalisation of the vector current’s strength remains valid to first order in SU(3) symmetry breaking.

Photons and quarks

In many instances, Nicola mentioned that a source of inspiration for his assumption for the hadron current was the passage of photons through a polarimeter, a subject he had considered in Frascati in connection with possible experiments of electron scattering through polarised crystals. Linearly polarised photons can be described via two orthogonal states, but what is transmitted is only the linear combination corresponding to the direction determined by the polarimeter. Similarly, there are two orthogonal hadron currents, V (ΔS = 0) and V (ΔS = 1), but only the Cabibbo combination couples to the weak interactions. 

An interpretation closer to particle physics came with the discovery of quarks. In quark language, V(ΔS = 0) induces the transition d → u and V(ΔS = 1) the transition s → u. The Cabibbo combination then corresponds to dC = cos θ d + sin θ s → u. Stated differently, the u quark is coupled by the weak interaction to only one specific superposition of d and s quarks: the Cabibbo combination dC. This is Cabibbo mixing, reflecting the fact that in SU(3) there are two quarks with the same charge –1/3.
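
In modern two-flavour notation (a rephrasing, not taken from the original paper), Cabibbo mixing is simply a rotation among the charge –1/3 quarks; the orthogonal combination sC is the one later coupled to the charm quark by the GIM mechanism mentioned below:

\[
\begin{pmatrix} d_C \\ s_C \end{pmatrix}
=
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix}.
\]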

Testing quark mixing

A first comparison between theory and meson and hyperon beta-decay data was done by Cabibbo in his original paper, in the exact SU(3) limit. Specifically, the value of θ was obtained by comparing K+ and π+ semileptonic decays. In baryon semileptonic decays, the matrix elements of vector currents are determined by the SU(3) symmetry, while axial currents depend upon two parameters, the so-called F and D couplings. Many fits have been performed in successive years, which saw a dramatic increase in the decay modes observed, in statistics, and in precision. 

Four decades after the 1963 paper, Cabibbo, with Earl Swallow and Roland Winston, performed a complete analysis of hyperon decays in the Cabibbo theory, then embedded in the three-generation Kobayashi and Maskawa theory, taking into account the momentum dependence of vector currents. In their words (and in modern notation):
“… we obtain Vus = 0.2250(27) (= sin θ). This value is of similar precision, but higher than the one derived from Kl3, and in better agreement with the unitarity requirement, |Vud|² + |Vus|² + |Vub|² = 1. We find that the Cabibbo model gives an excellent fit of the existing form-factor data on baryon beta decays (χ² = 2.96 for three degrees of freedom) with F + D = 1.2670 ± 0.0030, F – D = –0.341 ± 0.016, and no indication of flavour SU(3) breaking effects.”
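
From the quoted sin θ the corresponding cosines follow directly (simple arithmetic, not in the article):

\[ \cos\theta = \sqrt{1 - 0.2250^2} \approx 0.974\,, \qquad \cos^2\theta \approx 0.949\,, \]

the few-per-cent suppression of nuclear beta decay relative to muon decay discussed in the next paragraph.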

The Cabibbo theory predicts a reduction of the nuclear Fermi constant with respect to the muonic one by a factor cos θ ≈ 0.97. The discrepancy had been noticed by Feynman and S Berman, one of Feynman’s students, who estimated the possible effect of electromagnetic radiative corrections. The situation is much clearer today, with precise data coming from super-allowed Fermi nuclear transitions and radiative corrections under control.

Closing up

From its very publication, the Cabibbo theory was seen as a crucial development. It indicated the correct way to embody lepton-hadron universality and it enjoyed a heartening phenomenological success, which in turn indicated that we could be on the right track for a fundamental theory of weak interactions. 

The idea of quark mixing had profound consequences. It prompted the solution of the spectacular suppression of strangeness-changing neutral processes by the GIM mechanism (Glashow, Iliopoulos and Maiani), in which the charm quark couples to the combination of down and strange quarks orthogonal to the Cabibbo combination. Building on Cabibbo mixing and GIM, it has been possible to extend to hadrons the unified SU(2)L × U(1) theory formulated, for leptons, by Glashow, and by Weinberg and Salam.

There are very few articles in the scientific literature in which one does not feel the need to change a single word, and Cabibbo’s is definitely one of them

CP symmetry violations observed experimentally had no place in the two-generation scheme (four quarks, four leptons) but found an elegant description by Makoto Kobayashi and Toshihide Maskawa in the extension to three generations. Quark mixing introduced by Cabibbo is now described by a three-by-three unitary matrix known in the literature as the Cabibbo–Kobayashi–Maskawa (CKM) matrix. In the past 50 years the CKM scheme has been confirmed with ever increasing accuracy by a plethora of measurements and impressive theoretical predictions (see “Testing quark mixing” figure). Major achievements have been obtained in the studies of charm- and beauty-particle decays and mixing. The CKM paradigm remains a great success in predicting weak processes and in our understanding of the sources of CP violation in our universe. 

Nicola Cabibbo passed away in 2010. The authoritative book by Abraham Pais, in its chronology, cites the Cabibbo theory among the most important developments in post-war particle physics. In the History of CERN, Jean Iliopoulos writes: “There are very few articles in the scientific literature in which one does not feel the need to change a single word and Cabibbo’s is definitely one of them. With this work, he established himself as one of the leading theorists in the domain of weak interactions.”

First collider neutrinos detected

Electron neutrino charged-current interaction

Since their discovery 67 years ago, neutrinos from a range of sources – solar, atmospheric, reactor, geological, accelerator and astrophysical – have provided ever more powerful probes of nature. Although neutrinos are also produced abundantly in colliders, until now no neutrinos produced in such a way had been detected, their presence inferred instead via missing energy and momentum. 

A new LHC experiment called FASER, which entered operation at the start of Run 3 last year, has changed this picture with the first observation of collider neutrinos. Announcing the result on 19 March at the Rencontres de Moriond, and in a paper submitted to Physical Review Letters on 24 March, the FASER collaboration reported 153 candidate muon-neutrino and antineutrino interactions reconstructed in its spectrometer, with a significance of 16 standard deviations above the background-only hypothesis. The events are consistent with the characteristics expected from neutrino interactions in terms of secondary-particle production and spatial distribution, and imply the observation of both neutrinos and antineutrinos with incident neutrino energies significantly above 200 GeV. In addition, an ongoing analysis of data from an emulsion/tungsten subdetector called FASERν revealed a first electron-neutrino interaction candidate (see image).

“FASER has directly observed the interactions of neutrinos produced at a collider for the first time,” explains co-spokesperson Jamie Boyd of CERN. “This result shows the detector worked perfectly in 2022 and opens the door for many important future studies with high-energy neutrinos at the LHC.” 

The extreme luminosity of proton–proton collisions at the LHC produces a large neutrino flux in the forward direction, with energies leading to cross-sections high enough for neutrinos to be detected using a compact apparatus. FASER is one of two new forward experiments situated on either side of LHC Point 1 to detect neutrinos produced in proton–proton collisions in ATLAS. The other, SND@LHC, also reported its first results at Moriond. The team found eight muon-neutrino candidate events against an expected background of 0.2, with an evaluation of systematic uncertainties ongoing.

Covering energies between a few hundred GeV and several TeV, FASER and SND@LHC narrow the gap between fixed-target and astrophysical neutrinos. One of the unexplored physics topics to which they will contribute is the study of high-energy neutrinos from astrophysical sources. Since the production mechanism and energy of neutrinos at the LHC are similar to those of very-high-energy neutrinos from cosmic-ray collisions with the atmosphere, FASER and SND@LHC can be used to precisely estimate this background. Another application is to measure and compare the production rates of all three types of neutrinos, providing an important test of the Standard Model.

Beyond neutrinos, the two experiments open new searches for feebly interacting particles and other new physics. In a separate analysis, FASER presented first results from a search for dark photons decaying to an electron–positron pair. No events were seen in an almost background-free analysis, yielding new constraints on dark photons with couplings of 10⁻⁵ to 10⁻⁴ and masses between 10 and 100 MeV, in a region of parameter space motivated by dark matter.

New insights into CP violation via penguin decays

LHCb figure 1

At the recent Moriond Electroweak conference, the LHCb collaboration presented a new, high-precision measurement of charge–parity (CP) violation using a large sample of B0s → ϕϕ decays, where the ϕ mesons are reconstructed in the K+K– final state. Proceeding via a loop transition (b → ss̄s), such “penguin” decays are highly sensitive to possible contributions from unknown particles and therefore provide excellent probes for new sources of CP violation. To date, the only known source of CP violation, which is governed by the Cabibbo–Kobayashi–Maskawa matrix in the quark sector, is insufficient to account for the huge excess of matter over antimatter in the universe; extra sources of CP violation are required.

A B0s or B̄0s meson can change its flavour, oscillating into its antiparticle at a frequency Δms/2π that has been precisely determined by the LHCb experiment. Thus a B0s meson can decay either directly to the ϕϕ state or after changing its flavour to the B̄0s state. The phase difference between the two interfering amplitudes changes sign under CP transformations: it is denoted ϕs for B0s and –ϕs for B̄0s decays. A time-dependent CP asymmetry can arise if the phase difference ϕs is nonzero. The asymmetry between the decay rates of initial B0s and B̄0s mesons to the ϕϕ state as a function of the decay time follows a sine wave with amplitude sin(ϕs) and frequency Δms/2π. In the Standard Model (SM), the phase difference is predicted to be consistent with zero: ϕs(SM) = 0.00 ± 0.02 rad.
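
In formula form (our notation, up to sign conventions; the article states this in words), and neglecting the dilution effects described below, the asymmetry is

\[
A_{CP}(t) \;=\; \frac{\Gamma_{\bar{B}^0_s\to\phi\phi}(t) \,-\, \Gamma_{B^0_s\to\phi\phi}(t)}{\Gamma_{\bar{B}^0_s\to\phi\phi}(t) \,+\, \Gamma_{B^0_s\to\phi\phi}(t)} \;\approx\; \sin\phi_s\,\sin(\Delta m_s t)\,,
\]

a sine wave in decay time with amplitude sin ϕs and frequency Δms/2π.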

This is the most precise single measurement to date

The observed asymmetry as a function of the B0s → ϕϕ decay time and the projection of the best fit are shown in figure 1 for the Run 2 data sample. The measured asymmetry is diluted by the finite decay-time resolution and the nonzero mis-identification rate of the initial flavour of the B0s or B̄0s state, and it is averaged over the two types of linear polarisation states of the ϕϕ system, which have CP asymmetries with opposite signs. Taking these effects into account, LHCb measured the CP-violating phase using the full Run 2 data sample. The result, when combined with the Run 1 measurement, is ϕs = –0.074 ± 0.069 rad, which agrees with the SM prediction and improves significantly upon the previous LHCb measurement. In addition to the increased data sample size, the new analysis benefits from improvements in the algorithms for vertex reconstruction and for determining the initial flavour of the B0s or B̄0s mesons.

This is the most precise single measurement to date of time-dependent CP asymmetry in any b → s transition. With no evidence for CP violation, the result can be used to derive stringent constraints on the parameter space of physics beyond the SM. Looking to the future, the upgraded LHCb experiment and a planned future phase II upgrade will offer unique opportunities to further explore new-physics effects in b → s decays, which could potentially provide insights into the fundamental origin of the puzzling matter–antimatter asymmetry.

Beauty quark production versus particle multiplicity

ALICE figure 1

Measurements of the production of hadrons containing heavy quarks (i.e. charm or beauty) in proton–proton (pp) collisions provide an important test of the accuracy of perturbative quantum chromodynamics (pQCD) calculations. The production of heavy quarks occurs in initial hard scatterings of quarks and gluons, whereas the production of light quarks in the underlying event is dominated by soft processes. Thus, measuring heavy-quark hadron production as a function of the charged-particle multiplicity provides insights into the interplay between soft and hard mechanisms of particle production.

Measurements in high-multiplicity pp collisions have shown features that resemble those associated with the formation of quark–gluon plasma in heavy-ion collisions, such as the enhancement of the production of particles with strangeness content and the modification of the baryon-to-meson production ratio as a function of transverse momentum (pT). These effects can be explained by two different types of models: statistical hadronisation models, which evaluate the population of hadron states according to statistical weights governed by the masses of the hadrons and a universal temperature, or models that include hadronisation via coalescence (or recombination) of quarks and gluons which are close in phase space. Both predict an enhancement of the baryon-to-meson and strange-to-non-strange hadron ratios as a function of charged-particle multiplicity.

In the charm sector, the ALICE collaboration has recently observed a multiplicity dependence of the pT-differential Λc+/D0 ratio, evolving smoothly from pp to lead–lead collisions, while no such dependence was observed for the Ds+-meson production yield relative to that of the D0 meson. Measurements of these phenomena in the beauty sector are needed to shed further light on the hadronisation mechanism.

To investigate beauty-quark production as a function of multiplicity, and to relate it to that of charm quarks, ALICE measured for the first time the fraction of D0 and D+ mesons originating from beauty-hadron decays (denoted non-prompt) as a function of transverse momentum and charged-particle multiplicity in pp collisions at 13 TeV, using the Run 2 dataset. The measurement exploits the different decay-vertex topologies of prompt and non-prompt D mesons with machine-learning classification techniques. The fraction of non-prompt D mesons was observed to increase moderately with pT, from about 5% to 10%, as expected from pQCD calculations (figure 1). Similar fractions were measured in different charged-particle multiplicity intervals, suggesting either no or only a mild multiplicity dependence. This points to a similar production mechanism of charm and beauty quarks as a function of multiplicity.
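
For illustration only – a minimal sketch of the kind of classification named above, not the ALICE analysis; the topological features and toy data are hypothetical:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 10_000
# Toy topological features per D-meson candidate: [impact parameter (cm),
# decay-length significance]. Non-prompt D mesons, coming from displaced
# beauty-hadron decays, tend to populate larger values of both.
prompt = np.column_stack([rng.exponential(0.002, n), rng.normal(2.0, 1.0, n)])
non_prompt = np.column_stack([rng.exponential(0.008, n), rng.normal(5.0, 2.0, n)])
X = np.vstack([prompt, non_prompt])
y = np.repeat([0, 1], n)  # 0 = prompt, 1 = non-prompt

# Train a boosted-decision-tree classifier and check its separation power
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
bdt = GradientBoostingClassifier().fit(X_train, y_train)
print(f"toy prompt/non-prompt separation accuracy: {bdt.score(X_test, y_test):.2f}")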

The possible influence of the hadronisation mechanism was investigated by comparing the measured non-prompt D-meson fractions with predictions from Monte Carlo generators such as PYTHIA 8. Good agreement was observed with different PYTHIA tunes, with and without the inclusion of the colour-reconnection mechanism beyond the leading-colour approximation (CR-BLC), which was introduced to describe the production of charm baryons in pp collisions. Only the CR-BLC “Mode 3” tune, which predicts an increase (decrease) of baryon hadronisation for beauty (charm) quarks at high multiplicity, is disfavoured by the current data.

The measurements of non-prompt D0 and D+ mesons represent an important test of production and hadronisation models in the charm and beauty sectors, and pave the way for future measurements of exclusive reconstructed beauty hadrons in pp collisions as a function of charged-particle multiplicity.

Searching for electroweak SUSY: a combined effort

CMS figure 1.

The CMS collaboration has been relentlessly searching for physics beyond the Standard Model (SM) since the start of the LHC. One of the most appealing new theories is supersymmetry or SUSY – a novel fermion-boson symmetry that gives rise to new particles, “naturally” leads to a Higgs boson almost as light as the W and Z bosons, and provides candidate particles for dark matter (DM).

By the end of LHC Run 2, in 2018, CMS had accumulated a high-quality data sample of proton–proton (pp) collisions at an energy of 13 TeV, corresponding to an integrated luminosity of 137 fb⁻¹. With such a large data set, it was possible to search for the production of strongly interacting SUSY particles, i.e. the partners of gluons (gluinos) and quarks (squarks), as well as for SUSY partners of the W and Z bosons (electroweakinos: winos and binos), of the Higgs boson (higgsinos), and of the leptons (sleptons). The cross sections for the direct production of SUSY electroweak particles are several orders of magnitude lower than those for gluino and squark pair production. However, if the partners of gluons and quarks are heavier than a few TeV, it could be that the SUSY electroweak sector is the only one accessible at the LHC. In the minimal SUSY extension of the SM, electroweakinos and higgsinos mix to form six mass eigenstates: two charged (charginos) and four neutral (neutralinos). The lightest neutralino is often considered to be the lightest SUSY particle (LSP) and a DM candidate.

CMS has recently reported results, based on the full Run 2 dataset, from searches for the electroweak production of sleptons, charginos and neutralinos. Decays of these particles to the LSP are expected to produce leptons, or Z, W and Higgs bosons. The Z and W bosons subsequently decay to leptons or quarks, while the Higgs boson primarily decays to b quarks. All final states have been explored with complementary channels to enhance the sensitivity to a wide range of electroweak SUSY mass hypotheses. These cover very compressed mass spectra, where the mass difference between the LSP and its parent particles is small (leading to low-momentum particles in the final state) as well as uncompressed scenarios that would instead produce highly boosted Z, W and Higgs bosons. None of the searches showed event counts that significantly deviate from the SM predictions.

CMS maximised the output of the Run 2 dataset, providing its legacy reference on electroweak SUSY searches

The next step was to statistically combine the results of mutually exclusive search channels to set the strongest possible constraints with the Run 2 dataset and interpret the results of searches in different final states under unique SUSY-model hypotheses. For the first time, fully leptonic, semi-leptonic and fully hadronic final states from six different CMS searches were combined to explore models that differ depending on whether the next-to-lightest supersymmetric partner (NLSP) is “wino-like” or “higgsino-like”, as shown in the left and right panels of figure 1, respectively. The former are now excluded up to NLSP masses of 875 GeV, extending the constraints obtained from individual searches by up to 100 GeV, while the latter are excluded up to NLSP masses of 810 GeV.

With this effort, CMS maximised the output of the Run 2 dataset, providing its legacy reference on electroweak SUSY searches. While the same data are still being used to search for new physics in as-yet-unexplored corners of the accessible phase space, CMS plans to extend its reach in the coming years, profiting from the data set being collected during LHC Run 3 at an unprecedented centre-of-mass energy of 13.6 TeV.
