LHCb looks forward to the 2030s

LHCb Upgrade II detector

The LHCb collaboration is never idle. Even while it was building and commissioning its brand-new Upgrade I detector, which entered operation last year with the start of LHC Run 3, planning for Upgrade II was already under way. This proposed new detector, envisioned to be installed during Long Shutdown 4 in time for High-Luminosity LHC (HL-LHC) operations continuing in Run 5, scheduled to begin in 2034/2035, would operate at a peak luminosity of 1.5 × 10³⁴ cm⁻² s⁻¹. This is 7.5 times higher than in Run 3 and would generate data samples of heavy-flavoured hadron decays six times larger than those obtainable with the current detector, allowing the collaboration to explore a wide range of flavour-physics observables with extreme precision. Unprecedented tests of the CP-violation paradigm (see “On point” figure) and searches for new physics at double the mass scales possible during Run 3 are among the physics goals on offer.

Attaining the same excellent performance as the original detector was a pivotal constraint in the design of LHCb Upgrade I. While achieving the same in the much harsher collision environment of the HL-LHC remains the guiding principle for Upgrade II, the LHCb collaboration is investigating possibilities to go even further. These challenges must be met while keeping the existing footprint and arrangement of the detector (see “Looking forward” figure). Radiation-hard and fast 3D silicon pixels, a new generation of extremely fast and efficient photodetectors, and front-end electronics chips based on 28 nm semiconductor technology are just a few of the innovations foreseen for LHCb Upgrade II; they will also set the direction of R&D for future experiments.

LHCb constraints

Rethinking the data acquisition, trigger and data processing, along with intense use of hardware accelerators such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), will be fundamental to managing an average data rate expected to be five times higher than in Upgrade I. The Upgrade II “framework technical design report”, completed in 2022, is also the first to consider the experiment’s energy consumption and greenhouse-gas emissions, as part of a close collaboration with CERN to define an effective environmental protection strategy.

Extreme tracking 

At the maximum expected luminosity of the HL-LHC, around 2000 charged particles will be produced per bunch crossing within the LHCb apparatus. Efficiently reconstructing these particles and their associated decay vertices in real time represents a significant challenge. It requires the existing detector components to be modified to increase the granularity, reduce the amount of material and benefit from the use of precision timing.

The future VELO will be a true 4D-tracking detector

The new Vertex Locator (VELO) will be based, as it was for Upgrade I (CERN Courier May/June 2022 p38), on high-granularity pixels operated in vacuum in close proximity to the LHC beams. For Upgrade II, the trigger and online reconstruction will rely on the selection of events, or parts of events, with displaced tracks at an early stage of the data processing. The VELO must therefore be capable of independently reconstructing primary vertices and identifying displaced tracks, while coping with a dramatic increase in event rate and radiation dose. Excellent spatial resolution will not be sufficient, given the large density of primary interactions along the beam axis expected under HL-LHC conditions. A new coordinate – time – must be introduced. The future VELO will be a true 4D-tracking detector that includes timing information with a precision of better than 50 ps per hit, leading to a track time-stamp resolution of about 20 ps (see “Precision timing” figure).
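
The quoted numbers are consistent with simple averaging of uncorrelated per-hit time measurements, for which the track time-stamp improves as 1/√N with the number of hits N. A minimal sketch of this scaling (the choice of six hits per track is an illustrative assumption, not a design figure):

```python
import math

def track_time_resolution(sigma_hit_ps: float, n_hits: int) -> float:
    """Track time-stamp resolution from n uncorrelated hit measurements:
    sigma_track = sigma_hit / sqrt(n)."""
    return sigma_hit_ps / math.sqrt(n_hits)

# With ~50 ps per hit, about six hits on a track already give ~20 ps
print(round(track_time_resolution(50, 6), 1))  # → 20.4
```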

Precision timing

The new VELO sensors, which include 28 nm technology application-specific integrated circuits (ASICs), will need to achieve this time resolution while being radiation-hard. The important goal of a 10 ps time resolution has recently been achieved with irradiated prototype 3D-trench silicon sensors. Depending on the rate-capability of the new detectors, the pitch may have to be reduced and the material budget significantly decreased to reach a spatial resolution comparable to that of the current Run 3 detector. The VELO mechanics have to be redesigned, in particular to reduce the material of the radio-frequency foil that separates the secondary vacuum – where the sensors are located – from the machine vacuum. The detector must be built with micron-level precision to control systematic uncertainties.

The tracking system will take advantage of a detector located upstream of the dipole magnet, the Upstream Tracker (UT), and of a detector made of three tracking stations, the Mighty Tracker (MT), located downstream of the magnet. In conjunction with the VELO, the tracking system ensures the ability to reconstruct the trajectory of charged particles bending through the detector due to the magnetic field, and provides a high-precision momentum measurement for each particle. The track direction is a necessary input to the photon-ring searches in Ring Imaging Cherenkov (RICH) detectors, which identify the particle species. Efficient real-time charged-particle reconstruction in a very high particle-density environment requires not only good detector efficiency and granularity, but also the ability to quickly reject combinations of hits not produced by the same particle. 

LHCb-dedicated high-voltage CMOS sensor

The UT and the inner region of the MT will be instrumented with high-granularity silicon pixels. The emerging radiation-hard monolithic active pixel sensor (MAPS) technology is a strong candidate for these detectors. LHCb Upgrade II would represent the first large-scale implementation of MAPS in a high-radiation environment, with the first prototypes currently being tested (see “Mighty pixels” figure). The outer region of the MT will be covered by scintillating fibres, as in Run 3, with significant developments foreseen to cope with the radiation damage. The availability of high-precision vertical-coordinate hit information in the tracking, provided for the first time in LHCb by pixels in the high-occupancy regions of the tracker, will be crucial to reject combinations of track segments or hits not produced by the same particle. To substantially extend the coverage of the tracking system to lower momenta, with consequent gains for physics measurements, the internal surfaces of the magnet side walls will be instrumented with scintillating bar detectors, the so-called magnet stations (MS). 

Extreme particle identification 

A key factor in the success of the LHCb experiment has been its excellent particle identification (PID) capabilities. PID is crucial to distinguish different decays with final-state topologies that are backgrounds to each other, and to tag the flavour of beauty mesons at production, which is a vital ingredient to many mixing and CP-violation measurements. For particle momenta from a few GeV/c up to 100 GeV/c, efficient hadron identification at LHCb is provided by two RICH detectors. Cherenkov light emitted by particles traversing the gaseous radiators of the RICHes is projected by mirrors onto a plane of photodetectors. To maintain Upgrade I performance, the maximum occupancy over the photodetector plane must be kept below 30%, the single-photon Cherenkov-angle resolution must be below 0.5 mrad, and the time resolution on single-photon hits should be well below 100 ps (see “RICH rewards” figure).

Photon hits on the RICH photodetector plane

Next-generation silicon photomultipliers (SiPMs) with improved timing and a pixel size of 1 × 1 mm², together with re-optimised optics, are deemed capable of delivering these specifications. The high “dark” rates of SiPMs, especially after elevated radiation doses, would be controlled with cryogenic cooling and neutron shielding. Vacuum tubes based on micro-channel plates (MCPs) are a potential alternative due to their excellent time resolution (30 ps) for single-photon hits and lower dark rate, but suffer in high-rate environments. New eco-friendly gaseous radiators with a lower refractive index can improve the PID performance at higher momenta (above 80 GeV/c), and meta-materials such as photonic crystals are also being studied. In the momentum region below 10 GeV/c, PID will profit from TORCH – an innovative 30 m² time-of-flight detector consisting of quartz plates where charged particles produce Cherenkov light. The light propagates by internal reflection to arrays of high-granularity MCP–PMTs optimised to operate at high rates, with a prototype already showing performance close to the target of 70 ps per photon.

Excellent photon and π0 reconstruction and e–π separation are provided by LHCb’s electromagnetic calorimeter (ECAL). But the harsh occupancy conditions of the HL-LHC impose the development of 5D calorimetry, which complements precise position and energy measurements of electromagnetic clusters with a time resolution of about 20 ps. The most crowded inner regions will be equipped with so-called spaghetti calorimeter (SPACAL) technology, which consists of arrays of scintillating fibres, made either of plastic or of garnet crystals, arranged along the beam direction and embedded in a lead or tungsten matrix. The less-crowded outer regions of the calorimeter will continue to be instrumented with the current “Shashlik” technology with refurbished modules and increased granularity. A timing layer, based either on MCPs or on alternating tungsten and silicon-sensor layers placed within the front and back ECAL sections, is also a possibility to achieve the ultimate time resolution. Several SPACAL prototypes have already demonstrated that time resolutions down to an impressive 15 ps are feasible (see “Spaghetti calorimetry” image).

A SPACAL prototype being prepared for beam tests

The final main LHCb subdetector is the muon system, based on four stations of multiwire proportional chambers (MWPCs) interleaved with iron absorbers. For Upgrade II, it is proposed that MWPCs in the inner regions, where the rate will be as high as a few MHz/cm², are replaced with new-generation micro-pattern gaseous detectors, the micro-RWELL, a prototype of which has proved able to reach a detection efficiency of approximately 97% and a rate-capability of around 10 MHz/cm². The outer regions, characterised by lower rates, will be instrumented either by reusing a large fraction (95%) of the current MWPCs or by implementing other solutions based on resistive plate chambers or scintillating-tile-based detectors. As with all Upgrade II subdetectors, dedicated ASICs in the front-end electronics, which integrate fast time-to-digital converters or high-frequency waveform samplers, will be necessary to measure time with the required precision.

Trigger and computing 

The detectors for LHCb Upgrade II will produce data at a rate of up to 200 Tbit/s (see “On the up” figure), which for practical reasons needs to be reduced by four orders of magnitude before being written to permanent storage. The data acquisition therefore needs to be reliable, scalable and cost-efficient. It will consist of a single type of custom-made readout board combined with readily available data-centre hardware. The readout boards collect the data from the various sub-detectors using the radiation-hard, low-power gigabit-transceiver links developed at CERN and transfer the data to a farm of readout servers via next-generation “PCI Express” connections or Ethernet. For every collision, the information from the subdetectors is merged by passing through a local area network to the builder server farm.
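
A back-of-the-envelope check of the quoted reduction shows what actually reaches permanent storage:

```python
# Numbers taken from the text: 200 Tbit/s raw, reduced by four orders of magnitude
detector_rate_tbit_s = 200
reduction_factor = 10**4
storage_rate_gbit_s = detector_rate_tbit_s * 1000 / reduction_factor
print(storage_rate_gbit_s)  # → 20.0 (Gbit/s written to permanent storage)
```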

With up to 40 proton–proton interactions, every bunch crossing at the HL-LHC will contain multiple heavy-flavour hadrons within the LHCb acceptance. For efficient event selection, hits not associated with the proton–proton collision of interest need to be discarded as early as possible in the data-processing chain. The real-time analysis system performs reconstruction and data reduction in two high-level-trigger (HLT) stages. HLT1 performs track reconstruction and partial PID to apply inclusive selections, after which the data is stored in a large disk buffer while alignment and calibration tasks run in semi real-time. The final data reduction occurs at the HLT2 level, with exclusive selections based on full offline-quality event reconstruction. Since Upgrade I, all HLT1 algorithms have run on a farm of GPUs, enabling, for the first time at the LHC, track reconstruction at a rate of 30 MHz. The HLT2 sequence, on the other hand, is run on a farm of CPU servers – a model that would be prohibitively costly for Upgrade II. Given the current evolution of processor performance, the baseline approach for Upgrade II is to perform the reconstruction algorithms of both HLT1 and HLT2 on GPUs. A strong R&D activity is also foreseen to explore alternative co-processors such as FPGAs and new emerging architectures.
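
The two-stage selection logic can be caricatured in a few lines of code; the event fields, thresholds and mass window below are purely illustrative stand-ins for the real reconstruction:

```python
# Toy sketch of a two-stage software trigger: HLT1 keeps events with displaced
# tracks (inclusive), HLT2 keeps candidates in an exclusive selection window.
def hlt1_inclusive(event):
    return event["displaced_tracks"] > 0

def hlt2_exclusive(event):
    return 5.2 < event["candidate_mass_gev"] < 5.4   # e.g. a B-meson window

events = [
    {"displaced_tracks": 2, "candidate_mass_gev": 5.28},
    {"displaced_tracks": 0, "candidate_mass_gev": 5.30},  # rejected by HLT1
    {"displaced_tracks": 1, "candidate_mass_gev": 3.10},  # rejected by HLT2
]

disk_buffer = [e for e in events if hlt1_inclusive(e)]  # awaits calibration
stored = [e for e in disk_buffer if hlt2_exclusive(e)]  # written to storage
print(len(events), len(disk_buffer), len(stored))  # → 3 2 1
```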

Real-time versus the start date of various high-energy physics experiments

The second computing challenge for LHCb Upgrade II derives from detector simulations. A naive extrapolation from the computing needs of the current detector implies that 2.5 million cores will be needed for simulation in Run 5, which is one order of magnitude above what is available with a flat budget assuming a 10% performance increase of processors per year. All experiments in high-energy physics face this challenge, motivating a vigorous R&D programme across the community to improve the processing time of simulation tools such as GEANT4, both by exploiting co-processors and by parametrising the detector response with machine-learning algorithms.
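
The flat-budget arithmetic is easy to reproduce; the 12-year horizon to Run 5 used below is an assumption for illustration:

```python
# Flat budget: capacity only grows through processor improvements (~10%/year)
growth_per_year = 1.10
years_to_run5 = 12                 # assumed horizon; illustrative only
capacity_gain = growth_per_year ** years_to_run5
print(round(capacity_gain, 1))     # → 3.1
# Even a ~3x capacity gain leaves the naive 2.5-million-core estimate for
# Run 5 simulation roughly an order of magnitude out of reach.
```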

Intimately linked with digital technologies today are energy consumption and efficiency. Already in Run 3, the GPU-based HLT1 is up to 30% more energy-efficient than the originally planned CPU-based version. The data centre is designed for the highest energy-efficiency, resulting in a power usage that compares favourably with other large computing centres. Also for Upgrade II, special focus will be placed on designing efficient code and fully exploiting efficient technologies, as well as designing a compact data acquisition system and optimally using the data centre.

A flavour of the future 

The LHC is a remarkable machine that has already made a paradigm-shifting discovery with the observation of the Higgs boson. Exploration of the flavour-physics domain, which is a complementary but equally powerful way to search for new particles in high-energy collisions, is essential to pursue the next major milestone. The proposed LHCb Upgrade II detector will be able to accomplish this by exploring energy scales well beyond those reachable by direct searches. The proposal has received strong support from the 2020 update of the European strategy for particle physics, and the framework technical design report was positively reviewed by the LHC experiments committee. The challenges of performing precision flavour physics in the very harsh conditions of the HL-LHC are daunting, triggering a vast R&D programme at the forefront of technology. The goal of the LHCb teams is to begin construction of all detector components in the next few years, ready to install the new detector at the time of Long Shutdown 4.

ALICE 3: a heavy-ion detector for the 2030s

ALICE 3

The ALICE experiment at the LHC was conceived to study the properties of the quark–gluon plasma (QGP), the state of matter prevailing a few microseconds after the Big Bang. Collisions between large nuclei in the LHC produce matter at temperatures of about 3 × 10¹² K, sufficiently high to liberate quarks and gluons, and thus to study the deconfined QGP state in the laboratory. The heavy-ion programme at LHC Runs 1 and 2 has already enabled the ALICE collaboration to study the formation of the QGP, its collective expansion and its properties, using for example the interactions of heavy quarks and high-energy partons with the QGP. ALICE 3 builds on these discoveries to reach the next level of understanding.

One of the most striking discoveries at the LHC is that J/ψ mesons not only “melt” in the QGP but can also be regenerated from charm quarks produced in independent hard scatterings. The LHC programme has also shown that the energy loss of partons propagating through the plasma depends on their mass. Furthermore, collective behaviour and enhanced strange-baryon production have been observed in selected proton–proton collisions in which large numbers of particles are produced, signalling that high densities may be reached in such collisions. 

During Long Shutdown 2, a major upgrade of the ALICE detector (ALICE 2) was completed on budget and in time for the start of Run 3 in 2022. Together with improvements to the LHC itself, this will give the experiment a factor-50 higher Pb–Pb collision rate as well as better pointing resolution. This will bring qualitative improvements for the entire physics programme, in particular for the detection of heavy-flavour hadrons and thermal di-electron radiation. However, several important questions – for example concerning the mechanisms leading to thermal equilibrium and the formation of hadrons in the QGP – will remain open even after Runs 3 and 4. To address these, the collaboration is pursuing next-generation technologies to build a new detector with a significantly larger rapidity coverage and excellent pointing resolution and particle identification (see “Brand new” figure). A letter of intent for ALICE 3, to be installed in 2033/2034 (Long Shutdown 4) and operated during Runs 5 and 6 (starting in 2035), was submitted to the LHC experiments committee in 2021 and led to a positive evaluation by the extended review panel in March 2022.

Behind the curtain of hadronisation

In heavy-ion collisions at the LHC, a large amount of energy is deposited in a small volume, forming a QGP. The plasma immediately starts expanding and cooling down, eventually reaching a temperature at which hadrons are formed. Although hadrons formed at the boundary of this phase transition carry information about the expansion of the plasma, they do not inform us directly about the temperature and other properties of the hot plasma phase of the collision before hadronisation takes place. Photons and di-lepton pairs, which are produced as thermal radiation in electromagnetic processes and do not participate in the strong interaction, allow us to look behind the curtain of hadronisation. However, measurements of photon and dilepton emission are challenging due to the large background from electromagnetic decays of light hadrons and weak decays of heavy-flavour hadrons. 

Distribution of electron–positron pairs in Pb–Pb collisions at the LHC

One of the goals of the current ALICE 2 upgrades is to enable the first measurements of the thermal emission of electron–positron pairs (from virtual photons), and thus to determine the average temperature of the system before the formation of hadrons, during Runs 3 and 4. To further understand the evolution of temperature with time, larger data samples and excellent background rejection are needed. The early-stage temperature is determined from the exponential slope of the mass distribution above the ρ resonance, i.e. pair masses larger than 1.2 GeV/c² (see “Taking the temperature” figure, upper panel). ALICE 3 would be able to explore the time dependence of the temperature before hadronisation using more differential measurements, e.g. of the azimuthal asymmetry of di-electron emission and of the slope of the mass spectrum as a function of transverse momentum.
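
The extraction of a temperature from the exponential slope can be illustrated with a toy calculation: for a thermal spectrum dN/dM ∝ exp(−M/T), the slope of the logarithm of the yield versus mass is −1/T. The spectrum and the assumed temperature below are purely illustrative, not measured values:

```python
import math

# Toy thermal di-electron spectrum above the rho: dN/dM ∝ exp(-M/T)
T_TRUE = 0.3                                     # illustrative temperature, GeV
masses = [1.2 + 0.1 * i for i in range(10)]      # pair mass in GeV/c^2
yields = [math.exp(-m / T_TRUE) for m in masses]

# The slope of log(yield) versus mass is -1/T, so the temperature follows
slope = (math.log(yields[-1]) - math.log(yields[0])) / (masses[-1] - masses[0])
print(round(-1 / slope, 2))  # → 0.3
```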

The di-electron mass spectrum also carries unique information about the mechanism of chiral symmetry breaking – a fundamental quantum-chromodynamics (QCD) effect that generates most of the hadron mass. At the phase transition to the QGP, chiral symmetry is restored and quarks and gluons are deconfined. One of the predicted signals of this transition is mixing between the ρ and a1 vector-meson states, which gives the di-electron invariant mass spectrum a characteristic exponential shape in the mass range above the ρ meson peak (0.8–1.1 GeV/c²). Only the excellent electron identification and rejection of electrons from heavy-flavour decays possible with ALICE 3 can give physicists experimental access to this effect (see “Taking the temperature” figure, lower panel).

Multi-charm production

Another important goal of the ALICE physics programme is to understand how energetic quarks and gluons interact with the QGP and eventually thermalise and form a plasma that behaves as a fluid with very low internal friction. The thermalisation process and the properties of the QGP are governed by low-momentum interactions between quarks and gluons, which cannot be calculated using perturbative techniques. Experimental input is therefore important to understand these phenomena and to link them to fundamental QCD.

Heavy quarks  

The heavy charm and beauty quarks are of particular interest because their interactions with the plasma can be calculated using lattice-QCD techniques with good theoretical control. Heavy quarks and antiquarks are mostly produced as back-to-back pairs in hard scatterings in the early phase of the collision. Subsequent interactions between the quarks and the plasma change the angle between the quark and antiquark. In addition, the “drag” from the plasma leads to an asymmetry in the overall azimuthal distributions of heavy quarks (elliptic flow) with respect to the reaction plane. The size of these effects is a measure of the strength of the interactions with the plasma. Since quark flavour is conserved in interactions in the plasma, measurements of hadrons containing heavy quarks, such as the D meson and Λc baryon, are directly sensitive to the interactions between heavy quarks and the plasma. While the increase in statistics and the improved spatial resolution of ALICE 2 will already allow us to measure the production of charm baryons, measurements of azimuthal correlations of charm–hadron pairs are needed to directly address how they interact with the plasma. These will only become possible with the precision, statistics and acceptance of ALICE 3. 

Heavier beauty quarks are expected to take longer to thermalise and therefore lose less information through their interactions with the QGP. Therefore, systematic measurements of transverse-momentum distributions and azimuthal asymmetries of beauty mesons and baryons in heavy-ion collisions are essential to map out the interactions of heavy-flavour quarks with the QGP and to understand the mechanisms that drive the system towards thermal equilibrium.

To understand how hadrons emerge from the QGP, those containing multiple heavy quarks are of particular interest because they can only be formed from quarks that were produced in separate hard-scattering processes. If full thermal equilibrium is reached in Pb–Pb collisions, the production rates of such states are expected to be enhanced by up to three orders of magnitude with respect to pp collisions. This implies enormous sensitivity to the probability for combining independently produced quarks during hadronisation and to the degree of thermalisation. At ALICE 3, the precision with which multi-charm baryon yields can be measured is enhanced (see “Multi-charm production” figure). 

Model of a novel design for a retractable tracker

In addition to precision measurements of di-electrons and heavy-flavour hadrons, ALICE 3 will allow us to investigate many more aspects of the QGP. These include fluctuations of conserved quantum numbers, such as flavour and baryon number, which are sensitive to the nature of the deconfinement phase transition of QCD. ALICE 3 will also aim to answer questions in hadron physics, for example by searching for the existence of nuclei containing charm baryons (analogous to strange baryons in hypernuclei) and by studying the interaction potentials between unstable hadrons, which may elucidate the structure of exotic hadronic states that have recently been discovered in electron–positron collisions and in hadronic collisions at the LHC. In addition, ALICE 3 will use ultra-peripheral collisions to study the structure of resonances such as the ρ′ and to look for new fundamental particles, such as axion-like particles and dark photons. A dedicated detector system is foreseen to study very low-energy photon production, which can be used to test “soft theorems” that link the production of very soft photons in a collision to the hadronic final state.

Pushing the experimental limits 

To pursue this ambitious physics programme, ALICE 3 is designed to be a compact, large-acceptance tracking and particle-identification detector with excellent pointing resolution as well as high readout rates. The main tracking information is provided by an all-silicon tracker in a magnetic field provided by a superconducting magnet system, complemented by a dedicated vertex detector that will have to be retractable to provide the required aperture for the LHC at injection energy. To achieve the ultimate pointing resolution, the first hits must be detected as close as possible to the interaction point (5 mm at the highest energy) and the amount of material in front of them must be kept to a minimum. The inner tracking layers will also enable so-called strangeness tracking – the direct detection of strange baryons before they decay – to improve the pointing resolution and suppress combinatorial background, for example in the measurement of multi-charm baryon decays.

ALICE 3 is a compact, large-acceptance tracking and particle-identification detector with excellent pointing resolution as well as high readout rates

First feasibility studies of the mechanical design and the integration with the LHC for the vertex tracker have been conducted and engineering models have been produced to demonstrate the concept and explore production techniques for the components (see “Close encounters” image). The detection layers are to be constructed from bent, wafer-scale pixel sensors. The development of the next generation of CMOS pixel sensors in 65 nm technology with higher radiation tolerance and improved spatial resolution has already started in the context of the ITS 3 project in ALICE, which will be an important milestone on the way to ALICE 3 (see “Next-gen tracking” image). The outer tracker, which has to cover the cylindrical volume to a radius of 80 cm over a total length of ±4 m, will also use CMOS pixel sensors. These will be integrated into larger modules for an effective instrumentation of about 60 m² while minimising the material used for mechanical support and services. The foreseen material budget for the tracker is 1% of a radiation length per layer for the outer tracker, and only 0.05% per layer for the vertex tracker.

An engineering model of ITS 3

For particle identification, five different detector systems are foreseen: a silicon-based time-of-flight system and a ring-imaging Cherenkov (RICH) detector that provide hadron and electron identification over a broad momentum range, a muon identifier starting from a transverse momentum of about 1.5 GeV/c, an electromagnetic calorimeter for photon detection and identification, and a forward tracker to reconstruct photons at very low momentum from their conversions to electron–positron pairs. For the time-of-flight system, the main R&D line aims at the integration of a gain layer in monolithic CMOS sensors to achieve the required time resolution of 20 ps or better (alternatively, low-gain avalanche diodes with external readout circuitry can be used). The calorimeter is based on a combination of lead-sampling and lead-tungstate segments, both of which would be read out by commercially available silicon photomultipliers (SiPMs). For the detection layers of the muon identifier, both resistive plate chambers and scintillating bars are being considered. Finally, for the RICH design, the R&D goal is to integrate the digital readout circuitry in SiPMs to enable efficient detection of photons in the visible range.

ALICE 3 provides a roadmap for an exciting heavy-ion physics programme, along with the other three large LHC experiments, in Runs 5 and 6. An R&D programme for the coming years is being set up to establish the technologies and enable the preparation of technical design reports in 2026/2027. These developments not only constitute an important contribution to the full physics exploitation of the LHC, but are of strategic interest for future particle detectors and will benefit the particle and nuclear physics community at large.

Plasma acceleration under the microscope

A team led by DESY researchers has used a non-invasive technique to measure, for the first time, the energy evolution of an electron bunch inside a laser-plasma accelerator, opening new possibilities to understand the fundamental mechanisms behind this next-generation accelerator technology.

Laser-driven plasma-wakefield acceleration, which is under study at DESY, SLAC and several other labs worldwide, promises to significantly reduce the size of particle accelerators. The idea is to use a high-power laser to create a plasma in a gas, in which charge displacements generate electric fields of the order of 100 GV/m. Such fields can accelerate electron bunches to highly relativistic energies over short distances, outperforming conventional radio-frequency technologies by orders of magnitude. The AWAKE experiment at CERN, meanwhile, is a unique facility for the investigation of proton-driven plasma acceleration, which could enable even higher energies to be reached. Turning the concept of wakefield acceleration into a practical device, on the other hand, is a major challenge.

Turning the concept of wakefield acceleration into a practical device is a major challenge

In order to understand and thus improve the process of laser-plasma acceleration, which lasts for a period of femtoseconds to picoseconds, it is essential to observe as precisely as possible how the properties of the accelerated particles change in the plasma. Publishing their results in December, a team led by DESY’s Simon Bohlen and Kristjan Põder tracked the evolution of the electron beam energy inside a laser-plasma accelerator with high spatial resolution. The feat was performed within a project called PLASMED X, which aims to develop a compact, narrowband and tunable X-ray source for medical imaging. 

The team began by splitting the laser beam into two parts: one was used for electron acceleration, while the other was superimposed so that the light could be scattered by the electrons. Using an X-ray detector to measure the energy of Thomson-scattered photons at 20 points over a 400 μm section of the plasma, the team was able to reconstruct the energy evolution of the electrons over most of the accelerator length without disturbing either the electron beam or the acceleration process itself. 
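
Although the article does not quote the formulae, the underlying kinematics are standard: for head-on Thomson backscattering, the scattered photon energy is approximately E_x ≈ 4γ²E_L, so a measured X-ray energy pins down the electron energy at that point in the plasma. A sketch of the inversion, assuming an 800 nm drive laser (the laser photon energy and example numbers are illustrative, not from the paper):

```python
import math

M_E_MEV = 0.511      # electron rest mass [MeV/c^2]
E_LASER_EV = 1.55    # photon energy of an 800 nm Ti:sapphire laser (assumed)

def electron_energy_mev(e_xray_ev: float) -> float:
    """Invert the head-on Thomson backscattering relation E_x ≈ 4·γ²·E_L
    to recover the electron energy from a measured X-ray photon energy."""
    gamma = math.sqrt(e_xray_ev / (4 * E_LASER_EV))
    return gamma * M_E_MEV

# A ~237 keV backscattered photon corresponds to a ~100 MeV electron
print(round(electron_energy_mev(237_000)))  # → 100
```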

“We were able to show in our measurements that the acceleration gradient can change significantly over very short distances,” says Bohlen. “With the new measurement method, we now have direct insight into a plasma acceleration process and can thus investigate the direct influence of different laser parameters or geometries of plasma cells on the acceleration process.”

Gabriella Pálla 1934–2022

Gabriella Pálla

Gabriella Pálla, who laid the foundations for the participation of Hungarian groups in CERN experiments, passed away on 11 October 2022 at the age of 88.

Gabriella entered Eötvös Loránd University in 1953, and began her career in nuclear physics in 1958 at the KFKI Research Institute for Particle and Nuclear Physics. Her first position was in the atomic physics department under the supervision of Károly Simonyi, working on fast-neutron reactions. In the 1970s she received a Humboldt Research Fellowship and worked at the cyclotron at the University of Hamburg, and later at Jülich. She received her PhD from Eötvös University in 1972 and gained a DSc, titled “Direct reactions and the collective properties of nuclei”, in 1987.

In the 1990s Gabriella’s attention turned towards heavy-ion physics. She helped initiate the Buda-TOF project at NA49 and NA61 and later became the Hungarian ALICE representative in the early years of the experiment. She received the Academy Prize in Physics from the Hungarian Academy of Sciences in 1999 and the Simonyi Károly Award in 2010.

Statistics meets gamma-ray astronomy

As a subfield of astroparticle physics, gamma-ray astronomy investigates many questions rooted in particle physics in an astrophysical context. A prominent example is the search for self-annihilating weakly interacting massive particles (WIMPs) in the Milky Way as a signature of dark matter. Another long-standing problem is finding out where in the universe the cosmic-ray particles detected on Earth are accelerated to PeV energies and beyond.

With the imminent commissioning of the Cherenkov Telescope Array (CTA), which will comprise more than 100 telescopes located in the northern and southern hemispheres, gamma-ray astronomy is about to enter a new era. This was taken as an opportunity to discuss the statistical methods used to analyse data from Cherenkov telescopes at a dedicated PHYSTAT workshop hosted by the University of Berlin. More than 300 participants, including several statisticians, registered for PHYSTAT-Gamma from 28 to 30 September to discuss concrete statistical problems, find synergies between fields, and set the methods employed in a broader context.

Three main topics were addressed at the meeting across 13 talks and multiple discussion sessions: statistical analysis of data from gamma-ray observatories in a multi-wavelength context, connecting statisticians and gamma-ray astronomers, and matching astrophysical sources across different wavelengths. Many concrete physical questions in gamma-ray astronomy must be answered in an astrophysical context, which becomes visible only by observing across the electromagnetic spectrum; a mutual understanding of statistical methods and systematic errors is therefore needed. Josh Speagle (University of Toronto) warned of a potential ‘datapocalypse’ given the heterogeneity and sheer volume of astronomical data expected soon. Similarities between analyses in X-ray and gamma-ray astronomy gave hope for reducing the data heterogeneity, and further cause for optimism came from new approaches for combining data from different observatories.

The second day of PHYSTAT-Gamma focused on building connections between statisticians and gamma-ray astronomers. Eric Feigelson (Penn State) gave an overview of astrostatistics, followed by deeper discussions of Bayesian methods in astronomy by Tom Loredo (Cornell) and techniques for fitting astrophysical models to data with bootstrap methods by Jogesh Babu (Penn State). The session concluded with an overview of statistical methods for the analysis of astronomical time series by Jeff Scargle (NASA).

The final day centered on the problem of how to match astrophysical sources across different wavelengths. CTA is expected to detect gamma rays from more than 1000 sources. Identifying the correct counterparts at other wavelengths will be essential to study the astrophysical context of the gamma-ray emission. Applying Bayesian methods, Tamas Budavari (Johns Hopkins) discussed the current state of the problem from a statistical point of view, followed by in-depth talks and discussions among experts from X-ray, gamma-ray, and radio astronomy.

Recurring topics across all sessions were the treatment of systematic errors and the formats for exchanging data between experiments. Technical considerations currently appear to dominate the definition of data formats in astronomy. However, as Fisher famously showed with the introduction of sufficiency, statistical aspects can help to find useful representations of data, and should also be considered in the definition of future data formats.

PHYSTAT-Gamma was only the first attempt to discuss statistical aspects of gamma-ray astronomy, and many topics remain. For example, the LHCf experiment at CERN will help to improve predictions of the gamma-ray flux expected from astrophysical hadron accelerators and measured by gamma-ray observatories such as CTA; here, modelling uncertainties from particle physics must be treated appropriately to improve the constraints on astrophysical processes. The discussion of this and many further topics is planned for follow-up meetings.

Fundamental symmetries and interactions at PSI


The triennial workshop “Physics of Fundamental Symmetries and Interactions – PSI2022” took place for the sixth time at the Paul Scherrer Institut (PSI) in Switzerland from 17 to 22 October, bringing the worldwide fundamental-symmetries community together. More than 190 participants, including some 70 young scientists, welcomed the close communication of an in-person meeting built around 35 invited and 25 contributed talks.

A central goal of the meeting series is to deepen relations between disciplines and scientists. This year, exceptionally, participants connected with the FIPs workshop at CERN on the second day of the conference, due to the common topics discussed.

With PSI hosting leading high-intensity muon and pion beams, many topics in muon physics and lepton-flavour violation were highlighted. These covered rare muon decays (μ → e + γ, μ → 3e) and muon-to-electron conversion, muonic atoms and proton structure, and muon capture. Presentations covered complementary experimental efforts at J-PARC, Fermilab and PSI. The status of the muon g–2 measurement was reviewed from experimental and theoretical perspectives, with lattice-QCD calculations from 2021 and 2022 intensifying discussions around the tension with Standard Model expectations.

Fundamental physics using cold and ultracold neutrons was a second cornerstone of the programme. Searches for a neutron electric dipole moment (EDM) were discussed in contributions by collaborations from TRIUMF, LANL, SNS, ILL and PSI, complemented by presentations on searches for EDMs in atomic and molecular systems. Along with new results from neutron-beta-decay measurements, the puzzle of the neutron lifetime keeps the community busy, with improving “bottle” and “beam” measurements presently differing by more than 5 standard deviations. Several talks highlighted possible explanations via neutron oscillations into sterile or mirror states.

The current status of direct neutrino-mass measurements, and the outlook for reaching sensitivities down into the meV range, was covered together with updates on searches for neutrinoless double-beta decay. An overview of the hunt for the unknown at the dark-matter frontier was presented together with new limits and plans from various searches. Ultraprecise atomic clocks were discussed, which allow checks of general relativity and the Standard Model as well as searches beyond established theories. The final session covered the latest results from antiproton and antihydrogen experiments at CERN, demonstrating the outstanding precision achieved in CPT tests with these probes. The workshop was a great success and participants look forward to reconvening at PSI2025.

Higgs hunting in Paris


The 12th Higgs Hunting workshop, which took place in Paris and Orsay from 12 to 14 September, presented an overview of recent and new results in Higgs-boson physics. The results painted an increasingly detailed picture of Higgs-boson properties, thanks to the many analyses now reporting results based on the full LHC Run 2 dataset, with an integrated luminosity of about 140 fb–1. Searches for phenomena beyond the Standard Model (BSM) were also presented.

Highlights included new results from CMS on decays of Higgs bosons to b quarks and to invisible final states, and a new limit from ATLAS on lepton-flavour violating decays of the Higgs boson. Events with two Higgs bosons in the final state were used to set limits on interactions involving three Higgs bosons and between two Higgs bosons and two weak vector bosons. All the results remain compatible with Standard Model expectations, except for a small number of intriguing tensions in some BSM searches, such as small excesses in a search for heavier partners of the Higgs boson decaying to W-boson pairs and in a search for resonances produced alongside a Z boson and decaying to a pair of Higgs bosons. These deviations from theory will be followed up by ATLAS and CMS in further analyses using Run 2 and Run 3 data.

This year’s workshop was special, as it marked the tenth anniversary of the Higgs-boson discovery. Two historical talks given by former ATLAS and CMS spokespersons Peter Jenni (University of Freiburg & CERN) and Jim Virdee (Imperial College) highlighted the long-term efforts that laid the foundation for the discovery in 2012.

The workshop also hosted an in-depth discussion on future accelerators and related detector R&D. It focused on future efforts in Europe, the US and Latin America, and featured presentations by Karl Jakobs (University of Freiburg, chair of the European Committee for Future Accelerators), Meenakshi Narain (Brown University, convener of the energy-frontier group of the Snowmass process), Maria Teresa Dova (National University of La Plata, representative for the Latin American strategy effort) and Emmanuel Perez (CERN), who discussed recent improvements in physics analyses at future colliders.

Recent theory developments were also extensively covered, in particular higher-order computations, reviewed by Michael Spira (PSI), who highlighted the agreement between experimental results and predictions. Gauthier Durieux (CERN) reviewed theory progress towards future colliders, while Carlos Wagner (Enrico Fermi Institute & Kavli Institute for Cosmological Physics) discussed the new physics that can be explored via precise measurements of Higgs-boson couplings. Finally, a “vision” presentation by Marcela Carena (Fermilab) highlighted new opportunities for the study of electroweak baryogenesis in relation to Higgs-boson measurements.

The experimental sessions covered recent results on a wide variety of topics, some of which will be relevant to upcoming Run 3 measurements. These include measurements related to potential CP-violating effects in the Higgs sector, as well as effective field theories (EFTs). The latter allow a general description of deviations from Standard Model predictions in Higgs-boson measurements and beyond, and much-improved measurements in this direction are expected in Run 3. The search for Higgs-boson pair production was also an important focus at the Paris meeting. The latest Run 2 analyses showed greatly improved sensitivity compared to earlier rounds, and further improvements are expected in Run 3. While sensitivity to the Standard Model signal is not expected until the High-Luminosity LHC, these searches should set strong constraints on BSM effects in the Higgs sector.

Concluding talks were given by Fabio Maltoni (Louvain) and Giacinto Piacquadio (Stony Brook), and the next Higgs Hunting workshop will be held in Orsay and Paris from 11 to 13 September 2023.

Back to the Swamp

Since its first revolution in the 1980s, string theory has been proposed as a framework to unify all known interactions in nature. As such, it is a perfect candidate to embed the standard models of particle physics and cosmology into a consistent theory of quantum gravity. Over the past decades, the quest to recover both models as low-energy effective field theories (EFTs) of string theory has led to many surprising results, and to the notion of a “landscape” of string solutions reproducing many key features of the universe.


Initially, the vast number of solutions led to the impression that any quantum field theory could be obtained as an EFT of string theory, hindering the predictive power of the theory. In fact, recent developments have shown that quite the opposite is true: many respectable-looking field theories become inconsistent when coupled to quantum gravity and can never be obtained as EFTs of string theory. This set is known as the “swampland” of quantum field theories. The task of the swampland programme is to determine the structure and boundaries of the swampland, and from there extract the predictive power of string theory. Over the past few years, deep connections between the swampland and a fundamental understanding of open questions in high-energy physics ranging from the hierarchy of fundamental scales to the origin and fate of the universe, have emerged.

The workshop Back to the Swamp, held at Instituto de Física Teórica UAM/CSIC in Madrid from 26 to 28 September, gathered leading experts in the field to discuss recent progress in our understanding of the swampland, as well as its implications for particle physics and cosmology. In the spirit of the two previous conferences Vistas over the Swampland and Navigating the Swampland, also hosted at IFT, the meeting featured 22 scientific talks and attracted about 100 participants.

The swampland programme has led to a series of conjectures that have sparked debate about how to connect string theory with the observed universe, especially with models of early-universe cosmology. This was reflected in several talks on the subject, ranging from new scrutiny of current proposals to obtain de Sitter vacua, which might not be consistently constructed in quantum gravity, to new candidates for quintessence models, which introduce a scalar field to explain the observed accelerated expansion of the universe, and scenarios in which dark matter is composed of primordial black holes. Several talks covered the implications of the programme for particle physics and quantum field theories in general. Topics included axion-based proposals to solve the strong-CP problem from the viewpoint of quantum gravity, how axion physics and approximate symmetries can link swampland ideas with experiment, and how the mathematical concept of “tameness” could describe those quantum field theories that are compatible with quantum gravity. Speakers also presented progress on the proposal to characterise large field distances and field-dependent weak couplings as emergent concepts, general bounds on supersymmetric quantum field theories from the consistency of axionic string worldsheet theories, and several proposals on how dispersive bounds and the bootstrap programme are relevant for swampland ideas. Finally, several talks covered more formal topics, such as a sharpened formulation of the distance conjecture, new tests of the tower weak-gravity conjecture, the discovery of new corners of the string-theory landscape, and arguments for and against Euclidean wormholes.

The new results demonstrated the intense activity in the field and highlighted several current aspects of the swampland programme. It is clear that the different proposals and conjectures driving the programme have sharpened and become more interconnected. Each year the programme attracts more scientists working in different specialities of string theory, and proposals to connect the swampland with experiment account for a growing fraction of the effort.

Chasing feebly interacting particles at CERN

What is the origin of neutrino masses and oscillations? What is the nature of dark matter? What mechanism generated the matter–antimatter asymmetry? What drove the inflation of the early universe, and what explains dark energy? What is the origin of the hierarchy of scales? These outstanding questions in particle physics still await answers.

So far, the experimental effort has been driven by theoretical arguments favouring the existence of new particles with relatively large couplings to the Standard Model (SM) and masses commensurate with that of the Higgs boson. Searching for these particles has been one of the main goals of the LHC physics programme. However, several beyond-the-SM theories predict the existence of light (sub-GeV) particles that interact very weakly with the SM fields. Such feebly interacting particles (FIPs) can provide elegant explanations for several unresolved problems in modern physics. Furthermore, searching for them requires specific and distinct techniques, creating new experimental challenges along with innovative theoretical efforts.

FIPs are currently one of the most debated topics in fundamental physics and were recommended by the 2020 update of the European strategy for particle physics as a compelling field to explore in the next decade. The FIPs 2022 workshop, held at CERN from 17 to 21 October, was the second in a series dedicated to the physics of FIPs. It attracted 320 experts from collider, beam-dump and fixed-target experiments, as well as from the astroparticle, cosmology, axion and dark-matter communities, who gathered to discuss progress in experimental searches and new developments in the underlying theoretical models.

The main goal of the workshop was to create the basis for a multi-disciplinary and interconnected approach. The breadth of open questions in particle physics and their deep interconnection require a diversified research programme with different experimental approaches and techniques, together with strong and focused theoretical involvement. In particular, FIPs 2022, which is strongly linked with the Physics Beyond Colliders initiative at CERN, aimed at shaping the FIPs programme in Europe. Topics under discussion included the impact that FIPs might have on stellar evolution, ΛCDM cosmological-model parameters, indirect dark-matter detection, neutrino physics, gravitational-wave physics and AMO (atomic, molecular and optical) physics, in addition to searches currently performed at colliders and extracted beam lines worldwide.

The main sessions were organised around three main themes: light dark matter in particle and astroparticle physics and cosmology; ultra-light FIPs and their connection with cosmology and astrophysics; and heavy neutral leptons and their connection with neutrino physics. In addition, young researchers in the field presented and discussed their work in the “new ideas” sessions.

FIPs 2022 aimed not only to explore new answers to the unresolved questions in fundamental physics, but also to analyse the technical challenges, infrastructure and collaborative networks required to answer them. Indeed, no single experiment or laboratory can by itself cover the large parameter space of masses and couplings that FIPs models suggest. Synergy and complementarity among a great variety of experimental facilities are therefore paramount, calling for deep collaboration across many laboratories and cross-fertilisation among different communities and experimental techniques. We believe that a network of interconnected laboratories can become a sustainable, flexible and efficient way of addressing these particle-physics questions in the decades to come.

The next appointment for the community is the retreat/school “FIPs in the ALPs” to be held in Les Houches from 15 to 19 May 2023, to be followed by the next edition of the FIPs workshop at CERN in autumn 2024.

Remembering the W discovery

A W event recorded by UA1 in 1982

When the W and Z bosons were predicted in the mid-to-late 1960s, their masses were not known. Experimentalists therefore had no idea what energy they needed to produce them. That changed in 1973, when Gargamelle discovered neutral-current neutrino interactions and measured the cross-section ratio between neutral- and charged-current interactions. This ratio provided the first direct determination of the weak mixing angle, which, via the electroweak theory, predicted the W-boson mass to lie between 60 and 80 GeV, and the Z mass between 75 and 95 GeV – at least twice the energy of the leading accelerators of the day. 
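The quoted mass windows follow from the tree-level electroweak relations, which tie m_W and m_Z to the weak mixing angle. The sketch below uses standard values of α and G_F and an illustrative 1970s-era range for sin²θ_W — all assumptions for illustration, not numbers from the article — and recovers windows close to those quoted:

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant
G_F = 1.166e-5        # Fermi constant, in GeV^-2

def boson_masses(sin2_theta_w):
    # Tree-level electroweak relations (radiative corrections neglected):
    #   m_W = sqrt(pi * alpha / (sqrt(2) * G_F)) / sin(theta_W)
    #   m_Z = m_W / cos(theta_W)
    a = math.sqrt(math.pi * ALPHA / (math.sqrt(2) * G_F))  # ~37 GeV
    m_w = a / math.sqrt(sin2_theta_w)
    m_z = m_w / math.sqrt(1 - sin2_theta_w)
    return m_w, m_z

# Scanning a broad early range of sin^2(theta_W) spans roughly
# 60-80 GeV for the W and 75-90 GeV for the Z:
for s2 in (0.38, 0.30, 0.22):
    m_w, m_z = boson_masses(s2)
    print(f"sin^2(theta_W) = {s2}: m_W ~ {m_w:.0f} GeV, m_Z ~ {m_z:.0f} GeV")
```

The larger the mixing angle, the lighter the predicted W; a better-measured sin²θ_W was therefore what finally fixed the energy scale the collider had to reach.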

By then, the world’s first hadron collider – the Intersecting Storage Rings (ISR) at CERN – was working well. Kjell Johnsen proposed a new superconducting ISR in the same tunnel, capable of reaching 240 GeV. A study group was formed. Then, in 1976, Carlo Rubbia, David Cline and Peter McIntyre suggested adding an antiproton source to a conventional 400 GeV proton accelerator, either at Fermilab or at CERN, to transform it into a pp collider. The problem was that the antiprotons had to be accumulated and cooled if the target luminosity (1029 cm–2s–1, providing about one Z event per day) was to be reached. Two methods were proposed: stochastic cooling by Simon van der Meer at CERN and electron cooling by Gersh Budker in Novosibirsk. 
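The "one Z event per day" figure can be checked with back-of-the-envelope arithmetic: the event rate is luminosity times cross-section. The σ×BR value used below (roughly 0.1 nb for Z → e+e– at these energies) is an illustrative assumption, not a number from the article:

```python
# Event rate = luminosity x cross-section, integrated over a day.
LUMI = 1e29            # cm^-2 s^-1, the quoted target luminosity
SIGMA_BR = 0.1e-33     # cm^2 (0.1 nb), assumed sigma x BR for Z -> e+e-
SECONDS_PER_DAY = 86400

z_per_day = LUMI * SIGMA_BR * SECONDS_PER_DAY
print(f"~{z_per_day:.1f} Z events per day")  # on the order of one per day
```

The tiny cross-section is why accumulating and cooling enough antiprotons, rather than raw beam energy, was the decisive technical hurdle.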

CERN Director-General John Adams wasn’t too happy that as soon as the SPS had been built, physicists wanted to convert it into a pp collider. But he accepted the suggestion, and the idea of a superconducting ISR was abandoned. Following the Initial Cooling Experiment, which showed that the luminosity target was achievable with stochastic cooling, the SppS was approved in May 1978 and the construction of the Antiproton Accumulator (AA) by van der Meer and collaborators began. Around that time, the design of the UA1 experiment was also approved. 

A group of us proposed a second, simpler experiment in another interaction region (UA2), but it was put on hold for financial reasons. Then, at the end of 1978, Sam Ting proposed an experiment to go in the same place. His idea was to surround the beam with heavy material so that everything would be absorbed except for muons, making it good at identifying Z → μ+μ– but far from good for W bosons decaying to a muon and a neutrino. In a tense atmosphere, Ting’s proposal was turned down and ours was approved.

First sightings

The first low-intensity pp collisions arrived in late 1981. In December 1982 the luminosity reached a sufficient level, and by the following month UA1 had recorded six W candidates and UA2 four. The background was minimal; there was nothing else we could think of that would produce such events. Carlo presented the UA1 events and Pierre Darriulat the UA2 ones at a workshop in Rome on 12–14 January 1983. On 20 January, Carlo announced the W discovery at a CERN seminar, and the next day I presented the UA2 results, confirming UA1. In UA2 we never discussed priority, because we all knew that it was Carlo who had made the whole project possible. 

Luigi Di Lella

The same philosophy guided the discovery of the Z boson. UA2 had recorded a candidate Z → e+e– event in December 1982, also presented by Pierre at the Rome workshop. One electron was perfectly clear, whereas the other had produced a shower with many tracks. I had shown the event to Jack Steinberger, who strongly suggested we publish immediately; however, we decided to wait for the first “golden” event with both electrons unambiguously identified. Then, one night in May 1983, UA1 found a Z. As with ours, only one electron satisfied all electron-identification criteria, but Carlo used the event to announce a discovery. The UA1 results (based on four Z → e+e– events and one Z → μ+μ–) were published that July, followed by the UA2 results (based on eight Z → e+e– events, including the 1982 one) a month later. 

The SppS ran until 1990, when it became clear that Fermilab’s Tevatron was going to put us out of business. In 1984–1985 the energy was increased from 546 to 630 GeV and in 1986 another ring was added to the AA, increasing the luminosity 10-fold. Following the 1984 Nobel prize to Rubbia and van der Meer, UA1 embarked on an ambitious new electromagnetic calorimeter that never quite worked. UA2 went on to make a precise measurement of the ratio mW/mZ, which, along with the first precise measurement of mZ at LEP, enabled us to determine the W mass with 0.5% precision and, via radiative corrections, to predict the mass of the top quark (160 +50/–60 GeV) several years before the Tevatron discovered it. 

Times have certainly changed since then, but the powerful interplay between theory, experiment and machine builders remains essential for progress in particle physics. 
