At the European Physical Society Conference on High Energy Physics, held in Hamburg in August, the LHCb collaboration announced first results on the production of antihelium and antihypertriton nuclei in proton–proton (pp) collisions at the LHC. These promising results open a new research field for LHCb, one that has so far been pioneered by ground-breaking work from the ALICE collaboration in the central rapidity interval |y| < 0.5. By extending the measurements into the so-far unexplored forward region 1.0 < y < 4.0, the LHCb results provide new experimental input for determining the production cross sections of antimatter particles formed in pp collisions, which are not calculable from first principles.
LHCb’s newly developed helium-identification technique mainly exploits information from energy losses through ionisation in the silicon sensors upstream (VELO and TT stations) and downstream (Inner Tracker) of the LHCb magnet. The amplitude measurements from up to ~50 silicon layers are combined for each subdetector into a log-likelihood estimator. In addition, timing information from the Outer Tracker and velocity measurements from the RICH detectors are used to improve the separation power between heavy helium nuclei (with charge Z = 2) and lighter, singly charged particles (mostly charged pions). With a signal efficiency of about 50%, a nearly background-free sample of 1.1 × 10⁵ helium and antihelium nuclei is identified in the data collected during LHC Run 2 from 2016 to 2018 (see figure, inset).
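For illustration only, the sketch below shows how per-layer amplitude measurements might be combined into a log-likelihood discriminant separating a Z = 2 nucleus from a singly charged particle. The Gaussian response model, the layer normalisation and all numerical values are assumptions made for this example, not LHCb’s actual calibration.

```python
import numpy as np

def dll_helium_pion(amplitudes, mu_he=4.0, mu_pi=1.0, rel_width=0.3):
    """Toy log-likelihood difference between the helium (Z = 2) and
    pion (Z = 1) hypotheses for one track.

    `amplitudes` are per-layer dE/dx amplitudes, normalised so that a
    minimum-ionising singly charged particle peaks near 1; a Z = 2
    nucleus deposits roughly Z^2 = 4 times more ionisation energy.
    The Gaussian response and its width are purely illustrative.
    """
    a = np.asarray(amplitudes, dtype=float)
    ll_he = -0.5 * np.sum(((a - mu_he) / (rel_width * mu_he)) ** 2)
    ll_pi = -0.5 * np.sum(((a - mu_pi) / (rel_width * mu_pi)) ** 2)
    return ll_he - ll_pi   # positive values favour the helium hypothesis

# Example: a track with large amplitudes recorded in 40 silicon layers
track = np.random.default_rng(7).normal(4.0, 1.2, size=40)
print(dll_helium_pion(track) > 0)
```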
The helium identification method proves the feasibility of new research fields at LHCb
As a first step towards a light-nuclei physics programme in LHCb, hypertritons are reconstructed via their two-body decay into an identified helium nucleus and a charged pion. The hypertriton (³ΛH) is a bound state of a proton, a neutron and a Λ hyperon that can be produced via coalescence in pp collisions. These states provide experimental access to the hyperon–nucleon interaction through measurements of their lifetime and binding energy. Hyperon–nucleon interactions have significant implications for the understanding of astrophysical objects such as neutron stars. For example, the presence of hyperons in the dense inner core can significantly suppress the formation of high-mass neutron stars. As a result, there is some tension between the observation of neutron stars heavier than two solar masses and corresponding hypertriton results from the STAR collaboration at Brookhaven. ALICE appears to have resolved this tension between hypertriton measurements at colliders and neutron-star observations. An independent confirmation of the ALICE result has so far been missing, and can be provided by LHCb.
The invariant-mass distribution of hypertriton and antihypertriton candidates is shown in figure 1. More than 100 signal decays are reconstructed, with a statistical uncertainty on the mass of 0.16 MeV, similar to that achieved by STAR. As a next step, corrections for efficiency and acceptance obtained from simulation, as well as systematic uncertainties on the mass scale and lifetime measurement, will be derived.
The new helium identification method from LHCb summarised here proves the feasibility of a rich programme of measurements in QCD and astrophysics involving light antinuclei in the coming years. The collaboration also plans to apply the method to other LHCb Run 2 datasets, such as proton–ion, ion–ion and SMOG collision data.
About 90 physicists attended the sixth plenary workshop of the Muon g-2 Theory Initiative, held in Bern from 4 to 8 September, to discuss the status and strategies for future improvements of the Standard Model (SM) prediction for the anomalous magnetic moment of the muon. The meeting was particularly timely given the recent announcement of the results from runs two and three of the Fermilab g-2 experiment (Muon g-2 update sets up showdown with theory), which reduced the uncertainty of the experimental world average to 0.19 ppm and leaves the measurement in dire need of an SM prediction of commensurate precision. The main topics of the workshop were the two hadronic contributions to g-2: hadronic vacuum polarisation (HVP) and hadronic light-by-light scattering (HLbL), each evaluated either with lattice QCD or with a data-driven approach.
Hadronic vacuum polarisation
The first one-and-a-half days were devoted to the evaluation of HVP – the largest QCD contribution to g-2, whereby a virtual photon briefly transforms into a hadronic “blob” before being reabsorbed – from e⁺e⁻ data. The session started with a talk from the CMD-3 collaboration at the VEPP-2000 collider, whose recent measurement of the e⁺e⁻ → π⁺π⁻ cross section generated shock waves earlier this year by disagreeing (at the level of 2.5–5σ) with all previous measurements used in the Theory Initiative’s 2020 white paper. The programme also featured a comparison with results from the earlier CMD-2 experiment, and a report from seminars and panel discussions organised by the Theory Initiative in March and July on the details of the CMD-3 result. While concerns remain regarding the estimate of certain systematic effects, no major shortcomings could be identified.
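For reference, the data-driven evaluation rests on the standard dispersion relation between the leading-order HVP contribution and the measured hadronic cross section, usually written in terms of the R-ratio (a textbook relation, quoted here only to set the notation):

```latex
\[
a_\mu^{\mathrm{HVP,\,LO}}
  = \frac{\alpha^{2}}{3\pi^{2}}
    \int_{s_{\mathrm{thr}}}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s),
\qquad
R(s)=\frac{\sigma(e^{+}e^{-}\!\to\mathrm{hadrons})}
          {\sigma(e^{+}e^{-}\!\to\mu^{+}\mu^{-})},
\]
```

The kernel K(s) strongly weights the low-energy region, which is why the π⁺π⁻ channel dominates both the central value and the uncertainty – and why the CMD-3 discrepancy matters so much.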
Further presentations from BaBar, Belle II, BESIII, KLOE and SND detailed their plans for new measurements of the 2π channel, which in the case of BaBar and KLOE involve large data samples never analysed before for this measurement. Emphasis was put on the role of radiative corrections, including a recent paper by BaBar on additional radiation in initial-state-radiation events and, in general, the development of higher-order Monte Carlo generators. Intensive discussions reflected a broad programme to clarify the extent to which tensions among the experiments can be due to higher-order radiative effects and structure-dependent corrections. Finally, updated combined fits were presented for the 2π and 3π channels, for the former assessing the level of discrepancy among datasets, and for the latter showing improved determinations of isospin-breaking contributions.
CMD-3 generated shock waves by disagreeing with all previous measurements at the level of 2.5-5σ
Six lattice collaborations (BMW, ETMC, Fermilab/HPQCD/MILC, Mainz, RBC/UKQCD, RC*) presented updates on the status of their respective HVP programmes. For the intermediate-window quantity (the contribution from the region of Euclidean time between about 0.4 and 1.0 fm, which makes up about one third of the total), a consensus has emerged that differs from the e⁺e⁻-based evaluations (prior to CMD-3) by about 4σ, while the short-distance window comes out in agreement. Plans for improved evaluations of the long-distance window and of isospin-breaking corrections were presented, leading to the expectation that, in 2024, new full computations of the total HVP contribution will become available in addition to the BMW result. Several talks addressed detailed comparisons between lattice-QCD and data-driven evaluations, which will allow physicists to better isolate the origin of the differences once more results from each method become available. A presentation on possible beyond-SM effects in the context of the HVP contribution showed that it is quite unlikely that new physics can be invoked to solve the puzzles.
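In the lattice (time-momentum) representation used by these collaborations, the window quantities are obtained by weighting the Euclidean vector-current correlator G(t) with smoothed step functions. A schematic form, following the widely used convention with (t₀, t₁) = (0.4, 1.0) fm and smearing width Δ ≈ 0.15 fm for the intermediate window:

```latex
\[
a_\mu^{\mathrm{win}}
  = \left(\frac{\alpha}{\pi}\right)^{2}
    \int_{0}^{\infty}\mathrm{d}t\;\tilde K(t)\,G(t)\,
    \bigl[\Theta(t,t_{0},\Delta)-\Theta(t,t_{1},\Delta)\bigr],
\qquad
\Theta(t,t',\Delta)=\tfrac{1}{2}\Bigl[1+\tanh\frac{t-t'}{\Delta}\Bigr],
\]
```

with K̃(t) the known QED kernel; the short- and long-distance windows are defined analogously.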
Light-by-light scattering
The fourth day of the workshop was devoted to the HLbL contribution, whereby the interaction of the muon with the magnetic field is mediated by a hadronic blob connected to three virtual photons. In contrast to HVP, here the data-driven and lattice-QCD evaluations agree. However, reducing the uncertainty by a further factor of two is required in view of the final precision expected from the Fermilab experiment. A number of talks discussed the various contributions that feed into improved phenomenological evaluations, including sub-leading contributions such as axial-vector intermediate states as well as short-distance constraints and their implementation. Updates on HLbL from lattice QCD were presented by the Mainz and RBC/UKQCD groups, as were results on the pseudoscalar transition form factor by ETMC and BMW. The latter in particular allow cross-checks of the numerically dominant pseudoscalar-pole contributions between lattice QCD and data-driven evaluations.
It is critical that the Theory Initiative work continues beyond the lifespan of the Fermilab experiment
On the final day, the status of alternative methods to determine the HVP contribution was discussed: first the MUonE experiment at CERN, then the use of τ data (measured by Belle, CLEO-c, ALEPH and other LEP experiments). First MUonE results could become available at few-percent precision with data taken in 2025, while a competitive measurement would follow after Long Shutdown 3. For the τ data, new input is expected from the Belle II experiment, but the critical concern remains control over isospin-breaking corrections. Progress in this direction from lattice QCD was presented by the RBC/UKQCD collaboration, together with a roadmap showing how, potentially in combination with data-driven methods, τ data could lead to a robust, complementary determination of the HVP contribution.
The workshop concluded with a discussion on how to converge on a recommendation for the SM prediction in time for the final Fermilab result, expected in 2025, including new information expected from lattice QCD, the BaBar 2π analysis and radiative corrections. A final decision on the procedure for updating the 2020 white paper is planned to be taken at the next plenary meeting in Japan in September 2024. In view of the long-term developments discussed at the workshop – not least the J-PARC Muon g-2/EDM experiment, due to start taking data in 2028 – it is critical that the work of the Theory Initiative continues beyond the lifespan of the Fermilab experiment, to maximise the amount of information on physics beyond the SM that can be inferred from precision measurements of the anomalous magnetic moment of the muon.
High-energy heavy-ion collisions at the LHC exhibit strong collective flow effects in the azimuthal angle distribution of final-state particles. Since these effects are governed by the initial collision geometry of the two colliding nuclei and the hydrodynamic evolution of the collision, the study of anisotropic flow is a powerful way to characterise the production of the quark–gluon plasma (QGP) – an extreme state of matter expected to have existed in the early universe.
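Quantitatively, anisotropic flow is characterised by the Fourier coefficients vₙ of the azimuthal particle distribution (a standard definition, recalled here for orientation):

```latex
\[
\frac{\mathrm{d}N}{\mathrm{d}\varphi}\;\propto\;
 1+2\sum_{n=1}^{\infty}v_{n}\cos\!\bigl[n\,(\varphi-\Psi_{n})\bigr],
\]
```

where Ψₙ is the orientation of the nth-order symmetry plane; v₂ and v₃ are the elliptic and triangular flow coefficients discussed below.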
To their surprise, researchers on the ALICE experiment have now revealed similar flow signatures in small systems such as proton–proton (pp) and proton–lead (pPb) collisions, where QGP formation was previously assumed not to occur. The origin of these flow signals – and in particular whether the mechanisms behind the correlations share commonalities with those in heavy-ion collisions – is not yet fully understood. To better interpret these results, and thus to understand the limit of the system size that exhibits fluid-like behaviour, it is important to carefully single out possible scenarios that can mimic the effect of collective flow.
Anisotropic-flow measurements are more difficult in small systems because non-flow effects, such as those from jets, become more prominent. It is therefore important to use methods in which the non-flow contribution is properly subtracted. One such method, the so-called low-multiplicity template fit, has been widely used by several experiments to determine and subtract the non-flow component.
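Schematically, the template fit models the per-trigger pair yield in high-multiplicity (HM) events as a scaled copy of the low-multiplicity (LM) correlation, which carries the jet-like non-flow, plus a flow-modulated pedestal; the notation below is illustrative of the method rather than the exact implementation used by the experiments:

```latex
\[
Y^{\mathrm{HM}}(\Delta\varphi)\;=\;F\,Y^{\mathrm{LM}}(\Delta\varphi)
  \;+\;G\Bigl[1+2\sum_{n=2}^{3}v_{n,n}\cos(n\,\Delta\varphi)\Bigr],
\]
```

with the scale factors F and G and the pair harmonics v_{n,n} determined by the fit; under factorisation the single-particle coefficients follow as vₙ = √v_{n,n}.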
The origin of the flow signals in small systems is not yet fully understood
The ALICE collaboration studied long-range angular correlations for pairs of charged particles produced in pp and pPb collisions at centre-of-mass energies of 13 TeV and 5.02 TeV, respectively. Flow coefficients were extracted from these correlations using the template-fit method in samples of events with different charged-particle multiplicities. The method accounts for the fact that the yield of jet fragments increases with particle multiplicity, and it allowed physicists to examine, for the first time, the assumptions made in the low-multiplicity template fit – including a possible jet-shape modification – and to demonstrate their validity.
Figure 1 shows the measurement of two components of anisotropic flow – elliptic (v2) and triangular (v3) – as a function of charged-particle multiplicity at midrapidity (Nch). The data show decreasing trends towards lower multiplicities. In pp collisions, the results suggest that the v2 signal disappears below Nch = 10. The results are then compared with hydrodynamic models. To accurately describe the data, especially for events with low multiplicities, a better understanding of initial conditions is needed.
These results can help to constrain the modelling of initial-state simulations, as the significance of initial-state effects increases for collisions resulting in low multiplicities. The measurements with larger statistics from Run 3 data will push down this multiplicity limit and reduce the associated uncertainties.
Quarks and gluons are the only known elementary particles that cannot be seen in isolation. Once produced, they immediately start a cascade of radiation (the parton shower), followed by confinement, when the partons bind into (colour-neutral) hadrons. These hadrons form the jets that we observe in detectors. The different phases of jet formation can help physicists understand various aspects of quantum chromodynamics (QCD), from parton interactions to hadron interactions – including the confinement transition leading to hadron formation, which is particularly difficult to model. However, jet formation cannot be directly observed. Recently, theorists proposed that the footprints of jet formation are encoded in the energy and angular correlations of the final particles, which can be probed through a set of observables called energy correlators. These observables record the largest angular distance between N particles within a jet (xL), weighted by the product of their energy fractions.
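As an illustration of the two-point case, the sketch below histograms all constituent pairs of a jet in their angular distance x_L, weighting each pair by the product of its momentum fractions. The binning, the variable names and the use of transverse-momentum fractions are choices made for this toy example, not the CMS analysis code.

```python
import numpy as np

def e2c(pt, eta, phi, bins):
    """Toy two-point energy correlator (E2C) for one jet.

    pt, eta, phi: arrays of jet-constituent kinematics.
    Returns a histogram of pair weights versus the pair distance
    x_L = sqrt(d_eta^2 + d_phi^2), weighted by pT_i*pT_j / pT_jet^2.
    """
    pt, eta, phi = map(np.asarray, (pt, eta, phi))
    pt_jet = pt.sum()
    i, j = np.triu_indices(len(pt), k=1)          # all constituent pairs
    deta = eta[i] - eta[j]
    dphi = (phi[i] - phi[j] + np.pi) % (2 * np.pi) - np.pi
    x_l = np.hypot(deta, dphi)                    # pair angular distance
    w = pt[i] * pt[j] / pt_jet**2                 # energy-fraction weight
    hist, _ = np.histogram(x_l, bins=bins, weights=w)
    return hist

# Example with randomly generated "constituents"
rng = np.random.default_rng(1)
print(e2c(rng.exponential(5, 30), rng.normal(0, 0.2, 30),
          rng.normal(0, 0.2, 30), bins=np.logspace(-3, 0, 20)))
```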
The CMS collaboration recently reported a measurement of the energy correlators between two (E2C) and three (E3C) particles inside a jet, using jets with pT in the 0.1–1.8 TeV range. Figure 1 (top) shows the measured E2C distribution. In each jet pT range, three scaling regions can be seen, corresponding to three stages in jet-formation evolution: parton shower, colour confinement and free hadrons (from right to left). The opposite E2C trends in the low and high xL regions indicate that the interactions between partons and those between hadrons are rather different; the intermediate region reflects the confinement transition from partons to hadrons.
Theorists have recently calculated the dynamics of the parton shower with unprecedented precision. Given the high precision of the calculations and of the measurements, the CMS team used the E3C over E2C ratio, shown in figure 1 (bottom), to evaluate the strong coupling constant αS. The ratio reduces the theoretical and experimental uncertainties, and therefore minimises the challenge of distinguishing the effects of αS variations from those of changes in quark–gluon composition. Since αS depends on the energy scale of the process under consideration, the measured value is given for the Z-boson mass: αS = 0.1229 with an uncertainty of 4%, dominated by theory uncertainties and by the jet-constituent energy-scale uncertainty. This value, which is consistent with the world average, represents the most precise measurement of αS using a method based on jet evolution.
Following a decision taken during the June session of the CERN Council to launch a technical design study for a new high-intensity physics programme at CERN’s North Area, a recommendation for experiment(s) that can best take advantage of the intense proton beam on offer is expected to be made by the end of 2023.
The design study concerns the extraction of a high-intensity beam from the Super Proton Synchrotron (SPS) to deliver up to a factor of approximately 20 more protons per year to ECN3 (Experimental Cavern North 3). It is an outcome of the Physics Beyond Colliders (PBC) initiative, which was launched in 2016 to explore ways to further diversify and expand the CERN scientific programme by covering kinematical domains that are complementary to those accessible to high-energy colliders, with a focus on programmes for the start of operations after Long Shutdown 3 towards the end of the decade.
CERN is confident in reaching the beam intensities required for all experiments
To employ a high-intensity proton beam at a fixed-target experiment in the North Area and to effectively exploit the protons accelerated by the SPS, the beam must be extracted slowly. In contrast to fast extraction within a single turn of the synchrotron, which utilises kicker magnets to change the path of a passing proton bunch, slow extraction gradually shaves the beam over several hundred thousand turns to produce a continuous flow of protons over a period of several seconds. One important limitation to overcome concerns particle losses during the extraction, foremost on the thin electrostatic extraction septum of the SPS but also along the transfer line leading to the North Area target stations. An R&D study backed by the PBC initiative has shown that it is possible to deflect the protons away from the blade of the electrostatic septum using thin, bent crystals. “Based on the technical feasibility study carried out in the PBC Beam Delivery ECN3 task force, CERN is confident in reaching the beam intensities required for all experiments,” says ECN3 project leader Matthew Fraser.
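A back-of-the-envelope number puts “several hundred thousand turns” in context: taking the SPS revolution period of roughly 23 μs (circumference about 6.9 km), one gets

```latex
\[
T_{\mathrm{spill}}\;\approx\;N_{\mathrm{turns}}\times T_{\mathrm{rev}}
 \;\approx\;2\times10^{5}\times 23\,\mu\mathrm{s}\;\approx\;4.6\ \mathrm{s},
\]
```

consistent with the continuous flow of protons over several seconds described above (the actual spill length depends on the SPS supercycle).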
Currently, ECN3 hosts the NA62 experiment, which searches for ultra-rare kaon decays as well as for feebly-interacting particles (FIPs). Three experimental proposals that could exploit a high-intensity beam in ECN3 have been submitted to the SPS committee, and on 6 December the CERN research board is expected to decide which should be taken forward. The High-Intensity Kaon Experiment (HIKE), which requires an increase of the current beam intensity by a factor of between four and seven, aims to increase the precision on ultra-rare kaon decays to further constrain the Cabibbo–Kobayashi–Maskawa unitarity triangle and to search for decays of FIPs that may appear on the same axis as the dumped proton beam. Looking for off-axis FIP decays, the SHADOWS (Search for Hidden And Dark Objects With the SPS) programme could run alongside HIKE when operated in beam-dump mode. Alternatively, the SHiP (Search for Hidden Particles) experiment would investigate hidden sectors such as heavy neutral leptons in the GeV mass range and also enable access to muon- and tau-neutrino physics in a dedicated beam-dump facility installed in ECN3.
The ambitious programme to provide and prepare the high-intensity ECN3 facility for the 2030s onwards is driven in synergy with the North Area consolidation project, which has been ongoing since Long Shutdown 2. Works are planned to be carried out without impacting the other beamlines and experiments in the North Area, with first beam commissioning of the new facility expected from 2030.
“Once the experimental decision has been made, things will move quickly and the experimental groups will be able to form strong collaborations around a new ECN3 physics facility, upgraded with the help of CERN’s equipment and service groups,” says Markus Brugger, co-chair of the PBC ECN3 task force.
Innovation often becomes a form of competition. It can be thought of as a race among creative people, where standardized tools measure progress toward the finish line. For many who strive for technological innovation, one such tool is the vacuum gauge.
High-vacuum and ultra-high-vacuum (HV/UHV) environments are used for researching, refining and producing many manufactured goods. But how can scientists and engineers be sure that pressure levels in their vacuum systems are truly aligned with those in other facilities? Without shared vacuum standards and reliable tools for meeting these standards, key performance metrics – whether for scientific experiments or products being tested – may not be comparable. To realize a better ionization gauge for measuring pressure in HV/UHV environments, INFICON of Liechtenstein used multiphysics modelling and simulation to refine its product design.
A focus on gas density
The resulting Ion Reference Gauge 080 (IRG080) from INFICON is more accurate and reproducible than existing ionization gauges. Development of the IRG080 was coordinated by the European Metrology Programme for Innovation and Research (EMPIR), a collaborative R&D effort by private companies and government research organizations that aims to make Europe’s “research and innovation system more competitive on a global scale”. The project participants, working within EMPIR’s 16NRM05 Ion Gauge project, considered multiple options before agreeing that INFICON’s gauge design best fulfilled the performance goals.
Of course, different degrees of vacuum require their own specific approaches to pressure measurement. “Depending on conditions, certain means of measuring pressure work better than others,” explained Martin Wüest, head of sensor technology at INFICON. “At near-atmospheric pressures, you can use a capacitive diaphragm gauge. At middle vacuum, you can measure heat transfer occurring via convection.” Neither of these approaches is suitable for HV/UHV applications. “At HV/UHV pressures, there are not enough particles to force a diaphragm to move, nor are we able to reliably measure heat transfer,” added Wüest. “This is where we use ionization to determine gas density and corresponding pressure.”
The most common HV/UHV pressure-measuring tool is a Bayard–Alpert hot-filament ionization gauge, which is placed inside the vacuum chamber. The instrument includes three core building blocks: the filament (or hot cathode), the grid and the ion collector. Its operation requires the supply of a low-voltage electric current to the filament, causing it to heat up. As the filament becomes hotter, it emits electrons that are attracted to the grid, which is held at a higher voltage. Some of the electrons flowing toward and within the grid collide with free-floating gas molecules circulating in the vacuum chamber, ionising them; the resulting ions flow toward the collector, and the measurable ion current in the collector is proportional to the density of gas molecules in the chamber.
“We can then convert density to pressure, according to the ideal gas law,” explained Wüest. “Pressure will be proportional to the ion current divided by the electron current, [in turn] divided by a sensitivity factor that is adjusted depending on what gas is in the chamber.”
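A minimal numerical sketch of that conversion is shown below; the currents and the nitrogen sensitivity factor are placeholder values chosen only to give a realistic order of magnitude, not INFICON specifications.

```python
def pressure_from_currents(i_ion, i_electron, sensitivity):
    """Ionisation-gauge pressure estimate: p = I_ion / (S * I_e).

    `sensitivity` (S) is gas-dependent and has units of 1/pressure,
    so the result is returned in the reciprocal of those units.
    """
    return i_ion / (sensitivity * i_electron)

# Placeholder numbers: 1 mA emission current, 10 pA ion current,
# S ~ 20 per mbar for nitrogen (typical order of magnitude only)
p = pressure_from_currents(i_ion=1e-11, i_electron=1e-3, sensitivity=20.0)
print(f"estimated pressure ~ {p:.1e} mbar")
```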
Better by design
Unfortunately, while the operational principles of the Bayard–Alpert ionization gauge are sound and well understood, its performance is sensitive to heat and rough handling. “A typical ionization gauge contains fine metal structures that are held in spring-loaded tension,” said Wüest. “Each time you use the device, you heat the filament to between 1200 and 2000 °C. That affects the metal in the spring and can distort the shape of the filament, [thereby] changing the starting location of the electron flow and the paths the electrons follow.”
At the same time, the core components of a Bayard–Alpert gauge can become misaligned all too easily, introducing measurement uncertainties of 10 to 20% – an unacceptably wide range of variation. “Most vacuum-chamber systems are overbuilt as a result,” noted Wüest, and the need for frequent gauge recalibration also wastes precious development time and money.
With this in mind, the 16NRM05 Ion Gauge project team set a measurement uncertainty target of 1% or less for its benchmark gauge design (when used to detect nitrogen gas). Another goal was to eliminate the need to recalibrate gas sensitivity factors for each gauge and gas species under study. The new design also needed to be unaffected by minor shocks and reproducible by multiple manufacturers.
To achieve these goals, the project team first dedicated itself to studying HV/UHV measurement. Their research encompassed a broad review of 260 relevant studies. After completing their review, the project partners selected one design that incorporates current best practice for ionization gauge design: INFICON’s IE514 extractor-type gauge. Subsequently, three project participants – at NOVA University Lisbon, CERN and INFICON – each developed their own simulation models of the IE514 design. Their results were compared to test results from a physical prototype of the IE514 gauge to ensure the accuracy of the respective models before proceeding towards an optimized gauge design.
Computing the sensitivity factor
Francesco Scuderi, an INFICON engineer who specializes in simulation, used the COMSOL Multiphysics® software to model the IE514. The model enabled analysis of thermionic electron emission from the filament and of the ionization of gas by those electrons. The model can also be used for ray-tracing the paths of the generated ions toward the collector. With these simulated outputs, Scuderi could calculate an expected sensitivity factor, which is based on how many ions are detected per emitted electron – a useful metric for comparing the overall fidelity of the model with actual test results.
“After constructing the model geometry and mesh, we set boundary conditions for our simulation,” Scuderi explained. “We are looking to express the coupled relationship of electron emissions and filament temperature, which will vary from approximately 1400 to 2000 °C across the length of the filament. This variation thermionically affects the distribution of electrons and the paths they will follow.”
He continued: “Once we simulate thermal conditions and the electric field, we can begin our ray-tracing simulation. The software enables us to trace the flow of electrons to the grid and the resulting coupled heating effects.”
Next, the model is used to calculate the percentage of electrons that collide with gas particles. From there, ray-tracing of the resulting ions can be performed, tracing their paths toward the collector. “We can then compare the quantity of circulating electrons with the number of ions and their positions,” noted Scuderi. “From this, we can extrapolate a value for ion current in the collector and then compute the sensitivity factor.”
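Conceptually, this last step inverts the same relation used in operation, S = (I_ion/I_e)/p. The sketch below only illustrates the bookkeeping, with made-up counts rather than the outputs of the actual COMSOL model.

```python
def sensitivity_factor(n_ions_collected, n_electrons_emitted,
                       gas_pressure_mbar):
    """Toy sensitivity estimate from ray-tracing counts.

    Assumes the ion and electron currents are proportional to the number
    of traced ions reaching the collector and the number of emitted
    electrons, respectively: S = (I_ion / I_e) / p.
    """
    current_ratio = n_ions_collected / n_electrons_emitted
    return current_ratio / gas_pressure_mbar   # units: 1/mbar

# Made-up counts at a simulated pressure of 1e-6 mbar
print(sensitivity_factor(2_000, 100_000_000, 1e-6))  # ~20 per mbar
```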
INFICON’s model did an impressive job of generating simulated values that aligned closely with test results from the benchmark prototype. This enabled the team to observe how changes to the modelled design affected key performance metrics, including ionization energy, the paths of electrons and ions, emission and transmission current, and sensitivity.
The end-product of INFICON’s design process, the IRG080, incorporates many of the same components as existing Bayard–Alpert gauges, but key parts look quite different. For example, the new design’s filament is a solid suspended disc, not a thin wire. The grid is no longer a delicate wire cage but is instead made from stronger formed metal parts. The collector now consists of two components: a single pin or rod that attracts ions and a solid metal ring that directs electron flow away from the collector and toward a Faraday cup (to catch the charged particles in vacuum). This arrangement, refined through ray-tracing simulation with the COMSOL Multiphysics® software, improves accuracy by better separating the paths of ions and electrons.
A more precise, reproducible gauge
INFICON, for its part, built 13 prototypes for evaluation by the project consortium. Testing showed that the IRG080 achieved the goal of reducing measurement uncertainty to below 1%. As for sensitivity, the IRG080 performed eight times better than the consortium’s benchmark gauge design. Equally important, the INFICON prototype yielded consistent results during multiple testing sessions, delivering sensitivity repeatability performance that was 13 times better than that of the benchmark gauge. In all, 23 identical gauges were built and tested during the project, confirming that INFICON had created a more precise, robust and reproducible tool for measuring HV/UHV conditions.
“We consider [the IRG080] a good demonstration of [INFICON’s] capabilities,” said Wüest.
Entanglement is an extraordinary feature of quantum mechanics: if two particles are entangled, the state of one particle cannot be described independently from the other. It has been observed in a wide variety of systems, ranging from microscopic particles such as photons or atoms to macroscopic diamonds, and over distances ranging from the nanoscale to hundreds of kilometres. Until now, however, entanglement has remained largely unexplored at the high energies accessible at hadron colliders, such as the LHC.
At the TOP 2023 workshop, which took place in Michigan this week, the ATLAS collaboration reported a measurement of entanglement in top-quark pairs with one electron and one muon in the final state, selected from proton–proton collision data collected during LHC Run 2 at a centre-of-mass energy of 13 TeV. The result opens new ways to test the fundamental properties of quantum mechanics.
Two-qubit system
The simplest system that gives rise to entanglement is a pair of qubits, as in the case of two spin-1/2 particles. Since top quarks are typically produced in top–antitop pairs (tt̄) at the LHC, they represent a unique high-energy example of such a two-qubit system. The extremely short lifetime of the top quark (10⁻²⁵ s, shorter than the timescales for hadronisation and spin decorrelation) means that its spin information is directly transferred to its decay products. Close to threshold, a tt̄ pair produced through gluon fusion is almost in a spin-singlet state, i.e. maximally entangled. By measuring the angular distributions of the tt̄ decay products close to threshold, one can therefore determine whether the tt̄ pair is in an entangled state.
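The “maximally entangled” spin-singlet configuration referred to here is the textbook two-qubit state

```latex
\[
|\psi^{-}\rangle=\frac{1}{\sqrt{2}}
 \bigl(|{\uparrow_{t}\,\downarrow_{\bar t}}\rangle
      -|{\downarrow_{t}\,\uparrow_{\bar t}}\rangle\bigr),
\]
```

in which neither top quark carries a definite spin on its own, while the two spins are perfectly anti-correlated along any quantisation axis.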
For this purpose, a single observable, D, can be used as an entanglement witness. It can be measured from the distribution of cosφ, where φ is the angle between the charged-lepton directions measured in the rest frames of their parent top and antitop quarks, with D = −3⟨cosφ⟩. The entanglement criterion is D = tr(C)/3 < −1/3, where tr(C) is the sum of the diagonal elements of the spin-correlation matrix C of the tt̄ pair before hadronisation effects occur. Intuitively, this criterion can be understood from the fact that tr(C) is the expectation value of the product of the spin polarisations, tr(C) = ⟨σ⋅σ̄⟩, with σ and σ̄ the t and t̄ polarisations, respectively (classically |tr(C)| ≤ 1, since spin polarisations are unit vectors). D is measured in a region where the invariant mass of the pair is approximately twice the top-quark mass, 340 < m(tt̄) < 380 GeV, and the measurement is performed at particle level, after hadronisation effects occur.
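In practice the witness reduces to a simple average over the selected events. A schematic computation, ignoring the detector corrections and calibration described below, might look like this (the toy input values are invented for illustration):

```python
import numpy as np

def entanglement_witness(cos_phi):
    """D = -3 * <cos(phi)>, with phi the angle between the charged-lepton
    directions in their parent top and antitop rest frames.
    Entanglement criterion: D < -1/3."""
    d = -3.0 * float(np.mean(cos_phi))
    return d, d < -1.0 / 3.0

# Toy example: cos(phi) values mildly shifted towards +1
cos_phi = np.clip(np.random.default_rng(0).normal(0.2, 0.5, 10_000), -1, 1)
print(entanglement_witness(cos_phi))
```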
This constitutes the first observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement
The shape of the cosφ distribution is distorted by detector and event-selection effects, for which it has to be corrected. A calibration curve connecting the value of D before and after event reconstruction is extracted from simulation and used to derive D from the corresponding measurement, which is then compared to predictions from state-of-the-art Monte Carlo simulations. The measured value, D = −0.547 ± 0.002 (stat.) ± 0.021 (syst.), is well beyond 5σ from the non-entanglement hypothesis. This constitutes the first observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement to date.
Apart from the intrinsic interest of testing entanglement under unprecedented conditions, this measurement paves the way to use the LHC as a novel facility to study quantum information. Prime examples are quantum discord, which is the most basic form of quantum correlations; quantum steering, which is how one subsystem can steer the state of the other one; and tests of Bell’s inequalities, which explore non-locality. Furthermore, borrowing concepts from quantum information theory inspires new approaches to search for physics beyond the Standard Model.
Electronics engineer Oscar Barbalat, who pioneered knowledge-transfer at CERN, died on 8 September 2023, aged 87.
Born in Liège, Belgium in 1935, Oscar joined CERN in 1961, working initially in the Proton Synchrotron (PS) radio-frequency (RF) group. At the time, the PS beam intensity was still below 10⁹ protons per pulse and the beam-control system was somewhat difficult to master, even though operations consisted mainly of striking internal targets at 24 GeV/c. The control system became increasingly complex when the PS slow-resonant extraction system of Hugh Hereward was put into service. As part of a team of expert accelerator physicists that included Dieter Möhl, Werner Hardt, Pierre Lefèvre and Aymar Sörensen, Oscar wrote a substantial FORTRAN simulation program to understand how the extraction efficiency depended on its numerous correlated parameters.
In the 1970s, the PS division set out to digitise the controls of all PS subsystems (Linac, PS Booster, RF, beam transport systems, vacuum system, beam observation, etc). These subsystems used independent control systems, which were based on different computers or operated manually. Oscar was tasked with devising a structured naming scheme for all the components of the PS complex. After producing several versions, in collaboration with all the experts, the fourth iteration of his proposed scheme was adopted in 1977. To design the scheme, Oscar used the detailed knowledge he had acquired of the accelerator systems and their control needs. His respectful and friendly but tenacious way with colleagues enabled him to explore their desires and problems, which he was then able to reconcile with the needs of the automated controls. Oscar was modest. In the acknowledgements of his naming scheme, he wrote: “This proposal is the result of numerous contributions and suggestions from the many members of the division who were interested in this problem and the author is only responsible for the inconsistencies that remain.”
On Giorgio Brianti’s initiative, following the interest of the CERN Council’s finance committee, the “Bureau de Liaison pour l’Industrie et la Technologie” (BLIT) was founded, with Oscar in charge. His activity began in 1974 and ended with his retirement in 1997. His approach to this new task was typical of his and CERN’s collaborative style: low-key and constructive. He was eager to inform himself of details and he had a talent for explaining technical aspects to others. It helped that he was well educated, with broad interests in people, science, technology and languages, as well as in cultural and societal matters. He built a network of people who helped him and whom he convinced of the relevance of sharing technological insights beyond CERN.
After more than 20 years developing this area, he summarised the activities, successes and obstacles in Technology Transfer from Particle Physics, the CERN Experience 1974–1997. When activities began in the 1970s, few considered the usefulness of CERN technologies outside particle physics as a relevant objective. Now, CERN prominently showcases its impact on society. After his retirement, Oscar continued to be interested in CERN technology-transfer, and in 2012 he became a founding member of the international Thorium Energy Committee (iThEC), promoting R&D in thorium energy technologies.
No doubt, Oscar was the pioneer of what is now known as knowledge transfer at CERN.
The 20th International Conference on B-Physics at Frontier Machines, Beauty 2023, was held in Clermont-Ferrand, France, from 3 to 7 July, hosted by the Laboratoire de Physique de Clermont (IN2P3/CNRS, Université Clermont Auvergne). It was the first in-person edition of the series since the pandemic, and attracted 75 participants from all over the world. The programme comprised 53 invited talks, of which 13 were theoretical overviews. Another important element was the Young Scientist Forum, with seven short presentations on recent results.
The key focus of the conference series is to review the latest results in heavy-flavour physics and to discuss future directions. Heavy-flavour decays, in particular those of hadrons that contain b quarks, offer powerful probes of physics beyond the Standard Model (SM). Beauty 2023 took place 30 years after the opening meeting of the series. A dedicated session was devoted to reflections on the developments in flavour physics over this period, and to celebrating the life of Sheldon Stone, who passed away in October 2021. Sheldon was an inspirational figure in flavour physics as a whole, a driving force behind the CLEO, BTeV and LHCb experiments, and a long-term supporter of the Beauty conference series.
LHC results
Many important results have emerged from the LHC since the last Beauty conference. One concerns the CP-violating parameter sin2β, for which measurements by the BaBar and Belle experiments at the start of the millennium marked the dawn of the modern flavour-physics era. LHCb has now measured sin2β with a precision better than any other experiment, to match its achievement for ϕs, the analogous parameter in B⁰s decays, where ATLAS and CMS have also made a major contribution. Continued improvements in the knowledge of these fundamental parameters will be vital in probing for other sources of CP violation beyond the SM.
Over the past decade, the community has been intrigued by strong hints of the breakdown of lepton-flavour universality, one of the guiding tenets of the SM, in B decays. Following a recent update from LHCb, it seems that lepton universality may remain a good symmetry, at least in the class of electroweak-penguin decays such as B → K(*)ℓ⁺ℓ⁻, where much of the excitement was focused (CERN Courier January/February 2023 p7). Nonetheless, there remain puzzles to be understood in this sector of flavour physics, and anomalies are emerging elsewhere. For example, non-leptonic decays of the kind B⁰s → D⁺s K⁻ show intriguing patterns in their CP-violation and decay-rate information.
The July conference was notable as a showcase for the first major results to emerge from the Belle II experiment. Belle II has now collected 362 fb⁻¹ of integrated luminosity on the Υ(4S) resonance, a dataset similar in size to those accumulated by BaBar and the original Belle experiment, and results were shown from early tranches of this sample. In some cases, these results already match or exceed in sensitivity and precision what was achieved at the first generation of B-factory experiments, or indeed elsewhere. These advances can be attributed to improved instrumentation and analysis techniques. For example, world-leading measurements of the lifetimes of several charm hadrons were presented, including the D⁰, D⁺, D⁺s and Λ⁺c. Belle II and its accelerator, SuperKEKB, will emerge from a year-long shutdown in December with the goal of increasing the dataset by a factor of 10–20 over the coming half decade.
Full of promise
The future experimental programme of flavour physics is full of promise. In addition to the upcoming riches expected from Belle II, an upgraded LHCb detector is being commissioned in order to collect significantly larger event samples over the coming decade. Upgrades to ATLAS and CMS will enhance these experiments’ capabilities in flavour physics during the High-Luminosity LHC era, for which a second upgrade to LHCb is also foreseen. Conference participants also learned of the exciting possibilities for flavour physics at the proposed future collider FCC-ee, where samples of several 10¹² Z⁰ decays will open the door to ultra-precise measurements in an analysis environment much cleaner than at the LHC. These projects will be complemented by continued exploration of the kaon sector, and by studies at the charm threshold, for which a high-luminosity Super Tau Charm Factory is proposed in China.
The scientific programme of Beauty 2023 was complemented by outreach events in the city, including a ‘Pints of Science’ evening and a public lecture, as well as a variety of social events. These and the stimulating presentations made the conference a huge success, demonstrating that flavour remains a vibrant field and continues to be a key player in the search for new physics beyond the Standard Model.
Henri Navelet died on 3 July 2023 in Bordeaux, at the age of 84. Born on 28 October 1938, he studied at the École Normale Supérieure in Paris. He went on to become a specialist in strong interactions and had been a leading member of the Service de Physique Théorique (SPhT, now Institut de Physique Théorique) of CEA Saclay since its creation in 1963. Henri stood out for his theoretical rigour and remarkable computational skills, which meant a great deal to his many collaborators.
In the 1960s, Henri was a member of the famous “CoMoNav” trio with two other SPhT researchers, Gilles Cohen-Tannoudji and André Morel. The trio was famous in particular for introducing the so-called Regge-pole absorption model into the phenomenology of high-energy (at the time!) strong interactions. This model was used by many physicists to untangle the multitude of reactions studied at CERN. Henri’s other noteworthy contributions include his work with Alfred H Mueller on very-high-energy particle jets, today commonly referred to as “Mueller-Navelet jets”, which are still the subject of experimental research and theoretical calculations in quantum chromodynamics.
Henri had a great sense of humour and human qualities that were highly motivating for his colleagues and the young researchers who met him during his long career. He was not only a great theoretical physicist, but also a passionate sportsman, training the younger generations. In particular, he ran the marathon in two hours, 59 minutes and 59 seconds. A valued researcher and friend has left us.