Topics

Rapid developments in precision predictions

High Precision for Hard Processes in Turin

Achieving a theoretical uncertainty of only a few per cent in the prediction of physical observables is a hugely challenging task in the complex environment of hadronic collisions. To keep pace with experimental observations at the LHC and elsewhere, precision computing has had to develop rapidly in recent years – efforts that have been monitored and driven by the biennial High Precision for Hard Processes (HP2) conference for almost two decades. The latest edition attracted 120 participants to the University of Torino from 10 to 13 September 2024.

All speakers addressed the same basic question: how can we achieve the most precise theoretical description for a wide variety of scattering processes at colliders?

The recipe for precise prediction involves many ingredients, so the talks in Torino probed several research directions. Advanced methods for the calculation of scattering amplitudes were discussed, among others, by Stephen Jones (IPPP Durham). These methods can be applied to detailed high-order phenomenological calculations for QCD, electroweak processes and BSM physics, as illustrated by Ramona Groeber (Padua) and Eleni Vryonidou (Manchester). Progress in parton showers – a crucial tool to bridge amplitude calculations and experimental results – was presented by Silvia Ferrario Ravasio (CERN). Dedicated methods to deal with the delicate issue of infrared divergences in high-order cross-section calculations were reviewed by Chiara Signorile-Signorile (Max Planck Institute, Munich).

The Torino conference was dedicated to the memory of Stefano Catani, a towering figure in the field of high-energy physics, who suddenly passed away at the beginning of this year. Starting from the early 1980s, and for the whole of his career, Catani made groundbreaking contributions in every facet of HP2. He was an inspiration to a whole generation of physicists working in high-energy phenomenology. We remember him as a generous and kind person, and a scientist of great rigour and vision. He will be sorely missed.

AI treatments for stroke survivors

Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies. The toolbox of the high-energy physicist is well adapted to the task. To amplify CERN’s societal contributions through technological innovation, the Unleashing a Comprehensive, Holistic and Patient-Centric Stroke Management for a Better, Rapid, Advanced and Personalised Stroke Diagnosis, Treatment and Outcome Prediction (UMBRELLA) project – co-led by Vall d’Hebron Research Institute and Siemens Healthineers – was officially launched on 1 October 2024. The kickoff meeting in Barcelona, Spain, convened more than 20 partners, including Philips, AstraZeneca, KU Leuven and EATRIS. Backed by nearly €27 million from the EU’s Innovative Health Initiative and industry collaborators, the project aims to transform stroke care across Europe.

The meeting highlighted the urgent need to address stroke as a pressing health challenge in Europe. Each year, more than one million acute stroke cases occur in Europe, with nearly 10 million survivors facing long-term consequences. In 2017, the economic burden of stroke treatments was estimated to be €60 billion – a figure that continues to grow. UMBRELLA’s partners outlined their collective ambition to translate a vast and fragmented stroke data set into actionable care innovations through standardisation and integration.

UMBRELLA will utilise advanced digital technologies to develop AI-powered predictive models for stroke management. By standardising real-world stroke data and leveraging tools like imaging technologies, wearable devices and virtual rehabilitation platforms, UMBRELLA aims to refine every stage of care – from diagnosis to recovery. Based on post-stroke data, AI-driven insights will empower clinicians to uncover root causes of strokes, improve treatment precision and predict patient outcomes, reshaping how stroke care is delivered.

Central to this effort is the integration of CERN’s federated-learning platform, CAFEIN, a decentralised approach to training machine-learning algorithms without exchanging the underlying data. Initiated with seed funding from CERN’s knowledge-transfer budget for the benefit of medical applications, CAFEIN now promises to enhance diagnosis, treatment and prevention strategies for stroke patients, ultimately saving countless lives. A main topic of the kickoff meeting was the development of the “U-platform” – a federated data ecosystem co-designed by Siemens Healthineers and CERN. Based on CAFEIN, the infrastructure will enable the secure and privacy-preserving training of advanced AI algorithms for personalised stroke diagnostics, risk prediction and treatment decisions without sharing sensitive patient data between institutions. Building on CERN’s expertise, including its success in federated AI modelling for brain pathologies under the EU TRUSTroke project, the CAFEIN team is poised to handle the increasing complexity and scale of data sets required by UMBRELLA.
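CAFEIN’s internals are not described here, but the federated-averaging idea at the heart of such platforms can be sketched in a few lines. This is a minimal illustration under stated assumptions only – the logistic-regression model, the two-client setup and all names are hypothetical, not CAFEIN’s actual design; the key point is that each client trains on data that never leaves its site, and only model parameters travel to the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training step: logistic-regression gradient
    descent on data that never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # model predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average the client models, weighted by local
    data size. Only parameters are exchanged, never the data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: two "hospitals" holding private data
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 3)), rng.normal(size=(80, 3))
true_w = np.array([1.0, -2.0, 0.5])
y1 = (X1 @ true_w > 0).astype(float)
y2 = (X2 @ true_w > 0).astype(float)

global_w = np.zeros(3)
for _ in range(20):                        # communication rounds
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = federated_average([w1, w2], [len(y1), len(y2)])

print(global_w)
```

After a few rounds the aggregated model aligns with the direction of the underlying signal even though neither site ever saw the other’s records – the essential privacy property a production system builds on with secure aggregation and authentication.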

Beyond technological advancements, the UMBRELLA consortium discussed a plan to establish standardised protocols for acute stroke management, with an emphasis on integrating these protocols into European healthcare guidelines. By improving data collection and facilitating outcome predictions, these standards will particularly benefit patients in remote and underserved regions. The project also aims to advance research into the causes of strokes – in a quarter of cases the cause remains undetermined, a statistic UMBRELLA seeks to change.

This ambitious initiative not only showcases CERN’s role in pioneering federated-learning technologies but also underscores the broader societal benefits brought by basic science. By pushing technologies beyond the state-of-the-art, CERN and other particle-physics laboratories have fuelled innovations that have an impact on our everyday lives. As UMBRELLA begins its journey, its success holds the potential to redefine stroke care, delivering life-saving advancements to millions and paving the way for a healthier, more equitable future.

Dark matter: evidence, theory and constraints

Dark Matter: Evidence, Theory and Constraints

Cold non-baryonic dark matter appears to make up 85% of the matter and 25% of the energy in our universe. However, we don’t yet know what it is. As the opening lines of many research proposals state, “The nature of dark matter is one of the major open questions in physics.”

The evidence for dark matter comes from astronomical and cosmological observations. Theoretical particle physics provides us with various well-motivated candidates, such as weakly interacting massive particles (WIMPs), axions and primordial black holes. Each has different experimental and observational signatures, and a wide range of searches is under way. Dark-matter research spans a very broad range of topics and methods, making it a challenging research field to enter and master. Dark Matter: Evidence, Theory and Constraints by David Marsh, David Ellis and Viraf Mehta, the latest addition to the Princeton Series in Astrophysics, clearly presents the relevant essentials of all of these areas.

The book starts with a brief history of dark matter and some warm-up calculations involving units. Part one outlines the evidence for dark matter, on scales ranging from individual galaxies to the entire universe. It compactly summarises the essential background material, including cosmological perturbation theory.

Part two focuses on theories of dark matter. After an overview of the Standard Model of particle physics, it covers three candidates with very different motivations, properties and phenomenology: WIMPs, axions and primordial black holes. Part three then covers both direct and indirect searches for these candidates. I particularly like the schematic illustrations of experiments; they should be helpful for theorists who want to (and should!) understand the essentials of experimental searches.

The main content finishes with a brief overview of other dark-matter candidates. Some of these arguably merit more extensive coverage, in particular sterile neutrinos. The book ends with extensive recommendations for further reading, including textbooks, review papers and key research papers.

Dark-matter research spans a broad range of topics and methods, making it a challenging field to master

The one thing I would argue with is the claim in the introduction that dark matter has already been discovered. I agree with the authors that the evidence for dark matter is strong and currently cannot all be explained by modified gravity theories. However, given that all of the evidence for dark matter comes from its gravitational effects, I’m open to the possibility that our understanding of gravity is incorrect or incomplete. The authors are also more positive than I am about the prospects for dark-matter detection in the near future, claiming that we will soon know which dark-matter candidates exist “in the real pantheon of nature”. Optimism is a good thing, but this is a promise that dark-matter researchers (myself included…) have now been making for several decades.

The conversational writing style is engaging and easy to read. The annotation of equations with explanatory text is novel and helpful, and the inclusion of numerous diagrams – simple and illustrative where possible and complex when called for – aids understanding. The attention to detail is impressive. I reviewed a draft copy for the publishers, and all of my comments and suggestions have been addressed in detail.

This book will be extremely useful to newcomers to the field, and I recommend it strongly to PhD students and undergraduate research students. It is particularly well suited as a companion to a lecture course, with numerous quizzes, problems and online materials, including numerical calculations and plots using Jupyter notebooks. It will also be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The B’s Ke+es

The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, when three- and four-sigma anomalies abounded: the inclusive/exclusive puzzles; the illuminatingly named P5′ observable; and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ− and Bs → φe+e−, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long- and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ− and B → K(*)e+e− were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to prospects for applying lattice methods to D–D̄ mixing: such studies are currently limited to the SU(3)-flavour symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle γ. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements ranging from CP violation to searches for very rare decays and their form factors. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states, with a lively open discussion on the underlying structure of the 20 or so such states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration showed its most recent publicly available measurement of the weak-mixing angle in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb has put an emphasis on combined QCD + QED and effective-field-theory calculations, and on the interplay between EW precision observables and new-physics effects in couplings to the third generation. By probing phase space inaccessible to any other experiment, a study of hypothetical dark photons decaying to electrons showed LHCb to be a unique environment for direct searches for long-lived and low-mass particles.

Attendees left the workshop with a fresh perspective

Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics related to LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.

From spinors to supersymmetry

From Spinors to Supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors of this volume, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in the phenomenology of particle physics with many original contributions in the topics that are covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras are very familiar with.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material that one finds in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion on experimental results, providing a succinct discussion of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unification theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM physics scenarios. This gentle and very pedagogical approach continues through chapter eight, before proceeding to a more demanding discussion of the supersymmetry algebra in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, which are discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion on beyond-MSSM scenarios is given in chapter 16, including NMSSM, seesaw, GUTs and R-parity violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four includes basic Feynman-diagram calculations in the SM and MSSM using the two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds to tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation is very practical and useful for those who want to see how to perform simple calculations in the SM or MSSM using the two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.

A valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry

The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to commonly used identity lists and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: the two-component spinor formalism is discussed in full detail in a very pedagogic and clear way. Parts two and three are a significant enhancement of the well known A Supersymmetry Primer (arXiv:hep-ph/9709356), which is very popular among beginners to supersymmetry and written by Stephen Martin, one of the authors of this volume. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions, you can find a detailed exposition of the two-component formalism for spinors, SM calculations with this formalism and a detailed discussion of how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, which nevertheless contains much that will be of interest to advanced physics students and particle-physics researchers in both theory and experiment. This is because the size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in extensions of the SM, especially if they include supersymmetry.

Intensely focused on physics

The High Luminosity Large Hadron Collider, edited by Oliver Brüning and Lucio Rossi, is a comprehensive review of the upgrade project (HL-LHC) designed to boost the total event statistics of CERN’s Large Hadron Collider (LHC) by nearly an order of magnitude. The LHC is the world’s largest and, in many respects, most performant particle accelerator. It may well represent the most complex infrastructure ever built for scientific research. The increase in event rate is achieved by higher beam intensities and smaller beam sizes at the collision points.

Brüning and Rossi’s book offers a comprehensive overview of this work across 31 chapters authored by more than 150 contributors. Given the complexity of the HL-LHC, it is advisable to read the excellent introductory chapter first to obtain an overview of the various physics aspects, the different components and the project structure. After coverage of the physics case and the upgrades to the LHC experiments, the operational experience with the LHC and its performance development are described.

The LHC’s upgrade is a significant project, as evidenced by the involvement of nine collaborating countries including China and the US, a materials budget that exceeds one billion Swiss francs, more than 2200 years of integrated work, and the complexity of the physics and engineering. The safe operation of the enormous beam intensity represented a major challenge for the original LHC, and will be even more challenging with the upgraded beam parameters. For example, the instantaneous power carried by the circulating beam will be 7.6 TW, while the total beam energy is then 680 MJ – enough energy to boil two tonnes of water. Such numbers should be compared with the extremely low power density of 30 mW/cm³, which is sufficient to quench a superconducting magnet coil and interrupt the operation of the entire facility.

The book continues with descriptions of the two subsystems of greatest importance for the luminosity increase: the superconducting magnets and the RF systems including the crab cavities.


Besides the increase in intensity, the primary factor for instantaneous luminosity gain is obtained by a reduction in beam size at the interaction points (IPs), partly through a smaller emittance but mainly through improved beam optics. This change results in a larger beam in the superconducting quadrupoles beside the IP. To accommodate the upgraded beam and to shield the magnet coils from radiation, the aperture of these magnets is increased by more than a factor of two to 150 mm. New quadrupoles have been developed, utilising the superconductor material Nb3Sn, allowing higher fields at the location of the coils. Further measures include the cancellation of the beam crossing angle during collision by dynamic tilting of the bunch orientation using the superconducting crab cavities that were designed for this special application in the LHC. The authors make fascinating observations, for example regarding the enhanced sensitivity to errors due to the extreme beam demagnification at the IPs: a typical relative error of 10⁻⁴ in the strength of the IP quadrupoles results in a significant distortion in beam optics, a so-called beta-beat of 7%.

Chapter eight describes the upgrade to the beam-collimation system, which is of particular importance for the safe operation of high-intensity beams. For ion collimation, halo particles are extracted most efficiently using collimators made from bent crystals.

The book continues with a description of the magnet-powering circuits. For the new superconducting magnets CERN is using “superconducting links” for the first time: cable sets made of a high-temperature superconductor that can carry enormous currents on many circuits in parallel in a small cross section; it suffices to cool them to temperatures of around 20 to 30 K with gaseous helium by evaporating some of the liquid helium that is used for cooling the superconducting magnets in the accelerator.

Magnetic efforts

The next chapters cover machine protection, the interface with the detectors and the cryogenic system. Chapter 15 is dedicated to the effects of beam-induced stray radiation, in particular on electronics – an effect that has become quite important at high intensities in recent years. Another chapter covers the development of an 11 tesla dipole magnet that was intended to replace a regular superconducting magnet, thereby gaining space for additional collimators in the arc of the ring. Despite considerable effort, this programme was eventually dropped from the project because the new magnet technology could not be mastered with the required reliability for routine operation; and, most importantly, alternative collimation solutions were identified.

Other chapters describe virtually all the remaining technical subsystems and beam-dynamics aspects of the collider, as well as the extensive test infrastructure required before installation in the LHC. A whole chapter is dedicated to high-field-magnet R&D – a field of utmost importance to the development of a next-generation hadron collider beyond the LHC.

Brüning and Rossi’s book will interest accelerator physicists in that it describes many outstanding beam-physics aspects of the HL-LHC. Engineers and readers with an interest in technology will also find many technical details on its subsystems.

Open-science cloud takes shape in Berlin

Findable. Accessible. Interoperable. Reusable. That’s the dream scenario for scientific data and tools. The European Open Science Cloud (EOSC) is a pan-European initiative to develop a web of “FAIR” data services across all scientific fields. EOSC’s vision is to put in place a system for researchers in Europe to store, share, process, analyse and reuse research outputs such as data, publications and software across disciplines and borders.

EOSC’s sixth symposium attracted 450 delegates to Berlin from 21 to 23 October 2024, with a further 900 participating online. Since EOSC’s launch in 2017, activities have focused on conceptualisation, prototyping and planning. In order to develop a trusted federation of research data and services for research and innovation, EOSC is being deployed as a network of nodes. With the launch during the symposium of the EOSC EU node, this year marked a transition from design to deployment.

While EOSC is a flagship science initiative of the European Commission, FAIR concerns researchers and stakeholders globally. Via the multiple projects under the wings of EOSC that collaborate with software and data institutes around the world, a pan-European effort can be made to ensure a research landscape that encourages knowledge sharing while recognising work and training the next generation in best practices in research. The EU node – funded by the European Commission, and the first to be implemented – will serve as a reference for roughly 10 additional nodes to be deployed in a first wave, with more to follow. They are accessible using any institutional credentials based on GÉANT’s MyAccess or with an EU login. A first operational implementation of the EOSC Federation is expected by the end of 2025.

A thematic focus of this year’s symposium was the need for clear guidelines on the adaptation of FAIR governance for artificial intelligence (AI), which relies on the accessibility of large and high-quality datasets. It is often the case that AI models are trained with synthetic data, large-scale simulations and first-principles mathematical models, although these may only provide an incomplete description of complex and highly nonlinear real-world phenomena. Once AI models are calibrated against experimental data, their predictions become increasingly accurate. Adopting FAIR principles for the production, collection and curation of scientific datasets will streamline the design, training, validation and testing of AI models (see, for example, Y Chen et al. 2021 arXiv:2108.02214).

EOSC includes five science clusters, from natural sciences to social sciences, with a dedicated cluster for particle physics and astronomy called ESCAPE: the European Science Cluster of Astronomy and Particle Physics. The future deployment of the ESCAPE Virtual Research Environment across multiple nodes will provide users with tools to bring together diverse experimental results, for example, in the search for evidence of dark matter, and to perform new analyses incorporating data from complementary searches.

First signs of antihyperhelium-4

Heavy-ion collisions at the LHC create suitable conditions for the production of atomic nuclei and exotic hypernuclei, as well as their antimatter counterparts, antinuclei and antihypernuclei. Measurements of these forms of matter are important for understanding the formation of hadrons from the quark–gluon plasma and studying the matter–antimatter asymmetry seen in the present-day universe.

Hypernuclei are exotic nuclei formed by a mix of protons, neutrons and hyperons, the latter being unstable particles containing one or more strange quarks. More than 70 years after their discovery in cosmic rays, hypernuclei remain a source of fascination for physicists due to their rarity in nature and the challenge of creating and studying them in the laboratory.

In heavy-ion collisions, hypernuclei are created in significant quantities, but only the lightest hypernucleus, hypertriton, and its antimatter partner, antihypertriton, have been observed. Hypertriton is composed of a proton, a neutron and a lambda hyperon containing one strange quark. Antihypertriton is made up of an antiproton, an antineutron and an antilambda.

Following hot on the heels of the observation of antihyperhydrogen-4 (a bound state of an antiproton, two antineutrons and an antilambda) earlier this year by the STAR collaboration at the Relativistic Heavy Ion Collider (RHIC), the ALICE collaboration at the LHC has now seen the first ever evidence for antihyperhelium-4, which is composed of two antiprotons, an antineutron and an antilambda. The result has a significance of 3.5 standard deviations. If confirmed, antihyperhelium-4 would be the heaviest antimatter hypernucleus yet seen at the LHC.

Hypernuclei remain a source of fascination due to their rarity in nature and the challenge of creating and studying them in the lab

The ALICE measurement is based on lead–lead collision data taken in 2018 at a centre-of-mass energy of 5.02 TeV for each colliding pair of nucleons, be they protons or neutrons. Using a machine-learning technique that outperforms conventional hypernuclei search techniques, the ALICE researchers searched the data for signals of hyperhydrogen-4, hyperhelium-4 and their antimatter partners. Candidates for (anti)hyperhydrogen-4 were identified by looking for the (anti)helium-4 nucleus and the charged pion into which it decays, whereas candidates for (anti)hyperhelium-4 were identified via its decay into an (anti)helium-3 nucleus, an (anti)proton and a charged pion.

In addition to finding evidence of antihyperhelium-4 with a significance of 3.5 standard deviations, and evidence of antihyperhydrogen-4 with a significance of 4.5 standard deviations, the ALICE team measured the production yields and masses of both hypernuclei.

For both hypernuclei, the measured masses are compatible with the current world-average values. The measured production yields were compared with predictions from the statistical hadronisation model, which provides a good description of the formation of hadrons and nuclei in heavy-ion collisions. This comparison shows that the model’s predictions agree closely with the data if both excited hypernuclear states and ground states are included in the predictions. The results confirm that the statistical hadronisation model can also provide a good description of the production of hypernuclei modelled to be compact objects with sizes of around 2 femtometres.

The researchers also determined the antiparticle-to-particle yield ratios for both hypernuclei and found that they agree with unity within the experimental uncertainties. This agreement is consistent with ALICE’s observation of the equal production of matter and antimatter at LHC energies and adds to the ongoing research into the matter–antimatter imbalance in the universe.

Tsung-Dao Lee 1926–2024

On 4 August 2024, the great physicist Tsung-Dao Lee (also known as T D Lee) passed away at his home in San Francisco, aged 97.

Born in 1926 to an intellectual family in Shanghai, Lee had his education disrupted several times by the war against Japan. He neither completed high school nor graduated from university. In 1943, however, he took the national entrance exam and, with outstanding scores, was admitted to the chemical engineering department of Zhejiang University. He then transferred to the physics department of Southwest Associated University, a temporary wartime merger of Peking, Tsinghua and Nankai universities. In the autumn of 1946, on the recommendation of Ta-You Wu, Lee went to study at the University of Chicago under the supervision of Enrico Fermi, earning his PhD in June 1950.

From 1950 to 1953 Lee conducted research at the University of Chicago, the University of California, Berkeley and the Institute for Advanced Study in Princeton. During this period, he made significant contributions to particle physics, statistical mechanics, field theory, astrophysics, condensed-matter physics and turbulence theory, demonstrating a wide range of interests and deep insights in several frontiers of physics. In a 1952 paper on turbulence, for example, Lee pointed out the significant difference between fluid dynamics in two-dimensional and three-dimensional spaces, namely, that there is no turbulence in two dimensions. This finding provided essential conditions for John von Neumann’s model, which used early computers to simulate weather.

Profound impact

During this period, Lee and Chen-Ning Yang collaborated on two foundational works in statistical physics concerning phase transitions, discovering the famous “unit circle theorem” on lattice gases, which had a profound impact on statistical mechanics and phase-transition theory.

Between 1952 and 1953, during a visit to the University of Illinois at Urbana-Champaign, Lee was inspired by discussions with John Bardeen (winner, with Leon Neil Cooper and John Robert Schrieffer, of the 1972 Nobel Prize in Physics for developing the first successful microscopic theory of superconductivity). Lee applied field-theory methods to study the motion of slow electrons in polar crystals, pioneering the use of field theory to investigate condensed matter systems. According to Schrieffer, Lee’s work directly influenced the development of their “BCS” theory of superconductivity.

In 1953, after taking an assistant professor position at Columbia University, Lee proposed a renormalisable field-theory model, widely known as the “Lee Model,” which had a substantial impact on the study of renormalisation in quantum field theory.

On 1 October 1956, Lee and Yang’s theory of parity non-conservation in weak interactions was published in Physical Review. It was quickly confirmed by the experiments of Chien-Shiung Wu and others, earning Lee and Yang the 1957 Nobel Prize in Physics – one of the fastest recognitions in the history of the Nobel Prize. The discovery of parity violation significantly challenged the established understanding of fundamental physical laws and directly led to the establishment of the universal V–A theory of weak interactions in 1958. It also laid the groundwork for the unified theory of weak and electromagnetic interactions developed a decade later.

In 1957, Lee, Reinhard Oehme and Yang extended symmetry studies to combined charge–parity (CP) transformations. The CP non-conservation discovered in neutral K-meson decays in 1964 confirmed the importance of this theoretical work and underpinned the later development of CP-violation theories. In 1964, Lee was appointed the Fermi Professor of Physics at Columbia.

In the 1970s, Lee published papers exploring the origins of CP violation, suggesting that it might stem from spontaneous symmetry breaking in the vacuum and predicting several significant phenomenological consequences. In 1974, Lee and G C Wick investigated whether spontaneously broken symmetries in the vacuum could be partially restored under certain conditions. They found that heavy-ion collisions could achieve this restoration and produce observable effects. This work pioneered the study of the quantum chromodynamics (QCD) vacuum, phase transitions and quark–gluon plasma. It also laid the theoretical and experimental foundation for relativistic heavy-ion collision physics.

From 1982, Lee devoted significant efforts to solving non-perturbative QCD using lattice-QCD methods. Together with Norman Christ and Richard Friedberg, he developed random-lattice field theory and promoted first-principles lattice simulations on supercomputers, greatly advancing lattice-QCD research.

Immense respect

In 2011 Lee retired as a professor emeritus from Columbia at the age of 85. In China, he enjoyed immense respect, not only for being one of the first Chinese scientists (with Chen-Ning Yang) to win a Nobel Prize, but also for enhancing the level of science and education in China and promoting Sino–American collaboration in high-energy physics. This led to the establishment and successful construction of China’s first major high-energy physics facility, the Beijing Electron–Positron Collider (BEPC). At the beginning of this century, Lee supported and personally helped the upgrade of BEPC, the Daya Bay reactor neutrino experiment and other projects. In addition, he initiated and promoted the China–US Physics Examination and Application (CUSPEA) programme, and helped establish the National Natural Science Foundation of China and the country’s postdoctoral system.

Tsung-Dao Lee’s contributions to an extraordinarily wide range of fields profoundly shaped humanity’s understanding of the basic laws of the universe.

Robert Aymar 1936–2024

Robert Aymar, CERN Director-General from January 2004 to December 2008, passed away on 23 September at the age of 88. An inspirational leader in big-science projects for several decades, including the International Thermonuclear Experimental Reactor (ITER), Aymar saw his term of office at CERN marked by the completion of construction and the first commissioning of the Large Hadron Collider (LHC). His experience of complex industrial projects proved to be crucial, as the CERN teams had to overcome numerous challenges linked to the LHC’s innovative technologies and their industrial production.

Robert Aymar was educated at École Polytechnique in Paris. He started his career in plasma physics at the Commissariat à l’Énergie Atomique (CEA), since renamed the Commissariat à l’Énergie Atomique et aux Énergies Alternatives, at the time when thermonuclear fusion was declassified and research started on its application to energy production. After being involved in several studies at CEA, Aymar contributed to the design of the Joint European Torus, the European tokamak project based on conventional magnet technology, built at Culham, UK, in the late 1970s. In the same period, CEA was considering a compact tokamak project based on superconducting magnet technology, for which Aymar decided to use pressurised superfluid helium cooling – a technology then recently developed by Gérard Claudet and his team at CEA Grenoble. Aymar was naturally appointed head of the Tore Supra tokamak project, built at CEA Cadarache from 1977 to 1988. The successful project served inter alia as an industrial-sized demonstrator of superfluid helium cryogenics, which became a key technology of the LHC.

As head of the Département des Sciences de la Matière at CEA from 1990 to 1994, Aymar set out to bring together the physics of the infinitely large and the infinitely small, as well as the associated instrumentation, in a department that has now become the Institut de Recherche sur les Lois Fondamentales de l’Univers. In that position, he actively supported CEA–CERN collaboration agreements on R&D for the LHC and served on many national and international committees. In 1993 he chaired the LHC external review committee, whose recommendation proved decisive in the project’s approval. From 1994 to 2003 he led the ITER engineering design activities under the auspices of the International Atomic Energy Agency, establishing the basic design and validity of the project that would be approved for construction in 2006. In 2001, the CERN Council called on his expertise once again, asking him to chair the external review committee for CERN’s activities.

When Robert Aymar took over as Director-General of CERN in 2004, the construction of the LHC was well under way. But there were many industrial and financial challenges, and a few production crises still to overcome. During his tenure, which saw the ramp-up, series production and installation of major components, the machine was completed and the first beams circulated. That first start-up in 2008 was followed by a major technical problem that led to a shutdown lasting several months. But the LHC had demonstrated that it could run, and in 2009 the machine was successfully restarted. Aymar’s term of office also saw a simplification of CERN’s structure and procedures, aimed at making the laboratory more efficient. He also set about reducing costs and secured additional funding to complete the construction and optimise the operation of the LHC. After retirement, he remained active as a scientific advisor to the head of the CEA, occasionally visiting CERN and the ITER construction site in Cadarache.

Robert Aymar was a dedicated and demanding leader, with a strong drive and search for pragmatic solutions in the activities he undertook or supervised. CERN and the LHC project owe much to his efforts. He was also a man of culture with a marked interest in history. It was a privilege to serve under his direction.
