From spinors to supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism for spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in particle-physics phenomenology with many original contributions to the topics covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras know well.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material found in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion of experimental results, providing a succinct treatment of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unified theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM scenarios. This gentle and very pedagogical pattern continues through chapter eight, before proceeding to a more demanding discussion of the supersymmetry algebra in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models, starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion of beyond-MSSM scenarios is given in chapter 16, including the NMSSM, seesaw models, GUTs and R-parity-violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four presents basic Feynman-diagram calculations in the SM and MSSM using the two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds to tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation is very practical and useful for those who want to see how to perform straightforward calculations in the SM or MSSM using the two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.


The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to lists of commonly used identities and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: the two-component spinor formalism is discussed in full detail in a very pedagogic and clear way. Parts two and three are a significant enhancement of the well known A Supersymmetry Primer (arXiv:hep-ph/9709356), which is very popular among beginners to supersymmetry and was written by Stephen Martin, one of the authors of this volume. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions, you will find a detailed exposition of the two-component formalism for spinors, SM calculations in this formalism and a detailed discussion of how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, but it contains much that will interest advanced physics students and particle-physics researchers in both theory and experiment. The size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in extensions of the SM, especially if they include supersymmetry.

Intensely focused on physics

The High Luminosity Large Hadron Collider, edited by Oliver Brüning and Lucio Rossi, is a comprehensive review of the upgrade project – the high-luminosity LHC (HL-LHC) – designed to boost the total event statistics of CERN’s Large Hadron Collider (LHC) by nearly an order of magnitude. The LHC is the world’s largest and, in many respects, most performant particle accelerator, and it may well represent the most complex infrastructure ever built for scientific research. The increase in event rate is achieved by higher beam intensities and smaller beam sizes at the collision points.

Brüning and Rossi’s book offers a comprehensive overview of this work across 31 chapters authored by more than 150 contributors. Given the complexity of the HL-LHC, it is advisable to read the excellent introductory chapter first to obtain an overview of the various physics aspects, the different components and the project structure. After coverage of the physics case and the upgrades to the LHC experiments, the operational experience with the LHC and its performance development are described.

The LHC’s upgrade is a significant project, as evidenced by the involvement of nine collaborating countries including China and the US, a materials budget exceeding one billion Swiss francs, more than 2200 years of integrated work, and the complexity of the physics and engineering. The safe handling of the enormous beam intensity was already a major challenge for the original LHC, and will be even more challenging with the upgraded beam parameters. For example, the instantaneous power carried by the circulating beam will be 7.6 TW, and the total stored beam energy 680 MJ – enough to bring two tonnes of water to the boil. Such numbers should be compared with the extremely low power density of 30 mW/cm³ that is sufficient to quench a superconducting magnet coil and interrupt the operation of the entire facility.
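As a back-of-the-envelope cross-check of these figures (a rough sketch assuming the standard LHC circumference of 26.7 km and textbook water properties; not from the book):

```python
# Rough cross-check of the quoted HL-LHC beam figures.
E_beam = 680e6                       # quoted stored beam energy, J
f_rev = 299792458 / 26659.0          # revolution frequency, Hz (~11.2 kHz)

# The full stored energy passes any fixed point once per turn, so the
# instantaneous power carried by the circulating beam is E_beam * f_rev.
print(f"beam power: {E_beam * f_rev / 1e12:.1f} TW")     # ~7.6 TW

# Energy to heat two tonnes of water from 20 C to the boiling point.
m, c_p, dT = 2000.0, 4186.0, 80.0    # kg, J/(kg K), K
print(f"water heating: {m * c_p * dT / 1e6:.0f} MJ")     # ~670 MJ
```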

The book continues with descriptions of the two subsystems of greatest importance for the luminosity increase: the superconducting magnets and the RF systems including the crab cavities.

The High Luminosity Large Hadron Collider

Besides the increase in intensity, the primary factor for instantaneous luminosity gain is obtained by a reduction in beam size at the interaction points (IPs), partly through a smaller emittance but mainly through improved beam optics. This change results in a larger beam in the superconducting quadrupoles beside the IP. To accommodate the upgraded beam and to shield the magnet coils from radiation, the aperture of these magnets is increased by more than a factor of two, to 150 mm. New quadrupoles have been developed, utilising the superconductor material Nb₃Sn, allowing higher fields at the location of the coils. Further measures include the cancellation of the beam crossing angle during collision by dynamic tilting of the bunch orientation using the superconducting crab cavities that were designed for this special application in the LHC. The authors make fascinating observations, for example regarding the enhanced sensitivity to errors due to the extreme beam demagnification at the IPs: a typical relative error of 10⁻⁴ in the strength of the IP quadrupoles results in a significant distortion of the beam optics, a so-called beta-beat of 7%.
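For readers who want the optics behind this sensitivity, the first-order perturbation result for a gradient error Δk is (a textbook formula, quoted here up to sign conventions; it is not taken from the book):

\[
\frac{\Delta\beta}{\beta}(s)=\frac{1}{2\sin 2\pi Q}\oint \beta(s')\,\Delta k(s')\,\cos\bigl(2|\phi(s)-\phi(s')|-2\pi Q\bigr)\,\mathrm{d}s',
\]

where Q is the betatron tune and φ the betatron phase. The beat amplitude scales with the beta function β(s′) at the location of the error, so the IP quadrupoles, where β reaches kilometre scales in the HL-LHC optics, amplify even a 10⁻⁴ error into a percent-level beat.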

Chapter eight describes the upgrade to the beam-collimation system, which is of particular importance for the safe operation of high-intensity beams. For ion collimation, halo particles are extracted most efficiently using collimators made from bent crystals.

The book continues with a description of the magnet-powering circuits. For the new superconducting magnets CERN is using “superconducting links” for the first time: cable sets made of a high-temperature superconductor that can carry enormous currents on many circuits in parallel within a small cross section. It suffices to cool them to temperatures of around 20 to 30 K with gaseous helium, by evaporating some of the liquid helium that is used for cooling the superconducting magnets in the accelerator.

Magnetic efforts

The next chapters cover machine protection, the interface with the detectors and the cryogenic system. Chapter 15 is dedicated to the effects of beam-induced stray radiation, in particular on electronics – an effect that has become quite important at high intensities in recent years. Another chapter covers the development of an 11 T dipole magnet that was intended to replace a regular superconducting magnet, thereby gaining space for additional collimators in the arc of the ring. Despite considerable effort, this programme was eventually dropped from the project because the new magnet technology could not be mastered with the required reliability for routine operation and, most importantly, alternative collimation solutions were identified.

Other chapters describe virtually all the remaining technical subsystems and beam-dynamics aspects of the collider, as well as the extensive test infrastructure required before installation in the LHC. A whole chapter is dedicated to high-field-magnet R&D – a field of utmost importance to the development of a next-generation hadron collider beyond the LHC.

Brüning and Rossi’s book will interest accelerator physicists in that it describes many outstanding beam-physics aspects of the HL-LHC. Engineers and readers with an interest in technology will also find many technical details on its subsystems.

Open-science cloud takes shape in Berlin

Findable. Accessible. Interoperable. Reusable. That’s the dream scenario for scientific data and tools. The European Open Science Cloud (EOSC) is a pan-European initiative to develop a web of “FAIR” data services across all scientific fields. EOSC’s vision is to put in place a system for researchers in Europe to store, share, process, analyse and reuse research outputs such as data, publications and software across disciplines and borders.

EOSC’s sixth symposium attracted 450 delegates to Berlin from 21 to 23 October 2024, with a further 900 participating online. Since its launch in 2017, EOSC activities have focused on conceptualisation, prototyping and planning. In order to develop a trusted federation of research data and services for research and innovation, EOSC is being deployed as a network of nodes. With the launch during the symposium of the EOSC EU node, this year marked a transition from design to deployment.

While EOSC is a flagship science initiative of the European Commission, FAIR data concerns researchers and stakeholders globally. Via the multiple projects under the EOSC umbrella that collaborate with software and data institutes around the world, a pan-European effort can be made to build a research landscape that encourages knowledge sharing, recognises researchers’ work and trains the next generation in best practices. The EU node – funded by the European Commission, and the first to be implemented – will serve as a reference for roughly 10 additional nodes to be deployed in a first wave, with more to follow. The nodes are accessible using institutional credentials based on GÉANT’s MyAccess or with an EU login. A first operational implementation of the EOSC Federation is expected by the end of 2025.

A thematic focus of this year’s symposium was the need for clear guidelines on the adaptation of FAIR governance for artificial intelligence (AI), which relies on the accessibility of large and high-quality datasets. AI models are often trained with synthetic data, large-scale simulations and first-principles mathematical models, although these may provide only an incomplete description of complex and highly nonlinear real-world phenomena. Once AI models are calibrated against experimental data, their predictions become increasingly accurate. Adopting FAIR principles for the production, collection and curation of scientific datasets will streamline the design, training, validation and testing of AI models (see, for example, Y Chen et al. 2021 arXiv:2108.02214).

EOSC includes five science clusters, from natural sciences to social sciences, with a dedicated cluster for particle physics and astronomy called ESCAPE: the European Science Cluster of Astronomy and Particle Physics. The future deployment of the ESCAPE Virtual Research Environment across multiple nodes will provide users with tools to bring together diverse experimental results, for example, in the search for evidence of dark matter, and to perform new analyses incorporating data from complementary searches.

First signs of antihyperhelium-4

Heavy-ion collisions at the LHC create suitable conditions for the production of atomic nuclei and exotic hypernuclei, as well as their antimatter counterparts, antinuclei and antihypernuclei. Measurements of these forms of matter are important for understanding the formation of hadrons from the quark–gluon plasma and studying the matter–antimatter asymmetry seen in the present-day universe.

Hypernuclei are exotic nuclei formed by a mix of protons, neutrons and hyperons, the latter being unstable particles containing one or more strange quarks. More than 70 years since their discovery in cosmic rays, hypernuclei remain a source of fascination for physicists due to their rarity in nature and the challenge of creating and studying them in the laboratory.

In heavy-ion collisions, hypernuclei are created in significant quantities, but only the lightest hypernucleus, hypertriton, and its antimatter partner, antihypertriton, have been observed. Hypertriton is composed of a proton, a neutron and a lambda hyperon containing one strange quark. Antihypertriton is made up of an antiproton, an antineutron and an antilambda.

Following hot on the heels of the observation of antihyperhydrogen-4 (a bound state of an antiproton, two antineutrons and an antilambda) earlier this year by the STAR collaboration at the Relativistic Heavy Ion Collider (RHIC), the ALICE collaboration at the LHC has now seen the first ever evidence for antihyperhelium-4, which is composed of two antiprotons, an antineutron and an antilambda. The result has a significance of 3.5 standard deviations. If confirmed, antihyperhelium-4 would be the heaviest antimatter hypernucleus yet seen at the LHC.


The ALICE measurement is based on lead–lead collision data taken in 2018 at a centre-of-mass energy of 5.02 TeV for each colliding pair of nucleons, be they protons or neutrons. Using a machine-learning technique that outperforms conventional hypernuclei search methods, the ALICE researchers searched the data for signals of hyperhydrogen-4, hyperhelium-4 and their antimatter partners. Candidates for (anti)hyperhydrogen-4 were identified via its decay into an (anti)helium-4 nucleus and a charged pion, whereas candidates for (anti)hyperhelium-4 were identified via its decay into an (anti)helium-3 nucleus, an (anti)proton and a charged pion.
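As a rough illustration of how such a selection can work (a toy sketch with invented feature distributions; this is not the ALICE analysis code), a boosted decision tree can be trained on topological variables of the reconstructed decay vertex:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy candidate selection for a multi-body hypernucleus decay.
# Features (decay length, daughter DCA, pointing angle) are typical
# topological variables; all distributions here are made up.
rng = np.random.default_rng(0)

def toy_sample(n, signal):
    decay_len = rng.exponential(4.0 if signal else 1.0, n)      # cm
    dca       = rng.exponential(0.02 if signal else 0.08, n)    # cm
    cos_pa    = 1 - rng.exponential(0.002 if signal else 0.02, n)
    return np.column_stack([decay_len, dca, cos_pa])

X = np.vstack([toy_sample(5000, True), toy_sample(5000, False)])
y = np.array([1] * 5000 + [0] * 5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_tr, y_tr)
print(f"toy selection accuracy: {bdt.score(X_te, y_te):.3f}")
```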

In addition to finding evidence of antihyperhelium-4 with a significance of 3.5 standard deviations, and evidence of antihyperhydrogen-4 with a significance of 4.5 standard deviations, the ALICE team measured the production yields and masses of both hypernuclei.

For both hypernuclei, the measured masses are compatible with the current world-average values. The measured production yields were compared with predictions from the statistical hadronisation model, which provides a good description of the formation of hadrons and nuclei in heavy-ion collisions. This comparison shows that the model’s predictions agree closely with the data if both excited hypernuclear states and ground states are included in the predictions. The results confirm that the statistical hadronisation model can also provide a good description of the production of hypernuclei, modelled as compact objects with sizes of around 2 femtometres.
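Schematically, and in the usual Boltzmann approximation, the thermal-model yield of a state with mass m and spin degeneracy g emitted from a fireball of volume V at a chemical freeze-out temperature T ≈ 155 MeV behaves as

\[
\frac{\mathrm{d}N}{\mathrm{d}y}\;\propto\;g\,V\,\frac{m^{2}T}{2\pi^{2}}\,K_{2}\!\left(\frac{m}{T}\right)\;\xrightarrow{\;m\gg T\;}\;g\,V\left(\frac{mT}{2\pi}\right)^{3/2}e^{-m/T},
\]

so every additional GeV of mass costs a Boltzmann factor of several hundred. This exponential penalty is what makes a state as heavy as (anti)hyperhelium-4 so rare.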

The researchers also determined the antiparticle-to-particle yield ratios for both hypernuclei and found that they agree with unity within the experimental uncertainties. This agreement is consistent with ALICE’s observation of the equal production of matter and antimatter at LHC energies and adds to the ongoing research into the matter–antimatter imbalance in the universe.

Tsung-Dao Lee 1926–2024

On 4 August 2024, the great physicist Tsung-Dao Lee (also known as T D Lee) passed away at his home in San Francisco, aged 97.

Born in 1926 to an intellectual family in Shanghai, Lee had his education disrupted several times by the war against Japan: he neither completed high school nor graduated from university. In 1943, however, he took the national entrance exam and, with outstanding scores, was admitted to the chemical-engineering department of Zhejiang University. He then transferred to the physics department of Southwest Associated University, a temporary wartime union of Peking, Tsinghua and Nankai universities. In the autumn of 1946, on the recommendation of Ta-You Wu, Lee went to study at the University of Chicago under the supervision of Enrico Fermi, earning his PhD in June 1950.

From 1950 to 1953 Lee conducted research at the University of Chicago, the University of California, Berkeley and the Institute for Advanced Study in Princeton. During this period, he made significant contributions to particle physics, statistical mechanics, field theory, astrophysics, condensed-matter physics and turbulence theory, demonstrating a wide range of interests and deep insight across several frontiers of physics. In a 1952 paper on turbulence, for example, Lee pointed out the significant difference between fluid dynamics in two and three dimensions, namely that there is no turbulence in two dimensions. This finding provided essential conditions for John von Neumann’s model, which used supercomputers to simulate weather.

Profound impact

During this period, Lee and Chen-Ning Yang collaborated on two foundational works in statistical physics concerning phase transitions, discovering the famous “unit circle theorem” on lattice gases, which had a profound impact on statistical mechanics and phase-transition theory.

Between 1952 and 1953, during a visit to the University of Illinois at Urbana-Champaign, Lee was inspired by discussions with John Bardeen (winner, with Leon Neil Cooper and John Robert Schrieffer, of the 1972 Nobel Prize in Physics for developing the first successful microscopic theory of superconductivity). Lee applied field-theory methods to study the motion of slow electrons in polar crystals, pioneering the use of field theory to investigate condensed-matter systems. According to Schrieffer, Lee’s work directly influenced the development of their “BCS” theory of superconductivity.

In 1953, after taking an assistant professor position at Columbia University, Lee proposed a renormalisable field-theory model, widely known as the “Lee Model,” which had a substantial impact on the study of renormalisation in quantum field theory.

On 1 October 1956, Lee and Yang’s theory of parity non-conservation in weak interactions was published in Physical Review. It was quickly confirmed by the experiments of Chien-Shiung Wu and others, earning Lee and Yang the 1957 Nobel Prize in Physics – one of the fastest recognitions in the history of the Nobel Prize. The discovery of parity violation significantly challenged the established understanding of fundamental physical laws and directly led to the establishment of the universal V–A theory of weak interactions in 1958. It also laid the groundwork for the unified theory of weak and electromagnetic interactions developed a decade later.

In 1957, Lee, Oehme and Yang extended symmetry studies to combined charge–parity (CP) transformations. The CP non-conservation discovered in neutral K-meson decays in 1964 validated the importance of Lee and his colleagues’ theoretical work, as well as the later establishment of CP violation theories. The same year, Lee was appointed the Fermi Professor of Physics at Columbia.

In the 1970s, Lee published papers exploring the origins of CP violation, suggesting that it might stem from spontaneous symmetry breaking in the vacuum and predicting several significant phenomenological consequences. In 1974, Lee and G C Wick investigated whether spontaneously broken symmetries in the vacuum could be partially restored under certain conditions. They found that heavy-ion collisions could achieve this restoration and produce observable effects. This work pioneered the study of the quantum chromodynamics (QCD) vacuum, phase transitions and quark–gluon plasma. It also laid the theoretical and experimental foundation for relativistic heavy-ion collision physics.

From 1982, Lee devoted significant effort to solving non-perturbative QCD using lattice methods. Together with Norman Christ and Richard Friedberg, he developed stochastic lattice field theory and promoted first-principles lattice simulations on supercomputers, greatly advancing lattice-QCD research.

Immense respect

In 2011 Lee retired as a professor emeritus from Columbia at the age of 85. In China, he enjoyed immense respect, not only for being, together with Chen-Ning Yang, the first Chinese scientist to win a Nobel Prize, but also for enhancing the level of science and education in China and promoting Sino-American collaboration in high-energy physics. This led to the establishment and successful construction of China’s first major high-energy-physics facility, the Beijing Electron–Positron Collider (BEPC). At the beginning of this century, Lee supported and personally helped the upgrade of BEPC, the Daya Bay reactor-neutrino experiment and other projects. In addition, he initiated and promoted the China–US Physics Examination and Application (CUSPEA) programme, and helped establish the National Natural Science Foundation of China and the postdoctoral system in China.

Tsung-Dao Lee’s contributions to an extraordinarily wide range of fields profoundly shaped humanity’s understanding of the basic laws of the universe.

Robert Aymar 1936–2024

Robert Aymar, CERN Director-General from January 2004 to December 2008, passed away on 23 September at the age of 88. An inspirational leader in big-science projects for several decades, including the International Thermonuclear Experimental Reactor (ITER), he presided over a term of office at CERN marked by the completion of construction and the first commissioning of the Large Hadron Collider (LHC). His experience of complex industrial projects proved crucial, as the CERN teams had to overcome numerous challenges linked to the LHC’s innovative technologies and their industrial production.

Robert Aymar was educated at École Polytechnique in Paris. He started his career in plasma physics at the Commissariat à l’Énergie Atomique (CEA), since renamed the Commissariat à l’Énergie Atomique et aux Énergies Alternatives, at the time when thermonuclear fusion was declassified and research started on its application to energy production. After being involved in several studies at CEA, Aymar contributed to the design of the Joint European Torus, the European tokamak project based on conventional magnet technology, built in Culham, UK, in the late 1970s. In the same period, CEA was considering a compact tokamak project based on superconducting magnet technology, for which Aymar decided to use pressurised superfluid-helium cooling – a technology then recently developed by Gérard Claudet and his team at CEA Grenoble. Aymar was naturally appointed head of the Tore Supra tokamak project, built at CEA Cadarache from 1977 to 1988. The successful project served inter alia as an industrial-sized demonstrator of superfluid-helium cryogenics, which became a key technology of the LHC.

As head of the Département des Sciences de la Matière at CEA from 1990 to 1994, Aymar set out to bring together the physics of the infinitely large and the infinitely small, as well as the associated instrumentation, in a department that has now become the Institut de Recherche sur les Lois Fondamentales de l’Univers. In that position, he actively supported CEA–CERN collaboration agreements on R&D for the LHC and served on many national and international committees. In 1993 he chaired the LHC external review committee, whose recommendation proved decisive in the project’s approval. From 1994 to 2003 he led the ITER engineering design activities under the auspices of the International Atomic Energy Agency, establishing the basic design and validity of the project that would be approved for construction in 2006. In 2001, the CERN Council called on his expertise once again by entrusting him to chair the external review committee for CERN’s activities.

When Robert Aymar took over as Director-General of CERN in 2004, the construction of the LHC was well under way. But there were many industrial and financial challenges, and a few production crises still to overcome. During his tenure, which saw the ramp-up, series production and installation of major components, the machine was completed and the first beams circulated. That first start-up in 2008 was followed by a major technical problem that led to a shutdown lasting several months. But the LHC had demonstrated that it could run, and in 2009 the machine was successfully restarted. Aymar’s term of office also saw a simplification of CERN’s structure and procedures, aimed at making the laboratory more efficient. He also set about reducing costs and secured additional funding to complete the construction and optimise the operation of the LHC. After retirement, he remained active as a scientific advisor to the head of the CEA, occasionally visiting CERN and the ITER construction site in Cadarache.

Robert Aymar was a dedicated and demanding leader, with a strong drive and search for pragmatic solutions in the activities he undertook or supervised. CERN and the LHC project owe much to his efforts. He was also a man of culture with a marked interest in history. It was a privilege to serve under his direction.

James D Bjorken 1934–2024

James Bjorken

Theoretical physicist James D “BJ” Bjorken, whose work played a key role in revealing the existence of quarks, passed away on 6 August aged 90. Part of a wave of young physicists who came to Stanford in the mid-1950s, Bjorken also made important contributions to the design of experiments and the efficient operation of accelerators.

Born in Chicago on 22 June 1934, James Daniel Bjorken grew up in Park Ridge, Illinois, where he was drawn to mathematics and chemistry. His father, who had immigrated from Sweden in 1923, was an electrical engineer who repaired industrial motors and generators. After earning a bachelor’s degree at MIT, Bjorken went to Stanford University as a graduate student in 1956. He was one of half a dozen MIT physicists, including his adviser Sidney Drell and future director of the SLAC National Accelerator Laboratory Burton Richter, who were drawn by new facilities on the Stanford campus. These included an early linear accelerator that scattered electrons off targets to explore the nature of the neutron and proton.

Ten years later those experiments moved to SLAC, where the newly constructed two-mile linear accelerator boosted electrons to much higher energies. By that time, theorists had proposed that protons and neutrons contained fundamental particles, but no one knew much about their properties or how to go about proving they were there. Bjorken, who joined the Stanford faculty in 1961, wrote an influential 1969 paper in which he suggested that electrons could be bounced off point-like particles within the proton, a process known as deep inelastic scattering, and he started lobbying experimentalists to test the idea with the SLAC accelerator.

Carrying out the experiments would require a new mathematical language and Bjorken contributed to its development, with simplifications and improvements from two of his students (John Kogut and Davison Soper) and Caltech physicist Richard Feynman. In the late 1960s and early 1970s, those experiments confirmed that the proton does indeed consist of fundamental particles – a discovery honoured with the 1990 Nobel Prize in Physics for SLAC’s Richard Taylor and MIT’s Henry Kendall and Jerome Friedman. Bjorken’s role was later recognised by the prestigious Wolf Prize in Physics and the 2015 High Energy and Particle Physics Prize of the European Physical Society.

While the invention of “Bjorken scaling” was his most famous scientific achievement, Bjorken was also known for identifying a wide variety of interesting problems and tackling them in novel ways. He was somewhat iconoclastic. He also had colourful and often distinctly visual ways of thinking about physics – for instance, describing physics concepts in terms of plumbing or a baked Alaska. He never sought recognition for himself and was very generous in recognising the contributions of others.

In 1979 Bjorken headed east to become associate director for physics at Fermilab. He returned to SLAC in 1989, where he continued to innovate. Over the course of his career, among other things, he invented ideas related to the existence of the charm quark and the circulation of protons in a storage ring. He helped popularise the unitarity triangle and, along with Drell, co-wrote the widely used graduate-level textbooks Relativistic Quantum Mechanics and Relativistic Quantum Fields. In 2009 Bjorken contributed to an influential paper by three younger theorists suggesting approaches for searching for “dark” photons, hypothetical carriers of a new fundamental force.

He was also awarded the American Physical Society’s Dannie Heineman Prize, the Department of Energy’s Ernest Orlando Lawrence Award, and the Dirac Medal from the International Center for Theoretical Physics. In 2017 he shared the Robert R Wilson Prize for Achievement in the Physics of Particle Accelerators for groundbreaking theoretical work he did at Fermilab that helped to sharpen the focus of particle beams in many types of accelerators.

Known for his warmth, generosity and collaborative spirit, Bjorken passionately pursued many interests outside physics, from mountain climbing, skiing, cycling and windsurfing to listening to classical music. He divided his time between homes in Woodside, California and Driggs, Idaho, and thought nothing of driving long distances to see an opera in Chicago or dropping in unannounced at the office of some fellow physicist for deep conversations about general relativity, dark matter or dark energy – once remarking: “I’ve found the most efficient way to test ideas and get hard criticism is one-on-one conversation with people who know more than I do.”

Max Klein 1951–2024

Experimental particle physicist Max Klein, whose exceptional career spanned theory, detectors, accelerators and data analysis, passed away on 23 August 2024.

Born in Berlin in 1951, Max earned his diploma in physics in 1973 from Humboldt University of Berlin (HUB) in the German Democratic Republic (GDR), with a thesis on low-energy heavy-ion physics. He received his PhD in 1977 from the Institute for High Energy Physics (IHEP) of the Academy of Sciences of the GDR in Zeuthen (now part of DESY) on the subject of multiparticle production, and his habilitation degree in 1984 from HUB. From 1973 to 1991 he conducted research at IHEP Zeuthen, spending several years from 1977 at the Joint Institute for Nuclear Research in Dubna, and from the 1980s at DESY and CERN. For his role in determining the asymmetry of the interaction of polarised positive and negative muons with the NA4 muon spectrometer at CERN’s SPS M2 muon beam, he was awarded the Max von Laue Medal by the Academy of Sciences of the GDR in 1985.

Max worked as a scientist at DESY from 1992 to 2006. A member of the H1 experiment at the lepton–proton collider HERA from 1985, he focused his research on the internal structure of the proton using deep inelastic scattering. He served two mandates as spokesperson of the H1 collaboration, from 2002 to 2006.

Max became a professor at the University of Liverpool in 2006, and the following year he joined the ATLAS collaboration. He served as chair of the ATLAS publication committee and as editorial-board chair of the ATLAS detector paper and other important works. Max made key contributions to data analysis, notably on the high-precision 7 TeV inclusive W and Z boson production cross sections and associated properties, and was a convener of the PDF forum in 2015–2016. From 2017 to 2019, Max was chair of the ATLAS collaboration board, during which he made invaluable contributions to the experiment and collaboration life. He led the Liverpool ATLAS team from 2009 to 2017. Under his guidance, the 30-strong group contributed to the maintenance of the SCT detector, as well as to ATLAS data preparation and physics analyses. The group also developed hybrids, mechanics and software for the new ITk pixel and strip detectors.

In recent years, Max’s scientific contributions extended well beyond ATLAS. He was a strong advocate for the development of an electron-beam upgrade of the LHC, the LHeC, and collaborated closely with the CERN accelerator group and international teams on the development of energy-recovery linacs. Here, he was influential in the development of the PERLE demonstrator accelerator at IJCLab, for which he acted as spokesperson until 2023.


In 2013 Max was awarded the Max Born Prize by the Deutsche Physikalische Gesellschaft and the UK Institute of Physics for his fundamental experimental contributions to the elucidation of the proton structure using deep-inelastic scattering. The prize citation stands as a testament to his scientific stature: “In the last 40 years, Max Klein has dedicated himself to the study of the innermost structure of the proton. In the 1990s he was a leading figure in the discovery that gluons form a surprisingly large component of proton structure. These gluons play an important role in the production of Higgs bosons in proton–proton collisions for which experiments at CERN have recently found promising candidates.”

Besides being a distinguished scientist, Max was a man of unwavering principles, grounded in his selfless interactions with others and his deep sense of humanity. Drawing from his experience as a bridge between East and West, he was a strong advocate for international scientific collaboration and the responsibility of scientists toward their societies. He had a strong desire and ability to mentor and support students, postdocs and early-career researchers, and an admirably wise and calm approach to problem solving.

Max Klein had a profound knowledge of physics and a tireless dedication to ATLAS and to experimental particle physics in general. His passing is a profound loss for the entire community, but his legacy will endure.

Ian Shipsey 1959–2024

Ian Shipsey

Experimental particle physicist Ian Shipsey, a remarkable leader and individual, passed away suddenly and unexpectedly in Oxford on 7 October.

Ian was educated at Queen Mary University of London and the University of Edinburgh, where he earned his PhD in 1986 for his work on the NA31 experiment at CERN. Moving to the US, he joined Syracuse as a postdoc and then became a faculty member at Purdue, where, in 2007, he was named Julian Schwinger Distinguished Professor of Physics. In 2013 he was appointed the Henry Moseley Centenary Professor of Experimental Physics at the University of Oxford.

Ian was a central figure behind the success of the CLEO experiment at Cornell, which was for many years the world’s pre-eminent detector in flavour physics. He led many analyses, most notably in semi-leptonic decays, from which he measured four different CKM matrix elements, and oversaw the construction of the silicon vertex detector for the CLEO III phase of the experiment. He served as co-spokesperson between 2001 and 2004, and was one of the intellectual leaders who saw the opportunity to reconfigure the detector and the CESR accelerator as a facility for the precise exploration of physics at the charm threshold. The resulting CLEO-c programme yielded many important measurements in the charm system and enabled critical experimental validations of lattice-QCD predictions.

Influential voice

At CMS, Ian played a leading role in the construction of the forward-pixel detector, exploiting the silicon laboratory he had established at Purdue. His contributions to CMS physics analyses were no less significant. These included the observation of upsilon suppression in heavy-ion collisions (a smoking gun for the production of quark–gluon plasma) and the discovery, reported in a joint Nature paper with the LHCb collaboration, of the ultra-rare decay Bs → μ+μ−. He was also an influential voice as CMS collaboration board chair (2013–2014).

After moving to the University of Oxford and, in 2015, joining the ATLAS collaboration, Ian became Oxford’s ATLAS team leader and established state-of-the-art cleanrooms, which are used for the construction of the future inner tracker (ITk) pixel end-cap modules. Together with his students, he contributed to measurements of the Higgs boson mass and width, and to the search for its rare di-muon decay. Ian also led the UK’s involvement in LSST (now the Vera Rubin Observatory), where Oxford is providing deep expertise for the CCD cameras.

Following his tenure as the dynamic head of the particle physics sub-department, Ian was elected head of Oxford physics in 2018 and re-elected in 2023. Among his many successful initiatives, he played a leading role in establishing the £40 million UKRI “Quantum Technologies for Fundamental Physics” programme, which is advancing quantum-based applications across various areas of physics. With the support of this programme, he led the development of novel atom interferometers for light dark matter searches and gravitational-wave detection.

Ian took a central role in establishing roadmaps for detector R&D both in the US and (via ECFA) in Europe. He was one of the coordinators and a driving force of the ECFA R&D roadmap panel, and co-chair of the US effort to define the basic research needs in this area. As chair of the ICFA instrumentation, innovation and development panel, he promoted R&D in instrumentation for particle physics and the recognition of excellence in this field.

Among his many prestigious honours, Ian was elected a Fellow of the Royal Society in 2022 and received the James Chadwick Medal and Prize from the Institute of Physics in 2019. He served on numerous collaboration boards, panels, and advisory and decision-making committees shaping national and international science strategies.

The success of Ian’s career is even more remarkable given that he lost his hearing in 1989. He received a cochlear implant, which restored limited auditory ability, and gave unforgettable talks on this subject, explaining the technology and its impact on his life.

Ian was an outstanding physicist and also a remarkable individual. His legacy is not only an extensive body of transformative scientific results, but also the impact that he had on all who met him. He was equally charming whether speaking to graduate students or lab directors; everyone felt better after talking to Ian. His success derived from a remarkable combination of optimism and limitless energy. Once he had identified the correct course of action, he would not allow himself to be dissuaded by cautious pessimists who worried about the challenges ahead. His colleagues and many graduate students will continue to benefit for many years from the projects he initiated. The example he set as a physicist, and the memories he leaves as a friend, will endure still longer.

W mass snaps back

Based on the latest data inputs, the Standard Model (SM) constrains the mass of the W boson (mW) to be 80,353 ± 6 MeV. At tree level, mW depends only on the mass of the Z boson and the weak and electromagnetic couplings. The boson’s tendency to briefly transform into a top quark and a bottom quark causes the largest quantum correction. Any departure from the SM prediction could signal the presence of additional loops containing unknown heavy particles.
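Concretely, the relation behind this statement is the standard on-shell formula (textbook electroweak theory, not spelled out in the article):

\[
m_W^2\left(1-\frac{m_W^2}{m_Z^2}\right)=\frac{\pi\alpha}{\sqrt{2}\,G_F}\,\bigl(1+\Delta r\bigr),
\]

where α and G_F are the electromagnetic and weak couplings and Δr collects the quantum corrections; the dominant contribution to Δr grows quadratically with the top-quark mass, which is how the top–bottom loop pulls on the predicted mW.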

The CDF experiment at the Tevatron observed just such a departure in 2022, plunging the boson into a midlife crisis 39 years after it was discovered at CERN’s SppS collider (CERN Courier September/October 2023 p27). A new measurement from the CMS experiment at the LHC now contradicts the anomaly reported by CDF. While the CDF result stands seven standard deviations above the SM, CMS’s measurement aligns with the SM prediction and previous results at the LHC. The CMS and CDF results claim joint first place in precision, provoking a dilemma for phenomenologists.

New-physics puzzle

“The result by CDF remains puzzling, as it is extremely difficult to explain the discrepancy with the three LHC measurements by the presence of new physics, in particular as there is also a discrepancy with D0 at the same facility,” says Jens Erler of Johannes Gutenberg-Universität Mainz. “Together with measurements of the weak mixing angle, the CMS result confirms the validity of the SM up to new physics scales well into the TeV region.”

“I would not call this ‘case closed’,” agrees Sven Heinemeyer of the Universidad Autónoma de Madrid. “There must be a reason why CDF got such an anomalously high value, and understanding what is going on may be very beneficial for future investigations. We know that the SM is not the last word, and there are clear cases that require physics beyond the SM (BSM). The question is at which scale BSM physics appears, or how strongly it is coupled to the SM particles.”


To obtain their result, CDF analysed four million W-boson decays originating from 1.96 TeV proton–antiproton collisions recorded at Fermilab’s Tevatron collider between 2002 and 2011. In stark disagreement with the SM, the analysis yielded a mass of 80,433.5 ± 9.4 MeV. This result induced the ATLAS collaboration to revisit its 2017 analysis of W → μν and W → eν decays in 7 TeV proton–proton collisions using the latest global data on parton distribution functions, which describe the probable momenta of quarks and gluons inside the proton. A newly developed fit was also implemented. The central value remained consistent with the SM, with a reduced uncertainty of 16 MeV, increasing the tension with the new CDF result. A less precise measurement by the LHCb collaboration also favoured the SM (CERN Courier May/June 2023 p10).

CMS now reports mW to be 80,360.2 ± 9.9 MeV, concluding a study of W → μν decays begun eight years ago.

“One of the main strategic choices of this analysis is to use a large dataset of Run 2 data,” says CMS spokesperson Gautier Hamel de Monchenault. “We are using 16.8 fb⁻¹ of 13 TeV data at a relatively high pileup of on average 25 interactions per bunch crossing, leading to very large samples of about 7.5 million Z bosons and 90 million W bosons.”

With high pileup and high energies come additional challenges. The measurement uses an innovative analysis technique that benchmarks the W → μν systematics using Z → μμ decays as an independent validation, in which one muon is treated as a neutrino. The ultimate precision of the measurement relies on reconstructing the muon’s momentum in the detector’s silicon tracker to better than one part in 10,000 – a groundbreaking level of accuracy built on minutely modelling energy loss, multiple scattering, magnetic-field inhomogeneities and misalignments. “What is remarkable is that this incredible level of precision on the muon momentum measurement is obtained without using Z → μμ as a calibration candle, but only using a huge sample of J/ψ → μμ events,” says Hamel de Monchenault. “In this way, the Z → μμ sample can be used for an independent closure test, which also provides a competitive measurement of the Z mass.”
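A minimal toy of the closure-test logic described here (illustrative numbers only; this is not the CMS code): extract a momentum-scale correction from the J/ψ → μμ peak, then verify that the statistically independent Z → μμ sample lands on the reference mass.

```python
import numpy as np

M_JPSI, M_Z = 3.0969, 91.1876        # reference masses, GeV
rng = np.random.default_rng(1)
true_scale = 1.0003                  # hypothetical momentum-scale bias

# Toy dimuon-mass samples, both biased by the same scale factor
jpsi = rng.normal(M_JPSI * true_scale, 0.03, 500_000)
z    = rng.normal(M_Z    * true_scale, 2.50, 100_000)

# Step 1: calibrate the scale on the J/psi peak alone
scale_corr = M_JPSI / jpsi.mean()

# Step 2: closure test on the independent Z sample
pull = (z.mean() * scale_corr - M_Z) / M_Z
print(f"scale correction:   {scale_corr:.6f}")
print(f"relative Z closure: {pull:+.2e}")   # closes at the 1e-4 level
```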

Measurement matters

Measuring mW using W → μν decays is challenging because the neutrino escapes undetected. mW must be inferred either from the distribution of the transverse mass that is visible in the events (mT) or from the distribution of the transverse momentum of the muons (pT). The mT approach used by CDF is the most precise option at the Tevatron, but typically less precise at the LHC, where the hadronic recoil is difficult to distinguish from pileup. The LHC experiments also face a greater challenge when reconstructing mW from distributions of pT. In proton–antiproton collisions at the Tevatron, W bosons could be created via the annihilation of pairs of valence quarks. In proton–proton collisions at the LHC, the antiquark in the annihilating pair must come from the less well understood sea; and at LHC energies, the partons carry lower fractions of the proton’s momentum – a less well constrained domain of the parton distribution functions.
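For reference, the transverse mass is built from the muon transverse momentum and the missing transverse momentum attributed to the neutrino,

\[
m_T=\sqrt{2\,p_T^{\mu}\,p_T^{\nu}\bigl(1-\cos\Delta\phi_{\mu\nu}\bigr)},
\]

and its distribution ends in a sharp Jacobian edge near mW. Because p_T^ν must be inferred from the hadronic recoil, pileup directly degrades this observable, which is why the muon pT spectrum carries more weight at the LHC.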

“Instead of exploiting the Z → μμ sample to tune the parameters of W-boson production, CMS is using the W data themselves to constrain the theory parameters of the prediction for the pT spectrum, and using the independent Z → μμ sample to validate this procedure,” explains Hamel de Monchenault. “This validation gives us great confidence in our theory modelling.”

“The CDF collaboration doesn’t have an explanation for the incompatibility of the results,” says spokesperson David Toback of Texas A&M University. “Our focus is on the checks of our own analysis and understanding of the ATLAS and CMS methods so we can provide useful critiques that might be helpful in future dialogues. On the one hand, the consistency of the ATLAS and CMS results must be taken seriously. On the other, given the number of iterations and improvements needed over decades for our own analysis – CDF has published five times over 30 years – we still consider both LHC results ‘early days’ and look forward to more details, improved methodology and additional measurements.”

The LHC experiments each plan improvements using new data. The results will build on a legacy of electroweak precision at the LHC that was not anticipated to be possible at a hadron collider (CERN Courier September/October 2024 p29).

“The ATLAS collaboration is extremely impressed with the new measurement by CMS and the extraordinary precision achieved using high-pileup data,” says spokesperson Andreas Hoecker. “It is a tour de force, accomplished by means of a highly complex fit, for which we applaud the CMS collaboration.” ATLAS’s next measurement of mW will focus on low-pileup data, to improve sensitivity to mT relative to their previous result.


The LHCb collaboration is working on an update of its measurement using the full Run 2 dataset. LHCb’s forward acceptance may prove to be powerful in a global fit. “LHCb probes parton density functions in different phase-space regions, and that makes the measurements from LHCb anticorrelated with those of ATLAS and CMS, promising a significant impact on the average, even if the overall uncertainty is larger,” says spokesperson Vincenzo Vagnoni. The goal is to progress LHC measurements towards a combined precision of 5 MeV. CMS plans several improvements to its own analysis.

“There is still a significant factor to be gained on the momentum scale, with which we could reach the same precision on the Z-boson mass as LEP,” says Hamel de Monchenault. “We are confident that we can also use a future, large low-pileup run to exploit the W recoil and mT to complement the muon pT spectrum. Electrons can also be used, although in this case the Z sample could not be kept independent in the energy calibration.”
