Sigurd Hofmann 1944–2022

Sigurd Hofmann, an extraordinary scientist, colleague and teacher, passed away on 17 June 2022 at the age of 78. Among the highlights of his scientific life were the discovery of proton radioactivity in 1981 and the synthesis of six new superheavy chemical elements between 1981 and 1996.

Sigurd was born on 15 February 1944 in Böhmisch-Kamnitz (Bohemia) and studied physics at TH Darmstadt, where he received his diploma in 1969 and his doctorate in 1974 with Egbert Kankeleit. Afterwards, he joined the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, his scientific work there occupying him for almost 50 years. Accuracy and scientific exactness were important to him from the beginning. He investigated fusion reactions and radioactive decays in the group of Peter Armbruster and worked with Gottfried Münzenberg. 

Sigurd achieved international fame through the discovery of proton radioactivity from the ground state of 151Lu in 1981, a previously unknown decay mechanism. When analysing the data, he benefited from his pronounced thoroughness and scientific curiosity. At the same time, he began work on the synthesis, unambiguous identification and study of the properties of the heaviest chemical elements, which were to shape his further scientific life. The first highlights were the synthesis of the new elements bohrium (Bh), hassium (Hs) and meitnerium (Mt) between 1981 and 1984, with which GSI entered the international stage of this renowned research field. The semiconductor detectors that Sigurd had developed specifically for these experiments were far ahead of their time, and are now used worldwide to search for new chemical elements.

At the end of the 1990s Sigurd took over the management of the Separator for Heavy Ion Reaction Products (SHIP) group and, after making instrumental improvements to detectors and electronics, crowned his scientific success with the discovery of the elements darmstadtium (Ds), roentgenium (Rg) and copernicium (Cn) in the years 1994 to 1996. The concept for “SHIP-2000”, a strategy paper developed under his leadership in 1999 for long-term heavy-element research at GSI, is still relevant today. In 2009 he was appointed Helmholtz professor and from then on was able to devote himself entirely to scientific work again. For many years he also maintained an intensive collaboration and scientific exchange with his Russian colleagues in Dubna, where he co-discovered the element flerovium (Fl) in a joint experiment.

For his outstanding research work and findings, Sigurd received a large number of renowned awards and prizes; too many, in fact, to mention. A diligent writer and speaker, he was invited to talk at countless international conferences, authored a large number of review articles, books and book chapters, and many widely cited publications. He also liked to present scientific results at public events. In doing so, he was able to develop a thrilling picture of modern physics, but also of the big questions of cosmology and element synthesis in stars; he was also able to convey very clearly to the public how atoms can be made “visible”.

Many chapters of Sigurd’s contemporary scientific life are recorded in his 2002 book On Beyond Uranium (CRC Press). His modesty and friendly nature were remarkable. You could always rely on him. His care, accuracy and deliberateness in all work were outstanding, and his persistence was one of the foundations for ground-breaking scientific achievements. He was always in the office or at an experiment, even late in the evening and on weekends, so you could talk to him at any time and were always rewarded with detailed answers and competent advice.

We are pleased that we were able to work with such an excellent scientist and colleague, as well as an outstanding teacher and a great person, for so many years.

Karel Cornelis 1955–2022

Our dear colleague and friend Karel Cornelis passed away unexpectedly on 20 December 2022.

After finishing his studies in physics at the University of Leuven (Belgium), Karel joined CERN in 1983 as engineer-in-charge of the Super Proton Synchrotron (SPS) at the time when the machine was operated as a proton–antiproton collider. During his career Karel greatly contributed to the commissioning and performance development and follow-up of the SPS during its various phases as proton–antiproton collider, LEP injector, high-intensity fixed-target machine and as the LHC injector of proton and ion beams. He had a profound and extensive knowledge of the machine, from complex beam dynamics aspects to the engineering details of its various systems, and was the reference whenever new beam requirements or modes of operation were discussed. 

Karel was an extremely competent and rigorous physicist, but also a generous and dedicated mentor who trained generations of control-room technicians, shift leaders and machine physicists and engineers, helping them to grow and take on responsibilities while remaining available to lend a hand when needed. His positive attitude and humour have left a lasting imprint, so much so that “Think like a proton: always positive!” has become the motto of the SPS operation team, and is now visible in the SPS island in the CERN Control Centre.

Karel had the rare gift of explaining complex phenomena with simple but accurate models and clear examples, whether it was accelerator physics and technology, or physics and engineering more generally. He gave a fascinating series of machine shut-down lectures covering the history of the SPS, synchrotron radiation and one of his passions, aviation, with a talk on “Air and the airplanes that fly in it”. 

Karel was a larger-than-life tutor, friend, reference point, expert and father figure to generations of us. He was much missed in the SPS island and beyond following his retirement in September 2019, and will be even more so now.  

New physics in b decays

There are compelling reasons to believe that the Standard Model (SM) of particle physics, while being the most successful theory of the fundamental structure of the universe, does not offer the complete picture of reality. However, until now, no new physics beyond the SM has been firmly established through direct searches at different energy scales. This motivates indirect searches, performed by precision examination of phenomena sensitive to contributions from possible new particles, and comparing their properties with the SM expectations. This is conceptually similar to how, decades ago, our understanding of radioactive beta decay allowed the existence and properties of the W boson to be predicted.

New Physics in b decays, by Marina Artuso, Gino Isidori and the late Sheldon Stone, is dedicated to precision measurements in decays of hadrons containing a b quark. Due to their high mass, these hadrons can decay into dozens of different final states, providing numerous ways to challenge our understanding of particle physics. As is usual for indirect searches, the crucial task is to understand and control all SM contributions to these decays. For b-hadron decays, the challenge is to control the effects of the strong interaction, which are difficult to calculate.

Both sides of the coin

The authors committed to a challenging task: providing a snapshot of a field that has developed considerably during the past decade. They highlight key measurements that generated interest in the community, often due to hints of deviations from the SM expectations. Some of the reported anomalies have diminished since the book was published, after larger datasets were analysed. Others continue to intrigue researchers. This natural scientific progress leads to a better understanding of both the theoretical and experimental sides of the coin. The authors exercise reasonable caution over the significance of the anomalies they present, warning the reader of the look-elsewhere effect, and carefully define the relevant observables. When discussing specific decay modes, they explain their choice compared to other processes. This pedagogical approach makes the book very useful for early-career researchers diving into the topic. 

The book starts with a theoretical introduction to heavy-quark physics within the SM, plotting avenues for searches for possible new-physics effects. Key theoretical concepts are introduced, along with the experiments that contributed most significantly to the field. The authors continue with an overview of “traditional” new-physics searches, strongly interleaving them with precision measurements of the free parameters of the SM, such as the couplings between quarks and the W boson. By determining these parameters precisely with several alternative experimental approaches, one hopes to observe discrepancies. An in-depth review of the experimental measurements, also featuring their complications, is confronted with theoretical interpretations. While some of the discrepancies stand out, it is difficult to attribute them to new physics as long as alternative interpretations are not excluded.

New Physics in b Decays

The second half of the book dives into recent anomalies in decays with leptons, and the theoretical models attempting to address them. The authors reflect on theoretical and experimental work of the past decade and outline a number of pathways to follow. The book concludes with a short overview of searches for processes that are forbidden or extremely suppressed in the SM, such as lepton-flavour violation. These transitions, if observed, would represent an undeniable signature of new physics, although they only arise in a subset of new-physics scenarios. Such searches therefore allow strong limits to be placed on specific hypotheses. The book concludes with the authors’ view of the near future, which is already becoming reality. They expect the ongoing LHCb and Belle II experiments to have a decisive word on the current flavour anomalies, but also to deliver new, unexpected surprises. They rightly conclude that “It is difficult to make predictions, especially about the future.”

The remarkable feature of this book is that it is written by physicists who actively contributed to the development of numerous theoretical concepts and key experimental measurements in heavy-quark physics over the past decades. Unfortunately, one of the authors, Sheldon Stone, could not see his last book published. Sheldon was the editor of the book B decays, which served as the handbook on heavy-quark physics for decades. One can contemplate the impressive progress in the field by comparing the first edition of B decays in 1992 with New Physics in b decays. In the 1990s, heavy-quark decays were only starting to be probed. Now, they offer a well-oiled tool that can be used for precision tests of the SM and searches for minuscule effects of possible new physics, using decays that happen as rarely as once per billion b-hadrons.

The key message of this book is that theory and experiment must go hand in hand. Some parameters are difficult to calculate precisely and they need to be measured. The observables that are theoretically clean are often challenging experimentally. Therefore, the searches for new physics in b decays focus on processes that are accessible both from the theoretical and experimental points of view. The reach of such searches is constantly being broadened by painstakingly refining calculations and developing clever experimental techniques, with progress achieved through the routine work of hundreds of researchers in several experiments worldwide.

Cosmic rays for cultural heritage

In 1965, three years before being awarded a Nobel prize for his decisive contributions to elementary particle physics, Luis Alvarez proposed to use cosmic muons to look inside an Egyptian pyramid. A visit to the Giza pyramid complex a few years earlier had made him ponder why, despite the comparable size of the Great Pyramid of Khufu and the Pyramid of Khafre, the latter was built with a simpler structure – simpler even than the tomb of Khufu’s great-grandfather Sneferu, under whose reign there had been architectural experimentation and pyramids had grown in complexity. Only one burial chamber is known in the superstructure of Khafre’s pyramid, while two are located in the tombs of each of his two predecessors. Alvarez’s doubts were not shared by many archaeologists, and he was certainly aware that the history of architecture is not a continuous process and that family relationships can be complicated; but like many adventurers before him, he was fascinated by the idea that some hidden chambers could still be waiting to be discovered. 

The principles of muon radiography or “muography” were already textbook knowledge at that time. Muons are copiously produced in particle cascades originating from naturally occurring interactions between primary cosmic rays and atmospheric nuclei. Most of these cosmogenic muons are energetic enough that, despite their relatively short intrinsic lifetime, relativistic time dilation allows them to survive the journey from the upper atmosphere to Earth’s surface – where their penetration power makes them a promising tool to probe the depths of very large and dense volumes non-destructively. Thick and dense objects attenuate the cosmic-muon flux significantly by stopping its low-energy component, thus providing a “shadow” analogous to a conventional radiograph. The earliest known attempt to use muon-flux attenuation for practical purposes was the estimation of the overburden of a tunnel in Australia using Geiger counters on a rail, published in 1955 in an engineering journal. This obscure precedent was probably unknown to Alvarez, who didn’t cite it.
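
To make the principle concrete, a standard back-of-the-envelope estimate (using typical textbook values, not figures quoted in this article) relates the opacity X traversed – the density integrated along the muon path – to the minimum energy a muon needs to cross it, and hence to the transmitted flux:

\[
\frac{\mathrm{d}E}{\mathrm{d}X} \simeq a + bE
\;\;\Rightarrow\;\;
E_{\min}(X) = \frac{a}{b}\left(e^{bX}-1\right) \approx aX \;\; (bX \ll 1),
\qquad
N_\mu(X) \propto \int_{E_{\min}(X)}^{\infty} \frac{\mathrm{d}\Phi_\mu}{\mathrm{d}E}\,\mathrm{d}E,
\]

with a ≈ 2 MeV cm²/g and b ≈ 4 × 10⁻⁶ cm²/g for standard rock. Roughly 100 m of rock (X ≈ 2.7 × 10⁴ g/cm²) therefore stops all muons below about 50–60 GeV, and the measured deficit in N_μ along each line of sight maps the average density of the material traversed.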

Led by Alvarez, the Joint Pyramid Project was officially established in 1966. The detector that the team built and installed in the known large chamber at the bottom of Khafre’s pyramid was based on spark chambers, which were standard equipment for particle-physics experiments at that time. Less common were the computers provided by IBM for Monte Carlo simulations, which played a crucial role in the data interpretation. It took some time for the project to take off. Just as the experiment was ready to take data, the Six-Day War broke out, delaying progress by several months until diplomatic relationships were restored between Cairo and Washington. All this might sound like a promising subject for a Hollywood blockbuster were it not for its anticlimax: no hidden chamber was found. Alvarez always insisted that there is a difference between not finding what you search for and conclusively excluding its existence, but despite this important distinction, one wonders how much muography’s fame would have benefitted from a discovery. Their study, published in Science in 1970, set an example that was followed in subsequent decades by many more interdisciplinary applications.  

The second pyramid to be muographed was in Mexico more than 30 years later, when researchers from the National Autonomous University of Mexico (UNAM) started to search for hidden chambers in the Pyramid of the Sun at Teotihuacan. Built by the Teotihuacan civilisation about 1800 years ago, it is the third largest pyramid in the world after Khufu’s and Khafre’s, and its purpose is still a mystery. Although there is no sign that it contains burial chambers, the hypothesis that this monument served as a tomb is not entirely ruled out. After more than a decade of data taking, the UNAM muon detector (composed of six layers of multi-wire chambers occupying a total volume of 1.5 m³) found no hidden chamber. But the researchers did find evidence, reported in 2013, for a very wide low-density volume in the southern side, which is still not understood and led to speculation that this side of the pyramid might be in danger of collapse.

Big void 

Muography returned to Egypt with the ScanPyramids project, which has been taking data since 2015. The project made the headlines in 2017 by revealing an unexpected low-density anomaly in Khufu’s Great Pyramid, tantalisingly similar in size and shape to the Grand Gallery of the same building. Three teams of physicists from Japan and France participated in the endeavour, cross-checking each other by using different detector technologies: nuclear emulsions, plastic scintillators and Micromegas. The latter, being gaseous detectors, had to be located outside the pyramid to comply with safety regulations. Publishing in Nature Physics, all three teams reported a statistically significant excess in muon flux originating from the same 3D position (see “Khufu’s pyramid” figure).

Khufu’s pyramid

This year, based on a larger data sample, the ScanPyramids team concluded that this “Big Void” is a horizontal corridor about 9 m long with a transverse section of around 2 × 2 m². Confidence in the solidity of these conclusions was provided by a cross-check measurement with ground-penetrating radar and ultrasound by Egyptian and German experts, which had been collecting data since 2020 and was published simultaneously. The consistency of the data from muography and conventional methods motivated visual inspection via an endoscope, confirming the claim. While the purpose of this unexpected feature of the pyramid is not yet known, the work represents the first characterisation of the position and dimensions of a void detected by cosmic-ray muons with a sensitivity of a few centimetres.

New projects exploring the Giza pyramids are now sprouting. A particularly ambitious project by researchers in Egypt, the US and the UK – Exploring the Great Pyramid (EGP) – uses movable large-area detectors to perform precise 3D tomography of the pyramid. Thanks to its larger surface and some methodological improvements, EGP aims to surpass ScanPyramids’ sensitivity after two years of data taking. Although still at the simulation studies stage, the detector technology – plastic scintillator bars with a triangular section and encapsulated wavelength shifter fibres – is already being used by the ongoing MURAVES muography project to scan the interior of the Vesuvius volcano in Italy. The project will also profit from synergy with the upcoming Mu2e experiment at Fermilab, where the very same detectors are used. Finally, proponents of the ScIDEP (Scintillator Imaging Detector for the Egyptian Pyramids) experiment from Egypt, the US and Belgium are giving Khafre’s pyramid a second look, using a high-resolution scintillator-based detector to take data from the same location as Alvarez’s spark chambers.

Muography data in the Xi’an city walls

Pyramids easily make headlines, but there is no scarcity of monuments around the world where muography can play a role. Recently, a Russian team used emulsion detectors to explore the Svyato–Troitsky Danilov Monastery, the main buildings of which have undergone several renovations across the centuries but with associated documentation lost. The results of their survey, published in 2022, include evidence for two unknown rooms and areas of significantly higher density (possible walls) in the immured parts of certain vaults, and of underground voids speculated to be ancient crypts or air ducts. Muography is also being used to preserve buildings of historical importance. The defensive wall structures of Xi’an, one of the Four Great Ancient Capitals of China, suffered serious damage due to heavy rainfall, but repairs in the 1980s were insufficiently documented, motivating non-destructive techniques to assess their internal status. Taking data from six different locations using a compact and portable muon detector to extract a 3D density map of a rampart, a Chinese team led by Lanzhou University has recently reported density anomalies that potentially pose safety hazards (see “Falling walls” figure). 

The many flavours of muography

All the examples described so far are based on the same basic principle as Alvarez’s experiment: the attenuation of the muon flux through dense matter. But there are other ways to utilise muons as probes. For example, it is possible to exploit their deflection in matter due to Coulomb scattering from nuclei, offering the possibility of elemental discrimination. Such muon scattering tomography (MST) has been proposed to help preserve the Santa Maria del Fiore cathedral in Florence, whose iconic dome, built between 1420 and 1436 by Filippo Brunelleschi, is cracking under its own weight. Accurate modelling is needed to guide reinforcement efforts, but uncertainties exist on the internal structure of the walls. According to some experts, Brunelleschi might have inserted iron chains inside the masonry of the dome to stabilise it; however, no conclusive evidence has been obtained with traditional remote-sensing methods. Searching for iron within masonry is therefore the goal of the proposed experiment (see “Preserving a masterpiece” figure), for which a proof-of-principle test on a mock-up wall has already been carried out in Los Alamos.
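
The sensitivity of MST to the material traversed can be illustrated with the standard multiple-scattering formula (the numbers below are illustrative, not taken from the proposed experiment). The width of the scattering-angle distribution for a muon of momentum p crossing a thickness x of material with radiation length X₀ is approximately

\[
\theta_0 \simeq \frac{13.6\ \mathrm{MeV}}{\beta c\,p}\,\sqrt{\frac{x}{X_0}}\left[1 + 0.038\,\ln\frac{x}{X_0}\right].
\]

Because high-Z, high-density materials such as iron have a much shorter radiation length (X₀ ≈ 1.8 cm) than masonry or standard rock (X₀ of order 10 cm), a few-GeV muon crossing the same thickness of iron scatters through a noticeably larger angle – which is what allows embedded metal to be distinguished from the surrounding stone.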

Beyond cultural heritage, muography has also been advocated as a powerful remote-sensing method for a variety of applications in the nuclear sector. It has been used, for example, to assess the damage and impact of radioactivity in the Fukushima power plant, where four nuclear reactors were damaged in 2011. Absorption-based muography was applied to map density differences within the reactors, for example the thickness of the walls, while MST was applied to locate the nuclear fuel. Muography, especially MST, has allowed the investigation of other extreme systems, including blast furnaces and nuclear waste barrels.

Santa Maria del Fiore cathedral

Volcanology is a further important application of muography, where it is used to discover empty magma chambers and voids. As muons are more strongly absorbed by thick and dense objects, such as the rock at the base of a volcano, the measured absorption provides key information about its inner structure. The density images created via muography can even be fed into machine-learning models to help predict eruptive patterns, and similar methods can be applied to glaciology, as has been done to estimate the topography of mountains hidden by overlying glaciers. Among these projects is Eiger-μ, designed to explore the mechanisms of glacial erosion.

Powerful partnership 

Muography creates bridges across the world between particle physics and cultural-heritage preservation. The ability to perform radiography of a large object from a distance or from pre-existing tunnels is very appealing in situations where invasive excavations are impossible, as is often the case in highly populated urban or severely constrained areas. Geophysical remote-sensing methods are already part of the archaeological toolkit, but in general they are expensive, have a limited resolution and demand strong model assumptions for interpreting the data. Muography is now gaining acceptance in the cultural-heritage preservation world because its data are intrinsically directional and can be easily interpreted in terms of density distributions.

From the pioneering work of Alvarez to the state-of-the-art systems available today, progress in muography has gone hand-in-hand with the development of detectors for particle physics. The ScanPyramids project, for example, uses micropattern gaseous detectors such as those developed within the CERN RD51 collaboration and nuclear emulsion detectors such as those of the OPERA neutrino experiment, while the upcoming EGP project will benefit from detector technologies for the Mu2e experiment at Fermilab. R&D for next-generation muography includes the development of scintillator-based muon detectors, resistive plate chambers, trackers based on multi-wire proportional chambers and more. There are proposals to use microstrip silicon detectors from the CMS experiment and Cherenkov telescopes inspired by the CTA astrophysics project, showing how R&D for fundamental physics continues to drive exotic applications in archaeology and cultural-heritage preservation.

Exploring the origins of matter–antimatter asymmetry

The first edition of the International Workshop on the Origin of Matter–Antimatter Asymmetry (CP2023), hosted by École de Physique des Houches, took place from 12 to 17 February. Around 50 physicists gathered to discuss the central problem connecting particle physics and cosmology: CP violation. Since one of the very first schools dedicated to time-reversal symmetry in the summer of 1952, chaired by Wolfgang Pauli, research has progressed significantly, especially with the formulation by Sakharov of the conditions necessary to produce the observed matter–antimatter asymmetry in the universe.

The workshop programme covered current and future experimental projects to probe the Sakharov conditions: collider measurements of CP violation (LHCb, Belle II, FCC-ee), searches for electric dipole moments (PSI, FNAL), long-baseline neutrino experiments (NOvA, DUNE, T2K, Hyper-Kamiokande, ESSnuSB) and searches for baryon- and lepton-number violating processes such as neutrinoless double beta decay (GERDA, CUORE, CUPID-Mo, KamLAND-Zen, EXO-200) and neutron–antineutron oscillations (ESS). These were put in context with the different theoretical approaches to baryogenesis and leptogenesis.

With the workshop’s aim to provide a discussion forum for junior and senior scientists from various backgrounds, and following the tradition of the École des Houches, a six-hour mini-school took place in parallel with more specialised talks. A first lecture by Julia Harz (University of Mainz) introduced the hypotheses related to baryogenesis, and another by Adam Falkowski (IJCLab) described how CP violation is treated in effective field theory. Each lecture provided both a common theoretical background and an opportunity to discuss the fundamental motivation driving experimental searches for new sources of CP violation in particle physics.

In his summary talk, Mikhail Shaposhnikov (EPFL Lausanne) explained that it is impossible to identify which mechanism leads to the existing baryon asymmetry in the universe. He added that we live in exciting times and reviewed the vast number of opportunities in experiment and theory lying ahead.

A bridge between popular and textbook science

Most popular science books are written to reach the largest audience possible, which comes with certain sacrifices. The assumption is that many readers might be deterred by technical topics and language, especially by equations that require higher mathematics. In physics one can therefore usually distinguish textbooks from popular physics books by flicking through the pages and checking for symbols.

The Biggest Ideas in the Universe: Space, Time, and Motion, the first in a three-part series by Sean Carroll, goes against this trend. Written for “…people who have no more mathematical experience than high-school algebra, but are willing to look at an equation and think about what it means”, it never reaches a point at which things are muddied because the maths becomes too advanced.

Concepts and theories

This first volume covers nine topics, including conservation, space–time, geometry, gravity and black holes. Carroll spends the first few chapters introducing the reader to the thought process of a theoretical physicist: how to develop a sense for symmetries, the conservation of charges and expansions in small parameters. The book also gives readers a fast introduction to calculus, using geometric arguments to define derivatives and integrals. By the end of the third chapter, the concepts of differential equations, phase space and the principle of least action have been introduced.

The central part of the book focusses on geometry. A discussion of the meaning of space and time in physics is followed by the introduction of Minkowski spacetime, with considerable effort given to the philosophical meaning of these concepts. The third part is the most technical: it covers differential geometry and a beautiful derivation of Einstein’s equation of general relativity, and the final chapter uses the Schwarzschild solution to discuss black holes.

The Biggest Ideas in the Universe

It is a welcome development that publishers and authors such as Carroll are confident that books like this will find a sizeable readership (another good, recent example of advanced popular physics writing is Leonard Susskind’s “The Theoretical Minimum” series). Many topics in physics can only be fully appreciated if the equations are explained and if chapters go beyond the limitations of typical popular science books. Carroll’s writing style and the structure of the book help to make this case: all concepts are carefully introduced and, even though the book is very dense and covers a lot of material, everything is interconnected and readers won’t feel lost. Regular references to the historical steps in discovering theories and concepts loosen up the text. Two examples are the correspondence between Leibniz and Clarke about the nature of space and the interesting discussion of Einstein and Hilbert’s different approaches to general relativity. The whole series, the remaining two parts of which will be published soon, is accompanied by recorded lectures that are freely available online and present the topic of every chapter, along with answers to questions on these topics.

It is difficult to find any weaknesses in this book. Figures are often labelled only with symbols that readers unfamiliar with physics notation have to look up in the text, so more text in the figures would make them even more accessible. Strangely, the section introducing entropy is not supported by equations; given the technical detail of all other parts of the book, Carroll could have taken advantage of the mathematical groundwork of the previous chapters here.

I want to emphasise that every topic discussed in The Biggest Ideas in the Universe is well-established physics. There are no flashy but speculative theories, nor an unbalanced focus on the science-fiction ideas that are often used to attract readers to theoretical physics. The book stands apart from similar titles by offering insights that can only be obtained if the underlying equations are explained and not just mentioned.

Anyone who is interested in fundamental physics is encouraged to read this book, especially young people interested in studying physics because they will get an excellent idea of the type of physical arguments they will encounter at university. Those who think their mathematical background isn’t sufficient will likely learn many new things, even though the later chapters are quite technical. And if you are at the other end of the spectrum, such as a working physicist, you will find the philosophical discussions of familiar concepts and the illuminating arguments included to elicit physical intuition most useful.

Digging deeper into invisible Higgs-boson decays

ATLAS figure 1

Studies of the Higgs boson by ATLAS and CMS have observed and measured a large spectrum of production and decay mechanisms. Its relatively long lifetime and low expected width (4.1 MeV, compared with the GeV-range decay widths of the W and Z bosons) make the Higgs boson a sensitive probe for small couplings to new states that may measurably distort its branching fractions. The search for invisible or yet undetected decay channels is thus highly relevant.

Dark-matter (DM) particles created in LHC collisions would have no measurable interaction with the ATLAS detector and thus would be “invisible”, but could still be detected via the observation of missing transverse momentum in an event, similarly to neutrinos. The Standard Model (SM) predicts the Higgs boson to decay invisibly via H → ZZ*→ 4ν in only 0.1% of cases. However, this value could be significantly enhanced if the Higgs boson decays into a pair of (light enough) DM particles. Thus, by constraining the branching fraction of Higgs-boson decays to invisible particles it is possible to constrain DM scenarios and probe other physics beyond the SM (BSM).
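
The quoted SM rate can be recovered with a simple estimate using standard branching fractions (illustrative values, not quoted in the briefing):

\[
\mathcal{B}(H \to 4\nu) \simeq \mathcal{B}(H \to ZZ^*) \times \big[\mathcal{B}(Z \to \nu\bar{\nu})\big]^2 \approx 2.6\% \times (20\%)^2 \approx 0.1\%,
\]

so an invisible branching fraction observed well above the per-mille level would point to decays into new, undetected particles.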

The ATLAS collaboration has performed comprehensive searches for invisible decays of the Higgs boson considering all its major production modes: vector-boson fusion with and without additional final-state photons, gluon fusion in association with a jet from initial-state radiation, and associated production with a leptonically decaying Z boson or a top quark–antiquark pair. The results of these searches have now been combined, including inputs from Run 1 and Run 2 analyses. They yield an upper limit of 10.7% on the branching ratio of the Higgs boson to invisible particles at 95% confidence level, for an unprecedented expected sensitivity of 7.7%. The result is used to extract upper limits on the spin-independent DM–nucleon scattering cross section for DM masses smaller than about 60 GeV in a variety of Higgs-portal models (figure 1). In this range, and for the models considered, invisible Higgs-boson decays provide stronger constraints than direct-detection experiments searching for DM–nucleon scattering.

ATLAS figure 2

An alternative way to constrain possible undetected decays of the Higgs boson is to measure its total decay width ΓH. Combining the observed value of the width with measurements of the branching fractions to observed decays allows the partial width for decays to new particles to be inferred. Directly measuring ΓH at the LHC is not possible as it is much smaller than the detector resolution. However, ΓH can be constrained by taking advantage of an unusual feature of the H → ZZ(*) decay channel: the rapid increase in available phase space for the H → ZZ(*) decay as mH approaches the 2mZ threshold counteracts the mass dependence of Higgs-boson production. Furthermore, this far “off-shell” production above 2mZ has a negligible ΓH dependence, unlike “on-shell” production near the Higgs-boson mass of 125 GeV. Comparing the Higgs-boson production rates in these two regions therefore allows an indirect measurement of ΓH. Although some assumptions are required (e.g. that the relation between on-shell and off-shell production is not modified by BSM effects), the measurement is sensitive to the value of ΓH expected in the SM. Recently, ATLAS measured the off-shell production cross-section using both the four-charged-lepton (4l) and two-charged-lepton-plus-two-neutrino (2l2ν) final states, finding evidence for off-shell Higgs-boson production with a significance of 3.3σ (figure 2). By combining the previously measured on-shell Higgs-boson production cross-section with the off-shell one, ΓH was found to be 4.5 +3.3 −2.5 MeV, which agrees with the SM prediction of 4.1 MeV but leaves plenty of room for possible BSM contributions.
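
Schematically, and under the stated assumption that the same couplings govern both regimes (a textbook simplification rather than the full ATLAS analysis), the on-shell rate depends on the total width while the off-shell rate does not, so their ratio fixes ΓH:

\[
\sigma_{\text{on-shell}} \propto \frac{g_{\text{prod}}^2\, g_{\text{dec}}^2}{\Gamma_H},
\qquad
\sigma_{\text{off-shell}} \propto g_{\text{prod}}^2\, g_{\text{dec}}^2
\;\;\Rightarrow\;\;
\Gamma_H \simeq \frac{\mu_{\text{off-shell}}}{\mu_{\text{on-shell}}}\,\Gamma_H^{\text{SM}},
\]

where μ denotes the measured signal strength relative to the SM expectation in each region.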

This sensitivity will improve thanks to the new data to be collected in Run 3 of the LHC, which should more than triple the size of the Run 2 dataset.

Design principles of theoretical physics

“Now I know what the atom looks like!” Ernest Rutherford’s simple statement belies the scientific power of reductionism. He had recently discovered that atoms have substructure, notably that they comprise a dense positively charged nucleus surrounded by a cloud of negatively charged electrons. Zooming forward in time, that nucleus ultimately gave way further when protons and neutrons were revealed at its core. A few stubborn decades later they too gave way, our current understanding being that they are composed of quarks and gluons. At each step a new layer of nature is unveiled, sometimes more, sometimes less numerous in “building blocks” than the one prior, but in every case delivering explanations, even derivations, for the properties (in practice, parameters) of the previous layer. This strategy, broadly defined as “build microscopes, find answers”, has been tremendously successful, arguably for millennia.

Natural patterns

While investigating these successively explanatory layers of nature, broad patterns emerge. One of these is known colloquially as “naturalness”. This pattern asserts that, in reversing the direction and going from one microscopic theory, “the UV completion”, to its larger-scale shell, “the IR”, the values of parameters measured in the latter are essentially “typical”. Typical, in the sense that they reflect the scales, magnitudes and, perhaps most importantly, the symmetries of the underlying UV completion. As Murray Gell-Mann once said: “everything not forbidden is compulsory”.

So, if some symmetry is broken by a large amount by some interaction in the UV theory, the same symmetry, in whatever guise it may have adopted, will also be broken by a large amount in the IR theory. The only exception to this is accidental fine-tuning, where large UV-breakings can in principle conspire and give contributions to IR-breakings that, in practical terms, accidentally cancel to a high degree, giving a much smaller parameter than expected in the IR theory. This is colloquially known as “unnaturalness”.

There are good examples of both instances. There is no symmetry in QCD that could keep a proton light; unsurprisingly, it has a mass of the same order as the dominant mass scale in the theory, the QCD scale, m_p ~ Λ_QCD. But there is a symmetry in QCD that keeps the pion light. The only parameters in the UV theory that break this symmetry are the light quark masses. Thus, the pion mass-squared is expected to be around m_π² ~ m_q Λ_QCD. Turns out, it is.
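
The scaling can be made quantitative with the Gell-Mann–Oakes–Renner relation, quoted here as an illustrative check with typical values (none of which appear in the original text):

\[
m_\pi^2 f_\pi^2 = (m_u + m_d)\,|\langle \bar{q}q \rangle|
\;\;\Rightarrow\;\;
m_\pi \approx \sqrt{\frac{7\ \mathrm{MeV}\times(250\ \mathrm{MeV})^3}{(92\ \mathrm{MeV})^2}} \approx 110\ \mathrm{MeV},
\]

in line with the observed 140 MeV, whereas a “typical” QCD bound state with no protecting symmetry sits near Λ_QCD, like the proton.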

There are also examples of unnatural parameters. If you measure enough different physical observables, observations that are unlikely on their own become possible in a large ensemble of measurements – a sort of theoretical “look-elsewhere effect”. For example, consider the fact that the Moon almost perfectly obscures the Sun during a total solar eclipse. There is no symmetry which requires that the angular size of the Moon should almost match that of the Sun to an Earth-based observer. Yet, given many planets and many moons, this will of course happen for some planetary systems.

However, if an observation of a parameter returns an apparently unnatural value, can one be sure that it is accidentally small? In other words, can we be confident we have definitively explored all possible phenomena in nature that can give rise to naturally small parameters? 

From 30 January to 3 February, participants of an informal CERN theory institute “Exotic Approaches to Naturalness” sought to answer this question. Drawn from diverse corners of the theorist zoo, more than 130 researchers gathered, both virtually and in person, to discuss questions of naturalness. The invited talks were chosen to expose phenomena in quantum field theory and beyond which challenge the naive naturalness paradigm.

Coincidences and correlations

The first day of the workshop considered how apparent numerical coincidences can lead to unexpectedly small parameters in the IR as a result of selection rules that do not immediately manifest from a symmetry, known as “natural zeros”. A second set of talks considered how, going beyond quantum field theory, the UV and IR can potentially be unexpectedly correlated, especially in theories containing quantum gravity, and how this correlation can lead to cancellations that are not apparent from a purely quantum-field-theory perspective.

The second day was far-ranging, with the first talk unveiling some lower dimensional theories of the sort one more readily finds in condensed matter systems, in which “topological” effects lead to constraints on IR parameters. A second discussed how fundamental properties, such as causality, can impose constraints on IR parameters unexpectedly. The last demonstrated how gravitational effective theories, including those describing the gravitational waves emitted in binary black hole inspirals, have their own naturalness puzzles.

The ultimate goal is to now go forth and find new angles of attack on the biggest naturalness questions in fundamental physics

Midweek, alongside an inspirational theory colloquium by Nathaniel Craig (UC Santa Barbara), the potential role of cosmology in naturalness was interrogated. An early example made famous by Steven Weinberg concerns the role of the “anthropic principle” in the presently measured value of the cosmological constant. However, since then, particularly in recent years, theorists have found many possible connections and mechanisms linking naturalness questions to our universe and beyond.

The fourth day focussed on the emerging world of generalised and higher-form symmetries, which are new tools in the arsenal of the quantum field theorist. It was discussed how naturalness of IR parameters may arise as a consequence of these recently uncovered symmetries, even where it would otherwise be obscured from view within a traditional symmetry perspective. The final day studied connections between string theory, the swampland and naturalness, exploring how the space of theories consistent with string theory leads to restricted values of IR parameters, which potentially links to naturalness. An eloquent summary was delivered by Tim Cohen (CERN).

Grand slam

In some sense the goal of the workshop was to push back the boundaries by equipping model builders with new and more powerful perspectives and theoretical tools linked to questions of naturalness, broadly defined. The workshop was a grand slam in this respect. However, the ultimate goal is to now go forth and use these new tools to find new angles of attack on the biggest naturalness questions in fundamental physics, relating to the cosmological constant and the Higgs mass.

The Standard Model, despite being an eminently marketable logo for mugs and t-shirts, is incomplete. It breaks down at very short distances and thus it is the IR of some more complete, more explanatory UV theory. We don’t know what this UV theory is; however, it apparently makes unnatural predictions for the Higgs mass and cosmological constant. Perhaps nature isn’t unnatural and generalised symmetries are as yet hidden from our eyes, or perhaps string theory, quantum gravity or cosmology has a hand in things? It is also possible, of course, that nature has fine-tuned these parameters by accident; however, that would seem – à la Weinberg – to point towards a framework in which such parameters are, in principle, measured in many different universes. All of these possibilities, and more, were discussed and explored to varying degrees.

Perhaps the most radical possibility, the most “exotic approach to naturalness” of all, would be to give up on naturalness altogether. Perhaps, in whatever framework UV completes the Standard Model, parameters such as the Higgs mass are simply incalculable, unpredictable in terms of more fundamental parameters, at any length scale. Shortly before the advent of relativity, quantum mechanics, and all that have followed from them, Lord Kelvin (attribution contested) once declared: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. The breadth of original ideas presented at the “Exotic Approaches to Naturalness” workshop, and the new connections constantly being made between formal theory, cosmology and particle phenomenology, suggest it would be similarly unwise now, as it was then, to make such a wager.

We can’t wait for a future collider

Imagine a world without a high-energy collider. Without our most powerful instrument for directly exploring the smallest scales, we would be incapable of addressing many open questions in particle physics. With the US particle-physics community currently debating which machines should succeed the LHC and how we should fit into the global landscape, this possibility is a serious concern. 

The good news is that physicists generally agree on the science case for future colliders. Questions surrounding the Standard Model itself, in particular the microscopic nature of the Higgs boson and the origin of electroweak symmetry breaking, can only be addressed at high-energy colliders. We also know the Standard Model is not the complete picture of the universe. Experimental observations and theoretical concerns strongly suggest the existence of new particles at the multi-TeV scale. 

The latest US Snowmass exercise and the European strategy update both advocate for the fast construction of an e+e− Higgs factory followed by a multi-TeV collider. The former will enable us to measure the Higgs boson’s couplings to other particles with an order of magnitude better precision than the High-Luminosity LHC. The latter is crucial to unambiguously surpass exclusions from the LHC, and would be the only experiment where we could discover or exclude minimal dark-matter scenarios all the way up to their thermal targets. Most importantly, precise measurements of the Brout–Englert–Higgs potential at a 10 TeV scale collider are essential to understand what role the Higgs plays in the origin and evolution of the universe.

We haven’t yet agreed on what to build, where and when. We face an unprecedented choice between scaling up existing collider technologies or pursuing new, compact and power-efficient options. We must also choose between centering the energy frontier at a single lab or restoring global balance to the field by hosting colliders at different sites. Our choices in the next few years could determine the next century of particle physics. 

Snowmass community workshop

The Future Circular Collider programme – beginning with a large circular e+e− collider (FCC-ee) with energies ranging from 90 to 365 GeV, followed by a pp collider with energies up to 100 TeV (FCC-hh) – would build on the infrastructure and skills currently present at CERN. A circular e+e− machine could support multiple interaction points, produce higher luminosity than a linear machine for energies of interest, and its tunnel could be re-used for a pp collider. While this staged approach has driven success in our field for decades, scaling up to a circumference of 100 km raises serious questions about feasibility, cost and power consumption. As a new assistant professor, I am also deeply concerned about gaps in data-taking and timescales. Even if there are no delays, I will likely retire during the FCC-ee run and die before the FCC-hh produces collisions.

In contrast, there is a growing contingent of physicists who think that a paradigm shift is essential to reach the 10 TeV scale and beyond. The International Muon Collider collaboration has determined that, with targeted R&D to address engineering challenges and make design progress, a few-TeV μ+μ− collider could be realised on a 20-year technically limited timeline, and would set the stage for an eventual 10 TeV machine. The latter could enable a mass reach equivalent to a 50–200 TeV hadron collider, in addition to precision electroweak measurements, with a lower price tag and significantly smaller footprint. A muon collider also opens the possibility to host different machines at different sites, easing the transition between projects and fostering a healthier, more global workforce. Assuming the technical challenges can be overcome, a muon collider would therefore be the most attractive way forward.

Assuming the technical challenges can be overcome, a muon collider would be the most attractive way forward

We are not yet ready to decide which path is optimal, but we are already time-constrained. It is increasingly likely that the next machine will not turn on until after the High-Luminosity LHC. The most senior person today who could reasonably participate is roughly only 10 years into a permanent job. Early-career faculty, who would use this machine, are experienced enough to have well-informed opinions, but are not senior enough to be appointed to decision-making panels. While we value the wisdom of our senior colleagues, future colliders are inherently “early-career colliders”, and our perspectives must be incorporated.

The US must urgently invest in future collider R&D. If other areas of physics progress faster than the energy frontier, our colleagues will disengage, move elsewhere and might not come back. If the size of the field and expertise atrophy before the next machine, we risk imperilling future colliders altogether. We agree on the physics case. We want the opportunity to access higher energies in our lifetimes. Let’s work together to choose the right path forward.

Stanisław Jadach 1947–2023

Stanisław Jadach, an outstanding theoretical physicist, died on 26 February at the age of 75. His foundational contributions to the physics programmes at LEP and the LHC, and to the proposed Future Circular Collider at CERN, have significantly helped to advance the field of elementary particle physics and its future aspirations.

Born in Czerteż, Poland, Jadach graduated in 1970 with a master’s degree in physics from Jagiellonian University. There, he also defended his doctorate, received his habilitation degree and worked until 1992. During this period, part of which Poland spent under martial law, Jadach took trips to Leiden, Paris, London, Stanford and Knoxville, and formed collaborations on precision theory calculations based on Monte Carlo event-generator methods. In 1992 he moved to the Institute of Nuclear Physics Polish Academy of Sciences (PAS) where, receiving the title of professor in 1994, he worked until his death.

Prior to LEP, all calculations of radiative corrections were based on first- and, later, partially second-order results. This limited the theoretical precision to the 1% level, which was unacceptable for experiment. In 1987 Jadach solved that problem in a single-author report, inspired by the classic work of Yennie, Frautschi and Suura, featuring a new calculational method for any number of photons. It was widely believed that soft-photon approximations were restricted to many photons with very low energies and that it was impossible to relate, consistently, the distributions of one or two energetic photons to those of any number of soft photons. Jadach and his colleagues solved this problem in their papers in 1989 for differential cross sections, and later in 1999 at the level of spin amplitudes. A long series of publications and computer programmes for re-summed perturbative Standard Model calculations ensued. 

Most of the analysis of LEP data was based exclusively on the novel calculations provided by Jadach and his colleagues. The most important concerned the LEP luminosity measurement via Bhabha scattering, the production of lepton and quark pairs, and the production and decay of W and Z boson pairs. For the W-pair results at LEP2, Jadach and co-workers intelligently combined separate first-order calculations for the production and decay processes to achieve the necessary 0.5% theoretical accuracy, bypassing the need for full first-order calculations for the four-fermion process, which were unfeasible at the time. Contrary to what was deemed possible, Jadach and his colleagues achieved calculations that simultaneously take into account QED radiative corrections and the complete spin–spin correlation effects in the production and decay of two tau leptons. He also had success in the 1970s in novel simulations of strong interaction processes.

After LEP, Jadach turned to LHC physics. Among other novel results, he and his collaborators developed a new constrained Markovian algorithm for parton cascades, with no need for backward evolution or predefined parton distributions, and proposed a new method, using a “physical” factorisation scheme, for combining a hard process at next-to-leading order with a parton cascade that is much simpler and more efficient than alternative methods.

Jadach was already updating his LEP-era calculations and software towards the increased precision of FCC-ee, and was co-editor and co-author of a major paper delineating the need for new theoretical calculations to meet the proposed collider’s physics needs. He co-organised and participated in many physics workshops at CERN and in the preparation of comprehensive reports, starting with the famous 1989 LEP Yellow Reports.

Jadach, a member of the Polish Academy of Arts and Sciences (PAAS), received the most prestigious awards in physics in Poland: the Marie Skłodowska-Curie Prize (PAS), the Marian Mięsowicz Prize (PAAS), and the prize of the Minister of Science and Higher Education for lifetime scientific achievements. He was also a co-initiator and permanent member of the international advisory board of the RADCOR conference.

Stanisław (Staszek) was a wonderful man and mentor. Modest, gentle and sensitive, he did not judge or impose. He never refused requests and always had time for others. His professional knowledge was impressive. He knew almost everything about QED, and there were few other topics in which he was not at least knowledgeable. His erudition beyond physics was equally extensive. He is already profoundly and dearly missed.
