Topics

ALICE ups its game for sustainable computing

The Large Hadron Collider (LHC) roared back to life on 5 July 2022, when proton–proton collisions at a record centre-of-mass energy of 13.6 TeV resumed for Run 3. To enable the ALICE collaboration to benefit from the increased instantaneous luminosity of this and future LHC runs, the ALICE experiment underwent a major upgrade during Long Shutdown 2 (2019–2022) that will substantially improve track reconstruction in terms of spatial precision and tracking efficiency, in particular for low-momentum particles. The upgrade will also enable an increased interaction rate of up to 50 kHz for lead–lead (PbPb) collisions in continuous readout mode, which will allow ALICE to collect a data sample more than 10 times larger than the combined Run 1 and Run 2 samples.

ALICE is a unique experiment at the LHC devoted to the study of extreme nuclear matter. It comprises a central barrel (the largest data producer) and a forward muon “arm”. The central barrel relies mainly on four subdetectors for particle tracking: the new inner tracking system (ITS), which is a seven-layer, 12.5 gigapixel monolithic silicon tracker (CERN Courier July/August 2021 p29); an upgraded time projection chamber (TPC) with GEM-based readout for continuous operation; a transition radiation detector; and a time-of-flight detector. The muon arm is composed of three tracking devices: a newly installed muon forward tracker (a silicon tracker based on monolithic active pixel sensors), revamped muon chambers and a muon identifier.

Due to the increased data volume in the upgraded ALICE detector, storing all the raw data produced during Run 3 is impossible. One of the major ALICE upgrades in preparation for the latest run was therefore the design and deployment of a completely new computing model: the O2 project, which merges online (synchronous) and offline (asynchronous) data processing into a single software framework. In addition to an upgrade of the experiment’s computing farms for data readout and processing, this necessitates efficient online compression and the use of graphics processing units (GPUs) to speed up processing. 

Pioneering parallelism

As their name implies, GPUs were originally designed to accelerate computer-graphics rendering, especially in 3D gaming. While they continue to be utilised for such workloads, GPUs have become general-purpose vector processors for use in a variety of settings. Their intrinsic ability to perform several tasks simultaneously gives them a much higher compute throughput than traditional CPUs and enables them to be optimised for data processing rather than, say, data caching. GPUs thus reduce the cost and energy consumption of associated computing farms: without them, about eight times as many servers of the same type and other resources would be required to handle the ALICE TPC online processing of PbPb collision data at a 50 kHz interaction rate. 

ALICE detector dataflow

Since 2010, when the high-level trigger online computer farm (HLT) entered operation, the ALICE detector has pioneered the use of GPUs for data compression and processing in high-energy physics. The HLT had direct access to the detector readout hardware and was crucial to compress data obtained from heavy-ion collisions. In addition, the HLT software framework was advanced enough to perform online data reconstruction. The experience gained during its operation in LHC Run 1 and 2 was essential for the design and development of the current O2 software and hardware systems.

For data readout and processing during Run 3, the ALICE detector front-end electronics are connected via radiation-tolerant gigabit-transceiver links to custom field programmable gate arrays (see “Data flow” figure). The latter, hosted in the first-level processor (FLP) farm nodes, perform continuous readout and zero-suppression (the removal of data without physics signal). In the case of the ALICE TPC, zero-suppression reduces the data rate from a prohibitive 3.3 TB/s at the front end to 900 GB/s for 50 kHz minimum-bias PbPb operations. This data stream is then pushed by the FLP readout farm to the event processing nodes (EPN) using data-distribution software running on both farms. 

Located in three containers on the surface close to the ALICE site, the EPN farm currently comprises 350 servers, each equipped with eight AMD GPUs with 32 GB of RAM each, two 32-core AMD CPUs and 512 GB of memory. The EPN farm is optimised for the fastest possible TPC track reconstruction, which constitutes the bulk of the synchronous processing, and provides most of its computing power in the form of GPU processing. As data flow from the front end into the farms and cannot be buffered, the EPN computing capacity must be sufficient for the highest data rates expected during Run 3.

Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing

Due to the continuous readout approach at the ALICE experiment, processing does not occur on a particular “event” triggered by some characteristic pattern in detector signals. Instead, all data read out during a predefined time slot are stored in a time frame (TF) data structure. The TF length is usually chosen as a multiple of one LHC orbit (corresponding to about 90 microseconds). Since a whole TF must always fit into the GPU’s memory, the collaboration chose GPUs with 32 GB of memory to grant enough flexibility in operating with different TF lengths, and an optimisation effort was put in place to reuse GPU memory in consecutive processing steps. During the proton run in 2022 the system was stress-tested by increasing the proton collision rate well beyond that needed to maximise the integrated luminosity for physics analyses; in this scenario the TF length was chosen to be 128 LHC orbits. These high-rate tests aimed to reproduce occupancies similar to those expected for PbPb collisions, and demonstrated that the EPN processing could sustain rates nearly twice the nominal design value (600 GB/s) originally foreseen for PbPb collisions: using high-rate proton collisions at 2.6 MHz, the readout reached 1.24 TB/s, which was fully absorbed and processed on the EPNs. However, due to fluctuations in centrality and luminosity, the number of TPC hits (and thus the required memory size) varies to a small extent, demanding a certain safety margin.
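
As a rough illustration of these constraints, the back-of-envelope sketch below (in Python, using only the round numbers quoted above) estimates the size of a single TF. The real framework additionally splits data across EPN nodes and reuses GPU memory between processing steps, so the figures are indicative only.

```python
# Back-of-envelope estimate of the size of one ALICE time frame (TF),
# using only the round numbers quoted in the text (indicative only).

LHC_ORBIT_S = 90e-6          # one LHC orbit, roughly 90 microseconds
TPC_RATE_BPS = 900e9         # TPC data rate after zero-suppression (900 GB/s)
GPU_MEMORY_B = 32e9          # memory of one EPN GPU (32 GB)

def tf_size_bytes(n_orbits: int) -> float:
    """Approximate TPC payload of a TF spanning n_orbits LHC orbits."""
    return TPC_RATE_BPS * n_orbits * LHC_ORBIT_S

for n_orbits in (32, 128, 256):
    size_gb = tf_size_bytes(n_orbits) / 1e9
    headroom = GPU_MEMORY_B / tf_size_bytes(n_orbits)
    print(f"TF of {n_orbits:3d} orbits: ~{size_gb:5.1f} GB "
          f"(GPU memory / TF size ~ {headroom:.1f})")

# A 128-orbit TF corresponds to roughly 10 GB of TPC data, leaving headroom
# in the 32 GB GPU memory for reconstruction buffers and rate fluctuations.
```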

Flexible compression 

At the incoming raw-data rates during Run 3, it is impossible to store the data – even temporarily. Hence, the outgoing data is compressed in real time to a manageable size on the EPN farm. During the network transfer from the FLP to the EPN farm, event building is carried out by the data-distribution suite, which collects all the partial TFs sent by the detectors and schedules the building of the complete TF. At the end of the transfer, each EPN node receives and then processes a full TF containing data from all ALICE detectors.

GPUs manufactured by AMD

The detector generating by far the largest data volume is the TPC, contributing more than 90% to the total data size. The EPN farm compresses this to a manageable rate of around 100 GB/s (depending on the interaction rate), which is then stored to the disk buffer. The TPC compression is particularly elaborate, employing several steps including a track-model compression to reduce the cluster entropy before the entropy encoding. Evaluating the TPC space-charge distortion during data taking is also the most computing-intensive aspect of online calibrations, requiring global track reconstruction for several detectors. At the increased Run 3 interaction rate, processing on the order of one percent of the events is sufficient for the calibration.
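
The gain from track-model compression can be illustrated with a toy example: cluster coordinates stored as small residuals relative to a fitted track have a much narrower distribution, and hence a lower Shannon entropy, than the raw coordinates themselves, so the subsequent entropy coder needs fewer bits per cluster. The sketch below is a deliberately simplified illustration of that principle only; the pad geometry, the straight-line track model and the numbers are invented for the example and are not the ALICE implementation.

```python
# Toy illustration of track-model compression: entropy of raw cluster
# positions versus residuals to a fitted straight line (not ALICE code).
import numpy as np

rng = np.random.default_rng(42)

def shannon_entropy_bits(values: np.ndarray) -> float:
    """Empirical Shannon entropy (bits per symbol) of integer-valued data."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Simulated clusters: a straight track crossing 152 pad rows, with ~0.5-pad
# detector resolution, digitised to integer pad positions.
rows = np.arange(152)
true_pad = 120.0 + 3.7 * rows                    # idealised track trajectory
clusters = np.rint(true_pad + rng.normal(0, 0.5, rows.size)).astype(int)

# Track model: fit a straight line and store only the residuals.
slope, intercept = np.polyfit(rows, clusters, 1)
residuals = clusters - np.rint(intercept + slope * rows).astype(int)

print(f"entropy of raw pad positions: {shannon_entropy_bits(clusters):.2f} bits/cluster")
print(f"entropy of track residuals  : {shannon_entropy_bits(residuals):.2f} bits/cluster")
# An entropy coder operating on the residuals therefore needs far fewer
# bits per cluster than one operating on the raw coordinates.
```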

During data taking, the EPN system operates synchronously and the TPC reconstruction fully loads the GPUs. With the EPN farm providing 90% of its compute performance via GPUs, it is also desirable to maximise the GPU utilisation in the asynchronous phase. Since the relative contribution of the TPC processing to the overall workload is much smaller in the asynchronous phase, GPU idle times would be high and processing would be CPU-limited if the TPC part only ran on the GPUs. To use the GPUs maximally, the central-barrel asynchronous reconstruction software is being implemented with native GPU support. Currently, around 60% of the workload can run on a GPU, yielding a speedup factor of about 2.25 compared to CPU-only processing. With the full adaptation of the central-barrel tracking software to the GPU, it is estimated that 80% of the reconstruction workload could be processed on GPUs.
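
These figures are consistent with a simple Amdahl's-law estimate. The sketch below is an illustration, not the collaboration's own performance model: it infers the effective acceleration of the offloaded portion from the quoted 60%/2.25x numbers and projects the speedup if 80% of the workload ran on GPUs with similar gains.

```python
# Amdahl's-law illustration of the asynchronous-reconstruction speedup
# (indicative only; based on the fractions quoted in the text).

def overall_speedup(gpu_fraction: float, kernel_speedup: float) -> float:
    """Overall speedup when a fraction of the work is accelerated by kernel_speedup."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / kernel_speedup)

# Quoted today: ~60% of the workload on GPU gives ~2.25x overall.
# Invert Amdahl's law to get the implied acceleration of the offloaded part.
p, s_overall = 0.60, 2.25
kernel = p / (1.0 / s_overall - (1.0 - p))
print(f"implied acceleration of the GPU-resident part: ~{kernel:.0f}x")

# Projection if ~80% of the reconstruction ran on GPUs with similar kernels.
print(f"projected overall speedup at 80% offload: ~{overall_speedup(0.80, kernel):.1f}x")
```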

In contrast to synchronous processing, asynchronous processing includes the reconstruction of data from all detectors, and all events instead of only a subset; physics analysis-ready objects produced from asynchronous processing are then made available on the computing Grid. As a result, the processing workload for all detectors, except the TPC, is significantly higher in the asynchronous phase. For the TPC, clustering and data compression are not necessary during asynchronous processing, while the tracking runs on a smaller input data set because some of the detector hits were removed during data compression. Consequently, TPC processing is faster in the asynchronous phase than in the synchronous phase. Overall, the TPC contributes significantly to asynchronous processing, but is not dominant. The asynchronous reconstruction will be divided between the EPN farm and the Grid sites. While the final distribution scheme is still to be decided, the plan is to split reconstruction between the online computing farm, the Tier 0 and the Tier 1 sites. During the LHC shutdown periods, the EPN farm nodes will almost entirely be used for asynchronous processing.

Great shape

In 2021, during the first pilot-beam collisions at injection energy, synchronous processing was run and successfully commissioned. In 2022 it was used during nominal LHC operations, where ALICE performed online processing of pp collisions at a 2.6 MHz inelastic interaction rate. At lower interaction rates (both for pp and PbPb collisions), ALICE ran additional processing tasks on free EPN resources, for instance online TPC charged-particle energy-loss determination, which would not be possible at the full 50 kHz PbPb collision rate. The particle-identification performance is demonstrated in the figure “Particle ID”, in which no additional selections on the tracks or detector calibrations were applied.

ALICE TPC performance

Another performance metric used to assess the quality of the online TPC reconstruction is the charged-particle tracking efficiency. The efficiency for reconstructing tracks from PbPb collisions at a centre-of-mass energy of 5.52 TeV per nucleon pair ranges from 94% to 100% for pT > 0.1 GeV/c. The fake-track rate is negligible; however, the clone rate increases significantly for low-pT primary tracks due to incomplete merging of track segments from very low-momentum particles that curl in the ALICE solenoidal field and leave and re-enter the TPC multiple times.

The effective use of GPUs thus provides highly efficient processing, with gains in data quality, compute cost and energy efficiency – advantages that have not been overlooked by the other LHC experiments. To manage their data rates in real time, LHCb developed the Allen project, a first-level trigger processed entirely on GPUs that reduces the data rate prior to the alignment, calibration and final reconstruction steps by a factor of 30–60. With this approach, 4 TB/s are processed in real time, with around 10 GB/s of the most interesting collisions selected for physics analysis.

At the beginning of Run 3, the CMS collaboration deployed a new HLT farm comprising 400 CPUs and 400 GPUs. With respect to a traditional solution using only CPUs, this configuration reduced the processing time of the high-level trigger by 40%, improved the data-processing throughput by 80% and reduced the power consumption of the farm by 30%. ATLAS uses GPUs extensively for physics analyses, especially for machine-learning applications. Focus has also been placed on data processing, anticipating that in the coming years much of it can be offloaded to GPUs. For all four LHC experiments, the future use of GPUs is crucial to reduce the cost, size and power consumption of their computing at the higher luminosities of the LHC.

Having pioneered the use of GPUs in high-energy physics for more than a decade, ALICE now employs GPUs heavily to speed up online and offline processing. Today, 99% of synchronous processing is performed on GPUs, dominated by the largest contributor, the TPC.

More code

On the other hand, only about 60% of asynchronous processing (offline data processing on the EPN farm, for 650 kHz pp collisions) currently runs on GPUs. In the asynchronous phase the TPC remains an important contributor to the compute load, but several other subdetectors matter as well, and there is an ongoing effort to port considerably more code to the GPUs. This effort will increase the fraction of GPU-accelerated code beyond 80% once the full barrel tracking is ported. Eventually ALICE aims to run 90% of the whole asynchronous processing on GPUs.

PbPb collisions in the ALICE TPC

In November 2022 the upgraded ALICE detectors and central systems saw PbPb collisions for the first time during a two-day pilot run at a collision rate of about 50 Hz. High-rate PbPb processing was validated by injecting Monte Carlo data into the readout farm and running the whole data-processing chain on 230 EPN nodes. Because the TPC data volumes turned out to be somewhat larger than initially expected, this stress test is now being repeated with the final, continuously optimised TPC firmware on the full complement of 350 EPN nodes, to provide the required 20% compute margin with respect to the 50 kHz PbPb operations foreseen for October 2023. Together with the upgraded detector components, the ALICE experiment has never been in better shape to probe extreme nuclear matter during the current and future LHC runs.

The W boson’s midlife crisis

The discovery of the W boson at CERN in 1983 can well be considered the birth of precision electroweak physics. Measurements of the W boson’s couplings and mass have become ever more precise, progressively weaving in knowledge of other particle properties through quantum corrections. Just over a decade ago, the combination of several Standard Model (SM) parameters with measurements of the W-boson mass led to a prediction of a relatively low Higgs-boson mass, of order 100 GeV, prior to its discovery. The discovery of the Higgs boson in 2012 with a mass of about 125 GeV was hailed as a triumph of the SM. Last year, however, an unexpectedly high value of the W-boson mass measured by the CDF experiment threw a spanner into the works. One might say the 40-year-old W boson encountered a midlife crisis.

The mass of the W boson, mW, is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson. The mass of each fermion is determined by the strength of its interaction with the Brout–Englert–Higgs field, but this strength is currently only known to an accuracy of approximately 10% at best; future measurements from the High-Luminosity LHC and a future e⁺e⁻ collider are required to achieve percent-level accuracy. Meanwhile, mW is predicted with an accuracy better than 0.01%. At tree level, this mass depends only on the mass of the Z boson and the weak and electromagnetic couplings. The first measurements of mW by the UA1 and UA2 experiments at the Spp̄S collider at CERN were in remarkable agreement with this prediction, within the large uncertainties. Further measurements at the Tevatron at Fermilab and the Large Electron Positron collider (LEP) at CERN achieved sufficient precision to probe the presence of higher-order electroweak corrections, such as from a loop containing top and bottom quarks.
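
For reference, the tree-level relation alluded to here is commonly written (in terms of the electromagnetic coupling α, the Fermi constant G_F and the Z-boson mass m_Z) as:

```latex
% Tree-level relation between the W mass and precisely measured inputs
% (alpha: electromagnetic coupling, G_F: Fermi constant, m_Z: Z-boson mass).
\[
  m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right) = \frac{\pi \alpha}{\sqrt{2}\, G_F}
\]
```

Quantum corrections modify this relation through a quantity conventionally denoted Δr, which is where the top-quark and Higgs-boson loops discussed below enter.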

Increasing sophistication

Measurements of mW at the four LEP experiments were performed in collisions producing two W bosons. Hadron colliders, by contrast, can produce a single W-boson resonance, simplifying the measurement when utilising the decay to an electron or muon and an associated neutrino. However, this simplification is countered by the complication of the breakup of the hadrons, along with multiple simultaneous hadron–hadron interactions. Measurements at the Tevatron and LHC have required increasing sophistication to model the production and decay of the W boson, as well as the final-state lepton’s interactions in the detectors. The average time between the available datasets and the resulting published measurement has increased from two years for the first CDF measurement in 1991 to more than 10 years for the most recent CDF measurement announced last year (CERN Courier May/June 2022 p9). The latter benefitted from a factor of four more W bosons than the previous measurement, but suffered from a higher number of additional simultaneous interactions. The challenge of modelling these interactions while also increasing the measurement precision required many years of detailed study. The end result, mW = 80433.5 ± 9.4 MeV, differs from the SM prediction of mW = 80357 ± 6 MeV by approximately seven standard deviations (see “Out of order” figure).
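
Treating the two quoted uncertainties as independent, the numbers above do indeed correspond to a roughly seven-standard-deviation discrepancy, as the short check below shows.

```python
# Naive significance of the CDF vs Standard Model discrepancy,
# treating the two quoted uncertainties as independent.
from math import hypot

m_cdf, err_cdf = 80433.5, 9.4   # MeV, CDF measurement
m_sm,  err_sm  = 80357.0, 6.0   # MeV, SM prediction

tension = (m_cdf - m_sm) / hypot(err_cdf, err_sm)
print(f"discrepancy ~ {tension:.1f} standard deviations")   # ~6.9
```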

CDF measurement of the W mass

The SM calculation of mW includes corrections from single loops involving fermions or the Higgs boson, as well as from two-loop processes that also include gluons. The splitting of the W boson into a top- and bottom-quark loop produces the largest correction to the mass: for every 1 GeV increase in top-quark mass the predicted W mass increases by a little over 6 MeV. Measurements of the top-quark mass at the Tevatron and LHC have reached a precision of a few hundred MeV, thus contributing an uncertainty on mW of only a couple of MeV. The calculated mW depends only logarithmically on the Higgs-boson mass mH, and given the accuracy of the LHC mH measurements, it contributes negligibly to the uncertainty on mW. The tree-level dependence of mW on the Z-boson mass and on the electromagnetic coupling strength contribute an additional couple of MeV each to the uncertainty. The robust prediction of the SM allows an incisive test through mW measurements, and it would appear to fail in the face of the recent CDF measurement.
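
The dominant correction mentioned above enters through the ρ parameter; a commonly quoted leading-order expression (a sketch of the one-loop result only, with c_W and s_W the cosine and sine of the weak mixing angle) is:

```latex
% Leading top-bottom loop correction to the rho parameter and the induced
% shift of the W mass (c_W, s_W: cosine and sine of the weak mixing angle).
\[
  \Delta\rho \simeq \frac{3\, G_F\, m_t^2}{8\sqrt{2}\,\pi^2}, \qquad
  \frac{\Delta m_W}{m_W} \simeq \frac{c_W^2}{2\,(c_W^2 - s_W^2)}\,\Delta\rho
\]
```

Inserting m_t ≈ 173 GeV gives a shift of roughly 0.5 GeV and, upon differentiation, the quoted sensitivity of a little over 6 MeV per GeV of top-quark mass.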

Since the release of the CDF result last year, physicists have held extensive and detailed discussions, with a recurring focus on the measurement’s compatibility with the SM prediction and with the measurements of other experiments. Further discussions and workshops have reviewed the suite of Tevatron and LHC measurements, hypothesising effects that could have led to a bias in one or more of the results. These potential effects are subtle, as fundamentally the W-boson signature is strikingly unique and simple: a single charged electron or muon with no observable particle balancing its momentum. Any source of bias would have to lie in a higher-order theoretical or experimental effect, and the analysts have studied and quantified these in great detail.

Progress

In the spring of this year ATLAS contributed an update to the story. The collaboration re-analysed its data from 2011 to apply a comprehensive statistical fit using a profile likelihood, as well as the latest global knowledge of parton distribution functions (PDFs) – which describe the momentum distributions of quarks and gluons inside the proton. The preliminary result (mW = 80360 ± 16 MeV) has both a smaller uncertainty and a lower central value than the previous ATLAS result published in 2017, further increasing the tension between the ATLAS result and that of CDF.

Meanwhile, the Tevatron+LHC W-mass combination working group has carried out a detailed investigation of higher-order theoretical effects affecting hadron-collider measurements, and provided a combined mass value using the latest published measurement from each experiment and from LEP. These studies, due to be presented at the European Physical Society High-Energy Physics conference in Hamburg in late August, give a comprehensive and quantitative overview of W-boson mass measurements and their compatibilities. While no significant issues have been identified in the measurement procedures and results, the studies shed significant light on their details and differences.

LHC versus Tevatron

Two important aspects of the Tevatron and LHC measurements are the modelling of the momentum distribution of each parton in the colliding hadrons, and the angular distribution of the W boson’s decay products. The higher energy of the LHC increases the importance of the momentum distributions of gluons and of quarks from the second generation, though these can be constrained using the large samples of W and Z bosons. In addition, the combination of results from centrally produced W bosons at ATLAS with more forward W-boson production at LHCb reduces uncertainties from the PDFs. At the Tevatron, proton–antiproton collisions produced a large majority of W bosons via the valence up and down (anti)quarks inside the (anti)proton, and these are also constrained by measurements at the Tevatron. For the W-boson decay, the calculation is common to the LHC and the Tevatron, and precise measurements of the decay distributions by ATLAS are able to distinguish several calculations used in the experiments.

W-mass measuring

In any combination of measurements, the primary focus is on the uncertainty correlations. In the case of mW, many uncertainties are constrained in situ and are therefore uncorrelated. The most significant source of correlated uncertainty is the PDFs. In order to evaluate these correlations, the combination working group generated large samples of events and produced simplified models of the CDF, DØ and ATLAS detectors. Several sets of PDFs were studied to determine their compatibility with broader W- and Z-boson measurements at hadron colliders. For each of these sets the correlations and combined mW values were determined, opening a panoramic view of the impact of PDFs on the measurement (see “Measuring up” figure).

The mass of the W boson is important because the SM predicts its value to high precision, in contrast with the masses of the fermions or the Higgs boson

The first conclusion from this study is that the compatibility of all PDF sets with W- and Z-boson measurements is generally low: the most compatible PDF set, CT18 from the CTEQ collaboration, gives a probability of only 1.5% that the suite of measurements is consistent with the predictions. Using this PDF set for the W-boson mass combination gives an even lower compatibility of 0.5%. When the CDF result is removed, the compatibility of the combined mW value is good (91%), and when comparing this “N-1” combined value to the CDF value for the CT18 set, the difference is 3.6σ. The results are considered unlikely to be compatible, though the possibility cannot be excluded in the absence of an identified bias. If the CDF measurement is removed, the combination yields a mass of mW = 80369.2 ± 13.3 MeV for the CT18 set, while including all measurements results in a mass of mW = 80394.6 ± 11.5 MeV. The former value is consistent with the SM prediction, while the latter value is 2.6σ higher.

Two scenarios

The results of the preliminary combination clearly separate two possible scenarios. In the first, the mW measurements are unbiased and differ due to large fluctuations and the PDF dependence of the W- and Z-boson data. In the second, a bias in one or more of the measurements produces the low compatibility of the measured values. Future measurements will clarify the likelihood of the first scenario, while further studies could identify effect(s) that point to the second scenario. In either case the next milestone will take time due to the exquisite precision that has now been reached, and to the challenges in maintaining analysis teams for the long timescales required to produce a measurement. The W boson’s midlife crisis continues, but with time and effort the golden years will come. We can all look forward to that.

A frog among birds

“Well, Doc, You’re In”: Freeman Dyson’s Journey through the Universe is a biographical account of an epochal theoretical physicist with a mind that was, by any measure, delightful and diverse. It portrays Dyson, a self-described frog among birds, as a one-off synthesis of blitz-spirit Britishness with American space-age can-do. Of the elite cadre of theoretical physicists who ushered in the era of quantum field theory, which dominates theoretical physics to this day, who else would have devoted so much time and sincere scientific energy to the development of a gargantuan spacecraft, powered by nuclear bombs periodically dropped beneath it, that would take human civilisation beyond our solar system!

Written by colleagues, friends, family members and selected experts, each chapter is more of a self-contained monograph, a link in a chain, than a portion of the continuous thread one would find in a more traditional single-author biography. What is lost as a result of this format, beyond the occasional repetition of key life moments, is more than compensated for by the richness of perspective and a certain ease of pick-up, put-down reading that comes from the narrational independence of the various chapters. If it has been a while since the reader last had a moment to pick the book up, not much will be lost when they delve back in.

The early years of 20th-century theoretical physicists and mathematicians of Dyson’s calibre are often interwoven with events surrounding the development of nuclear weapons or codebreaking. Dyson’s story as told in “Well, Doc, You’re In” stands apart in this respect, as he spent the war years working in Bomber Command for the Royal Air Force in England. His reflections on his own experience mirror, in some ways, the sentiments of future colleagues involved in the Manhattan Project: “Through science and technology, evil is organised bureaucratically so that no individual is responsible for what happens.”

“Well, Doc, You’re In”: Freeman Dyson’s Journey through the Universe

The following years spent wrestling with quantum electrodynamics (QED) at Cornell make for lighter reading. The scattered remarks from eminent theorists such as Bethe and Oppenheimer on Dyson and his work, as well as from Dyson on his eminent colleagues, bring a sense of reality to the unfolding developments that would ultimately become a momentous leap forward in the understanding of quantum field theory.

“The preservation and fostering of diversity is the great goal that I would like to see embodied in our ethical principles and in our political actions,” said Dyson. Following his deep contributions to QED, Dyson embraced this spirit of diversity and jumped from scientific pond to pond in search of progress, be it the stability of matter or the properties of random matrices. It is interesting to learn, with hindsight, of the questions that gripped Dyson’s imagination at a time when particle physics was entering a golden era. As a reader one almost feels the contrarian spirit, or rebellion, in these choices as they are laid out against this backdrop.

Although scientifically Dyson may have been a frog, jumping from pond to pond, professionally he was anything but. Aged 29 he moved to the Institute for Advanced Study in Princeton, where he stayed to the end. In around 1960 Dyson joined the JASON defence advisory group, whose scientists advise the US government on scientific matters, and he remained a member until his passing in 2020. This consistent backdrop makes for a biographical story that is essentially free from the distractions of the professional manoeuvring that typically punctuates biographies of great scientists. A positive consequence is that the various authors, and the reader, may focus that bit more keenly on the workings of Dyson’s mind.

For as long as graduate students learn quantum field theory, they will encounter Dyson. Sci-fi fans will recognise the Dyson Sphere (a structure surrounding a star to allow advanced civilisations to harvest more energy) featured in Star Trek, or note the name of the Orion III Spaceplane in 2001: A Space Odyssey. Dyson’s legacy is as vast and diverse as the world his mind explored and “Well, Doc, You’re In” is a fascinating glimpse within.

Stanley Wojcicki 1937–2023

Stanley G Wojcicki, a long-time leader in experimental particle physics, died on 31 May at the age of 86. Stan made a number of seminal contributions to the field, beginning, as a graduate student at Berkeley, with the discovery of many short-lived particles. He quickly rose to prominence, becoming an expert on K-meson physics, where he made a series of investigations and discoveries that played an important role in understanding the structure of the Standard Model.

Stan hardly had a typical childhood. Born in Warsaw, Poland, he spent a youth dominated by World War II, which caused great hardships, including the separation of his family for several years, followed by a difficult life under the communist regime. Eventually he, his mother and his brother managed to escape to Sweden, where they lived as refugees for eight months before finally being able to move to the US. Stan’s father remained in Poland, where he was jailed for five years, and never received a visa to rejoin his family.

From a very young age, Stan was an exceptional student who loved and excelled at mathematics. He continued to stand out in school in his new country and gained admission to Harvard University as an undergraduate, majoring in physics. He went on to Berkeley as a graduate student in physics, which is where he and I met and became lifelong friends, colleagues and sometimes collaborators.

After receiving his PhD in 1962, Stan spent a year at CERN and the Collège de France in Paris (1964–1965). He returned frequently to CERN, including for a period supported by a John Simon Guggenheim Fellowship in 1973–1974. During that year, Stan continued his research on the excited states of hadrons made from combinations of quarks. He maintained his close association with CERN, once again as a scientific associate in 1980–1981, and for shorter periods throughout his career.

Stan was appointed assistant professor in the physics department at Stanford in 1966, advanced to full professor in 1974, served as chair from 1982–1985 and stayed on the faculty until his retirement in 2015. He characteristically became interested in the newest and most exciting areas in the field, and was quick to join the design effort for the Superconducting Super Collider (SSC). He served as deputy director of the SSC central design group in Berkeley and was deeply involved in proposing and obtaining approval for the construction of the SSC in Texas. He continued to be active in many aspects of the SSC until it was cancelled by Congress in 1993, and wrote an insightful two-volume history of the project.

After the SSC disappointment, Stan characteristically bounced back to take on a newly emerging area of particle physics: neutrino masses and oscillations. He proposed and led the MINOS experiment, a long-baseline neutrino experiment that sent a beam of neutrinos through a near detector at Fermilab to a far detector, 735 km away, in a deep mine in Minnesota. MINOS was very important in providing evidence confirming the observations of atmospheric neutrino oscillations from Super-Kamiokande in Japan.

Stan received many honours, including the Pontecorvo Prize in 2011 and the APS Panofsky Prize in 2015 for his neutrino work. He met his wife, Esther, while he was a PhD student at Berkeley. They married in 1961 and had three daughters of whom he was very proud, Susan (CEO of YouTube), Janet (professor of paediatrics at UCSF Medical School) and Anne (founder and CEO of 23andMe). He will be very much missed by his many long-time friends and colleagues. 

Gravitational waves: a golden era

An array of pulsars

The existence of dark matter in the universe is one of the most important puzzles in fundamental physics. It is inferred solely by means of its gravitational effects, such as on stellar motions in galaxies or on the expansion history of the universe. Meanwhile, non-gravitational interactions between dark matter and the known particles described by the Standard Model have not been detected, despite strenuous and advanced experimental efforts.

Such a situation suggests that new particles and fields, possibly similar to those of the Standard Model, may likewise have been present across the entire cosmological history of our universe, but with only very tiny interactions with visible matter. This intriguing idea is often referred to as the paradigm of dark sectors and is made even more compelling by the lack of new particles seen at the LHC and laboratory experiments so far.

Dark universe

Cosmological observations, above all those of the cosmic microwave background (CMB), currently represent the main tool to test such a paradigm. The primary example is that of dark radiation, i.e. putative new dark particles that, unlike dark matter, behave as relativistic species at the energy scales probed by the CMB. The most recent data collected by the Planck satellite constrain such dark particles to make up at most around 30% of the energy of a single neutrino species at the recombination epoch (when atoms formed and the universe became transparent, around 380,000 years after the Big Bang).

While such observations represent a significant advance, the early universe was characterised by temperatures in the MeV range and above (enabling nucleosynthesis), possibly as large as 10¹⁶ GeV. Some of these temperatures correspond to energy scales that cannot be probed via the CMB, nor directly with current or prospective particle colliders. Even if new particles had significant interactions with SM particles at such high temperatures, any electromagnetic radiation in the hot universe was continuously scattered off matter (electrons), making it impossible for any light from such early epochs to reach our detectors today. The question then arises: is there another channel to probe the existence of dark sectors in the early universe?

We are entering a golden era of GW observations across the frequency spectrum

For more than a century, a different signature of gravitational interactions has been known to be possible: waves, analogous to those of the electromagnetic field, carrying fluctuations of gravitational fields. The experimental effort to detect gravitational waves (GWs) had a first amazing success in 2015, when waves generated by the merger of two black holes were first detected by the LIGO and Virgo collaborations, using the two LIGO interferometers in the US.

Now, the GW community is on the cusp of another incredible milestone: the detection of a GW background, generated by all sources of GWs across the history of our universe. Recently, based on more than a decade of observations, several networks of radio telescopes called pulsar timing arrays (PTAs) – NANOGrav in North America, EPTA in Europe, PPTA in Australia and CPTA in China – produced tentative evidence for such a stochastic GW background based on the influence of GWs on pulsars (see “Hints of low-frequency gravitational waves found” and “Clocking gravity” image). Together with next-generation interferometer-based GW detectors such as LISA and the Einstein Telescope, and new theoretical ideas from particle physics, the observations suggest that we are entering an exciting new era of observational cosmology that connects the smallest and largest scales. 

Particle physics and the GW background

Once produced, GWs interact only very weakly with any other component of the universe, even at the high temperatures present at the earliest times. Therefore, whereas photons can tell us about the state of the universe at recombination, the GW background is potentially a direct probe of high-energy processes in the very early universe. Unlike GWs that reach Earth from the locations of binary systems of compact objects, the GW background is expected to be mostly isotropic in the sky, very much like the CMB. Furthermore, rather than being a transient signal, it should persist in the sensitivity bands of GW detectors, similar to a noise component but with peculiarities that are expected to make a detection possible. 

Colliding spherical pressure waves

As early as 1918, Einstein quantified the power emitted in GWs by a generic source. Whereas electromagnetic radiation is sourced, at leading order, by the time variation of the dipole moment of a charge distribution, the power emitted in GWs is proportional to the square of the third time derivative of the quadrupole moment of the mass–energy distribution of the source. Therefore, the two essential conditions for a source to emit GWs are that it should be sufficiently far from spherical symmetry and that its mass distribution should change sufficiently quickly with time.
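
In its standard form, the quadrupole formula reads (with Q_{ij} the traceless mass quadrupole moment of the source and the angle brackets denoting an average over several wave periods):

```latex
% Einstein's quadrupole formula for the power radiated in gravitational waves.
\[
  P_{\mathrm{GW}} = \frac{G}{5 c^5}
  \left\langle \dddot{Q}_{ij}\, \dddot{Q}^{ij} \right\rangle
\]
```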

What possible particle-physics sources would satisfy these conditions? One of the most thoroughly studied phenomena as a source of GWs is the occurrence of a phase transition, typically associated with the breaking of a fundamental symmetry. Specifically, only those phase transitions that proceed via the nucleation, expansion and collision of cosmic bubbles (analogous to the phase transition of liquid water to vapour) can generate a significant amount of GWs (see “Ringing out” image). Inside any such bubble the universe is already in the broken-symmetry phase, whereas beyond the bubble walls the symmetry is still unbroken. Eventually, the state of lowest energy inside the bubbles prevails via their rapid expansion and collisions, which fill up the universe. Even though such bubbles may initially be highly spherical, once they collide the energy distribution is far from being so, while their rapid expansion provides a time variation.  

The occurrence of two phase transitions is in fact predicted by the Standard Model (SM): one related to the spontaneous breaking of the electroweak SU(2) × U(1) symmetry, the other associated with colour confinement and thus the formation of hadronic states. However, dedicated analytical and numerical studies in the 1990s and 2000s concluded that the SM phase transitions are not expected to be of first order in the early universe. Rather, they are expected to proceed smoothly, without any violent release of energy to source GWs. 

Sensitivity of current and future GW observatories

This leads to a striking conclusion: a detection of the GW background would provide evidence for physics beyond the SM – that is, if its origin can be attributed to processes occurring in the early universe. This caveat is crucial, since astrophysical processes in the late universe also contribute to a stochastic GW background. 

In order to claim a particle-physics interpretation for any stochastic GW background, it is thus necessary to appropriately account for astrophysical sources and characterise the expected (spectral) shape of the GW signal from early-universe sources of interest. These tasks are being undertaken by a diverse community of cosmologists, particle physicists and astrophysicists at research institutions all around the world, including in the cosmology group in the CERN TH department.

Precise probing

For particle physicists and cosmologists, it is customary to express the strength of a given stochastic GW signal in terms of the fraction of the energy (density) of the universe today carried by those GWs. The CMB already constrains this “relic abundance” to be less than roughly 10% of ordinary radiation, or about one millionth of that of the dominant component of the universe today, dark energy. Remarkably, current GW detectors are already able to probe stochastic GWs that produce only one billionth of the energy density of the universe.
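
This “relic abundance” is conventionally defined per logarithmic frequency interval, normalised to the critical energy density ρ_c of the universe today:

```latex
% Stochastic GW background strength as a fraction of the critical density,
% per logarithmic frequency interval.
\[
  \Omega_{\mathrm{GW}}(f) = \frac{1}{\rho_c}\,
  \frac{\mathrm{d}\rho_{\mathrm{GW}}}{\mathrm{d}\ln f}
\]
```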

Generally, the stochastic GW signal from a given source extends over a broad frequency range. The spectrum from many early-universe sources typically peaks at a frequency linked to the expansion rate at the time the source was active, redshifted to today. Under standard assumptions, the early universe was dominated by radiation and the peak frequency of the GW signal increases linearly with the temperature. For instance, the GW frequency range in which LIGO/Virgo/KAGRA are most sensitive (10–100 Hz) corresponds to sources that were active when the universe was as hot as 10⁸ GeV – many orders of magnitude beyond the energies probed at the LHC. The other currently operating GW observatories, PTAs, are sensitive to GWs of much smaller frequencies, around 10⁻⁹–10⁻⁷ Hz, which correspond to temperatures around 10 MeV to 1 GeV (see “Broadband” figure). These are the temperatures at which the QCD phase transition occurred. While, as mentioned above, a signal from the latter is not expected, dark sectors may be active at those temperatures and source a GW signal. In the near (and long-term) future, it is conceivable that new GW observatories will allow us to probe the stochastic GW background across the entire range of frequencies from nHz to 100 Hz.
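
A commonly used rule of thumb maps the temperature at emission to the peak frequency observed today; the short script below is a sketch under the assumptions of a radiation-dominated universe and a source operating at roughly the horizon scale, with g_* the number of relativistic degrees of freedom (sub-horizon sources peak at correspondingly higher frequencies). It reproduces the correspondences quoted in the text.

```python
# Rule-of-thumb mapping between the temperature of the early universe when a
# GW source was active and the (redshifted) peak frequency observed today,
# assuming radiation domination and a roughly horizon-sized source.

def peak_frequency_hz(temperature_gev: float, g_star: float = 100.0) -> float:
    """Approximate present-day peak frequency for a horizon-scale source."""
    return 1.65e-7 * temperature_gev * (g_star / 100.0) ** (1.0 / 6.0)

for label, t_gev in [("T = 10 MeV", 0.01),
                     ("T = 1 GeV", 1.0),
                     ("T = 1e8 GeV", 1e8)]:
    print(f"{label:12s} -> f_peak ~ {peak_frequency_hz(t_gev):.2e} Hz")
# The first two fall in the PTA band; the last (~17 Hz) falls in the
# LIGO/Virgo/KAGRA band, matching the ranges quoted above.
```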

Laser-interferometer GW detectors on Earth and in space

Together with bubble collisions, another source of peaked GW spectra due to symmetry breaking in the early universe is the annihilation of topological defects, such as domain walls separating different regions of the universe (in this case the corresponding symmetry is a discrete symmetry). Violent (so-called resonant) decays of new particles, as predicted by some early-universe scenarios, may also contribute strongly to the GW background (albeit possibly only at very large frequencies, beyond the sensitivity reach of current and forecast detectors). Yet another discoverable phenomenon is the collapse of large energy (density) fluctuations in the early universe, as is predicted to occur in scenarios where the dark matter is made of primordial black holes.

On the other hand, particle-physics sources can also be characterised by very broad GW spectra without large peaks. The most important such source is the inflationary mechanism: during this putative phase of exponential expansion of the universe, GWs would be produced from quantum fluctuations of space–time, stretched by inflation and continuously re-entering the Hubble horizon (i.e. the causally connected part of the universe at any given time) throughout the cosmological evolution. The amount of such primordial GWs is expected to be small. Nonetheless, a broad class of inflationary models predicts GWs with frequencies and amplitudes such that they can be discovered by future measurements of the CMB. In fact, it is precisely via these measurements that Planck and BICEP/Keck Array have been able to strongly constrain the simplest models of inflation. The GWs that can be discovered via the CMB would have very small frequencies (around 10⁻¹⁷ Hz, corresponding to ~eV temperatures). The full spectrum would nonetheless extend to large frequencies, only with such a small amplitude that detection by GW observatories would be unfeasible (except perhaps for the futuristic Big Bang Observer – a proposed successor to the Laser Interferometer Space Antenna, LISA, currently being prepared by the European Space Agency).

Feeling blue

Certain classes of inflationary models could also lead to “blue-tilted” (i.e. rising with frequency) spectra, which may then be observable at GW observatories. For instance, this can occur in models where the inflaton is a so-called axion field (a generalisation of the axion predicted by the Peccei–Quinn mechanism in QCD). Such scenarios naturally produce gauge fields during inflation, which can themselves act as sources of GWs, with possible peculiar properties such as circular polarisation and non-Gaussianities. A final phenomenon that would generate a very broad GW spectrum, unrelated to inflation, is the existence of cosmic strings. These one-dimensional defects can originate, for instance, from the breaking of a global (or gauge) rotation symmetry and persist through cosmological history, analogous to cracks that appear in an ice crystal after a phase transition from water.

Astrophysical contributions to the stochastic GW background are certainly expected from binary black-hole systems. At the frequencies relevant for LIGO/Virgo/KAGRA, such a background would be due to black holes with masses of tens of solar masses, whereas in the PTA sensitivity range the background is sourced by binaries of supermassive black holes (with masses up to millions of solar masses), such as those that are believed to exist at the centres of galaxies. The current PTA indications of a stochastic GW background require detailed analyses to understand whether the signal is due to a particle-physics or an astrophysical source. A smoking gun for the latter origin would be the observation of significant anisotropies in the signal, as it would come from regions where more binary black holes are clustered.

Polarised microwave emission from the CMB

We are entering a golden era of GW observations across the frequency spectrum, and thus in exploring particle physics beyond the reach of colliders and astrophysical phenomena at unprecedented energies. The first direct detection of GWs by LIGO in September 2015 was one of the greatest scientific achievements of the 21st century. The first generation of laser-interferometric detectors (GEO600, LIGO, Virgo and TAMA) did not detect any signal and only constrained the gravitational-wave emission from several sources. The second generation (Advanced LIGO and Advanced Virgo) made the first direct detection and has observed almost 100 GW signals to date. The underground Kamioka Gravitational Wave Detector (KAGRA) in Japan joined the LIGO–Virgo observations in 2020. As of 2021, the LIGO–Virgo–KAGRA collaboration is working to establish the International Gravitational Wave Network, to facilitate coordination among ground-based GW observatories across the globe. In the near future, LIGO India (IndIGO) will also join the network of terrestrial detectors.

Despite being sensitive to changes in arm length of the order of 10⁻¹⁸ m, the LIGO, Virgo and KAGRA detectors are not sensitive enough for precise astronomical studies of GW sources. This has motivated a new generation of detectors. The Einstein Telescope (ET) is a design concept for a European third-generation underground GW detector that will be 10 times more sensitive than the current advanced instruments (see “Joined-up thinking in vacuum science”). On Earth, however, gravitational waves with frequencies lower than 1 Hz are inaccessible due to terrestrial gravity-gradient noise and limitations on the size of the device. Space-based detectors, on the other hand, can access frequencies as low as 10⁻⁴ Hz. Several space-based GW observatories are proposed that will ultimately form a network of laser interferometers in space. They include LISA (planned to launch around 2035), the Deci-hertz Interferometer Gravitational Wave Observatory (DECIGO) led by the Japan Aerospace Exploration Agency and two Chinese detectors, TianQin and Taiji (see “In synch” figure).

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders

A new kid on the block, atom interferometry, offers a complementary approach to laser interferometry for the detection of GWs. Two atom interferometers coherently manipulated by the same light field can be used as a differential phase meter tracking the distance traversed by the light field. Several terrestrial cold-atom experiments are under preparation, such as MIGA, ZAIGA and MAGIS, or being proposed, such as ELGAR and AION. These experiments will provide measurements in the mid-frequency range of 10⁻²–1 Hz. Moreover, a space-based cold-atom GW detector called the Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) is expected to probe GWs in a much broader frequency range (10⁻⁷–10 Hz) compared to LISA.

Astrometry provides yet another powerful way to explore GWs in a regime that is not accessible to other probes, i.e. ultra-low frequencies of 10 nHz or less. Here, the passage of a GW over the Earth–star system induces a deflection in the apparent position of a star, which makes it possible to turn astrometric data into a nHz GW observatory. Finally, CMB missions have a key role to play in searching for possible imprints on the polarisation of CMB photons caused by a stochastic background of primordial GWs (see “Acoustic imprints” image). The wavelength of such primordial GWs can be as large as the size of our horizon today, associated with frequencies as low as 10⁻¹⁷ Hz. Whereas current CMB missions provide upper bounds on GWs, future missions such as the ground-based CMB-S4 (CERN Courier March/April 2022 p34) and space-based LiteBIRD observatories will improve this measurement to either detect primordial GWs or place yet stronger upper bounds on their existence.

Outlook 

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders, as well as for understanding astrophysical phenomena in extreme regimes. Several projects are planned and proposed to detect GWs across more than 20 decades of frequency. Such a wealth of data will provide a great opportunity to explore the universe in new ways during the next decades and open a wide window on possible physics beyond the SM.

Report explores quantum computing in particle physics

A quantum computer built by IBM

Researchers from CERN, DESY, IBM Quantum and more than 30 other organisations have published a white paper identifying activities in particle physics that could benefit from quantum-computing technologies. Posted on arXiv on 6 July, the 40-page paper is the outcome of a working group set up at the QT4HEP conference held at CERN last November, which identified topics in theoretical and experimental high-energy physics where quantum algorithms may produce significant insights and results that are very hard, or even impossible, to obtain with classical computers.

Combining quantum theory and information theory, quantum computing is natively aligned with the underlying physics of the Standard Model. Quantum bits, or qubits, are the computational representation of a state that can be entangled and brought into superposition. Unlike classical bits, which take only the discrete values 0 and 1, a qubit exists in a superposition of both states, with amplitudes that determine the probabilities of obtaining 0 or 1 when it is measured. Quantum-computing algorithms can exploit this to achieve computational advantages in terms of speed and accuracy, especially for processes that are hard to treat with classical methods.
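
As a minimal illustration of this point (a toy example using only NumPy, not one of the quantum-computing frameworks discussed in the paper), a single qubit can be represented by two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
# Toy illustration of a single qubit: two complex amplitudes, Born-rule
# probabilities, and simulated measurements (NumPy only, for illustration).
import numpy as np

rng = np.random.default_rng(7)

# An equal superposition of |0> and |1> (the state produced by a Hadamard gate).
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(state) ** 2            # Born rule: |amplitude|^2
samples = rng.choice([0, 1], size=10_000, p=probabilities)

print("P(0), P(1) =", probabilities)          # [0.5, 0.5]
print("measured frequencies:", np.bincount(samples) / samples.size)
# Each individual measurement still yields a discrete 0 or 1; the quantum
# information lives in the amplitudes before measurement.
```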

“Quantum computing is very promising, but not every problem in particle physics is suited to this model of computing,” says Alberto Di Meglio, head of IT Innovation at CERN and one of the white paper’s lead authors alongside Karl Jansen of DESY and Ivano Tavernelli of IBM Quantum. “It’s important to ensure that we are ready and that we can accurately identify the areas where these technologies have the potential to be most useful.” 

Neutrino oscillations in extreme environments, such as supernovae, are one promising example given. In the context of quantum computing, neutrino oscillations can be considered strongly coupled many-body systems that are driven by the weak interaction. Even a two-flavour model of oscillating neutrinos is almost impossible to simulate exactly for classical computers, making this problem well suited for quantum computing. The report also identifies lattice-gauge theory and quantum field theory in general as candidates that could enjoy a quantum advantage. The considered applications include quantum dynamics, hybrid quantum/classical algorithms for static problems in lattice gauge theory, optimisation and classification problems. 

With quantum computing we address problems in those areas that are very hard to tackle with classical methods

In experimental physics, potential applications range from simulations to data analysis and include jet physics, track reconstruction and algorithms used to simulate the detector performance. One key advantage here is the speed-up in processing time compared to classical algorithms. Quantum-computing algorithms might also be better at finding correlations in data, while Monte Carlo simulations could benefit from random numbers generated by a quantum computer.

“With quantum computing we address problems in those areas that are very hard – or even impossible – to tackle with classical methods,” says Karl Jansen (DESY). “We can now explore physical systems to which we still do not have access.” 

The working group will meet again at CERN for a special workshop on 16 and 17 November, immediately before the Quantum Techniques in Machine Learning conference from 19 to 24 November.

Hints of low-frequency gravitational waves found

Since their direct discovery in 2015 by the LIGO and Virgo collaborations, gravitational waves (GWs) have opened a new view on extreme cosmic events such as the merging of black holes. These events typically generate gravitational waves with frequencies of a few tens to a few thousand hertz, within reach of ground-based detectors. But the universe is also expected to be pervaded by low-frequency GWs in the nHz range, produced by the superposition of astrophysical sources and possibly by high-energy processes at the very earliest times (see “Gravitational waves: a golden era”).

Announced in late June, news that pulsar timing arrays (PTAs), which infer the presence of GWs via detailed measurements of the radio emission from pulsars, had seen the first evidence for such a stochastic GW background was therefore met with delight by particle physicists and cosmologists alike. “For me it feels that the first gravitational wave observed by LIGO is like seeing a star for the first time, and now it’s like seeing the cosmic microwave background for the first time,” says CERN theorist Valerie Domcke.

Clocking signals

Whereas the laser interferometers LIGO and Virgo detect relative length changes in two perpendicular arms, PTAs clock the highly periodic signals from millisecond pulsars (rapidly rotating neutron stars whose beamed radio emission sweeps across Earth’s line of sight). A passing GW perturbs spacetime and induces a small delay in the observed arrival time of the pulses. By observing a large sample of pulsars over a long period and correlating the signals, PTAs effectively turn the galaxy into a low-frequency GW observatory. The challenge is to pick out the characteristic signature of this stochastic background, which is expected to induce “red noise” (meaning there should be greater power at lower fluctuation frequencies) in the differences between the measured arrival times of the pulsars and the timing-model predictions.

The smoking gun of a nHz GW detection is a measurement of the so-called Hellings–Downs (HD) curve predicted by general relativity. This curve gives the expected arrival-time correlation as a function of the angular separation between pairs of pulsars; the correlation varies with angle because the quadrupolar nature of GWs introduces directionally dependent timing shifts.
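
For an isotropic, unpolarised GW background in general relativity, the expected correlation between two distinct pulsars separated by an angle θ on the sky takes the standard Hellings–Downs form, sketched below.

```python
# Hellings-Downs correlation curve for pairs of distinct pulsars separated by
# an angle theta, assuming an isotropic, unpolarised GW background in GR.
import numpy as np

def hellings_downs(theta_rad: np.ndarray) -> np.ndarray:
    """Expected timing-residual correlation versus angular separation."""
    x = (1.0 - np.cos(theta_rad)) / 2.0
    # x*log(x) -> 0 as the separation goes to zero; avoid log(0) warnings.
    xlogx = np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
    return 0.5 + 1.5 * xlogx - 0.25 * x

angles_deg = np.array([0, 30, 60, 82, 120, 180])
for deg, corr in zip(angles_deg, hellings_downs(np.radians(angles_deg))):
    print(f"theta = {deg:3d} deg -> correlation ~ {corr:+.2f}")
# The curve starts at +0.5, dips to about -0.15 near 82 degrees and rises
# back to +0.25 at 180 degrees - the quadrupolar signature PTAs search for.
```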

Following its first hints of these elusive correlations in 2020, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has released the results of its 15-year dataset. Based on observations of 68 millisecond pulsars distributed over half the galaxy (21 more than in the last release) by the Arecibo Observatory, the Green Bank Telescope and the Very Large Array, the team finds 4σ evidence for HD correlations in both frequentist and Bayesian analyses.

We are opening a new window in the GW universe, where we can observe unique sources and phenomena

A similar signal is seen by the independent European PTA, and the results are also supported by data from the Parkes PTA and others. “Once the partner collaborations of the International Pulsar Timing Array (which includes NANOGrav, the European, Parkes and Indian PTAs) combine these newest datasets, this may put us over the 5σ threshold,” says NANOGrav spokesperson Stephen Taylor. “We expect that it will take us about a year to 18 months to finalise.”

It will take longer to decipher the precise origin of the low-frequency PTA signals. If the background is anisotropic, astrophysical sources such as supermassive black-hole binaries would be the likely origin and one could therefore learn about their environment, population and how galaxies merge. Phase transitions or other cosmological sources tend to lead to an isotropic background. Since the shape of the GW spectrum encodes information about the source, with more data it should become possible to disentangle the signatures of the two potential sources. PTAs and current, as well as next-generation, GW detectors such as LISA and the Einstein Telescope complement each other as they cover different frequency ranges. For instance, LISA could detect the same supermassive black-hole binaries as PTAs but at different times during and after their merger.

“We are opening a new window in the gravitational-wave universe in the nanohertz regime, where we can observe unique sources and phenomena,” says European PTA collaborator Caterina Tiburzi of the Cagliari Observatory in Sardinia.

Muon g-2 update sets up showdown with theory

Muon g-2 measurement

On 10 August, the Muon g-2 collaboration at Fermilab presented its latest measurement of the anomalous magnetic moment of the muon, aμ. Combining data from Run 1 to Run 3, the collaboration found aμ = 116 592 055 (24) × 10⁻¹¹, representing a factor-of-two improvement on the precision of its initial 2021 result. The experimental world average for aμ now stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in 2020. However, calculations based on a different theoretical approach (lattice QCD) and a recent analysis of e⁺e⁻ data that feeds into the prediction are in tension with the 2020 calculation, and more work is needed before the discrepancy is understood.

The anomalous magnetic moment of the muon, aμ = (g−2)/2 (where g is the muon’s g-factor), quantifies the deviation of the muon’s magnetic moment from the Dirac prediction (g = 2) caused by the contributions of virtual particles. This makes aμ, one of the most precisely calculated and measured quantities in physics, an ideal testbed for physics beyond the SM. To measure it, a muon beam is sent into a superconducting storage ring reused from the former g-2 experiment at Brookhaven National Laboratory. Initially aligned with the beam direction, the muon spins precess as the muons circulate in the magnetic field. Detectors located along the ring’s inner circumference allow the precession rate, and thus aμ, to be determined. Many improvements to the setup have been made since the first run, including better running conditions, more stable beams and an improved knowledge of the magnetic field.
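To make the measurement principle explicit, a standard textbook relation (not a formula quoted from the collaboration’s paper) links the observed precession to aμ. At the “magic” muon momentum of about 3.09 GeV/c used in the ring, electric-field focusing terms cancel and the anomalous precession frequency, the difference between the spin-precession and cyclotron frequencies, reduces to

$$
\omega_a = \omega_s - \omega_c \simeq a_\mu\,\frac{eB}{m_\mu},
$$

so that measuring $\omega_a$ together with the magnetic field $B$, mapped with nuclear-magnetic-resonance probes, yields aμ directly.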

The new result is based on data taken in 2019 and 2020, and has four times the statistics of the 2021 result. The collaboration also reduced the systematic uncertainty to a level better than its initial goal. Currently, about 25% of the total dataset (Run 1 to Run 6) has been analysed. The collaboration plans to publish its final results in 2025, targeting a precision of 0.14 ppm compared to the current 0.2 ppm. “We have moved the accuracy bar of this experiment one step further and now we are waiting for the theory to complete the calculations and cross-checks necessary to match the experimental accuracy,” explains collaboration co-spokesperson Graziano Venanzoni of INFN Pisa and the University of Liverpool. “A huge experimental and theoretical effort is going on, which makes us confident that theory prediction will be in time for the final experimental result from FNAL in a few years from now.”

The theoretical picture is foggy. The SM prediction for the anomalous magnetic moment receives contributions from the electromagnetic, electroweak and strong interactions. While the former two can be computed to high precision in perturbation theory, it is only possible to compute the latter analytically in certain kinematic regimes. Contributions from hadronic vacuum polarisation and hadronic light-by-light scattering dominate the overall theoretical uncertainty on aμ at 83% and 17%, respectively.

To date, the experimental results are confronted with two theory predictions: one by the Muon g-2 Theory Initiative based on the data-driven “R-ratio” method, which relies on measured hadronic cross sections, and one by the Budapest–Marseille–Wuppertal (BMW) collaboration based on simulations of lattice QCD and QED. The latter significantly reduces the discrepancy between the theoretical and measured values. Adding a further puzzle, a recently published hadronic cross-section measurement by the CMD-3 collaboration, which contrasts with all other experiments, narrows the gap between the Muon g-2 Theory Initiative and BMW predictions (see p19).
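For context, the data-driven evaluation of the dominant leading-order hadronic vacuum-polarisation term rests on a dispersion relation of the following standard form (shown here for illustration, not taken from either prediction paper):

$$
a_\mu^{\rm HVP,\,LO} = \frac{\alpha^{2}}{3\pi^{2}}\int_{s_{\rm thr}}^{\infty}\frac{{\rm d}s}{s}\,K(s)\,R(s),
\qquad
R(s)=\frac{\sigma(e^{+}e^{-}\to{\rm hadrons})}{\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})},
$$

where $K(s)$ is a known QED kernel that heavily weights the low-energy region. This is why a single low-energy channel, such as the e⁺e⁻ → π⁺π⁻ cross section measured by CMD-3, can shift the prediction appreciably.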

“This new result by the Fermilab Muon g-2 experiment is a true milestone in the precision study of the Standard Model,” says lattice gauge theorist Andreas Jüttner of CERN and the University of Southampton. “This is really exciting – we are now faced with getting to the roots of various tensions between experimental and theoretical findings.”

Milos Lokajicek 1952–2023

Milos Lokajicek, a long-time employee of the division of elementary particle physics of the Institute of Physics of the Czech Academy of Sciences, passed away in June at the age of 70. Milos was involved in almost all the key experiments in which the Czech particle-physics community participated, especially in the collection and processing of experimental data.

Milos began his career in the 1980s on an experiment at the Serpukhov accelerator in the former USSR, investigating proton–antiproton and later deuteron–antideuteron collisions in the Ludmila hydrogen bubble chamber. After obtaining his PhD in 1984, while still at JINR Dubna, he was also involved in the DELPHI experiment at LEP, which played a key role in the Czech Republic’s entry into CERN in 1993. 

After returning to the Institute of Physics, he was instrumental in initiating the participation of Czech physicists in the ATLAS experiment at the LHC, whose construction was approved in 1994. Together with other staff of the Institute of Physics and colleagues from Charles University, he initiated the construction of the ATLAS TileCal hadron calorimeter and built a laboratory for the assembly and testing of the calorimeter submodules in the former garage of the Institute of Physics. 

From his participation in the Ludmila and DELPHI experiments onwards, Milos focused on data processing. Already in the mid-1990s he had built a computer farm for data processing and modelling at the Institute of Physics, which today serves several large experiments. 

In 1997, together with colleagues from Charles University and the Czech Technical University, he initiated the group’s participation in the D0 experiment at the Tevatron at Fermilab. Participation in this experiment was important for training the young physicists who would go on to work on ATLAS, whose construction was beginning at that time. After the Tevatron was decommissioned in 2011, Milos obtained funding in 2016 for the Fermilab–CZ research infrastructure, with a gradual transition to the neutrino-physics programme. He worked on the NOvA experiment and also brought his experience and contacts at CERN to bear on the future DUNE experiment.

The reach of Milos’s work extended far beyond his home institute. Within the Czech Republic, he coordinated the activities of Czech institutions at Fermilab and drove the development of data processing. He was also a long-standing member of the Committee for Cooperation of the Czech Republic with CERN. His international reputation is documented by numerous memberships of steering committees of experiments and projects, and by the many conferences he co-organised, among the most important being ACAT 2014, CHEP 2009, DØ Week 2008 and ATLAS Week 2003.

Milos’s collegiality and friendship will be missed by all of us.

James Hartle 1939–2023

Jim Hartle

James Burkett Hartle passed away on 17 May in Zurich at the age of 83. Known as the father of quantum cosmology, Jim made landmark contributions to our understanding of the origin of the universe.

Born in Baltimore, Maryland, Jim obtained his undergraduate degree in physics at Princeton University, where he was mentored by John Wheeler. He attended graduate school at Caltech where he worked under Murray Gell-Mann, earning his PhD in 1964 with a dissertation entitled The complex angular momentum in three-particle potential scattering.

After graduating, Jim briefly taught at Princeton before joining the faculty at the University of California, Santa Barbara (UCSB) in 1966. Excited by the discoveries of pulsars, quasars and the cosmic microwave background radiation, Jim turned away from particle physics. In the late 1960s he wrote a series of influential papers, one with Kip Thorne, on the dynamics of rotating neutron stars. The pair organised regular gatherings between their research groups, which turned into the Pacific Coast Gravity Meetings that still run today.

In 1971 Jim used a Sloan Fellowship to go to the University of Cambridge, where he was immersed in the emerging fields of relativistic astrophysics and cosmology. There he met Stephen Hawking, with whom he developed a remarkable long-term collaboration. Two of their papers became classics: one, in 1976, introduced the Hartle–Hawking quantum state for matter outside a black hole, which is fundamental to black-hole thermodynamics and inspired the so-called Euclidean approach to quantum gravity; the other, in 1983, put forward the Hartle–Hawking “no-boundary” wave function of the universe, showing for the first time how the conditions at the Big Bang could be determined by physical theory.

Except for a brief appointment at the University of Chicago, Jim spent his entire career at UCSB, an environment he found congenial, supportive and inspiring. Jim was a wise and caring mentor to countless young scientists and, though reluctant to venture into the public arena, he also did much to forge a strong physics community. In 1979 he cofounded the Institute for Theoretical Physics (now the Kavli Institute for Theoretical Physics) at Santa Barbara, a mecca for physicists ever since.

The Hartle–Hawking wave function not only revolutionised quantum cosmology but also raised tantalising new questions. Jim began to think more deeply about what it entails to apply quantum mechanics to the universe as a whole. Throughout the 1990s, he and Gell-Mann developed the consistent-histories formulation of quantum mechanics, which clarified the physical nature of the branching process in Everettian quantum mechanics and was sufficiently general to describe single closed systems.

While part of some extraordinary collaborations, Jim was also an independent thinker. About one-third of his publications are beautifully written single-author papers often touching on seemingly intractable questions, far from current fashions and approached with enormous care and open-mindedness. In 2003 Jim published Gravity: An Introduction to Einstein’s General Relativity, a textbook gem with a minimum of new mathematics and a wealth of illustrations that made Einstein’s theory accessible to nearly all physics majors.

Jim retired in 2005, to focus on physics. In 2006 he became an external professor at the Santa Fe Institute, collaborating with Gell-Mann during summer visits. That year also marks the start of my own collaboration with Jim. We took up quantum cosmology again and became immersed in some of the field’s heated debates. Unperturbed, Jim set out the beacons. Often, we would be joined by Hawking, who by then had great difficulties communicating, to flesh out the predictions of the no-boundary wave function. Studying the role of the observer in a quantum universe, we were led to a top-down approach to cosmology in which quantum observations retroactively determine the outcome of the Big Bang, thereby realising an old vision of Wheeler’s.

Few scholars ventured as deeply into the fundamentals of physics as Jim did. A selection of his reflections on the deeper nature of physical theory were published in 2021 in The Quantum Universe: Essays on Quantum Mechanics, Quantum Cosmology, and Physics in General. With characteristic humility, however, Jim reminded us that he didn’t have a philosophical agenda.

Despite suffering the devastations of Alzheimer’s disease, physics remained the driving force in Jim’s life until the very end. Yet his intellectual curiosity stretched much further. He was a polymath and an eclectic reader whose interests ranged from Middle Eastern and Mayan archaeology, to American colonial history, Russian literature and eccentric 19th-century religious female figures. Above all, Jim was an exceptionally generous, wise, humble and gentle man.
