The history of heavy ions

Across a career that accompanied the emergence of heavy-ion physics at CERN, Hans Joachim Specht was often a decisive voice in shaping the experimental agenda and the institutional landscape in Europe. Before he passed away last May, he and fellow editors Sanja Damjanovic (GSI), Volker Metag (University of Giessen) and Jürgen Schukraft (Yale University) finalised the manuscript for Scientist and Visionary – a new biographical work that offers both a retrospective on Specht’s wide-ranging scientific contributions and a snapshot of four decades of evolving research at CERN, GSI and beyond.

Precision and rigour

Specht began his career in nuclear physics under the mentorship of Heinz Maier-Leibnitz at the Technische Universität München. His early work was grounded in precision measurements and experimental rigour. Among his most celebrated early achievements were the discoveries of superheavy quasi-molecules and quasi-atoms, where electrons can be bound for short times to a pair of heavy ions, and nuclear-shape isomerism, where nuclei exhibit long-lived prolate or oblate deformations. These milestones significantly advanced the understanding of atomic and nuclear structure. Around 1979, he shifted focus, joining the emerging efforts at CERN to explore the new frontier of ultra-relativistic heavy-ion collisions, a frontier that had been opened five years earlier at Berkeley by the GSI–LBL collaboration. It was Bill Willis, one of CERN’s early advocates for high-energy nucleus–nucleus collisions, who helped draw Specht into this developing field. That move proved foundational for both Specht and CERN.

From the early 1980s through to 2010, Specht played leading roles in four CERN nuclear-collision experiments: R807/808 at the Intersecting Storage Rings, and HELIOS, CERES/NA45 and NA60 at the Super Proton Synchrotron (SPS). As the book describes, he was instrumental not only in shaping their scientific goals – to search for the highest temperatures of the newly formed hot, dense QCD matter, exceeding the well-established Hagedorn limiting hadron temperature of roughly 160 MeV, and to establish that quasi-thermalised gluon matter and even quark–gluon matter can be created at the SPS – but also in the design and execution of the detectors themselves. At the Universität Heidelberg, he built a heavy-ion research group and became a key voice in securing German support for CERN’s heavy-ion programme.

CERES was Specht’s brainchild, and stood out for its bold concept

As spokesperson of the HELIOS experiment from 1984 onwards, Specht gained recognition as a community leader. But it was CERES, his brainchild, that stood out for its bold concept: to look for thermal dileptons using a hadron-blind detector – a novel idea for heavy-ion collision experiments at the time. Despite considerable scepticism, CERES was approved in 1989 and built in under two years. Its results on sulphur–gold collisions became some of the most cited of the SPS era, offering strong evidence for thermal lepton-pair production, potentially from a quark–gluon plasma – a hot and deconfined state of QCD matter then hypothesised to exist at high temperatures and densities, such as in the early universe. Such high temperatures, above the limiting Hagedorn temperature for hadrons of 160 MeV, had not yet been experimentally demonstrated at LBNL’s Bevalac or Brookhaven’s Alternating Gradient Synchrotron.

Advising ALICE

In the early 1990s, while CERES was being upgraded for lead–gold runs, Specht co-led a European Committee for Future Accelerators working group that laid the groundwork for ALICE, the LHC’s dedicated heavy-ion experiment. His Heidelberg group formally joined ALICE in 1993. Even after becoming scientific director of GSI in 1992, Specht remained closely involved as an advisor.

Specht’s next major CERN project was NA60, which collided a range of nuclei in a fixed-target experiment at the SPS and pushed dilepton measurements to new levels of precision. The NA60 experiment achieved two breakthroughs. The first was a nearly perfect thermal dilepton spectrum, consistent with blackbody radiation at temperatures of 240 to 270 MeV – some hundred MeV above the Hagedorn limiting hadron temperature of 160 MeV. The second was clear evidence of in-medium modification of the ρ meson, caused by its collisions with nucleons and heavy baryon resonances, showing that the medium is not only hot but also has a high net baryon density. These results were widely seen as strong confirmation of the lattice-QCD-inspired quark–gluon plasma hypothesis. Many chapter authors, some of whom were direct collaborators and others long-time interpreters of heavy-ion signals, highlight the impact NA60 had on the field. Earlier claims, based on competing hadronic signals for deconfinement such as strong collective hydrodynamic flow, J/ψ melting and quark recombination, could often also be described by hadronic transport theory without assuming deconfinement.

Hans Joachim Specht: Scientist and Visionary

Specht didn’t limit himself to fundamental research. As director of GSI, he oversaw Europe’s first clinical ion-beam cancer therapy programme using carbon ions. The treatment of the first 450 patients at GSI was a breakthrough moment for medical physics and led to the creation of the Heidelberg Ion Therapy centre, the first hospital-based hadron-therapy centre in Europe. Specht later recalled the first successful treatment as one of the happiest moments of his career. In their essays, Jürgen Debus, Hartmut Eickhoff and Thomas Nilsson outline how Specht steered GSI’s mission into applied research without losing its core scientific momentum.

Specht was also deeply engaged in institutional planning, helping to shape the early stages of the Facility for Antiproton and Ion Research, a new facility to study heavy-ion collisions, which is expected to start operations at GSI at the end of the decade. He also initiated plasma-physics programmes, and contributed to the development of detector technologies used far beyond CERN or GSI. In parallel, he held key roles in international science policy, including within the Nuclear Physics Collaboration Committee, as a founding board member of the European Centre for Theoretical Studies in Nuclear Physics in Trento, and at CERN as chair of the Proton Synchrotron and Synchro-Cyclotron Committee, and as a decade-long member of the Scientific Policy Committee.

The book doesn’t shy away from more unusual chapters either. In later years, Specht developed an interest in the neuroscience of music. Collaborating with Hans Günter Dosch and Peter Schneider, he explored how the brain processes musical structure – an example of his lifelong intellectual curiosity and openness to interdisciplinary thinking.

Importantly, Scientist and Visionary is not a hagiography. It includes a range of perspectives and technical details that will appeal to both physicists who lived through these developments and younger researchers unfamiliar with the history behind today’s infrastructure. At its best, the book serves as a reminder of how much experimental physics depends not just on ideas, but on leadership, timing and institutional navigation.

That being said, it is not a typical scientific biography. It’s more of a curated mosaic, constructed through personal reflections and contextual essays. Readers looking for deep technical analysis will find it in parts, especially in the sections on CERES and NA60, but its real value lies in how it tracks the development of large-scale science across different fields, from high-energy physics to medical applications and beyond.

For those interested in the history of CERN, the rise of heavy-ion physics, or the institutional evolution of European science, this is a valuable read. And for those who knew or worked with Hans Specht, it offers a fitting tribute – not through nostalgia, but through careful documentation of the many ways Hans shaped the physics and the institutions we now take for granted.

Two takes on the economics of big science

At the 2024 G7 conference on research infrastructure in Sardinia, participants were invited to think about the potential socio-economic impact of the Einstein Telescope. Most physicists would have no expectation that a deeper knowledge of gravitational waves will have any practical use in the foreseeable future. What, then, will be the economic impact of building a gravitational-wave detector hundreds of metres underground in abandoned mines? What will be the societal impact of several kilometres of lasers and mirrors?

Such questions are strategically important for the future of fundamental science, which is increasingly often big science. Two new books tackle its socio-economic impacts head on, though with quite different approaches: one more qualitative in its research, the other more quantitative. What are the pros and cons of qualitative versus quantitative analysis in the social sciences? Personally, as an economist, at a certain point I would tend to say “show me the figures!” But, admittedly, when assessing the socio-economic impact of large-scale research infrastructures, if good statistical data is not available, I would always prefer a fine-grained qualitative analysis to quantitative models based on insufficient data.

Big Science, Innovation & Societal Contributions, edited by Shantha Liyanage (CERN), Markus Nordberg (CERN) and Marilena Streit-Bianchi (vice president of ARSCIENCIA), takes the qualitative route – a journey into mostly uncharted territory, asking difficult questions about the socio-economic impact of large-scale research infrastructures.

Big Science, Innovation & Societal Contributions

Some figures about the book may be helpful: the three editors collected 15 chapters by 34 authors, with about 100 figures and tables and more than 700 references, covering a wide range of scientific fields including particle physics, astrophysics, medicine and computer science. A cursory reading of the list of about 300 acronyms, from AAI (Architecture Adaptive Integrator) to ZEPLIN (ZonEd Proportional scintillation in Liquid Noble gas detector), is a good test of how many research infrastructures and collaborations you already know.

After introducing the LHC, a chapter on new accelerator technologies explores a remarkable array of applications of accelerator physics. To name a few: CERN’s R&D in superconductivity is being applied in nuclear fusion; the CLOUD experiment uses particle beams to model atmospheric processes relevant to climate change (CERN Courier January/February 2025 p5); and the ELISA linac is being used to date Australian rock art, helping determine whether it originates from the Pleistocene or Holocene epochs (CERN Courier March/April 2025 p10).

A wide-ranging exploration of how large-scale research infrastructures generate socio-economic value

The authors go on to explore innovation with a straightforward six-step model: scanning, codification, abstraction, diffusion, absorption and impacting. This is a helpful compass to build a narrative. Other interesting issues discussed in this part of the book include governance mechanisms and leadership of large-scale scientific organisations, including in gravitational-wave astronomy. No chapter better illustrates the impact of science on human wellbeing than the survey of medical applications by Mitra Safavi-Naeini and co-authors, which covers three major domains of applications in medical physics: medical imaging with X-rays and PET; radiotherapy targeting cancer cells internally with radioactive drugs or externally using linacs; and more advanced but expensive particle-therapy treatments with beams of protons, helium ions and carbon ions. Personally, I would expect that some of these applications will be enhanced by artificial intelligence, which in turn will have an impact on science itself in terms of digital data interpretation and forecasting.

Sociological perspectives

The last part of the book takes a more sociological perspective, with discussions about cultural values, the social responsibility to make sure big data is open data, and social entrepreneurship. In his chapter on the social responsibility of big science, Steven Goldfarb stresses the importance of the role of big science for learning processes and cultural enhancement. This topic is particularly dear to me, as my previous work on the cost–benefit analysis of the LHC revealed that the value of human capital accumulation for early-stage researchers is among the biggest contributions to the machine’s return on investment.

I recommend Big Science, Innovation & Societal Contributions as a highly informative, non-technical and up-to-date introduction to the landscape of big science, but I would suggest complementing it with another very recent book, The Economics of Big Science 2.0, edited by Johannes Gutleber and Panagiotis Charitos, both currently working at CERN. Charitos was also the co-editor of the volume’s predecessor, The Economics of Big Science, which focuses more on science policy, as well as public investment in science.

Why a “2.0” book? There is a shift of angle. The Economics of Big Science 2.0 builds upon the prior volume, but offers a more quantitative perspective on big science. Notably, it takes advantage of a larger share of contributions by economists, including myself as co-author of a chapter about the public’s perception of CERN.

The Economics of Big Science 2.0

It is worth clarifying that economics, as one domain within the social sciences, has its own rules of the game and style. The social sciences are an umbrella encompassing sociology, political science, anthropology, history, management and communication studies, linguistics, psychology and more. What distinguishes economics within this family is its emphasis on building quantitative models and testing them against statistical evidence, a practice known as econometrics.

Here, the authors excel. The Economics of Big Science 2.0 offers a wide-ranging exploration of how large-scale research infrastructures generate socio-economic value, primarily driven by quantitative analysis. The authors explore a diverse range of empirical methods, from cost–benefit analysis to econometric modelling, allowing them to assess the tangible effects of big science across multiple fields. There is a unique challenge for applied economics here, as big science centres by definition do not come in large numbers; the authors respond by involving large numbers of stakeholders, allowing for a statistical analysis of impacts and the estimation of expected values, standard errors and confidence intervals.

Societal impact

The Economics of Big Science 2.0 examines the socio-economic impact of ESA’s space programmes, the local economic benefits from large-scale facilities and the efficiency benefits from open science. The book measures public attitudes toward and awareness of science within the context of CERN, offering insights into science’s broader societal impacts. It grounds its analyses in a series of focused case studies, including particle colliders such as the LHC and FCC, synchrotron light sources like ESRF and ALBA, and radio telescopes such as SARAO, illustrating the economic impacts of big science through a quantitative lens. In contrast to the more narrative and qualitative approach of Big Science, Innovation & Societal Contributions, The Economics of Big Science 2.0 distinguishes itself through a strong reliance on empirical data.

Ivan Todorov 1933–2025

Ivan Todorov, theoretical physicist of outstanding academic achievements and a man of remarkable moral integrity, passed away on 14 February in his hometown of Sofia. He is best known for his prominent works on group-theoretical methods and the mathematical foundations of quantum field theory.

Ivan was born on 26 October 1933 into a family of literary scholars who played an active role in Bulgarian academic life. After graduating from the University of Sofia in 1956, he spent several years at JINR in Dubna and at IAS Princeton, before joining INRNE in Sofia. In 1974 he became a full member of the Bulgarian Academy of Sciences.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions. The classification and the complete description of the unitary representations of the conformal group have been collected in two well known and widely used monographs by him and his collaborators. Ivan’s research on constructive quantum field theories and the books devoted to the axiomatic approach have largely influenced modern developments in this area. His early scientific results related to the analytic properties of higher loop Feynman diagrams have also found important applications in perturbative quantum field theory.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions

The scientifically highly successful international conferences and schools organised in Bulgaria during the Cold War period under the guidance of Ivan served as meeting grounds for leading Russian and East European theoretical physicists and their West European and American colleagues. They were crucial for the development of theoretical physics in Bulgaria.

Everybody who knew Ivan was impressed by his vast culture and acute intellectual curiosity. His profound knowledge of modern mathematics allowed him to remain constantly in tune with new trends and ideas in theoretical physics. Ivan’s courteous and smiling way of discussing physics, always peppered with penetrating comments and suggestions, was inimitable. His passing is a great loss for theoretical physics, especially in Bulgaria, where he mentored a generation of researchers.

Jonathan L Rosner 1941–2025

Jon Rosner

Jonathan L Rosner, a distinguished theoretical physicist and professor emeritus at the University of Chicago, passed away on 24 May 2025. He made profound contributions to particle physics, particularly in quark dynamics and the Standard Model.

Born in New York City, Rosner grew up in Yonkers, NY. He earned his Bachelor of Arts in Physics from Swarthmore College in 1962 and completed his PhD at Princeton University in 1965 with Sam Treiman as his thesis advisor. His early academic appointments included positions at the University of Washington and Tel Aviv University. In 1969 he joined the faculty at the University of Minnesota, where he served until 1982. That year, he became a professor at the University of Chicago, where he remained a central figure in the Enrico Fermi Institute and the Department of Physics until his retirement in 2011.

Rosner’s research spanned a broad spectrum of topics in particle physics, with a focus on the properties and interactions of quarks and leptons in the Standard Model and beyond.

In a highly influential paper in 1969, he pointed out that the duality between hadronic s-channel scattering and t-channel exchanges could be understood graphically, in terms of quark worldlines. Approximately three months before the “November revolution” – the experimental discovery of charm–anticharm particles – Jon, together with the late Mary K Gaillard and Benjamin W Lee, published a seminal paper predicting the properties of hadronic states containing charm quarks.

He made significant contributions to the study of mesons and baryons, exploring their spectra and decay processes. His work on quarkonium systems, particularly the charmonium and bottomonium states, provided critical insights into the strong force that binds quarks together. He also made masterful use of algebraic methods in predicting and analysing CP-violating observables.

In more recent years, Jon focused on exotic combinations of quarks and antiquarks: tetraquarks and pentaquarks. In 2017 he co-authored a Physical Review Letters paper that provided the first robust prediction of a bbud tetraquark that would be stable under the strong interaction (CERN Courier November/December 2024 p33).

What truly set Jon apart was his rare ability to seamlessly integrate theoretical acumen with practical experimental engagement. While primarily a theoretician, he held a deep appreciation for experimental data and actively participated in the experimental endeavour. A prime example of this was his long-standing involvement with the CLEO collaboration at Cornell University.

He also collaborated on studies related to the detection of cosmic-ray air showers and contributed to the development of prototype systems for detecting radio pulses associated with these high-energy events. His interdisciplinary approach bridged theoretical predictions with experimental observations, enhancing the coherence between theory and practice in high-energy physics.

Unusually for a theorist, Jon was a high-level expert in electronics, rooted in his deep lifelong interest in amateur short-wave radio. As with everything else, he pursued it very thoroughly, from physics analysis to travelling to solar eclipses to take advantage of the increased propagation range of electromagnetic waves caused by changes in the ionosphere. Rosner was also deeply committed to public service within the scientific community. He served as chair of the Division of Particles and Fields of the American Physical Society in 2013, playing a central role in organising the “Snowmass on the Mississippi” conference. This event was an essential part of the long-term strategic planning for the US high-energy physics programme. His leadership and vision were widely recognised and appreciated by his peers.

Throughout his career, Rosner received numerous accolades. He was a fellow of the American Physical Society and was awarded fellowships from the Alfred P. Sloan Foundation and the John Simon Guggenheim Memorial Foundation. His publication record includes more than 500 theoretical papers, reflecting his prolific and highly impactful career in physics. He is survived by his wife, Joy, their two children, Hannah and Benjamin, and a granddaughter, Sadie.

César Gómez 1954–2025

César Gómez, whose deep contributions to gauge theory and quantum gravity were matched by his scientific leadership, passed away on 7 April 2025 after a short illness, leaving his friends and colleagues with a deep sense of loss.

César gained his PhD in 1981 from Universidad de Salamanca, where he became professor after working at Harvard, the Institute for Advanced Study and CERN. He held an invited professorship at the Université de Genève between 1987 and 1991, and in 1991 he moved to the Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, where he eventually became a founding member of the Instituto de Física Teórica (IFT) UAM–CSIC. He became emeritus in 2024.

Among the large number of topics he worked on during his scientific career, César was initially fascinated by the dynamics of gauge theories. He dedicated his postdoctoral years to problems concerning the structure of the quantum vacuum in QCD, making some crucial contributions.

Focusing in the 1990s on the physics of two-dimensional conformal field theories, he used his special gifts to squeeze physics out of formal structures, leaving his mark in works ranging from superstrings to integrable models, and co-authoring with Martí Ruiz-Altaba and Germán Sierra the book Quantum Groups in Two-Dimensional Physics (Cambridge University Press, 1996). With the new century and the rise of holography, César returned to the topics of his youth: the renormalisation group and gauge theories, now with a completely different perspective.

Far from settling down, in his last decade César grew even more daring, plunging together with Gia Dvali and other collaborators into a radical approach to understanding symmetry breaking in gauge theories and opening new avenues in the study of black holes and the emergence of spacetime in quantum gravity. The magic of von Neumann algebras inspired him to propose an elegant, deep and original understanding of inflationary universes and their quantum properties. This research programme led him to one of his most fertile and productive periods, sadly truncated by his unexpected passing at a time when he was bursting with ideas and projects.

César’s influence went beyond his papers. After his arrival at CSIC as an international leader in string theory, he acted as a pole of attraction. His impact was felt both through the training of graduate students and through the many courses he taught, which left a lasting memory on new generations.

Contrasting with his abstract scientific style, César also had a pragmatic side, full of vision, momentum and political talent. A major part of his legacy is the creation of the IFT, whose existence would be unthinkable without César among the small group of theoretical physicists from Universidad Autónoma de Madrid and CSIC who made a dream come true. For him, the IFT was more than his research institute, it was the home he helped to build.

Philosophy was a true second career for César, dating back to his PhD in Salamanca and strengthened at Harvard, where he started a lifelong friendship with Hilary Putnam. The philosophy of language was one of his favourite subjects for philosophical musings, and he dedicated to it an inspiring book in Spanish in 2003.

César’s impressive and eclectic knowledge of physics always transformed blackboard discussions into a delightful and fascinating experience, while his extraordinary ability to establish connections between apparently remote notions was extremely motivating at the early stages of a project. A regular presence at seminars and journal clubs, and always conspicuous by his many penetrating and inspiring questions, he was a beloved character among graduate students, who felt the excitement of knowing that he could turn every seminar into a unique event.

César was an excellent scientist with a remarkable personality. He was a wonderful conversationalist on any possible topic, encouraging open discussions free of prejudice, and building bridges with all conversational partners. He cherished his wife Carmen and daughters Ana and Pepa, who survive him.

Farewell, dear friend. May you rest in peace, and may your memory be our blessing.

Quantum simulators in high-energy physics

In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation increases rapidly, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.

A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes, each with a real and an imaginary part, satisfying |α|² + |β|² = 1. This superposition is the core feature that distinguishes quantum bits from classical bits. While a classical bit is either |0> or |1>, a quantum bit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.

The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.

Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
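To make the scaling concrete, here is a minimal back-of-the-envelope sketch in Python, assuming 16 bytes per complex amplitude (one double-precision number for each of the real and imaginary parts):

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    # One complex amplitude per basis state: 2**n_qubits amplitudes in total.
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    size = state_vector_bytes(n)
    print(f"{n:2d} qubits -> {size / 1e15:.3f} PB ({size:.3e} bytes)")

# 50 qubits -> ~18 PB, consistent with "more than a petabyte" quoted above.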

Direct mimicry

Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.

Optical tweezers

In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.

A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).

The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).
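The logic of such an analog simulation can be sketched in a few lines of Python: a toy chain of Rydberg-like two-level atoms with a Rabi drive and a 1/r⁶ interaction, evolved exactly for a handful of sites. The parameter values (Omega, V), the chain length and the evolution time are purely illustrative, not those of any real experiment.

import numpy as np
from scipy.linalg import expm

# Toy "analog simulator": a short chain of Rydberg-like two-level atoms with
# H = (Omega/2) * sum_i X_i + sum_{i<j} V / |i-j|^6 * n_i n_j  (hbar = 1).
N = 4                      # number of atoms, kept tiny for exact diagonalisation
Omega, V = 1.0, 5.0        # Rabi drive and nearest-neighbour interaction strength

X = np.array([[0, 1], [1, 0]], dtype=complex)
n = np.array([[0, 0], [0, 1]], dtype=complex)   # projector onto the Rydberg state
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    """Embed a single-site operator at `site` in the N-atom Hilbert space."""
    mats = [op if k == site else I2 for k in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(0.5 * Omega * op_at(X, i) for i in range(N))
for i in range(N):
    for j in range(i + 1, N):
        H = H + (V / abs(i - j) ** 6) * (op_at(n, i) @ op_at(n, j))

psi0 = np.zeros(2 ** N, dtype=complex)
psi0[0] = 1.0                                  # all atoms start in the ground state
psi_t = expm(-1j * H * 2.0) @ psi0             # evolve to t = 2

rydberg_pop = [np.real(psi_t.conj() @ op_at(n, i) @ psi_t) for i in range(N)]
print("Rydberg populations per site:", np.round(rydberg_pop, 3))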

Universal quantum computation

Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.
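The last point can be made concrete with a two-qubit example: for a maximally entangled (Bell) state, tracing out one qubit leaves the other in a featureless mixed state, so no local description captures the full state. A minimal numerical check in Python:

import numpy as np

# (|00> + |11>)/sqrt(2): a maximally entangled two-qubit state.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Full two-qubit density matrix, reshaped to (a, b, a', b') indices.
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)

# Partial trace over the second qubit: what qubit A "looks like" on its own.
rho_A = np.trace(rho, axis1=1, axis2=3)
print(rho_A)   # 0.5 * identity: maximally mixed, no local information left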

A philosophical dimension

The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. Lattice QCD has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).

In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.

This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.

By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes them flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>)/√2. The T gate applies a 45° phase rotation: |1> → e^(iπ/4)|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U_1 U_2 U_3 … U_N |0000…000>.
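For readers who like to see the gates as matrices, here is a minimal Python sketch of the three gates named above, composed to prepare an entangled state from the reference state |00>. The matrix forms are the standard textbook ones; the composition is illustrative rather than tied to any particular hardware.

import numpy as np

# The three gates, as matrices (|0>, |1> basis; CNOT basis order |00>, |01>, |10>, |11>).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])                      # 45-degree phase rotation
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Prepare a state from the trivial reference |00>: apply H to the first qubit, then CNOT.
I2 = np.eye(2, dtype=complex)
psi = CNOT @ np.kron(H, I2) @ np.array([1, 0, 0, 0], dtype=complex)
print(np.round(psi, 3))   # (|00> + |11>)/sqrt(2): an entangled Bell state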

Trapped ions

To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.

Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).

Superconducting qubit arrays promise the greatest scalability. These tiny superconducting circuits act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).

The noisy intermediate-scale quantum era

Despite rapid progress, these technologies are at an early stage of development and face three main limitations.

The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.

The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.

Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.

Superconducting qubits

This situation is known as the “noisy intermediate-scale quantum” (NISQ) era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.

Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.
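A deliberately simplified numerical illustration of why the sign problem is so damaging: when an observable is an average over rapidly oscillating phases, the true mean shrinks exponentially while the statistical noise per sample stays of order one. This toy has nothing to do with real lattice QCD beyond the mechanism it illustrates.

import numpy as np

# Estimate <exp(i*S)> by random sampling. For Gaussian S with width sigma the
# true mean is exp(-sigma^2/2), so it collapses far below the statistical error
# as "system size" (here: the spread of the action) grows.
rng = np.random.default_rng(0)
n_samples = 100_000

for spread in (1.0, 5.0, 10.0):             # plays the role of system size
    S = rng.normal(0.0, spread, n_samples)  # random action values
    phases = np.exp(1j * S)
    mean = phases.mean()
    err = phases.std() / np.sqrt(n_samples)
    print(f"spread={spread:4.1f}  |<e^iS>| = {abs(mean):.2e}  stat. error = {err:.2e}")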

Quantum simulators do not suffer from the sign problem because they evolve naturally in real-time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.

Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.

Quantum information

One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.

Just as the lattice revolution needed decades of concerted community effort to reach its full potential, so will the quantum revolution

HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.

This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).

The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP. 

Four ways to interpret quantum mechanics

One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.

The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.

The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.

Sense and sensibility

The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?

Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.

Probing physical collapse

The idea of the physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.

The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.

The work of John Bell shows that the dynamics of any such theory will have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm or pilot-wave theory. This theory has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden-variable theories to quantum-field-theoretical domains appears cumbersome.

Relativistic interpretations

Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds“), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.

Non-locality

If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.

Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).

Observer relativity

The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system as their value is correlated with a property of the observed systems.

Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.

A matter of taste

The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.

The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.

Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.

Every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics

I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). Empirical testing, as I have mentioned, can only test the physical collapse hypothesis.

There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. My own field of loop quantum gravity treats spacetime regions as quantum processes: here, the relational interpretation merges very naturally with general relativity, because spacetime regions themselves become quantum processes, affecting each other.

Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.

Sensing at quantum limits

Atomic energy levels. Spin orientations in a magnetic field. Resonant modes in cryogenic, high-quality-factor radio-frequency cavities. The transition from superconducting to normal conducting, triggered by the absorption of a single infrared photon. These are all simple yet exquisitely sensitive quantum systems with discrete energy levels. Each can serve as the foundation for a quantum sensor – instruments that detect single photons, measure individual spins or record otherwise imperceptible energy shifts.

Over the past two decades, quantum sensors have taken on leading roles in the search for ultra-light dark matter and in precision tests of fundamental symmetries. Examples include the use of atomic clocks to probe whether Earth is sweeping through oscillating or topologically structured dark-matter fields, and cryogenic detectors to search for electric dipole moments – subtle signatures that could reveal new sources of CP violation. These areas have seen rapid progress, as challenges related to detector size, noise, sensitivity and complexity have been steadily overcome, opening new phase space in which to search for physics beyond the Standard Model. Could high-energy particle physics benefit next?

Low-energy particle physics

Most of the current applications of quantum sensors are at low energies, where their intrinsic sensitivity and characteristic energy scales align naturally with the phenomena being probed. For example, within the Project 8 experiment at the University of Washington, superconducting sensors are being developed to tackle a longstanding challenge: to distinguish the tiny mass of the neutrino from zero (see “Quantum-noise limited” image). Inward-looking phased arrays of quantum-noise-limited microwave receivers allow spectroscopy of cyclotron radiation from beta-decay electrons as they spiral in a magnetic field. The shape of the endpoint of the spectrum is sensitive to the mass of the neutrino and such sensors have the potential to be sensitive to neutrino masses as low as 40 meV.
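The principle can be sketched in a few lines of Python: the cyclotron frequency emitted by a trapped electron depends on its kinetic energy through the relativistic factor γ, so frequency metrology becomes energy spectroscopy. The 1 T field used below is illustrative, not the actual Project 8 operating configuration.

import numpy as np

# Cyclotron radiation emission spectroscopy kinematics: f = e*B / (2*pi*gamma*m_e).
e   = 1.602176634e-19      # C
m_e = 9.1093837015e-31     # kg
m_e_keV = 510.99895        # electron rest energy in keV

def cyclotron_freq(kinetic_keV: float, B_tesla: float = 1.0) -> float:
    gamma = 1.0 + kinetic_keV / m_e_keV
    return e * B_tesla / (2 * np.pi * gamma * m_e)    # Hz

endpoint = 18.574                                      # tritium endpoint, keV
for K in (endpoint, endpoint - 0.001):                 # shift the energy by 1 eV
    print(f"K = {K:.4f} keV -> f = {cyclotron_freq(K) / 1e9:.6f} GHz")
# A ~1 eV change in electron energy shifts the frequency by roughly 50 kHz at 1 T,
# which is why precise frequency metrology translates into energy resolution.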

Quantum-noise limited

Beyond the Standard Model, superconducting sensors play a central role in the search for dark matter. At the lowest mass scales (peV to meV), experiments search for ultralight bosonic dark-matter candidates such as axions and axion-like particles (ALPs) through excitations of the vacuum field inside high-quality-factor microwave and millimetre-wave cavities (see “Quantum sensitivity” image). In the meV range, light-shining-through-wall experiments aim to reveal brief oscillations into weakly coupled hidden-sector particles such as dark photons or ALPs, and may employ quantum sensors for detecting reappearing photons, depending on the detection strategy. In the MeV to sub-GeV mass range, superconducting sensors are used to detect individual photons and phonons in cryogenic scintillators, enabling sensitivity to dark-matter interactions via electron recoils. At higher masses, reaching into the GeV regime, superfluid helium detectors target nuclear recoils from heavier dark-matter particles such as WIMPs.

These technologies also find broad application beyond fundamental physics. For example, in superconducting and other cryogenic sensors, the ability to detect single quanta with high efficiency and ultra-low noise is essential. The same capabilities are the technological foundation of quantum communication.

Raising the temperature

While many superconducting quantum sensors require ultra-low temperatures of a few mK, some spin-based quantum sensors can function at or near room temperature. Spin-based sensors, such as nitrogen-vacancy (NV) centres in diamonds and polarised rubidium atoms, are excellent examples.

NV centres are defects in the diamond lattice where a missing carbon atom – the vacancy – is adjacent to a lattice site where a carbon atom has been replaced by a nitrogen atom. The electronic spin states in NV centres have unique energy levels that can be probed by laser excitation and detection of spin-dependent fluorescence.

Researchers are increasingly exploring how quantum-control techniques can be integrated into high-energy-physics detectors

Rubidium is promising for spin-based sensors because it has an unpaired valence electron. In the presence of an external magnetic field, its atomic energy levels are split by the Zeeman effect. When optically pumped with laser light, spin-polarised “dark” sublevels – those not excited by the light – become increasingly populated. These aligned spins precess in magnetic fields, forming the basis of atomic magnetometers and other quantum sensors.
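A minimal sketch of the magnetometry principle: the precession (Larmor) frequency of the polarised spins is proportional to the magnetic field, ν = g_F μ_B B / h. The g-factor below assumes the 87Rb F = 2 ground state; the field values are illustrative.

# Larmor precession frequency of optically pumped rubidium spins.
mu_B = 9.2740100783e-24      # Bohr magneton, J/T
h    = 6.62607015e-34        # Planck constant, J s
g_F  = 0.5                   # 87Rb F = 2 ground-state value

def larmor_hz(B_tesla: float) -> float:
    return g_F * mu_B * B_tesla / h

for B in (1e-9, 1e-6, 1e-4):   # 1 nT, 1 uT, 100 uT (roughly Earth-field scale)
    print(f"B = {B:.0e} T -> nu = {larmor_hz(B):.3e} Hz")
# At nT fields the precession is only a few Hz, which is why atomic
# magnetometers can resolve extremely small field (or pseudo-field) changes.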

Being exquisite magnetometers, both devices make promising detectors for ultralight bosonic dark-matter candidates such as axions. Fermion spins may interact with spatial or temporal gradients of the axion field, leading to tiny oscillating energy shifts. The coupling of axions to gluons could also show up as an oscillating nuclear electric dipole moment. These interactions could manifest as oscillating energy-level shifts in NV centres, or as time-varying NMR-like spin precession signals in the atomic magnetometers.

Large-scale detectors

The situation is completely different in high-energy physics detectors, which require numerous interactions between a particle and a detector. Charged particles cause many ionisation events, and when a neutral particle interacts it produces charged particles that result in similarly numerous ionisations. Even if quantum control were possible within individual units of a massive detector, the number of individual quantum sub-processes to be monitored would exceed the possibilities of any realistic device.

Increasingly, however, researchers are exploring how quantum-control techniques – such as manipulating individual atoms or spins using lasers or microwaves – can be integrated into high-energy-physics detectors. These methods could enhance detector sensitivity, tune detector response or enable entirely new ways of measuring particle properties. While these quantum-enhanced or hybrid detection approaches are still in their early stages, they hold significant promise.

Quantum dots

Quantum dots are nanoscale semiconductor crystals – typically a few nanometres in diameter – that confine charge carriers (electrons and holes) in all three spatial dimensions. This quantum confinement leads to discrete, atom-like energy levels and results in optical and electronic properties that are highly tunable with size, shape and composition. Originally studied for their potential in optoelectronics and biomedical imaging, quantum dots have more recently attracted interest in high-energy physics due to their fast scintillation response, narrow-band emission and tunability. Their emission wavelength can be precisely controlled through nanostructuring, making them promising candidates for engineered detectors with tailored response characteristics.
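The size-to-colour tuning can be illustrated with the simplest effective-mass (“particle in a sphere”) estimate, in which confinement adds ħ²π²/(2R²)(1/m_e* + 1/m_h*) to the bulk band gap. The material parameters below roughly correspond to CdSe, and the model captures only the trend – the exciton Coulomb term and surface effects are neglected.

import numpy as np

# Smaller dot -> stronger confinement -> larger photon energy -> bluer emission.
hbar = 1.054571817e-34      # J s
m0   = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J
E_gap = 1.74                # bulk band gap, eV (approx. CdSe)
m_e_eff, m_h_eff = 0.13 * m0, 0.45 * m0   # effective electron and hole masses

def emission_nm(radius_nm: float) -> float:
    R = radius_nm * 1e-9
    confinement = (hbar**2 * np.pi**2 / (2 * R**2)) * (1 / m_e_eff + 1 / m_h_eff) / eV
    E_photon = E_gap + confinement          # eV
    return 1239.84 / E_photon               # nm (hc = 1239.84 eV nm)

for r in (2.0, 2.5, 3.0, 4.0):
    print(f"radius {r:.1f} nm -> emission ~{emission_nm(r):.0f} nm")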

Chromatic calorimetry

While their radiation hardness still needs to be established, engineering their composition, geometry, surface and size can yield very narrow-band (about 20 nm) emitters across the optical spectrum and into the infrared. Quantum dots such as these could enable the design of a “chromatic calorimeter”: a stack of quantum-dot layers, each tuned to emit at a distinct wavelength, for example red in the first layer, orange in the second and progressing through the visible spectrum to violet. Each layer would absorb higher-energy photons quite broadly but emit light in a narrow spectral band. The intensity of each colour would then correspond to the energy absorbed in that layer, while the emission wavelength would encode the position of energy deposition, revealing the shower shape (see “Chromatic calorimetry” figure). Because each layer is optically distinct, hermetic isolation would be unnecessary, reducing the overall material budget.
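
As a toy illustration of the readout idea, the sketch below converts hypothetical per-colour light yields into a longitudinal energy profile and a mean shower depth; the layer ordering, depths and calibration constant are invented for the example and do not correspond to any existing device.

```python
# Toy readout for a hypothetical chromatic calorimeter: each quantum-dot layer
# emits in its own narrow band, so the intensity of each colour maps to the
# energy deposited at that depth. Layer depths and the photons-per-MeV
# calibration are invented for illustration.
PHOTONS_PER_MEV = 50.0   # hypothetical calibration constant, same for all layers

# (emission band, depth of layer centre in radiation lengths)
LAYERS = [("red", 1.0), ("orange", 3.0), ("yellow", 5.0),
          ("green", 7.0), ("blue", 9.0), ("violet", 11.0)]

def shower_profile(colour_intensities):
    """Convert measured per-colour photon counts into energies per layer (MeV)."""
    return {colour: colour_intensities.get(colour, 0.0) / PHOTONS_PER_MEV
            for colour, _ in LAYERS}

def mean_depth(colour_intensities):
    """Energy-weighted mean depth of the shower, in radiation lengths."""
    energies = shower_profile(colour_intensities)
    total = sum(energies.values())
    return sum(energies[c] * depth for c, depth in LAYERS) / total if total else 0.0

# Example event: most light in the yellow/green layers -> shower maximum near 6 X0.
event = {"red": 500, "orange": 1500, "yellow": 4000,
         "green": 3500, "blue": 1200, "violet": 300}
print(shower_profile(event))
print(f"mean depth ~ {mean_depth(event):.1f} X0")
```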

Rather than improving the energy resolution of existing calorimeters, quantum dots could provide additional information on the shape and development of particle showers if embedded in existing scintillators. Initial simulations and beam tests by CERN’s Quantum Technology Initiative (QTI) support the hypothesis that the spectral intensity of quantum-dot emission can carry information about the energy and species of incident particles. Ongoing work aims to explore their capabilities and limitations.

Beyond calorimetry, quantum dots could be embedded in solid semiconductor matrices, such as gallium arsenide, to create a novel class of “photonic trackers”. Scintillation light from electronically tunable quantum dots could be collected by photodetectors integrated directly on top of the same thin semiconductor structure, as in the DoTPiX concept. As highly compact, radiation-tolerant scintillating pixel systems with intrinsic signal amplification and minimal material budget, photonic trackers could provide a scintillation-light-based alternative to traditional charge-based particle trackers.

Living on the edge

Low temperatures also offer opportunities at scale – and cryogenic operation is a well-established technique in both high-energy and astroparticle physics, with liquid argon (boiling point 87 K) widely used in time projection chambers and some calorimeters, and some dark-matter experiments using liquid helium (boiling point 4.2 K) to reach even lower temperatures. A range of solid-state detectors, including superconducting sensors, operate effectively at these temperatures and below, and offer significant advantages in sensitivity and energy resolution.

Single-photon phase transitions

Transition-edge sensors (TESs) operate in the narrow temperature range where a superconducting material undergoes a rapid transition from zero resistance to a finite value. When a particle deposits energy in a TES, it slightly raises the temperature, and because the transition is extremely steep, even a tiny temperature change leads to a detectable resistance change, allowing precise calorimetry. Magnetic microcalorimeters (MMCs) sense the same minute temperature rise through the changing magnetisation of a paramagnetic sensor rather than through a resistance change.
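
The steepness of the transition is often expressed through the logarithmic sensitivity α = dlnR/dlnT; the back-of-the-envelope sketch below, with entirely hypothetical device parameters and a linearised response, shows how a keV-scale energy deposit would map onto a resistance change.

```python
# Linearised, back-of-the-envelope TES response. All device parameters are
# hypothetical; a real sensor operates with electrothermal feedback and a
# non-linear transition, which this small-signal estimate ignores.
ALPHA = 100.0        # logarithmic sensitivity, alpha = dlnR/dlnT
R0 = 0.010           # ohm, operating resistance on the transition
T0 = 0.100           # K, operating temperature
C = 1.0e-12          # J/K, heat capacity of absorber plus sensor
EV = 1.602e-19       # J per eV

def resistance_change(energy_ev):
    """Small-signal resistance change for an energy deposit of energy_ev (eV)."""
    delta_t = energy_ev * EV / C          # temperature rise, K
    return ALPHA * (R0 / T0) * delta_t    # ohm

# A 1 keV deposit raises the temperature by ~0.16 mK and the resistance by ~1.6 mOhm.
print(f"{resistance_change(1_000)*1e3:.2f} mOhm")
```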

Functioning at millikelvin temperatures, TESs provide much higher energy resolution than solid-state detectors made from high-purity germanium crystals, which work by collecting electron–hole pairs created when ionising radiation interacts with the crystal lattice. TESs are increasingly used in high-resolution X-ray spectroscopy of pionic, muonic or antiprotonic atoms, and in photon detection for observational astronomy, despite the technical challenges associated with maintaining ultra-low operating temperatures.

By contrast, superconducting nanowire and microwire single-photon detectors (SNSPDs and SMSPDs) register only a change in state – from superconducting to normal conducting – allowing them to operate at higher temperatures than traditional low-temperature sensors. When made from high–critical-temperature (Tc) superconductors, operation at temperatures as high as 10 K is feasible, while maintaining excellent sensitivity to energy deposited by charged particles and ultrafast switching times on the order of a few picoseconds. Recent advances include the development of large-area devices with up to 400,000 micron-scale pixels (see “Single-photon phase transitions” figure), fabrication of high-Tc SNSPDs and successful beam tests of SMSPDs. These technologies are promising candidates for detecting milli-charged particles – hypothetical particles arising in “hidden sector” extensions of the Standard Model – or for high-rate beam monitoring at future colliders.

Rugged, reliable and reproducible

Quantum sensor-based experiments have vastly expanded the phase space that has been searched for new physics. This is just the beginning of the journey, as larger-scale efforts build on the initial gold rush and new quantum devices are developed, perfected and brought to bear on the many open questions of particle physics.

To fully profit from their potential, a vigorous R&D programme is needed to scale up quantum sensors for future detectors. Ruggedness, reliability and reproducibility are key – as is establishing “proof of principle” for the numerous imaginative concepts that have already been conceived. Challenges range from access to test infrastructures to standardised test protocols for fair comparisons. In many cases, the largest challenge is to foster an open exchange of ideas among the numerous local developments happening worldwide. Finding a common language to discuss developments in different fields that at first glance may have little in common builds on a willingness to listen, learn and exchange.

The European Committee for Future Accelerators (ECFA) detector R&D roadmap provides a welcome framework for addressing these challenges collaboratively through the Detector R&D (DRD) collaborations established in 2023 and now coordinated at CERN. Quantum sensors and emerging technologies are covered within the DRD5 collaboration, which ties together 112 institutes worldwide, many of them leaders in their particular field. Only a third stem from the traditional high-energy physics community.

These efforts build on the widespread expertise and enthusiastic efforts at numerous institutes and tie in with the quantum programmes being spearheaded at high-energy-physics research centres, among them CERN’s QTI. Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance. The best approach may prove to be “targeted blue-sky research”: a willingness to explore completely novel concepts while keeping their ultimate usefulness for particle physics firmly in mind.

A new probe of radial flow

Radial-flow fluctuations

The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma (QGP). The two analyses offer a fresh perspective on the fluid-like behaviour of QCD matter under extreme conditions, such as those that prevailed after the Big Bang. The measurements are highly complementary, with ALICE drawing on its detector’s particle-identification capabilities and ATLAS leveraging the experiment’s large rapidity coverage.

At the Large Hadron Collider, lead–ion collisions produce matter at temperatures and densities so high that quarks and gluons momentarily escape their confinement within hadrons. The resulting QGP is believed to have filled the universe during its first few microseconds, before cooling and fragmenting into mesons and baryons. In the laboratory, these streams of particles allow researchers to reconstruct the dynamical evolution of the QGP, which has long been known to transform anisotropies of the initial collision geometry into anisotropic momentum distributions of the final-state particles.

Compelling evidence

Differential measurements of the azimuthal distributions of produced particles over the last decades have provided compelling evidence that the outgoing momentum distribution reflects a collective response driven by initial pressure gradients. The isotropic expansion component, typically referred to as radial flow, has instead been inferred from the slope of particle spectra (see figure 1). Despite its fundamental role in driving the QGP fireball, radial flow lacked a differential probe comparable to those of its anisotropic counterparts.

ATLAS measurements of radial flow

That situation has now changed. The ALICE and ATLAS collaborations recently employed the novel observable v0(pT) to investigate radial flow directly. Their independent results demonstrate, for the first time, that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour. This expansion and its azimuthal modulations ultimately depend on the hydrodynamic properties of the QGP, such as shear and bulk viscosity, and can thus be measured to constrain them.

Traditionally, radial flow has been inferred from the slope of pT spectra, with the pT-integrated radial flow extracted via fits to “blast-wave” models. The newly introduced observable v0(pT) instead captures fluctuations in the spectral shape differentially across pT bins: it is defined as the correlation (technically the normalised covariance) between the fraction of particles in a given pT interval and the mean transverse momentum of the collision products within a single event, [pT]. Roughly speaking, a fluctuation that raises [pT] increases the fractional yield at high pT, producing a positive v0(pT) there, and depletes the fractional yield at low pT, producing a negative v0(pT). A pseudorapidity gap between the measurement of the mean pT and the particle yields suppresses short-range correlations and isolates the long-range, collective signal. Previous studies observed event-by-event fluctuations in [pT], related to radial flow over a wide pT range and quantified by the coefficient v0ref, but they could not establish whether these fluctuations were correlated across different pT intervals – a crucial signature of collective behaviour.
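
Schematically, this construction can be written as a normalised covariance over events; the sketch below is a simplified illustration of such an estimator – the collaborations’ published definitions and normalisations differ in detail, and no detector effects or efficiency corrections are included.

```python
import numpy as np

def v0_pt(yields, mean_pt):
    """Schematic v0(pT): normalised covariance between the per-event fraction
    of particles in each pT bin and the event-wise mean transverse momentum.

    yields  : (n_events, n_bins) particle counts per pT bin
    mean_pt : (n_events,) mean pT measured in a pseudorapidity-separated window,
              mimicking the gap used to suppress short-range correlations
    """
    fractions = yields / yields.sum(axis=1, keepdims=True)  # per-event spectral shape
    d_f = fractions - fractions.mean(axis=0)                # shape fluctuations
    d_pt = mean_pt - mean_pt.mean()                         # [pT] fluctuations
    cov = (d_f * d_pt[:, None]).mean(axis=0)                # covariance per pT bin
    return cov / (d_f.std(axis=0) * d_pt.std())             # normalise to a correlation

# Bins whose fractional yield grows when [pT] is high give v0 > 0 (high pT);
# bins that are depleted in the same events give v0 < 0 (low pT), as described above.
```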

Origins

The ATLAS collaboration performed a measurement of v0(pT) in the 0.5 to 10 GeV range, identifying three signatures of the collective origin of radial flow (see figure 2). First, correlations between the particle yield at fixed pT and the event-wise mean [pT] in a reference interval show that the two-particle radial flow factorises into single-particle coefficients as v0(pT) × v0ref for pT < 4 GeV, independent of the reference choice (left panel). Second, the data display no dependence on the rapidity gap between correlated particles, suggesting a long-range effect intrinsic to the entire system (middle panel). Finally, the centrality dependence of the ratio v0(pT)/v0ref follows a consistent trend from head-on to peripheral collisions, effectively cancelling initial-geometry effects and supporting the interpretation of a collective QGP response (right panel). At higher pT, a decrease in v0(pT) and a splitting with respect to centrality suggest the onset of non-thermal effects such as jet quenching. This may reveal fluctuations in jet energy loss – an area warranting further investigation.

ALICE measurements of radial flow

Using more than 80 million collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair, ALICE extracted v0(pT) for identified pions, kaons and protons across a broad range of centralities. ALICE observes v0(pT) to be negative at low pT, reflecting the influence of mean-pT fluctuations on the spectral shape (see figure 3). The data display a clear mass ordering at low pT, from protons to kaons to pions, consistent with expectations from collective radial expansion. This mass ordering reflects the greater “push” heavier particles experience in the rapidly expanding medium. The picture changes above 3 GeV, where protons have larger v0(pT) values than pions and kaons, perhaps indicating the contribution of recombination processes in hadron production.

The two collaborations’ measurements of the new v0(pT) observable highlight its sensitivity to the bulk-transport properties of the QGP medium. Comparisons with hydrodynamic calculations show that v0(pT) varies with bulk viscosity and the speed of sound, but that it has a weaker dependence on shear viscosity. Hydrodynamic predictions reproduce the data well up to about 2 GeV, but diverge at higher momenta. The deviation of non-collective models like HIJING from the data underscores the dominance of final-state, hydrodynamic-like effects in shaping radial flow.

These results advance our understanding of one of the most extreme regimes of QCD matter, strengthening the case for the formation of a strongly interacting, radially expanding QGP medium in heavy-ion collisions. Differential measurements of radial flow offer a new tool to probe this fluid-like expansion in detail, establishing its collective origin and complementing decades of studies of anisotropic flow.

Neutron stars as fundamental physics labs

Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature densities in excess of those of atomic nuclei, and due to these extreme conditions they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.

Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data to understand the behaviour of nuclear matter at low densities, while also emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.

Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.

The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate that was initially postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various uncertainties in their theoretical predictions and propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.

While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.
