Reconciling general relativity and quantum mechanics remains a central problem in fundamental physics. Though successful in their own domains, the two theories resist unification and offer incompatible views of space, time and matter. The field of quantum gravity, which has sought to resolve this tension for nearly a century, is still plagued by conceptual challenges, limited experimental guidance and a crowded landscape of competing approaches. Now in its third instalment, the “Quantum Gravity” conference series addresses this fragmentation by promoting open dialogue across communities. Organised under the auspices of the International Society for Quantum Gravity (ISQG), the 2025 edition took place from 21 to 25 July at Penn State University. The event gathered researchers working across a variety of frameworks – from random geometry and loop quantum gravity to string theory, holography and quantum information. At its core was the recognition that, regardless of specific research lines or affiliations, what matters is solving the puzzle.
One step toward that goal is understanding the origin of dark energy, which drives the accelerated expansion of the universe and is typically modelled by a cosmological constant Λ. Yasaman K Yazdi (Dublin Institute for Advanced Studies) presented a case for causal set theory, which reduces spacetime to a discrete collection of events, partially ordered to capture cause–effect relationships. In this context, like a quantum particle’s position and momentum, the cosmological constant and the spacetime volume are conjugate variables. This leads to the so-called “ever-present Λ” models, in which fluctuations in the former scale as the inverse square root of the latter, decreasing over time but never vanishing. The intriguing agreement between the predicted size of these fluctuations and the observed amount of dark energy, while far from resolving quantum cosmology, stands as a compelling motivation for pursuing the approach.
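In Planck units, the scaling at the heart of these models can be summarised schematically (a heuristic restatement of the argument above, not a derivation):

\[
\Delta\Lambda \;\sim\; \frac{1}{\sqrt{V}} \;\sim\; \frac{1}{\sqrt{N}},
\]

where the spacetime volume V is proportional to the number N of causal-set elements it contains. Evaluated for the present Hubble volume, the right-hand side is of order 10^–122 in Planck units, roughly the observed dark-energy density.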
In the spirit of John Wheeler’s “it from bit” proposal, Jakub Mielczarek (Jagiellonian University) suggested that our universe may itself evolve by computing – or at least admit a description in terms of quantum information processing. In loop quantum gravity, space is built from granular graphs known as spin networks, which capture the quantum properties of geometry. Drawing on ideas from tensor networks and holography, Mielczarek proposed that these structures can be reinterpreted as quantum circuits, with their combinatorial patterns reflected in the logic of algorithms. This dictionary offers a natural route to simulating quantum geometry, and could help clarify quantum theories that, like general relativity, do not rely on a fixed background.
Quantum clues
What would a genuine quantum theory of spacetime achieve, though? According to Esteban Castro Ruiz (IQOQI), it may have to recognise that reference frames, which are idealised physical systems used to define spatio-temporal distances, must themselves be treated as quantum objects. In the framework of quantum reference frames, notions such as entanglement, localisation and superposition become observer-dependent. This leads to a perspective-neutral formulation of quantum mechanics, which may offer clues for describing physics when spacetime is not only dynamical, but quantum.
The conference’s inclusive vocation came through most clearly in the thematic discussion sessions, including one on the infamous black-hole information problem chaired by Steve Giddings (UC Santa Barbara). A straightforward reading of Stephen Hawking’s 1974 result suggests that black holes radiate, shrink and ultimately destroy information – a process that is incompatible with standard quantum mechanics. Any proposed resolution must face sharp trade-offs: allowing information to escape challenges locality, losing it breaks unitarity and storing it in long-lived remnants undermines theoretical control. Giddings described a mild violation of locality as the lesser evil, but the controversy is far from settled. Still, there is growing consensus that dissolving the paradox may require new physics to appear well before the Planck scale, where quantum-gravity effects are expected to dominate.
Once the domain of pure theory, quantum gravity has become eager to engage with experiment
Among the few points of near-universal agreement in the quantum-gravity community has long been the virtual impossibility of detecting a graviton, the hypothetical quantum of the gravitational field. According to Igor Pikovski (Stockholm University), things may be less bleak than once thought. While the probability of seeing graviton-induced atomic transitions is negligible due to the weakness of gravity, the situation is different for massive systems. By cooling a macroscopic object close to absolute zero, Pikovski suggested, the effect could be amplified enough to be detectable, with current interferometers simultaneously monitoring gravitational waves in the right frequency window to corroborate the signal. Such a signal would not amount to a definitive proof of gravity’s quantisation, just as the photoelectric effect could not definitively establish the existence of photons, nor would it single out a specific ultraviolet model. However, it could constrain concrete predictions and put semiclassical theories under pressure. Giulia Gubitosi (University of Naples Federico II) tackled phenomenology from a different angle, exploring possible deviations from special relativity in models where spacetime becomes non-commutative. There, coordinates are treated like quantum operators, leading to effects like decoherence, modified particle speeds and soft departures from locality. Although such signals tend to be faint, they could be enhanced by high-energy astrophysical sources: observations of neutrinos associated with gamma-ray bursts are now starting to close in on these scenarios. Both talks reflected a broader cultural shift: quantum gravity, once the domain of pure theory, has become eager to engage with experiment.
Quantum Gravity 2025 offered a wide snapshot of a field still far from closure, yet increasingly shaped by common goals, the convergence of approaches and cross-pollination. As intended, no single framework took centre stage, with a dialogue-based format keeping focus on the central, pressing issue at hand: understanding the quantum nature of spacetime. With limited experimental guidance, open exchange remains key to clarifying assumptions and avoiding duplication of efforts. Building on previous editions, the meeting pointed toward a future where quantum-gravity researchers will recognise themselves as part of a single, coherent scientific community.
In June 2025, physicists met in Saariselkä, Finland, to discuss recent progress in the field of ultra-peripheral collisions (UPCs). All the major LHC experiments measure UPCs – events where two colliding nuclei miss each other, but nevertheless interact via the mediation of photons that can propagate long distances. In a case of life imitating science, almost 100 delegates propagated to a distant location in one of the most popular hiking destinations in northern Lapland to experience 24-hour daylight and discuss UPCs in Finnish saunas.
UPC studies have expanded significantly since the first UPC workshop in Mexico in December 2023. The opportunity to study scattering processes in a clean photon–nucleus environment at collider energies has inspired experimentalists to examine both inclusive and exclusive scattering processes, and to look for signals of collectivity and even the formation of quark–gluon plasma (QGP) in this unique environment.
For many years, experimental activity in UPCs was mainly focused on exclusive processes and QED phenomena including photon–photon scattering. This year, fresh inclusive particle-production measurements gained significant attention, as well as various signatures of QGP-like behaviour observed by different experiments at RHIC and at the LHC. The importance of having complementary experiments perform similar measurements was also highlighted. In particular, the ATLAS experiment joined the ongoing activities to measure exclusive vector–meson photoproduction, finding a cross section that disagrees with the previous ALICE measurements by almost 50%. After long and detailed discussions, it was agreed that different experimental groups need to work together closely to resolve this tension before the next UPC workshop.
Experimental and theoretical developments very effectively guide each other in the field of UPCs. This includes physics within and beyond the Standard Model (BSM), such as nuclear modifications to the partonic structure of protons and neutrons, gluon-saturation phenomena predicted by QCD (CERN Courier January/February 2025 p31), and precision tests for BSM physics in photon–photon collisions. The expanding activity in the field of UPCs, together with the construction of the Electron Ion Collider (EIC) at Brookhaven National Laboratory in the US, has also made it crucial to develop modern Monte Carlo event generators to the level where they can accurately describe various aspects of photon–photon and photon–nucleus scatterings.
As a photon collider, the LHC complements the EIC. While the centre-of-mass energy at the EIC will be lower, there is some overlap between the kinematic regions probed by these two very different collider projects thanks to the varying energy spectra of the photons. This allows the theoretical models needed for the EIC to be tested against UPC data, thereby reducing theoretical uncertainty on the predictions that guide the detector designs. This complementarity will enable precision studies of QCD phenomena and BSM physics in the 2030s.
In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation increases rapidly, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.
A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes with real and imaginary parts. This superposition is the core feature that distinguishes quantum bits from classical bits. While a classical bit is either |0> or |1>, a quantum bit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.
The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.
Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
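The bookkeeping behind that estimate is straightforward. A general N-qubit state is a superposition over all 2^N basis strings, and a classical simulation must store every amplitude. Assuming two 8-byte double-precision numbers per complex amplitude, a back-of-the-envelope count for 50 qubits gives:

\[
|\psi\rangle = \sum_{x \in \{0,1\}^N} \alpha_x \, |x\rangle, \qquad
2^{50}\ \text{amplitudes} \times 16\ \text{bytes} = 2^{54}\ \text{bytes} \approx 1.8 \times 10^{16}\ \text{bytes} \approx 18\ \text{PB}.
\]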
Direct mimicry
Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.
In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.
A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).
The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).
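To make the mapping concrete, the sketch below builds a toy Hamiltonian for a short chain of two-level (ground/Rydberg) atoms with a power-law interaction between excited atoms. The parameters Omega, Delta, C and alpha, and the unit-spaced chain geometry, are illustrative choices rather than those of any particular experiment.

```python
import numpy as np
from functools import reduce

# Single-site building blocks for two-level (ground/Rydberg) atoms
sx = np.array([[0., 1.], [1., 0.]])     # drives ground <-> Rydberg transitions
n_ryd = np.array([[0., 0.], [0., 1.]])  # projector onto the Rydberg state
I2 = np.eye(2)

def embed(op, site, n_sites):
    """Tensor a single-site operator into the full n_sites Hilbert space."""
    factors = [I2] * n_sites
    factors[site] = op
    return reduce(np.kron, factors)

def chain_hamiltonian(n_sites=4, Omega=1.0, Delta=0.5, C=10.0, alpha=6):
    """Toy Hamiltonian for a 1D atom chain with unit spacing:
    H = sum_i [(Omega/2) sx_i - Delta n_i] + sum_{i<j} C/|i-j|^alpha n_i n_j
    (alpha = 3 for resonant dipole-dipole couplings, 6 for van der Waals)."""
    dim = 2 ** n_sites
    H = np.zeros((dim, dim))
    for i in range(n_sites):
        H += 0.5 * Omega * embed(sx, i, n_sites) - Delta * embed(n_ryd, i, n_sites)
        for j in range(i + 1, n_sites):
            H += C / abs(i - j) ** alpha * embed(n_ryd, i, n_sites) @ embed(n_ryd, j, n_sites)
    return H

H = chain_hamiltonian()
print(np.linalg.eigvalsh(H)[:3])  # a few low-lying energies of the toy chain
```

Diagonalising such a matrix classically is feasible only for a handful of atoms; in an analog simulator the atoms themselves carry out the time evolution.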
Universal quantum computation
Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.
A philosophical dimension
The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. Lattice QCD has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).
In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.
This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.
By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes them flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>)/√2. The T gate applies a 45° phase rotation: |1> → e^{iπ/4}|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U_1 U_2 U_3 … U_N |0000…000>.
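As a concrete, hardware-agnostic illustration of these gates, the following sketch writes them as explicit matrices with numpy, checks that they are unitary (and hence reversible), and uses H and CNOT to turn the reference state |00> into an entangled Bell state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: |0> -> (|0> + |1>)/sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])       # T gate: 45-degree phase on |1>
CNOT = np.array([[1, 0, 0, 0],                 # flips the target (second) qubit
                 [0, 1, 0, 0],                 # when the control (first) qubit is |1>;
                 [0, 0, 0, 1],                 # basis order |00>, |01>, |10>, |11>
                 [0, 0, 1, 0]])

# Quantum gates obey the Schroedinger equation, so they are unitary and reversible
for U in (H, T, CNOT):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# Prepare a Bell state from |00>: apply H to the first qubit, then entangle with CNOT
ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(np.round(bell, 3))  # amplitudes ≈ [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
```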
To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.
Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).
Superconducting qubit arrays promise the greatest scalability. These tiny circuits, fabricated from superconducting materials, act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).
The noisy intermediate-scale quantum era
Despite rapid progress, these technologies are at an early stage of development and face three main limitations.
The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.
The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.
Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.
This situation is known as the “noisy intermediate-scale quantum” (NISQ) era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.
Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.
Quantum simulators do not suffer from the sign problem because they evolve naturally in real time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.
Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.
Quantum information
One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.
Just as the lattice revolution needed decades of concerted community effort to reach its full potential, so will the quantum revolution
HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.
This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).
The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP.
One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.
The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.
The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.
Sense and sensibility
The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?
Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.
The idea of the physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.
The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.
The work of John Bell shows that the dynamics of any such theory will have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain, there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm, or pilot-wave, theory. This theory has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden-variable theories to quantum-field theoretical domains appears cumbersome.
Relativistic interpretations
Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds“), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.
If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.
Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).
Observer relativity
The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system as their value is correlated with a property of the observed systems.
Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.
The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.
The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.
Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.
Every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics
I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). Empirical testing, as I have mentioned, can only test the physical collapse hypothesis.
There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. My own field of loop quantum gravity merges very naturally with the relational interpretation, because spacetime regions themselves become quantum processes, affecting one another.
Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.
The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma (QGP). The two analyses offer a fresh perspective into the fluid-like behaviour of QCD matter under extreme conditions, such as those that prevailed after the Big Bang. The measurements are highly complementary, with ALICE drawing on their detector’s particle-identification capabilities and ATLAS leveraging the experiment’s large rapidity coverage.
At the Large Hadron Collider, lead–ion collisions produce matter at temperatures and densities so high that quarks and gluons momentarily escape their confinement within hadrons. The resulting QGP is believed to have filled the universe during its first few microseconds, before cooling and fragmenting into mesons and baryons. In the laboratory, these streams of particles allow researchers to reconstruct the dynamical evolution of the QGP, which has long been known to transform anisotropies of the initial collision geometry into anisotropic momentum distributions of the final-state particles.
Compelling evidence
Differential measurements of the azimuthal distributions of produced particles over the last decades have provided compelling evidence that the outgoing momentum distribution reflects a collective response driven by initial pressure gradients. The isotropic expansion component, typically referred to as radial flow, has instead been inferred from the slope of particle spectra (see figure 1). Despite its fundamental role in driving the QGP fireball, radial flow lacked a differential probe comparable to those of its anisotropic counterparts.
That situation has now changed. The ALICE and ATLAS collaborations recently employed the novel observable v0(pT) to investigate radial flow directly. Their independent results demonstrate, for the first time, that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour. The isotropic expansion of the QGP and its azimuthal modulations ultimately depend on the hydrodynamic properties of the QGP, such as shear or bulk viscosity, and can thus be measured to constrain them.
Traditionally, radial flow has been inferred from the slope of pT-spectra, with the pT-integrated radial flow extracted via fits to “blast wave” models. The newly introduced differential observable v0(pT) captures fluctuations in spectral shape across pT bins. v0(pT) retains differential sensitivity, since it is defined as the correlation (technically the normalised covariance) between the fraction of particles in a given pT-interval and the mean transverse momentum of the collision products within a single event, [pT]. Roughly speaking, a fluctuation that raises [pT] produces a positive v0(pT) at high pT, because the fractional yield there increases; conversely, the corresponding decrease of the fractional yield at low pT gives a negative v0(pT). A pseudorapidity gap between the measurement of mean pT and the particle yields is used to suppress short-range correlations and isolate the long-range, collective signal. Previous studies observed event-by-event fluctuations in [pT], related to radial flow over a wide pT range and quantified by the coefficient v0ref, but they could not establish whether these fluctuations were correlated across different pT intervals – a crucial signature of collective behaviour.
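In schematic form (the precise normalisation conventions differ in detail between the two analyses), the observable correlates event-by-event fluctuations of the yield fraction with those of the event-mean transverse momentum:

\[
v_0(p_T) \;\simeq\; \frac{\langle\, \delta f(p_T)\; \delta [p_T] \,\rangle}{\langle f(p_T)\rangle \;\sigma_{[p_T]}},
\]

where f(pT) is the fraction of an event’s particles falling in the given pT interval, δ denotes the deviation from the event-ensemble average and σ_[pT] is the standard deviation of [pT].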
Origins
The ATLAS collaboration performed a measurement of v0(pT) in the 0.5 to 10 GeV range, identifying three signatures of the collective origin of radial flow (see figure 2). First, correlations between the particle yield at fixed pT and the event-wise mean [pT] in a reference interval show that the two-particle radial flow factorises into single-particle coefficients as v0(pT) × v0ref for pT < 4 GeV, independent of the reference choice (left panel). Second, the data display no dependence on the rapidity gap between correlated particles, suggesting a long-range effect intrinsic to the entire system (middle panel). Finally, the centrality dependence of the ratio v0(pT)/v0ref followed a consistent trend from head-on to peripheral collisions, effectively cancelling initial geometry effects and supporting the interpretation of a collective QGP response (right panel). At higher pT, a decrease in v0(pT) and a splitting with respect to centrality suggest the onset of non-thermal effects such as jet quenching. This may reveal fluctuations in jet energy loss – an area warranting further investigation.
Using more than 80 million collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair, ALICE extracted v0(pT) for identified pions, kaons and protons across a broad range of centralities. ALICE observes v0(pT) to be negative at low pT, reflecting the influence of mean-pT fluctuations on the spectral shape (see figure 3). The data display a clear mass ordering at low pT, from protons to kaons to pions, consistent with expectations from collective radial expansion. This mass ordering reflects the greater “push” heavier particles experience in the rapidly expanding medium. The picture changes above 3 GeV, where protons have larger v0(pT) values than pions and kaons, perhaps indicating the contribution of recombination processes in hadron production.
The results demonstrate that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour
The two collaborations’ measurements of the new v0(pT) observable highlight its sensitivity to the bulk-transport properties of the QGP medium. Comparisons with hydrodynamic calculations show that v0(pT) varies with bulk viscosity and the speed of sound, but that it has a weaker dependence on shear viscosity. Hydrodynamic predictions reproduce the data well up to about 2 GeV, but diverge at higher momenta. The deviation of non-collective models like HIJING from the data underscores the dominance of final-state, hydrodynamic-like effects in shaping radial flow.
These results advance our understanding of one of the most extreme regimes of QCD matter, strengthening the case for the formation of a strongly interacting, radially expanding QGP medium in heavy-ion collisions. Differential measurements of radial flow offer a new tool to probe this fluid-like expansion in detail, establishing its collective origin and complementing decades of studies of anisotropic flow.
Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature extreme densities in excess of those of atomic nuclei. And due to their high densities they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.
Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data to understand the behaviour of nuclear matter at low densities, though also emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.
Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.
Research into neutron stars has progressed so rapidly in recent years that it allows key tests of fundamental physics
The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate that was initially postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various possible uncertainties in their theoretical predictions and propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.
While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.
In June 1925, Werner Heisenberg retreated to the German island of Helgoland seeking relief from hay fever and the conceptual disarray of the old quantum theory. On this remote, rocky outpost in the North Sea, he laid the foundations of matrix mechanics. Later, his “island epiphany” would pass through the hands of Max Born, Wolfgang Pauli, Pascual Jordan and several others, and become the first mature formulation of quantum theory. From 9 to 14 June 2025, almost a century later, hundreds of researchers gathered on Helgoland to mark the anniversary – and to deal with pressing and unfinished business.
Alfred D Stone (Yale University) called upon participants to challenge the folklore surrounding quantum theory’s birth. Philosopher Elise Crull (City College of New York) drew overdue attention to Grete Hermann, who hinted at entanglement before it had a name and anticipated Bell in identifying a flaw in von Neumann’s no-go theorem, which had been taken as proof that hidden-variable theories are impossible. Science writer Philip Ball questioned Heisenberg’s epiphany itself: he didn’t invent matrix mechanics in a flash, claims Ball, nor immediately grasp its relevance, and it took months, and others, to see his contribution for what it was (see “Lend me your ears” image).
Building on a strong base
A clear takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, nevertheless remain open to interpretation, and any future progress will depend on excavating them directly (see “Four ways to interpret quantum mechanics“).
Does the quantum wavefunction represent an objective element of reality or merely an observer’s state of knowledge? On this question, Helgoland 2025 could scarcely have been more diverse. Christopher Fuchs (UMass Boston) passionately defended quantum Bayesianism, which recasts the Born probability rule as a consistency condition for rational agents updating their beliefs. Wojciech Zurek (Los Alamos National Laboratory) presented the Darwinist perspective, for which classical objectivity emerges from redundant quantum information encoded across the environment. Although Zurek himself maintains a more agnostic stance, his decoherence-based framework is now widely embraced by proponents of many-worlds quantum mechanics (see “The minimalism of many worlds“).
The foundations of quantum mechanics remain open to interpretation, and any future progress will depend on excavating them directly
Markus Aspelmeyer (University of Vienna) made the case that a signature of gravity’s long-speculated quantum nature may soon be within experimental reach. Building on the “gravitational Schrödinger’s cat” thought experiment proposed by Feynman in the 1950s, he described how placing a massive object in a spatial superposition could entangle a nearby test mass through their gravitational interaction. Such a scenario would produce correlations that are inexplicable by classical general relativity alone, offering direct empirical evidence that gravity must be described quantum-mechanically. Realising this type of experiment requires ultra-low pressures and cryogenic temperatures to suppress decoherence, alongside extremely low-noise measurements of gravitational effects at short distances. Recent advances in optical and optomechanical techniques for levitating and controlling nanoparticles suggest a path forward – one that could bring evidence for quantum gravity not from black holes or the early universe, but from laboratories on Earth.
Information insights
Quantum information was never far from the conversation. Isaac Chuang (MIT) offered a reconstruction of how Heisenberg might have arrived at the principles of quantum information, had his inspiration come from Shannon’s Mathematical Theory of Communication. He recast Heisenberg’s original insights into three broad principles: observations act on systems; local and global perspectives are in tension; and the order of measurements matters. Starting from these ingredients, one could in principle recover the structure of the qubit and the foundations of quantum computation. Taking the analogy one step further, he suggested that similar tensions between memorisation and generalisation – or robustness and adaptability – may one day give rise to a quantum theory of learning.
Helgoland 2025 illustrated just how much quantum mechanics has diversified since its early days. No longer just a framework for explaining atomic spectra, the photoelectric effect and black-body radiation, it is at once a formalism describing high-energy particle scattering, a handbook for controlling the most exotic states of matter, the foundation for information technologies now driving national investment plans, and a source of philosophical conundrums that, after decades at the margins, has once again taken centre stage in theoretical physics.
Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electromagnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.
The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiralling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it. Polarisation measurements offer a key diagnostic tool, as the two models predict distinct polarisation signatures.
Model signatures
In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.
Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If the seed photons are generated by synchrotron radiation in the AGN itself (synchrotron self-Compton, SSC), a modest polarisation (about 50% of that of the seed synchrotron photons) is expected, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If the seed photons are supplied by external sources (external Compton, EC), the isotropic photon fields of the surrounding structures are expected to average out the polarisation.
IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, with its low-energy synchrotron component peaking mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.
Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.
The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.
A key feature of the flare was the rapid rise and fall of optical polarisation
A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.
The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.
A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation suggests emission from a large, turbulent region. Meanwhile, optical flaring was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.
Turning point
These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.
The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter is set to complement IXPE at even higher energies when launched by NASA in 2027. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.
Fermilab’s Muon g-2 collaboration has given its final word on the magnetic moment of the muon. The new measurement agrees closely with a significantly revised Standard Model (SM) prediction. Though the experimental measurement will likely now remain stable for several years, theorists expect to make rapid progress to reduce uncertainties and resolve tensions underlying the SM value. One of the most intriguing anomalies in particle physics is therefore severely undermined, but not yet definitively resolved.
The muon g-2 anomaly dates back to the late 1990s and early 2000s, when measurements at Brookhaven National Laboratory (BNL) uncovered a possible discrepancy with theoretical predictions of the so-called muon anomaly, aμ = (g-2)/2. aμ expresses the magnitude of quantum loop corrections to the leading-order prediction of the Dirac equation, which multiplies the classical gyromagnetic ratio of fundamental fermions by a “g-factor” of precisely two. Loop corrections of aμ ~ 0.1% quantify the extent to which virtual particles emitted by the muon further increase the strength of its interaction with magnetic fields. Were measurements shown to deviate from SM predictions, this would indicate the influence of virtual fields beyond the SM.
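For orientation, the quantities involved are related by the standard definitions (numbers rounded):

\[
\vec{\mu} = g_\mu \frac{q}{2 m_\mu}\,\vec{S}, \qquad
g_\mu = 2\,(1 + a_\mu), \qquad
a_\mu \approx 0.00116592 \approx 0.1\%.
\]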
Move on up
In 2013, the BNL experiment’s magnetic storage ring was transported from Long Island, New York, to Fermilab in Batavia, Illinois. After years of upgrades and improvements, the new experiment began in 2017. It now reports a final precision of 127 parts per billion (ppb), bettering the experiment’s design precision of 140 ppb and improving on the sensitivity of the BNL result by a factor of four.
“First and foremost, an increase in the number of stored muons allowed us to reduce our statistical uncertainty to 98 ppb compared to 460 ppb for BNL,” explains co-spokesperson Peter Winter of Argonne National Laboratory, “but a lot of technical improvements to our calorimetry, tracking, detector calibration and magnetic-field mapping were also needed to improve on the systematic uncertainties from 280 ppb at BNL to 78 ppb at Fermilab.”
This formidable experimental precision throws down the gauntlet to the theory community
The final Fermilab measurement is (116592070.5 ± 11.4 (stat.) ± 9.1 (syst.) ± 2.1 (ext.)) × 10^–11, fully consistent with the previous BNL measurement. This formidable precision throws down the gauntlet to the Muon g-2 Theory Initiative (TI), which was founded to achieve an international consensus on the theoretical prediction.
The calculation is difficult, featuring contributions from all sectors of the SM (CERN Courier March/April 2025 p21). The TI published its first whitepaper in 2020, reporting aμ = (116591810 ± 43) × 10^–11, based exclusively on a data-driven analysis of cross-section measurements at electron–positron colliders (WP20). In May, the TI updated its prediction, publishing a value aμ = (116592033 ± 62) × 10^–11, statistically incompatible with the previous prediction at the level of three standard deviations, and with an increased uncertainty of 530 ppb (WP25). The new prediction is based exclusively on numerical SM calculations. This was made possible by rapid progress in the use of lattice QCD to control the dominant source of uncertainty, which arises due to the contribution of so-called hadronic vacuum polarisation (HVP). In HVP, the photon representing the magnetic field interacts with the muon during a brief moment when a virtual photon erupts into a difficult-to-model cloud of quarks and gluons.
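Taking the numbers above at face value, and treating the quoted uncertainties as uncorrelated and Gaussian, a back-of-the-envelope comparison shows how close experiment and the new prediction now are:

\[
\Delta a_\mu = (116\,592\,070.5 - 116\,592\,033)\times 10^{-11} \approx 38\times 10^{-11}, \qquad
\sigma_{\rm exp} = \sqrt{11.4^2 + 9.1^2 + 2.1^2}\times 10^{-11} \approx 15\times 10^{-11},
\]
\[
\sqrt{\sigma_{\rm exp}^2 + \sigma_{\rm th}^2} = \sqrt{15^2 + 62^2}\times 10^{-11} \approx 64\times 10^{-11}
\;\;\Rightarrow\;\;
\Delta a_\mu \approx 0.6\ \text{standard deviations}.
\]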
Significant shift
“The switch from using the data-driven method for HVP in WP20 to lattice QCD in WP25 results in a significant shift in the SM prediction,” confirms Aida El-Khadra of the University of Illinois, chair of the TI, who believes that it is not unreasonable to expect significant error reductions in the next couple of years. “There still are puzzles to resolve, particularly around the experimental measurements that are used in the data-driven method for HVP, which prevent us, at this point in time, from obtaining a new prediction for HVP in the data-driven method. This means that we also don’t yet know if the data-driven HVP evaluation will agree or disagree with lattice–QCD calculations. However, given the ongoing dedicated efforts to resolve the puzzles, we are confident we will soon know what the data-driven method has to say about HVP. Regardless of the outcome of the comparison with lattice QCD, this will yield profound insights.”
On the experimental side, attention now turns to the Muon g-2/EDM experiment at J-PARC in Tokai, Japan. While the Fermilab experiment used the “magic gamma” method first employed at CERN in the 1970s to cancel the effect of electric fields on spin precession in a magnetic field (CERN Courier September/October 2024 p53), the J-PARC experiment seeks to control systematic uncertainties by exercising particularly tight control of its muon beam. In the Japanese experiment, antimatter muons will bind with atomic electrons to form muonium, which will then be ionised by a laser to yield slow muons that are reaccelerated for a traditional precession measurement with sensitivity to both the muon’s magnetic moment and its electric dipole moment (CERN Courier July/August 2024 p8).
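For context, the magic-gamma trick can be summarised with the textbook expression for the anomalous spin-precession frequency of a muon in combined magnetic and electric fields, reproduced here only schematically:

```latex
\vec{\omega}_a \;=\; -\frac{q}{m}\left[\,a_\mu \vec{B}
  \;-\;\Bigl(a_\mu - \frac{1}{\gamma^2 - 1}\Bigr)\frac{\vec{\beta}\times\vec{E}}{c}\,\right],
\qquad
\gamma_{\text{magic}} \;=\; \sqrt{1 + \frac{1}{a_\mu}} \;\approx\; 29.3 .
```

At γ ≈ 29.3, corresponding to a muon momentum of about 3.09 GeV/c, the electric-field term cancels, which is the choice made at CERN, BNL and Fermilab; the J-PARC design instead relies on the tightly controlled, reaccelerated beam described above.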
“We are making plans to improve experimental precision beyond the Fermilab experiment, though their precision is quite tough to beat,” says spokesperson Tsutomu Mibe of KEK. “We also plan to search for the electric dipole moment of the muon with an unprecedented precision of roughly 10⁻²¹ e cm, improving the sensitivity of the last results from BNL by a factor of 70.”
With theoretical predictions from high-order loop processes expected to be of the order of 10⁻³⁸ e cm, any observation of an electric dipole moment would be a clear indication of new physics.
“Construction of the experimental facility is currently ongoing,” says Mibe. “We plan to start data taking in 2030.”
Just as water takes the form of ice, liquid or vapour, QCD matter exhibits distinct phases. But while the phase diagram of water is well established, the QCD phase diagram remains largely conjectural. The STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) recently completed a new beam-energy scan (BES-II) of gold–gold collisions. The results narrow the search for a long-sought-after “critical point” in the QCD phase diagram.
“BES-II precision measurements rule out the existence of a critical point in the regions of the QCD phase diagram accessed at LHC and top RHIC energies, while still allowing the possibility at lower collision energies,” says Bedangadas Mohanty of the National Institute of Science Education and Research in India, who co-led the analysis. “The results refine earlier BES-I indications, now with much reduced uncertainties.”
At low temperatures and densities, quarks and gluons are confined within hadrons. Heating QCD matter leads to the formation of a deconfined quark–gluon plasma (QGP), while increasing the density at low temperatures is expected to give rise to more exotic states such as colour superconductors. Above a certain threshold in baryon density, the transition from hadron gas to QGP is expected to be first-order – a sharp, discontinuous change akin to water boiling. As density decreases, this boundary gives way to a smooth crossover where the two phases blend. A hypothetical critical point marks the shift between these regimes, much like the endpoint of the liquid–gas coexistence line in the phase diagram of water (see “Phases of QCD” figure).
Heavy-ion collisions offer a way to observe this phase transition directly. At the Large Hadron Collider, the QGP created in heavy-ion collisions transitions smoothly to a hadronic gas as it cools, but the lower energies explored by RHIC probe the region of the phase diagram where the critical point may lie.
To search for possible signatures of a critical point, the STAR collaboration measured gold–gold collisions at centre-of-mass energies between 7.7 and 27 GeV per nucleon pair. The collaboration reports that their data deviate from frameworks that do not include a critical point, including the hadronic transport model, thermal models with canonical ensemble treatment, and hydrodynamic approaches with excluded-volume effects. Depending on the choice of observable and non-critical baseline model, the significance of the deviations ranges from two to five standard deviations, with the largest effects seen in head-on collisions when using peripheral collisions as a reference.
“None of the existing theoretical models fully reproduce the features observed in the data,” explains Mohanty. “To interpret these precision measurements, it is essential that dynamical model calculations that include critical-point physics be developed.” The STAR collaboration is now mapping lower energies and higher baryon densities using a fixed target (FXT) mode, wherein a 1 mm gold foil sits 2 cm below the beam axis.
“The FXT data are a valuable opportunity to explore QCD matter at high baryon density,” says Mohanty. “Data taking will conclude later this year when RHIC transitions to the Electron–Ion Collider. The Compressed Baryonic Matter experiment at FAIR in Germany will then pick up the study of the QCD critical point towards the end of the 2020s.”