

Loopsummit returns to Cadenabbia

Measurements at high-energy colliders such as the LHC, the Electron–Ion Collider (EIC) and the FCC will be performed at the highest luminosities. The analysis of the high-precision data taken there will require a significant increase in the accuracy of theoretical predictions. To achieve this, new mathematical and algorithmic technologies are needed. Developments in precision Standard Model calculations have been rapid since experts last met for Loopsummit-1 at Cadenabbia on the banks of Lake Como in 2021 (CERN Courier November/December 2021 p24). Loopsummit-2, held in the same location from 20 to 25 July this year, summarised this formidable body of work.

Just as higher experimental precision relies on new technologies, new theory results require better algorithms, both from the mathematical and the computer-algebraic side, and new techniques in quantum field theory. The central software package for perturbative calculations, FORM, now has a new major release, FORM 5. Progress has also been achieved in integration-by-parts reduction, which is of central importance for reducing the vast number of Feynman integrals in a calculation to a much smaller set of master integrals. New developments were also reported in analytic and numerical Feynman-diagram integration using Mellin–Barnes techniques, new compact function classes such as Feynman–Fox integrals, and modern summation technologies and methods to establish and solve gigantic recursions and differential equations of degree 4000 and order 100. The latest results on elliptic integrals and progress on the correct treatment of the γ₅ problem in real dimensions were also presented. These technologies allow the calculation of processes up to five loops, and of processes with more scales at two- and three-loop order. New results for single-scale quantities such as quark condensates and the ρ parameter were also reported.

In the loop

Measurements at future colliders will depend on precise knowledge of parton distribution functions, the strong coupling constant αs(MZ) and the heavy-quark masses. Experience suggests that going from one loop order to the next in the massless and massive cases takes 15 years or more, as new technologies must be developed. By now, most of the space-like four-loop splitting functions governing scaling violations are known to good precision, and new results for the three-loop time-like splitting functions have also been obtained. The massive three-loop Wilson coefficients for deep-inelastic scattering are now complete, requiring far larger and different integral spaces compared with the massless case. Related to this are the Wilson coefficients of semi-inclusive deep-inelastic scattering at next-to-next-to-leading order (NNLO), which will be important for tagging individual flavours at the EIC. For αs(MZ) measurements from low-scale processes, the correct treatment of renormalon contributions is necessary. Collisions at high energies also allow the detailed study of scattering processes in the forward region of QCD. Other long-term projects concern NNLO corrections for jet production at e+e− and hadron colliders, and related processes such as Higgs-boson and top-quark production, in some cases with a large number of partons in the final state. This also includes the use of effective Lagrangians.

Many more steps lie ahead if we are to match the precision of measurements at high-luminosity colliders

The complete calculation of difficult processes at NNLO and beyond always drives the development of term-reduction algorithms and analytic or numerical integration technologies. Many more steps lie ahead in the coming years if we are to match the precision of measurements at high-luminosity colliders. Some of these will doubtless be reported at Loopsummit-3 in summer 2027.

Geneva witnesses astroparticle boom

ICRC 2025

The 39th edition of the International Cosmic Ray Conference (ICRC), a key biennial conference in astroparticle physics, was held in Geneva from 15 to 24 July. Plenary talks covered solar, galactic and ultra-high-energy cosmic rays. A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves. Talks were informed by limits from the LHC and elsewhere on dark-matter particles and primordial black holes. This bundle of constraints has improved very significantly over the past few years, allowing more meaningful and stringent tests.

Solar modelling

The Sun and its heliosphere, where the solar wind offers insights into magnetic reconnection, shock acceleration and diffusion, are now studied in situ thanks to the Solar Orbiter and Parker Solar Probe spacecraft. Long-term PAMELA and AMS data, spanning more than an 11-year solar cycle, allow precise modelling of the solar modulation of cosmic-ray fluxes below a few tens of GeV. AMS solar proton data show a 27-day periodicity up to 20 GV, caused by corotating interaction regions where fast solar wind overtakes slower wind, creating shocks. AMS has recorded 46 solar energetic particle (SEP) events, the most extreme reaching a few GV, originating from magnetic-reconnection flares or fast coronal mass ejections. While isotope data once suggested such extreme events occur every 1500 years, Kepler observations of Sun-like stars indicate they may happen every 100 years, releasing more than 10³⁴ erg, often during weak solar minima, and linked to intense X-ray flares.

The spectrum of galactic cosmic rays, studied with high-precision measurements from satellites (DAMPE) and ISS-based experiments (AMS-02, CALET, ISS-CREAM), is not a single power law but shows breaks and slope changes, signatures of diffusion or source effects. A hardening at about 500 GV, common to all primaries, and a softening at 10 TV are observed in the proton and He spectra by all experiments – and for the first time also in DAMPE’s O and C spectra. As the hardening is detected in primary spectra scaling at the same rigidity (charge, not mass) as in secondary-to-primary ratios, these features are attributed to propagation in the galaxy rather than to source-related effects. This is supported by secondary (Li, Be, B) spectra showing breaks about twice as strong as those of primaries (He, C, O). A second hardening, at 150 TV, was reported for the first time by ISS-CREAM (p) and DAMPE (p + He), broadly consistent – within large hadronic-model and statistical uncertainties – with indirect ground-based results from GRAPES and LHAASO.

A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves

Ratios of secondary over primary species versus rigidity R (momentum per unit charge) probe the ratio of the galactic halo size H to the energy-dependent diffusion coefficient D(R), and so measure the “grammage” of material through which cosmic rays propagate. Ratios of unstable to stable secondary isotopes probe the escape time of cosmic rays from the halo (H²/D(R)), so from both measurements H and D(R) can be derived. The flattening of the ¹⁰Be/⁹Be ratio as a function of energy, evidenced by its highest-energy point at 10 to 12 GeV/nucleon, hints at a halo possibly larger than the previously assumed 5 kpc, to be tested by HELIX. AMS-02 spectra of individual elements will soon allow the primary and secondary fractions of each nucleus to be separated, also based on spallation cross-sections. Anomalies remain, such as a flattening at ~7 TeV/nucleon in Li/C and B/C, possibly indicating reacceleration or source grammage. AMS-02’s ⁷Li/⁶Li ratio disagrees with pure secondary models, but cross-section uncertainties preclude firm conclusions on a possible primary Li component, which would be produced by a new population of sources.
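
In the simplest leaky-halo picture – a schematic sketch of the scaling argument above, with order-one factors and energy losses dropped, rather than the full propagation models used by the collaborations – the two observables scale as

\[
\frac{N_{\mathrm{sec}}}{N_{\mathrm{prim}}} \propto \frac{H}{D(R)}, \qquad
\frac{{}^{10}\mathrm{Be}}{{}^{9}\mathrm{Be}} \propto \tau_{\mathrm{esc}} \sim \frac{H^{2}}{D(R)},
\]

so dividing the second relation by the first isolates the halo size H, after which either relation returns the diffusion coefficient D(R).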

The muon puzzle

The dependence of ground-based cosmic-ray measurements on hadronic models was widely discussed by Boyd and Pierog, highlighting the need for more measurements at CERN, such as the recent proton–oxygen run currently being analysed by LHCf. The EPOS–LHC model, based on the core–corona approach, shows reduced muon discrepancies, producing more muons and deeper shower maxima (+20 g/cm²) than earlier models, which leads to a heavier inferred composition. This clarifies the muon puzzle raised by the Pierre Auger Observatory a few years ago: a larger muon content in atmospheric showers than predicted by simulations. A fork-like structure remains in the knee region of the proton spectrum, where the new measurements presented by LHAASO agree with IceTop/IceCube and could imply a higher proton content beyond the knee than hinted at by KASCADE and the first results of GRAPES. Despite the higher proton fluxes, a dominance of He above the knee is observed, which would require a special kind of nearby source to be hypothesised.

Multi-messenger approaches

Gamma-ray and neutrino astrophysics were widely discussed at the conference, highlighting the relevance of multi-messenger approaches. LHAASO produced impressive results on UHE astrophysics, revealing a new class of pevatrons: microquasars alongside young massive clusters, pulsar wind nebulae (PWNe) and supernova remnants.

Microquasars are gamma-ray binaries containing a stellar-mass black hole that drives relativistic jets while accreting matter from a companion star. An outstanding example is Cyg X-3, a potential PeV microquasar, from which the flux of PeV photons is 5–10 times higher than in the rest of the Cygnus bubble.

Five other microquasars are observed beyond 100 TeV: SS 433, V4641 Sgr, GRS 1915+105, MAXI J1820+070 and Cygnus X-1. SS 433 has two gamma-ray-emitting jets, nearly perpendicular to our line of sight, whose emission beyond 10 TeV has been identified by HESS and LHAASO and which terminate at 40 pc from the black hole (BH). Due to the Klein–Nishina effect, the inverse Compton flux above ~10 TeV is gradually suppressed, and an additional spectral component is needed to explain the flux around 100 TeV.

Gamma-ray and neutrino astrophysics were widely discussed at the conference

Beyond 100 TeV, LHAASO also identifies a source coincident with a giant molecular cloud; this component may be due to protons accelerated close to the BH or in the lobes. These results demonstrate the ability to resolve the morphology of extended galactic sources. Similarly, ALMA has discovered two hotspots, each at 0.28° (about 50 pc) from GRS 1915+105 in opposite directions from its BH. These may be interpreted as two lobes; alternatively, the extended nature of the LHAASO source may be due to the spatial distribution of the surrounding gas, if the emission from GRS 1915+105 is dominated by hadronic processes.

Further discussions addressed pulsar halos and PWNe, which are unique laboratories for studying the diffusion of electrons, as well as mysterious as-yet-unidentified pevatrons such as MGRO J1908+06, which is coincident with both a supernova remnant (the favoured counterpart) and a pulsar. One of these sources may finally reveal an excess of KM3NeT or IceCube neutrinos, directly proving its nature as a cosmic-ray accelerator.

The identification and subtraction of source fluxes on the galactic plane is also important for IceCube’s measurement of the galactic-plane neutrino flux. This currently assumes a fixed spectral index of –2.7, while authors such as Grasso and collaborators presented a spectrum hardening to an index of –2.4 closer to the galactic centre. Precise measurements of gamma-ray source fluxes and of the diffuse emission from galactic cosmic rays interacting with interstellar matter lead to better constraints on neutrino observations and on cosmic-ray fluxes around the knee.

Cosmogenic origins

KM3NeT presented a neutrino with an energy well beyond the reach of IceCube’s diffuse cosmic-neutrino flux, which does not extend beyond 10 PeV (CERN Courier March/April 2025 p7). Its origin was widely discussed at the conference. The large uncertainty on its estimated energy – 220 PeV, with a 1σ confidence interval of 110 to 790 PeV – nevertheless makes it compatible with the flux observed by IceCube, for which a 30 TeV break was first hypothesised at this conference. If events of this kind are confirmed, they could have transient or dark-matter origins, but a cosmogenic origin is improbable given the IceCube and Pierre Auger limits on the cosmogenic neutrino flux.

Quantum gravity beyond frameworks

Matvej Bronštejn

Reconciling general relativity and quantum mechanics remains a central problem in fundamental physics. Though successful in their own domains, the two theories resist unification and offer incompatible views of space, time and matter. The field of quantum gravity, which has sought to resolve this tension for nearly a century, is still plagued by conceptual challenges, limited experimental guidance and a crowded landscape of competing approaches. Now in its third instalment, the “Quantum Gravity” conference series addresses this fragmentation by promoting open dialogue across communities. Organised under the auspices of the International Society for Quantum Gravity (ISQG), the 2025 edition took place from 21 to 25 July at Penn State University. The event gathered researchers working across a variety of frameworks – from random geometry and loop quantum gravity to string theory, holography and quantum information. At its core was the recognition that, regardless of specific research lines or affiliations, what matters is solving the puzzle.

One step to get there requires understanding the origin of dark energy, which drives the accelerated expansion of the universe and is typically modelled by a cosmological constant Λ. Yasaman K Yazdi (Dublin Institute for Advanced Studies) presented a case for causal set theory, reducing spacetime to a discrete collection of events, partially ordered to capture cause–effect relationships. In this context, like a quantum particle’s position and momentum, the cosmological constant and the spacetime volume are conjugate variables. This leads to the so-called “ever-present Λ” models, where fluctuations in the former scale as the inverse square root of the latter, decreasing over time but never vanishing. The intriguing agreement between the predicted size of these fluctuations and the observed amount of dark energy, while far from resolving quantum cosmology, stands as a compelling motivation for pursuing the approach.
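
Schematically – a sketch of the scaling just described, written in Planck units with order-one factors dropped, not Yazdi’s full formulation – the conjugacy of Λ with the spacetime volume V, combined with Poisson-like fluctuations in the number of causal-set elements, gives

\[
\Delta\Lambda\,\Delta V \gtrsim 1, \qquad \Delta V \sim \sqrt{V}
\quad\Longrightarrow\quad
\Delta\Lambda \sim \frac{1}{\sqrt{V}},
\]

so the fluctuations shrink as the universe grows but never vanish, and evaluated today this rough estimate lands in the ballpark of the observed dark-energy density.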

In the spirit of John Wheeler’s “it from bit” proposal, Jakub Mielczarek (Jagiellonian University) suggested that our universe may itself evolve by computing – or at least admit a description in terms of quantum information processing. In loop quantum gravity, space is built from granular graphs known as spin networks, which capture the quantum properties of geometry. Drawing on ideas from tensor networks and holography, Mielczarek proposed that these structures can be reinterpreted as quantum circuits, with their combinatorial patterns reflected in the logic of algorithms. This dictionary offers a natural route to simulating quantum geometry, and could help clarify quantum theories that, like general relativity, do not rely on a fixed background.

Quantum clues

What would a genuine quantum theory of spacetime achieve, though? According to Esteban Castro Ruiz (IQOQI), it may have to recognise that reference frames, which are idealised physical systems used to define spatio-temporal distances, must themselves be treated as quantum objects. In the framework of quantum reference frames, notions such as entanglement, localisation and superposition become observer-dependent. This leads to a perspective-neutral formulation of quantum mechanics, which may offer clues for describing physics when spacetime is not only dynamical, but quantum.

The conference’s inclusive vocation came through most clearly in the thematic discussion sessions, including one on the infamous black-hole information problem chaired by Steve Giddings (UC Santa Barbara). A straightforward reading of Stephen Hawking’s 1974 result suggests that black holes radiate, shrink and ultimately destroy information – a process that is incompatible with standard quantum mechanics. Any proposed resolution must face sharp trade-offs: allowing information to escape challenges locality, losing it breaks unitarity and storing it in long-lived remnants undermines theoretical control. Giddings described a mild violation of locality as the lesser evil, but the controversy is far from settled. Still, there is growing consensus that dissolving the paradox may require new physics to appear well before the Planck scale, where quantum-gravity effects are expected to dominate.

Once the domain of pure theory, quantum gravity has become eager to engage with experiment

Among the few points of near-universal agreement in the quantum-gravity community has long been the virtual impossibility of detecting a graviton, the hypothetical quantum of the gravitational field. According to Igor Pikovski (Stockholm University), things may be less bleak than once thought. While the probability of seeing graviton-induced atomic transitions is negligible due to the weakness of gravity, the situation is different for massive systems. By cooling a macroscopic object close to absolute zero, Pikovski suggested, the effect could be amplified enough to be detectable, provided current interferometers simultaneously monitor gravitational waves in the correct frequency window. Such a signal would not amount to definitive proof of gravity’s quantisation, just as the photoelectric effect could not definitively establish the existence of photons, nor would it single out a specific ultraviolet model. However, it could constrain concrete predictions and put semiclassical theories under pressure.

Giulia Gubitosi (University of Naples Federico II) tackled phenomenology from a different angle, exploring possible deviations from special relativity in models where spacetime becomes non-commutative. There, coordinates are treated like quantum operators, leading to effects such as decoherence, modified particle speeds and soft departures from locality. Although such signals tend to be faint, they could be enhanced by high-energy astrophysical sources: observations of neutrinos associated with gamma-ray bursts are now starting to close in on these scenarios. Both talks reflected a broader cultural shift: quantum gravity, once the domain of pure theory, has become eager to engage with experiment.

Quantum Gravity 2025 offered a wide snapshot of a field still far from closure, yet increasingly shaped by common goals, the convergence of approaches and cross-pollination. As intended, no single framework took centre stage, with a dialogue-based format keeping focus on the central, pressing issue at hand: understanding the quantum nature of spacetime. With limited experimental guidance, open exchange remains key to clarifying assumptions and avoiding duplication of efforts. Building on previous editions, the meeting pointed toward a future where quantum-gravity researchers will recognise themselves as part of a single, coherent scientific community.

Ultra-peripheral physics in the ultraperiphery

In June 2025, physicists met in Saariselkä, Finland, to discuss recent progress in the field of ultra-peripheral collisions (UPCs). All the major LHC experiments measure UPCs – events in which two colliding nuclei miss each other but nevertheless interact via the mediation of photons, which can propagate long distances. In a case of life imitating science, almost 100 delegates propagated to a distant location in one of the most popular hiking destinations in northern Lapland to experience 24-hour daylight and discuss UPCs in Finnish saunas.

UPC studies have expanded significantly since the first UPC workshop in Mexico in December 2023. The opportunity to study scattering processes in a clean photon–nucleus environment at collider energies has inspired experimentalists to examine both inclusive and exclusive scattering processes, and to look for signals of collectivity and even the formation of quark–gluon plasma (QGP) in this unique environment.

For many years, experimental activity in UPCs was mainly focused on exclusive processes and QED phenomena, including photon–photon scattering. This year, fresh inclusive particle-production measurements gained significant attention, as did various signatures of QGP-like behaviour observed by different experiments at RHIC and the LHC. The importance of having complementary experiments perform similar measurements was also highlighted. In particular, the ATLAS experiment joined the ongoing effort to measure exclusive vector-meson photoproduction, finding a cross section that disagrees with previous ALICE measurements by almost 50%. After long and detailed discussions, it was agreed that the different experimental groups need to work together closely to resolve this tension before the next UPC workshop.

Experimental and theoretical developments very effectively guide each other in the field of UPCs. This includes physics within and beyond the Standard Model (BSM), such as nuclear modifications to the partonic structure of protons and neutrons, gluon-saturation phenomena predicted by QCD (CERN Courier January/February 2025 p31), and precision tests of BSM physics in photon–photon collisions. The expanding activity in the field of UPCs, together with the construction of the Electron–Ion Collider (EIC) at Brookhaven National Laboratory in the US, has also made it crucial to develop modern Monte Carlo event generators to the level where they can accurately describe various aspects of photon–photon and photon–nucleus scattering.

As a photon collider, the LHC complements the EIC. While the centre-of-mass energy at the EIC will be lower, there is some overlap between the kinematic regions probed by these two very different collider projects thanks to the varying energy spectra of the photons. This allows the theoretical models needed for the EIC to be tested against UPC data, thereby reducing theoretical uncertainty on the predictions that guide the detector designs. This complementarity will enable precision studies of QCD phenomena and BSM physics in the 2030s.

Becoming T-shaped

Heike Riel

For Heike Riel, IBM fellow and head of science and technology at IBM Research, successful careers in science are built not by choosing between academia and industry, but by moving fluidly between them. With a background in semiconductor physics and a leadership role in one of the world’s top industrial research labs, Riel learnt to harness the skills she picked up in academia, and now uses them to build real-world applications. Today, IBM collaborates with academia and industry partners on projects ranging from quantum computing and cybersecurity to developing semiconductor chips for AI hardware.

“I chose semiconductor physics because I wanted to build devices, use electronics and understand photonics,” says Riel, who spent her academic years training to be an applied physicist. “There’s fundamental science to explore, but also something that can be used as a product to benefit society. That combination was very motivating.”

Hands-on mindset

For experimental physicists, this hands-on mindset is crucial. But experiments also require infrastructure that can be difficult to access in purely academic settings. “To do experiments, you need cleanrooms, fabrication tools and measurement systems,” explains Riel. “These resources are expensive and not always available in university labs.” During her first industry job at Hewlett-Packard in Palo Alto, Riel realised just how much she could achieve if given the right resources and support. “I felt like I was then the limit, not the lab,” she recalls.

This experience led Riel to proactively combine academic and industrial research in her PhD with IBM, where cutting-edge experiments are carried out towards a clear, purpose-driven goal within a structured research framework, leaving lots of leeway for creativity. “We explore scientific questions, but always with an application in mind,” says Riel. “Whether we’re improving a product or solving a practical problem, we aim to create knowledge and turn it into impact.”

Shifting gears

According to Riel, once you understand the foundations of fundamental physics, and feel as though you have learnt all the skills you can glean from it, it’s time to consider shifting gears and expanding your skills with economics or business. In her role, understanding economic value and organisational dynamics is essential. But Riel advises against pursuing an MBA first. “Studying economics or an MBA later is very doable,” she says. “In fact, your company might even financially support you. But going the other way – starting with economics and trying to pick up quantum physics later – is much harder.”

Riel sees university as a precious time to master complex subjects like quantum mechanics, relativity and statistical physics – topics that are difficult to revisit later in life. “It’s much easier to learn theoretical physics as a student than to go back to it later,” she says. “It builds something more important than just knowledge: it builds your tolerance for frustration, and your capacity for deep logical thinking. You become extremely analytical and much better at breaking down problems. That’s something every employer values.”

In demand

High-energy physicists are even in high demand in fields like consulting, says Riel. A high-achieving academic has a very good chance of being hired, as long as they present their job applications effectively. When scouring applications, recruiters look for specific keywords and transferable skills, so regardless of the depth or quality of your academic research, the way you present yourself really counts. Physics, Riel argues, teaches a kind of thinking that’s both analytical and resilient. For experimental physicists, an application can be tailored towards hands-on experience and tangible solutions to real-world problems. For theoretical physicists, it should demonstrate logical problem-solving and thinking outside the box. “The winning combination is having aspects of both,” says Riel.

On top of that, research in physics increases your “frustration tolerance”. Every physicist has faced failure at some point during their academic career, and their determination to persevere is what makes them resilient. Whether through constantly thinking on your feet or coming up with new solutions to the same problems, this resilience is what can make a physicist’s application stand out from the rest. “In physics, you face problems every day that don’t have easy answers, and you learn how to deal with that,” explains Riel. “That mindset is incredibly useful, whether you’re solving a semiconductor design problem or managing a business unit.”

Academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application

Riel champions the idea of the “T-shaped person”: someone with deep expertise in one area (the vertical stroke of the T) and broad knowledge across fields (the horizontal bar of the T). “You start by going deep – becoming the go-to person for something,” says Riel. This deep knowledge builds your credibility in your desired field: you become the expert. But after that, you need to broaden your scope and understanding.

That breadth can include moving between fields, working on interdisciplinary projects, or applying physics in new domains. “A T-shaped person brings something unique to every conversation,” adds Riel. “You’re able to connect dots that others might not even see, and that’s where a lot of innovation happens.”

Adding the bar to the T means that you can move fluidly between different fields, including between academia and industry. For this reason, Riel believes that the divide between academia and industry is less rigid than people assume, especially in large research organisations like IBM. “We sit in that middle ground,” she explains. “We publish papers. We work with universities on fundamental problems. But we also push toward real-world solutions, products and economic value.”

The difficult part is making the leap from academia to industry. “You need the confidence to make the decision, to choose between working in academia or industry,” says Riel. “At some point in your PhD, your first post-doc, or maybe even your second, you need to start applying your practical skills to industry.” Companies like IBM offer internships, PhD projects, research opportunities and temporary contracts for physicists all the way from master’s students to high-level post-docs. These are ideal ways to get your foot in the door of a project, get work published, grow your network and garner some of those industry-focused practical skills, regardless of the stage you are at in your academic career. “You can learn from your colleagues about economics, business strategy and ethics on the job,” says Riel. “If your team can see you using your practical skills and engaging with the business, they will be eager to help you up-skill. This may mean supporting you through further study, whether it’s an online course, or later an MBA.”

Applied knowledge

Riel notes that academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application. “US funding is often tied to applications, and they are much stronger at converting research into tangible products, whereas in Europe there is still more of a divide between knowledge creation and the next step to turn this into products,” she says. “But personally, I find it most satisfying when I can apply what I learn to something meaningful.”

That applied focus is also cyclical, she says. “At IBM, projects to develop hardware often last five to seven years. Software development projects have a much faster turnaround. You start with an idea, you prove the concept, you innovate the path to solve the engineering challenges and eventually it becomes a product. And then you start again with something new.” This is different to most projects in academia, where a researcher contributes to a small part of a very long-term project. Regardless of the timeline of the project, the skills gained from academia are invaluable.

For early-career researchers, especially those in high-energy physics, Riel’s message is reassuring: “Your analytical training is more useful than you think. Whether you stay in academia, move to industry, or float between both, your skills are always relevant. Keep learning and embracing new technologies.”

The key, she says, is to stay flexible, curious and grounded in your foundations. “Build your depth, then your breadth. Don’t be afraid of crossing boundaries. That’s where the most exciting work happens.”

The history of heavy ions

Across a career that accompanied the emergence of heavy-ion physics at CERN, Hans Joachim Specht was often a decisive voice in shaping the experimental agenda and the institutional landscape in Europe. Before he passed away last May, he and fellow editors Sanja Damjanovic (GSI), Volker Metag (University of Giessen) and Jürgen Schukraft (Yale University) finalised the manuscript for Scientist and Visionary – a new biographical work that offers both a retrospective on Specht’s wide-ranging scientific contributions and a snapshot of four decades of evolving research at CERN, GSI and beyond.

Precision and rigour

Specht began his career in nuclear physics under the mentorship of Heinz Maier-Leibnitz at the Technische Universität München. His early work was grounded in precision measurements and experimental rigour. Among his most celebrated early achievements were the discoveries of superheavy quasi-molecules and quasi-atoms, where electrons can be bound for short times to a pair of heavy ions, and nuclear-shape isomerism, where nuclei exhibit long-lived prolate or oblate deformations. These milestones significantly advanced the understanding of atomic and nuclear structure. Around 1979, he shifted focus, joining the emerging efforts at CERN to explore the new frontier of ultra-relativistic heavy-ion collisions, which had been started five years earlier at Berkeley by the GSI–LBL collaboration. It was Bill Willis, one of CERN’s early advocates for high-energy nucleus–nucleus collisions, who helped draw Specht into this developing field. That move proved foundational for both Specht and CERN.

From the early 1980s through to 2010, Specht played leading roles in four CERN nuclear-collision experiments: R807/808 at the Intersecting Storage Rings, and HELIOS, CERES/NA45 and NA60 at the Super Proton Synchrotron (SPS). As the book describes, he was instrumental not only in shaping their scientific goals – namely, to search for the highest temperatures of the newly formed hot, dense QCD matter, exceeding the well-established Hagedorn limiting temperature for hadronic matter of roughly 160 MeV, with the overarching aim of establishing that quasi-thermalised gluon matter and even quark–gluon matter can be created at the SPS – but also in the design and execution of the detectors themselves. At the Universität Heidelberg, he built a heavy-ion research group and became a key voice in securing German support for CERN’s heavy-ion programme.

CERES was Specht’s brainchild, and stood out for its bold concept

As spokesperson of the HELIOS experiment from 1984 onwards, Specht gained recognition as a community leader. But it was CERES, his brainchild, that stood out for its bold concept: to look for thermal dileptons using a hadron-blind detector – an idea that was novel to heavy-ion collision experiments at the time. Despite considerable scepticism, CERES was approved in 1989 and built in under two years. Its results on sulphur–gold collisions became some of the most cited of the SPS era, offering strong evidence for thermal lepton-pair production, potentially from a quark–gluon plasma – a hot and deconfined state of QCD matter then hypothesised to exist at high temperatures and densities, such as in the early universe. Such high temperatures, above the limiting Hagedorn temperature of 160 MeV for hadrons, had not been experimentally demonstrated at LBNL’s Bevalac or Brookhaven’s Alternating Gradient Synchrotron.

Advising ALICE

In the early 1990s, while CERES was being upgraded for lead–gold runs, Specht co-led a European Committee for Future Accelerators working group that laid the groundwork for ALICE, the LHC’s dedicated heavy-ion experiment. His Heidelberg group formally joined ALICE in 1993. Even after becoming scientific director of GSI in 1992, Specht remained closely involved as an advisor.

Specht’s next major CERN project was NA60, which collided a range of nuclei in a fixed-target experiment at the SPS and pushed dilepton measurements to new levels of precision. The experiment achieved two breakthroughs: a nearly perfect thermal spectrum consistent with blackbody radiation at temperatures of 240 to 270 MeV, some 100 MeV above the Hagedorn temperature of 160 MeV; and clear evidence of in-medium modification of the ρ meson, caused by its collisions with nucleons and heavy baryon resonances, showing that the medium is not only hot but also has a high net baryon density. These results were widely seen as strong confirmation of the lattice–QCD-inspired quark–gluon plasma hypothesis. Many chapter authors, some of whom were direct collaborators and others long-time interpreters of heavy-ion signals, highlight the impact NA60 had on the field. Earlier claims, based on competing hadronic signals for deconfinement such as strong collective hydrodynamic flow, J/ψ melting and quark recombination, could often also be described by hadronic transport theory without assuming deconfinement.

Hans Joachim Specht: Scientist and Visionary

Specht didn’t limit himself to fundamental research. As director of GSI, he oversaw Europe’s first clinical ion-beam cancer-therapy programme using carbon ions. The treatment of the first 450 patients at GSI was a breakthrough moment for medical physics and led to the creation of the Heidelberg Ion-Beam Therapy Centre, the first hospital-based hadron-therapy centre in Europe. Specht later recalled the first successful treatment as one of the happiest moments of his career. In their essays, Jürgen Debus, Hartmut Eickhoff and Thomas Nilsson outline how Specht steered GSI’s mission into applied research without losing its core scientific momentum.

Specht was also deeply engaged in institutional planning, helping to shape the early stages of the Facility for Antiproton and Ion Research, a new facility to study heavy-ion collisions that is expected to start operations at GSI at the end of the decade. He also initiated plasma-physics programmes, and contributed to the development of detector technologies used far beyond CERN or GSI. In parallel, he held key roles in international science policy, including within the Nuclear Physics Collaboration Committee, as a founding board member of the European Centre for Theoretical Studies in Nuclear Physics in Trento, and at CERN as chair of the Proton Synchrotron and Synchro-Cyclotron Committee and as a decade-long member of the Scientific Policy Committee.

The book doesn’t shy away from more unusual chapters either. In later years, Specht developed an interest in the neuroscience of music. Collaborating with Hans Günter Dosch and Peter Schneider, he explored how the brain processes musical structure – an example of his lifelong intellectual curiosity and openness to interdisciplinary thinking.

Importantly, Scientist and Visionary is not a hagiography. It includes a range of perspectives and technical details that will appeal to both physicists who lived through these developments and younger researchers unfamiliar with the history behind today’s infrastructure. At its best, the book serves as a reminder of how much experimental physics depends not just on ideas, but on leadership, timing and institutional navigation.

That being said, it is not a typical scientific biography. It’s more of a curated mosaic, constructed through personal reflections and contextual essays. Readers looking for deep technical analysis will find it in parts, especially in the sections on CERES and NA60, but its real value lies in how it tracks the development of large-scale science across different fields, from high-energy physics to medical applications and beyond.

For those interested in the history of CERN, the rise of heavy-ion physics, or the institutional evolution of European science, this is a valuable read. And for those who knew or worked with Hans Specht, it offers a fitting tribute – not through nostalgia, but through careful documentation of the many ways Hans shaped the physics and the institutions we now take for granted.

Two takes on the economics of big science

At the 2024 G7 conference on research infrastructure in Sardinia, participants were invited to think about the potential socio-economic impact of the Einstein Telescope. Most physicists would have no expectation that a deeper knowledge of gravitational waves will have any practical usage in the foreseeable future. What, then, will be the economic impact of building a gravitational-wave detector hundreds of metres underground in some abandoned mines? What will be the societal impact of several kilometres of lasers and mirrors?

Such questions are strategically important for the future of fundamental science, which is increasingly often big science. Two new books tackle its socio-economic impacts head on, though with quite different approaches, one more qualitative in its research, and the other more quantitative. What are the pros and cons of qualitative versus quantitative analysis in social sciences? Personally, as an economist, at a certain point I would tend to say show me the figures! But, admittedly, when assessing the socio-economic impact of large-scale research infrastructures, if good statistical data is not available, I would always prefer a fine-grained qualitative analysis to quantitative models based on insufficient data.

Big Science, Innovation & Societal Contributions, edited by Shantha Liyanage (CERN), Markus Nordberg (CERN) and Marilena Streit-Bianchi (vice president of ARSCIENCIA), takes the qualitative route – a journey into mostly uncharted territory, asking difficult questions about the socio-economic impact of large-scale research infrastructures.

Big Science, Innovation & Societal Contributions

Some figures about the book may be helpful: the three editors collected 15 chapters containing about 100 figures and tables, involved 34 authors, listed more than 700 references, and covered a wide range of scientific fields, including particle physics, astrophysics, medicine and computer science. A cursory reading of the list of about 300 acronyms, from AAI (Architecture Adaptive Integrator) to ZEPLIN (ZonEd Proportional scintillation in Liquid Noble gas detector), is a good test of how many research infrastructures and collaborations you already know.

After introducing the LHC, a chapter on new accelerator technologies explores a remarkable array of applications of accelerator physics. To name a few: CERN’s R&D in superconductivity is being applied in nuclear fusion; the CLOUD experiment uses particle beams to model atmospheric processes relevant to climate change (CERN Courier January/February 2025 p5); and the ELISA linac is being used to date Australian rock art, helping determine whether it originates from the Pleistocene or Holocene epochs (CERN Courier March/April 2025 p10).

A wide-ranging exploration of how large-scale research infrastructures generate socio-economic value

The authors go on to explore innovation with a straightforward six-step model: scanning, codification, abstraction, diffusion, absorption and impacting. This is a helpful compass for building a narrative. Other interesting issues discussed in this part of the book include governance mechanisms and the leadership of large-scale scientific organisations, including in gravitational-wave astronomy. No chapter better illustrates the impact of science on human wellbeing than the survey of medical applications by Mitra Safavi-Naeini and co-authors, which covers three major domains of medical physics: medical imaging with X-rays and PET; radiotherapy targeting cancer cells internally with radioactive drugs or externally using linacs; and more advanced but expensive particle-therapy treatments with beams of protons, helium ions and carbon ions. Personally, I would expect some of these applications to be enhanced by artificial intelligence, which in turn will have an impact on science itself in terms of digital data interpretation and forecasting.

Sociological perspectives

The last part of the book takes a more sociological perspective, with discussions about cultural values, the social responsibility to make sure big data is open data, and social entrepreneurship. In his chapter on the social responsibility of big science, Steven Goldfarb stresses the importance of the role of big science for learning processes and cultural enhancement. This topic is particularly dear to me, as my previous work on the cost–benefit analysis of the LHC revealed that the value of human capital accumulation for early-stage researchers is among the biggest contributions to the machine’s return on investment.

I recommend Big Science, Innovation & Societal Contributions as a highly informative, non-technical and up-to-date introduction to the landscape of big science, but I would suggest complementing it with another very recent book, The Economics of Big Science 2.0, edited by Johannes Gutleber and Panagiotis Charitos, both currently working at CERN. Charitos was also the co-editor of the volume’s predecessor, The Economics of Big Science, which focuses more on science policy and public investment in science.

Why a “2.0” book? There is a shift of angle. The Economics of Big Science 2.0 builds upon the prior volume, but offers a more quantitative perspective on big science. Notably, it takes advantage of a larger share of contributions by economists, including myself as co-author of a chapter about the public’s perception of CERN.

The Economics of Big Science 2.0

It is worth clarifying that economics, as a domain within the social sciences more generally, has its own rules of the game and style. The social sciences can be seen as an umbrella encompassing sociology, political science, anthropology, history, management and communication studies, linguistics, psychology and more. The role of economics within this family is to build quantitative models and to test them against statistical evidence, a practice known as econometrics.

Here, the authors excel. The Economics of Big Science 2.0 offers a wide-ranging exploration of how large-scale research infrastructures generate socio-economic value, primarily driven by quantitative analysis. The authors explore a diverse range of empirical methods, from cost–benefit analysis to econometric modelling, allowing them to assess the tangible effects of big science across multiple fields. There is a unique challenge for applied economics here, as big-science centres by definition do not come in large numbers. However, the authors involve large numbers of stakeholders, allowing for a statistical analysis of impacts and the estimation of expected values, standard errors and confidence intervals.

Societal impact

The Economics of Big Science 2.0 examines the socio-economic impact of ESA’s space programmes, the local economic benefits of large-scale facilities and the efficiency benefits of open science. The book measures public attitudes toward, and awareness of, science within the context of CERN, offering insights into science’s broader societal impacts. It grounds its analyses in a series of focused case studies, including particle colliders such as the LHC and FCC, synchrotron light sources like the ESRF and ALBA, and radio-astronomy facilities such as SARAO, illustrating the economic impacts of big science through a quantitative lens. In contrast to the more narrative and qualitative approach of Big Science, Innovation & Societal Contributions, The Economics of Big Science 2.0 distinguishes itself through a strong reliance on empirical data.

Ivan Todorov 1933–2025

Ivan Todorov, a theoretical physicist of outstanding academic achievement and a man of remarkable moral integrity, passed away on 14 February in his hometown of Sofia. He is best known for his prominent works on group-theoretical methods and the mathematical foundations of quantum field theory.

Ivan was born on 26 October 1933 into a family of literary scholars who played an active role in Bulgarian academic life. After graduating from the University of Sofia in 1956, he spent several years at JINR in Dubna and at IAS Princeton, before joining INRNE in Sofia. In 1974 he became a full member of the Bulgarian Academy of Sciences.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions. The classification and complete description of the unitary representations of the conformal group are collected in two well-known and widely used monographs by him and his collaborators. Ivan’s research on constructive quantum field theories and his books devoted to the axiomatic approach have greatly influenced modern developments in this area. His early results on the analytic properties of higher-loop Feynman diagrams have also found important applications in perturbative quantum field theory.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions

The scientifically highly successful international conferences and schools organised in Bulgaria under Ivan’s guidance during the Cold War served as meeting grounds for leading Russian and East European theoretical physicists and their West European and American colleagues. They were crucial for the development of theoretical physics in Bulgaria.

Everybody who knew Ivan was impressed by his vast culture and acute intellectual curiosity. His profound knowledge of modern mathematics allowed him to remain constantly in tune with new trends and ideas in theoretical physics. Ivan’s courteous and smiling way of discussing physics, always peppered with penetrating comments and suggestions, was inimitable. His passing is a great loss for theoretical physics, especially in Bulgaria, where he mentored a generation of researchers.

Jonathan L Rosner 1941–2025

Jon Rosner

Jonathan L Rosner, a distinguished theoretical physicist and professor emeritus at the University of Chicago, passed away on 24 May 2025. He made profound contributions to particle physics, particularly in quark dynamics and the Standard Model.

Born in New York City, Rosner grew up in Yonkers, NY. He earned his Bachelor of Arts in Physics from Swarthmore College in 1962 and completed his PhD at Princeton University in 1965 with Sam Treiman as his thesis advisor. His early academic appointments included positions at the University of Washington and Tel Aviv University. In 1969 he joined the faculty at the University of Minnesota, where he served until 1982. That year, he became a professor at the University of Chicago, where he remained a central figure in the Enrico Fermi Institute and the Department of Physics until his retirement in 2011.

Rosner’s research spanned a broad spectrum of topics in particle physics, with a focus on the properties and interactions of quarks and leptons in the Standard Model and beyond.

In a highly influential paper in 1969, he pointed out that the duality between hadronic s-channel scattering and t-channel exchanges could be understood graphically, in terms of quark worldlines. Roughly three months before the “November revolution” – the experimental discovery of charm–anticharm particles – Jon, together with the late Mary K Gaillard and Benjamin W Lee, published a seminal paper predicting the properties of hadronic states containing charm quarks.

He made significant contributions to the study of mesons and baryons, exploring their spectra and decay processes. His work on quarkonium systems, particularly the charmonium and bottomonium states, provided critical insights into the strong force that binds quarks together. He also made masterful use of algebraic methods in predicting and analysing CP-violating observables.

In more recent years, Jon focused on exotic combinations of quarks and antiquarks: tetraquarks and pentaquarks. In 2017 he co-authored a Physical Review Letters paper that provided the first robust prediction of a doubly bottom (bbūd̄) tetraquark that would be stable under the strong interaction (CERN Courier November/December 2024 p33).

What truly set Jon apart was his rare ability to seamlessly integrate theoretical acumen with practical experimental engagement. While primarily a theoretician, he held a deep appreciation for experimental data and actively participated in the experimental endeavour. A prime example of this was his long-standing involvement with the CLEO collaboration at Cornell University.

He also collaborated on studies related to the detection of cosmic-ray air showers and contributed to the development of prototype systems for detecting radio pulses associated with these high-energy events. His interdisciplinary approach bridged theoretical predictions with experimental observations, enhancing the coherence between theory and practice in high-energy physics.

Unusually for a theorist, Jon was a high-level expert in electronics, rooted in his deep lifelong interest in amateur short-wave radio. As with everything else, he pursued it very thoroughly, from physics analysis to travelling to solar eclipses to take advantage of the increased propagation range of electromagnetic waves caused by changes in the ionosphere.

Rosner was also deeply committed to public service within the scientific community. He served as chair of the Division of Particles and Fields of the American Physical Society in 2013, during which he played a central role in organising the “Snowmass on the Mississippi” conference. This event was an essential part of the long-term strategic planning for the US high-energy physics programme. His leadership and vision were widely recognised and appreciated by his peers.

Throughout his career, Rosner received numerous accolades. He was a fellow of the American Physical Society and was awarded fellowships from the Alfred P. Sloan Foundation and the John Simon Guggenheim Memorial Foundation. His publication record includes more than 500 theoretical papers, reflecting his prolific and highly impactful career in physics. He is survived by his wife, Joy, their two children, Hannah and Benjamin, and a granddaughter, Sadie.

César Gómez 1954–2025

César Gómez, whose deep contributions to gauge theory and quantum gravity were matched by his scientific leadership, passed away on 7 April 2025 after a short illness, leaving his friends and colleagues with a deep sense of loss.

César gained his PhD in 1981 from the Universidad de Salamanca, where he became professor after working at Harvard, the Institute for Advanced Study and CERN. He held an invited professorship at the Université de Genève between 1987 and 1991, and in the latter year he moved to the Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, where he eventually became a founding member of the Instituto de Física Teórica (IFT) UAM–CSIC. He became emeritus in 2024.

Among the large number of topics he worked on during his scientific career, César was initially fascinated by the dynamics of gauge theories. He dedicated his postdoctoral years to problems concerning the structure of the quantum vacuum in QCD, making some crucial contributions.

Focusing in the 1990s on the physics of two-dimensional conformal field theories, he used his special gifts to squeeze physics out of formal structures, leaving his mark in works ranging from superstrings to integrable models, and co-authoring with Martí Ruiz-Altaba and Germán Sierra the book Quantum Groups in Two-Dimensional Physics (Cambridge University Press, 1996). With the new century and the rise of holography, César returned to the topics of his youth: the renormalisation group and gauge theories, now with a completely different perspective.

Far from settling down, in his last decade César became ever more daring, plunging together with Gia Dvali and other collaborators into a radical approach to understanding symmetry breaking in gauge theories, opening new avenues in the study of black holes and the emergence of spacetime in quantum gravity. The magic of von Neumann algebras inspired him to propose an elegant, deep and original understanding of inflationary universes and their quantum properties. This research programme led him to one of his most fertile and productive periods, sadly truncated by his unexpected passing at a time when he was bursting with ideas and projects.

César’s influence went beyond his papers. After his arrival at CSIC as an international leader in string theory, he acted as a pole of attraction. His impact was felt both through the training of graduate students and through the many courses he taught, which left a lasting impression on new generations.

Contrasting with his abstract scientific style, César also had a pragmatic side, full of vision, momentum and political talent. A major part of his legacy is the creation of the IFT, whose existence would be unthinkable without César among the small group of theoretical physicists from the Universidad Autónoma de Madrid and CSIC who made a dream come true. For him, the IFT was more than his research institute: it was the home he helped to build.

Philosophy was a true second career for César, dating back to his PhD in Salamanca and strengthened at Harvard, where he started a lifelong friendship with Hilary Putnam. The philosophy of language was one of his favourite subjects for philosophical musings, and he dedicated to it an inspiring book in Spanish in 2003.

César’s impressive and eclectic knowledge of physics always transformed blackboard discussions into a delightful and fascinating experience, while his extraordinary ability to establish connections between apparently remote notions was extremely motivating in the early stages of a project. A regular presence at seminars and journal clubs, always conspicuous by his many penetrating and inspiring questions, he was a beloved character among graduate students, who felt the excitement of knowing that he could turn every seminar into a unique event.

César was an excellent scientist with a remarkable personality. He was a wonderful conversationalist on any possible topic, encouraging open discussions free of prejudice, and building bridges with all conversational partners. He cherished his wife Carmen and daughters Ana and Pepa, who survive him.

Farewell, dear friend. May you rest in peace, and may your memory be our blessing.
