

The battle of the Big Bang

As Arthur Koestler wrote in his seminal 1959 work The Sleepwalkers, “The history of cosmic theories … may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at, reminds one more of a sleepwalker’s performance than an electronic brain’s.” Koestler’s trenchant observation about the state of cosmology in the first half of the 20th century is perhaps even more true of cosmology in the first half of the 21st, and Battle of the Big Bang: The New Tales of Our Cosmic Origins provides an entertaining – and often refreshingly irreverent – update on the state of current collective obsessions and controlled schizophrenias in cosmology’s effort to understand the origin of the universe. The product of a collaboration between a working cosmologist (Afshordi) and a science communicator (Halper), Battle of the Big Bang tells the story of our modern efforts to comprehend the nature of the first moments of time, back to the moment of the Big Bang and even before.

Rogues gallery

The story told by the book combines lucid explanations of a rogues’ gallery of modern cosmological theories, some astonishingly successful, others less so, interspersed with anecdotes culled from Halper’s numerous interviews with key players in the game. These stories of the real people behind the theories add humanistic depth to the science, and the balance between Halper’s engaging storytelling and Afshordi’s steady-handed illumination of often esoteric scientific ideas is mostly a winning combination; the book is readable, without sacrificing too much scientific depth. In this respect, Battle of the Big Bang is reminiscent of Dennis Overbye’s 1991 Lonely Hearts of the Cosmos. As with Overbye’s account of the famous conference-banquet fist fight between Rocky Kolb and Gary Steigman, there is no shortage here of renowned scientists behaving like children, and the “mean girls of cosmology” angle makes for an entertaining read. The story of University of North Carolina professor Paul Frampton getting catfished by cocaine smugglers posing as model Denise Milani and ending up in an Argentine prison, for example, is not one you see coming.

Battle of the Big Bang: The New Tales of Our Cosmic Origins

A central conflict propelling the narrative is the longstanding feud between Andrei Linde and Alan Guth, both originators of the theory of cosmological inflation, and Paul Steinhardt, also an originator of the theory who later transformed into an apostate and bitter critic of the theory he helped establish.

Inflation – a hypothesised period of exponential cosmic expansion by more than 26 orders of magnitude that set the initial conditions for the hot Big Bang – is the gorilla in the room, a hugely successful theory that over the past several decades has racked up win after win when confronted by modern precision cosmology. Inflation is rightly considered by most cosmologists to be a central part of the “standard” cosmology, and its status as a leading theory inevitably makes it a target of critics like Steinhardt, who argue that inflation’s inherent flexibility means that it is not a scientific theory at all. Inflation is introduced early in the book, and for the remainder, Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation: multiverses, bouncing universes, new universes birthed from within black holes, extra dimensions, varying light speed and “mirror” universes with reversed time all make appearances, a dizzying inventory of our most recent collective obsessions and schizophrenias.
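For readers who want to unpack the “26 orders of magnitude”, the standard back-of-envelope relation between e-folds of inflation and the expansion factor is sketched below; the conventional minimum of about 60 e-folds is textbook material, not a figure taken from the book.

```latex
% N e-folds of inflation multiply the scale factor a by e^N.
% The conventional minimum needed to solve the horizon and flatness
% problems is N of roughly 60, which corresponds to
\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}} = e^{N} \simeq e^{60} \approx 10^{26},
% i.e. roughly 26 orders of magnitude, since 26 \ln 10 \approx 60.
```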

In the later chapters, Afshordi describes some of his own efforts to formulate an alternative to inflation, and it is here that the book is at its strongest; the voice of a master of the craft confronting his own unconscious assumptions and biases makes for compelling reading. I have known Niayesh as a friend and colleague for more than 20 years. He is a fearlessly creative theorist with deep technical skill, but he has the heart of a rebel and a poet, and I found myself wishing that the book gave his unique voice more room to shine, instead of burying it beneath too many mundane pop-science tropes; the book could have used more of the science and less of the “science communication”. At times the pop-culture references come so thick that the reader feels as if he is having to shake them off his leg.

Compelling arguments

Anyone who reads science blogs or follows science on social media is aware of the voices, some of them from within mainstream science and many from further out on the fringe, arguing that modern theoretical physics suffers from a rigid orthodoxy that serves to crowd out worthy alternative approaches to problems such as dark matter, dark energy and the unification of gravity with quantum mechanics. This has been the subject of several books, such as Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong. A real value of Battle of the Big Bang is that it provides a compelling counterargument to that pessimistic narrative. In reality, ambitious scientists like nothing better than overturning a standard paradigm, and theorists have put the standard model of cosmology in the cross hairs with the gusto of assassins gunning for John Wick. Despite – or perhaps because of – its focus on conflict, this book ultimately paints a picture of a vital and healthy scientific process, a kind of controlled chaos, ripe with wild ideas, full of the clash of egos and littered with the ashes of failed shots at glory.

What the book is not is a reliable scholarly work on the history of science. Not only was the manuscript rather haphazardly copy-edited (the renowned Mount Palomar telescope, for example, is not “two hundred foot”, but in fact 200 inches), but the historical details are sometimes smoothed over to fit a coherent narrative rather than presented in their actual messy accuracy. While I do not doubt the anecdote of David Spergel saying “we’re dead”, referring to cosmic strings when data from the COBE satellite was first released, it was not COBE that killed cosmic strings. The blurry vision of COBE could accommodate either strings or inflation as the source of fluctuations in the cosmic microwave background (CMB), and it took a clearer view to make the distinction. The final nail in the coffin came from BOOMERanG nearly a decade later, with the observation of the second acoustic peak in the CMB. And it was not, as claimed here, BOOMERanG that provided the first evidence for a flat geometry to the cosmos; that happened a few years earlier, with the Saskatoon and CAT experiments.

Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation

The book makes a point of the premature death of Dave Wilkinson, when in fact he died at age 67, not (as is implied in the text) in his 50s. Wilkinson – who was my freshman physics professor – was a great scientist and a gifted teacher, and it is appropriate to memorialise him, but he had a long and productive career.

Besides these points of detail, there are some more significant omissions. The book relates the story of how the Ukrainian physicist Alex Vilenkin, blacklisted from physics and working as a zookeeper in Kharkiv, escaped the Soviet Union. Vilenkin moved to SUNY Buffalo, where I am currently a professor, because he had mistaken Mendel Sachs, a condensed matter theorist, for Ray Sachs, who originally predicted fluctuations in the CMB. It’s a funny story, and although the authors note that Vilenkin was blacklisted for refusing to be an informant for the KGB, they omit the central context that he was Jewish, one of many Jews banished from academic life by Soviet authorities who escaped the stifling anti-Semitism of the Soviet Union for scientific freedom in the West. This history resonates today in light of efforts by some scientists to boycott Israeli institutes and even blacklist Israeli colleagues. Unlike the minutiae of CMB physics, this matters, and Battle of the Big Bang should have been more careful to tell the whole story.

Quantum theory returns to Helgoland

In June 1925, Werner Heisenberg retreated to the German island of Helgoland seeking relief from hay fever and the conceptual disarray of the old quantum theory. On this remote, rocky outpost in the North Sea, he laid the foundations of matrix mechanics. Later, his “island epiphany” would pass through the hands of Max Born, Wolfgang Pauli, Pascual Jordan and several others, and become the first mature formulation of quantum theory. From 9 to 14 June 2025, almost a century later, hundreds of researchers gathered on Helgoland to mark the anniversary – and to deal with pressing and unfinished business.

Alfred D Stone (Yale University) called upon participants to challenge the folklore surrounding quantum theory’s birth. Philosopher Elise Crull (City College of New York) drew overdue attention to Grete Hermann, who hinted at entanglement before it had a name and anticipated Bell in identifying a flaw in von Neumann’s no-go theorem, which had been taken as proof that hidden-variable theories are impossible. Science writer Philip Ball questioned Heisenberg’s epiphany itself: he didn’t invent matrix mechanics in a flash, claims Ball, nor immediately grasp its relevance, and it took months, and others, to see his contribution for what it was (see “Lend me your ears” image).

Building on a strong base

A clear takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, nevertheless remain open to interpretation, and any future progress will depend on excavating them directly (see “Four ways to interpret quantum mechanics”).

Does the quantum wavefunction represent an objective element of reality or merely an observer’s state of knowledge? On this question, Helgoland 2025 could scarcely have been more diverse. Christopher Fuchs (UMass Boston) passionately defended quantum Bayesianism, which recasts the Born probability rule as a consistency condition for rational agents updating their beliefs. Wojciech Zurek (Los Alamos National Laboratory) presented the quantum Darwinist perspective, in which classical objectivity emerges from redundant quantum information encoded across the environment. Although Zurek himself maintains a more agnostic stance, his decoherence-based framework is now widely embraced by proponents of many-worlds quantum mechanics (see “The minimalism of many worlds”).

The foundations of quantum mechanics remain open to interpretation, and any future progress will depend on excavating them directly

Markus Aspelmeyer (University of Vienna) made the case that a signature of gravity’s long-speculated quantum nature may soon be within experimental reach. Building on the “gravitational Schrödinger’s cat” thought experiment proposed by Feynman in the 1950s, he described how placing a massive object in a spatial superposition could entangle a nearby test mass through their gravitational interaction. Such a scenario would produce correlations that are inexplicable by classical general relativity alone, offering direct empirical evidence that gravity must be described quantum-mechanically. Realising this type of experiment requires ultra-low pressures and cryogenic temperatures to suppress decoherence, alongside extremely low-noise measurements of gravitational effects at short distances. Recent advances in optical and optomechanical techniques for levitating and controlling nanoparticles suggest a path forward – one that could bring evidence for quantum gravity not from black holes or the early universe, but from laboratories on Earth.

Information insights

Quantum information was never far from the conversation. Isaac Chuang (MIT) offered a reconstruction of how Heisenberg might have arrived at the principles of quantum information, had his inspiration come from Shannon’s Mathematical Theory of Communication. Chuang recast Heisenberg’s original insights into three broad principles: observations act on systems; local and global perspectives are in tension; and the order of measurements matters. Starting from these ingredients, one could in principle recover the structure of the qubit and the foundations of quantum computation. Taking the analogy one step further, he suggested that similar tensions between memorisation and generalisation – or robustness and adaptability – may one day give rise to a quantum theory of learning.

Helgoland 2025 illustrated just how much quantum mechanics has diversified since its early days. No longer just a framework for explaining atomic spectra, the photoelectric effect and black-body radiation, it is at once a formalism describing high-energy particle scattering, a handbook for controlling the most exotic states of matter, the foundation for information technologies now driving national investment plans, and a source of philosophical conundrums that, after decades at the margins, has once again taken centre stage in theoretical physics.

Exceptional flare tests blazar emission models

Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electromagnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.

The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiralling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it. Polarisation measurements offer a key diagnostic tool, as the two models predict distinct polarisation signatures.

Model signatures

In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.

Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If initially generated by synchrotron radiation in the AGN (synchrotron self-Compton, SSC), modest polarisation (~50%) is expected due to the inherent polarisation of synchrotron photons, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If initially generated by external sources (external Compton, EC), isotropic photon fields from the surrounding structures are expected to average out their polarisation.

IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, with its low-energy synchrotron component mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.

Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.

The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.

A key feature of the flare was the rapid rise and fall of optical polarisation

A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.

The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.

A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation suggests emission from a large, turbulent region. Meanwhile, optical flaring was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.

Turning point

These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.

The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter is set to complement IXPE at even higher energies when it is launched by NASA in 2027. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.

Fermilab’s final word on muon g-2

Fermilab’s Muon g-2 collaboration has given its final word on the magnetic moment of the muon. The new measurement agrees closely with a significantly revised Standard Model (SM) prediction. Though the experimental measurement will likely now remain stable for several years, theorists expect to make rapid progress to reduce uncertainties and resolve tensions underlying the SM value. One of the most intriguing anomalies in particle physics is therefore severely undermined, but not yet definitively resolved.

The muon g-2 anomaly dates back to the late 1990s and early 2000s, when measurements at Brookhaven National Laboratory (BNL) uncovered a possible discrepancy by comparison to theoretical predictions of the so-called muon anomaly, aμ = (g-2)/2. aμ expresses the magnitude of quantum loop corrections to the leading-order prediction of the Dirac equation, which multiplies the classical gyromagnetic ratio of fundamental fermions by a “g-factor” of precisely two. Loop corrections of aμ ~ 0.1% quantify the extent to which virtual particles emitted by the muon further increase the strength of its interaction with magnetic fields. Were measurements to be shown to deviate from SM predictions, this would indicate the influence of virtual fields beyond the SM.
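For reference, the definitions above can be collected in two lines; the leading-order QED value quoted below is the textbook Schwinger term, not a number taken from the Fermilab analysis.

```latex
% The muon's magnetic moment is proportional to its spin,
% with the Dirac equation predicting g = 2 at tree level:
\vec{\mu} = g_{\mu}\,\frac{e}{2 m_{\mu}}\,\vec{S}, \qquad g_{\mu}^{\mathrm{Dirac}} = 2 .
% The anomaly collects the loop corrections; its leading (Schwinger) term is
a_{\mu} \equiv \frac{g_{\mu}-2}{2}, \qquad a_{\mu}^{\mathrm{QED,\,1\,loop}} = \frac{\alpha}{2\pi} \approx 0.00116 ,
% i.e. the roughly 0.1% correction referred to in the text.
```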

Move on up

In 2013, the BNL experiment’s magnetic storage ring was transported from Long Island, New York, to Fermilab in Batavia, Illinois. After years of upgrades and improvements, the new experiment began in 2017. It now reports a final precision of 127 parts per billion (ppb), bettering the experiment’s design precision of 140 ppb and improving on the BNL result by a factor of four.

“First and foremost, an increase in the number of stored muons allowed us to reduce our statistical uncertainty to 98 ppb compared to 460 ppb for BNL,” explains co-spokesperson Peter Winter of Argonne National Laboratory, “but a lot of technical improvements to our calorimetry, tracking, detector calibration and magnetic-field mapping were also needed to improve on the systematic uncertainties from 280 ppb at BNL to 78 ppb at Fermilab.”

This formidable experimental precision throws down the gauntlet to the theory community

The final Fermilab measurement is (116592070.5 ± 11.4 (stat.) ± 9.1 (syst.) ± 2.1 (ext.)) × 10^–11, fully consistent with the previous BNL measurement. This formidable precision throws down the gauntlet to the Muon g-2 Theory Initiative (TI), which was founded to achieve an international consensus on the theoretical prediction.

The calculation is difficult, featuring contributions from all sectors of the SM (CERN Courier March/April 2025 p21). The TI published its first whitepaper in 2020, reporting aμ = (116591810 ± 43) × 10^–11, based exclusively on a data-driven analysis of cross-section measurements at electron–positron colliders (WP20). In May, the TI updated its prediction, publishing a value aμ = (116592033 ± 62) × 10^–11, statistically incompatible with the previous prediction at the level of three standard deviations, and with an increased uncertainty of 530 ppb (WP25). The new prediction is based exclusively on numerical SM calculations. This was made possible by rapid progress in the use of lattice QCD to control the dominant source of uncertainty, which arises due to the contribution of so-called hadronic vacuum polarisation (HVP). In HVP, the photon representing the magnetic field interacts with the muon during a brief moment when a virtual photon erupts into a difficult-to-model cloud of quarks and gluons.
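As a rough cross-check of the numbers quoted above, the back-of-envelope sketch below (plain Python, assuming uncorrelated uncertainties added in quadrature, which is not how the official comparisons are performed) reproduces the quoted experimental precision and illustrates why the measurement is in strong tension with WP20 yet consistent with WP25.

```python
import math

# All values in units of 1e-11, as quoted in the text.
a_mu_exp = 116592070.5
exp_err = math.sqrt(11.4**2 + 9.1**2 + 2.1**2)   # stat, syst and ext added in quadrature

# Relative precision in parts per billion (ppb).
print(f"experimental precision: {1e9 * exp_err / a_mu_exp:.0f} ppb")  # ~126 ppb, i.e. the quoted 127 ppb after rounding

# Rough "pull" of the measurement against the two Theory Initiative predictions.
for label, a_mu_th, th_err in [("WP20", 116591810.0, 43.0),
                               ("WP25", 116592033.0, 62.0)]:
    sigma = math.sqrt(exp_err**2 + th_err**2)
    pull = (a_mu_exp - a_mu_th) / sigma
    print(f"{label}: difference = {pull:.1f} sigma")  # ~5.7 for WP20, ~0.6 for WP25
```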

Significant shift

“The switch from using the data-driven method for HVP in WP20 to lattice QCD in WP25 results in a significant shift in the SM prediction,” confirms Aida El-Khadra of the University of Illinois, chair of the TI, who believes that it is not unreasonable to expect significant error reductions in the next couple of years. “There still are puzzles to resolve, particularly around the experimental measurements that are used in the data-driven method for HVP, which prevent us, at this point in time, from obtaining a new prediction for HVP in the data-driven method. This means that we also don’t yet know if the data-driven HVP evaluation will agree or disagree with lattice–QCD calculations. However, given the ongoing dedicated efforts to resolve the puzzles, we are confident we will soon know what the data-driven method has to say about HVP. Regardless of the outcome of the comparison with lattice QCD, this will yield profound insights.”

We are making plans to improve experimental precision beyond the Fermilab experiment

On the experimental side, attention now turns to the Muon g-2/EDM experiment at J-PARC in Tokai, Japan. While the Fermilab experiment used the “magic gamma” method first employed at CERN in the 1970s to cancel the effect of electric fields on spin precession in a magnetic field (CERN Courier September/October 2024 p53), the J-PARC experiment seeks to control systematic uncertainties by exercising particularly tight control of its muon beam. In the Japanese experiment, antimatter muons will be captured by atomic electrons to form muonium, ionised using a laser, and reaccelerated for a traditional precession measurement with sensitivity to both the muon’s magnetic moment and its electric dipole moment (CERN Courier July/August 2024 p8).

“We are making plans to improve experimental precision beyond the Fermilab experiment, though their precision is quite tough to beat,” says spokesperson Tsutomu Mibe of KEK. “We also plan to search for the electric dipole moment of the muon with an unprecedented precision of roughly 10^–21 e cm, improving the sensitivity of the last results from BNL by a factor of 70.”

With theoretical predictions from high-order loop processes expected to be of the order 10^–38 e cm, any observation of an electric dipole moment would be a clear indication of new physics.

“Construction of the experimental facility is currently ongoing,” says Mibe. “We plan to start data taking in 2030.”

STAR hunts QCD critical point

Phases of QCD

Just as water takes the form of ice, liquid or vapour, QCD matter exhibits distinct phases. But while the phase diagram of water is well established, the QCD phase diagram remains largely conjectural. The STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) recently completed a new beam-energy scan (BES-II) of gold–gold collisions. The results narrow the search for a long-sought-after “critical point” in the QCD phase diagram.

“BES-II precision measurements rule out the existence of a critical point in the regions of the QCD phase diagram accessed at LHC and top RHIC energies, while still allowing the possibility at lower collision energies,” says Bedangadas Mohanty of the National Institute of Science Education and Research in India, who co-led the analysis. “The results refine earlier BES-I indications, now with much reduced uncertainties.”

At low temperatures and densities, quarks and gluons are confined within hadrons. Heating QCD matter leads to the formation of a deconfined quark–gluon plasma (QGP), while increasing the density at low temperatures is expected to give rise to more exotic states such as colour superconductors. Above a certain threshold in baryon density, the transition from hadron gas to QGP is expected to be first-order – a sharp, discontinuous change akin to water boiling. As density decreases, this boundary gives way to a smooth crossover where the two phases blend. A hypothetical critical point marks the shift between these regimes, much like the endpoint of the liquid–gas coexistence line in the phase diagram of water (see “Phases of QCD” figure).

Heavy-ion collisions offer a way to observe this phase transition directly. At the Large Hadron Collider, the QGP created in heavy-ion collisions transitions smoothly to a hadronic gas as it cools, but the lower energies explored by RHIC probe the region of phase space where the critical point may lie.

To search for possible signatures of a critical point, the STAR collaboration measured gold–gold collisions at centre-of-mass energies between 7.7 and 27 GeV per nucleon pair. The collaboration reports that their data deviate from frameworks that do not include a critical point, including the hadronic transport model, thermal models with canonical ensemble treatment, and hydrodynamic approaches with excluded-volume effects. Depending on the choice of observable and non-critical baseline model, the significance of the deviations ranges from two to five standard deviations, with the largest effects seen in head-on collisions when using peripheral collisions as a reference.

“None of the existing theoretical models fully reproduce the features observed in the data,” explains Mohanty. “To interpret these precision measurements, it is essential that dynamical model calculations that include critical-point physics be developed.” The STAR collaboration is now mapping lower energies and higher baryon densities using a fixed target (FXT) mode, wherein a 1 mm gold foil sits 2 cm below the beam axis.

“The FXT data are a valuable opportunity to explore QCD matter at high baryon density,” says Mohanty. “Data taking will conclude later this year when RHIC transitions to the Electron–Ion Collider. The Compressed Baryonic Matter experiment at FAIR in Germany will then pick up the study of the QCD critical point towards the end of the 2020s.”

Double plasma progress at DESY

What if, instead of being accelerated by tonnes of metal, electrons were to “surf” on a wave of charge displacements in a plasma? This question, posed in 1979 by Toshiki Tajima and John Dawson, planted the seed for plasma wakefield acceleration (PWA). Scientists at DESY now report some of the first signs that PWA is ready to compete with traditional accelerators at low energies. The results tackle two of the biggest challenges in PWA: beam quality and bunch rate.

“We have made great progress in the field of plasma acceleration,” says Andreas Maier, DESY’s lead scientist for plasma acceleration, “but this is an endeavour that has only just started, and we still have a bit of homework to do to get the system integrated with the injector complexes of a synchrotron, which is our final goal.”

Riding a wave

PWA has the potential to radically miniaturise particle accelerators. Plasma waves are generated when a laser pulse or particle beam ploughs through a millimetres-long hydrogen-filled capillary, displacing electrons and creating a wake of alternating positive and negative charge regions behind it. The process is akin to flotsam and jetsam being accelerated in the wake of a speedboat, and the plasma “wakefields” can be thousands of times stronger than the electric fields in conventional accelerators, allowing particles to gain hundreds of MeV in just a few millimetres. But beam quality and intensity are significant challenges in such narrow confines.

In a first study, a team from the LUX experiment at DESY and the University of Hamburg demonstrated, for the first time, a two-stage correction system to dramatically reduce the energy spread of accelerated electron beams. The first stage stretches the longitudinal extent of the beam from a few femtoseconds to several picoseconds using a series of four zigzagging bending magnets called a magnetic chicane. Next, a radio-frequency cavity reduces the energy variation to below 0.1%, bringing the beam quality in line with conventional accelerators.
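A minimal toy model of the two-stage idea is sketched below (illustrative numbers only; none of the parameters are taken from the LUX publication). A dispersive chicane converts each particle’s energy offset into a longitudinal position offset, so the bunch stretches and acquires a position–energy correlation, and an RF cavity operated near its zero crossing then applies an energy kick proportional to position, removing the correlated spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Initial bunch: very short (few fs) but with a large uncorrelated
# relative energy spread (percent level), typical of a plasma stage.
z = rng.normal(0.0, 1e-6, n)          # longitudinal position [m] (~3 fs rms)
delta = rng.normal(0.0, 1e-2, n)      # relative energy deviation dE/E

# Stage 1: magnetic chicane with longitudinal dispersion R56.
# Particles with different energies take different path lengths,
# so the bunch stretches and z becomes correlated with delta.
R56 = 0.05                            # [m], illustrative value
z = z + R56 * delta                   # rms(z) is now ~0.5 mm, i.e. picoseconds

# Stage 2: RF cavity near zero crossing gives a kick linear in z,
# chosen to cancel the chirp introduced by the chicane.
k_rf = -1.0 / R56                     # ideal dechirping strength [1/m]
delta = delta + k_rf * z

print(f"final relative energy spread: {delta.std():.2e}")  # ~2e-5, well below 0.1%
```

In this toy picture the residual energy spread is set by the incoming bunch length divided by R56, which is the sense in which bunch length (and hence beam current) is traded for energy stability.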

“We basically trade beam current for energy stability,” explains Paul Winkler, lead author of a recent publication on active energy compression. “But for the intended application of a synchrotron injector, we would need to stretch the electron bunches anyway. As a result, we achieved performance levels so far only associated with conventional accelerators.”

But producing high-quality beams is only half the battle. To make laser-driven PWA a practical proposition, bunches must be accelerated not just once a second, like at LUX, but hundreds or thousands of times per second. This has now been demonstrated by KALDERA, DESY’s new high-power laser system (see “Beam quality and bunch rate” image).

“Already, on the first try, we were able to accelerate 100 electron bunches per second,” says principal investigator Manuel Kirchen, who emphasises the complementarity of the two advances. The team now plans to scale up the energy and deploy “active stabilisation” to improve beam quality. “The next major goal is to demonstrate that we can continuously run the plasma accelerators with high stability,” he says.

With the exception of CERN’s AWAKE experiment (CERN Courier May/June 2024 p25), almost all plasma-wakefield accelerators are designed with medical or industrial applications in mind. Medical applications are particularly promising as they require lower beam energies and place less demanding constraints on beam quality. Advances such as those reported by LUX and KALDERA raise confidence in this new technology and could eventually open the door to cheaper and more portable X-ray equipment, allowing medical imaging and cancer therapy to take place in university labs and hospitals.

Plotting the discovery of Higgs pairs on Elba

Precise measurements of the Higgs self-coupling and its effects on the Higgs potential will play a key role in testing the validity of the Standard Model (SM). Some 150 physicists discussed the required experimental and theoretical manoeuvres at the Higgs Pairs 2025 workshop, held on the serene island of Elba from 11 to 17 May.

The conference mixed updates on theoretical developments in Higgs-boson pair production, searches for new physics in the scalar sector, and the most recent results from Run 2 and Run 3 of the LHC. Among the highlights was the first Run 3 analysis released by ATLAS on the search for di-Higgs production in the bbγγ final state – a particularly sensitive channel for probing the Higgs self-coupling. This result builds on earlier Run 2 analyses and demonstrates significantly improved sensitivity, now comparable to the full Run 2 combination of all channels. These gains were driven by the use of new b-tagging algorithms, improved mass resolution through updated analysis techniques, and the availability of nearly twice the dataset.

Complementing this, CMS presented the first search for ttHH production – a rare process that would provide additional sensitivity to the Higgs self-coupling and Higgs–top interactions. Alongside this, ATLAS presented first experimental searches for triple Higgs boson production (HHH), one of the rarest processes predicted by the SM. Work on more traditional final states such as bbττ and bbbb is ongoing at both experiments, and continues to benefit from improved reconstruction techniques and larger datasets. 

Beyond current data, the workshop featured discussions of the latest combined projection study by ATLAS and CMS, prepared as part of the input to the upcoming European Strategy Update. It extrapolates results of the Run 2 analyses to expected conditions of the High-Luminosity LHC (HL-LHC), estimating future sensitivities to the Higgs self-coupling and di-Higgs cross-section in scenarios with vastly higher luminosity and upgraded detectors. Under these assumptions, the combined sensitivity of ATLAS and CMS to di-Higgs production is projected to reach a significance of 7.6σ, firmly establishing the process. 

These projections provide crucial input for analysis strategy planning and detector design for the next phase of operations at the HL-LHC. Beyond the HL-LHC, efforts are already underway to design experiments at future colliders that will enhance sensitivity to the production of Higgs pairs, and offer new insights into electroweak symmetry breaking.

New frontiers in science in the era of AI

New Frontiers in Science in the Era of AI

At a time when artificial intelligence is more buzzword than substance in many corners of public discourse, New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. This book is not another breathless ode to ChatGPT or deep learning, nor a dry compilation of technical papers. Instead, it’s a broad and ambitious survey, spanning particle physics, evolutionary biology, neuroscience and AI ethics, that seeks to make sense of how emerging technologies are reshaping not only the sciences but knowledge and society more broadly.

The book’s chapters, written by established researchers from diverse fields, aim to avoid jargon while attracting non-specialists, without compromising depth. The book offers an insight into how physics remains foundational across scientific domains, and considers the social, ethical and philosophical implications of AI-driven science.

The first section, “New Physics World”, will be the most familiar terrain for physicists. Ugo Moschella’s essay, “What Are Things Made of? The History of Particles from Thales to Higgs”, opens with a sweeping yet grounded narrative of how metaphysical questions have persisted alongside empirical discoveries. He draws a bold parallel between the ancient idea of mass emerging from a cosmic vortex and the Higgs mechanism, a poetic analogy that holds surprising resonance. Thales, who lived roughly from 624 to 545 BCE, proposed that water is the fundamental substance out of which all others are formed. Following his revelation, Pythagoras and Empedocles added three more items to complete the list of the elements: earth, air and fire. Aristotle added a fifth element: the “aether”. The physical foundation of the standard cosmological model of the ancient world is then rooted in the Aristotelian conceptions of movement and gravity, argues Moschella. His essay lays the groundwork for future chapters that explore entanglement, computation and the transition from thought experiments to quantum technology and AI.

A broad and ambitious survey spanning particle physics, evolutionary biology, neuroscience and AI ethics

The second and third sections venture into evolutionary genetics, epigenetics (the study of heritable changes in gene expression) and neuroscience – areas more peripheral to physics, but timely nonetheless. Contributions by Eva Jablonka, evolutionary theorist and geneticist from Tel Aviv University, and Telmo Pievani, a biologist from the University of Padua, explore the biological implications of gene editing, environmental inheritance and self-directed evolution, as well as the ever-blurring boundaries between what is considered “natural” versus “artificial”. The authors propose that the human ability to edit genes is itself an evolutionary agent – a novel and unsettling idea, as this would be an evolution driven by a will and not by chance. Neuroscientist Jason D Runyan reflects compellingly on free will in the age of AI, blending empirical work with philosophical questions. These chapters enrich the central inquiry of what it means to be a “knowing agent”: someone who acts on nature according to their own will, influenced by biological, cognitive and social factors. For physicists, the lesson may be less about adopting specific methods and more about recognising how their own field’s assumptions – about determinism, emergence or complexity – are echoed and challenged in the life sciences.

Perspectives on AI

The fourth section, “Artificial Intelligence Perspectives”, most directly addresses the book’s central theme. Quality, scientific depth and rigour are not evenly distributed across these chapters, but all are stimulating nonetheless. Topics range from the role of open-source AI in student-led projects at CERN’s IdeaSquare to real-time astrophysical discovery. Michael Coughlin and colleagues’ chapter on accelerated AI in astrophysics stands out for its technical clarity and relevance, a solid entry point for physicists curious about AI beyond popular discourse. Absent is an in-depth treatment of current AI applications in high-energy physics, such as anomaly detection in LHC triggers or generative models for simulation. Given the book’s CERN affiliations, this omission is surprising and leaves out some of the most active intersections of AI and high-energy physics (HEP) research.

Even as AI expands our modelling capacity, the epistemic limits of human cognition may remain permanent

The final sections address cosmological mysteries and the epistemological limits of human cognition. David H Wolpert’s epilogue, “What Can We Know About That Which We Cannot Even Imagine?”, serves as a reminder that even as AI expands our modelling capacity, the epistemic limits of human cognition – including conceptual blind spots and unprovable truths – may remain permanent. This tension is not a contradiction but a sobering reflection on the intrinsic boundaries of scientific – and more widely human – knowledge.

This eclectic volume is best read as a reflective companion to one’s own work. For advanced students, postdocs and researchers open to thinking beyond disciplinary boundaries, the book is an enriching, if at times uneven, read.

To a professional scientist, the book occasionally romanticises interdisciplinary exchange between specialised fields without fully engaging with the real methodological difficulties of translating complex concepts to other sciences. Topics including the limitations of current large-language models, the reproducibility crisis in AI research, and the ethical risks of data-driven surveillance would have benefited from deeper treatment. Ethical questions in HEP may be less prominent in the public eye, but still exist. To mention a few: the environmental impact of large-scale facilities, the spending of substantial public money on such mega-science projects, potential dual-use concerns over the technologies developed, the governance of massive international collaborations, and data transparency. These deserve more attention, and the book could have explored them more thoroughly.

A timely snapshot

Still, the book doesn’t pretend to be exhaustive. Its strength lies in curating diverse voices and offering a timely snapshot of science, as well as shedding light on ethical and philosophical questions associated with science that are less frequently discussed.

There is a vast knowledge gap in today’s society. Researchers often become so absorbed in their specific domains that they lose sight of their work’s broader philosophical and societal context and the need to explain it to the public. Meanwhile, public misunderstanding of science, and the resulting confusion between fact, theory and opinion, is growing. This gulf provides fertile ground for political manipulation and ideological extremism. New Frontiers in Science in the Era of AI has the immense merit of trying to bridge that gap. The editors and contributors deserve credit for producing a work of both scientific and societal relevance.

Quantum culture

Kanta Dihal

How has quantum mechanics influenced culture in the last 100 years?

Quantum physics offers an opportunity to make the impossible seem plausible. For instance, if your superhero dies dramatically but the actor is still on the payroll, you have a few options available. You could pretend the hero miraculously survived the calamity of the previous instalment. You could also pretend the events of the previous instalment never happened. And then there is Star Wars: “Somehow, Palpatine returned.”

These days, however, quantum physics tends to come to the rescue. Because quantum physics offers the wonderful option to maintain that all previous events really happened, and yet your hero is still alive… in a parallel universe. Much is down to the remarkable cultural impact of the many-worlds interpretation of quantum physics, which has been steadily growing in fame (or notoriety) since Hugh Everett introduced it in 1957.

Is quantum physics unique in helping fiction authors make the impossible seem possible?

Not really! Before the “quantum” handwave, there was “nuclear”: think of Dr Manhattan from Watchmen, or Godzilla, as expressions of the utopian and dystopian expectations of that newly discovered branch of science. Before nuclear, there was electricity, with Frankenstein’s monster as perhaps its most important product. We can go all the way back to the invention of hydraulics in the ancient world, which led to an explosion of tales of liquid-operated automata – early forms of artificial intelligence – such as the bronze soldier Talos in ancient Greece. We have always used our latest discoveries to dream of a future in which our ancient tales of wonder could come true.

Is the many-worlds interpretation the most common theory used in science fiction inspired by quantum mechanics?

Many-worlds has become Marvel’s favourite trope. It allows them to expand on an increasingly entangled web of storylines that borrow from a range of remakes and reboots, as well as introducing gender and racial diversity into old stories. Marvel may have mainstreamed this interpretation, but the viewers of the average blockbuster may not realise exactly how niche it is, and how many alternatives there are. With many interpretations vying for acceptance, every once in a while a brave social scientist ventures to survey quantum-physicists’ preferences. These studies tend to confirm the dominance of the Copenhagen interpretation, with its collapse of the wavefunction rather than the branching universes characteristic of the Everett interpretation. In a 2016 study, for instance, only 6% of quantum physicists claimed that Everett was their favourite interpretation. In 2018 I looked through a stack of popular quantum-physics books published between 1980 and 2017, and found that more than half of these books endorse the many-worlds interpretation. A non-physicist might be forgiven for thinking that quantum physicists are split between two equal-sized enemy camps of Copenhagenists and Everettians.

What makes the many-worlds interpretation so compelling?

Answering this brings us to a fundamental question that fiction has enjoyed exploring since humans first told each other stories: what if? “What if the Nazis won the Second World War?” is pretty much an entire genre by itself these days. Before that, there were alternate histories of the American Civil War and many other key historical events. This means that the many-worlds interpretation fits smoothly into an existing narrative genre. It suggests that these alternate histories may be real, that they are potentially accessible to us and simply happening in a different dimension. Even the specific idea of branching alternative universes existed in fiction before Hugh Everett applied it to quantum mechanics. One famous example is the 1941 short story The Garden of Forking Paths by the Argentinian writer Jorge Luis Borges, in which a writer tries to create a novel in which everything that could happen, happens. His story anticipated the many-worlds interpretation so closely that Bryce DeWitt used an extract from it as the epigraph to his 1973 edited collection The Many-Worlds Interpretation of Quantum Mechanics. But the most uncanny example is, perhaps, Andre Norton’s science-fiction novel The Crossroads of Time, from 1956 – published when Everett was writing his thesis. In her novel, a group of historians invents a “possibility worlds” theory of history. The protagonist, Blake Walker, discovers that this theory is true when he meets a group of men from a parallel universe who are on the hunt for a universe-travelling criminal. Travelling with them, Blake ends up in a world where Hitler won the Battle of Britain. Of course, in fiction, only worlds in which a significant change has taken place are of any real interest to the reader or viewer. (Blake also visits a world inhabited by metal dinosaurs.) The truly uncountable number of slightly different universes Everett’s theory implies is extremely difficult to get our heads around. Nonetheless, our storytelling mindsets have long primed us for a fascination with the many-worlds interpretation.

Have writers put other interpretations to good use?

For someone who really wants to put their physics degree to use in their spare time, I’d recommend the works of Greg Egan: although his novel Quarantine uses the controversial conscious collapse interpretation, he always ensures that the maths checks out. Egan’s attitude towards the scientific content of his novels is best summed up by a quote on his blog: “A few reviewers complained that they had trouble keeping straight [the science of his novel Incandescence]. This leaves me wondering if they’ve really never encountered a book that benefits from being read with a pad of paper and a pen beside it, or whether they’re just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this.”

What other quantum concepts are widely used and abused?

We have Albert Einstein to thank for the extremely evocative description of quantum entanglement as “spooky action at a distance”. As with most scientific phenomena, a catchy nickname such as this one is extremely effective for getting a concept to stick in the popular imagination. While Einstein himself did not initially believe quantum entanglement could be a real phenomenon, as it would violate local causality, we now have both evidence and applications of entanglement in the real world, most notably in quantum cryptography. But in science fiction, the most common application of quantum entanglement is in faster-than-light communication. In her 1966 novel Rocannon’s World, Ursula K Le Guin describes a device called the “ansible”, which interstellar travellers use to instantaneously communicate with each other across vast distances. Her term was so influential that it now regularly appears in science fiction as a widely accepted name for a faster-than-light communications device, the same way we have adopted the word “robot” from the 1920 play R.U.R. by Karel Čapek.

Fiction may get the science wrong, but that is often because the story it tries to tell existed long before the science

How were cultural interpretations of entanglement influenced by the development of quantum theory?

It wasn’t until the 1970s that no-signalling theorems conclusively proved that entanglement correlations, while instantaneous, cannot be controlled or used to send messages. Explaining why is a lot more complex than communicating the notion that observing a particle here has an effect on a particle there. Once again, quantum physics seemingly provides just enough scientific justification to resolve an issue that has plagued science fiction ever since the speed of light was discovered: how can we travel through space, exploring galaxies, settling on distant planets, if we cannot communicate with each other? This same line of thought has sparked another entanglement-related invention in fiction: what if we can send not just messages but also people, or even entire spaceships, across vast distances faster than light using entanglement? Conveniently, quantum physicists had come up with another extremely evocative term that fit this idea perfectly: quantum teleportation. Real quantum teleportation only transfers information. But the idea of teleportation is so deeply embedded in our storytelling past that we can’t help extrapolating it. From stories of gods that could appear anywhere at will to tales of portals that lead to strange new worlds, we have always felt limited by the speeds of travel we have managed to achieve – and once again, the speed of light seems to be a hard limit that quantum teleportation might be able to get us around. In his 1999 novel Timeline, Michael Crichton sends a group of researchers back in time using quantum teleportation, and the videogame Half-Life 2 contains teleportation devices that similarly seem to work through quantum entanglement.

What quantum concepts have unexplored cultural potential?

Clearly, interpretations other than many worlds have a PR problem, so is anyone willing to write a chart topper based on the relational interpretation or QBism? More generally, I think that any question we do not yet have an answer to, or any theory that remains untestable, is a potential source for an excellent story. Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” Ironically, it is precisely because of this that quantum physics has become such a widespread building block of science fiction: it is just hard enough to understand, just unresolved and unexplained enough to keep our hopes up that one day we might discover that interstellar communication or inter-universe travel might be possible. Few people would choose the realities of theorising over these ancient dreams. That said, the theorising may never have happened without the dreams. How many of your colleagues are intimately acquainted with the very science fiction they criticise for having unrealistic physics? We are creatures of habit and convenience held together by stories, physicists no less than everyone else. This is why we come up with catchy names for theories, and stories about dead-and-alive cats. Fiction may often get the science wrong, but that is often because the story it tries to tell existed long before the science.

A scientist in sales

Massimiliano Pindo

The boundary between industry and academia can feel like a chasm. Opportunity abounds for those willing to bridge the gap.

Massimiliano Pindo began his career working on silicon pixel detectors at the DELPHI experiment at the Large Electron–Positron Collider. While at CERN, Pindo developed analytical and technical skills that would later become crucial in his career. But despite his passion for research, doubts clouded his hopes for the future.

“I wanted to stay in academia,” he recalls. “But at that time, it was getting really difficult to get a permanent job.” Pindo moved from his childhood home in Milan to Geneva, before eventually moving back in with his parents while applying for his next research grant. “The golden days of academia where people got a fixed position immediately after a postdoc or PhD were over.”

The path forward seemed increasingly unstable, defined by short-term grants, constant travel and an inability to plan long-term. There was a steady stream of new grant applications, but permanent contracts were few and far between. With competition increasing, job stability seemed further and further out of reach. “You could make a decent living,” Pindo says, “but the real problem was you could not plan your life.”

Translatable skills

Faced with the unpredictability of academic work, Pindo transitioned into industry – a leap that eventually led him to his current role as marketing and sales director at Renishaw, France, a global engineering and scientific technology company. Pindo was confident that his technical expertise would provide a strong foundation for a job beyond academia, and indeed he found that “hard” skills such as analytical thinking, problem-solving and a deep understanding of technology, which he had honed at CERN alongside soft skills such as teamwork, languages and communication, translated well to his work in industry.

“When you’re a physicist, especially a particle physicist, you’re used to breaking down complex problems, selecting what is really meaningful amongst all the noise, and addressing these issues directly,” Pindo says. His experience in academia gave him the confidence that industry challenges would pale in comparison. “I was telling myself that in the academic world, you are dealing with things that, at least on paper, are more complex and difficult than what you find in industry.”

Initially, these technical skills helped Pindo become a device engineer for a hardware company, before he made the switch to sales. The gradual transition from academia to something more hands-on allowed him to really understand the company’s product on a technical level, which made him a more desirable candidate when he later moved into marketing.

“When you are in B2B [business-to-business] mode and selling technical products, it’s always good to have somebody who has technical experience in the industry,” explains Pindo. “You have to have a technical understanding of what you’re selling, to better understand the problems customers are trying to solve.”

However, this experience also allowed him to recognise gaps in his knowledge. As he began gaining more responsibility in his new, more business-focused role, Pindo decided to go back to university and get an MBA. During the programme, he was able to familiarise himself with the worlds of human resources, business strategy and management – skills that aren’t typically the focus in a physics lab.

Pindo’s journey through industry hasn’t been a one-way ticket out of academia. Today, he still maintains a foothold in the academic world, teaching strategy as an affiliated professor at the Sorbonne. “In the end you never leave the places you love,” he says. “I got out through the door – now I’m getting back in through the window!”

Transitioning between industry and academia was not entirely seamless. Misconceptions loomed on both sides, and it took Pindo a while to find a balance between the two.

“There is a stereotype that scientists are people who can’t adapt to industrial environments – that they are too abstract, too theoretical,” Pindo explains. “People think scientists are always in the clouds, disconnected from reality. But that’s not true. The science we make is not the science of cartoons. Scientists can be people who plan and execute practical solutions.”

The misunderstanding, he says, goes both ways. “When I talk to alumni still in academia, many think that industry is a nightmare – boring, routine, uninteresting. But that’s also false,” Pindo says. “There’s this wall of suspicion. Academics look at industry and think, ‘What do they want? What’s the real goal? Are they just trying to make more money?’ There is no trust.”

Tight labour markets

For Pindo, this divide is frustrating and entirely unnecessary. Now with years of experience navigating both worlds, he envisions a more fluid connection between academia and industry – one that leverages the strengths of both. “Industry is currently facing tight labour markets for highly skilled talent, and academia doesn’t have access to the money and practical opportunities that industry can provide,” says Pindo. “Both sides need to work together.”

To bridge this gap, Pindo advocates a more open dialogue and a revolving door between the two fields – one that allows both academics and industry professionals to move fluidly back and forth, carrying their expertise across boundaries. Both sides have much to gain from shared knowledge and collaboration. One way to achieve this, he suggests, is through active participation in alumni networks and university events, which can nurture lasting relationships and mutual understanding. If more professionals embraced this mindset, it could help alleviate the very instability that once pushed him out of academia, creating a landscape where the boundaries between science and industry blur to the benefit of both.

“Everything depends on active listening. You always have to learn from the person in front of you, so give them the chance to speak. We have a better world to build, and that comes only from open dialogue and communication.”
