Fast radio bursts (FRBs) are brief but powerful flashes of radio waves believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered in 2007 by Duncan Lorimer and his student David Narkevic while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, both within and beyond the Milky Way. These bursts typically last only a few milliseconds yet can release enormous amounts of energy – an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years – but the exact mechanism behind them remains a mystery.
Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).
The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary semi-cylindrical paraboloidal reflectors, each 20 m wide and 100 m long with a focal length of 5 m (see “Right on CHIME” figure). The 256 dual-polarisation feeds suspended along each reflector give the telescope a field of view of more than 200 square degrees, and it receives radio waves in the frequency range of 400 to 800 MHz. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for detecting FRBs.
Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to less than about 30,000 km. This emission size contradicts the expectations of far-away models and is instead consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.
Additionally, the authors of FRB 20221022A’s detection paper note a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. They interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing down the possible sources and mechanisms that power FRBs, and highlight the potential of the scintillation technique for exploring FRB emission mechanisms and environments.
The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.
Collisions between lead ions at the LHC generate the hottest and densest system ever created in the laboratory. Under these extreme conditions, quarks and gluons are no longer confined inside hadrons but instead form a quark–gluon plasma (QGP). Being heavier than the more abundantly produced light quarks, charm quarks play a special role in probing the plasma: they are created in the collision before the plasma is formed and interact with it as they traverse the collision zone. Charm jets – clusters of particles originating from charm quarks – have now been investigated for the first time by the ALICE collaboration in Pb–Pb collisions at the LHC, using D0 mesons (which contain a charm quark) as tags.
The primary interest lies in measuring the extent of energy loss experienced by different types of particles as they traverse the plasma, referred to as “in-medium energy loss”. This energy loss depends on both the type and the mass of the particle. Due to their larger mass, charm quarks at low transverse momentum do not reach the speed of light and lose substantially less energy than light quarks through both collisional and radiative processes, as gluon radiation by massive quarks is suppressed: the so-called “dead-cone effect”. Additionally, gluons, which carry a larger colour charge than quarks, experience greater energy loss in the QGP, as quantified by the Casimir factors CA = 3 for gluons and CF = 4/3 for quarks. This mass and colour-charge dependence makes the charm quark an ideal probe of QGP properties, and ALICE is well suited to measuring it.
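As a rough numerical illustration (not part of the ALICE analysis), the two effects above can be put into numbers: the Casimir factors imply that gluons radiate roughly CA/CF = 9/4 times more energy than quarks, and gluon radiation from a quark of mass m and energy E is suppressed inside a "dead cone" of opening angle of order m/E. The charm mass and quark energy below are illustrative choices, not values from the measurement.

```python
# Toy numbers for colour-charge and mass effects in radiative energy loss.
C_A = 3.0        # Casimir factor for gluons
C_F = 4.0 / 3.0  # Casimir factor for quarks

# Radiative energy loss scales with the colour charge, so gluons
# radiate about C_A / C_F = 2.25 times more energy than quarks.
colour_ratio = C_A / C_F

# Dead-cone angle: gluon radiation is suppressed at angles below ~ m/E.
m_charm = 1.27   # charm-quark mass in GeV (illustrative PDG-like value)
E_charm = 30.0   # assumed charm-quark energy in GeV
theta_dead_cone = m_charm / E_charm  # in radians

print(f"gluon/quark energy-loss ratio ~ {colour_ratio:.2f}")
print(f"dead-cone angle at E = {E_charm:.0f} GeV: {theta_dead_cone:.3f} rad")
```

At higher quark energies the dead-cone angle shrinks, which is why the mass effect is most pronounced at low transverse momentum.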
ALICE measured the production yield of charm jets tagged with fully reconstructed D0 mesons (D0 → K–π+) in central Pb–Pb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. The results are reported in terms of the nuclear modification factor (RAA), the ratio of the particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled by the number of binary nucleon–nucleon collisions. A nuclear modification factor of unity would indicate the absence of final-state effects.
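In schematic form, the ratio defined above can be sketched as follows. This is a minimal illustration of the definition, not the collaboration's analysis code, and the example numbers (the yields and the number of binary collisions) are hypothetical.

```python
def nuclear_modification_factor(yield_pbpb, yield_pp, n_coll):
    """R_AA: per-event particle yield in Pb-Pb collisions divided by
    the pp yield scaled by the average number of binary
    nucleon-nucleon collisions (n_coll)."""
    return yield_pbpb / (n_coll * yield_pp)

# Illustrative numbers only: if a central Pb-Pb collision involved
# 1600 binary collisions but produced only half the scaled pp yield,
# R_AA would come out at 0.5, signalling suppression.
raa = nuclear_modification_factor(yield_pbpb=800.0, yield_pp=1.0, n_coll=1600.0)
print(raa)  # 0.5
```

An RAA of exactly 1 would mean the Pb–Pb yield is just the pp yield scaled up by the number of binary collisions, i.e. no final-state effects.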
The results, shown in figure 1, reveal a clear suppression (RAA < 1) for both charm jets and inclusive jets (which mainly originate from light quarks and gluons) due to energy loss. Importantly, the charm jets exhibit less suppression than the inclusive jets within the transverse momentum range of 20 to 50 GeV, consistent with the expected mass and colour-charge dependence.
The measured results are compared with theoretical model calculations that include mass effects in the in-medium energy loss. Among the different models, LIDO incorporates both the dead-cone effect and the colour-charge effects, which are essential for describing the energy-loss mechanisms. Consequently, it shows reasonable agreement with experimental data, reproducing the observed hierarchy between charm jets and inclusive jets.
The present finding provides a hint of the flavour-dependent energy loss in the QGP, suggesting that charm jets lose less energy than inclusive jets. This highlights the quark-mass and colour-charge dependence of the in-medium energy-loss mechanisms.
The Chamonix Workshop 2025, held from 27 to 30 January, brought together CERN’s accelerator and experimental communities to reflect on achievements, address challenges and chart a course for the future. As the discussions made clear, CERN is at a pivotal moment. The past decade has seen transformative developments across the accelerator complex, while the present holds significant potential and opportunity.
The workshop opened with a review of accelerator operations, supported by input from December’s Joint Accelerator Performance Workshop. Maintaining current performance levels requires an extraordinary effort across all the facilities. Performance data from the ongoing Run 3 shows steady improvements in availability and beam delivery. These results are driven by dedicated efforts from system experts, operations teams and accelerator physicists, all working to ensure excellent performance and high availability across the complex.
Electron clouds parting
Attention is now turning to Run 4 and the High-Luminosity LHC (HL-LHC) era. Several challenges have been identified, including the demand for high-intensity beams, radiofrequency (RF) power limitations and electron-cloud effects. In the latter case, synchrotron-radiation photons strike the beam-pipe walls, releasing electrons which are then accelerated by proton bunches, triggering a cascading electron-cloud buildup. Measures to address these issues will be implemented during Long Shutdown 3 (LS3), ensuring CERN’s accelerators continue to meet the demands of its diverse physics community.
LS3 will be a crucial period for CERN. In addition to the deployment of the HL-LHC and major upgrades to the ATLAS and CMS experiments, it will see a widespread programme of consolidation, maintenance and improvements across the accelerator complex to secure future exploitation over the coming decades.
Progress on the HL-LHC upgrade was reviewed in detail, with a focus on key systems – magnets, cryogenics and beam instrumentation – and on the construction of critical components such as crab cavities. The next two years will be decisive, with significant system testing scheduled to ensure that these technologies meet ambitious performance targets.
Planning for LS3 is already well advanced. Coordination between all stakeholders has been key to aligning complex interdependencies, and the experienced teams are making strong progress in shaping a resource-loaded plan. The scale of LS3 will require meticulous coordination, but it also represents a unique opportunity to build a more robust and adaptable accelerator complex for the future. Looking beyond LS3, CERN’s unique accelerator complex is well positioned to support an increasingly diverse physics programme. This diversity is one of CERN’s greatest strengths, offering complementary opportunities across a wide range of fields.
The high demand for beam time at ISOLDE, n_TOF, AD-ELENA and the North and East Areas underscores the need for a well-balanced approach that supports a broad range of physics. The discussions highlighted the importance of balancing these demands while ensuring that the full potential of the accelerator complex is realised.
Future opportunities such as those highlighted by the Physics Beyond Colliders study will be shaped by discussions being held as part of the update of the European Strategy for Particle Physics (ESPP). Defining the next generation of physics programmes entails striking a careful balance between continuity and innovation, and the accelerator community will play a central role in setting the priorities.
A forward-looking session at the workshop focused on the Future Circular Collider (FCC) Feasibility Study and the next steps. The physics case was presented alongside updates on territorial implementation and civil-engineering investigations and plans. How the FCC-ee injector complex would fit into the broader strategic picture was examined in detail, along with the goals and deliverables of the pre-technical design report (pre-TDR) phase that is planned to follow the Feasibility Study’s conclusion.
While the FCC remains a central focus, other future projects were also discussed in the context of the ESPP update. These include mature linear-collider proposals, the potential of a muon collider and plasma wakefield acceleration. Development of key technologies, such as high-field magnets and superconducting RF systems, will underpin the realisation of future accelerator-based facilities.
The next steps – preparing for Run 4, implementing the LS3 upgrade programmes and laying the groundwork for future projects – are ambitious but essential. CERN’s future will be shaped by how well we seize these opportunities.
The shared expertise and dedication of CERN’s personnel, combined with a clear strategic vision, provide a solid foundation for success. The path ahead is challenging, but with careful planning, collaboration and innovation, CERN’s accelerator complex will remain at the heart of discovery for decades to come.
The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.
Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.
Evolving architecture
Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen an evolution in LHC triggers away from hardware-based decisions using coarse-grain information.
Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.
To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.
ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At the HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µs in ATLAS and 12.5 µs in CMS.
The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.
Substantial upgrades
The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead–lead (Pb–Pb) collisions and several MHz for proton–proton (pp) collisions. In Pb–Pb collisions the full data stream is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.
Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from the detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger – heavy flavour being one of ALICE’s main physics interests – with boosted decision trees used to identify displaced vertices from heavy-quark decays. The full chain, from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data, takes about three weeks and was fully employed last year.
The third edition of TDHEP suggests that innovation in this field is only set to accelerate
The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying into a certain decay channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels when the luminosity was instantaneously increased. To solve this issue for Run 3 a high-level software trigger was developed that can handle 30 MHz event readout with 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed full-event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
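The rates quoted above imply large stepwise reduction factors. As a back-of-the-envelope check, using only the numbers in the text (30 MHz full readout, 1 MHz after HLT1, 20 kHz after HLT2):

```python
# Rate-reduction factors of the Run 3 LHCb trigger chain,
# computed from the rates quoted in the text.
readout_rate = 30e6  # Hz, full event readout into the trigger
hlt1_rate = 1e6      # Hz, after GPU-based partial reconstruction (HLT1)
hlt2_rate = 20e3     # Hz, after full-event reconstruction (HLT2)

hlt1_reduction = readout_rate / hlt1_rate    # factor of 30
hlt2_reduction = hlt1_rate / hlt2_rate       # factor of 50
total_reduction = readout_rate / hlt2_rate   # factor of 1500 overall

print(hlt1_reduction, hlt2_reduction, total_reduction)  # 30.0 50.0 1500.0
```

In other words, only about one event in 1500 read out of the detector survives to be written to disk.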
Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.
Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km2 of fluorescence and Cherenkov detectors.
These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.
Space Oddities takes readers on a journey through the mysteries of modern physics, from the smallest subatomic particles to the vast expanse of stars and space. Harry Cliff – an experimental particle physicist at Cambridge University – unravels some of the most perplexing anomalies challenging the Standard Model (SM), with behind-the-scenes scoops from eight different experiments. The most intriguing stories concern lepton universality and the magnetic moment of the muon.
Theory predicts the muon’s magnetic moment with extraordinary precision, and experiment has verified it to an astonishing 11 significant figures. Over the last few years, however, measurements have suggested a slight discrepancy – the devil lying in the 12th digit. Measurements at Fermilab in 2021 disagreed with theoretical predictions at the level of 4σ: not enough to cause a “scientific earthquake”, as Cliff puts it, but enough to suggest that new physics might be at play.
Just as everything seemed to be edging towards a new discovery, Cliff introduces the “villains” of the piece. Groundbreaking lattice–QCD predictions from the Budapest–Marseille–Wuppertal collaboration were published on the same day as a new measurement from Fermilab. If correct, these would destroy the anomaly by contradicting the data-driven theory consensus. (“Yeah, bullshit,” said one experimentalist to Cliff when it was put to him that the timing wasn’t intended to steal the experiment’s thunder.) The situation is still unresolved, though many new theoretical predictions have been made and a new theoretical consensus is imminent (see “Do muons wobble faster than expected”). Regardless of the outcome, Cliff emphasises that this research will pave the way for future discoveries, and none of it should be taken for granted – even if the anomaly disappears.
“One of the challenging aspects of being part of a large international project is that your colleagues are both collaborators and competitors,” Cliff notes. “When it comes to analysing the data with the ultimate goal of making discoveries, each research group will fight to claim ownership of the most interesting topics.”
This spirit of spurring collaborator-competitors on to greater heights of precision is echoed throughout Cliff’s own experience of working in the LHCb collaboration, where he studies “lepton universality”. All three lepton flavours – electron, muon and tau – should interact almost identically, except for small differences due to their masses. However, over the past decade several experimental results have suggested that this theory might not hold in B-meson decays, where muons seemed to be appearing less frequently than electrons. If confirmed, this would point to physics beyond the SM.
Having been involved himself in a complementary but less sensitive analysis of B-meson decay channels involving strange quarks, Cliff recalls the emotional rollercoaster experienced by some of the key protagonists: the “RK” team from Imperial College London. After a year of rigorous testing, RK unblinded a sanity check of their new computational toolkit: a reanalysis of the prior measurement that yielded a perfectly consistent R value of 0.72 with an uncertainty of about 0.08, upholding a 3σ discrepancy. Now was the time to put the data collected since then through the same pasta machine: if it agreed, the tension between the SM and their overall measurement would cross the 5σ threshold. After an anxious wait while the numbers were crunched, the team received the results for the new data: 0.93 with an uncertainty of 0.09.
“Dreams of a major discovery evaporated in an instant,” recalls Cliff. “Anyone who saw the RK team in the CERN cafeteria that day could read the result from their faces.” The lead on the RK team, Mitesh Patel, told Cliff that they felt “emotionally train wrecked”.
One day we might make the right mistake and escape the claustrophobic clutches of the SM
With both results combined, the ratio averaged out to 0.85 ± 0.06, just shy of 3σ away from unity. While the experimentalists were deflated, Cliff notes that for theorists this result may have been more exciting than the initial anomaly, as it was easier to explain using new particles or forces. “It was as if we were spying the footprints of a great, unknown beast as it crashed about in a dark jungle,” writes Cliff.
Space Oddities is a great defence of irrepressible experimentation. Even “failed” anomalies are far from useless: if they evaporate, the effort required to investigate them pushes the boundaries of experimental precision, enhances collaboration between scientists across the world, and refines theoretical frameworks. Through retellings and interviews, Cliff helps the public experience the excitement of near breakthroughs, the heartbreak of failed experiments, and the dynamic interactions between theoretical and experimental physicists. Thwarting myths that physicists are cold, calculating figures working in isolation, Cliff sheds light on a community driven by curiosity, ambition and (healthy) competition. His book is a story of hope that one day we might make the right mistake and escape the claustrophobic clutches of the SM.
“I’ve learned so much from my mistakes,” read a poster above Cliff’s undergraduate tutor’s desk. “I think I’ll make another.”
The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted 346 physicists to Nagasaki, Japan, from 22 to 27 September 2024. Delegates discussed the recent experimental and theoretical findings on perturbative probes of the quark–gluon plasma (QGP) – a hot and deconfined state of matter formed in ultrarelativistic heavy-ion collisions.
The four main LHC experiments played a prominent role at the conference, presenting a large set of newly published results from studies performed on data collected during LHC Run 2, as well as several new preliminary results performed on the new data samples from Run 3.
Jet modifications
A number of significant results on the modification of jets in heavy-ion collisions were presented. Splitting functions characterising the evolution of parton showers are expected to be modified in the presence of the QGP, providing experimental access to the medium properties. A more differential look at these modifications was presented through a correlated measurement of the shared momentum fraction and opening angle of the first splitting satisfying the “soft drop” condition in jets. Additionally, energy–energy correlators have recently emerged as promising observables where the properties of jet modification in the medium might be imprinted at different scales on the observable.
The first measurements of the two-particle energy–energy correlators in p–Pb and Pb–Pb collisions were presented, showing modifications in both the small- and large-angle correlations for both systems compared to pp collisions. A long-sought-after effect of energy exchanges between the jet and the medium is a correlated response of the medium in the jet direction. For the first time, measurements of hadron–boson correlations in events containing photons or Z bosons showed a clear depletion of the bulk medium in the direction of the Z boson, providing direct evidence of a medium response correlated to the propagating back-to-back jet. In pp collisions, the first direct measurement of the dead cone of beauty quarks, using novel machine-learning methods to reconstruct the beauty hadron from partial decay information, was also shown.
Several new results from studies of particle production in ultraperipheral heavy-ion collisions were discussed. These studies allow us to investigate the possible onset of gluon saturation at low Bjorken-x values. In this context, new results of charm photoproduction, with measurements of incoherent and coherent J/ψ mesons, as well as of D0 mesons, were released. Photonuclear production cross-sections of di-jets, covering a large interval of photon energies to scan over different regions of Bjorken-x, were also presented. These measurements pave the way for setting constraints on the gluon component of nuclear parton distribution functions at low Bjorken-x values, over a wide Q2 range, in the absence of significant final-state effects.
New experiments will explore higher-density regions of the QCD–matter phase diagram
During the last few years, a significant enhancement of charm and beauty-baryon production in proton–proton collisions was observed, compared to measurements in e+e– and ep collisions. These observations have challenged the assumption of the universality of heavy-quark fragmentation across different collision systems. Several intriguing measurements on this topic were released at the conference. In addition to an extended set of charm meson-to-meson and baryon-to-meson production yield ratios, the first measurements of the production of Σc0,++(2520) relative to Σc0,++(2455) at the LHC, obtained exploiting the new Run 3 data samples, were discussed. New insights on the structure of the exotic χc1(3872) state and its hadronisation mechanism were garnered by measuring the ratio of its production yield to that of ψ(2S) mesons in hadronic collisions.
Additionally, strange-to-non-strange production-yield ratios for charm and beauty mesons as a function of the collision multiplicity were released, pointing toward an enhanced strangeness production in a higher colour-density environment. Several theoretical approaches implementing modified hadronisation mechanisms with respect to in-vacuum fragmentation have proven to be able to reproduce at least part of the measurements, but a comprehensive description of the heavy-quark hadronisation, in particular for the baryonic sector, is still to be reached.
A glimpse into the future of the experimental opportunities in this field was also provided. A new and intriguing set of physics observables for a complete characterisation of the QGP with hard probes will become accessible with the planned upgrades of the ALICE, ATLAS, CMS and LHCb detectors, both during the next long LHC shutdown and in the more distant future. New experiments at CERN, such as NA60+, or in other facilities like the Electron–Ion Collider in the US and J-PARC-HI in Japan, will explore higher-density regions of the QCD–matter phase diagram.
The next edition of this conference series is scheduled to be held in Nashville, US, from 1 to 5 June 2026.
Throughout my experiences in the laboratory, I have seen how art is an important part of a scientist’s life. By being connected with art, scientists recognise that their activities are very embedded in contemporary culture. Science is culture. Through art and dialogues with artists, people realise how important science is for society and for culture in general. Science is an important cultural pillar in our society, and these interactions bring scientists meaning.
Are science and art two separate cultures?
Today, if you ask anyone: “What is nature?” they describe everything in scientific terms. The way you describe things, the mysteries of your research: you are actually answering the questions that are present in everyone’s life. In this case, scientists have a sense of responsibility. I think art helps to open this dialogue from science into society.
Do scientists have a responsibility to communicate their research?
All of us have a social responsibility in everything we produce. Ideas don’t belong to anyone, so it’s a collective endeavour. I think that scientists don’t have the responsibility to communicate the research themselves, but that their research cannot be isolated from society. I think it’s a very joyful experience to see that someone cares about what you do.
Why should artists care about science?
If you go to any academic institution, there’s always a scientific component, very often also a technological one. A scientific aspect of your life is always present. This is happening because we’re all on the same course. It’s a consequence of this presence of science in our culture. Artists have an important role in our society, and they help to spark conversations that are important to everyone. Sometimes it might seem as though they are coming from a very individual lens, but in fact they have a very large reach and impact. Not immediately, not something that you can count with data, but there is definitely an impact. Artists open these channels for communicating and thinking about a particular aspect of science, which is difficult to see from a scientific perspective. Because in any discipline, it’s amazing to see your activity from the eyes of others.
Creativity and curiosity are the parameters and competencies that make up artists and scientists
A few years back we did a little survey, and most of the scientists thought that by spending time with artists, they took a step back to think about their research from a different lens, and this changed their perspective. They thought of this as a very positive experience. So I think art is not only about communicating to the public, but about exploring the personal synergies of art and science. This is why artists are so important.
Do experimental and theoretical physicists have different attitudes towards art?
Typically, we think that theorists are much more open to artists, but I don’t agree. In my experience at CERN, I have found many engineers and experimental physicists to be highly theoretical. Both value artistic perspectives and their ability to consider questions and scientific ideas in an unconventional way. Experimental physicists would emphasise engagement with instruments and data, while theoretical physicists would focus on conceptual abstraction.
By being with artists, many experimentalists feel that they have the opportunity to talk about things beyond their research. For example, we often talk about the “frontiers of knowledge”. When asked about this, experimentalists or theoretical physicists might tell us about something other than particle physics – like neuroscience, or the brain and consciousness. A scientist is a scientist. They are very curious about everything.
Do these interactions help to blur the distinction between art and science?
Well, here I’m a bit radical because I know that creativity is something we define. Creativity and curiosity are the parameters and competencies that make up artists and scientists. But to become a scientist or an artist you need years of training – it’s not that you can become one just because you are a curious and creative person.
Not many people can chat about particle physics, but scientists very often chat with artists. I saw artists speaking for hours with scientists about the Higgs field. When you see two people speaking about the same thing, but with different registers, knowledge and background, it’s a precious moment.
When facilitating these discussions between physicists and artists, we don’t speak only about physics, but about everything that worries them. Through that, a sort of intimacy grows that often becomes something else: a friendship. This is the point at which a scientist stops being an information point for an artist and becomes someone who deals with big questions alongside an artist – who is also a very knowledgeable and curious person. This is a process rich in contrast, and you get many interesting surprises out of these interactions.
But even in this moment, they are still artists and scientists. They don’t become this blurred figure that can do anything.
Can scientific discovery exist without art?
That’s a very tricky question. I think that art is a component of science, therefore science cannot exist without art – without the qualities that the artist and scientist have in common. To advance science, you have to create a question that needs to be answered experimentally.
Did discoveries in quantum mechanics affect the arts?
Everything is subject to quantum mechanics. Maybe what it changed was an attitude towards uncertainty: what we see and what we think is there. There was an increased sense of doubt and general uncertainty in the arts.
Do art and science evolve together or separately?
I think there have been moments of convergence – you can clearly see it in any of the avant garde. The same applies to literature; for example, modernist writers showed a keen interest in science. Poets such as T S Eliot approached poetry with a clear resonance of the first scientific revolutions of the century. There are references to the contributions of Faraday, Maxwell and Planck. You can tell these artists and poets were informed and eager to follow what science was revealing about the world.
You can also note the influence of science in music, as physicists gained a better understanding of the physical aspects of sound and matter. Physics became less about viewing the world through a lens, and instead focused on the invisible: the vibrations of matter, electricity, the innermost components of materials. At the end of the 19th century and the beginning of the 20th, these examples crop up constantly. It’s not just representing the world as you see it through a particular lens, but being involved in the phenomena of the world and these uncensored realities.
From the 1950s to the 1970s you can see these connections in every single moment. Science is very present in the work of artists, but my feeling is that we don’t have enough literature about it. We really need to conduct more research on this connection between humanities and science.
What are your favourite examples of art influencing science?
Feynman diagrams are one example. Feynman was amazing – a prodigy. Many people before him tried to represent things that escaped our intuition visually and failed. We also have the Pauli Archives here at CERN. Pauli was not the most popular father of quantum mechanics, but he was determined to not only understand mathematical equations but to visualise them, and share them with his friends and colleagues. This sort of endeavour goes beyond just writing – it is about the possibility of creating a tangible experience. I think scientists do that all the time by building machines, and then by trying to understand these machines statistically. I see that in the laboratory constantly, and it’s very revealing because usually people might think of these statistics as something no one cares about – that the visuals are clumsy and nerdy. But they’re not.
Even Leonardo da Vinci was known as a scientist and an artist, but his anatomical sketches were not discovered until hundreds of years after his other works. Newton was also paranoid about expressing his true scientific theories because of the social standards and politics of the time. His views were unorthodox, and he did not want to ruin his prestigious reputation.
Today’s culture also influences how we interpret history. We often think of Aristotle as a philosopher, yet he is also recognised for contributions to natural history. The same with Democritus, whose ideas laid foundations for scientific thought.
So I think that opening laboratories to artists is very revealing about the influence of today’s culture on science.
When did natural philosophy branch out into art and science?
I believe it was during the development of the scientific method: observation, analysis and the evolution of objectivity. The departure point was definitely when we developed a need to be objective. It took centuries to get where we are now, but I think there is a clear division: a line with philosophy, natural philosophy and natural history on one side, and modern science on the other. Today, I think art and science have different purposes. They convene at different moments, but there is always this detour. Some artists are very scientific minded, and some others are more abstract, but they are both bound to speculate massively.
It’s really good news for everyone that labs want to include non-scientists
For example, at our Arts at CERN programme we have had artists who were interested in niche scientific aspects. Erich Berger, an artist from Finland, was interested in designing a detector, and scientists whom he met kept telling him that he would need to calibrate the detector. The scientist and the artist here had different goals. For the scientist, the most important thing is that the detector has precision in the greatest complexity. And for the artist, it’s not. It’s about the process of creation, not the analysis.
Do you think that science is purely an objective medium while art is a subjective one?
No. It’s difficult to define subjectivity and objectivity. But art can be very objective. Artists create artefacts to convey their intended message. It’s not that these creations are standing alone without purpose. No, we are beyond that. Now art seeks meaning that is, in this context, grounded in scientific and technological expertise.
How do you see the future of art and science evolving?
There are financial threats to both disciplines. We are still in this moment where things look a bit bleak. But I think our programme is pioneering, because many scientific labs are developing their own arts programmes inspired by the example of Arts at CERN. This is really great, because unless you are in a laboratory, you don’t see what doing science is really about. We usually read science in the newspapers or listen to it on a podcast – everything is very much oriented to the communication of science, but making science is something very specific. It’s really good news for everyone that laboratories want to include non-scientists. Arts at CERN works mostly with visual artists, but you could imagine filmmakers, philosophers, those from the humanities, poets or almost anyone at all, depending on the model that one wants to create in the lab.
A new result from the LHCb collaboration supports the hypothesis that the rare decays B±→ K±e+e– and B±→ K±µ+µ– occur at the same rate, further tightening constraints on the magnitude of lepton flavour universality (LFU) violation in rare B decays. The new measurement is the most precise to date in the high-q2 region and the first of its kind at a hadron collider.
LFU is an accidental symmetry of the Standard Model (SM). Under LFU, each generation of lepton ℓ± (electron, muon and tau lepton) is equally likely to interact with the W boson in decay processes such as B±→ K±ℓ+ℓ–. This symmetry leads to the prediction that the ratio of branching fractions for these decay channels should be unity except for kinematic effects due to the different masses of the charged leptons. The most straightforward ratio to measure is that between the muon and electron decay modes, known as RK. Any significant deviation from RK = 1 could only be explained by the existence of new physics (NP) particles that preferentially couple to one lepton generation over another, violating LFU.
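Written out explicitly (in the standard notation, with the integration over a chosen dilepton mass-squared range left implicit), the ratio is:

```latex
R_K = \frac{\mathcal{B}(B^{\pm} \to K^{\pm}\mu^{+}\mu^{-})}{\mathcal{B}(B^{\pm} \to K^{\pm}e^{+}e^{-})}
```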
B±→ K±ℓ+ℓ– decays are a powerful probe for virtual NP particles. These decays involve an underlying b–to–s quark transition – an example of a flavour-changing neutral current (FCNC). FCNC transitions are extremely rare in the SM, as they occur only through higher-order Feynman diagrams. This makes them particularly sensitive to contributions from NP particles, which could significantly alter the characteristics of the decays. In this case, the mass of the NP particles could be much larger than can be produced directly at the LHC. “Indirect” searches for NP, such as measuring the precisely predicted ratio RK, can probe mass scales beyond the reach of direct-production searches with current experimental resources.
The new measurement is the most precise to date in the high-q2 region
In the decay process B±→ K±ℓ+ℓ–, the final-state leptons can also originate from an intermediate resonant state, such as a J/ψ or ψ(2S). These resonant channels occur through tree-level Feynman diagrams. Their contributions significantly outnumber the non-resonant FCNC processes and are not expected to be affected by NP. RK is therefore measured in ranges of dilepton invariant mass-squared (q2), which exclude these resonances, to preserve sensitivity to potential NP effects in FCNC processes.
The new result from the LHCb collaboration measures RK in the high-q2 region, above the ψ(2S) resonance. The high-q2 data have a different composition of backgrounds compared to the low-q2 data, leading to different strategies for their rejection and modelling, and different systematic effects. With RK expected to be unity in all q2 domains in the SM, low-q2 and high-q2 measurements offer powerfully complementary constraints on the magnitude of LFU-violating NP in rare B decays.
The new measurement of RK agrees with the SM prediction of unity and is the most precise to date in the high-q2 region (figure 1). It complements a refined analysis below the J/ψ resonance published by LHCb in 2023, which also reported RK consistent with unity. Both results use the complete proton–proton collision data collected by LHCb from 2011 to 2018. They lay the groundwork for even more precise measurements with data from Run 3 and beyond.
As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.
The ATLAS collaboration recently published a new measurement of the B0 lifetime using B0→ J/ψK*0 decays from the entire Run-2 dataset it has recorded at 13 TeV. The result improves the precision of previous world-leading measurements by the CMS and LHCb collaborations by a factor of two.
Studies of b-hadron lifetimes probe our understanding of the weak interaction. The lifetimes of b-hadrons can be systematically computed within the heavy-quark expansion (HQE) framework, where b-hadron observables are expressed as a perturbative expansion in inverse powers of the b-quark mass.
ATLAS measures the “effective” B0 lifetime, which represents the average decay time incorporating effects from mixing and CP contributions, as τ(B0) = 1.5053 ± 0.0012 (stat.) ± 0.0035 (syst.) ps. The result is consistent with previous measurements published by ATLAS and other experiments, as summarised in figure 1. It also aligns with theoretical predictions from HQE and lattice QCD, as well as with the experimental world average.
The analysis benefitted from the large Run-2 dataset and a refined trigger selection, enabling the collection of an extensive sample of 2.5 million B0→ J/ψK*0 decays. Events with a J/ψ meson decaying into two muons with sufficient transverse momentum are cleanly identified in the ATLAS Muon Spectrometer by the first-level hardware trigger. In the next-level software trigger, exploiting the full detector information, these muons are then combined with two tracks measured by the Inner Detector, ensuring they originate from the same vertex.
The B0-meson lifetime is determined through a two-dimensional unbinned maximum-likelihood fit, utilising the measured B0-candidate mass and decay time, and accounting for both signal and background components. The limited hadronic particle-identification capability of ATLAS requires careful modelling of the significant backgrounds from other processes that produce J/ψ mesons. The sensitivity of the fit is increased by estimating the uncertainty of the decay-time measurement provided by the ATLAS tracking and vertexing algorithms on a per-candidate basis. The resulting lifetime measurement is limited by systematic uncertainties, with the largest contributions arising from the correlation between B0 mass and lifetime, and ambiguities in modelling the mass distribution.
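The essence of an unbinned maximum-likelihood lifetime fit can be illustrated with a deliberately simplified toy: for a pure exponential signal, with no background, detector resolution or mass dimension (all of which the real ATLAS fit must model), the likelihood is maximised by the sample mean of the decay times. The sketch below uses the measured central value as the toy’s true lifetime purely for illustration; it is not the ATLAS analysis.

```python
import random
import statistics

random.seed(1)
TAU_TRUE = 1.5053  # ps, toy input lifetime (illustrative only)
N = 100_000        # toy sample size

# Generate toy proper decay times from an exponential distribution
times = [random.expovariate(1.0 / TAU_TRUE) for _ in range(N)]

# For a pure exponential p(t) = (1/tau) exp(-t/tau), maximising the
# unbinned likelihood gives tau_hat = mean of the decay times
tau_hat = statistics.fmean(times)

# Statistical uncertainty of the estimator: tau_hat / sqrt(N)
sigma_tau = tau_hat / N ** 0.5
print(f"tau = {tau_hat:.4f} +/- {sigma_tau:.4f} ps")
```

With 100,000 toy events the statistical precision is already at the few-per-mille level, which is why the real measurement, with 2.5 million signal decays, is limited by systematic rather than statistical uncertainties.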
ATLAS combined its measurement with the average decay width (Γs) of the light and heavy Bs-meson mass eigenstates, also measured by ATLAS, to determine the ratio of decay widths as Γd/Γs = 0.9905 ± 0.0022 (stat.) ± 0.0036 (syst.) ± 0.0057 (ext.). The result is consistent with unity and provides a stringent test of QCD predictions, which also support a value near unity.
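As a rough cross-check of the combination, one can propagate the quoted uncertainties through the ratio Γd/Γs = 1/(τ(B0)·Γs), assuming uncorrelated Gaussian errors. The Γs central value and uncertainty below are assumptions inferred for illustration, not the published ATLAS numbers.

```python
# Simple Gaussian error propagation for Gd/Gs = 1/(tau_B0 * Gs).
# Inputs: tau from the article (stat. and syst. added in quadrature);
# Gs values are hypothetical, chosen only to illustrate the method.
tau, tau_err = 1.5053, 0.0037        # ps
gamma_s, gs_err = 0.6707, 0.0040     # ps^-1 (assumed for illustration)

gamma_d = 1.0 / tau                  # Gd = 1/tau for the B0
ratio = gamma_d / gamma_s

# Relative uncertainties add in quadrature for products and ratios
rel = ((tau_err / tau) ** 2 + (gs_err / gamma_s) ** 2) ** 0.5
print(f"Gd/Gs = {ratio:.4f} +/- {ratio * rel:.4f}")
```

The central value reproduces the quoted ratio of about 0.9905; the full ATLAS result additionally carries an external uncertainty from inputs not modelled in this sketch.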
When I was an undergraduate physics student in the mid-1980s, I fell in love with the philosophy of quantum mechanics. I devoured biographies of the greats of early-20th-century atomic physics – physicists like Bohr, Heisenberg, Schrödinger, Pauli, Dirac, Fermi and Born. To me, as I was struggling with the formalism of quantum mechanics, there seemed to be something so exciting, magical even, about that era, particularly those wonder years of the mid-1920s when its mathematical framework was being developed and the secrets of the quantum world were revealing themselves.
I went on to do a PhD in nuclear reaction theory, which meant I spent most of my time working through mathematical derivations, becoming familiar with S-matrices, Green’s functions and scattering amplitudes, scribbling pages of angular-momentum algebra and coding in Fortran 77. And I loved that stuff. There certainly seemed to be little time for worrying about what was really going on inside atomic nuclei. Indeed, I was learning that even the notion of something “really going on” was a vague one. My generation of theoretical physicists were still being very firmly told to “shut up and calculate”, as many adherents of the Copenhagen school of quantum mechanics were keen to advocate. To be fair, so much progress has been made over the past century, in nuclear and particle physics, quantum optics, condensed-matter physics and quantum chemistry, that philosophical issues were seen as an unnecessary distraction. I recall one senior colleague, frustrated by my abiding interest in interpretational matters, admonishing me with: “Jim, an electron is an electron is an electron. Stop trying to say more about it.” And there certainly seemed to be very little in the textbooks I was reading about unresolved issues arising from such topics as the EPR (Einstein–Podolsky–Rosen) paradox and the measurement problem, let alone any analysis of the work of Hugh Everett and David Bohm, who were regarded as mavericks. The Copenhagen hegemony ruled supreme.
What I wasn’t aware of until later in my career was that a community of physicists had indeed continued to worry and think about such matters. These physicists were doing more than just debating and philosophising – they were slowly advancing our understanding of the quantum world. Experimentalists such as Alain Aspect, John Clauser and Anton Zeilinger were devising ingenious experiments in quantum optics – all three were awarded the Nobel Prize for their tests of John Bell’s famous inequality only in 2022, which says a lot about how long it has taken us to acknowledge their contribution. Meanwhile, theorists such as Wojciech Zurek, Erich Joos, Dieter Zeh, Abner Shimony and Asher Peres, to name just a few, were formalising ideas on entanglement and decoherence theory. It is certainly high time that quantum-mechanics textbooks – even advanced undergraduate ones – incorporated their new insights.
All of which brings me to Quantum Drama, a new popular-science book and collaboration between the physicist and science writer Jim Baggott and the late historian of science John L Heilbron. In terms of level, the book is at the higher end of the popular-science market and, as such, will probably be of most interest to, for example, readers of CERN Courier. If I have a criticism of the book it is that its level is not consistent. For it tries to be all things. On occasion, it has wonderful biographical detail, often of less well-known but highly deserving characters. It is also full of wit and new insights. But then sometimes it can get mired in technical detail, such as in the lengthy descriptions of the different Bell tests, which I imagine only professional physicists are likely to fully appreciate.
Having said that, the book is certainly timely. This year the world celebrates the centenary of quantum mechanics, marking the publication of the momentous papers of Heisenberg and Schrödinger on matrix and wave mechanics in 1925 and 1926, respectively. Progress in quantum information theory and in the development of new quantum technologies is also gathering pace, with the promise of quantum computers, quantum sensing and quantum encryption getting ever closer. This all provides an opportunity for the philosophy of quantum mechanics to finally emerge from the shadows into mainstream debate again.
A new narrative
So, what makes Quantum Drama stand out from other books that retell the story of quantum mechanics? Well, I would say that most historical accounts tend to focus only on that golden age between 1900 and 1927, which came to an end at the Solvay Conference in Brussels and those well-documented few days when Einstein and Bohr had their debate about what it all means. While these two giants of 20th-century physics make the front cover of the book, Quantum Drama takes the story on beyond that famous conference. Other accounts, both popular and scholarly, tend to push the narrative that Bohr won the argument, leaving generations of physicists with the idea that the interpretational issues had been resolved – apart, that is, from the odd dissenting voices from the likes of Everett or Bohm, who tried, unsuccessfully it was argued, to put a spanner in the Copenhagen works. All the real progress in quantum foundations after 1927, or so we were told, was in the development of quantum field theories, such as QED and QCD, the excitement of high-energy physics and the birth of the Standard Model, with the likes of Murray Gell-Mann and Steven Weinberg replacing Heisenberg and Schrödinger at centre stage. Quantum Drama takes up the story after 1927, showing that there has been a lively, exciting and ongoing dispute over what it all means, long after the death of those two giants of physics. In fact, the period up to Solvay 1927 is all dealt with in Act I of the book. The subtitle puts it well: From the Bohr–Einstein Debate to the Riddle of Entanglement.
The Bohr–Einstein debate is still very much alive and kicking
All in all, Quantum Drama delivers something remarkable, for it shines a light on all the muddle, complexity and confusion surrounding a century of debate about the meaning of quantum mechanics and the famous “Copenhagen spirit”, treating the subject with thoroughness and genuine scholarship, and showing that the Bohr–Einstein debate is still very much alive and kicking.