
The other 99%

Quarks contribute less than 1% to the mass of protons and neutrons. This provokes an astonishing question: where does the other 99% of the mass of the visible universe come from? The answer lies in the gluon, and how it interacts with itself to bind quarks together inside hadrons.

Much remains to be understood about gluon dynamics. At present, the chief experimental challenge is to observe the onset of gluon saturation – a dynamic equilibrium between gluon splitting and recombination predicted by QCD. The experimental key looks likely to be a rare but intriguing type of LHC interaction known as an ultraperipheral collision (UPC), and the breakthrough may come as soon as the next experimental run.

Gluon saturation is expected to end the rapid growth in gluon density measured at the HERA electron–proton collider at DESY in the 1990s and 2000s. HERA observed this growth as the energy of interactions increased and as the fraction of the proton’s momentum borne by the gluons (Bjorken x) decreased.

So gluons become more numerous in hadrons as their share of the hadron’s momentum decreases – but to what end?

Gluonic hotspots are now being probed with unprecedented precision at the LHC and are central to understanding the high-energy regime of QCD

Nonlinear effects are expected to arise due to processes like gluon recombination, wherein two gluons combine to become one. When gluon recombination becomes a significant factor in QCD dynamics, gluon saturation sets in – an emergent phenomenon whose energy scale is a critical parameter to determine experimentally. At this scale, gluons begin to act like classical fields and gluon density plateaus. A dilute partonic picture transitions to a dense, saturated state. For recombination to take precedence over splitting, gluon momenta must be very small, corresponding to low values of Bjorken x. The saturation scale should also be directly proportional to the colour-charge density, making heavy nuclei like lead ideal for studying nonlinear QCD phenomena.
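To get a feel for the numbers, the saturation scale is often parametrised in the spirit of the Golec-Biernat–Wüsthoff model as Q_s² ≈ Q_0²(x_0/x)^λ, with an enhancement of roughly A^(1/3) for a nucleus of mass number A. The short Python sketch below uses illustrative parameter values (λ ≈ 0.3, x_0 ≈ 3 × 10⁻⁴, Q_0² = 1 GeV²) rather than a fit to data, simply to show why low Bjorken x and heavy nuclei push the saturation scale into experimentally accessible territory.

```python
# Toy estimate of the saturation scale, assuming a GBW-style parametrisation
# Q_s^2 = Q0^2 * (x0/x)**lam, enhanced by A**(1/3) for a nucleus.
# All parameter values are illustrative assumptions, not fits to data.

def qs_squared(x, A=1, Q0_sq=1.0, x0=3e-4, lam=0.3):
    """Saturation scale squared [GeV^2] for momentum fraction x and mass number A."""
    return Q0_sq * (x0 / x) ** lam * A ** (1 / 3)

for x in (1e-2, 1e-4, 1e-6):
    print(f"x = {x:.0e}:  proton Q_s^2 = {qs_squared(x):.2f} GeV^2, "
          f"lead Q_s^2 = {qs_squared(x, A=208):.2f} GeV^2")
```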

But despite strong theoretical reasoning and tantalising experimental hints, direct evidence for gluon saturation remains elusive.

Since the conclusion of the HERA programme, the quest to explore gluon saturation has shifted focus to the LHC. But with no point-like electron to probe the hadronic target, LHC physicists had to find a new point-like probe: light itself. UPCs at the LHC exploit the flux of quasi-real high-energy photons generated by ultra-relativistic particles. For heavy ions like lead, this flux of photons is enhanced by the square of the nuclear charge, enabling studies of photon-proton (γp) and photon-nucleus interactions at centre-of-mass energies reaching the TeV scale.

Keeping it clean

What really sets UPCs apart is their clean environment. UPCs occur at large impact parameters well outside the range of the strong nuclear force, allowing the nuclei to remain intact. Unlike hadronic collisions, which can produce thousands of particles, UPCs often involve only a few final-state particles, for example a single J/ψ, providing an ideal laboratory for gluon saturation. J/ψ are produced when a cc̄ pair created by two or more gluons from one nucleus is brought on-shell by interacting with a quasi-real photon from the other nucleus (see “Sensitivity to saturation” figure).

Power-law observation

Gluon saturation models predict deviations in the γp → J/ψp cross section from the power-law behaviour observed at HERA. The LHC experiments are placing a significant focus on investigating the energy dependence of this process to identify potential signatures of saturation, with ALICE and LHCb extending studies to higher γp centre-of-mass energies (Wγp) and lower Bjorken x than HERA. The results so far reveal that the cross-section continues to increase with energy, consistent with the power-law trend (see “Approaching the plateau?” figure).

The symmetric nature of pp collisions introduces significant challenges. In pp collisions, either proton can act as the photon source, leading to an intrinsic ambiguity in identifying the photon emitter. In proton–lead (pPb) collisions, the lead nucleus overwhelmingly dominates photon emission, eliminating this ambiguity. This makes pPb collisions an ideal environment for precise studies of the photoproduction of J/ψ by protons.

During LHC Run 1, the ALICE experiment probed Wγp up to 706 GeV in pPb collisions, more than doubling HERA’s maximum reach of 300 GeV. This translates to probing Bjorken-x values as low as 10⁻⁵, significantly beyond the regime explored at HERA. LHCb took a different approach. The collaboration inferred the behaviour of pp collisions at high energies (“W+ solutions”) by assuming knowledge of their energy dependence at low energies (“W- solutions”), allowing LHCb to probe Bjorken-x values as small as 10⁻⁶ and Wγp up to 2 TeV.
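These reaches in Bjorken x follow from the usual leading-order kinematic relation for exclusive J/ψ photoproduction, x ≈ (M_J/ψ / Wγp)². The snippet below simply evaluates this approximation at the energies quoted above; it is a back-of-envelope check, not the experiments’ own extraction.

```python
# Approximate gluon momentum fraction probed in gamma p -> J/psi p,
# using the leading-order relation x ~ (M_J/psi / W_gamma_p)^2.
M_JPSI = 3.097  # J/psi mass [GeV]

def bjorken_x(w_gamma_p_gev):
    """Approximate Bjorken x probed at photon-proton centre-of-mass energy W [GeV]."""
    return (M_JPSI / w_gamma_p_gev) ** 2

for label, w in [("HERA maximum", 300.0),
                 ("ALICE pPb, Run 1", 706.0),
                 ("LHCb W+ solutions", 2000.0)]:
    print(f"{label:>18}: W = {w:6.0f} GeV  ->  x ~ {bjorken_x(w):.1e}")
```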

There is not yet any theoretical consensus on whether LHC data align with gluon-saturation predictions, and the measurements remain statistically limited, leaving room for further exploration. Theoretical challenges include incomplete next-to-leading-order calculations and the reliance of some models on fits to HERA data. Progress will depend on robust and model-independent calculations and high-quality UPC data from pPb collisions in LHC Run 3 and Run 4.

Some models predict a slowing increase in the γp → J/ψp cross section with energy at small Bjorken x. If these models are correct, gluon saturation will likely be discovered in LHC Run 4, where we expect to see a clear observation of whether pPb data deviate from the power law observed so far.

Gluonic hotspots

If a UPC photon interacts with the collective colour field of a nucleus – coherent scattering – it probes its overall distribution of gluons. If a UPC photon interacts with individual nucleons or smaller sub-nucleonic structures – incoherent scattering – it can probe smaller-scale gluon fluctuations.

Simulations of the transverse density of gluons in protons

These fluctuations, known as gluonic hotspots, are theorised to become more numerous and overlap in the regime of gluon saturation (see “Onset of saturation” figure). Now being probed with unprecedented precision at the LHC, they are central to understanding the high-energy regime of QCD.

Gluonic hotspots are used to model the internal transverse structure of colliding protons or nuclei (see “Hotspot snapshots” figure). The saturation scale is inherently impact-parameter dependent, with the densest colour charge densities concentrated at the core of the proton or nucleus, and diminishing toward the periphery, though subject to fluctuations. Researchers are increasingly interested in exploring how these fluctuations depend on the impact parameter of collisions to better characterise the spatial dynamics of colour charge. Future analyses will pinpoint contributions from localised hotspots where saturation effects are most likely to be observed.

The energy dependence of incoherent or dissociative photoproduction promises a clear signature for gluon saturation, independent of the coherent power-law method described above. As saturation sets in, all gluon configurations in the target converge to similar densities, causing the variance of the gluon field to decrease, and with it the dissociative cross section. Detecting a peak and a decline in the incoherent cross-section as a function of energy would represent a clear signature of gluon saturation.
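In the Good–Walker picture behind this argument, the coherent cross section scales with the square of the configuration-averaged amplitude, |⟨A⟩|², while the dissociative cross section scales with its variance, ⟨A²⟩ − ⟨A⟩². The toy Monte Carlo below is a deliberately schematic sketch, not a QCD calculation: it damps event-by-event fluctuations to mimic saturation and shows the dissociative contribution dying away while the coherent one survives.

```python
import numpy as np

rng = np.random.default_rng(1)

def amplitudes(saturation, n_config=10000):
    """Toy scattering amplitudes for many target configurations.
    'saturation' in [0, 1) damps configuration-to-configuration fluctuations,
    mimicking all gluon configurations converging to similar densities."""
    fluct = rng.lognormal(mean=0.0, sigma=1.0 - saturation, size=n_config)
    return fluct / fluct.mean()   # hold the average density (coherent part) fixed

for s in (0.0, 0.5, 0.9):
    a = amplitudes(s)
    coherent = np.mean(a) ** 2     # |<A>|^2  -> coherent cross-section
    incoherent = np.var(a)         # <A^2> - <A>^2 -> dissociative cross-section
    print(f"saturation={s:.1f}: coherent ~ {coherent:.2f}, incoherent ~ {incoherent:.2f}")
```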

Simulations of the transverse density of gluons in lead nuclei

The ALICE collaboration has taken significant steps in exploring this quantum terrain, demonstrating the possibility of studying different geometrical configurations of quantum fluctuations in processes where protons or lead nucleons dissociate. The results highlight a striking correlation between momentum transfer, which is inversely proportional to the impact parameter, and the size of the target structure. The observation that sub-nucleonic structures impart the greatest momentum transfer is compelling evidence for gluonic quantum fluctuations at the sub-nucleon level.

Into the shadows

In 1982 the European Muon Collaboration observed an intriguing phenomenon: nuclei appeared to contain fewer gluons than expected based on the contributions from their individual protons and neutrons. This effect, known as nuclear shadowing, was observed in experiments conducted at CERN at moderate values of Bjorken x. It is now known to occur because the interaction of a probe with one gluon reduces the likelihood of the probe interacting with other gluons within the nucleus – the gluons hiding behind them, in their shadow, so to speak. At smaller values of Bjorken x, saturation further suppresses the number of gluons contributing to the interaction.

Nuclear suppression factor for lead relative to protons

The relationship between gluon saturation and nuclear shadowing is poorly understood, and separating their effects remains an open challenge. The situation is further complicated by an experimental reliance on lead–lead (PbPb) collisions, which, like pp collisions, suffer from ambiguity in identifying the interacting nucleus, unless the interaction is accompanied by an ejected neutron.

The ALICE, CMS and LHCb experiments have extensively studied nuclear shadowing via the exclusive production of vector mesons such as J/ψ in ultraperipheral PbPb collisions. Results span photon–nucleus collision energies from 10 to 1000 GeV. The onset of nuclear shadowing, or another nonlinear QCD phenomenon like saturation, is clearly visible as a function of energy and Bjorken x (see “Nuclear shadowing” figure).

Multidimensional maps

While both saturation-based and gluon shadowing models describe the data reasonably well at high energies, neither framework captures the observed trends across the entire kinematic range. Future efforts must go beyond energy dependence by being differential in momentum transfer and studying a range of vector mesons with complementary sensitivities to the saturation scale.

Soon to be constructed at Brookhaven National Laboratory, the Electron-Ion Collider (EIC) promises to transform our understanding of gluonic matter. Designed specifically for QCD research, the EIC will probe gluon saturation and shadowing in unprecedented detail, using a broad array of reactions, collision species and energy levels. By providing a multidimensional map of gluonic behaviour, the EIC will address fundamental questions such as the origin of mass and nuclear spin.

ALICE’s high-granularity forward calorimeter

Before then, a tenfold increase in PbPb statistics in LHC Runs 3 and 4 will allow a transformative leap in low Bjorken-x physics. Though the LHC was not originally designed for low Bjorken-x physics, its unparalleled energy reach and diverse range of colliding systems offer unique opportunities to explore gluon dynamics at the highest energies.

Enhanced capabilities

Surpassing the gains from increased luminosity alone, ALICE’s new triggerless detector readout mode will offer a vast improvement over previous runs, which were constrained by dedicated triggers and bandwidth limitations. Subdetector upgrades will also play an important role. The muon forward tracker has already enhanced ALICE’s capabilities, and the high-granularity forward calorimeter set to be installed in time for Run 4 is specifically designed to improve sensitivity to small Bjorken-x physics (see “Saturation specific” figure).

Ultraperipheral-collision physics at the LHC is far more than a technical exploration of QCD. Gluons govern the structure of all visible matter. Saturation, hotspots and shadowing shed light on the origin of 99% of the mass of the visible universe. 

Charm and synthesis

In 1955, after a year of graduate study at Harvard, I joined a group of a dozen or so students committed to studying elementary particle theory. We approached Julian Schwinger, one of the founders of quantum electrodynamics, hoping to become his thesis students – and we all did.

Schwinger lined us up in his office, and spent several hours assigning thesis subjects. It was a remarkable performance. I was the last in line. Having run out of well-defined thesis problems, he explained to me that weak and electromagnetic interactions share two remarkable features: both are vectorial and both display aspects of universality. Schwinger suggested that I create a unified theory of the two interactions – an electroweak synthesis. How I was to do this he did not say, aside from slyly hinting at the Yang–Mills gauge theory.

By the summer of 1958, I had convinced myself that weak and electromagnetic interactions might be described by a badly broken gauge theory, and Schwinger that I deserved a PhD. I had hoped to spend part of a postdoctoral fellowship in Moscow at the invitation of the recent Russian Nobel laureate Igor Tamm, and sought to visit Niels Bohr’s institute in Copenhagen while awaiting my Soviet visa. With Bohr’s enthusiastic consent, I boarded the SS Île de France with my friend Jack Schnepps. Following a memorable and luxurious crossing – one of the great ship’s last – Jack drove south to work with Milla Baldo-Ceolin’s emulsion group in Padova, and I took the slow train north to Copenhagen. Thankfully, my Soviet visa never arrived. I found the SU(2) × U(1) structure of the electroweak model in the spring of 1960 at Bohr’s famous institute at Blegdamsvej 19, and wrote the paper that would earn my share of the 1979 Nobel Prize.

We called the new quark flavour charm, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day

A year earlier, in 1959, Augusto Gamba, Bob Marshak and Susumu Okubo had proposed lepton–hadron symmetry, which regarded protons, neutrons and lambda hyperons as the building blocks of all hadrons, to match the three known leptons at the time: neutrinos, electrons and muons. The idea was falsified by the discovery of a second neutrino in 1962, and superseded in 1964 by the invention of fractionally charged hadron constituents, first by George Zweig and André Petermann, and then decisively by Murray Gell-Mann with his three flavours of quarks. Later in 1964, while on sabbatical in Copenhagen, James Bjorken and I realised that lepton–hadron symmetry could be revived simply by adding a fourth quark flavour to Gell-Mann’s three. We called the new quark flavour “charm”, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day.

Annus mirabilis

1964 was a remarkable year. In addition to the invention of quarks, Nick Samios spotted the triply strange Ω baryon, and Oscar Greenberg devised what became the critical notion of colour. Arno Penzias and Robert Wilson stumbled on the cosmic microwave background radiation. James Cronin, Val Fitch and others discovered CP violation. Robert Brout, François Englert, Peter Higgs and others invented spontaneously broken non-Abelian gauge theories. And to top off the year, Abdus Salam rediscovered and published my SU(2) × U(1) model, after I had more-or-less abandoned electroweak thoughts due to four seemingly intractable problems.

Four intractable problems of early 1964

How could the W and Z bosons acquire masses while leaving the photon massless?

Steven Weinberg, my friend from both high school and college, brilliantly solved this problem in 1967 by subjecting the electroweak gauge group to spontaneous symmetry breaking, initiating the half-century-long search for the Higgs boson. Salam published the same solution in 1968.

How could an electroweak model of leptons be extended to describe the weak interactions of hadrons?

John Iliopoulos, Luciano Maiani and I solved this problem in 1970 by introducing charm and quark-lepton symmetry to avoid unobserved strangeness-changing neutral currents.

Was the spontaneously broken electroweak gauge model mathematically consistent?

Gerard ’t Hooft announced in 1971 that he had proven Steven Weinberg’s electroweak model to be renormalisable. In 1972, Claude Bouchiat, John Iliopoulos and Philippe Meyer demonstrated the electroweak model to be free of Adler anomalies provided that lepton–quark symmetry is maintained.

Could the electroweak model describe CP violation without invoking additional spinless fields?

In 1973, Makoto Kobayashi and Toshihide Maskawa showed that the electroweak model could easily and naturally violate CP if there are more than four quark flavours.

Much to my surprise and delight, all of them would be solved within just a few years, with the last theoretical obstacle removed by Makoto Kobayashi and Toshihide Maskawa in 1973 (see “Four intractable problems” panel). A few months later, Paul Musset announced that CERN’s Gargamelle detector had won the race to detect weak neutral-current interactions, giving the electroweak model the status of a predictive theory. Remarkably, the year had begun with Gell-Mann, Harald Fritzsch and Heinrich Leutwyler proposing QCD, and David Gross, Frank Wilczek and David Politzer showing it to be asymptotically free. The Standard Model of particle physics was born.

Charmed findings

But where were the charmed quarks? Early on Monday morning on 11 November 1974, I was awakened by a phone call from Sam Ting, who asked me to come to his MIT office as soon as possible. He and Ulrich Becker were waiting for me impatiently. They showed me an amazingly sharp resonance. Could it be a vector meson like the ρ or ω and be so narrow, or was it something quite different? I hopped in my car and drove to Harvard, where my colleagues Alvaro de Rújula and Howard Georgi excitedly regaled me about the Californian side of the story. A few days later, experimenters in Frascati confirmed the BNL–SLAC discovery, and de Rújula and I submitted our paper “Is Bound Charm Found?” – one of two papers on the J/ψ discovery printed in Physical Review Letters in January 1975 that would prove to be correct. Among five false papers was one written by my beloved mentor, Julian Schwinger.

Sam Ting at CERN in 1976

The second correct paper was by Tom Appelquist and David Politzer. Well before that November, they had realised (without publishing) that bound states of a charmed quark and its antiquark lying below the charm threshold would be exceptionally narrow due to the asymptotic freedom of QCD. De Rújula suggested to them that such a system be called charmonium in analogy with positronium. His term made it into the dictionary. Shortly afterward, the 1976 Nobel Prize in Physics was jointly awarded to Burton Richter and Sam Ting for “their pioneering work in the discovery of a heavy elementary particle of a new kind” – evidence that charm was not yet a universally accepted explanation. Over the next few years, experimenters worked hard to confirm the predictions of theorists at Harvard and Cornell by detecting and measuring the masses, spins and transitions among the eight sub-threshold charmonium states. Later on, they would do the same for 14 relatively narrow states of bottomonium.

Abdus Salam, Tom Ball and Paul Musset

Other experimenters were searching for particles containing just one charmed quark or antiquark. In our 1975 paper “Hadron Masses in a Gauge Theory”, de Rújula, Georgi and I included predictions of the masses of several not-yet-discovered charmed mesons and baryons. The first claim to have detected charmed particles was made in 1975 by Robert Palmer and Nick Samios at Brookhaven, again with a bubble-chamber event. It seemed to show a cascade decay process in which one charmed baryon decays into another charmed baryon, which itself decays. The measured masses of both of the charmed baryons were in excellent agreement with our predictions. Though the claim was not widely accepted, I believe to this day that Samios and Palmer were the first to detect charmed particles.

Sheldon Glashow and Steven Weinberg

The SLAC electron–positron collider, operating well above charm threshold, was certainly producing charmed particles copiously. Why were they not being detected? I recall attending a conference in Wisconsin that was largely dedicated to this question. On the flight home, I met my old friend Gerson Goldhaber, who had been struggling unsuccessfully to find them. I think I convinced him to try a bit harder. A couple of weeks later in 1976, Goldhaber and François Pierre succeeded. My role in charm physics had come to a happy ending. 

  • This article is adapted from a presentation given at the Institute of High-Energy Physics in Beijing on 20 October 2024 to celebrate the 50th anniversary of the discovery of the J/ψ.

Muon cooling kickoff at Fermilab

More than 100 accelerator scientists, engineers and particle physicists gathered in person and remotely at Fermilab from 30 October to 1 November for the first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. High-energy muon colliders offer a unique combination of discovery potential and precision. Unlike protons, muons are point-like particles that can achieve comparable physics outcomes at lower centre-of-mass energies. The large mass of the muon also suppresses synchrotron radiation, making muon colliders promising candidates for exploration at the energy frontier.

The International Muon Collider Collaboration (IMCC), supported by the EU MuCol study, is working to assess the potential of a muon collider as a future facility, along with the R&D needed to make it a reality. European engagement in this effort crystallised following the 2020 update to the European Strategy for Particle Physics (ESPPU), which identified the development of bright muon beams as a high-priority initiative. Worldwide interest in a muon collider is quickly growing: the 2023 Particle Physics Project Prioritization Panel (P5) recently identified it as an important future possibility for the US particle-physics community; Japanese colleagues have proposed a muon-collider concept, muTRISTAN (CERN Courier July/August 2024 p8); and Chinese colleagues have actively contributed to IMCC efforts as collaboration members.

Lighting the way

The workshop focused on reviewing the scope and design progress of a muon-cooling demonstrator facility, identifying potential host sites and timelines, and exploring science programmes that could be developed alongside it. Diktys Stratakis (Fermilab) began by reviewing the requirements and challenges of muon cooling. Delivering a high-brightness muon beam will be essential to achieving the luminosity needed for a muon collider. The technique proposed for this is ionisation cooling, wherein the phase-space volume of the muon beam decreases as it traverses a sequence of cells, each containing an energy-absorbing material and accelerating radiofrequency (RF) cavities.
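The balance at the heart of ionisation cooling can be written as a simple rate equation for the normalised transverse emittance: an energy-loss term that shrinks it and a multiple-scattering term that heats it (Neuffer’s formulation). The sketch below integrates that equation for illustrative parameters only – roughly 200 MeV/c muons in a liquid-hydrogen absorber with a 30 cm beta function and ideal RF re-acceleration; none of these numbers are specifications from the workshop.

```python
import math

# Illustrative-only parameters (assumed, not workshop specifications).
m_mu   = 105.7                 # muon mass [MeV]
p      = 200.0                 # muon momentum [MeV/c]
E      = math.hypot(p, m_mu)   # total energy [MeV]
beta   = p / E                 # relativistic beta
dEds   = 28.5                  # mean energy loss in liquid H2 [MeV/m]
X0     = 8.63                  # radiation length of liquid H2 [m]
beta_t = 0.30                  # transverse beta function at the absorber [m]
Es     = 13.6                  # multiple-scattering constant [MeV]

def demittance_ds(eps):
    """Ionisation-cooling equation for the normalised transverse emittance [m]."""
    cooling = -dEds * eps / (beta**2 * E)                      # energy loss shrinks emittance
    heating = beta_t * Es**2 / (2 * beta**3 * E * m_mu * X0)   # multiple scattering heats it
    return cooling + heating

eps, ds = 5e-3, 0.01           # start at 5 mm, integrate in 1 cm steps
for _ in range(600):           # ~6 m of absorber, ideal RF re-acceleration assumed
    eps += demittance_ds(eps) * ds

eps_eq = beta_t * Es**2 / (2 * beta * m_mu * X0 * dEds)
print(f"emittance after 6 m: {eps*1e3:.2f} mm (equilibrium ~ {eps_eq*1e3:.2f} mm)")
```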

Roberto Losito (CERN) called for a careful balance between ambition and practicality – the programme must be executed in a timely way if a muon collider is to be a viable next-generation facility. The Muon Cooling Demonstrator programme was conceived to prove that this technology can be developed, built and reliably operated. This is a critical step for any muon-collider programme, as highlighted in the ESPPU–LDG Accelerator R&D Roadmap published in 2022. The plan is to pursue a staged approach, starting with the development of the magnet, RF and absorber technology, and demonstrating the robust operation of high-gradient RF cavities in high magnetic fields. The components will then be integrated into a prototype cooling cell. The programme will conclude with a demonstration of the operation of a multi-cell cooling system with a beam, building on the cooling proof of principle made by the Muon Ionisation Cooling Experiment.

Chris Rogers (STFC RAL) summarised an emerging consensus that it is critical to demonstrate the reliable operation of a cooling lattice formed of multiple cells. While the technological complexity of the cooling-cell prototype will undergo further review, the preliminary choice sets a moderately challenging performance target that could be achieved within five to seven years with reasonable investment. The target cooling performance of a whole cooling lattice remains to be established and depends on future funding levels. However, delegates agreed that a timely demonstration is more important than an ambitious cooling target.

Worldwide interest in a muon collider is quickly growing

The workshop also provided an opportunity to assess progress in designing the cooling-cell prototype. Given that the muon beam originates from hadron decays and is initially the size of a watermelon, solenoid magnets were chosen as they can contain large beams in a compact lattice and provide focusing in both horizontal and vertical planes simultaneously. Marco Statera (INFN LASA) presented preliminary solutions for the solenoid coil configuration based on high-temperature superconductors operating at 20 K: the challenge is to deliver the target magnetic field profile given axial forces, coil stresses and compact integration.

In ionisation cooling, low-Z absorbers are used to reduce the transverse momenta of the muons while keeping the multiple scattering at manageable levels. Candidate materials are lithium hydride and liquid hydrogen. Chris Rogers discussed the need to test absorbers and containment windows at the highest intensities. The potential for performance tests using muons or intensity tests using another particle species such as protons was considered to verify understanding of the collective interaction between the beam and the absorber. RF cavities are required to replace longitudinal energy lost in the absorbers. Dario Giove (INFN LASA) introduced the prototype of an RF structure based on three coupled 704 MHz cavities and presented a proposal to use existing INFN capabilities to carry out a test programme for materials and cavities in magnetic fields. The use of cavity windows was also discussed, as it would enable greater accelerating gradients, though at the cost of beam degradation, increased thermal loads and possible cavity detuning. The first steps in integrating these latest hardware designs into a compact cooling cell were presented by Lucio Rossi (INFN LASA and UMIL). Future work needs to address the management of the axial forces and cryogenic heat loads, Rossi observed.

Many institutes presented a strong interest in contributing to the programme, both in the hardware R&D and hosting the eventual demonstrator. The final sessions of the workshop focused on potential host laboratories.

The event underscored the critical need for sustained innovation, timely implementation and global cooperation

At CERN, two potential sites were discussed, with ongoing studies focusing on the TT7 tunnel, where a moderate-power 10 kW proton beam from the Proton Synchrotron could be used for muon production. Preliminary beam physics studies of muon beam production and transport are already underway. Lukasz Krzempek (CERN) and Paul Jurj (Imperial College London) presented the first integration and beam-physics studies of the demonstrator facility in the TT7 tunnel, highlighting civil engineering and beamline design requirements, logistical challenges and safety considerations, finding no apparent showstoppers.

Jeff Eldred (Fermilab) gave an overview of Fermilab’s broad range of candidate sites and proton-beam energies. While further feasibility studies are required, Eldred highlighted that using 8 GeV protons from the Booster is an attractive option due to the favourable existing infrastructure and its alignment with Fermilab’s muon-collider scenario, which envisions a proton driver based on the same Booster proton energy.

The Fermilab workshop represented a significant milestone in advancing the Muon Cooling Demonstrator, highlighting enthusiasm from the US community to join forces with the IMCC and growing interest in Asia. As Mark Palmer (BNL) observed in his closing remarks, the event underscored the critical need for sustained innovation, timely implementation and global cooperation to make the muon collider a reality.

CLOUD explains Amazon aerosols

In a paper published in the journal Nature, the CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

Aerosols are microscopic particles suspended in the atmosphere that arise from both natural sources and human activities. They play an important role in Earth’s climate system because they seed clouds and influence their reflectivity and coverage. Most aerosols arise from the spontaneous condensation of molecules that are present in the atmosphere only in minute concentrations. However, the vapours responsible for their formation are not well understood, particularly in the remote upper troposphere.

The CLOUD (Cosmics Leaving Outdoor Droplets) experiment at CERN is designed to investigate the formation and growth of atmospheric aerosol particles in a controlled laboratory environment. CLOUD comprises a 26 m³ ultra-clean chamber and a suite of advanced instruments that continuously analyse its contents. The chamber contains a precisely selected mixture of gases under atmospheric conditions, into which beams of charged pions are fired from CERN’s Proton Synchrotron to mimic the influence of galactic cosmic rays.

“Large concentrations of aerosol particles have been observed high over the Amazon rainforest for the past 20 years, but their source has remained a puzzle until now,” says CLOUD spokesperson Jasper Kirkby. “Our latest study shows that the source is isoprene emitted by the rainforest and lofted in deep convective clouds to high altitudes, where it is oxidised to form highly condensable vapours. Isoprene represents a vast source of biogenic particles in both the present-day and pre-industrial atmospheres that is currently missing in atmospheric chemistry and climate models.”

Isoprene is a hydrocarbon containing five carbon atoms and eight hydrogen atoms. It is emitted by broad-leaved trees and other vegetation and is the most abundant non-methane hydrocarbon released into the atmosphere. Until now, isoprene’s ability to form new particles has been considered negligible.

Seeding clouds

The CLOUD results change this picture. By studying the reaction of hydroxyl radicals with isoprene at upper tropospheric temperatures of –30 °C and –50 °C, the collaboration discovered that isoprene oxidation products form copious particles at ambient isoprene concentrations. This new source of aerosol particles does not require any additional vapours. However, when minute concentrations of sulphuric acid or iodine oxoacids were introduced into the CLOUD chamber, a 100-fold increase in aerosol formation rate was observed. Although sulphuric acid derives mainly from anthropogenic sulphur dioxide emissions, the acid concentrations used in CLOUD can also arise from natural sources.

In addition, the team found that isoprene oxidation products drive rapid growth of particles to sizes at which they can seed clouds and influence the climate – a behaviour that persists in the presence of nitrogen oxides produced by lightning at upper-tropospheric concentrations. After continued growth and descent to lower altitudes, these particles may provide a globally important source for seeding shallow continental and marine clouds, which influence Earth’s radiative balance – the amount of incoming solar radiation compared to outgoing longwave radiation (see “Seeding clouds” figure).

“This new source of biogenic particles in the upper troposphere may impact estimates of Earth’s climate sensitivity, since it implies that more aerosol particles were produced in the pristine pre-industrial atmosphere than previously thought,” adds Kirkby. “However, until our findings have been evaluated in global climate models, it’s not possible to quantify the effect.”

The CLOUD findings are consistent with aircraft observations over the Amazon, as reported in an accompanying paper in the same issue of Nature. Together, the two papers provide a compelling picture of the importance of isoprene-driven aerosol formation and its relevance for the atmosphere.

Since it began operation in 2009, the CLOUD experiment has unearthed several mechanisms by which aerosol particles form and grow in different regions of Earth’s atmosphere. “In addition to helping climate researchers understand the critical role of aerosols in Earth’s climate, the new CLOUD result demonstrates the rich diversity of CERN’s scientific programme and the power of accelerator-based science to address societal challenges,” says CERN Director for Research and Computing, Joachim Mnich.

Painting Higgs’ portrait in Paris

The 14th Higgs Hunting workshop took place from 23 to 25 September 2024 at Orsay’s IJCLab and Paris’s Laboratoire Astroparticule et Cosmologie. More than 100 participants joined lively discussions to decipher the latest developments in theory and results from the ATLAS and CMS experiments.

The portrait of the Higgs boson painted by experimental data is becoming more and more precise. Many new Run 2 and first Run 3 results have developed the picture this year. Highlights included the latest di-Higgs combinations with cross-section upper limits reaching down to 2.5 times the Standard Model (SM) expectations. A few excesses seen in various analyses were also discussed. The CMS collaboration reported a brand-new excess of top–antitop events near the top–antitop production threshold, with a local significance of more than 5σ above the background described by perturbative quantum chromodynamics (QCD) only, that could be due to a pseudoscalar top–antitop bound state. A new W-boson mass measurement by the CMS collaboration – a subject deeply connected to electroweak symmetry breaking – was also presented, reporting a value consistent with the SM prediction with a precision of 9.9 MeV (CERN Courier November/December 2024 p7).

Parton shower event generators were in the spotlight. Historical talks by Torbjörn Sjöstrand (Lund University) and Bryan Webber (University of Cambridge) described the evolution of the PYTHIA and HERWIG generators, the crucial role they played in the discovery of the Higgs boson, and the role they now play in the LHC’s physics programme. Differences in the modelling of the parton–shower systematics by the ATLAS and CMS collaborations led to lively discussions!

The vision talk was given by Lance Dixon (SLAC) about the reconstruction of scattering amplitudes directly from analytic properties, as a complementary approach to Lagrangians and Feynman diagrams. Oliver Bruning (CERN) conveyed the message that the HL-LHC accelerator project is well on track, and Patricia McBride (Fermilab) reached a similar conclusion regarding ATLAS and CMS’s Phase-2 upgrades, enjoining new and young people to join the effort to ensure the upgrades are ready and commissioned for the start of Run 4.

The next Higgs Hunting workshop will be held in Orsay and Paris from 15 to 17 July 2025, following EPS-HEP in Marseille from 7 to 11 July.

Trial trap on a truck

Thirty years ago, physicists from Harvard University set out to build a portable antiproton trap. They tested it on electrons, transporting them 5000 km from Nebraska to Massachusetts, but it was never used to transport antimatter. Now, a spin-off project of the Baryon Antibaryon Symmetry Experiment (BASE) at CERN has tested their own antiproton trap, this time using protons. The ultimate goal is to deliver antiprotons to labs beyond CERN’s reach.

“For studying the fundamental properties of protons and antiprotons, you need to take extremely precise measurements – as precise as you can possibly make it,” explains principal investigator Christian Smorra. “This level of precision is extremely difficult to achieve in the antimatter factory, and can only be reached when the accelerator is shut down. This is why we need to relocate the measurements – so we can get rid of these problems and measure anytime.”

The team has made considerable strides to miniaturise their apparatus. BASE-STEP is far and away the most compact design for an antiproton trap yet built, measuring just 2 metres in length, 1.58 metres in height and 0.87 metres across. Weighing in at 1 tonne, it nevertheless makes transportation a complex operation. On 24 October, 70 protons were loaded into the trap, which was lifted onto a truck using two overhead cranes. The protons made a round trip through CERN’s main site before returning home to the antimatter factory. All 70 protons were safely transported and the experiment with these particles continued seamlessly, successfully demonstrating the trap’s performance.

Antimatter needs to be handled carefully, to avoid it annihilating with the walls of the trap. This is hard to achieve in the controlled environment of a laboratory, let alone on a moving truck. Just like in the BASE laboratory, BASE-STEP uses a Penning trap with two electrode stacks inside a single solenoid. The magnetic field confines charged particles radially, and the electric fields trap them axially. The first electrode stack collects antiprotons from CERN’s antimatter factory and serves as an “airlock” by protecting antiprotons from annihilation with the molecules of external gases. The second is used for long-term storage. While in transit, non-destructive image-current detection monitors the particles and makes sure they have not hit the walls of the trap.
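For readers curious about the numbers, the normal modes of a single proton in an ideal Penning trap follow from textbook formulas: the magnetic field sets the cyclotron frequency, the electrostatic well sets the axial frequency, and their combination produces a slow magnetron drift. The field strength and trap dimensions below are assumptions chosen purely for illustration, not BASE-STEP design values.

```python
import math

# Back-of-envelope Penning-trap mode frequencies for a single proton.
# B, V0 and d are assumed illustrative values, not BASE-STEP specifications.
q  = 1.602e-19      # proton charge [C]
m  = 1.673e-27      # proton mass [kg]
B  = 1.0            # solenoid field [T]           (assumed)
V0 = 5.0            # trapping potential [V]       (assumed)
d  = 5e-3           # characteristic trap size [m] (assumed)

omega_c = q * B / m                         # free cyclotron frequency (radial confinement)
omega_z = math.sqrt(q * V0 / (m * d**2))    # axial oscillation (electrostatic confinement)
root = math.sqrt(omega_c**2 / 4 - omega_z**2 / 2)
omega_plus, omega_minus = omega_c / 2 + root, omega_c / 2 - root  # modified cyclotron, magnetron

for name, w in [("cyclotron", omega_plus), ("axial", omega_z), ("magnetron", omega_minus)]:
    print(f"{name:>9}: {w / (2 * math.pi) / 1e3:.1f} kHz")
```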

“We originally wanted a system that you can put in the back of your car,” says Smorra. “Next, we want to try using permanent magnets instead of a superconducting solenoid. This would make the trap even smaller and save CHF 300,000. With this technology, there will be so much more potential for future experiments at CERN and beyond.”

With or without a superconducting magnet, continuous cooling is essential to prevent heat from degrading the trap’s ultra-high vacuum. Penning traps conventionally require two separate cooling systems – one for the trap and one for the superconducting magnet. BASE-STEP combines the cooling systems into one, as the Harvard team proposed in 1993. Ultimately, the transport system will have a cryocooler that is attached to a mobile power generator with a liquid-helium buffer tank present as a backup. Should the power generator be interrupted, the back-up cooling system provides a grace period of four hours to fix it and save the precious cargo of antiprotons. But such a scenario carries no safety risk given the minuscule amount of antimatter being transported. “The worst that can happen is the antiprotons annihilate, and you have to go back to the antimatter factory to refill the trap,” explains Smorra.

With the proton trial-run a success, the team are confident they will be able to use this apparatus to successfully deliver antiprotons to precision laboratories in Europe. Next summer, BASE-STEP will load up the trap with 1000 antiprotons and hit the road. Their first stop is scheduled to be Heinrich Heine University in Düsseldorf, Germany.

“We can use the same apparatus for the antiproton transport,” says Smorra. “All we need to do is switch the polarity of the electrodes.”

Emphasising the free circulation of scientists

Physics is a universal language that unites scientists worldwide. No event illustrates this more vividly than the general assembly of the International Union of Pure and Applied Physics (IUPAP). The 33rd assembly convened 100 delegates representing territories around the world in Haikou, China, from 10 to 14 October 2024. Amid today’s polarised global landscape, one clear commitment emerged: to uphold the universality of science and ensure the free movement of scientists.

IUPAP was established in 1922 in the aftermath of World War I to coordinate international efforts in physics. Its logo is recognisable from conferences and proceedings, but its mission is less widely understood. IUPAP is the only worldwide organisation dedicated to the advancement of all fields of physics. Its goals include promoting global development and cooperation in physics by sponsoring international meetings; strengthening physics education, especially in developing countries; increasing diversity and inclusion in physics; advancing the participation and recognition of women and of people from under-represented groups; enhancing the visibility of early-career talents; and promoting international agreements on symbols, units, nomenclature and standards. At the 33rd assembly, 300 physicists were elected to the executive council and specialised commissions for a period of three years.

Global scientific initiatives were highlighted, including the International Year of Quantum Science and Technology (IYQ2025) and the International Decade on Science for Sustainable Development (IDSSD) from 2024 to 2033, which was adopted by the United Nations General Assembly in August 2023. A key session addressed the importance of industry partnerships, with delegates exploring strategies to engage companies in IYQ2025 and IDSSD to further IUPAP’s mission of using physics to drive societal progress. Nobel laureate Giorgio Parisi discussed the role of physics in promoting a sustainable future, and public lectures by fellow laureates Barry Barish, Takaaki Kajita and Samuel Ting filled the 1820-seat Oriental Universal Theater with enthusiastic students.

A key focus of the meeting was visa-related issues affecting international conferences. Delegates reaffirmed the union’s commitment to scientists’ freedom of movement. IUPAP stands against any discrimination in physics and will continue to sponsor events only in locations that uphold this value – a stance that is orthogonal to the policy of countries imposing sanctions on scientists affiliated with specific institutions.

A joint session with the fall meeting of the Chinese Physical Society celebrated the 25th anniversary of the IUPAP working group “Women in Physics” and emphasised diversity, equity and inclusion in the field. Since 2002, IUPAP has established precise guidelines for the sponsorship of conferences to ensure that women are fairly represented among participants, speakers and committee members, and has actively monitored the data ever since. This has contributed to a significant change in the participation of women in IUPAP-sponsored conferences. IUPAP is now building on this still-necessary work on gender by focusing on discrimination on the grounds of disability and ethnicity.

The closing ceremony brought together the themes of continuity and change. Incoming president Silvina Ponce Dawson (University of Buenos Aires) and president-designate Sunil Gupta (Tata Institute) outlined their joint commitment to maintaining an open dialogue among all physicists in an increasingly fragmented world, and to promoting physics as an essential tool for development and sustainability. Outgoing leaders Michel Spiro (CNRS) and Bruce McKellar (University of Melbourne) were honoured for their contributions, and the ceremonial handover symbolised a smooth transition of leadership.

As the general assembly concluded, there was a palpable sense of momentum. From strategic modernisation to deeper engagement with global issues, IUPAP is well-positioned to make physics more relevant and accessible. The resounding message was one of unity and purpose: the physics community is dedicated to leveraging science for a brighter, more sustainable future.

The new hackerpreneur

The World Wide Web, AI and quantum computing – what do these technologies have in common? They all started out as “hacks”, says Jiannan Zhang, founder of the open-source community platform DoraHacks. “When the Web was invented at CERN, it demonstrated that in order to fundamentally change how people live and work, you have to think of new ways to use existing technology,” says Zhang. “Progress cannot be made if you always start from scratch. That’s what hackathons are for.”

Ten years ago, Zhang helped organise the first CERN Webfest, a hackathon that explores creative uses of technology for science and society. Webfest helped Zhang develop his coding skills and knowledge of physics by applying it to something beyond his own discipline. He also made long-lasting connections with teammates, who were from different academic backgrounds and all over the world. After participating in more hackathons, Zhang’s growing “hacker spirit” inspired him to start his own company. In 2024 Zhang returned to Webfest not as a participant, but as the CEO of DoraHacks.

Hackathons are social coding events often spanning multiple days. They are inclusive and open – no academic institution or corporate backing is required – making them accessible to a diverse range of talented individuals. Participants work in teams, pooling their skills to tackle technical problems through software, hardware or a business plan for a new product. Physicists, computer scientists, engineers and entrepreneurs all bring their strengths to the table. Young scientists can pursue work that may not fit within typical research structures, develop their skills, and build portfolios and professional networks.

“If you’re really passionate about some­thing, you should be able to jump on a project and work on it,” says Zhang. “You shouldn’t need to be associated with a university or have a PhD to pursue it.”

For early-career researchers, hackathons offer more than just technical challenges. They provide an alternative entry point into research and industry, bridging the gap between academia and real-world applications. University-run hackathons often attract corporate sponsors, giving them the budget to rent out stadiums with hundreds, sometimes thousands, of attendees.

“These large-scale hackathons really capture the attention of headhunters and mentors from industry,” explains Zhang. “They see the events as a recruitment pool. It can be a really effective way to advance careers and speak to representatives of big companies, as well as enhancing your coding skills.”

In the 2010s, weekend hackathons served as Zhang’s stepping stone into entrepreneurship. “I used to sit in the computer-science common room and work on my hacks. That’s how I met most of my friends,” recalls Zhang. “But later I realised that to build something great, I had to effectively organise people and capital. So I started to skip my computer-science classes and sneak into the business classrooms.” Zhang would hide in the back row of the business lectures, plotting his path towards entrepreneurship. He networked with peers to evaluate different business models each day. “It was fun to combine our knowledge of engineering and business theory,” he adds. “It made the journey a lot less stressful.”

But the transition from science to entrepreneurship was hard. “At the start you must learn and do everything yourself. The good thing is you’re exposed to lots of new skills and new people, but you also have to force yourself to do things you’re not usually good at.”

This is a dilemma many entrepreneurs face: whether to learn new skills from scratch, or to find business partners and delegate tasks. But finding trustworthy business partners is not always easy, and making the wrong decision can hinder the start-up’s progress. That’s why planning the company’s vision and mission from the start is so important.

“The solution is actually pretty straightforward,” says Zhang. “You need to spend more time completing the important milestones yourself, to ensure you have a feasible product. Once you make the business plan and vision clear, you get support from everywhere.”

Decentralised community governance

Rather than hackathon participants competing for a week before abandoning their code, Zhang started DoraHacks to give teams from all over the world a chance to turn their ideas into fully developed products. “I want hackathons to be more than a recruitment tool,” he explains. “They should foster open-source development and decentralised community governance. Today, a hacker from Tanzania can collaborate virtually with a team in the US, and teams gain support to develop real products. This helps make tech fields much more diverse and accessible.”

Zhang’s company enables this by reducing logistical costs for organisers and providing funding mechanisms for participants, making hackathons accessible to aspiring researchers beyond academic institutions. As the community expands, new doors open for young scientists at the start of their careers.

“The business model is changing,” says Zhang. Hackathons are becoming fundamental to emerging technologies, particularly in areas like quantum computing, blockchain and AI, which often start out open source. “There will be a major shift in the process of product creation. Instead of building products in isolation, new technologies rely on platforms and infrastructure where hackers can contribute.”

Today, hackathons aren’t just about coding or networking – they’re about pushing the boundaries of what’s possible, creating meaningful solutions and launching new career paths. They act as incubators for ideas with lasting impact. Zhang wants to help these ideas become reality. “The future of innovation is collaborative and open source,” he says. “The old world relies on corporations building moats around closed-source technology, which is inefficient and inaccessible. The new world is centred around open platform technology, where people can build on top of old projects. This collaborative spirit is what makes the hacker movement so important.”

The value of being messy

The line between science communication and public relations has become increasingly blurred. On one side, scientific press officers highlight institutional success, secure funding and showcase breakthrough discoveries. On the other, science communicators and journalists present scientific findings in a way that educates and entertains readers – acknowledging both the triumphs and the inherent uncertainties of the scientific process.

The core difference between these approaches lies in how they handle the inevitable messiness of science. Science isn’t a smooth, linear path of consistent triumphs; it’s an uncertain, trial-and-error journey. This uncertainty, and our willingness to discuss it openly, is what distinguishes authentic science communication from a polished public relations (PR) pitch. By necessity, PR often strives to present a neat narrative, free of controversy or doubt, but this risks creating a distorted perception of what science actually is.

Finding your voice

Take, for example, the situation in particle physics. Experiments probing the fundamental laws of physics are often critiqued in the press for their hefty price tags – particularly when people are eager to see resources directed towards solving global crises like climate change or preventing future pandemics. When researchers and science communicators are finding their voice, a pressing question is how much messiness to communicate in uncertain times.

After completing my PhD as part of the ATLAS collaboration, I became a science journalist and communicator, connecting audiences across Europe and America with the joy of learning about fundamental physics. After a recent talk at the Royal Institution in London, in which I explained how ATLAS measures fundamental particles, I received an email from a colleague. The only question the talk prompted him to ask was about the safety of colliding protons, aiming to create undiscovered particles. This reaction reflects how scientific misinformation – such as the idea that experiments at CERN could endanger the planet – can be persistent and difficult to eradicate.

In response to such criticisms and concerns, I have argued many times for the value of fundamental physics research, often highlighting the vast number of technological advancements it enables, from touch screens to healthcare advances. However, we must be wary not to only rely on this PR tactic of stressing the tangible benefits of research, as it can sometimes sidestep the uncertainties and iterative nature of scientific investigation, presenting an oversimplified version of scientific progress.

From Democritus to the Standard Model

This PR-driven approach risks undermining public understanding and trust in science in the long run. When science is framed solely as a series of grand successes without any setbacks, people may become confused or disillusioned when they inevitably encounter controversies or failures. Instead, this is where honest science communication shines – admitting that our understanding evolves, that we make mistakes and that uncertainties are an integral part of the process.

Our evolving understanding of particle physics is a perfect illustration of this. From Democritus’ concept of “indivisible atoms” to the development of the Standard Model, every new discovery has refined or even overhauled our previous understanding. This is the essence of science – always refining, never perfect – and it’s exactly what we should be communicating to the public.

Embracing this messiness doesn’t necessarily reduce public trust. When presenting scientific results to the public, it’s important to remember that uncertainty can take many forms, and how we communicate these forms can significantly affect credibility. Technical uncertainty – expressing complexity or incomplete information – often increases audience trust, as it communicates the real intricacies of scientific research. Conversely, consensus uncertainty – spotlighting disagreements or controversies among experts – can have a negative impact on credibility. When it comes to genuine disagreements among scientists, effectively communicating uncertainty to the public requires a thoughtful balance. Transparency is key: acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process. Providing context about why disagreements exist, whether due to limited data or competing theoretical frameworks, also helps in making the uncertainty comprehensible.

Embrace errors

In other words, the next time you present your latest results on social media, don’t shy away from including the error bars. And if you must have a public argument with a colleague about what the results mean, context is essential!

Acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process

No one knows where the next breakthrough will come from or how it might solve the challenges we face. In an information ecosystem increasingly filled with misinformation, scientists and science communicators must help people understand the iterative, uncertain and evolving nature of science. As science communicators, we should be cautious not to stray too far into PR territory. Authentic communication doesn’t mean glossing over uncertainties but rather embracing them as an essential part of the story. This way, the public can appreciate science not just as a collection of established facts, but as an ongoing, dynamic process – messy, yet ultimately satisfying.

Cornering compressed SUSY

CMS figure 1

Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The masses of SUSY sparticles have very important implications for both the physics of our universe and how they could be potentially produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted to the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to continue to decay to lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and LSPs is small. This means the low rates of production of massive sparticles are not accompanied by high-momentum decay products in the detector. Most of their mass ends up escaping in the form of invisible particles, significantly complicating observation.

This new CMS result turns this difficulty on its head, using a kinematic observable R_ISR, which is directly sensitive to the mass of LSPs as opposed to the mass difference between parent sparticles and LSPs. The result is even better discrimination between SUSY and SM backgrounds when sparticle spectra are more compressed.

This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum kick, with this fraction telling us about the LSP masses through the R_ISR observable.
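Schematically, R_ISR compares the missing transverse momentum to the ISR system against which the sparticles recoil; in the compressed limit it approaches the ratio of the LSP mass to the parent sparticle mass. The sketch below shows the idea with made-up momenta; the actual CMS analysis reconstructs R_ISR within a full decomposition of each event, so this is only a cartoon of the observable.

```python
import numpy as np

def r_isr(met_xy, isr_jets_xy):
    """Toy R_ISR: fraction of the transverse ISR recoil carried by the invisible system.
    met_xy: missing transverse momentum vector; isr_jets_xy: list of ISR jet pT vectors.
    For compressed spectra this ratio approaches m_LSP / m_parent."""
    pt_isr = np.sum(np.asarray(isr_jets_xy, dtype=float), axis=0)  # total ISR transverse vector
    return abs(np.dot(np.asarray(met_xy, dtype=float), pt_isr)) / np.dot(pt_isr, pt_isr)

# Example: a ~300 GeV ISR system recoiling against sparticles whose visible decay
# products are soft, so most of the recoil is balanced by the invisible LSPs.
print(r_isr(met_xy=[265.0, 20.0], isr_jets_xy=[[-290.0, -35.0], [-10.0, 5.0]]))  # ~0.88
```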

Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with R_ISR and other selected discriminating variables, are used to categorise recorded events and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of R_ISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in R_ISR, over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.

In the absence of significant observed deviations in data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of stop quarks, EWKinos and sleptons. As can be seen in the results for stop quarks (figure 1), the analysis is able to achieve excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space it could be hiding.
