US publishes 40-year vision for particle physics

Elementary Particle Physics: The Higgs and Beyond

Big science requires long-term planning. In June, the US National Academies of Sciences, Engineering, and Medicine published an unprecedented 40-year strategy for US particle physics titled Elementary Particle Physics: The Higgs and Beyond. Its recommendations include participating in the proposed Future Circular Collider at CERN and hosting the world’s highest-energy elementary particle collider around the middle of the century (see “Eight recommendations” panel). The report assesses that a 10 TeV muon collider would complement the discovery potential of a 100 TeV proton collider.

“The shift to a 40-year horizon in the new report reflects a recognition that modern particle-physics projects and scientific questions are of unprecedented scale and complexity, demanding a much longer-term strategic commitment, international cooperation and investment for continued leadership,” says report co-chair Maria Spiropulu of the California Institute of Technology. “A staggered approach towards large research-infrastructure projects, rich in scientific advancement, technological breakthroughs and collaboration, can shield the field from stagnation.”

Eight recommendations

1. The US should host the world’s highest-energy elementary particle collider around the middle of the century. This requires the immediate creation of a national muon collider R&D programme to enable the construction of a demonstrator of the key new technologies and their integration.

2. The US should participate in the international Future Circular Collider Higgs factory currently under study at CERN to unravel the physics of the Higgs boson.

3. The US should continue to pursue and develop new approaches to questions ranging from neutrino physics and tests of fundamental symmetries to the mysteries of dark matter, dark energy, cosmic inflation and the excess of matter over antimatter in the universe.

4. The US should explore new synergistic partnerships across traditional science disciplines and funding boundaries.

5. The US should invest for the long journey ahead with sustained R&D funding in accelerator science and technology, advanced instrumentation, all aspects of computing, emerging technologies from other disciplines and a healthy core research programme.

6. The federal government should provide the means and the particle-physics community should take responsibility for recruiting, training, mentoring and retaining the highly motivated student and postdoctoral workforce required for the success of the field’s ambitious science goals.

7. The US should engage internationally through existing and new partnerships, and explore new cooperative planning mechanisms.

8. Funding agencies, national laboratories and universities should work to minimise the environmental impact of particle-physics research and facilities.

Source: National Academies of Sciences, Engineering, and Medicine 2025 Elementary Particle Physics: The Higgs and Beyond. Washington, DC: The National Academies Press.

The report is authored by a committee of leading scientists selected by the National Academies. Its mandate complements the grassroots-led Snowmass process and the budget-conscious P5 process (CERN Courier January/February 2024 p7). The previous report in this series, Revealing the Hidden Nature of Space and Time: Charting the Course for Elementary Particle Physics, was published in 2006. It called for the full exploitation of the LHC, a strategic focus on linear-collider R&D, expanding particle astrophysics, and pursuing an internationally coordinated, staged programme in neutrino physics.

Two conclusions underpin the new report’s recommendations. The first identifies three workforce issues currently threatening the future of particle physics: the morale of early-career scientists, a shortfall in the number of accelerator scientists, and growing barriers to international exchanges. The second urges US leadership in elementary particle physics, citing benefits to science, the nation and humanity.

Full coherence at fifty

The most common neutrino interactions are the most difficult to detect. But thanks to advances in detector technology, coherent elastic neutrino–nucleus scattering (CEνNS) is emerging from behind backgrounds, 50 years after it was first hypothesised. These low-energy interactions are insensitive to the intricacies of nuclear or nucleon structure, making them a promising tool for precision searches for physics beyond the Standard Model. They also offer a route to miniaturising neutrino detectors.

“I am convinced that we are seeing the beginning of a new field in neutrino physics based on CEνNS observations,” says Manfred Lindner (Max Planck Institute for Nuclear Physics in Heidelberg), the spokesperson for the CONUS+ experiment, which reported the first evidence for fully coherent CEνNS in July. “The technology of CONUS+ is mature and seems scalable. I believe that we are at the beginning of precision neutrino physics with CEνNS and CONUS+ is one of the door openers!”

Act of hubris

Daniel Z Freedman is not best known for CEνNS, but in 1974 the future supergravity architect suggested that experimenters search for evidence of neutrinos interacting not with individual nucleons but “coherently” with entire nuclei. This process should dominate when the de Broglie wavelength of the neutrino is comparable to or larger than the diameter of the nucleus. Because it is impossible to know which specific neutron exchanged a Z boson with the incoming neutrino, the contributions of the individual neutrons sum in the quantum amplitude rather than in the probability, leading to an N² dependence on the number of neutrons. As a result, CEνNS cross sections are typically enhanced by a factor of between 100 and 1000.
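
The coherence condition and the N² enhancement can be checked with back-of-the-envelope numbers (the constants below – ħc ≈ 197.3 MeV·fm and the empirical nuclear-radius rule R ≈ 1.2 A^(1/3) fm – are standard approximate values supplied here, not figures from the article):

```python
# Back-of-the-envelope check of the CEvNS coherence condition.
# Standard approximate constants (assumptions, not article values):
HBARC_MEV_FM = 197.3  # hbar*c in MeV*fm

def de_broglie_wavelength_fm(e_nu_mev):
    """Reduced de Broglie wavelength of a relativistic neutrino, in fm."""
    return HBARC_MEV_FM / e_nu_mev

def nuclear_diameter_fm(a):
    """Rough nuclear diameter from the empirical R = 1.2 * A**(1/3) fm rule."""
    return 2 * 1.2 * a ** (1.0 / 3.0)

# A 10 MeV reactor anti-neutrino scattering on germanium (A ~ 73):
print(de_broglie_wavelength_fm(10.0))  # ~20 fm
print(nuclear_diameter_fm(73))         # ~10 fm: fully coherent regime

# Amplitudes from the N ~ 41 neutrons add coherently, so the cross
# section picks up a factor of order N**2 relative to a single neutron:
print(41 ** 2)  # 1681
```

For the 52 MeV neutrinos seen by COHERENT the wavelength shrinks to about 4 fm, smaller than a heavy nucleus, which is why coherence there is only partial.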

Freedman noted that his proposal may have been an “act of hubris”, because the interaction rate, detector resolution and backgrounds would all pose grave experimental difficulties. His caveat was perspicacious. It took until 2017 for indisputable evidence for CEνNS to emerge at Oak Ridge National Laboratory in the US, where the COHERENT experiment observed CEνNS by neutrinos with a maximum energy of 52 MeV, emerging from pion decays at rest (CERN Courier October 2017 p8). At these energies, the coherence condition is only partially fulfilled, and nuclear structure still plays a role.

The CONUS+ collaboration now presents evidence for CEνNS in the fully coherent regime. The experiment – one of many launched at nuclear reactors following the COHERENT demonstration – uses reactor electron anti-neutrinos with energies below 10 MeV, recorded over 119 days at the Leibstadt Nuclear Power Plant in Switzerland. The team observed 395 ± 106 CEνNS events against a Standard Model expectation of 347 ± 59, corresponding to a statistical significance for the observation of CEνNS of 3.7σ.
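
Treating the quoted uncertainty as Gaussian (a simplification on our part), the quoted significance is roughly the extracted signal divided by its uncertainty:

```python
# Rough consistency check of the quoted 3.7 sigma: for a Gaussian
# uncertainty, significance ~ signal / uncertainty (a simplification).
signal, uncertainty = 395.0, 106.0
significance = signal / uncertainty
print(round(significance, 1))  # 3.7
```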

It is no wonder that detection took 50 years. The only signal of CEνNS is a gentle nuclear recoil – often compared to the impact of a ping-pong ball on a tanker. In CONUS+, the nuclear recoils from CEνNS interactions are detected via the ionisation signal of point-contact high-purity germanium detectors with energy thresholds as low as 160 eV.
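
The required threshold follows from two-body kinematics: the maximum recoil energy is T_max = 2E²/(M + 2E) for neutrino energy E and nuclear mass M. A quick sketch with generic values we supply (atomic mass unit of 931.5 MeV, A ≈ 73 for germanium):

```python
# Maximum nuclear recoil energy in elastic neutrino-nucleus scattering,
# T_max = 2*E^2 / (M + 2*E), from relativistic two-body kinematics.
AMU_MEV = 931.5  # atomic mass unit in MeV (generic value, our assumption)

def t_max_kev(e_nu_mev, a):
    """Maximum recoil energy in keV for neutrino energy E (MeV) on mass number a."""
    m_mev = a * AMU_MEV
    return 2 * e_nu_mev ** 2 / (m_mev + 2 * e_nu_mev) * 1e3

# A 10 MeV reactor anti-neutrino on germanium (A ~ 73) deposits at most
# about 3 keV -- hence the need for thresholds far below that, such as
# the 160 eV quoted for CONUS+.
print(round(t_max_kev(10.0, 73), 2))  # ~2.94
```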

The team has now increased the mass of their four semiconductor detectors from 1 to 2.4 kg to provide better statistics and potentially a lower threshold energy. CONUS+ is highly sensitive to physics beyond the Standard Model, says the team, including non-standard interaction parameters, new light mediators and electromagnetic properties of the neutrino such as electrical millicharges or neutrino magnetic moments. Lindner estimates that the CONUS+ technology could be scaled up to 100 kg, potentially yielding 100,000 CEνNS events per year of operation.

Into the neutrino fog

One researcher’s holy grail is another’s curse. In 2024, dark-matter experiments reported entering the “neutrino fog”, as their sensitivity to nuclear recoils crossed the threshold to detect a background of solar-neutrino CEνNS interactions. The PandaX-4T and XENONnT collaborations reported 2.6σ and 2.7σ evidence for CEνNS interactions in their liquid–xenon time projection chambers, based on estimated signals of 79 and 11 interactions, respectively. These were the first direct measurements of nuclear recoils from solar neutrinos with dark-matter detectors. Boron-8 solar neutrinos have slightly higher energies than those detected by CONUS+, and are also in the fully coherent regime.

“The neutrino flux in CONUS+ is many orders of magnitude bigger than in dark-matter detectors,” notes Lindner, who is also co-spokesperson of the XENON collaboration. “This is compensated by a much larger target mass, a larger CEνNS cross section due to the larger number of neutrons in xenon versus germanium, a longer running time and differences in detection efficiencies. Both experiments have in common that all backgrounds of natural or imposed radioactivity must be suppressed by many orders of magnitude such that the CEνNS process can be extracted over backgrounds.”

The current experimental frontier for CEνNS is towards low energy thresholds, concludes COHERENT spokesperson Kate Scholberg of Duke University. “The coupling of recoil energy to observable energy can be in the form of a dim flash of light picked up by light sensors, a tiny zap of charge collected in a semiconductor detector, or a small thermal pulse observed in a bolometer. A number of collaborations are pursuing novel technologies with sub-keV thresholds, among them cryogenic bolometers. A further goal is measurement over a range of nuclei, as this will test the SM prediction of an N² dependence of the CEνNS cross section. And for higher-energy neutrino sources, for which the coherence is not quite perfect, there are opportunities to learn about nuclear structure. Another future possibility is directional recoil detection. If we are lucky, nature may give us a supernova burst of CEνNS recoils. As for societal applications, CEνNS has promise for nuclear-reactor monitoring for nonproliferation purposes due to its large cross section and an interaction threshold below the 1.8 MeV threshold of inverse beta decay.”

Einstein Probe detects exotic gamma-ray bursts

Supernovae are among the best-known astrophysical phenomena. The energies involved in these powerful explosions are, however, dwarfed by those of gamma-ray bursts (GRBs). These extra-galactic events are the most powerful electromagnetic explosions in the universe and play an important role in its evolution. First detected in 1967, they consist of a bright pulse of gamma rays lasting from several seconds to several minutes, followed by an afterglow that can be measured from X-ray down to radio energies for days or even months. Thanks to almost 60 years of observations by a range of detectors, we now know that longer GRBs are an extreme version of a core-collapse supernova: the death of the heavy star is accompanied by two powerful relativistic jets. If such a jet points towards Earth, we can detect gamma-ray photons even from GRBs at distances of billions of light years. Thanks to detailed observations, the afterglow is now understood to be synchrotron emission produced as the jet crashes into the interstellar medium.

After the detection of over 10,000 gamma-ray components of GRBs by dedicated gamma-ray satellites, the most common models associate the longer ones with supernovae. This has been confirmed thanks to detections of afterglow emission coinciding with supernova events in other galaxies. The exact characteristics that cause some heavy stars to produce a GRB remain, however, poorly understood. Furthermore, many open questions remain regarding the nature and origin of the relativistic jets and how the gamma rays are produced within them.

While the emission has been studied extensively in gamma rays, detections at soft X-ray energies are limited. This changed in early 2024 with the launch of the Einstein Probe (EP) satellite. EP is a novel X-ray telescope, developed by the Chinese Academy of Sciences (CAS) in collaboration with ESA, the Max Planck Institute for Extraterrestrial Physics and the Centre National d’Études Spatiales. EP is unique in its wide field of view (1/11th of the sky) in soft X-rays, made possible by complex X-ray optics. As GRBs occur at random positions in the sky at random times, the large field of view increases its chance of observing them. Within its first year EP detected several GRB events, most of which challenge our understanding of these phenomena.

One of these occurred on 14 April 2024. It consisted of a bright flash of X-rays lasting about 2.5 minutes. The event was also observed by ground-based optical and radio telescopes that were alerted to its location in the sky by EP. These observations at lower photon energies were consistent with a weak afterglow together with the signatures from a relatively standard supernova-like event. The supernova emission showed it to originate from a star which, prior to its death, had already shed its outer layers of hydrogen and helium. Along with the spectrum detected by EP, the detection of an afterglow indicates the existence of a relativistic jet. The overall picture is therefore consistent with a GRB. However, a crucial part was missing: a gamma-ray component.

In addition, the emission spectrum observed by EP is significantly softer, peaking at keV rather than the hundreds-of-keV energies typical for GRBs. The results hint at an explosion that produced a relativistic jet which – for unknown reasons – was not energetic enough to produce the standard gamma-ray emission. The progenitor star therefore appears to bridge the stellar populations that cause a “simple” core-collapse supernova and those that produce GRBs.

Another event, detected on 15 March 2024, produced soft X-ray emission in six separate episodes spread over 17 minutes. Here, a gamma-ray component was detected by NASA’s Swift BAT instrument, confirming it to be a GRB. However, unlike in any other GRB, the gamma-ray emission started long after the onset of the X-ray emission. This lack of early gamma-ray emission is difficult to reconcile with standard emission models, in which the emission comes from a single uniform jet and the highest energies are emitted at the start, when the jet is at its most energetic.

In their publication in Nature Astronomy, the EP collaboration suggests that the early X-ray emission comes either from shocks from the supernova explosion itself or from weaker relativistic jets preceding the main powerful jet. Other proposed explanations invoke complex jet structures and posit that EP observed the jet far away from its axis. In this picture, the matter in the jet moves faster at the centre, while at the edges its Lorentz factor (or velocity) is significantly lower, producing a lower-energy, longer-lasting emission that was undetectable before the launch of EP.

Overall, the two detections appear to indicate that the GRBs detected over the last 60 years, where the emission was dominated by gamma rays, were only a subset of a more complex phenomenon. At a time when two of the most important instruments in GRB astronomy of the last two decades, NASA’s Fermi and Swift missions, are proposed to be switched off, EP is taking over an important role and opening the window to soft X-ray observations.

CP symmetry in diphoton Higgs decays

CMS figure 1

In addition to giving mass to elementary particles, the Brout–Englert–Higgs mechanism provides a testing ground for the fundamental symmetries of nature. In a recent analysis, the CMS collaboration searched for violations of charge–parity (CP) symmetry in the decays of Higgs bosons into two photons. The results set some of the strongest limits to date on anomalous Higgs-boson couplings that violate CP symmetry.

CP symmetry is particularly interesting as violations reveal fundamental differences in the behaviour of matter and antimatter, potentially explaining why the former appears to be much more abundant in the observed universe. While the Standard Model predicts that CP symmetry should be violated, the effect is not sufficient to account for the observed imbalance, motivating searches for additional sources of CP violation. CP symmetry requires that the laws of physics remain the same when particles are replaced by their corresponding antiparticles (C symmetry) and their spatial coordinates are reflected as in a mirror (P symmetry). In 1967, Andrei Sakharov established CP violation as one of three necessary requirements for a cosmic imbalance between matter and antimatter.

The CMS collaboration probed Higgs-boson interactions with electroweak bosons and gluons, using decays into two energetic photons. This final state is particularly precise: photons are well reconstructed thanks to the energy resolution of the CMS electromagnetic calorimeter and backgrounds can be accurately estimated. The analysis employed 138 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of 13 TeV and focused on two main channels. Electroweak production of the Higgs boson, via vector boson fusion (VBF) or in association with a W or Z boson (VH), tests the Higgs boson’s couplings to electroweak gauge bosons. Gluon fusion, which occurs through loops dominated by the top quark, is sensitive to possible CP-violating interactions with fermions. A full angular analysis was performed to separate different coupling hypotheses, exploiting both the kinematic properties of the photons from the Higgs boson decay and the particles produced alongside it.

The matrix element likelihood approach (MELA) was used to minimise the number of observables, while retaining all essential information. Deep neural networks and boosted decision trees classified events based on their topology and kinematic properties, isolating signal-like events from background or alternative new-physics scenarios. Events were then grouped into analysis categories, each optimised to enhance sensitivity to anomalous couplings for a specific production mode.

The data favour the Standard Model configuration, with no significant deviation from its predictions (see figure 1). By placing some of the most stringent constraints yet on CP-violating interactions between the Higgs boson and vector bosons, the study highlights how precise measurements in simple final states can yield insights into the symmetries governing particle physics. With the upcoming data from Run 3 of the LHC and the High-Luminosity LHC, CMS is well positioned to push these limits further and potentially uncover hidden aspects of the Higgs sector.

Charming energy–energy correlators

ALICE figure 1

Narrow sprays of particles called jets erupt from high-energy quarks and gluons. The ALICE collaboration has now measured so-called energy–energy correlators (EECs) of charm-quark jets for the first time – revealing new details of the elusive “dead cone” effect.

Unlike its counterpart in quantum electrodynamics, the quantum chromodynamics (QCD) coupling constant gets weaker at higher energies – a feature known as asymptotic freedom. This allows high-energy partons to scatter and radiate additional partons, forming showers. As the energy is shared among more and more products and decreases toward the characteristic QCD confinement scale, the interactions grow strong enough to bind the partons into colour-neutral hadrons. The structure, energy profile and angular distribution of particles within the jets bear traces of the initial collision and the parton-to-hadron transitions, making them powerful probes of both perturbative and non-perturbative QCD effects. To understand the interplay between these two regimes, researchers track how jet properties vary with the mass and colour charge of the initiating partons.

Due to the gluon’s larger colour charge, QCD predicts gluon-initiated jets to be broader and contain more low-momentum particles than those from quarks. Additionally, the significant mass of heavy quarks should suppress collinear gluon emission, inducing the so-called “dead-cone” effect at small angles. These expectations can be tested by comparing jet substructure across flavours. A key observable for this purpose is the EEC, which measures how energy is distributed within a jet as a function of the angular separation RL between particle pairs. The large-RL region is dominated by early partonic splittings, reflecting perturbative dynamics, while a small RL value corresponds to later radiation shaped by final-state hadrons. The intermediate-RL region captures the transition where hadronisation begins to affect the jet structure. This characteristic shape enables the separation of perturbative and non-perturbative regimes, revealing flavour-dependent dynamics of jet formation and hadronisation.
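
As an illustration of the observable (our own toy sketch, not the ALICE analysis code), the EEC weights every particle pair in a jet by the product of the particles’ energy fractions and histograms the pairs against their angular separation RL in the pseudorapidity–azimuth plane:

```python
import math
from collections import defaultdict

def eec_histogram(particles, n_bins=10, r_max=0.4):
    """Toy energy-energy correlator.

    particles: list of (energy, eta, phi) tuples for one jet.
    Each pair contributes E_i * E_j / E_jet**2 at its angular distance R_L.
    """
    e_jet = sum(p[0] for p in particles)
    hist = defaultdict(float)
    for i in range(len(particles)):
        for j in range(i + 1, len(particles)):
            e1, eta1, phi1 = particles[i]
            e2, eta2, phi2 = particles[j]
            dphi = abs(phi1 - phi2)
            dphi = min(dphi, 2 * math.pi - dphi)  # wrap the azimuth
            r_l = math.hypot(eta1 - eta2, dphi)
            bin_idx = min(int(r_l / r_max * n_bins), n_bins - 1)
            hist[bin_idx] += e1 * e2 / e_jet ** 2
    return dict(hist)

# A toy three-particle "jet" (hypothetical values, energies in GeV):
jet = [(50.0, 0.00, 0.00), (30.0, 0.10, 0.05), (20.0, -0.05, 0.20)]
print(eec_histogram(jet))
```

In a real analysis the same pair weights, accumulated over many jets, trace out the small-, intermediate- and large-RL regions described above.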

The ALICE collaboration measured the EEC for charm-quark jets tagged with D0 mesons, reconstructed via the D0 → K−π+ decay mode (branching ratio 3.93 ± 0.04%), in proton–proton collisions at a centre-of-mass energy of 13 TeV. Jets are reconstructed from charged-particle tracks using the anti-kT algorithm, which clusters products in momentum space with a resolution parameter R = 0.4.

At low transverse momentum, where the effect of the charm-quark mass is most prominent, the EEC amplitude is found to be significantly suppressed for charm jets relative to inclusive jets initiated by light quarks and gluons. The difference is more pronounced at small angles due to the dead-cone effect (see figure 1). Despite the sizable charm-quark mass, the position of the distribution peak remains similar across the two populations, pointing to a complex mix of parton-flavour effects in the shower evolution and enhanced non-perturbative contributions such as hadronisation. Perturbative QCD calculations reproduce the general shape at large RL but show tension near the peak, indicating the need for theoretical improvements for heavy-quark jets. The upward trend in the ratio of charm to inclusive jets as a function of RL, reproduced by PYTHIA 8, suggests differences in their fragmentation.

This first measurement of the heavy-flavour jet EEC helps disentangle perturbative and non-perturbative QCD effects in jet formation, constraining theoretical models. Furthermore, it provides an essential vacuum baseline for future studies in heavy-ion collisions, where the quark–gluon plasma is expected to alter jet properties.

Mapping rare Higgs-boson decays

ATLAS figure 1

Rare, unobserved decays of the Higgs boson are natural places to search for new physics. At the EPS-HEP conference, the ATLAS collaboration presented new, improved measurements of two highly suppressed Higgs decays: into a pair of muons; and into a Z boson accompanied by a photon. Producing a single event of either H → μμ or H → Zγ → (ee/μμ)γ at the LHC requires, on average, around 10 trillion proton–proton collisions. The H → μμ and H → Zγ signals appear as narrow resonances in the dimuon and Zγ invariant mass spectra, atop backgrounds some three orders of magnitude larger.
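
The 10-trillion figure can be recovered from rough cross-section estimates (the numbers below are generic approximations we supply, not values from the article):

```python
# Order-of-magnitude check of "one H -> mumu event per ~10 trillion
# collisions". All cross sections are rough generic estimates.
sigma_h_pb = 55.0        # pp -> H production cross section (~55 pb)
sigma_inel_pb = 80e9     # total inelastic pp cross section (~80 mb)
br_h_mumu = 2.2e-4       # H -> mumu branching ratio (the 0.02% quoted in the article)

collisions_per_event = sigma_inel_pb / (sigma_h_pb * br_h_mumu)
print(f"{collisions_per_event:.1e}")  # a few times 10^12, i.e. trillions
```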

In the Standard Model, the Brout–Englert–Higgs mechanism gives mass to the muon through its Yukawa coupling to the Higgs field, which can be tested via the rare H → μμ decay. An indirect comparison with the well-known muon mass, determined to 22 parts per billion, provides a stringent test of the mechanism in the second fermion generation and is a powerful probe of new physics. With a branching ratio of just 0.02%, and a large background dominated by the Drell–Yan production of muon pairs through virtual photons or Z bosons, the inclusive signal-over-background ratio plunges to the level of one part in a thousand. To single out its decay signature, the ATLAS collaboration employed machine-learning techniques for background suppression and generated over five billion Drell–Yan Monte Carlo events at next-to-leading-order accuracy in QCD, all passed through the full detector simulation. This high-precision sample provides templates to refine the background model and minimise bias on the tiny H → μμ signal.

The Higgs boson can decay into a Z boson and a photon via loop diagrams involving W bosons and heavy charged fermions, like the top quark. Detecting this rare process would complete the suite of established decays into electroweak boson pairs and offer a window on physics beyond the Standard Model. To reduce QCD background and improve sensitivity, the ATLAS analysis focused on Z bosons further decaying into electron or muon pairs, with an overall branching fraction of 7%. This additional selection reduces the event rate to about one in 10,000 Higgs decays, with an inclusive signal-over-background ratio at the per-mille level. The low momenta of final-state particles, combined with the high-luminosity conditions of LHC Run 3, pose additional challenges for signal extraction and suppression of Z + jets backgrounds. To enhance signal significance, the ATLAS collaboration improved background modelling techniques, optimised event categorisation by Higgs production mode, and employed machine learning to boost sensitivity.

The two ATLAS searches are based on 165 fb⁻¹ of LHC Run 3 proton–proton collision data collected between 2022 and 2024 at √s = 13.6 TeV, with a rigorous blinding procedure in place to prevent biases. Both channels show excesses at the Higgs-boson mass of 125.09 GeV, with observed (expected) significances of 2.8σ (1.8σ) for H → μμ and 1.4σ (1.5σ) for H → Zγ. These results are strengthened by combining them with 140 fb⁻¹ of Run-2 data collected at √s = 13 TeV, updating the H → μμ and H → Zγ observed (expected) significances to 3.4σ (2.5σ) and 2.5σ (1.9σ), respectively (see figure 1). The measured signal strengths are consistent with the Standard Model within uncertainties.

These results mark the ATLAS collaboration’s first evidence for the H → μμ decay, following the earlier claim by CMS based on Run-2 data (see CERN Courier September/October 2020 p7). Meanwhile, the H → Zγ search achieves a 19% increase in expected significance with respect to the combined ATLAS–CMS Run-2 analysis, which first reported evidence for this process. As Run 3 data-taking continues, the LHC experiments are closing in on establishing these two rare Higgs decay channels. Both will remain statistically limited throughout the LHC’s lifetime, with ample room for discovery in the high-luminosity phase.

Closing the gap on axion-like particles

LHCb figure 1

Axion-like particles (ALPs) are some of the most promising candidates for physics beyond the Standard Model. At the LHC, searches for ALPs that couple to gluons and photons have so far been limited to masses above 10 GeV due to trigger requirements that reduce low-energy sensitivity. In its first ever analysis on purely neutral final states, the LHCb collaboration has now extended this experimental reach and set new bounds on the ALP parameter space.

When a global symmetry is spontaneously broken, it gives rise to massless excitations called Goldstone bosons, which reflect the system’s freedom to transform continuously without changing its energy. ALPs are thought to arise via a similar mechanism, though they acquire a small mass because they originate from symmetries that are only approximate. Depending on the underlying theory, they could contribute to dark matter, solve the strong-CP problem, or mediate interactions with a hidden sector. Their coupling to known particles varies across models, leading to a range of potential experimental signatures. Among the most compelling are those involving gluons and photons.

Thanks to the magnitude of the strong coupling constant, even a small interaction with gluons can dominate the production and decay of ALPs. This makes searches at the LHC challenging since low-energy jets in proton–proton collisions are often indistinguishable from the expected ALP decay signature. In this environment, a more effective approach is to focus on the photon channel and search for ALPs that are produced in proton–proton collisions – mostly via gluon–gluon fusion – and that decay into photon pairs. These processes have been investigated at the LHC, but previous searches were limited by trigger thresholds requesting photons with large momentum components transverse to the beam. This is particularly restrictive for low-mass ALPs, whose decay products are often too soft to pass these thresholds.

The new search, based on Run-2 data collected in 2018, overcomes this limitation by leveraging the LHCb detector’s flexible software-based trigger system, lower pile-up and forward geometry. The latter enhances sensitivity to products with a small momentum component transverse to the beam, making it well suited to probe resonances in the 4.9 to 19.4 GeV mass region. This is the first LHCb analysis of a purely neutral final state, requiring a new trigger and selection strategy as well as a dedicated calibration procedure. Candidate photon pairs are identified from two high-energy calorimeter clusters, isolated from the rest of the event, that cannot originate from charged particles or neutral pions. ALP decays are then sought using maximum-likelihood fits that scan the photon-pair invariant mass spectrum for peaks.

No photon-pair excess is observed over the background-only hypothesis, and upper limits are set on the ALP production cross-section times the diphoton branching fraction. These results constrain the ALP decay rate and its coupling to photons, probing a region of parameter space that has so far remained unexplored (see figure 1). The investigated mass range is also of interest beyond ALP searches. Alongside the main analysis, the study targeted two-photon decays of B0(s) mesons and the little-studied ηb meson, almost reaching the sensitivity required for the latter’s detection.

The upgraded LHCb detector, which began operations with Run 3 in 2022, is expected to deliver another boost in sensitivity. This will allow future analyses to benefit from the extended flexibility of its purely software trigger, significantly larger datasets and a wider energy coverage of the upgraded calorimeter.

Four reasons dark energy should evolve with time

In the late 1990s, observational evidence accumulated that the universe is currently undergoing an accelerating expansion. Its cause remains a major mystery for physics. The term “dark energy” was coined to explain the data; however, we have no idea what dark energy is. All we know is that it makes up about 70% of the energy density of the universe, and that it does not behave like regular matter – if it is indeed matter and not a modification of the laws of gravity on cosmological scales. If it is matter, then its pressure must be close to p = –ρ, where ρ is its energy density. The cosmological constant in Einstein’s equations for spacetime acts precisely this way, and has therefore long been regarded as the simplest explanation for the observations. It is the bedrock of the prevailing ΛCDM model of cosmology – a setup where dark energy is time-independent. But recent observations by the Dark Energy Spectroscopic Instrument provide tantalising evidence that dark energy might be time-dependent, with its pressure slightly increasing over time (CERN Courier May/June 2025 p11). If upcoming data confirm these results, it would require a paradigm shift in cosmology, ruling out the ΛCDM model.

Mounting evidence

From the point of view of fundamental theory, there are at least four good reasons to believe that dark energy must be time-dependent and cannot be a cosmological constant.

The first piece of evidence is well known: if there is a cosmological constant induced by a particle-physics description of matter, then its value should be 120 orders of magnitude larger than observations indicate. This is the famous cosmological constant problem.
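
The mismatch can be sketched in two lines: a particle-physics vacuum energy at the Planck scale, compared with the observed dark-energy density, whose characteristic scale is around a milli-electronvolt (both values below are generic estimates we supply):

```python
import math

# Energy densities scale as (energy scale)**4 in natural units.
rho_obs = (2.3e-3) ** 4     # observed dark-energy scale ~ 2 meV, in eV^4
rho_planck = (1.2e28) ** 4  # Planck-scale vacuum energy ~ 1.2e19 GeV, in eV^4

orders = math.log10(rho_planck / rho_obs)
print(round(orders))  # ~123: the famous "120 orders of magnitude"
```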

Robert H Brandenberger

A second argument is the “infrared instability” of a spacetime induced by a cosmological constant. Alexander Polyakov (Princeton) has forcefully argued that inhomogeneities on very large length scales would gradually mask a preexisting cosmological constant, making it appear to vary over time.

Recently, other arguments have been put forward indicating that dark energy must be time-dependent. Since quantum matter generates a large cosmological constant when treated as an effective field theory, it should be expected that the cosmological constant problem can only be addressed in a quantum theory of all forces. The best candidate we have is superstring theory. There is mounting evidence that – at least in the regions of the theory under mathematical control – it is impossible to obtain a positive cosmological constant corresponding to the observed accelerating expansion. But one can obtain time-dependent dark energy, for example in quintessence toy models.

Recent observations provide tantalising evidence that dark energy might be time-dependent

The final reason is known as the trans-Planckian censorship conjecture. As the nature of dark energy remains a complete mystery, it is often treated as an effective field theory. This means that one expands all fields in Fourier modes and quantises each mode as a harmonic oscillator. The modes one uses have wavelengths that grow in proportion to the scale factor of the expanding universe. This creates a theoretical headache at the highest energies. To avoid infinities, an “ultraviolet cutoff” is required at or below the Planck mass. This must be at a fixed physical wavelength. In order to maintain this cutoff in an expanding space, it is necessary to continuously create new modes at the cutoff scale as the wavelengths of the previously present modes increase. This implies a violation of unitarity. If dark energy were a cosmological constant, then modes with wavelength equal to the cutoff scale at the present time would become classical at some time in the future, and the violation of unitarity would be visible in hypothetical future observations. To avoid this problem, we conclude that dark energy must be time-dependent.
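The conjecture can be stated compactly (a standard formulation from the literature; the symbols below are not defined in the article): no Fourier mode whose wavelength was ever shorter than the Planck length ℓ_Pl should be stretched beyond the Hubble radius,

\[
\frac{a(t_f)}{a(t_i)}\,\ell_{\mathrm{Pl}} \;<\; H^{-1}(t_f),
\]

where a(t) is the scale factor and H the Hubble rate. For a pure cosmological constant, a(t) ∝ e^{Ht} with H constant, so the bound fails after a finite time of order H^{-1} ln(M_Pl/H); a sufficiently rapidly decaying, time-dependent dark energy evades it.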

Because of its deep implications for fundamental physics, we are eagerly awaiting new observational results that will shine more light on the issue of the time-dependence of dark energy.

High-energy physics meets in Marseille

EPS-HEP 2025

The 2025 European Physical Society Conference on High Energy Physics (EPS-HEP), held in Marseille from 7 to 11 July, took centre stage in this pivotal year for high-energy physics as the community prepares to make critical decisions on the next flagship collider at CERN to enable major leaps at the high-precision and high-energy frontiers. The meeting showcased the remarkable creativity and innovation in both experiment and theory, driving progress across all scales of fundamental physics. It also highlighted the growing interplay between particle, nuclear, astroparticle physics and cosmology.

Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries. This requires sustained investment from funding agencies, laboratories, universities and the broader community to support careers and recognise leadership in detectors, software and computing. Such support must extend across construction, commissioning and operation, and include strategic and basic R&D. The implementation of detector R&D (DRD) collaborations, as outlined in the 2021 ECFA roadmap, is an important step in this direction.

Physics thrives on precision, and a prime example this year came from the Muon g–2 collaboration at Fermilab, which released its final result combining all six data runs, achieving an impressive 127 parts-per-billion precision on the muon anomalous magnetic moment (CERN Courier July/August 2025 p7). The result agrees with the latest lattice–QCD predictions for the leading hadronic–vacuum-polarisation term, albeit with a theoretical uncertainty four times larger than the experimental one. Continued improvements to lattice QCD and to the traditional dispersion-relation method based on low-energy e+e– and τ data are expected in the coming years.

Runaway success

After the remarkable success of LHC Run 2, Run 3 has now surpassed it in delivered luminosity. Using the full available Run-2 and Run-3 datasets, ATLAS reported 3.4σ evidence for the rare Higgs decay to a muon pair, and a new result on the quantum-loop-mediated decay into a Z boson and a photon, now more consistent with the Standard Model prediction than the earlier ATLAS and CMS Run-2 combination (see “Mapping rare Higgs-boson decays”). ATLAS also presented an updated study of Higgs pair production with decays into two b-quarks and two photons, whose sensitivity was increased beyond statistical gains thanks to improved reconstruction and analysis. CMS released a new Run-2 search for Higgs decays to charm quarks in events produced with a top-quark pair, reaching sensitivity comparable to the traditional weak-boson-associated production. Both collaborations also released new combinations of nearly all their Higgs analyses from Run 2, providing a wide set of measurements. While ATLAS sees overall agreement with predictions, CMS observes some non-significant tensions.

Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries

A highlight in top-quark physics this year was the observation by CMS of an excess in top-pair production near threshold, confirmed at the conference by ATLAS (see “ATLAS confirms top–antitop excess”). The strong interaction predicts a highly compact, colour-singlet, quasi-bound pseudoscalar top–antitop state arising from gluon exchange near threshold. Unlike bottomonium or charmonium, no proper bound state is formed, owing to the rapid weak decay of the top quark (see “Memories of quarkonia”). This “toponium” effect can be modelled using non-relativistic QCD. Both experiments observed a cross section about 100 times smaller than that of inclusive top-quark pair production. The subtle signal and complex threshold modelling make the analysis challenging, and warrant further theoretical and experimental investigation.

A major outcome of LHC Run 2 is the lack of compelling evidence for physics beyond the Standard Model. In Run 3, ATLAS and CMS continue their searches, aided by improved triggers, reconstruction and analysis techniques, as well as a dataset more than twice as large, enabling a more sensitive exploration of rare or suppressed signals. The experiments are also revisiting excesses seen in Run 2, for example, a CMS hint of a new resonance decaying into a Higgs and another scalar was not confirmed by a new ATLAS analysis including Run-3 data.

Hadron spectroscopy has seen a renaissance since Belle’s 2003 discovery of the exotic X(3872), with landmark advances at the LHC, particularly by LHCb. CMS recently reported three new four-charm-quark states decaying into J/ψ pairs between 6.6 and 7.1 GeV. Spin-parity analysis suggests they are tightly bound tetraquarks rather than loosely bound molecular states (CERN Courier November/December 2024 p33).

Rare observations

Flavour physics continues to test the Standard Model with high sensitivity. Belle II and LHCb reported new CP violation measurements in the charm sector, confirming the expected small effects. LHCb observed, for the first time, CP violation in the baryon sector via Λb decays, a milestone in the history of CP violation. NA62 at CERN’s SPS achieved the first observation of the ultra-rare kaon decay K+→ π+νν with a branching ratio of 1.3 × 10–10, matching the Standard Model prediction. MEG-II at PSI set the most stringent limit to date on the lepton-flavour-violating decay μ → eγ, excluding branching fractions above 1.5 × 10–13. Both experiments continue data taking until 2026.

Heavy-ion collisions at the LHC provide a rich environment to study the quark–gluon plasma, a hot, dense state of deconfined quarks and gluons, forming a collective medium that flows as a relativistic fluid with an exceptionally low viscosity-to-entropy ratio. Flow in lead–lead collisions, quantified by Fourier harmonics of the azimuthal momentum anisotropies, is well described by hydrodynamic models for light hadrons. Hadrons containing heavier charm and bottom quarks show weaker collectivity, likely due to longer thermalisation times, while baryons exhibit stronger flow than mesons due to quark coalescence. ALICE reported the first LHC measurement of charm–baryon flow, consistent with these effects.
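The Fourier decomposition mentioned above writes the azimuthal particle yield as dN/dφ ∝ 1 + 2 Σ_n v_n cos[n(φ – Ψ_n)], with v_2, the “elliptic flow”, usually the dominant harmonic. A minimal numerical sketch (toy v_2 value and event-plane angle, not ALICE data) recovering v_2 from such a distribution:

```python
import numpy as np

V2_TRUE, PSI2 = 0.10, 0.3   # toy elliptic-flow coefficient and event-plane angle
phi = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)

# Azimuthal particle yield with a single second harmonic:
#   dN/dphi ~ 1 + 2 v2 cos(2 (phi - Psi2))
yield_phi = 1.0 + 2.0 * V2_TRUE * np.cos(2.0 * (phi - PSI2))

# v2 is the yield-weighted average of cos(2 (phi - Psi2))
v2_est = float(np.sum(np.cos(2.0 * (phi - PSI2)) * yield_phi) / np.sum(yield_phi))
print(f"recovered v2 = {v2_est:.4f}")  # ~0.1000
```

In a real analysis the event-plane angle Ψ_n is itself estimated from the data and resolution corrections apply; the sketch only illustrates what the coefficient measures.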

Spin-parity analysis suggests the states are tightly bound tetraquarks

Neutrino physics has made major strides since oscillations were confirmed 27 years ago, with flavour mixing parameters now known to a few percent. Crucial questions still remain: are neutrinos their own antiparticles (Majorana fermions)? What is the mass ordering – normal or inverted? What is the absolute mass scale and how is it generated? Does CP violation occur? What are the properties of the right-handed neutrinos? These and other questions have wide-ranging implications for particle physics, astrophysics and cosmology.

Neutrinoless double-beta decay, if observed, would confirm that neutrinos are Majorana particles. Experiments using xenon and germanium are beginning to constrain the inverted mass ordering, which predicts higher decay rates. Recent combined data from the long-baseline experiments T2K and NOvA show no clear preference for either ordering, but exclude vanishing CP violation at over 3σ in the inverted scenario. The KM3NeT detector in the Mediterranean, with its ORCA and ARCA components, has delivered its first competitive oscillation results, and detected a striking ~220 PeV muon neutrino, possibly from a blazar (CERN Courier March/April 2025 p7). The next-generation large-scale neutrino experiments JUNO (China), Hyper-Kamiokande (Japan) and LBNF/DUNE (USA) are progressing in construction, with data-taking expected to begin in 2025, 2028 and 2031, respectively. LBNF/DUNE is best positioned to determine the neutrino mass ordering, while Hyper-Kamiokande will be the most sensitive to CP violation. All three will also search for proton decay, a possible messenger of grand unification.

There is compelling evidence for dark matter from gravitational effects across cosmic times and scales, as well as indications that it is of particle origin. Its possible forms span a vast mass range, up to the ~100 TeV unitarity limit for a thermal relic, and may involve a complex, structured “dark sector”. The wide complementarity among the search strategies gives the field a unifying character.

Direct detection experiments looking for tiny, elastic nuclear recoils, such as XENONnT (Italy), LZ (USA) and PandaX-4T (China), have set world-leading constraints on weakly interacting massive particles. XENONnT and PandaX-4T have also reported first signals from boron-8 solar neutrinos, part of the so-called “neutrino fog” that will challenge future searches.

Axions, introduced theoretically to suppress CP violation in strong interactions, could be viable dark-matter candidates. They would be produced in the early universe with enormous number density, behaving, on galactic scales, as a classical, nonrelativistic, coherently oscillating bosonic field, effectively equivalent to cold dark matter. Axions can be detected via their conversion into photons in strong magnetic fields. Experiments using microwave cavities have begun to probe the relevant μeV mass range of relic QCD axions, but the detection becomes harder at higher masses. New concepts, using dielectric disks or wire-based plasmonic resonance, are under development to overcome these challenges.
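The cavity technique works because an axion of mass m_a converts into a photon of frequency ν = m_a c²/h, so the μeV mass range corresponds to resonators in the sub-GHz to GHz band, and higher masses demand ever smaller cavities. A quick conversion sketch (standard constants; the masses are illustrative, not experimental targets):

```python
# Photon frequency matching an axion of mass m_a: nu = m_a * c^2 / h
H_EV_S = 4.135667696e-15  # Planck constant in eV*s

def axion_frequency_ghz(mass_uev: float) -> float:
    """Cavity resonance frequency (GHz) for an axion mass given in micro-eV."""
    return mass_uev * 1e-6 / H_EV_S / 1e9

for m in (1.0, 10.0, 100.0):
    print(f"m_a = {m:6.1f} ueV  ->  nu = {axion_frequency_ghz(m):7.2f} GHz")
```

One micro-eV maps to about 0.24 GHz, and the frequency scales linearly with mass, which is why probing heavier axions pushes cavity experiments toward small, low-volume resonators.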

Cosmological constraints

Cosmology featured prominently at EPS-HEP, driven by new results from the analysis of DESI DR2 baryon acoustic oscillation (BAO) data, which include 14 million redshifts. Like the cosmic microwave background (CMB), BAO provides a “standard ruler” to trace the universe’s expansion history – much as supernovae (SNe) do as standard candles. Cosmological surveys are typically interpreted within the ΛCDM model, a six-parameter framework that remarkably accounts for 13.8 billion years of cosmic evolution, from inflation and structure formation to today’s energy content, despite offering no insight into the nature of dark matter, dark energy or the inflationary mechanism. Recent BAO data, when combined with CMB and SNe surveys, show a preference for a form of dark energy that weakens over time. Tensions also persist in the Hubble expansion rate derived from early-universe (CMB and BAO) and late-universe (SN type-Ia) measurements (CERN Courier March/April 2025 p28). However, anchoring SN Ia distances in redshift remains challenging, and further work is needed before drawing firm conclusions.
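The “standard ruler” works because the angle the BAO scale subtends at each redshift is set by the comoving distance D_C(z) = c ∫₀ᶻ dz′/H(z′), with the Hubble rate H(z) fixed by the assumed energy content. A minimal flat-ΛCDM sketch (the parameter values are round illustrative numbers, not the DESI fit):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance_mpc(z: float, h0: float = 70.0, omega_m: float = 0.3) -> float:
    """Comoving distance in Mpc for a flat LambdaCDM universe."""
    zs = np.linspace(0.0, z, 10001)
    # Hubble rate H(z) = H0 * sqrt(Om (1+z)^3 + OL), flat so OL = 1 - Om
    hz = h0 * np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    # D_C = c * integral of dz'/H(z'), trapezoidal rule on a uniform grid
    integrand = C_KM_S / hz
    dz = zs[1] - zs[0]
    return float(dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])))

print(f"D_C(z=1) ~ {comoving_distance_mpc(1.0):.0f} Mpc")
```

Comparing the measured BAO angle at many redshifts with this predicted distance is what lets surveys such as DESI test whether a constant-w dark energy fits the expansion history.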

Cosmological fits also constrain the sum of neutrino masses. The latest CMB and BAO-based results within ΛCDM appear inconsistent with the lower limit implied by oscillation data for inverted mass ordering. However, firm conclusions are premature, as the result may reflect limitations in ΛCDM itself. Upcoming surveys from the Euclid satellite and the Vera C. Rubin Observatory (LSST) are expected to significantly improve cosmological constraints.

Cristinel Diaconu and Thomas Strebler, chairs of the local organising committee, together with all committee members and many volunteers, succeeded in delivering a flawlessly organised and engaging conference in the beautiful setting of the Palais du Pharo overlooking Marseille’s old port. They closed the event with a phrase commemorating British cyclist Tom Simpson: “There is no mountain too high.”

Probing the dark side from Kingston

The nature of dark matter remains one of the greatest unresolved questions in modern physics. While ground-based experiments persist in their quest for direct detection, astrophysical observations and multi-messenger studies have emerged as powerful complementary tools for constraining its properties. Stars across the Milky Way and beyond – including neutron stars, white dwarfs, red giants and main-sequence stars – are increasingly recognised as natural laboratories for probing dark matter through its interactions with stellar interiors, notably via neutron-star cooling, asteroseismic diagnostics of solar oscillations and gravitational-wave emission.

The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity (ICDMS) was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July. The meeting brought together around 70 researchers from across astrophysics, cosmology, particle physics and gravitational theory. The goal was to foster interdisciplinary dialogue on how observations of stellar systems, gravitational waves and cosmological data can help shed light on the dark sector. The conference was specifically dedicated to exploring how astrophysical and cosmological systems can be used to probe the nature of dark matter.

The first day centred on compact objects as natural laboratories for dark-matter physics. Giorgio Busoni (University of Adelaide) opened with a comprehensive overview of recent theoretical progress on dark-matter accumulation in neutron stars and white dwarfs, highlighting refinements in the treatment of relativistic effects, optical depth, Fermi degeneracy and light mediators – all of which have shaped the field in recent years. Melissa Diamond (Queen’s University) followed with a striking, Dr. Strangelove-inspired talk exploring how accumulated dark matter might trigger thermonuclear instability in white dwarfs. Sandra Robles (Fermilab) shifted the perspective from neutron stars to white dwarfs, showing how they constrain dark-matter properties. One of the authors highlighted postmerger gravitational-wave observations as a tool to distinguish neutron stars from low-mass black holes, offering a promising avenue for probing exotic remnants potentially linked to dark matter. Axions featured prominently throughout the day, alongside extensive discussions of the different ways in which dark matter affects neutron stars and their mergers.

ICDMS continues to strengthen the interface between fundamental physics and astrophysical observations

On the second day, attention turned to the broader stellar population and planetary systems as indirect detectors. Isabelle John (University of Turin) questioned whether the anomalously long lifetimes of stars near the galactic centre might be explained by dark-matter accumulation. Other talks revisited stellar systems – white dwarfs, red giants and even speculative dark stars – with a focus on modelling dark-matter transport and its effects on stellar heat flow. Complementary detection strategies also took the stage, including neutrino emission, stochastic gravitational waves and gravitational lensing, all offering potential access to otherwise elusive energy scales and interaction strengths.

The final day shifted toward galactic structure and the increasingly close interplay between theory and observation. Lina Necib (MIT) shared stellar kinematics data used to map the Milky Way’s dark-matter distribution, while other speakers examined the reliability of stellar stream analyses and subtle anomalies in galactic rotation curves. The connection to terrestrial experiments grew stronger, with talks tying dark matter to underground detectors, atomic-precision tools and cosmological observables such as the Lyman-alpha forest and baryon acoustic oscillations. Early-career researchers contributed actively across all sessions, underscoring the field’s growing vitality and introducing a fresh influx of ideas that is expanding its scope.

The ICDMS series is now in its third edition. It began in 2018 at Instituto Superior Técnico, Portugal, and is poised to become an annual event. The next conference will take place at the University of Southampton, UK, in 2026, followed by the Massachusetts Institute of Technology in the US in 2027. With increasing participation and growing international interest, the ICDMS series continues to strengthen the interface between fundamental physics and astrophysical observations in the quest to understand the nature of dark matter.
