

A step towards the Higgs self-coupling

ATLAS figure 1

A defining yet unobserved property of the Higgs boson is its ability to couple to itself. The ATLAS collaboration has now set new bounds on this interaction, by probing the rare production of Higgs-boson pairs. Since the self-coupling strength directly connects to the shape of the Higgs potential, any departure from the Standard Model (SM) prediction would have direct implications for electroweak symmetry breaking and the early history of the universe. This makes its measurement one of the most important objectives of modern particle physics.

Higgs-boson pair production is a thousand times less frequent than single-Higgs production, roughly corresponding to a single occurrence every three trillion proton–proton collisions at the LHC. Observing such a rare process demands both vast datasets and highly sophisticated analysis techniques, along with the careful choice of a sensitive probe. Among the most effective is the HH → bbγγ channel, where one Higgs boson decays into a bottom quark–antiquark pair and the other into two photons. This final state balances the statistical reach of the dominant Higgs decay to bottom quarks with the exceptionally clean signature offered by photon-pair measurements. Despite the small signal branching ratio of about 0.26%, the decay to two photons benefits from excellent di-photon mass resolution and offers the highest efficiency among the leading HH channels. This gives the HH → bbγγ channel excellent sensitivity to variations in the trilinear self-coupling modifier κλ, defined as the ratio of the measured Higgs-boson self-coupling to the SM prediction.
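Written out, the definition quoted above is simply

$$\kappa_\lambda = \frac{\lambda_{HHH}}{\lambda_{HHH}^{\mathrm{SM}}},$$

so that κλ = 1 corresponds to the SM, and the measured interval for κλ directly brackets possible deviations of the Higgs potential from its SM shape.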

In its new study, the ATLAS collaboration relied on Run 3 data collected between 2022 and 2024, and on the full Run 2 dataset, reaching an integrated luminosity of 308 fb–1. Events were selected by requiring two high-quality photons and at least two b-tagged jets, identified using the latest and most performant ATLAS b-tagging algorithm. To further distinguish signal from background, dominated by non-resonant γγ+jets and single-Higgs production with H → γγ, a set of machine-learning classifiers called “multivariate analysis discriminants” was trained and used to filter genuine HH → bbγγ signals.

The collaboration reported an HH → bbγγ signal significance of 0.84σ under the background-only hypothesis, compared to an SM expectation of 1.01σ (see figure 1). At the 95% confidence level, the self-coupling modifier was constrained to –1.7 < κλ < 6.6. These results extend previous Run 2 analyses and deliver a substantially improved sensitivity, comparable to the observed (expected) significance of 0.4σ (1σ) of the combined Run 2 results across all channels. The improvement is primarily due to the adoption of advanced b-tagging algorithms, to refined analysis techniques yielding better mass resolution, and to a larger dataset, more than double that of previous studies.

This result marks significant progress in the search for Higgs self-interactions at the LHC and highlights the potential of Run 3 data. With the full Run 3 dataset and the High-Luminosity LHC on the horizon, ATLAS is set to extend these measurements – improving our understanding of the Higgs boson and searching for possible signs of physics beyond the SM.

ALICE observes ρ–proton attraction

ALICE figure 1

The ALICE collaboration recently obtained the first direct measurement of the attraction between a proton and a ρ0 meson – a particle of particular interest due to its fleeting lifetime and close link to chiral symmetry breaking. The result establishes a technique known as femtoscopy as a new method for studying interactions between vector mesons and baryons, and opens the door to a systematic exploration of how short-lived hadrons behave.

Traditionally, interactions between baryons and vector mesons have been studied indirectly at low-energy facilities, using decay patterns or photoproduction measurements. These were mostly interpreted through vector-meson-dominance models developed in the 1960s, in which photons fluctuate into vector mesons to interact with hadrons. While powerful, these methods provide only partial information and cannot capture the full dynamics of the interaction. Direct measurements have long been out of reach, mainly because the extremely short lifetime of vector mesons – of the order of 1–10 fm/c – renders conventional scattering experiments impossible.

At the hadronic level, the strong force can be described as arising from the exchange of massive mesons, with the lightest among them, the pion, setting the interaction range to about 1.4 fm. For such a short-range effect to influence the products of a proton–proton (pp) collision, the particles must be created close together and with low relative momentum, ensuring sufficient interaction time and a significant wavefunction overlap.
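The quoted range is essentially the pion’s reduced Compton wavelength. Taking ħc ≈ 197 MeV fm and mπ ≈ 140 MeV,

$$r \sim \frac{\hbar}{m_\pi c} = \frac{\hbar c}{m_\pi c^2} \approx \frac{197\,\mathrm{MeV\,fm}}{140\,\mathrm{MeV}} \approx 1.4\,\mathrm{fm}.$$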

The ALICE collaboration has now studied this mechanism in high-multiplicity pp collisions at a centre-of-mass energy of 13 TeV through femtoscopy, which examines correlations in the relative momentum (k*) of particle pairs in their rest frame. Such correlations carry information on the size and shape of the particle-emitting source, with any deviation of the correlation function from unity at k* below about 200 MeV indicating the presence of short-range forces.

To study the interaction between protons and ρ0 vector mesons, candidates were reconstructed via the hadronic decay channel ρ0 → π+π−, identified from π+π− pairs within the 0.70–0.85 GeV invariant-mass window. Since the ρ0 decays almost instantly into pions, only about 3% of the candidates were genuine ρ0 mesons. Background corrections were therefore essential to extract the ρ0–proton correlation function, defined as the ratio of the relative-momentum distribution of same-event pairs to that of mixed-event pairs. The result is consistent with unity at large relative momenta (k* > 200 MeV), as expected in the absence of strong forces. At lower values, however, a suppression with a significance of about four standard deviations clearly signals ρ0–proton final-state interactions (see figure 1).
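In symbols, the correlation function described above is

$$C(k^*) = \frac{N_{\mathrm{same}}(k^*)}{N_{\mathrm{mixed}}(k^*)},$$

normalised so that C(k*) → 1 at large k*, where final-state interactions are absent; the dip below unity at low k* is the signature of the ρ0–proton interaction.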

To interpret these results, ALICE used an effective-field-theory model based on chiral perturbation theory, which predicted two resonance states consistent with the formation of excited nucleon states. Because some pairs linger in these quasi-bound states instead of flying out freely, fewer emerge with nearly the same momentum. This results in a correlation suppression at low k* consistent with observations. Unlike photoproduction experiments and QCD sum rules, femtoscopy delivers the complete phase information of the ρ0–proton interaction. By analysing both ρ–proton and φ–proton pairs, ALICE extracted precise scattering parameters that can now be incorporated into theoretical models.

This measurement sets a benchmark for vector-meson-dominance models and establishes femtoscopy as a tool to probe interactions involving the shortest-lived hadrons, while providing essential input for understanding ρ–nucleon interactions in vacuum and describing the meson’s properties in heavy-ion collisions. Pinning down how the ρ meson behaves is crucial for interpreting dilepton spectra and for studying the restoration of chiral symmetry at high temperatures, where the small differences between the light-quark masses become negligible. For example, the mass gap between the ρ and its axial counterpart, the a1, comes from spontaneous chiral-symmetry breaking.

The measurement problem, measured

A century on, physicists still disagree on what quantum mechanics actually means. Nature recently surveyed more than a thousand researchers, asking about their views on the interpretation of quantum mechanics. When broken down by career stage, the results show that a diversity of views spans all generations.

Getting eccentric with age

The Copenhagen interpretation remains the most widely held view, placing the act of measurement at the core of quantum theory well into the 2020s. Epistemic or QBist approaches, where the quantum state expresses an observer’s knowledge or belief, form the next most common group, followed by Everett’s many-worlds framework, in which all quantum outcomes continue to coexist without collapse (CERN Courier July/August 2025 p26). Other views maintain small but steady followings, including pilot-wave theory, spontaneous-collapse models and relational quantum mechanics (CERN Courier July/August 2025 p21).

Fewer than 10% of the physicists surveyed declined to express a view. Though this cohort might be expected to include proponents of the “shut up and calculate” school of thought, an apparently dwindling band of disinterested working physicists may simply be undersampled.

Crucially, confidence is modest. Most respondents view their preferred interpretation as an adequate placeholder or a useful conceptual tool. Only 24% are willing to describe their preferred interpretation as correct, leaving ample room for manoeuvre in the very foundations of fundamental physics.

Neural networks boost B-tagging

LHCb figure 1

The LHCb collaboration has developed a new inclusive flavour-tagging algorithm for neutral B-mesons. Compared to standard approaches, it can correctly identify 35% more B0 and 20% more B0s decays, expanding the dataset available for analysis. This increase in tagging power will allow for more accurate studies of charge–parity (CP) violation and B-meson oscillations.

In the Standard Model (SM), neutral B-mesons oscillate between particle and antiparticle states via second-order weak interactions involving a pair of W-bosons. Flavour-tagging techniques determine whether a neutral B-meson was initially produced as a B0 or its antiparticle B̄0, thereby enabling the measurement of time-dependent CP asymmetries. As the initial flavour can only be inferred indirectly from noisy, multi-particle correlations in the busy hadronic environment of the LHC, mistag rates have traditionally been high.

Until now, the LHCb collaboration has relied on two complementary flavour-tagging strategies. One infers the signal meson’s flavour by analysing the decay of the other b-hadron in the event, whose existence follows from bb̄ pair production in the original proton–proton collision. Since the two hadrons originate from the oppositely charged bottom quarks of this early-produced pair, the method is known as “opposite-side” (OS) tagging. The other strategy, “same-side” (SS) tagging, uses tracks from the fragmentation process that produced the signal meson. Each provides only part of the picture, and their combination defined the state of the art in previous analyses.

The new algorithm adopts a more comprehensive approach. Using a deep neural network based on the “DeepSets” architecture, it incorporates information from all reconstructed tracks associated with the hadronisation process, rather than preselecting a subset of candidates. By considering the global structure of the event, the algorithm builds a more detailed inference of the meson’s initial flavour. This inclusive treatment of the available information increases both the sensitivity and the statistical reach of the tagging procedure.
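The defining feature of a DeepSets architecture is permutation invariance: every track is embedded by a shared network, the embeddings are summed, and a second network acts on the pooled result, so the output cannot depend on the order in which tracks are listed. The sketch below illustrates this general pattern in PyTorch; it is a minimal illustration of the architecture named above, not the LHCb tagger, and the layer sizes and track features are invented for the example.

```python
# Minimal DeepSets-style flavour tagger (illustrative only; all dimensions
# are invented). "phi" embeds each track independently with shared weights;
# summing the embeddings makes the model permutation-invariant; "rho" maps
# the pooled representation to a single tag logit.
import torch
import torch.nn as nn

class DeepSetsTagger(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for the initial-flavour hypothesis
        )

    def forward(self, tracks: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # tracks: (batch, n_tracks, n_features); mask zeroes out padded tracks
        embedded = self.phi(tracks) * mask.unsqueeze(-1)
        pooled = embedded.sum(dim=1)  # track order is irrelevant after the sum
        return self.rho(pooled).squeeze(-1)

# Toy usage: four "events", up to 30 tracks each
model = DeepSetsTagger()
tracks, mask = torch.randn(4, 30, 8), torch.ones(4, 30)
print(torch.sigmoid(model(tracks, mask)))  # per-event tag probabilities
```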

The model was trained and calibrated using well-established B0 and B0s meson decay channels. When compared with the combination of opposite-side and same-side taggers, the inclusive algorithm displayed a 35% increase in tagging power for B0 mesons and 20% for B0s mesons (see figure 1). The improvement stems from gains in both the fraction of events that receive a flavour tag and how often the tag is correct. Tagging power is a critical figure of merit, as it determines the effective amount of usable data. Therefore, even modest gains can dramatically reduce statistical uncertainties in CP-violation and B-oscillation measurements, enhancing the experiment’s precision and discovery potential.
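A standard figure of merit here is the effective tagging power, which combines the tagging efficiency εtag (the fraction of events that receive a tag) with the mistag rate ω:

$$\varepsilon_{\mathrm{eff}} = \varepsilon_{\mathrm{tag}}\,(1-2\omega)^2.$$

Because the statistical uncertainty of a tagged measurement scales as $1/\sqrt{\varepsilon_{\mathrm{eff}} N}$, a 35% gain in tagging power is equivalent to collecting 35% more data.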

This development illustrates how algorithmic innovation can be as important as detector upgrades in pushing the boundaries of precision. The improved tagging power effectively expands the usable data sample without requiring additional collisions, enhancing the experiment’s capacity to test the SM and seek signs of new physics within the flavour sector. The timing is particularly significant as LHCb enters Run 3 of the LHC programme, with higher data rates and improved detector components. The new algorithm is designed to integrate smoothly with existing reconstruction and analysis frameworks, ensuring immediate benefits while providing scalability for the much larger datasets expected in future runs.

As the collaboration accumulates more data, the inclusive flavour-tagging algorithm is likely to become a central tool in data analysis. Its improved performance is expected to reduce uncertainties in some of the most sensitive measurements carried out at the LHC, strengthening the search for deviations from the SM.

Machine learning and the search for the unknown

CMS figure 1

In particle physics, searches for new phenomena have traditionally been guided by theory, focusing on specific signatures predicted by models beyond the Standard Model. Machine learning offers a different way forward. Instead of targeting known possibilities, it can scan the data broadly for unexpected patterns, without assumptions about what new physics might look like. CMS analysts are now using these techniques to conduct model-independent searches for short-lived particles that could escape conventional analyses.

Dynamic graph neural networks operate on graph-structured data, processing both the attributes of individual nodes and the relationships between them. One such model is ParticleNet, which represents the constituents of large-radius jets as graphs to identify N-prong hadronic decays of highly boosted particles, predicting their parent’s mass. The tool recently aided a CMS search for the single production of a heavy vector-like quark (VLQ) decaying into a top quark and a scalar boson, either the Higgs or a new scalar particle. Alongside ParticleNet, a custom deep neural network was trained to identify leptonic top-quark decays by distinguishing them from background processes over a wide range of momenta. With this approach, the analysis achieved sensitivity to VLQ production cross-sections as small as 0.15 fb. Emerging methods such as transformer networks could provide even greater sensitivity in future searches (see figure 1).
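The building block of such dynamic graph networks is an “edge convolution”: each jet constituent is linked to its nearest neighbours, recomputed in the learned feature space at every layer (hence “dynamic”), and a small network aggregates the edge features to update each node. The following PyTorch sketch shows one such block with invented dimensions; it illustrates the ParticleNet-style operation rather than reproducing the CMS network.

```python
# One EdgeConv block over a jet's constituents (illustrative only).
# Neighbours are found in the current feature space, so stacking blocks
# yields a graph that changes ("dynamically") from layer to layer.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 4):
        super().__init__()
        self.k = k
        # MLP applied to each edge feature (x_i, x_j - x_i)
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_constituents, in_dim) for a single jet
        dist = torch.cdist(x, x)                                   # pairwise distances
        idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]  # k nearest, drop self
        neighbours = x[idx]                                        # (n, k, in_dim)
        centre = x.unsqueeze(1).expand_as(neighbours)
        edges = torch.cat([centre, neighbours - centre], dim=-1)
        return self.mlp(edges).mean(dim=1)                         # aggregate edges per node

# Toy jet: 20 constituents with 7 features each (kinematics, impact parameters, ...)
jet = torch.randn(20, 7)
print(EdgeConv(7, 32)(jet).shape)  # torch.Size([20, 32]); stack blocks, pool, classify
```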

CMS figure 2

Another novel approach combined two distinct machine-learning tools in the search for a massive scalar X decaying into a Higgs boson and a second scalar Y. While ParticleNet identified Higgs-boson decays to two bottom quarks, potential Y signals were assigned an “anomaly score” by an autoencoder – a neural network trained to reproduce its input and highlight atypical features in the data. This technique provided sensitivity to a wide range of unexpected decays without relying on specific theoretical models. By combining targeted identification with model-independent anomaly detection, the analysis achieved both enhanced performance and broad applicability.
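The logic of autoencoder-based anomaly detection is straightforward: the network is trained to compress and reconstruct background-dominated data, so events it reconstructs poorly receive a high anomaly score. The sketch below demonstrates the technique on toy data with invented dimensions; it is an illustration of the general method, not the CMS model.

```python
# Autoencoder anomaly scoring on toy data (illustrative only).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int = 16, latent: int = 3):
        super().__init__()
        # Compress to a small latent space, then reconstruct
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def anomaly_score(model: AutoEncoder, x: torch.Tensor) -> torch.Tensor:
    # Mean squared reconstruction error per event: high = atypical
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=-1)

# Train on background-like events (random toy stand-in), then score candidates
model, background = AutoEncoder(), torch.randn(1024, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(background) - background) ** 2).mean()
    loss.backward()
    opt.step()
print(anomaly_score(model, torch.randn(5, 16)))  # higher score = more atypical
```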

Searches at the TeV scale sit at a frontier where experimental discovery is driven not only by ever-larger datasets but also by algorithmic innovation. Tools such as targeted deep neural networks, parametric neural networks (PNNs) – which efficiently scan multi-dimensional mass landscapes (see figure 2) – and model-independent anomaly detection are opening new ways to search for deviations from the Standard Model. Analyses of the full LHC Run 2 dataset have already revealed intriguing hints, with several machine-learning studies reporting local excesses – including a 3.6σ excess in a search for V′ → VV or VH → jets, and deviations up to 3.3σ in various X → HY searches. While no definitive signal has yet emerged, the steady evolution of neural-network techniques is already changing how new phenomena are sought, and anticipation is high for what they may reveal in the larger Run 3 dataset.

Standardising sustainability: step one

For a global challenge like environmental sustainability, the only panacea is international cooperation. In September, the Sustainability Working Group, part of the Laboratory Directors Group (LDG), took a step forward by publishing a report on standardising the evaluation of the carbon impact of accelerator projects. The report challenges the community to align on a common methodology for assessing sustainability and to define a small number of figures of merit that future accelerator facilities must report.

“There’s never been this type of report before,” says Maxim Titov (CEA Saclay), who co-chairs the LDG Sustainability Working Group. “The LDG Working Group consisted of representatives with technical expertise in sustainability evaluation from large institutions including CERN, DESY, IRFU, INFN, NIKHEF and STFC, as well as experts from future collider projects who signed off on the numbers.”

The report argues that carbon assessment cannot be left to the end of a project. Instead, facilities must evaluate their lifecycle footprint starting from the early design phase, all the way through construction, operation and decommissioning. Studies already conducted on civil-engineering footprints of large accelerator projects outline a reduction potential of up to 50%, says Titov.

In terms of accelerator technology, the report highlights cooling, ventilation, cryogenics, the RF cavities that accelerate charged particles and the klystrons that power them as the largest sources of inefficiency. It places particular emphasis on klystrons, identifying three high-efficiency designs currently under development that could boost their energy efficiency from 60 to 90% (CERN Courier May/June 2025 p30).

Carbon assessment cannot be left to the end of a project

The report also addresses the growing footprint of computing and AI. Training algorithms on more efficient hardware and adapting trigger systems to reduce unnecessary computation are identified as ways to cut energy use without compromising scientific output.

“You need to perform a life-cycle assessment at every stage of the project in order to understand your footprint, not just to produce numbers, but to optimise design and improve it in discussions with policymakers,” emphasises Titov. “Conducting sustainability assessments is a complex process, as the criteria have to be tailored to the maturity of each project and separately developed for scientists, policymakers, and society applications.”

Established by the CERN Council, the LDG is an international coordination body that brings together directors and senior representatives of the world’s major accelerator laboratories. Since 2021, its accelerator R&D roadmap has been organised through five expert panels: high-field magnets, RF structures, plasma and laser acceleration, muon colliders and energy-recovery linacs. The Sustainability Working Group was added in January 2024.

NuFact prepares for a precision era

The 26th edition of the International Workshop on Neutrinos from Accelerators (NuFact) attracted more than 200 physicists to Liverpool from 1 to 6 September. There was no shortage of topics to discuss. Delegates debated oscillations, scattering, accelerators, muon physics, beyond-PMNS physics, detectors, and inclusion, diversity, equity, education and outreach (IDEEO).

Neutrino physics has come a long way since the discovery of neutrino oscillations in 1998. Experiments now measure oscillation parameters with a precision of a few per cent. At NuFact 2025, the IceCube collaboration reported new oscillation measurements using atmospheric neutrinos from 11 years of observations at the South Pole. The measurements achieve world-leading sensitivity to neutrino mixing angles, alongside new constraints on the unitarity of the neutrino mixing matrix. Meanwhile, the JUNO experiment in China celebrated the start of data-taking with its liquid-scintillator detector (see “JUNO takes aim at neutrino-mass hierarchy”). JUNO will determine the neutrino mass ordering by observing the fine oscillation patterns of antineutrinos produced in nuclear reactors.

Neutrino scattering

Beyond oscillations, a major theme of the conference was neutrino scattering. Although neutrinos are the most abundant massive particles in the universe, their interactions with matter remain poorly understood. Measuring and modelling these processes is essential: they probe nuclear structure and hadronic physics in a novel way, while also providing the foundation for oscillation analyses in current and next-generation experiments. Exciting advances were reported across the field. The SBND experiment at Fermilab announced the collection of around three million neutrino interactions using the Booster Neutrino Beam. ICARUS presented its first neutrino–argon cross-section measurement. MicroBooNE, MINERvA and T2K showcased new results on neutrino–nucleus interactions and compared them with theoretical models. The e4ν collaboration highlighted electron beams as potential sources of data to refine neutrino-scattering models, supporting efforts to achieve the detailed interaction picture needed for the coming precision era of oscillation physics. At higher energies, FASER and SND@LHC showcased their LHC neutrino observations with both emulsion and electronic detectors.

Neutrino physics is one of the most vibrant and global areas of particle physics today

CERN’s role in neutrino physics was on display throughout the conference. Beyond the results from ICARUS, FASER and SND@LHC, other contributions included the first observation of neutrinos in the ProtoDUNE detectors, the status of the MUonE experiment – aimed at measuring the hadronic contribution to the muon anomalous magnetic moment – and the latest results from NA61. The role of CERN’s Neutrino Platform was also highlighted in contributions about the T2K ND280 near-detector upgrade and the WAGASCI–BabyMIND detector, both of which were largely assembled and tested at CERN. Discussions featured the results of the Water Cherenkov Test Experiment, which operated in the T9 beamline to prototype technology for Hyper-Kamiokande, and other novel CERN-based ideas, such as nuSCOPE – a proposal for a short-baseline experiment that would “tag” individual neutrinos at production, formed from the merging of ENUBET and NuTag. Building on a proof-of-principle result from NA62, which identified a neutrino candidate via its parent kaon decay, this technique could represent a paradigm shift in neutrino-beam characterisation.

NuFact 2025 reinforced the importance of diversity and inclusion in science. The IDEEO working group led discussions on how varied perspectives and equitable participation strengthen collaboration, improve problem solving and attract the next generation of researchers. Dedicated sessions on education and outreach also highlighted innovative efforts to engage wider communities and ensure that the future of neutrino physics is both scientifically robust and socially inclusive. From precision oscillation measurements to ambitious new proposals, NuFact 2025 demonstrated that neutrino physics is one of the most vibrant and global areas of particle physics today.

Mainz muses on future of kaon physics

The 13th KAONS conference convened almost 100 physicists in Mainz from 8 to 12 September. Since the first edition took place in Vancouver in 1988, the conference series has returned roughly every three years to bring together the global kaon-physics community. This edition was particularly significant, being the first since the decision not to continue CERN’s kaon programme with the proposed HIKE experiment (CERN Courier May/June 2024 p7).

CERN’s current NA62 effort was nevertheless present in force. Eight presentations spanned its wide-ranging programme, from precision studies of rare kaon decays to searches for lepton-flavour and lepton-number violation, and explorations beyond the Standard Model (SM). Complementary perspectives came from Japan’s KOTO experiment at J-PARC, from multipurpose facilities such as KLOE-2, Belle II and CERN’s LHCb experiment, as well as from a large and engaged theoretical community. Together, these contributions underscored the vitality of kaon physics: a field that continues to test the SM at the highest levels of precision, with a strong potential to uncover new physics.

NA62 reported a major success with the so-called “golden mode” ultra-rare decay K+ → π+νν̄, a process that is highly sensitive to new physics (CERN Courier July/August 2024 p30). NA62 has already delivered remarkable progress in this domain: by analysing data up to 2022, the collaboration more than doubled its sample from 20 to 51 candidate events, achieving the first 5σ observation of the decay (CERN Courier November/December 2024 p11). This is the smallest branching fraction ever measured and, intriguingly, it shows a mild 1.7σ tension with the Standard Model prediction, which is itself known with a 2% theoretical uncertainty. With the experiment continuing to collect data until CERN’s next long shutdown (LS3), NA62’s final dataset is expected to triple the current statistics, sharpening what is already one of the most stringent tests of the SM.

Another major theme was the study of rare B-meson decays in which kaons often appear in the final state, for example B → K*(→ Kπ)ℓ+ℓ−. Such processes are central to the long-debated “B anomalies”, in which certain branching fractions of rare semileptonic B decays show persistent tensions between experimental results and SM predictions (CERN Courier January/February 2025 p14). On the experimental front, CERN’s LHCb experiment continues to lead the field, delivering branching-fraction measurements with unprecedented precision. Progress is also being made on the theoretical side, though significant challenges remain in matching this precision. The conference highlighted new approaches to reducing uncertainties and biases, based both on phenomenological techniques and on lattice QCD.

Kaon physics is in a particularly dynamic phase. Theoretical predictions are reaching unprecedented precision, and two dedicated experiments are pushing the frontiers of rare kaon decays. At CERN, NA62 continues to deliver impactful results, even though plans for a next-stage European successor did not advance this year. Momentum is building in Japan, where the proposed KOTO-II upgrade, if approved, would secure the long-term future of the programme. Just after the conference, the KOTO-II collaboration held its first in-person meeting, bringing together members from both KOTO and NA62 – a promising sign for continued cross-fertilisation. Looking ahead, sustaining two complementary experimental efforts remains highly desirable: the independent cross-checks and diversified systematics they provide will be essential to fully exploit the discovery potential of rare kaon decays.

ICFA meets in Madison

Once a year, the International Committee for Future Accelerators (ICFA) assembles for an in-person meeting, typically attached to a major summer conference. The 99th edition took place on 24 August at the Wisconsin IceCube Particle Astrophysics Center in downtown Madison, one day before Lepton–Photon 2025.

While the ICFA is neither a decision-making body nor a representative of funding agencies, its mandate assigns to the committee the important task of promoting international collaboration and coordination in all phases of the construction and exploitation of very-high-energy accelerators. This role is especially relevant in today’s context of strategic planning and upcoming decisions – with the ongoing European Strategy update, the Chinese decision process on CEPC in full swing, and new perspectives emerging in the US with the recent National Academy of Sciences report (CERN Courier September/October 2025 p10).

Consequently, the ICFA heard presentations on these important topics and discussed priorities and timelines. In addition, the theme of “physics beyond colliders” – and with it, the question of maintaining scientific diversity in an era of potentially vast and costly flagship projects – featured prominently. In this context, the importance of national laboratories capable of carrying out mid-sized particle-physics experiments was underlined. This also featured in the usual ICFA regional reports.

An important part of the work of the committee is carried out by the ICFA panels – groups of experts in specific fields of high relevance. The ICFA heard reports from the various panel chairs at the Wisconsin meeting, with a focus on the Instrumentation, Innovation and Development panel, where Stefan Söldner-Rembold (Imperial College London) recently took over as chair, succeeding the late Ian Shipsey. Among other things, the panel organises several schools and training events, such as the EDIT schools, as well as prizes that increase recognition for senior and early-career researchers working in the field of instrumentation.

Maintaining scientific diversity in an era of potentially vast and costly flagship projects featured prominently

Another focus was the recent work of the Data Lifecycle panel chaired by Kati Lassila-Perini (University of Helsinki). This panel, together with numerous expert stakeholders in the field, recently published recommendations for best practices for data preservation and open science in HEP, advocating the application of the FAIR principles of findability, accessibility, interoperability and reusability at all levels of particle-physics research. The document provides guidance for researchers, experimental collaborations and organisations on implementing best-practice routines. It will now be distributed as broadly as possible and will hopefully contribute to the establishment of open and FAIR science practices.

Formally, the ICFA is a working group of the International Union for Pure and Applied Physics (IUPAP) and is linked to Commission C11, Particles and Fields. IUPAP has recently begun a “rejuvenation” effort that also involves rethinking the role of its working groups. Reflecting the continuity and importance of the ICFA’s work, Marcelo Gameiro Munhoz, chair of C11, presented a proposal to transform the ICFA into a standing committee under C11 – a new type of entity within IUPAP. This would allow ICFA to overcome its transient nature as a working group.

Finally, there were discussions on plans for a new set of ICFA seminars – triennial events in different world regions that assemble up to 250 leaders in the field. Following the 13th ICFA Seminar on Future Perspectives in High-Energy Physics, hosted by DESY in Hamburg in late 2023, the baton has now passed to Japan, which is finalising the location and date for the next edition, scheduled for late 2026.

Invisibles, in sight

Around 150 researchers gathered at CERN from 1 to 5 September to discuss the origin of the observed matter–antimatter asymmetry in the universe, the source of its accelerated expansion, the nature of dark matter and the mechanism behind neutrino masses. The vibrant atmosphere of the annual meeting of the Invisibles research network encouraged lively discussions, particularly among early-career researchers.

Marzia Bordone (University of Zurich) highlighted central questions in flavour physics, such as the tensions in the determinations of quark flavour-mixing parameters and the anomalies in leptonic and semileptonic B-meson decays (CERN Courier January/February 2025 p14). She showed that new bosons beyond the Standard Model that primarily interact with the heaviest quarks are theoretically well motivated and could be responsible for these flavour anomalies. Bordone emphasised that collaboration between experiment and theory, as well as data from future colliders like FCC-ee, will be essential to understand whether these effects are genuine signs of new physics.

Lina Necib (MIT) shared impressive new results on the distribution of galactic dark matter. Though invisible, dark matter interacts gravitationally and is present in all galaxies across the universe. Her team used exquisite data from the ESA Gaia satellite to track stellar trajectories in the Milky Way and determine the local dark-matter distribution to within 20–30% precision – which corresponds to about 300,000 dark-matter particles per cubic metre, assuming they have a mass similar to that of the proton. This is a huge improvement over what could be done just one decade ago, and will aid experiments in their direct search for dark matter in laboratories worldwide.

The most quoted dark-matter candidates at Invisibles25 were probably axions: particles once postulated to explain why the strong interactions that bind protons and neutrons behave in the same way for particles and antiparticles. Nicole Righi (King’s College London) discussed how these particles are ubiquitous in string theory. According to Righi, their detection may imply a hot Big Bang, with a rather late thermal stage, or hint at some special feature of the geometry of ultracompact dimensions related to quantum gravity.

The most intriguing talk was perhaps the CERN colloquium given by the 2011 Nobel laureate Adam Riess (Johns Hopkins University). By setting up an impressive system of distance measurements to extragalactic systems, Riess and his team have measured the expansion rate of the universe – the Hubble constant – with per cent accuracy. Their results indicate a value about 10% higher than that inferred from the cosmic microwave background within the standard ΛCDM model, a discrepancy known as the “Hubble tension”. After more than a decade of scrutiny, no single systematic error appears sufficient to account for it, and theoretical explanations remain tightly constrained (CERN Courier March/April 2025 p28). In this regard, Julien Lesgourgues (RWTH Aachen University) pointed out that, despite the thousands of papers written on the Hubble tension, there is no compelling extension of ΛCDM that could truly accommodate it.

While 95% of the universe’s energy density is invisible, the community studying it is very real. Invisibles now has a long history and is based on three innovative training networks funded by the European Union, as well as two Marie Curie exchange networks. The network includes more than 100 researchers and 50 PhD students spread across key beneficiaries in Europe, as well as America, Asia and Africa – CERN being one of its long-term partners. The energy and enthusiasm of the participants at this conference were palpable, as nature continues to offer deep mysteries that the Invisibles community strives to unravel.
