

The measurement problem, measured

A century on, physicists still disagree on what quantum mechanics actually means. Nature recently surveyed more than a thousand researchers, asking about their views on the interpretation of quantum mechanics. When broken down by career stage, the results show that a diversity of views spans all generations.

Getting eccentric with age

The Copenhagen interpretation remains the most widely held view, placing the act of measurement at the core of quantum theory well into the 2020s. Epistemic or QBist approaches, where the quantum state expresses an observer’s knowledge or belief, form the next most common group, followed by Everett’s many-worlds framework, in which all quantum outcomes continue to coexist without collapse (CERN Courier July/August 2025 p26). Other views maintain small but steady followings, including pilot-wave theory, spontaneous-collapse models and relational quantum mechanics (CERN Courier July/August 2025 p21).

Fewer than 10% of the physicists surveyed declined to express a view. Though this cohort might be expected to include proponents of the “shut up and calculate” school of thought, its apparent decline may simply reflect the undersampling of working physicists with no interest in the debate.

Crucially, confidence is modest. Most respondents view their preferred interpretation as an adequate placeholder or a useful conceptual tool. Only 24% are willing to describe their preferred interpretation as correct, leaving ample room for manoeuvre in the very foundations of fundamental physics.

Neural networks boost B-tagging

LHCb figure 1

The LHCb collaboration has developed a new inclusive flavour-tagging algorithm for neutral B-mesons. Compared to standard approaches, it can correctly identify 35% more B0 and 20% more B0s decays, expanding the dataset available for analysis. This increase in tagging power will allow for more accurate studies of charge–parity (CP) violation and B-meson oscillations.

In the Standard Model (SM), neutral B-mesons oscillate between particle and antiparticle states via second-order weak interactions involving a pair of W-bosons. Flavour-tagging techniques determine whether a neutral B-meson was initially produced as a B0 or its antiparticle B̄0, thereby enabling the measurement of time-dependent CP asymmetries. As the initial flavour can only be inferred indirectly from noisy, multi-particle correlations in the busy hadronic environment of the LHC, mistag rates have traditionally been high.

Until now, the LHCb collaboration has relied on two complementary flavour-tagging strategies. One infers the signal meson’s flavour by analysing the decay of the other b-hadron in the event, whose existence follows from bb̄ pair production in the original proton–proton collision. Since the two hadrons originate from oppositely charged, early-produced bottom quarks, the method is known as “opposite-side” (OS) tagging. The other strategy, or “same-side” (SS) tagging, uses tracks from the fragmentation process that produced the signal meson. Each provides only part of the picture, and their combination defined the state of the art in previous analyses.

The new algorithm adopts a more comprehensive approach. Using a deep neural network based on the “DeepSets” architecture, it incorporates information from all reconstructed tracks associated with the hadronisation process, rather than preselecting a subset of candidates. By considering the global structure of the event, the algorithm builds a more detailed inference of the meson’s initial flavour. This inclusive treatment of the available information increases both the sensitivity and the statistical reach of the tagging procedure.
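The key property of the DeepSets idea – embed each track independently, pool with a symmetric operation, then classify – is that the output cannot depend on the order in which tracks are listed. A minimal NumPy sketch illustrates this; the dimensions, random weights and tanh activations are toy choices for illustration, not the LHCb model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for the per-track network phi and the event-level network rho.
# Sizes are illustrative; the real tagger is far larger and trained on data.
W_phi = rng.normal(size=(4, 8))  # 4 track features -> 8-dim embedding
W_rho = rng.normal(size=8)       # pooled embedding -> scalar flavour score

def deepsets_score(tracks):
    """tracks: (n_tracks, 4) array. Output is invariant to track ordering."""
    embedded = np.tanh(tracks @ W_phi)    # phi applied to every track
    pooled = embedded.sum(axis=0)         # permutation-invariant sum pooling
    return float(np.tanh(pooled @ W_rho)) # rho maps pooled features to a score

event = rng.normal(size=(10, 4))          # a toy event with 10 tracks
shuffled = event[rng.permutation(10)]
assert np.isclose(deepsets_score(event), deepsets_score(shuffled))
```

Because the sum pool discards ordering, the network can ingest all reconstructed tracks at once rather than a preselected subset – the property the inclusive tagger exploits.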

The model was trained and calibrated using well-established B0 and B0s meson decay channels. When compared with the combination of opposite-side and same-side taggers, the inclusive algorithm displayed a 35% increase in tagging power for B0 mesons and 20% for B0s mesons (see figure 1). The improvement stems from gains in both the fraction of events that receive a flavour tag and how often the tag is correct. Tagging power is a critical figure of merit, as it determines the effective amount of usable data. Therefore, even modest gains can dramatically reduce statistical uncertainties in CP-violation and B-oscillation measurements, enhancing the experiment’s precision and discovery potential.
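The role of tagging power as an effective-statistics measure can be seen in a toy calculation. The standard figure of merit is P = ε(1 − 2ω)², where ε is the fraction of tagged events and ω the mistag rate; the efficiency and mistag values below are invented for illustration, and only the 35% relative gain is taken from the text:

```python
import math

def tagging_power(eff, mistag):
    """Effective fraction of usable events: P = eff * (1 - 2*mistag)**2."""
    return eff * (1.0 - 2.0 * mistag) ** 2

# Hypothetical baseline numbers, NOT LHCb's published values:
baseline = tagging_power(eff=0.75, mistag=0.36)  # a combined OS+SS-like tagger
improved = baseline * 1.35                       # 35% relative gain, as for B0

# The statistical uncertainty on a CP asymmetry scales like 1/sqrt(P * N),
# so a 35% gain in tagging power acts like a 35% larger dataset.
reduction = 1.0 - 1.0 / math.sqrt(1.35)
print(f"baseline P = {baseline:.4f}, improved P = {improved:.4f}")
print(f"statistical uncertainty reduced by {100 * reduction:.1f}%")
```

This is why even modest gains in tagging power translate directly into smaller statistical uncertainties without any additional collisions.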

This development illustrates how algorithmic innovation can be as important as detector upgrades in pushing the boundaries of precision. The improved tagging power effectively expands the usable data sample without requiring additional collisions, enhancing the experiment’s capacity to test the SM and seek signs of new physics within the flavour sector. The timing is particularly significant as LHCb enters Run 3 of the LHC programme, with higher data rates and improved detector components. The new algorithm is designed to integrate smoothly with existing reconstruction and analysis frameworks, ensuring immediate benefits while providing scalability for the much larger datasets expected in future runs.

As the collaboration accumulates more data, the inclusive flavour-tagging algorithm is likely to become a central tool in data analysis. Its improved performance is expected to reduce uncertainties in some of the most sensitive measurements carried out at the LHC, strengthening the search for deviations from the SM.

Machine learning and the search for the unknown

CMS figure 1

In particle physics, searches for new phenomena have traditionally been guided by theory, focusing on specific signatures predicted by models beyond the Standard Model. Machine learning offers a different way forward. Instead of targeting known possibilities, it can scan the data broadly for unexpected patterns, without assumptions about what new physics might look like. CMS analysts are now using these techniques to conduct model-independent searches for short-lived particles that could escape conventional analyses.

Dynamic graph neural networks operate on graph-structured data, processing both the attributes of individual nodes and the relationships between them. One such model is ParticleNet, which represents large-radius-jet constituents as networks to identify N-prong hadronic decays of highly boosted particles, predicting their parent’s mass. The tool recently aided a CMS search for the single production of a heavy vector-like quark (VLQ) decaying into a top quark and a scalar boson, either the Higgs or a new scalar particle. Alongside ParticleNet, a custom deep neural network was trained to identify leptonic top-quark decays by distinguishing them from background processes over a wide range of momenta. With this approach, the analysis achieved sensitivity to VLQ production cross-sections as small as 0.15 fb. Emerging methods such as transformer networks can provide even more sensitivity in future searches (see figure 1).

CMS figure 2

Another novel approach combined two distinct machine-learning tools in the search for a massive scalar X decaying into a Higgs boson and a second scalar Y. While ParticleNet identified Higgs-boson decays to two bottom quarks, potential Y signals were assigned an “anomaly score” by an autoencoder – a neural network trained to reproduce its input and highlight atypical features in the data. This technique provided sensitivity to a wide range of unexpected decays without relying on specific theoretical models. By combining targeted identification with model-independent anomaly detection, the analysis achieved both enhanced performance and broad applicability.
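The anomaly-score idea can be illustrated with a minimal stand-in: a linear autoencoder (equivalent to PCA) fitted to toy “background” data, with the reconstruction error serving as the score. All data, dimensions and thresholds below are invented; the CMS autoencoder is a far richer nonlinear model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "background" events living near a 2D plane inside a 6D feature space,
# standing in for the jet features an autoencoder would be trained on.
latent = rng.normal(size=(5000, 2))
mixing = rng.normal(size=(2, 6))
background = latent @ mixing + 0.05 * rng.normal(size=(5000, 6))

# A linear autoencoder trained by least squares reduces to PCA: keep the
# top-2 principal components as the bottleneck.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
encoder = vt[:2].T                  # 6 features -> 2-dim bottleneck

def anomaly_score(x):
    """Reconstruction error: large when x does not resemble the background."""
    z = (x - mean) @ encoder        # encode
    recon = z @ encoder.T + mean    # decode
    return float(np.sum((x - recon) ** 2))

typical = background[0]
outlier = 5.0 * rng.normal(size=6)  # a feature vector unlike the training set
print(anomaly_score(typical) < anomaly_score(outlier))  # expect True
```

Events with high scores are the “atypical” ones – candidates for unexpected Y decays – without any model of what the signal should look like.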

Searches at the TeV scale sit at the frontier where algorithmic innovation, as much as ever-larger datasets, drives experimental discovery. Tools such as targeted deep neural networks, parametric neural networks (PNNs) – which efficiently scan multi-dimensional mass landscapes (see figure 2) – and model-independent anomaly detection are opening new ways to search for deviations from the Standard Model. Analyses of the full LHC Run 2 dataset have already revealed intriguing hints, with several machine-learning studies reporting local excesses – including a 3.6σ excess in a search for V′ → VV or VH → jets, and deviations up to 3.3σ in various X → HY searches. While no definitive signal has yet emerged, the steady evolution of neural-network techniques is already changing how new phenomena are sought, and anticipation is high for what they may reveal in the larger Run 3 dataset.

Standardising sustainability: step one

For a global challenge like environmental sustainability, the only viable remedy is international cooperation. In September, the Sustainability Working Group of the Laboratory Directors Group (LDG) took a step forward by publishing a report on standardising the evaluation of the carbon impact of accelerator projects. The report challenges the community to align on a common methodology for assessing sustainability and to define a small number of figures of merit that future accelerator facilities must report.

“There’s never been this type of report before,” says Maxim Titov (CEA Saclay), who co-chairs the LDG Sustainability Working Group. “The LDG Working Group consisted of representatives with technical expertise in sustainability evaluation from large institutions including CERN, DESY, IRFU, INFN, NIKHEF and STFC, as well as experts from future collider projects who signed off on the numbers.”

The report argues that carbon assessment cannot be left to the end of a project. Instead, facilities must evaluate their lifecycle footprint starting from the early design phase, all the way through construction, operation and decommissioning. Studies already conducted on civil-engineering footprints of large accelerator projects outline a reduction potential of up to 50%, says Titov.

In terms of accelerator technology, the report highlights cooling, ventilation, cryogenics, the RF cavities that accelerate charged particles and the klystrons that power them as the largest sources of inefficiency. The report places particular emphasis on klystrons, identifying three high-efficiency designs currently under development that could boost the energy efficiency of RF cavities from 60 to 90% (CERN Courier May/June 2025 p30).
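To see why klystron efficiency matters so much, consider the grid power needed to deliver a fixed RF load. The 60% and 90% efficiencies are the report's; the 10 MW RF load is an illustrative assumption, not a number from the text:

```python
# Grid power drawn to deliver the same RF power at two klystron efficiencies.
rf_power_mw = 10.0                 # assumed RF load, for illustration only
grid_at_60 = rf_power_mw / 0.60    # ~16.7 MW drawn from the grid
grid_at_90 = rf_power_mw / 0.90    # ~11.1 MW drawn from the grid

saving = 1.0 - grid_at_90 / grid_at_60
print(f"{100 * saving:.0f}% less grid power for the same RF output")  # 33%
```

A one-third cut in the electricity demand of the RF system, for any given RF load, is why the klystron designs receive such emphasis.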

Carbon assessment cannot be left to the end of a project

The report also addresses the growing footprint of computing and AI. Training algorithms on more efficient hardware and adapting trigger systems to reduce unnecessary computation are identified as ways to cut energy use without compromising scientific output.

“You need to perform a life-cycle assessment at every stage of the project in order to understand your footprint, not just to produce numbers, but to optimise design and improve it in discussions with policymakers,” emphasises Titov. “Conducting sustainability assessments is a complex process, as the criteria have to be tailored to the maturity of each project and developed separately for scientists, policymakers and society.”

Established by the CERN Council, the LDG is an international coordination body that brings together directors and senior representatives of the world’s major accelerator laboratories. Since 2021, the LDG has been composed of five expert panels: high-field magnets, RF structures, plasma and laser acceleration, muon colliders and energy-recovery linacs. The Sustainability Working Group was added in January 2024.

NuFact prepares for a precision era

The 26th edition of the International Workshop on Neutrinos from Accelerators (NuFact) attracted more than 200 physicists to Liverpool from 1 to 6 September. There was no shortage of topics to discuss. Delegates debated oscillations, scattering, accelerators, muon physics, beyond-PMNS physics, detectors, and inclusion, diversity, equity, education and outreach (IDEEO).

Neutrino physics has come a long way since the discovery of neutrino oscillations in 1998. Experiments now measure oscillation parameters with a precision of a few per cent. At NuFact 2025, the IceCube collaboration reported new oscillation measurements using atmospheric neutrinos from 11 years of observations at the South Pole. The measurements achieve world-leading sensitivity on neutrino mixing angles, alongside new constraints on the unitarity of the neutrino mixing matrix. Meanwhile, the JUNO experiment in China celebrated the start of data-taking with its liquid-scintillator detector (see “JUNO takes aim at neutrino-mass hierarchy”). JUNO will determine the neutrino mass ordering by observing the fine oscillation patterns of antineutrinos produced in nuclear reactors.

Neutrino scattering

Beyond oscillations, a major theme of the conference was neutrino scattering. Although neutrinos are the most abundant massive particles in the universe, their interactions with matter remain poorly understood. Measuring and modelling these processes is essential: they probe nuclear structure and hadronic physics in a novel way, while also providing the foundation for oscillation analyses in current and next-generation experiments. Exciting advances were reported across the field. The SBND experiment at Fermilab announced the collection of around three million neutrino interactions using the Booster Neutrino Beam. ICARUS presented its first neutrino–argon cross-section measurement. MicroBooNE, MINERvA and T2K showcased new results on neutrino–nucleus interactions and compared them with theoretical models. The e4ν collaboration highlighted electron beams as potential sources of data to refine neutrino-scattering models, supporting efforts to achieve the detailed interaction picture needed for the coming precision era of oscillation physics. At higher energies, FASER and SND@LHC showcased their LHC neutrino observations with both emulsion and electronic detectors.

Neutrino physics is one of the most vibrant and global areas of particle physics today

CERN’s role in neutrino physics was on display throughout the conference. Beyond the results from ICARUS, FASER and SND@LHC, other contributions included the first observation of neutrinos in the ProtoDUNE detectors, the status of the MUonE experiment – aimed at measuring the hadronic contribution to the muon anomalous magnetic moment – and the latest results from NA61. The role of CERN’s Neutrino Platform was also highlighted in contributions about the T2K ND280 near-detector upgrade and the WAGASCI–BabyMIND detector, both of which were largely assembled and tested at CERN. Discussions featured the results of the Water Cherenkov Test Experiment, which operated in the T9 beamline to prototype technology for Hyper-Kamiokande, and other novel CERN-based ideas, such as nuSCOPE – a proposal for a short-baseline experiment that would “tag” individual neutrinos at production, formed from the merging of ENUBET and NuTag. Building on a proof-of-principle result from NA62, which identified a neutrino candidate via its parent kaon decay, this technique could represent a paradigm shift in neutrino beam characterisation.

NuFact 2025 reinforced the importance of diversity and inclusion in science. The IDEEO working group led discussions on how varied perspectives and equitable participation strengthen collaboration, improve problem solving and attract the next generation of researchers. Dedicated sessions on education and outreach also highlighted innovative efforts to engage wider communities and ensure that the future of neutrino physics is both scientifically robust and socially inclusive. From precision oscillation measurements to ambitious new proposals, NuFact 2025 demonstrated that neutrino physics is one of the most vibrant and global areas of particle physics today.

Mainz muses on future of kaon physics

The 13th KAONS conference convened almost 100 physicists in Mainz from 8 to 12 September. Since the first edition took place in Vancouver in 1988, the conference series has returned roughly every three years to bring together the global kaon-physics community. This edition was particularly significant, being the first since the decision not to continue CERN’s kaon programme with the proposed HIKE experiment (CERN Courier May/June 2024 p7).

CERN’s current NA62 effort was nevertheless present in force. Eight presentations spanned its wide-ranging programme, from precision studies of rare kaon decays to searches for lepton-flavour and lepton-number violation, and explorations beyond the Standard Model (SM). Complementary perspectives came from Japan’s KOTO experiment at J-PARC, from multipurpose facilities such as KLOE-2, Belle II and CERN’s LHCb experiment, as well as from a large and engaged theoretical community. Together, these contributions underscored the vitality of kaon physics: a field that continues to test the SM at the highest levels of precision, with a strong potential to uncover new physics.

NA62 reported a major success on the so-called “golden mode” ultra-rare decay K+ → π+νν̄, a process that is highly sensitive to new physics (CERN Courier July/August 2024 p30). NA62 has already delivered remarkable progress in this domain: by analysing data up to 2022, the collaboration more than doubled its sample from 20 to 51 candidate events, achieving the first 5σ observation of the decay (CERN Courier November/December 2024 p11). This is the smallest branching fraction ever measured and, intriguingly, it shows a mild 1.7σ tension with the Standard Model prediction, which is itself known with a 2% theoretical uncertainty. With the experiment continuing to collect data until CERN’s next long shutdown (LS3), NA62’s final dataset is expected to triple the current statistics, sharpening what is already one of the most stringent tests of the SM.

Another major theme was the study of rare B-meson decays where kaons often appear in the final state, for example B → K*(→ Kπ)ℓ+ℓ−. Such processes are central to the long-debated “B anomalies,” in which certain branching fractions of rare semileptonic B decays show persistent tensions between experimental results and SM predictions (CERN Courier January/February 2025 p14). On the experimental front, CERN’s LHCb experiment continues to lead the field, delivering branching-fraction measurements with unprecedented precision. Progress is also being made on the theoretical side, though significant challenges remain in matching this precision. The conference highlighted new approaches reducing uncertainties and biases, based both on phenomenological techniques and lattice QCD.

Kaon physics is in a particularly dynamic phase. Theoretical predictions are reaching unprecedented precision, and two dedicated experiments are pushing the frontiers of rare kaon decays. At CERN, NA62 continues to deliver impactful results, even though plans for a next-stage European successor did not advance this year. Momentum is building in Japan, where the proposed KOTO-II upgrade, if approved, would secure the long-term future of the programme. Just after the conference, the KOTO-II collaboration held its first in-person meeting, bringing together members from both KOTO and NA62 – a promising sign for continued cross-fertilisation. Looking ahead, sustaining two complementary experimental efforts remains highly desirable, providing independent cross-checks and diversified systematics – both essential to fully exploit the discovery potential of rare kaon decays.

ICFA meets in Madison

Once a year, the International Committee for Future Accelerators (ICFA) assembles for an in-person meeting, typically attached to a major summer conference. The 99th edition took place on 24 August at the Wisconsin IceCube Particle Astrophysics Center in downtown Madison, one day before Lepton–Photon 2025.

While the ICFA is neither a decision-making body nor a representative of funding agencies, its mandate assigns to the committee the important task of promoting international collaboration and coordination in all phases of the construction and exploitation of very-high-energy accelerators. This role is especially relevant in today’s context of strategic planning and upcoming decisions – with the ongoing European Strategy update, the Chinese decision process on CEPC in full swing, and new perspectives emerging on the US side with the recent National Academy of Sciences report (CERN Courier September/October 2025 p10).

Consequently, the ICFA heard presentations on these important topics and discussed priorities and timelines. In addition, the theme of “physics beyond colliders” – and with it, the question of maintaining scientific diversity in an era of potentially vast and costly flagship projects – featured prominently. In this context, the importance of national laboratories capable of carrying out mid-sized particle-physics experiments was underlined. This also featured in the usual ICFA regional reports.

An important part of the work of the committee is carried out by the ICFA panels – groups of experts in specific fields of high relevance. The ICFA heard reports from the various panel chairs at the Wisconsin meeting, with a focus on the Instrumentation, Innovation and Development panel, where Stefan Söldner-Rembold (Imperial College London) recently took over as chair, succeeding the late Ian Shipsey. Among other things, the panel organises several schools and training events, such as the EDIT schools, as well as prizes that increase recognition for senior and early-career researchers working in the field of instrumentation.

Maintaining scientific diversity in an era of potentially vast and costly flagship projects featured prominently

Another focus was the recent work of the Data Lifecycle panel chaired by Kati Lassila-Perini (University of Helsinki). This panel, together with numerous expert stakeholders in the field, recently published recommendations for best practices for data preservation and open science in HEP, advocating the application of the FAIR principles of findability, accessibility, interoperability and reusability at all levels of particle-physics research. The document provides guidance for researchers, experimental collaborations and organisations on implementing best-practice routines. It will now be distributed as broadly as possible and will hopefully contribute to the establishment of open and FAIR science practices.

Formally, the ICFA is a working group of the International Union for Pure and Applied Physics (IUPAP) and is linked to Commission C11, Particles and Fields. IUPAP has recently begun a “rejuvenation” effort that also involves rethinking the role of its working groups. Reflecting the continuity and importance of the ICFA’s work, Marcelo Gameiro Munhoz, chair of C11, presented a proposal to transform the ICFA into a standing committee under C11 – a new type of entity within IUPAP. This would allow ICFA to overcome its transient nature as a working group.

Finally, there were discussions on plans for a new set of ICFA seminars – triennial events in different world regions that assemble up to 250 leaders in the field. Following the 13th ICFA Seminar on Future Perspectives in High-Energy Physics, hosted by DESY in Hamburg in late 2023, the baton has now passed to Japan, which is finalising the location and date for the next edition, scheduled for late 2026.

Invisibles, in sight

Around 150 researchers gathered at CERN from 1 to 5 September to discuss the origin of the observed matter–antimatter asymmetry in the universe, the source of its accelerated expansion, the nature of dark matter and the mechanism behind neutrino masses. The vibrant atmosphere of the annual meeting of the Invisibles research network encouraged lively discussions, particularly among early-career researchers.

Marzia Bordone (University of Zurich) highlighted central questions in flavour physics, such as the tensions in the determinations of quark flavour-mixing parameters and the anomalies in leptonic and semileptonic B-meson decays (CERN Courier January/February 2025 p14). She showed that new bosons beyond the Standard Model that primarily interact with the heaviest quarks are theoretically well motivated and could be responsible for these flavour anomalies. Bordone emphasised that collaboration between experiment and theory, as well as data from future colliders like FCC-ee, will be essential to understand whether these effects are genuine signs of new physics.

Lina Necib (MIT) shared impressive new results on the distribution of galactic dark matter. Though invisible, dark matter interacts gravitationally and is present in all galaxies across the universe. Her team used exquisite data from the ESA Gaia satellite to track stellar trajectories in the Milky Way and determine the local dark-matter distribution to within 20–30% precision – which means about 300,000 dark-matter particles per cubic metre assuming they have mass similar to that of the proton. This is a huge improvement over what could be done just one decade ago, and will aid experiments in their direct search for dark matter in laboratories worldwide.
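The quoted number density follows from simple arithmetic, assuming a fiducial local dark-matter density of about 0.3 GeV per cubic centimetre – a commonly used value, not stated in the article:

```python
# Back-of-the-envelope check of the quoted dark-matter number density.
rho_local = 0.3      # GeV/cm^3, assumed fiducial local density
m_proton = 0.938     # GeV, proton mass (the assumed particle mass)
cm3_per_m3 = 1.0e6   # cubic centimetres in a cubic metre

n_per_m3 = rho_local / m_proton * cm3_per_m3
print(f"about {n_per_m3:.0f} particles per cubic metre")  # ~320,000
```

The answer is of order 300,000 per cubic metre, matching the figure in the text; a heavier candidate would imply proportionally fewer particles in the same volume.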

The most quoted dark-matter candidates at Invisibles25 were probably axions: particles once postulated to explain why the strong interactions that bind protons and neutrons behave in the same way for particles and antiparticles. Nicole Righi (King’s College London) discussed how these particles are ubiquitous in string theory. According to Righi, their detection may imply a hot Big Bang, with a rather late thermal stage, or hint at some special feature of the geometry of ultracompact dimensions related to quantum gravity.

The most intriguing talk was perhaps the CERN colloquium given by the 2011 Nobel laureate Adam Riess (Johns Hopkins University). By setting up an impressive system of distance measurements to extragalactic systems, Riess and his team have measured the expansion rate of the universe – the Hubble constant – with per cent accuracy. Their results indicate a value about 10% higher than that inferred from the cosmic microwave background within the standard ΛCDM model, a discrepancy known as the “Hubble tension”. After more than a decade of scrutiny, no single systematic error appears sufficient to account for it, and theoretical explanations remain tightly constrained (CERN Courier March/April 2025 p28). In this regard, Julien Lesgourgues (RWTH Aachen University) pointed out that, despite the thousands of papers written on the Hubble tension, there is no compelling extension of ΛCDM that could truly accommodate it.
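The size of the tension can be checked with representative published values, which are assumptions here rather than numbers quoted in the article: a local distance-ladder measurement near 73 km/s/Mpc versus roughly 67.4 km/s/Mpc inferred from the cosmic microwave background under ΛCDM:

```python
# Rough size of the Hubble tension with representative (assumed) values.
h0_local = 73.0   # km/s/Mpc, distance-ladder measurement (assumed value)
h0_cmb = 67.4     # km/s/Mpc, CMB inference under LambdaCDM (assumed value)

excess = h0_local / h0_cmb - 1.0
print(f"local value higher by about {100 * excess:.0f}%")  # ~8%, order 10%
```

Both measurements individually carry per-cent-level uncertainties, which is why a discrepancy of this size has resisted explanation by any single systematic error.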

While 95% of the universe’s energy density is invisible, the community studying it is very real. Invisibles now has a long history and is based on three innovative training networks funded by the European Union, as well as two Marie Curie exchange networks. The network includes more than 100 researchers and 50 PhD students spread across key beneficiaries in Europe, as well as America, Asia and Africa – CERN being one of their long-term partners. The energy and enthusiasm of the participants at this conference were palpable, as nature continues to offer deep mysteries that the Invisibles community strives to unravel.

Higgs hunters revel in Run 3 data

The 15th Higgs Hunting workshop took place from 15 to 17 July at IJCLab in Orsay and LPNHE in Paris, offering about 100 participants an opportunity to step back and review the most recent LHC Run 2 and Run 3 Higgs-boson results, together with some of the latest theoretical developments.

One of the highlights concerned the Higgs boson’s coupling to the charm quark, with the CMS collaboration presenting a new search using Higgs production in association with a top–antitop pair. The analysis, targeting Higgs decays into charm–quark pairs, reached a sensitivity comparable to the best existing direct constraints on this elusive interaction. New ATLAS analyses showcased the impact of the large Run 3 dataset, hinting at great potential for Higgs physics in the years to come – for example, Run 3 data has reduced the uncertainties on the coupling of the Higgs boson to muons and Zγ by 30% and 38%, respectively. On the di-Higgs front, the expected upper limit on the signal-strength modifier, measured in the bb̄γγ final state only, now surpasses the sensitivity of the combination of all Run 2 HH channels (see “A step towards the Higgs self-coupling”). The sensitivity to di-Higgs production is expected to improve significantly during Run 3, raising hopes of seeing a signal before the next long shutdown, from mid-2026 to the end of 2029.

Juan Rojo (Vrije Universiteit Amsterdam) discussed parton distribution functions for Higgs processes at the LHC, while Thomas Gehrmann (University of Zurich) reviewed recent developments in general Higgs theory. Mathieu Pellen (University of Freiburg) provided a review of vector-boson fusion, Jose Santiago Perez (University of Granada) summarised the effective field theory framework and Oleksii Matsedonskyi (University of Cambridge) reviewed progress on electroweak phase transitions. In his “vision” talk, Alfredo Urbano (INFN Rome) discussed the interplay between Higgs physics and early-universe cosmology. Finally, Benjamin Fuks (LPTHE, Sorbonne University) presented a toponium model, bringing the elusive romance of top–quark pairs back into the spotlight (CERN Courier September/October 2025 p9).

After a cruise on the Seine in the light of the Olympic Cauldron, participants were propelled toward the future during the European Strategy for Particle Physics session. The ESPPU secretary Karl Jakobs (University of Freiburg) and various session speakers set the stage for spirited and vigorous discussions of the options before the community – in particular, the scenarios to pursue should the FCC programme, the clear plan A, not be realised. The next Higgs Hunting workshop will be held in Orsay and Paris from 16 to 18 September 2026.

All aboard the scalar adventure

Since the discovery of the Higgs boson in 2012, the ATLAS and CMS collaborations have made significant progress in scrutinising its properties and interactions. So far, measurements are compatible with an elementary Higgs boson, originating from the minimal scalar sector required by the Standard Model. However, current experimental precision leaves ample room for this picture to change. In particular, the full potential of the LHC and its high-luminosity upgrade to search for a richer scalar sector beyond the Standard Model (BSM) is only beginning to be tapped.

The first Workshop on the Impact of Higgs Studies on New Theories of Fundamental Interactions, which took place on the Island of Capri, Italy, from 6 to 10 October 2025, gathered around 40 experimentalists and theorists to explore the pivotal role of the Higgs boson in exploring BSM physics. Participants discussed the implications of extended scalar sectors and the latest ATLAS and CMS searches, including current potential anomalies in LHC data.

“The Higgs boson has moved from the realm of being just a new particle to becoming a tool for searches for BSM particles,” said Greg Landsberg (Brown University) in an opening talk.

An extended scalar sector can address several mysteries in the SM. For example, it could serve as a mediator to a hidden sector that includes dark-matter particles, or play a role in generating the observed matter–antimatter asymmetry during an electroweak phase transition. Modified or extended Higgs sectors also arise in supersymmetric and other BSM models that address why the 125 GeV Higgs boson is so light compared to the Planck mass – despite quantum corrections that should drive it to much higher scales – and might shed light on the perplexing pattern of fermion masses and flavours.

One way to look for new physics in the scalar sector is to search for modifications of the decay rates, coupling strengths and CP properties of the Higgs boson. Another is to look for signs of additional neutral or charged scalar bosons, such as those predicted in longstanding two-Higgs-doublet or Higgs-triplet models. The workshop saw ATLAS and CMS researchers present their latest limits on extended Higgs sectors, which are based on an increasing number of model-independent or signature-based searches. While the data so far are consistent with the SM, a few mild excesses have attracted the attention of some theorists.

In diphoton final states, a slight excess of events persists in CMS data at a mass of 95 GeV. Hints of a small excess at a mass of 152 GeV are also present in ATLAS data, while a previously reported excess at 650 GeV has faded after full examination of Run 2 data. Workshop participants also heard suggestions that the Brout–Englert–Higgs potential could allow for a second resonance at 690 GeV.

The High-Luminosity LHC will enable us to explore the scalar sector in detail

“We haven’t seen concrete evidence for extended Higgs sectors, but intriguing features appear in various mass scales,” said CMS collaborator Sezen Sekmen (Kyungpook National University). “Run 3 ATLAS and CMS searches are in full swing, with improved triggering, object reconstruction and analysis techniques.”

Di-Higgs production, the rate of which depends on the strength of the Higgs boson’s self-coupling, offers a direct probe of the shape of the Brout–Englert–Higgs potential and is a key target of the LHC Higgs programme. Multiple SM extensions predict measurable effects on the di-Higgs production rate. In addition to non-resonant searches in di-Higgs production, ATLAS and CMS are pursuing a number of searches for BSM resonances decaying into a pair of Higgs bosons, which were shown during the workshop.

Rich exchanges between experimentalists and theorists in an informal setting gave rise to several new lines of attack for physicists to explore further. Moreover, the critical role of the High-Luminosity LHC to probe the scalar sector of the SM at the TeV scale was made clear.

“Much discussed during this workshop was the concern that people in the field are becoming demotivated by the lack of discoveries at the LHC since the Higgs, and that we have to wait for a future collider to make the next advance,” says organiser Andreas Crivellin (University of Zurich). “Nothing could be further from the truth: the scalar sector is not only the least explored of the SM and the one with the greatest potential to conceal new phenomena, but one that the High-Luminosity LHC will enable us to explore in detail.”
