DESI hints at evolving dark energy

The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.
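This balance can be made quantitative with the second Friedmann equation, a standard result of general-relativistic cosmology:

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,\left(\rho + 3p\right),
\]

where a is the cosmic scale factor, ρ the total energy density and p the pressure. Ordinary and dark matter, with negligible pressure, always decelerate the expansion; accelerated expansion (ä > 0) requires a component with sufficiently negative pressure, p < –ρ/3, which is exactly the role dark energy plays.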

Cosmological constant

In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
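The dependence of the dark-energy density on redshift follows from the continuity equation, ρ̇ = –3H(1 + w)ρ. Below is a minimal Python sketch of its solution, comparing constant w = –1 with an evolving equation of state in the commonly used parametrisation w(z) = w0 + wa z/(1 + z); the parameter values are purely illustrative, not DESI's fitted ones.

```python
import numpy as np

def rho_de(z, w0=-1.0, wa=0.0):
    """Dark-energy density relative to today, rho(z)/rho_0, for
    w(z) = w0 + wa * z / (1 + z).  Obtained by integrating the
    continuity equation d(ln rho) = 3 * (1 + w(z)) * dz / (1 + z)."""
    z = np.asarray(z, dtype=float)
    return (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))

z = np.linspace(0.0, 2.5, 6)
print(rho_de(z))                      # w = -1: exactly constant
print(rho_de(z, w0=-0.8, wa=-0.8))    # evolving w: density changes with z
```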

These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to build a catalogue of distance measurements based on each object’s velocity-induced shift in wavelength, or redshift. For its first data release, DESI used 6 million such redshifts to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.

The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).

Tension with ΛCDM

The 3D map produced by DESI can be used to study the evolution of the universe as it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB; however, they also left imprints in the distribution of baryonic matter up to the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures observed today.

The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.

Among other studies, the latest DESI data were combined with CMB observations and fitted to the ΛCDM model. This works relatively well, although it requires a lower matter-density parameter than found from CMB data alone. However, the resulting cosmological parameters give a poor match to the supernova data probing the more recent universe. Similarly, fitting the ΛCDM model to the supernova data results in poor agreement with both the DESI and CMB data, putting some strain on the ΛCDM model. Things do not improve significantly when these analyses are given extra freedom by allowing w to differ from –1.

The new data release provides significant evidence of a deviation from the ΛCDM model

An adaptation of the ΛCDM model that brings all three datasets into agreement requires w to evolve with redshift, or time. The implications of these results for the acceleration of the universe are shown in the “Tension with ΛCDM” figure, which plots the deceleration parameter q of the expansion as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, the acceleration increases with time, as the redshift approaches 0. The DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than predicted by ΛCDM.
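For a flat universe containing matter and dark energy, the deceleration parameter can be written in closed form from the Friedmann equations:

\[
q(z) \equiv -\frac{\ddot a\,a}{\dot a^{2}} = \tfrac{1}{2}\,\Omega_{\mathrm m}(z) + \tfrac{1}{2}\left[1 + 3w(z)\right]\Omega_{\mathrm{DE}}(z) = \tfrac{1}{2} + \tfrac{3}{2}\,w(z)\,\Omega_{\mathrm{DE}}(z),
\]

where the last step uses flatness, Ωm + ΩDE = 1. With w = –1, q falls monotonically towards –1 as dark energy comes to dominate; an evolving w(z) can instead make q turn over, which is the behaviour the DESI fits prefer.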

Although this model matches the data well, a theoretical explanation is difficult. In particular, the data imply that w(z) was below –1 in the past, which translates into an energy density that increases with the expansion; yet the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.

Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and on which datasets are combined; however, all such studies give similar results. As no 5σ discrepancy has yet been found, there is no reason to discard ΛCDM, though this could change with another two years of DESI data still to come, along with data from the European Euclid mission, the Vera C Rubin Observatory and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion over various redshift ranges.

FCC feasibility study complete

The final report of a detailed study investigating the technical and financial feasibility of a Future Circular Collider (FCC) at CERN was released on 31 March. Building on a conceptual design study conducted between 2014 and 2018, the three-volume report is authored by over 1400 scientists and engineers in more than 400 institutes worldwide, and covers aspects of the project ranging from civil engineering to socioeconomic impact. As recommended in the 2020 update to the European Strategy for Particle Physics (ESPP), it was completed in time to serve as an input to the ongoing 2026 update to the ESPP (see “European strategy update: the community speaks“).

The FCC is a proposed collider infrastructure that could succeed the LHC in the 2040s. Its scientific motivation stems from the discovery in 2012 of the final particle of the Standard Model (SM), the Higgs boson, with a mass of just 125 GeV, and the wealth of precision measurements and exploratory searches during 15 years of LHC operations that have excluded many signatures of new physics at the TeV scale. The report argues that the FCC is particularly well equipped to study the Higgs and associated electroweak sectors in detail and that it provides a broad and powerful exploratory tool that would push the limits of the unknown as far as possible.

The report describes how the FCC will seek to address key domains formulated in the 2013 and 2020 ESPP updates, including: mapping the properties of the Higgs and electroweak gauge bosons with accuracies orders of magnitude better than today to probe the processes that led to the emergence of the Brout–Englert–Higgs field’s nonzero vacuum expectation value; ensuring a comprehensive and accurate campaign of precision electroweak, quantum chromodynamics, flavour and top-quark measurements sensitive to tiny deviations from the SM, probing energy scales far beyond the direct kinematic reach; improving by orders of magnitude the sensitivity to rare and elusive phenomena at low energies, including the possible discovery of light particles with very small couplings such as those relevant to the search for dark matter; and increasing by at least an order of magnitude the direct discovery reach for new particles at the energy frontier.

This technology has significant potential for industrial and societal applications

The FCC research programme outlines two possible stages: an electron–positron collider (FCC-ee) running at several centre-of-mass energies to serve as a Higgs, electroweak and top-quark factory, followed at a later stage by a proton–proton collider (FCC-hh) operating at an unprecedented collision energy. An FCC-ee with four detectors is judged to be “the electroweak, Higgs and top factory project with the highest luminosity proposed to date”, able to produce 6 × 10^12 Z bosons, 2.4 × 10^8 W pairs, almost 3 × 10^6 Higgs bosons, and 2 × 10^6 top-quark pairs over 15 years of operations. Its versatile RF system would enable flexibility in the running sequence, states the report, allowing experimenters to move between physics programmes and scan through energies with ease. The report also outlines how the FCC-ee injector offers opportunities for other branches of science, including the production of spatially coherent photon beams with a brightness several orders of magnitude higher than any existing or planned light source.

The estimated cost of the construction of the FCC-ee is CHF 15.3 billion. This investment, which would be distributed over a period of about 15 years starting from the early 2030s, includes civil engineering, technical infrastructure, electron and positron accelerators, and four detectors.

Ready for construction

The report describes how key FCC-ee design approaches, such as a double-ring layout, top-up injection with a full-energy booster, a crab-waist collision scheme and precise energy calibration, have been demonstrated at several previous or presently operating colliders. The FCC-ee is thus “technically ready for construction” and is projected to deliver four to five orders of magnitude higher luminosity per unit electrical power than LEP. During operation, its energy consumption is estimated to vary from 1.1 to 1.8 TWh/y depending on the operation mode, compared to CERN’s current consumption of about 1.3 TWh/y. Decarbonised energy, including an ever-growing contribution from renewable sources, would be the main source of energy for the FCC. Ongoing technology R&D aims at further increasing FCC-ee’s energy efficiency (see “Powering into the future”).

Assuming 14 T Nb3Sn magnet technology as a baseline design, a subsequent hadron collider with a centre-of-mass energy of 85 TeV entering operation in the early 2070s would extend the energy frontier by a factor six and provide an integrated luminosity five to 10 times higher than that of the HL-LHC during 25 years of operation. With four detectors, FCC-hh would increase the mass reach of direct searches for new particles to several tens of TeV, probing a broad spectrum of beyond-the-SM theories and potentially identifying the sources of any deviations found in precision measurements at FCC-ee, especially those involving the Higgs boson. An estimated sample of more than 20 billion Higgs bosons would allow the absolute determination of its couplings to muons, to photons, to the top quark and to Zγ below the percent level, while di-Higgs production would bring the uncertainty on the Higgs self-coupling below the 5% level. FCC-hh would also significantly advance understanding of the hot QCD medium by enabling lead–lead and other heavy-ion collisions at unprecedented energies, and could be configured to provide electron–proton and electron–ion collisions, says the report.
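The quoted collision energy is consistent with simple magnetic-rigidity scaling, by which the beam energy is fixed by the dipole field and the bending radius. As a rough cross-check, assuming a dipole fill factor of roughly 70% of the 90.7 km circumference (an illustrative figure, not taken from the report):

\[
E\,[\mathrm{TeV}] \approx 0.3\,B\,[\mathrm{T}]\,\rho\,[\mathrm{km}], \qquad \rho \approx 0.7 \times \frac{90.7\ \mathrm{km}}{2\pi} \approx 10\ \mathrm{km} \;\Rightarrow\; E \approx 0.3 \times 14 \times 10 \approx 42\ \mathrm{TeV}
\]

per beam, i.e. roughly 85 TeV in the centre of mass for 14 T dipoles.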

The FCC-hh design is based on LHC experience and would leverage a substantial amount of the technical infrastructure built for the first FCC stage. Two hadron injector options are under study involving a superconducting machine in either the LHC or SPS tunnel. For the purpose of a technical feasibility analysis, a reference scenario based on 14 T Nb3Sn magnets cooled to 1.9 K was considered, yielding 2.4 MW of synchrotron radiation and a power consumption of 360 MW or 2.3 TWh/y – a comparable power consumption to FCC-ee.

FCC-hh’s power consumption might be reduced below 300 MW if the magnet temperature can be raised to 4.5 K. Outlining the potential use of high-temperature superconductors for 14 to 20 T dipole magnets operating at temperatures between 4.5 K and 20 K, the report notes that such technology could either extend the centre-of-mass energy of FCC-hh to 120 TeV or lead to significantly improved operational sustainability at the same collision energy. “The time window of more than 25 years opened by the lepton-collider stage is long enough to bring that technology to market maturity,” says FCC study leader Michael Benedikt (CERN). “High-temperature superconductors have significant potential for industrial and societal applications, and particle accelerators can serve as pilots for market uptake, as was the case with the Tevatron and the LHC for NbTi technology.”

Society and sustainability

The report details the concepts and paths to keep the FCC’s environmental footprint low while boosting new technologies to benefit society and developing territorial synergies such as energy reuse. The civil construction process for FCC-ee, which would also serve FCC-hh, is estimated to result in about 500,000 tCO2(eq) over a period of 10 years, which the authors say corresponds to approximately one-third of the carbon budget of the Paris Olympic Games. A socio-economic impact assessment of the FCC integrating environmental aspects throughout its entire lifecycle reveals a positive cost–benefit ratio, even under conservative assumptions and adverse implementation conditions.

The actual journey towards the realisation of the FCC starts now

A major achievement of the FCC feasibility study has been the development of the layout and placement of the collider ring and related infrastructure, which have been optimised for scientific benefit while taking into account territorial compatibility, environmental and construction constraints, and cost. No fewer than 100 scenarios were developed and analysed before settling on the preferred option: a ring circumference of 90.7 km with shaft depths ranging between 200 and 400 m, with eight surface sites and four experiments. Throughout the study, CERN has been accompanied by its host states, France and Switzerland, working with entities at the local, regional and national levels to ensure a constructive dialogue with territorial stakeholders.

The final report of the FCC feasibility study, together with numerous referenced technical documents, has been submitted to the ongoing ESPP 2026 update, along with studies of alternative projects proposed by the community. The CERN Council may take a decision around 2028.

“After four years of effort, perseverance and creativity, the FCC feasibility study was concluded on 31 March 2025,” says Benedikt. “The actual journey towards the realisation of the FCC starts now and promises to be at least as fascinating as the successive steps that brought us to the present state.”

Gravitational remnants in the sky

Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to the most exciting frontiers in modern cosmology and particle physics.

Giovannini is an esteemed scholar and a household name in theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions of several aspects of the early universe. He also authored 2008’s A Primer on the Physics of the Cosmic Microwave Background – a book most cosmologists are very familiar with.

In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.

Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.

Gravitational rainbow

Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10–18 Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.
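The quoted low-frequency cut-off is just the statement that a gravitational wavelength cannot exceed the Hubble radius, R_H = c/H0. Numerically,

\[
f_{\min} = \frac{c}{\lambda_{\max}} \gtrsim \frac{c}{R_H} = H_0 \approx 2.3\times10^{-18}\ \mathrm{s}^{-1},
\]

taking H0 ≈ 70 km s⁻¹ Mpc⁻¹, i.e. of order 10⁻¹⁸ Hz.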

The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.

The second part of the book is about the signals of the concordance paradigm, including discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before entering technical interludes in the next section. Here, Giovannini emphasises that because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter expands to include conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.

Phenomenological implications are discussed in the final section, starting with the low-frequency branch, which covers the lowest-frequency domain. Giovannini then examines the intermediate and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, with travelling waves transforming into standing waves. The penultimate chapter addresses the hot topic of the “quantumness” of relic gravitons before reaching the conclusion. The book finishes with five appendices covering everything from notation to background material on general relativity and cosmological perturbations.

Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background and its unparalleled potential to unveil new physics, and an invaluable resource for those seeking to explore the unknown corners of particle physics and cosmology.

Colour information diffuses in Frankfurt

The 31st Quark Matter conference took place from 6 to 12 April at Goethe University in Frankfurt, Germany. This edition of the world’s flagship conference for ultra-relativistic heavy-ion physics was the best attended in the series’ history, with more than 1000 participants.

A host of experimental measurements and theoretical calculations targeted fundamental questions in many-body QCD. These included the search for a critical point along the QCD phase diagram, the extraction of the properties of the deconfined quark–gluon plasma (QGP) medium created in heavy-ion collisions, and the search for signatures of the formation of this deconfined medium in smaller collision systems.

Probing thermalisation

New results highlighted the ability of the strong force to thermalise the out-of-equilibrium QCD matter produced during the collisions. Thermalisation can be probed by taking advantage of spatial anisotropies in the initial collision geometry which, due to the rapid onset of strong interactions at early times, result in pressure gradients across the system. These pressure gradients in turn translate into a momentum-space anisotropy of produced particles in the bulk, which can be experimentally measured by taking a Fourier transform of the azimuthal distribution of final-state particles with respect to a reference event axis.
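Concretely, the momentum-space anisotropy is quantified by the Fourier coefficients v_n of the azimuthal distribution, dN/dφ ∝ 1 + 2 Σ_n v_n cos[n(φ – Ψ_n)]. Below is a toy sketch of an event-plane-style estimate of the elliptic coefficient v2 on synthetic data; the sampling function and numbers are illustrative, not an experimental procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_event(v2, psi2, n_particles=500):
    """Draw azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi2))
    by accept-reject sampling."""
    phis = []
    while len(phis) < n_particles:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * abs(v2)) < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi2)):
            phis.append(phi)
    return np.array(phis)

phi = sample_event(v2=0.10, psi2=0.3)

# Estimate the event-plane angle from the second-order Q-vector,
# then measure v2 relative to it (auto-correlations ignored here).
psi_est = 0.5 * np.arctan2(np.sin(2 * phi).sum(), np.cos(2 * phi).sum())
v2_est = np.cos(2 * (phi - psi_est)).mean()
print(v2_est)   # ~0.1, up to finite-multiplicity fluctuations
```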

An area of active experimental and theoretical interest is to quantify the degree to which heavy quarks, such as charm and beauty, participate in this collective behaviour, which informs on the diffusion properties of the medium. The ALICE collaboration presented the first measurement of the second-order coefficient of the momentum anisotropy of charm baryons in Pb–Pb collisions, showing significant collective behaviour and suggesting that charm quarks undergo some degree of thermalisation. This collective behaviour appears to be stronger in charm baryons than charm mesons, following similar observations for light flavour.

A host of measurements and calculations targeted fundamental questions in many-body QCD

Due to the nature of thermalisation and the long hydrodynamic phase of the medium in Pb–Pb collisions, signatures of the microscopic dynamics giving rise to the thermalisation are often washed out in bulk observables. However, local excitations of the hydrodynamic medium, caused by the propagation of a high-energy jet through the QGP, can offer a window into such dynamics. Due to coupling to the coloured medium, the jet loses energy to the QGP, which in turn re-excites the thermalised medium. These excited states quickly decay and dissipate, and the local perturbation can partially thermalise. This results in a correlated response of the medium in the direction of the propagating jet, the distribution of which allows measurement of the thermalisation properties of the medium in a more controlled manner.

In this direction, the CMS collaboration presented the first measurement of an event-wise two-point energy–energy correlator, for events containing a Z boson, in both pp and Pb–Pb collisions. The two-point correlator represents the energy-weighted cross section of the angle between particle pairs in the event and can separate out QCD effects at different scales, as these populate different regions in angular phase space. In particular, the correlated response of the medium is expected to appear at large angles in the correlator in Pb–Pb collisions.
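Schematically, a two-point energy–energy correlator weights every particle pair by the product of its energies and histograms the pair angle; large-angle bins are where the medium response would show up. A minimal sketch follows, binning in the azimuthal pair angle; the function and names are illustrative, not the CMS definition.

```python
import numpy as np

def eec(energies, phis, bins=50):
    """Two-point energy-energy correlator in the azimuthal pair angle:
    each pair (i, j) enters with weight E_i * E_j / (sum of E)^2."""
    e = np.asarray(energies, dtype=float)
    p = np.asarray(phis, dtype=float)
    i, j = np.triu_indices(len(e), k=1)          # all distinct pairs
    dphi = np.abs(p[i] - p[j])
    dphi = np.minimum(dphi, 2.0 * np.pi - dphi)  # fold into [0, pi]
    weights = e[i] * e[j] / e.sum() ** 2
    return np.histogram(dphi, bins=bins, range=(0.0, np.pi), weights=weights)

rng = np.random.default_rng(0)
hist, edges = eec(rng.exponential(5.0, size=200), rng.uniform(0, 2 * np.pi, size=200))
```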

The use of a colourless Z boson, which does not interact in the QGP, allows CMS to compare events with similar initial virtuality scales in pp and Pb–Pb collisions, without incurring biases due to energy loss in the QCD probes. The collaboration showed modifications in the two-point correlator at large angles, from pp to Pb–Pb collisions, alluding to a possible signature of the correlated response of the medium to the traversing jets. Such measurements can help guide models into capturing the relevant physical processes underpinning the diffusion of colour information in the medium.

Looking to the future

The next edition of this conference series will take place in 2027 in Jeju, South Korea, and the new results presented there should notably include the latest complement of results from the upgraded Run 3 detectors at the LHC and the newly commissioned sPHENIX detector at RHIC. New collision systems like O–O at the LHC will help shed light on many of the properties of the QGP, including its thermalisation, by varying the lifetime of the pre-equilibrium and hydrodynamic phases in the collision evolution.

PhyStat turns 25

On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars, which bring together physicists, statisticians and scientists from related fields to discuss, develop and disseminate methods for statistical data analysis and machine learning.

The special symposium heard from the founder and primary organiser of the PhyStat series, Louis Lyons (Imperial College London and University of Oxford), who together with Fred James and Yves Perrin initiated the movement with the “Workshop on Confidence Limits” in January 2000. According to Lyons, the aim of the series was to bring together physicists and statisticians, a philosophy that has been followed and extended throughout the 22 PhyStat workshops and conferences, as well as numerous seminars and “informal reviews”. Speakers called attention to recognition from the Royal Statistical Society’s pictorial timeline of statistics, starting with the use of averages by Hippias of Elis in 450 BC and culminating with the 2012 discovery of the Higgs boson with 5σ significance.

Lyons and Bob Cousins (UCLA) offered their views on the evolution of statistical practice in high-energy physics, starting in the 1960s bubble-chamber era, strongly influenced by the 1971 book Statistical Methods in Experimental Physics by W T Eadie et al., its 2006 second edition by symposium participant Fred James (CERN), as well as Statistics for Nuclear and Particle Physics (1985) by Louis Lyons – reportedly the most stolen book from the CERN library. Both Lyons and Cousins noted the interest of the PhyStat community not only in practical solutions to concrete problems but also in foundational questions in statistics, with the focus on frequentist methods setting high-energy physics somewhat apart from the Bayesian approach more widely used in astrophysics.

Giving his view of the PhyStat era, ATLAS physicist and director of the University of Wisconsin Data Science Institute Kyle Cranmer emphasised the enormous impact that PhyStat has had on the field, noting important milestones such as the ability to publish full likelihood models through the statistical package RooStats, the treatment of systematic uncertainties with profile-likelihood ratio analyses, methods for combining analyses, and the reuse of published analyses to place constraints on new physics models. Regarding the next 25 years, Cranmer predicted the increasing use of methods that have emerged from PhyStat, such as simulation-based inference, and pointed out that artificial intelligence (the elephant in the room) could drastically alter how we use statistics.

Statistician Mikael Kuusela (CMU) noted that PhyStat workshops have provided important two-way communication between the physics and statistics communities, citing simulation-based inference as an example where many key ideas were first developed in physics and later adopted by statisticians. In his view, the use of statistics in particle physics has emerged as “phystatistics”, a proper subfield with distinct problems and methods.

Another important feature of the PhyStat movement has been to encourage active participation and leadership by younger members of the community. With its 25th anniversary, the torch is now passed from Louis Lyons to Olaf Behnke (DESY), Lydia Brenner (NIKHEF) and a younger team, who will guide PhyStat into the next 25 years and beyond.

Gaseous detectors school at CERN

How do wire-based detectors compare to resistive-plate chambers? How well do micropattern gaseous detectors perform? Which gas mixtures optimise operation? How will detectors face the challenges of future, more powerful accelerators?

Thirty-two students attended the first DRD1 Gaseous Detectors School at CERN last November. The EP-DT Gas Detectors Development (GDD) lab hosted academic lectures and varied hands-on laboratory exercises. Students assembled their own detectors, learnt about their operating characteristics and explored radiation-imaging methods with state-of-the-art readout approaches – all under the instruction of more than 40 distinguished lecturers and tutors, including renowned scientists, pioneers of innovative technologies and emerging experts.

DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. The collaboration focuses on knowledge sharing and scientific exchange, in addition to the development of novel gaseous detector technologies to address the needs of future experiments. This instrumentation school, initiated in DRD1’s first year, marks the start of a series of regular training events for young researchers that will also serve to exchange ideas between research groups and encourage collaboration.

The school will take place annually, with future editions hosted at different DRD1 member institutes to reach students from a number of regions and communities.

Planning for precision at Moriond

Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.

Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch-all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated in B-, D- and K-meson decays as well as in the modelling of collider events.

Comprehensive searches

Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.

The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.

Several fields promise qualitative progress in understanding nature in the coming years

The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or what the nature of the Yukawa interactions is that couples the BEH field to quarks and leptons and produces their bizarre mass hierarchy. Gravity is also not a component of the Standard Model, and a unified theory escapes us.

At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and Run 2 results with beyond-expectation accuracies on Higgs-boson properties and electroweak precision measurements. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. At Moriond 2025 the LHCb collaboration presented another milestone in flavour physics: the first observation of CP violation in baryon decays. Its rebuilt Run 3 detector, with triggerless readout and a full software trigger, reported its first results at this conference.

Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.

Pinpointing polarisation in vector-boson scattering

In the Standard Model (SM), W and Z bosons acquire mass and longitudinal polarisation through electroweak (EW) symmetry breaking, where the Brout–Englert–Higgs mechanism transforms Goldstone bosons into their longitudinal components. One of the most powerful ways to probe this mechanism is through vector-boson scattering (VBS), a rare process represented in figure 1, where two vector bosons scatter off each other. At high (TeV-scale) energies, interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. Without the Higgs boson’s couplings to these polarisation states, their interaction rates would grow uncontrollably with energy, eventually violating unitarity, indicating a complete breakdown of the SM.
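The energy growth can be made explicit. Without the Higgs contributions, the tree-level amplitude for scattering of longitudinally polarised W bosons grows with the square of the centre-of-mass energy,

\[
\mathcal{A}\left(W_L W_L \to W_L W_L\right) \sim \frac{s}{v^{2}},
\]

where v ≈ 246 GeV is the vacuum expectation value of the Brout–Englert–Higgs field; partial-wave unitarity would then be violated at √s around 1 TeV, which is what singles out TeV-scale VBS as a decisive test.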

Measuring the polarisation of same electric charge (same sign) W-boson pairs in VBS directly tests the predicted EW interactions at high energies through precision measurements. Furthermore, beyond-the-SM scenarios predict modifications to VBS, some affecting specific polarisation states, rendering such measurements valuable avenues for uncovering new physics.

ATLAS figure 2

Using the full proton–proton collision dataset from LHC Run 2 (2015–2018, 140 fb–1 at 13 TeV), the ATLAS collaboration recently published the first evidence for longitudinally polarised W bosons in the electroweak production of same-sign W-boson pairs in final states including two same-sign leptons (electrons or muons) and missing transverse momentum, along with two jets (EW W±W±jj). This process is categorised by the polarisation states of the W bosons: fully longitudinal (WL±WL±jj), mixed (WL±WT±jj), and fully transverse (WT±WT±jj). Measuring the polarisation states is particularly challenging due to the rarity of the VBS events, the presence of two undetected neutrinos, and the absence of a single kinematic variable that efficiently distinguishes between polarisation states. To overcome this, deep neural networks (DNNs) were trained to exploit the complex correlations between event kinematic variables that characterise different polarisations. This approach enabled the separation of the fully longitudinal WL±WL±jj from the combined WT±W±jj (WL±WT±jj plus WT±WT±jj) processes as well as the combined WL±W±jj (WL±WL±jj plus WL±WT±jj) from the purely transverse WT±WT±jj contribution.

To measure the production of WL±WL±jj and WL±W±jj processes, a first DNN (inclusive DNN) was trained to distinguish EW W±W±jj events from background processes. Variables such as the invariant mass of the two highest-energy jets provide strong discrimination for this classification. In addition, two independent DNNs (signal DNNs) were trained to extract polarisation information, separating either WL±WL±jj from WT±W±jj or WL±W±jj from WT±WT±jj, respectively. Angular variables, such as the azimuthal angle difference between the leading leptons and the pseudorapidity difference between the leading and subleading jets, are particularly sensitive to the scattering angles of the W bosons, enhancing the separation power of the signal DNNs. Each DNN is trained using up to 20 kinematic variables, leveraging correlations among them to improve sensitivity.
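As an illustration of the method, and not the ATLAS implementation (the variables, network size and dataset below are invented for the sketch), a small feed-forward classifier separating two polarisation hypotheses from event kinematics could look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Toy stand-ins for up to 20 kinematic variables per event
# (dijet invariant mass, lepton delta-phi, jet delta-eta, ...)
n_events, n_vars = 20000, 20
x_long = rng.normal(0.0, 1.0, (n_events, n_vars))    # "longitudinal-like" class
x_trans = rng.normal(0.4, 1.2, (n_events, n_vars))   # "transverse-like" class
X = np.vstack([x_long, x_trans])
y = np.concatenate([np.ones(n_events), np.zeros(n_events)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers; the per-event score is the quantity a fit would use
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
dnn.fit(X_train, y_train)
scores = dnn.predict_proba(X_test)[:, 1]   # signal-like score in [0, 1]
print(dnn.score(X_test, y_test))
```

In the real analysis it is the score distribution itself, not a single cut on it, that enters the maximum-likelihood fits described below.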

The signal DNN distributions, within each inclusive DNN region, were used to extract the WL±WL±jj and WL±W±jj polarisation fractions through two independent maximum-likelihood fits. The excellent separation between the WL±W±jj and WT±WT±jj processes can be seen in figure 2 for the WL±W±jj fit, with the separation improving at higher signal-DNN scores (shown on the x-axis). An observed (expected) significance of 3.3 (4.0) standard deviations was obtained for WL±W±jj, providing the first evidence of same-sign WW production with at least one of the W bosons longitudinally polarised. No significant excess of events consistent with WL±WL±jj production was observed, leading to the most stringent 95% confidence-level upper limits to date on the WL±WL±jj cross section: 0.45 (0.70) fb observed (expected).

There is still much to understand about the electroweak sector of the Standard Model, and the measurement presented in this article remains limited by the size of the available data sample. The techniques developed in this analysis open new avenues for studying W- and Z-boson polarisation in VBS processes during the LHC Run 3 and beyond.

Particle Cosmology and Astrophysics

In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.

In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.

Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.

A much-needed update that captures the rapid developments of the past three decades

By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well calibrated introduction while still encouraging further study through more specialised texts.

Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.

Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.

Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.

Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.

ALICE measures a rare Ω baryon

ALICE figure 1

Since the discovery of the electron and proton over 100 years ago, physicists have observed a “zoo” of different types of particles. While some of these particles have been fundamental, like neutrinos and muons, many are composite hadrons consisting of quarks bound together by the exchange of gluons. Studying the zoo of hadrons – their compositions, masses, lifetimes and decay modes – allows physicists to understand the details of the strong interaction, one of the fundamental forces of nature.

The Ω(2012) was discovered by the Belle collaboration in 2018. The ALICE collaboration recently reported the observation of a signal consistent with this state, with a significance of 15σ, in proton–proton (pp) collisions at a centre-of-mass energy of 13 TeV. This is the first observation of the Ω(2012) by another experiment.

While the details of its internal structure are still up for debate, the Ω(2012) consists, at minimum, of three strange quarks bound together. It is a heavier, excited version of the ground-state Ω baryon discovered in 1964, which also contains three strange quarks. Multiple theoretical models predicted a spectrum of excited Ω baryons, with some calling for a state with a mass around 2 GeV. Following the discovery of the Ω(2012), theoretical work has attempted to describe its internal structure, with hypotheses including a simple three-quark baryon or a hadronic molecule.

Using a sample of a billion pp collisions, ALICE has measured the decay of Ω(2012) baryons to ΞK0S pairs. After travelling a few centimetres, these hadrons decay in turn, eventually producing a proton and four charged pions that are tracked by the ALICE detector.

ALICE’s measurements of the mass and width of the Ω(2012) are consistent with Belle’s, with superior precision on the mass. ALICE has also confirmed the rather narrow width of around 6 MeV, which indicates that the Ω(2012) is fairly long-lived for a particle that decays via the strong interaction. Belle’s and ALICE’s width measurements also lend support to the conclusion that the Ω(2012) has a spin-parity configuration of JP = 3/2–.
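The width-to-lifetime conversion is the usual uncertainty-principle relation; with Γ ≈ 6 MeV,

\[
\tau = \frac{\hbar}{\Gamma} \approx \frac{6.6\times10^{-22}\ \mathrm{MeV\,s}}{6\ \mathrm{MeV}} \approx 1\times10^{-22}\ \mathrm{s},
\]

roughly an order of magnitude longer-lived than a typical strong decay with a width of order 100 MeV.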

ALICE also measured the number of Ω(2012) decays to ΞK0S pairs. By comparing this to the total Ω(2012) yield based on statistical thermal model calculations, ALICE has estimated the absolute branching ratio for the Ω(2012) → ΞK0 decay. A branching ratio is the probability of decay to a given mode. The ALICE results indicate that Ω(2012) undergoes two-body (ΞK) decays more than half the time, disfavouring models of the Ω(2012) structure that require large branching ratios for three-body decays.
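Schematically, the branching-ratio estimate divides the efficiency-corrected number of observed decays by the thermal-model expectation for all produced Ω(2012). In a simplified form that ignores the isospin partitioning between the Ξ–K̄0 and Ξ0K– charge states (which the real analysis must treat properly),

\[
\mathcal{B}\big(\Omega(2012) \to \Xi \bar K\big) \sim \frac{2\,N_{\mathrm{corr}}(\Xi^- K^0_S)}{N_{\mathrm{tot}}^{\mathrm{thermal}}},
\]

where the factor 2 accounts for the K̄0 being reconstructed as a K0S only half the time.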

The present ALICE results will help to improve the theoretical description of the structure of excited baryons. They can also serve as baseline measurements in searches for modifications of Ω-baryon properties in nucleus–nucleus collisions. In the future, Ω(2012) bary­ons may also serve as new probes to study the strangeness enhancement effect observed in proton–proton and nucleus–nucleus collisions.
