
Chinese space station gears up for astrophysics

Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations are developing a number of science missions for it, ranging from optical astronomy to the detection of cosmic rays with PeV energies.

For optical astronomy, the space station will be accompanied by the Xuntian telescope, whose name translates as “survey the heavens”. Xuntian is currently scheduled for launch in mid-2025 and will fly alongside Tiangong, allowing regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing many objects to be observed at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies, where large samples of astronomical objects are typically required to study their evolution.

Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime, which contains emission lines from neutral hydrogen and other elements. While difficult to measure, this regime allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons predicted by the standard “ΛCDM” cosmological model can be accounted for.

At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material is highly dependent on its temperature, allowing the detection of the minuscule temperature increases produced when X-rays are absorbed by the material. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been ejected by the Milky Way during an active stage of its central black hole. Its energy resolution will allow Doppler shifts of the order of several eV to be measured, which requires the TES detectors to operate at 50 mK. Achieving such temperatures demands a cooling system consuming 640 W – a power level that is difficult to achieve on a satellite, but relatively easy to provide on a space station. As such, DIXE will be one of the first detectors to use this technology when it launches in 2025, leading the way for missions such as the European ATHENA mission, which plans to use it from 2037.
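To put the required energy resolution in context (a back-of-the-envelope estimate, not a figure quoted by the DIXE team), the non-relativistic Doppler relation links an energy shift to a line-of-sight velocity:

\[
\frac{\Delta E}{E} \approx \frac{v}{c}
\quad\Longrightarrow\quad
v \approx c\,\frac{\Delta E}{E} \approx \left(3\times10^{5}\ \mathrm{km\,s^{-1}}\right)\times\frac{2\ \mathrm{eV}}{1\ \mathrm{keV}} \approx 600\ \mathrm{km\,s^{-1}},
\]

so eV-level shifts at keV energies correspond to velocities of a few hundred km/s, the scale expected for gas driven out of the Milky Way.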


POLAR-2 was accepted as an international payload on the China space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor Tiangong-2. Like its predecessor, POLAR-2 will measure the polarisation of high-energy X-rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in orbit, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has led to the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. An additional wide-field-of-view X-ray polarimeter is also now foreseen for this second payload. The international team developing the three instruments, which are scheduled for launch in 2027, is led by the Institute of High Energy Physics in Beijing.

For studying the universe using even higher-energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths deep and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers, which, in combination with the deep calorimeter, will provide better angular resolution and a geometrical acceptance 30 times larger than that of the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays in European participation, combined with the requirement to launch in 2027, mean that it is currently foreseen to be a fully Chinese mission.

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.

Taking the lead in the monopole hunt

ATLAS figure 1

Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV from Run 3 of the Large Hadron Collider.
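Dirac’s argument can be summarised in a single relation (standard textbook material rather than part of the ATLAS analysis), which in Gaussian units reads

\[
e\,g = \frac{n\hbar c}{2}, \qquad n \in \mathbb{Z}
\quad\Longrightarrow\quad
g_{\mathrm{D}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\,e ,
\]

so the existence of even one monopole carrying the minimum charge g_D would force all electric charges to be integer multiples of e.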

The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions by the strong electromagnetic fields (proportional to Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes such as monopole production to stand out in the visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, which would leave a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).
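In the equivalent-photon picture (a standard approximation, not a detail specific to the ATLAS paper), each lead nucleus acts as a source of quasi-real photons with a flux proportional to Z², so the two-photon production rate is enhanced by roughly

\[
\sigma_{\gamma\gamma} \propto Z^{4} = 82^{4} \approx 4.5\times10^{7}
\]

relative to the equivalent process with single protons, which is why ultraperipheral PbPb collisions are such a favourable environment for this search.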

The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the strong magnetic fields of ultraperipheral collisions. This approach offers a more robust theoretical framework for the search.

ATLAS figure 2

The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.

Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.

No significant monopole signal was observed beyond the expected background, which was estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search open new possibilities to study other highly ionising particles that may emerge from physics beyond the Standard Model.

Unprecedented progress in energy-efficient RF

Forty-five experts from industry and academia met in the magnificent city of Toledo, Spain, from 23 to 25 September 2024 for the second workshop on efficient RF sources. Part of the I.FAST initiative on sustainable concepts and technologies (CERN Courier July/August 2024 p20), the event focused on recent advances in energy-efficient technology for the RF sources essential to accelerators. Progress in the last two years has been unprecedented, with new initiatives and accomplishments around the world fuelled by the ambitious goals of new high-energy particle-physics projects.

Out of more than 30 presentations, a significant number featured pulsed, high-peak-power RF sources working at frequencies above 3 GHz in the S, C and X bands. These involve high-efficiency klystrons that are being designed, built and tested for the KEK e–/e+ injector, the new EuPRAXIA@SPARC_LAB linac, the CLIC testing facilities, muon-collider R&D, the CEPC injector linac and the C3 project. Reported increases in beam-to-RF power efficiency range from 15 percentage points for the retrofit prototype for CLIC to more than 25 points (expected) for a new greenfield klystron design that could be used across most new projects.

A very dynamic area of R&D is the search for efficient sources of the continuous-wave (CW) and long-pulse RF needed for circular accelerators. Typically working in the L-band, existing devices deliver less than 3 MW in peak power. Solid-state amplifiers, inductive output tubes, klystrons, magnetrons, triodes and exotic newly rediscovered vacuum tubes called “tristrons” compete in this arena. Successful prototypes have been built for the High-Luminosity LHC and CEPC with power-efficiency gains of 10 to 20 points. In the case of the LHC, this will allow 15% more RF power without an impact on the electricity bill; in the case of a circular Higgs factory, it will allow a 30% reduction in the electricity bill. CERN and SLAC are also investigating very-high-efficiency vacuum tubes for the Future Circular Collider, with a potential reduction of close to 50% in the final electricity bill. A collaboration between academia and industry would certainly be required to bring this exciting new technology to light.
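As a rough illustration of how efficiency points translate into electrical power (illustrative numbers, not figures quoted at the workshop): for a fixed RF output the wall-plug power scales inversely with the efficiency η,

\[
P_{\mathrm{wall}} = \frac{P_{\mathrm{RF}}}{\eta},
\qquad
\left.\frac{P_{\mathrm{RF}}^{\mathrm{new}}}{P_{\mathrm{RF}}^{\mathrm{old}}}\right|_{P_{\mathrm{wall}}\ \mathrm{fixed}} = \frac{\eta_{\mathrm{new}}}{\eta_{\mathrm{old}}},
\]

so raising η from, say, 0.65 to 0.75 delivers about 15% more RF power for the same electricity consumption, or equivalently about 13% less electricity for the same RF output.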

Besides the astounding advances in vacuum-tube technology, solid-state amplifiers based on cheap transistors are undergoing a major transformation thanks to the adoption of gallium-nitride technology. Commercial amplifiers are now capable of delivering kilowatts of power at low duty cycles with a power efficiency of 80%, while Uppsala University and the European Spallation Source have demonstrated the same efficiency for combined systems working in CW.

The search for energy efficiency does not stop at designing and building more efficient RF sources. All aspects of operation, from power combination to the use of permanent magnets and efficient modulators, need to be folded in, as described by many concrete examples during the workshop. The field is thriving.

ICFA talks strategy and sustainability in Prague

ICFA, the International Committee for Future Accelerators, was formed in 1976 to promote international collaboration in all phases of the construction and exploitation of very-high-energy accelerators. Its 96th meeting took place on 20 and 21 July during the recent ICHEP conference in Prague. Almost all of the 16 members from across the world attended in person, making the assembly lively and constructive.

The committee heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans, including a presentation by Paris Sphicas, the chair of the European Committee for Future Accelerators (ECFA), on the process for the update of the European strategy for particle physics (ESPP). Launched by CERN Council in March 2024, the ESPP update is charged with recommending the next collider project at CERN after HL-LHC operation.

A global task

The ESPP update is also of high interest to non-European institutions and projects. Consequently, in addition to the expected input from European HEP communities, input from non-European HEP communities is also welcome. Moreover, the recent US P5 report, the Chinese plans for CEPC (with a potential positive decision in 2025/2026) and discussions about the ILC project in Japan will be important elements of the work to be carried out in the context of the ESPP update. They also emphasise the global nature of high-energy physics.

An integral part of the work of ICFA is carried out within its panels, which have been very active. Presentations were given by the new panel on the Data Lifecycle (chair Kati Lassila-Perini, Helsinki), the Beam Dynamics panel (new chair Yuan He, IMPCAS) and the Advanced and Novel Accelerators panel (new chair Patric Muggli, Max Planck Munich, represented at the meeting by Brigitte Cros, Paris-Saclay). The Instrumentation and Innovation Development panel (chair Ian Shipsey, Oxford) is setting an example with its numerous schools, the ICFA instrumentation awards and centrally sponsored instrumentation studentships for early-career researchers from underserved world regions. Finally, the chair of the ILC International Development Team panel (Tatsuya Nakada, EPFL) summarised the latest status of the ILC Technology Network and the proposed ILC collider project in Japan.


A special session was devoted to the sustainability of HEP accelerator infrastructures, considering the need to invest effort in guidelines that enable better comparison of the environmental reports of labs and infrastructures, in particular for future facilities. It was therefore natural for ICFA to hear reports not only from its panel on Sustainable Accelerators and Colliders, led by Thomas Roser (BNL), but also from the European Lab Directors Working Group on Sustainability. The latter group, chaired by Caterina Bloise (INFN) and Maxim Titov (CEA), is mandated to develop a set of key indicators and a methodology for reporting on future HEP projects, to be delivered in time for the ESPP update.

Finally, ICFA noted some very interesting structural developments in the global organisation of HEP. In the Asia-Oceania region, ACFA-HEP was recently formed as a sub-panel under the Asian Committee for Future Accelerators (ACFA), aiming for a better coordination of HEP activities in this particular region of the world. Hopefully, this will encourage other world regions to organise themselves in a similar way in order to strengthen their voice in the global HEP community – for example in Latin America. Here, a meeting was organised in August by the Latin American Association for High Energy, Cosmology and Astroparticle Physics (LAA-HECAP) to bring together scientists, institutions and funding agencies from across Latin America to coordinate actions for jointly funding research projects across the continent.

The next in-person ICFA meeting will be held during the Lepton–Photon conference in Madison, Wisconsin (USA), in August 2025.

Isolating photons at low Bjorken x

ALICE figure 1

In high-energy collisions at the LHC, prompt photons are those that do not originate from particle decays and are instead directly produced by the hard scattering of quarks and gluons (partons). Due to their early production, they provide a clean method to probe the partons inside the colliding nucleons, and in particular the fraction of the momentum of the nucleon carried by each parton (Bjorken x). The distribution of each parton in Bjorken x is known as its parton distribution function (PDF).

Theoretical models of particle production rely on precise knowledge of PDFs, which are derived from vast amounts of experimental data. The high centre-of-mass energies (√s) of the LHC probe very small values of the momentum fraction, Bjorken x. At “midrapidity”, when a parton scatters at a large angle with respect to the beam axis and a prompt photon is produced in the final state, a useful approximation to Bjorken x is provided by the dimensionless variable xT = 2pT/√s, where pT is the transverse momentum of the prompt photon.
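For example (simple arithmetic rather than an additional measurement), a prompt photon with pT = 7 GeV/c produced at √s = 13 TeV corresponds to

\[
x_{\mathrm{T}} = \frac{2p_{\mathrm{T}}}{\sqrt{s}} = \frac{2\times 7\ \mathrm{GeV}}{13\,000\ \mathrm{GeV}} \approx 1.1\times10^{-3},
\]

the lower edge of the range covered by the ALICE measurement described below.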

Prompt photons can also be produced by next-to-leading order processes such as parton fragmentation or bremsstrahlung. A clean separation of the different prompt photon sources is difficult experimentally, but fragmentation can be suppressed by selecting “isolated photons”. For a photon to be considered isolated, the sum of the transverse energies or transverse momenta of the particles produced in a cone around the photon must be smaller than some threshold – a selection that can be done both in the experimental measurement and theoretical calculations. An isolation requirement also helps to reduce the background of decay photons, since hadrons that can decay to photons are often produced in jet fragmentation.

The ALICE collaboration now reports a measurement of the differential cross-section for isolated photons at midrapidity in proton–proton collisions at √s = 13 TeV. The photon measurement is performed with the electromagnetic calorimeter, and isolated photons are selected by combining it with data from the central inner tracking system and the time-projection chamber, requiring that the summed pT of the charged particles in a cone of angular radius 0.4 radians centred on the photon candidate be smaller than 1.5 GeV/c. The isolated-photon cross-sections are obtained in the transverse-momentum range from 7 to 200 GeV/c, corresponding to 1.1 × 10⁻³ < xT < 30.8 × 10⁻³.
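As a rough illustration of the isolation criterion described above (a minimal sketch, not the ALICE analysis code; the track list, field names and helper function are invented for the example), a photon candidate is counted as isolated when the summed transverse momentum of charged particles within a cone of radius 0.4 around it stays below 1.5 GeV/c:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in pseudorapidity-azimuth space."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(photon, charged_tracks, cone_radius=0.4, pt_threshold=1.5):
    """Return True if the summed pT (GeV/c) of charged tracks inside the
    cone around the photon candidate is below the isolation threshold."""
    cone_pt = sum(
        trk["pt"]
        for trk in charged_tracks
        if delta_r(photon["eta"], photon["phi"], trk["eta"], trk["phi"]) < cone_radius
    )
    return cone_pt < pt_threshold

# Toy usage with made-up values
photon = {"pt": 12.0, "eta": 0.1, "phi": 1.2}
tracks = [{"pt": 0.6, "eta": 0.2, "phi": 1.3}, {"pt": 2.0, "eta": 1.5, "phi": -2.0}]
print(is_isolated(photon, tracks))  # True: only 0.6 GeV/c falls inside the cone
```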

Figure 1 shows the new ALICE results alongside those from ATLAS, CMS and prior measurements in proton–proton and proton–antiproton collisions at lower values of √s. The figure spans more than 15 orders of magnitude on the y-axis, which represents the cross-section, over a wide range of xT. The present measurement probes the smallest Bjorken x reached with isolated photons at midrapidity to date. The data points from all of the measurements agree when scaled with the collision energy to the power n = 4.5. Such a scaling is designed to cancel the predicted 1/(pT)ⁿ dependence of partonic 2 → 2 scattering cross-sections in perturbative QCD and reveal insights into the gluon PDF (see “The other 99%”).
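The xT scaling used in figure 1 is conventionally written as (a standard parametrisation from the literature, not something introduced by this analysis)

\[
E\,\frac{\mathrm{d}^{3}\sigma}{\mathrm{d}p^{3}} = \frac{F(x_{\mathrm{T}})}{(\sqrt{s})^{\,n}},
\]

so multiplying each measured cross-section by (√s)ⁿ collapses data taken at different collision energies onto a single curve in xT; leading-order point-like 2 → 2 parton scattering would give n = 4, and the empirical value n = 4.5 reflects scaling violations from the running coupling and the evolution of the PDFs.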

This measurement will help to constrain the gluon PDF and will play a crucial role in exploring medium-induced modifications of hard probes in nucleus–nucleus collisions.

R(D) ratios in line at LHCb

LHCb figure 1

The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation in the framework of the Standard Model (SM). The b → cτντ transition has the potential to reveal new particles or forces that interact primarily with third-generation particles, which are subject to less stringent experimental constraints at present. As a tree-level SM process mediated by W-boson exchange, its amplitude is large, resulting in large branching fractions and significant data samples to analyse.

The observable under scrutiny is the ratio of decay rates between the signal mode, involving τ and ντ leptons from the third generation of fermions, and the normalisation mode, containing μ and νμ leptons from the second generation. Within the SM, this lepton flavour universality (LFU) ratio deviates from unity only because of the different masses of the charged leptons, but new contributions could change the value of the ratios. A longstanding tension exists between the SM prediction and the experimental measurements, requiring further input to clarify the source of the discrepancy.
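Concretely, the ratios targeted here are branching-fraction ratios of the form (the standard definition, written for the charged-D modes analysed by LHCb, with charge-conjugate decays implied)

\[
\mathcal{R}(D^{(*)+}) = \frac{\mathcal{B}\!\left(B^{0}\to D^{(*)-}\tau^{+}\nu_{\tau}\right)}{\mathcal{B}\!\left(B^{0}\to D^{(*)-}\mu^{+}\nu_{\mu}\right)},
\]

which the SM predicts to lie below unity essentially because of the larger τ mass.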

The LHCb collaboration analysed four decay modes: B0 → D(*)–ℓ+νℓ, with ℓ representing τ or μ. Each is selected using the same visible final state of one muon and light hadrons from the decay of the charm meson. In the normalisation mode, the muon originates directly from the B-hadron decay, while in the signal mode it arises from the decay of the τ lepton. The four contributions are analysed simultaneously, yielding two LFU ratios between taus and muons – one using the ground state of the D+ meson and one the excited state D*+.

The control of the background contributions is particularly complicated in this analysis, as the final state is not fully reconstructible, limiting the resolution on some of the discriminating variables. Instead, a three-dimensional template fit separates the signal and the normalisation from the background using three variables: the squared momentum transfer to the lepton pair (q²); the energy of the muon in the rest frame of the B meson (Eμ*); and the invariant mass missing from the visible system. Each contribution is modelled using a template histogram derived either from simulation or from control samples selected in data.
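In terms of four-momenta, the discriminating variables follow the definitions that are standard in semitauonic B analyses (taken from the general literature rather than quoted from the LHCb paper):

\[
q^{2} = \left(p_{B} - p_{D^{(*)}}\right)^{2},
\qquad
m_{\mathrm{miss}}^{2} = \left(p_{B} - p_{D^{(*)}} - p_{\mu}\right)^{2},
\]

with Eμ* the energy of the muon evaluated in the B rest frame; because the neutrinos escape undetected, the B momentum itself is only approximately reconstructed, which is what limits the resolution on these quantities.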


To prevent the size of the simulated data samples from becoming a limiting factor in the precision of the measurement, a fast tracker-only simulation technique was exploited for the first time in LHCb. Another novel aspect of this work is the use of the HAMMER software tool during the minimisation of the likelihood fit, which enables a fast but exact variation of a template as a function of the decay-model parameters. This is important because the form factors of both the signal and normalisation channels must be allowed to vary, as the constraints derived from predictions based on precise lattice calculations can have larger uncertainties than those obtained from the fit.

The fit projection over one of the discriminating variables is shown in figure 1, illustrating the complexity of the analysed data sample but nonetheless showcasing LHCb’s ability to distinguish the signal modes (red and orange) from the normalisation modes (two shades of blue) and background contributions.

The measured LFU ratios are in good agreement with the current world average and the predictions of the SM: R(D+) = 0.249 ± 0.043 (stat.) ± 0.047 (syst.) and R(D*+) = 0.402 ± 0.081 (stat.) ± 0.085 (syst.). Under isospin-symmetry assumptions, this constitutes the world’s second most precise measurement of R(D), following a 2019 measurement by the Belle collaboration. This analysis complements other ongoing efforts at LHCb and other experiments to test LFU across different decay channels. The precision of the measurements reported here is primarily limited by the size of the signal and control samples, so more precise measurements are expected with future LHCb datasets.

Rapid developments in precision predictions

High Precision for Hard Processes in Turin

Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. To keep pace with experimental observations at the LHC and elsewhere, precision computing has had to develop rapidly in recent years – efforts that have been monitored and driven by the biennial High Precision for Hard Processes (HP2) conference for almost two decades now. The latest edition attracted 120 participants to the University of Torino from 10 to 13 September 2024.

All speakers addressed the same basic question: how can we achieve the most precise theoretical description for a wide variety of scattering processes at colliders?

The recipe for precise predictions involves many ingredients, so the talks in Torino probed several research directions. Advanced methods for the calculation of scattering amplitudes were discussed by Stephen Jones (IPPP Durham), among others. These methods can be applied to detailed high-order phenomenological calculations for QCD, electroweak processes and BSM physics, as illustrated by Ramona Groeber (Padua) and Eleni Vryonidou (Manchester). Progress in parton showers – a crucial tool to bridge amplitude calculations and experimental results – was presented by Silvia Ferrario Ravasio (CERN). Dedicated methods to deal with the delicate issue of infrared divergences in high-order cross-section calculations were reviewed by Chiara Signorile-Signorile (Max Planck Institute, Munich).

The Torino conference was dedicated to the memory of Stefano Catani, a towering figure in the field of high-energy physics, who suddenly passed away at the beginning of this year. Starting from the early 1980s, and for the whole of his career, Catani made groundbreaking contributions in every facet of HP2. He was an inspiration to a whole generation of physicists working in high-energy phenomenology. We remember him as a generous and kind person, and a scientist of great rigour and vision. He will be sorely missed.

AI treatments for stroke survivors

Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies. The toolbox of the high-energy physicist is well adapted to the task. To amplify CERN’s societal contributions through technological innovation, the Unleashing a Comprehensive, Holistic and Patient-Centric Stroke Management for a Better, Rapid, Advanced and Personalised Stroke Diagnosis, Treatment and Outcome Prediction (UMBRELLA) project – co-led by Vall d’Hebron Research Institute and Siemens Healthineers – was officially launched on 1 October 2024. The kickoff meeting in Barcelona, Spain, convened more than 20 partners, including Philips, AstraZeneca, KU Leuven and EATRIS. Backed by nearly €27 million from the EU’s Innovative Health Initiative and industry collaborators, the project aims to transform stroke care across Europe.

The meeting highlighted the urgent need to address stroke as a pressing health challenge in Europe. Each year, more than one million acute stroke cases occur in Europe, with nearly 10 million survivors facing long-term consequences. In 2017, the economic burden of stroke treatments was estimated to be €60 billion – a figure that continues to grow. UMBRELLA’s partners outlined their collective ambition to translate a vast and fragmented stroke data set into actionable care innovations through standardisation and integration.

UMBRELLA will utilise advanced digital technologies to develop AI-powered predictive models for stroke management. By standardising real-world stroke data and leveraging tools like imaging technologies, wearable devices and virtual rehabilitation platforms, UMBRELLA aims to refine every stage of care – from diagnosis to recovery. Based on post-stroke data, AI-driven insights will empower clinicians to uncover root causes of strokes, improve treatment precision and predict patient outcomes, reshaping how stroke care is delivered.

Central to this effort is the integration of CERN’s federated-learning platform, CAFEIN. A decentralised approach to training machine-learning algorithms without exchanging data, it was initiated thanks to seed funding from CERN’s knowledge-transfer budget for the benefit of medical applications. CAFEIN now promises to enhance diagnosis, treatment and prevention strategies for stroke victims, ultimately saving countless lives. A main topic of the kickoff meeting was the development of the “U-platform” – a federated data ecosystem co-designed by Siemens Healthineers and CERN. Based on CAFEIN, the infrastructure will enable the secure and privacy-preserving training of advanced AI algorithms for personalised stroke diagnostics, risk prediction and treatment decisions without sharing sensitive patient data between institutions. Building on CERN’s expertise, including its success in federated AI modelling for brain pathologies under the EU TRUSTroke project, the CAFEIN team is poised to handle the increasing complexity and scale of the datasets required by UMBRELLA.
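The idea behind federated learning can be illustrated with a minimal federated-averaging sketch (a toy example under stated assumptions, not CAFEIN’s actual implementation or API): each site trains a simple model on its own data, and only the model parameters, never the patient records, are sent to a coordinator for averaging.

```python
import numpy as np

# Minimal federated-averaging sketch: each hospital trains locally and
# shares only model parameters (never patient data) with a coordinator.
# The data, model and update rule are toy stand-ins, not CAFEIN's API.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training of a logistic-regression model."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted risk probability
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on the log-loss
    return w

def federated_round(global_w, site_datasets):
    """Aggregate locally trained weights, weighted by each site's sample count."""
    updates, sizes = [], []
    for X, y in site_datasets:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy usage: three "hospitals" with synthetic data
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, sites)
print("global model weights:", w)
```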

Beyond technological advancements, the UMBRELLA consortium discussed a plan to establish standardised protocols for acute stroke management, with an emphasis on integrating these protocols into European healthcare guidelines. By improving data collection and facilitating outcome predictions, these standards will particularly benefit patients in remote and underserved regions. The project also aims to advance research into the causes of strokes, a quarter of which remain undetermined – a statistic UMBRELLA seeks to change.

This ambitious initiative not only showcases CERN’s role in pioneering federated-learning technologies but also underscores the broader societal benefits brought by basic science. By pushing technologies beyond the state-of-the-art, CERN and other particle-physics laboratories have fuelled innovations that have an impact on our everyday lives. As UMBRELLA begins its journey, its success holds the potential to redefine stroke care, delivering life-saving advancements to millions and paving the way for a healthier, more equitable future.

Dark matter: evidence, theory and constraints

Dark Matter: Evidence, Theory and Constraints

Cold non-baryonic dark matter appears to make up 85% of the matter and 25% of the energy in our universe. However, we don’t yet know what it is. As the opening lines of many research proposals state, “The nature of dark matter is one of the major open questions in physics.”

The evidence for dark matter comes from astronomical and cosmological observations. Theoretical particle physics provides us with various well-motivated candidates, such as weakly interacting massive particles (WIMPs), axions and primordial black holes. Each has different experimental and observational signatures, and a wide range of searches is taking place. Dark-matter research spans a very broad range of topics and methods, making it a challenging research field to enter and master. Dark Matter: Evidence, Theory and Constraints by David Marsh, David Ellis and Viraf Mehta, the latest addition to the Princeton Series in Astrophysics, clearly presents the relevant essentials of all of these areas.

The book starts with a brief history of dark matter and some warm-up calculations involving units. Part one outlines the evidence for dark matter, on scales ranging from individual galaxies to the entire universe. It compactly summarises the essential background material, including cosmological perturbation theory.

Part two focuses on theories of dark matter. After an overview of the Standard Model of particle physics, it covers three candidates with very different motivations, properties and phenomenology: WIMPs, axions and primordial black holes. Part three then covers both direct and indirect searches for these candidates. I particularly like the schematic illustrations of experiments; they should be helpful for theorists who want to (and should!) understand the essentials of experimental searches.

The main content finishes with a brief overview of other dark-matter candidates. Some of these arguably merit more extensive coverage, in particular sterile neutrinos. The book ends with extensive recommendations for further reading, including textbooks, review papers and key research papers.


The one thing I would argue with is the claim in the introduction that dark matter has already been discovered. I agree with the authors that the evidence for dark matter is strong and currently cannot all be explained by modified gravity theories. However, given that all of the evidence for dark matter comes from its gravitational effects, I’m open to the possibility that our understanding of gravity is incorrect or incomplete. The authors are also more positive than I am about the prospects for dark-matter detection in the near future, claiming that we will soon know which dark-matter candidates exist “in the real pantheon of nature”. Optimism is a good thing, but this is a promise that dark-matter researchers (myself included…) have now been making for several decades.

The conversational writing style is engaging and easy to read. The annotation of equations with explanatory text is novel and helpful, and the inclusion of numerous diagrams – simple and illustrative where possible and complex when called for – aids understanding. The attention to detail is impressive. I reviewed a draft copy for the publishers, and all of my comments and suggestions have been addressed in detail.

This book will be extremely useful to newcomers to the field, and I recommend it strongly to PhD students and undergraduate research students. It is particularly well suited as a companion to a lecture course, with numerous quizzes, problems and online materials, including numerical calculations and plots using Jupyter notebooks. It will also be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The B’s Ke+es

The Implications of LHCb measurements and future prospects workshop drew more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, to a time when three- and four-sigma anomalies abounded: the inclusive/exclusive puzzles, the illuminatingly named P5′ observable and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ– and Bs → φe+e–, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long- and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ– and B → K(*)e+e– were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to the prospects for applying lattice methods to D–D̄ mixing: such studies are currently limited to the SU(3)-flavour-symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral-reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle γ. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements, ranging from CP violation to searches for very rare decays and measurements of their form factors. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states, with a big open discussion on the underlying structure of the 20 or so states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration showed its most recent publicly available measurement of the weak-mixing angle in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb has put an emphasis on combined QCD + QED and effective-field-theory calculations, and on the interplay between EW precision observables and new-physics effects in couplings to the third generation. A study of hypothetical dark photons decaying to electrons, covering phase space inaccessible to any other experiment, showed LHCb to be a unique environment for direct searches for long-lived and low-mass particles.


Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open-data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics related to LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.
