
A clear guide for accelerator physicists

Special Topics in Accelerator Physics by Alexander Wu Chao introduces the global picture of accelerator physics, clearly establishing the scope of the book from the first page. The derivations and solutions of concepts and equations are didactic throughout the chapters. Chao takes readers by the hand and guides them through important formulae and their limitations step by step, so that the reader does not miss the important parts – an extremely useful tactic for advanced masters or doctoral students whose topic of interest is among the eight special topics described.

In the first chapter, I particularly liked the way the author transitions from the Vlasov equation, a very powerful technique for studying beam–beam effects, to the Fokker–Planck equation describing the statistical interaction of charged particles inside an accelerator. Chao pedagogically introduces the potential-well distortion, which is complemented by illustrations. The discussion of wakefield acceleration, taking readers deeper into the subject and extending it to both proton and electron beams, is timely. Extending the Fokker–Planck equation to 2D and 3D systems is particularly advanced but at the same time important. The author discusses the practical applications of the transient beam distribution in simple steps and introduces the higher-order moments later. The proposed exercises, for some of which solutions are provided, are practical as well.

In chapter two, the concept of symplecticity, the conservation of phase space (a subject that causes much confusion), is discussed with concrete examples. Naming issues are meticulously explained, such as using the term short-magnet rather than thin-lens approximation in formula 2.6. Symplectic models for quadrupole magnets are introduced, and the discussion that follows is extremely useful for students and accelerator physicists who use symplectic codes such as MAD-X and would like to understand the mathematical framework behind their operation. This connects nicely with the next chapter, and the book offers useful insights into how these codes operate. In the discussion of third-order integration, Chao makes occasional mental leaps, which could be mitigated with an additional sentence. Although the discussion of higher-order and canonical integrators is rather specialised, it is still very useful.
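For readers who want the condition in symbols (standard accelerator-physics notation, summarised here rather than reproduced from the book): a map of 2n-dimensional phase space with Jacobian M is symplectic when

```latex
% Symplectic condition on the Jacobian of a transfer map,
% M_{ij} = \partial z_i^{\mathrm{out}} / \partial z_j^{\mathrm{in}}:
M^{\mathsf{T}} S\, M = S ,
\qquad
S = \operatorname{diag}\!\left(
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \ldots,
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\right) .
```

This implies det M = 1 and hence Liouville's conservation of phase-space volume; a thin-lens quadrupole kick satisfies it exactly, whereas a naive truncation of a Taylor map generally does not – which is precisely why symplectic integrators matter.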

The author introduces the extremely convenient and broadly used truncated power series algebra (TPSA) technique, used to obtain maps, in chapter three. Chao explains in a simple manner the transition from the pre-TPSA algorithms (such as TRANSPORT or COSY) to symplectic algorithms such as MAD-X or PTC, as well as the reason behind this evolution. The clear “drawbacks” discussion is very useful in this regard. 
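To illustrate the idea behind TPSA, here is a minimal one-variable sketch (our own toy example, not the algorithm of MAD-X, PTC or any particular code): a quantity is represented by its Taylor coefficients about a reference value, and arithmetic is defined directly on the coefficient lists, so that pushing a variable through a map yields the map's Taylor expansion in a single pass.

```python
# Minimal one-variable TPSA sketch: a truncated power series is a list of
# Taylor coefficients [a0, a1, ..., aN] about the reference value.

ORDER = 2  # truncation order

def tps_add(a, b):
    """Coefficient-wise sum of two truncated series."""
    return [x + y for x, y in zip(a, b)]

def tps_mul(a, b):
    """Cauchy product, discarding terms above the truncation order."""
    c = [0.0] * (ORDER + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= ORDER:
                c[i + j] += ai * bj
    return c

# Seed the variable x = x0 + dx around the reference x0 = 0.5:
x = [0.5, 1.0, 0.0]

# Push it through the toy map f(x) = x + x**2; the result holds
# f(x0), f'(x0) and f''(x0)/2 without any symbolic differentiation:
fx = tps_add(x, tps_mul(x, x))
print(fx)  # [0.75, 2.0, 1.0] -> f(0.5) = 0.75, f'(0.5) = 2, f''(0.5)/2 = 1
```

Real TPSA packages do the same thing in many variables and to high order, which is how the maps discussed in this chapter are obtained numerically.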

Special Topics in Accelerator Physics

The transition to Lie algebra in chapter four is masterful and pedagogical. Lie algebras, which can be an advanced topic and come with many formulas, are the main focus of this section of the book. In particular, the non-linearity of the drift space, which contains no fields, should catch the reader’s attention. This is followed by specialised applications for expert readers only. One of this chapter’s highlights is the derivation of the sextupole pairing, which is complemented by that of Taylor maps up to second order and their Lie algebra, although it would have been better if the “Our plan” section had been placed at the beginning of the chapter.

Chapter five covers proton-spin dynamics. Spinor formulas and the Froissart–Stora equation for the polarisation change are developed and explained. The Siberian snake technique remains one of the most well-known to retain beam polarisation, which the author discusses in detail. This links elegantly to chapter six, which introduces the reader to electron-spin dynamics where synchrotron radiation is the dominant effect and therefore constitutes a completely different research area. Chao focuses on the differences between the quantum and classical approach to synchrotron radiation, a phenomenon that cannot be ignored in high-brightness machines. Analogies between protons and electrons are then very well summarised in the recap figure 6.3. Section 6.5 is important for storage rings and leads smoothly to the Derbenev–Kondratenko formula and its applications.

Echoes

Chapter seven looks at echoes, a key technique for measuring diffusion in an accelerator. The author introduces the reader to the generality of the term and to the concept of echoes in accelerator physics. The treatment of transverse echoes (with and without diffusion) is quite analytical, and the figures are didactic.

The book concludes with a very complete, concise and detailed chapter about beam–beam effects, which acts as an introduction to collider–accelerator physics for coherent- and incoherent-effects studies. Although synchro-betatron couplings causing resonant instabilities are advanced topics, they are often seen in practice when operating the machines, and the book offers the theoretical background for a deeper understanding of these effects.

Special Topics in Accelerator Physics is well written and develops the advanced subjects in a comprehensive, complete and pedagogical way.

A theory of theories

Production of Higgs bosons in ATLAS

High-energy physics spans a wide range of energies, from a few MeV to several TeV, all of which are relevant. It is therefore often difficult to take all phenomena into account at the same time. Effective field theories (EFTs) are designed to break down this range of scales into smaller segments so that physicists can work in the relevant range. Theorists “cut” their theory’s energy scale at the order of the mass of the lightest particle omitted from the theory, such as the proton mass. Thus, multi-scale problems reduce to separate, single-scale problems (see “Scales” image). EFTs are today also understood as “bottom-up” theories. Built only out of the general field content and symmetries at the relevant scales, they allow us to test hypotheses efficiently and to select the most promising ones without needing to know the underlying theories in full detail. Thanks to their applicability to all generic classical and quantum field theories, the sheer variety of EFT applications is striking.

In hindsight, particle physicists were working with EFTs from as early as Fermi’s phenomenological picture of beta decay, in which a four-fermion vertex replaces the W-boson propagator because the momentum transfer is much smaller than the mass of the W boson (see “Fermi theory” image). Like so many profound concepts in theoretical physics, EFT was first considered in a narrow phenomenological context. One of the earliest instances was in the 1960s, when ad-hoc methods of current algebras were used to study weak interactions of hadrons. This required detailed calculations, and a simpler approach was needed to derive useful results. The heuristic idea of describing hadron dynamics with the most general Lagrangian density consistent with the symmetries, the relevant energy scale and the relevant particles – written in terms of operators multiplied by Wilson coefficients – was yet to come. With this approach, it became possible to encode local symmetries in terms of the current algebra through their association with conserved currents.
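In modern notation (our summary, not a formula from the article), Fermi's picture corresponds to the contact interaction obtained by integrating out the W boson:

```latex
% Four-fermion contact interaction replacing W exchange at q^2 << M_W^2
% (written in the later V-A form rather than Fermi's original vector coupling):
\mathcal{L}_{\mathrm{eff}} = -\frac{G_F}{\sqrt{2}}\,
\left[\bar{\psi}_p \gamma^{\mu} (1-\gamma_5)\, \psi_n\right]
\left[\bar{\psi}_e \gamma_{\mu} (1-\gamma_5)\, \psi_{\nu}\right],
\qquad
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2} .
```

The Wilson coefficient G_F thus encodes the heavy scale M_W that has been removed from the theory – the prototype of the EFT matching described in this section.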

For strong interactions, physicists described the interaction between pions with chiral perturbation theory, an effective Lagrangian that simplified current-algebra calculations and enabled the low-energy theory to be investigated systematically. This “mother” of modern EFTs describes the physics of hadrons and remains valid up to an energy scale of the order of the proton mass. Heavy-quark effective theory (HQET), introduced by Howard Georgi in 1990, complements chiral perturbation theory by describing the interactions of charm and bottom quarks. HQET allowed predictions of B-meson decay rates, since the corrections could now be classified. The more powers of energy are allowed, the more infinities appear; these infinities are cancelled by the available counter-terms.

Different effective field theories

Similarly, it is possible to regard the Standard Model as the truncation of a much more general theory that includes non-renormalisable interactions, which yield corrections of higher order in energy. This perception of the whole Standard Model as an effective field theory began to take shape in the late 1970s with Weinberg and others (see “All things EFT: a lecture series hosted at CERN” panel). Among the known corrections to the Standard Model that do not satisfy its approximate symmetries are neutrino masses, postulated in the 1960s and discovered via the observation of neutrino oscillations in the late 1990s. While the scope of EFTs was unclear initially, today we understand that all successful field theories, with which we have been working in many areas of theoretical physics, are nothing but effective field theories. EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments. The former is crucial for making accurate theoretical predictions, while the latter is central to the physics programme of CERN in general.

EFTs in particle physics

More than a decade has passed since the first run of the LHC, in which the Higgs boson and the mechanism for electroweak symmetry breaking were discovered. So far, there are no signals of new physics beyond the SM. EFTs are well suited to exploring LHC physics in depth. A typical example of a process involving two scales is Higgs-boson production, where there can be a factor of 10–100 between the boson’s mass and its transverse momentum. The calculation of each Higgs-boson production process leads to large logarithms that can invalidate perturbation theory because of the large scale separation. This is just one of many examples of the two-scale problem that arises when the full quantum-field-theory approach is applied at high-energy colliders. Traditionally, such two-scale problems have been treated in the framework of QCD factorisation and resummation.

Fermi theory

Over the past two decades, it has become possible to recast two-scale problems at high-energy colliders with the advent of soft-collinear effective theory (SCET). SCET is nowadays a popular framework used to describe Higgs physics, jets and their substructure, as well as more formal problems, such as power corrections towards the eventual reconstruction of full amplitudes. The difference between HQET and SCET is that SCET considers long-distance interactions between quarks and both soft and collinear particles, whereas HQET takes into account only soft interactions between a heavy quark and a parton. SCET is just one example where the EFT methodology has been indispensable, even though the underlying theory at much higher energies is known. Other examples of EFT applications include precision measurements of rare decays that can be described by QCD with its approximate chiral symmetry, or heavy quarks at finite temperature and density. EFT is also central to a deeper understanding of the so-called flavour anomalies, enabling comparisons between theory and experiment in terms of particular Wilson coefficients.

All things EFT: a lecture series hosted at CERN

Steven Weinberg

A novel global lecture series titled “All things EFT” was launched at CERN in autumn 2020 as a cross-cutting online series focused on the universal concept of EFT, and its application to the many areas where it is now used as a core tool in theoretical physics. Inaugurated in a formidable historical lecture by the late Steven Weinberg, who reviewed the emergence and development of the idea of EFT through to its perception nowadays as encompassing all of quantum field theory and beyond, the lecture series has amassed a large following that is still growing. The series featured outstanding speakers, world-leading experts from cosmology to fluid dynamics, condensed-matter physics, classical and quantum gravity, string theory, and of course particle physics – the birthing bed of the powerful EFT framework. The second year of the series was kicked off in a lecture dedicated to the memory of Weinberg by Howard Georgi, who looked back on the development of heavy-quark effective theory and its immediate aftermath. 

Moreover, precision measurements of Higgs and electroweak observables at the LHC and future colliders will provide opportunities to detect new-physics signals, such as resonances in invariant-mass plots, or small deviations from the SM seen in the tails of distributions, for instance at the HL-LHC – testing the perception of the SM as the low-energy incarnation of a more fundamental theory being probed at the electroweak scale. This framework is dubbed SMEFT (SM EFT) or HEFT (Higgs EFT), depending on whether the Higgs fields are expressed in terms of the Higgs doublet or the physical Higgs boson. It has recently been implemented in the data-analysis tools at the LHC, enabling analyses across different channels and even different experiments (see “LHC physics” image). At the same time, the study of SMEFT and HEFT has sparked a plethora of theoretical investigations that have uncovered remarkable underlying features, for example allowing the EFT to be extended, or placing constraints on the EFT coefficients from Lorentz invariance, causality and analyticity.
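Schematically (a standard way of writing it, summarised here rather than quoted from the article), the SMEFT organises corrections to the SM as an expansion in a heavy new-physics scale Λ:

```latex
\mathcal{L}_{\mathrm{SMEFT}}
= \mathcal{L}_{\mathrm{SM}}
+ \sum_{i} \frac{c_i^{(6)}}{\Lambda^{2}}\, \mathcal{O}_i^{(6)}
+ \sum_{j} \frac{c_j^{(8)}}{\Lambda^{4}}\, \mathcal{O}_j^{(8)}
+ \cdots
```

The operators O_i are built only from SM fields and symmetries, and LHC fits constrain the Wilson coefficients c_i across channels; the single dimension-five term omitted above, the Weinberg operator, is what generates neutrino masses in this picture.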

EFTs in gravity

Since the inception of EFT, it was believed that the framework applies only to quantum field theories capturing the physics of elementary particles at high energy scales, or equivalently at very small length scales. Thus, EFT seemed mostly irrelevant to gravitation, for which we still lack a full theory valid at quantum scales. The only way in which EFT seemed pertinent to gravitation was to think of general relativity as the first approximation to an EFT description of quantum gravity, which indeed provided a new EFT perspective at the time. However, in the past decade it has become widely acknowledged that EFT provides a powerful framework to capture classical gravitational dynamics at large length scales, as long as those scales display a clear hierarchy.

Gravitational-wave detectors

The most notable application to such classical gravitational systems came when it was realised that the EFT framework would be ideal to handle gravitational radiation emitted at the inspiral phase of a binary of compact objects, such as black holes. At this phase in the evolution of the binary, the compact objects are moving at non-relativistic velocities. Using the small velocity as the expansion parameter exhibits the separation between the various characteristic length scales of the system. Thus, the physics can be treated perturbatively. For example, it was found that even couplings manifestly change in classical systems across their characteristic scales, which was previously believed to be unique to quantum field theories. The application of EFT to the binary inspiral problem has been so successful that the precision frontier has been pushed beyond the state of the art, quickly surpassing the reach of work that has been focused on the two-body problem for decades via traditional methods in general relativity. 

This theoretical progress has made an even broader impact since the breakthrough direct discovery of gravitational waves (GWs) was announced in 2016. An inspiraling binary of black holes merged into a single black hole in less than a split second, releasing an enormous amount of energy in the form of GWs, which instigated even greater, more intense use of EFTs for the generation of theoretical GW data. In the coming years and decades, a continuous increase in the quantity and quality of real-world GW data is expected from the rapidly growing worldwide network of ground-based GW detectors, and future space-based interferometers, covering a wide range of target frequencies (see “Next generation” image).

EFTs in cosmology

Cosmology is inherently a cross-cutting domain, spanning about 60 orders of magnitude in scale, from the Planck length to the size of the observable universe. As such, cosmology generally cannot be expected to be tackled directly by each of the fundamental theories that capture particle physics or gravity. The correct description of cosmology relies heavily on work in many disparate areas of research in theoretical and experimental physics, including particle physics and general relativity among many more.

Artist’s impression of the Euclid satellite

The development of EFT applications in cosmology – including EFTs of inflation, dark matter, dark energy and even of large-scale structure – has become essential for making observable predictions in cosmology. The discovery of the accelerated expansion of the universe in 1998 exposed our difficulty in understanding gravity in both the quantum and the classical regimes. The cosmological-constant problem and the dark-matter paradigm might be hints of alternative theories of gravity at very large scales. Indeed, the problems with gravity at very high and very low energies may well be tied together. The science programme of next-generation large surveys, such as ESA’s Euclid satellite (see “Expanding horizons” image), relies heavily on all these EFT applications to exploit the enormous volumes of data that will be collected to constrain unknown cosmological parameters, thus helping to pinpoint viable theories.

The future of EFTs in physics

The EFT framework plays a key role at the exciting and rich interface between theory and experiment in particle physics, gravity and cosmology as well as in other domains, such as condensed-matter physics, which were not covered here. The technology for precision measurements in these domains is constantly being upgraded, and in the coming years and decades we are heading towards a growing influx of real-world data of higher quality. Future particle-collider projects, such as the Future Circular Collider at CERN, or China’s Circular Electron Positron Collider, are being planned and developed. Precision cosmology is also thriving, with an upcoming next-generation of very large surveys, such as the ground-based LSST, or space-based Euclid. GW detectors keep improving and multiplying, and besides those that are currently operating many more are planned, aimed at measuring various frequency ranges, which will enable a richer array of sources and events to be found.

EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments across all domains of physics

Half a century after the concept formally emerged, effective field theory is still full of surprises. Recently, the physical space of EFTs has been studied as a fundamental entity in its own right. These studies, by numerous groups worldwide, have exposed a hidden “totally positive” geometric structure, dubbed the EFT-hedron, that constrains the EFT expansion in any quantum field theory, and even string theory, from first principles such as causality, unitarity and analyticity. This recent formal progress reflects the ultimate leap in the perception of EFT as the most fundamental and most generic theoretical framework for capturing the physics of nature at all scales. Clearly, amid the vast array of formidable open questions in physics that still lie ahead, effective field theory is here to stay – for good.

A new ATLAS for the high-luminosity era

The discovery of the Higgs boson at the LHC in 2012 changed the landscape of high-energy physics forever. After just a few short years of data-taking by the ATLAS and CMS experiments, this last piece of the Standard Model (SM) was proven to exist. Since then, the Higgs sector has been studied using a rapidly growing dataset and, so far, all measurements agree with the SM predictions within the experimental uncertainties. In parallel, a comprehensive programme of searches for beyond-SM processes has been carried out, resulting in strong constraints on new physics. A harvest of precise measurements of a large variety of processes, confronted with state-of-the-art theoretical predictions, has further supported the SM. However, the theory lacks explanations for, among others, the nature of dark matter, the cosmological baryon asymmetry and neutrino masses. Importantly, the Higgs sector is related to “naturalness” problems that suggest the existence of new physics at the TeV scale, which the LHC can probe. 

The high-luminosity phase of the LHC (HL-LHC) will provide an order of magnitude more data starting from 2029, allowing precision tests of the properties of the Higgs boson and improved sensitivity to a wealth of new-physics scenarios. The HL-LHC will deliver to each of the ATLAS and CMS experiments approximately 170 million Higgs bosons and 120,000 Higgs-boson pairs over a period of about 10 years. Extrapolations of Run 2 results to the HL-LHC dataset indicate improved precision for most Higgs-boson coupling measurements: 2–4% precision on the couplings to the W, Z and third-generation fermions, and approximately 50% precision on the self-coupling by combining the ATLAS and CMS datasets. The larger dataset will also give improved sensitivity to rare vector-boson scattering processes that will offer further insights into the Higgs sector.

These precision measurements could reveal discrepancies with the SM predictions, which in turn could inform us about the energy scale of beyond-SM physics. In addition to improving SM measurements, the upgraded detectors and trigger systems being developed and constructed for the HL-LHC era will enable direct searches to better target new physics with challenging signatures. To achieve these goals, it will be essential to achieve a detailed understanding of the detector performance as well as to measure the integrated luminosity of the collected dataset to 1% precision.

Rising to the challenge 

To cope with the increased number of interactions when proton bunches collide at the HL-LHC, the ATLAS collaboration is working hard to upgrade its detectors with state-of-the-art instrumentation and technologies. These new detectors will need to cope with challenging radiation levels, higher data rates and an extreme high-occupancy environment with up to 200 proton–proton interactions per bunch crossing (see “Pileup” figure). Upgrades will include changes to the trigger and data-acquisition systems, a completely new inner tracker, as well as a new silicon timing detector (see “ATLAS Phase II” figure).

Simulated tt-bar event

The trigger and data-acquisition system will need to cope with a readout rate of 1 MHz, about 10 times higher than today. To achieve this, ATLAS will use a new architecture with a level-0 trigger (the first-level hardware trigger) based on the calorimeter and muon systems. Building on the upgrades for Run 3, which started in July 2022, the calorimeter will include capabilities for triggering at higher pseudorapidity, up to |η| = 4. During HL-LHC running, the global trigger system will be required to handle 50 Tb/s of input and to decide within 10 μs whether each event should be recorded or discarded, allowing more sophisticated particle-identification algorithms to be run online. All the detectors will require substantial upgrades to handle the higher accept rates from the trigger.
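A back-of-envelope check of what those figures imply (all input numbers are taken from the text; the buffering interpretation is our own illustration):

```python
# Back-of-envelope implications of the quoted level-0 trigger figures.
input_rate_bits = 50e12      # global-trigger input bandwidth: 50 Tb/s
latency_s = 10e-6            # level-0 decision window: 10 us
bunch_crossing_hz = 40e6     # LHC bunch-crossing frequency: 40 MHz

# Data that must be held somewhere while one decision is pending:
in_flight_bits = input_rate_bits * latency_s
in_flight_megabytes = in_flight_bits / 8 / 1e6

# Number of bunch crossings "in flight" during the latency window:
crossings_in_flight = round(bunch_crossing_hz * latency_s)

print(in_flight_megabytes)   # ~62.5 MB pending at any given moment
print(crossings_in_flight)   # 400 crossings awaiting a decision
```

The point of the exercise: even a 10 μs latency at these bandwidths means hundreds of crossings and tens of megabytes in flight at once, which is why the decision logic must run in hardware.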

The readout electronics for the electromagnetic, forward and hadronic end-cap liquid-argon calorimeters, along with the hadronic tile calorimeter, will be replaced. The full calorimeter systems, segmented into 192,320 cells that are read out individually, will be read out for every bunch crossing at the full 40 MHz to provide full-granularity information to the trigger. This will require changes to both front-end electronics and off-detector components. 

The muon system will also see significant upgrades to the on-detector electronics of the resistive plate chambers (RPCs) and thin-gap chambers (TGCs) responsible for triggering on muons, as well as the monitored drift tubes (MDTs) responsible for precisely measuring the curvature of the tracks. The MDTs will also be used for the first time in level-0 trigger decisions. These improvements will allow all data to be sent to the back-end at 40 MHz, removing the need for readout buffers on the detector itself. All hits in the detector will be used to perform trigger logic in hardware using field-programmable gate arrays. Additional improvements to increase the trigger acceptance for muons will come in the form of a new layer of RPCs to be installed in the inner barrel layer, along with new MDTs in the small sectors. The Muon New Small Wheel system, containing both triggering and precision tracking chambers, was installed during Long Shutdown 2 (LS2) from 2019 to 2022 and is located inside the end-cap toroid magnet. Additional RPC upgrades were also made in the barrel leading up to Run 3, and the TGCs in the end-cap region of the muon system will be upgraded during LS3.

State-of-the-art tracking 

The success of the research programme at the HL-LHC will strongly rely on the tracking performance, which in turn determines the ability to efficiently identify hadrons containing b and c quarks, in addition to tau and other charged leptons. Reconstructing individual particles in the HL-LHC collision environment with thousands of charged particles being produced within a region of about 10 cm will be very challenging. The entire tracking system, presently consisting of pixel and strip detectors and the transition radiation tracker, will be replaced by a new all-silicon pixel and strip tracker – the ITk. This  will feature higher granularity, increased radiation hardness and readout electronics that allow higher data rates and a longer trigger latency. The new pixel detector will also extend the pseudorapidity coverage in the forward region from |η| < 2.5 to |η| < 4, increasing the acceptance for important physics processes like vector-boson fusion (see “Pixel perfection” image).

Upgrades to the ATLAS detector

The ITk will comprise nine barrel layers, positioned at radii from 33 mm out to 1 m from the beam line, plus end-cap rings. It will be much more complex than the present ATLAS tracker, featuring 10 times the number of strip channels and 60 times the number of pixel channels. The strip detectors will cover a total surface of 160 m² with 60 million readout channels, and the pixels an area of 13 m² with more than five billion readout channels. The innermost layer will be populated with radiation-hard 3D sensors, with pixel cells of 25 × 100 µm² in the barrel and 50 × 50 µm² in the forward parts for improved tracking capabilities in the central and forward regions. Prototypes of the end-cap ring for the inner system and of the strip barrel stave are at an advanced stage (see “ITk prototyping” image). A unique feature of the trackers at the HL-LHC is that, for the first time, they will be operated with a serial powering scheme, in which a chain of modules is powered by a constant current. If the modules were powered in parallel, the high total current would lead to either high power losses or a large mass of cables within the detector volume, which would degrade the tracking performance.

Given the challenging conditions posed by the HL-LHC, ATLAS will construct a novel precision-timing silicon detector, the High-Granularity Timing Detector (HGTD), which provides a time resolution of 30 to 50 ps for charged particles. The detector will cover a pseudorapidity range of 2.4 < |η| < 4 and will comprise two double-sided silicon layers on each side of ATLAS with a total active area of 6.4 m2. The precise timing information will allow the collaboration to disentangle proton–proton interactions in the same bunch crossing in the time dimension, complementing the impressive spatial resolution of the ITk. Low-gain avalanche diodes (see “Clocking tracks” image) provide timing information that can be associated with tracks in the forward regions, where they are more difficult to assign to individual interactions using spatial information. With a timing resolution six times smaller than the temporal spread of the beam spot, tracks emanating from collisions occurring very close in space but well-separated in time can be distinguished. This is particularly important in the forward region, where reduced longitudinal impact-parameter resolution limits the performance.
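To see what a 30 ps resolution buys (a sketch with illustrative numbers: the ~180 ps beam-spot spread follows from the article's "six times smaller" statement, and the Gaussian-resolution model is our assumption):

```python
import math

# HGTD per-track time resolution (from the text: 30-50 ps; take the best case)
sigma_track_s = 30e-12

# Temporal spread of the luminous region, taken as six times the resolution
# as stated in the text (~180 ps):
sigma_beamspot_s = 6 * sigma_track_s

def separation_significance(dt_s, sigma1_s, sigma2_s):
    """Significance (in sigma) with which two vertices separated by dt in
    time can be told apart, assuming independent Gaussian resolutions."""
    return dt_s / math.hypot(sigma1_s, sigma2_s)

# Two pileup vertices one beam-spot spread apart in time:
sig = separation_significance(sigma_beamspot_s, sigma_track_s, sigma_track_s)
print(round(sig, 2))  # 4.24 -> well separated in time even when the
                      # vertices overlap spatially
```

This is the sense in which timing adds a fourth dimension to vertexing: pairs of interactions that the ITk alone cannot separate along the beam line become several standard deviations apart in time.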

Loaded prototypes

Building upon the insertable B-layer cooling system used since the start of Run 2, and to reduce the material budget, ATLAS will use a two-phase CO2 cooling system for the entire silicon ITk and the HGTD. This will allow the detectors to be cooled to around –35 °C throughout the lifetime of the HL-LHC. The low temperature is required to protect the silicon sensors from the high radiation dose they will receive during their lifetime. Two-phase CO2 cooling is an environmentally friendly option compared with other suitable coolants. It provides high heat transfer at reasonable flow parameters, low viscosity (thus reducing the material used in the detector construction) and a well-suited temperature range for detector operations.

Luminous future

Precise knowledge of the luminosity is key for the ATLAS physics programme. To reach the goal of percent-level precision at the HL-LHC, ATLAS will upgrade the LUCID (Luminosity Cherenkov Integrating Detector) detector, a luminometer that is sensitive to charged particles produced at the interaction point. This is incredibly challenging given the number of interactions expected to be delivered by the machine, and the requirements on radiation hardness and long-term stability for the lifetime of the experiment. The HGTD will also provide online luminosity measurements on a bunch-by-bunch basis, and additional detector prototypes are being tested to provide the best possible precision for luminosity determination during HL-LHC running. Luminometers in ATLAS provide luminosity monitoring to the LHC every one to two seconds, which is required for efficient beam steering, machine optimisation and fast checking of running conditions. In the forward region, the zero-degree calorimeter, which is particularly important for determining the centrality in heavy-ion collisions, is also being redesigned for HL-LHC running.

Prototype wafer

The HL-LHC will deliver luminosities of up to 7.5 × 10³⁴ cm⁻²s⁻¹, and ATLAS will record data at a rate 10 times higher than in Run 2. The ability to process and analyse these data depends heavily on R&D in software and computing, making use of resource-efficient storage solutions and of the opportunities that paradigm-shifting improvements such as heterogeneous computing, hardware accelerators and artificial intelligence can bring. This is needed to simulate and process the high-occupancy HL-LHC events, but also to provide a better theoretical description of the kinematics.

New era

The Phase-II upgrade projects described are only possible through collaborative efforts between universities and laboratories across the world. The research teams are currently working intensely to finalise the designs, establish the assembly and testing procedures, and in some cases start construction. They will all be installed and commissioned during LS3 in time for the start of Run 4, currently planned for 2029.

To cope with the increased number of interactions when proton bunches collide at the HL-LHC, the ATLAS collaboration is working hard to upgrade its detectors with state-of-the-art instrumentation and technologies

The HL-LHC will provide an order of magnitude more data recorded with a dramatically improved ATLAS detector. It will usher in a new era of precision tests of the SM, and of the Higgs sector in particular, while also enhancing sensitivity to rare processes and beyond-SM signatures. The HL-LHC physics programme relies on the successful and timely completion of the ambitious detector upgrade projects, pioneering full-scale systems with state-of-the-art detector technologies. If nature is harbouring physics beyond the SM at the TeV scale, then the HL-LHC will provide the chance to find it in the coming decades. 

LHCb brings leptons into line

At a seminar held at CERN today, the LHCb collaboration presented new measurements of rare B-meson decays that provide a high-precision test of lepton flavour universality, a key feature of the Standard Model (SM). Previous studies of these decays had hinted at intriguing tensions with predictions, but the results of an improved and wider-reaching analysis of the full LHCb dataset are in agreement with the SM.

A central mystery of particle physics is why the 12 elementary quarks and leptons are arranged in pairs across three generations, identical in all but mass. Lepton flavour universality (LFU) states that the SM gauge bosons are indifferent to which generation a charged lepton belongs, implying that certain decays of hadrons involving leptons from different generations should occur at the same rates. In recent years, however, an accumulation of results has suggested a possible violation of LFU in B-meson decays involving fundamental b- to s-quark transitions, such as the decay of a B into a K meson. Such processes are highly suppressed in the SM because they proceed through higher-order diagrams, making them promising channels in which to detect the possible influence of new particles.

A powerful test of LFU is to measure the relative rates of the processes B → Kμ+μ– and B → Ke+e–, a quantity called R(K), and the equivalent ratio for decays involving an excited kaon, R(K*). The SM predicts such ratios to be equal to unity once differences in the lepton masses are accounted for. In 2021, based on data collected during LHC Run 1 and Run 2, LHCb found R(K) to lie 3.1 σ below the SM prediction. For R(K*), measurements in 2017 based on Run 1 data were consistent with the SM at the level of 2–2.5 σ.
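Schematically, R(K) is a ratio of decay rates integrated over a window of the squared di-lepton invariant mass q² (a standard definition; the integration limits vary between analyses):

```latex
R_K \;=\; \frac{\displaystyle\int_{q^2_{\rm min}}^{q^2_{\rm max}}
      \frac{\mathrm{d}\Gamma\!\left(B^{+}\to K^{+}\mu^{+}\mu^{-}\right)}{\mathrm{d}q^{2}}\,\mathrm{d}q^{2}}
     {\displaystyle\int_{q^2_{\rm min}}^{q^2_{\rm max}}
      \frac{\mathrm{d}\Gamma\!\left(B^{+}\to K^{+}e^{+}e^{-}\right)}{\mathrm{d}q^{2}}\,\mathrm{d}q^{2}}
```

In the SM, hadronic uncertainties largely cancel in the ratio, which is why R(K) is predicted to be very close to unity once lepton-mass effects are included.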

Earlier LHCb indications of anomalies with lepton flavour universality triggered immense excitement

The latest LHCb analysis simultaneously measures R(K) and R(K*) using the full Run 1 and Run 2 datasets. A sequence of multivariate selections and strict particle-identification requirements produced a higher signal purity and a better statistical sensitivity than the previous analysis. The two ratios were also computed in two bins of the squared di-lepton invariant mass q², thereby producing four independent measurements. The measured values of R(K) and R(K*) are now compatible with the SM within 1 σ and supersede previous LHCb publications on these topics. The new value of R(K*) is based on an integrated luminosity three times larger than that used in 2017, and the two results are in broad agreement. For R(K) in the central q² region, on the other hand, the new value is significantly higher than the 2021 result.

“Although a component of this shift can be attributed to statistical effects, it is understood that this change is primarily due to systematic effects,” explains LHCb spokesperson Chris Parkes of the University of Manchester. “The systematic shift in R(K) in the central q² region compared to the 2021 result stems from an improved understanding of misidentified hadronic backgrounds to electrons, due to an underestimation of such backgrounds and the description of the distribution of these components in the fit. New datasets will allow us to further research this interesting topic, along with other key measurements relevant to the flavour anomalies.”

The search goes on

The flavour anomalies are a set of discrepancies observed over the past several years in processes involving b → s and b → c quark transitions. Among the former is the parameter P5′ based on angular distributions of the decay products of B-meson decays. Although these remain unaffected by the new LHCb result, tests of LFU via R(K)-type measurements are theoretically cleaner. On 18 October, complementing previous results by Belle, BaBar and LHCb, the LHCb collaboration made the first simultaneous measurement at a hadron collider of the parameter R(D), which compares the rates of B → Dτν and B → Dμν decays, and its counterpart R(D*). Involving b → c quark transitions, such decays proceed via the tree-level exchange of a virtual W boson. Based on Run 1 data, the new values of R(D) and R(D*) are compatible both with the current world average and with the SM prediction at 2.2 σ and 2.3 σ, respectively.

“Earlier LHCb indications of anomalies with lepton flavour universality triggered immense excitement, not least because possible new-physics explanations resonated with other hints of deviations from the SM,” says CERN theorist Michelangelo Mangano. “That such anomalies could have been real shows how little we know about the deep origin of flavour symmetries and their relation with the Higgs, and highlights the key role of experimental guidance. Theoretical efforts to interpret the anomalies explored novel avenues, exposing a myriad of unanticipated phenomena possibly emerging at distances shorter than those so far described by the SM. The latest LHCb findings take nothing away from our mission to push further the boundary of our knowledge, and the search for anomalies goes on!”

SLAC at 60: past, present, future


This year, SLAC celebrates its remarkable past while continuing its quest for a bright future. This presentation takes a look at how it all started with the lab’s two-mile-long linear accelerator and accompanying groundbreaking discoveries in particle physics; explores how the lab’s scientific mission has evolved over time to include many disciplines ranging from X-ray science to cosmology; and discusses the most exciting perspectives for future research, from developing new quantum technology to pushing the frontiers of our understanding of the universe on its largest scales.

JoAnne Hewett is a world-class theoretical physicist with well over 100 publications in theoretical high-energy physics. Her research probes the fundamental nature of space, matter and energy, where she most enjoys devising experimental tests for preposterous theoretical ideas. She is best known for her work on the possible existence of extra spatial dimensions. She has twice been a member of the HEPAP advisory panel and made major contributions to the recent Particle Physics Project Prioritization Panel (“P5”) plan, which defines US high-energy physics research priorities for the next 10 years.

Since joining the SLAC faculty in 1994, JoAnne has served in key leadership roles at SLAC, including head of the theoretical physics group, deputy director of the Science Directorate and director of SLAC’s Elementary Particle Physics (EPP) Division. During her tenure as EPP Division director, JoAnne aligned the program with the highest P5 priorities by establishing a neutrino theory program and extending SLAC’s experimental efforts in accelerator-based neutrino physics and neutrinoless double-beta decay. She was elected a fellow of the American Physical Society in 2008, named a fellow of the American Association for the Advancement of Science in 2009, and served as chair of the American Physical Society’s Division of Particles & Fields in 2016.


The axion search programme at DESY


The worldwide interest in axions and other weakly interacting slim particles (WISPs) as constituents of a dark sector of nature has increased strongly in recent years. A vibrant community is developing, constructing and operating corresponding experiments, so that the most promising parameter regions will be probed within the next 15 years.

Many of these approaches rely on WISPs converting to photons. At DESY in Hamburg, several larger-scale projects are being pursued: the “light-shining-through-a-wall” experiment ALPS II, in the HERA tunnel, will soon start taking data. The solar helioscope BabyIAXO is nearly ready to start construction, while the dark-matter haloscope MADMAX is in the prototyping phase.

This webinar will introduce the physics cases and focus on the axion search activities ongoing at DESY.

Axel Lindner worked in accelerator-based particle physics, astroparticle physics and management before engaging in WISP searches in 2007 as the spokesperson of the ALPS I experiment. Since 2018 he has led a new experimental group at DESY in Hamburg in charge of realising non-accelerator-based particle physics experiments on-site. Axel has been a member of the MADMAX and IAXO collaborations and spokesperson of ALPS II since 2012.

LHCb experiment meets theory

The 2022 edition of the yearly workshop “Implications of LHCb measurements and future prospects”, held from 19 to 21 October at CERN, was the 12th instance of this series of meetings between LHCb and the theory community. The large attendance, with 294 people registered, reflects the excitement of both the experimental and theoretical communities for the physics case of LHCb. In several plenary streams the newest experimental and theoretical developments were presented in mixing and CP violation, flavour-changing neutral and charged currents, QCD, spectroscopy and exotic hadrons, electroweak physics (now rotating yearly with the stream on fixed-target and heavy-ion physics), as well as in the newly established stream on model building for flavour physics. The workshop was preceded by “Theory Lectures” about CP violation, a new initiative that will henceforth be held yearly, on various topics of interest, in conjunction with the Implications Workshop.

The conference opened with an overview of the LHCb experiment, where the first milestones of the Upgrade I commissioning were presented. The new fully software-based trigger of LHCb, which processes the highest data rate of any LHC experiment, has been successfully implemented for the full LHCb detector.

The hot-off-the-press result on the simultaneous determination of the ratios R(D*) = BR(B→D*τντ)/BR(B→D*μνμ) and R(D) = BR(B→D0τντ)/BR(B→D0μνμ) was shown. This result, which superseded the previous LHCb measurement, is 1.9 σ away from the Standard Model (SM) expectation. Another highlight was the first observation of the decay Λ0b→Λ+c τ–ντ, and its use to test lepton flavour universality via the ratio of the tauonic to the muonic decay, R(Λ+c), which is so far in agreement with the SM. The newest precision extractions of the moduli of the Cabibbo–Kobayashi–Maskawa (CKM) matrix elements |Vcb| and |Vub| were discussed, showing that the long-standing puzzle of inclusive versus exclusive measurements remains a hot topic, with many developments expected in the near future.

Within the mixing and CP violation (CPV) stream, a major highlight was the measurement of the time-integrated CP asymmetry in D0→K+K– decays, leading to the first determination of the direct CP asymmetries in both D0→K+K– and D0→π+π–, the latter constituting the first evidence for CPV in a single charm decay. These results led to exciting discussions about the size of U-spin breaking and possible underlying mechanisms. A new theoretical methodology for the derivation of amplitude U-spin sum rules was presented, making sum rules feasible for any system at any order in the expansion in the symmetry-breaking terms.

Further major results were the determination of the charm mixing parameter difference yCP – yCPKπ, the very large local CP asymmetries seen in B+→h+h′+h′– decays (with h, h′ = π, K), and a new simultaneous determination of the weak phase γ together with charm mixing and decay parameters. On the theory side, the completion of the next-to-next-to-leading-order (NNLO) QCD calculation of the width difference of B0s mesons was also presented, allowing for an improved comparison with the corresponding experimental results.

The versatility of LHCb was showcased by covering rare beauty, kaon and charm decays, along with new tests on lepton-flavor universality violation

The rare-decays session again showcased the versatility of LHCb by covering rare beauty, kaon and charm decays, including the most recent results on lepton-flavour-universality violation. Recent progress on handling QCD corrections to b→sℓ+ℓ– transitions, which are important for the interpretation of the B anomalies, was presented, and a new method for the extraction of CKM matrix elements using time-dependent kaon decays was shown. Many future opportunities lie in measurements of rare charmed-baryon decays, which are little probed so far.

New and exciting results were shown in the spectroscopy stream, where one new pentaquark and three new tetraquark states were presented, underlining the leading contribution of LHCb to the discovery of exotic and not-yet-understood states. Progress on QCD predictions along with new data-driven and machine-learning-based methods was discussed.

The BSM session gave a great overview of a diverse range of beyond-the-SM models including leptoquarks, Z’ models, axion-like particles (ALPs) as well as models with extra dimensions. Importantly, these models induce correlations between the B anomalies and other anomalies like g-2 or the Cabibbo angle anomaly. Complementary and partially competitive constraints on the viable model space come from direct searches and high-pT observables.

Interesting discussions took place in the electroweak precision-measurements session, where the LHCb W-boson mass measurement was presented, which is in line with the world average and in tension with the recent precise CDF measurement at the 4σ level. This measurement will soon be complemented with the full Run 2 dataset.

The workshop closed with a grand overview given in the keynote talk by Alexander Lenz. The next instance of the Implications Workshop will take place at CERN in October 2023.

Identifying dark matter

IDM participants

The international conference series on the identification of dark matter (IDM) was brought to life in 1996 with the motto that “it is of critical importance now not just to pursue further evidence for its existence but rather to identify what the dark matter is.” Despite earnest attempts to identify what dark matter comprises, the answer to this question remains elusive. Today, the evidence for dark matter is overwhelming; its amount is known to be around 27% of the universe’s energy-density budget. IDM2022 illuminated the dark-matter mystery from all angles, ranging from cosmological evidence via astrophysics to possible dark-matter particle candidates and their detection via indirect searches, direct searches and colliders.

The 14th edition of IDM took place in Vienna, Austria, from 18 to 22 July, attracting about 250 physicists and more than 200 contributions. The conference was initially scheduled for 2020 but changed to an online format due to the pandemic, while the in-person IDM was delayed until 2022. Many young scientists were able to meet the dark-matter community for the first time “in real life”. The Strings 2022 conference took place in Vienna simultaneously, with complementary presentations.

One focus of IDM2022 was the direct detection of dark matter. Tremendous progress in the sensitivity of direct-detection experiments has been achieved in the past few decades over a wide range of dark-matter particle masses. All major experiments presented their latest results. While in the past direct searches focused on the classical WIMP region, with masses between a few GeV and several TeV, the search region has now been enlarged towards even lighter dark-matter particles, down to the keV region. Different mass regions require different technologies, and new ideas were presented to increase the sensitivities in these unexplored regions. For GeV-scale WIMP searches, the XENON collaboration presented the first results from its latest setup, XENONnT, which has a significantly lower background level and recently ruled out an excess previously seen in XENON1T. The XENON, Darwin and LZ collaborations recently formed the XLZD collaboration with the aim of building a next-generation liquid-xenon experiment.

While the XENON1T excess is gone, direct-detection experiments exploring the sub-GeV mass regime still face unknown background contributions, especially in solid-state detectors. This is currently one of the biggest obstacles to increasing the sensitivity to even smaller cross-sections. No complete understanding has been achieved so far, but combining the results, knowledge and expertise of the experiments points to stress relaxations in crystals as one primary underlying source. To tackle this tricky problem, a subset of the IDM2022 participants held a dedicated satellite meeting. This EXCESS workshop was the third event of its kind, and the first to take place in person.  

The direct-detection experiment DAMA has observed a statistically significant annually modulated event rate for several years. This observation is consistent with Earth moving through the dark-matter halo, but has not been confirmed by any other experiment. DAMA recently reduced its energy threshold to 0.5 keV electron-equivalent by upgrading its readout electronics to further increase sensitivity. Several new dark-matter experiments based on the same target material – NaI – are running or being commissioned to provide more information on the long-standing DAMA observation: ANAIS, COSINE, COSINUS and SABRE. Even lighter forms of dark matter, such as axions and axion-like particles, were discussed, as well as the possibility that dark matter comprises bound states.

Primordial black holes are also attractive potential dark-matter candidates. Astronomical data from, for example, microlensing, structure formation and gravitational waves hint at their existence. However, current data give no handle on whether primordial black holes could be responsible for all the universe’s dark-matter content, or only correspond to part of the overall dark-matter density. Besides black-hole mergers, gravitational-wave signals can provide additional information to understand the origin of dark matter. In particular, processes in the early universe detected via gravitational waves could provide new insights into the particle nature of dark matter. With the increased sensitivity of operating and future gravitational-wave detectors, new players will provide additional data to unravel the dark-matter problem.

With a plethora of new ideas and experiments presented at this year’s IDM, the path is prepared for the next edition in L’Aquila, Italy, in 2024.

Neutrinos out of the blue

In the dark abysses of the Mediterranean Sea, what promises to be the world’s largest neutrino telescope, KM3NeT, is rapidly taking shape. Using transparent seawater as the detection medium, its large three-dimensional arrays of photosensors will instrument a volume of more than one cubic kilometre and detect the faint Cherenkov light induced by the passage of charged particles produced in nearby neutrino interactions. The main physics goals of KM3NeT are to detect high-energy cosmic neutrinos and identify their astrophysical origins, as well as to study the fundamental properties of the neutrino itself. 

KM3NeT (the Cubic Kilometre Neutrino Telescope) is the successor to the ANTARES neutrino telescope, which operated continuously from 2008 and has recently been decommissioned (see “The ANTARES legacy” panel). KM3NeT comprises two detectors: ARCA (Astroparticle Research with Cosmics in the Abyss), located at a depth of 3500 m offshore from Sicily, and ORCA (Oscillation Research with Cosmics in the Abyss), located at a depth of 2450 m offshore from southern France. ARCA is a sparse detector of about 1 km³ that is optimised for the detection of TeV–PeV neutrinos, while ORCA is a denser detector of about 7 Mt optimised for sub-TeV neutrinos. The KM3NeT collaboration comprises more than 250 scientists from 16 countries.

The key technology is the digital optical module (DOM) – a pressure-resistant glass sphere hosting 31 three-inch photomultiplier tubes, various calibration devices and the readout electronics (see “Modular” image). A total of 18 DOMs are hosted on a single detection line, and the lines are anchored to the seafloor and held taut by a submerged buoy. The ORCA detector will comprise around 100 lines and the ARCA detector will have twice as many. The bases of the lines are connected via cables on the seafloor to junction boxes, from which electro-optical cables many tens of kilometres long bring the data to shore along optical fibres. Information on every single photon is transmitted to the shore stations, where trigger algorithms are applied to select interesting events for offline analysis.

The assembly room for the KM3NeT optical modules

From the light pattern recorded by the DOMs, the energy and the direction of a neutrino can be estimated. Furthermore, the neutrino flavour can also be distinguished; muon-neutrino charged-current (CC) interactions produce an extended track-like signature (see “Subsea shower” image) whereas electron- and tau-neutrino CC interactions, as well as neutral-current interactions, produce more compact shower-like events. By selecting up-going neutrinos, i.e. those that have travelled from the other side of Earth, the large background from down-going atmospheric muons can be rejected and a clean sample of neutrinos obtained.

The first KM3NeT detection line was connected in 2016 and currently a total of 32 lines are operating at the two sites. The first science results with these partial detectors have already been obtained. 

Fundamental neutrino properties

Sixty-six years after their discovery, neutrinos remain the most mysterious of the fermions. As they whiz through the universe, barely interacting with any other particles, they have the unique ability to oscillate between their three different types or flavours (electron, muon and tau). The observation of neutrino oscillations in the late 1990s implies that neutrinos have a non-zero mass, contrary to the Standard Model expectation. Understanding the origin and order of the neutrino masses could therefore unlock a path to new physics. Numerous neutrino experiments around the world are closing in on the neutrino’s properties, using both artificial (accelerator and reactor) and natural (atmospheric and extraterrestrial) neutrino sources. 

The KM3NeT/ORCA array is optimised for the detection of atmospheric neutrinos, produced when cosmic rays strike atomic nuclei at an altitude of around 15 km. Such interactions produce a cascade of particles in the atmosphere, mostly pions and kaons, which decay to neutrinos capable of traversing the entire planet. About two thirds of these are muon neutrinos and antineutrinos, and the remainder are electron neutrinos and antineutrinos.

Measuring the directions and energies of the detected atmospheric neutrinos allows the oscillatory behaviour of neutrinos to be studied, and thus elements of the leptonic “PMNS” mixing matrix to be determined. The measured direction is used as a proxy for the distance the atmospheric neutrino has travelled through Earth between its points of production and detection. First preliminary results with six ORCA lines and one year of data clearly show the expected disappearance of muon neutrinos with increasing baseline-to-energy ratio. The corresponding constraints on θ23 (the mixing angle between the ν2 and ν3 mass states) and Δm²32 (the difference of their squared masses) already start to be competitive with multi-year results from the current long-baseline accelerator experiments (see “Physics debut” figure).
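The muon-neutrino disappearance pattern follows, to a good approximation, the standard two-flavour survival probability; the sketch below uses illustrative parameter values, not the ORCA fit results:

```python
import math

def pmm_survival(L_km, E_GeV, sin2_2theta=0.99, dm2_eV2=2.5e-3):
    """Two-flavour muon-neutrino survival probability.

    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta23) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, baseline L in km and energy E in GeV.
    Parameter defaults are illustrative, close to world-average values.
    """
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Up-going atmospheric neutrinos crossing the full Earth (L ~ 12700 km):
# the first survival minimum sits near E ~ 25 GeV for these parameters.
for E in (5.0, 25.0, 100.0):
    print(f"E = {E:6.1f} GeV  P(survival) = {pmm_survival(12700.0, E):.3f}")
```

Plotting this probability against L/E reproduces the characteristic disappearance dip that the ORCA data resolve with increasing statistics.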

The ANTARES legacy

A prototype of the KM3NeT DOM

Building a telescope anchored deep at the bottom of the sea requires skill, patience and expertise. KM3NeT would not be on its way without the invaluable experience gained from its older sibling, the ANTARES telescope. ANTARES operated continuously for more than 15 years, and pioneered solutions to construct and operate a neutrino detector in the challenging environment of the deep sea. Despite ANTARES containing only 12 detector lines compared to 86 in IceCube, its superior angular resolution (due to the intrinsic water properties) and its Northern Hemisphere location provided competitive results and valuable insights and constraints in various domains.

Following IceCube’s discovery of a diffuse flux of cosmic neutrinos, the ANTARES all-flavour neutrino data sample revealed a mild (1.8σ) excess of high-energy events consistent with the neutrino signal detected by IceCube. ANTARES also contributed strongly to the multi-messenger endeavour, participating in the search for a neutrino counterpart to major alerts from the LIGO/Virgo gravitational-wave interferometers, IceCube, ground-based imaging air Cherenkov telescopes, as well as X- and gamma-ray satellites. For instance, the TXS0506+056 blazar is the second most significant point source, with a local significance of 2.8σ, strengthening its case as the first high-energy neutrino source. ANTARES also distributed its own neutrino alerts with an unprecedented low latency for a neutrino telescope.

Its energy threshold of a few tens of GeV allowed atmospheric muon-neutrino disappearance due to neutrino oscillations to be studied and the “3+1” neutrino model to be constrained. In this domain, results consistent with world best-fit values were obtained, as well as competitive limits on non-standard interactions. The data were also used to search for dark-matter particles that would have accumulated in astrophysical bodies such as the Sun or the galactic centre before annihilating or decaying into neutrinos. Since no excesses were found, competitive limits were set that reduce the parameter space to be explored by direct, indirect (including KM3NeT) and collider dark-matter experiments.

Recently superseded in sensitivity by KM3NeT, ANTARES was finally decommissioned in February 2022.

A longer-term physics goal of KM3NeT is to determine the neutrino mass ordering, i.e. whether the third neutrino mass eigenstate is heavier or lighter than the first two. This is important to help constrain the plethora of theoretical models proposed to explain the neutrino masses. Due to the large distances travelled by atmospheric neutrinos as they pass through Earth’s mantle and core, subtle matter effects come into play and distort the expected oscillation pattern in the zenith angle/energy plane. By comparing the observed distortions to those expected for either “normal” or “inverted” mass ordering, and thanks to the large neutrino sample collected, the neutrino mass ordering can be determined. 

A 115-line configuration of ORCA operating for three years is expected to provide a three-sigma sensitivity for most θ23 values. KM3NeT could therefore be the first detector to unambiguously determine the neutrino mass ordering, on a time scale in advance of the planned long-baseline accelerator experiments. New-physics scenarios (for example, non-standard interactions, neutrino decays and sterile neutrinos) that modify the oscillation patterns recorded in both ORCA and ARCA have already been explored. While no significant deviations from the Standard Model have been observed, the enhanced sensitivity as the detectors grow will push the existing limits and probe uncharted territories.

Neutrino astronomy

At the beginning of the 1960s, it was realised that the neutrino could play a special role in the study of the universe at large. Weakly interacting with matter and electrically neutral, it enables exploration at greater distances and higher energies than is possible with conventional electromagnetic probes. In addition, neutrinos are the unambiguous smoking gun of hadronic acceleration processes occurring at their source. 

Subsea shower

Since the observation of a significant flux of cosmic high-energy neutrinos in the TeV–PeV range by the IceCube Neutrino Observatory at the South Pole in 2013, the focus of neutrino astronomers has been to identify the astrophysical origins of these neutrinos. Amongst the diverse possible sources, a multi-messenger approach has identified the first: the flaring blazar TXS0506+056. While other source candidates have appeared, such as tidal disruption events and radio-bright blazars, the currently identified source population(s) cannot fully explain the detected flux. Having a neutrino telescope with a sensitivity similar to that of IceCube and with a complementary field of view allows the full neutrino sky to be continuously monitored. KM3NeT’s location in the Northern Hemisphere provides an optimal view of the galactic plane and makes it the ideal instrument to detect, characterise and resolve sources that may emit galactic neutrinos. 

Soon, KM3NeT will start sending alerts to its multi-messenger partners – including conventional electromagnetic telescopes but also other neutrino telescopes such as IceCube and Baikal/GVD – when a neutrino candidate with a high probability of astrophysical origin is detected. This is right on time for the fourth observing run of the LIGO, Virgo and KAGRA gravitational-wave interferometers. While so far no neutrinos have been observed from binary compact systems detected through gravitational waves, a joint detection would reveal unique information on the high-energy processes in the environment of the mergers. Furthermore, the exceptional pointing resolution of KM3NeT would significantly reduce the region of interest where electromagnetic partners should search for a counterpart. The ARCA detector, for example, will benefit from the low optical scattering of deep seawater to reconstruct the direction of muon-neutrino events to less than 0.1 degrees at 100 TeV and around 1 degree for the electron/tau neutrino flavours.

Neutrino oscillation parameters with KM3NeT/ORCA6

Last but not least, KM3NeT is already waiting for the next nearby core-collapse supernova. Such astrophysical events are rare: the first and only one ever detected in neutrinos, SN1987a, occurred 35 years ago. The KM3NeT DOMs continuously monitor for a short-duration increase in counting rates on many DOMs simultaneously – the signature of a flash of MeV supernova neutrinos passing through the detectors – and the detector is networked with other neutrino telescopes via the SuperNova Early Warning System (SNEWS). If a galactic supernova were to happen today, the number of neutrinos detected by SNEWS would be four orders of magnitude greater than for SN1987a!

Whether the cosmic-neutrino sources are point-like, extended, transient or variable, the KM3NeT collaboration has developed reconstruction techniques, event selections and statistical frameworks to identify them and determine their characteristics. Disentangling the galactic from the extragalactic components, the steady from the transient and the electromagnetically bright from the obscure are on KM3NeT’s to-do list for the coming decade.

Marine science 

KM3NeT is important not only for particle physics, but is also a powerful tool for marine sciences. The acquisition of long-term oceanographic data helps researchers understand and eventually mitigate the harmful effects of global processes, such as climate change and anthropogenic impact, as well as study episodic events such as earthquakes, tsunamis, biodiversity changes and pollution – all of which are difficult to study with short-term conventional marine expeditions. To this end, the seafloor infrastructures of first the ANTARES and now the KM3NeT sites are unique cabled marine observatories. They are open to all scientific communities, and as such are important nodes of the European Multidisciplinary Seafloor and water-column Observatory, EMSO.

Sixty-six years after their discovery, neutrinos remain the most mysterious of the fermions

Furthermore, the KM3NeT optical sensors and the acoustic sensors (used for the positioning of the DOMs) themselves provide unique information on deep-sea bioluminescence and bioacoustics. The ANTARES collaboration has several publications studying deep-sea bioluminescence and acoustic detection of cetaceans, and recently KM3NeT invited citizen scientists to analyse its optical and acoustic data via the Zooniverse platform in the context of the EU project REINFORCE.

The KM3NeT detectors will continue to grow in size and sensitivity as additional new lines are installed over the next five years. With three major neutrino telescope facilities now online – Baikal/GVD, IceCube and KM3NeT – neutrino astronomy is truly entering its golden era. 

Rare B-meson decays to two muons

CMS figure 1

Studies of rare B-meson decays at the LHC provide a sensitive probe of physics beyond the Standard Model (SM) and allow us to explore energy scales much higher than those directly accessible. A key factor in the success of these studies is the availability of precise theoretical predictions that can be compared with experimentally accessible processes. The dimuon decays B0s → μ+μ– and B0 → μ+μ– are a case in point. In particular, studies of these decays could help researchers to understand the nature of several anomalies seen in other rare B-meson decays.

The CMS collaboration recently reported a new measurement of the B0s → μ+μ– branching fraction and effective lifetime, as well as the result of a search for the B0 → μ+μ– decay, using data recorded during LHC Run 2. This new study benefits not only from a large event sample but also from advanced machine-learning algorithms, which are used to extract the rare signal events from the overwhelming background. The B0s → μ+μ– signal is very clearly seen (see figure 1), leading to more precise measurements than previously achieved. The B0s → μ+μ– branching fraction is measured to be (3.8 ± 0.4) × 10⁻⁹, the relative uncertainty of 11% being a remarkable improvement with respect to that of the previous CMS result, 23%.
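The quoted 11% relative uncertainty follows directly from the measured central value and its error; a minimal sketch of the arithmetic, using only the numbers given in the text:

```python
# Relative uncertainty of the B0s -> mu+mu- branching-fraction
# measurement, using the values quoted above (a back-of-the-envelope
# check, not an official CMS computation).
value, err = 3.8e-9, 0.4e-9   # (3.8 +/- 0.4) x 10^-9
rel_unc = err / value          # fractional uncertainty
print(f"{rel_unc:.0%}")        # ~11%, as quoted in the text
```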

This measured value is consistent with the SM prediction of (3.7 ± 0.1) × 10⁻⁹, and reduces a previous tension between theory and experiment, which was based on the combination of the previous CMS result with the ATLAS and LHCb values. The shift in the central value of the CMS measurements is mostly driven by the use of a larger data sample and by the change of the B-hadron fragmentation-fraction ratio (by about 8%). The measured effective lifetime of the B0s → μ+μ– decay, 1.8 ± 0.2 ps, is also consistent with the SM prediction. The precision of this measurement is approaching the level necessary to probe the CP properties of B0s → μ+μ–, which could differ from the SM prediction. Finally, the B0 → μ+μ– decay remains unseen.
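The statement that measurement and prediction are consistent can be illustrated with a simple pull calculation, combining the two quoted uncertainties in quadrature (a rough sketch, assuming uncorrelated Gaussian errors; the numbers are those given in the text):

```python
from math import sqrt

# Compatibility of the CMS measurement with the SM prediction for the
# B0s -> mu+mu- branching fraction, as quoted above.
meas, meas_err = 3.8e-9, 0.4e-9   # CMS: (3.8 +/- 0.4) x 10^-9
sm, sm_err = 3.7e-9, 0.1e-9       # SM:  (3.7 +/- 0.1) x 10^-9

# Pull: difference divided by the combined uncertainty.
pull = (meas - sm) / sqrt(meas_err**2 + sm_err**2)
print(f"pull = {pull:.2f} sigma")  # well below 1 sigma: consistent
```

A pull of a few tenths of a sigma is what "consistent with the SM prediction" means quantitatively here.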

CMS physicists are looking forward to continuing these rare-decay studies with the large data samples to be collected during LHC Run 3. Besides the improved precision expected for B0s → μ+μ– measurements, seeing the first evidence of B0 → μ+μ– is high on their wish list.
