VELO’s voyage into the unknown

Marvellous modules

The first 10 years of the LHC have cemented the Standard Model (SM) as the correct theory of known fundamental particle interactions. But unexplained phenomena such as the cosmological matter–antimatter asymmetry, neutrino masses and dark matter strongly suggest the existence of new physics beyond the current direct reach of the LHC. As a dedicated heavy-flavour physics experiment, LHCb is ideally placed to allow physicists to look beyond this horizon. 

Measurements of the subtle effects that new particles can have on SM processes are fully complementary to searches for the direct production of new particles in high-energy collisions. As-yet unknown particles could contribute to the mixing and decay of beauty and charm hadrons, for example, leading to departures from the SM in decay rates, CP-violating asymmetries and other measurements. Rare processes for which the SM contribution occurs through loop diagrams are particularly promising for potential discoveries. Several anomalies recently reported by LHCb in such processes suggest that the cherished SM principle of lepton-flavour universality is under strain, leading to speculation that the discovery of new physics may not be far off.

Unique precision

In addition to precise theoretical predictions, flavour-physics measurements demand vast datasets and specialised detector and data-processing technology. To this end, the LHCb collaboration is soon to start taking data with an almost entirely new detector that will allow at least 50 fb⁻¹ of data to be accumulated during Run 3 and Run 4, compared to 10 fb⁻¹ from Run 1 and Run 2. This will enable many observables, in particular the flavour anomalies, to be measured with a precision unattainable at competing experiments.

To allow LHCb to run at an instantaneous luminosity 10 times higher than during Run 2, much of the detector system and its readout electronics have been replaced, while a flexible full-software trigger system running at 40 MHz allows the experiment to maintain or even improve trigger efficiencies despite the larger interaction rate. During Long Shutdown 2, upgraded ring-imaging Cherenkov detectors and a brand-new “SciFi” (scintillating fibre) tracker have been installed. A major part of LHCb’s metamorphosis – in progress at the time of writing – is the installation of a new Vertex Locator (VELO) at the heart of the experiment.

The VELO encircles the LHCb interaction point, where it contributes to triggering, tracking and vertexing. Its principal task is to pick out short-lived charm and beauty hadrons from the multitude of other particles produced by the colliding proton beams. Thanks to its proximity to the interaction point and its high granularity, the VELO can measure the decay time of B mesons with a precision of about 50 fs.

Microcooling

The original VELO was based on silicon-strip detectors. Its upgraded version employs silicon pixel detectors to cope with the increased occupancies at higher luminosities and to stream complete events at 40 MHz, with an expected torrent of up to 3 Tb/s flowing from the VELO at full luminosity. A total of 52 silicon pixel detector modules, each with a sensitive surface of about 25 cm², are mounted in two detector halves located on either side of the LHC beams and perpendicular to the beam direction (see “Marvellous modules” image). An important feature of the LHCb VELO is that it moves. During injection of LHC protons, the detectors are parked at a safe distance of 3 cm from the beams. But once stable beams are declared, the two halves are moved inward such that the detector sensors effectively enclose the beam. At that point the sensitive elements are as close as 5.1 mm to the beams (compared with 8.2 mm previously), which is much closer than in any of the other large LHC detectors and vital for the identification and reconstruction of charm- and beauty-hadron decays.

The VELO’s close proximity to the interaction point requires a high radiation tolerance. This led the collaboration to opt for silicon-hybrid pixel detectors, which consist of a 200 μm-thick “n-on-p” pixel sensor bump-bonded to a 200 μm-thick readout chip with binary pixel readout. The CERN/Nikhef-designed “VeloPix” ASIC stems from the Medipix family and was specially developed for LHCb. It is capable of handling up to 900 million hits per second per chip, while withstanding the intense radiation environment. The data are routed through the vacuum via low-mass flex cables engineered by the University of Santiago de Compostela, then make the jump to atmosphere through a high-speed vacuum interface designed by Moscow State University engineers, which is connected to an optical board developed by the University of Glasgow. The data are then carried by optical fibres with the rest of the LHCb data to the event builder, trigger farm and disk buffers contained in modular containers in the LHCb experimental area.
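
As a sanity check on these figures, one can combine the 52-module count with the 12 VeloPix ASICs per module quoted elsewhere in this article. The even split of bandwidth across chips in the sketch below is our simplifying assumption – in reality, occupancy peaks sharply on the chips nearest the beam:

```python
# Back-of-envelope check of the VELO readout bandwidth quoted above.
# Figures from the article: 52 modules, 12 VeloPix ASICs per module,
# up to 3 Tb/s in total at full luminosity. The uniform split across
# chips is an illustrative assumption -- the real occupancy is highly
# non-uniform, peaking on the innermost chips.
n_modules = 52
asics_per_module = 12
total_rate_tbps = 3.0            # Tb/s, peak figure from the article

n_asics = n_modules * asics_per_module            # 624 chips in total
avg_gbps_per_asic = total_rate_tbps * 1000 / n_asics

print(f"{n_asics} VeloPix ASICs, ~{avg_gbps_per_asic:.1f} Gb/s each on average")
```

The average of roughly 5 Gb/s per chip shows why the hottest chips, running near the 900 Mhit/s ceiling, dominate the design of the data path.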

The VELO modules were constructed at two production sites, Nikhef and the University of Manchester, to which the building blocks were delivered from the many institutes involved and where they were assembled over a period of about 1.5 years. After an extensive quality-assurance programme to assess the mechanical, electrical and thermal performance of each module, they were shipped in batches to the University of Liverpool to be mounted into the VELO halves. Finally, after population with modules, each half of the VELO detector was transported to CERN for installation in the LHCb experiment. The first half was installed on 2 March, and the second is being assembled.

Microchannel cooling

Keeping the VELO cool to prevent thermal runaway and minimise the effects of radiation damage was a major design challenge. The active elements in a VELO module consist of 12 front-end ASICs (VeloPix) and two control ASICs (GBTX), with a nominal power consumption of about 1.56 kW for each VELO half. The large radiation dose experienced by the silicon sensors is distributed highly non-uniformly and concentrated in the region closest to the beams, with a peak dose 60% higher than that experienced by the other LHC tracking detectors. Since the sensors are bump-bonded to the VeloPix chips, they are in direct contact with the ASICs, which are the main source of heat. The detector is also operated under vacuum, making heat removal especially difficult. These challenging requirements led LHCb to adopt microchannel cooling with evaporative CO2 as the coolant (see “Microcooling” image). 
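
For a sense of scale, the quoted 1.56 kW per half can be broken down per module and per chip. Splitting the power evenly across the 14 ASICs of a module is an illustrative assumption of ours, not a measured figure:

```python
# Rough per-module and per-chip power implied by the cooling figures
# above. From the article: ~1.56 kW per VELO half, 26 modules per half
# (52 in total), 12 VeloPix + 2 GBTX ASICs per module. Treating all
# 14 ASICs as equal consumers is a simplification for scale only.
power_per_half_w = 1560.0
modules_per_half = 52 // 2          # 26 modules per half
asics_per_module = 12 + 2           # VeloPix + GBTX

power_per_module_w = power_per_half_w / modules_per_half   # ~60 W
power_per_asic_w = power_per_module_w / asics_per_module   # ~4.3 W

print(f"~{power_per_module_w:.0f} W per module, ~{power_per_asic_w:.1f} W per ASIC")
```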

The circulation of coolant in microscopic channels embedded within a silicon wafer is an emerging technology, first implemented at CERN by the NA62 experiment. The VELO upgrade combines this with the use of bi-phase (liquid-to-gas) CO2, as used by LHCb in previous runs, in a single innovative system. The LHCb microchannel cooling plates were produced at CERN in collaboration with the University of Oxford. The bare plates were fabricated by CEA-Leti (Grenoble, France) by atomically bonding two silicon wafers together, one with 120 × 200 μm trenches etched into it, for an overall thickness of 500 μm. This approach allows the channel pattern to be designed to ensure a very homogeneous flow directly under the heat sources. The coolant is circulated through entry and exit slits that are etched directly into the silicon after the bonding step. The cooling is so effective that it is possible to sustain a 5 mm overhang closest to the beam, thus reducing the amount of material before the first measured point on each track. The use of microchannels to cool electronics is being investigated both for future LHCb upgrades and for several other future detectors.

Module assembly and support

The microchannel plate serves as the core of the mechanical support for all the active components. The silicon sensors, already bump-bonded to their ASICs to form a tile, are precisely positioned with respect to the base and glued to the microchannel plate with a precision of 30 μm. The glue layer is around 80 μm thick to produce low thermal gradients across the sensor. The front-end ASICs are then wire-bonded to custom-designed Kapton–copper circuit boards, which are also attached to the microchannel substrate. The ASICs must be placed with a precision of about 100 μm, such that the length and shape of the 420 wire-bonds are consistent along the tile. High-voltage and ultra-high-speed data links and all electrical services are designed and attached in such a way as to produce a precise and lightweight detector (a VELO module weighs only 300 g), minimising the material in the LHCb acceptance.

Every step in the assembly of a module was followed by checks to ensure that the quality met the requirements. These included: metrology to assess the placement and attachment precision of the active components; mechanical tests to verify the effects of the thermal stress induced by temperature gradients; characterisation of the current–voltage behaviour of the silicon sensors; thermal performance measurements; and electrical tests to check the response of the pixel matrix. The results were then uploaded to a database, both to keep a record of all the measurements carried out and to run tests that assign a grade to each module. This allowed for continuous cross-checks between the two assembly sites. To quantify the effectiveness of the cooling design, the change in temperature on each ASIC as a function of the power consumption was measured. The LHCb modules have demonstrated thermal-figure-of-merit values as low as 2–3 K cm² W⁻¹. This performance surpasses what is possible with, for example, mono-phase microchannel cooling or integrated-pipe solutions.
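
The thermal figure of merit translates directly into a temperature rise across the cooling stack: ΔT ≈ TFM × P/A. The sketch below uses only numbers quoted in this article (the ~60 W module power follows from 1.56 kW per 26-module half); taking the ~25 cm² sensitive surface as the heat footprint is our simplification:

```python
# The thermal figure of merit (TFM) relates the temperature rise across
# the cooling stack to the areal power density: dT ≈ TFM * P / A.
# Module power (~60 W, from 1.56 kW per 26-module half) and the ~25 cm²
# module area come from the article; using the sensitive surface as the
# heat footprint is an illustrative assumption.
tfm_range = (2.0, 3.0)        # K cm^2 / W, measured range from the article
power_w = 1560.0 / 26         # ~60 W per module
area_cm2 = 25.0               # approximate sensitive surface per module

for tfm in tfm_range:
    dt = tfm * power_w / area_cm2
    print(f"TFM {tfm} K cm^2/W -> dT ≈ {dt:.1f} K across the stack")
```

A rise of only a few kelvin keeps the sensors comfortably below thermal runaway at the −30 °C operating point.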

The delicate VELO modules are mounted onto two precision-machined bases, each housed within a hood (one for each side) that provides isolation from the atmosphere. The complex monolithic hoods were machined from one-tonne billets of aluminium to provide the vacuum tightness and the mechanical performance required. The hood and base system is also articulated to allow the detector to be retracted during injection and to be centred accurately around the collision point during stable beams. Pipes and cables for the electrical and cooling services are designed to absorb the approximately 3 cm motion of each VELO half without transferring any force to the modules, to be radiation tolerant, and to survive flexing thousands of times. 

Following the completion of each detector half, performance measurements of each module are compared with those taken at the production sites. Further tests ensure that there are no leaks in the high-pressure cooling system or the vacuum volumes, in addition to safety checks that guarantee the long-term performance of the detector. A final set of measurements checks the alignment of the detector along the beam direction, which is extremely difficult to do once the VELO is installed. Before installation, the detectors are cooled close to their –30 °C operating temperature and the positions of the tips of the modules are measured with a precision of 5 μm. Once complete, each half-tonne detector half is packed for transport into a frame designed to damp out and monitor vibrations during its 1400 km journey by road from Liverpool to CERN.

RF boxes

One of the most intriguing technological challenges of the VELO upgrade was the design and manufacture of the RF boxes that separate the two detector halves from the primary beam vacuum, shielding the sensitive detectors from RF radiation generated by the beams and guiding the beam mirror currents to minimise wakefields. The sides of the boxes facing the beams need to be as thin as possible to minimise the impact of particle scattering, yet at the same time they must be vacuum-tight. A further challenge was to design the structures such that they do not touch the silicon sensors even under pressure differences. Whereas the RF boxes of LHCb’s previous VELO were made from 300 μm-thick hot-pressed, deformed sheets of aluminium foil welded together, the more complicated layout of the new VELO required them to be machined from solid blocks of forged aluminium with a small grain size. This highly specialised procedure was developed and carried out at Nikhef using a precision five-axis milling machine (see “RF boxes” image).

In early prototypes, micro-enclosures in the material led to small vacuum leaks when thin layers were machined. A 3D forging technique, performed by block manufacturer Loire Industrie (France), reduced the porosity of the casts sufficiently to eliminate this problem. To form the very thin sides of a box, the inside of the block was milled first. It was then positioned on an aluminium mould, and the 1 mm space between box and mould was filled with heated liquid wax, which forms a strong and stable bond at room temperature. The remaining material was then machined away until a sturdy flange and a box with a wall about 250 μm thick remained – just over 1% of the original 325 kg block. To further minimise the thickness in the region closest to the beams, a procedure was developed at CERN to remove more material with a chemical agent, leaving a final wall thickness of between 150 and 200 μm. The final step was the application of a Torlon coating on the inside for electrical insulation from the sensors, and a non-evaporable getter coating on the outside to improve the beam vacuum. The two boxes were installed in the vacuum tank in spring 2021, in advance of the insertion of the VELO modules.

Let collisions commence 

LHCb’s original VELO played a pivotal role in the experiment’s flavour-physics programme. This includes the 2019 discovery of CP violation in the charm sector, numerous matter–antimatter asymmetry measurements and rare-decay searches, and the recent hints of lepton non-universality in B decays. The upgraded VELO detector – in conjunction with the new software trigger, the RICH and SciFi detectors, and other upgrades – will extend LHCb’s capabilities to search for physics beyond the SM. It will remain in place for the start of High-Luminosity LHC operations in Run 4, contributing to the full exploitation of the LHC’s physics potential.

Proposed 15 years ago, with a technical design report published in 2013 and full approval the following year, the VELO upgrade reflects the dedication and work of more than 150 people at 13 institutes over many years. The device is now in final construction: one half is installed and undergoing commissioning in LHCb, while the other is being assembled and will be delivered to CERN for installation during a dedicated machine stop in May. The assembly and installation have been made considerably more challenging by COVID-19-related travel and working restrictions, with final efforts taking place around the clock to meet the tight LHC schedule. Everyone in the LHCb collaboration is therefore looking forward to seeing the first data from the new detectors and continuing the success of the LHC’s world-leading flavour-physics programme.

LHCb constrains cosmic antimatter production

LHCb figure 1

During their 10 million-year-long journey through the Milky Way, high-energy cosmic rays can collide with particles in the interstellar medium, the ultra-rarefied gas filling our galaxy and mostly composed of hydrogen and helium. Such rare encounters are believed to produce most of the small number of antiprotons, about one per 10,000 protons, that are observed in high-energy cosmic rays. But this cosmic antimatter could also originate from unconventional sources, such as dark-matter annihilation, motivating detailed investigations of antiparticles in space. This effort is currently led by the AMS-02 experiment on the International Space Station, which has reported results with unprecedented accuracy.

The interpretation of these precise cosmic antiproton data calls for a better understanding of the antiproton production mechanism in proton-gas collisions. Here, experiments at accelerators come to the rescue. The LHCb experiment has the unique capability of injecting gas into the vacuum of the LHC accelerator. By injecting helium, cosmic collisions are replicated in the detector and their products can be studied in detail. LHCb already provided a first key input into the understanding of cosmic antimatter by measuring the amount of antiprotons produced at the proton–helium collision vertex itself. In a new study, this measurement has been extended by including the significant fraction (about one third) of antiprotons resulting from the decays of antihyperons such as the anti-Λ, which contain a strange antiquark also produced in the collisions.

These antiprotons are displaced from the collision point in the detector, as the antihyperons can fly several metres through the detector before decaying. Different antihyperon states and decay chains are possible, all contributing to the cosmic antiproton flux. To count them, the LHCb team exploited two key features of its detector: the ability to distinguish antiprotons from other charged particles via two ring-imaging Cherenkov (RICH) detectors, and the outstanding resolution of the LHCb vertex locator. Thanks to the latter, when checking the compatibility of the identified antiproton tracks with the collision vertex, three classes of antiprotons can be clearly resolved (figure 1): “prompt” particles originating from the proton–helium collision vertex; detached particles from anti-Λ decays; and more separated particles produced in secondary collisions with the detector material.
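
The three-way separation can be pictured as a cut on each track’s impact parameter with respect to the collision vertex. The sketch below is purely illustrative – the thresholds are invented, and the real analysis works with full impact-parameter distributions and fits rather than hard cuts:

```python
# Illustrative three-way classification of antiproton candidates by
# impact parameter (IP): the distance of closest approach of the
# extrapolated track to the proton-helium collision vertex.
# The thresholds below are invented for illustration only; the real
# LHCb analysis fits the full IP distributions shown in figure 1.
def classify_antiproton(ip_mm: float) -> str:
    if ip_mm < 0.1:          # compatible with the collision vertex
        return "prompt"
    elif ip_mm < 5.0:        # displaced: consistent with antihyperon decay
        return "detached (antihyperon decay)"
    else:                    # far from the vertex: material interaction
        return "secondary (detector material)"

for ip in (0.02, 1.3, 20.0):
    print(f"IP = {ip} mm -> {classify_antiproton(ip)}")
```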

The majority of the detached antiprotons are expected to originate from anti-Λ particles produced at the collision point, which decay to an antiproton and a positive pion. A second study was thus performed to fully reconstruct these decays by identifying the decay vertex. The results of this complementary approach show that about 75% of the observed detached antiprotons originate from anti-Λ decays, in good agreement with theoretical predictions.

These new results provide an important input for modelling the expected antiproton flux from cosmic collisions. No smoking gun for an exotic source of cosmic antimatter has emerged yet, while the accuracy of this quest would profit from more accelerator inputs. Thus, the LHCb collaboration plans to expand its “space mission” with the new gas target SMOG2. This device could also enable collisions between protons and hydrogen or deuterium targets, further strengthening the ties between the particle and astroparticle physics communities.

Science diversity at the intensity and precision frontiers

The EHN1 experimental hall

While all eyes focus on the LHC restart, a diverse landscape of fixed-target experiments at CERN has already begun data-taking. Driven by beams from smaller accelerators in the LHC chain, they span a large range of research programmes at the precision and intensity frontiers, complementary to the LHC experiments. Several new experiments join existing ones in the new run period, in addition to a suite of test-beam and R&D facilities.

At the North Area, which is served by proton and ion beams from the Super Proton Synchrotron (SPS), new physics programmes have been underway since the return of beams last year. Experiments in the North Area, which celebrated its 40th anniversary in 2019, are located at different secondary beamlines and span QCD, electroweak physics and QED, as well as dark-matter searches. “During Long Shutdown 2, a major overhaul of the North Area started and will continue during the next 10 years to provide the best possible beam and infrastructure for our users,” says Yacine Kadi, leader of the North Area consolidation project. “The most critical part of the project is to prepare for the future physics programme.”

The first phase of the AMBER facility at the M2 beamline is an evolution of COMPASS, which has operated since 2002 and focuses on the study of the gluon contribution to the nucleon spin structure. By measuring the proton charge radius via muon–proton elastic scattering, AMBER aims to clarify the long-standing proton–radius puzzle, offering a complementary approach to previous electron–proton scattering and spectroscopy measurements. A new data-acquisition system will enable the collaboration to measure the antiproton production cross-section to improve the sensitivity of searches for cosmic antiparticles from possible dark-matter annihilation. A third AMBER programme will concentrate on measurements of the kaon, pion and proton charge radii via Drell-Yan processes using heavy targets. 
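
The link between elastic scattering and the charge radius is the slope of the electric form factor at zero momentum transfer, r² = −6 dG_E/dQ²|₀. The sketch below extracts a radius from synthetic form-factor points generated from an assumed 0.84 fm radius, purely to illustrate the method; AMBER’s actual measurement fits muon–proton elastic-scattering data:

```python
# Sketch of extracting a charge radius from the form-factor slope:
#   r^2 = -6 * dG_E/dQ^2  evaluated at Q^2 = 0.
# The data points are synthetic, generated from an assumed radius of
# 0.84 fm solely to illustrate the extraction.
HBARC = 0.19733                             # GeV*fm conversion constant

r_true_fm = 0.84                            # assumed input radius
r2_gev = (r_true_fm / HBARC) ** 2           # radius squared in GeV^-2

q2 = [0.002 * i for i in range(1, 6)]       # low-Q^2 points in GeV^2
ge = [1.0 - x * r2_gev / 6.0 for x in q2]   # linearised form factor

# Least-squares slope through the synthetic points
n = len(q2)
mx, my = sum(q2) / n, sum(ge) / n
num = sum((x - mx) * (y - my) for x, y in zip(q2, ge))
den = sum((x - mx) ** 2 for x in q2)
slope = num / den                           # equals -r2_gev / 6 here

r_fm = (-6.0 * slope) ** 0.5 * HBARC        # recovers the input radius
print(f"extracted radius: {r_fm:.3f} fm")
```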

A second North Area experiment specialising in hadron physics is NA61/SHINE, which underwent a major overhaul during Long Shutdown 2 (LS2), including the re-use of the vertex detector from the ALICE experiment. Building on its predecessor NA49, the 17 m-long NA61/SHINE facility, situated at the H2 beamline, focuses on three main areas: strong interactions, cosmic rays and cross-section measurements for neutrino physics. The collaboration continues its study of the energy dependence of hadron production in heavy-ion collisions, in which NA49 found irregularities. It also aims to observe the critical point at which the phase transition from a quark–gluon plasma to a hadron gas takes place, the threshold energy for which is accessible only at the SPS, rather than at the higher-energy LHC or RHIC experiments. By measuring hadron production from pion–carbon interactions, meanwhile, the team will study the properties of high-energy cosmic rays from cascades of charged particles. Finally, using kaons and pions produced from a target replicating that of the T2K experiment in Japan, NA61/SHINE will help to determine the neutrino flux composition at the future DUNE and Hyper-Kamiokande experiments for precise measurements of neutrino mixing angles and the CP-violating phase.

New physics

Situated at the same H2 beamline, the new NA65 “DsTau” experiment will study the production of Ds mesons. This is important because Ds decays are the main source of tau neutrinos in a neutrino beam, and are therefore relevant for neutrino-oscillation studies. After a successful pilot run in 2018, a measurement campaign began in 2021 to determine the tau-neutrino production flux.

The newly renovated East Area

At the K12 secondary beamline, NA62 continues its measurement of the ultra-rare charged kaon decay to a charged pion, a neutrino and an antineutrino, which is very sensitive to possible physics beyond the Standard Model. The collaboration aims to increase its sensitivity to a level (10%) approaching theoretical uncertainties, thanks to further data and experimental improvements to the more than 200 m-long facility. One is the installation during LS2 of a muon veto hodoscope that helps to determine whether a muon is coming from a kaon decay or from other interactions. Since 2021, NA62 has also operated as a beam-dump experiment, with a primary focus on searching for feebly interacting particles. Here, the ability to determine whether muons come from the target absorber is even more important, since they make up most of the background.

Dark interactions

Searching for new physics is the focus of NA64 at the H4 beamline, which studies the interaction between an electron beam and an active target to look for a hypothetical dark-photon mediator connecting the SM with a possible dark sector. With at least five times more data expected this year, and up to 10 times more data during the period of LHC Run 3, it could be possible to determine whether the dark mediator, should it exist, is either an elastic scalar or a Majorana particle. Adding further impetus to this programme is an unexpected 17 MeV peak reported in e⁺e⁻ internal pair creation by the ATOMKI experiment and, more significantly, the tension between the measured and predicted values of the anomalous magnetic moment of the muon, (g–2)μ, for which possible explanations include models that invoke a dark mediator. During a planned muon run at the M2 beamline, the collaboration aims to cover the relevant parameter space for the (g–2)μ anomaly.

NA63 also receives electrons from the H4 beamline and uses a high-energy electron beam to study the behaviour of scattered electrons in a strong electromagnetic field. In particular, the experiment tests QED at higher orders, which have a gravitational analogue in extreme astroparticle physics phenomena such as black-hole inspirals and magnetars. The NA63 team will continue its measurements in June.

Besides driving the broad North Area physics programme, the SPS serves protons to AWAKE – a proof-of-principle experiment investigating the use of plasma wakefields driven by a proton bunch to accelerate charged particles. Following successful results from its first run, the collaboration aims to further develop methods to modulate the proton bunches to demonstrate scalable plasma-wakefield technology, and to prepare for the installation of a second plasma cell and an electron-beam system using the whole CNGS tunnel at the beginning of LS3 in 2026.

Located on the main CERN site, receiving beams from the Proton Synchrotron (PS), the East Area underwent a complete refurbishment during LS2, leading to a 90% reduction in its energy consumption. Its main experiment is CLOUD, which simulates the impact of particulates on cloud formation. This year, the collaboration will test a new detector component called FLOTUS, a 70 litre quartz chamber extending the simulation from a period of minutes to a maximum of 10 days. The PS also feeds the n_TOF facility, which last year marked 20 years of service to neutron science and its applications. A new third-generation spallation target installed and commissioned in 2021 will enable new n_TOF measurements relevant for nuclear astrophysics. 

Different dimensions

Taking CERN science into an altogether different dimension, the PS also links to the Antimatter Factory via the Antiproton Decelerator (AD) and ELENA rings, where several experiments are poised to test CPT invariance and antimatter gravitational interactions at increased levels of precision (see “Antimatter galore at ELENA” panel). Even closer to the proton beam source is the PS Booster, which serves the ISOLDE facility. ISOLDE covers a diverse programme across the physics of exotic nuclei and includes MEDICIS (devoted to the production of novel radioisotopes for medical research), ISOLTRAP (comprising four ion traps for high-precision mass measurements) and COLLAPS and CRIS, which focus on laser spectroscopy. Its post-accelerators REX/HIE-ISOLDE increase the beam energy up to 10 MeV/u, making ISOLDE the only facility in the world that provides radioactive ion-beam acceleration in this energy range.

Antimatter galore at ELENA

Experiments in the AD hall

Served directly by the Antiproton Decelerator (AD) for the past two decades, experiments at the CERN Antimatter Factory are now connected to the new ELENA ring, which decelerates 5.3 MeV antiprotons from the AD to 100 keV to allow a 100-fold increase in the number of trapped antiprotons. Six experiments involving around 350 researchers use ELENA’s antiprotons for a range of unique measurements, from precise tests of CPT invariance to novel studies of antimatter’s gravitational interactions. 

The ALPHA experiment focuses on antihydrogen-spectroscopy measurements, recently reaching an accuracy of two parts per trillion in the transition from the ground state to the first excited state. By clocking the free fall of antiatoms released from a trap, it is also planning to measure the gravitational mass of antihydrogen. ALPHA’s recent demonstration of laser-cooled antihydrogen has opened a new realm of precision on antihydrogen’s internal structure and gravitational interactions, to be explored in upcoming runs.

ASACUSA specialises in spectroscopic measurements of antiprotonic helium, recently finding surprising behaviour. The experiment is also gearing up to perform hyperfine-splitting spectroscopy in antihydrogen using atomic-beam methods complementary to ALPHA’s trapping techniques.

GBAR and AEgIS target direct measurements of the Earth’s gravitational acceleration on antihydrogen. GBAR is developing a method to measure the free fall of antihydrogen atoms, in which antihydrogen ions are sympathetically laser-cooled in a trap, then neutralised and released; the experiment is injected directly with antiprotons from ELENA, maximising antihydrogen production. AEgIS, having established pulsed formation of antihydrogen in 2018, is following a different approach based on measuring the vertical drop of a pulsed cold beam of antihydrogen atoms travelling horizontally through a device called a Moiré deflectometer.

BASE uses advanced Penning traps to compare matter and antimatter with extreme precision, recently finding the charge-to-mass ratios of protons and antiprotons to be identical within 16 parts per trillion. The data also allowed the collaboration to perform the first differential test of the weak equivalence principle using antiprotons, reaching the 3% level, with experiment improvements soon expected to increase the sensitivities of both measurements. The BASE team is also working on an improved measurement of the antiproton magnetic moment, the implementation of a transportable antiproton trap called BASE-STEP and improved searches for millicharged particles.

The newest AD experiment, PUMA, which is preparing for first commissioning later this year, aims to transport trapped antiprotons collected at ELENA to ISOLDE where, from next year, they will be annihilated on exotic nuclei to study neutron densities at the surface of nuclei. 

“Thanks to the beam provided by ELENA and the major upgrades of the experiments, we hope to see big progress in ultra-precise tests of CPT invariance, first and long-awaited antihydrogen-based studies of gravity, as well as the development of new technologies such as transportable antimatter traps,” says Stefan Ulmer, head of the AD user committee. 

Stable and highly customisable beams at the North and East areas also facilitate important detector R&D and test-beam activities. These include the recently approved Water-Cherenkov Test Experiment, which will help to develop detector techniques for long-baseline neutrino experiments, and new detector components for the LHC experiments and proposed future colliders. The CERN Neutrino Platform is dedicated to the development of detector technologies for neutrino experiments across the world. Upcoming activities include ongoing contributions to the future DUNE experiment in the US, in particular the two huge DUNE cryostats and R&D for “vertical drift” liquid-argon detection technology. In the East Area, the mixed-field irradiation (CHARM) and proton-irradiation (IRRAD) facilities provide key input to detector R&D and electronics tests, similar to the services provided by the SPS-driven GIF irradiation facility and HiRadMat.

Fixed-target experiments in the North and East areas, along with experiments at ISOLDE and the AD, demonstrate the importance of diverse physics studies at CERN, when the best path to discover new physics is unclear. Some of these experiments emerged within the Physics Beyond Colliders initiative and there are many more on the horizon, such as KLEVER and the SPS Beam Dump Facility. “With the many physics opportunities mapped out by Physics Beyond Colliders and the consolidation of our facilities, we are looking into a bright future,” says Johannes Bernhard, head of the liaison to experiments section in the beams department. “We are always aiming to serve our users with the highest beam quality and performance possible.”

Compact XFELs for all

A prototype of the CLIC X-band structure

Originally considered a troublesome byproduct of particle accelerators designed to explore fundamental physics, synchrotron radiation is now an indispensable research tool across a wide spectrum of science and technology. The latest generation of synchrotron-radiation sources are X-ray free electron lasers (XFELs) driven by linacs. With sub-picosecond pulse lengths and wavelengths down to the hard X-ray range, these facilities offer unprecedented brilliance, exceeding that of third-generation synchrotrons based on storage rings by many orders of magnitude. However, the high costs and complexity of XFELs have meant that there are only a few such facilities currently in operation worldwide, including the European XFEL at DESY and LCLS-II at SLAC.

CompactLight, an EU-funded project involving 23 international laboratories and academic institutions, three private companies and five third parties, aims to use emerging and innovative accelerator technologies from particle physics to make XFELs more affordable, compact, power-efficient and performant. In the early stages of the project, a dedicated workshop was held at CERN to survey the X-ray characteristics needed by the many user communities. This formed the basis for a design based on the latest concepts for bright electron photo-injectors, high-gradient X-band radio-frequency structures developed in the framework of the Compact Linear Collider (CLIC), and innovative superconducting short-period undulators. After four years of work, the CompactLight team has completed a conceptual design report describing the proposed facility in detail.

The 360-page report sets out a facility covering photon energies from 0.25 to 16 keV, with two separate beamlines offering soft and hard X-rays at pulse-repetition rates of up to 1 kHz and 100 Hz, respectively. It includes a facility baseline layout and two main upgrades, with the most advanced option allowing the soft and hard X-ray beamlines to operate simultaneously. The report also offers preliminary evaluations of a very compact soft X-ray FEL and of an X-ray source based on inverse Compton scattering, considered an affordable solution for university campuses, small labs and hospitals.
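As a quick sanity check on the quoted photon-energy range, the corresponding wavelengths follow from λ = hc/E. A simple illustrative conversion (not taken from the report):

```python
# Photon energy to wavelength via lambda = h*c / E.
# h*c ~= 1.2398 keV*nm (CODATA value, rounded).
H_C_KEV_NM = 1.239841984

def wavelength_nm(energy_kev: float) -> float:
    """Convert a photon energy in keV to a wavelength in nm."""
    return H_C_KEV_NM / energy_kev

# The 0.25-16 keV range spans soft X-rays (~5 nm) down to
# hard, sub-angstrom X-rays (~0.08 nm):
print(wavelength_nm(0.25))
print(wavelength_nm(16.0))
```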

CompactLight is the most significant current effort to enable greater diffusion of XFEL facilities, says the team, which plans to continue its activities beyond the end of its Horizon 2020 contract, improving the partnership and maintaining its leadership in compact acceleration and light production. “Compared to existing facilities, for the same operating wavelengths, the technical solutions adopted ensure that the CompactLight facility can operate with a lower electron beam energy and will have a significantly more compact footprint,” explains project coordinator Gerardo D’Auria. “All these enhancements make the proposed facility more attractive and more affordable to build and operate.”

• Based on an article in Accelerating News, 4 March.

Closing in on open questions


Around 140 physicists convened for one of the first in-person international particle-physics conferences in the COVID-19 era. The Moriond conference on electroweak interactions and unified theories, which took place from 12 to 19 March on the Alpine slopes of La Thuile in Italy, was a wonderful chance to meet friends and colleagues, to have spontaneous exchanges, to listen to talks and to prolong discussions over dinner.

The LHC experiments presented a suite of impressive results based on increasingly creative and sophisticated analyses, including first observations of rare Standard Model (SM) processes and the most recent insights in the search for new physics. ATLAS reported the first observation of the production of a single top quark in association with a photon, a rare process that is sensitive to the existence of new particles. CMS observed for the first time the electroweak production of a pair of opposite-sign W bosons, which is crucial to investigate the mechanism of electroweak symmetry breaking. The millions of Higgs bosons produced so far at the LHC have enabled detailed measurements and opened a new window on rare phenomena, such as the rate of Higgs-boson decays to a charm quark–antiquark pair. CMS presented the world’s most stringent constraint on the coupling between the Higgs boson and the charm quark, improving on its previous measurement by more than a factor of five, while ATLAS measurements demonstrated that this coupling is weaker than the coupling between the Higgs boson and the bottom quark. On the theory side, various new signatures for extended Higgs sectors were proposed.

The LHC experiments presented a suite of impressive results based on increasingly creative and sophisticated analyses

Of special interest is the search for heavy resonances decaying to high-mass dijets. CMS reported the observation of a spectacular event with four high transverse-momentum jets, with a combined invariant mass of 8 TeV. CMS now has two such events, exceeding the SM prediction with a local significance of 3.9σ, or 1.6σ when taking into account the full range of parameter space searched. Moderate excesses with a global significance of 2–2.5σ were observed in other channels, for example in a search by ATLAS for long-lived, heavy charged particles and in a search by CMS for new resonances that decay into a pair of tau leptons. Data from Run 3 and future High-Luminosity LHC runs will show whether these excesses are statistical fluctuations of the SM expectation or signals of new physics.
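For readers wanting to connect the quoted significances to probabilities, the one-sided Gaussian tail can be computed with the standard library alone. This is an illustrative aside; the experiments’ actual trials-factor corrections are far more involved than a simple p-value conversion:

```python
import math

def sigma_to_pvalue(z: float) -> float:
    """One-sided Gaussian tail probability for a significance of z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def pvalue_to_sigma(p: float) -> float:
    """Invert sigma_to_pvalue by bisection (adequate for illustration)."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if sigma_to_pvalue(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A 3.9 sigma local excess corresponds to p ~ 5e-5; diluted by the
# trials factor over the scanned mass range, 1.6 sigma global
# corresponds to p ~ 5e-2.
print(sigma_to_pvalue(3.9))
print(sigma_to_pvalue(1.6))
```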

Flavour anomalies

The persistent set of tensions between predictions and measurements in semi-leptonic b → s ℓ+ℓ– decays (ℓ = e, μ) was much discussed. LHCb has used various decay modes mediated by strongly suppressed flavour-changing neutral currents to search for deviations from lepton-flavour universality (LFU). Other measurements of these transitions, including angular distributions and decay rates (for which the predictions are affected by troublesome hadronic corrections), as well as analyses of charged-current b → cτν decays from BaBar, Belle and LHCb, also show a consistent pattern of deviations from LFU. While none are individually significant enough to constitute clear evidence of new physics, they represent an intriguing pattern that can be explained by the same new-physics models. Theoretical talks on this subject proposed additional observables (based on baryon decays or leptons at high transverse momenta) to get more information on operators beyond the SM that would contribute to the anomalies. Updates from LHCb on several b → s ℓ+ℓ–-related measurements with the full Run 1 and Run 2 datasets are eagerly awaited, while Belle II also has the potential to provide essential independent checks. The integrated SuperKEKB luminosity has now reached a third of the full Belle dataset, with Belle II presenting several impressive new results. These include measurements of b → s ℓ+ℓ– decay branching fractions with a precision limited by the sample size, and precise measurements of charmed-particle lifetimes, including the individual world-best D and Λc+ lifetimes, proving the excellent tracking and vertexing capabilities of the detector.

The other remarkable deviation from the SM prediction is the anomalous magnetic moment of the muon (g–2)μ, for which the SM prediction and the recent Fermilab measurement stand 4.2σ apart – or less, depending on whether the hadronic vacuum polarisation contribution to (g–2)μ is calculated using traditional “dispersive” methods or a recent lattice QCD calculation. The jury is still out on the theory side, but the ongoing analysis of Run 2 and Run 3 data at Fermilab will soon reduce the statistical uncertainty by more than a factor of two. The hottest issues in neutrinos – in particular their masses and mixing – were reviewed. The current leading long-baseline experiments – NOvA in the US and T2K in Japan – have helped to refine our understanding of oscillations, but the neutrino mass hierarchy and CP-violating phase remain to be determined. A great experimental effort is also being devoted to the search for neutrinoless double-beta decay (NDBD) which, if found, would prove that neutrinos are Majorana particles and have far-reaching implications in cosmology and particle physics. The GERDA experiment at Gran Sasso presented its final result, placing a lower limit on the NDBD half-life of 1.8 × 10²⁶ years.

While tensions between solar-neutrino bounds and the reactor antineutrino anomaly are mostly resolved, the gallium anomaly remains

Another very important question is the possible existence of “sterile” neutrinos that do not participate in weak interactions, for which theoretical motivations were presented together with the robust experimental programme. The search for sterile neutrinos is motivated by a series of tensions in short-baseline experiments using neutrinos from accelerators (LSND, MiniBooNE), nuclear reactors (the “reactor antineutrino anomaly”) and radioactive sources (the “gallium anomaly”), which cannot be accounted for by the standard three-neutrino framework. In particular, MicroBooNE has neither confirmed nor excluded the electron-like low-energy excess observed by MiniBooNE. While tensions between solar-neutrino bounds and the reactor antineutrino anomaly are mostly resolved, the gallium anomaly remains.

Dark matter and cosmology

The status of dark-matter searches both at the LHC and via direct astrophysical searches was comprehensively reviewed. The ongoing run of the 5.9-tonne XENONnT experiment, for example, should elucidate the 3.3σ excess observed by XENON1T in low-energy electron recoil events. The search for axions, which provide a dark-matter candidate as well as a solution to the strong-CP problem, covers different mass ranges depending on the axion coupling strength. The parameter space is wide, and Moriond participants heard how a discovery could happen at any moment thanks to experiments such as ADMX. The status of the Hubble tension was also reviewed.

The many theory talks described various beyond-the-SM proposals – including extra scalars and/or fermions and/or gauge symmetries – aimed at explaining LFU violation, (g–2)μ, the hierarchy among Yukawa couplings, neutrino masses and dark matter. Overall, the broad spectrum of informative presentations brilliantly covered the present status of open questions in phenomenological high-energy physics and shone a light on the many rich paths that demand further exploration.

CDF sets W mass against the Standard Model


Ever since the W boson was discovered at CERN’s SppS four decades ago, successive collider experiments have pinned down its mass at increasing levels of precision. Unlike the fermion masses, the W mass is a clear prediction of the Standard Model (SM). At lowest order in electroweak theory, it depends solely on the mass of the Z boson and the value of the weak mixing angle. But higher-order corrections introduce an additional dependence on the gauge-boson couplings and the masses of other SM particles, in particular the heavy top quark and Higgs boson. With the precision of electroweak calculations now exceeding that of direct measurements, better knowledge of the measured W mass provides a vital test of the SM’s consistency.
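The dependence described above can be written compactly using standard electroweak textbook expressions (supplied here for orientation; the article itself does not spell them out):

```latex
% At lowest order, the W mass is fixed by the Z mass and the weak mixing angle:
m_W = m_Z \cos\theta_W
% Higher-order corrections are conventionally absorbed into \Delta r, which
% depends quadratically on the top mass and logarithmically on the Higgs mass:
m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right)
  = \frac{\pi \alpha}{\sqrt{2}\, G_F} \left( 1 + \Delta r \right)
```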

The immediate reaction was silence

Chris Hays

A new measurement by the CDF collaboration based on data from the former Tevatron collider at Fermilab throws a curveball into this picture. Published today in Science, the CDF W-mass measurement – the most precise to date – stands 7σ from the SM prediction, upsetting decades of steady convergence between experiment and theory.

“I would say the immediate reaction was silence,” says Chris Hays, one of the CDF analysis leads, of the moment the measurement was unblinded on 19 November 2020. “Then there was some discussion to ensure the unblinding worked, i.e. that the value was correct, and to decide what would be the next steps.”

Long slog
CDF physicists have been measuring the mass of the W boson for more than 30 years via its decays to a lepton and a neutrino. In 2012, shortly after the Tevatron shut down, CDF published a W mass of 80,387 ± 12 (stat) ± 15 (syst) MeV based on 2.2 fb–1 of data, which significantly exceeded the precision of all previous measurements at that time combined. After 10 years of careful analysis and scrutiny of the full Tevatron dataset (8.8 fb–1, corresponding to about 4.2 million W-boson candidates), and taking into account an improved understanding of the detector and advances in the theoretical and experimental understanding of the W’s interactions with other particles, the new CDF result is twice as precise: 80,433.5 ± 6.4 (stat) ± 6.9 (syst) MeV.

In addition to the four-fold increase in statistics, the measurement benefits from a better understanding of systematic uncertainties. One significant change concerns the proton/antiproton parton distribution functions (PDFs), where the addition of LHC data to the PDF fits has reduced the uncertainty from 10 MeV to 3.9 MeV while also slightly raising the central value of the 2012 result.


“The 2012 and 2022 CDF values are in agreement at the level of two sigma accounting for the fact that approximately 25% of the events are in common, so the internal tension is not so significant,” explains CDF collaborator Mark Lancaster, who was an internal reviewer for the result. “But the tension with other results — particularly ATLAS at 80,370 ± 19 MeV and the SM at 80,357 ± 6 MeV — is significant. Many people from the LHC, Tevatron and theory community are presently working together to combine the results from the Tevatron, LHC and LEP and understand the correlations between them, e.g. in the PDFs and some of the higher-order QCD and QED effects.”
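As a back-of-the-envelope check of the quoted tensions, one can combine statistical and systematic uncertainties in quadrature and express the difference between two values in units of the combined error. This sketch assumes uncorrelated Gaussian uncertainties, which understates the subtlety of a real combination:

```python
import math

def tension_sigma(m1: float, e1: float, m2: float, e2: float) -> float:
    """Difference between two measurements in units of their combined
    uncertainty, assuming uncorrelated Gaussian errors (a simplification)."""
    return abs(m1 - m2) / math.hypot(e1, e2)

# CDF 2022: stat and syst added in quadrature; values in MeV.
cdf_mass, cdf_err = 80433.5, math.hypot(6.4, 6.9)

print(tension_sigma(cdf_mass, cdf_err, 80357.0, 6.0))   # vs SM prediction, ~7 sigma
print(tension_sigma(cdf_mass, cdf_err, 80370.0, 19.0))  # vs ATLAS, ~3 sigma
```

The first number reproduces the roughly 7σ discrepancy quoted in the article.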

It’s now up to theorists and other experiments to follow up on the CDF result, comments CDF co-spokesperson David Toback. “If the difference between the experimental and expected value is due to some kind of new particle or subatomic interaction, which is one of the possibilities, there’s a good chance it’s something that could be discovered in future experiments,” he says.

Cross checks
Results from the LHC experiments are crucial to enable a deeper understanding. One of the challenges in measuring the W mass in high-rate proton-proton collisions at the LHC is event “pile-up”, which makes it hard to reconstruct the missing transverse energy from neutrinos. The higher collision energy at the LHC also means W bosons are produced with larger transverse momenta with respect to the beam axis, which needs to be properly modelled in order to measure the W-boson mass precisely.

It takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily

Florencia Canelli

The ATLAS collaboration published the first high-precision measurement of the W mass at the LHC in 2018 based on data collected at a centre-of-mass energy of 7 TeV, and is currently working on new measurements. In September, based on 2016 data, LHCb published its first measurement of the W mass, 80,354 ± 32 MeV, and estimates that an uncertainty of 20 MeV or less is achievable with existing data. CMS is also proceeding with analyses that should soon see its first public result. “It’s an important measurement of our physics programme,” says CMS physics co-coordinator Florencia Canelli. “As the CDF result shows, precision physics can be a challenging and lengthy process: it takes a very long time to understand all aspects of the data to the level of precision required for a competitive W-mass measurement, and it takes years to build up the knowledge of the detector necessary to be able to address all the issues satisfactorily.”

The CDF result reiterates the central importance of precision measurements in the search for new physics, write Claudio Campagnari (UC Santa Barbara) and Martijn Mulders (CERN) in a Perspective article accompanying the CDF paper. They point to the increased precision that will be available at the High-Luminosity LHC and the capabilities of future facilities such as the proposed Future Circular Collider, the e+e– mode of which “would offer the best prospects for an improved W-boson mass measurement, with a projected sensitivity of 7 ppm”. Such a measurement would also demand that the SM electroweak calculations be performed at higher orders, a challenge firmly in the sights of the theory community.

Following the 2012 discovery of the Higgs boson, it is not easy to tweak the SM parameters without ruining the excellent agreement with numerous measurements. Furthermore, unlike calculations such as that of the muon anomalous magnetic moment, which relies on significant input from QCD, the prediction of the W mass relies mostly on “cleaner” electroweak computations. Surveying possible new physics that could push the W mass to higher values than expected, the CDF paper points to hypotheses that offer a deeper understanding of the Higgs field, from which the SM particles get their masses. These include supersymmetry and Higgs-boson compositeness, both of which include a potential source of dark matter.

“Supersymmetry could make a significant change to the SM prediction of the W mass, although it seems difficult to explain as big an effect as seen experimentally,” says theorist John Ellis. “But one prediction I can make with confidence is a tsunami of arXiv papers in the weeks ahead.”

Toward a diffraction limited storage-ring-based X-ray source



Multi-bend achromat (MBA) lattices have initiated a fourth generation of storage-ring light sources, with orders-of-magnitude increases in brightness and transverse coherence. A few MBA rings have been built, and many others are in design or construction worldwide, including upgrades of the APS and ALS in the US.

The HMBA (hybrid MBA) lattice, developed for the successful ESRF–EBS upgrade, has proven very effective in addressing the nonlinear-dynamics challenges associated with pushing the emittance toward the diffraction limit. This seminar will describe the evolution of HMBA ring designs. The new designs break the lattice periodicity of traditional circular light sources, inserting dedicated sections for efficient injection and additional emittance damping.

Techniques developed for high-energy physics rings to mitigate the nonlinear-dynamics challenges associated with breaking periodicity at collision points were applied in the HMBA designs for the injection and damping sections. These techniques were also used to optimise the nonlinear dynamics of the individual HMBA cells. The resulting HMBA can deliver the long-sought diffraction-limited source while maintaining the temporal and transverse stability of third-generation light sources, thanks to the long lifetime and traditional off-axis injection enabled by nonlinear-dynamics optimisation, thus improving upon the performance of rings now under construction.


Pantaleo Raimondi, professor and research technical manager at the SLAC National Accelerator Laboratory, and previously director of the Accelerator and Source Division at the ESRF.

Snowmass back at KITP


From 23 to 25 February, the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara, California, hosted the Theory Frontier conference of the US Particle Physics Community Planning Exercise, “Snowmass 2021”. Organised by the Division of Particles and Fields of the American Physical Society (APS DPF), Snowmass aims to identify and document a scientific vision for the future of particle physics in the US and abroad. The event brought together theorists from the entire spectrum of high-energy physics, fostering dialogue and revealing common threads, to sketch a decadal vision for high-energy theory in advance of the main Snowmass Community Summer Study in Seattle from 17 to 26 July.

It was also one of the first large in-person meetings for the US particle physics community since the start of the COVID-19 pandemic.

The conference began in earnest with Juan Maldacena’s (IAS) vision for formal theory in the coming decade. Highlighting promising directions in quantum field theory and quantum gravity, he surveyed recent developments in “bootstrap” techniques for conformal field theories, amplitudes and cosmology; implications of quantum information for understanding quantum field theories; new dualities in supersymmetric and non-supersymmetric field theories; progress on the black-hole information problem; and constraints on effective field theories from consistent coupling to quantum gravity. Following talks by Eva Silverstein (U. Stanford) on quantum gravity and cosmology and Xi Dong (UC Santa Barbara) on geometry and entanglement, David Gross (KITP) brought the morning to a close by recalling the role of string theory in the quest for unification and emphasising its renewed promise in understanding QCD.

Clay Cordova (Chicago), David Simmons-Duffin (Caltech), Shu-Heng Shao (IAS) and Ibrahima Bah (Johns Hopkins) followed with a comprehensive overview of recent progress in quantum field theory. Cordova’s summary of supersymmetric field theory touched on the classification of superconformal field theories, improved understanding of maximally supersymmetric theories in diverse dimensions, and connections between supersymmetric and non-supersymmetric dynamics. Simmons-Duffin made a heroic attempt to convey the essentials of the conformal bootstrap in a 15-minute talk, while Shao surveyed generalised global symmetries and Bah detailed geometric techniques guiding the classification of superconformal field theories.

The first afternoon began with Raman Sundrum’s (Maryland) vision for particle phenomenology, in which he surveyed the pressing questions motivating physics beyond the Standard Model, some promising theoretical mechanisms for answering them, and the experimental opportunities that follow. Tim Tait (UC Irvine) followed with an overview of dark-matter models and motivation, drawing a contrast between the more top-down perspective on dark matter prevalent during the previous Snowmass process in 2013 (also hosted by KITP) and the much broader bottom-up perspective governing today’s thinking. Devin Walker (Dartmouth) and Gilly Elor (Mainz) brought the first day’s physics talks to a close with bosonic dark matter and new ideas in baryogenesis.

The final session of the first day was devoted to issues of equity and inclusion in the high-energy theory community, with DPF early-career member Julia Gonski (Columbia) making a persuasive case for giving a voice to early-career physicists in the years between Snowmass processes. Connecting from Cambridge, Howard Georgi (Harvard) delivered a compelling speech on the essential value of diversity in physics, recalling Ann Nelson’s legacy and reminding the packed auditorium that “progress will not happen at all unless the good people who think that there is nothing they can do actually wake up and start doing.” This was followed by a panel discussion moderated by Devin Walker (Dartmouth) and featuring Georgi, Bah, Masha Baryakhtar (Washington) and Tien-Tien Yu (Oregon) in dialogue about their experiences.

Developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest

The second and third days of the conference spanned the entire spectrum of activity within high-energy theory, consolidated around quantum information science with talks by Tom Hartman (Cornell), Raphael Bousso (Berkeley), Hank Lamm (Fermilab) and Yoni Kahn (Illinois). Marius Wiesemann (MPI), Felix Kling (DESY) and Ian Moult (Yale) discussed simulations for collider physics, and Michael Wagman (Fermilab), Huey-Wen Lin (Michigan State) and Thomas Blum (Connecticut) emphasised recent progress in lattice gauge theory. Recent developments in precision theory were covered by Bernhard Mistlberger (CTP), Emanuele Mereghetti (LANL) and Dave Soper (Oregon) and the status of scattering-amplitudes applications by Nima Arkani-Hamed (IAS), Mikhail Solon (Caltech) and Henriette Elvang (Michigan). Masha Baryakhtar (Washington), Nicholas Rodd (CERN) and Daniel Green (UC San Diego) reviewed astroparticle and cosmology theory, followed by an overview of effective field theory approaches in cosmology and gravity by Mehrdad Mirbabayi (ICTP) and Walter Goldberger (Yale); Isabel Garcia Garcia (KITP) discussed alternative approaches to effective field theories in gravitation. Recent findings in neutrino theory were covered by Alex Friedland (SLAC), Mu Chun Chen (UC Irvine) and Zahra Tabrizi (Northwestern). Bridging these themes with talks on amplitudes and collider physics, machine learning for particle theory and cosmological implications of dark sector models were talks by Lance Dixon (SLAC), Jesse Thaler (MIT) and Neal Weiner (New York). Connections with the many other “frontiers” in the Snowmass process were underlined by Laura Reina (Florida State), Lian-Tao Wang (Chicago), Pedro Machado (Fermilab), Flip Tanedo (UC Riverside), Steve Gottlieb (Indiana), and Alexey Petrov (Wayne State).

The rich and broad programme of the Snowmass Theory Conference demonstrates the vibrancy of high-energy theory at this interesting juncture for the field, following the discovery of the final missing piece of the Standard Model, the Higgs boson, in 2012. Subsequent developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest. The many thematic threads and opportunities covered in the conference bode well for the final Snowmass discussions with the whole community in Seattle this summer.

Gravitational-wave astronomy turns to AI

New frontiers in gravitational-wave (GW) astronomy were discussed in the charming and culturally vibrant region of Oaxaca, Mexico from 14 to 19 November. Thirty-seven participants attended the hybrid Banff International Research Station for Mathematical Innovation and Discovery (BIRS) workshop “Detection and Analysis of Gravitational Waves in the Era of Multi-Messenger Astronomy: From Mathematical Modelling to Machine Learning”. Topics ranged from numerical relativity to observational astrophysics and computer science, including the latest applications of machine-learning algorithms to the analysis of GW data.

GW observations are a new way to explore the universe’s deepest mysteries. They allow researchers to test gravity in extreme conditions, to get important clues on the mathematical structure and possible extension of general relativity, and to understand the origin of matter and the evolution of the universe. As more GW observations with increased detector sensitivities spur astrophysical and theoretical investigations, the analysis and interpretation of GW data faces new challenges which require close collaboration with all GW researchers. The Oaxaca workshop focused on a topic that is currently receiving a lot of attention: the development of efficient machine-learning (ML) methods and numerical-analysis algorithms for the detection and analysis of GWs. The programme gave participants an overview of new-physics phenomena that could be probed by current or next-generation GW detectors, as well as data-analysis tools that are being developed to search for astrophysical signals in noisy data.

Since their first detections in 2015, the LIGO and Virgo detectors have reached an unprecedented GW sensitivity. They have observed signals from binary black-hole mergers and a handful of signals from binary neutron-star and mixed black-hole–neutron-star systems. In discussing the role that numerical relativity plays in unveiling the GW sky, Pablo Laguna and Deirdre Shoemaker (U. Texas) showed how it can help in understanding the physical signatures of GW events, for example by distinguishing black-hole–neutron-star binaries from binary black-hole mergers. On the observational side, several talks focused on possible signatures of new physics in future detections. Adam Coogan (U. de Montréal and Mila) and Gianfranco Bertone (U. of Amsterdam, and chair of EuCAPT) discussed dark-matter halos around black holes. Distinctive GW signals could help to determine whether dark matter is made of a cold, collisionless particle, via signatures of intermediate mass-ratio inspirals embedded in dark-matter halos. In addition, primordial black holes could be dark-matter candidates.

Bernard Mueller (U. Monash) and Pablo Cerdá-Durán (U. de Valencia) described GW emission from core-collapse supernovae. The range of current detectors is limited to the Milky Way, where the rate of supernovae is about one per century. However, if and when a galactic supernova happens, its GW signature will be within reach of existing detectors. Lorena Magaña Zertuche (U. of Mississippi) talked about the physics of black-hole ringdown – the process whereby gravitational waves are emitted in the aftermath of a binary black-hole merger – which is crucial for understanding astrophysical black holes and testing general relativity. Finally, Leïla Haegel (U. de Paris) described how the detection of GW dispersion would indicate the breaking of Lorentz symmetry: if a GW propagates according to a modified dispersion relation, its frequency modes will propagate at different speeds, changing the phase evolution of the signals with respect to general relativity.
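One widely used phenomenological parametrisation of such Lorentz-violating propagation (an assumed form for illustration; the talk’s exact framework is not specified here) modifies the graviton dispersion relation:

```latex
% Modified dispersion relation with amplitude \mathbb{A} and exponent \alpha:
E^2 = p^2 c^2 + \mathbb{A}\, p^\alpha c^\alpha
% General relativity corresponds to \mathbb{A} = 0, while \alpha = 0 with
% \mathbb{A} = m_g^2 c^4 describes a massive graviton. For \mathbb{A} \neq 0
% the group velocity v_g = \partial E / \partial p is frequency-dependent,
% so different frequency modes accumulate different phases en route to Earth.
```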

Machine learning
Applications of different flavours of ML algorithms to GW astronomy, ranging from the detection of GWs to their characterisation in detector simulations, were the focus of the rest of the workshop.

ML has seen rapid development in recent years and is increasingly used across many fields of science. In GW astronomy, a variety of supervised, unsupervised and reinforcement ML algorithms, such as deep learning, neural networks, genetic programming and support vector machines, have been developed. They have been used successfully to deal with detector noise, signal processing and data analysis for signal detection, and to reduce the non-astrophysical background of GW searches. These algorithms must be able to deal with large data sets and demand high accuracy to model theoretical waveforms and to perform searches at the limit of instrument sensitivities. The next step for the successful use of ML in GW science will be the integration of ML techniques with the more traditional numerical-analysis methods that have been developed for the modelling, real-time detection and analysis of signals.
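The traditional numerical-analysis methods mentioned above centre on matched filtering: sliding a waveform template over noisy data and looking for a correlation peak. A toy time-domain sketch (illustrative only; real pipelines work in the frequency domain with noise-weighted inner products, and the signal shape and names here are made up):

```python
import math
import random

def matched_filter_snr(data, template):
    """Slide a normalised template over the data and return the peak
    correlation and its offset -- a toy time-domain matched filter."""
    norm = math.sqrt(sum(t * t for t in template))
    best_snr, best_off = 0.0, 0
    for off in range(len(data) - len(template) + 1):
        corr = sum(d * t for d, t in zip(data[off:], template)) / norm
        if corr > best_snr:
            best_snr, best_off = corr, off
    return best_snr, best_off

# Toy "chirp" template buried in Gaussian noise at a known offset.
random.seed(1)
template = [math.sin(0.2 * i * i) * math.exp(-0.01 * i) for i in range(64)]
data = [random.gauss(0.0, 0.3) for _ in range(512)]
for i, t in enumerate(template):
    data[200 + i] += t  # inject the signal at offset 200

snr, offset = matched_filter_snr(data, template)
print(offset)  # recovered near the injection point
```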

The BIRS workshop provided a broad overview of the latest advances in this field, as well as of the open questions that need to be solved to apply robust ML techniques to a wide range of problems. These include reliable background estimation, modelling gravitational waveforms in regions of the parameter space not covered by full numerical-relativity simulations, and determining populations of GW sources and their properties. Although ML for GW astronomy is in its infancy, there is no doubt that it will play an increasingly important role in the detection and characterisation of GWs, leading to new discoveries.

Dijet excess intrigues at CMS

The Standard Model (SM) has been extremely successful in describing the behaviour of elementary particles. Nevertheless, conundrums such as the nature of dark matter and the cosmological matter–antimatter asymmetry strongly suggest that the theory is incomplete. Hence, the SM is widely viewed as an effective low-energy limit of a more fundamental underlying theory which must be modified to describe particles and their interactions at higher energies.

A powerful way to discover the new particles expected from physics beyond the SM is to search for high-mass dijet or multi-jet resonances, as these are expected to have large production cross-sections at hadron colliders. These searches look for a pair of jets originating from a pair of quarks or gluons produced in the decay of a new particle “X”, appearing as a narrow bump in the dijet invariant-mass distribution. Since the energy scale of new physics is most likely high, it is natural to expect such new particles to be massive.
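The quantity being histogrammed in such a bump hunt is the invariant mass of the summed jet four-momenta. A minimal sketch in Python, using illustrative jet kinematics rather than real event data:

```python
import math

def jet_p4(pt, eta, phi, m):
    """Four-momentum (E, px, py, pz) of a jet from collider kinematics."""
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    E = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return (E, px, py, pz)

def invariant_mass(*jets):
    """Invariant mass of the summed four-momentum (same units as the jets)."""
    E, px, py, pz = (sum(j[i] for j in jets) for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two illustrative back-to-back 1 TeV massless jets (values in GeV)
j1 = jet_p4(pt=1000.0, eta=0.3, phi=0.0, m=0.0)
j2 = jet_p4(pt=1000.0, eta=-0.3, phi=math.pi, m=0.0)
print(invariant_mass(j1, j2))  # dijet mass of about 2.09 TeV
```

A resonance “X” would show up as an accumulation of events at one value of this mass, on top of the smoothly falling background from ordinary QCD jet production.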

CMS_Figure1

CMS and ATLAS have performed a suite of single-dijet-resonance searches. The next step is to look for new identical-mass particles “X” that are produced in pairs, with (resonant mode) or without (non-resonant mode) a new intermediate heavier particle “Y” being produced and decaying to pairs of X. Such processes would yield two dijet resonances and four jets in the final state: the dijet mass would correspond to particle X and the four-jet mass to particle Y.
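The combinatorics behind this reconstruction can be sketched in a few lines of Python. This is a minimal illustration, not the collaboration’s actual algorithm: the pairing criterion (minimum dijet-mass asymmetry) and the jet four-momenta below are assumptions made for the example.

```python
import math

def mass(a, b):
    """Invariant mass of two four-momenta given as (E, px, py, pz)."""
    E, px, py, pz = (a[i] + b[i] for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def best_pairing(jets):
    """Split four jets into the two dijet pairs with the most similar
    invariant masses, as expected for Y -> XX -> 4 jets.
    Returns (mass asymmetry, m_dijet1, m_dijet2)."""
    best = None
    for j in (1, 2, 3):  # partner of jet 0; the remaining two form pair 2
        k, l = [x for x in (1, 2, 3) if x != j]
        m1, m2 = mass(jets[0], jets[j]), mass(jets[k], jets[l])
        asym = abs(m1 - m2) / (m1 + m2)
        if best is None or asym < best[0]:
            best = (asym, m1, m2)
    return best

# Four illustrative massless jets (E, px, py, pz) in GeV
jets = [(1000, 1000, 0, 0), (500, 0, 500, 0),
        (1000, -1000, 0, 0), (500, 0, -500, 0)]
print(best_pairing(jets))  # picks the pairing with two balanced dijets
```

With the best pairing in hand, the two dijet masses probe particle X and the mass of all four jets together probes particle Y.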

The CMS experiment was also motivated to search for Y → XX → 4 jets by a candidate event recorded in 2017, reported in a previous CMS search for dijet resonances (figure 1). This spectacular event has four high transverse-momentum jets, forming two dijet pairs each with an invariant mass of 1.9 TeV and a four-jet invariant mass of 8 TeV.

CMS_Figure2

In results presented on 14 March at the Rencontres de Moriond, the CMS collaboration reported another very similar event, found in a new search optimised for this specific Y → XX → 4-jet topology. Such events could originate from quantum-chromodynamics processes, but those are expected to be extremely rare (figure 2). The two candidate events are clearly visible at high masses, distinct from all the rest. Also shown (magenta) is a simulation of a possible new-physics signal – a diquark decaying to vector-like quarks – with a four-jet mass of 8.4 TeV and a dijet mass of 2.1 TeV, which describes these two candidates very well.

The hypothesis that these events originate from the SM at the observed X and Y masses is disfavoured with a local significance of 3.9σ. Taking into account the full range of possible X and Y mass values, the compatibility of the observation with the SM expectation leads to a global significance of 1.6σ.
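The gap between the local and global figures is the look-elsewhere effect: scanning many possible X and Y masses inflates the chance of finding a fluctuation somewhere. The dilution can be sketched with standard-normal tail probabilities; the trials factor below is purely illustrative (the real correction is derived from the analysis itself):

```python
from statistics import NormalDist

nd = NormalDist()

def p_from_z(z):
    """One-sided tail probability corresponding to a z-sigma excess."""
    return 1.0 - nd.cdf(z)

def z_from_p(p):
    """Convert a tail probability back to a significance in sigma."""
    return nd.inv_cdf(1.0 - p)

p_local = p_from_z(3.9)  # probability of a >= 3.9 sigma fluctuation here
# Hypothetical number of independent mass hypotheses scanned:
N_trials = 1000
p_global = 1.0 - (1.0 - p_local) ** N_trials
print(z_from_p(p_global))  # roughly 1.7 sigma for this assumed trials factor
```

With a trials factor of order 10³, a 3.9σ local excess dilutes to under 2σ globally, illustrating why the quoted global significance is so much smaller than the local one.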

The upcoming LHC Run 3 and future High-Luminosity LHC runs will be crucial in telling us whether these events are statistical fluctuations of the SM expectation, or the first signs of yet another groundbreaking discovery at the LHC.
