LHC physics shines amid COVID-19 crisis

The eighth Large Hadron Collider Physics (LHCP) conference, originally scheduled to be held in Paris, was held as a fully online conference from 25 to 30 May. To enable broad participation, the organisers waived the registration fee, and, with the help of technical support from CERN, hosted about 1,300 registered participants from 56 countries, with attendees actively engaging via Zoom webinars. Even a poster session was possible, with 50 junior attendees from all over the world presenting their work via meeting rooms and video recordings. The organisers must be complimented for organising a pioneering virtual conference that succeeded in bringing the LHC community together, in larger and more diverse numbers than at previous editions.

LHCP2020 presentations covered a wide assortment of topics, including several new results with significantly better sensitivity than was previously possible. These comprised both precision measurements, with excellent potential to uncover discrepancies that can be explained only by physics beyond the Standard Model (SM), and direct searches for new particles using innovative techniques and advanced analysis methods.

The first observation of the combined production of three massive vector bosons was reported by CMS

The first observation of the combined production of three massive vector bosons (VVV with V = W or Z) was reported by the CMS experiment. In the nearly 40 years since the discovery of the W and Z bosons, their properties have been measured very precisely, including via “diboson” measurements of the simultaneous production of two vector bosons. “Triboson” production of three massive vector bosons, however, had so far eluded observation, as the cross sections are small and the background contributions rather large. Such measurements are crucial to undertake, both to test the underlying theory and to probe non-standard interactions. For example, if new physics beyond the SM is present at high mass scales not far above 1 TeV, then cross-section measurements for triboson final states might deviate from SM predictions. The CMS experiment took advantage of the large Run 2 dataset and machine-learning techniques to search for these rare processes. Leveraging the relatively background-free leptonic final states, CMS collaborators were able to combine searches for different decay modes and different types of triboson production (WWW, WWZ, WZZ and ZZZ) to achieve the first observation of combined heavy triboson production (with an observed significance of 5.7 standard deviations), and at the same time evidence for WWW and WWZ production, with observed significances of 3.3 and 3.4 standard deviations, respectively. While the results obtained so far are in agreement with SM predictions, more data are needed for individual measurements of the WZZ and ZZZ processes.

Four-top-quark production

The first evidence for four-top-quark production was announced by ATLAS. The top-quark discovery in 1995 launched a rich programme of top-quark studies that includes precision measurements of its properties as well as the observation of single-top-quark production. In particular, since the large mass of the top quark is a result of its interaction with the Higgs field, studies of rare processes such as the simultaneous production of four top quarks can provide insights into the properties of the Higgs boson. Within the SM, this process is extremely rare, occurring just once for every 70 thousand pairs of top quarks created at the LHC; on the other hand, numerous extensions of the SM predict exotic particles that couple to top quarks and lead to significantly higher production rates. The ATLAS experiment performed this challenging measurement with the full Run-2 dataset, applying sophisticated techniques and machine-learning methods to the multilepton final state to obtain strong evidence for this process. The observed signal significance was found to be 4.3 standard deviations, in excess of the expected sensitivity of 2.4 standard deviations, assuming SM four-top-quark-production properties. While the measured value of the cross section was found to be consistent with the SM prediction within 1.7 standard deviations, the data collected during Run 3 will shed further light on this rare process.
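As a back-of-the-envelope check of the "one in 70 thousand" rarity quoted above, one can take the ratio of approximate 13 TeV SM cross sections; the numerical values below are round illustrative figures assumed here, not quoted in the article:

```python
# Rough sanity check of the quoted four-top rarity, using approximate
# 13 TeV SM cross sections (assumed round values, not from the article):
sigma_ttbar = 830e3   # top-quark pair production, in fb
sigma_tttt = 12.0     # four-top-quark production, in fb

pairs_per_four_top = sigma_ttbar / sigma_tttt
print(f"~1 four-top event per {pairs_per_four_top:,.0f} top-quark pairs")
```

With these inputs the ratio comes out near 70,000, matching the rate stated in the text.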

The LHCb collaboration presented, with unprecedented precision, measurements of two properties of the mysterious X(3872) particle. Originally discovered by the Belle experiment in 2003 as a narrow state in the J/ψπ+π− mass spectrum of B+→J/ψπ+π−K+ decays, this particle has puzzled particle physicists ever since. The nature of the state is still unclear and several hypotheses have been proposed, such as an exotic tetraquark (a system of four quarks bound together), a conventional quark–antiquark hadron, or a molecular state consisting of two D mesons. LHCb collaborators reported the most precise mass measurement yet and measured, for the first time and with a significance of five standard deviations, the width of the resonance (see LHCb interrogates X(3872) line-shape). Though the results favour its interpretation as a quasi-bound D0D*0 molecule, more data and additional analyses are needed to rule out other hypotheses.

Antideuterons could be produced during the annihilation or decay of neutralinos or sneutrinos

The ALICE collaboration presented a first measurement of the inelastic cross section of low-energy antideuterons, using p-Pb collisions at a centre-of-mass energy per nucleon–nucleon pair of 5.02 TeV. Low-energy antideuterons (composed of an antiproton and an antineutron) are predicted by some models to be a promising probe for indirect dark-matter searches. In particular, antideuterons could be produced during the annihilation or decay of neutralinos or sneutrinos, which are hypothetical dark-matter particles. Contributions from cosmic-ray interactions in the low-energy range below 1–2 GeV per nucleon are expected to be small. ALICE collaborators used a novel technique that utilised the detector material as an absorber for antideuterons to measure the production and annihilation rates of low-energy antideuterons. The results from this measurement can be used in propagation models of antideuterons within the interstellar medium for interpreting dark-matter searches, including intriguing results from the AMS experiment. Future analyses with higher-statistics data will improve the modelling as well as extend these studies to heavier antinuclei.

The above are just a few of the many excellent results that were presented at LHCP2020. The extraordinary performance of the LHC coupled with progress reported by the theory community, and the excellent data collected by the experiments, has inspired LHC physicists to continue with their rich harvest of physics results despite the current world crisis. Results presented at the conference showed that huge progress has been made on several fronts, and that Run 3 and the High-Luminosity LHC upgrade programme will enable further exploration of particle physics at the energy frontier.

IPAC goes virtual

More than 3,000 accelerator specialists gathered in cyberspace from 11 to 14 May for the 11th International Particle Accelerator Conference (IPAC). The conference was originally destined for the GANIL laboratory in Caen, a charming city in Normandy and host to the flagship radioactive-ion-beam facility SPIRAL-2, but the coronavirus pandemic forced the cancellation of the in-person meeting and the French institutes CNRS/IN2P3, CEA/IRFU, GANIL, Soleil and ESRF agreed to organise a virtual conference. Oral presentations and the accelerator-prize session were maintained, though unfortunately the poster and industry sessions had to be cancelled. The scientific programme committee whittled down more than 2,000 proposals for talks into 77 presentations, which garnered more than 43,000 video views across 60 countries, making IPAC’20 an involuntary pioneer of virtual conferencing and a lighthouse of science during the lockdown.

Recent trends indicate a move towards the use of permanent magnets

IPAC’20’s success relied on a programme of recent technical highlights, new developments and future plans in the accelerator world. Weighing in at 1,998 views, the most popular talk of the conference was by Ben Shepherd from STFC’s Daresbury Laboratory in the UK, who spoke on high-technology permanent magnets. Accelerators do not only accelerate ensembles of particles, but also use strong magnetic fields to guide and focus them into very small volumes, typically just micrometres or nanometres in size. Recent trends indicate a move towards the use of permanent magnets, which provide strong fields without requiring external power and can offer outstanding field quality. Describing the major advances for permanent magnets in terms of production, radiation resistance, tolerances and field tuning, Shepherd presented high-tech devices developed and used for the SIRIUS, ESRF-EBS, SPring-8, CBETA, SOLEIL and CUBE-ECRIS facilities, and also presented the Zero-Power Tunable Optics (ZEPTO) collaboration between STFC and CERN, which offers 15–60 T/m tunability in quadrupoles and 0.46–1.1 T in dipoles.

Top of the talks

The seven IPAC’20 presentations with the most views included four by outstanding female scientists. CERN Director-General Fabiola Gianotti presented strategic considerations for future accelerator-based particle physics. While pointing out the importance of Europe participating in projects elsewhere in the world, she made the strong point that CERN should host an ambitious future collider, and discussed the options being considered, pointing to the update of the European Strategy for Particle Physics soon to be approved by the CERN Council. Sarah Cousineau from Oak Ridge National Laboratory reported on accelerator R&D as a driver for science in general, pointing out that accelerators have directly contributed to more than 25 Nobel Prizes, including the Higgs-boson discovery at the LHC in 2012. The development of superconducting accelerator technology has enabled projects for colliders, photon science, nuclear physics and neutron spallation sources around the world, with several light sources and neutron facilities currently engaged in COVID-19 studies.

SPIRAL-2 will explore exotic nuclei near the limits of the periodic table

The benefits of accelerator-based photon science for society were also emphasised by Jerry Hastings from Stanford University and SLAC, who presented the tremendous progress in structural biology driven by accelerator-based X-ray sources, and noted that research can continue during COVID-19 times thanks to the remote synchrotron access pioneered at SSRL. Stressing the value of international collaboration, Hastings presented the outcome of an international X-ray facilities meeting that took place in April and defined an action plan for ensuring the best possible support to COVID-19 research. GANIL Director Alahari Navin presented new horizons in nuclear science, reviewing facilities around the world and presenting his own laboratory’s latest activities. GANIL has now started commissioning SPIRAL-2, which will allow users to explore the as-yet unknown properties of exotic nuclei near the limits of the periodic table of elements, and has performed its initial science experiment. Liu Lin from LNLS in Brazil presented the commissioning results for the new fourth-generation SIRIUS light source, showing that the functionality of the facility has already been demonstrated by storing 15 mA of beam current. Last, but not least in the top-seven most-viewed talks, Anke-Susanne Müller from KIT presented the status of the study for a 100 km Future Circular Collider – just one of the options for an ambitious post-LHC project at CERN.

Many other highlights from the accelerator field were presented during IPAC’20. Kyo Shibata (KEK) discussed the progress in physics data-taking at the SuperKEKB factory, where the Belle II experiment recently reported its first result. Ferdinand Willeke (BNL) presented the electron–ion collider approved to be built at BNL, Porntip Sudmuang (SLRI) showed construction plans for a new light source in Thailand, and Mohammed Eshraqi (ESS) discussed the construction of the European Spallation Source in Sweden. At the research frontier towards compact accelerators, Chang Hee Nam (IBS, Korea) explained prospects for laser-driven GeV-electron beams from plasma-wakefield accelerators, and Arnd Specka (LLR/CNRS) showed plans for the compact European plasma-accelerator facility EuPRAXIA, which is entering its next phase after successful completion of a conceptual-design report. The accelerator-application session rounded the picture off with presentations by Annalisa Patriarca (Institut Curie) about accelerator challenges in a new radiation-therapy technique called FLASH, in which ultra-fast delivery of radiation dose reduces damage to healthy tissue, by Charlotte Duchemin (CERN) on the production of non-conventional radionuclides for medical research at the MEDICIS hadron-beam facility, by Toms Torims (Riga Technical University) on the treatment of marine exhaust gases using electron beams, and by Adrian Fabich (SCK-CEN) on proton-driven nuclear-waste transmutation.

To the credit of the French organisers, the virtual setup worked seamlessly. The concept relied on pre-recorded presentations and a text-driven chat function, which allowed registered participants to take part from time zones across the world. Activating the sessions in half-day steps preserved the appearance of live presentations to some degree, before a final live session, during which the four prizes of the accelerator group of the European Physical Society were awarded.

Funky physics at KIT

The FUNK experimental area, where the black-painted floor can be seen with the PMT-camera pillar at the centre and the mirror on the left. A black-cotton curtain encloses the whole area during running. Credit: KIT.

A new experiment at Karlsruhe Institute of Technology (KIT) called FUNK – Finding U(1)s of a Novel Kind – has reported its first results in the search for ultralight dark matter. Using a large spherical mirror as an electromagnetic dark-matter antenna, the FUNK team has set an improved limit on the existence of hidden photons as candidates for dark matter with masses in the eV range.

Despite overwhelming astronomical evidence for the existence of dark matter, direct searches for dark-matter particles at colliders and dedicated nuclear-recoil experiments have so far come up empty-handed. With these searches being mostly sensitive to heavy dark-matter particles, namely weakly interacting massive particles (WIMPs), the search for alternative light dark-matter candidates is gaining momentum. Hidden photons, a cold, ultralight dark-matter candidate, arise in extensions of the Standard Model that contain a new U(1) gauge symmetry, and are expected to couple very weakly to charged particles via kinetic mixing with regular photons. Laboratory experiments that are sensitive to such hidden or dark photons include helioscopes such as the CAST experiment at CERN, and “light-shining-through-a-wall” experiments such as ALPS at DESY.

FUNK exploits a novel “dish-antenna” method first proposed in 2012, whereby a hidden photon crossing a metallic spherical mirror surface would cause faint electromagnetic waves to be emitted almost perpendicularly to the mirror surface and focused at the radius point. The experiment was conceived in 2013 at a workshop at DESY, when it was realised that a perfectly suited mirror – a prototype for the Pierre Auger Observatory with a surface area of 14 m² – was sitting in the basement of KIT. Various photodetectors placed at the radius point allow FUNK to search for a signal in different wavelength ranges, corresponding to different hidden-photon masses. The dark-matter nature of a possible signal can then be verified by observing small daily and seasonal movements of the spot around the radius point as Earth moves through the dark-matter field. The broadband dish-antenna technique is able to scan hidden photons over a large parameter space.
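The correspondence between hidden-photon mass and detection wavelength follows from the fact that, for cold dark matter, the emitted wave carries essentially the particle's rest-mass energy. A minimal sketch of the standard λ = hc/E conversion (the 2.5 and 7 eV endpoints are taken from the search range discussed in this article; the conversion itself is textbook physics):

```python
# Map a hidden-photon mass (eV) to the wavelength of the emitted light.
# For cold dark matter the emitted wave carries ~the particle's mass-energy,
# so lambda = h*c / E applies (standard conversion, assumed here).
HC_EV_NM = 1239.84  # h*c in eV*nm

def mass_to_wavelength_nm(mass_ev):
    """Wavelength in nm of light emitted for a given hidden-photon mass."""
    return HC_EV_NM / mass_ev

# The eV-scale masses probed by FUNK correspond to optical/UV light,
# which is why PMTs are suitable photodetectors:
for m in (2.5, 7.0):
    print(f"{m} eV -> {mass_to_wavelength_nm(m):.0f} nm")
```

The 2.5–7 eV range maps to roughly 180–500 nm, i.e. near-UV to visible light.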

The mass range of viable hidden-photon dark matter is huge

Joerg Jaeckel

Completed in 2018, the experiment took data last year in several month-long runs using low-noise PMTs. In the mass range 2.5–7 eV, the data exclude a hidden-photon coupling stronger than 10⁻¹² in kinetic mixing. “This is competitive with limits derived from astrophysical results and partially exceeds those from other existing direct-detection experiments,” says FUNK principal investigator Ralph Engel of KIT. So far two other experiments of this type have reported search results for hidden photons in this energy range – the dish antenna at the University of Tokyo and the SHUKET experiment at Paris-Saclay – though FUNK’s factor-of-ten larger mirror surface brings a greater experimental sensitivity, says the team. Other experiments, such as NA64 at CERN, which employs missing-energy techniques, are setting stringent bounds on the strength of dark-photon couplings for masses in the MeV range and above.

“The mass range of viable hidden-photon dark matter is huge,” says FUNK collaborator Joerg Jaeckel of Heidelberg University. “For this reason, techniques which can scan over a large parameter space are especially useful even if they cannot explore couplings as small as is possible with some other dedicated methods. A future exploitation of the setup in other wavelength ranges is possible, and FUNK therefore carries an enormous physics potential.”

100 TeV photons test Lorentz invariance

Over the past decades the photon emission from astronomical objects has been measured across 20 orders of magnitude in energy, from radio up to TeV gamma rays. This has not only led to many astronomical discoveries, but also, thanks to the extreme distances and energies involved, allowed researchers to test some of the fundamental tenets of physics. For example, the 2017 joint measurement of gravitational waves and gamma rays from a binary neutron-star merger made it possible to determine that the speed of gravity matches the speed of light to within a few parts in 10¹⁶. Now, the High-Altitude Water Cherenkov (HAWC) collaboration has pushed the energy of gamma-ray observations into new territory, placing constraints on Lorentz-invariance violation (LIV) that are up to two orders of magnitude tighter than before.
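The power of that speed-of-gravity test comes from the enormous flight time involved: a second-scale arrival delay divided by millions of years of travel gives a tiny fractional difference. A minimal order-of-magnitude sketch, using round illustrative numbers for the 2017 merger (a ~1.7 s gamma-ray delay over ~40 Mpc, assumed here rather than quoted in the article):

```python
# Order-of-magnitude estimate of the speed-of-gravity constraint from a
# joint gravitational-wave/gamma-ray observation. Inputs are round,
# illustrative values for the 2017 neutron-star merger (assumed here).
C = 3.0e8                    # speed of light, m/s
MPC = 3.086e22               # one megaparsec, m

distance = 40 * MPC          # ~40 Mpc to the source
delay = 1.7                  # s, gamma-ray arrival delay after the GW signal

travel_time = distance / C   # ~4e15 s of flight time
fractional_diff = delay / travel_time
print(f"|v_gw - c|/c < ~{fractional_diff:.1e}")
```

With these inputs the bound lands at a few times 10⁻¹⁶, the scale quoted in the text.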

Models incorporating LIV allow for modifications to the standard energy–momentum relationship dictated by special relativity, predicting phenomenological effects such as photon decay and photon splitting. Even if the probability for a photon to decay through such effects is small, the large distances involved in astrophysical measurements in principle allow experiments to detect it. The most striking implication would be the existence of a cutoff in the energy spectrum, above which photons would decay while travelling towards Earth. Simply detecting gamma-ray photons above the expected cutoff would therefore place strong constraints on LIV.

HAWC

Increasing the energy limit for photons with which we observe the universe is, however, challenging. Since the flux of a typical source, such as a neutron star, decreases rapidly (by approximately two orders of magnitude for each order of magnitude increase in energy), ever larger detectors are needed to probe higher energies. Photons with energies of hundreds of GeV can still be directly detected using satellite-based detectors equipped with tracking and calorimetry. However, these instruments, such as the US-European Fermi-LAT detector and the Chinese-European DAMPE detector, require a mass of several tonnes, making launching them expensive and complex. To reach even higher energies, ground-based detectors, which detect gamma rays through the showers they induce in Earth’s atmosphere, become the preferred option. While they can be scaled up in size more easily than space-based detectors, the indirect detection and the large background coming from cosmic rays make such measurements difficult.
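The "two orders of magnitude of flux per order of magnitude in energy" quoted above corresponds to an integral flux falling as E⁻²; a small sketch, with that spectral index assumed for illustration:

```python
# The article notes that source flux drops ~two orders of magnitude for each
# order of magnitude in energy. For an integral flux N(>E) proportional to
# E^-gamma this corresponds to gamma = 2 (assumed, illustrative index).
def integral_flux_ratio(e_low, e_high, gamma=2.0):
    """Ratio of integral fluxes above e_high vs above e_low."""
    return (e_high / e_low) ** (-gamma)

# One decade up in energy -> flux down by a factor of 100:
print(integral_flux_ratio(10.0, 100.0))  # 0.01
```

This scaling is why each decade of energy gained demands roughly a hundredfold increase in effective detector area or exposure.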

It is likely that LIV will be further constrained in the near future, as a range of new high-energy gamma-ray detectors are developed

Recently, significant improvements have been made in ground-based detector technology and data analysis. The Japanese-Chinese Tibet air-shower gamma-ray experiment ASγ, a detector array built at an altitude of 4 km in Yangbajing, added underground muon detectors to allow hadronic air showers to be differentiated from photon-induced ones via the difference in muon content. By additionally improving the data-analysis techniques to more accurately remove the isotropic all-sky background from the data, in 2019 the ASγ team managed to observe a source, in this case the Crab Nebula, at energies above 100 TeV for the first time. This ground-breaking measurement was soon followed by measurements of nine different sources above 56 TeV by the HAWC observatory, located at 4 km altitude in the mountains near Puebla, Mexico.

These new measurements of astrophysical sources, which are likely all pulsar-powered, could not only help answer the question of where the highest-energy (PeV and above) cosmic rays are produced, but also allow new constraints to be placed on LIV. The spectra of the four sources studied by the collaboration did not show any signs of a cutoff, allowing the HAWC team to place a lower limit on the LIV energy scale of 2.2×10³¹ eV – an improvement of one to two orders of magnitude over previous limits.

It is likely that LIV will be further constrained in the near future, as a range of new high-energy gamma-ray detectors are developed. Perhaps the most powerful of these is the Large High Altitude Air Shower Observatory (LHAASO), located in the mountains of the Sichuan province of China. Construction of the detector array is ongoing, with the first stage having commenced data taking in 2018. Once finished, LHAASO will be close to two orders of magnitude more sensitive than HAWC at 100 TeV and capable of pushing the photon energy into the PeV range. Additionally, the limit of direct-detection measurements will be pushed beyond that of Fermi-LAT and DAMPE by the Chinese-European High Energy cosmic Radiation Detector (HERD), a 1.8-tonne calorimeter surrounded by a tracker, scheduled for launch in 2025 and foreseen to be able to directly detect photons up to 100 TeV.

LHCb interrogates X(3872) line-shape

Figure 1

In 2003, the Belle collaboration reported the discovery of a mysterious new hadron, the X(3872), in the decay B+→X(3872)K+. Their analysis suggested an extremely small width, consistent with zero, and a mass remarkably close to the sum of the masses of the D0 and D*0 mesons. The particle’s existence was later confirmed by the CDF, D0, and BaBar experiments. LHCb first reported studies of the X(3872) in the data sample taken in 2010, and later unambiguously determined its quantum numbers to be 1++, leading the Particle Data Group to change the name of the particle to χc1(3872).

The nature of this state is still unclear. Until now, only an upper limit on the width of the χc1(3872) of 1.2 MeV has been available. No conventional hadron is expected to have such a narrow width in this part of the otherwise very well understood charmonium spectrum. Among the possible explanations are that it is a tetraquark, a molecular state, a hybrid state where the gluon field contributes to its quantum numbers, or a glueball without any valence quarks at all. A mixture of these explanations is also possible.

Two new measurements

As reported at the LHCP conference this week, the LHCb collaboration has now published two new measurements of the width of the χc1(3872), based on minimally overlapping data sets. The first uses Run 1 data corresponding to an integrated luminosity of 3 fb⁻¹, in which (15.5±0.4)×10³ χc1(3872) particles were selected inclusively from the decays of hadrons containing b quarks. The second analysis selected (4.23±0.07)×10³ fully reconstructed B+→χc1(3872)K+ decays from the full Run 1–2 data set, which corresponds to an integrated luminosity of 9 fb⁻¹. In both cases, the χc1(3872) particles were reconstructed through decays to the final state J/ψπ+π−. For the first time the measured Breit-Wigner width was found to be non-zero, with a value close to the previous upper limit from Belle (see figure).

Combining the two analyses, the mass of the χc1(3872) was found to be 3871.64±0.06 MeV – just 70±120 keV below the D0D*0 threshold. The proximity of the χc1(3872) to this threshold puts a question mark over measuring the width using a simple fit to the well-known Breit-Wigner function, as this approach neglects potential distortions. Conversely, a precise measurement of the line-shape could help elucidate the nature of the χc1(3872). This has led LHCb to explore a more sophisticated Flatté parametrisation and report a measurement of the χc1(3872) line-shape with this model, including the pole positions of the complex amplitude. The results favour the interpretation of the state as a quasi-bound D0D*0 molecule, but other possibilities cannot yet be ruled out. Further studies are ongoing. Physicists from other collaborations are also keenly interested in the nature of the χc1(3872), and the very recent observation by CMS of the decay process Bs0→χc1(3872)φ suggests another laboratory for studying its properties.
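The remarkable proximity to threshold can be checked by adding up the D-meson masses; the inputs below are approximate world-average values assumed here for illustration (the article itself quotes only the final mass and the 70±120 keV separation):

```python
# Illustrative check of how close the chi_c1(3872) sits to the D0-D*0
# threshold. The D-meson masses are approximate world-average values,
# assumed here for illustration; only m_X is quoted in the text.
m_D0 = 1864.84        # MeV
m_Dstar0 = 2006.85    # MeV
m_X = 3871.64         # MeV, measured chi_c1(3872) mass from the text

threshold = m_D0 + m_Dstar0          # ~3871.7 MeV
delta = (threshold - m_X) * 1000.0   # separation in keV
print(f"threshold = {threshold:.2f} MeV, X sits ~{delta:.0f} keV below it")
```

With these inputs the state sits a few tens of keV below threshold, consistent within the ±120 keV uncertainty quoted above – a binding energy orders of magnitude smaller than typical hadronic scales.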

LEP-era universality discrepancy unravelled

Figure 1

The family of charged leptons is composed of the electron, muon (μ) and tau lepton (τ). According to the Standard Model (SM), these particles only differ in their mass: the muon is heavier than the electron and the tau is heavier than the muon. A remarkable feature of the SM is that each flavour is equally likely to interact with a W boson. This is known as lepton flavour universality.

In a new ATLAS measurement reported this week at the LHCP conference, a novel technique using events with top-quark pairs has been exploited to test the ratio of the probabilities for tau leptons and muons to be produced in W-boson decays, R(τ/μ). In the SM, R(τ/μ) is expected to be unity, but a longstanding tension with this prediction has existed since the LEP era in the 1990s, when, from a combination of the four LEP experiments, R(τ/μ) was measured to be 1.070 ± 0.026, deviating from the SM expectation by 2.7σ. This strongly motivated new measurements with higher precision: if the LEP result were confirmed, it would correspond to an unambiguous discovery of physics beyond the SM.

Tag and probe

To conclusively prove either that the LEP discrepancy is real or that it was just a statistical fluctuation, a precision of at least 1–2% is required — something previously not thought possible at a hadron collider like the LHC, where inclusive W bosons, albeit produced abundantly, suffer from large backgrounds and kinematic biases due to the online selection in the trigger. The key to achieving this is to obtain a sample of muons and tau leptons from W boson decays that is as insensitive as possible to the details of the trigger and object reconstruction used to select them. ATLAS has achieved this by exploiting both the LHC’s large sample of over 100 million top-quark pairs produced in the latest run, and the fact that top quarks decay almost exclusively to a W boson and a b quark. In a tag-and-probe approach, one W boson is used to select the events and the other is used, independently of the first, to measure the fractions of decays to tau-leptons and muons.

The analysis focuses on tau-lepton decays to a muon, rather than hadronic tau decays, which are more complicated to reconstruct, thus reducing the systematic uncertainties associated with object reconstruction. The finite lifetime of the tau lepton and the lower momentum of its decay products, combined with the precise muon reconstruction of the ATLAS detector, are exploited to separate muons from tau-lepton decays from muons produced directly in W decays (so-called prompt muons). Specifically, the absolute distance of closest approach of muon tracks in the plane perpendicular to the beam line, |d0μ| (figure 1), and the transverse momentum, pTμ, of the muons are used to isolate these contributions. These variables, in particular |d0μ|, are calibrated using a pure sample of prompt muons from Z→μμ data.

The extraction of R(τ/μ) is performed using a fit to |d0μ| and pTμ, in which several systematic uncertainties cancel because they are correlated between the prompt μ and τ→μ contributions. These include, for example, uncertainties related to jet reconstruction, flavour tagging and trigger efficiencies. As a result, the measurement achieves very high precision, surpassing that of the previous LEP measurement.

Figure 2

The measured value is R(τ/μ) = 0.992 ± 0.013 [ ± 0.007 (stat) ± 0.011 (syst) ], forming the most precise measurement of this ratio, with an uncertainty half the size of that from the combination of LEP results (figure 2). It is in agreement with the Standard Model expectation and suggests that the previous LEP discrepancy may be due to a fluctuation.
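The quoted total uncertainty is the quadrature combination of the statistical and systematic components, and the earlier LEP tension follows from a simple pull calculation; a quick check of both numbers:

```python
# Check that the quoted total uncertainty on R(tau/mu) is the quadrature
# sum of its statistical and systematic components.
stat, syst = 0.007, 0.011
total = (stat**2 + syst**2) ** 0.5
print(f"total uncertainty = {total:.3f}")   # ~0.013

# Pull of the earlier LEP combination (1.070 +/- 0.026) from the SM value of 1:
pull = (1.070 - 1.0) / 0.026
print(f"LEP deviation = {pull:.1f} sigma")  # ~2.7
```

Both reproduce the figures quoted in the text, and the 0.013 total uncertainty is indeed half the 0.026 of the LEP combination.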

Though surviving this latest test, the principle of lepton flavour universality will not quite be out of the woods until the anomalies in B-meson decays recorded by the LHCb experiment (CERN Courier May/June 2020 p10) have also been definitively probed.

ALPHA sheds light on antihydrogen’s fine structure

The ALPHA collaboration at CERN has reported the first measurements of fine-structure effects and the Lamb shift in antihydrogen atoms. The results, published in Nature in February, bring further scrutiny to comparisons between antimatter and ordinary matter, which, if found to behave differently, would challenge CPT symmetry and shake the foundations of the Standard Model.

In 1947, US physicist Willis Lamb and his colleagues observed an incredibly small shift in the n = 2 energy levels of hydrogen in a vacuum. Under traditional physics theories of the day, namely the Dirac equation, these states should have the same energy and the Lamb shift shouldn’t exist. The discovery spurred the development of quantum electrodynamics (QED), which explains the discrepancy as being due to interactions between the atom’s constituents with vacuum-energy fluctuations, and won Lamb the Nobel Prize in Physics in 1955.

Antimatter spectroscopy

The ALPHA team creates antihydrogen atoms by binding antiprotons delivered by CERN’s Antiproton Decelerator (AD) with positrons. The antiatoms are then confined in a magnetic trap in an ultra-high vacuum, and illuminated with a laser to measure their spectral response. This technique enables the measurement of known quantum effects such as the fine structure and the Lamb shift, which have now been measured in the antihydrogen atom for the first time. The ALPHA team previously used this approach to measure other quantum effects in antihydrogen, the most recent being a measurement of the Lyman–alpha (1S–2P) transition in 2018.

Measured frequencies

The splitting of the n = 2 energy level of hydrogen is a separation between the 2P3/2 and 2P1/2 levels in the absence of a magnetic field, and is caused by the interaction between the electron’s spin and its orbital angular momentum. The classic Lamb shift is the splitting between the 2S1/2 and 2P1/2 levels, also in the absence of a magnetic field, and is the result of the effect on the electron of quantum fluctuations associated with virtual photons.

The work confirms that a key portion of QED holds up in both matter and antimatter

Jeffrey Hangst

In its new study, the ALPHA team determined the fine-structure splitting and the Lamb shift by inducing transitions between the lowest (n = 1) energy level of antihydrogen and the 2P3/2 and 2P1/2 levels in the presence of a 1 T magnetic field. Using the value of the frequency of a previously measured transition (1S–2S), the team was able to infer the values of the fine-structure splitting and the Lamb shift. The results were found to be consistent with theoretical predictions of the splittings in normal hydrogen, within the experimental uncertainties of 2% for the fine-structure splitting and 11% for the Lamb shift. “The work confirms that a key portion of QED holds up in both matter and antimatter, and probes aspects of antimatter interaction – such as the Lamb shift – that we have long looked forward to addressing,” says ALPHA spokesperson Jeffrey Hangst.

The seminal measurements of antihydrogen’s spectral structure that are now possible follow more than 30 years of effort by the low-energy antimatter community at CERN. The first antihydrogen atoms were observed at CERN’s LEAR facility in 1995 and, in 2002, the ATHENA and ATRAP collaborations produced cold (trappable) antihydrogen at the AD, opening the way to precision measurements of antihydrogen’s atomic spectra. In addition to spectral measurements, the charge-to-mass ratios for the proton and antiproton have been shown to agree to 69 parts per trillion by the BASE experiment, and the antiproton-to-electron mass ratio has been measured to agree with its proton counterpart to a level of 0.8 parts per billion by the ASACUSA experiment. The newly completed ELENA facility at the AD will increase the number of available antiprotons by up to two orders of magnitude.

Next for the ALPHA team is chilling large samples of antihydrogen using state-of-the-art laser cooling techniques. “These techniques will transform antimatter studies and will allow unprecedentedly high-precision comparisons between matter and antimatter,” says Hangst.

ATLAS extends search for top squark

Figure 1

Supersymmetry is an attractive extension of the Standard Model that aims to answer some of the most fundamental open questions in modern particle physics. For example: why is the Higgs boson so light? What is dark matter, and how does it fit into our understanding of the universe? Do the electroweak and strong forces unify at smaller distances?

Supersymmetry predicts a new partner for each elementary particle, including the heaviest particle ever observed – the top quark. If the partner of the top quark (the top squark, or “stop”) were not too heavy, the quantum corrections to the Higgs boson mass would largely cancel, thereby stabilising its small value of 125 GeV. Moreover, the lightest supersymmetric particle (LSP) may be stable and weakly interacting, providing a dark-matter candidate. Signs of the top squark, and thus supersymmetry, may yet be lurking in the enormous number of proton–proton collisions provided by the LHC.
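The cancellation argument can be sketched with the standard one-loop expressions, here written as a textbook illustration with a hard momentum cutoff Λ rather than the full supersymmetric calculation:

```latex
% Quadratically divergent top-loop correction to the Higgs mass:
\delta m_H^2 \big|_{\text{top}}
  \simeq -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2

% The two stop loops contribute with the opposite sign:
\delta m_H^2 \big|_{\tilde t}
  \simeq +\frac{3 y_t^2}{8\pi^2}\,\Lambda^2
  + \mathcal{O}\!\left(m_{\tilde t}^2 \ln\frac{\Lambda}{m_{\tilde t}}\right)
```

The quadratic divergences cancel, leaving a residual correction of order the stop mass squared, which is why a not-too-heavy stop keeps the Higgs boson naturally light.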

Two new searches

The ATLAS collaboration recently released two new searches, each looking to detect pairs of top squarks by exploiting the full LHC Run 2 dataset, corresponding to an integrated luminosity of 139 fb–1. Each top squark decays to a top quark and an LSP that escapes the detector without interacting. The experimental signature is therefore an energetically unbalanced event, with two sets of top-quark remnants and a large amount of missing energy.

A challenge for such searches is that the masses of the supersymmetric particles are unknown, leaving a large range of possibilities to explore. Depending on the mass difference between the top squark and the LSP, the final decay products can be (very) soft or (very) energetic, calling for different reconstruction techniques and sparking the development of new approaches. For example, novel “soft b-tagging” techniques, based on either pure secondary-vertex information or jets built from tracks, were implemented for the first time in these analyses to extend the sensitivity to lower kinematic regimes. This allowed the searches to probe small top squark–LSP mass differences down to 5 GeV for the first time.

Leptoquark decays would exhibit a similar experimental signature to top-squark decays

Other sophisticated analysis strategies, including the use of machine-learning techniques, improved the discrimination between the signal and Standard-Model background and maximised the sensitivity of the analysis. Furthermore, the two searches are designed to fully complement one another. Together they greatly extend the reach in the top-squark mass versus LSP mass plane, including the challenging region where the top-squark mass is very close to the top mass (figure 1). No evidence of new physics was found in either search.

Beyond supersymmetry, these search results are intriguing for other new-physics scenarios. For example, a hypothetical leptoquark, a particle that couples to both quarks and leptons, could decay into a top quark and a neutrino, producing an experimental signature similar to that of a top-squark decay. The results also constrain non-supersymmetric models predicting dark matter produced together with a pair of top quarks.

Boosting top-quark measurements

Figure 1

Weighing in at 180 times the mass of the proton, the top quark is the heaviest elementary particle discovered so far. Because of its large mass, it is the only quark that does not form bound states with other quarks but decays immediately after it has been produced. Despite its short lifetime, its existence has far-reaching consequences. It governs the stability of the electroweak vacuum, gives large contributions to the mass of the W boson, and influences many other important observables through quantum-loop corrections. An accurate knowledge of its mass is important for our understanding of fundamental interactions.

The top quark governs the stability of the electroweak vacuum

The LHC’s high centre-of-mass energy makes it an ideal laboratory for studying the properties of the top quark with unprecedented precision. Such studies demand that jets originating from light and bottom quarks be measured very accurately. Even then, subtleties remain, as exact calculations are not possible for low-energy quarks and gluons once they start to form bound states: in this regime our approximations become inaccurate, because the mass of the bound states becomes as large as the energy of the underlying process. An exciting way to overcome these difficulties is to measure top quarks produced with very high transverse momenta, and thus large Lorentz boosts. In these topologies the decay products are highly collimated and can be clearly assigned to a decaying top quark. Effects from the formation of hadrons play a minor role in boosted topologies, as the top quarks, originally produced in quark–antiquark pairs, move apart from each other fast enough that their decays can be considered to happen independently.

Boosted precision

By reconstructing a boosted top quark in a single jet, a measurement of the jet mass can be translated into one of the top-quark mass. The CMS collaboration has carried out such a measurement using the √s = 13 TeV data collected in 2016, reconstructing the top-quark jets with the novel XCone algorithm to obtain a top-quark mass of 172.6 ± 2.5 GeV (figure 1). Thanks to this new way of reconstructing jets, the precision improved by more than a factor of three relative to an earlier measurement at √s = 8 TeV. Although the uncertainty is larger than for direct measurements, where top quarks are reconstructed from multiple jets or leptons and missing transverse momentum (which currently yield a world average of 172.9 ± 0.4 GeV from a combination of CMS, ATLAS and Tevatron measurements), this new result shows for the first time the potential of using boosted top quarks for precision measurements.

The jet mass can be translated into the top-quark mass

Measuring the properties of the top quark at high momenta enables detailed studies of a theoretically compelling kinematic regime that has not been accessible before. Its dynamics are governed by different effects, such as the collinear radiation of gluons and quarks, than those of top-quark production at low energies. Exploiting the full Run-2 dataset should allow CMS to extend this measurement to higher boosts, and to establish the boosted regime for a number of precision measurements in the top-quark sector in Run 3 and at the high-luminosity LHC.

A first taste of neutrino physics

A string of optical detectors for the KM3NeT neutrino telescope

Almost 90 years after Pauli postulated its existence, much remains to be learnt about the neutrino. The observation of neutrino oscillations in 1998 revealed that the particle’s flavour and mass eigenstates mix and oscillate. At least two of the mass eigenstates must be massive, like the other known fermions, though with far smaller masses. The need for a mechanism to generate such small masses strongly hints at the existence of new physics beyond the Standard Model. Faced with such compelling questions, neutrino experiments are springing up at an unprecedented rate, from a plethora of searches for neutrinoless double-beta decay to gigantic astrophysical-neutrino detectors at the South Pole (IceCube) and soon in the Mediterranean Sea (KM3NeT), with two projects of enormous scope, DUNE and Hyper-Kamiokande, on the horizon. Now, then, is a timely moment for the publication of a tutorial for graduate students and young researchers who are entering this fast-moving field.

Access all areas

Edited by Antonio Ereditato, former spokesperson of the OPERA experiment, The State of the Art of Neutrino Physics provides a historical account and introduction to basic concepts, reviews of the various subfields where neutrinos play a significant role, and a detailed account of the data produced by experiments currently in operation. An extremely valuable compilation of topical articles, the book covers essentially all areas of research in experimental neutrino physics, from astrophysical, solar and atmospheric neutrinos to accelerator and reactor neutrinos. The large majority of the articles are written in a didactic style by leading experts, allowing young researchers to acquaint themselves with the diverse research in the field. In particular, the chapter describing the formalism of neutrino oscillations should be required reading for all aspiring neutrino physicists. In all cases special attention is given to experimental challenges.

The State of the Art of Neutrino Physics: A Tutorial for Graduate Students and Young Researchers

From the theory side, chapters cover measurements at neutrino experiments of the low-energy interactions of neutrinos with nuclei (a key way to reduce systematic uncertainties), the phenomenology and consequences of the yet-to-be-determined neutrino-mass hierarchy, and the possibility of CP violation in the lepton sector. A very detailed account of solar neutrinos and matter effects in the Sun is written by Alexei Smirnov, one of the inventors of the celebrated Mikheyev–Smirnov–Wolfenstein effect, which describes how weak interactions with electrons modify oscillation probabilities for the various neutrino flavours. More speculative scenarios, for example on the possibility of the existence of sterile neutrinos, are discussed as well.

For a book like this, which has the ambition to address a broad palette of neutrino questions, it is always difficult to be totally complete, but it comes close. Some topics have evolved in the details since 2016, when the material upon which the book is based was written, but that doesn’t take away from the book’s value as a tutorial. I recommend it very highly to young and not-so-young aspiring neutrino aficionados alike.
