The XXXI International Workshop on Deep Inelastic Scattering and Related Subjects (DIS2024) will be organized in Grenoble, France, from April 8 to April 12, 2024.
The conference covers a large spectrum of topics in high energy physics. A significant part of the program is devoted to the most recent theoretical advances and results from large experiments at BNL, CERN, DESY, FNAL, JLab and KEK.
The venue of the workshop is the Maison MINATEC congress centre, part of the Grenoble Presqu'île scientific area close to the city centre.
The 21st International Conference on Strangeness in Quark Matter (SQM 2024) will focus on new experimental and theoretical developments on the role of strange and heavy-flavour quarks in high energy heavy-ion collisions and in astrophysical phenomena.
Scientific topics include:
Strangeness and heavy-quark production in nuclear collisions and hadronic interactions
Hadron resonances in the strongly-coupled partonic and hadronic medium
Bulk matter phenomena associated with strange and heavy quarks
To advance our understanding of gluonic saturated matter at the LHC, the ALICE collaboration has presented a new study using photon-induced interactions in ultra-peripheral collisions (UPCs). In this type of collision, one beam emits a highly energetic photon that strikes the other beam, giving rise to photon–proton, photon–nucleus and even photon–photon collisions.
While we know that the proton – and most of the visible matter of the universe – is made of quarks bound together by gluons, quantum chromodynamics (QCD) has not yet provided a complete understanding of the rich physics phenomena that occur in high-energy interactions involving hadrons. For example, it is not known how the distribution of gluons evolves at low values of Bjorken-x. The rapid increase in gluon density observed with decreasing x cannot continue forever, as it would eventually violate unitarity. At some point "gluon saturation" must set in to curb this growth.
So far, it has been challenging to experimentally establish when saturation sets in. One can expect, however, that it should occur at lower energies for heavy nuclei than for protons. Thus, the ALICE Collaboration has studied the energy dependence of UPC processes for both protons and heavy nuclei. At the same time, other physics phenomena, such as gluon shadowing originating from multi-scattering processes, can exist with similar experimental signatures. The interplay between these phenomena is still an open problem in QCD.
ALICE has presented new results on coherent J/ψ meson production in UPCs, where the photon probes the whole nucleus. The new ALICE results, analysed using LHC Run 1 and Run 2 data, probe a wide range of photon–nucleus collision energies, from around 10 GeV to 1000 GeV. These results confirm previous ALICE measurements, obtained at lower energies, that indicated a strong nuclear suppression when such photon–nucleus data are compared to expectations from photon–proton interactions. The present analysis employs novel methods for extracting the energy dependence, providing new information to test theoretical models. The present data at high energies can be described by both saturation-based and gluon-shadowing models. The coherent J/ψ meson production at low energy, in the anti-shadowing region, is not described by these models, nor can available models fully describe the energy dependence of this process over the explored energy range.
ALICE will continue to investigate these phenomena in LHC Runs 3 and 4, where high-precision measurements with larger data samples and upgraded detectors will provide more powerful tools to better understand gluonic saturated matter.
Charge-parity (CP) violation parameters in tree-dominated b → cc̄s quark transitions are a powerful probe of physics beyond the Standard Model (SM). When B0(s) and B̄0(s) mesons decay through these transitions to the same final-state particles, interference between mixing and decay amplitudes occurs, making these processes particularly sensitive to CP violation.
In the SM, B0(s)–B̄0(s) mixing is possible because the flavour eigenstates are not the (physical) mass eigenstates: a neutral B meson, once produced, evolves as a quantum superposition of B0(s) and B̄0(s) states. Due to this time-dependent mixing amplitude, interference between the mixing and decay amplitudes can lead to an observable time-dependent CP asymmetry in the decay rates. It was through the observation of this phenomenon in the "golden mode" B0 → J/ψ K0S that, in 2001, the BaBar and Belle collaborations reported the first unequivocal evidence for CP violation in B decays, for which Kobayashi and Maskawa were awarded the 2008 Nobel Prize in Physics.
As the 3 × 3 Cabibbo–Kobayashi–Maskawa (CKM) matrix that describes quark mixing in the SM is expected to be unitary, its complex elements obey a set of relations. These can be represented as triangles in the complex plane, all with the same area (which is a measure of the amount of CP violation in the SM). The most famous of them, the so-called unitarity triangle, has sides of roughly the same size and internal angles denoted α, β and γ. Since none of the CKM parameters is individually predicted by theory, the search for new physics relies on over-constraining them and looking for any hint of internal inconsistency. For that, precision is key.
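As a concrete example, the unitarity condition applied to the first and third columns of the CKM matrix gives the relation defining the unitarity triangle, with the angle β expressed in terms of its elements:

```latex
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0,
\qquad
\beta = \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right)
```

Dividing the relation by one of its terms yields a triangle in the complex plane whose angles are the α, β and γ quoted in the text.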
LHCb has become a major actor in precision studies of CP violation
Having analysed the full proton–proton collision dataset collected at 13 TeV, and combining it with previous measurements at 7 and 8 TeV, LHCb recently brought the CP-violating parameters in B0 → J/ψ K0S and in another golden channel, B0s → J/ψ K+K–, to a new level of precision. These parameters (sin2β and φs, respectively) are predicted with high accuracy by global CKM fits and, given their clean experimental signatures, are paramount for new-physics searches. The measured time-dependent CP asymmetry of B0 and B̄0 decay rates is shown in figure 1, with the resulting amplitude proportional to sin2β. Similarly, the update of the B0s → J/ψ K+K– analysis with the 13 TeV data resulted in the world's most precise φs measurement. Both angles agree with SM expectations and with previous measurements.
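In the standard notation, and neglecting the small direct CP-violating term, the time-dependent asymmetry for the golden mode reduces to a single sine oscillation whose amplitude is sin2β:

```latex
A_{CP}(t) =
\frac{\Gamma(\bar{B}^{0}(t) \to J/\psi K^{0}_{S}) - \Gamma(B^{0}(t) \to J/\psi K^{0}_{S})}
     {\Gamma(\bar{B}^{0}(t) \to J/\psi K^{0}_{S}) + \Gamma(B^{0}(t) \to J/\psi K^{0}_{S})}
\simeq \sin 2\beta \, \sin(\Delta m_{d}\, t)
```

where Δm_d is the B0–B̄0 oscillation frequency.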
These legacy results for sin2β and φs from the first LHC runs represent a new milestone in LHCb’s hunt for physics beyond the SM. Along with the world-leading determination of γ (with a current precision of less than four degrees), and the discovery of CP violation in charm in 2019, LHCb has fulfilled and exceeded its own goals of more than a decade ago, becoming the major actor in precision studies of CP violation. LHCb is taking data with a brand new detector at larger interaction rates than before, boosting the experimental sensitivity and tightening the grip around the Standard Model.
Since the discovery of the Higgs boson in 2012, its di-photon and four-lepton decays have played a crucial role in characterising its properties. Despite their small branching ratios, these decay channels are ideal for accurate measurements due to the excellent resolution and efficient identification of photons and leptons provided by the ATLAS detector.
The Higgs-boson mass (mH) is a free parameter of the Standard Model (SM) that must be determined experimentally. Its value governs the coupling strengths of the Higgs boson with the other SM particles. It also enters as logarithmic corrections to the SM predictions of the W-boson mass and effective weak mixing angle, whose precise measurements allow the electroweak model to be tested. Moreover, the Higgs mass determines the shape and energy evolution of the Brout–Englert–Higgs potential and thus the stability of the electroweak vacuum. A precise measurement of mH is therefore of paramount importance.
ATLAS has recently published a new measurement of the Higgs-boson mass in the H → γγ decay channel using proton–proton collision data from LHC Run 2 (2015–2018). The measurement requires careful control of systematic uncertainties, primarily arising from the photon energy scale. The new analysis reduces these uncertainties by more than a factor of three compared to the previous ATLAS result, which was based on the 2015–2016 dataset. This improvement became possible after extensive efforts to refine the photon energy-scale calibration and its associated uncertainties.
The calibration benefited from an improved understanding of the energy response across the longitudinal ATLAS electromagnetic calorimeter layers and of nonlinear electronics readout effects. A new correction was implemented in the extrapolation of the precisely measured electron-energy scale in Z → e+e– events to photons, to account for differences in the lateral shower development between electrons and photons. These improvements reduced the systematic uncertainty in the mass measurement by about 40%. Moreover, the extrapolation of the electron energy scale from Z → e+e– events to photons originating from the Higgs boson was further refined, and transverse-momentum dependent effects were corrected. Taken together, the improvements allowed ATLAS to measure the Higgs-boson mass in the di-photon channel with a precision of 1.1 per mille.
The new di-photon result was combined with the mH measurement in the H → ZZ*→ 4ℓ decay using the full Run 2 dataset, published by ATLAS in 2022, and with the corresponding Run 1 (2011–2012) measurements (see figure 1). The resulting combined Higgs-boson mass, mH = 125.11 ± 0.11 GeV, has a precision of 0.9 per mille and is dominated by statistical uncertainties that will be further reduced with the Run 3 data.
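As a back-of-the-envelope illustration of how such a combination works, the sketch below merges two hypothetical mass measurements by inverse-variance weighting. The real ATLAS combination also accounts for correlated systematic uncertainties, and the input numbers here are placeholders, not the published channel values.

```python
# Illustrative only: inverse-variance weighted average of two hypothetical
# Higgs-mass measurements, expressing the combined precision in per mille.
# The inputs below are invented placeholders, not ATLAS channel results.

def combine(measurements):
    """Inverse-variance weighted average of (value, uncertainty) pairs."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, total ** -0.5

# Hypothetical di-photon and four-lepton inputs (GeV):
m, sigma = combine([(125.17, 0.14), (125.06, 0.18)])
precision_per_mille = 1000 * sigma / m
```

The combined uncertainty is always smaller than the best individual one, which is why adding the four-lepton channel sharpens the di-photon result.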
The high level of readiness and excellent performance of the ATLAS detector also allowed first measurements of the fiducial Higgs-boson production cross-sections in the H → γγ and H → ZZ*→ 4ℓ decay channels using up to 31.4 fb–1 of data collected in 2022. Their extrapolation to full phase space and combination gives σ(pp → H) = 58.2 ± 8.7 pb, which agrees with the SM prediction of 59.9 ± 2.6 pb (see figure 2).
With the continuation of Run 3 data taking, the precision of the 13.6 TeV cross-section measurements will improve and the combination with the Run 2 data will allow the exploration of Higgs-boson properties with growing sensitivity.
Despite its exceptional success, we know that the Standard Model (SM) is incomplete. To date, the LHC has not found clear indications of physics beyond the SM (BSM), which might mean that the BSM energy scale lies above what can be directly probed at the LHC. An alternative way to probe BSM physics is through searches for off-shell (virtual) effects, which can be done using the effective field theory (EFT) framework. By treating the SM Lagrangian as the lowest-order term in a perturbative expansion, EFT allows higher-dimension operators to be included in the Lagrangian while respecting the experimentally verified SM symmetries.
Operators
The CMS collaboration recently performed a search for BSM physics using EFT, analysing data containing top quarks with additional final-state leptons. The top quark is of particular interest because of its large mass, which results in a Higgs–Yukawa coupling of order unity; many BSM models consequently connect the top quark to large couplings to new physics. In top-quark EFT there are 59 operators at dimension six, 26 of which produce final-state leptons; each is controlled by a so-called Wilson coefficient. These coefficients enter the model as corrections to the SM matrix element, with a first term corresponding to the interference between the SM and BSM contributions and a second term reflecting pure BSM effects.
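The resulting quadratic dependence of a predicted rate on a single Wilson coefficient can be sketched as follows; the function and its numerical values are illustrative, not CMS inputs.

```python
# Minimal sketch of the EFT rate parameterisation for one Wilson coefficient c:
# the linear term encodes SM-BSM interference, the quadratic term the pure BSM
# contribution. All numerical values are invented for illustration.

def sigma(c, sigma_sm=1.0, sigma_int=0.05, sigma_quad=0.01):
    """Predicted rate as a function of the Wilson coefficient c."""
    return sigma_sm + c * sigma_int + c**2 * sigma_quad

# At c = 0 the pure SM rate is recovered:
assert sigma(0.0) == 1.0
```

Because each category's yield is a quadratic polynomial in the coefficients, the full likelihood can be evaluated quickly for any point in the 26-dimensional coefficient space.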
The analysis was performed on the Run 2 proton–proton collision sample, corresponding to an integrated luminosity of 138 fb–1. It obtained limits on those 26 dimension-six coefficients by exploiting six final-state signals, simulated at detector level with leading-order precision (plus an additional parton when possible), with different numbers of top quarks and leptons: ttH, ttℓν, ttℓℓ, tℓℓq, tHq and tttt. The analysis splits the data into 43 discrete categories, based primarily on lepton multiplicity, total lepton charge, and total jet or b-quark jet multiplicities. The events are analysed as differential distributions in the kinematics of the final-state leptons and jets.
A statistical analysis is performed using a profiled likelihood to extract the 68% and 95% confidence intervals for all 26 Wilson coefficients by varying one of them while profiling the other 25. All the coefficients are compatible with zero (i.e. in agreement with the SM) at the 95% confidence level. For many of them, these results are the most competitive to date, even when compared to analyses that fit only one or two coefficients. Figure 1 shows how the 95% confidence intervals (2σ limit) translate into upper limits on the energy scale of the probed BSM interaction.
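A minimal sketch of the profiling procedure, assuming a toy Gaussian likelihood in two correlated coefficients rather than the actual CMS likelihood: one coefficient is scanned while the other is minimised over, and the confidence intervals are read off from the likelihood-ratio thresholds.

```python
# Toy profiled-likelihood scan (not the CMS likelihood): scan c1 while
# minimising over c2, then read off the 68% and 95% CL intervals from
# 2*DeltaNLL < 1.00 and < 3.84. All numbers are invented for the sketch.

import numpy as np

def nll(c1, c2, rho=0.4):
    """Correlated Gaussian negative log-likelihood with minimum at (0, 0)."""
    return 0.5 * (c1**2 - 2 * rho * c1 * c2 + c2**2) / (1 - rho**2)

grid = np.linspace(-4, 4, 801)
# For a Gaussian NLL the minimising c2 at fixed c1 is rho * c1.
profiled = np.array([nll(c1, 0.4 * c1) for c1 in grid])
delta = 2 * (profiled - profiled.min())   # 2*DeltaNLL, chi2-distributed

interval_68 = grid[delta < 1.00]   # 68% CL interval
interval_95 = grid[delta < 3.84]   # 95% CL interval
```

In this toy case the profiled curve is exactly quadratic, so the 95% interval ends at ±1.96; in the real fit the 26-dimensional likelihood is profiled numerically.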
The CMS collaboration will continue to refine these measurements by expanding upon the final-state observables and leveraging the Run 3 data sample. With the HL-LHC quickly approaching, the future of BSM physics searches is full of potential.
The GBAR experiment at CERN has joined the select club of experiments that have succeeded in synthesising antihydrogen atoms. Located at the Antiproton Decelerator (AD), GBAR aims to test Einstein’s equivalence principle by measuring the acceleration of an antihydrogen atom in Earth’s gravitational field and comparing it with that of normal hydrogen.
Producing and slowing down an antiatom enough to see it in free fall is no mean feat. To achieve this, the AD’s 5.3 MeV antiprotons are decelerated and cooled in the ELENA ring and a packet of a few million 100 keV antiprotons is sent to GBAR every two minutes. A pulsed drift tube further decelerates the packet to an adjustable energy of a few keV. In parallel, a linear particle accelerator sends 9 MeV electrons onto a tungsten target, producing positrons, which are accumulated in a series of electromagnetic traps. Just before the antiproton packet arrives, the positrons are sent to a layer of nanoporous silica, from which about one in five positrons emerges as a positronium atom. When the antiproton packet crosses the resulting cloud of positronium atoms, a charge exchange can take place, with the positronium giving up its positron to the antiproton, forming antihydrogen.
At the end of 2022, during an operation that lasted several days, the GBAR collaboration detected some 20 antihydrogen atoms produced in this way, validating the “in-flight” production method for the first time. The collaboration will now improve the production of antihydrogen atoms to enable precision measurements, for example, of its spectroscopic properties.
The first antihydrogen atoms were produced at CERN’s LEAR facility in 1995, but at an energy too high for any measurement to be made. Following this early success, CERN’s Antiproton Accumulator (used for the discovery of the W and Z bosons in 1983) was repurposed as a decelerator, becoming the AD, which is unique worldwide in providing low-energy antiprotons to antimatter experiments. After the demonstration of storing antiprotons by the ATRAP and ATHENA experiments, ALPHA, a successor of ATHENA, was the first experiment to merge trapped antiprotons and positrons and to trap the resulting antihydrogen atoms. Since then, ATRAP and ASACUSA have also achieved these two milestones, and AEgIS has produced pulses of antiatoms. GBAR now joins this elite club, having produced 6 keV antihydrogen atoms in-flight.
GBAR is also not alone in its aim of testing Einstein’s equivalence principle with atomic antimatter. ALPHA and AEgIS are also working towards this goal using complementary approaches.
Within astronomy and cosmology, the idea that the universe is continuously expanding is a cornerstone of the standard cosmological model. For example, the distances of astronomical objects are often measured through their redshift, which is induced by their velocity with respect to us due to the expansion. The expansion itself has, however, never been directly measured: no measurement exists that shows the redshift of a single object increasing with time. Although such a measurement is not far beyond the current capabilities of astrophysics, it is unlikely to be performed soon. Rather, evidence for the expansion is based on correlations within populations of astrophysical objects. However, not all studies agree with this standard assumption.
One population study that supports the standard model concerns type Ia supernovae, specifically the observed correlation between their duration and distance. Such a correlation is predicted to result from the time dilation induced by the higher recession velocity of more distant objects. Supporting this picture, gamma-ray bursts occurring at larger distances appear, on average, to last longer than those occurring nearby. However, similar studies of quasars had thus far shown no dependence of their variability timescales on distance, thereby contradicting special relativity and leading to an array of alternative hypotheses.
Detailed studies
Quasars are active galaxies containing a supermassive black hole surrounded by a relativistic accretion disk. Due to their brightness they can be observed at redshifts up to about z = 8, at which time dilation predicts their variability should appear (1 + z) = 9 times slower than that of nearby quasars. As previous studies did not observe such time dilation, alternative theories were proposed, including some that cast doubt on the extragalactic nature of quasars. A new, detailed study now removes the need for such theories.
These results do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model
In order to observe time dilation one requires a standard clock. Supernovae are ideal for this purpose because these explosions are all nearly identical, allowing their duration to be used to measure time dilation. For quasars the issue is more complicated, as the variability of their brightness appears almost random. However, the variability can be modelled using a so-called damped random walk (DRW): a random process combined with an exponential damping component. This model does not allow the brightness of a quasar to be predicted, but it contains a characteristic timescale in the exponent that should correlate with redshift due to time dilation.
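The DRW can be sketched as an Ornstein–Uhlenbeck process: random steps plus an exponential pull back towards the mean, with characteristic timescale tau. The parameter values below are illustrative only.

```python
# Minimal damped-random-walk (Ornstein-Uhlenbeck) sketch of quasar brightness
# variability. Parameter values are illustrative, not fitted to any quasar.

import numpy as np

def simulate_drw(n_steps, dt, tau, sigma, seed=0):
    """Exact OU update: x[k+1] = x[k]*exp(-dt/tau) + Gaussian innovation."""
    rng = np.random.default_rng(seed)
    decay = np.exp(-dt / tau)
    # Innovation variance chosen so the marginal (stationary) std stays sigma.
    innovation_sd = sigma * np.sqrt(1 - decay**2)
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, sigma)
    for k in range(1, n_steps):
        x[k] = x[k - 1] * decay + rng.normal(0.0, innovation_sd)
    return x

# A synthetic light curve (arbitrary magnitude units, dt in days):
curve = simulate_drw(n_steps=5000, dt=1.0, tau=50.0, sigma=0.2)
```

Fluctuations shorter than tau are smoothed away by the damping, which is what makes tau a usable clock: cosmological time dilation stretches the observed tau by (1 + z).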
This idea has now been tested by Geraint Lewis and Brendon Brewer of the universities of Sydney and Auckland, respectively. The pair studied 190 quasars with redshifts up to z = 4, observed over a 20-year period by the Sloan Digital Sky Survey and Pan-STARRS1, and applied a Bayesian analysis to look for a correlation between the DRW parameters and redshift. The data were found to be best matched by a universe in which the DRW timescales scale as (1 + z)^n with n = 1.28 ± 0.29, compatible with n = 1, the value expected from standard physics. This contradicts previous measurements, which the authors attribute to the smaller quasar samples used in earlier studies: the complex nature of quasars and the large variability within their population require long observations of a similar population to make the time-dilation effect visible.
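The scaling test can be illustrated on synthetic data: the sketch below generates timescales obeying (1 + z)^n with scatter and recovers n with a simple log-log fit, a much cruder stand-in for the authors' Bayesian treatment. All numbers are invented.

```python
# Hedged illustration of recovering the time-dilation exponent n from
# synthetic quasar timescales. Not the Lewis-Brewer analysis; invented numbers.

import numpy as np

rng = np.random.default_rng(42)
n_true, tau_rest = 1.0, 100.0                 # exponent and rest-frame tau (days)
z = rng.uniform(0.1, 4.0, size=200)           # 200 synthetic quasar redshifts
scatter = rng.normal(0.0, 0.05, size=200)     # log-normal population scatter
tau_obs = tau_rest * (1 + z)**n_true * np.exp(scatter)

# log(tau_obs) = log(tau_rest) + n * log(1 + z): fit the slope n.
slope, intercept = np.polyfit(np.log1p(z), np.log(tau_obs), 1)
```

With realistic quasar scatter (much larger than assumed here) a plain fit becomes unreliable, which is why a large sample and a full Bayesian treatment were needed to see n consistent with 1.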
These new results, which were made possible due to the large amounts of data becoming available from large observatories, do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model.
At around 1 a.m. on 17 July, the LHC beams were dumped after only nine minutes in collision due to a radiofrequency interlock caused by an electrical perturbation. Approximately 300 milliseconds after the beams were cleanly dumped, several superconducting magnets lost their superconducting state, or quenched. Among them were the inner-triplet magnets located to the left of Point 8, which focus the beams for the LHCb experiment. While occasional quenches of some LHC magnets are to be expected, the large forces resulting from this particular event led to a breach of the helium pressure vessel, rapidly degrading the insulation vacuum and prompting a series of interventions with implications for the 2023 Run 3 schedule.
The leak occurred between the LHC’s cryogenic circuit, which contains the liquid helium, and the insulation vacuum that separates the cold magnet from the warm outer vessel (the cryostat) – a crucial barrier for preventing heat transfer from the surrounding LHC tunnel to the interior of the cryostat. As a result of the leak, the insulation vacuum filled with helium gas, cooling down the cryostat and causing condensation to form and freeze on the outside.
By 24 July the CERN teams had traced the leak to a crack in one of the more than 2500 bellows that compensate for thermal expansion and contraction on the cryogenic distribution lines. Measuring just 1.6 mm, the crack is thought to have been caused by a sudden increase in vacuum pressure when the magnet quench protection system (QPS) kicked in. Following the electrical perturbation, the QPS had dutifully triggered the quench heaters (which are designed to bring the whole magnet out of the superconducting state in a controlled and homogeneous manner) of the magnets concerned, generating a heat wave according to expectations.
It is the first time that such a breach event has occurred; the teamwork between many working groups (including safety, accelerator operations, vacuum, cryogenics, magnets, survey, beam instrumentation, machine protection, electrical quality assurance, and material and mechanical engineering) made a quick assessment and action plan possible. On 25 July the affected bellow was removed. A new bellow was installed on 28 July, the affected modules were closed, and the insulation vacuum was pumped.
The electrical perturbation turned out to have been caused by an uprooted tree falling on power lines in the nearby Swiss municipality of Morges. In early August, as the Courier went to press, the repairs were finished and the implications for Run 3 physics were being assessed. The choice is between preparing the machine for a short proton–proton phase to recover some of the missed run time, or sticking to the heavy-ion run planned for the end of the year, given that there was no full heavy-ion run in 2022. The favoured scenario, the latter, was presented to the LHC machine committee on 26 July.