More than a century after its discovery, the proton remains a source of intrigue, its charge radius and spin posing puzzles that are the focus of intense study. But what of its mortal sibling, the neutron? In recent years, discrepancies between measurements of the neutron lifetime using different methods have emerged as a puzzle with potential implications for cosmology and particle physics. The neutron lifetime determines the ratio of protons to neutrons at the beginning of big-bang nucleosynthesis and thus affects the yields of light elements, and it is also used to determine the CKM matrix element Vud in the Standard Model.
The neutron-lifetime puzzle stems from measurements using two techniques. The “bottle” method counts the number of surviving ultra-cold neutrons contained in a trap after a certain period, while the “beam” method uses the decay probability of the neutron obtained from the ratio of the decay rate to an incident neutron flux. Back in the 1990s, the methods were too imprecise to worry about differences between the results. Today, however, the average neutron lifetimes measured using the bottle and beam methods, 879.4 ± 0.4 s and 888.0 ± 2.0 s, respectively, stand 8.6 s (about 4σ) apart.
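The quoted significance follows from combining the two uncertainties in quadrature, using only the numbers above:

\[
\frac{888.0 - 879.4}{\sqrt{0.4^2 + 2.0^2}} \;=\; \frac{8.6}{2.04} \;\approx\; 4.2\sigma.
\]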
We think it will take two years to obtain a competitive result from our experiment
Kenji Mishima
In an attempt to shed light on the issue, a team at Japan’s KEK laboratory in collaboration with Japanese universities has developed a new experimental setup. Similar to the beam method, it compares the decay rate to the reaction rate of neutrons in a pulsed beam from the Japan Proton Accelerator Research Complex (J-PARC). The decay rate and the reaction rate are determined by simultaneously detecting electrons from neutron decay and protons from the reaction n + 3He → p + 3H in a 1 m-long time-projection chamber containing dilute 3He, removing some of the systematic uncertainties that affect previous beam measurements. The experiment is still in its early stages, and while the first results have been released – τn = 898 ± 10 (stat) +15/–18 (sys) s – the uncertainty is currently too large to draw conclusions.
“In the current situation, it is important to verify the puzzle by experiments in which different systematic errors dominate,” says Kenji Mishima of KEK, adding that further improvements in the statistical and systematic uncertainties are underway. “We think it will take two years to obtain a competitive result from our experiment.”
Several new-physics scenarios have been proposed as solutions to the neutron-lifetime puzzle. These include exotic decay modes, with a branching ratio of about 1%, involving undetectable particles such as “mirror neutrons” or dark-sector particles.
One day, around the time I started properly reading, somebody gave me a book about the sky, and I found it fascinating to think about what’s beyond the clouds and beyond where the planes and the birds fly. I didn’t know that you could actually make a living doing this kind of thing. At that age, you don’t know what a cosmologist is, unless you happen to meet one and ask what they do. You are just fascinated by questions like “how does it work?” and “how do you know?”.
Was there a point at which you decided to focus on theory?
Not really, and I still think I’m somewhat in between, in the sense that I like to interpret data and am plugged in to observational collaborations. I try to make connections to what the data mean in light of theory. You could say that I am a theoretical experimentalist. I made a point of actually going to serve at a telescope a couple of times, but you wouldn’t want to trust me to handle all of the nitty-gritty details, or to move the instrument around.
What are your research interests?
I have several different research projects, spanning large-scale structure, dark energy, inflation and the cosmic microwave background. But there is a common philosophy: I like to ask how much we can learn about the universe in a way that is as robust as possible, where robust means as close as possible to the truth, even if we have to accept large error bars. In cosmology, everything we interpret is always in light of a theory, and theories are always at some level “spherical cows” – they are approximations. So, imagine we are missing something: how do I know I am missing it? It sounds vague, but I think the field of cosmology is ready to ask these questions because we are swimming in data, drowning in data, or soon will be, and the statistical error bars are shrinking.
This explains your current interest in the Hubble constant. How do you define the Hubble tension?
Yes, indeed. When I was a PhD student, knowing the Hubble constant at the 40–50% level was great. Now, we are declaring a crisis in cosmology because there is a discrepancy at the very-few-percent level. The Hubble tension is certainly one of the most intriguing problems in cosmology today. Local measurements of the current expansion rate of the universe, for example based on supernovae as standard candles, which do not rely heavily on assumptions about cosmological models, give values that cluster around 73 km s–1 Mpc–1. Then there is another, indirect route to measuring what we believe is the same quantity but only within a model, the lambda-cold-dark-matter (ΛCDM) model, which is looking at the baby universe via the cosmic microwave background (CMB). When we look at the CMB, we don’t measure recession velocities, but we interpret a parameter within the model as the expansion rate of the universe. The ΛCDM model is extremely successful, but the value of the Hubble constant using this method comes out at around 67 km s–1 Mpc–1, and the discrepancy with local measurements is now 4σ or more.
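For scale, the significance can be reproduced with representative uncertainties for the two routes (roughly ±1.3 km s–1 Mpc–1 for the local distance-ladder value and ±0.5 km s–1 Mpc–1 for the CMB-inferred one; these particular error bars are typical published values, not figures quoted in this interview):

\[
\frac{73.2 - 67.4}{\sqrt{1.3^2 + 0.5^2}} \;\approx\; \frac{5.8}{1.4} \;\approx\; 4.2\sigma.
\]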
What are the implications if this tension cannot be explained by systematic errors or some other misunderstanding of the data?
The Hubble constant is the only cosmological parameter in the ΛCDM universe that can be measured both directly locally and from classical cosmological observations such as the CMB, baryon acoustic oscillations, supernovae and big-bang nucleosynthesis. It’s also easy to understand what it is, and the error bars are becoming small enough that it is really becoming make-or-break for the ΛCDM model. The Hubble tension made everybody wake up. But before we throw the model out of the window, we need something more.
How much faith do you put in the ΛCDM model compared to, say, the Standard Model of particle physics?
It is a model that has only six parameters, most of them constrained at the percent level, and it explains most of the observations that we have of the universe. In the case of Λ, which quantifies what we call dark energy, we have many orders of magnitude between theory and experiment to understand, and for dark matter we are yet to find a candidate particle. Otherwise, it does connect to fundamental physics and has been extremely successful. For 20 years we have been riding a wave of confirmation of the ΛCDM model, so we need to ask ourselves: if we are going to throw it out, what do we substitute it with? The first thing is to take small steps away from the model, say by adding one parameter. For a while, you could say that maybe there is something like an effective neutrino species that might fix it, but a solution like this doesn’t quite fit the CMB data any more. I think the community may be split 50/50 between being almost ready to throw the model out and continuing to work with it, because we have nothing better to use.
It is really becoming make-or-break for the ΛCDM model
Could it be that general relativity (GR) needs to be modified?
Perhaps, but where do we modify it? People have tried to tweak GR at early times, but it messes around with the observations and creates a bigger problem than we already have. So, let’s say we modify it at intermediate times – we still need it to describe the shape of the expansion history of the universe, which is close to ΛCDM. Or we could modify it locally. We’ve tested GR at the solar-system scale, and the accuracy of GPS is a vivid illustration of its effectiveness at a planetary scale. So, we’d need to modify it very close to where we are, and I don’t know if there are modifications on the market that pass all of the observational tests. It could also be that the cosmological constant changes value as the universe evolves, in which case the form of the expansion history would not be that of ΛCDM. There is some wiggle room here, but changing Λ within the error bars is not enough to fix the mismatch. Basically, there is such good agreement between the ΛCDM model and the observations that you can only tinker so much. We’ve tried to put “epicycles” everywhere we could, and so far we haven’t found anything that actually fixes it.
What about possible sources of experimental error?
Systematics are always unknowns that may be there, but the level of sophistication of the analyses suggests that if there were something major it would have come up. People do a lot of internal consistency checks; it is therefore becoming increasingly unlikely that the discrepancy is due only to dumb systematics. The big change over the past two years or so is that you typically now have different data sets that give you the same answer. It doesn’t mean that both can’t be wrong, but it becomes increasingly unlikely. For a while people were saying maybe there is a problem with the CMB data, but now we have taken those data out of the equation completely and there are different lines of evidence that give a local value hovering around 73 km s–1 Mpc–1, although it’s true that the truly independent ones are in the range 70–73 km s–1 Mpc–1. A lot of the data for local measurements have been made public, and although it’s not a very glamorous job to take someone else’s data and re-do the analysis, it’s very important.
Is there a way to categorise the very large number of models vying to explain the Hubble tension?
Until very recently, there was a categorisation into early-time versus late-time models. But if that categorisation is really the right one, then the tension should show up in other observables, specifically the matter density and the age of the universe, because it’s a very constrained system. Perhaps there is some global solution, so a little change here and a little in the middle, and a little there … and everything would come together. But that would be rather unsatisfactory because you can’t point your finger at what the problem was. Or maybe it’s something very, very local – then it is not a question of cosmology, but of whether the value of the Hubble constant we measure here is a global value. I don’t know how to choose between these possibilities, but the way the observations are going makes me wonder if I should start thinking in that direction. I am trying to be as model-agnostic as possible. Firstly, there are many other people who are thinking in terms of models and they are doing a wonderful job. Secondly, I don’t want to be biased. Instead I am trying to see if I can think one step removed from a particular model or parameterisation, which is very difficult.
What are the prospects for more precise measurements?
For the CMB, we have the CMB-S4 proposal and the Simons Array. These experiments won’t make a huge difference to the precision of the primary temperature-fluctuation measurements, but they will be useful for disentangling proposed solutions because they will focus on the polarisation of the CMB photons. As for the local measurements, the Dark Energy Spectroscopic Instrument, which started observations in May, will measure baryon acoustic oscillations at the level of galaxies to further nail down the expansion history of the low-redshift universe. However, it will not help at the level of local measurements, which are being pursued instead by the SH0ES collaboration. There is also another programme in Chicago focusing on the so-called tip of the red-giant-branch technique, with more results to come out. Observations of multiple images from strong gravitational lensing are another promising avenue that is being actively pursued, and, if we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle.
If we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle
How do we measure the Hubble constant from gravitational waves?
It’s a beautiful measurement, as you can get a distance measurement without having to build a cosmic distance ladder, which is the case with the other local measurements, which build distances via Cepheids, supernovae, etc. The recession velocity of the GW source comes from the optical counterpart and its redshift. The detection of the GW170817 event enabled researchers to estimate the Hubble constant to be 70 km s–1 Mpc–1, for example, but the uncertainties using this novel method are still very large, in the region of 10%. A particular source of uncertainty comes from the orientation of the gravitational-wave source with respect to Earth, but this will come down as the number of events increases. So this route provides a completely different window on the Hubble tension. Gravitational waves have been dubbed, rather poetically, “standard sirens”. When these determinations of the Hubble constant become competitive with existing measurements really depends on how many events are out there. Upgrades to LIGO and Virgo, plus next-generation gravitational-wave observatories, will help in this regard, but what if the measurements end up clustering between or beyond the late- and early-time measurements? Then we really have to scratch our heads!
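The principle fits in one line: for a nearby source, the Hubble law gives H0 ≈ vH/dL, with the luminosity distance dL read directly off the gravitational waveform and the Hubble-flow velocity vH from the optical counterpart’s redshift. With round numbers close to the published GW170817 analysis (an illustration, not the actual fit):

\[
H_0 \;\approx\; \frac{v_H}{d_L} \;\approx\; \frac{3.0\times10^{3}~\mathrm{km\,s^{-1}}}{44~\mathrm{Mpc}} \;\approx\; 68~\mathrm{km\,s^{-1}\,Mpc^{-1}},
\]

consistent with the quoted 70 km s–1 Mpc–1 within the roughly 10% uncertainty.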
How can results from particle physics help?
Principally, if we learn something about dark matter it could force us to reconsider our entire way of fitting the observations, perhaps in a way that we haven’t thought of because dark matter may be hot rather than cold, or something else that interacts in completely different ways. Neutrinos are another possibility. There are models where neutrinos don’t behave as the Standard Model predicts yet still fit the CMB observations. Before the Hubble tension came along, the hope was to say that we have this wonderful model of cosmology that fits really well and implies that we live in a maximally boring universe. Then we could have used that to eventually make the connection to particle physics, say, by constraining neutrino masses or the temperature of dark matter. But if we don’t live in a maximally boring universe, we have to be careful about playing this game because the universe could be much, much more interesting than we assumed.
The production of different types of hadrons provides insights into one of the most fundamental transitions in nature – the “hadronisation” of highly energetic partons into hadrons with confined colour charge. To understand how this transition takes place we have to rely on measurements and measurement-driven modelling. This is because the strong-interaction processes that govern hadronisation are characterised by a scale given by the typical size of hadrons – about 1 fm – and cannot be calculated with perturbative techniques. The ALICE collaboration has recently performed a novel study of hadronisation by comparing the production of strange neutral baryons and mesons inside and outside of charged-particle jets.
One of the ways to contrast baryon and meson production is to analyse the ratio of their momentum distributions. This has been done in most of the collision systems, but the comparison is particularly interesting in heavy-ion collisions, where a large baryon-to-meson enhancement is often referred to as the “baryon anomaly”. A characteristic maximum at intermediate transverse momenta (1–5 GeV) is found in all systems, but in Pb–Pb collisions the ratio is strongly increased, to the extent that it exceeds unity, implying the production of more baryons than mesons. The rise of the ratio has been associated with either hadron formation from the recombination of two or three quarks, or the migration of the heavier baryons to higher momenta by the strong all-particle “radial” flow associated with the production and expansion of a quark–gluon plasma.
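As a minimal sketch of how the observable is built (illustrative Python, not the ALICE analysis code; the momenta here are randomly generated placeholders, and a real measurement uses efficiency-corrected spectra):

```python
import numpy as np

# Placeholder pT samples in GeV for the two species (illustrative only)
rng = np.random.default_rng(seed=1)
pt_lambda = rng.exponential(scale=1.5, size=10_000)  # Lambda candidates
pt_k0s = rng.exponential(scale=1.2, size=50_000)     # K0S candidates

bins = np.linspace(0.0, 10.0, 21)  # 0.5 GeV-wide pT bins
n_lam, _ = np.histogram(pt_lambda, bins)
n_k, _ = np.histogram(pt_k0s, bins)

# Baryon-to-meson ratio in each pT bin, guarding against empty bins
ratio = n_lam / np.maximum(n_k, 1)
for centre, r in zip(0.5 * (bins[:-1] + bins[1:]), ratio):
    print(f"pT = {centre:5.2f} GeV: Lambda/K0S = {r:.3f}")
```

In real data this ratio develops the characteristic maximum at intermediate pT described above.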
A recent result adds an extra twist to the study of strange baryons and mesons
The ALICE collaboration has studied baryon-to-meson ratios extensively. A recent result adds an extra twist by measuring the ratios in two parts of each event separately – inside jets and in the event region perpendicular to the jet cone. This allows physicists to look “under the peak” to reveal more about its origin. The latest study focuses on the neutral, weakly decaying Λ baryon and K0S meson – particles often known collectively as V0 because their decay daughters form a “V” within the detector. The ALICE detector can reconstruct these decaying particles reliably even at high momenta via invariant-mass analysis using the charged-particle tracks seen in the detectors.
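A compressed sketch of the invariant-mass idea (illustrative Python with made-up track momenta; ALICE’s actual reconstruction includes vertexing, track-quality and particle-identification selections):

```python
import numpy as np

# Standard PDG daughter masses in GeV (not quoted in the article)
M_PROTON, M_PION = 0.93827, 0.13957

def invariant_mass(p1, m1, p2, m2):
    """Invariant mass of a two-track V0 candidate from the daughters'
    3-momenta (GeV) and the mass hypotheses assigned to the tracks."""
    e1 = np.sqrt(p1 @ p1 + m1**2)
    e2 = np.sqrt(p2 @ p2 + m2**2)
    p = p1 + p2
    return np.sqrt((e1 + e2)**2 - p @ p)

# Illustrative daughter momenta for one candidate decay vertex
p_pos = np.array([1.20, 0.10, 3.00])   # positive track
p_neg = np.array([0.35, -0.05, 0.90])  # negative track

# The same track pair is tested under both hypotheses; a candidate is
# kept if the mass lands near 1.116 GeV (Lambda) or 0.498 GeV (K0S)
m_lam = invariant_mass(p_pos, M_PROTON, p_neg, M_PION)  # Lambda -> p pi-
m_k0s = invariant_mass(p_pos, M_PION, p_neg, M_PION)    # K0S -> pi+ pi-
print(f"Lambda hypothesis: {m_lam:.3f} GeV, K0S hypothesis: {m_k0s:.3f} GeV")
```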
The particles associated with the jets show the typical ratio known from the high-momentum tail of the inclusive baryon-to-meson distribution – essentially no enhancement – and similar values were found in both pp and p–Pb collisions, consistent with simulations of hard pp collisions using PYTHIA 8 (see figure 1). By contrast, the particles found away from jets do indeed show a baryon-to-meson enhancement that qualitatively resembles the observations in Pb–Pb collisions. The new study clarifies that the strong rise of the ratio is associated with the soft part of the events (regions where no jet with pT above 10 GeV is produced) and provides the first quantitative guidance for modelling the baryon-to-meson enhancement with an additional important constraint – the absence of a jet. Moreover, finding that the “within-jet” ratio is similar in pp and p–Pb collisions, while the “out-of-jet” ratio shows larger values in p–Pb than in pp collisions, gives even more to ponder about the possible origin of the effect in relation to an expanding strongly interacting system. Future measurements involving multi-strange baryons may shed further light on this question.
Recent measurements bolstering the longstanding tension between the experimental and theoretical values of the muon’s anomalous magnetic moment generated a buzz in the community. Though with a much lower significance, a similar puzzle may also be emerging for the anomalous magnetic moment of the electron, ae.
Depending on which of two recent independent measurements of the fine-structure constant is used in the theoretical calculation of ae – one obtained at Berkeley in 2018 or the other at Kastler–Brossel Laboratory in Paris in 2020 – the Standard Model prediction stands 2.4σ higher or 1.6σ lower than the best experimental value, respectively. Motivated by this inconsistency, the NA64 collaboration at CERN set out to investigate whether new physics – in the form of a lightweight “X boson” – might be influencing the electron’s behaviour.
The generic X boson could be a sub-GeV scalar, pseudoscalar, vector or axial-vector particle. Given experimental constraints on its decay modes involving Standard Model particles, it is presumed to decay predominantly invisibly, for example into dark-sector particles. NA64 searches for X bosons by directing 100 GeV electrons generated by the SPS onto a target, and looking for missing energy in the detector via electron–nucleus scattering e–Z → e–ZX.
The result sets new bounds on the e–X interaction strength
Analysing data collected in 2016, 2017 and 2018, corresponding to about 3 × 1011 electrons-on-target, the NA64 team found no evidence for such events. The result sets new bounds on the e–X interaction strength and, as a result, on the contributions of X bosons to ae: X bosons with a mass below 1 GeV could contribute at most between one part in 1015 and one part in 1013, depending on the X-boson type and mass. These contributions are too small to explain the current anomaly in the electron’s anomalous magnetic moment, says NA64 spokesperson Sergei Gninenko. “But the fact that NA64 reached an experimental sensitivity that is better than the current accuracy of the direct measurements of ae, and of recent high-precision measurements of the fine-structure constant, is amazing.”
In a separate analysis, the NA64 team carried out a model-independent search for a particular pseudoscalar X boson with a mass of around 17 MeV. Coupling to electrons and decaying into e+e– pairs, the so-called “X17” has been proposed to explain an excess of e+e– pairs created during nuclear transitions of excited 8Be and 4He nuclei reported by the “ATOMKI” experiment in Hungary since 2015.
The e–X17 coupling strength is constrained by data: too large and the X17 would contribute too much to ae; too small and the X17 would decay too rarely and too far away from the ATOMKI target. In 2019, the NA64 team excluded a large range of couplings for a vector-like X17, though not the largest allowed values. More recently, they searched for a pseudoscalar X17, which has a lifetime about half that of the vector version for the same coupling strength. Re-analysing a sample of approximately 8.4 × 1010 electrons-on-target collected in 2017 and 2018 with 100 and 150 GeV electrons, respectively, the collaboration has now excluded couplings in the range 2.1–3.2 × 10–4 for a 17 MeV X boson.
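A rough kinematic sketch of why too small a coupling fails (illustrative lifetime, not a measured value): the laboratory decay length is

\[
L \;=\; \beta\gamma c\tau \;\approx\; \frac{E}{m}\,c\tau \;\approx\; \frac{100~\mathrm{GeV}}{17~\mathrm{MeV}}\,c\tau \;\approx\; 5.9\times10^{3}\,c\tau,
\]

so a proper decay length of, say, cτ = 0.3 mm stretches to almost 2 m at NA64 energies, whereas at ATOMKI the X17 emerges nearly at rest and must decay within centimetres of the target. Since the lifetime scales as the inverse square of the coupling, a smaller coupling rapidly pushes the decay out of acceptance.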
“We plan to further improve the sensitivity to vector and pseudoscalar X17’s after long shutdown 2, and also try to reconstruct the mass of X17, to be sure that if we see the signal it is the ATOMKI boson,” says Gninenko.
The ability of certain neutral mesons to oscillate between their matter and antimatter states at distinctly unworldly rates is a spectacular feature of quantum mechanics. The phenomenon arises when the states are orthogonal combinations of narrowly split mass eigenstates that gain a relative phase as the wavefunction evolves, allowing quarks and antiquarks to be interchanged at a rate that depends on the mass difference. Forbidden at tree level, proceeding instead via loops, such flavour-changing neutral-current processes offer a powerful test of the Standard Model and a sensitive probe of physics beyond it.
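In the simplest two-state picture (neglecting CP violation and the width difference between the eigenstates, a simplification for illustration), a meson produced as M0 is found as its antiparticle at decay time t with probability

\[
P\big(M^0 \to \bar{M}^0\big)(t) \;\approx\; e^{-\Gamma t}\,\sin^2\!\left(\frac{\Delta m\, t}{2\hbar}\right),
\]

so the splitting Δm between the mass eigenstates directly sets the oscillation frequency.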
Only four known meson systems can oscillate
Neutral-meson oscillations were predicted by Gell-Mann and Pais in the 1950s, and only four known meson systems (those containing quarks from different generations) can oscillate. K0–K0 oscillations were observed in 1955, B0–B0 oscillations in 1986 at the ARGUS experiment at DESY, and Bs0–Bs0 oscillations in 2006 by the CDF experiment at Fermilab. Following the first evidence of charmed-meson oscillations (D0–D0) at Belle and BaBar in 2007, LHCb made the first single-experiment observation confirming the process in 2012. Since the oscillation is relatively slow (its period is more than 100 times the average lifetime of the D0 meson), the full oscillation cannot be observed. Instead, the collaboration looked for small changes in the flavour mixture of the D0 mesons as a function of the time at which they decay via the Kπ final state.
On 4 June, during the 10th International Workshop on CHARM Physics, the LHCb collaboration reported the first observation of the mass difference between the D0–D0 states, precisely determining the frequency of the oscillations. The value represents one of the smallest ever mass differences between two particles: 6.4 × 10–6 eV, corresponding to an oscillation rate of around 1.5 × 109 per second. Until now, the measured value of the mass difference between the underlying D0 and D0 eigenstates was marginally compatible with zero. By establishing a non-zero value with high significance, the LHCb team was able to show that the data are consistent with the Standard Model, while significantly improving limits on mixing-induced CP violation in the charm sector.
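The quoted rate follows directly from the mass difference (a one-line check using ħ ≈ 6.58 × 10–16 eV s):

\[
f \;=\; \frac{\Delta m}{2\pi\hbar} \;=\; \frac{6.4\times10^{-6}~\mathrm{eV}}{2\pi\times 6.58\times10^{-16}~\mathrm{eV\,s}} \;\approx\; 1.5\times10^{9}~\mathrm{s^{-1}}.
\]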
“In the future we hope to discover time-dependent CP violation in the charm system, and the precision and luminosity expected from LHCb upgrades I and II may make this possible,” explains Nathan Jurik, a CERN fellow who worked on the analysis.
The latest measurements of neutral charm-meson oscillations follow hot on the heels of an updated LHCb measurement of the Bs0–Bs0 oscillation frequency announced in April, based on the mass difference between the heavy and light strange-beauty mesons. The very high precision of the Bs0–Bs0 measurement provides one of the strongest constraints on physics beyond the Standard Model. Using a large sample of Bs0 → Ds– π+ decays, the new measurement improves upon the previous precision of the oscillation frequency by a factor of two: Δms = 17.7683 ± 0.0051 (stat) ± 0.0032 (sys) ps–1, which, when combined with previous LHCb measurements, gives a value of 17.7656 ± 0.0057 ps–1. This corresponds to an oscillation rate of around 3 × 1012 per second, the highest of all four meson systems.
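The same conversion works in reverse here, taking Δms as an angular frequency in the usual convention:

\[
\frac{\Delta m_s}{2\pi} \;\approx\; \frac{17.77~\mathrm{ps^{-1}}}{2\pi} \;\approx\; 2.8\times10^{12}~\mathrm{s^{-1}},
\qquad
\hbar\,\Delta m_s \;\approx\; 1.2\times10^{-2}~\mathrm{eV},
\]

a mass splitting roughly 2000 times larger than the one just measured in the charm system.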
Three expert panellists will introduce the motivation for and status of the proposed Future Circular Collider at CERN, followed by a discussion and live questions from the audience, moderated by CERN Courier editor Matthew Chalmers.
» Accelerator physicist and FCC study leader Michael Benedikt (CERN/Vienna University of Technology) will report on the status and scope of the FCC Innovation Study, a European Union-funded project to assess the technical and financial feasibility of a 100 km electron-positron and proton-proton collider in the Geneva region.
» Experimental particle physicist Beate Heinemann (DESY/Albert-Ludwigs-Universität Freiburg) will explain how the Higgs boson opens a new window on fundamental physics, and why a post-LHC collider is essential to explore this and other hot topics such as flavour physics.
» Theoretical physicist Matthew McCullough (CERN) will explore the potential of a future circular collider to address the dark sector of the universe, and explain the importance of striving for the highest energies possible.
Michael Benedikt (left) completed his PhD on medical accelerators as a member of the CERN Proton-Ion Medical Machine Study group. He joined CERN’s accelerator operation group in 1997, where he headed different sections before serving as deputy group leader from 2006 to 2013. From 2008 to 2013 he was project leader for the accelerator complex of the MedAustron hadron-therapy centre in Austria, and since 2013 he has led the Future Circular Collider Study at CERN.
Beate Heinemann (middle) completed her PhD at the University of Hamburg in 1999 in experimental particle physics at the HERA collider in Hamburg. She became a lecturer at the University of Liverpool in 2003, and in 2006 a professor at UC Berkeley and a scientist at Lawrence Berkeley National Laboratory. She was deputy spokesperson of the ATLAS collaboration from 2013 to 2017, and since 2016 she has been a leading scientist at DESY and W3 professor at Albert-Ludwigs-Universität Freiburg.
Matthew McCullough (right) is a senior staff member in the CERN Theory Department. He completed his undergraduate and PhD degrees at the University of Oxford, followed by postdocs at MIT and CERN. His research interests cover physics beyond the Standard Model, from the origins of the Higgs boson to the nature of dark matter.
Think “neutrino detector” and images of giant installations come to mind, necessary to compensate for the vanishingly small interaction probability of neutrinos with matter. The extreme luminosity of proton-proton collisions at the LHC, however, produces a large neutrino flux in the forward direction, with energies leading to cross-sections high enough for neutrinos to be detected using a much more compact apparatus.
In March, the CERN research board approved the Scattering and Neutrino Detector (SND@LHC) for installation in an unused tunnel that links the LHC to the SPS, 480 m downstream from the ATLAS experiment. Designed to detect neutrinos produced in a hitherto unexplored pseudorapidity range (7.2 < η < 8.6), the experiment will complement and extend the physics reach of the other LHC experiments – in particular FASERν, which was approved last year. Construction of FASERν, which is located in an unused service tunnel on the opposite side of ATLAS along the LHC beamline (covering |η| > 9.1), was completed in March, while installation of SND@LHC is about to begin.
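For orientation, pseudorapidity maps to polar angle via η = −ln tan(θ/2), so θ ≈ 2e–η at such forward angles:

\[
\eta = 7.2 \;\Rightarrow\; \theta \approx 1.5~\mathrm{mrad}, \qquad \eta = 8.6 \;\Rightarrow\; \theta \approx 0.37~\mathrm{mrad},
\]

meaning SND@LHC views particles emitted within a couple of milliradians of the beam axis.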
Both experiments will be able to detect neutrinos of all types, with SND@LHC positioned off the beamline to detect neutrinos produced at slightly larger angles. Expected to commence data-taking during LHC Run 3 in spring 2022, these latest additions to the LHC-experiment family are poised to make the first observations of collider neutrinos while opening new searches for feebly interacting particles and other new physics.
Neutrinos galore
SND@LHC will comprise 800 kg of tungsten plates interleaved with emulsion films and electronic tracker planes based on scintillating fibres. The emulsion acts as a vertex detector with micron resolution while the tracker provides a time stamp, the two subdetectors together acting as a sampling electromagnetic calorimeter. The target volume will be immediately followed by planes of scintillating bars interleaved with iron blocks serving as a hadron calorimeter, followed downstream by a muon-identification system.
During its first phase of operation, SND@LHC is expected to collect an integrated luminosity of 150 fb–1, corresponding to more than 1000 high-energy neutrino interactions. Since electron neutrinos and antineutrinos are predominantly produced by charmed-hadron decays in the pseudorapidity range explored, the experiment will enable the gluon parton-density function to be constrained in an unexplored region of very small x. With projected statistical and systematic uncertainties of 30% and 22% in the ratio between νe and ντ interactions, and of about 10% for both uncertainties in the ratio between νe and νμ at high energies, the Run-3 data will also provide unique tests of lepton flavour universality with neutrinos, and have sensitivity in the search for feebly interacting particles via scattering signatures in the detector target.
“The angular range that SND@LHC will cover is currently unexplored,” says SND@LHC spokesperson Giovanni De Lellis. “And because a large fraction of the neutrinos produced in this range come from the decays of particles made of heavy quarks, these neutrinos can be used to study heavy-quark particle production in an angular range that the other LHC experiments can’t access. These measurements are also relevant for the prediction of very high-energy neutrinos produced in cosmic-ray interactions, so the experiment is also acting as a bridge between accelerator and astroparticle physics.”
A FASER first
FASERν is an addition to the Forward Search Experiment (FASER), which was approved in March 2019 to search for light, weakly interacting long-lived particles at solid angles beyond the reach of conventional collider detectors. Comprising a small and inexpensive stack of emulsion films and tungsten plates measuring 0.25 × 0.25 × 1.35 m and weighing 1.2 tonnes, FASERν is already undergoing tests. Smaller than SND@LHC, the detector is positioned on the beam-collision axis to maximise the neutrino flux, and should detect a total of around 20,000 muon neutrinos, 1300 electron neutrinos and 20 tau neutrinos in an unexplored energy regime at the TeV scale. This will allow measurements of the interaction cross-sections of all neutrino flavours, provide constraints on non-standard neutrino interactions, and improve measurements of proton parton-density functions in certain phase-space regions.
The final detector should do much better — it will be a hundred times bigger
Jamie Boyd
In May, based on an analysis of pilot emulsion data taken in 2018 using a target mass of just 10 kg, the FASERν team reported the detection of the first neutrino-interaction candidates, based on a measured 2.7σ excess of a neutrino-like signal above muon-induced backgrounds. The result paves the way for high-energy neutrino measurements at the LHC and future colliders, explains FASER co-spokesperson Jamie Boyd: “The final detector should do much better — it will be a hundred times bigger, be exposed to much more luminosity, have muon identification capability, and be able to link observed neutrino interactions in the emulsion to the FASER spectrometer. It is quite impressive that such a small and simple detector can detect neutrinos given that usual neutrino detectors have masses measured in kilotons.”
The Higgs boson was hypothesised to explain electroweak symmetry breaking nearly 50 years before its discovery. Its eventual discovery at the LHC took half a century of innovative accelerator and detector development, and extensive data analysis. Today, several outstanding questions in particle physics could be answered by higgsinos – theorised supersymmetric partners of an extended Higgs field. The higgsinos are a triplet of electroweak states, two neutral and one charged. If the lightest neutral state is stable, it can provide an explanation of astronomically observed dark matter. Furthermore, an intimate connection between higgsinos and the Higgs boson could explain why the mass of the Higgs boson is so much lighter than suggested by theoretical arguments. While higgsinos may not be much heavier than the Higgs boson, they would be produced more rarely and are significantly more challenging to find, especially if they are the only supersymmetric particles near the electroweak scale.
Higgsinos mix with other supersymmetric electroweak states, the wino and the bino, to form the physical particles that would be observed
The ATLAS collaboration recently released a set of results based on the full LHC Run 2 dataset that explore some of the most challenging experimental scenarios involving higgsinos. Each result tests different assumptions. Owing to quantum degeneracy, the higgsinos mix with other supersymmetric electroweak states, the wino and the bino, to form the physical particles that would be observed by the experiment. The mass difference between the lightest neutral and charged states, ∆m, depends on this mixing. Depending on the model assumptions, the phenomenology varies dramatically, requiring different analysis techniques and stimulating the development of new tools.
If ∆m is only a few hundred MeV, the small phase space suppresses the decay from the heavier states to the lightest one. The long-lived charged state flies partway through the inner tracker before decaying, and its short track can be measured. A search targeting this anomalous “disappearing track” signature was performed by exploiting novel requirements on the quality of the signal candidate and the ability of the ATLAS inner detectors to reconstruct short tracks. Finding that the number of short tracks is as expected from background processes alone, this search rules out higgsinos with lifetimes of a fraction of a nanosecond for masses up to 210 GeV.
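For a sense of the distances involved (an illustrative lifetime, not the analysis limit): a charged state with τ ≈ 0.2 ns and βγ ≈ 1 travels

\[
L \;=\; \beta\gamma c\tau \;\approx\; (3\times10^{8}~\mathrm{m\,s^{-1}})\times(2\times10^{-10}~\mathrm{s}) \;=\; 6~\mathrm{cm}
\]

before decaying, which is why the signature is a short track that ends inside the inner detector.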
If higgsinos mix somewhat with other supersymmetric electroweak states, they will decay promptly to the lightest stable higgsino and low-energy Standard Model particles. These soft decay products are extremely challenging to detect at the LHC, and ATLAS has performed several searches for events with two or three leptons to maximise the sensitivity to different values of ∆m. Each search features innovative optimisation and powerful discriminants to reject background. For the first time, ATLAS has performed a statistical combination of these searches, constraining higgsino masses to be larger than 150 GeV for ∆m above 2 GeV.
A final result targets higgsinos in models in which the lightest supersymmetric particle is not stable. In these scenarios, higgsinos may decay to triplets of quarks. A search designed around an adversarial neural network and employing a completely data-driven background estimation technique was developed to distinguish these rare decays from the overwhelming multi-jet background. This search is the first at the LHC to obtain sensitivity to this higgsino model, and rules out scenarios of the pair production of higgsinos with masses between 200 and 320 GeV (figure 1).
Together, these searches set significant constraints on higgsino masses, and for certain parameters provide the first extension of sensitivity since LEP. With the development of new techniques and more data to come, ATLAS will continue to seek higgsinos at higher masses, and to test other theoretical and experimental assumptions.
After many years of research and development, the ALPHA collaboration has succeeded in laser-cooling antihydrogen – opening the door to considerably more precise measurements of antihydrogen’s internal structure and gravitational interactions. The seminal result, reported on 31 March in Nature, could also lead to the creation of antimatter molecules and the development of antiatom interferometry, explains ALPHA spokesperson Jeffrey Hangst. “This is by far the most difficult experiment we have ever done,” he says. “We’re over the moon. About a decade ago, laser cooling of antimatter was in the realm of science fiction.”
The ALPHA collaboration synthesises antihydrogen from cryogenic plasmas of antiprotons and positrons at CERN’s Antiproton Decelerator (AD), storing the antiatoms in a magnetic trap. Lasers with particular frequencies are then used to measure the antiatoms’ spectral response. Finding any slight difference between spectral transitions in antimatter and matter would challenge charge–parity–time symmetry, and perhaps cast light on the cosmological imbalance of matter and antimatter.
Historically, researchers have struggled to laser-cool normal hydrogen, so this has been a bit of a crazy dream for us for many years.
Makoto Fujiwara
Following the first antihydrogen spectroscopy by ALPHA in 2012, in 2017 the collaboration measured the spectral structure of the antihydrogen 1S–2S transition with an outstanding precision of 2 × 10–12 – marking a milestone in the AD’s scientific programme. The following year, the team determined antihydrogen’s 1S–2P “Lyman–alpha” transition with a precision of a few parts in a hundred million, showing that it agrees with the prediction for the equivalent transition in hydrogen to a precision of 5 × 10–8. However, to push the precision of spectroscopic measurements further, and to allow future measurements of the behaviour of antihydrogen in Earth’s gravitational field, the kinetic energy of the antiatoms must be lowered.
In their new study, the ALPHA researchers were able to laser-cool a sample of magnetically trapped antihydrogen atoms by repeatedly driving the antiatoms from the 1S to the 2P state using a pulsed laser with a frequency slightly below that of the transition between them. After illuminating the trapped antiatoms for several hours, the researchers observed a more than 10-fold decrease in their median kinetic energy, with many of the antiatoms attaining energies below 1 μeV. Subsequent spectroscopic measurements of the 1S–2S transition revealed that the cooling resulted in a spectral line about four times narrower than that observed without laser cooling – a proof-of-principle of the laser-cooling technique, with further statistics needed to improve the precision of the previous 1S–2S measurement (see figure).
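The scale of the cooling can be estimated from the photon recoil (standard constants; the cycle count is a rough estimate, not a number from ALPHA): each absorbed 121.6 nm Lyman-alpha photon changes the antiatom’s velocity by

\[
\Delta v \;=\; \frac{h}{m_{\bar{H}}\lambda} \;=\; \frac{6.63\times10^{-34}~\mathrm{J\,s}}{(1.67\times10^{-27}~\mathrm{kg})(1.216\times10^{-7}~\mathrm{m})} \;\approx\; 3.3~\mathrm{m\,s^{-1}},
\]

while an antiatom with 1 μeV of kinetic energy moves at about 14 m s–1, so reaching the observed energies takes only of order tens to hundreds of absorption–emission cycles.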
“Historically, researchers have struggled to laser-cool normal hydrogen, so this has been a bit of a crazy dream for us for many years,” says Makoto Fujiwara, who proposed the use of a pulsed laser to cool trapped antihydrogen in ALPHA. “Now, we can dream of even crazier things with antimatter.”
Beauty baryons are a subject of great interest at the LHC, offering unique insights into the nature of the strong interaction and the mechanisms by which hadrons are formed. While the ground states Λb0, Σb±, Ξb–, Ξb0, Ωb– were observed at the Tevatron at Fermilab and the SPS at CERN, the LHC’s higher energy and orders-of-magnitude larger integrated luminosity have allowed the discovery of more than a dozen excited beauty baryon states among the 59 new hadrons observed at the LHC so far (see LHCb observes four new tetraquarks).
Many hadrons with one c or b quark are quite similar: interchanging the heavy-quark flavours does not significantly change the physics predicted by effective models assuming “heavy-quark symmetry”. The well-established charm baryons and their excitations therefore provide excellent input for theories modelling the less well understood spectrum of beauty baryons. A number of the lightest excited b baryons, such as the Λb(5912)0, Λb(5920)0, and several excited Ξb and Ωb– states, have been observed, and are consistent with their charm partners. By contrast, heavier excitations, such as the Λb(6072)0 and the Ξb(6227) isodoublet (particles that differ only by an up or down quark), cannot yet be readily associated with charmed partners.
New particles
The first particle observed by the CMS experiment, in 2012, was the beauty-strange baryon Ξb(5945)0 (CERN Courier June 2012 p6). It is consistent with being the beauty partner of the Ξc(2645)+ with spin-parity 3/2+, while the Ξb(5955)– and Ξb(5935)– states observed by LHCb are its isospin partner and the beauty partner of the Ξc′0, respectively. The charm sector also suggests the existence of prominent heavier isodoublets, called Ξb**: the lightest orbital Ξb excitations with orbital momentum between a light diquark (a pairing of an s quark with either a d or a u quark) and a heavy b quark. The isodoublet with spin-parity 1/2– decays into Ξb′ π± and the one with 3/2– into Ξb* π±.
The CMS collaboration has now observed such a baryon, Ξb(6100)–, via the decay sequence Ξb(6100)–→Ξb(5945)0π–→Ξb– π+ π–. The new state’s measured mass is 6100.3 ± 0.6 MeV, and the upper limit on its natural width is 1.9 MeV at 95% confidence level. The Ξb– ground state was reconstructed in two channels: J/ψ Ξ– and J/ψ Λ K–. The latter channel also includes partially reconstructed J/ψ Σ0 K– (where the photon from the Σ0→Λ γ decay is too soft to be reconstructed).
If the Ξb(6100)– baryon were only 13 MeV heavier, it would be above the Λb0 K– mass threshold
The observation of this baryon and the measurement of its properties are useful for distinguishing between different theoretical models predicting the excited beauty baryon states. It is curious to note that if the Ξb(6100)– baryon were only 13 MeV heavier, a tiny 0.2% change, it would be above the Λb0 K– mass threshold and could decay to this final state. The Ξb(6100)– might also shed light on the nature of previous discoveries: if it is the 3/2– member of the lightest orbital excitation isodoublet, then the Ξb(6227) isodoublet recently found by the LHCb collaboration could be the 3/2– orbital excitation of Ξb′ or Ξb* baryons.
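The arithmetic behind that remark, using standard PDG masses (not quoted in the article):

\[
m(\Lambda_b^0) + m(K^-) \;\approx\; 5619.6 + 493.7 \;=\; 6113.3~\mathrm{MeV},
\]

which sits 13.0 MeV above the measured 6100.3 ± 0.6 MeV, a gap of about 0.2%.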