The CP-violating angle γ of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix is a benchmark of the Standard Model, since it can be determined from tree-level beauty decays in an entirely data-driven way with negligible theoretical uncertainty. Comparisons between direct and indirect measurements of γ therefore provide a potent test for new physics. Before LHCb began taking data, γ was one of the least precisely constrained parameters of the CKM unitarity triangle, but that is no longer the case.
A new result from LHCb marks an important change in strategy, by including not only results from beauty decays sensitive to γ but also exploiting the sensitivity to CP violation and mixing in charm-meson (D0) decays. Mixing in the D0–D̄0 system proceeds via flavour-changing neutral currents, which may also be affected by contributions from new heavy particles. The process is described by two parameters: the mass difference, x, and the width difference, y, between the two neutral-charm mass eigenstates (see figure 1).
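In the usual convention (a standard definition, not spelled out in the text above), these dimensionless parameters are built from the masses and widths of the two neutral-charm mass eigenstates, labelled 1 and 2:

```latex
x = \frac{m_1 - m_2}{\Gamma}, \qquad
y = \frac{\Gamma_1 - \Gamma_2}{2\Gamma}, \qquad
\Gamma \equiv \frac{\Gamma_1 + \Gamma_2}{2}.
```

Both are well below 1% for charm, which is why D0 mixing is so slow compared with oscillations in the neutral kaon and B-meson systems.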
The latest combination takes the results of more than 20 LHCb beauty and charm measurements to determine γ = (65.4 +3.8 −4.2)°, which is the most precise measurement from a single experiment (see figure 2). Furthermore, various charm-mixing parameters were determined by combining, for the first time, both the beauty and charm datasets, yielding x = 0.400% and y = 0.630%. The latter is a factor of two more precise than the current world average, an improvement due entirely to the new methodology, which harnesses the additional sensitivity to the charm sector from beauty decays.
This demonstrates that LHCb has already achieved better precision than its original design goals. When the redesigned LHCb detector restarts operations in 2022, the target of sub-degree precision on γ, and the chance to observe CP violation in charm mixing, come ever closer.
The 11th Higgs Hunting workshop took place remotely between 20 and 22 September 2021, with more than 300 registered participants engaging in lively discussions about the most recent results in the Higgs sector. ATLAS and CMS presented results based on the full LHC Run-2 dataset (up to 140 fb⁻¹) recorded at 13 TeV. While all results remain compatible with Standard Model expectations, the measurements benefited from statistical uncertainties more than three times smaller with the 13 TeV data than in previous LHC results at 7 and 8 TeV. This also brought into sharp relief the role of systematic uncertainties, which in some cases are becoming dominant.
The status of theory improvements and phenomenological interpretations, such as those from effective field theory, was also presented. Highlights included the Higgs pair-production process, which is particularly challenging at the LHC due to its low rate. ATLAS and CMS showed greatly improved sensitivity in various final states, thanks to improvements in analysis techniques. Also shown were results on the scattering of weak vector bosons, a process that is strongly related to the Higgs sector, highlighting large improvements from both the larger datasets and the higher collision energy available in Run 2.
Several searches for phenomena beyond the Standard Model – in particular for additional Higgs bosons – were presented. No significant excesses have yet been found.
The historical talk “The LHC timeline: a personal recollection (1980–2012)” was given by Luciano Maiani, former CERN Director-General, and concluding talks were given by Laura Reina (Florida) and Paolo Meridiani (Rome). A further highlight was the theory talk from Nathaniel Craig, who discussed the progress being made in addressing six open questions. Does the Higgs boson have a size? Does it interact with itself? Does it mediate a Yukawa force? Does it fulfill the naturalness strategy? Does it preserve causality? And does it realise electroweak symmetry?
The next Higgs Hunting workshop will be held in Orsay and Paris from 12 to 14 September 2022.
Cold atoms offer exciting prospects for high-precision measurements based on emerging quantum technologies. Terrestrial cold-atom experiments are already widespread, exploring both fundamental phenomena such as quantum phase transitions and applications such as ultra-precise timekeeping. The final quantum frontier is to deploy such systems in space, where the lack of environmental disturbances enables high levels of precision.
This was the subject of a workshop supported by the CERN Quantum Technology Initiative, which attracted more than 300 participants online from 23 to 24 September. Following a 2019 workshop triggered by the European Space Agency (ESA)’s Voyage 2050 call for ideas for future experiments in space, the main goal of this workshop was to begin drafting a roadmap for cold atoms in space.
The workshop opened with a presentation by Mike Cruise (University of Birmingham) on ESA’s vision for cold atom R&D for space: considerable efforts will be required to achieve the technical readiness level needed for space missions, but they hold great promise for both fundamental science and practical applications. Several of the cold-atom teams that contributed white papers to the Voyage 2050 call also presented their proposals.
Atomic clocks
Next came a session on atomic clocks, covering their potential for refining the definitions of SI units such as the second, for distributing this new time standard worldwide, and for applications to geodesy. Next-generation space-based atomic-clock projects for these and other applications are ongoing in China, the US (Deep Space Atomic Clock) and Europe.
This was followed by a session on Earth observation, featuring the prospects for improved gravimetry using atom interferometry and talks on the programmes of ESA and the European Union. Quantum space gravimetry could contribute to studies of climate change, for example, by measuring the densities of water and ice very accurately and with improved geographical precision.
Cold-atom experiments in space offer great opportunities to probe the foundations of physics
For fundamental physics, prospects for space-borne cold-atom experiments include studies of wavefunction collapse and Bell correlations in quantum mechanics, probes of the equivalence principle by experiments like STE-QUEST, and searches for dark matter.
The proposed AEDGE atom interferometer will search for ultralight dark matter and gravitational waves in the deci-Hertz range, where LIGO/Virgo/KAGRA and the future LISA space observatory are relatively insensitive, and will probe models of dark energy. AEDGE gravitational-wave measurements could be sensitive to first-order phase transitions in the early universe, as occur in many extensions of the Standard Model, as well as to cosmic strings, which could be relics of symmetries broken at higher energies than those accessible to colliders.
These examples show that cold-atom experiments in space offer great opportunities to probe the foundations of physics as well as make frontier measurements in astrophysics and cosmology.
Several pathfinder experiments are underway. These include projects for terrestrial atom interferometers on scales from 10 m to 1 km, such as the MAGIS project at Fermilab and the AION project in the UK, which both use strontium, and the MIGA project in France and proposed European infrastructure ELGAR, which both use rubidium. Meanwhile, a future stage of AION could be situated in an access shaft at CERN – a possibility that is currently under study, and which could help pave the way towards AEDGE. Pioneering experiments using Bose-Einstein condensates on research rockets and the International Space Station were also presented.
A strong feature of the workshop was a series of breakout sessions to enable discussions among members of the various participating communities (atomic clocks, Earth observation and fundamental science), as well as a group considering general perspectives, which were summarised in a final session. Reports from the breakout sessions will be integrated into a draft roadmap for the development and deployment of cold atoms in space. This will be set out in a white paper to appear by the end of the year and presented to ESA and other European space and funding agencies.
Space readiness
Achieving space readiness for cold-atom experiments will require significant research and development. Nevertheless, the scale of participation in the workshop and the high level of engagement testify to the enthusiasm in the cold-atom community and prospective user communities for deploying cold atoms in space. The readiness of the different communities to collaborate in drafting a joint roadmap for the pursuit of common technological and scientific goals was striking.
The W boson was first directly observed in 1983 using the Super Proton Synchrotron proton–antiproton collider at CERN, resulting in a Nobel Prize the following year. Almost four decades later, the ATLAS collaboration has observed the simultaneous production of three W bosons for the first time.
The possible new interactions are represented by operator terms with anomalous triple and quartic gauge couplings
The study of multi-boson processes involving boson self-interactions provides unique insight into the nature of electroweak symmetry breaking and therefore enables rigorous tests of the Standard Model (SM). Conversely, deviations from SM predictions could hint at physics beyond the SM through, for example, interactions that exist at energies beyond the current reach of the LHC, without requiring the new particles to be produced directly. These effects could arise from interactions with virtual particles in loops or from new amplitudes generated by a tree-level exchange. In an effective field theory (EFT) approach, the possible new interactions are represented by operator terms with anomalous triple and quartic gauge couplings, both of which are present in WWW production.
Signal events
At leading order, the WWW signal is produced through the different mechanisms presented in the Feynman diagrams shown in figure 1. While there are many decay modes, ATLAS used four final-state channels in which the signal-to-background ratio is large enough to observe the signal. The first three result from the decay of two of the W bosons into charged lepton–neutrino pairs with same-sign charged leptons, and the decay of the third W into a pair of quarks observed as hadronic jets: these are the two-lepton (2l) channels, with flavour combinations ee, eμ and μμ. Additionally, WWW production is measured in the three-lepton (3l) channel, where each W decays into a charged lepton–neutrino pair; requiring no same-flavour opposite-sign charged-lepton pairs reduces the Z-boson background.
A multivariate analysis using a boosted decision tree (BDT) was used to discriminate the signal from the background, with the BDT trained using 12 discriminating input variables in the 2l channel and 11 input variables in the 3l channel. A binned maximum likelihood fit was performed on the BDT distributions with four free-floating parameters: the signal strength and three normalisation factors for the dominant WZ background. The BDT distributions were fitted in the four signal regions simultaneously with the trilepton invariant mass distribution in three WZ control regions (WZ plus 0, 1, ≥ 2 jets). The resulting BDT distribution for the 3l channel is shown in figure 2.
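The core of such a fit is maximising a product of Poisson probabilities over the bins of the BDT distribution, with the signal strength left free. The sketch below uses made-up yields and only one free parameter (the real ATLAS fit also floats three WZ normalisations and uses MINUIT-style minimisation), purely to illustrate the mechanics of a binned maximum-likelihood fit:

```python
import math

# Hypothetical per-bin yields for a BDT-output distribution
# (illustrative numbers only, not the ATLAS inputs).
sig = [2.0, 5.0, 9.0, 14.0]   # predicted signal yields per bin
bkg = [40.0, 20.0, 8.0, 3.0]  # predicted background yields per bin
data = [44, 28, 22, 25]       # observed counts per bin

def nll(mu):
    """Negative log-likelihood for Poisson-distributed bin counts,
    with expected yield mu*s + b in each bin (constant log n! dropped)."""
    total = 0.0
    for s, b, n in zip(sig, bkg, data):
        lam = mu * s + b
        total += lam - n * math.log(lam)
    return total

# Simple grid scan for the best-fit signal strength.
mus = [i / 1000 for i in range(0, 5001)]
mu_hat = min(mus, key=nll)
print(f"best-fit signal strength: {mu_hat:.2f}")
```

With these toy inputs the fit prefers a signal strength above one, mimicking the situation where the observed yield exceeds the background-only expectation.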
The large event sample (139 fb⁻¹) provided by the full Run-2 dataset, the implementation of multivariate techniques, and improved ATLAS detector and reconstruction performance enabled the observation and cross-section measurement of this rare process. The observed (expected) significance of the measurement is 8.2 (5.4) standard deviations with respect to the hypothesis of no WWW signal. The cross section is measured to be 850 ± 100 (stat.) ± 80 (syst.) fb, as derived from the observed signal strength (the ratio of measured to predicted yields) of 1.66 ± 0.28, which is within 2.4σ of the SM prediction. The full Run-3 dataset is anticipated to more than double the number of signal events and will enable a more precise measurement of WWW production. Higher-precision cross-section measurements and detailed differential distributions will elucidate the compatibility with the SM, and an EFT approach can quantify the sensitivity to anomalous gauge couplings in a search for new physics.
A new measurement by the ALICE collaboration has demonstrated for the first time that jets become narrower after “quenching” in quark–gluon plasma (QGP). RHIC and LHC data show that the QGP behaves like a strongly-coupled liquid with very low viscosity, but it is an open question how this arises from the asymptotic limit of weakly-coupled quarks and gluons at short distances. The new results provide quantitative new insights into the hot and dense medium created in heavy-ion collisions and how it modifies the substructure of jets and dissipates part of their energy.
An important property of the QGP is its ability to “resolve” nearby partons as effectively independent colour charges above the medium’s characteristic resolution scale – a parameter that is very poorly predicted by theory, but thought to be in the vicinity of a femtometre or less. In recent years, jet quenching has been proposed as a way to determine this scale. Jets originate from a single quark or gluon that showers into more partons, either by radiating a gluon or splitting into a quark–antiquark pair. When a jet moves through the medium, each individual splitting results in two distinct colour charges that, depending on their angular separation and the medium’s resolution length, can interact as one coherent object or as two independent charges. At the LHC, dedicated measurements of the angular structure of jets put our understanding of this resolution scale to the test, probing whether wider jets are more likely to be resolved.
To identify the relevant two-prong splittings, ALICE “groomed” jets using track clustering. The algorithm reclusters and unwinds the jet shower to find the first parton splitting satisfying a grooming condition (figure 1). The excellent tracking resolution in ALICE allows for very precise measurements of jet substructure even at small angular distance scales. The angular width of the jet was found to be significantly modified in Pb–Pb compared to pp collisions (figure 2). In particular, wider splittings are suppressed in Pb–Pb compared to pp collisions, demonstrating that the interaction of jets with the QGP filters out wide jets.
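A widely used grooming criterion of this type is the soft-drop condition; whether this is exactly the condition applied here is not stated in the text, but it illustrates how a two-prong splitting is selected. Unwinding the clustering history, the first splitting into subjets with transverse momenta $p_{T,1}$ and $p_{T,2}$ and angular separation $\Delta R_{12}$ is kept if it satisfies

```latex
z = \frac{\min(p_{T,1},\, p_{T,2})}{p_{T,1} + p_{T,2}}
  > z_{\mathrm{cut}} \left( \frac{\Delta R_{12}}{R} \right)^{\beta},
```

where $R$ is the jet radius and $z_{\mathrm{cut}}$ and $\beta$ are grooming parameters that set how aggressively soft, wide-angle radiation is discarded.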
This measurement is the first of its kind to be fully corrected for large background effects, allowing direct quantitative comparisons with theoretical calculations of jet quenching. Most theoretical models describe the general narrowing trend seen in the data, despite the different implementations of jet-medium interactions. The data is consistent with models implementing an incoherent interaction in which the medium resolves the splittings (Pablos, Lres = 0). Interestingly, however, another calculation demonstrates this narrowing effect with a fully coherent interaction, in which the jet splittings are not resolved, but by modifying the initial quark and gluon fractions (Yuan, quark). While the precision of the data currently precludes a precise extraction of the medium’s resolving power within a given model, the measurement places quantitative constraints on medium properties, and demonstrates for the first time a direct modification to the angular structure of jets in heavy-ion collisions. This opens the door to increasingly precise measurements with the high-precision data anticipated in LHC Run 3.
The possibility that the proton wave function may contain a |uudcc̄⟩ component in addition to the cc̄ pairs arising from perturbative g → cc̄ gluon splitting has been debated for decades. In favour of such “intrinsic charm” (IC), light-front QCD (LFQCD) calculations predict that non-perturbative IC manifests as percent-level valence-like charm content in the parton distribution functions (PDFs) of the proton. On the other hand, if the charm-quark content is entirely perturbative in nature, the charm PDF should resemble that of the gluon and decrease sharply at large momentum fractions, x. The proton could also contain intrinsic beauty, but suppressed by a factor of order mc²/mb². The picture for intrinsic strangeness is somewhat murkier due to the lighter mass of the strange quark.
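As a quick order-of-magnitude check of that suppression factor, using approximate quark masses (roughly the PDG MS-bar values, taken here as assumptions for illustration):

```python
# Rough size of the intrinsic-beauty suppression relative to intrinsic
# charm, ~ m_c^2 / m_b^2, with approximate quark masses in GeV.
m_c = 1.27  # charm-quark mass (approximate)
m_b = 4.18  # beauty-quark mass (approximate)
suppression = m_c**2 / m_b**2
print(f"m_c^2 / m_b^2 ≈ {suppression:.3f}")  # about 0.09
```

So any intrinsic-beauty component would be roughly an order of magnitude smaller than intrinsic charm.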
Measurements of charm-hadron production in deep-inelastic scattering and in fixed-target experiments, with typical momentum transfers below Q = 10 GeV, have been interpreted as evidence both for and against the IC predicted by LFQCD. Even though such experiments are in principle sensitive to valence-like c-quark content, interpreting low-Q data is challenging since it requires a careful theoretical treatment of hadronic and nuclear effects. Recent global PDF analyses, which also include measurements by ATLAS, CMS and LHCb, are inconclusive and can only exclude a relatively large IC component carrying more than a few percent of the momentum of the proton.
Using its Run-2 data, LHCb recently studied IC by making the first measurement of the fraction of Z+jet events that contain a charm jet in the forward region of proton–proton collisions. Since Zc production is inherently at large Q, above the electroweak scale, hadronic effects are small. A leading-order Zc production mechanism is gc → Zc scattering (figure 1), where in the forward region one of the initial partons must have large x, hence Zc production probes the valence-like region.
The spectrum observed by LHCb exhibits a sizable enhancement at forward Z rapidities (figure 2), consistent with the effect expected if the proton wave function contains the |uudcc̄⟩ component predicted by LFQCD. Incorporating these results into global PDF analyses should strongly constrain the large-x charm PDF, both in size and shape – and could reveal that the proton contains valence-like intrinsic charm.
These results demonstrate the unique sensitivity of the LHCb experiment to the valence-like content of the proton. Looking forward to Run 3, increased luminosity will lead to a substantial improvement in the precision of this measurement, which should provide an even clearer picture of just how charming the proton is.
New ways to detect long-lived particles (LLPs) are opening up avenues for searching for physics beyond the Standard Model (SM). LLPs could provide evidence for a hidden dark sector of particles that includes dark-matter candidates and could be studied via “portal interactions” with the visible universe. By employing the CMS experiment’s muon spectrometer in a novel way, the collaboration has recently deployed a powerful new technique for detecting LLPs that decay between 6 and 10 metres from the primary interaction point.
An LLP decaying in the endcap muon spectrometer volume should produce a particle shower when its decay products interact with the return yoke of the CMS solenoid. The secondary particles produced by the shower would traverse the gaseous regions of the cathode-strip chamber (CSC) detector and produce a large multiplicity of signals on the wire anodes and strip cathodes. Localised hits are reconstructed by combining these signals using a density-based clustering algorithm. This is the first time the CSC detectors have been used as a sampling calorimeter to try to detect and identify LLP decays.
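The exact CMS clustering algorithm and its parameters are not given in the text; as a rough illustration of how density-based clustering groups detector hits, a minimal DBSCAN-style sketch over 2D hit positions (all names and thresholds here are hypothetical) might look like:

```python
# Toy density-based clustering of 2D hit positions: a simplified
# DBSCAN-like grouping, not the CMS reconstruction algorithm.
def cluster_hits(hits, eps=1.0, min_pts=3):
    """Group hits whose neighbourhood (within radius eps) contains at
    least min_pts points; returns a list of clusters of hit indices."""
    def neighbours(i):
        xi, yi = hits[i]
        return [j for j, (xj, yj) in enumerate(hits)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = {}      # hit index -> cluster id
    clusters = []
    for i in range(len(hits)):
        if i in labels:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            continue  # too sparse to seed a cluster (treated as noise)
        cid = len(clusters)
        clusters.append([])
        stack = [i]
        while stack:
            j = stack.pop()
            if j in labels:
                continue
            labels[j] = cid
            clusters[cid].append(j)
            nb = neighbours(j)
            if len(nb) >= min_pts:  # core point: keep expanding
                stack.extend(nb)
    return clusters

dense = [(0.1 * k, 0.1 * k) for k in range(10)]  # a compact "shower"
noise = [(5.0, 0.0), (0.0, 5.0)]                 # isolated stray hits
found = cluster_hits(dense + noise, eps=0.5, min_pts=3)
print(len(found), "cluster(s); sizes:", [len(c) for c in found])
```

The key property this illustrates is that a dense burst of hits (an LLP shower) forms one large cluster, while isolated hits from random backgrounds never reach the density threshold.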
Searching for CSC clusters with a sufficiently large number of hits suppresses background processes while maintaining a high efficiency for detecting potential LLP decays. The large amount of steel in the CMS return yoke nearly eliminates “punch-through” hadrons – particles not fully stopped by the calorimeter that could otherwise mimic the signature of an LLP. The largest remaining source of background is known LLPs produced by SM processes, such as the neutral kaon KL. These particles are copiously produced in LHC collisions and, on rare occasions, traverse the material without being stopped. Kaons are predominantly produced with much lower energies than the signal LLPs and therefore result in clusters with a smaller number of hits. Requiring clusters with more than 130 CSC hits suppresses these dominant background events to a negligible level (see figure 1).
This search improves on the previous best results by more than a factor of six
Using the full Run-2 dataset, the CMS collaboration detected no excess of particle-shower events above the expected backgrounds, setting constraints on a benchmark simplified model of scalar LLP production mediated by the Higgs boson (a so-called Higgs-portal model). This search improves on the previous best results by more than a factor of six (two) for an LLP mass of 7 GeV (≥ 15 GeV) for a proper decay length (cτ) of the scalar larger than 100 m. It is the first to be sensitive to LLP decays with cτ up to 1000 m and masses between 40 and 55 GeV at branching ratios of the Higgs boson to a pair of LLPs below 20%.
This novel approach to identifying showers in muon detectors opens up an exciting new programme of searches for LLPs in a wide variety of theoretical models. Potential frameworks range from Higgs-portal models to other portals to a dark sector, including neutrinos, axions and dark photons. The ongoing development of a dedicated Level-1 and High-Level Trigger focusing on particle showers detected in the CMS muon spectrometer promises an order-of-magnitude improvement in the discovery sensitivity for LLPs in the forthcoming run of the LHC.
The diffuse photon background that fills the universe does not limit itself to the attention-hogging cosmic microwave background, but spans a wide spectrum extending up to TeV energies. The origin of the photon emission at X-ray and gamma-ray wavelengths, first discovered in the 1970s, remains poorly understood. Many possible sources have been proposed, ranging from active galactic nuclei to dark-matter annihilation. Thanks to many years of gamma-ray data from the Fermi Large Area Telescope (Fermi-LAT), a group from Australia and Italy has now produced a model that links part of the diffuse emission to star-forming galaxies (SFGs).
As their name implies, SFGs are galaxies in which stars are formed, and therefore also die through supernova events. Such sources, which include our own Milky Way, have gained interest from gamma-ray astronomers during the past decade because several resolvable SFGs have been shown to emit in the 100 MeV to 1 TeV energy range. Given their preponderance, SFGs are thus a prime-suspect source of the diffuse gamma-ray background.
Clear correlation
The source of gamma rays within SFGs is very likely the interaction between cosmic rays and the interstellar medium (ISM). The cosmic rays, in turn, are thought to be accelerated within the shockwaves of supernova remnants, after which they interact with the ISM to produce a hadronic cascade. The cascade includes neutral pions, which decay into gamma rays. This connection between supernova remnants and gamma rays is strengthened by a clear correlation between the star-formation rate in a galaxy and the gamma-ray flux it emits. Additionally, such sources are theorised to be responsible for the neutrino emission detected by the IceCube observatory over the past few years, which also appears to be highly isotropic.
Based on additional SFG gamma-ray sources found by Fermi-LAT, which could be used for validation, the Australian/Italian group developed a physical model to study the contribution of SFGs to the cosmic diffuse gamma-ray background. The model starts with the spectra of charged cosmic rays produced in the numerous supernova remnants within a galaxy, and benefits greatly from data collected from several such remnants in the Milky Way. It then models the production and energies of gamma rays through the interaction of cosmic rays with the ISM, followed by the transport of the gamma rays to Earth, including losses due to pair production in interactions with low-energy photons.
The main uncertainty in previous models was the efficiency with which a galaxy transforms the energy of cosmic rays into gamma rays, since it is not possible to use our own galaxy to measure it. The big breakthrough in the new work is a more thorough theoretical modelling of this efficiency, which was first tested extensively using data from resolved SFG sources. After such tests proved successful, the model could be applied to predict the gamma-ray emission properties of galaxies spanning the history of the universe. These predictions indicate that the low-energy part of the spectrum can be largely attributed to galaxies from the so-called cosmic noon: the period when star formation in large galaxies was at its peak, about 10 billion years ago. Nearby galaxies, on the other hand, explain the high-energy part of the spectrum: the TeV emission of old and distant sources is absorbed in the intergalactic medium through pair production with low-energy photons. Overall, the model predicts not only the spectral shape but also the overall flux (see “Good fit” figure), negating the need for other possible sources such as active galactic nuclei or dark matter.
These new results once again indicate the importance of star-forming regions for astrophysics, after also recently being proposed as a possible source of PeV cosmic rays by LHAASO (CERN Courier July/August 2021 p11). Furthermore, the work shows the potential for expansion to other astrophysical messengers, with the authors stating their ambition to apply the same model to radio emission and high-energy neutrinos.
The existence of an eV-scale sterile neutrino looks less likely today than at any time in the past 20 years. Such a particle has long been considered to be the simplest explanation for several related anomalies in neutrino physics, but results released yesterday by Fermilab’s MicroBooNE collaboration disfavour its existence relative to the Standard Model.
“MicroBooNE has made a very comprehensive exploration through multiple types of interactions, and multiple analysis and reconstruction techniques,” says co-spokesperson Bonnie Fleming of Yale. “They all tell us the same thing, and that gives us very high confidence in our results that we are not seeing a hint of a sterile neutrino.”
The collaboration says that the analyses favour the Standard Model over the anomalous signal seen by its sibling experiment MiniBooNE at more than 99% confidence, should the true origin of that signal be electrons from a neutrino oscillation via a hitherto-undetected sterile neutrino. “But that earlier data from MiniBooNE doesn’t lie,” says former co-spokesperson Sam Zeller of Fermilab. “There’s something really interesting happening that we still need to explain.”
There’s something really interesting happening that we still need to explain
Sam Zeller
Neutrinos suffer from an identity crisis regarding their mass. As a result, the three known flavours morph into each other as phase differences develop between three mass eigenstates. However, well before this model solidified around the turn of the millennium, a measurement by the LSND collaboration at Los Alamos in the US suggested the existence of an additional neutrino which had to be “sterile” with respect to the weak, electromagnetic and strong interactions, and much more massive, given how rapidly the oscillation developed. Since this first hint, the tale of the sterile neutrino has taken multiple twists and turns.
Twists and turns
In the mid-1990s, LSND reported seeing a 3.8σ excess of electron antineutrinos in a beam of accelerator-generated muon antineutrinos, but the KARMEN experiment at the Rutherford Appleton Laboratory in the UK failed to reproduce the effect. Evidence for an eV-scale sterile neutrino mounted with the observation of a deficit of electron neutrinos from 37Ar and 51Cr electron-capture decays at Gran Sasso in Italy and at the Baksan Neutrino Observatory in Russia (the gallium anomaly), and a reported deficit of electron antineutrinos from nuclear reactors (the reactor anomaly). Troublingly, however, long-baseline accelerator neutrino experiments such as MINOS+ do not observe the requisite “disappearance” of muon neutrinos required by the principle of unitarity, and the existence of such a sterile neutrino is also starkly incompatible with current models of cosmology. While the gallium anomaly should soon be probed definitively by the BEST experiment at Baksan (Phys. Rev. D 2018 97 073001), recent calculations of reactor fluxes may now be dissolving the reactor anomaly (see, for example, arXiv:2110.06820). But the most compelling single piece of evidence in favour of sterile neutrinos came when the MiniBooNE experiment at Fermilab tried to reproduce the LSND effect. In November 2018, the collaboration reported a 4.5σ excess of electron neutrinos and antineutrinos compared to Standard-Model expectations.
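In the two-flavour approximation that underlies these short-baseline interpretations, the appearance probability depends on the mixing amplitude and the mass-squared splitting Δm². A small sketch, using purely illustrative parameter values (not the fitted LSND/MiniBooNE numbers):

```python
import math

def osc_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour appearance probability:
    P = sin^2(2θ) · sin^2(1.27 · Δm²[eV²] · L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative numbers only: a short-baseline setup with an
# eV-scale mass splitting and a small mixing amplitude.
p = osc_prob(sin2_2theta=0.003, dm2_eV2=1.0, L_km=0.5, E_GeV=0.8)
print(f"P(numu -> nue) ≈ {p:.5f}")
```

The "eV-scale" requirement follows from the short baseline: with L/E of order 1 km/GeV, only Δm² of order 1 eV² puts the oscillation phase near its maximum.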
Few neutrino physicists foresaw that MicroBooNE would disfavour both hypotheses
Sibling experiment MicroBooNE has now released its first round of tests of the MiniBooNE anomaly. Equipped with a cutting-edge liquid-argon time-projection chamber, the collaboration observed neutrino interactions at the level of individual particle tracks – a key advantage compared to a Cherenkov detector such as MiniBooNE, which could not distinguish electrons from photons. The collaboration has now used half of its available data to probe which particle is the true origin of the anomaly. Earlier this month, MicroBooNE tested the hypothesis that MiniBooNE’s excess was actually due to an underestimated single-photon background, perhaps caused by a difficult-to-model rare decay of a Δ resonance. Now, MicroBooNE has tested the hypothesis that the MiniBooNE excess was caused by single electrons, most likely the result of neutrino oscillations via an eV-scale sterile neutrino. Few neutrino physicists foresaw that MicroBooNE would disfavour both hypotheses.
“Every time we look at neutrinos, we seem to find something new or unexpected,” says MicroBooNE co-spokesperson Justin Evans of the University of Manchester. “MicroBooNE’s results are taking us in a new direction, and our neutrino programme is going to get to the bottom of some of these mysteries.” The collaboration will now investigate whether more exotic topologies such as electron-positron pairs could be the source of the MiniBooNE anomaly. Such a final state might suggest the existence of heavier sterile neutrinos, say theorists.
“eV-scale sterile neutrinos no longer appear to be experimentally motivated, and never solved any outstanding problems in the Standard Model,” says theorist Mikhail Shaposhnikov of EPFL. “But GeV-to-keV-scale sterile neutrinos – so-called Majorana fermions – are well motivated theoretically and do not contradict any existing experiment. They can explain neutrino masses and oscillations, give a dark-matter candidate, and produce a baryon asymmetry in the universe: all the problems that the Standard Model is incapable of addressing. Experimental efforts at the intensity frontier should now be concentrated in this direction.”
Future scientific breakthroughs in high-energy physics will require unprecedented levels of international engagement, building on the successful model of the Large Hadron Collider at CERN. Joe Lykken, Fermilab deputy director for research, will describe how Fermilab is moving forward rapidly with CERN and other international partners to realise this vision.
The questions under scrutiny range from the nature of the Higgs field to the question of whether neutrinos play a role in the matter–antimatter asymmetry observed in the universe. PIP-II, an upgrade to the Fermilab accelerator complex that includes a leading-edge superconducting linear accelerator, is already under construction, with major “in-kind” contributions and expertise from partners in India, Italy, the UK, France and Poland. PIP-II will enable the world’s most intense beam of neutrinos for the Deep Underground Neutrino Experiment (DUNE), which will deploy 70,000 tonnes of liquid-argon detectors in a deep underground site 1300 km from Fermilab. DUNE was formulated as an international project from the start, and now includes more than a thousand collaborators from 30 countries. Two large prototype detectors for DUNE have been successfully constructed and tested at the CERN Neutrino Platform. DUNE will have remarkable capabilities to determine how the properties of neutrinos have shaped our universe. At the same time, Fermilab has been developing and building next-generation superconducting magnets that will be deployed in the HL-LHC accelerator at CERN, and is the US lead for ambitious upgrades to the CMS experiment for the HL-LHC era. These technological capabilities will also make Fermilab an important partner for the proposed Future Circular Collider.
Joseph Lykken is Fermilab’s deputy director of research and leads the Fermilab Quantum Institute. A distinguished scientist at the laboratory, Lykken is a former member of the Theory Department, where he researched string theory and phenomenology, and is a member of the CMS experiment at the Large Hadron Collider at CERN. He received his PhD from the Massachusetts Institute of Technology and previously worked at the Santa Cruz Institute for Particle Physics and the University of Chicago. Lykken began his tenure at Fermilab in 1989. He is a former member of the High Energy Physics Advisory Panel, which advises both the Department of Energy and the National Science Foundation, and served on the Particle Physics Project Prioritization Panel, developing a road map for the next 20 years of US particle physics. Lykken is a fellow of the American Physical Society and of the American Association for the Advancement of Science.