A crucial missing piece in our understanding of quantum chromodynamics (QCD) is a complete description of hadronisation in hard-scattering processes with large momentum transfer, which the LHCb collaboration has now investigated in proton–lead (pPb) collisions. While perturbative QCD describes the transverse-momentum (pT) dependence of heavy-quark production in proton–proton (pp) collisions reasonably well, the situation is different in heavy-ion collisions due to the formation of a quark–gluon plasma (QGP), which affects the behaviour of particles traversing the medium. In particular, hadronisation can be modified, changing the relative abundances of hadrons compared to pp collisions. Several models predict enhanced strange-quark production, so an abundance of strange baryons is seen as a signature of QGP formation.
The role that QGP may play in pPb collisions is currently unclear. Some models predict the formation of “QGP droplets”, which could induce the same behaviour as in PbPb collisions, albeit less pronounced. In addition, “cold nuclear matter” (CNM) effects are also present in pPb interactions and can mimic the behaviour caused by QGP, but via different mechanisms. For all these reasons, a strangeness enhancement in pPb collisions would strongly indicate the formation of a deconfined medium in small systems, providing crucial information about QGP properties and formation once the CNM effects are under control.
The LHCb collaboration recently analysed pPb data for QGP effects with the twofold purpose of searching for strangeness enhancement and gaining a precise understanding of the CNM effects. The search was performed by measuring the production ratio of the strange baryon Ξ+c, which had never before been observed in pPb collisions, to the non-strange baryon Λ+c. Using an earlier pPb sample, LHCb has also studied the production ratios of the D+s, D+ and D0 mesons, the first being measured for the first time down to zero pT in the forward region, precisely addressing CNM effects. All measurements are performed differentially in pT and in the rapidity of the produced particle, and compared to the latest theory predictions. The Ξ+c cross section has been measured for the first time in pPb collisions, providing strong constraints on the factorisation scale μ0 of the theory model. This result makes it possible to set the absolute scale of the theoretical computations of strangeness production, a trend confirmed with even higher precision by comparing the measurement to the Λ+c production cross section evaluated in the same decay mode. Moreover, the ratio is roughly constant as a function of pT and behaves in the same way at positive (pPb) and negative (Pbp) rapidities (see figure 1). The measurement is consistent with models incorporating initial-state effects due to gluon shadowing in nuclei, suggesting that QGP formation and the resulting strangeness enhancement have little or no effect on Ξ+c production in pPb collisions.
This interpretation is confirmed by the measurement of the D+s, D+ and D0 cross sections and the corresponding ratios in different rapidity regions. While the ratios show little enhancement within the statistical uncertainty, a large asymmetry is observed in the forward–backward production. This strongly indicates CNM effects and provides detailed constraints on models of nuclear parton distribution functions and hadron production over a very wide range of Bjorken-x (10⁻⁵ to 10⁻²). A strong suppression is observed for the D mesons, giving insight into the nature of the CNM effects involved. An explanation via additional final-state effects is challenged by the Ξ+c data, which are well described by models that do not include them. The production ratios of Ξ+c, D+s, D+ and D0 measured as a function of pT in pPb collisions confirm these findings. All these studies will profit from the increased statistics expected from future LHC pPb runs.
As dark matter (DM) search experiments increasingly constrain minimal models, more complex ones have gained importance, featuring a rich “dark sector” with additional particle states and often involving forces that cannot be directly felt by Standard Model (SM) particles. Nevertheless, the SM and dark sector are typically connected by a “portal” that can be experimentally probed.
The CMS collaboration recently presented the first dedicated collider search for inelastic dark matter (IDM) using the LHC Run 2 dataset. In IDM models, a small Majorana mass component is combined with a Dirac fermion field corresponding to the DM and added to the SM Lagrangian, resulting in two new DM mass eigenstates with a predominantly off-diagonal (inelastic) coupling and a small mass splitting. In addition, a dark photon (a gauge boson similar to the ordinary photon) serves as the portal to the SM. This means that at the LHC, the lighter (χ1) and heavier (χ2) DM states are simultaneously produced via a dark photon (A′). While the lighter state is stable and escapes the detector, the heavier one can travel a macroscopic distance before decaying to the lighter one and a pair of muons, which are produced away from the collision point.
This process can be probed by exploiting a striking signature: a pair of almost collinear, low-momentum and displaced muons from the χ2 decay; significant missing transverse momentum (MET) from the χ1; and an initial-state radiation jet that can be used for trigger purposes. The MET-dimuon system recoils against the high-momentum jet, so that the muons and MET are also almost collinear. This unique topology presents challenges, including the reconstruction of the displaced muons. This problem was addressed by using a dedicated reconstruction algorithm, which remains efficient even for muons produced several metres away from the collision point (figure 1, left).
The first dedicated collider search for IDM using the full dataset collected during LHC Run 2
After applying event-selection criteria targeting the expected IDM signal, the number of events is compared to the data-driven background prediction: no excess is observed. Upper limits are set on the product of the pp → A′ → χ2χ1 production cross-section and the branching fraction of the χ2 → χ1μ+μ– decay; they are shown in figure 1 (right) for a scenario with a 10% mass splitting between the χ1 and χ2 states. The y variable is roughly proportional to the interaction strength between the SM and the dark sector. Values of y greater than 10⁻⁷ to 10⁻⁹, depending on the mass, are excluded for masses between 3 and 80 GeV, assuming that the fine-structure constant has the same value in the dark sector as in the SM.
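For context, the y variable quoted above is, in the parametrisation commonly used for dark-photon benchmark scenarios (a standard convention, not spelled out in the article itself), built from the kinetic-mixing parameter ε, the dark-sector coupling αD and the ratio of the DM mass to the dark-photon mass:

```latex
% Dimensionless interaction-strength variable for dark-photon models
% (standard benchmark convention; symbols not defined in the article):
%   \epsilon  - kinetic mixing between the dark photon A' and the SM photon
%   \alpha_D  - dark-sector fine-structure constant
y \;\equiv\; \epsilon^{2}\,\alpha_{D}\left(\frac{m_{\chi_1}}{m_{A'}}\right)^{4}
```

With αD fixed to the value of the SM fine-structure constant, as in the quoted result, a limit on y translates directly into a limit on ε for a given mass ratio.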
CMS physicists are looking forward to probing more complex and well-motivated DM models with novel and creative uses of the existing detector.
This book was written on the occasion of the 100th anniversary of the birth of Jack Steinberger. Edited by Jack’s former colleagues Weimin Wu and KK Phua with his daughter Julia Steinberger, it is a tribute to the important role that Jack played in particle physics at CERN and elsewhere, and also highlights many aspects of his life outside physics.
The book begins with a nice introduction by his daughter, herself a well-known scientist. She describes Jack’s family life, his hobbies, interests, passions and engagement, such as with the Pugwash conference series. The introduction is followed by a number of short essays by former friends and colleagues. The first is a transcript of an interview with Jack by Swapan Chattopadhyay in 2017. It contains recollections of Jack’s time at Fermilab, with his PhD supervisor Enrico Fermi, and concludes with his connections with Germany later in life.
Drive and leadership
The next essays highlight the essential impact that Jack had on all the experiments he participated in, mostly as spokesperson, and underline his original ideas, drive and leadership, not just professionally but also in his personal life. Stories include one by Hallstein Høgåsen, a fellow in the CERN theory department, who describes Jack’s determination and perseverance in mountaineering. S Lokanathan worked with Jack as a graduate student in the early 1950s at Nevis Labs and remained in contact with him, including later on when he became a professor in Jaipur. Jacques Lefrançois covers the ALEPH period, and Vera Luth the earlier kaon experiments at CERN. Italo Mannelli comments on both the early times, when Jack visited Bologna to work with Marcello Conversi and Giampietro Puppi, and then turns to his work at the NA31 experiment on direct CP violation in the K0 system.
Gigi Rolandi emphasises the important role that Jack played in the design and construction of the ALEPH time projection chamber. Another good essay is by David N Schwartz, the son of Mel Schwartz who shared the Nobel prize with Jack and Leon Lederman. When David was born, Jack was Mel Schwartz’s thesis supervisor. As Jack was a friend of the Schwartz family, they were in regular contact all along. David describes how his father and Jack worked together and how, together with Leon Lederman, they started the famous muon neutrino experiment in 1959. As David Schwartz later became involved in arms control for the US in Geneva, he kept in contact with Jack, who had always been very passionate about arms control. David also remembers the great respect that Jack had for his thesis supervisor Enrico Fermi. The final essay is by Weimin Wu, one of the first Chinese physicists to join the international high-energy physics research community. Weimin started to work on ALEPH in 1979 and has remained a friend of the family since. He describes not only the important role that Jack played as a professor, mentor and role model, but also for establishing the link between ALEPH and the Chinese high-energy physics community.
All these essays describe the enormous qualities of Jack as a physicist and as a leader. But they also highlight his social and human strengths. The reader gets a good feeling of Jack’s interests and hobbies outside of physics, such as music, climbing, skiing and sailing. Many of the essays are also accompanied by photographs, covering all parts of his life, and they are free from formulae or complicated physics explanations.
For those who want to go deeper into the physics that Jack was involved with, the second part of the book consists of a selection of his most important and representative publications, chosen and introduced by Dieter Schlatter. The first two papers from the 1950s deal with neutral meson production by photons and a possible detection of parity non-conservation in hyperon decays. They are followed by the Nobel prize-winning paper “Possible Detection of High-Energy Neutrino Interactions and the Existence of Two Kinds of Neutrinos” from 1962, three papers on CP violation in kaon decays at CERN (including first evidence for direct CP violation by NA31), then five important publications from the CDHS neutrino experiment (officially referred to as WA1) on inclusive neutrino and anti-neutrino interactions, charged-current structure functions, gluon distributions and more. Of course, the list would not be complete without a few papers from his last experiment, ALEPH, including the seminal one on the determination of the number of light neutrino species – a beautiful follow-up of Jack’s earlier discovery that there are at least two types of neutrinos.
This agreeable and interesting book will primarily appeal to those who have met or known Jack. But others, including younger physicists, will read the book with pleasure as it gives a good impression of how physics and physicists functioned over the past 70 years. It is therefore highly recommended.
Andrew Larkoski seems to be an author with the ability to write something interesting about topics on which much has already been written. His previous book Elementary Particle Physics (2020, CUP) was noted for its very intuitive style of presentation, which is not easy to find in other particle-physics textbooks. With his new book on quantum mechanics, the author continues in this manner. It is a textbook for advanced undergraduate students covering most of the subjects that an introduction to the topic usually includes.
Despite the subtitle “a mathematical introduction”, there is no more maths than in any other textbook at this level. The reason for the title is presumably not the mathematical content, but the presentation style. A standard quantum-mechanics textbook usually starts with postulating Schrödinger’s equation and then proceeds immediately to applications on physical systems. For example, the very popular Introduction to Quantum Mechanics by Griffiths and Schroeter (2018, CUP) introduces Schrödinger’s equation on the first page and, after some discussion on its meaning and basic computational techniques, the first application on the infinite square well appears on page 31. Larkoski aims to build an intuitive mathematical foundation before introducing Schrödinger’s equation. Hilbert spaces are discussed in the context of linear algebra as an abstract complex vector space. Indeed, space is given at the very beginning for ideas, such as the relation between the derivative and a translation, that are useful for more advanced applications of quantum mechanics, for example in field theory, but which seldom appear in quantum-mechanics textbooks so early. Schrödinger’s equation does not appear until page 58, and the first application in a system (which, as usual, is the infinite square well) appears only on page 89.
The book is concise in length, which means that the author has had to carefully choose the areas that are beyond the standard quantum-mechanics material covered in most undergraduate courses. Larkoski’s choices are probably informed by his background in quantum field theory, since path integral formalism features strongly. Perhaps the price for keeping the book short is that there are topics, such as identical particles or Fermi’s golden rule, that are not covered.
Some readers will find the book’s delayed introduction of Schrödinger’s equation unnecessary and may prefer a more direct approach to the topic, perhaps also for practical reasons related to the length of a university teaching term. I would not agree with such an assessment. Taking the time to build a foundation early on helps tremendously with understanding quantum mechanics later in a course – an approach that will hopefully find its way into more classrooms in the near future.
The exact origin of the high-energy cosmic rays that bombard Earth remains one of the most important open questions in astrophysics. Since their discovery more than a century ago, a multitude of potential sources, both galactic and extra-galactic, have been proposed. Examples of proposed galactic sources, which are theorised to be responsible for cosmic rays with energies below the PeV range, are supernova remnants and pulsars, while blazars and gamma-ray bursts are two of many potential sources theorised to be responsible for the cosmic-ray flux at higher energies.
When identifying the origin of astrophysical photons, one can use their direction. For cosmic rays, however, this is not as straightforward, because galactic and extra-galactic magnetic fields deflect them along the way. To identify the origin of cosmic rays, researchers therefore rely almost fully on information embedded in their energy spectra. Assuming only acceleration within the shock regions of extreme astrophysical objects, the galactic cosmic-ray spectrum should follow a simple, single power law with an index between –2.6 and –2.7. However, thanks to measurements by a range of dedicated instruments including AMS, ATIC, CALET, CREAM and HAWC, we know the spectrum to be more complex. Furthermore, different types of cosmic rays, such as protons and the nuclei of helium or oxygen, have been shown to exhibit different spectral features, with breaks at different energies.
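As a sketch of what a “simple, single power law” means here, the differential flux expected from pure shock acceleration and diffusive propagation would take the schematic form (an illustrative expression, not taken from the article):

```latex
% Schematic single power-law cosmic-ray flux (illustrative):
\frac{\mathrm{d}N}{\mathrm{d}E} \;=\; \Phi_{0}\left(\frac{E}{E_{0}}\right)^{-\gamma},
\qquad \gamma \approx 2.6\text{–}2.7
% A "spectral break" is an energy at which the index \gamma changes;
% "hardening" means \gamma decreases, "softening" means it increases.
```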
New measurements by the space-based Chinese–European Dark Matter Particle Explorer (DAMPE) provide detailed insights into the various spectral breaks in the combined proton and helium spectrum. Clear hints of spectral breaks had already been seen by various balloon- and space-based experiments at low energies (below about 1 TeV), and by ground-based air-shower detectors at high energies (above the TeV scale). However, in the region where space-based measurements start to suffer from a lack of statistics, ground-based instruments suffer from low sensitivity, resulting in relatively large uncertainties. Furthermore, the completely different ways in which space- and ground-based instruments measure the energy (directly in the former, via air-shower reconstruction in the latter) made measurements that clearly connect the two regimes important. DAMPE has now produced detailed spectra in the 46 GeV to 316 TeV energy range, thereby filling most of the gap. The results confirm both a spectral hardening around 100 GeV and a subsequent spectral softening around 10 TeV, which connects well with a second spectral bump previously observed by ARGO-YBJ+WFCT at an energy of several hundred TeV (see figure).
The complex spectral features of high-energy cosmic rays can be explained in various ways. One possibility is through the presence of different types of cosmic-ray sources in our galaxy; one population produces cosmic rays with energies up to PeV, while a second only produces cosmic rays with energies up to tens of TeV, for example. A second possibility is that the spectral features are a result of a nearby single source from which we observe the cosmic rays directly before they become diffused in the galactic magnetic field. Examples of such a nearby source could be the Geminga pulsar, or the young supernova remnant Vela.
In the near future, novel data and analysis methods will likely allow researchers to distinguish between these two theories. One important source of this data is the LHAASO experiment in China, which is currently taking detailed measurements of cosmic rays in the 100 TeV to EeV range. Furthermore, thanks to ever-increasing statistics, the anisotropy of the arrival direction of the cosmic rays will also become a method to compare different models, in particular to identify nearby sources. The important link between direct and indirect measurements presented in this work thereby paves the way to connecting the large amounts of upcoming data to the theories on the origins of cosmic rays.
In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties.
To improve the latter further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector for the next-generation long-baseline neutrino-oscillation experiment Hyper-Kamiokande.
Meanwhile, R&D and testing of the prototype detectors for the DUNE experiment at the Long-Baseline Neutrino Facility at Fermilab/SURF in the US are entering their final stages.
The discovery of the Higgs boson in 2012 unleashed a detailed programme of measurements by ATLAS and CMS that have confirmed that its couplings are consistent with those predicted by the Standard Model (SM). However, several Higgs-boson decay channels have such small predicted branching fractions that they have not yet been observed. Proceeding through higher-order loops, these channels also provide indirect probes of possible physics beyond the SM. ATLAS and CMS have now teamed up to report the first evidence for the decay H → Zγ, presenting the combined result at the Large Hadron Collider Physics conference in Belgrade in May.
The SM predicts that approximately 0.15% of Higgs bosons produced at the LHC will decay in this way, but some theories beyond the SM predict a different decay rate. Examples include models where the Higgs boson is a neutral scalar of different origin, or a composite state. Different branching fractions are also expected for models with additional colourless charged scalars, leptons or vector bosons that couple to the Higgs boson, due to their contributions via loop corrections.
“Each particle has a special relationship with the Higgs boson, making the search for rare Higgs decays a high priority,” says ATLAS physics coordinator Pamela Ferrari. “Through a meticulous combination of the individual results of ATLAS and CMS, we have made a step forward towards unravelling yet another riddle of the Higgs boson.”
We have made a step forward towards unravelling yet another riddle of the Higgs boson
Previously, ATLAS and CMS independently conducted extensive searches for H → Zγ. Both used the decay of a Z boson into a pair of electrons or muons, which occurs in about 6.6% of cases, to identify H → Zγ events. In these searches, the collision events associated with this decay would appear as a narrow peak over a smooth background of events.
In the new study, ATLAS and CMS combined the data collected during the second run of the LHC in 2015–2018 to significantly increase the statistical precision and reach of their searches. This collaborative effort resulted in the first evidence for the decay of the Higgs boson into a Z boson and a photon, with a statistical significance of 3.4σ. The measured signal rate relative to the SM prediction is 2.2 ± 0.7, in agreement with the theoretical expectation.
“The existence of new particles could have very significant effects on rare Higgs decay modes,” says CMS physics coordinator Florencia Canelli. “This study is a powerful test of the Standard Model. With the ongoing third run of the LHC and the future High-Luminosity LHC, we will be able to improve the precision of this test and probe ever rarer Higgs decays.”
At a CERN seminar on 13 June, the LHCb collaboration presented the world’s most precise measurements of two key parameters relating to CP violation. Based on the full LHCb dataset collected during LHC Runs 1 and 2, the first concerns the observable sin2β while the second concerns the CP-violating phase φs – both of which are highly sensitive to potential new-physics contributions.
CP violation was first observed in 1964 in kaon mixing, and confirmed among B mesons in 2001 by the e+e– B-factory experiments BaBar and Belle. The latter enabled the first measurements of sin2β and were a vital confirmation of the Standard Model (SM). In the SM, CP violation arises due to a complex phase in the Cabibbo–Kobayashi–Maskawa mixing matrix, which, being unitary, defines a triangle in the complex plane: one side is defined to have unit length, while the other two sides and three angles must be inferred via measurements of certain hadron decays. If the measurements do not provide a consistent description of the triangle, it would hint that something is amiss in the SM.
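The triangle mentioned above follows from one of the unitarity conditions of the Cabibbo–Kobayashi–Maskawa (CKM) matrix; in the standard convention, which the article does not write out, it reads:

```latex
% Unitarity of the CKM matrix V implies (first and third columns):
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0 .
% Dividing through by V_{cd}V_{cb}^{*} normalises one side to unit length;
% the angle measured via sin2\beta is then
\beta \;=\; \arg\!\left(-\,\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right).
```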
The measurement of sin2β, which determines the angle β of the unitarity triangle, is more difficult at a hadron collider than at an e+e– collider. However, the large data samples available at the LHC and the optimised design of the LHCb experiment have enabled a measurement that is twice as precise as the previous best result, from Belle. The LHCb team used decays of B0 mesons to J/ψK0S, which can proceed either directly or after the B0 first oscillates into its antimatter partner, the B̄0. The interference between the amplitudes for the two decay paths results in a time-dependent asymmetry between the decay-time distributions of the B0 and B̄0. The amplitude of this oscillation, and thus the magnitude of the CP violation present, gives a measurement of sin2β, for which LHCb finds a value of 0.716 ± 0.013 ± 0.008, in agreement with predictions.
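In its standard textbook form (neglecting the small width difference and CP violation in mixing), the time-dependent asymmetry described above reads:

```latex
% Decay-time-dependent CP asymmetry for B0 -> J/psi K0_S
% (standard approximation; \Delta m_d is the B0 oscillation frequency):
\mathcal{A}(t) \;=\;
\frac{\Gamma_{\bar{B}^{0}\to J/\psi K^{0}_{S}}(t) - \Gamma_{B^{0}\to J/\psi K^{0}_{S}}(t)}
     {\Gamma_{\bar{B}^{0}\to J/\psi K^{0}_{S}}(t) + \Gamma_{B^{0}\to J/\psi K^{0}_{S}}(t)}
\;\simeq\; \sin 2\beta \,\sin(\Delta m_{d}\, t)
```

The amplitude of this sinusoidal oscillation is the quantity fitted from the data.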
Based on an analysis of B0S → J/ψK+K– decays, LHCb also presented the world’s best measurement of the CP-violating phase φs, which plays a similar role in B0S-meson decays as sin2β does in B0 decays. As for B0 mesons, a B0S may decay directly or first oscillate into a B̄0S and then decay. CP violation causes these decays to proceed at slightly different rates, manifesting itself as a non-zero value of φs due to the interference between mixing and decay. The predicted value of φs is about –0.037 rad, but new-physics effects, even if small, could change its value significantly.
A detailed study of the angular distribution of the B0S decay products using the Run 1 and 2 data samples enabled LHCb to measure this decay-time-dependent CP asymmetry, obtaining φs = –0.039 ± 0.022 ± 0.006 rad. Representing the most precise single measurement to date, it is consistent with previous measurements and with the SM expectation. The precision measurement of φs is one of LHCb’s most important goals, said co-presenter Vukan Jevtic (TU Dortmund): “Together with sin2β, the new LHCb result marks an important advance in the quest to understand the nature and origin of CP violation.”
With both results currently limited by statistics, the collaboration is looking forward to data from the current and future LHC runs. “In Run 3 LHCb will collect a larger data sample taking advantage of the new upgraded LHCb detector,” concluded co-presenter Peilian Li (CERN). “This will allow even higher precision and therefore the possibility to detect, through these key quantities, the manifestation of new-physics effects.”
I have always been interested in what one might call existential questions: those that were originally theological or philosophical, but are now science, such as “why are things the way they are?” When I was young, for me it was a toss-up: do I go into particle physics or cosmology? At the time, experimental cosmology was less developed, so it made sense to go towards particle physics.
What has been your research focus?
When I was a graduate student in college, I was intrigued by the idea of quantum mechanical spin. I didn’t understand spin and I still don’t. It’s a perplexing and non-intuitive concept. It turned out the university I went to was working on it. When I got there, however, I ended up doing a fixed-target jet-photoproduction experiment. My thesis experiment was small, but it was a wonderful training ground because I was able to do everything. I built the experiment, wrote the data acquisition and all of the analysis software. Then I got back on track with the big questions, so colliders with the highest energies were the way to go. Back then it was the Tevatron and I joined DØ. When the LHC came online it was an opportunity to transition to CMS.
Why and when did you decide to get into communication?
It has to do with my family background. Many physicists come from families where one or both parents are already from the field. But I come from an academically impoverished, blue-collar background, so I had no direct mentors for physics. However, I was able to read popular books from the generation before me, by figures such as Carl Sagan, Isaac Asimov or George Gamow. They guided me into science. I’m essentially paying that back. I feel it’s sort of my duty because I have some skill at it and because I expect that there is some young person in some small town who is in a similar position as I was in, who doesn’t know that they want to be a scientist. And, frankly, I enjoy it. I am also worried about the antiscience sentiment I see in society, from the antivaccine movement to climate-change denial to 5G radiation fears. If scientists do not speak up, the antiscience voices are the only ones that will be heard. And if public policy is based on these false narratives, the damage to society can be severe.
Scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists
How did you start doing YouTube videos?
I had got to a point in my career where I was fairly established, and I could credibly think of other things. When you’re young, you are urged to focus entirely on research, because if you don’t, it could harm your research career. I had already been writing for Fermilab Today and I kept suggesting doing videos, as YouTube was becoming a thing. After a couple of years one of the videographers said, “You know, Don, you’re actually pretty good at explaining this stuff. We should do a video.” My first video came out a year before the Higgs discovery, in July 2011. It was on the Higgs boson. When the video came out, a few of the bigger science outlets picked it up and during the build-up to the Higgs excitement it got more and more views. By now it has more than three million clicks, which for a science channel is a lot. We do serious science in our videos, but there is also some light-heartedness in them.
Do you try to make the videos funny?
This has more to do with me not taking anything seriously. I have found that irreverent humour can be disarming. People like to be entertained when they are learning. For example, one video was about “What was the real origin of mass?” Most people think that the Higgs boson is giving mass, but it’s really QCD. It’s the energy stored inside nucleons. In any event, in this video I start out with a joke about going into a Catholic church. The Higgs boson tries to say “I’m losing my faith,” and the priest replies: “You can’t leave the church. Without you how can we have mass?” For a lot of YouTube channels, viewership is not just about the material. It’s about the viewer liking the presenter. I’d say people who like our channel appreciate the combination of reliable science facts, but also subscribe for the humour. If a viewer doesn’t like a guy who does terrible dad jokes, they just go to another channel.
During the Covid-19 pandemic your videos switched to “Subatomic stories”. How do they differ?
Most of my videos are done in a studio on green screen so that we can put visuals in the background, but that was not possible during the lockdown. We then set up in my living room. I had an old DSLR camera and a recorder, and would record the video and the audio, then send the files to my videographer, Ian Krass, who does all the magic. Our usual videos don’t have a real story arc; they are just a series of topics. With “Subatomic stories” we began with a plan. I organised it as a sort of self-contained course, beginning with basic things, like the Standard Model, weak force, strong force, etc. Towards the end, we introduced more diverse, current research topics and a few esoteric theoretical ideas. Later, after Subatomic stories, I continued to film in my basement in a green-screen studio I built. We’ve returned to the Fermilab studio, but the basement one is waiting should the need arise.
You are quite the public face of Fermilab. How does this relationship work?
It’s working wonderfully. I have no complaints. I can’t say that was always true in the past, because, when you’re young, you’re advised to focus on your research; it was like that for me. At the time there was some hostility towards science communicators. If you did outreach, you weren’t really considered a serious scientist, and that’s still true to a degree, although it is getting better. For me, it got to the point where people were just used to me doing it, and they tolerated it. As long as it didn’t bother my research, I could do this on my time. Some people bowl, some people knit, some people hike. I made videos. As I started becoming more successful, the laboratory started embracing the effort and even encouraged me to spend some of my work day on it. I was glad because in the same way that we encourage certain scientists to specialise in AI or computational skills or detector skills, I think that we as a field need to cultivate and encourage those scientists who are good at communicating our work. The bottom line is that I am very happy with the lab. I would like to see other laboratories encourage at least a small subset of scientists, those who are enthusiastic about outreach, to give them the time and the resources to do it, because there’s a huge payoff.
What are your favourite and least favourite things about doing outreach?
I think I’m making an impact. For instance, I’ve had graduate students or even postdocs ask me to autograph a book saying, “I went into physics because I read this book.” Occasionally I’m recognised in public, but the viewership numbers tell the story. If a video does poorly, it will get 50,000 viewers. And a good video, or maybe just a lucky one, can get millions. The message is getting out. As for the least favourite part, lately it is coming up with ideas. I’ve covered nearly every (hot) topic, so now I am thinking of revisiting early topics in a new way.
What would be your message to physicists who don’t have time or see the need for science communication?
Let’s start with the second type, who don’t see the value of it. I would like to remind them that essentially, in any country, if you want to do research, your funding comes from taxpayers. They work hard for their money and they certainly don’t want to pay taxes, so if you want to ask them to support this thing that you’re interested in, you need to convince them that it’s important and interesting. For those who don’t have time, I’m empathetic. Depending on your supervisor, doing science communication can harm a young career. However, in that case I think that the community should at least support a small group of people who do outreach. If nothing else, the scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists.
Where do you see particle physics headed and the role of outreach?
The problem is that the Standard Model works well, but not perfectly. Consequently, we need to look for anomalies both at the LHC and with other precision experiments. I imagine that the next decade will resemble what we are doing now. I think it would be of very high value if we could spend some money on thinking about how to make stronger magnets and advanced acceleration technologies, because that’s the only way we’re going to get a very large increase in energy. The scientists know what to do. We are developing the techniques and technologies needed to move forward. On the communication side, we just need to remind the public that the questions particle physicists and cosmologists are trying to answer are timeless. They’re the questions many children ask. It’s a fascinating universe out there and a good science story can rekindle anyone’s sense of child-like wonder.
The neutrino had barely been known for two years when CERN’s illustrious neutrino programme got under way. As early as 1958, the 600 MeV Synchrocyclotron enabled the first observation of the decay of a charged pion into an electron and a neutrino – a key piece in the puzzle of weak interactions. Dedicated neutrino-beam experiments began a couple of years later when the Proton Synchrotron (PS) entered operation, rivalled by activities at Brookhaven’s higher-energy Alternating Gradient Synchrotron in the US. Producing the neutrino beam was relatively straightforward: direct a proton beam from the PS onto an internal target to produce pions and kaons, let them fly some distance so that a fraction decay into neutrinos, then use iron shielding to filter out the remaining hadrons, such that only neutrinos and muons remain. Ensuring that a new generation of particle detectors would enable the study of neutrino-beam interactions proved a tougher challenge.
CERN began with two small, 1 m-long heavy-liquid bubble chambers, exposed to neutrinos produced by proton beams striking an internal target inside the PS, with the hope of seeing at least one neutrino event per day. The actual rate was nowhere near that: the target configuration had made the beams about 10 times less intense than expected, and in 1961 CERN’s nascent neutrino programme came to a halt. “It was a big disappointment,” recalls Don Cundy, who was a young scientist at CERN at the time. “Then, several months later, Brookhaven did the same experiment but this time they put the target in the right place, and they discovered that there were two neutrinos – the muon neutrino (νµ) and the electron neutrino (νe) – a great discovery for which Lederman, Schwartz and Steinberger received the Nobel prize some 25 years later.”
Despite this setback, CERN Director-General Victor Weisskopf, along with his Director of Research Gilberto Bernardini and the CERN team, decided to embark on an even more ambitious setup. Employing Simon van der Meer’s recently proposed “magnetic horn” – a high-current, pulsed focusing device placed around the target – and placing the target in an external beam pipe increased the neutrino flux by about two orders of magnitude. In 1963 this opened a new series of neutrino experiments at CERN. They began with a heavy-liquid bubble chamber containing around 500 kg of freon and a spark-chamber detector weighing several tonnes, for which first results were presented at a conference in Siena that year. The bubble-chamber results were particularly impressive, recalls Cundy: “Even though the number of events was of the order of a few hundred, you could do a lot of physics: measure the elastic form factor of the nucleon, single pion production, the total cross section, search for intermediate weak bosons and give limits on neutral-current processes.” It was at that conference that André Lagarrigue of Orsay argued that bubble chambers were the way forward for neutrino physics, and proposed to build the biggest chamber possible: Gargamelle, named after a giantess from a fictional Renaissance story.
Construction in France of the much larger Gargamelle chamber, 4.8 m long and containing 18 tonnes of freon, was quick, and by the end of 1970 the detector was receiving a beam of muon neutrinos from the PS. The Gargamelle collaboration consisted of researchers from seven European institutes: Aachen, Brussels, CERN, École Polytechnique Paris, Milan, LAL Orsay and University College London. In 1969 the collaboration had made a list of physics priorities. Following the results of CERN’s Heavy Liquid Bubble Chamber, which set new limits on neutrino-electron scattering and single-pion neutral-current (NC) processes, the search for actual NC events made it onto the list. However, it only placed eighth out of 10 science goals. That is quite understandable, comments Cundy: “People thought that the most sensitive way to look for NCs was the decay of a K0 meson into two muons or two electrons but that had a very low branching ratio, so if NCs existed it would be at a very small level. The first thing on the list for Gargamelle was in fact looking at the structure of the nucleon, to measure the total cross section and to investigate the quark model.”
Setting priorities
After the discovery of the neutrino in 1956 by Reines and Cowan (CERN Courier July/August 2016 p17), the weak interaction became a focus of nuclear research. The unification of the electromagnetic and weak interactions by Salam, Glashow and Weinberg a decade later motivated experiments to look for the electroweak carriers: the W boson, which mediates charged-current interactions, and the Z boson associated with neutral currents. While charged-current interactions were well established through β decay, neutral currents had barely been considered. They started to become interesting in 1971, after Martinus Veltman and Gerard ’t Hooft proved the renormalisability of the electroweak theory.
By that time, Gargamelle was running at full speed. The photographs taken every time the PS was pulsed were scanned for interesting tracks by CERN personnel (at the time often referred to as “scanning girls”), who essentially performed the role of a modern level-1 trigger. Interactions were divided into different classes depending on the number of particles involved (muons, hadrons, electron–positron pairs, even one or more isolated protons as well as isolated electrons and positrons). The leptonic NC process (νµ + e– → νµ + e–) would give an event consisting of a single energetic electron. Since the background was very low, it would be the smoking gun for NCs. However, the cross section was also very low, with only one to nine events expected from the electroweak calculations. The hadronic NC event (νµ + N → νµ + X, with the respective process involving antiparticles if the reaction was triggered by an antineutrino beam) would consist only of several hadrons, in fact just like events produced by incoming high-energy neutrons.
“When the first leptonic event was found in December 1972 we were convinced that NCs existed,” says Gargamelle member Donatella Cavalli from the University of Milan. “It was just one event but with very low background, so a lot of effort was put into the search for hadronic NC events and in the full understanding of the background. I was the youngest in my group and I remember spending the evenings with my colleagues scanning the films on special projectors, which allowed us to observe the eight views of the chamber. I proudly remember my travels to Paris, London and Brussels, taking the photographs of the candidate events found in Milan to be checked with colleagues from other groups.”
At a CERN seminar on 19 July 1973, Paul Musset, who was one of the principal investigators, presented Gargamelle’s evidence for NCs based on both the leptonic and hadronic analyses. Results from the former had been published in a short paper received by Physics Letters two weeks earlier, while the paper on the hadronic events, which reported on the actual observation and hence confirmation of neutral currents, was received on 23 July. In August 1973 Gerald Myatt of University College London, now at the University of Oxford, presented the results at the Electron-Photon conference. The papers were published in the same issue of the journal on 3 September. Yet many physicists doubted them. “It was generally believed that Gargamelle made a mistake,” says Myatt. “There was only one event, a tiny track really, and very low background. Still, it was not seen as conclusive evidence.” Among the critical voices were T D Lee, who was utterly unimpressed, and Jack Steinberger, who went as far as to bet half his wine cellar that the Gargamelle result would be wrong.
The difficulty was to demonstrate that the hadronic NC signal was not due to background from neutral hadrons. “A lot of work and many different checks were done, from calculations to a full Monte Carlo simulation to a comparison between spatial distributions of charged- and neutral-current events,” explains Cavalli. “We were really happy when we published the first results from hadronic and leptonic NCs after all background checks, because we were confident in our results.” Initially the Gargamelle results were confirmed by the independent HPWF (Harvard–Pennsylvania–Wisconsin–Fermilab) experiment at Fermilab. Unfortunately, a problem with the HPWF setup led to their paper being rewritten, and a new analysis presented in November 1973 showed no sign of NCs. It was not until the following year that the modified HPWF apparatus and other experiments confirmed Gargamelle’s findings.
Additionally, the collaboration managed to tick off number two on its list of physics priorities: deep-inelastic scattering and scaling. Confirming earlier results from SLAC showing that the proton is made of point-like constituents, Gargamelle data were crucial in proving that these constituents (quarks) have charges of +2/3 and –1/3. For neutral currents, the icing on the cake came 10 years after Gargamelle’s discovery with the direct observation of the Z (and W) bosons at the SppS collider in 1983. The next milestone for CERN in understanding weak interactions came in 1990 with the precise measurement of the decay width of the Z boson at LEP, which showed that there are exactly three light neutrino species.
Legacy of a giantess
In 1977 Gargamelle was moved from the PS to the newly installed Super Proton Synchrotron (SPS). The following year, however, metal fatigue caused the chamber to crack and the experiment was decommissioned. Some of the collaboration members – including Cundy and Myatt – went to work on the nearby Big European Bubble Chamber. Also hooked up to the SPS for neutrino studies at that time were CDHS (CERN–Dortmund–Heidelberg–Saclay, officially denoted WA1), led by Steinberger, and Klaus Winter’s CHARM experiment. Operating for eight years, these large detectors collected millions of events that enabled precision studies of the structure of the charged and neutral currents as well as the structure of nucleons, and provided the first evidence for QCD via scaling violations.
The third type
The completion of the CHARM programme in 1991 brought neutrino operations at CERN to a halt for the first time in almost 30 years. But not for long. Experimental activities restarted with the search for neutrino oscillations, driven by the idea that neutrinos were an important component of dark matter in the universe. Consequently, two similarly styled short-baseline neutrino-beam experiments – CHORUS and NOMAD – were built. These next-generation detectors, which took data from 1994 to 1998 and from 1995 to 1998, respectively, joined others around the world to look for interactions of the third neutrino type, the ντ, and to search for neutrino oscillations, i.e. the change in neutrino flavour as they propagate, which was proposed in the 1950s and confirmed in 1998 by the SNO and Super-Kamiokande experiments in Canada and Japan. In 2000 the DONUT experiment at Fermilab reported the first direct evidence for ντ interactions.
CERN’s neutrino programme entered a hiatus until July 2006, when the SPS began firing an intense beam of muon neutrinos 732 km through Earth to two huge detectors – ICARUS and OPERA – located underground at Gran Sasso National Laboratory in Italy. Designed to make precision measurements of neutrino oscillations, the CERN Neutrinos to Gran Sasso (CNGS) programme observed the oscillation of muon neutrinos into tau neutrinos and was completed in 2012.
As the CERN neutrino-beam programme was wound down, a brand-new initiative to support fundamental neutrino research began. “The initial idea for a ‘neutrino platform’ at CERN was to do a short-baseline neutrino experiment involving ICARUS to check the LSND anomaly, and another to test prototypes for ‘LBNO’, which would have been a European long-baseline experiment sending beams from CERN to Pyhäsalmi in Finland to investigate neutrino oscillations,” says Dario Autiero, who has been involved in CERN’s neutrino programme since the beginning of the 1980s. “It was later decided that the former would take place at Fermilab, while for the latter the European and US visions for long-baseline experiments found a consensus in what is now DUNE (the Deep Underground Neutrino Experiment) in the US.”
A unique facility
Officially launched in 2013 as part of the update to the European strategy for particle physics, the CERN Neutrino Platform serves as a unique R&D facility for next-generation long-baseline neutrino experiments. Its most prominent project is the design, construction and testing of prototype detectors for DUNE, which will see a neutrino beam from Fermilab sent 1300 km to the SURF laboratory in South Dakota. One of the Neutrino Platform’s early successes was the refurbishment of the ICARUS detector, which is now taking data at Fermilab’s short-baseline neutrino programme. The platform is also developing key technologies for the near detector of the Tokai-to-Kamioka (T2K) neutrino facility in Japan (see p10), and has a dedicated theory working group aimed at strengthening the connections between CERN and the worldwide neutrino community. Independently, the NA61 experiment at the SPS is contributing to a better understanding of neutrino–nucleon cross sections for DUNE and T2K data.
More than 60 years after first putting the neutrino to work, CERN’s neutrino programme continues to evolve. In April 2023 a new experiment at the LHC called FASER made the first observation of neutrinos produced at a collider. Together with another new experiment, SND@LHC, FASER will enable the study of neutrinos in a new energy range and compare the production rate of all three types of neutrinos to further test the Standard Model.
As for Gargamelle, today it lies next to BEBC and other retired colleagues in the garden of Square van Hove behind CERN’s main entrance. Not many can still retell the story of the discovery of neutral currents, but those who can share it with delight. “It was very tiny, that first track from the electron, one in hundreds of thousands of pictures,” says Myatt. “Yet it justified André Lagarrigue’s vision of the large heavy-liquid bubble chamber as an ideal detector of neutrinos, combining large mass with a very finely detailed picture of the interaction. There can be no doubt that it was these features that enabled Gargamelle to make one of the most significant discoveries in the history of CERN.”