The primary goal of high-energy heavy-ion physics is the study of a new state of nuclear matter, quark–gluon plasma, a thermalised system of quarks and gluons. The study of proton–proton (pp) and proton–nucleus (pA) collisions provides the baseline for the interpretation of results from heavy-ion collisions. The study of pA collisions also helps researchers understand the effects of cold nuclear matter on the production of final-state particles.
Global observables, such as the number of produced particles (particle multiplicity) and their distribution in pseudorapidity (η), provide key information about particle-production mechanisms in these collisions. The total multiplicity is mostly determined by soft interactions, i.e. processes with small momentum transfer, which cannot be calculated using perturbative techniques and are instead modelled using non-perturbative phenomenological descriptions. For example, the distribution of the number of produced particles can be used to disentangle relative contributions to particle production from hard and soft processes using a two-component model.
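To make the two-component idea concrete, a commonly used parametrisation (an illustrative textbook form; the articles above do not specify the exact ansatz used) splits particle production into a soft part scaling with the number of participant nucleon pairs and a hard part scaling with the number of binary nucleon–nucleon collisions:

```latex
% Illustrative two-component ansatz (assumed form, not quoted from the analyses):
% x is the fraction of production from hard processes, n_pp the pp multiplicity.
\langle N_{\mathrm{ch}} \rangle \;=\;
n_{pp}\left[(1-x)\,\frac{\langle N_{\mathrm{part}}\rangle}{2}
\;+\; x\,\langle N_{\mathrm{coll}}\rangle\right]
```

Fitting the parameter x to multiplicity distributions is one way to quantify the relative hard and soft contributions.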
ALICE has recently completed the measurement of the multiplicity and pseudorapidity density distributions of inclusive photons at forward rapidity (2.3 < η < 3.9) in pp, pPb and Pbp collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair, using the photon multiplicity detector (PMD) and LHC Run 1 and Run 2 data. Since photons mostly originate from decays of neutral pions, this result complements existing measurements of charged-particle production. A comparative study of charged particles and inclusive photons can reveal possible similarities and differences in the underlying production mechanisms for charged and neutral particles.
The PMD uses the preshower technique, where a three-radiation-length-thick lead converter is sandwiched between two planes comprising an array of 184,320 gas-filled proportional counters. Photons are distinguished from hadrons in the PMD’s preshower plane by applying suitable thresholds on the number of detector cells and the energy deposited in reconstructed clusters.
The measured distributions are corrected for instrumental effects using a Bayesian unfolding method. This is the first time that the dependence of inclusive photon production on the number of nucleons participating in the pPb collision, and its scaling behaviour, have been studied at the LHC.
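Bayesian unfolding of this kind is typically the iterative procedure due to D'Agostini. The following is a minimal sketch of that algorithm, not the actual ALICE correction code; the function name, binning and response matrix are invented for illustration:

```python
import numpy as np

def bayesian_unfold(measured, response, prior=None, iterations=4):
    """Iterative Bayesian (D'Agostini-style) unfolding sketch.

    measured : observed counts per reconstructed bin, shape (m,)
    response : response[j, i] = P(reco bin j | true bin i), shape (m, n);
               each column sums to at most 1 (the detection efficiency)
    """
    n_true = response.shape[1]
    if prior is None:
        prior = np.full(n_true, 1.0 / n_true)   # flat starting prior
    efficiency = response.sum(axis=0)            # P(detected | true bin i)
    for _ in range(iterations):
        folded = response @ prior                # expected reco-level spectrum
        # Bayes' theorem: P(true bin i | reco bin j)
        posterior = (response * prior) / folded[:, None]
        # redistribute the measured counts back to truth bins
        unfolded = (posterior.T @ measured) / efficiency
        prior = unfolded / unfolded.sum()        # updated prior for next pass
    return unfolded
```

With a well-conditioned response matrix and a fully efficient detector, the procedure conserves the total number of events and converges towards the true spectrum as the iterations proceed; in practice the iteration count regularises the result.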
Figure 1 (left) compares the pseudorapidity density distribution of inclusive photons in minimum bias pp, pPb and Pbp collisions measured at forward rapidity to that of charged particles at midrapidity. The pseudorapidity distribution of inclusive photons at forward rapidity smoothly matches that of charged particles at midrapidity, indicating that the production mechanisms for charged and neutral pions are similar. Figure 1 (right) shows the pseudorapidity density distribution of inclusive photons in pPb collisions for different multiplicity classes as estimated using the energy deposited in the zero-degree calorimeter (ZNA) at beam rapidity. The multiplicity in the most central collisions reaches values twice as large as those in minimum bias events. The data and model agree within one sigma of the measurement uncertainties.
These results of inclusive photon production in pp, pPb and Pbp collisions provide valuable input for the development of theoretical models and Monte Carlo event generators, and help to establish the baseline measurements for the interpretation of PbPb collision data.
A crucial missing piece in our understanding of quantum chromodynamics (QCD) is a complete description of hadronisation in hard scattering processes with a large momentum transfer, which has now been investigated by the LHCb collaboration in proton–lead (pPb) collisions. While perturbative QCD describes reasonably well the transverse momentum (pT) dependence of heavy-quark production in proton–proton (pp) collisions, the situation is different in heavy-ion collisions due to the formation of quark–gluon plasma (QGP), which affects the behaviour of particles traversing the medium. In particular, hadronisation can be affected, modifying the relative abundance of hadrons compared to pp collisions. Several models predict an enhanced strange-quark production. Thus an abundance of strange baryons is seen as a signature of QGP formation.
The role that QGP may play in pPb collisions is currently unclear. Some models predict the formation of “QGP droplets”, which could partially induce the same behaviour, albeit less pronounced, as in PbPb collisions. In addition, in pPb interactions, “cold nuclear matter” (CNM) effects are also present that can mimic the behaviour caused by QGP but via different mechanisms. For all these reasons, a strangeness enhancement in pPb collisions would strongly indicate the formation of a deconfined medium in small systems, providing crucial information about QGP properties and formation once the CNM effects are under control.
The LHCb collaboration recently analysed pPb data for QGP effects with the twofold purpose of searching for strangeness enhancement and providing a precise understanding of the CNM effects. This search was performed by measuring the production ratio of the strange baryon Ξc+, which had never before been observed in pPb collisions, to the strangeless baryon Λc+. Using an earlier pPb sample, LHCb has also studied the production ratios of the Ds+, D+ and D0 mesons, the first being measured for the first time down to zero pT in the forward region, precisely addressing CNM effects. All measurements are performed differentially in pT and in the rapidity of the produced particle, and compared to the latest theory predictions. The Ξc+ cross section has been measured for the first time in pPb collisions, giving strong indications about the factorisation scale μ0 of the theory model. This result allows the absolute scale of the theoretical computations to be set in terms of strangeness production, a trend confirmed with even higher precision by comparing the measurement to the Λc+ production cross-section evaluated in the same decay mode. Moreover, the ratio is roughly constant as a function of pT and behaves in the same way at positive (pPb) and negative (Pbp) rapidities (see figure 1). The measurement is consistent with models incorporating initial-state effects due to gluon shadowing in nuclei, suggesting that QGP formation and the resulting strangeness enhancement have little or no effect on Ξc+ production in pPb collisions.
This interpretation is confirmed by the measurement of the Ds+, D+ and D0 cross sections and corresponding ratios in different rapidity regions. While the ratios show little enhancement within the statistical uncertainty, a large asymmetry is observed in the forward–backward production. This strongly indicates CNM effects and provides detailed constraints on models of nuclear parton distribution functions and hadron production across a very wide range of Bjorken-x (10⁻²–10⁻⁵). A strong suppression is observed for the D mesons, giving insight into the nature of the CNM effects involved. An explanation via additional final-state effects is challenged by the Ξc+ data, which are well described by models that do not include them. The production ratios of Ξc+, Ds+, D+ and D0 measured as a function of pT in pPb collisions confirm these findings. All these studies will profit from the increased statistics in pPb collisions expected from future LHC runs.
As dark matter (DM) search experiments increasingly constrain minimal models, more complex ones have gained importance, featuring a rich “dark sector” with additional particle states and often involving forces that cannot be directly felt by Standard Model (SM) particles. Nevertheless, the SM and dark sector are typically connected by a “portal” that can be experimentally probed.
The CMS collaboration recently presented the first dedicated collider search for inelastic dark matter (IDM) using the LHC Run 2 dataset. In IDM models, a small Majorana mass component is combined with a Dirac fermion field corresponding to the DM and added to the SM Lagrangian, resulting in two new DM mass eigenstates with a predominantly off-diagonal (inelastic) coupling and a small mass splitting. In addition, a dark photon (a gauge boson similar to the ordinary photon) serves as the portal to the SM. This means that at the LHC, the lighter (χ1) and heavier (χ2) DM states are simultaneously produced via a dark photon (A′). While the lighter state is stable and escapes the detector, the heavier one can travel a macroscopic distance before decaying to the lighter one and a pair of muons, which are produced away from the collision point.
This process can be probed by exploiting a striking signature: a pair of almost collinear, low-momentum and displaced muons from the χ2 decay; significant missing transverse momentum (MET) from the χ1; and an initial-state radiation jet that can be used for trigger purposes. The MET-dimuon system recoils against the high-momentum jet, so that the muons and MET are also almost collinear. This unique topology presents challenges, including the reconstruction of the displaced muons. This problem was addressed by using a dedicated reconstruction algorithm, which remains efficient even for muons produced several metres away from the collision point (figure 1, left).
The first dedicated collider search for IDM using the full dataset collected during LHC Run 2
After applying event-selection criteria targeting the expected IDM signal, the number of events is compared to the data-driven background prediction: no excess is observed. Upper limits are set on the product of the pp → A′ → χ2χ1 production cross-section and the branching fraction of the χ2 → χ1μ+μ– decay; they are shown in figure 1 (right) for a scenario with a 10% mass splitting between the χ1 and χ2 states. The y variable is roughly proportional to the interaction strength between the SM and the dark sector. Values of y above 10⁻⁷ to 10⁻⁹, depending on the mass, are excluded for masses between 3 and 80 GeV, assuming that the fine-structure constant has the same value in the dark sector as in the SM.
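In the dark-photon literature the dimensionless variable y is conventionally defined as follows (an assumed standard convention, since the article does not spell it out):

```latex
% Conventional definition of the y variable (assumed, not quoted from CMS):
% \epsilon is the kinetic-mixing parameter, \alpha_D the dark-sector
% fine-structure constant, m_{\chi_1} the lighter DM mass, m_{A'} the
% dark-photon mass.
y \;\equiv\; \epsilon^{2}\,\alpha_{D}
\left(\frac{m_{\chi_{1}}}{m_{A'}}\right)^{4}
```

This combination is useful because, for fixed mass ratios, the thermal-relic abundance of the DM depends on the model parameters essentially only through y.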
CMS physicists are looking forward to probing more complex and well-motivated DM models with novel and creative uses of the existing detector.
This book was written on the occasion of the 100th anniversary of the birth of Jack Steinberger. Edited by Jack’s former colleagues Weimin Wu and KK Phua with his daughter Julia Steinberger, it is a tribute to the important role that Jack played in particle physics at CERN and elsewhere, and also highlights many aspects of his life outside physics.
The book begins with a nice introduction by his daughter, herself a well-known scientist. She describes Jack’s family life, his hobbies, interests, passions and engagement, such as with the Pugwash conference series. The introduction is followed by a number of short essays by former friends and colleagues. The first is a transcript of an interview with Jack by Swapan Chattopadhyay in 2017. It contains recollections of Jack’s time at Fermilab, with his PhD supervisor Enrico Fermi, and concludes with his connections with Germany later in life.
Drive and leadership
The next essays highlight the essential impact that Jack had on all the experiments he participated in, mostly as spokesperson, and underline his original ideas, drive and leadership, not just professionally but also in his personal life. Stories include one by Hallstein Høgåsen, a fellow in the CERN theory department, who describes Jack's determination and perseverance in mountaineering. S Lokanathan worked with Jack as a graduate student in the early 1950s at Nevis Labs and remained in contact with him, including later when he became a professor in Jaipur. Jacques Lefrançois covers the ALEPH period, and Vera Luth the earlier kaon experiments at CERN. Italo Mannelli comments on both the early times when Jack visited Bologna to work with Marcello Conversi and Giampietro Puppi, and then turns to his work at the NA31 experiment on direct CP violation in the K⁰ system.
Gigi Rolandi emphasises the important role that Jack played in the design and construction of the ALEPH time projection chamber. Another good essay is by David N Schwartz, the son of Mel Schwartz who shared the Nobel prize with Jack and Leon Lederman. When David was born, Jack was Mel Schwartz’s thesis supervisor. As Jack was a friend of the Schwartz family, they were in regular contact all along. David describes how his father and Jack worked together and how, together with Leon Lederman, they started the famous muon neutrino experiment in 1959. As David Schwartz later became involved in arms control for the US in Geneva, he kept in contact with Jack, who had always been very passionate about arms control. David also remembers the great respect that Jack had for his thesis supervisor Enrico Fermi. The final essay is by Weimin Wu, one of the first Chinese physicists to join the international high-energy physics research community. Weimin started to work on ALEPH in 1979 and has remained a friend of the family since. He describes not only the important role that Jack played as a professor, mentor and role model, but also for establishing the link between ALEPH and the Chinese high-energy physics community.
All these essays describe the enormous qualities of Jack as a physicist and as a leader. But they also highlight his social and human strengths. The reader gets a good feeling of Jack’s interests and hobbies outside of physics, such as music, climbing, skiing and sailing. Many of the essays are also accompanied by photographs, covering all parts of his life, and they are free from formulae or complicated physics explanations.
For those who want to go deeper into the physics that Jack was involved with, the second part of the book consists of a selection of his most important and representative publications, chosen and introduced by Dieter Schlatter. The first two papers from the 1950s deal with neutral meson production by photons and a possible detection of parity non-conservation in hyperon decays. They are followed by the Nobel prize-winning paper “Possible Detection of High-Energy Neutrino Interactions and the Existence of Two Kinds of Neutrinos” from 1962, three papers on CP violation in kaon decays at CERN (including first evidence for direct CP violation by NA31), then five important publications from the CDHS neutrino experiment (officially referred to as WA1) on inclusive neutrino and anti-neutrino interactions, charged-current structure functions, gluon distributions and more. Of course, the list would not be complete without a few papers from his last experiment, ALEPH, including the seminal one on the determination of the number of light neutrino species – a beautiful follow-up of Jack’s earlier discovery that there are at least two types of neutrinos.
This agreeable and interesting book will primarily appeal to those who have met or known Jack. But others, including younger physicists, will read the book with pleasure as it gives a good impression of how physics and physicists functioned over the past 70 years. It is therefore highly recommended.
Andrew Larkoski seems to be an author with the ability to write something interesting about topics for which a lot has already been written. His previous book Elementary Particle Physics (2020, CUP) was noted for its very intuitive style of presentation, which is not easy to find in other particle-physics textbooks. With his new book on quantum mechanics, the author continues in this manner. It is a textbook for advanced undergraduate students covering most of the subjects that an introduction to the topic usually includes.
Despite the subtitle “a mathematical introduction”, there is no more maths than in any other textbook at this level. The reason for the title is presumably not the mathematical content, but the presentation style. A standard quantum-mechanics textbook usually starts with postulating Schrödinger’s equation and then proceeds immediately to applications on physical systems. For example, the very popular Introduction to Quantum Mechanics by Griffiths and Schroeter (2018, CUP) introduces Schrödinger’s equation on the first page and, after some discussion on its meaning and basic computational techniques, the first application on the infinite square well appears on page 31. Larkoski aims to build an intuitive mathematical foundation before introducing Schrödinger’s equation. Hilbert spaces are discussed in the context of linear algebra as an abstract complex vector space. Indeed, space is given at the very beginning for ideas, such as the relation between the derivative and a translation, that are useful for more advanced applications of quantum mechanics, for example in field theory, but which seldom appear in quantum-mechanics textbooks so early. Schrödinger’s equation does not appear until page 58, and the first application in a system (which, as usual, is the infinite square well) appears only on page 89.
The book is concise, which means the author has had to choose carefully which areas beyond the standard quantum-mechanics material of most undergraduate courses to include. Larkoski's choices are probably informed by his background in quantum field theory, since the path-integral formalism features strongly. Perhaps the price for keeping the book short is that some topics, such as identical particles or Fermi's golden rule, are not covered.
Some readers will find the book's delayed introduction of Schrödinger's equation unnecessary and may prefer a more direct approach to the topic, perhaps also for reasons of limited teaching time at university. I would not agree with such an assessment. Taking the time to build a mathematical basis early on helps tremendously with understanding quantum mechanics later in a course – an approach that, one hopes, will find its way into more classrooms in the near future.
The exact origin of the high-energy cosmic rays that bombard Earth remains one of the most important open questions in astrophysics. Since their discovery more than a century ago, a multitude of potential sources, both galactic and extra-galactic, have been proposed. Examples of proposed galactic sources, which are theorised to be responsible for cosmic rays with energies below the PeV range, are supernova remnants and pulsars, while blazars and gamma-ray bursts are two of many potential sources theorised to be responsible for the cosmic-ray flux at higher energies.
When identifying the origin of astrophysical photons, one can use their arrival direction. For cosmic rays this is not as straightforward, because galactic and extra-galactic magnetic fields deflect them en route. To identify the origin of cosmic rays, researchers therefore rely almost entirely on information embedded in their energy spectra. Assuming only acceleration within the shock regions of extreme astrophysical objects, the galactic cosmic-ray spectrum should follow a simple, single power law with an index between –2.6 and –2.7. However, thanks to measurements by a range of dedicated instruments including AMS, ATIC, CALET, CREAM and HAWC, we know the spectrum to be more complex. Furthermore, different types of cosmic rays, such as protons and the nuclei of helium or oxygen, have been shown to exhibit different spectral features, with breaks at different energies.
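Written out, the single power law quoted above takes the form:

```latex
% Differential flux of galactic cosmic rays under simple shock
% acceleration: a single power law with spectral index gamma.
\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\; E^{-\gamma},
\qquad \gamma \approx 2.6\text{–}2.7
```

Spectral "breaks" are then energies at which the index γ changes: a hardening means γ decreases (a flatter spectrum), a softening means γ increases.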
New measurements by the space-based Chinese–European Dark Matter Particle Explorer (DAMPE) provide detailed insights into the various spectral breaks in the combined proton and helium spectrum. Clear hints of spectral breaks had already been seen by various balloon- and space-based experiments at low energies (below about 1 TeV), and by ground-based air-shower detectors at high energies (above several TeV). However, in the region where space-based measurements start to suffer from a lack of statistics, ground-based instruments suffer from low sensitivity, resulting in relatively large uncertainties. Furthermore, the completely different ways in which space- and ground-based instruments measure the energy (directly in the former, via air-shower reconstruction in the latter) made measurements that clearly connect the two all the more important. DAMPE has now produced detailed spectra in the 46 GeV to 316 TeV energy range, thereby filling most of the gap. The results confirm both a spectral hardening around 100 GeV and a subsequent softening around 10 TeV, which connects well with a second spectral bump previously observed by ARGO-YBJ+WFCT at an energy of several hundred TeV (see figure).
The complex spectral features of high-energy cosmic rays can be explained in various ways. One possibility is through the presence of different types of cosmic-ray sources in our galaxy; one population produces cosmic rays with energies up to PeV, while a second only produces cosmic rays with energies up to tens of TeV, for example. A second possibility is that the spectral features are a result of a nearby single source from which we observe the cosmic rays directly before they become diffused in the galactic magnetic field. Examples of such a nearby source could be the Geminga pulsar, or the young supernova remnant Vela.
In the near future, novel data and analysis methods will likely allow researchers to distinguish between these two theories. One important source of this data is the LHAASO experiment in China, which is currently taking detailed measurements of cosmic rays in the 100 TeV to EeV range. Furthermore, thanks to ever-increasing statistics, the anisotropy of the arrival direction of the cosmic rays will also become a method to compare different models, in particular to identify nearby sources. The important link between direct and indirect measurements presented in this work thereby paves the way to connecting the large amounts of upcoming data to the theories on the origins of cosmic rays.
In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties.
To reduce these uncertainties further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino-beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector for the next-generation long-baseline neutrino-oscillation experiment Hyper-Kamiokande.
Meanwhile, R&D and testing of the prototype detectors for the DUNE experiment at the Long-Baseline Neutrino Facility at Fermilab/SURF in the US are entering their final stages.
The discovery of the Higgs boson in 2012 unleashed a detailed programme of measurements by ATLAS and CMS, which has confirmed that its couplings are consistent with those predicted by the Standard Model (SM). However, several Higgs-boson decay channels have such small predicted branching fractions that they have not yet been observed. Proceeding via higher-order loops, these channels also provide indirect probes of possible physics beyond the SM. ATLAS and CMS have now teamed up to report the first evidence for the decay H → Zγ, presenting the combined result at the Large Hadron Collider Physics conference in Belgrade in May.
The SM predicts that approximately 0.15% of Higgs bosons produced at the LHC will decay in this way, but some theories beyond the SM predict a different decay rate. Examples include models where the Higgs boson is a neutral scalar of different origin, or a composite state. Different branching fractions are also expected for models with additional colourless charged scalars, leptons or vector bosons that couple to the Higgs boson, due to their contributions via loop corrections.
“Each particle has a special relationship with the Higgs boson, making the search for rare Higgs decays a high priority,” says ATLAS physics coordinator Pamela Ferrari. “Through a meticulous combination of the individual results of ATLAS and CMS, we have made a step forward towards unravelling yet another riddle of the Higgs boson.”
We have made a step forward towards unravelling yet another riddle of the Higgs boson
Previously, ATLAS and CMS independently conducted extensive searches for H → Zγ. Both used the decay of the Z boson into pairs of electrons or muons, which occurs in about 6.6% of cases, to identify H → Zγ events. In these searches, the collision events associated with this decay would appear as a narrow peak in the Zγ invariant-mass distribution on top of a smooth background.
In the new study, ATLAS and CMS combined the data each collected during the second run of the LHC, in 2015–2018, to significantly increase the statistical precision and reach of their searches. This collaborative effort resulted in the first evidence of the Higgs boson decaying into a Z boson and a photon, with a statistical significance of 3.4σ. The measured signal rate relative to the SM prediction was found to be 2.2 ± 0.7, in agreement with the theoretical expectation.
“The existence of new particles could have very significant effects on rare Higgs decay modes,” says CMS physics coordinator Florencia Canelli. “This study is a powerful test of the Standard Model. With the ongoing third run of the LHC and the future High-Luminosity LHC, we will be able to improve the precision of this test and probe ever rarer Higgs decays.”
At a CERN seminar on 13 June, the LHCb collaboration presented the world’s most precise measurements of two key parameters relating to CP violation. Based on the full LHCb dataset collected during LHC Runs 1 and 2, the first concerns the observable sin2β while the second concerns the CP-violating phase φs – both of which are highly sensitive to potential new-physics contributions.
CP violation was first observed in 1964 in kaon mixing, and confirmed among B mesons in 2001 by the e+e– B-factory experiments BaBar and Belle. The latter enabled the first measurements of sin2β and were a vital confirmation of the Standard Model (SM). In the SM, CP violation arises due to a complex phase in the Cabibbo–Kobayashi–Maskawa mixing matrix, which, being unitary, defines a triangle in the complex plane: one side is defined to have unit length, while the other two sides and three angles must be inferred via measurements of certain hadron decays. If the measurements do not provide a consistent description of the triangle, it would hint that something is amiss in the SM.
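Concretely, the triangle relevant to B0 decays follows from the unitarity of the CKM matrix; in the standard textbook convention (not spelled out in the article) it reads:

```latex
% Unitarity condition defining the "B-physics" triangle, and the
% conventional definition of its angle beta (standard convention,
% assumed rather than quoted from the seminar).
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0,
\qquad
\beta \;\equiv\; \arg\!\left(-\,\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right)
```

Each term in the sum is a side of the triangle in the complex plane; overconstraining the sides and angles with independent measurements tests the consistency of the SM picture.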
The measurement of sin2β, which determines the angle β of the unitarity triangle, is more difficult at a hadron collider than at an e+e– collider. However, the large data samples available at the LHC and the optimised design of the LHCb experiment have enabled a measurement that is twice as precise as the previous best result, from Belle. The LHCb team used decays of B0 mesons to J/ψ K0S, which can proceed either directly or after the B0 first oscillates into its antimatter partner. The interference between the amplitudes for the two decay paths results in a time-dependent asymmetry between the decay-time distributions of the B0 and B̄0. The amplitude of this oscillation, and thus the magnitude of the CP violation present, is a measurement of sin2β, for which LHCb finds a value of 0.716 ± 0.013 ± 0.008, in agreement with predictions.
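In the standard notation (a textbook expression, not quoted from the LHCb seminar), the time-dependent asymmetry between B̄0 and B0 decays to this final state is, to good approximation:

```latex
% Time-dependent CP asymmetry for B0 -> J/psi K0_S (standard form;
% the direct-CP-violation term, which is negligible here, is dropped).
A_{CP}(t) \;=\;
\frac{\Gamma(\bar{B}^{0}(t)\to J/\psi K^{0}_{S}) - \Gamma(B^{0}(t)\to J/\psi K^{0}_{S})}
     {\Gamma(\bar{B}^{0}(t)\to J/\psi K^{0}_{S}) + \Gamma(B^{0}(t)\to J/\psi K^{0}_{S})}
\;\approx\; \sin 2\beta \,\sin(\Delta m_{d}\, t)
```

Here Δmd is the B0–B̄0 oscillation frequency, so the amplitude of the measured oscillation in decay time directly yields sin2β.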
Based on an analysis of B0s → J/ψ K+K– decays, LHCb also presented the world's best measurement of the CP-violating phase φs, which plays a similar role in B0s-meson decays as sin2β does in B0 decays. As for B0 mesons, a B0s may decay directly or oscillate into a B̄0s and then decay. CP violation causes these decays to proceed at slightly different rates, manifesting itself as a non-zero value of φs due to the interference between mixing and decay. The predicted value of φs is about –0.037 rad, but new-physics effects, even if small, could change its value significantly.
A detailed study of the angular distribution of the B0s decay products using the Run 1 and 2 data samples enabled LHCb to measure this decay-time-dependent CP asymmetry: φs = –0.039 ± 0.022 ± 0.006 rad. Representing the most precise single measurement to date, it is consistent with previous measurements and with the SM expectation. The precision measurement of φs is one of LHCb's most important goals, said co-presenter Vukan Jevtic (TU Dortmund): "Together with sin2β, the new LHCb result marks an important advance in the quest to understand the nature and origin of CP violation."
With both results currently limited by statistics, the collaboration is looking forward to data from the current and future LHC runs. “In Run 3 LHCb will collect a larger data sample taking advantage of the new upgraded LHCb detector,” concluded co-presenter Peilian Li (CERN). “This will allow even higher precision and therefore the possibility to detect, through these key quantities, the manifestation of new-physics effects.”
I have always been interested in what one might call existential questions: those that were originally theological or philosophical, but are now science, such as “why are things the way they are?” When I was young, for me it was a toss-up: do I go into particle physics or cosmology? At the time, experimental cosmology was less developed, so it made sense to go towards particle physics.
What has been your research focus?
When I was a graduate student in college, I was intrigued by the idea of quantum mechanical spin. I didn’t understand spin and I still don’t. It’s a perplexing and non-intuitive concept. It turned out the university I went to was working on it. When I got there, however, I ended up doing a fixed-target jet-photoproduction experiment. My thesis experiment was small, but it was a wonderful training ground because I was able to do everything. I built the experiment, wrote the data acquisition and all of the analysis software. Then I got back on track with the big questions, so colliders with the highest energies were the way to go. Back then it was the Tevatron and I joined DØ. When the LHC came online it was an opportunity to transition to CMS.
Why and when did you decide to get into communication?
It has to do with my family background. Many physicists come from families where one or both parents are already from the field. But I come from an academically impoverished, blue-collar background, so I had no direct mentors for physics. However, I was able to read popular books from the generation before me, by figures such as Carl Sagan, Isaac Asimov or George Gamow. They guided me into science. I’m essentially paying that back. I feel it’s sort of my duty because I have some skill at it and because I expect that there is some young person in some small town who is in a similar position as I was in, who doesn’t know that they want to be a scientist. And, frankly, I enjoy it. I am also worried about the antiscience sentiment I see in society, from the antivaccine movement to climate-change denial to 5G radiation fears. If scientists do not speak up, the antiscience voices are the only ones that will be heard. And if public policy is based on these false narratives, the damage to society can be severe.
Scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists
How did you start doing YouTube videos?
I had got to a point in my career where I was fairly established, and I could credibly think of other things. When you’re young, you are urged to focus entirely on research, because if you don’t, it could harm your research career. I had already been writing for Fermilab Today and I kept suggesting doing videos, as YouTube was becoming a thing. After a couple of years one of the videographers said, “You know, Don, you’re actually pretty good at explaining this stuff. We should do a video.” My first video came out a year before the Higgs discovery, in July 2011. It was on the Higgs boson. When the video came out, a few of the bigger science outlets picked it up and during the build-up to the Higgs excitement it got more and more views. By now it has more than three million clicks, which for a science channel is a lot. We do serious science in our videos, but there is also some light-heartedness in them.
Do you try to make the videos funny?
This has more to do with me not taking anything too seriously. I have found that irreverent humour can be disarming, and people like to be entertained when they are learning. For example, one video asked, “What is the real origin of mass?” Most people think that the Higgs boson gives everything its mass, but most of it really comes from QCD: the energy stored inside nucleons. In any event, in this video I start out with a joke about the Higgs boson going into a Catholic church. The Higgs says, “I’m losing my faith,” and the priest replies: “You can’t leave the church. Without you, how can we have mass?” For a lot of YouTube channels, viewership is not just about the material; it’s about the viewer liking the presenter. I’d say people who like our channel appreciate the reliable science facts but also subscribe for the humour. If a viewer doesn’t like a guy who tells terrible dad jokes, they just go to another channel.
During the Covid-19 pandemic your videos switched to “Subatomic stories”. How do they differ?
Most of my videos are done in a studio on a green screen so that we can put visuals in the background, but that was not possible during the lockdown. We then set up in my living room. I had an old DSLR camera and a recorder, and would record the video and the audio, then send the files to my videographer, Ian Krass, who does all the magic. Our usual videos don’t have a real story arc; they are just a series of topics. With “Subatomic stories” we began with a plan. I organised it as a sort of self-contained course, beginning with basic things like the Standard Model, the weak force, the strong force and so on. Towards the end, we introduced more diverse, current research topics and a few esoteric theoretical ideas. Later, after “Subatomic stories”, I continued to film in a green-screen studio I built in my basement. We’ve returned to the Fermilab studio, but the basement one is waiting should the need arise.
You are quite the public face of Fermilab. How does this relationship work?
It’s working wonderfully. I have no complaints. I can’t say that was always true in the past, because, when you’re young, you’re advised to focus on your research; it was like that for me. At the time there was some hostility towards science communicators. If you did outreach, you weren’t really considered a serious scientist, and that’s still true to a degree, although it is getting better. For me, it got to the point where people were just used to me doing it, and they tolerated it. As long as it didn’t bother my research, I could do this on my time. Some people bowl, some people knit, some people hike. I made videos. As I started becoming more successful, the laboratory started embracing the effort and even encouraged me to spend some of my work day on it. I was glad because in the same way that we encourage certain scientists to specialise in AI or computational skills or detector skills, I think that we as a field need to cultivate and encourage those scientists who are good at communicating our work. The bottom line is that I am very happy with the lab. I would like to see other laboratories encourage at least a small subset of scientists, those who are enthusiastic about outreach, to give them the time and the resources to do it, because there’s a huge payoff.
What are your favourite and least favourite things about doing outreach?
I think I’m making an impact. For instance, I’ve had graduate students or even postdocs ask me to autograph a book, saying, “I went into physics because I read this book.” Occasionally I’m recognised in public, but the viewership numbers tell the story. If a video does poorly, it will get 50,000 views, and a good video, or maybe just a lucky one, can get millions. The message is getting out. As for the least favourite part, lately it is coming up with ideas. I’ve covered nearly every hot topic, so now I am thinking of revisiting early topics in a new way.
What would be your message to physicists who don’t have time or see the need for science communication?
Let’s start with the second type, those who don’t see the value of it. I would remind them that, in essentially any country, if you want to do research, your funding comes from taxpayers. They work hard for their money and they certainly don’t enjoy paying taxes, so if you want to ask them to support this thing that you’re interested in, you need to convince them that it’s important and interesting. For those who don’t have time, I’m empathetic. Depending on your supervisor, doing science communication can harm a young career. In that case, I think the community should at least support a small group of people who do outreach. If nothing else, the scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists.
Where do you see particle physics headed and the role of outreach?
The problem is that the Standard Model works well, but not perfectly. Consequently, we need to look for anomalies both at the LHC and with other precision experiments. I imagine that the next decade will resemble what we are doing now. I think it would be of very high value if we could spend some money on thinking about how to make stronger magnets and advanced acceleration technologies, because that’s the only way we’re going to get a very large increase in energy. The scientists know what to do. We are developing the techniques and technologies needed to move forward. On the communication side, we just need to remind the public that the questions particle physicists and cosmologists are trying to answer are timeless. They’re the questions many children ask. It’s a fascinating universe out there and a good science story can rekindle anyone’s sense of child-like wonder.