Topics

A tribute to a great physicist

Jack Steinberger

This book was written on the occasion of the 100th anniversary of the birth of Jack Steinberger. Edited by Jack’s former colleagues Weimin Wu and KK Phua with his daughter Julia Steinberger, it is a tribute to the important role that Jack played in particle physics at CERN and elsewhere, and also highlights many aspects of his life outside physics.

The book begins with a nice introduction by his daughter, herself a well-known scientist. She describes Jack’s family life, his hobbies, interests and passions, and his engagement with causes such as the Pugwash conference series. The introduction is followed by a number of short essays by former friends and colleagues. The first is a transcript of an interview with Jack by Swapan Chattopadhyay in 2017. It contains recollections of Jack’s time in Chicago with his PhD supervisor Enrico Fermi, and concludes with his connections with Germany later in life.

Drive and leadership

The next essays highlight the essential impact that Jack had on all the experiments he participated in, mostly as spokesperson, and underline his original ideas, drive and leadership, not just professionally but also in his personal life. Stories include one by Hallstein Høgåsen, a fellow in the CERN theory department, who describes Jack’s determination and perseverance in mountaineering. S Lokanathan worked with Jack as a graduate student at Nevis Labs in the early 1950s and remained in contact with him, including later on when he became a professor in Jaipur. Jacques Lefrançois covers the ALEPH period, and Vera Luth the earlier kaon experiments at CERN. Italo Mannelli recalls the early times when Jack visited Bologna to work with Marcello Conversi and Giampietro Puppi, then turns to his work on the NA31 experiment on direct CP violation in the K0 system.

Gigi Rolandi emphasises the important role that Jack played in the design and construction of the ALEPH time projection chamber. Another good essay is by David N Schwartz, the son of Mel Schwartz who shared the Nobel prize with Jack and Leon Lederman. When David was born, Jack was Mel Schwartz’s thesis supervisor. As Jack was a friend of the Schwartz family, they were in regular contact all along. David describes how his father and Jack worked together and how, together with Leon Lederman, they started the famous muon neutrino experiment in 1959. As David Schwartz later became involved in arms control for the US in Geneva, he kept in contact with Jack, who had always been very passionate about arms control. David also remembers the great respect that Jack had for his thesis supervisor Enrico Fermi. The final essay is by Weimin Wu, one of the first Chinese physicists to join the international high-energy physics research community. Weimin started to work on ALEPH in 1979 and has remained a friend of the family since. He describes not only the important role that Jack played as a professor, mentor and role model, but also his part in establishing the link between ALEPH and the Chinese high-energy physics community.

Memorial Volume for Jack Steinberger

All these essays describe the enormous qualities of Jack as a physicist and as a leader. But they also highlight his social and human strengths. The reader gets a good feeling of Jack’s interests and hobbies outside of physics, such as music, climbing, skiing and sailing. Many of the essays are also accompanied by photographs, covering all parts of his life, and they are free from formulae or complicated physics explanations.

For those who want to go deeper into the physics that Jack was involved with, the second part of the book consists of a selection of his most important and representative publications, chosen and introduced by Dieter Schlatter. The first two papers from the 1950s deal with neutral meson production by photons and a possible detection of parity non-conservation in hyperon decays. They are followed by the Nobel prize-winning paper “Observation of High-Energy Neutrino Reactions and the Existence of Two Kinds of Neutrinos” from 1962, three papers on CP violation in kaon decays at CERN (including the first evidence for direct CP violation by NA31), then five important publications from the CDHS neutrino experiment (officially referred to as WA1) on inclusive neutrino and anti-neutrino interactions, charged-current structure functions, gluon distributions and more. Of course, the list would not be complete without a few papers from his last experiment, ALEPH, including the seminal one on the determination of the number of light neutrino species – a beautiful follow-up to Jack’s earlier discovery that there are at least two types of neutrinos.

This agreeable and interesting book will primarily appeal to those who have met or known Jack. But others, including younger physicists, will read the book with pleasure as it gives a good impression of how physics and physicists functioned over the past 70 years. It is therefore highly recommended.

Quantum Mechanics: A Mathematical Introduction

Quantum Mechanics

Andrew Larkoski seems to be an author with the ability to write something interesting about topics on which a lot has already been written. His previous book Elementary Particle Physics (2020, CUP) was noted for its very intuitive style of presentation, which is not easy to find in other particle-physics textbooks. With his new book on quantum mechanics, the author continues in this manner. It is a textbook for advanced undergraduate students covering most of the subjects that an introduction to the topic usually includes.

Despite the subtitle “a mathematical introduction”, there is no more maths than in any other textbook at this level. The reason for the title is presumably not the mathematical content, but the presentation style. A standard quantum-mechanics textbook usually starts with postulating Schrödinger’s equation and then proceeds immediately to applications to physical systems. For example, the very popular Introduction to Quantum Mechanics by Griffiths and Schroeter (2018, CUP) introduces Schrödinger’s equation on the first page and, after some discussion on its meaning and basic computational techniques, the first application, to the infinite square well, appears on page 31. Larkoski aims to build an intuitive mathematical foundation before introducing Schrödinger’s equation. Hilbert spaces are discussed in the context of linear algebra as an abstract complex vector space. Indeed, space is given at the very beginning to ideas, such as the relation between the derivative and a translation, that are useful for more advanced applications of quantum mechanics, for example in field theory, but which seldom appear in quantum-mechanics textbooks so early. Schrödinger’s equation does not appear until page 58, and the first application to a physical system (which, as usual, is the infinite square well) appears only on page 89.
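For instance, the relation between the derivative and a translation that Larkoski introduces early on can be stated compactly (a standard identity, written out here in LaTeX for concreteness rather than quoted from the book):

\[
\psi(x+a) \;=\; e^{a\,\mathrm{d}/\mathrm{d}x}\,\psi(x) \;=\; e^{\,i a\hat{p}/\hbar}\,\psi(x),
\qquad \hat{p} \;=\; -\,i\hbar\,\frac{\mathrm{d}}{\mathrm{d}x},
\]

so the momentum operator generates spatial translations – exactly the kind of structure that later reappears in quantum field theory.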

The book is concise in length, which means that the author has had to carefully choose the areas that are beyond the standard quantum-mechanics material covered in most undergraduate courses. Larkoski’s choices are probably informed by his background in quantum field theory, since path integral formalism features strongly. Perhaps the price for keeping the book short is that there are topics, such as identical particles or Fermi’s golden rule, that are not covered.

Some readers will find the book’s approach of building mathematical foundations before physical applications unnecessary, and may prefer a more direct route into the topic – a preference that may also be related to the length of the teaching period at their university. I would not agree with such an assessment. Taking the time to build a basis early on helps tremendously with understanding quantum mechanics later in a course – an approach that will hopefully find its way into more classrooms in the near future.

DAMPE confirms cosmic-ray complexity

Energy spectra measured by DAMPE

The exact origin of the high-energy cosmic rays that bombard Earth remains one of the most important open questions in astrophysics. Since their discovery more than a century ago, a multitude of potential sources, both galactic and extra-galactic, have been proposed. Proposed galactic sources, which are theorised to be responsible for cosmic rays with energies below the PeV range, include supernova remnants and pulsars, while blazars and gamma-ray bursts are two of many potential sources of the cosmic-ray flux at higher energies.

The origin of astrophysical photons can be identified from their arrival direction. For cosmic rays this is not as straightforward, because galactic and extra-galactic magnetic fields deflect them on their way to Earth. To identify the origin of cosmic rays, researchers therefore rely almost entirely on information embedded in their energy spectra. Assuming only acceleration within the shock regions of extreme astrophysical objects, the galactic cosmic-ray spectrum should follow a simple, single power law with an index between –2.6 and –2.7. However, thanks to measurements by a range of dedicated instruments including AMS, ATIC, CALET, CREAM and HAWC, we know the spectrum to be more complex. Furthermore, different types of cosmic rays, such as protons and the nuclei of helium or oxygen, have been shown to exhibit different spectral features, with breaks at different energies.
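For orientation, the simple power law referred to here, together with one common way of generalising it to describe a spectral break (a generic parametrisation widely used in cosmic-ray fits, not a form taken from the DAMPE analysis itself), can be written as

\[
\frac{\mathrm{d}N}{\mathrm{d}E} \;=\; \Phi_0\left(\frac{E}{E_0}\right)^{-\gamma},
\qquad
\frac{\mathrm{d}N}{\mathrm{d}E} \;=\; \Phi_0\left(\frac{E}{E_0}\right)^{-\gamma_1}
\left[1+\left(\frac{E}{E_\mathrm{b}}\right)^{s}\right]^{(\gamma_1-\gamma_2)/s},
\]

where γ ≈ 2.6–2.7 in the single power-law case, E_b is the break energy, γ_1 and γ_2 are the spectral indices below and above the break, and s controls the smoothness of the transition.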

New measurements by the space-based Chinese–European Dark Matter Particle Explorer (DAMPE) provide detailed insights into the various spectral breaks in the combined proton and helium spectra. Clear hints of spectral breaks had already been seen by various balloon- and space-based experiments at low energies (below about 1 TeV), and by ground-based air-shower detectors at higher energies. However, in the region where space-based measurements start to run out of statistics, ground-based instruments suffer from low sensitivity, resulting in relatively large uncertainties. Furthermore, the completely different ways in which space- and ground-based instruments measure the energy (directly in the former, and via air-shower reconstruction in the latter) made it important to have measurements that clearly connect the two. DAMPE has now produced detailed spectra in the 46 GeV to 316 TeV energy range, thereby filling most of the gap. The results confirm both a spectral hardening around 100 GeV and a subsequent spectral softening around 10 TeV, which connects well with a second spectral bump previously observed by ARGO-YBJ+WFCT at an energy of several hundred TeV (see figure).

The complex spectral features of high-energy cosmic rays can be explained in various ways. One possibility is the presence of different types of cosmic-ray sources in our galaxy: one population produces cosmic rays with energies up to the PeV scale, while a second produces cosmic rays only up to tens of TeV, for example. A second possibility is that the spectral features are the result of a single nearby source whose cosmic rays we observe directly, before they are diffused by the galactic magnetic field. Examples of such a nearby source could be the Geminga pulsar or the young Vela supernova remnant.

In the near future, novel data and analysis methods will likely allow researchers to distinguish between these two theories. One important source of this data is the LHAASO experiment in China, which is currently taking detailed measurements of cosmic rays in the 100 TeV to EeV range. Furthermore, thanks to ever-increasing statistics, the anisotropy of the arrival direction of the cosmic rays will also become a method to compare different models, in particular to identify nearby sources. The important link between direct and indirect measurements presented in this work thereby paves the way to connecting the large amounts of upcoming data to the theories on the origins of cosmic rays. 

A new TPC for T2K upgrade

In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties. 

To improve the latter further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector for the next-generation long-baseline neutrino oscillation experiment Hyper-Kamiokande.

Meanwhile, R&D and testing for the prototype detectors for the DUNE experiment at the Long Baseline Neutrino Facility at Fermilab/SURF in the US are entering their final stages.

ATLAS and CMS find first evidence for H → Zγ

The discovery of the Higgs boson in 2012 unleashed a detailed programme of measurements by ATLAS and CMS which have confirmed that its couplings are consistent with those predicted by the Standard Model (SM). However, several Higgs-boson decay channels have such small predicted branching fractions that they have not yet been observed. Involving higher-order loops, these channels also provide indirect probes of possible physics beyond the SM. ATLAS and CMS have now teamed up to report the first evidence of the decay H → Zγ, presenting the combined result at the Large Hadron Collider Physics conference in Belgrade in May.

The SM predicts that approximately 0.15% of Higgs bosons produced at the LHC will decay in this way, but some theories beyond the SM predict a different decay rate. Examples include models where the Higgs boson is a neutral scalar of different origin, or a composite state. Different branching fractions are also expected for models with additional colourless charged scalars, leptons or vector bosons that couple to the Higgs boson, due to their contributions via loop corrections. 

“Each particle has a special relationship with the Higgs boson, making the search for rare Higgs decays a high priority,” says ATLAS physics coordinator Pamela Ferrari. “Through a meticulous combination of the individual results of ATLAS and CMS, we have made a step forward towards unravelling yet another riddle of the Higgs boson.”

We have made a step forward towards unravelling yet another riddle of the Higgs boson

Previously, ATLAS and CMS independently conducted extensive searches for H → Zγ. Both used the decays of a Z boson into pairs of electrons or muons, which occur in about 6.6% of cases, to identify H → Zγ events. In these searches, the collision events associated with this decay would be identified as a narrow peak over a smooth background of events.

In the new study, ATLAS and CMS combined the data collected during the second run of the LHC in 2015–2018 to significantly increase the statistical precision and reach of their searches. This collaborative effort resulted in the first evidence of the Higgs-boson decay into a Z boson and a photon, with a statistical significance of 3.4σ. The measured signal rate relative to the SM prediction was found to be 2.2 ± 0.7, in agreement with the theoretical expectation from the SM.
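As a back-of-the-envelope illustration of what these numbers mean (my own arithmetic using SciPy, not part of the ATLAS/CMS analysis), the quoted significance and signal strength translate as follows:

```python
# Illustrative conversion of the quoted H -> Z gamma results.
from scipy.stats import norm

# One-sided p-value corresponding to a 3.4 sigma excess over background.
p_excess = norm.sf(3.4)
print(f"p-value for 3.4 sigma: {p_excess:.1e}")   # ~3e-4

# Compatibility of the measured signal strength 2.2 +/- 0.7 with the SM value 1,
# naively treating the uncertainty as Gaussian.
pull = (2.2 - 1.0) / 0.7
print(f"tension with the SM: {pull:.1f} sigma")   # ~1.7 sigma
```

The roughly 1.7σ difference from the SM value is what allows the measured rate to be described as being in agreement with the theoretical expectation.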

“The existence of new particles could have very significant effects on rare Higgs decay modes,” says CMS physics coordinator Florencia Canelli. “This study is a powerful test of the Standard Model. With the ongoing third run of the LHC and the future High-Luminosity LHC, we will be able to improve the precision of this test and probe ever rarer Higgs decays.”

LHCb sets record precision on CP violation

Comparison of sin2β measurements

At a CERN seminar on 13 June, the LHCb collaboration presented the world’s most precise measurements of two key parameters relating to CP violation. Based on the full LHCb dataset collected during LHC Runs 1 and 2, the first concerns the observable sin2β while the second concerns the CP-violating phase φs – both of which are highly sensitive to potential new-physics contributions. 

CP violation was first observed in 1964 in kaon mixing, and confirmed among B mesons in 2001 by the e+e– B-factory experiments BaBar and Belle. These experiments enabled the first measurements of sin2β and provided a vital confirmation of the Standard Model (SM). In the SM, CP violation arises due to a complex phase in the Cabibbo–Kobayashi–Maskawa mixing matrix, which, being unitary, defines a triangle in the complex plane: one side is defined to have unit length, while the other two sides and three angles must be inferred via measurements of certain hadron decays. If the measurements do not provide a consistent description of the triangle, it would hint that something is amiss in the SM.

The measurement of sin2β, which determines the angle β in the unitarity triangle, is more difficult at a hadron collider than it is at an e+e– collider. However, the large data samples available at the LHC and the optimised design of the LHCb experiment have enabled a measurement that is twice as precise as the previous best result from Belle. The LHCb team used decays of B0 mesons to J/ψ K0S, which can proceed either directly or by first oscillating into their antimatter partners. The interference between the amplitudes for the two decay paths results in a time-dependent asymmetry between the decay-time distributions of the B0 and its antiparticle. The amplitude of this oscillation, and thus the magnitude of the CP violation present, measures sin2β, for which LHCb finds a value of 0.716 ± 0.013 ± 0.008, in agreement with predictions.
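In the usual notation, and neglecting direct CP violation in this channel, the measured asymmetry takes the standard textbook form (quoted here for clarity; the fit model actually used by LHCb is more detailed):

\[
A_{CP}(t) \;=\;
\frac{\Gamma\big(\bar{B}^0(t)\to J/\psi K^0_S\big)-\Gamma\big(B^0(t)\to J/\psi K^0_S\big)}
     {\Gamma\big(\bar{B}^0(t)\to J/\psi K^0_S\big)+\Gamma\big(B^0(t)\to J/\psi K^0_S\big)}
\;\simeq\; \sin 2\beta \,\sin(\Delta m_d\, t),
\]

where Δm_d is the B0 oscillation frequency and t the decay time.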

Based on an analysis of B0s → J/ψ K+K– decays, LHCb also presented the world’s best measurement of the CP-violating phase φs, which plays a similar role in B0s meson decays as sin2β does in B0 decays. As for B0 mesons, a B0s may decay directly or oscillate into its antiparticle and then decay. CP violation causes these decays to proceed at slightly different rates, manifesting itself as a non-zero value of φs due to the interference between mixing and decay. The predicted value of φs is about –0.037 rad, but new-physics effects, even if small, could change its value significantly.

A detailed study of the angular distribution of B0s decay products using the Run 1 and 2 data samples enabled LHCb to measure this decay-time-dependent CP asymmetry, obtaining φs = –0.039 ± 0.022 ± 0.006 rad. Representing the most precise single measurement to date, it is consistent with previous measurements and with the SM expectation. The precision measurement of φs is one of LHCb’s most important goals, said co-presenter Vukan Jevtic (TU Dortmund): “Together with sin2β, the new LHCb result marks an important advance in the quest to understand the nature and origin of CP violation.”

With both results currently limited by statistics, the collaboration is looking forward to data from the current and future LHC runs. “In Run 3 LHCb will collect a larger data sample taking advantage of the new upgraded LHCb detector,” concluded co-presenter Peilian Li (CERN). “This will allow even higher precision and therefore the possibility to detect, through these key quantities, the manifestation of new-physics effects.”

Physicist by day, YouTuber by night

Don Lincoln

What got you into physics?

I have always been interested in what one might call existential questions: those that were originally theological or philosophical, but are now science, such as “why are things the way they are?” When I was young, for me it was a toss-up: do I go into particle physics or cosmology? At the time, experimental cosmology was less developed, so it made sense to go towards particle physics.

What has been your research focus?

When I was a graduate student in college, I was intrigued by the idea of quantum mechanical spin. I didn’t understand spin and I still don’t. It’s a perplexing and non-intuitive concept. It turned out the university I went to was working on it. When I got there, however, I ended up doing a fixed-target jet-photoproduction experiment. My thesis experiment was small, but it was a wonderful training ground because I was able to do everything. I built the experiment, wrote the data acquisition and all of the analysis software. Then I got back on track with the big questions, so colliders with the highest energies were the way to go. Back then it was the Tevatron and I joined DØ. When the LHC came online it was an opportunity to transition to CMS.

Why and when did you decide to get into communication?

It has to do with my family background. Many physicists come from families where one or both parents are already from the field. But I come from an academically impoverished, blue-collar background, so I had no direct mentors for physics. However, I was able to read popular books from the generation before me, by figures such as Carl Sagan, Isaac Asimov or George Gamow. They guided me into science. I’m essentially paying that back. I feel it’s sort of my duty because I have some skill at it and because I expect that there is some young person in some small town who is in a similar position as I was in, who doesn’t know that they want to be a scientist. And, frankly, I enjoy it. I am also worried about the antiscience sentiment I see in society, from the antivaccine movement to climate-change denial to 5G radiation fears. If scientists do not speak up, the antiscience voices are the only ones that will be heard. And if public policy is based on these false narratives, the damage to society can be severe. 

Scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists

How did you start doing YouTube videos?

I had got to a point in my career where I was fairly established, and I could credibly think of other things. When you’re young, you are urged to focus entirely on research, because if you don’t, it could harm your research career. I had already been writing for Fermilab Today and I kept suggesting doing videos, as YouTube was becoming a thing. After a couple of years one of the videographers said, “You know, Don, you’re actually pretty good at explaining this stuff. We should do a video.” My first video came out a year before the Higgs discovery, in July 2011. It was on the Higgs boson. When the video came out, a few of the bigger science outlets picked it up and during the build-up to the Higgs excitement it got more and more views. By now it has more than three million clicks, which for a science channel is a lot. We do serious science in our videos, but there is also some light-heartedness in them.

Do you try to make the videos funny? 

This has more to do with me not taking anything seriously. I have found that irreverent humour can be disarming. People like to be entertained when they are learning. For example, one video was about “What was the real origin of mass?” Most people think that the Higgs boson is giving mass, but it’s really QCD. It’s the energy stored inside nucleons. In any event, in this video I start out with a joke about going into a Catholic church. The Higgs boson tries to say “I’m losing my faith,” and the priest replies: “You can’t leave the church. Without you how can we have mass?” For a lot of YouTube channels, viewership is not just about the material. It’s about the viewer liking the presenter. I’d say people who like our channel appreciate the combination of reliable science facts, but also subscribe for the humour. If a viewer doesn’t like a guy who does terrible dad jokes, they just go to another channel.

During the Covid-19 pandemic your videos switched to “Subatomic stories”. How do they differ?

Most of my videos are done in a studio on green screen so that we can put visuals in the background, but that was not possible during the lockdown. We then set up in my living room. I had an old DSLR camera and a recorder, and would record the video and the audio, then send the files to my videographer, Ian Krass, who does all the magic. Our usual videos don’t have a real story arc; they are just a series of topics. With “Subatomic stories” we began with a plan. I organised it as a sort of self-contained course, beginning with basic things, like the Standard Model, weak force, strong force, etc. Towards the end, we introduced more diverse, current research topics and a few esoteric theoretical ideas. Later, after Subatomic stories, I continued to film in my basement in a green-screen studio I built. We’ve returned to the Fermilab studio, but the basement one is waiting should the need arise.

You are quite the public face of Fermilab. How does this relationship work?

It’s working wonderfully. I have no complaints. I can’t say that was always true in the past, because, when you’re young, you’re advised to focus on your research; it was like that for me. At the time there was some hostility towards science communicators. If you did outreach, you weren’t really considered a serious scientist, and that’s still true to a degree, although it is getting better. For me, it got to the point where people were just used to me doing it, and they tolerated it. As long as it didn’t bother my research, I could do this on my time. Some people bowl, some people knit, some people hike. I made videos. As I started becoming more successful, the laboratory started embracing the effort and even encouraged me to spend some of my work day on it. I was glad because in the same way that we encourage certain scientists to specialise in AI or computational skills or detector skills, I think that we as a field need to cultivate and encourage those scientists who are good at communicating our work. The bottom line is that I am very happy with the lab. I would like to see other laboratories encourage at least a small subset of scientists – those who are enthusiastic about outreach – and give them the time and the resources to do it, because there’s a huge payoff.

Don Lincoln on YouTube

What are your favourite and least favourite things about doing outreach?

I think I’m making an impact. For instance, I’ve had graduate students or even postdocs ask me to autograph a book saying, “I went into physics because I read this book.” Occasionally I’m recognised in public, but the viewership numbers tell the story. If a video does poorly, it will get 50,000 viewers. And a good video, or maybe just a lucky one, can get millions. The message is getting out. As for the least favourite part, lately it is coming up with ideas. I’ve covered nearly every (hot) topic, so now I am thinking of revisiting early topics in a new way.

What would be your message to physicists who don’t have time or see the need for science communication?

Let’s start with the second type, who don’t see the value of it. I would like to remind them that essentially, in any country, if you want to do research, your funding comes from taxpayers. They work hard for their money and they certainly don’t want to pay taxes, so if you want to ask them to support this thing that you’re interested in, you need to convince them that it’s important and interesting. For those who don’t have time, I’m empathetic. Depending on your supervisor, doing science communication can harm a young career. However, in that case I think that the community should at least support a small group of people who do outreach. If nothing else, the scientists doing outreach create goodwill, which can lead to better funding for research-focused scientists.

Where do you see particle physics headed and the role of outreach?

The problem is that the Standard Model works well, but not perfectly. Consequently, we need to look for anomalies both at the LHC and with other precision experiments. I imagine that the next decade will resemble what we are doing now. I think it would be of very high value if we could spend some money on thinking about how to make stronger magnets and advanced acceleration technologies, because that’s the only way we’re going to get a very large increase in energy. The scientists know what to do. We are developing the techniques and technologies needed to move forward. On the communication side, we just need to remind the public that the questions particle physicists and cosmologists are trying to answer are timeless. They’re the questions many children ask. It’s a fascinating universe out there and a good science story can rekindle anyone’s sense of child-like wonder.

CERN’s neutrino odyssey

The first candidate leptonic neutral-current event

The neutrino had barely been known for two years when CERN’s illustrious neutrino programme got under way. As early as 1958, the 600 MeV Synchrocyclotron enabled the first observation of the decay of a charged pion into an electron and a neutrino – a key piece in the puzzle of weak interactions. Dedicated neutrino-beam experiments began a couple of years later when the Proton Synchrotron (PS) entered operation, rivalled by activities at Brookhaven’s higher-energy Alternating Gradient Synchrotron in the US. Producing the neutrino beam was relatively straightforward: make a proton beam from the PS hit an internal target to produce pions and kaons, let them fly some distance during which they can produce neutrinos when they decay, then use iron shielding to filter out the remaining hadrons, such that only neutrinos and muons remain. Ensuring that a new generation of particle detectors would enable the study of neutrino-beam interactions proved a tougher challenge.

CERN began with two small, 1 m-long heavy-liquid bubble chambers served by proton beams striking an internal target in the PS, hoping to see at least one neutrino event per day. The rate was nowhere near that. Unfortunately the target configuration had made the beams about 10 times less intense than expected, and in 1961 CERN’s nascent neutrino programme came to a halt. “It was a big disappointment,” recalls Don Cundy, who was a young scientist at CERN at the time. “Then, several months later, Brookhaven did the same experiment but this time they put the target in the right place, and they discovered that there were two neutrinos – the muon neutrino (νµ) and the electron neutrino (νe) – a great discovery for which Lederman, Schwartz and Steinberger received the Nobel prize some 25 years later.”

Despite this setback, CERN Director-General Victor Weisskopf, along with his Director of Research Gilberto Bernardini and the CERN team, decided to embark on an even more ambitious setup. Employing Simon van der Meer’s recently proposed “magnetic horn” – a high-current, pulsed focusing device placed around the target – and placing the target in an external beam pipe increased the neutrino flux by about two orders of magnitude. This opened a new series of neutrino experiments at CERN in 1963. They began with a heavy-liquid bubble chamber containing around 500 kg of freon and a spark-chamber detector weighing several tonnes, for which first results were presented at a conference in Siena that year. The bubble-chamber results were particularly impressive, recalls Cundy: “Even though the number of events was of the order of a few hundred, you could do a lot of physics: measure the elastic form factor of the nucleon, single pion production, the total cross section, search for intermediate weak bosons and give limits on neutral-current processes.” It was at that conference that André Lagarrigue of Orsay argued that bubble chambers were the way forward for neutrino physics, and proposed to build the biggest chamber possible: Gargamelle, named after a giantess from a Renaissance tale.

Magnetic horn

Construction in France of the much larger Gargamelle chamber, 4.8 m long and containing 18 tonnes of freon, was quick, and by the end of 1970 the detector was receiving a beam of muon neutrinos from the PS. The Gargamelle collaboration consisted of researchers from seven European institutes: Aachen, Brussels, CERN, École Polytechnique Paris, Milan, LAL Orsay and University College London. In 1969 the collaboration had made a list of physics priorities. Following the results of CERN’s Heavy Liquid Bubble Chamber, which set new limits on neutrino-electron scattering and single-pion neutral-current (NC) processes, the search for actual NC events made it onto the list. However, it only placed eighth out of 10 science goals. That is quite understandable, comments Cundy: “People thought that the most sensitive way to look for NCs was the decay of a K0 meson into two muons or two electrons but that had a very low branching ratio, so if NCs existed it would be at a very small level. The first thing on the list for Gargamelle was in fact looking at the structure of the nucleon, to measure the total cross section and to investigate the quark model.” 

Setting priorities

After the discovery of the neutrino in 1956 by Reines and Cowan (CERN Courier July/August 2016 p17), the weak interaction became a focus of nuclear research. The unification of the electromagnetic and weak interactions by Salam, Glashow and Weinberg a decade later motivated experiments to look for the electroweak carriers: the W boson, which mediates charged-current interactions, and the Z boson associated with neutral currents. While charged-current interactions were well known from β decay, neutral currents were barely thought of. They started to become interesting in 1971, after Martinus Veltman and Gerard ’t Hooft proved the renormalisability of the electroweak theory.

More than 60 years after first putting the neutrino to work, CERN’s neutrino programme continues to evolve

By that time, Gargamelle was running at full speed. The photographs taken every time the PS was pulsed were scanned for interesting tracks by CERN personnel (at the time often referred to as “scanning girls”), who essentially performed the role of a modern level-1 trigger. Interactions were divided into different classes depending on the number of particles involved (muons, hadrons, electron–positron pairs, even one or more isolated protons as well as isolated electrons and positrons). The leptonic NC process (νµ + e– → νµ + e–) would give an event that consisted of a single energetic electron. Since the background was very low, it would be the smoking gun for NCs. However, the cross-section was also very low, with only one to nine events expected from the electroweak calculations. The energetic hadronic NC event (νµ + N → νµ + X, with the respective process involving antiparticles if the reaction was triggered by an antineutrino beam) would consist only of several hadrons, in fact just like events produced by incoming high-energy neutrons.

Gargamelle scanning table

“When the first leptonic event was found in December 1972 we were convinced that NCs existed,” says Gargamelle member Donatella Cavalli from the University of Milan. “It was just one event but with very low background, so a lot of effort was put into the search for hadronic NC events and in the full understanding of the background. I was the youngest in my group and I remember spending the evenings with my colleagues scanning the films on special projectors, which allowed us to observe the eight views of the chamber. I proudly remember my travels to Paris, London and Brussels, taking the photographs of the candidate events found in Milan to be checked with colleagues from other groups.”

At a CERN seminar on 19 July 1973, Paul Musset, who was one of the principal investigators, presented Gargamelle’s evidence for NCs based on both the leptonic and hadronic analyses. Results from the former had been published in a short paper received by Physics Letters two weeks earlier, while the paper on the hadronic events, which reported on the actual observation and hence confirmation of neutral currents, was received on 23 July. In August 1973 Gerald Myatt of  University College London, now at the University of Oxford, presented the results at the Electron-Photon conference. The papers were published in the same issue of the journal on 3 September. Yet many physicists doubted them. “It was generally believed that Gargamelle made a mistake,” says Myatt. “There was only one event, a tiny track really, and very low background. Still, it was not seen as conclusive evidence.” Among the critical voices were T D Lee, who was utterly unimpressed, and Jack Steinberger, who went as far as to bet half his wine cellar that the Gargamelle result would be wrong. 

The difficulty was to demonstrate that the hadronic NC signal was not due to background from neutral hadrons. “A lot of work and many different checks were done, from calculations to a full Monte Carlo simulation to a comparison between spatial distributions of charged- and neutral-current events,” explains Cavalli. “We were really happy when we published the first results from hadronic and leptonic NCs after all background checks, because we were confident in our results.” Initially the Gargamelle results were confirmed by the independent HPWF (Harvard–Pennsylvania–Wisconsin–Fermilab) experiment at Fermilab. Unfortunately, a problem with the HPWF setup led to their paper being rewritten, and a new analysis presented in November 1973 showed no sign of NCs. It was not until the following year that the modified HPWF apparatus and other experiments confirmed Gargamelle’s findings. 

André Lagarrigue

Additionally, the collaboration managed to tick off number two on its list of physics priorities: deep-inelastic scattering and scaling. Confirming earlier results from SLAC which showed that the proton is made of point-like constituents, Gargamelle data were crucial in proving that these constituents (quarks) have charges of +2/3 and –1/3. For neutral currents, the icing on the cake came 10 years after Gargamelle’s discovery with the direct discovery of the Z (and W) bosons at the SppS collider in 1983. The next milestone for CERN in understanding weak interactions came in 1990 with the precise measurement of the decay width of the Z boson at LEP, which showed that there are three and no more light neutrinos.

Legacy of a giantess

In 1977 Gargamelle was moved from the PS to the newly installed Super Proton Synchrotron (SPS). The following year, however, metal fatigue caused the chamber to crack and the experiment was decommissioned. Some of the collaboration members – including Cundy and Myatt – went to work on the nearby Big European Bubble Chamber. Also hooked up to the SPS for neutrino studies at that time were CDHS (CERN–Dortmund–Heidelberg–Saclay, officially denoted WA1) led by Steinberger, and Klaus Winter’s CHARM experiment. Operating for eight years, these large detectors collected millions of events that enabled precision studies on the structure of the charged and neutral currents as well as the structure of nucleons and the first evidence for QCD via scaling violations. 

The third type

The completion of the CHARM programme in 1991 marked the halt of neutrino operations at CERN for the first time in almost 30 years. But not for long. Experimental activities restarted with the search for neutrino oscillations, driven by the idea that neutrinos were an important component of dark matter in the universe. Consequently, two similarly styled short-baseline neutrino-beam experiments – CHORUS and NOMAD – were built. These next-generation detectors, which took data from 1994 to 1998 and from 1995 to 1998, respectively, joined others around the world to look for interactions of the third neutrino type, the ντ, and to search for neutrino oscillations, i.e. the change in neutrino flavour as they propagate, which was proposed in the 1950s and confirmed in 1998 by the Super-Kamiokande experiment in Japan and subsequently by the SNO experiment in Canada. In 2000 the DONUT experiment at Fermilab reported the first direct evidence for ντ interactions.

Gargamelle bubble chamber

CERN’s neutrino programme entered a hiatus until July 2006, when the SPS began firing an intense beam of muon neutrinos 732 km through Earth to two huge detectors – ICARUS and OPERA – located underground at Gran Sasso National Laboratory in Italy. Designed to make precision measurements of neutrino oscillations, the CERN Neutrinos to Gran Sasso (CNGS) programme observed the oscillation of muon neutrinos into tau neutrinos and was completed in 2012. 

As the CERN neutrino-beam programme was wound down, a brand-new initiative to support fundamental neutrino research began. “The initial idea for a ‘neutrino platform’ at CERN was to do a short-baseline neutrino experiment involving ICARUS to check the LSND anomaly, and another to test prototypes for “LBNO”, which would have been a European long-baseline neutrino oscillation experiment sending beams from CERN to Pyhäsalmi in Finland to investigate the oscillation,” says Dario Autiero, who has been involved in CERN’s neutrino programme since the beginning of the 1980s. “The former was later decided to take place at Fermilab, while for the latter the European and US visions for long-baseline experiments found a consensus for what is now DUNE (the Deep Underground Neutrino Experiment) in the US.”

A unique facility

Officially launched in 2013 in the scope of the update to the European strategy for particle physics, the CERN Neutrino Platform serves as a unique R&D facility for next-generation long-baseline neutrino experiments. Its most prominent project is the design, construction and testing of prototype detectors for DUNE, which will see a neutrino beam from Fermilab sent 1300 km to the SURF laboratory in South Dakota. One of the Neutrino Platform’s early successes was the refurbishment of the ICARUS detector, which is now taking data at Fermilab’s short-baseline neutrino programme. The platform is also developing key technologies for the near detector for the Tokai-to-Kamioka (T2K) neutrino facility in Japan (see p10), and has a dedicated theory working group aimed at strengthening the connections between CERN and the worldwide neutrino community. Independently, the NA61 experiment at the SPS is contributing to a better understanding of neutrino–nucleon cross sections for DUNE and T2K data.

Neutrino Platform at CERN’s North Area

More than 60 years after first putting the neutrino to work, CERN’s neutrino programme continues to evolve. In April 2023 a new experiment at the LHC called FASER made the first observation of neutrinos produced at a collider. Together with another new experiment, SND@LHC, FASER will enable the study of neutrinos in a new energy range and compare the production rate of all three types of neutrinos to further test the Standard Model. 

As for Gargamelle, today it lies next to BEBC and other retired colleagues in the garden of Square van Hove behind CERN’s main entrance. Not many can still retell the story of the discovery of neutral currents, but those who can share it with delight. “It was very tiny that first track from the electron, one in hundreds of thousands of pictures,” says Myatt. “Yet it justified André Lagarrigue’s vision of the large heavy-liquid bubble chamber as an ideal detector of neutrinos, combining large mass with a very finely detailed picture of the interaction. There can be no doubt that it was these features that enabled Gargamelle to make one of the most significant discoveries in the history of CERN.”

Five sigma revisited

The standard criterion for claiming a discovery in particle physics is that the observed effect should have the equivalent of a five standard-deviation (5σ) discrepancy with already known physics, i.e. the Standard Model (SM). This means that the chance of observing such an effect or larger should be at most 3 × 10⁻⁷, assuming it is merely a statistical fluctuation, which corresponds to the probability of correctly guessing whether a coin will land heads or tails for each of 22 tosses. Statisticians claim that it is crazy to believe probability distributions so far into their tails, especially when systematic uncertainties are involved; particle physicists still hope that they provide some measure of the level of (dis)agreement between data and theory. But what is the origin of this convention, and does it remain a relevant marker for claiming the discovery of new physics?
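As a quick numerical illustration of these statements (a minimal sketch using SciPy; the comparison is added here and is not part of the original argument):

```python
# The one-sided Gaussian tail probability beyond 5 sigma, compared with the
# chance of correctly calling 22 fair coin tosses in a row.
from scipy.stats import norm

p_5sigma = norm.sf(5)      # one-sided tail beyond 5 standard deviations
p_coins = 0.5 ** 22        # probability of guessing 22 tosses correctly

print(f"one-sided p-value for 5 sigma: {p_5sigma:.2e}")   # ~2.9e-07
print(f"22 correct coin guesses:       {p_coins:.2e}")    # ~2.4e-07
```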

There are several reasons why the stringent 5σ rule is used in particle physics. The first is that it provides some degree of protection against falsely claiming the observation of a discrepancy with the SM. There have been numerous 3σ and 4σ effects in the past that have gone away when more data was collected. A relatively recent example was an excess of diphoton events at an energy of 750 GeV seen in both the ATLAS and CMS data of 2015, but which was absent in the larger data samples of 2016. 

Systematic errors provide another reason, since such effects are more difficult to assess than statistical uncertainties and may be underestimated. Thus in a systematics-dominated scenario, if our estimate is a factor of two too small, a more mundane 3σ fluctuation could incorrectly be inflated to an apparently exciting 6σ effect. A potentially more serious problem is a source of systematics that has not even been considered by the analysts, the so-called “unknown unknowns”. 

Know your p-values 

Another reason underlying the 5σ criterion is the look-elsewhere effect, which involves the “p-values” for the observed effect. These are defined as the probability of a statistical fluctuation causing a result to be as extreme as the one observed, or more so, assuming some null hypothesis. For example, if we bet on heads for each of 10 tosses of an unbiased coin and eight of them come up tails, the p-value is the probability of being wrong eight, nine or 10 times (5.5%). A small p-value indicates a tension between the theory and the observation.
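The coin-toss number quoted above can be reproduced directly (a small check in SciPy, included for illustration only):

```python
# Probability of being wrong in at least 8 of 10 fair coin tosses.
from scipy.stats import binom

p_value = binom.sf(7, 10, 0.5)   # P(X >= 8) for X ~ Binomial(10, 0.5)
print(f"{p_value:.3f}")          # 0.055, i.e. about 5.5%
```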

Higgs signals

Particle-physics analyses often look for peaks in mass spectra, which could be the sign of a new particle. An example is shown in the “Higgs signals” figure, which contains data from CMS used to discover the Higgs boson (ATLAS has similar data). Whereas the local p-value of an observed effect is the chance of a statistical fluctuation being at least as large as the observed one at its specific location, more relevant is a global p-value corresponding to a fluctuation anywhere in the analysis, which has a higher probability and hence reduces the significance. The local p-values corresponding to the data in “Higgs signals” are shown in the figure “p-values”. 

A non-physics example highlighting the difference between local and global p-values was provided by an archaeologist who noticed that a direction defined by two of the large stones at the Stonehenge monument pointed at a specific ancient monument in France. He calculated that the probability of this was very small, assuming that the placement of the stones was random (local p-value), and hence that this favoured the hypothesis that Stonehenge was designed to point in that way. However, the chance that one of the directions, defined by any pair of stones, was pointing at an ancient monument anywhere in the world (global p-value) is above 50%. 

Current practice for model-dependent searches in particle physics, however, is to apply the 5σ criterion to the local p-value, as was done in the search for the Higgs boson. One reason for this is that there is no unique definition of “elsewhere”; if you are a graduate student, it may be just your own analysis, while for CERN’s Director-General, “anywhere in any analysis carried out with data from CERN” may be more appropriate. Another is that model-independent searches involving machine-learning techniques are capable of being sensitive to a wide variety of possible new effects, and it is hard to estimate what their look-elsewhere factor should be. Clearly, in quoting global p-values it is essential to specify your interpretation of elsewhere. 
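Under the simplifying assumption of N independent search regions with identical sensitivity (my own toy illustration; real analyses estimate the trials factor more carefully), the dilution of a local 5σ effect can be sketched as follows:

```python
# Toy look-elsewhere calculation: a local 5 sigma excess diluted by searching
# in N independent regions.
from scipy.stats import norm

p_local = norm.sf(5)                        # local one-sided 5 sigma p-value
for n_regions in (1, 100, 10000):
    p_global = 1 - (1 - p_local) ** n_regions
    z_global = norm.isf(p_global)           # convert back to a significance
    print(f"N = {n_regions:5d}: global p = {p_global:.2e} ({z_global:.1f} sigma)")
```

With 100 comparable search regions the same local excess corresponds to only about 4σ globally, and with 10,000 regions to below 3σ.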

Local p-values

A fourth factor behind the 5σ rule is plausibility. The likelihood of an observation is the probability of the data, given the model. To convert this to the more interesting probability of the model, given the data, requires the Bayesian prior probability of the model. This is an example of the probability of an event A, assuming that B is true, not in general being the same as the probability of B, given A. Thus the probability of a murderer eating toast for breakfast may be 60%, but the probability of someone who eats toast for breakfast being a murderer is thankfully much smaller (about one in a million). In general, our belief in the plausibility of a model for a particular version of new physics is much smaller than for the SM, this being an example of the old adage that “extraordinary claims require extraordinary evidence”. Since these factors vary from one analysis to another, one can argue that it is unreasonable to use the same discovery criterion everywhere.
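Written out, the conversion referred to here is Bayes’ theorem (a textbook statement, included for concreteness):

\[
P(\text{model}\mid\text{data}) \;=\;
\frac{P(\text{data}\mid\text{model})\,P(\text{model})}{P(\text{data})},
\]

which makes explicit that P(A|B) and P(B|A) coincide only when the prior probabilities P(A) and P(B) are equal – as the toast-and-murderer example above illustrates.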

There are other relevant aspects of the discovery procedure. Searches for new physics can be just tests for consistency with the SM; or they can see which of two competing hypotheses (“just SM” or “SM plus new physics”) provides a better fit to the data. The former are known as goodness-of-fit tests and may involve χ2, Kolmogorov–Smirnov or similar tests; the latter are hypothesis tests, often using the likelihood ratio. They are sometimes referred to as model-independent and model-dependent, respectively, each having its own advantages and limitations. However, the degree of model dependence is a continuous spectrum rather than a binary choice.

It is unreasonable to regard 5.1σ as a discovery, but 4.9σ as not. Also, of two competing analyses, should we regard the one with the better observed accuracy or the one with the better expected accuracy as providing the preferred result? Blind analyses are recommended, since they remove the possibility of the analyser adjusting selections to influence the significance of the observed effect. Some non-blind searches have such a large and indeterminate look-elsewhere effect that they can only be regarded as hints of new physics, to be confirmed by future independent data. Theory calculations also have uncertainties, due for example to parameters in the model or difficulties with numerical predictions.

Discoveries in progress 

A useful exercise is to review a few examples that might be (or might have been) discoveries. A recent example involves the ATLAS and CMS observation of events containing four top quarks. Apart from the similarly heroic work of the physicists involved, these analyses contrast in interesting ways with the Higgs-boson discovery. First, the Higgs discovery involved clear mass peaks, while the four-top events simply caused an enhancement of events in the relevant region of phase space (see “Four tops” figure). Second, four-top production is just a verification of an SM prediction, and indeed it would have been more of a surprise if the measured rate had been zero. So this is an observation of an expected process, rather than a new discovery. Indeed, both preprints use the word “observation” rather than “discovery”. Finally, although 5σ was the required criterion for discovering the Higgs boson, surely a lower level of significance would have been sufficient for the observation of four-top events.

The output from a graph neural network

Going back further in time, an experiment in 1979 claimed to observe free quarks by measuring the electrical charge of small spheres levitated in an oscillating electric field; several gave multiples of 1/3, which was regarded as a signature of single quarks. Luis Alvarez noted that the raw results required sizeable corrections and suggested that a blind analysis should be performed on future data. The net result was that no further papers were published on this work. This demonstrates the value of blind analyses.

A second historical example is precision measurements at the Large Electron Positron collider (LEP). Compared with the predictions of the SM, including the then-known particles, deviations were observed in the many measurements made by the four LEP experiments. A much better fit to the data was achieved by including corrections from the (at that time hypothesised) top quark and Higgs boson, which enabled approximate mass ranges to be derived for them. However, it is now accepted that the discoveries of the top quark and the Higgs boson were subsequently made by their direct observations at the Tevatron and at the LHC, rather than by their virtual effects at LEP.

The muon magnetic moment is a more contemporary case. This quantity has been measured and also predicted to incredible precision, but a discrepancy between the two values exists at around the 4σ level, which could be an indication of contributions from virtual new particles. The experiment essentially measures just this one quantity, so there is no look-elsewhere effect. However, even if this discrepancy persists in new data, it will be difficult to tell if it is due to the theory or experiment being wrong, or whether it requires the existence of new, virtual particles. Also, the nature of such virtual particles could remain obscure. Furthermore, a recent calculation using lattice gauge theory of the “hadronic vacuum polarisation” contribution to the predicted value of the magnetic moment brings it closer to the observed value (see “Measurement of the moment” figure). Clearly it will be worth watching how this develops.

Our hope for the future is that the current 5σ criterion will be replaced by a more nuanced approach for what qualifies as a discovery

The so-called flavour anomalies are another topical example. The LHCb experiment has observed several anomalous results in the decays of B mesons, especially those involving transitions of a b quark to an s quark and a lepton pair. It is not yet clear whether these could be evidence for some real discrepancies with the SM prediction (i.e. evidence for new physics), or simply and more mundanely an underestimate of the systematics. The magnitude of the look-elsewhere effect is hard to estimate, so independent confirmation of the observed effects would be helpful. Indeed, the most recent result from LHCb for the R(K) parameter, published in December 2022, is much more consistent with the SM. It appears that the original result was affected by an overlooked background source. Repeated measurements by other experiments are eagerly awaited. 

A surprise last year was the new result on the mass of the W boson (mW) from the CDF collaboration at Fermilab’s former Tevatron collider, which finished collecting data many years ago; the measurement disagreed with the SM prediction by 7σ. It is of course more reasonable to use the weighted average of all mW measurements, which reduces the discrepancy, but only slightly. A subsequent measurement by ATLAS disagreed with the CDF result; the CMS determination of mW is awaited with interest.

Nuanced approach

It is worth noting that the muon g-2, flavour and mW discrepancies concern tests of the SM predictions, rather than direct observation of a new particle or its interactions. Independent confirmations of the observations and the theoretical calculations would be desirable.

Measurement of the moment

One of the big hopes for further running of the LHC is that it will result in the “discovery” of Higgs pair production. But surely there is no reason to require a 5σ discrepancy with the SM in order to make such a claim? After all, the Higgs boson is known to exist, its mass is known and there is no big surprise in observing its pair-production rate being consistent with the SM prediction. “Confirmation” would be a better word than “discovery” for this process. In fact, it would be a real discovery if the di-Higgs production rate was found to be significantly above or below the SM prediction. A similar argument could be applied to the searches for single top-quark production at hadron colliders, and decays such as H → μμ or Bs → μμ. This should not be taken to imply that LHC running can be stopped once a suitable lower level of significance is reached. Clearly there will be interest in using more data to study di-Higgs production in greater detail.

Our hope for the future is that the current 5σ criterion will be replaced by a more nuanced approach for what qualifies as a discovery. This would include just quoting the observed and expected p-values; whether the analysis is dominated by systematic uncertainties or statistical ones; the look-elsewhere effect; whether the analysis is robust; the degree of surprise; etc. This may mean leaving it for future measurements to determine who deserves the credit for a discovery. It may need a group of respected physicists (e.g. the directors of large labs) to make decisions as to whether a given result merits being considered a discovery or needs further verification. Hopefully we will have several of these interesting decisions to make in the not-too-distant future. 

Extreme detector design for a future circular collider

Figure: FCC-hh reference detector

The Future Circular Collider (FCC) is the most powerful post-LHC experimental infrastructure proposed to address key open questions in particle physics. Under study for almost a decade, it envisions an electron–positron collider phase, FCC-ee, followed by a proton–proton collider in the same 91 km-circumference tunnel at CERN. The hadron collider, FCC-hh, would operate at a centre-of-mass energy of 100 TeV, extending the energy frontier by almost an order of magnitude compared to the LHC, and provide an integrated luminosity a factor of 5–10 larger. The mass reach for direct discovery at FCC-hh extends to several tens of TeV and would allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements at FCC-ee.

At the time of the kickoff meeting for the FCC study in 2014, the physics potential and the requirements for detectors at a 100 TeV collider were already heavily debated. These discussions were eventually channelled into a working group that provided the input to the 2020 update of the European strategy for particle physics and recently concluded with a detailed writeup in a 300-page CERN Yellow Report. To focus the effort, it was decided to study one reference detector that is capable of fully exploiting the FCC-hh physics potential. At first glance it resembles a super CMS detector with two LHCb detectors attached (see “Grand designs” image). A detailed detector performance study followed, allowing an efficient assessment of the key physics capabilities.

The first detector challenge at FCC-hh is related to the luminosity, which is expected to reach 3 × 10³⁵ cm⁻² s⁻¹. This is six times larger than the HL-LHC luminosity and 30 times larger than the nominal LHC luminosity. Because the FCC will operate beams with a 25 ns bunch spacing, the so-called pile-up (the number of pp collisions per bunch crossing) scales by approximately the same factor. This results in almost 1000 simultaneous pp collisions, requiring a highly granular detector. Evidently, the assignment of tracks to their respective vertices in this environment is a formidable task.
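The quoted pile-up of almost 1000 follows from simple arithmetic, sketched below with round-number assumptions for the inelastic pp cross-section at 100 TeV (around 105 mb) and an effective bunch-crossing rate of about 30 MHz once abort gaps are taken into account; neither number is given in the text.

# Rough pile-up estimate: mu = L * sigma_inel / f_crossing.
# Round-number assumptions, not official FCC-hh parameters.
L = 3e35               # instantaneous luminosity, cm^-2 s^-1
sigma_inel = 1.05e-25  # inelastic pp cross-section at 100 TeV, cm^2 (~105 mb, assumed)
f_crossing = 3.0e7     # effective crossing rate, Hz (25 ns spacing minus abort gaps, assumed)

mu = L * sigma_inel / f_crossing
print(f"average pile-up ~ {mu:.0f} pp collisions per bunch crossing")   # of order 1000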

Figure: Longitudinal cross-section of the FCC-hh reference detector

The plan to collect an integrated pp luminosity of 30 ab⁻¹ brings the radiation-hardness requirements for the first layers of the tracking detector close to 10¹⁸ hadrons/cm², which is around 100 times more than the requirement for the HL-LHC. Still, the tracker volume with such a high radiation load is not excessively large. From a radial distance of around 30 cm outwards, radiation levels are already close to those expected for the HL-LHC, thus the silicon technology for these detector regions is already available.

The high radiation levels also call for very radiation-hard calorimetry, making a liquid-argon calorimeter the first choice for the electromagnetic calorimeter and the forward regions of the hadron calorimeter. The power deposited in the very forward regions will be 4 kW per unit of rapidity, and it will be an interesting task to keep cryogenic liquids cold in such an environment. Thanks to the large shielding effect of the calorimeters, which have to be quite thick to contain the highest-energy particles, the radiation levels in the muon system are not too different from those at the HL-LHC. So the technology needed for this system is available.

Looking forward 

At an energy of 100 TeV, important SM particles such as the Higgs boson are abundantly produced in the very forward region. The forward acceptance of FCC-hh detectors therefore has to be much larger than that of the LHC detectors. ATLAS and CMS enable momentum measurements up to pseudorapidities (a measure of the angle between a track and the beamline) of around η = 2.5, whereas at FCC-hh this will have to be extended to η = 4 (see “Far reaching” figure). Since this is not achievable with a central solenoid alone, a forward magnet system is assumed on either side of the detector. Whether the optimum forward magnets are solenoids or dipoles still has to be studied and will depend on the requirements for momentum resolution in the very forward region. Forward solenoids have been considered that extend the precision of momentum measurements by one additional unit of rapidity.
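To put these pseudorapidity values in perspective, the standard relation η = −ln tan(θ/2) translates them into polar angles with respect to the beamline, as in this short sketch.

# Convert pseudorapidity eta to the polar angle theta with respect to the beamline.
import math

def theta_deg(eta):
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

for eta in (2.5, 4.0, 6.0):
    print(f"eta = {eta}: theta ~ {theta_deg(eta):.2f} degrees from the beam axis")
# eta = 2.5 -> ~9.4 deg, eta = 4 -> ~2.1 deg, eta = 6 -> ~0.3 deg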

Figure: Momentum resolution versus pseudorapidity

A silicon tracking system with a radius of 1.6 m and a total length of 30 m provides a momentum resolution of around 0.6% for low-momentum particles, 2% at 1 TeV and 20% at 10 TeV (see “Forward momentum” figure). To detect at least 90% of the very forward jets that accompany a Higgs boson in vector-boson-fusion production, the tracker acceptance has to be extended up to η = 6; at the LHC such an acceptance is already achieved up to η = 4. The total tracker surface of around 400 m² at FCC-hh is “just” a factor of two larger than that of the HL-LHC trackers, and the total number of channels (16.5 billion) is around eight times larger.
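The resolution figures quoted above are consistent with the usual parametrisation in which a constant multiple-scattering term is added in quadrature to a term that grows linearly with transverse momentum; the sketch below uses coefficients chosen merely to reproduce the numbers in the text and is an illustration, not the actual FCC-hh tracker model.

# Toy momentum-resolution model: a constant multiple-scattering term added in
# quadrature to a curvature term growing linearly with pT.
# Coefficients chosen only to reproduce the figures quoted in the text.
import math

A = 0.006       # constant term (0.6%)
B = 2.0e-5      # curvature term per GeV (gives 2% at 1 TeV)

def sigma_pt_over_pt(pt_gev):
    return math.hypot(A, B * pt_gev)

for pt in (10, 1000, 10000):
    print(f"pT = {pt:5d} GeV: sigma(pT)/pT ~ {100.0 * sigma_pt_over_pt(pt):.1f}%")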

It is evident that the FCC-hh reference detector is more challenging than the LHC detectors, but not at all out of reach. The diameter and length are similar to those of the ATLAS detector. The tracker and calorimeters are housed inside a large superconducting solenoid 10 m in diameter, providing a magnetic field of 4 T. For comparison, CMS uses a solenoid with the same field and an inner diameter of 6 m. This difference does not seem large at first sight, but of course the stored energy (13 GJ) is about five times larger than that of the CMS coil, which requires very careful design of the quench-protection system.
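The stored energy can be checked with a back-of-the-envelope estimate, E ≈ B²/(2μ0) × V; the sketch below assumes a coil length of about 20 m, which is not stated in the text, and lands only in the right ballpark, since the real figure depends on the field map and coil geometry.

# Back-of-the-envelope stored magnetic energy of a solenoid: E ~ B^2/(2*mu0) * V.
# Only the 4 T field and 10 m diameter come from the text; the 20 m length is assumed.
import math

mu0 = 4.0e-7 * math.pi    # vacuum permeability, T m / A
B = 4.0                   # magnetic field, T
radius = 5.0              # m (10 m diameter)
length = 20.0             # m, assumed for illustration

volume = math.pi * radius**2 * length
energy = B**2 / (2.0 * mu0) * volume
print(f"stored energy ~ {energy / 1e9:.0f} GJ")   # order 10 GJ, same ballpark as the quoted 13 GJ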

For the FCC-hh calorimeters, the major challenge, besides the high radiation dose, is the required energy resolution and particle identification in the high pile-up environment. The key to achieving the required performance is therefore a highly segmented calorimeter. The need for longitudinal segmentation calls for a solution different from the “accordion” geometry employed by ATLAS. Flat lead/steel absorbers that are inclined by 50 degrees with respect to the radial direction are interleaved with liquid-argon gaps and straight electrodes with high-voltage and signal pads (see “Liquid argon” figure). The readout of these pads on the back of the calorimeter is then possible thanks to the use of multi-layer electrodes fabricated as straight printed circuit boards. This idea has already been successfully prototyped within the CERN EP detector R&D programme.

The considerations for a muon system for the reference detector are quite different compared to the LHC experiments. When the detectors for the LHC were originally conceived in the late 1980s, it was not clear whether precise tracking in the vicinity of the collision point was possible in this unprecedented radiation environment. Silicon detectors were excessively expensive and gas detectors were at the limit of applicability. For the LHC detectors, a very large emphasis was therefore put on muon systems with good stand-alone performance, specifically for the ATLAS detector, which is able to provide a robust measurement of, for example, the decay of a Higgs particle into four muons, with the muon system alone. 

Figure: Liquid argon

Thanks to the formidable advancement of silicon-sensor technology, which has led to full silicon trackers capable of dealing with around 140 simultaneous pp collisions every 25 ns at the HL-LHC, standalone performance is no longer a stringent requirement. The muon systems for FCC-hh can therefore fully rely on the silicon trackers, assuming just two muon stations outside the coil that measure the exit point and the angle of the muons. The muon track provides muon identification, the muon angle provides a coarse momentum measurement for triggering and the track position provides improved muon momentum measurement when combined with the inner tracker. 

The major difference between an FCC-hh detector and CMS is that there is no yoke for the return flux of the solenoid: the cost would be excessive, and its only purpose would be to shield the cavern from the stray magnetic field. The baseline design assumes the cavern infrastructure can be built to be compatible with this stray field. Infrastructure that is sensitive to the magnetic field will be placed in the service cavern 50 m from the solenoid, where the stray field is sufficiently low.

Figure: Higgs self-coupling

The high granularity and acceptance of the FCC-hh reference detector will result in about 250 TB/s of data from the calorimeters and the muon system, about 10 times more than in the ATLAS and CMS HL-LHC scenarios. There is no doubt that it will be possible to digitise and read out this data volume at the full bunch-crossing rate for these detector systems. The question remains whether the almost 2500 TB/s from the tracker can also be read out at the full bunch-crossing rate, or whether calorimeter, muon and possibly coarse tracker information need to be used for a first-level trigger decision, reducing the tracker readout rate to the few-MHz level without the loss of important physics. Even if optical-link technology for full tracker readout were available and affordable, the required radiation hardness of the devices and the infrastructure constraints from power and cooling services are prohibitive with current technology, calling for R&D on low-power, radiation-hard optical links.
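These continuous rates translate into per-crossing data volumes by dividing by the bunch-crossing rate; the sketch below assumes 40 MHz (an assumption, not a figure given in the text) purely to illustrate the scale of the event fragments involved.

# Per-crossing data volume implied by the quoted continuous readout rates,
# assuming a 40 MHz bunch-crossing rate (an assumption, for scale only).
crossing_rate = 40e6                                     # Hz, assumed
rates_tb_per_s = {"calorimeters + muons": 250.0, "tracker": 2500.0}

for system, rate in rates_tb_per_s.items():
    per_crossing_mb = rate * 1e12 / crossing_rate / 1e6  # bytes -> MB per crossing
    print(f"{system:20s}: ~{per_crossing_mb:.0f} MB per bunch crossing")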

Benchmark physics

The potential of FCC-hh in the realms of precision Higgs and electroweak physics, high mass reach and dark-matter searches offers an unprecedented opportunity to address fundamental unknowns about our universe. The performance requirements for the FCC-hh baseline detector have been defined through a set of benchmark physics processes, selected among the key ingredients of the physics programme. The detector’s increased acceptance compared to the LHC detectors, and the higher energy of FCC-hh collisions, will allow physicists to uniquely improve the precision of measurements of Higgs-boson properties for a whole spectrum of production and decay processes complementary to those accessible at the FCC-ee. This includes measurements of rare processes such as Higgs pair-production, which provides a direct measure of the Higgs self-coupling – a crucial parameter for understanding the stability of the vacuum and the nature of the electroweak phase transition in the early universe – with a precision of 3 to 7% (see “Higgs self-coupling” figure).

Figure: Dark matters

Moreover, thanks to the extremely large Higgs-production rates, FCC-hh offers the potential to measure rare decay modes in a novel boosted kinematic regime well beyond what is currently studied at the LHC. These include the decay to second-generation fermions, namely muons, which can be measured to a precision of 1%. The Higgs branching fraction to invisible states can be probed down to a value of 10⁻⁴, allowing the parameter space for dark matter to be further constrained. The much higher centre-of-mass energy of FCC-hh, meanwhile, significantly extends the mass reach for discovering new particles. The potential for detecting heavy resonances decaying into di-muons and di-electrons extends to 40 TeV, while for coloured resonances such as excited quarks it reaches 45 TeV, improving on the current limits by almost an order of magnitude. In the context of supersymmetry, FCC-hh will be capable of probing stop squarks with masses up to 10 TeV, also well beyond the reach of the LHC.

In terms of dark-matter searches, FCC-hh has immense potential – particularly for probing scenarios of weakly interacting massive particles such as higgsinos and winos (see “Dark matters” figure). Electroweak multiplets are typically elusive, especially in hadron collisions, due to their weak interactions and large masses (needed to explain the relic abundance of dark matter in our universe). Their nearly degenerate mass spectrum produces an elusive final state in the form of so-called “disappearing tracks”. Thanks to the dense coverage of the FCC-hh detector tracking system, a general-purpose FCC-hh experiment could detect these particle decays directly, covering the full mass range expected for this type of dark matter. 

A detector at a 100 TeV hadron collider is clearly a challenging project. But detailed studies have shown that it should be possible to build a detector that can fully exploit the physics potential of such a machine, provided we invest in the necessary detector R&D. Experience with the Phase-II upgrades of the LHC detectors for the HL-LHC, developments for further exploitation of the LHC and detector R&D for future Higgs factories will be important stepping stones in this endeavour.
