Topics

Lars Brink 1943–2022

Lars Brink

It is with great sadness that we learnt of the passing of Lars Brink on 29 October 2022 at the age of 78. Lars Brink was an emeritus professor at Chalmers University of Technology in Göteborg, Sweden, and a member of the Royal Swedish Academy of Sciences. He started his career as a fellow in the CERN theory group (1971–1973), followed by a stay at Caltech as a scientific associate (1976–1977). In subsequent years he was a frequent visitor at CERN, Caltech and ITP Santa Barbara, before becoming a full professor of theoretical physics at Chalmers in 1986, which under his guidance became an internationally leading centre for string theory and supersymmetric field theories.

Lars held numerous other appointments, in particular as a member and chairperson of the boards of NORDITA and the International Center for Fundamental Physics in Moscow, and later as chairperson of the advisory board of the Solvay Foundation in Brussels. From 2004 he was an external scientific member of the Max Planck Institute for Gravitational Physics in Golm. During his numerous travels Lars was welcomed by many leading institutions all over the world. He also engaged in many types of community service, such as coordinating the European Union network “Superstring Theory” from 2000. Most importantly, he served on the Nobel Committee for Physics for many years, and as its chairperson for the 2013 Nobel Prize in Physics awarded to François Englert and Peter Higgs.

Lars was a world-class theoretical physicist, with many pioneering contributions, especially to the development of supergravity and superstring theory, as well as many other topics. One of his earliest contributions was a beautiful derivation of the critical dimension of the bosonic string (with Holger Bech Nielsen), obtained by evaluating the formally divergent sum over zero-point energies of the infinitely many string oscillators; this derivation is now considered a standard textbook result. In 1976, with Paolo Di Vecchia and Paul Howe, he presented the first construction of the locally supersymmetric world-sheet Lagrangian for superstrings (also derived by Stanley Deser and Bruno Zumino), which now serves as the basis for the quantisation of the superstring and higher loop calculations in the Polyakov approach. His seminal 1977 work with Joel Scherk and John Schwarz on the construction of maximal (N = 4) supersymmetric Yang–Mills theory in four dimensions laid the very foundation for key developments of modern string theory and the AdS/CFT correspondence that came to dominate string-theory research only much later. Independently of Stanley Mandelstam, he proved the UV finiteness of the N = 4 theory in the light-cone gauge in 1983, together with Olof Lindgren and Bengt Nilsson – another groundbreaking result. Equally influential is his work with Michael Green and John Schwarz on deriving supergravity theories as limits of string amplitudes. More recently, he devoted much effort to a reformulation of N = 8 supergravity in light-cone superspace (with Sudarshan Ananth and Pierre Ramond). His last project before his death was a reevaluation and pedagogical presentation of Yoichiro Nambu’s seminal early papers (with Ramond).
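
His derivation with Nielsen can be sketched in a few lines; the following is the schematic textbook version of the argument, not a reproduction of the original paper.

```latex
% Zero-point energy of the D-2 transverse oscillators of the bosonic string,
% regularised with the Riemann zeta function, \zeta(-1) = -1/12:
E_0 \;=\; \frac{D-2}{2}\sum_{n=1}^{\infty} n \;\longrightarrow\; \frac{D-2}{2}\,\zeta(-1) \;=\; -\,\frac{D-2}{24}
% Requiring the first excited level (a transverse vector) to be massless,
% as Lorentz invariance demands, gives (D-2)/24 = 1 and hence D = 26.
```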

Lars received numerous honours during his long career. In spite of these achievements he remained a kind, modest and most approachable person. Among our many fondly remembered encounters we especially recall his visit to Potsdam in August 2013, when he revived an old tradition by inviting the Nobel Committee to a special retreat for its final deliberations. The concluding discussions of the committee thus took place in Einstein’s summer house in Caputh. Of course, we were all curious for any hints from the predictably tight-lipped Swedes in advance of the official Nobel announcement, but in the end the only useful information we got out of Lars was that the committee had crossed the street for lunch to eat mushroom soup in a local restaurant!

He leaves behind his wife Åsa, and their daughters Jenny and Maria with their families, to whom we express our sincere condolences. We will remember Lars Brink as a paragon of scientific humility and honesty, and we miss a great friend and human being.

Neutrino scattering sizes up the proton

More than a century after its discovery, physicists are still working hard to understand how fundamental properties of the proton – such as its mass and spin – arise from its underlying structure. A particular puzzle concerns the proton’s size, which is an important input for understanding nuclei, for example. Elastic electron–proton scattering experiments in the late 1950s revealed the spatial distribution of charge inside the proton, allowing its radius to be deduced. A complementary way to determine this “charge radius”, which relies on precise quantum-electrodynamics calculations, is to measure the shift it produces in the lowest energy levels of the hydrogen atom. Over the decades, numerous experiments have measured the proton’s size with increasing precision.

By 2006, based on results from scattering and spectroscopic measurements, the Committee on Data for Science and Technology (CODATA) had established the proton charge radius to be 0.8760(78) fm. Then, in 2010, came a surprise: the CREMA collaboration at the Paul Scherrer Institut (PSI) reported a value of 0.8418(7) fm based on a novel, high-precision spectroscopic measurement of muonic hydrogen. Disagreeing with previous spectroscopic measurements, and lying more than 5σ below the CODATA world average, the result gave rise to the “proton radius puzzle”. While the most recent electron–proton scattering and hydrogen-spectroscopy measurements are in closer agreement with the latest muonic-hydrogen results, the discrepancies with earlier experiments are not yet fully understood.

Now, the MINERνA collaboration has brought a new tool to gauge the proton’s size: neutrino scattering. Whereas traditional scattering measurements probe the proton’s electric or magnetic charge distributions, which are encoded in vector form factors, scattering by neutrinos allows the analogous axial-vector form factor FA, which characterises the proton’s weak charge distribution, to be measured. In addition to providing a complementary probe of proton structure, FA is key to precise measurements of neutrino-oscillation parameters at experiments such as DUNE, Hyper-K, NOvA and T2K.

MINERνA is a segmented scintillator detector made from strips of triangular cross-section assembled into hexagonal planes perpendicular to the incoming beam. By studying how a beam of muon antineutrinos produced by Fermilab’s NuMI neutrino beamline interacts with a polystyrene target, which contains hydrogen closely bonded to carbon, the MINERνA researchers were able to make the first high-statistics measurement of the ν̄μ p → μ⁺ n cross-section on the hydrogen in polystyrene. Extracting FA from 5580 ± 180 signal events (observed over an estimated background of 12,500), they measured the nucleon axial charge radius to be 0.73(17) fm, in agreement with the electric charge radius measured with electron scattering.

“If we weren’t optimists, we’d say [this measurement] was impossible,” says lead author Tejin Cai, who proposed the idea of using a polystyrene target to access neutrino-hydrogen scattering while a PhD student at the University of Rochester. “The hydrogen and carbon are chemically bonded, so the detector sees interactions on both at once. But then, I realised that the very nuclear effects that made scattering on carbon complicated also allowed us to select hydrogen and would allow us to subtract off the carbon interactions.”
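
The subtraction Cai describes can be illustrated with a toy calculation. Only the background estimate of 12,500 events is taken from the text; the other numbers and the simple error treatment are illustrative assumptions, not the MINERνA analysis.

```python
import math

# Toy illustration of statistical subtraction of carbon-induced background
# from a hydrocarbon (CH) target to isolate antineutrino-hydrogen events.
n_selected = 18080       # assumed total selected events on the CH target
n_carbon_bkg = 12500     # estimated carbon background (quoted in the text)
sigma_carbon = 170       # assumed uncertainty on the carbon estimate

n_hydrogen = n_selected - n_carbon_bkg
# Poisson error on the selected sample combined with the background uncertainty
sigma_hydrogen = math.sqrt(n_selected + sigma_carbon**2)

print(f"hydrogen signal ~ {n_hydrogen} +/- {sigma_hydrogen:.0f} events")
```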

A new experiment called AMBER, at the M2 beamline of CERN’s Super Proton Synchrotron, is about to open another perspective on the proton charge radius. AMBER is the successor to COMPASS, which played a major role in resolving the proton “spin crisis” (the finding, by the European Muon Collaboration in 1987, that quarks account for less than a third of the total proton spin) by studying the contribution to the proton spin from gluons. Instead of electrons, AMBER will use muon scattering at unprecedented energies (around 100 GeV) to access the small momentum transfers needed to measure the proton radius. A future experiment at PSI called MUSE, meanwhile, aims to determine the proton radius through simultaneous measurements of muon–proton and electron–proton scattering.

AMBER is scheduled to start with a pilot run in September 2023 and to operate for up to three years, with the goal of measuring the proton radius – expected from previous experiments to lie in the range 0.84–0.88 fm – with an uncertainty of about 0.01 fm. “Some colleagues say that there is no proton-radius puzzle, only problematic measurements,” says AMBER spokesperson Jan Friedrich of TU Munich. “The discrepancy between theory and experiments, as well as between individual experiments, will have to shrink and align as much as possible. After all, there is only one true proton radius.”

TeV photons challenge standard explanations

GRB 221009A

Gamma-ray bursts (GRBs) are the result of the most violent explosions in the universe. They are named for their bright burst of high-energy emission, mostly in the keV to MeV region, which can last from milliseconds to hundreds of seconds and is followed by an afterglow that covers the full electromagnetic spectrum. The extreme nature and important cosmic role of these extragalactic events – for example in the production of heavy elements, potential cosmic-ray acceleration or even mass-extinction events on Earth-like planets – make them one of the most studied astrophysical phenomena.

Since their discovery in 1967, detailed studies of thousands of GRBs have shown that they are the result of cataclysmic events, such as neutron-star binary mergers. The observed gamma-ray emission is produced (through a yet-unidentified mechanism) within relativistic jets that decelerate when they strike interstellar matter, resulting in the observed afterglow.

But interest in GRBs goes beyond astrophysics. Due to the huge energies involved, they also offer a unique laboratory in which to study the laws of physics at their extremes. This once again became clear on 9 October 2022, when a GRB was detected that was not only the brightest ever seen but also appeared to have produced emission that is difficult to explain using standard physics.

Eye-catching emission

“GRB 221009A” immediately caught the eye of the multi-messenger community, its gamma-ray emission being so bright that it saturated many observatories. As a result, it was also observed by a wide range of detectors covering the electromagnetic spectrum, including at energies exceeding 10 TeV. Two separate ground-based experiments – the Large High Altitude Air Shower Observatory (LHAASO) in China and the Carpet-2 air-shower array in Russia – claimed detections of photons with energies of 18 TeV and 251 TeV, respectively. This is an order of magnitude higher than the previous record for TeV emission from GRBs, reported by the MAGIC and HESS telescopes in 2019 (CERN Courier January/February 2020 p10). Adding further intrigue, such high-energy emission from GRBs should not be able to reach Earth at all.

For photons with energies exceeding several TeV, electron–positron pair-production with optical photons starts to become possible. Although the threshold for this process on typical background photons lies at an energy of about 2.6 TeV, and the cross section just above threshold is small, this is compensated by the billions of light years of space filled with optical light that the TeV photons need to traverse before reaching us. Despite uncertainties in the density of this so-called extragalactic background light, a rough calculation using the distance of GRB 221009A (z = 0.151) suggests that the probability for an 18 TeV photon to reach Earth is around 10⁻⁸.
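
The quoted probability can be translated into an optical depth with the simple relation P = exp(−τ); the optical depth itself comes from published models of the extragalactic background light, not from this sketch.

```python
import math

# Relation between gamma-ray survival probability and optical depth on the
# extragalactic background light: P = exp(-tau).
def survival_probability(tau: float) -> float:
    return math.exp(-tau)

def optical_depth(p_survive: float) -> float:
    return -math.log(p_survive)

print(f"tau implied by P ~ 1e-8: {optical_depth(1e-8):.1f}")   # ~18
print(f"P for tau = 18: {survival_probability(18):.1e}")       # ~1.5e-8
```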

Clearly we need to wait for the detailed analyses by LHAASO and Carpet-2 to confirm the measurements 

The reported measurements have thus far only been provided through alerts shared among the multi-messenger community, while detailed data analy­ses are still ongoing. Their significance, however, led to tens of beyond-the-Standard Model (BSM) explanations being posted on the arXiv preprint server within days of the alert. While each differs in the specific mechanism hypothesised, the overall idea is similar: instead of being produced directly in the GRB, the photons are posited to be a secondary product of BSM particles produced during or close to the GRB. Examples range from light scalar particles or right-handed neutrinos produced in the GRB and decaying within our galaxy, to photons that converted into axions close to the GRB and turned back into photons in the galactic magnetic field before reaching Earth.

Clearly the community needs to wait for the detailed analyses by the LHAASO and Carpet-2 collaborations to confirm the measurements. The published energy resolution of LHAASO keeps open the possibility that its result can be explained with Standard Model physics, while the 251 TeV emission from Carpet-2 is more difficult to attribute to known systematic effects. This result could, however, be explained by secondary particles from an ultra-high-energy cosmic ray (UHECR) produced in the GRB, which, although it would not represent new physics, would still confirm GRBs as a source of UHECRs for the first time. Analysis results from both collaborations are therefore highly anticipated.

STEREO rejects sterile neutrino

ILL high-flux reactor

The STEREO experiment, located at the high-flux research reactor at the Institut Laue-Langevin (ILL) in Grenoble, is the latest to cast doubt on the existence of an additional, sterile neutrino state. Based on the full dataset, collected from October 2017 until the experiment shut down in November 2020, the results support the conclusion of a global analysis of all neutrino data that a normalisation bias in the beta-decay spectrum of ²³⁵U is the most probable explanation for a deficit of electron antineutrinos seen at reactor experiments during the past decade.

The confirmation of neutrino oscillations 25 years ago showed that the flavour content of a given neutrino evolves as it propagates. Numerous experiments based on solar, atmospheric, accelerator, reactor and geological neutrino sources have determined the oscillation parameters in detail, reaffirming the three-neutrino picture obtained by precise measurements of the Z boson’s decay width at LEP. However, several anomalies have also shown up, one of the most prominent being the so-called reactor antineutrino anomaly. Following a re-evaluation of the expected ν̄e flux from nuclear reactors by a team at CEA and Subatech in 2011, a deficit appeared in the number of ν̄e detected by reactor neutrino experiments. Combined with a longstanding anomaly reported by short-baseline accelerator-neutrino experiments such as LSND and a deficit in νe seen in calibration data for the solar-neutrino detectors GALLEX and SAGE, excitement grew that an additional neutrino state – a sterile or right-handed neutrino with non-standard interactions that arises in many extensions of the Standard Model – might be at play.

We anticipate that this result will allow progress towards finer tests of the fundamental properties of neutrinos

Designed specifically to investigate the sterile-neutrino hypothesis, STEREO was positioned about 10 m from the ILL reactor core to measure the evolution of the antineutrino energy spectrum from ²³⁵U fission at short distances with high precision. Comprising six cells filled with gadolinium-doped liquid scintillator, positioned at different distances from the reactor core and each producing its own spectrum, the setup allows the hypothesis that ν̄e undergo a fast oscillation into a sterile state to be tested independently of the predicted shape of the emitted ν̄e spectrum.
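
For reference, the hypothesis being tested is the standard two-flavour short-baseline oscillation, in which the only experiment-dependent quantity is the baseline-to-energy ratio; comparing the six cell spectra with one another therefore removes the dependence on the absolute flux prediction.

```latex
% Electron-antineutrino survival probability at short baselines:
P(\bar{\nu}_e \to \bar{\nu}_e) \;\simeq\; 1 - \sin^2(2\theta_{ee})\,
  \sin^2\!\left( 1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]} \right)
```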

The measured antineutrino energy spectrum, based on 107,558 detected antineutrinos, suggests that the previously reported anomalies originate from biases in the nuclear experimental data used for the predictions, while rejecting the hypothesis of a light sterile neutrino with a mass of about 1 eV. “Our result supports the neutrino content of the Standard Model and establishes a new reference for the ²³⁵U antineutrino energy spectrum,” writes the team. “We anticipate that this result will allow progress towards finer tests of the fundamental properties of neutrinos but also to benchmark models and nuclear data of interest for reactor physics and for observations of astrophysical or geoneutrinos.”

Gallium remains

STEREO’s findings fit those reported recently by other neutrino-oscillation experiments. A 2021 analysis by the MicroBooNE collaboration at Fermilab, for example, favoured the Standard Model over the anomalous signal seen by the nearby MiniBooNE experiment, under the assumption that the latter was due to the existence of a non-standard neutrino. Yet the story of the sterile neutrino is not over. In 2022, new results from the Baksan Experiment on Sterile Transitions (BEST) further confirmed the deficit in the νe flux emitted from radioactive sources as seen by the SAGE and GALLEX experiments – the so-called gallium anomaly – which, if interpreted in the context of neutrino oscillations, is consistent with νe → νs oscillations with a relatively large squared mass difference and mixing angle.

“Under the sterile neutrino hypothesis, a signal in MicroBooNE, MiniBooNE or LSND would require the sterile neutrino to mix with both νe and νμ, whereas for the gallium anomaly, mixing with νe alone is sufficient,” explains theorist Joachim Kopp of CERN. “Even though the reactor anomaly seems to be resolved, we’d still like to understand what’s behind the others.” 

Deep learning for safer driving

How quickly can a computer make sense of what it sees without losing accuracy? And to what extent can AI tasks be performed on hardware with limited computing resources? Aiming to answer these and other questions, car-safety software company Zenseact, founded by Volvo Cars, sought out CERN’s unique capabilities in real-time data analysis to investigate applications of machine learning to autonomous driving.

In the future, self-driving cars are expected to considerably reduce the number of road-accident fatalities. To advance developments, in 2019 CERN and Zenseact began a three-year project to research machine-learning models that could enable self-driving cars to make better decisions faster. Carried out in an open-source software environment, the project’s focus was “computer vision” – an AI discipline dealing with how computers interpret the visual world and then automate actions based on that understanding.

“Deep learning has strongly reshaped computer vision in the last decade, and the accuracy of image-recognition applications is now at unprecedented levels. But the results of our research show that there’s still room for improvement when it comes to running the deep-learning algorithms faster and being more energy-efficient on resource-limited on-device hardware,” said Christoffer Petersson, research lead at Zenseact. “Simply put, machine-learning techniques might help drive faster decision-making in autonomous cars.” 

The need to react fast and make quick decisions imposes strict runtime requirements on the neural networks that run on embedded hardware in an autonomous vehicle. By compressing the neural networks – for example by using fewer parameters and fewer bits per parameter – the algorithms can be executed faster and use less energy. For this task, the CERN–Zenseact team chose field-programmable gate arrays (FPGAs) as the hardware benchmark. Used at CERN for many years, especially for trigger readout electronics in the large LHC experiments, FPGAs are configurable integrated circuits that can execute complex decision-making algorithms within microseconds. The main result of the FPGA experiment, says Petersson, was a practical demonstration that computer-vision tasks for automotive applications can be performed with high accuracy and short latency, even on a processing unit with limited computational resources. “The project clearly opens up for future directions of research. The developed workflows could be applied to many industries.”
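
As a minimal sketch of the kind of compression involved – not the project’s actual toolchain – the snippet below quantises a layer’s 32-bit floating-point weights to 8-bit integers plus a single scale factor, the basic trade of precision for speed and energy that FPGAs can exploit.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantisation of float32 weights to int8 plus a scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. to check the accuracy loss."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)   # toy layer weights
q, s = quantize_int8(w)
print(f"max quantisation error: {np.max(np.abs(dequantize(q, s) - w)):.4f}")
```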

The compression techniques in FPGAs elucidated by this project could also have a significant effect on “edge” computing, explains Maurizio Pierini of CERN: “Besides improving the trigger systems of ATLAS and CMS, future development of this research area could be used for on-site computation tasks, such as on portable devices, satellites, drones and obviously vehicles.”

ALICE looks through the Milky Way

Annihilation

Antinuclei can travel vast distances through the Milky Way without being absorbed, concludes a novel study by the ALICE collaboration. The results, published in December, indicate that the search for anti-³He nuclei in space is a highly promising way to probe dark matter.

First observed in 1965 in the form of the antideuteron at CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, antinuclei are exceedingly rare. Since they annihilate on contact with regular matter, no natural sources exist on Earth. However, light antinuclei have been produced and studied at accelerator facilities, including recent precision measurements of the mass difference between deuterons and antideuterons and between ³He and anti-³He nuclei by ALICE, and between the hypertriton and antihypertriton by the STAR collaboration at RHIC.

Antinuclei can in principle also be produced in space, for example in collisions between cosmic rays and the interstellar medium. However, the expected production rates are very small. A more intriguing possibility is that light antinuclei are produced by the annihilation of dark-matter particles. In such a scenario, the detection of antinuclei in cosmic rays could provide experimental evidence for the existence of dark-matter particles. Space-based experiments such as AMS-02 and PAMELA, along with the upcoming Antarctic balloon mission GAPS, are among the few experiments able to detect light antinuclei. But to be able to interpret future results, precise knowledge of the production and disappearance probabilities of antinuclei is vital.

The latter is where the new ALICE study comes in. The unprecedented energies of proton–proton and lead–lead collisions at the LHC produce, on average, as many nuclei as antinuclei. By studying how the rate of anti-³He nuclei changes as a function of the distance from the production point, the collaboration was able to determine the inelastic cross section, or disappearance probability, of anti-³He nuclei for the first time. These values were then used as input for astrophysics simulations.

Two models of the anti-³He flux expected near Earth after the nuclei’s journey from sources in the Milky Way were considered: one assumes that the sources are cosmic-ray collisions with the interstellar medium, the other annihilations of hypothetical weakly interacting massive particles (WIMPs). For each model, the Milky Way’s transparency to anti-³He – that is, its ability to let the nuclei through without being absorbed – was estimated. The WIMP dark-matter model led to a transparency of about 50%, whereas for the cosmic-ray model the transparency ranged from 25% to 90%, depending on the energy of the antinucleus. These values show that anti-³He nuclei originating from dark-matter annihilations or cosmic-ray collisions can travel distances of several kiloparsecs in the Milky Way without being absorbed, even from as far away as the galactic centre.
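
The transparency values quoted above come from full propagation simulations, but the attenuation relation behind them is simple. The sketch below evaluates it with placeholder numbers for the cross section, gas density and effective path length (diffusive propagation makes the effective path far longer than the straight-line distance); only the form of the expression, not the inputs, reflects the analysis.

```python
import math

def transparency(sigma_inel_barn: float, n_gas_cm3: float, path_kpc: float) -> float:
    """Survival fraction exp(-n * sigma * L) for antinuclei crossing interstellar gas."""
    barn_to_cm2 = 1e-24
    kpc_to_cm = 3.086e21
    tau = n_gas_cm3 * sigma_inel_barn * barn_to_cm2 * path_kpc * kpc_to_cm
    return math.exp(-tau)

# Placeholder inputs chosen only for illustration:
print(f"{transparency(sigma_inel_barn=1.5, n_gas_cm3=1.0, path_kpc=150):.2f}")
```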

“This new result illustrates the close connection between accelerator-based experiments and observations of particles produced in the cosmos,” says ALICE spokesperson Marco van Leeuwen. “In the near future, these studies will be extended to ⁴He and to the lower-momentum region with much larger datasets.”

How to find your feet in industry

The sixth annual LHC Career Networking Event, which took place at CERN on 21 November 2022, attracted more than 200 scientists and engineers (half in person) seeking to explore careers beyond CERN. Seven former members of the LHC-experiment collaborations and representatives from CERN’s knowledge transfer group discussed their experiences, good and bad, upon transitioning to the diverse employment world outside particle physics. Lively Q&A sessions and panel discussions enabled the audience to voice their questions and concerns. 

While the motivations for leaving academia expressed by the speakers differed according to their personal stories, common themes emerged. The long timescales of experimental physics, coupled with job instability and the glacial pace of funding cycles for new projects, sometimes led to demotivation, whereas the speakers found that industry had exciting shorter-term projects to explore. Several speakers sought a better work–life balance in subjects they could enthuse about, having previously experienced a sense of stagnation. Another factor related to that balance was a better ratio of salary to the performance expected and hours worked.

Case studies 

Caterina Deplano, formerly an ALICE experimentalist, and Giorgia Rauco, ex-CMS, described the personal constraints that led them to search for a job in the local area, and showed that this need not be a limiting factor. Both assessed their skills frankly and opted for further training in their target sectors: education and data science, respectively. Deplano’s path to teaching in Geneva led her to go back and study for four years, improving her French-language skills while obtaining a Swiss teaching qualification. The reward was apparent in the enthusiasm with which she talked about her students and her chosen career. Rauco explained how she came to contemplate life outside academia and talked participants through the application process, emphasising that finding the “right” employment fit had meant many months of work with frequent disappointments, the memory of which was erased by the final acceptance letter. Both speakers gave links to valuable resources for training and further education, and Rauco offered some top tips for prospective transitioners: be excited for what is coming next, start as soon as possible if you are thinking about changing, and don’t feel guilty about your choice.

Maria Elena Stramaglia, formerly ATLAS, described the anguish of deciding whether to stay in academia or go to industry, and her frank assessment of transferable skills weighed up against personal desires and her own work–life balance. Her decision to join Hitachi Energy was based on the right mix of personal and technical motivation, she said. In moving from LHCb to data science and management, Albert Puig Navarro joined a newly established department at Proton (the developers of ProtonMail, which was founded by former ATLAS members; CERN Courier September/October 2019 p53), in which he ended up being responsible for hiring a mix of data scientists, engineers and operations managers, conducting more than 200 interviews in the process. He discussed the pitfalls of over-confidence, the rather different requirements of the industrial sector, and the shift in motivations between pure science and industry. Cécile Deterre, a former ATLAS physicist now working on technology for sustainable fish farming, focussed on CV-writing for industrial job applications, during which she emphasised transferable skills and how to make your technical experience more accessible to future employers.

With one foot still firmly in particle physics, Alex Winkler, formerly CMS, joined a company that makes X-ray detectors for medical, security and industrial applications; in a serendipitous exception among the speakers, he described how he was head-hunted while contemplating life beyond CERN, and mentioned the novel pressures implicit in working in a for-profit environment. Massimo Marino, ex-ATLAS, gave a lively talk about his experiences in a number of diverse environments: Apple, the World Economic Forum and the medical and energy industries, to name a few. Having diverted along the way to write a series of books, he covered the personal challenges and expectations of different roles and environments over a long career.

Throughout the evening, which culminated in a panel session, participants had the opportunity to quiz the speakers about their sectors and the personal decisions and processes that led them there. Head of CERN Alumni Relations Rachel Bray also explained how the Alumni Network can help facilitate contact between current CERN members and their predecessors who have left the field. The interest shown by the audience and the detailed testimonials of the speakers demonstrated that this event remains a vital source of information and encouragement for those considering a career transition.

Physics is about principles, not particles

Last year marked the 10th anniversary of the discovery of the Higgs particle. Ten years is a short lapse of time when we consider the profound implications of this discovery. Breakthroughs in science mark a leap in understanding, and their ripples may extend for decades and even centuries. Take Kirchhoff’s blackbody proposal more than 150 years ago: a theoretical construction, an academic exercise that opened the path towards a quantum revolution, the implications of which we are still trying to understand today.

Imagine now the vast network of paths opened by ideas, such as emission theory, that came to no fruition despite their originality. Was pursuing these useful, or a waste of resources? Scientists would answer that the spirit of basic research is precisely to follow those paths with unknown destinations; it’s how humanity reached the level of knowledge that sustains modern life. As particle physicists, as long as the aim is to answer nature’s outstanding mysteries, the path is worth following. The Higgs-boson discovery is the latest triumph of this approach and, as with the quantum revolution, we are still working hard to make sense of it.

Particle discoveries are milestones in the history of our field, but they signify something more profound: the realisation of a new principle in nature. Naively, it may seem that the Higgs discovery marked the end of our quest to understand the TeV scale. The opposite is true. The behaviour of the Higgs boson, in the form in which it was initially proposed, does not make sense at a quantum level. As a fundamental scalar, it experiences quantum effects that grow with energy, doggedly pushing its mass towards the Planck scale. The Higgs discovery solidified the idea that gauge symmetries could be hidden, spontaneously broken by the vacuum. But it did not provide an explanation of how this mechanism makes sense with a fundamental scalar sensitive to mysterious phenomena such as quantum gravity.

Veronica Sanz

Now comes the hard part. Most of the ideas proposed during the past decades to make sense of the Higgs boson – supersymmetry being the most prominent – predicted that it would have an entourage of companion particles with electroweak or even strong couplings. Arguments of naturalness, whereby these companions should lie close by to prevent troublesome fine-tunings of nature, led to the expectation that discoveries would follow or even precede that of the Higgs. Ten years on, this wish has not been fulfilled. Instead, we are faced with a cold reality that can lead us to sway between attitudes of nihilism and hubris, especially when it comes to the question of whether particle physics has a future beyond the Higgs. Although these extremes do not apply to everyone, they are understandable reactions to viewing our field next to those with more immediate applications, or to the personal disappointment of a lifelong career devoted to ideas that were not chosen by nature.

Such despondence is not useful. Remember that the no-lose theorem we enjoyed when planning the LHC, i.e. the certainty that we would find something new, Higgs boson or not, at the TeV scale, was an exception to the rules of basic research. Currently, there is no no-lose theorem for the LHC, or for any future collider. But this is precisely the inherent premise of any exploration worth doing. After the incredible success we have had, we need to refocus and unify our discourse. We face the uncertainty of searching in the dark, with the hope that we will initiate the path to a breakthrough, still aware of the small likelihood that this actually happens. 

The no-lose theorem we enjoyed when planning the LHC was an exception to the rules of basic research

Those hopes are shared by wider society, which understands the importance of exploring big questions. From searching for exoplanets that may support life to understanding the human mind, few people assume these paths will lead to immediate results. The challenge for our field is to work out a coherent message that can enthuse people. Without straying far from collider physics, we could notice that there is a different type of conversation going on in the search for dark matter. Here, there is no no-lose theorem either, and despite the current exclusion of most vanilla scenarios, there is excitement and cohesion, which are effectively communicated. As for our critics, they should be openly confronted and viewed as an opportunity to build stronger arguments.

We have powerful arguments to keep delving into the smallest scales, with the unknown nature of dark matter, neutrinos and the matter–antimatter asymmetry the most well-known examples. As a field, we need to renew the excitement that led us where we are, from the shock of watching alpha particles bounce back from a thin gold sheet, to building a colossus like the LHC. We should be outspoken about our ambition to know the true face of nature and the profound ideas we explore, and embrace the new path that the Higgs discovery has opened. 

CLEAR highlights and goals

Particle accelerators have revolutionised our understanding of nature at the smallest scales, and continue to do so with facilities such as the LHC at CERN. Surprisingly, however, the number of accelerators used for fundamental research represents a mere fraction of the 50,000 or so accelerators currently in operation worldwide. Around two thirds of these are employed in industry, for example in chip manufacturing, while the rest are used for medical purposes, in particular radiotherapy. While many of these devices are available “off-the-shelf”, accelerator R&D in particle physics remains the principal driver of innovative, next-generation accelerators for applications further afield.

The CERN Linear Electron Accelerator for Research (CLEAR) is a prominent example. Launched in August 2017 (CERN Courier November 2017 p8), CLEAR is a user facility developed from the former CTF3 project, which was built to test technologies for the Compact Linear Collider (CLIC) – a proposed e⁺e⁻ collider at CERN that would follow the LHC. During the past five years, beams with a wide range of parameters have been provided to groups from more than 30 institutions across more than 10 nations.

CLEAR was proposed as a response to the low availability of test-beam facilities in Europe. In particular, there was very little time available to users on accelerators with electron beams of a few hundred MeV, as these tend to be used in dedicated X-ray light-source and other specialist facilities. CLEAR therefore serves as a unique facility for R&D towards a wide range of accelerator-based technologies in this energy range. Independent of CERN’s other accelerator installations, CLEAR has been able to provide beams for around 35 weeks per year since 2018, as well as during long shutdowns, even maintaining successful operation during the COVID-19 pandemic.

Flexible physics

As a relatively small facility, CLEAR operates in a flexible fashion. Operators can vary the range of beams available with relative ease by tailoring many different parameters, such as the bunch charge, length and energy, for each user. There is regular weekly access to the machine and, thanks to the low levels of radioactivity, it is possible to enter the facility several times per day to adjust experimental setups if needed. This flexibility, together with CLEAR’s location at the heart of CERN, has attracted an eager stream of users from day one.

CLEAR has attracted an eager stream of users from day one

Among the first was a team from the European Space Agency working in collaboration with the Radiation to Electronics (R2E) group at CERN. The users irradiated electronic components for the JUICE (Jupiter Icy Moons Explorer) mission with 200 MeV electron beams. Their experiments demonstrated that high-energy electrons trapped in the strong magnetic fields around Jupiter could induce faults, so-called single event upsets, in the craft’s electronics, leading to the development and validation of components with the appropriate radiation-hardness. The initial experiment has been built upon by the R2E group to investigate the effect of electron beams on electronics.

Inspecting beamline equipment

As the daughter of CTF3, CLEAR has continued to be used to test the key technological developments necessary for CLIC. There are two prototype CLIC accelerating structures in the facility’s beamline. Originally installed to test CLIC’s unique two-beam acceleration scheme, the structures have been used to study short-range “wakefield kicks” that can deflect the beam away from the planned path and reduce the luminosity of a linear collider. Additionally, prototypes of the high-resolution cavity beam position monitors, which are vital to measure and control the CLIC beam, have been tested, showing promising initial results.

One of the main activities at CLEAR concerns the development and testing of beam instrumentation. Here, the flexibility and the large beam-parameter range provided by the facility, together with easy access, especially in its dedicated in-air test station, have proven to be very effective. CLEAR covers all phases of the development of novel beam diagnostics devices, from the initial exploration of a concept or physical mechanism to the first prototyping and to the testing of the final instrument adapted for use in an operational accelerator. Examples are beam-loss monitors based on optical fibres, and beam-position and bunch-length monitors based on Cherenkov diffraction radiation under development by the beam instrumentation group at CERN.

Advanced accelerator R&D

There is a strong collaboration between CLEAR and the Advanced Wakefield Experiment (AWAKE), a facility at CERN used to investigate proton-driven plasma wakefield acceleration. In this scheme, which promises higher acceleration gradients than conventional radio-frequency accelerator technology and thus more compact accelerators, charged particles such as electrons are accelerated by forcing them to “surf” atop a longitudinal plasma wave that contains regions of positive and negative charges. Several beam diagnostics for the AWAKE beamline were first tested and optimised at CLEAR. A second phase of the AWAKE project, presently being commissioned for operation in 2026, requires a new source of electron beams to provide shorter, higher quality beams. Before its final installation in AWAKE, it is proposed to use this source to increase the range of beam parameters available at CLEAR.
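
The appeal of plasma acceleration can be seen from the standard estimate of the maximum field a cold plasma wave can sustain, which grows with the square root of the plasma electron density:

```latex
% Cold, non-relativistic wave-breaking field for electron density n_e:
E_{\mathrm{WB}} \;=\; \frac{m_e\, c\, \omega_p}{e} \;\simeq\; 96\,\sqrt{n_e\,[\mathrm{cm}^{-3}]}\;\;\mathrm{V/m}
% e.g. n_e = 10^{14}\,\mathrm{cm}^{-3} already corresponds to fields of order 1 GV/m,
% roughly an order of magnitude beyond the ~100 MV/m gradients of normal-conducting RF structures.
```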

Installation of novel microbeam position monitors

Further research into compact, plasma-based accelerators has been undertaken at CLEAR thanks to the installation of an active plasma lens on the beamline. Such lenses use gases ionised by very high electric currents to focus beams many orders of magnitude more strongly than can be achieved with conventional magnets. Previous work on active plasma lenses had shown that the focusing force was nonlinear and reduced the beam quality. However, experiments performed at CLEAR showed, for the first time, that by simply swapping the commonly used helium gas for a heavier gas like argon, a linear magnetic field could be produced and focusing could be achieved without reducing the beam quality (CERN Courier December 2018 p8).

Plasma acceleration is not the only novel accelerator technology that has been studied at CLEAR over the past five years. The significant potential of using accelerators to produce intense beams of radiation in the THz frequency range has also been demonstrated. Such light, on the boundary between microwaves and infrared, is difficult to produce, but has a variety of uses ranging from imaging and security scanning to the control of materials at the quantum level. Compact linear-accelerator-based sources of THz light could be advantageous compared with other sources as they tend to produce significantly higher photon fluxes. By using long trains of ultrashort, sub-ps bunches, it was shown at CLEAR that THz radiation can be generated through coherent transition radiation in thin metal foils, through coherent Cherenkov radiation, and through coherent “Smith–Purcell” radiation in periodic gratings. The peak power emitted in experiments at CLEAR was around 0.1 MW. However, simulations have shown that with relatively minor reductions in the length of the electron bunches it will be possible to generate a peak power of more than 100 MW.
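
The role of the bunch length can be seen from the standard coherent-emission argument: at wavelengths longer than the bunch, the fields of the electrons add in phase and the radiated power scales with the square of the number of particles. The toy numbers below (bunch charge and length) are illustrative assumptions, not CLEAR machine parameters.

```python
import math

def gaussian_form_factor_sq(freq_hz: float, sigma_t_s: float) -> float:
    """|F(omega)|^2 for a Gaussian bunch of rms duration sigma_t."""
    omega = 2.0 * math.pi * freq_hz
    return math.exp(-(omega * sigma_t_s) ** 2)

n_e = 1e9           # assumed electrons per bunch
freq = 1e12         # 1 THz
sigma_t = 100e-15   # assumed 100 fs rms bunch duration

# Total radiated power scales as N + N*(N-1)*|F|^2; the second (coherent) term dominates
# when the bunch is short compared with the wavelength of interest.
enhancement = (n_e - 1) * gaussian_form_factor_sq(freq, sigma_t)
print(f"coherent enhancement over single-electron emission: {enhancement:.1e}")
```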

FLASH forward

Advances in high-gradient accelerator technology for projects like CLIC (CERN Courier April 2018 p32) have led to a surge of interest in using electron beams with energies between 50 and 250 MeV to perform radiotherapy, one of the key tools used in the treatment of cancer. The use of so-called very-high-energy electron (VHEE) beams could provide advantages over existing treatment types. Of particular interest is using VHEE beams to perform radiotherapy at ultra-high dose rates, which could potentially generate the so-called FLASH effect in patients. Here, tumour cells are killed while sparing the surrounding healthy tissues, with the potential to significantly improve treatment outcomes.

FLASH radiotherapy

So far, CLEAR has been the only facility in the world studying VHEE radiotherapy and FLASH with 200 MeV electron beams. As such, there has been a large increase in beam-time requests in this field. Initial tests performed by researchers from the University of Manchester demonstrated that, unlike other types of radiotherapy beams, VHEE beams are relatively insensitive to inhomogeneities in tissue that typically result in less targeted treatment. The team, along with another from the University of Strathclyde, also looked at how focused VHEE beams could be used to further target doses inside a patient by mimicking the Bragg peak seen in proton radiotherapy. Experiments with the University Hospital of Lausanne to try to demonstrate whether the FLASH effect can be induced with VHEE beams are ongoing (CERN Courier January/February 2023 p8). 

Even if the FLASH effect can be produced in the lab, there are issues that need to be overcome to bring it to the clinic. Chief among them is the development of novel dosimetric methods. As CLEAR and other facilities have shown, conventional real-time dosimetric methods do not work at ultra-high dose rates. Ionisation chambers, the main pillar of conventional radiotherapy dosimetry, were shown to have very nonlinear behaviour at such dose rates, and recombination times that were too long. Due to this, CLEAR has been involved in the testing of modified ionisation chambers as well as other more innovative detector technologies from the world of particle physics for use in a future FLASH facility. 

High impact 

As well as being a test-bed for new technologies and experiments, CLEAR has provided an excellent training infrastructure for the next generation of physicists and engineers. Numerous master’s and doctoral students have spent a large portion of their time performing experiments at CLEAR, either as one-time users or long-term collaborators. Additionally, CLEAR is used for practical accelerator training for the Joint Universities Accelerator School.

Numerous master’s and doctoral students have spent time performing experiments at CLEAR

As in all aspects of life, the COVID-19 pandemic placed significant strain on the facility. The planned beam schedule for 2020 and beyond had to be scrapped as beam operation was halted during the first lockdown and external users were barred from travelling. However, through the hard work of the team, CLEAR was able to recover and run at almost full capacity within weeks. Several internal CERN users, many of whom were unable to travel to external facilities, were able to use CLEAR during this period to continue their research. Furthermore, CLEAR was involved in CERN’s own response to the pandemic by undertaking sterilisation tests of personal protective equipment.

Test-beam facilities such as CLEAR are vital for developing future physics technology, and the impact that such a small facility has been able to produce in just a few years is impressive. A variety of different experiments from several different fields of research have been performed, with many more that are not mentioned in this article. Unfortunately for the world of high-energy physics, the aforementioned shortage of accelerator test facilities has not gone away. CLEAR will continue to play its role in helping provide test beams, with operations due to continue until at least 2025 and perhaps long after. There is an exciting physics programme lined up for the next few years, featuring many experiments similar to those that have already been performed but also many that are new, to ensure that accelerator technology continues to benefit both science and society.

LHCb looks forward to the 2030s

LHCb Upgrade II detector

The LHCb collaboration is never idle. While building and commissioning its brand new Upgrade I detector, which entered operation last year with the start of LHC Run 3, planning for Upgrade II was already under way. This proposed new detector, envisioned to be installed during Long Shutdown 4 in time for High-Luminosity LHC (HL-LHC) operations continuing in Run 5, scheduled to begin in 2034/2035, would operate at a peak luminosity of 1.5 × 10³⁴ cm⁻² s⁻¹. This is 7.5 times higher than at Run 3 and would generate data samples of heavy-flavoured hadron decays six times larger than those obtainable at the LHC, allowing the collaboration to explore a wide range of flavour-physics observables with extreme precision. Unprecedented tests of the CP-violation paradigm (see “On point” figure) and searches for new physics at double the mass scales possible during Run 3 are among the physics goals on offer.

Attaining the same excellent performance as the original detector has been a pivotal constraint in the design of LHCb Upgrade I. While achieving the same in the much harsher collision environments at the HL-LHC remains the guiding principle for Upgrade II, the LHCb collaboration is investigating the possibilities to go even further. And these challenges need to be met while keeping the existing footprint and arrangement of the detector (see “Looking forward” figure). Radiation-hard and fast 3D silicon pixels, a new generation of extremely fast and efficient photodetectors, and front-end electronics chips based on 28 nm semiconductor technology are just a few examples of the innovations foreseen for LHCb Upgrade II, and will also set the direction of R&D for future experiments.

LHCb constraints

Rethinking the data acquisition, trigger and data processing, along with intense use of hardware accelerators such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), will be fundamental to managing an average data rate expected to be five times higher than in Upgrade I. The Upgrade II “framework technical design report”, completed in 2022, is also the first to consider the experiment’s energy consumption and greenhouse-gas emissions, as part of a close collaboration with CERN to define an effective environmental protection strategy.

Extreme tracking 

At the maximum expected luminosity of the HL-LHC, around 2000 charged particles will be produced per bunch crossing within the LHCb apparatus. Efficiently reconstructing these particles and their associated decay vertices in real time represents a significant challenge. It requires the existing detector components to be modified to increase the granularity, reduce the amount of material and benefit from the use of precision timing.

The future VELO will be a true 4D-tracking detector

The new Vertex Locator (VELO) will be based, as it was for Upgrade I (CERN Courier May/June 2022 p38), on high-granularity pixels operated in vacuum in close proximity to the LHC beams. For Upgrade II, the trigger and online reconstruction will rely on selecting events, or parts of events, with displaced tracks at the earliest stages of processing. The VELO must therefore be capable of independently reconstructing primary vertices and identifying displaced tracks, while coping with a dramatic increase in event rate and radiation dose. Excellent spatial resolution will not be sufficient, given the large density of primary interactions along the beam axis expected under HL-LHC conditions. A new coordinate – time – must be introduced. The future VELO will be a true 4D-tracking detector that includes timing information with a precision of better than 50 ps per hit, leading to a track time-stamp resolution of about 20 ps (see “Precision timing” figure).
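
The connection between the per-hit and per-track numbers follows from averaging independent hit times along a track; the assumption of roughly six uncorrelated hits per track is illustrative.

```python
import math

def track_time_resolution(sigma_hit_ps: float, n_hits: int) -> float:
    """Time-stamp resolution when n_hits independent hit times are averaged."""
    return sigma_hit_ps / math.sqrt(n_hits)

print(f"{track_time_resolution(50.0, 6):.0f} ps")   # ~20 ps, as quoted for the future VELO
```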

Precision timing

The new VELO sensors, which include 28 nm technology application-specific integrated circuits (ASICs), will need to achieve this time resolution while being radiation-hard. The important goal of a 10 ps time resolution has recently been achieved with irradiated prototype 3D-trench silicon sensors. Depending on the rate-capability of the new detectors, the pitch may have to be reduced and the material budget significantly decreased to reach comparable spatial resolution to the current Run 3 detector. The VELO mechanics have to be redesigned, in particular to reduce the material of the radio-frequency foil that separates the secondary vacuum – where the sensors are located – from the machine vacuum. The detector must be built with micron-level precision to control systematic uncertainties.

The tracking system will take advantage of a detector located upstream of the dipole magnet, the Upstream Tracker (UT), and of a detector made of three tracking stations, the Mighty Tracker (MT), located downstream of the magnet. In conjunction with the VELO, the tracking system ensures the ability to reconstruct the trajectory of charged particles bending through the detector due to the magnetic field, and provides a high-precision momentum measurement for each particle. The track direction is a necessary input to the photon-ring searches in Ring Imaging Cherenkov (RICH) detectors, which identify the particle species. Efficient real-time charged-particle reconstruction in a very high particle-density environment requires not only good detector efficiency and granularity, but also the ability to quickly reject combinations of hits not produced by the same particle. 

LHCb-dedicated high-voltage CMOS sensor

The UT and the inner region of the MT will be instrumented with high-granularity silicon pixels. The emerging radiation-hard monolithic active pixel sensor (MAPS) technology is a strong candidate for these detectors. LHCb Upgrade II would represent the first large-scale implementation of MAPS in a high-radiation environment, with the first prototypes currently being tested (see “Mighty pixels” figure). The outer region of the MT will be covered by scintillating fibres, as in Run 3, with significant developments foreseen to cope with the radiation damage. The availability of high-precision vertical-coordinate hit information in the tracking, provided for the first time in LHCb by pixels in the high-occupancy regions of the tracker, will be crucial to reject combinations of track segments or hits not produced by the same particle. To substantially extend the coverage of the tracking system to lower momenta, with consequent gains for physics measurements, the internal surfaces of the magnet side walls will be instrumented with scintillating bar detectors, the so-called magnet stations (MS). 

Extreme particle identification 

A key factor in the success of the LHCb experiment has been its excellent particle identification (PID) capabilities. PID is crucial to distinguish different decays with final-state topologies that are backgrounds to each other, and to tag the flavour of beauty mesons at production, which is a vital ingredient in many mixing and CP-violation measurements. For particle momenta from a few GeV/c up to 100 GeV/c, efficient hadron identification at LHCb is provided by two RICH detectors. Cherenkov light emitted by particles traversing the gaseous radiators of the RICHes is projected by mirrors onto a plane of photodetectors. To maintain Upgrade I performance, the maximum occupancy over the photodetector plane must be kept below 30%, the single-photon Cherenkov-angle resolution must be below 0.5 mrad, and the time resolution on single-photon hits should be well below 100 ps (see “RICH rewards” figure).
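
The quantity actually measured is the Cherenkov angle of the ring, which at fixed momentum depends on the particle’s mass; the 0.5 mrad target above is what keeps the pion, kaon and proton hypotheses separable up to about 100 GeV/c.

```latex
% Cherenkov angle in a radiator of refractive index n for a particle of velocity \beta:
\cos\theta_c = \frac{1}{n\,\beta}\,, \qquad \beta = \frac{p}{\sqrt{p^2 + m^2 c^2}}
% At a given momentum p the ring radius therefore differs between \pi, K and p,
% which is what allows the species to be distinguished.
```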

Photon hits on the RICH photodetector plane

Next-generation silicon photomultipliers (SiPMs) with improved timing and a pixel size of 1 × 1 mm², together with re-optimised optics, are deemed capable of delivering these specifications. The high “dark” rates of SiPMs, especially after elevated radiation doses, would be controlled with cryogenic cooling and neutron shielding. Vacuum tubes based on micro-channel plates (MCPs) are a potential alternative due to their excellent time resolution (30 ps) for single-photon hits and lower dark rate, but they suffer in high-rate environments. New eco-friendly gaseous radiators with a lower refractive index can improve the PID performance at higher momenta (above 80 GeV/c), but meta-materials such as photonic crystals are also being studied. In the momentum region below 10 GeV/c, PID will profit from TORCH – an innovative 30 m² time-of-flight detector consisting of quartz plates in which charged particles produce Cherenkov light. The light propagates by internal reflection to arrays of high-granularity MCP–PMTs optimised to operate at high rates, with a prototype already showing performance close to the target of 70 ps per photon.

Excellent photon and π⁰ reconstruction and e–π separation are provided by LHCb’s electromagnetic calorimeter (ECAL). But the harsh occupancy conditions of the HL-LHC demand the development of 5D calorimetry, which complements precise position and energy measurements of electromagnetic clusters with a time resolution of about 20 ps. The most crowded inner regions will be equipped with so-called spaghetti calorimeter (SPACAL) technology, which consists of arrays of scintillating fibres, made either of plastic or of garnet crystals, arranged along the beam direction and embedded in a lead or tungsten matrix. The less-crowded outer regions of the calorimeter will continue to be instrumented with the current “Shashlik” technology, with refurbished modules and increased granularity. A timing layer, either based on MCPs or on alternating tungsten and silicon-sensor layers placed within the front and back ECAL sections, is also a possibility to achieve the ultimate time resolution. Several SPACAL prototypes have already demonstrated that time resolutions down to an impressive 15 ps are feasible (see “Spaghetti calorimetry” image).

A SPACAL prototype being prepared for beam tests

The final main LHCb subdetector is the muon system, based on four stations of multiwire proportional chambers (MWPCs) interleaved with iron absorbers. For Upgrade II, it is proposed that MWPCs in the inner regions, where the rate will be as high as a few MHz/cm², are replaced with new-generation micro-pattern gaseous detectors, the micro-RWELL, a prototype of which has proved able to reach a detection efficiency of approximately 97% and a rate-capability of around 10 MHz/cm². The outer regions, characterised by lower rates, will be instrumented either by reusing a large fraction (95%) of the current MWPCs or by implementing other solutions based on resistive plate chambers or scintillating-tile-based detectors. As with all Upgrade II subdetectors, dedicated ASICs in the front-end electronics, which integrate fast time-to-digital converters or high-frequency waveform samplers, will be necessary to measure time with the required precision.

Trigger and computing 

The detectors for LHCb Upgrade II will produce data at a rate of up to 200 Tbit/s (see “On the up” figure), which for practical reasons needs to be reduced by four orders of magnitude before being written to permanent storage. The data acquisition therefore needs to be reliable, scalable and cost-efficient. It will consist of a single type of custom-made readout board combined with readily available data-centre hardware. The readout boards collect the data from the various subdetectors using the radiation-hard, low-power gigabit-transceiver links developed at CERN and transfer the data to a farm of readout servers via next-generation “PCI Express” connections or Ethernet. For every collision, the information from the subdetectors is merged by passing through a local area network to the builder server farm.

With up to 40 proton–proton interactions, every bunch crossing at the HL-LHC will contain multiple heavy-flavour hadrons within the LHCb acceptance. For efficient event selection, hits not associated with the proton–proton collision of interest need to be discarded as early as possible in the data-processing chain. The real-time analysis system performs reconstruction and data reduction in two high-level-trigger (HLT) stages. HLT1 performs track reconstruction and partial PID to apply inclusive selections, after which the data is stored in a large disk buffer while alignment and calibration tasks run in semi real-time. The final data reduction occurs at the HLT2 level, with exclusive selections based on full offline-quality event reconstruction. Starting from Upgrade I, all HLT1 algorithms are running on a farm of GPUs, which enabled, for the first time at the LHC, track reconstruction to be performed at a rate of 30 MHz. The HLT2 sequence, on the other hand, is run on a farm of CPU servers – a model that would be prohibitively costly for Upgrade II. Given the current evolution of processor performance, the baseline approach for Upgrade II is to perform the reconstruction algorithms of both HLT1 and HLT2 on GPUs. A strong R&D activity is also foreseen to explore alternative co-processors such as FPGAs and new emerging architectures.
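
Schematically, the two-stage reduction can be written as a simple pipeline. The sketch below is a toy illustration of the data flow only; the selection criteria, rates and interfaces are invented and do not correspond to LHCb software.

```python
import random

def hlt1(event) -> bool:
    """Partial reconstruction and inclusive selection: keep events with displaced tracks."""
    return event["displaced_tracks"] > 0

def hlt2(event) -> bool:
    """Full-quality reconstruction and exclusive selection of physics candidates."""
    return event["signal_candidate"]

def trigger_pipeline(events):
    disk_buffer = [e for e in events if hlt1(e)]        # HLT1 output buffered on disk
    # ... alignment and calibration run on the buffer in semi real-time ...
    return [e for e in disk_buffer if hlt2(e)]          # HLT2 writes the final sample

toy_events = [{"displaced_tracks": random.randint(0, 2),
               "signal_candidate": random.random() < 0.01} for _ in range(100_000)]
print(f"kept {len(trigger_pipeline(toy_events))} of {len(toy_events)} toy events")
```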

Real-time versus the start date of various high-energy physics experiments

The second computing challenge for LHCb Upgrade II derives from detector simulations. A naive extrapolation from the computing needs of the current detector implies that 2.5 million cores will be needed for simulation in Run 5, which is one order of magnitude above what is available with a flat budget assuming a 10% performance increase of processors per year. All experiments in high-energy physics face this challenge, motivating a vigorous R&D programme across the community to improve the processing time of simulation tools such as GEANT4, both by exploiting co-processors and by parametrising the detector response with machine-learning algorithms.
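
The scale of the gap can be illustrated with a one-line extrapolation; only the 10%-per-year performance gain and the 2.5 million-core requirement come from the text, while the starting capacity and timescale are placeholders chosen to reflect the stated order-of-magnitude shortfall.

```python
needed_cores = 2.5e6        # simulation requirement quoted for Run 5
growth_per_year = 1.10      # assumed 10% per-core performance gain per year
years_to_run5 = 12          # roughly from now until Run 5 (assumption)

capacity_now = 8.0e4        # placeholder present-day capacity in core-equivalents
capacity_run5 = capacity_now * growth_per_year ** years_to_run5
print(f"flat-budget capacity at Run 5: {capacity_run5:.1e} vs {needed_cores:.1e} needed")
```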

Intimately linked with digital technologies today are energy consumption and efficiency. Already in Run 3, the GPU-based HLT1 is up to 30% more energy-efficient than the originally planned CPU-based version. The data centre is designed for the highest energy-efficiency, resulting in a power usage that compares favourably with other large computing centres. Also for Upgrade II, special focus will be placed on designing efficient code and fully exploiting efficient technologies, as well as designing a compact data acquisition system and optimally using the data centre.

A flavour of the future 

The LHC is a remarkable machine that has already made a paradigm-shifting discovery with the observation of the Higgs boson. Exploration of the flavour-physics domain, which is a complementary but equally powerful way to search for new particles in high-energy collisions, is essential to pursue the next major milestone. The proposed LHCb Upgrade II detector will be able to accomplish this by exploring energy scales well beyond those reachable by direct searches. The proposal has received strong support from the 2020 update of the European strategy for particle physics, and the framework technical design report was positively reviewed by the LHC experiments committee. The challenges of performing precision flavour physics in the very harsh conditions of the HL-LHC are daunting, triggering a vast R&D programme at the forefront of technology. The goal of the LHCb teams is to begin construction of all detector components in the next few years, ready to install the new detector at the time of Long Shutdown 4.
