Special Topics in Accelerator Physics by Alexander Wu Chao introduces the global picture of accelerator physics, clearly establishing the scope of the book from the first page. The derivation and solution of the key equations are presented didactically throughout the chapters. Chao takes readers by the hand and guides them step-by-step through important formulae and their limitations, such that the reader does not miss the important parts – an extremely useful tactic for advanced master's or doctoral students whose topic of interest is among the eight special topics described.
In the first chapter, I particularly liked the way the author transitions from the Vlasov equation, a very powerful technique for studying beam–beam effects, towards the Fokker–Planck equation describing the statistical interaction of charged particles inside an accelerator. Chao pedagogically introduces the potential-well distortion, which is complemented by illustrations. The discussion on wakefield acceleration, taking readers deeper into the subject and extending it to both proton and electron beams, is timely. Extending the Fokker–Planck equation to 2D and 3D systems is particularly advanced but at the same time important. The author discusses the practical applications of the transient beam distribution in simple steps and introduces the higher-order moments later. The proposed exercises, for some of which solutions are provided, are practical as well.
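For orientation (a generic one-dimensional form, not a formula reproduced from the book), the Fokker–Planck equation supplements the Vlasov equation for the phase-space density ψ(q, p; t) with damping and diffusion terms:

```latex
\frac{\partial \psi}{\partial t} + \{\psi, H\}
  = \frac{\partial}{\partial p}\!\left( A\,p\,\psi + D\,\frac{\partial \psi}{\partial p} \right),
```

where A is a damping rate, D a diffusion coefficient, and the collisionless Vlasov limit is recovered for A = D = 0.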
In chapter two, the concept of symplecticity, the conservation of phase space (a subject that causes much confusion), is discussed with concrete examples. Naming issues are meticulously explained, such as using the term short-magnet rather than thin-lens approximation in formula 2.6. Symplectic models for quadrupole magnets are introduced, and the following discussion is extremely useful for students and accelerator physicists who will use symplectic codes such as MAD-X and who would like to understand the mathematical framework of their operation. This dovetails nicely with the next chapter, and the book offers useful insights into how these codes operate. In the discussion about third-order integration, Chao makes occasional mental leaps, which could be mitigated with an additional sentence. Although the discussion on higher-order and canonical integrators is rather specialised, it is still very useful.
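For readers who want the condition itself (a standard statement, not a formula quoted from the book): a transfer map with Jacobian matrix M is symplectic, and hence preserves phase-space area, when

```latex
M^{\mathsf T} J M = J, \qquad
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\quad \text{(one degree of freedom)}.
```

Symplectic tracking codes are constructed so that the maps they apply respect this relation, rather than satisfying it only approximately order by order.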
The author introduces the extremely convenient and broadly used truncated power series algebra (TPSA) technique, used to obtain maps, in chapter three. Chao explains in a simple manner the transition from the pre-TPSA algorithms (such as TRANSPORT or COSY) to symplectic algorithms such as MAD-X or PTC, as well as the reason behind this evolution. The clear “drawbacks” discussion is very useful in this regard.
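To convey the mechanics behind TPSA (a deliberately minimal first-order, one-variable sketch of my own, not code from the book or from MAD-X/PTC), one overloads arithmetic so that each quantity carries its value together with truncated derivatives; pushing such objects through the beamline maps yields the Taylor map directly:

```python
# Minimal illustration of the idea behind truncated power series algebra (TPSA):
# propagate a value together with its first-order derivative through a map,
# truncating all higher orders. Real TPSA codes work to much higher order and
# in many variables; this sketch only conveys the principle.

class TPS:
    """Truncated power series to first order: f = a0 + a1*dx."""
    def __init__(self, a0, a1=0.0):
        self.a0, self.a1 = a0, a1

    def __add__(self, other):
        other = other if isinstance(other, TPS) else TPS(other)
        return TPS(self.a0 + other.a0, self.a1 + other.a1)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, TPS) else TPS(other)
        # (a0 + a1 dx)(b0 + b1 dx) = a0 b0 + (a0 b1 + a1 b0) dx + O(dx^2)
        return TPS(self.a0 * other.a0, self.a0 * other.a1 + self.a1 * other.a0)

    __rmul__ = __mul__

# Track x = x0 + dx through a toy non-linear map x -> x + k*x**2
x = TPS(0.1, 1.0)          # value 0.1, unit derivative with respect to dx
k = 0.5
x_out = x + k * x * x
print(x_out.a0, x_out.a1)  # map value and its first-order (Jacobian) coefficient
```

Production TPSA packages apply exactly the same idea to high order and in all six phase-space variables.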
The transition to Lie algebra in chapter four is masterful and pedagogical. Lie algebras, which can be an advanced topic and come with many formulas, are the main focus of this section of the book. In particular, the non-linearity of the drift space, which is free of fields, should catch the reader's attention. This is followed by specialised applications for expert readers only. One of this chapter's highlights is the derivation of the sextupole pairing, which is complemented by that of Taylor maps up to second order and their Lie algebra, although it would be better if the “Our plan” section were placed at the beginning of the chapter.
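A standard way to see the point about the drift (not a derivation reproduced from the book): in accelerator coordinates, the exact map over a field-free drift of length L reads

```latex
x \;\mapsto\; x + \frac{L\,p_x}{\sqrt{(1+\delta)^2 - p_x^2 - p_y^2}},
```

which is manifestly non-linear in the momenta and only reduces to the familiar linear form in the paraxial limit.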
Chapter five covers proton-spin dynamics. Spinor formulas and the Froissart–Stora equation for the polarisation change are developed and explained. The Siberian snake remains one of the best-known techniques for preserving beam polarisation, and the author discusses it in detail. This links elegantly to chapter six, which introduces the reader to electron-spin dynamics, where synchrotron radiation is the dominant effect and therefore constitutes a completely different research area. Chao focuses on the differences between the quantum and classical approach to synchrotron radiation, a phenomenon that cannot be ignored in high-brightness machines. Analogies between protons and electrons are then very well summarised in the recap figure 6.3. Section 6.5 is important for storage rings and leads smoothly to the Derbenev–Kondratenko formula and its applications.
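For reference (the formula in its usual form, not a derivation from the book), the Froissart–Stora result for the polarisation surviving the crossing of an isolated resonance of strength ε at crossing speed α is

```latex
\frac{P_f}{P_i} = 2\exp\!\left(-\frac{\pi\,|\epsilon|^{2}}{2\,\alpha}\right) - 1,
```

so a fast crossing preserves the polarisation, while a slow, adiabatic crossing fully reverses it.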
Echoes
Chapter seven looks at echoes, a key technique when measuring diffusion in an accelerator, where the author introduces the reader to the generality of the term and the concept of echoes in accelerator physics. The treatment of transverse echoes (with and without diffusion) is quite analytical, and the figures are didactic.
The book concludes with a very complete, concise and detailed chapter about beam–beam effects, which acts as an introduction to collider–accelerator physics for coherent- and incoherent-effects studies. Although synchro-betatron couplings causing resonant instabilities are advanced topics, they are often seen in practice when operating the machines, and the book offers the theoretical background for a deeper understanding of these effects.
Special Topics in Accelerator Physics is well written and develops the advanced subjects in a comprehensive, complete and pedagogical way.
High-energy physics spans a wide range of energies, from a few MeV to TeV, that are all relevant. It is therefore often difficult to take all phenomena into account at the same time. Effective field theories (EFTs) are designed to break down this range of scales into smaller segments so that physicists can work in the relevant range. Theorists “cut” their theory’s energy scale at the order of the mass of the lightest particle omitted from the theory, such as the proton mass. Thus, multi-scale problems reduce to separate and single-scale problems (see “Scales” image). EFTs are today also understood to be “bottom-up” theories. Built only out of the general field content and symmetries at the relevant scales, they allow us to test hypotheses efficiently and to select the most promising ones without needing to know the underlying theories in full detail. Thanks to their applicability to all generic classical and quantum field theories, the sheer variety of EFT applications is striking.
In hindsight, particle physicists were working with EFTs from as early as Fermi's phenomenological picture of beta decay, in which a four-fermion vertex replaces the W-boson propagator because the momentum transfer is much smaller than the mass of the W boson (see “Fermi theory” image). Like so many profound concepts in theoretical physics, EFT was first considered in a narrow phenomenological context. One of the earliest instances was in the 1960s, when ad-hoc methods of current algebras were utilised to study weak interactions of hadrons. This required detailed calculations, and a simpler approach was needed to derive useful results. The heuristic idea of describing hadron dynamics with the most general Lagrangian density based on symmetries, the relevant energy scale and the relevant particles, which can be written in terms of operators multiplied by Wilson coefficients, was yet to be known. With this approach, it was possible to encode local symmetries in terms of the current algebra due to their association with conserved currents.
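The matching behind Fermi's picture fits in one line (standard textbook relations, added here for illustration): at momentum transfers q² ≪ M_W², the W propagator collapses to a point, leaving a contact interaction whose strength defines the Fermi constant,

```latex
\frac{g^{2}}{8}\,\frac{1}{q^{2}-M_{W}^{2}}
  \;\xrightarrow{\;q^{2}\ll M_{W}^{2}\;}\;
  -\frac{g^{2}}{8M_{W}^{2}}
  \;\equiv\; -\frac{G_{F}}{\sqrt{2}}.
```

In modern language, G_F is simply the Wilson coefficient of a dimension-six four-fermion operator, with the heavy W integrated out.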
For strong interactions, physicists described the interaction between pions with chiral perturbation theory, an effective Lagrangian that simplified current-algebra calculations and enabled the low-energy theory to be investigated systematically. This “mother” of modern EFTs describes the physics of hadrons and remains valid up to an energy scale of the order of the proton mass. Heavy-quark effective theory (HQET), introduced by Howard Georgi in 1990, complements chiral perturbation theory by describing the interactions of charm and bottom quarks. HQET allowed us to make predictions for B-meson decay rates, since the corrections could now be classified. The more powers of energy that are allowed, the more infinities appear; these infinities are cancelled by the available counter-terms.
Similarly, it is possible to regard the Standard Model as the truncation of a much more general theory including non-renormalisable interactions, which yield corrections of higher order in energy. This perception of the whole Standard Model as an effective field theory started to be formed in the late 1970s by Weinberg and others (see “All things EFT: a lecture series hosted at CERN” panel). Among the known corrections to the Standard Model that do not satisfy its approximate symmetries are neutrino masses, postulated in the 1960s and discovered via the observation of neutrino oscillations in the late 1990s. While the scope of EFTs was unclear initially, today we understand that all successful field theories, with which we have been working in many areas of theoretical physics, are nothing but effective field theories. EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments. The former is crucial for making accurate theoretical predictions, while the latter is central to the physics programme of CERN in general.
EFTs in particle physics
More than a decade has passed since the first run of the LHC, in which the Higgs boson and the mechanism for electroweak symmetry breaking were discovered. So far, there are no signals of new physics beyond the SM. EFTs are well suited to explore LHC physics in depth. A typical example of a process involving two scales is Higgs-boson production, because there is a factor of 10–100 between its mass and transverse momentum. The calculation of each Higgs-boson production process leads to large logarithms that can invalidate perturbation theory due to the large scale separation. This is just one of many examples of the two-scale problem that arises when the full quantum field theory approach for high-energy colliders is applied. Traditionally, such two-scale problems have been treated in the framework of QCD factorisation and resummation.
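Schematically (an illustrative structure rather than a quoted result), the perturbative series for such a two-scale observable takes the form

```latex
\sigma \;\sim\; \sigma_{0}\sum_{n}\alpha_{s}^{n}\sum_{m\leq 2n} c_{nm}\,L^{m},
\qquad L=\ln\frac{m_{H}^{2}}{p_{T}^{2}},
```

and once α_s L² is of order one, the logarithms must be resummed to all orders rather than computed term by term.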
Over the past two decades, it has been possible to recast two-scale problems at high-energy colliders with the advent of soft-collinear effective theory (SCET). SCET is nowadays a popular framework that is used to describe Higgs physics, jets and their substructure, as well as more formal problems, such as power corrections that eventually allow full amplitudes to be reconstructed. The difference between HQET and SCET is that SCET considers long-distance interactions between quarks and both soft and collinear particles, whereas HQET takes into account only soft interactions between a heavy quark and a parton. SCET is just one example where the EFT methodology has been indispensable, even though the underlying theory at much higher energies is known. Other examples of EFT applications include precision measurements of rare decays that can be described by QCD with its approximate chiral symmetry, or heavy quarks at finite temperature and density. EFT is also central to a deeper understanding of the so-called flavour anomalies, enabling comparisons between theory and experiment in terms of particular Wilson coefficients.
All things EFT: a lecture series hosted at CERN
A novel global lecture series titled “All things EFT” was launched at CERN in autumn 2020 as a cross-cutting online series focused on the universal concept of EFT and its application to the many areas where it is now used as a core tool in theoretical physics. Inaugurated with a formidable historical lecture by the late Steven Weinberg, who reviewed the emergence and development of the idea of EFT through to its perception nowadays as encompassing all of quantum field theory and beyond, the lecture series has amassed a large following that is still growing. The series has featured outstanding speakers, world-leading experts from cosmology to fluid dynamics, condensed-matter physics, classical and quantum gravity, string theory, and of course particle physics – the birthing bed of the powerful EFT framework. The second year of the series was kicked off with a lecture dedicated to the memory of Weinberg by Howard Georgi, who looked back on the development of heavy-quark effective theory and its immediate aftermath.
Moreover, precision measurements of Higgs and electroweak observables at the LHC and future colliders will provide opportunities to detect new-physics signals, such as resonances in invariant-mass plots, or small deviations from the SM seen, for instance, in tails of distributions at the HL-LHC – testing the perception of the SM as a low-energy incarnation of a more fundamental theory being probed at the electroweak scale. This is dubbed the SMEFT (SM EFT) or HEFT (Higgs EFT), depending on whether the Higgs fields are expressed in terms of the Higgs doublet or the physical Higgs boson. This particular EFT framework has recently been implemented in the data-analysis tools at the LHC, enabling analyses across different channels and even different experiments (see “LHC physics” image). At the same time, the study of SMEFT and HEFT has sparked a plethora of theoretical investigations that have uncovered remarkable underlying features, for example allowing the EFT to be extended or placing constraints on the EFT coefficients from Lorentz invariance, causality and analyticity.
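In its generic form (written here for orientation; the single dimension-five operator responsible for neutrino masses is omitted), the SMEFT supplements the SM Lagrangian with towers of higher-dimensional operators suppressed by powers of a new-physics scale Λ:

```latex
\mathcal{L}_{\mathrm{SMEFT}}
  = \mathcal{L}_{\mathrm{SM}}
  + \sum_{i}\frac{C_{i}^{(6)}}{\Lambda^{2}}\,\mathcal{O}_{i}^{(6)}
  + \sum_{j}\frac{C_{j}^{(8)}}{\Lambda^{4}}\,\mathcal{O}_{j}^{(8)}
  + \cdots
```

The Wilson coefficients C_i are what the combined LHC analyses constrain: small deviations in the tails of distributions translate directly into bounds on C_i/Λ².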
EFTs in gravity
Since the inception of EFT, it was believed that the framework is applicable only to the description of quantum field theories capturing the physics of elementary particles at high energy scales or, equivalently, at very small length scales. Thus, EFT seemed mostly irrelevant for gravitation, for which we still lack a full theory valid at quantum scales. The only way in which EFT seemed to be pertinent for gravitation was to think of general relativity as a first approximation to an EFT description of quantum gravity, which indeed provided a new EFT perspective at the time. However, in the past decade it has become widely acknowledged that EFT provides a powerful framework to capture gravitational dynamics occurring entirely at large, classical length scales, as long as these scales display a clear hierarchy.
The most notable application to such classical gravitational systems came when it was realised that the EFT framework would be ideal to handle gravitational radiation emitted at the inspiral phase of a binary of compact objects, such as black holes. At this phase in the evolution of the binary, the compact objects are moving at non-relativistic velocities. Using the small velocity as the expansion parameter exhibits the separation between the various characteristic length scales of the system. Thus, the physics can be treated perturbatively. For example, it was found that even couplings manifestly change in classical systems across their characteristic scales, which was previously believed to be unique to quantum field theories. The application of EFT to the binary inspiral problem has been so successful that the precision frontier has been pushed beyond the state of the art, quickly surpassing the reach of work that has been focused on the two-body problem for decades via traditional methods in general relativity.
This theoretical progress has made an even broader impact since the breakthrough direct discovery of gravitational waves (GWs) was announced in 2016. An inspiraling binary of black holes merged into a single black hole in less than a split second, releasing an enormous amount of energy in the form of GWs, which instigated even greater, more intense use of EFTs for the generation of theoretical GW data. In the coming years and decades, a continuous increase in the quantity and quality of real-world GW data is expected from the rapidly growing worldwide network of ground-based GW detectors, and future space-based interferometers, covering a wide range of target frequencies (see “Next generation” image).
EFTs in cosmology
Cosmology is inherently a cross-cutting domain, spanning roughly 60 orders of magnitude in scale, from the Planck length to the size of the observable universe. As such, cosmology generally cannot be expected to be tackled directly by each of the fundamental theories that capture particle physics or gravity. The correct description of cosmology relies heavily on work in many disparate areas of research in theoretical and experimental physics, including particle physics and general relativity among many more.
The development of EFT applications in cosmology – including EFTs of inflation, dark matter, dark energy and even EFTs of large-scale structure – has become essential to make observable predictions in cosmology. The discovery of the accelerated expansion of the universe in 1998 shows our difficulty in understanding gravity in both the quantum and the classical regime. The cosmological-constant problem and the dark-matter paradigm might be hints of alternative theories of gravity at very large scales. Indeed, the problems with gravity in the very-high and very-low energy ranges may well be tied together. The science programme of next-generation large surveys, such as ESA's Euclid satellite (see “Expanding horizons” image), relies heavily on all these EFT applications for the exploitation of the enormous amount of data that is going to be collected to constrain unknown cosmological parameters, thus helping to pinpoint viable theories.
The future of EFTs in physics
The EFT framework plays a key role at the exciting and rich interface between theory and experiment in particle physics, gravity and cosmology, as well as in other domains, such as condensed-matter physics, which were not covered here. The technology for precision measurements in these domains is constantly being upgraded, and in the coming years and decades we are heading towards a growing influx of real-world data of higher quality. Future particle-collider projects, such as the Future Circular Collider at CERN or China's Circular Electron Positron Collider, are being planned and developed. Precision cosmology is also thriving, with an upcoming next generation of very large surveys, such as the ground-based LSST or the space-based Euclid. GW detectors keep improving and multiplying, and besides those currently operating many more are planned, aimed at measuring various frequency ranges, which will enable a richer array of sources and events to be found.
EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments across all domains of physics
Half a century after the concept formally emerged, effective field theory is still full of surprises. Recently, the physical space of EFTs has been studied as a fundamental entity in its own right. These studies, by numerous groups worldwide, have exposed a hidden “totally positive” geometric structure, dubbed the EFT-hedron, that constrains the EFT expansion in any quantum field theory, and even string theory, from first principles such as causality, unitarity and analyticity, which must be satisfied by the amplitudes of these theories. This recent formal progress reflects the ultimate leap in the perception of EFT nowadays as the most fundamental and most generic theory concept to capture the physics of nature at all scales. Clearly, in the vast array of formidable open questions in physics that still lie ahead, effective field theory is here to stay – for good.
The discovery of the Higgs boson at the LHC in 2012 changed the landscape of high-energy physics forever. After just a few short years of data-taking by the ATLAS and CMS experiments, this last piece of the Standard Model (SM) was proven to exist. Since then, the Higgs sector has been studied using a rapidly growing dataset and, so far, all measurements agree with the SM predictions within the experimental uncertainties. In parallel, a comprehensive programme of searches for beyond-SM processes has been carried out, resulting in strong constraints on new physics. A harvest of precise measurements of a large variety of processes, confronted with state-of-the-art theoretical predictions, has further supported the SM. However, the theory lacks explanations for, among others, the nature of dark matter, the cosmological baryon asymmetry and neutrino masses. Importantly, the Higgs sector is related to “naturalness” problems that suggest the existence of new physics at the TeV scale, which the LHC can probe.
The high-luminosity phase of the LHC (HL-LHC) will provide an order of magnitude more data starting from 2029, allowing precision tests of the properties of the Higgs boson and improved sensitivity to a wealth of new-physics scenarios. The HL-LHC will deliver to each of the ATLAS and CMS experiments approximately 170 million Higgs bosons and 120,000 Higgs-boson pairs over a period of about 10 years. Extrapolating Run 2 results to the HL-LHC dataset shows that the precision of most Higgs-boson coupling measurements will improve: 2–4% precision on the couplings to W, Z and third-generation fermions, and approximately 50% precision on the self-coupling by combining the ATLAS and CMS datasets. The larger dataset will also give improved sensitivity to rare vector-boson scattering processes that will offer further insights into the Higgs sector.
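The quoted Higgs yield can be checked with back-of-the-envelope arithmetic. The inputs below are illustrative assumptions, not official ATLAS/CMS projection numbers: roughly 3000 fb⁻¹ of integrated luminosity per experiment over the HL-LHC programme and an inclusive Higgs production cross section of order 57 pb at 14 TeV.

```python
# Rough check of the quoted ~170 million Higgs bosons per experiment.
lumi_fbinv = 3000.0          # assumed integrated luminosity [fb^-1]
sigma_pb = 57.0              # assumed inclusive Higgs cross section [pb]
sigma_fb = sigma_pb * 1e3    # 1 pb = 1000 fb
n_higgs = sigma_fb * lumi_fbinv
print(f"{n_higgs:.2e} Higgs bosons")   # ~1.7e8, i.e. about 170 million
```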
These precision measurements could reveal discrepancies with the SM predictions, which in turn could inform us about the energy scale of beyond-SM physics. In addition to improving SM measurements, the upgraded detectors and trigger systems being developed and constructed for the HL-LHC era will enable direct searches to better target new physics with challenging signatures. To achieve these goals, it will be essential to achieve a detailed understanding of the detector performance as well as to measure the integrated luminosity of the collected dataset to 1% precision.
Rising to the challenge
To cope with the increased number of interactions when proton bunches collide at the HL-LHC, the ATLAS collaboration is working hard to upgrade its detectors with state-of-the-art instrumentation and technologies. These new detectors will need to cope with challenging radiation levels, higher data rates and an extreme high-occupancy environment with up to 200 proton–proton interactions per bunch crossing (see “Pileup” figure). Upgrades will include changes to the trigger and data-acquisition systems, a completely new inner tracker, as well as a new silicon timing detector (see “ATLAS Phase II” figure).
The trigger and data-acquisition system will need to cope with a readout rate of 1 MHz, which is about 10 times higher than today. To achieve this, ATLAS will use a new architecture with a level-0 trigger (the first-level hardware trigger) based on the calorimeter and muon systems. Building on the upgrades for Run 3, which started in July 2022, the calorimeter will include capabilities for triggering at higher pseudorapidity, up to |η| = 4. During HL-LHC running, the global trigger system will be required to handle 50 Tb/s of input and to decide within 10 μs whether each event should be recorded or discarded, allowing more sophisticated algorithms to be run online for particle identification. All the detectors will require substantial upgrades to handle the higher accept rates from the trigger.
The readout electronics for the electromagnetic, forward and hadronic end-cap liquid-argon calorimeters, along with the hadronic tile calorimeter, will be replaced. The full calorimeter systems, segmented into 192,320 cells that are read out individually, will be read out for every bunch crossing at the full 40 MHz to provide full-granularity information to the trigger. This will require changes to both front-end electronics and off-detector components.
The muon system will also see significant upgrades to the on-detector electronics of the resistive plate chambers (RPCs) and thin-gap chambers (TGCs) responsible for triggering on muons, as well as the muon drift tubes (MDTs) responsible for measuring the curvature of the tracks precisely. The MDTs will also be used for the first time in the level-0 trigger decisions. These improvements will allow all data to be sent to the back-end at 40 MHz, removing the need for readout buffers on the detector itself. All hits in the detector will be used to perform trigger logic in hardware using field-programmable gate arrays. Additional improvements to increase the trigger acceptance for muons will come in the form of a new layer of RPCs to be installed in the inner barrel layer, along with new MDTs in the small sectors. The Muon New Small Wheel system, containing both triggering and precision tracking chambers, was installed during Long Shutdown 2 (LS2) from 2019 to 2022 and is located inside the end-cap toroid magnet. Additional RPC upgrades were also made in the barrel leading up to Run 3, and the TGCs will be upgraded in the endcap region of the muon system during LS3.
State-of-the-art tracking
The success of the research programme at the HL-LHC will strongly rely on the tracking performance, which in turn determines the ability to efficiently identify hadrons containing b and c quarks, in addition to tau and other charged leptons. Reconstructing individual particles in the HL-LHC collision environment, with thousands of charged particles being produced within a region of about 10 cm, will be very challenging. The entire tracking system, presently consisting of pixel and strip detectors and the transition radiation tracker, will be replaced by a new all-silicon pixel and strip tracker – the ITk. This will feature higher granularity, increased radiation hardness and readout electronics that allow higher data rates and a longer trigger latency. The new pixel detector will also extend the pseudorapidity coverage in the forward region from |η| < 2.5 to |η| < 4, increasing the acceptance for important physics processes like vector-boson fusion (see “Pixel perfection” image).
The ITk will comprise nine barrel layers, positioned at radii from 33 mm out to 1 m from the beam line, plus end-cap rings. It will be much more complex than the present ATLAS tracker, featuring 10 times the number of strip channels and 60 times the number of pixel channels. The strip detectors will cover a total surface of 160 m² with 60 million readout channels, and the pixels an area of 13 m² with more than five billion readout channels. The innermost layer will be populated with radiation-hard 3D sensors, with pixel cells of 25 × 100 µm² in the barrel part and 50 × 50 µm² in the forward parts for improved tracking capabilities in the central and forward regions. Prototypes of the end-cap ring for the inner system and of the strip barrel stave are at an advanced stage (see “ITk prototyping” image). A unique feature of the trackers at the HL-LHC is that they will be operated for the first time with a serial powering scheme, in which a chain of modules is powered by a constant current. If the modules were to be powered in parallel, the high total current would lead to either high power losses or a large mass of cables within the volume of the detector, which would impact the tracking performance.
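The pixel-channel figure quoted above can be cross-checked with simple arithmetic (a back-of-the-envelope estimate using only the numbers in the text): both quoted cell sizes cover the same area, since 25 × 100 µm² = 50 × 50 µm² = 2500 µm².

```python
# Rough cross-check of the quoted "more than five billion" pixel channels.
pixel_area_m2 = 2500e-12     # one pixel cell (2500 um^2) in m^2
sensor_area_m2 = 13.0        # quoted total pixel-sensor area
n_pixels = sensor_area_m2 / pixel_area_m2
print(f"{n_pixels:.1e} pixels")   # ~5.2e9, consistent with the quoted figure
```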
Given the challenging conditions posed by the HL-LHC, ATLAS will construct a novel precision-timing silicon detector, the High-Granularity Timing Detector (HGTD), which provides a time resolution of 30 to 50 ps for charged particles. The detector will cover a pseudorapidity range of 2.4 < |η| < 4 and will comprise two double-sided silicon layers on each side of ATLAS with a total active area of 6.4 m². The precise timing information will allow the collaboration to disentangle proton–proton interactions in the same bunch crossing in the time dimension, complementing the impressive spatial resolution of the ITk. Low-gain avalanche diodes (see “Clocking tracks” image) provide timing information that can be associated with tracks in the forward regions, where they are more difficult to assign to individual interactions using spatial information. With a timing resolution six times smaller than the temporal spread of the beam spot, tracks emanating from collisions occurring very close in space but well-separated in time can be distinguished. This is particularly important in the forward region, where reduced longitudinal impact-parameter resolution limits the performance.
Building upon the insertable B-layer cooling system used since the start of Run 2, and to reduce the material budget, ATLAS will use a two-phase CO₂ cooling system for the entire silicon ITk and HGTD detectors. This will allow the detectors to be cooled to around –35 °C during the entire lifetime of the HL-LHC. The low temperature is required to protect the silicon sensors from the expected high radiation dose received during their lifetime. Two-phase CO₂ cooling is an environmentally friendly option compared to other suitable coolants. It provides high heat transfer at reasonable flow parameters, a low viscosity (thus reducing the material used in the detector construction) and a well-suited temperature range for detector operations.
Luminous future
Precise knowledge of the luminosity is key for the ATLAS physics programme. To reach the goal of percent-level precision at the HL-LHC, ATLAS will upgrade the LUCID (Luminosity Cherenkov Integrating Detector) detector, a luminometer that is sensitive to charged particles produced at the interaction point. This is incredibly challenging given the number of interactions expected to be delivered by the machine, and the requirements on radiation hardness and long-term stability for the lifetime of the experiment. The HGTD will also provide online luminosity measurements on a bunch-by-bunch basis, and additional detector prototypes are being tested to provide the best possible precision for luminosity determination during HL-LHC running. Luminometers in ATLAS provide luminosity monitoring to the LHC every one to two seconds, which is required for efficient beam steering, machine optimisation and fast checking of running conditions. In the forward region, the zero-degree calorimeter, which is particularly important for determining the centrality in heavy-ion collisions, is also being redesigned for HL-LHC running.
The HL-LHC will deliver luminosities of up to 7.5 × 10³⁴ cm⁻²s⁻¹, and ATLAS will record data at a rate 10 times higher than in Run 2. The ability to process and analyse these data depends heavily on R&D in software and computing, to make use of resource-efficient storage solutions and opportunities that paradigm-shifting improvements like heterogeneous computing, hardware accelerators and artificial intelligence can bring. This is needed to simulate and process the high-occupancy HL-LHC events, but also to provide a better theoretical description of the kinematics.
New era
The Phase-II upgrade projects described are only possible through collaborative efforts between universities and laboratories across the world. The research teams are currently working intensely to finalise the designs, establish the assembly and testing procedures, and in some cases start construction. They will all be installed and commissioned during LS3 in time for the start of Run 4, currently planned for 2029.
To cope with the increased number of interactions when proton bunches collide at the HL-LHC, the ATLAS collaboration is working hard to upgrade its detectors with state-of-the-art instrumentation and technologies
The HL-LHC will provide an order of magnitude more data recorded with a dramatically improved ATLAS detector. It will usher in a new era of precision tests of the SM, and of the Higgs sector in particular, while also enhancing sensitivity to rare processes and beyond-SM signatures. The HL-LHC physics programme relies on the successful and timely completion of the ambitious detector upgrade projects, pioneering full-scale systems with state-of-the-art detector technologies. If nature is harbouring physics beyond the SM at the TeV scale, then the HL-LHC will provide the chance to find it in the coming decades.
The High-Luminosity LHC (HL-LHC), due to start operations in 2029, will deliver about 10 times more data than has been accumulated during the previous LHC runs. The CMS collaboration is getting ready to profit from sub-percent precision on many Standard Model (SM) processes and to probe physics beyond the SM, both directly and through studies of higher-order effective operators. Studying rare processes, such as double-Higgs production, rare tau-lepton decays and Higgs couplings to second-generation fermions, will also be a central part of the programme. New ideas will certainly lead to improvements beyond the statistical scaling of uncertainties, bringing us closer to observing these rare processes. While high-precision tests of the SM will surely be the ultimate legacy of the LHC experiments, CMS will also keep searching for clear signs of new physics by investigating the many signatures accessible at the HL-LHC.
To exploit the HL-LHC physics potential, the CMS collaboration is building an optimised detector that pushes technologies to new heights. This major “Phase II” upgrade will enable the subdetectors to sustain the increased luminosity, which results in greater radiation damage and higher particle rates – the innermost pixel layer, for example, will see three billion hits per second per square centimetre. The CMS tracker and the calorimeter endcap will be replaced, a new minimum-ionising-particle precision timing detector (MTD) and a new luminosity detector will be installed, almost all of the existing electronics will be replaced, and additional muon forward stations will be mounted.
High granularity
The key to achieving the necessary HL-LHC performance is to enhance the granularity of the detector significantly. This reduces the maximum occupancy per readout cell while considerably increasing the readout bandwidth and processing power of the trigger system, thereby fully exploiting the higher collision rates. As a novelty, all CMS detector designs are tuned to allow full particle-flow reconstruction at the hardware-based level-1 trigger (operating at 40 MHz), while precision timing information, which contributes to the high-level-trigger decision, is exploited by highly optimised software mostly running on graphics processing units.
CMS is currently transitioning from the prototyping to the production phase on several major items. The novel gas electron multiplier (GEM) detector concept, used to detect muons produced in the very forward region, was deployed for the first time on a large scale during long shutdown 2 (LS2): 144 chambers in the first station are fully integrated into the ongoing data taking and the second station will be fully installed in a year-end technical stop before LS3 (see “GEM of a detector” image). Finishing endcap-muon upgrades in advance of LS3 allows the collaboration to minimise the repositioning of the CMS disks during LS3 and to reduce its overall duration. In this spirit, CMS has already finished the replacement of all front-end electronics of the cathode strip chambers. The replacement of the drift-tube electronics in the barrel muon detectors will take place in LS3, and an installed small-scale drift-tube demonstrator is already proving its performance.
The exceptional performance of the current all-silicon tracker provides a solid platform for even further improvements. A main novelty for Phase II is the level-1 track trigger, which reconstructs tracks with transverse momentum above 2 GeV, made available at a rate of 40 MHz. Profiting from the experience with pixels from Phase I, the whole Phase II tracker will use two-phase CO₂ cooling, ultra-lightweight mechanics, DC-DC converters for the powering of the outer tracker, and serial powering for the pixel system, thereby reducing the amount of material by a factor of two compared to today. To reduce the occupancies expected at the highest foreseeable number of collisions per bunch crossing (pile-up), the outer-tracker channel count will increase from 9 million strips to 42 million strips plus 170 million macro-pixels, providing unambiguous z-position measurements. With six barrel layers and five double-disks per endcap, the outer tracker is optimised not only for standalone tracking but also for vertexing, a prerequisite for the track trigger (see “Outer tracker” image).
The outer tracker is already in production, having overcome most engineering and prototyping challenges. ASICs (application-specific integrated circuits) and sensors are being delivered and the order for the hybrids (which host integrated circuits and connections in the front-end modules) has been submitted. The inner tracker (pixel system) will feature two billion micro-pixels, compared to the 125 million at present. Four barrel layers plus 12 disks per endcap enable excellent track seeding and b-quark jet identification over the pseudorapidity range |η| < 4 (much broader than today’s |η| < 2.5). The inner tracker system aspects are understood, sensors will be ordered soon, and teams are waiting for the final readout ASIC to begin module production.
A new era of calorimetry
The high-granularity calorimeter (HGCAL) in the forward region starts a new era of calorimetry. It is a radiation-tolerant 5D imaging calorimeter with spatial, energy and precision-timing information (see “HGCAL on display” and “High-granularity calorimetry” images). The deployment of machine-learning algorithms will further enhance its potential to establish the HGCAL as a blueprint for future calorimeters. The HGCAL has 6.4 million channels, two orders of magnitude more than the current endcap calorimeters, including both silicon cells (with an area of 0.5 or 1 cm²) and scintillator tiles (4 to 32 cm²) read out by silicon photomultipliers (SiPMs). The electromagnetic section consists of 26 active layers of silicon sensors interleaved with copper, copper-tungsten and lead absorbers. It is followed by the hadronic section, which is made of 21 active layers of silicon and scintillator tiles, separated by steel absorbers. All in all, 600 m² are equipped with silicon sensors (three times the area of the tracker) and 400 m² with SiPMs-on-tiles.
The hexagonal sensors and modules, mimicking honeycomb structures, are a design feature of the HGCAL. The hexagonal shape makes optimal use of the circular silicon wafer, and is therefore cost-effective. With the sensor design and evaluation completed, mass production has started. The development of silicon sensors that are sufficiently radiation-tolerant for HL-LHC conditions, for both the HGCAL and the tracker, has taken considerable effort. It is also worth noting the use, for the first time in particle physics, of passive sensors processed on 8-inch wafers – another essential feature for covering larger areas at an affordable cost.
The electromagnetic calorimeter (ECAL) barrel and its 61,200 lead-tungstate crystals, which were instrumental in the discovery of the Higgs boson via its decay to two photons, will be retained and equipped with new front-end electronics capable of sustaining the higher rates and trigger latency (see “ECAL upgrades” image). Faster signal processing will result in a much-improved time resolution of 30 ps for high-energy photons (and electrons), enabling precise primary-vertex determination.
The novel track trigger and the much-improved cell granularity allow the full implementation of the particle-flow reconstruction (based on field-programmable gate arrays, FPGAs) already in the level-1 trigger. The single-crystal granularity that will be provided by the ECAL barrel, even at the trigger level, together with the 3D imaging features of the HGCAL, provides crucial information to precisely follow the path of all particles through the entire detector. This opens the possibility of establishing a full menu of cross-particle triggers and of using FPGA-based machine learning at the level-1 trigger. Trigger algorithms have been prototyped in FPGAs and demonstrated in a multi-board “slice test”. In order to efficiently process the 63 Tb/s input bandwidth, the system is equipped with 250 FPGAs, with an output rate of 750 kHz and a latency of 12.5 μs.
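The quoted trigger figures imply the following orders of magnitude (straightforward arithmetic on the numbers above, not additional CMS specifications):

```python
# Simple arithmetic on the level-1 trigger figures quoted in the text.
bunch_crossing_rate = 40e6    # crossings inspected per second [Hz]
input_bandwidth = 63e12       # trigger input bandwidth [bit/s]
output_rate = 750e3           # accepted-event rate [Hz]
latency = 12.5e-6             # decision latency [s]

bits_per_crossing = input_bandwidth / bunch_crossing_rate   # ~1.6 Mbit per crossing
accept_fraction = output_rate / bunch_crossing_rate         # ~2% of crossings kept
crossings_buffered = latency * bunch_crossing_rate          # ~500 crossings in flight

print(f"{bits_per_crossing/1e6:.2f} Mbit/crossing, "
      f"{accept_fraction:.1%} kept, "
      f"{crossings_buffered:.0f} crossings buffered")
```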
Bright times ahead
For the luminosity measurement, CMS is following a strategy analogous to the one for the trigger, exploiting data from various subdetectors with the ambitious goal of 1% offline (2% online) uncertainty. Achieving this precision requires an understanding of the detector systematic effects, such as linearity and stability, at the per-mille level. A dedicated silicon-pad-based luminometer with an asynchronous readout will help in quantifying systematic uncertainties and beam-related backgrounds, and will play an essential role in the CMS (and LHC accelerator) commissioning.
CMS will enter further uncharted territory in the precision-timing domain with the MTD. For the barrel system, the challenge is to adapt the cost-effective LYSO crystal + SiPM technology, similar to that used in PET scanners, to sustain the HL-LHC rates and radiation. An interesting detail is the use of micro-Peltier elements (thermo-electric coolers) to further decrease the local temperature on the SiPMs, thus counteracting the effects of radiation damage. The endcaps use a new twist on well-established silicon tracking technology: low-gain avalanche detectors, with an additional thin implant within the sensor to generate internal gain. The MTD covers the full solid angle up to |η| = 3 to mitigate pile-up, boost the sensitivity to searches for long-lived particles, and enhance the physics capabilities during heavy-ion runs by providing particle identification capability via time-of-flight measurements.
The high-granularity calorimeter in the forward region starts a new era of calorimetry
All these individual systems are combined into the CMS Phase II detector. Technical coordination is choreographing the integration and has established a detailed schedule for LS3, taking all the detector and external requirements and constraints into account. To maximise the time for detector installation and commissioning during LS3, a huge effort is ongoing to prepare the site in advance. New buildings are already under construction to house the new service infrastructure, such as chillers for the detector CO₂ cooling, uninterruptible power systems and detector- and dry-gas systems. During LS3, the biggest challenge will be to decommission all the legacy systems, including services, that will be replaced by new detectors, and then fit all the new pieces together.
The CMS Phase II upgrade is a multi-faceted project involving more than 2000 scientists, students and engineers from institutes and industrial companies in more than 50 countries. Initially discussed prior to the first LHC operation and defined by the CMS technical proposal in 2015, the CMS Phase II upgrade, together with upgrades to the other LHC detectors, will ensure that the maximum physics return is extracted under the challenging conditions of the HL-LHC.
SESAME (Synchrotron-light for Experimental Science and Applications in the Middle East) is the Middle East’s first major international research centre. It is a regional third-generation synchrotron X-ray source situated in Allan, Jordan, which broke ground on 6 January 2003 and officially opened on 16 May 2017. The current members of SESAME are Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, Palestine and Turkey. Active current observers include, among others: the European Union, France, Germany, Greece, Italy, Japan, Kuwait, Portugal, Spain, Sweden, Switzerland, the UK and the US. The common vision driving SESAME is the belief that human beings can work together for a cause that furthers the interests of their own nations and that of humanity as a whole.
The story of SESAME started at CERN 30 years ago. One day in 1993, shortly after the signature of the Oslo Accords by Israel and the Palestine Liberation Organization, the late Sergio Fubini, an outstanding scientist and a close friend and collaborator, approached me in the corridor of the CERN theory group. He told me that now was the time to test what he called “your idealism”, referring to future joint Arab–Israeli scientific projects.
CERN is a very appropriate venue for the inception of such a project. It was built after World War II to help heal Europe and European science in particular. Abdus Salam, as far back as the 1950s, identified the light source as a tool that could help thrust what were then considered “third-world” countries directly to the forefront of scientific research. The very same Salam joined our efforts in 1993 as a member of the Middle Eastern Science Committee (MESC), founded by Sergio, myself and many others to forge meaningful scientific contacts in the region. By joining our scientific committee, Salam made public his belief in the value of Arab–Israeli scientific collaborations, something the Nobel laureate had expressed several times in private.
To focus our vision, that year I gave a talk on the status of Arab–Israeli collaborations at a meeting in Torino held on the occasion of Sergio’s 65th birthday. Afterwards we travelled to Cairo to meet Venice Gouda, the Egyptian minister for higher education, and other Egyptian officials. At that stage we were just self-appointed entrepreneurs. We were told that president Hosni Mubarak had made a decision to take politics out of scientific collaborations with Israel, so together we organized a high-quality scientific meeting in Dahab, in the Sinai desert. The meeting, held in a large Bedouin tent on 19-26 November 1995, brought together about 100 young and senior scientists from the region and beyond. It took place in the weeks after the murder of the Israeli prime minister Yitzhak Rabin, for whom, at the request of Venice Gouda, all of us stood for a moment of silence in respect. The silence echoes in my ears to this day. The first day of the meeting was attended by Jacob Ziv, president of the Israeli Academy of Sciences and Humanities, which had been supporting such efforts in general. It was thanks to the additional financial help of Miguel Virasoro, director-general of ICTP at the time, and also Daniele Amati, director of SISSA, that the meeting was held. All three decisions of support were made at watershed moments and on the spur of the moment. The meeting was followed by a very successful effort to identify concrete projects in which Arab–Israeli collaboration could be beneficial to both sides.
But attempts to continue the project were blocked by a turn for the worse in the political situation. MESC decided to retreat to Torino, where, during a meeting in November 1996, there was a session devoted to studying the possibilities of cooperation via experimental activities in high-energy physics and light-source science. During that session, the late German scientist Gus Voss suggested (on behalf of himself and Herman Winick from SLAC) bringing the parts of a German light source situated in Berlin, called BESSY, which was about to be dismantled, to the Middle East. Former Director-General of CERN Herwig Schopper also attended the workshop. MESC had built sufficient trust among the parties to provide an appropriate infrastructure to turn such an idea into something concrete.
Targeting excellent science
A light source was very attractive thanks to the rich diversity of fields that can make use of such a facility, from biology through chemistry, physics and many more to archaeology and environmental sciences. Such a diversity would also allow the formation of a critical mass of real users in the region. The major drawback of the BESSY-based proposal was that there was no way a reconstructed dismantled “old” machine would be able to attract first-class scientists and science.
Around that time, Fubini asked Schopper, who had a rich experience in managing complex experimental projects, to take a leadership position. The focus of possible collaborations was narrowed down to the construction of a large light source, and it was decided to use the German machine as a nucleus around which to build the administrative structure of the project. The non-relations among several of the members presented a serious challenge. At the suggestion of Schopper, following the example of the way CERN was assembled in the 1950s, the impasse was overcome by using the auspices of UNESCO to deposit the instruments for joining the project. The statutes of SESAME were to a large extent copied from those of CERN. A band of self-appointed entrepreneurs had evolved into a self-declared interim Council of SESAME, with Schopper as its president. The next major challenge was to choose a site.
On 15 March 2000 I flew to Amman for a meeting on the subject. I met Khaled Toukan (the current director-general of SESAME) and, after studying a map sold at the hotel where we met, we discussed which site Israel would support. We also asked that a Palestinian be the director general. Due to various developments, none of which depended on Israel, this was not to happen. The decision on the site venue was taken at a meeting at CERN on 11 April 2000. Jordan, which had and has diplomatic relations with all the parties involved, was selected as the host state. BESSY was dismantled by Russian scientists, placed in boxes and shipped with assembly instructions to the Jordanian desert to be kept until the appropriate moment would arise. This was made possible thanks to a direct contribution by Koichiro Matsuura, director-general of UNESCO at the time, and to the efforts of Khaled Toukan who has served in several ministerial capacities in Jordan.
With the administrative structure in place, it was time to address the engineering and scientific aspects of the project. Technical committees had designed a totally new machine, with BESSY serving as a boosting component. Many scientists in the region were introduced via workshops to the scientific possibilities that SESAME could offer. Scientific committees considered appropriate “day-one” beamlines, yet that day seemed very far in the future. Technical and scientific directors from abroad helped define the parameters of a new machine and identified appropriate beamlines to be constructed. Administrators and civil servants from the members started meeting regularly in the finance committee. Jordan began to build the facility to host the light source and made major additional financial contributions.
Transformative agreements
At this stage it was time for the SESAME interim council to transform into a permanent body and in the process cut its umbilical cord from UNESCO. This transformation presented new hurdles, because every member that wished to join the permanent council was required to have its head of state, or someone authorised by the head of state, sign an official document sent to UNESCO stating this wish.
By 2008 the host building had been constructed. But it remained essentially empty. SESAME had received support from leading light-source labs all over the world – a spiritual source of strength to members to continue with the project. However, attempts to get significant funding failed time and again. It was agreed that the running costs of the project should be borne by the members, but the one-time large cost needed to construct a new machine was outside the budget parameters of most of the members, many of whom did not have a tradition of significant support for basic science. The European Union (EU) supported us in that stage only through its bilateral agreement with Jordan. In the end, several million Euros from those projects did find their way to SESAME, but the coffers of SESAME and its infrastructure remained skeletal.
Changing perceptions
In 2008 Herwig Schopper was succeeded by Chris Llewellyn Smith, another former Director-General of CERN, as president of the SESAME Council. His main challenge was to get the funding needed to construct a new light source and to remove from SESAME the perception that it was simply a reassembled old light source of little potential attraction to top scientists. In addition to searching for sources of significant financial support, there was an enormous amount of work still to be done in formulating detailed and realistic plans for the following years. A grinding systematic effort began to endow SESAME with the structure needed for a modern working accelerator, and to create associated information materials.
Llewellyn Smith, like his predecessor, also needed to deal with political issues. For the most part the meetings of the SESAME Council were totally devoid of politics. In fact, they felt to me like a parallel universe where administrators and scientists from the region get to work together in a common project, each bringing her or his own scars and prejudices and each willing to learn. That said, there were moments when politics did contaminate the spirit forming in SESAME. In some cases, this was isolated and removed from the agenda and in others a bitter taste remains. But these are just at the very margins of the main thrust of SESAME.
The empty SESAME building started to be filled with radiation shields, giving the appearance of a full building. But the absence of the light-source itself created a void. The morale of the local staff was in steady decline, and it seemed to me that the project was in some danger. I decided to approach the ministry of finance in Israel. When I asked if Israel would make a voluntary contribution to SESAME of $5 million, I was not shown the door. Instead they requested to come and see SESAME, after which they discussed the proposal with Israel’s budget and planning committee and agreed to contribute the requested funds on the condition that others join them.
Each member of the unlikely coalition – consisting of Iran, Israel, Jordan and Turkey – pledged an extra $5 million for the project in an agreement signed in Amman. Since then, Israel, Jordan and Turkey have stood by their commitments, while Iran states that it recognises its commitment but is obstructed by sanctions. The support from members encouraged the EU to dedicate $5 million to the project, in addition to the approximately $3 million directed earlier from a bilateral EU–Jordan agreement. In 2015 the INFN, under director Fernando Ferroni, gave almost $2 million. This made it possible to build a hostel, as offered by most light sources, which was named appropriately after Sergio Fubini. Many leading world labs, in a heartwarming expression of support, have donated equipment for future beam lines as well as fellowships for the training of young people.
Point of no return
With their help, SESAME crossed the point of no return. The undefined stuff dreams are made of turned into magnets and girders made of real hard steel, which I was able to touch as they were being assembled at CERN. The pace of events had finally accelerated, and a star-studded inauguration, including attendance by the king of Jordan, took place on 16 May 2017. During the ceremony, amazingly, the political delegates of different member states listened to each other without leaving the room (as is the standard practice in other international organisations). Even more unique was that each member-state delegate taking the podium gave essentially the same speech: “We are trying here to achieve understanding via collaboration.”
At that moment the SESAME Council presidency passed from Chris Llewellyn Smith to a third former CERN Director-General, Rolf Heuer. The high-quality 2.5 GeV electron storage ring at the heart of SESAME started operation later that year, driving two X-ray beamlines: one dedicated to X-ray absorption fine structure/X-ray fluorescence (XAFS/XRF) spectroscopy, and another to infrared spectro-microscopy. A third powder-diffraction beamline is presently being added, while a soft X-ray beamline “HESEB” designed and constructed by five Helmholtz research centres is being commissioned. In 2023 the BEAmline for Tomography at SESAME (BEATS) will also be completed, with the construction and commissioning of a beamline for hard X-ray full-field tomography.
The unique SESAME facility started operating with uncanny normality. Well over 100 proposals for experiments were submitted and refereed, and beam time was allocated to the chosen experiments. Data was gathered, analysed and the results were and are being published in first-rate journals. Given the richness of archaeological and cultural heritage in the region, SESAME’s beamlines offer a highly versatile tool for researchers, conservators and cultural-heritage specialists to work together on common projects. The first SESAME Cultural Heritage Day took place online on 16 February 2022 with more than 240 registrants in 39 countries (CERN Courier July/August 2022 p19).
Thanks to the help of the EU, SESAME has also become the world’s first “green” light source, its energy entirely generated by solar power, which also has the bonus of stabilising the energy bill of the machine. There is, however, concern that the only component used from BESSY, the “Microtron” radio-frequency system, may eventually break down, thus endangering the operation of the whole machine.
SESAME continues to operate on a shoestring budget. The current approved 2022 budget is about $5.3 million, much smaller than that of any modern light source. I marvel at the ingenuity of the SESAME staff in keeping the facility operating, and am sad to sense indifference to the budget among many of the parties involved. The world’s media has been less indifferent: the BBC, The New York Times, Le Monde, The Washington Post, Brussels Libre, The Arab Weekly, as well as regional newspapers and TV stations, have all covered various aspects of SESAME. In 2019 the AAAS highlighted the significance of SESAME by presenting its Award for Science Diplomacy to five of its founders: Chris Llewellyn Smith, Eliezer Rabinovici, Zehra Sayers, Herwig Schopper and Khaled Toukan.
SESAME was inspired by CERN, yet it was a much more challenging undertaking. CERN was built after the Second World War was over, when it was clear who had won and who had lost. In the Middle East the conflicts are not over, and there are different narratives about who is winning and who is losing, and indeed about what winning and losing mean. It took CERN less than 10 years to set up its original organisation; for SESAME it took about 25 years. Thus, SESAME today should be thought of as CERN was around 1960.
On a personal note, it brings me immense happiness that, for the first time ever, Israeli scientists have carried out high-quality research at a facility established on the soil of an Arab country, Jordan. Many in the region and beyond have helped take their people to a place their governments most likely never dreamt of, or planned, to reach. It is impossible to give due credit to the many people without whom SESAME would not be the success it is today.
In many ways SESAME is a very special child of CERN, and often our children can teach us important lessons. As president of the CERN Council, I can say that the way in which the member states of SESAME conducted themselves during the decades of storms that have affected our region serves as a benchmark for keeping bridges of understanding open under the most trying of circumstances. The SESAME spirit has so far been a lighthouse even for the CERN Council, in particular in light of the invasion of Ukraine (an associate member state of CERN) by the Russian Federation. Maintaining this attitude in a stormy political environment is very difficult.
However SESAME’s story ends, we have proved that the people of the Middle East have within them the capability to work together for a common cause. The very process of building SESAME has thus become a beacon of hope to many in our region. The responsibility of SESAME in the coming years is to match this achievement with high-quality scientific research, but this requires appropriate funding and help. SESAME is continuing very successfully with its mission to train hundreds of engineers and scientists in the region. Requests for beam time continue to rise, as does the number of publications in top journals.
If one wants to embark on a scientific project to promote peaceful understanding, SESAME offers at least three important lessons: it should be a project to which every country can contribute, and from which every country can learn and profit significantly; its science should be of the highest quality; and it requires unbounded optimism and infinite enthusiasm. My dream is that in the not-so-distant future, people will be able to point to a significant discovery and say “this happened at SESAME”.
Hadron therapy, to which particle and accelerator physicists have contributed significantly during the past decades, has treated more than 300,000 patients to date. As collaborations and projects have grown over time, new methods aimed at improving and democratising this type of cancer treatment have emerged. Among them, therapy with proton beams from circular accelerators stands out as particularly effective: protons can obliterate tumours while sparing more of the surrounding healthy tissue than conventional electron or photon therapy. Unfortunately, present proton- and ion-therapy centres are large and place heavy demands on the design of buildings, accelerators and gantry systems.
A novel proton accelerator for cancer treatment based on CERN technology is preparing to receive its first patients in the UK. Advanced Oncotherapy (AVO), based in London, has developed a proton-therapy system called LIGHT (Linac Image-Guided Hadron Technology) – the result of more than 20 years of work at CERN and at spin-off company ADAM, founded in 2007 to build and test linacs for medical purposes and now AVO’s Geneva-based subsidiary. LIGHT provides a proton beam that allows ultra-high dose rates to be delivered to deep-seated tumours. The initial acceleration to 5 MeV is based on radio-frequency quadrupole (RFQ) technology developed at CERN and supported by CERN’s knowledge-transfer group. LIGHT reached the maximum treatment energy of 230 MeV at the STFC Daresbury site on 26 September. Four years after the first 16 m-long prototype was built and tested at LHC Point 2, this novel oncological linac is due to treat its first patients, in collaboration with University Hospital Birmingham, at Daresbury during the second half of 2023 – marking the first time a proton linear accelerator has been used for cancer therapy.
LIGHT operates with components and designs developed by CERN, ENEA, the TERA Foundation and ADAM. Notable components include LIGHT’s RFQ, which contributes to its compact design, and 19 radio-frequency modules: four side-coupled drift-tube accelerating modules based on a TERA Foundation design and 15 coupled-cavity modules with an industrial design by ADAM. Each module is controlled so that the beam energy can be varied electronically, 200 times per second, according to the depth of the tumour layer being treated. This obviates the need for absorbers (or degraders), which greatly reduce the proton throughput and produce large amounts of unwanted radiation, and therefore reduces the volume of shielding material required. The design allows the linear accelerator to generate an extremely focused beam of 70 to 230 MeV and to target tumours in three dimensions, varying the depth at which the dose is delivered much faster than existing circular accelerators can.
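To give a feeling for what layer-by-layer energy modulation involves, the short Python sketch below converts tumour-layer depths in water into the proton energies whose range roughly matches each layer, using the approximate Bragg–Kleeman range–energy relation R ≈ αE^p with textbook values for protons in water. The parameters, function name and layer spacing are illustrative assumptions only; this is a back-of-the-envelope model, not AVO’s actual control algorithm or software.

# Illustrative sketch only: map tumour-layer depths to the proton energy whose
# range in water roughly matches each layer, via the approximate Bragg-Kleeman
# relation R = alpha * E**p. Values below are textbook approximations for water,
# not parameters of the LIGHT machine.

ALPHA = 0.0022   # cm / MeV**p, approximate value for protons in water
P = 1.77         # dimensionless exponent, approximate value for protons in water

def energy_for_depth(depth_cm: float) -> float:
    """Proton kinetic energy (MeV) whose range in water is roughly depth_cm."""
    return (depth_cm / ALPHA) ** (1.0 / P)

if __name__ == "__main__":
    # Example: scan a tumour lying between 10 cm and 15 cm depth in 5 mm layers,
    # as an energy-modulated linac might do layer by layer.
    depths = [10.0 + 0.5 * i for i in range(11)]   # 10.0, 10.5, ..., 15.0 cm
    for d in depths:
        print(f"layer at {d:4.1f} cm  ->  ~{energy_for_depth(d):5.1f} MeV")

Running the sketch gives energies from roughly 115 MeV to 150 MeV for this example tumour, illustrating why a beam that can be retuned electronically across the 70–230 MeV window, rather than degraded with absorbers, covers clinically relevant depths layer by layer.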
“Our mission is simple: democratise proton therapy,” says Nicolas Serandour, CEO of AVO. “The only way to fulfill this goal is through the development of a different particle accelerator and this is what we have achieved with the successful testing of the first-ever proton linear accelerator for medical purposes. Importantly, the excitement comes from the fact that cost reduction can be accompanied with better medical outcomes due to the quality of the LIGHT beam, particularly for cancers that still have a low prognosis. I cannot over-emphasise the importance that CERN and ADAM played in making this project a tangible reality for millions of cancer patients.”
The latest edition of the CERN Alumni Network’s “Moving out of academia” series, held on 21 October, focused on how to successfully manage a transition from academia to the big-tech industry. Six panellists who have gone on to work at companies such as Google, Microsoft, Apple and Meta shared their advice and experience on how to start a career in a large multinational company after having worked at large-scale research infrastructures such as CERN.
In addition to describing the nature of their work and the skills acquired at CERN that helped them make the transition, the panellists explained which new skills they had to develop after CERN for a successful career move. Around 180 participants attended the online event, receiving tips for interviews and CV writing, and hearing personal stories about how a PhD prepares you for a career outside academia.
The panellists agreed that metrics used in academia to gauge a person’s success, such as a PhD, the h-index or the number of published papers, do not necessarily apply to roles outside academia, except for research positions. “You don’t need to have a PhD or a certificate to demonstrate that you are a good problem solver or a good programmer – you should do a PhD because you are interested in the field,” said Cristina Bahamonde, who used to work in accelerator operations at CERN and now oversees and unblocks all of Google’s network deployments as regional leader of its global network delivery team in Europe, the Middle East and Africa. She considers the project-management and communication skills she acquired at CERN while designing solutions and mitigation strategies for operational changes in the LHC to be essential for her current role.
General skills needed in big-tech companies include the ability to learn and adapt fast, project- and product-management skills, and the ability to communicate effectively with technical and non-technical audiences. Some participants were unaware that skills they had sharpened intuitively throughout their academic careers are vital for a career outside.
“CERN taught me how to be a generalist,” said James Casey, now a group programme manager at Microsoft. “I was not working as a product manager at CERN, but you do very similar work at CERN because you write documents, build customer relationships and need to communicate your work in an understandable way as well as to communicate the work that needs to be done.” Casey first came to CERN as a summer student in 1994, working alongside the original team that developed the web. After working in start-ups, he returned to CERN for a while and then moved back to industry in 2011.
Finding the narrative
Finding your own narrative and presenting it in the right way on a resumé is not always easy. “When I write my resumé, it looks really straightforward,” said Mariana Rihl, a former LHCb experimentalist and now Meta’s product-system validation lead for verifying and validating Oculus VR products. “But only after a certain time, I realised that a common theme emerged — testing hardware and understanding users’ needs.” Working on the LHCb beam–gas vertex detector, and especially ensuring the functionality of the detector hardware, prepared her well, she said.
Former CERN openlab intern Ritika Kanade, who now works as a software engineer at Apple, shared her experience of interviewing applicants for software-engineering roles. “What I like to see during an interview is how the applicant approaches the tasks and how he or she interacts with me. It’s ok if someone needs help. That’s normal in our job,” she said. “Time management is one thing I see many candidates struggle with.” Other skills needed in industry, as in academia, are tenacity and persistence: candidates often need to apply more than three times to land a job at their favourite company. “I applied six or seven times before I was invited for an interview at Google,” emphasised Bahamonde.
The Moving out of academia series provides a rich source of advice for those seeking to make a career change, with the latest event following others dedicated to careers in finance, industrial engineering, big data, entrepreneurship, the environment and medical technologies. “This CERN Alumni event demonstrated once more the impact of high-energy physics on society, and that people transitioning from academia to industry bring fresh insights from another field,” said Rachel Bray, head of CERN Alumni relations.
Experimentalist Volker Soergel passed away on 5 October at the age of 91. Born in Breslau in March 1931, Soergel was a brilliant experimental physicist and an outstanding leader, shaping particle physics for many years.
Receiving a doctorate from the University of Freiburg in 1956 under the tutelage of Wolfgang Gentner, Soergel remained at Freiburg until 1961, with a year at Caltech in 1957–1958. He then joined CERN as a research associate, working with Joachim Heintze on the beta decay of elementary particles, especially very rare decays of mesons and hyperons. Their results became milestones in the development of the Standard Model, resulting in the award of the German Physical Society’s highest honour in 1963.
In 1965 Soergel became a professor at the University of Heidelberg. He continued his research at CERN while taking on important roles at the university: as director of the Institute of Physics, as dean and as a member of the university’s administrative council. With vision and skill, he played a major role in shaping the university.
Important tasks outside Heidelberg followed. From 1976–1979 he chaired the DESY Scientific Council through a period that saw work begin on the electron–positron collider, PETRA. Under his leadership, the council played an important role in DESY’s transition from national to international laboratory. In 1979 and 1980 he served as research director at CERN, helping pave the way for the collider experiments of the 1980s.
From 1981–1993 Soergel headed DESY, overseeing construction of the electron–proton storage ring, HERA, together with Björn Wiik and Gustav-Adolph Voss. HERA and its experiments benefited from large international contributions, mainly in the form of components and manpower: an approach that became known as the HERA model. Soergel’s powers of persuasion, his reputation and his negotiating skills led to support from institutes in Western Europe, Israel and Canada, as well as from Poland, Russia and China. Under his guidance, photon science became an important pillar of DESY research, first as a by-product of accelerators used for particle physics and then, with the inauguration of HASYLAB in 1981 and the conversion of DORIS, as an established research field that continues strongly to this day. From 1996–2000 he headed the Max Planck Institute for Physics in Munich.
Soergel’s time at DESY coincided with German reunification. He enabled the merger of the Institute for High Energy Physics in Zeuthen, near Berlin, with DESY and, together with Paul Söding, made Zeuthen a centre for astroparticle physics. Even before the Iron Curtain fell, Soergel personally ensured that Zeuthen scientists could work at DESY.
Volker Soergel received many honours. He was awarded the Federal Cross of Merit, 1st class, and honorary doctorates from the universities of Glasgow and Hamburg. He has left a lasting legacy. His love for physics was similar in intensity to his love for music. A gifted violin and viola player, he enjoyed making music with his wife and children, friends and colleagues. All who worked with him remain grateful for all they learned from him and will not forget his support and guidance.
Renowned high-energy theorist Nicola Najib Khuri died on 4 August 2022 in New York City. Born in 1933 in Beirut, Lebanon, he was the eldest of four siblings and a precocious student. He graduated from the American University of Beirut (AUB) in 1952 at the age of 19, then travelled to the US for his graduate studies in physics. He received his MA and PhD from Princeton University and was a fellow of Princeton’s Institute for Advanced Study. While in graduate school, he met Elizabeth Tyson, the love of his life and his wife of over 60 years. Upon receiving his doctorate in 1957, Nicola returned to Lebanon and joined the faculty at AUB. In 1964 he went back to the US and accepted a position at The Rockefeller University, New York, where he founded a lab and remained for the rest of his career.
Nicola was a leading authority on the use of mathematics in high-energy theoretical physics. At Rockefeller, his research focused on the mathematical description of elementary-particle collisions. Among his most notable achievements were the introduction of a new method to study the Riemann hypothesis, one of the most famous unsolved problems in mathematics, and foundational contributions to potential-scattering theory, which fed into the development of important concepts such as Regge poles and strings.
In addition to his post at Rockefeller, he held visiting appointments and consulting roles at CERN, Stanford University, Columbia University, Lawrence Livermore National Laboratory, Brookhaven National Laboratory and Los Alamos National Laboratory. He was also a member of the panel on national security and arms control of the Carnegie Endowment for International Peace and a fellow of the American Physical Society.
Nicola and Liz, along with their two children, built a beautiful life in New York. Their homes had a revolving door for friends, family, colleagues and mentees who came from far and wide to hear Nicola’s remarkable stories, take in his sage advice, and enjoy his timeless, occasionally risqué jokes. A true cosmopolitan, he relished the vibrancy and possibility of New York. When not at home, he could be found ordering mezze for the table at one of his favourite Lebanese restaurants, exploring his interest in international politics at the Council on Foreign Relations, or making a toast at the Century Association. He retained an enduring love for, and a fundamental commitment to, Lebanon. He was a passionate supporter of his alma mater, a mentor to generations of young scientists from the Middle East, and was instrumental in establishing the university’s Center for Advanced Mathematical Sciences, among many other contributions.
There are many things we will miss about Nicola: his character; the way he commanded a room; his childlike sense of humour; the happy gleam in his eye when he told a story from his adventurous life; and his sneaky determination in old age to satisfy a lifelong appetite for good wine, good cheese and excellent chocolate over the protests of doctors, caregivers and his daughter, Suzanne. Above all, we will miss the way he treated others.
Valery Rubakov, who pioneered groundbreaking ideas and methods in many domains of particle physics and cosmology, passed away suddenly on 19 October at the age of 67.
Born in Moscow in 1955, Valery studied physics at Moscow State University from 1972–1978. He started doing research in his third year of studies, publishing his first paper, on quantum gravity, at the age of 21. Valery joined the Institute for Nuclear Research (INR) of the Russian Academy of Sciences in 1978, defending his PhD on non-perturbative aspects of gauge theories in 1981. By the age of 26, Rubakov had already made his name in high-energy physics worldwide. He discovered that the super-heavy ’t Hooft–Polyakov magnetic monopoles that exist in grand-unified theories “catalyse” proton decay – a beautiful and subtle non-perturbative effect that provides alternative experimental signatures in the search for monopoles.
Valery was one of the first physicists to realise the importance of inflationary theory, his 1982 work with Mikhail Sazhin and Alexey Veryaskin on the primordial production of gravitational waves establishing important constraints on the energy scale of the de Sitter stage of inflation. In 1983 Rubakov (together with Mikhail Shaposhnikov) proposed alternatives to Kaluza–Klein compactification in theories with more than four space–time dimensions – one of which is now known as the brane-world scenario, in which the particles of our world live on a four-dimensional defect embedded in a higher-dimensional universe.
He made key contributions to the understanding of the baryon asymmetry of the universe. His 1985 paper with Vadim Kuzmin and Shaposhnikov revealed that non-perturbative processes can drive a rapid violation of baryon- and lepton-number conservation in the early universe – the basic ingredient of thermal leptogenesis. In 1998, together with Evgeny Akhmedov and Alexey Smirnov, he suggested how to produce the baryon asymmetry with relatively light right-handed neutrinos – an alternative to so-called Fukugita–Yanagida leptogenesis, which opened a unique possibility for experimental verification.
In a series of remarkable articles from 1987–1988, Valery, together with his PhD students George Lavrelashvili and Peter Tinyakov, discussed the physical impact of topology change in quantum gravity, analysing deep conceptual issues such as quantum coherence. In 1990–1991, together with Sergei Khlebnikov and Peter Tinyakov, he attacked the challenging problem of how to compute the probability of anomalous processes with baryon-number non-conservation in very high-energy collisions, above tens of TeV. He returned to this problem together with Fedor Bezrukov, Dmitry Levkov and Claudio Rebbi in 2003, demonstrating that these reactions are exponentially suppressed, thus removing hopes of experimental verification of this phenomenon.
Valery left his imprint on virtually all areas of particle physics and cosmology: supersymmetry phenomenology; the strong CP problem; dark matter and dark energy; non-commutative field theories; classical and quantum gravity; and alternatives to inflation, to name a few. His very last article, written with Christof Wetterich, was devoted to a physical concept of time for the beginning stage of the universe.
Valery was also an outstanding and passionate teacher, creating the “Rubakov School” of theoretical physics and cosmology in Moscow, and a successful scientific manager. As a research director of INR from 1987 to 1994, for example, he was responsible for the construction and operation of the Baksan neutrino observatory and the Baikal deep underwater neutrino telescope. Valery was a member of the CERN Scientific Policy Committee from 2014 to 2019 and served on the ICTP Scientific Council from 2010 to 2020. He was the recipient of numerous prizes, awards and distinctions and wrote several excellent textbooks, including Classical Theory of Gauge Fields (Princeton 2002).
Valery Rubakov was an exceptional person. He defended scientific thinking, the freedom of mind and fruitful collaboration between scientists from different countries. He will always be remembered for his kindness, sharp and inventive mind, charisma, honesty and integrity.