Egil Lillestøl 1938–2021

Norwegian experimental particle physicist Egil Sigurd Lillestøl passed away in Valence, France, on 27 September. He will be remembered as a passionate colleague with exceptional communication and teaching skills, and as a friend with many personal interests. He was able to explain the most complex systems and mechanisms in physics so that even a layperson felt they understood them.

Egil Lillestøl obtained his PhD from the University of Bergen in 1970, by which time he had already spent three years (1964–1967) as a fellow at CERN. He was appointed associate professor at his alma mater the same year, and left for Paris in 1973 to work as a guest researcher at the Collège de France. In 1984 Lillestøl was appointed full professor of experimental particle physics in Bergen, where he became central to the PLUTO collaboration at DESY, and later to the DELPHI and ATLAS collaborations at CERN.

Over time, CERN became Lillestøl’s main laboratory, first as a paid associate, later as a guest professor and eventually as a staff member, contributing to the management of the experimental programme and significantly improving the conditions for the visiting scientists at the laboratory.

In Norway he acted as national coordinator of CERN activities in preparation for the LHC. He was instrumental in the organisation of the community and discussions of future funding models at the national level, in particular to accommodate the long-term commitments needed for the ATLAS and ALICE construction projects.

Egil Lillestøl played a pivotal role in the CERN Schools of Physics from 1992 until 2009, relaunching the European School of High-Energy Physics as an annual event organised in collaboration with JINR, and establishing a new biennial series of schools in Latin America from 2001. He worked tirelessly on preparations for each event, in collaboration with local organisers in each host country, as well as on-site during the two-week-long events.

The Latin-American schools were an important element in increasing the involvement of scientists and institutes from the region in the CERN experimental programme, for which he deserves much credit. Beyond his official duties, he took great pleasure in interacting with the participants of the schools during their free time, and in the evenings he could often be found playing piano to accompany their singing.

As a founding member of the International Thorium Energy Committee, Lillestøl was a strong proponent for thorium-based nuclear power. He was also one of the main drivers behind the UNESCO-supported travelling exhibition “Science bringing nations together”, organised jointly by JINR and CERN.

As a teacher and a lecturer, Lillestøl was a role model. He always tailored his presentations to match the audience. His coffee-table book The Search for Infinity, co-authored with Gordon Fraser and Inge Sellevåg, became a bestseller and has been published in nine language editions.

Egil Lillestøl was a bon viveur who spread joy around him. He had an impressive repertoire of anecdotes, including topics such as how to cold-smoke salmon. He enjoyed sports and was active in the CERN clubs for cycling, skiing and sailing. He leaves behind his wife and former colleague, Danielle, and two adult children from his first marriage.

Simon Eidelman 1948–2021

Simon Eidelman, a leading researcher at the Budker Institute of Nuclear Physics in Novosibirsk, Russia, and a professor of Novosibirsk State University (NSU), passed away on 28 June.

He was a key member of experimental collaborations at Novosibirsk, CERN and KEK, and a leading author in the Particle Data Group. Eidelman served the high-energy physics community in a variety of ways, including as Novosibirsk’s correspondent for this magazine for more than 20 years.

Simon (Semyon) Eidelman was born in Odessa in 1948. He went to Novosibirsk aged 15 to participate in a national mathematics Olympiad, and ended up staying to attend a special high school for extraordinarily gifted students. He then studied physics at NSU. Even before graduating, Simon joined the Budker Institute in 1968, and he remained there for his entire professional life. In parallel, he was a faculty member at NSU and held the high-energy physics chair for 10 years. Simon always cared for, helped and supported students and young colleagues.

Meson expert

Eidelman’s scientific activity mostly concerned experiments at e⁺e⁻ colliders, beginning with participation in the discovery of multi-hadron events at the pioneering VEPP-2 collider.

In 1974 he moved to experiments with the OLYA detector at the upgraded VEPP-2M, where a comprehensive study of e⁺e⁻ annihilation into hadrons was performed up to an energy of 1.4 GeV. Later, this detector was moved to the VEPP-4 collider, where high-precision measurements of the J/ψ and ψ′ masses were performed. Simon’s work at VEPP-2 and VEPP-4, and the analysis of the so-called box anomaly, made him one of the world’s leading experts on vector mesons. Together with Lery Kurdadze and Arkady Vainshtein, he also performed the first comparison of QCD sum rules with experiment.

Simon became one of the pioneers in the evaluation of the hadronic contribution to the anomalous magnetic moment of the muon

Simon was a key member of several major experimental collaborations: KEDR, CMD-2 and CMD-3 at Novosibirsk, LHCb at CERN, Belle and Belle II at KEK, and g-2/EDM at J-PARC. Recently he contributed to the KLF proposal at JLab to build a secondary beam of neutral kaons to be used with the GlueX setup for strange-hadron spectroscopy. Just last year he proposed to measure the charged-kaon mass with unprecedented precision using the SIDDHARTA X-ray experiment at DAΦNE in Frascati – which would have yielded a dramatic improvement in determinations of the masses of charmonium-like exotic mesons.

Thanks to his deep understanding of hadron-production cross sections, Simon became one of the pioneers in the evaluation of the hadronic contribution to the anomalous magnetic moment of the muon, g-2. He was a founding member of the Muon g-2 Theory Initiative and a key contributor to its first white paper, published last year, which provides the community consensus for the Standard Model prediction. He was also an authority on strongly interacting hadrons and resonances, as well as the τ lepton and two-photon physics.

Simon was a key author in the international Particle Data Group (PDG) for 30 years, leading the PDG subgroup responsible for meson resonances since 2006. In recognition of his contributions, he was chosen to be the first author of the 2004 edition of the Review of Particle Physics. He was also a great source of inspiration for the Quarkonium Working Group (QWG). Attendees of the QWG workshops will remember his lucid presentations, his great enthusiasm for research and his keen scientific insights. Moreover, he was greatly appreciated for his wisdom and calm counsel during intense discussions.

Superb editor

Thanks to his deep knowledge and wide scientific horizons, combined with a wonderful sense of humour and a kind and friendly nature, Simon possessed a unique ability to galvanise colleagues into joint projects within many international collaborations and meetings. He was also deeply engaged in training the next generations of physicists, most recently being the driving force behind the school on muon g-2.

Simon was also a superb scientific editor. He had a rare gift of formulating scientific problems and results clearly and concisely, providing an invaluable contribution to the very large number of papers that he authored, co-authored and refereed. Several international meetings have been dedicated to Simon’s memory, including CHARM 2021 and the 4th Plenary Workshop of the Theory Initiative.

We have lost a remarkable physicist, and a dear and kind person. All who had the privilege of knowing and working with Simon Eidelman will always remember him as an invaluable colleague.

Counting collisions precisely at CMS

The start of Run-2 physics

Year after year, particle physicists celebrate the luminosity records established at accelerators around the world. On 15 June 2020, for example, a new world record for the highest luminosity at a particle collider was claimed by SuperKEKB at the KEK laboratory in Tsukuba, Japan. Electron–positron collisions at the 3 km-circumference machine had reached an instantaneous luminosity of 2.22 × 10³⁴ cm⁻²s⁻¹ – surpassing the 27 km-circumference LHC’s record of 2.14 × 10³⁴ cm⁻²s⁻¹ set with proton–proton collisions in 2018. Within a year, SuperKEKB had celebrated a new record of 3.1 × 10³⁴ cm⁻²s⁻¹ (CERN Courier September/October 2021 p8).

Integrated proton–proton luminosity

Beyond the setting of new records, precise knowledge of the luminosity at particle colliders is vital for physics analyses. Luminosity is our “standard candle” in determining how many particles can be squeezed through a given space (per square centimetre) at a given time (per second); the more particles we can squeeze into a given space, the more likely they are to collide, and the quicker the experiments fill up their tapes with data. Multiplied by the cross section, the luminosity gives the rate at which physicists can expect a given process to happen, which is vital for searches for new phenomena and precision measurements alike. Luminosity milestones therefore mark the dawn of new eras, like the B-hadron or top-quark factories at SuperKEKB and LHC (see “High-energy data” figure). But what ensures we didn’t make an accidental blunder in calculating these luminosity record values?
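In symbols, and as a back-of-the-envelope check (the ~55 pb Higgs-production cross-section at 13 TeV is an approximate value, quoted only to set the scale):

\[
R = \mathcal{L}\,\sigma, \qquad N_{\text{events}} = \sigma \int \mathcal{L}\,\mathrm{d}t,
\]

so with σ ≈ 55 pb = 5.5 × 10⁴ fb and the full Run-2 sample of about 140 fb⁻¹ (quoted below), roughly 5.5 × 10⁴ × 140 ≈ 8 × 10⁶ Higgs bosons were produced, before any detector acceptance or selection.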

Physics focus

Physicists working at the precision frontier need to know, with percent-level or better accuracy, how many collisions correspond to a given event rate. Even though we can produce particles at an unprecedented rate at the LHC, the cross sections of the relevant processes are either too small (as in the case of Higgs-boson production) or affected too strongly by theoretical uncertainty (as in the case of Z-boson and top-quark production) for us to establish the primary event rate with a high level of confidence. The solution comes down to extracting one universal number: the absolute luminosity.

Schematic view of the CMS detector

The fundamental difference between quantum electrodynamics (QED) and chromodynamics (QCD) influences how luminosity is measured at different types of colliders. On the one hand, QED provides a straightforward path to high precision because the absolute rate of simple final states is calculable to very high accuracy. On the other, the complexity of QCD calculations shapes the luminosity determination at hadron colliders. In principle, the luminosity can be inferred by measuring the total number of interactions occurring in the experiment (i.e. the inelastic cross section) and normalising to the theoretical QCD prediction. This technique was used at the Sp̄pS and Tevatron colliders. A second technique, proposed by Simon van der Meer at the ISR (and generalised by Carlo Rubbia for the p̄p case), could not be applied to such single-ring colliders. However, this van der Meer-scan method is a natural choice at the double-ring RHIC and LHC colliders, and is described in the following.

Beam-separation-dependent event rate

Absolute calibration

The LHC-experiment collaborations perform a precise luminosity inference from data (“absolute calibration”) by relating the collision rate recorded by the subdetectors to the luminosity of the beams. With multiple collisions per bunch crossing (“pileup”) and intense collision-induced radiation acting as background sources, dedicated luminosity-sensitive detector systems, called luminometers, had to be developed (see “Luminometers” figure). To maximise the precision of the absolute calibration, beams with large transverse dimensions and relatively low intensities are delivered by the LHC operators during a dedicated machine preparatory session, usually held once a year and lasting several hours. During these unconventional sessions, called van der Meer beam-separation scans, the beams are carefully displaced with respect to each other in discrete steps, horizontally and vertically, while the collision rate is observed in the luminometers (see “Closing in” figure). This allows the effective width and height of the two-dimensional interaction region, and thus the transverse size of the beams, to be measured. Sources of systematic uncertainty are either common to all experiments and estimated in situ – for example residual differences between the measured beam positions and those given by the operational settings of the LHC magnets – or depend on the scatter between luminometers. A major challenge with this technique is therefore to ensure that the absolute calibration extracted under the specialised van der Meer conditions remains valid when the LHC operates at nominal pileup (see “Stability shines” figure).
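As a rough illustration of this calibration logic, here is a sketch in Python assuming Gaussian beam-overlap shapes; the scan points, bunch intensities and noise level are invented for the example, and this is not the CMS analysis code:

```python
# Minimal van der Meer calibration sketch (illustrative numbers only).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, peak, sigma):
    return peak * np.exp(-0.5 * (x / sigma) ** 2)

def fit_scan(separation_mm, rate_hz):
    """Fit rate vs beam separation; return the peak rate and width Sigma."""
    popt, _ = curve_fit(gaussian, separation_mm, rate_hz,
                        p0=[rate_hz.max(), 0.1])
    return popt[0], abs(popt[1])

rng = np.random.default_rng(seed=1)
sep = np.linspace(-0.3, 0.3, 25)                      # beam separation [mm]
rate_x = gaussian(sep, 1.0e4, 0.10) + rng.normal(0.0, 50.0, sep.size)
rate_y = gaussian(sep, 1.0e4, 0.12) + rng.normal(0.0, 50.0, sep.size)

peak_x, width_x = fit_scan(sep, rate_x)               # horizontal scan
peak_y, width_y = fit_scan(sep, rate_y)               # vertical scan

f_rev = 11245.0        # LHC revolution frequency [Hz]
n1 = n2 = 8.5e10       # protons per bunch, typical vdM-scan intensity

# Single-bunch luminosity at zero separation; widths converted mm -> cm.
lumi = f_rev * n1 * n2 / (2.0 * np.pi * (width_x * 0.1) * (width_y * 0.1))

# The "visible cross section" calibrates this luminometer.
sigma_vis = 0.5 * (peak_x + peak_y) / lumi            # [cm^2]
print(f"Sigma_x = {width_x:.3f} mm, Sigma_y = {width_y:.3f} mm")
print(f"L = {lumi:.2e} cm^-2 s^-1, sigma_vis = {sigma_vis * 1e27:.1f} mb")
```

Dividing any later measured rate by the calibrated σ_vis then gives the instantaneous luminosity during physics running; the challenge described above is precisely that σ_vis must remain valid far from the special vdM conditions.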

Stepwise approach

Using such a stepwise approach, the CMS collaboration obtained a total systematic uncertainty of 1.2% in the luminosity estimate (36.3 fb⁻¹) of proton–proton collisions in 2016 – one of the most precise luminosity measurements ever made at bunched-beam hadron colliders. Recently, taking into account correlations between the years 2015–2018, CMS further improved its preliminary estimate of the proton–proton luminosity at the higher collision energy of 13 TeV. The full Run-2 data sample corresponds to a cumulative (“integrated”) luminosity of 140 fb⁻¹ with a total uncertainty of 1.6%, which is comparable to the preliminary estimate from the ATLAS experiment.

Ratio of luminosities between luminometers

In the coming years, in particular when the High-Luminosity LHC (HL-LHC) comes online, a similarly precise luminosity calibration will become increasingly important as the LHC pushes the precision frontier further. Under those conditions, which are expected to produce 3000 fb⁻¹ of proton–proton data by the end of LHC operations in the late 2030s (see “Precision frontier” figure), the impact of at least some of the sources of uncertainty is expected to be larger due to the high pileup. However, they can be mitigated using techniques already established in Run 2 or currently being deployed. Overall, the strategy for the HL-LHC should combine three different elements: maintenance and upgrades of existing detectors; development of new detectors; and the addition of dedicated readouts to other planned subdetectors for luminosity and beam-monitoring data. This will allow us to meet the tight luminosity performance target (~1%) while maintaining a good diversity of luminometers.

Given that accurate knowledge of the luminosity is a key ingredient of most physics analyses, the experiments also release precision estimates for specialised data sets, for example using proton–proton collisions at lower centre-of-mass energies or nuclear collisions at different per-nucleon centre-of-mass energies, as needed by ALICE as well as by the ATLAS, CMS and LHCb experiments. On top of the van der Meer method, the LHCb collaboration uniquely employs a “beam-gas imaging” technique, in which vertices of interactions between beam particles and gas nuclei in the beam vacuum are used to measure the transverse size of the beams without the need to displace them. In all cases, and despite the fact that the experiments are located at different interaction points, their luminosity-related data are used in combination with input from the LHC beam instrumentation. Close collaboration among the experiments and the LHC operators is therefore a key prerequisite for precise luminosity determination.

Protons versus electrons

Contrary to the approach at hadron colliders, the operation of the SuperKEKB accelerator with electron–positron collisions allows for an even more precise luminosity determination. Using well-known QED processes, the Belle II experiment recently reported a precision of 0.7% for data collected during April–July 2018. Though electrons and positrons conceptually give the SuperKEKB team a slightly easier task, its new record for the highest luminosity set at a collider is thus well established.

Expected uncertainties

SuperKEKB’s record is achieved thanks to a novel “crabbed waist” scheme, originally proposed by accelerator physicist Pantaleo Raimondi. In the coming years this will enable the luminosity of SuperKEKB to be increased by a factor of almost 30, to reach its design target of 8 × 10³⁵ cm⁻²s⁻¹. The crabbed-waist scheme, which works by squeezing the vertical height of the beams at the interaction point, is also envisaged for the proposed Future Circular Collider (FCC-ee) at CERN. It differs from the “crab-crossing” technology, based on special radiofrequency cavities, which is now being implemented at CERN for the high-luminosity phase of the LHC. While the LHC has passed the luminosity crown to SuperKEKB, taken together, novel techniques and the precise evaluation of their outcome continue to push forward both the accelerator and related physics frontiers.

ITER powers ahead

A D-shaped toroidal magnet coil

At the heart of the ITER fusion experiment is an 18 m-tall, 1000-tonne superconducting solenoid – the largest ever built. Its 13 T field will induce a 15 MA plasma current inside the ITER tokamak, initiating a heating process that will ultimately enable self-sustaining fusion reactions. Like all things ITER, the scale and power of the central solenoid are unprecedented. Fabrication of its six niobium-tin modules began nearly 10 years ago at a purpose-built General Atomics facility in California. The first module left the factory on 21 June and, after travelling more than 2400 km by road and then crossing the Atlantic, the 110-tonne component arrived at the ITER construction site in southern France on 9 September. During a small ceremony marking the occasion, the director of engineering and projects for General Atomics described the job as “among the largest, most complex and demanding magnet programmes ever undertaken” and “the most important and significant project of our careers.”

The US is one of seven ITER members, along with China, the European Union, India, Japan, Korea and Russia, which ratified an international agreement in 2007. Each member shares in the cost of project construction, operation and decommissioning, and also in the experimental results and any intellectual property. Europe is responsible for the largest portion of construction costs (45.6%), with the remainder shared equally by the other members. Mirroring the successful model of collider experiments at CERN, the majority (85%) of ITER-member contributions are to be delivered in the form of completed components, systems or buildings – representing untold hours of highly skilled work both in the member states and at the ITER site.

First plasma

Assembly of the tokamak, which got under way in 2020, marks an advance to a crucial new phase for the ITER project. Production of its 18 D-shaped coils that provide the toroidal magnetic field, each 17 m high and weighing 350 tonnes, is in full swing, while its circular poloidal coils are close to completion. The remaining solenoid modules and all other major tokamak components are scheduled to be on site by mid-2023. Despite the impact of the global pandemic, the ITER teams are working towards the baseline target for “first plasma” by the end of 2025, with more than 2000 persons on site each day. 

A plasma in a torus-shaped tokamak

ITER’s purpose is to demonstrate the scientific and technological feasibility of fusion power for peaceful purposes. Key objectives are defined for this demonstration, namely: production of 500 MW of fusion power with a ratio of fusion power to input heating power (Q) of at least 10 for at least 300 seconds, and sustainment of fusion power with Q = 5 consistent with steady-state operation. The key factor in reaching these objectives is the world’s largest tokamak, a concept whose name comes from a Russian acronym roughly translated as “toroidal chamber with magnetic coils”. This could also describe CERN’s Large Hadron Collider (LHC), but as we will see, the two magnetic confinement schemes are significantly different.
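Stated as a formula (a restatement of the definition above, with the arithmetic spelled out):

\[
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}},
\]

so the first objective – 500 MW of fusion power at Q ≥ 10 – implies at most 50 MW of externally supplied heating power during the 300 second burn.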

Among the largest, most complex and demanding magnet programmes ever undertaken

ITER chose deuterium and tritium (heavier variants of ordinary hydrogen) for its fuel because the D–T cross-section is the highest of all known fusion reactions. However, the energy at which the cross-section is maximum (~65 keV) is equivalent to almost 1 billion degrees. As a result, the fuel, introduced as a gas, will not remain gaseous but will be in the plasma state, broken down into its electrically charged components (ions and electrons). As in the LHC, the electric charge introduces the possibility of holding the ions and electrons in place using magnetic fields generated by electromagnets – in both cases superconducting magnets held at temperatures near absolute zero to avoid massive electrical consumption.
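The quoted temperature follows from the standard energy–temperature conversion via the Boltzmann constant (a back-of-the-envelope estimate, not an ITER operating specification):

\[
T = \frac{E}{k_{\mathrm{B}}} \approx \frac{6.5\times10^{4}\ \mathrm{eV}}{8.62\times10^{-5}\ \mathrm{eV\,K^{-1}}} \approx 7.5\times10^{8}\ \mathrm{K},
\]

i.e. almost 1 billion degrees; in practice the plasma is run somewhat cooler, consistent with the “greater than 100 million degrees” quoted below.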

ITER’s cryostat base

A simple picture of how the magnets in ITER work together to confine a plasma with temperatures greater than 100 million degrees begins with the toroidal field coils (see “Trapping a plasma” figure). Eighteen of these are arranged to make a toroidal magnetic field whose circular field lines are centred on a vertical axis. Charged particles, to the crudest approximation, follow the magnetic field, so it would seem that the problem of confining them is solved. At the next level of approximation, however, the charged particles actually make small “gyro-orbits”, like beads on a wire. This introduces a difficulty, because the “gyroradius” of these orbits depends on the strength of the magnetic field, and the toroidal field increases in strength closer to the vertical axis defining its centre. The gyroradius is therefore smaller on the inner part of the orbit, which leads to a vertical motion of the charged particles. Since the direction of this motion depends on the charge of the particle, the opposite charges move away from each other. This creates a vertical electric field which, combined with the toroidal field, rapidly expels charged particles radially outward – eliminating confinement! In the 1950s the Russian physicists Tamm and Sakharov proposed that a current flowing in the plasma in the toroidal direction would generate a net helical field, and that charged particles flowing along the total field would short out the electric field, restoring confinement. This was the invention of the tokamak magnetic confinement concept.
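A minimal formulaic version of the drift argument (non-relativistic textbook expressions, included only to make the picture concrete):

\[
r_g = \frac{m v_{\perp}}{|q|\,B}, \qquad B_{\text{tor}}(R) \propto \frac{1}{R},
\]

where m, q and v⊥ are the particle’s mass, charge and velocity perpendicular to the field, and R is the distance from the vertical axis. The field sampled along a gyro-orbit is stronger on its inboard side, so the gyroradius is smaller there and the orbit fails to close, producing the charge-dependent vertical drift described above.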

Magnetic configuration

In ITER, this current is generated by the powerful central solenoid, aligned on the vertical line at the centre of the toroidal field. It acts as the primary winding of a transformer, with the plasma as the secondary. There remains one more issue to address, again with magnets. The pressure and current in the plasma result in a force that tries to push the plasma further from the vertical line at the centre. To counter this force in ITER, six “poloidal field” coils are aligned – again about the vertical centreline – to generate vertical fields that push the plasma back toward the vertical line and also shape the plasma in ways that enhance its performance. A number of correction coils will complete ITER’s complex magnetic configuration, which will demonstrate the deployment of Nb₃Sn conductor – the same as is being implemented for high-field accelerator magnets at the High-Luminosity LHC and as proposed for future colliders – on a massive scale. CERN signed a collaboration agreement with ITER in 2008 concerning the design of high-temperature superconducting current leads and other magnet technologies, and acted as one of the “reference” laboratories for testing ITER’s superconducting strands.

The first of ITER’s poloidal field coils

Despite the pandemic disrupting production and transport, the first step of ITER’s tokamak assembly sequence – the installation of the base of the cryostat into the tokamak bioshield – was achieved in May 2020. The ITER cryostat, which must be made of non-magnetic stainless steel, will keep the entire (30 m diameter by 30 m high) tokamak assembly at the low temperatures necessary for the magnets to function. It comes in four pieces (base, lower and upper cylinders, and lid) that are welded together in the tokamak building. At 1250 tonnes, the cryostat-base lift was the heaviest of the entire assembly sequence, its successful completion officially starting the assembly sequence (see “Heavy lifting” image). Later in 2020, the lower cylinder was then installed and welded to the base. 

Bottle up

With the “bottle” to hold the tokamak placed in position, installation of the electromagnets could begin. The two poloidal field coils at the bottom of the tokamak, PF6 and PF5, had to be installed first. PF6 was placed inside the cryostat earlier this year (see “Poloidal descent” image), while the second was lifted into place this September. The next big milestone is the assembly and installation of the first “sector” of the tokamak. The vacuum vessel in which the fusion plasma is made is divided into nine equal sectors (like the slices of an orange), due to limitations on the lifting capacity and to facilitate parallel fabrication of these large objects. Each sector of the vacuum vessel (see “Monster moves” image) has two toroidal field coils associated with it. 

ITER vacuum-vessel sector

In August, this vacuum vessel and its associated thermal shields were assembled together with the toroidal field coils on the sector sub-assembly tool for the first time (see “Shaping up” image). Once joined into a single unit, it will be installed in the cryostat in late 2021. The second vacuum-vessel sector arrived on site in August and will be assembled with the two associated toroidal-field coils already on site, with a target to install the completed unit in the cryostat early in 2022. Sector components are scheduled to arrive, be put together, and then be installed in the cryostat and welded together in assembly-line fashion, with the closure of the vacuum vessel scheduled for the end of 2023. The six central-solenoid modules are also to be assembled outside the cryostat into a single structure and installed in the cryostat shortly before closure. Following the arrival of the first module this summer, the second is complete and ready for shipping. Of the remaining four niobium-titanium poloidal field magnets, three are being fabricated on site because they are too large to transport by road, and all four are in advanced stages of production.

Of course, there is more to ITER than its tokamak. In parallel, work on the supporting plant is under way. Four large transformers, which draw the steady-state electrical power from the grid, have been in operation since early 2019, while the medium- and low-voltage load centres that power clients in the plant buildings have been turned over to the operations division. The secondary and tertiary cooling systems, the chilled water and demineralised water plants, and the compressed-air and breathable-air plants are also currently being commissioned. The three large transformers that connect the pulsed power supplies for the magnets and the plasma heating systems have been qualified for operation on the 400 kV grid. The next big steps are the start of functional testing of the cryoplant and the reactive power compensation at the end of this year, and of the magnet power supplies and the first plasma heating system early in 2022. 

The 180-hectare ITER site

Perhaps the most common question one encounters when talking about ITER is: when will tokamak operations begin? Following the closure of the vacuum vessel in 2023, the current baseline schedule includes one year of installation work inside the cryostat before its closure, followed by integrated commissioning of the tokamak in 2025, culminating in “first plasma” by the end of 2025. By mandate from ITER’s governing body, the ITER Council, this schedule was put into place in 2016 as the “fastest technically achievable”, meaning it contains no contingency. Clearly the pandemic has affected the ability to meet that schedule, but the actual impact is still not possible to determine accurately. The challenge in this assessment is that 85% of the ITER components are delivered as in-kind contributions from the ITER members, and the pandemic has affected, and continues to affect, the manufacturing work on items that take years to complete. The components now being installed were substantially complete at the onset of the pandemic, but even these deliveries encountered difficulties due to the disruption of the global shipping industry. Component installation in the tokamak complex has also been affected by the limited availability of components, goods and services. Possible recovery actions or further restrictions cannot be predicted with the needed accuracy today. In this light, the ITER Council has challenged us to make the best possible effort to maintain the baseline schedule, while preparing an assessment of the impact for consideration of a revised baseline schedule next year. The ITER Organization, the domestic agencies in the ITER members responsible for supplying in-kind components, and contractors and suppliers around the world are working together to meet this additional challenge.

What the future holds

ITER is expected to operate for 20 years, providing crucial information about both the science and the technology necessary for a fusion power plant. For the science, beyond the obvious interest in meeting ITER’s performance objectives, qualitative frontiers will be crossed in two essential areas of plasma physics. First, ITER will be the first “burning” plasma, where the dominant heating power to sustain the fusion output comes directly from fusion itself. Aspects of the relevant physics have been studied for many years, but the operating point of ITER places it in a fundamentally different regime from present experiments. The same is true of the second frontier: the handling of heat and particle exhaust in ITER. There is a qualitative difference predicted by our best simulation capabilities between the ITER operating point and present experiments. This is also the first touch-point between the physics and the technology: the physics must enable the survival of the wall, while the wall must allow the plasma physics to yield the conditions needed for the fusion reactions. Other essential technologies such as the means to make new fusion fuel (tritium), recycling of the fuel in use in real-time and remote handling for maintenance activities will all be pioneered in ITER.

ITER will provide crucial information about both the science and technology necessary for a fusion power plant

While ITER will demonstrate the potential for fusion energy to become the dominant source of energy production, harnessing that potential requires the demonstration not just of the scientific and technical capabilities but of the economic feasibility too. The next steps along that path are true demonstration power plants – “DEMOs” in fusion jargon. ITER members are already exploring DEMO options, but no commitments have yet been made. The continuing advance of ITER is critical not just to motivate these next steps but also as a vision of a future where the world is powered by an energy source with universally available fuel and no impact on the environment. What a tremendous gift that would be for future generations.

The end of an era

Steven Weinberg in 2020

On 23 July, the great US theoretical physicist Steven Weinberg passed away in hospital in Austin, Texas, aged 88. He was a towering figure in the field, and made numerous seminal contributions to particle physics and cosmology that are part of the backbone of our current understanding of the fundamental laws of nature. He belongs to the select rank of scientists who, in the course of history, have radically changed the way we understand the universe and our place in it.

Weinberg was born in New York, the son of Jewish immigrants, Eve and Frederick Weinberg. He attended the Bronx High School of Science, where he met Sheldon Glashow, later to become his Harvard colleague and with whom he would share the 1979 Nobel Prize in Physics. Towards the end of high school, Weinberg was already set on becoming a theoretical physicist. He obtained his undergraduate degree at Cornell University in 1954, and then spent a year doing graduate work at the Niels Bohr Institute in Copenhagen, after which he returned to the US to complete his graduate studies at Princeton. His PhD advisor was Sam Treiman and his thesis topic was the application of renormalisation theory to the effects of strong interactions in weak processes. Weinberg obtained his degree in 1957 and then spent two years at Columbia University. From 1959 to 1969 he was at Lawrence Berkeley Laboratory and later UC Berkeley, where he received tenure in 1964. He was on leave at Harvard (1966–1967) and MIT (1967–1969), becoming professor of physics at MIT (1969–1973) before moving to Harvard (1973–1983), where he succeeded Julian Schwinger as Higgins Professor of Physics. Weinberg joined the faculty of the University of Texas at Austin as the Josey Regental Professor of Physics in 1982, and remained there for the rest of his life.

Immense contributions

Perhaps his best known contribution to physics is his formulation of electroweak unification in the context of gauge theories and using the Brout–Englert–Higgs mechanism of symmetry breaking to give mass to the W and Z bosons, while sparing the photon (CERN Courier November 2017 p25). The names Glashow, Weinberg and Salam are forever associated with the spontaneously broken SU(2) × U(1) gauge theory, which unified the electromagnetic and weak interactions and provided a large number of predictions that have been experimentally confirmed. The most concise and elegant presentation of the theory appears in Weinberg’s famous 1967 paper: “A Model of Leptons”, one of the most cited papers in the history of physics, and a great example of clear science writing (CERN Courier November 2017 p31). At the time, the first family of quarks and leptons was known, but the second was incomplete. After a substantial amount of experimental and theoretical work, we now have the full formulation of the Standard Model (SM) describing our best knowledge of the fundamental laws of nature. This is a collective journey starting with the discovery of the electron in 1897, and concluding with the discovery of the scalar particle of the SM (the Higgs boson) at CERN in 2012. Weinberg was deeply involved with the building of the SM before and beyond his 1967 paper.

Normal humans would need to live several lives to accomplish so much

It is impossible to do justice to all the scientific contributions of Weinberg’s career, but we can list a few of them. In the early 1960s he embarked on the study of symmetry breaking, and wrote a seminal contribution with Goldstone and Salam describing in detail and in full generality the mechanism of spontaneous symmetry breaking in the context of quantum field theory, providing a sound basis for the earlier discoveries of Nambu and Goldstone. Around the same time, he worked out the general structure of scattering amplitudes with the emission of arbitrary numbers of photons and gravitons. It is remarkable that this work has played a very important role in the recent study of asymptotic symmetries in general relativity and gauge theories (for example, Bondi–Metzner–Sachs symmetries and generalisations, and the general theory of Feynman amplitudes).

From jets to GUTs

Together with George Sterman, Weinberg started the study of jets in QCD, whose importance in modern high-energy experiments can hardly be exaggerated. He (and independently Frank Wilczek) realised that in the Peccei–Quinn mechanism invoked to solve the strong-CP problem, there is a light pseudoscalar particle lurking in the background. This is the infamous axion, also a prime candidate for dark-matter particles, whose experimental search has been actively pursued for decades. Weinberg was one of the pioneers in the formulation of effective field theories, which transformed the traditional approach to quantum field theory. He was the founder of chiral perturbation theory, one of the initiators of relativistic quantum theories at finite temperature, and of asymptotic safety, which has been used in some approaches to quantum gravity. In 1979 he (and independently Leonard Susskind) introduced the notion of technicolour – an alternative to the Brout–Englert–Higgs mechanism in which the scalar particle of the SM appears as a composite state of fermions – which some find more appealing, but which so far has little experimental support. Finally, we can mention his work on grand unification together with Howard Georgi and Helen Quinn, where they used the renormalisation group to understand in detail how a single coupling in the ultraviolet evolves in such a way that in the infrared it generates the coupling constants of the strong, weak and electromagnetic interactions.

Astronomical arguments

Steven Weinberg also made profound contributions in his work on the cosmological constant. In 1989 he used astronomical arguments to indicate that the vacuum energy is many orders of magnitude smaller than would be expected from modern theories of elementary particles. His bound on its possible value based on anthropic reasoning is as deep as it is unsettling. And it agrees surprisingly well with the measured value, as inferred from observations of receding, distant supernovae. It shatters Einstein’s dream of unification, when he asked himself whether the Almighty had any choice in creating the universe. Anthropic reasoning opens the door to theories of the multiverse that may also be considered as inevitable in some versions of inflationary cosmology, and in the theory of the string landscape of possible vacua for our universe. Among all the parameters of the current standard models of cosmology and particle physics, the question of which are environmental and which are fundamental becomes meaningful. Some of their values may ultimately have only a purely statistical explanation based on anthropism. “It’s a depressing kind of solution to the problem,” remarked Weinberg recently in the Courier: “But as I’ve said: there are many conditions that we impose on the laws of nature, such as logical consistency, but we don’t have the right to impose the condition that the laws should be such that they make us happy!” (CERN Courier March/April 2021 p51). On the one hand, his work led to the unification of the weak and electromagnetic forces; on the other the landscape of possibilities points against a unique universe. The tension between both points of view continues.

Steven Weinberg with Rolf Heuer and Peter Jenni

Weinberg also mastered the art of writing for non-experts. One of the most influential science books written for the general public is his masterpiece The First Three Minutes (1977), which provides a wonderful exposition of modern cosmology, the expansion of the universe, the cosmic microwave background radiation, and of Big Bang nucleosynthesis. Towards the end of the epilogue he formulated his famous statement that generated heated discussions with philosophers and theologians: “The more the universe seems comprehensible, the more it seems pointless.” In the next paragraph he tempers the coldness somewhat: “The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.” But the implied meaning that the laws of nature have no purpose continues to be as provocative as when it was made originally. The debate will linger on for a long time. 

Controversies and passions 

Weinberg’s non-technical books exhibit an extraordinary erudition in numerous subjects. His approach is original and thorough, and always illuminating. He did not shy away from delicate and controversial discussions. Weinberg was a declared atheist, with a rather negative opinion of the influence of religion on human history and society. He showed remarkable courage in being outspoken and engaging in public debates about it. Again in his 1977 book, he wrote: “Anything that we scientists can do to weaken the hold of religion should be done and may in the end be our greatest contribution to civilisation.” Needless to say, such statements raised a number of blisters in some quarters. He was also a champion of scientific reductionism, something that was not very well received in many philosophical communities. He was clearly passionate about science and scientific principles, and in defence of the search for truth. In Dreams of a Final Theory (1992) he described his fight to avoid the demise of the Superconducting Super Collider (SSC). His ardent and convincing arguments about the value of basic science, and also its importance as a motor of economic and technological growth, were not enough to convince sufficient members of the House of Representatives, and the project was cancelled in 1993. It was a very hard blow to the US and global high-energy physics communities. The discussion had another great scientist on the other side: Phil Anderson, who passed away in 2020. It is not obvious whether Anderson was against particle physics, or against big science. What is clear is that, given the size of the budget deficit in the US (now and then), what was saved by not building the SSC did not go to “table top” science.

In a 2015 interview with Third Way, Weinberg explained his philosophy and strategy when writing for the general public: “When we talk about science as part of the culture of our times, we would better make it part of that culture by explaining what we are doing. I think it is very important not to write down to the public. You have to keep in mind that you are writing for people who are not mathematically trained, but are just as smart as you are.” This empathy and respect for the reader is immediately apparent as soon as you open any of his books, and together with the depth and breadth of his insight, explains their success.

Sheldon Lee Glashow, Abdus Salam and Steven Weinberg

He also excelled in the writing of technical books. In the early 1960s Weinberg became interested in astrophysics and cosmology, leading, among other things, to the landmark Gravitation and Cosmology (1971). The book became an instant classic, and it is still useful to learn about many aspects of general relativity and the early universe. In the 1990s he published a masterful three-volume set on The Quantum Theory of Fields, which is probably the definitive treatment of the subject in the 20th century. In 2008 he published Cosmology, an important update of his 1971 work, providing self-contained explanations of the ideas and formulas that are used and tested in modern cosmological observations. He also published Lectures on Quantum Mechanics in 2015, one of the very best books on the subject, where the depth of his knowledge and insight shine throughout. The man had not lost his grit. Only this year, he published what he described as an advanced undergraduate textbook, Foundations of Modern Physics, based on a lecture course he was asked to give at Austin. What distinguishes his scientific books from many others is that, in addition to the care and erudition with which the material is presented, they are also interspersed with all kinds of golden nuggets. Weinberg never avoids some of the conceptual difficulties that plague the subjects, and it is a real pleasure to find deep and inspiring clarifications.

It is not possible to list all his awards and honours, but let’s mention that he was elected to the US National Academy of Sciences in 1972, was awarded the Dannie Heineman Prize for Mathematical Physics in 1977 and the Nobel Prize in Physics in 1979. He was also a foreign honorary member of the Royal Society of London, received a Special Breakthrough Prize in Fundamental Physics in 2020, and was invited to give the most prestigious lectures on the planet. Normal humans would need to live several lives to accomplish so much.

A great general 

Lately, Weinberg was interested in fundamental problems in the foundations and interpretation of quantum mechanics, and in the study of gravitational waves and what we can learn about the distribution of matter in the universe between us and their sources – two subjects of very active current research. In a 2020 preprint “Models of lepton and quark masses”, he returned to a problem that he last tackled in 1972, the fermion mass hierarchy.

His legacy will continue to inspire physicists for generations to come 

He also continued lecturing until almost the very end. Weinberg was an avid reader of military history, as evidenced in some of his writings, and as with a great general, he died with his boots on.

The news of his demise spread like a tsunami in our community, and led us into a state of mourning. When such a powerful voice is permanently silenced, we are all inevitably diminished. His legacy will continue to inspire physicists for generations to come.

Steven Weinberg is survived by his wife Louise, professor of law at the University of Texas, whom he married in 1954, his daughter Elizabeth, a medical doctor, and a granddaughter Gabrielle.

First Mustafa prizes for fundamental physics

An ATLAS researcher and a leading string theorist are among the winners of the 2021 Mustafa prize, which recognises researchers from the Islamic world. Yahya Tayalati (Mohammed V University, Rabat) was cited for contributions to searches for magnetic monopoles and his work on light-by-light scattering, which was first observed by ATLAS in 2019. Cumrun Vafa (Harvard) was recognised for developing F-theory. Among other laureates, M. Zahid Hasan (Princeton) was cited for his work on Weyl-fermion semimetals and topological insulators – materials which are insulators inside but conduct on their surfaces. Each wins $500,000.

Since its foundation in 2012, the Mustafa Prize has been announced every two years. This year is the first time that the prize has been awarded to researchers in fundamental science.

World’s most powerful MRI unveiled

A 132-tonne superconducting magnet has set a new record for whole-body magnetic-resonance imaging (MRI), producing a field of 11.7 T inside a volume 0.9 m in diameter and 5 m long. Four times more powerful than typical hospital devices, the “Iseult” project at CEA-Paris-Saclay paves the way for imaging the brain in unprecedented detail for medical research.

Using a pumpkin as a suitably brain-like subject, the team released its first images on 7 October, validating the system and demonstrating an initial resolution of 400 microns in three dimensions. Other checks and approvals are necessary before the first imaging of human volunteers can begin.

This work will undoubtedly lead to major clinical applications

Stanislas Dehaene

“Thanks to this extraordinary MRI, our researchers are looking forward to studying the anatomical and structural organization of the brain in greater detail. This work will undoubtedly lead to major clinical applications,” said Stanislas Dehaene, director of NeuroSpin, the neuroimaging platform at CEA-Paris-Saclay.

The magnets that drive tens of thousands of MRI devices worldwide perform the vital task of aligning the magnetic moments of hydrogen atoms. Then, RF pulses are used to momentarily disturb this order in a specific region, after which the atoms are pulled back into equilibrium by the magnetic field, and radiate. The stronger the field, the higher the signal-to-noise ratio, and thus the better the image resolution.
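The field dependence enters through the precession (Larmor) frequency, which is proportional to the field strength; for protons the gyromagnetic ratio is γ/2π ≈ 42.58 MHz/T, so at Iseult’s field (a back-of-the-envelope figure, not an official specification):

\[
f = \frac{\gamma}{2\pi}\,B \approx 42.58\ \mathrm{MHz\,T^{-1}} \times 11.7\ \mathrm{T} \approx 498\ \mathrm{MHz}.
\]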

Niobium-titanium

In addition to being the largest and most powerful MRI magnet ever built, claims the team, the Iseult solenoid (carrying a current of 1.5 kA) also sets a record for the highest field ever achieved using niobium-titanium conductor, the same as is used in the present LHC magnets. With various optimisations, and working with the European Union Aroma project on methodologies for the optimal functioning of the new MRI device, a resolution approaching 100 to 200 microns is planned, around ten times better than that of commercial 3 T devices.

Designed and built over ten years, Iseult was jointly led by neuroscientists and magnet and MRI specialists at the CEA Institute of Research into the Fundamental Laws of the Universe (IRFU) and the Frédéric Joliot Institute for Life Sciences, along with several industry and academic partners in Germany. Although CERN was not directly involved, Iseult’s success is anchored in more than four decades of joint developments between CERN and the CEA, explains Anne-Isabelle Etienvre, head of CEA IRFU:

“It is thanks to the know-how developed for particle physics and fusion that MRI experts had the idea to ask us to design and build this unique and challenging magnet for MRI — in particular, CEA has played a major role, together with CERN and other partners, on LHC magnets, the ATLAS toroidal magnets and the CMS solenoid,” says Etienvre. “The collaboration between CEA and CERN is still very lively, in particular for advanced magnets for future accelerators.”

MicroBooNE sees no hint of a sterile neutrino

The existence of an eV-scale sterile neutrino looks less likely today than at any time in the past 20 years. Such a particle has long been considered to be the simplest explanation for several related anomalies in neutrino physics, but results released yesterday by Fermilab’s MicroBooNE collaboration disfavour its existence relative to the Standard Model.

“MicroBooNE has made a very comprehensive exploration through multiple types of interactions, and multiple analysis and reconstruction techniques,” says co-spokesperson Bonnie Fleming of Yale. “They all tell us the same thing, and that gives us very high confidence in our results that we are not seeing a hint of a sterile neutrino.” 

The collaboration says that the analyses favour the Standard Model over the anomalous signal seen by sibling-experiment MiniBooNE at more than 99% confidence, should its true origin be electrons from a neutrino oscillation via a hitherto-undetected sterile neutrino. “But that earlier data from MiniBooNE doesn’t lie,” says former co-spokesperson Sam Zeller of Fermilab. “There’s something really interesting happening that we still need to explain.”

There’s something really interesting happening that we still need to explain

Sam Zeller

Neutrinos suffer from an identity crisis regarding their mass. As a result, the three known flavours morph into each other as phase differences develop between three mass eigenstates. However, well before this model solidified around the turn of the millennium, a measurement by the LSND collaboration at Los Alamos in the US suggested the existence of an additional neutrino which had to be “sterile” with respect to the weak, electromagnetic and strong interactions, and much more massive, given how rapidly the oscillation developed. Since this first hint, the tale of the sterile neutrino has taken multiple twists and turns.
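In the standard two-flavour approximation (a textbook formula, quoted here to make “how rapidly the oscillation developed” concrete; the LSND numbers are rounded):

\[
P(\nu_{\mu} \rightarrow \nu_{e}) = \sin^{2}2\theta\;\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV^{2}}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

so an appearance signal at LSND’s roughly 30 m baseline with 30–50 MeV antineutrinos requires Δm² of order 1 eV², far above the two known mass-squared splittings.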

Twists and turns

In the mid-1990s, LSND reported seeing a 3.8σ excess of electron antineutrinos in a beam of accelerator-generated muon antineutrinos, but the KARMEN experiment at the Rutherford Appleton Laboratory in the UK failed to reproduce the effect. Evidence for an eV-scale sterile neutrino mounted with the observation of a deficit of electron neutrinos from ³⁷Ar and ⁵¹Cr electron-capture decays at Gran Sasso in Italy and at the Baksan Neutrino Observatory in Russia (the gallium anomaly), and a reported deficit of electron antineutrinos from nuclear reactors (the reactor anomaly). Troublingly, however, long-baseline accelerator neutrino experiments such as MINOS+ do not observe the requisite “disappearance” of muon neutrinos required by the principle of unitarity, and the existence of such a sterile neutrino is also starkly incompatible with current models of cosmology. While the gallium anomaly should soon be probed definitively by the BEST experiment at Baksan (Phys. Rev. D 97 073001 (2018)), recent calculations of reactor fluxes may now be dissolving the reactor anomaly (see, for example, arXiv:2110.06820). But the most compelling single piece of evidence in favour of sterile neutrinos came when the MiniBooNE experiment at Fermilab tried to reproduce the LSND effect. In November 2018, the collaboration reported a 4.5σ excess of electron neutrinos and antineutrinos compared to Standard-Model expectations.

Few neutrino physicists foresaw that MicroBooNE would disfavour both hypotheses

Sibling experiment MicroBooNE has now released its first round of tests of the MiniBooNE anomaly. Equipped with a cutting-edge liquid-argon time-projection chamber, the collaboration observed neutrino interactions at the level of individual particle tracks – a key advantage compared to a Cherenkov detector such as MiniBooNE, which could not distinguish electrons from photons. The collaboration has now used half of its available data to probe which particle is the true origin of the anomaly. Earlier this month, MicroBooNE tested the hypothesis that MiniBooNE’s excess was actually due to an underestimated single-photon background, perhaps caused by a difficult-to-model rare decay of a Δ resonance. Now, MicroBooNE has tested the hypothesis that the MiniBooNE excess was caused by single electrons, most likely the result of neutrino oscillations via an eV-scale sterile neutrino. Few neutrino physicists foresaw that MicroBooNE would disfavour both hypotheses.

“Every time we look at neutrinos, we seem to find something new or unexpected,” says MicroBooNE co-spokesperson Justin Evans of the University of Manchester. “MicroBooNE’s results are taking us in a new direction, and our neutrino programme is going to get to the bottom of some of these mysteries.” The collaboration will now investigate whether more exotic topologies such as electron-positron pairs could be the source of the MiniBooNE anomaly. Such a final state might suggest the existence of heavier sterile neutrinos, say theorists.

“eV-scale sterile neutrinos no longer appear to be experimentally motivated, and never solved any outstanding problems in the Standard Model,” says theorist Mikhail Shaposhnikov of EPFL. “But GeV-to-keV-scale sterile neutrinos – so-called Majorana fermions – are well motivated theoretically and do not contradict any existing experiment. They can explain neutrino masses and oscillations, give a dark-matter candidate, and produce a baryon asymmetry in the universe: all the problems that the Standard Model is incapable of addressing. Experimental efforts at the intensity frontier should now be concentrated in this direction.”

Fermilab: a future built on international engagement

Future scientific breakthroughs in high-energy physics will require unprecedented levels of international engagement, building on the successful model of the Large Hadron Collider at CERN. Joe Lykken, Fermilab deputy director for research, will describe how Fermilab is moving forward rapidly with CERN and other international partners to realise this vision.

The questions under scrutiny range from the nature of the Higgs field to the question of whether neutrinos play a role in the matter-antimatter asymmetry observed in the universe. PIP-II, an upgrade to the Fermilab accelerator complex that includes a leading-edge superconducting linear accelerator, is already under construction, with major “in-kind” contributions and expertise from partners in India, Italy, the UK, France and Poland. PIP-II will enable the world’s most intense beam of neutrinos for the Deep Underground Neutrino Experiment (DUNE), which will deploy 70,000 tonnes of liquid argon detectors in a deep underground site 1300 km from Fermilab. DUNE was formulated as an international project from the start, and now includes more than a thousand collaborators from 30 countries. Two large prototype detectors for DUNE have been successfully constructed and tested at the CERN Neutrino Platform. DUNE will have remarkable capabilities to determine how the properties of neutrinos have shaped our universe. At the same time, Fermilab has been developing and building next-generation superconducting magnets that will be deployed in the HL-LHC accelerator at CERN, and is the US lead for ambitious upgrades to the CMS experiment for the HL-LHC era. These technological capabilities will also make Fermilab an important partner for the proposed Future Circular Collider.

Joseph Lykken is Fermilab’s deputy director of research and leads the Fermilab Quantum Institute. A distinguished scientist at the laboratory, he is a former member of the Theory Department, where he researched string theory and phenomenology, and is a member of the CMS experiment at the Large Hadron Collider at CERN. He received his PhD from the Massachusetts Institute of Technology and previously worked at the Santa Cruz Institute for Particle Physics and the University of Chicago. Lykken began his tenure at Fermilab in 1989. He is a former member of the High Energy Physics Advisory Panel, which advises both the Department of Energy and the National Science Foundation, and served on the Particle Physics Project Prioritization Panel, which developed a road map for the next 20 years of US particle physics. Lykken is a fellow of the American Physical Society and of the American Association for the Advancement of Science.

Wheels in motion for ATLAS upgrade

The first of the ATLAS New Small Wheels

The Large Hadron Collider (LHC) complex is being upgraded to significantly extend its scientific reach. Following the ongoing 2019–2022 long shutdown, the LHC is expected to operate during Run 3 close to its design energy of 7 TeV per beam and at luminosities more than double the original design value. After the next long shutdown, currently foreseen for 2025–2027, the High-Luminosity LHC (HL-LHC) will run at luminosities of 5–7 × 10³⁴ cm⁻² s⁻¹. This corresponds to 140–200 simultaneous interactions per LHC bunch crossing (“pileup”), which is three to four times the Run-3 expectation and up to eight times the original LHC design value. The ATLAS experiment, like others at the LHC, is undergoing major upgrades for the new LHC era.
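
As a rough cross-check of these figures, the mean pileup is the instantaneous luminosity times the inelastic proton–proton cross-section, divided by the bunch-crossing rate (the number of colliding bunch pairs times the LHC revolution frequency). The inputs below – an inelastic cross-section of about 80 mb and roughly 2800 colliding bunch pairs at the 11.245 kHz revolution frequency – are illustrative assumptions rather than official machine parameters:

\[
\mu \;\approx\; \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_{b}\,f_{\mathrm{rev}}}
\;\approx\; \frac{(7\times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}})\,(8\times 10^{-26}\,\mathrm{cm^{2}})}{2800 \times 11\,245\,\mathrm{s^{-1}}}
\;\approx\; 180,
\]

broadly consistent with the quoted 140–200 interactions per crossing at the upper end of the HL-LHC luminosity range.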

Coping with very high interaction rates while maintaining low transverse-momentum (pT) thresholds for triggering on electrons and muons from the targeted physics processes will be extremely challenging at the HL-LHC. A further issue for the ATLAS experiment is that the performance of its muon tracking chambers, particularly in the end-cap regions of the detector, degrades with increasing particle rate. If the original chambers were kept for the HL-LHC, the efficiency and resolution of muon reconstruction would suffer.

Pseudorapidity distribution of muon candidates

Muons are vital for efficiently triggering on, and thus precisely studying, processes in the electroweak sector such as Higgs, W and Z physics. It is therefore essential that the ATLAS detector cover as wide a range as possible in pseudorapidity, η = –ln tan(θ/2), where θ is the angle with respect to the proton beam axis. In the central region of the detector, |η| < 1, there is a good purity of muons originating from the proton collision point (see “Good muons” figure). In the end caps, |η| > 1.3, significant contributions – the so-called “fake” muon signals (see “Real or fake?” figure) – arise from other sources. These include cavern backgrounds and muons produced in the halo of the LHC proton beams, both of which increase with instantaneous luminosity. Without modifications to the detector, the fake-muon trigger rates in the end caps would become unsustainable at the HL-LHC, requiring the muon pT thresholds in the Level-1 trigger to be raised substantially.
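
For orientation, the definition of pseudorapidity can be inverted to translate the quoted η boundaries into polar angles; this is pure geometry, following directly from η = –ln tan(θ/2):

\[
\theta = 2\arctan e^{-\eta}:\qquad
|\eta| = 1 \;\Rightarrow\; \theta \approx 40^{\circ},\qquad
|\eta| = 1.3 \;\Rightarrow\; \theta \approx 31^{\circ},\qquad
|\eta| = 2.5 \;\Rightarrow\; \theta \approx 9.4^{\circ},
\]

so the end-cap region discussed here concerns tracks within roughly 30° of the beam axis.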

Sketch of a quarter section of ATLAS

To resolve these issues, the ATLAS collaboration decided, as part of its major Phase-I upgrade, to replace the existing ATLAS muon small wheels with the “New Small Wheels” (NSW), capable of reconstructing muon track segments locally with 1 mrad resolution for both the Level-1 trigger and for offline reconstruction. The NSW will allow low-pT thresholds to be maintained for the end-cap muon triggers even at the ultimate HL-LHC luminosity.

The low-pT region for leptons is of critical importance to the ATLAS physics programme. As an example, Higgs-boson production via vector-boson fusion (VBF) is a powerful channel for precision Higgs studies, and low-pT end-cap lepton triggers are crucial for selecting the H → ττ events used to study Higgs-boson Yukawa couplings. Within the current tracking-detector acceptance of |η| < 2.5, the fraction of VBF H → ττ events with a leading muon of pT above 25 GeV (a typical Run-2 threshold) is 60%; this fraction drops to 28% for a pT threshold of 40 GeV (the typical HL-LHC threshold expected if the detectors were left unchanged). Maintaining, or even reducing, the muon pT threshold is therefore critical for extending the ATLAS physics programme at higher LHC luminosities.

Frontier technologies

The ATLAS NSW is a set of precision tracking and trigger detectors able to work at high rates with excellent spatial and time resolution using two innovative technologies: MicroMegas (MM) and small-strip thin-gap chambers (sTGC). These detectors will provide the muon Level-1 trigger system with online track segments with good angular resolution to confirm that they originate from the interaction point, reducing triggers from fake muons. They will also have timing resolutions below the 25 ns interbunch time, enabling bunch-crossing identification. With the NSW, ATLAS will keep the full acceptance of its muon tracking system at the HL-LHC while maintaining a low Level-1 pT threshold of around 20 GeV.

MicroMegas detectors and small-strip thin-gap chambers

The ATLAS collaboration chose MM and sTGC technologies for the NSW after a detailed scrutiny of several available options. The idea was to build a robust and redundant system, using cutting-edge yet cost-effective technologies. Each NSW wheel has 16 sectors, with each sector containing four MM chambers and six sTGC chambers. Each sector, with a total surface area ranging from about 4 to 6 m², has eight sensitive planes of MM and eight of sTGC along the muon track direction. The 16 measurement planes in total allow for redundancy in the track reconstruction.

MM detectors were proposed in the 1990s in the framework of the Micro-Pattern Gaseous Detectors (MPGD) R&D programme, including the RD51 project at CERN (see “Robust and redundant” figure, top). They profit from photolithographic techniques for the fabrication of high-granularity readout patterns and, in parallel, from the development of specialised front-end electronics with a large number of channels. A dedicated R&D programme introduced, developed and realised the concept of resistive MM detectors. The main challenge for ATLAS was to scale the detectors from a few tens of centimetres in size to chambers of 2–3 m², with the geometry controlled at the level of tens of μm. This required additional R&D together with a very detailed mechanical design of the detectors. The resulting detectors represent the largest and most complex MPGD system ever built.

Thin-gap chambers have been used for triggering and to provide the azimuthal coordinate of muons in the ATLAS muon spectrometer end caps since the beginning of LHC operations, and were used previously in the OPAL experiment at LEP. The sTGC is an extension of established TGC technology to allow for precise online tracking that can be used both in the trigger and in offline muon tracking, with a strip pitch of 3.2 mm (see “Robust and redundant” figure, bottom).

A common readout front-end chip, named VMM, was developed for the MM strips and for the active elements of the sTGC (strips, pads and wires). This novel “amplifier-shaper-discriminator” ASIC performs amplification and shaping, peak finding and digitisation of the detector signals. The overall system has about two million MM and 350,000 sTGC readout channels. Using information from both detector technologies, the NSW trigger will identify track segments pointing to the interaction region and pass this information to the ATLAS muon trigger.

International enterprise

The construction of the 128 MM and 192 sTGC chambers has been a truly international enterprise shared among several laboratories. The MM chambers were built by five consortia in France, Germany, Greece, Italy and Russia, with infrastructure and technical expertise inherited from the construction of the monitored drift tube chambers of the ATLAS muon spectrometer. The sTGC chambers were built by five consortia located in Canada, Chile, China, Israel and Russia, including both institutes from the original TGC construction and new ones.

A key challenge in realising both technologies was the use of large-area circuit boards produced by industry. For the MM, high-voltage instabilities observed from the first large-size prototypes onwards were traced mostly to the quality of the printed circuit boards. Two aspects in particular were investigated: the cleanliness of the surfaces, and the measured board resistivity, which in many cases was not high enough to prevent electrical discharges in the detector. For both problems, detailed mitigation protocols were developed and shared among the consortia: a cleaning protocol, including polishing and washing of all surfaces, and a “passivation” procedure designed to mask the detector regions of lower resistivity where most of the discharges were observed to occur.

MicroMegas double-wedges and small-strip thin-gap chamber wedges

For the sTGC, the principal difficulty in the circuit-board production was maintaining mechanical tolerances and electrical integrity over the large areas. Considerable R&D and quality control were required before and during board production; combined with X-ray measurements at CERN, this effort ensures the sTGC layers are aligned to better than 100 μm.

Along with the chamber construction, several tests were carried out at the construction sites to evaluate chamber quality. Some of the first full-size prototypes, together with the first production chambers, were exposed to test beams. All the sTGC chambers and a large fraction of the MM chambers were also tested at CERN’s GIF++ irradiation facility to evaluate their behaviour at particle rates comparable to those expected at the HL-LHC.

The integration of the MM and sTGC chambers into wheel sectors took place at CERN from 2018 to 2021. Four MM chambers form a double-wedge, assembled to meet stringent alignment requirements and then equipped with all the necessary services and the final front-end electronics (see “Taking stock” image). Each double-wedge was fully tested in a dedicated cosmic-ray stand to verify detector functionality and evaluate detection efficiency. For the sTGCs, three chambers were glued to fibreglass frames using precision inserts on a granite table to form a wedge. After long-term high-voltage tests, the sTGC wedges were equipped with front-end electronics, cooling, and readout cables and fibres. All the sTGC chambers were tested with cosmic rays at the construction sites, and a few were also tested at CERN.

The first New Small Wheel

To form each sector, two sTGC wedges and one MM double-wedge were sandwiched together. The sectors were then precisely mounted on “spokes” installed on the large shielding disks that form the NSW wheels, along with a precision optical alignment system that allows the chamber positions to be tracked by ATLAS in real time (see “Revolutions” image). After the final electrical, cooling and gas connections were completed during 2020 and 2021, all sectors were commissioned and tested on the wheel. One unexpected problem encountered on the first sectors of wheel A was a noise level in the front-end electronics significantly higher than observed during integration. A large and ultimately successful effort was mounted to mitigate this challenge, for example by improving the grounding and shielding and adding filtering to the power supplies.

This final success follows more than a decade of research, design and construction by the ATLAS collaboration. The NSW initiative dates back to early LHC operation, around 2010; the technical design report was approved in 2013, with construction preparation starting soon afterwards. The impact of the COVID-19 pandemic on the NSW construction schedule was significant, mostly at the construction sites, where delays of up to a few months accrued, but the project is now on schedule for completion during the current LHC shutdown.

The endgame

Prior to lowering the NSW into the ATLAS experimental cavern, other infrastructure was installed to prepare for detector operation. The service caverns were equipped with electronics racks, high-voltage and low-voltage power supplies, gas distribution systems, cooling infrastructure for electronics, as well as control and safety systems. Where possible, existing infrastructure from the previous ATLAS small wheels was repurposed for the NSW.


On 6 July, the first wheel, NSW-A, was shipped from Building 191 on the CERN site to LHC Point 1 and, less than a week later, lowered into its position in ATLAS (see “In place” image). With the first NSW in its final position, the next step was the extensive campaign of connecting low-voltage and high-voltage power, gas, readout fibres and electronics cooling. These connections were completed for NSW-A in July and August 2021, and an extensive commissioning programme is ongoing. In addition to powering both the chambers and the readout electronics, the integration of the NSW into the ATLAS controls and data-acquisition system is under way at Point 1. NSW-A is planned to be fully integrated into ATLAS for the LHC pilot-beam run in October 2021, after which NSW-C will be lowered and installed.

Despite a tight schedule, ATLAS is close to completing its Phase-I upgrade goal of having both NSW-A and NSW-C installed for the start of Run 3. The period up to February 2022 will be needed to complete commissioning and testing, and from March 2022 an important “commissioning with beam” phase will be carried out to ensure stable operation for Run-3 collisions. Even with the challenges of developing new technologies across a dozen countries during the COVID-19 pandemic, the ATLAS New Small Wheels will be ready for the higher luminosities that will open a new era of LHC physics.
