
Vacuum solutions fuel fusion dreams

ITER vacuum vessel being moved into the tokamak pit

Robert Pearce is all about the detail. That’s probably as it should be for the section leader of the diverse, sprawling vacuum ecosystem now taking shape as part of the work-in-progress ITER experimental reactor in southern France. When it comes online in the mid-2020s, this collaborative megaproject – which is backed by China, the European Union, India, Japan, Korea, Russia and the US – will generate nuclear fusion in a tokamak device (the world’s largest) that uses superconducting magnets to contain and control a hot plasma in the shape of a torus. In the process, ITER will also become the first experimental fusion machine to achieve “net energy” – when the total power produced during a fusion plasma pulse surpasses the power injected to heat the plasma – while providing researchers with a real-world platform to test the integrated technologies, materials and physics regimes necessary for future commercial production of fusion-based electricity. 

Robert Pearce giving a lecture to students

Vacuum reimagined

If ITER is big science writ large, then its myriad vacuum systems are an equally bold reimagining – at scale – of vacuum science, technology and innovation. “ITER requires one of the most complex vacuum systems ever built,” explains Pearce. “We’ve overcome a lot of challenges so far in the construction of the vacuum infrastructure, though there are doubtless more along the way. One thing is certain: we will need to achieve a lot of vacuum – across a range of regimes and with enabling technologies that deliver bulletproof integrity – to ensure successful, sustained fusion operation.” 

The task of turning the vacuum vision into reality falls to Pearce and a core team of around 30 engineers and physicists based at the main ITER campus at Cadarache. It’s a multidisciplinary effort, with domain knowledge and expertise spanning mechanical engineering, modelling and simulation, experimental validation, surface science, systems deployment and integration, as well as process control and instrumentation. At a headline level, the group is focused on delivering against two guiding objectives. “We need to make sure all the vacuum systems are specified to our exacting standards in terms of leak tightness, cleanliness and optimal systems integration so that everything works together seamlessly,” notes Pearce. “The other aspect of our remit involves working with multiple partner organisations to develop, validate and implement the main pumping systems, vacuum chambers and distribution network.”

The tokamak at the heart of the ITER construction site

Sharing the load

Beyond the main project campus, the two primary partners on the ITER vacuum programme are the Fusion for Energy (F4E) team in Barcelona, Spain, and US ITER in Oak Ridge, Tennessee, both of which support the vacuum effort through “in-kind” contributions of equipment and personnel to complement direct cash investments from the member countries. While the ITER Vacuum Handbook – effectively the project bible for all things vacuum – provides a reference point to shape best practice across vacuum hardware, associated control systems, instrumentation and quality management, there’s no one-size-fits-all model for the relationship between the Cadarache vacuum team and its partner network.

“We supply ‘build-to-print’ designs to Barcelona – for example, in the case of the large torus cryopump systems – and they, in collaboration with us, then take care of the procurement with their chosen industry suppliers,” explains Pearce. With Oak Ridge, which is responsible for provision of the vacuum auxiliary and roughing pump systems (among other things), the collaboration is based on what Pearce calls “functional specification procurement…in which we articulate more of the functionality and they then work through a preliminary and final design with us”.

Vacuum innovation: ITER’s impact dividend

While ITER’s vacuum team pushes the boundaries of what’s possible in applied vacuum science, industry partners are working alongside to deliver the enabling technology innovations, ranging from one-of-a-kind pumping installations to advanced instrumentation and ancillary equipment.

The ITER neutral beam injector systems – accelerators that will drive high-energy neutral particles into the tokamak to heat the fusion plasma – are a case in point. The two main injectors (each roughly the size of a locomotive) will be pumped by a pair of open-structure, panel-style cryosorption pumps (with a single pump measuring 8 m long and 2.8 m high). 

Working in tandem, the pumps will achieve a pumping speed of 4500 m³/s for hydrogen, with a robust stainless-steel boundary necessary for the cryogenic circuits to provide a confinement barrier between tritium (which is radioactive) and cryogenic helium.
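To put that pumping speed in context, the steady-state gas load such a pump pair can handle follows from the simple throughput relation Q = S × P. A minimal sketch, using an assumed operating pressure (a round number for illustration, not an ITER design figure):

```python
# Illustrative throughput estimate for the neutral-beam cryopump pair.
# Steady state: gas throughput Q = S * P.
# The pumping speed is the figure quoted above; the operating
# pressure is an assumed value for illustration only.

S = 4500.0   # combined pumping speed for hydrogen, m^3/s
P = 1e-7     # assumed operating pressure, mbar

Q = S * P    # gas throughput, mbar.m^3/s
print(f"Throughput at {P:g} mbar: {Q:.1e} mbar.m^3/s")
```

Even at such a low pressure, the sheer pumping speed translates into a substantial continuous gas load that the cryopanels must sorb between regeneration cycles.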

Key to success is a co-development effort – involving ITER engineers and industry partner Ravanat (France) – to realise a new manufacturing method for the fabrication of cryopanels via expansion of stainless-steel tube (at around 2000 bar) into aluminium extrusions. It’s a breakthrough, moreover, that delivers excellent thermal contact over the operating temperature range (4.5 K for pumping to 400 K for regeneration), while combining the robustness of stainless steel with the thermal conductivity of aluminium.

Industry innovation is also in evidence at a smaller scale. As the ITER project progresses to the active (nuclear) phase of operations, for example, human access to the cryostat will be very limited. With this in mind, the In-Pipe Inspection Tool (IPIT) is being developed for remote inspection and leak localisation within the tens of km of cryostat pipework. 

An R&D collaboration between ITER vacuum engineers, F4E and Italian robotics firm Danieli Telerobot, the IPIT is capable of deployment in small-bore pipework up to 45 m from the insertion point. The unit combines a high-resolution camera for inspection of welds internal to the pipe with a dedicated “bladder” for isolation of vacuum leaks prior to repair.

Other instrument innovations already well developed by industry to meet ITER’s needs include a radiation-hardened (> 1 MGy) and magnetic-field-compatible (> 200 mT) residual gas analyser that permits remote operation via a controller up to 140 m away (supplied by Hiden Analytical, UK); and also an optical diaphragm gauge (> 1 MGy, > 200 mT) with a measurement capability in line with the best capacitive manometers (a co-development between Inficon, Germany, and OpSens Solutions, Canada).

When it comes to downstream commercial opportunities, it’s notable that ITER member countries share the experimental results and any intellectual property generated by ITER during the development, construction and operation phases of the project.

More broadly, because vacuum is central to so many of ITER’s core systems – including the main tokamak vessel (1400 m³), the surrounding cryostat (16,000 m³) and the superconducting magnets – the vacuum team also has touch-points and dependencies with an extended network of research partners and equipment makers across ITER’s member countries. Unsurprisingly, with more than 300 pumping systems and 10 different pumping technologies to be deployed across the ITER plant, complexity is one of the biggest engineering challenges confronting Pearce and his team.

“Once operational, ITER will have thousands of different volumes that need pumping across a range of vacuum regimes,” notes Pearce. “Overall, there’s high diversity in terms of vacuum function and need, though the ITER Vacuum Handbook does help to standardise our approach to issues like leak tightness, weld quality, testing protocols, cleanliness and the like.”

The ITER cryoplant

Atypical vacuum

Notwithstanding the complexity of ITER’s large-scale vacuum infrastructure, Pearce and his team must also contend with the atypical operational constraints in and around the fusion tokamak. For starters, many of the machine’s vacuum components (and associated instrumentation) need to be qualified for operation in a nuclear environment (the ITER tokamak and supporting plant must enclose and securely contain radioactive species like tritium) and to cope with strong magnetic fields (up to 7 T in the plasma chamber and up to 300 mT for the vacuum valves and instruments). In terms of qualification, it’s notable that ITER is being built in a region with a history of seismic activity – deliberately so, to demonstrate that a fusion reactor can be operated safely anywhere in the world. 

“Ultimately,” concludes Pearce, “any vacuum system – and especially one on the scale and complexity required for ITER – requires great attention to detail to be successful.”

MAX IV: partnership is the key

Sweden’s MAX IV synchrotron radiation facility

Sweden’s MAX IV synchrotron radiation facility is among an elite cadre of advanced X-ray sources, shedding light on the structure and behaviour of matter at the atomic and molecular level across a range of fundamental and applied disciplines – from clean-energy technologies to pharma and healthcare, from structural biology and nanotech to food science and cultural heritage. 

Marek Grabski, MAX IV’s vacuum section leader

In terms of core building blocks, this fourth-generation light source – which was inaugurated in 2016 – consists of a linear electron accelerator plus 1.5 and 3 GeV electron storage rings (with the two rings optimised for the production of soft and hard X-rays, respectively). As well as delivering beam to a short-pulse facility, the linac serves as a full-energy injector to the two storage rings which, in turn, provide photons that are extracted for user experiments across 14 specialist beamlines.

Underpinning all of this is a ground-breaking implementation of ultrahigh-vacuum (UHV) technologies within MAX IV’s 3 GeV electron storage ring – the first synchrotron storage ring in which the inner surfaces of almost all the vacuum chambers along its circumference are coated with non-evaporable-getter (NEG) thin film for distributed pumping and low dynamic outgassing. Here, Marek Grabski, MAX IV vacuum section leader, gives CERN Courier the insider take on a unique vacuum installation and its subsequent operational validation.

What are the main design challenges associated with the 3 GeV storage-ring vacuum system?

We were up against a number of technical constraints that necessitated an innovative approach to vacuum design. The vacuum chambers, for example, are encapsulated within the storage ring’s compact magnet blocks with bore apertures of 25 mm diameter (see “The MAX IV 3 GeV storage ring: unique technologies, unprecedented performance”). What’s more, there are requirements for long beam lifetime, space limitations imposed by the magnet design, the need for heat dissipation from incoming synchrotron radiation, as well as minimal beam-coupling impedance. 

The answer, it turned out, is a baseline design concept that exploits NEG thin-film coatings, a technology originally pioneered by CERN that combines distributed pumping of active residual gas species with low photon-stimulated desorption. The NEG coating was applied by magnetron sputtering to almost all the inner surfaces (98% lengthwise) of the vacuum chambers along the electron beam path. As a consequence, there are only three lumped ion pumps fitted on each standard “achromat” (20 achromats in all, with a single achromat measuring 26.4 m end-to-end). That’s far fewer than typically seen in other advanced synchrotron light sources.

The MAX IV 3 GeV storage ring: unique technologies, unprecedented performance

Among the must-have user requirements for the 3 GeV storage ring was the specified design goal of reaching ultralow electron-beam emittance (and ultrahigh brightness) within a relatively small circumference (528 m). As such, the bare lattice natural emittance for the 3 GeV ring is 328 pm rad – more than an order of magnitude lower than typically achieved by previous third-generation storage rings in the same energy range.

Even though the fundamental concepts for realising ultralow emittance had been laid out in the early 1990s, many in the synchrotron community remained sceptical that the innovative technical solutions proposed for MAX IV would work. Despite the naysayers, on 25 August 2015 the first electron beam circulated in the 3 GeV storage ring and, over time, all design parameters were realised: the fourth generation of storage-ring-based light sources was born. 

Layout of the MAX IV lab and aerial view of the main facilities

Stringent beam parameters 

The MAX IV 3 GeV storage ring represents the first deployment of a so-called multibend achromat magnet lattice in an accelerator of this type, with the large number of bending magnets central to ensuring ultralow horizontal beam emittance. In all, there are seven bending magnets per achromat (and 20 achromats making up the complete storage ring). 

Not surprisingly, miniaturisation is a priority in order to accommodate the 140 magnet blocks – each consisting of a dipole magnet and other magnet types (quadrupoles, sextupoles, octupoles and correctors) – into the ring circumference. This was achieved by CNC machining the bending magnets from a single piece of solid steel (with high tolerances) and combining them with other magnet types into a single integrated block. All magnets within one block are mechanically referenced, with only the block as a whole aligned on a concrete girder.

Vacuum innovation

Meanwhile, the vacuum system design for the 3 GeV storage ring also required plenty of innovative thinking, key to which was the close collaboration between MAX IV and the vacuum team at the ALBA Synchrotron in Barcelona. For starters, the storage-ring vacuum vessels are made from extruded, oxygen-free, silver-bearing copper tubes (22 mm inner diameter, 1 mm wall thickness). 

Copper’s superior electrical and thermal conductivities are crucial when it comes to heat dissipation and electron beam impedance. The majority of the chamber walls act as heat absorbers, directly intercepting synchrotron radiation coming from the bending magnets. The resulting heat is dissipated by cooling water flowing in channels welded on the outer side of the vacuum chambers. Copper also absorbs unwanted radiation better than aluminium, offering enhanced protection for key hardware and instrumentation in the tunnel. 
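The water-cooling balance described above is easy to sketch: at steady state, the coolant temperature rise is ΔT = P / (ṁ·c_p). The heat load and flow rate below are assumed illustrative values, not MAX IV design figures:

```python
# Hedged sketch of the cooling-water heat balance for a chamber absorber.
# Steady state: temperature rise dT = P_load / (m_dot * c_p).
# P_load and m_dot are assumed values for illustration only.

P_load = 1000.0   # assumed synchrotron-radiation heat load per chamber, W
m_dot = 0.05      # assumed cooling-water mass flow, kg/s
c_p = 4186.0      # specific heat capacity of water, J/(kg.K)

dT = P_load / (m_dot * c_p)   # coolant temperature rise, K
print(f"Water temperature rise: {dT:.1f} K")
```

A modest flow rate keeps the temperature rise to a few kelvin under a kilowatt-scale load, which is why water channels welded to the chamber wall suffice without dedicated internal absorbers along most of the circumference.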

The use of crotch absorbers for extraction of the photon beam is limited to one unit per achromat, while the section where synchrotron radiation is extracted to the beamlines is the only place where the vacuum vessels incorporate an antechamber. Here the system design is particularly challenging, with the need for additional cooling blocks to be introduced on the vacuum chambers with the highest heat loads.

Other important components of the vacuum system are the beam position monitors (BPMs), which are needed to keep the synchrotron beam on an optimised orbit. There are 10 BPMs in each of the 20 achromats, all of them decoupled thermally and mechanically from the vacuum chambers through RF-shielded bellows that also allow longitudinal expansion and small transversal movement of the chambers.

Ultimately, the space constraints imposed by the closed magnet block design – as well as the aggregate number of blocks along the ring circumference – were a big factor in the decision to implement a NEG-based pumping solution for MAX IV’s 3 GeV storage ring. It’s simply not possible to incorporate sufficient lumped ion pumps to keep the pressure inside the accelerator at the required level (below 1 × 10⁻⁹ mbar) to achieve the desired beam lifetime while minimising residual gas–beam interactions.
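The conductance argument behind that choice can be sketched with the standard molecular-flow formula for a long circular tube, C ≈ 12.1·d³/L (litres/s, with d and L in cm, for air at room temperature). The pump speed and pump spacing below are assumed values for illustration; only the 22 mm bore is the figure quoted above:

```python
# Why lumped pumps alone cannot hold UHV in a narrow-bore ring:
# in molecular flow, the conductance of a long tube caps the
# effective pumping speed seen far from the pump, no matter how
# large the pump itself is. Pump speed and spacing are assumed.

d = 2.2       # chamber inner diameter, cm (22 mm, as above)
L = 100.0     # assumed distance to the nearest lumped pump, cm

C = 12.1 * d**3 / L                       # tube conductance, litres/s
S_pump = 200.0                            # assumed ion-pump speed, litres/s
S_eff = 1.0 / (1.0 / S_pump + 1.0 / C)    # effective speed at distance L

print(f"Conductance: {C:.2f} l/s; effective pumping speed: {S_eff:.2f} l/s")
```

However large S_pump is made, S_eff can never exceed C (only about 1.3 l/s in this sketch), which is why distributed pumping by a NEG film along the chamber wall wins in this geometry.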

Operationally, it’s worth noting that a purified neon venting scheme (originally developed at CERN) has emerged as the best-practice solution for vacuum interventions and replacement or upgrade of vacuum chambers and components. As evidenced on two occasions so far (in 2018 and 2020), the benefits include significantly reduced downtime and risk when splitting magnets and reactivating the NEG coating.

How important was collaboration with CERN’s vacuum group on the NEG coatings?

Put simply, the large-scale deployment of NEG coatings as the core vacuum technology for the 3 GeV storage ring would not have been possible without the collaboration and support of CERN’s vacuum, surfaces and coatings (VSC) group. Working together, our main objective was to ensure that all the substrates used for chamber manufacturing, as well as the compact geometry of the 3 GeV storage-ring vacuum vessels, were compatible with the NEG coating process (in terms of coating adhesion, thickness, composition and activation behaviour). Key to success was the deep domain knowledge and proactive technical support of the VSC group, as well as access to CERN’s specialist facilities, including the mechanical workshop, vacuum laboratory and surface treatment plant. 

What did the manufacturing model look like for this vacuum system? 

Because of the technology and knowledge transfer from CERN to industry, it was possible for the majority of the vacuum chambers to be manufactured, cleaned, NEG-coated and tested by a single commercial supplier – in this case, FMB Feinwerk- und Messtechnik in Berlin, Germany. Lengthwise, 70% of the chambers were NEG-coated by the same vendor. Naturally, the manufacturing of all chambers had to be compatible with the NEG coating, which meant careful selection and verification of materials, joining methods (brazing) and handling. Equally important, the raw materials needed to undergo surface treatment compatible with the coating, with the final surface cleaning certified by CERN to ensure good film adhesion under all operating conditions – a potential bottleneck that was navigated thanks to excellent collaboration between the three parties involved. 

To spread the load, and to relieve the pressure on our commercial supplier ahead of system installation (which commenced in late 2014), it’s worth noting that the most geometrically complicated chambers (including vacuum vessels with a 5 mm vertical aperture antechamber) were NEG-coated at CERN. Further NEG coating support was provided through a parallel collaboration with the European Synchrotron Radiation Facility (ESRF) in Grenoble.

How did you handle the installation phase? 

This was a busy – and at times stressful – phase of the project, not least because all the vacuum chambers were being delivered “just-in-time” for final assembly in situ. This approach was possible thanks to exhaustive testing and qualification of all vacuum components prior to shipping from the commercial vendor, while extensive dialogue with the MAX IV team helped to resolve any issues arising before the vacuum components left the factory. 

Owing to the tight schedule for installation – just eight months – we initiated a collaboration with the Budker Institute of Nuclear Physics (BINP) in Russia to provide additional support. For the duration of the installation phase, we had two teams of specialists from BINP working alongside (and coordinated by) the MAX IV vacuum team. All vacuum-related processes – including assembly, testing, baking and NEG activation of each achromat (at 180 °C) – took place inside the accelerator tunnel directly above the opened lower magnet blocks of MAX IV’s multibend achromat (MBA) lattice. Our installation approach, though unconventional, yielded many advantages – not least, a reduction in the risks related to transportation of assembled vacuum sectors as well as reduced alignment issues. 

Presumably not everything went to plan through installation and acceptance?

One of the issues we encountered during the initial installation phase was a localised peeling of the NEG coating on the RF-shielded bellows assembly of several vacuum vessels. This was addressed as a matter of priority – NEG film fragments falling into the beam path is a show-stopper – and all the affected modules were replaced by the vendor in double-quick time. More broadly, the experience of the BINP staff meant difficulties with the geometry of a few chambers could also be resolved on the spot, while the just-in-time delivery of all the main vacuum components worked well, such that the installation was completed successfully and on time. After completion of several achromats, we installed straight sections in between while the RF cavities were integrated and conditioned in situ.

Magnet block, complete achromat and the vacuum installation team

How has the vacuum system performed from the commissioning phase and into regular operation? 

Bear in mind that MAX IV was the first synchrotron light source to apply NEG technology on such a scale. We were breaking new ground at the time, so there were credible concerns regarding the conditioning and long-term reliability of the NEG vacuum system – and, of course, possible effects on machine operation and performance. From commissioning into regular operations, however, it’s clear that the NEG pumping system is reliable, robust and efficient in delivering low dynamic pressure in the UHV regime.

Initial concerns around potential saturation of the NEG coating in the early stages of commissioning (when pressures are high) proved to be unfounded, while the same is true for the risk associated with peeling of the coating (and potential impacts on beam lifetime). We did address a few issues with hot-spots on the vacuum chambers during system conditioning, though again the overall impacts on machine performance were minimal. 

To sum up: the design current of 500 mA was successfully injected and stored in November 2018, proving that the vacuum system can handle the intense synchrotron radiation. After more than six years of operation, and 5000 Ah of accumulated beam dose, it is clear the vacuum system is reliable and provides sustained UHV conditions for the circulating beam – a performance, moreover, that matches or even exceeds that of conventional vacuum systems used in other storage rings.

What are the main lessons your team learned along the way through design, installation, commissioning and operation of the 3 GeV storage-ring vacuum system?

The unique parameters of the 3 GeV storage ring were delivered according to specification and per our anticipated timeline at the end of 2015. Successful project delivery was only possible by building on the collective experience and know-how of staff at MAX-lab (MAX IV’s predecessor) constructing and operating accelerators since the 1970s – and especially the lab’s “explorer mindset” for the early adoption of new ideas and enabling technologies. Equally important, the commitment and team spirit of our technical staff, reinforced by our collaborations with colleagues at ALBA, CERN, ESRF and BINP, were fundamental to the realisation of a relatively simple, efficient and compact vacuum solution.

Operationally, it’s worth adding that there are many dependencies between the chosen enabling technologies in a project as complex as the MAX IV 3 GeV storage ring. As such, it was essential for us to take a holistic view of the vacuum system from the start, with the choice of a NEG pumping solution enforcing constraints across many aspects of the design – for example, chamber geometry, substrate type, surface treatment and the need for bellows. The earlier such knowledge is gathered within the laboratory, the more it pays off during construction and operation. Suffice to say, the design and technology solutions employed by MAX IV have opened the door for other advanced light sources to navigate and build on our experience.

Roadmaps set a path to post-LHC facilities

The AWAKE plasma-wakefield experiment

In setting out a vision for the post-LHC era, the 2020 update of the European strategy for particle physics (ESPPU) emphasised the need to ramp up detector and accelerator R&D in the near and long term. To this end, the European Committee for Future Accelerators (ECFA) was asked to develop a global detector R&D roadmap, while the CERN Council invited the European Laboratory Directors Group (LDG) to oversee the development of a complementary accelerator R&D roadmap. 

After more than a year of efforts involving hundreds of people, and comprising more than 500 pages between them, both roadmaps were completed in December. In addition to putting flesh on the bones of the ESPPU vision, they provide a rich and detailed snapshot of the global state-of-the-art in detector and accelerator technologies.

Future-proof detectors

Beyond the successful completion of the high-luminosity LHC, the ESPPU identified an e⁺e⁻ Higgs factory as the highest priority future collider, and tasked CERN to undertake a feasibility study for a hadron collider operating at the highest possible energies with a Higgs factory as a possible first stage. The ESPPU also acknowledged that construction of the next generations of colliders and experiments will be challenging, especially for machines beyond a Higgs factory.

The development of cost-effective detectors that match the precision-physics potential of a Higgs factory is one of four key challenges in implementing the ESPPU vision, states the ECFA roadmap report. The second is to push the limits of radiation tolerance, rate capabilities and pile-up rejection power to meet the unprecedented requirements of future hadron-collider and fixed-target experiments, while a third is to enhance the sensitivity and affordably expand the scales of both accelerator and non-accelerator experiments searching for rare phenomena. The fourth challenge identified by ECFA is to vigorously expand the technological basis of detectors, maintain a nourishing environment for new ideas and concepts, and attract and train the next generation of instrumentation scientists.

To address these challenges, ECFA set up a roadmap panel, chaired by Phil Allport of the University of Birmingham, and defined six task forces spanning different instrumentation topics (gaseous, liquid, solid state, particle-identification and photon, quantum, calorimetry) and three cross-cutting task forces (electronics, integration, training), with the most crucial R&D themes identified for each. Tasks are mapped to concrete time scales ranging from the present to beyond 2045, driven by the earliest technically achievable experiment or facility start-dates. The resulting picture reveals the potential synergies between concurrent projects pursued by separate communities, as well as between consecutive projects, which was one of the goals of the exercise, explains ECFA chair Karl Jakobs of the University of Freiburg: “It shows the role of earlier projects as a stepping stone for later ones, opening the possibility to evaluate and to organise R&D efforts in a much broader strategic context and on longer timescales, and allowing us to suggest greater coordination,” he says.

Attracting R&D experts and recognising and sustaining their careers is one of 10 general strategic recommendations made by the report. Others include support for infrastructure and facilities, industrial partnerships, software, open science, blue-sky research, and recommendations relating to international coordination and strategic funding programmes. Guided by this roadmap, concludes the report, concerted and “resource-loaded” R&D programmes in innovative instrumentation will transform the ability of present and future generations of researchers to explore and observe nature beyond current limits.

“Ensuring the goals of future collider and non-collider experiments are not compromised by detector readiness calls now for an R&D collaboration programme, similar to that initiated in 1990 to better manage the activities then already underway for the LHC,” adds Allport. “These should be focused on addressing their unmet technology requirements through common research projects, exploiting where appropriate developments in industry and synergies with neighbouring disciplines.” 

Accelerating physics 

Although accelerator R&D is necessarily a long-term endeavour, the LDG roadmap focuses on the shorter but crucial timescale of the next five-to-ten years. It concentrates on the five key objectives identified in the ESPPU: further development of high-field superconducting magnets; advanced technologies for superconducting and normal-conducting radio-frequency (RF) structures; development and exploitation of laser/plasma-acceleration techniques; studies and developments towards future bright muon beams and muon colliders; and the advancement and exploitation of energy-recovery linear accelerator technology. Expert panels were convened to examine each of these areas, which are at different stages of maturity, and to identify the key R&D objectives.

The high-field-magnets panel supports continued and accelerated progress on both niobium-tin and high-temperature superconductor technology, placing strong emphasis on its inclusion into practical accelerator magnets and warning that final designs may have to reflect a compromise between performance and practicality. The panel for high-gradient RF structures and systems also identified work needed on basic materials and construction techniques, noting significant challenges to improve efficiency. Longer term, it flags a need for automated test, tuning and diagnostic techniques, particularly where large-scale series-production is needed. 

Energy consumption and sustainability are key considerations in defining R&D priorities and in the design of new machines

In the area of advanced plasma and laser acceleration, the panel focused on rapidly evolving plasma-wakefield and dielectric acceleration technologies. Further developments require reduced emittance and improved efficiency, the ability to accelerate positrons and the combination of accelerating stages in a realistic future collider, the panel concludes, with the goal to produce a statement about the basic feasibility of such a machine by 2026. The panel exploring muon beams and colliders also sets a date of 2026 to demonstrate that further investment is justified, focusing on a 10 TeV collider with a 3 TeV intermediate-scale facility targeted for the 2040s. Finally, having considered several medium-scale projects under way worldwide, the energy-recovery linacs panel identifies reaching the 10 MW power level as the next practical step, and states that future sustainability rests on developing 4.4 K superconducting RF technology for a next-generation e⁺e⁻ collider.

In addition to the technical challenges, states the report, new investment will be needed to support R&D and test facilities. Energy consumption and sustainability are explicitly identified as key considerations in defining R&D priorities and in the design of new machines. Having identified objectives, each panel set out a detailed work plan covering the period to the next ESPPU, with options for a number of different levels of investment. The aim is to allow the R&D to be pushed as rapidly as needed, but in balance with other priorities for the field.

Like its detector R&D counterpart, the report concludes with 10 concrete recommendations. These include the attraction, training and career management of researchers, observations on the implementation and governance of the programme, environmental sustainability, cooperation between European and international laboratories, and continuity of funding. 

“The accelerator R&D roadmap represents the collective view of the accelerator and particle-physics communities on the route to machines beyond the Higgs factories,” says Dave Newbold, LDG chair and director of particle physics for STFC in the UK. “We now need to move swiftly forwards with an ambitious, cooperative and international R&D programme – the potential for future scientific discoveries depends on it.”

Bernhard Spaan 1960–2021

Bernhard Spaan

Bernhard Spaan, an exceptional particle physicist and a wonderful colleague, unexpectedly passed away on 9 December, much too early at the age of 61.

Bernhard studied physics at the University of Dortmund, completing his diploma thesis in 1985 on the ARGUS experiment at DESY’s electron–positron collider DORIS. Together with CLEO at Cornell, ARGUS was the first experiment dedicated to heavy-flavour physics, which became the central theme of Bernhard’s research for the following 36 years. Progressing from ARGUS and CLEO to the higher-statistics experiments BaBar and ultimately LHCb, to which he made early contributions, he was one of the pioneering leaders in the next generation of heavy-flavour experiments at both electron–positron and hadron colliders.

While working on tau–lepton decays at ARGUS for his doctorate, Bernhard led a study of tau decays to five charged pions and a tau neutrino, which resulted in the world’s best upper limit on the tau-neutrino mass at the time. He also pioneered a new method of reconstructing the pseudo-mass of the tau lepton by approximating the tau direction with the direction of the hadronic system. This method yielded a new measurement of the tau-lepton mass, an important ingredient in resolving the long-standing deviation from lepton universality derived from measurements of the tau lifetime, mass and leptonic branching fraction.

In 1993 Bernhard joined McGill University in Montreal, where he contributed to CLEO operation, data-taking and analysis, and was brought into contact with the formative stages of an asymmetric electron–positron B-factory at SLAC. He was an author of the BaBar letter of intent in 1994 and remained a leading member of the collaboration for the two following decades.

Bernhard saw the unique potential of a dedicated B experiment at the LHC and joined the LHCb collaboration

In 1996 Bernhard started a professorship at Dresden where, together with Klaus Schubert, he built a strong German BaBar participation including involvement in the construction and operation of the calorimeter. At that time, BaBar was pioneering the use of distributed computing resources for data-processing. As one of the proponents of this approach, Bernhard played a crucial role in the German contribution via the computing centre at Karlsruhe, later “GridKa”. Building on the success of the electron–positron B-factories, Bernhard saw the unique potential of a dedicated B experiment at the LHC and joined the LHCb collaboration in 1998.

Bernhard’s scientific journey came full circle when he accepted a professorship at Dortmund University in 2004, which he used to significantly grow his LHCb participation. The Dortmund group is one of LHCb’s largest, with a long list of graduate students and main research topics including the determination of the CKM angles β and γ governing CP violation in rare B decays. In parallel with LHC Run 1 and 2 data-taking, Bernhard investigated the possibility of using scintillating fibres for a novel tracking detector capable of operating at much larger luminosities. In all phases of the “SciFi” detector, which was recently installed ahead of LHC Run 3, he supported the project with his ideas, his energy and the commitment of his group.

Bernhard was an outstanding experimental physicist whose many contributions shaped the field of experimental heavy-flavour physics. He was also a great communicator. His ability to resolve conflicts and to find compromises brought many additional tasks to Bernhard, whether as dean of the Dortmund faculty, chair of the national committee for particle physics, member of R-ECFA or chair of the LHCb collaboration board. When help was needed, Bernhard never said “no”.

We have lost a tremendous colleague and a dear friend who will be sorely missed not only by us, but the wider field.

Exotic flavours at the FCC

Half a century after its construction, the Standard Model of particle physics (SM) still reigns supreme as the most accurate mathematical description of the visible matter in the universe and its interactions. It was placed upon its throne by the many precise measurements made at the Large Electron Positron collider (LEP), in particular, and its rule was confirmed by the discovery of the Higgs boson at the Large Hadron Collider (LHC). CERN’s LEP/LHC success story, in which a hadron collider provided direct evidence for a new particle (the Higgs boson) whose properties were already partially established at a lepton collider, can serve as a blueprint for physics discoveries at a proposed Future Circular Collider (FCC) operating at CERN after the end of the LHC. 

Back in the late 1970s and early 1980s when the LEP/LHC programme was first proposed, the W and Z bosons mediating the weak interactions had not yet been observed, the top quark was considered a possible discovery, and the Higgs boson was regarded as a distant speculation. Precise studies of the W and Z, which were discovered in 1983 at the SPS proton–antiproton collider at CERN, were key items in LEP’s physics programme along with direct searches for the top quark, the Higgs boson and possible unknown particles. Even though the LEP experiments did not reveal any new particles beyond the W and Z, the unprecedented precision of its measurements revealed indirect effects (via quantum fluctuations) of the top and the Higgs, thereby providing indirect evidence for the SM mechanism of electroweak symmetry breaking. When the top quark was discovered at the Tevatron proton–antiproton collider at Fermilab in 1995, and the Higgs boson at the LHC in 2012, their masses were within the ranges indicated by precision measurements made at lepton colliders. 

Layout of the Future Circular Collider at CERN

Nowadays, the hope is that the proposed FCC programme – comprising an electron–positron collider followed by a high-energy proton–proton collider in the same ~100 km tunnel – will repeat the LEP/LHC success story at an even higher level of precision and energy. The e+e− FCC stage would reproduce the entire LEP sample of Z bosons within a couple of minutes, yielding around 5 × 10¹² Z bosons after four years of operation. In addition to allowing an incredibly accurate determination of the Z-boson’s properties, Z decays would also provide unprecedented samples of bottom quarks (1.5 × 10¹²) and tau leptons (3 × 10¹¹). Potential increases in the FCC-ee centre-of-mass energy would also produce unparalleled numbers of W+W− and top–antitop pairs close to their respective thresholds, which are important inputs to the global electroweak fit, as well as more Higgs bosons than promised by other proposed e+e− Higgs factories.

Probing beyond the Standard Model

Analyses of FCC-ee data, combined with results from previous experiments at the LHC and elsewhere, would not only push our understanding of the SM to the next level but would also provide powerful indirect probes of possible physics beyond the SM, with sensitivities to masses an order of magnitude greater than those of the LHC. A possible subsequent proton–proton FCC stage (FCC-hh) operating at a centre-of-mass energy of at least 100 TeV would then provide unequalled opportunities to discover this new physics directly, just as the LHC made possible the discovery of the Higgs boson following the indirect hints from high-precision LEP data. Whereas the combination of LEP and the LHC explores the TeV scale both indirectly and directly, the combination of FCC-ee and FCC-hh will carry the search for new physics to 30 TeV and beyond. 

The e+e− stage of FCC would reproduce the entire LEP sample of Z bosons within a couple of minutes

However, for this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach. While the existence of dark matter and neutrino masses already prove that the SM cannot be complete (and there is no shortage of theoretical ideas as to what extensions of the SM could account for them), these observations can be explained by new particles within a very wide mass range – possibly well beyond the reach of FCC-hh. Fortunately, intriguing hints for new physics in the flavour sector have accumulated in recent years that point towards beyond-the-SM physics that should be accessible to FCC.

B-decay anomalies

Within the SM, the charged leptons – electrons, muons and taus – all have very similar properties. They interact with the photon as well as the W and Z bosons in the same way, and differ only in their masses, which in the SM are represented as Yukawa couplings to the Higgs boson. It is therefore said that the SM (approximately) respects lepton-flavour universality (LFU), despite the seemingly large differences in charged-lepton lifetimes originating from phase-space effects. 

Flavour observables (i.e. processes resulting from rare transitions among the different generations of quarks and leptons), and observables measuring LFU in particular, are especially promising tests of the SM because they are strongly suppressed in the SM and thus very sensitive to new physics. In recent years, a coherent pattern of anomalies, all pointing towards the violation of LFU, has emerged. Two classes of fundamental processes giving rise to decays of B mesons – b → sℓ+ℓ− and b → cτν – show deviations from the SM predictions. 

In the flavour-changing neutral-current process b → sℓ+ℓ−, a heavy bottom quark undergoes a transition to a strange quark and a pair of oppositely charged leptons, which can be either electrons or muons. The ratios RK = Br(B → Kμ+μ−)/Br(B → Ke+e−) and RK* = Br(B → K*μ+μ−)/Br(B → K*e+e−), measured most precisely by the LHCb collaboration, are particularly interesting because their SM predictions are very clean. Since the muon and electron masses are negligible compared to the B-meson mass, the ratio of muon to electron decays should be close to unity according to the SM. Intriguingly, however, LHCb has observed values significantly lower than one, and recently reported first evidence for LFU violation in RK. These hints of new physics are supported by measurements of the angular observable P5′ in B0 → K*0μ+μ− decays and the rate of Bs → φμ+μ− decays. Importantly, all these observations can potentially be explained by the same new-physics interactions and are consistent with all other available measurements of processes involving b → sℓ+ℓ− transitions. In fact, global fits of all available b → sℓ+ℓ− data find a preference for new physics over the SM hypothesis, hinting at a possible discovery.
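To make the cleanness argument concrete, the ratio can be written schematically; this is a sketch, with the numerical suppression scale estimated from the standard muon and B-meson masses:

```latex
R_K = \frac{\mathrm{Br}(B^+ \to K^+ \mu^+ \mu^-)}{\mathrm{Br}(B^+ \to K^+ e^+ e^-)}
\;\overset{\mathrm{SM}}{=}\; 1 + \mathcal{O}\!\left(\frac{m_\mu^2}{m_B^2}\right),
\qquad
\frac{m_\mu^2}{m_B^2} \approx \left(\frac{0.106~\mathrm{GeV}}{5.28~\mathrm{GeV}}\right)^{\!2} \approx 4\times10^{-4}
```

The hadronic form factors cancel in the ratio, which is why a measured value significantly below one is so striking.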

Anomalous correlations

The second class of anomalies involves the charged-current process b → cτν, which is already mediated at tree level in the SM. The corresponding B-meson decays therefore have much higher probabilities to occur and thus larger branching ratios. However, the non-negligible tau mass leads to imperfect cancellations of the form factors in the ratio to electron or muon final states, and thus the resulting SM prediction is not as precise as those for RK and RK*. The most prominent examples of observables involving b → cτν transitions are the ratios RD = Br(B → Dτντ)/Br(B → Dℓν) and RD* = Br(B → D*τντ)/Br(B → D*ℓν). Here, the measurements of Belle, BaBar and LHCb consistently point above the SM predictions, resulting in a combined tension of 3σ. Importantly, as these processes happen quite frequently in the SM, a significant new-physics effect would be required to account for the corresponding anomaly. 

With the FCC-ee capable of producing 1.5 × 10¹² b quarks, clearly the b anomalies could be further verified within a short period of running, assuming that LHCb, Belle II and possibly other experiments do confirm them. The large data sample would also allow physicists to study complementary modes that bear upon LFU but are more difficult for LHCb to measure, such as other “R” measurements involving neutral kaons. These measurements would be invaluable for pinning down the mechanism responsible for any violation of lepton universality.

Other possible anomalies

The B anomalies are just one exciting avenue that a “Tera-Z factory” like FCC-ee could explore further. The anomalous magnetic moment of the muon, aμ, can also be viewed as an exciting hint for new physics in the lepton sector. The Dirac equation predicts a gyromagnetic factor of exactly two, but quantum fluctuations push the physical value slightly higher. The very high precision of both the calculation and the measurement therefore make aμ a powerful observable with which to search for new physics. A tension between the measured and predicted values of aμ has persisted since Brookhaven published its final result in 2006, and was recently strengthened by the Muon g−2 experiment at Fermilab, yielding an overall significance of 4.2σ when combined with the earlier Brookhaven data. 
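For reference, the anomaly is defined relative to the Dirac value of the gyromagnetic factor; the leading quantum correction is Schwinger’s one-loop QED term:

```latex
a_\mu \equiv \frac{g_\mu - 2}{2}, \qquad
g_\mu^{\mathrm{Dirac}} = 2 \;\Rightarrow\; a_\mu^{\mathrm{Dirac}} = 0, \qquad
a_\mu^{\mathrm{QED,\,1\ loop}} = \frac{\alpha}{2\pi} \approx 1.16\times10^{-3}
```

The tension discussed in the text concerns far smaller contributions, at the level of a few parts in 10⁹ of aμ itself.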

Effects of new physics on precision electroweak measurements

Various models have been proposed to explain the g−2 anomaly. They include leptoquarks (scalar or vector particles, arising in models with extended gauge groups, that carry colour and couple directly to a quark and a lepton) and supersymmetry. Such leptoquarks could have masses anywhere between the lower LHC limit of 1.5 TeV and about 10 TeV, thus being within the reach of FCC-hh, whereas a supersymmetric explanation would require a couple of new particles with masses of a few hundred GeV, possibly even within reach of FCC-ee. Importantly, any explanation involving heavy new particles would also lead to effects in Z → μ+μ−, as both observables are sensitive to interactions with sizeable coupling strength to muons. FCC-ee’s large Z-boson sample could therefore reveal deviations from the SM predictions at the suggested level. Leptoquarks could also modify the SM prediction for the H → μ+μ− decay, which will be measured very accurately at FCC-hh (see “Anomalous correlations” figure).

CKM under scrutiny

As the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes flavour violation in the quark sector, is unitary, the sum of the squares of the elements in each row and in each column must add up to unity. This unitarity relation can be used to check the consistency of different determinations of CKM elements (within the SM) and thus also to search for new physics. Interestingly, a deficit in the first-row unitarity relation exists at the 3σ level. This can be traced back to the fact that the value of the element Vud, extracted from super-allowed beta decays, is not compatible with the value of Vus, determined from kaon and tau decays, given CKM unitarity. Notably, this deviation can also be interpreted as a sign of LFU violation, since beta decays involve electrons while the most precise determination of Vus comes from decays with final-state muons. 
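The first-row deficit can be reproduced with a few lines of arithmetic. The element magnitudes below are rough, PDG-style illustrative values, not the precise fitted inputs behind the quoted 3σ tension:

```python
# Illustrative first-row CKM unitarity check.
# |Vud|^2 + |Vus|^2 + |Vub|^2 should equal 1 if the SM CKM matrix
# is unitary; a persistent deficit hints at new physics.
V_ud = 0.97370  # from super-allowed beta decays (illustrative)
V_us = 0.2245   # from kaon and tau decays (illustrative)
V_ub = 0.00382  # tiny; essentially negligible in this sum

first_row = V_ud**2 + V_us**2 + V_ub**2
deficit = 1.0 - first_row

print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {first_row:.5f}")
print(f"unitarity deficit           = {deficit:.5f}")
```

With these inputs the sum comes out near 0.9985, i.e. a deficit of roughly 1.5 per mille, which is the sub-per-mille-to-per-mille scale of new-physics effect discussed in the following paragraph.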

Here, a new-physics effect at a relative sub-per-mille level compared to the SM would suffice to explain the anomaly. This could be achieved by a heavy new lepton or a massive gauge boson affecting the determination of the Fermi constant that parametrises the strength of the weak interactions. As the Fermi constant can also be determined from the global electroweak fit, for which Z decays are crucial inputs, FCC-ee would again be the perfect machine to investigate this anomaly, as it could improve the precision by a large factor (see “High precision” figure). Indeed, the Fermi constant may be determined directly to one part in 10⁵ from the enormous sample (> 10¹¹) of Z decays to tau leptons. 

For this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach

FCC-ee’s extraordinarily large dataset will also enable scrutiny of a long-standing anomaly in the forward–backward asymmetry of Z → bb decays. The LEP measurement of AFB, which arises because the Z boson couples with different strengths to left- and right-handed chiral states, lies 2–3σ below the SM prediction. Although not significant, this anomaly may also be linked to new physics entering in b → s transitions.

Finally, a possible difference in the decay asymmetries of B → D*μν vs B → D*eν was recently reported by an analysis of Belle data. As in the case of RK, the SM prediction that the difference between the muon and the electron asymmetries should be zero is very clean and, like RD and RD*, this observable points towards new physics in b → c transitions and could be related via leptoquarks to g−2 of the muon. Once more, the great number of b quarks to be produced at FCC-ee, together with the clean environment of a lepton collider, would allow this observable to be determined with unprecedented accuracy.

Since all these anomalies point, to varying degrees, towards the existence of LFU-violating new physics, the question arises of whether a common explanation exists. There are several particularly interesting possibilities, including leptoquarks, new scalars and fermions (as arise in supersymmetric extensions of the SM), new vector bosons (W′ and Z′) and new heavy fermions. In the overwhelming majority of such scenarios, a direct discovery of a new particle is possible at FCC-hh. For example, it could discover leptoquarks with masses up to 10 TeV and Z′ bosons with masses up to 40 TeV, covering most of the mass ranges expected in such models.

Anomalies point to possible violations of lepton-flavour universality

A return to the Z pole and beyond

The LEP programme was extremely successful in determining the mechanism of electroweak symmetry breaking, in particular by measuring the properties and decays of the Z boson very precisely from a 17 million-strong sample. This allowed for a prediction of a range for the Higgs mass within which it was later discovered at the LHC. The flavour anomalies could lead to a similar situation in the near future. In this case, the roughly 5 × 10¹² Z bosons that the FCC-ee is designed to collect would not only be able to test the effects of new particles in precision electroweak observables, but also, via Z decays into bottom quarks and tau leptons, provide a unique testing ground for flavour physics. As noted earlier, FCC-ee’s Z-pole run is also envisaged to be the first step in a broader electroweak programme encompassing large statistics at the WW and top-pair thresholds, in addition to its key role as a precision Higgs factory. 

Looking much further ahead to the energy frontier, FCC-hh would be able, in the overwhelming number of scenarios motivated by the flavour anomalies, to directly discover a new particle. Furthermore, FCC-hh would allow for a precise determination of rare Higgs decays and the Higgs potential, probing new-physics effects related to this sector, such as leptoquark explanations of the anomalous magnetic moment of the muon.

Pending the outcome of the FCC feasibility study recommended by the 2020 update of the European strategy for particle physics, the hope that the LEP/LHC success story could be repeated by FCC-ee/FCC-hh is well justified. While FCC-ee could be used to indirectly pin down the parameters of the model(s) of new physics explaining the flavour anomalies via precision electroweak and flavour measurements, FCC-hh would be capable of searching for the predicted particles directly. 

How the Sun and stars shine

Staring at the Sun

Each second, fusion reactions in the Sun’s core fling approximately 60 billion neutrinos onto every square centimetre of the Earth. In the late 1990s, the Borexino experiment at Gran Sasso National Laboratory in Italy was conceived to measure these neutrinos right down to a few tens of keV, where the bulk of the flux lies. The detector’s name means “little Borex” and refers to an earlier idea for a large experiment with a boron-loaded liquid scintillator, which was shelved in favour of the present, smaller and more ambitious detector. Rather than studying rare but high-energy 8B neutrinos from a little-followed branch of the proton–proton (pp) fusion chain, Borexino would target the far more numerous but lower energy neutrinos produced in the Sun by electron captures on 7Be.

The fusion reactions generating the Sun’s energy

Three decades after its conception, Borexino has far exceeded this goal thanks to the exceptional radiopurity of the experimental apparatus (see “Detector design” panel). Special care taken in construction and commissioning has achieved a radiopurity about three orders of magnitude better than predicted, and 10 to 12 orders of magnitude below natural radioactivity. This has allowed the collaboration to probe the entire solar-neutrino spectrum, including not only the pp chain, but also the carbon–nitrogen–oxygen (CNO) cycle. This mechanism plays a minor role in the Sun but becomes important for more massive stars, dominating the energy production and the production of elements heavier than helium in the universe at large.

The heart of the Sun

The pp-chain generates 99% of the energy in the Sun: it begins when two protons fuse to produce a deuteron and an electron neutrino – the so-called pp neutrino (see “Chain and cycle” figure). Subsequent reactions produce light elements, such as 3He, 4He, 7Be, 7Li, 8B and more electron neutrinos. In Borexino, the sensitivity to pp neutrinos depends on the amount of 14C in the liquid scintillator: with an end-point energy of 0.156 MeV compared with a maximum visible energy for pp neutrinos of 0.264 MeV, the 14C → 14N + β− + ν̄ beta decay sets the detection threshold and the feasibility of probing pp-neutrinos. The Borexino scintillator was therefore made using petroleum from very old and deep geological layers, to ensure a low content of 14C.

Detector design

Like many particle-physics detectors, Borexino has an onion-like design. The innermost layers have the highest radiopurity. The detector’s active core consists of 278 tonnes of pseudocumene (C9H12) scintillator. Into this is dissolved 2,5-diphenyloxazole (PPO) at a concentration of 1.5 grams per litre, which shifts the scintillation light to 400 nm, where the sensitivity of the photomultipliers peaks. The scintillator is contained within a 125 μm-thick nylon inner vessel (IV) with a 4.5 m radius – made thin to reduce radiation emission from the nylon. In addition, the IV stops radon diffusing towards the core of the detector. 

Borexino design

The IV is contained within a 7 m-radius stainless-steel sphere (SSS) that supports 2212 photomultipliers (PMTs) and contains 1000 tonnes of pseudocumene as high-radio-purity shielding liquid against radioactivity from PMTs and the SSS itself. Between the SSS and the IV, a second nylon balloon acts as a barrier preventing radon and its progeny from reaching the scintillator. The SSS is contained in a 2400-tonne tank of highly purified water which, together with Borexino’s underground location, shields the detector from environmental radioactivity. The tank boasts a muon detector to tag particles crossing the detector. 

When a neutrino interacts in the target volume, energy deposited by the decelerating electron is registered by a handful of PMTs. The neutrino’s energy can be obtained from the total charge, and the hit-time distribution is used to infer the location of the event’s vertex. Recoiling electrons are used to tag electron neutrinos, and the combination of a positron annihilation and a neutron capture on hydrogen (an inverse beta decay) are used to tag electron antineutrinos.

Because individual solar-neutrino events cannot be distinguished from background events, the greatest challenge has been the reduction of natural radioactivity to unprecedented levels. In the early 1990s, Borexino developed innovative techniques such as under-vacuum distillation, water extraction, ultrafiltration and sparging with ultra-high-radiopurity nitrogen to reduce radioactive impurities in the scintillator to 10⁻¹⁰ Bq/kg or better. An initial detector called the Counting Test Facility was developed as a means to demonstrate such claims, publishing results for the key uranium, thorium and krypton backgrounds in 1995. Full data taking at Borexino began in 2007. 

Since data-taking began in 2007, Borexino has measured, for the first time, all the individual fluxes produced in the pp-chain. In 2014 the collaboration made the first definitive observation of pp neutrinos, using a comparison with the predicted energy spectrum. In 2018 the collaboration performed, with the same apparatus, a measurement of all the pp-chain components (pp, 7Be, pep and 8B neutrinos), demonstrating the large-scale energy-generation mechanism in the Sun for the first time (see “Energy spectrum” figure). This spectral fit allowed the collaboration to directly determine the ratio between the interaction rate of 3He + 3He fusions and that of 3He + 4He fusions – a crucial parameter for characterising the pp chain and its energy production.

The simultaneous measurement of pp-chain neutrino fluxes also gave Borexino a unique window onto the famous “vacuum–matter” transition, whereby coherent virtual W-boson interactions with electrons modify neutrino-oscillation probabilities as neutrinos propagate through matter, enhancing the oscillation probability as a function of energy. In 2018 Borexino measured the solar electron–neutrino survival probability, Pee, in the energy range from a few tens of keV up to 15 MeV (see “Survival probability” figure). This was the first direct observation of the transition from the low-energy vacuum regime (Pee ≈ 0.55) to the higher-energy matter regime, where neutrino propagation is dominantly affected by the solar interior (Pee ≈ 0.32). The transition was measured by Borexino at the 98% confidence level.
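The two limits quoted above follow directly from the standard two-flavour expressions; neglecting θ13 corrections and taking sin²θ12 ≈ 0.31 as an illustrative value:

```latex
P_{ee}^{\mathrm{vac}} \simeq 1 - \tfrac{1}{2}\sin^2 2\theta_{12} \approx 0.57,
\qquad
P_{ee}^{\mathrm{matter}} \simeq \sin^2\theta_{12} \approx 0.31
```

These approximate values bracket the measured plateaus of roughly 0.55 and 0.32 at low and high energies, respectively.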

CNO cycle

A different way to burn hydrogen, the CNO cycle, was hypothesised independently by Carl Friedrich von Weizsäcker and Hans Albrecht Bethe between 1937 and 1939. Here, 12C acts as a catalyst, and electron neutrinos are produced by the beta decay of 13N and 15O, with a small contribution from 17F. The maximum energy of CNO neutrinos is about 1.7 MeV. In addition to making an important contribution to the production of elements heavier than helium, this cycle is important for the nucleosynthesis of 16O and 17O. In massive stars it also develops in more complex reactions producing 18F, 18O, 19F, 18Ne and 20Ne.

Solar neutrinos and residual backgrounds

The sensitivity to CNO neutrinos in Borexino mainly comes from events in the energy range from 0.8 to 1 MeV. In this region, the dominant background comes from 210Bi, which is produced by the slow radioactive decay chain 210Pb (22 y) → 210Bi + β + ν̄, followed by 210Bi (5 d) → 210Po + β + ν̄ and 210Po (138 d) → 206Pb (stable) + α. The 210Bi activity can be inferred from 210Po, which can be efficiently tagged using pulse-shape discrimination. However, convective currents in the liquid scintillator bring into the central fiducial mass 210Po produced by 210Pb, which is most likely embedded in the nylon containment vessel. In order to reduce convection currents, a passive insulation system and a temperature control system were installed in 2016, significantly reducing the effect of seasonal temperature variations. 

Thanks to these and other efforts, in 2020 Borexino rejected the null hypothesis of no CNO reactions by more than five standard deviations, providing the first direct proof of the process. The energy production as a fraction of the solar luminosity was measured to be 1.0 (+0.4/−0.3)%, in agreement with the Solar Standard Model (SSM) prediction of roughly 0.6 ± 0.1% (which assumes the solar surface has a high metallicity – a topic discussed in more detail later). Given that luminosity scales as M⁴ and number density as M⁻²·⁵ for stars between one and 10 solar masses, the CNO cycle is thought to be the most important source of energy in massive hydrogen-burning stars. Borexino has provided the first experimental evidence for this hypothesis.
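The scaling argument can be sketched in a few lines. Weighting each stellar mass by L(M)·n(M), as a rough proxy for the total energy output of stars of that mass, is an illustrative assumption, not a full population-synthesis calculation:

```python
# Sketch of the mass scalings quoted above (solar units):
# luminosity L ~ M^4, number density of stars n ~ M^-2.5.
# The product L*n, a crude proxy for the energy output of stars
# of a given mass, grows as M^1.5, so massive (CNO-burning)
# stars dominate the hydrogen-burning energy budget.
def luminosity(m):
    return m**4            # L/L_sun for M/M_sun = m

def number_density(m):
    return m**-2.5         # relative abundance of stars of mass m

for m in (1, 2, 5, 10):
    weight = luminosity(m) * number_density(m)   # scales as m**1.5
    print(f"M = {m:2d} M_sun: L = {luminosity(m):6g} L_sun, L*n ~ {weight:.1f}")
```

A 10-solar-mass star is 10,000 times more luminous than the Sun on this scaling, and even after the steep fall-off in stellar numbers its mass bin carries roughly 30 times more weight than the Sun’s.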

Probing solar metallicity using CNO neutrinos is of the utmost importance, and Borexino is hard at work on the problem

But, returning to the confines of our solar system, it’s important to remember that the SSM is not a closed book. Borexino’s results are thus far in agreement with its assumption of a protostar that had a uniform composition throughout its entire volume when fusion began (“zero-age homogeneity”). However, thanks to the ability of neutrinos to peek into the heart of the Sun, the experiment now has the potential to explore this assumption and weigh in on one of the most intriguing controversies in astrophysics.

The solar-abundance controversy

As stars evolve, the distribution of elements within them changes thanks to fusion reactions and convection currents. But the composition of the surface is thought to remain very nearly the same as that of the protostar, as it is not hot enough there for fusion to occur. Measuring the abundance of elements on a star’s surface therefore gives an idea of the protostar’s composition and is a powerful way to constrain the SSM. 

Solar-neutrino measurements

Currently, the best method to determine the surface abundance of elements heavier than helium (“metallicity”) uses measurements of photo-absorption lines. Since 2005, improved hydrodynamic calculations (which are needed to model atomic-line formation, and the radiative and collisional processes that contribute to excitation and ionisation) have indicated a much lower surface metallicity than was previously considered. However, helioseismology observables differ by roughly five standard deviations from SSM predictions that use the new surface metallicity to infer the protostar’s composition, when the sound–speed profile, surface–helium abundance and the depth of the convective envelope are taken into account. Helioseismology implies that the zero-age Sun’s core was richer in metallicity than the present surface composition, suggesting a violation of zero-age homogeneity and a break with the SSM. This is the solar-abundance controversy.

One possible explanation is that a late “dilution” of the Sun’s convective zone occurred due to a deposition of elements during the formation of the solar system. Were there to have been an accretion of dust and gas from the proto-planetary disc onto the central star during the evolution of the star–planet system, this could have changed the initial metallicity of the surface of the Sun – a hypothesis backed up by recent simulations that show that a metal-poor accretion could produce the present surface metallicity. 

As they are an excellent probe of metallicity, CNO neutrinos have an important role to play in settling the solar-abundance controversy. If Borexino were to measure the Sun’s present core metallicity, and by running simulations backwards prove that its surface metallicity must have been diluted right from its birth, this would violate one of the basic assumptions of the SSM. Probing solar metallicity using CNO neutrinos is, therefore, of the utmost importance, and Borexino is hard at work on the problem. Initial results favour the high-metallicity hypothesis with a significance of 2.1 standard deviations – a tentative first hint from Borexino that zero-age homogeneity may indeed be false.

The ancient question of why and how the Sun and stars shine finally has a comprehensive answer from Borexino, which has succeeded thanks to the detector’s extreme and unprecedented radio-purity – the hard work of hundreds of researchers over almost three decades.

Linacs to narrow radiotherapy gap

Number of people in African countries who have access to radiotherapy facilities

By 2040, the annual global incidence of cancer is expected to rise by more than 42% from 19.3 million to 27.5 million cases, corresponding to approximately 16.3 million deaths. Shockingly, some 70% of these new cases will be in low- and middle-income countries (LMICs), which lack the healthcare programmes required to effectively manage their cancer burden. While it is estimated that about half of all cancer patients would benefit from radiotherapy (RT) for treatment, there is a significant shortage of RT machines outside high-income countries.

More than 10,000 electron linear accelerators (linacs) are currently used worldwide to treat patients with cancer. But only 10% of patients in low-income and 40% in middle-income countries who need RT have access to it. Patients face long waiting times, are forced to travel to neighbouring regions or face insurmountable costs to access treatment. In Africa alone, 27 out of 55 countries have no linac-based RT facilities. In those that do, the ratio of machines to people ranges from one per 423,000 people in Mauritius to one per almost five million in Kenya and one per more than 100 million in Ethiopia (see “Out of balance” image). In high-income countries such as the US, Switzerland, Canada and the UK, by contrast, the ratio is one RT machine per 85,000, 102,000, 127,000 and 187,000 people, respectively. To draw another stark comparison, Africa has approximately 380 linacs for a population of 1.2 billion, while the US has almost 4000 linacs for a population of 331 million.
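The scale of the disparity can be made concrete with a little arithmetic. The sketch below (illustrative only, using the headline numbers quoted above) computes the people-per-machine ratios for Africa and the US:

```python
def people_per_machine(population: int, linacs: int) -> int:
    """Return the (rounded) number of people served by each RT machine."""
    return round(population / linacs)

# Headline figures quoted in the text above.
africa = people_per_machine(1_200_000_000, 380)  # Africa: ~380 linacs, 1.2 bn people
usa = people_per_machine(331_000_000, 4000)      # US: ~4000 linacs, 331 m people

print(f"Africa: one linac per {africa:,} people")
print(f"US:     one linac per {usa:,} people")
print(f"Africa is underserved by a factor of ~{africa // usa}")
```

On these figures, each African linac serves roughly 38 times as many people as its US counterpart.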

Unique challenges

It is estimated that to meet the demand for RT in LMICs over the next two to three decades, the current projected need of 5000 RT machines is likely to grow to more than 12,000. To put these figures into perspective, Varian, the market leader in RT machines, has a current worldwide installation base of 8496 linacs. While many LMICs provide RT using cobalt-60 machines, linacs offer better dose-delivery parameters and better treatment without the environmental and potential terrorism risks associated with cobalt-60 sources. However, since linacs are more complex and labour-intensive to operate and maintain, their costs are significantly higher than those of cobalt-60 machines, both in terms of initial capital outlay and annual service contracts. These differences pose unique challenges in LMICs, where macro- and micro-economic conditions can influence the ability of these countries to provide linac-based RT. 

The difficulties of operating electron guns

In November 2016 CERN hosted a first-of-its-kind workshop, sponsored by the International Cancer Expert Corps (ICEC), to discuss the design characteristics of RT linacs (see “Linac essentials” image) for the challenging environments of LMICs. Leading experts were invited from international organisations, government agencies, research institutes, universities and hospitals, and companies that produce equipment for conventional X-ray and particle therapy. The following October, CERN hosted a second workshop titled “Innovative, robust and affordable medical linear accelerators for challenging environments”, co-sponsored by the ICEC and the UK’s Science and Technology Facilities Council, STFC. Additional workshops have taken place in March 2018, hosted by STFC in collaboration with CERN and the ICEC, and in March 2019, hosted by STFC in Gaborone, Botswana (see “Healthy vision” image). These and other efforts have identified substantial opportunities for scientific and technical advancements in the design of the linac and the overall RT system for use in LMICs. In 2019, the ICEC, CERN, STFC and Lancaster University entered into a formal collaboration agreement to continue concerted efforts to develop this RT system. 

The idea of novel medical linacs is an excellent example of the impact of fundamental research on wider society

In June 2020, STFC funded a project called ITAR (Innovative Technologies towards building Affordable and equitable global Radiotherapy capacity) in partnership with the ICEC, CERN, Lancaster University, the University of Oxford and Swansea University. ITAR’s first phase was aimed at defining the persistent shortfalls in basic infrastructure, equipment and specialist workforce that remain barriers to effective RT delivery in LMICs. Clearly, a linac suitable for these conditions needs to be low-cost, robust and easy to maintain. Before specifying a detailed design, however, it was first essential to assess the challenges and difficulties RT facilities face in LMICs and in other demanding environments. In June 2021 the ITAR team published an expansive study of RT facilities in 28 African countries, comparing them with western hospitals to assess, quantitatively and qualitatively, variables in several domains (see “Downtime” figure). The survey builds on a related 2018 study on the availability of RT services and barriers to providing such services in Botswana and Nigeria, which looked at the equipment maintenance logs of linacs in those countries and selected facilities in the UK.

Surveying the field

The absence of detailed data regarding linac downtime and failure modes makes it difficult to determine the exact impact of the LMIC environment on the performance of current technology. The ongoing ITAR design development and prototyping process identified a need for more information on equipment failures, maintenance and service shortcomings, personnel, training and country-specific healthcare challenges from a much larger representation of LMICs. A further-reaching ITAR survey obtained relevant information for defining design parameters and technological choices based on issues raised at the workshops. These include well-recognised factors such as ease and reliability of operation, machine self-diagnostics with a prominent display of impending or actual faults, ease of maintenance and repair, insensitivity to power interruptions, and low power requirements with consequently reduced heat production.

A standard medical linac

Based on the information from its surveys, ITAR produced a detailed specification and conceptual design for an RT linac that requires less maintenance, has fewer failures and offers fast repair. Over the next three years, under the umbrella of a larger project called STELLA (Smart Technologies to Extend Lives with Linear Accelerators) launched in June 2020, the project will progress to a prototype development phase at STFC’s Daresbury Laboratory. 

The design of the electron gun has been optimised to increase beam capture. This has the dual advantage of reducing both the peak current required from the gun to deliver the requisite dose and “back bombardment”. It also allows the electron gun’s cathode to be replaced by trained personnel (current designs require replacement of the full electron gun or even the full linac). Beam capture is limited in medical linacs because the pulses from the electron gun last much longer than the radiofrequency (RF) period, meaning electrons are injected at all RF phases. Some phases cause the bunch to be accelerated, while others result in electrons being reflected back to the cathode. In typical linacs, less than 50% of electrons reach the target, and many arrive with reduced energies. In high-energy accelerators, velocity bunching can be used to compress the bunch; in medical linacs, however, space is limited and the energy gain per cell is often well in excess of the beam energy. To allow velocity bunching in a medical linac, the first cell needs to operate at a low gradient, so that less space is required for bunching (the average beam velocity is much lower) and the deceleration is less than the beam energy. By adjusting the lengths of the first and second cells, the decelerated electrons can be re-accelerated on the next RF cycle and synchronised with the accelerated electrons, capturing nearly all the electrons and transporting them to the target without a low-energy tail. This is achieved using techniques originally developed for the optimisation of klystrons as part of the Compact Linear Collider project at CERN. By adjusting cell-to-cell coupling, all the other cells can operate at a higher gradient, similar to a standard medical linac, so that the total linac length remains the same (see “Strong coupling” figure).
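The sub-50% capture figure can be illustrated with a toy phase-sampling model (purely illustrative, not the ITAR design code): electrons are injected uniformly over all RF phases, with a normalised first-cell energy gain proportional to cos φ, so only those landing on accelerating phases make it forward.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Gun pulses are much longer than the RF period, so the injection phase
# is effectively uniform over a full RF cycle.
phases = rng.uniform(0.0, 2.0 * np.pi, size=100_000)

# Toy model: normalised energy gain in the first cell ~ cos(phase);
# electrons on decelerating phases (gain <= 0) are reflected to the cathode.
gain = np.cos(phases)
captured = float(np.mean(gain > 0.0))

print(f"fraction injected on accelerating phases: {captured:.3f}")
```

Even among the captured half, electrons near the zero crossings gain little energy, producing the low-energy tail that the low-gradient first cell and adjusted cell lengths described above are designed to eliminate.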

Designing a Robust and Affordable Radiation Therapy Treatment System for Challenging Environments workshop participants

The electrical power supply in LMICs is often variable, and protection equipment to isolate harmonics between pieces of equipment is not always installed, so it is critical to consider this when designing the electrical system for RT machines. This is relatively straightforward in itself, but it is not normally considered as part of an RT machine design.

The failure of multi-leaf collimators (MLCs), which alter the intensity of the radiation so that it conforms to the tumour volume via several individually actuated leaves, is a major linac downtime issue. Designing MLCs that are less prone to failure will play a key role in RT in LMICs, with studies ongoing into ways to simplify the design without compromising on treatment quality.

Building a workforce

Making it simpler to diagnose and repair faults on linacs is another key area that needs improvement. Given the limited technical staff training in some LMICs, when a machine fails it can be challenging for local staff to make repairs. In addition, components that are degrading can be missed by staff, leading to loss of valuable time to order spares. An important component of the STELLA project, led by ICEC, is to enhance existing and establish new twinning programmes that provide mentoring and training to healthcare professionals in LMICs to build workforce capacity and capability in those regions.

ITAR linac cavity geometry

The idea to address the need for a novel medical linac for challenging environments was first presented by Norman Coleman, senior scientific advisor to the ICEC, at the 2014 ICTR-PHE meeting in Geneva. This led to the creation of the STELLA project, led by Coleman and ICEC colleagues Nina Wendling and David Pistenmaa, which is now using technology originally developed for high-energy physics to bring this idea closer to reality – an excellent example of the impact of fundamental research on wider society. 

The next steps are to construct a full linac prototype to verify the higher capture, as well as to improve the ease of maintaining and repairing the machine. Then we need to have the RT machine manufactured for use in LMICs, which will require many practical and commercial challenges to be overcome. The aim of project STELLA to make RT truly accessible to all cancer patients brings to mind a quote from the famous Nigerian novelist Chinua Achebe: “While we do our good works let us not forget that the real solution lies in a world in which charity will have become unnecessary.” 

Space-based data probe neutron lifetime

Recent measurements of the neutron lifetime

The neutron lifetime is key to a range of fields, not least astrophysics and cosmology, where it is used in modelling the synthesis of helium and heavier elements in the early universe. Its value, however, is uncertain. In recent years, discrepancies of up to 4σ between measurements of the neutron lifetime using different methods have presented a puzzle that particle physicists, nuclear physicists and cosmologists are increasingly eager to solve. 

A recent measurement by the UCNτ experiment at the Los Alamos Neutron Science Center, the most constraining of the lifetime to date, further strengthens the discrepancy. The latest result, achieved using the so-called “bottle” method, yields a neutron lifetime of 877.75 ± 0.28 (stat) +0.22 –0.16 (syst) s, whereas measurements using the “beam” method have consistently resulted in longer lifetimes (see figure). While the beam method determines the lifetime by measuring the decay products of the neutron, the bottle method instead stores ultracold neutrons for a certain time before counting the remaining ones by direct detection. If not the result of some unknown systematic error, the discrepancy could be a sign of exotic physics whereby the longer lifetime in the beam method stems from an unmeasured second decay channel. 

Escape detection

Astrophysics brings a third, independent measurement into play, based on the bombardment of planetary surfaces by galactic cosmic rays. This continual process liberates large numbers of high-energy neutrons, some of which escape into space while others approach thermal equilibrium with surface and atmospheric material, a proportion subsequently escaping into space where at some point they will decay. The neutron lifetime can therefore be inferred by counting the neutrons remaining at different distances from their production location, using detectors positioned hundreds to thousands of kilometres above the surface. As the escaped neutron flux depends on a planet’s particular elemental composition at depths corresponding to the neutron mean free path (typically around 10 cm), neutron spectrometers have already been installed on several missions to explore planetary surface compositions.

A dedicated instrument on a future lunar mission could bring a crucial third independent tool to tackle the neutron lifetime puzzle

In 2020, using neutrons produced through interactions of cosmic rays with Venus and Mercury, a team from the Johns Hopkins Applied Physics Laboratory and Durham University demonstrated the feasibility of such a neutron-lifetime measurement. Now, using data from a lunar mission, the same team has provided the first results with uncertainties approaching those coming from lab-based experiments. Importantly, since it also relies on direct detection, the result from space should produce the same lifetime as the bottle experiments.

For this latest study, the researchers used data from NASA’s Lunar Prospector taken during several elliptical orbits around the Moon in 1998. The orbiter contained two neutron detectors: one with a cadmium shield, making it insensitive to slow or thermal neutrons, and one with a tin shield, allowing it to measure thermal as well as higher-energy neutrons. The difference between the two count rates then provides the thermal neutron flux. Combining this with the spacecraft position, the group deduced the thermal neutron flux at different positions and altitudes above the Moon, and fitted the data to a model that includes the production and propagation of thermal neutrons originating from interactions of cosmic rays with the lunar surface.

Surface studies

The highly detailed models account for neutron production from cosmic-ray interactions with the different elements of the lunar surface, and also for the varying composition of the surface in different regions. For the lifetime measurement, thermal neutrons were used due to their lower velocities (a few km/s), which make their flux as a function of the distance to the surface (typically several hundred km) more sensitive to their lifetime. The higher sensitivity comes at the cost of greater model complexity, however. For example, thermal neutrons cannot simply be modelled as travelling in a straight line: they are affected by lunar gravity, meaning that they not only come directly from the surface but can also enter the detector from behind as they follow elliptical orbits. 
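The sensitivity argument can be sketched with a simple straight-line survival estimate (illustrative numbers only; the real analysis tracks gravitationally curved trajectories and detailed surface models): a thermal neutron at a few km/s takes minutes to climb several hundred km, a flight time that is an appreciable fraction of its lifetime.

```python
from math import exp

TAU = 880.0      # assumed neutron lifetime in seconds (illustrative)
V_THERMAL = 2.2  # assumed thermal neutron speed in km/s (illustrative)

def survival(distance_km: float, v_km_s: float = V_THERMAL,
             tau_s: float = TAU) -> float:
    """Fraction of neutrons surviving a straight-line flight of the given distance."""
    return exp(-(distance_km / v_km_s) / tau_s)

for d in (100.0, 500.0, 1000.0):
    print(f"{d:6.0f} km: {survival(d):.2f} survive")
```

A fast (MeV-scale) neutron covers the same distance thousands of times more quickly, with almost no decay en route, which is why the thermal component carries nearly all the lifetime sensitivity.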

The study found a lifetime of 887 ± 14 (stat) +7 –3 (syst) s. The systematic error stems mainly from uncertainties in the surface composition and its variations, from the lack of modelling of the temperature variation of the Moon’s surface, which affects the thermalisation process, and from uncertainties in the ephemerides (position) of the spacecraft. In future dedicated missions the latter two issues can be mitigated, while knowledge of the surface composition can be improved with additional studies. The large statistical error arises from this being a non-dedicated mission, in which the small data sample used was not even part of the original mission’s science data. The results are therefore highly promising, as they show that a dedicated instrument on a future lunar mission would bring a crucial third independent tool to tackle the neutron-lifetime puzzle.
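Using the numbers quoted above, one can check how the lunar result compares with the UCNτ bottle value. This is a rough estimate only: the asymmetric errors are combined in quadrature on the sides relevant to the comparison.

```python
from math import hypot

# UCNtau bottle result: 877.75 +/- 0.28 (stat) +0.22/-0.16 (syst) s
bottle, bottle_err = 877.75, hypot(0.28, 0.22)  # stat (+) upper systematic

# Lunar Prospector result: 887 +/- 14 (stat) +7/-3 (syst) s
lunar, lunar_err = 887.0, hypot(14.0, 3.0)      # stat (+) lower systematic

diff = lunar - bottle
sigma = diff / hypot(bottle_err, lunar_err)
print(f"difference: {diff:.2f} s, about {sigma:.1f} standard deviations")
```

With errors this large, the space-based result is statistically compatible with both lab methods for now; only a dedicated mission with smaller uncertainties could discriminate between them.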

One day in September: Copenhagen

The ghosts of Niels Bohr, Werner Heisenberg and Margrethe Bohr

“But why?” asks Margrethe Bohr. Her husband, Niels, replies “Does it matter my love now that we’re all three of us dead and gone?” Alongside Werner Heisenberg, the trio look like spirits meeting in an atemporal dimension, maybe the afterlife, under an eerie ring of light. Dominating an almost empty stage, they try to revive what happened on one day in September 1941, when Heisenberg, a prominent figure in Hitler’s Uranverein (Uranium Club), travelled to Nazi-occupied Denmark to visit his former mentor, Niels Bohr. 

Why did Heisenberg go to meet Bohr that day? Did he seek an agreement not to develop the bomb in Germany? Was he searching for intelligence on Allied progress? To convince Bohr that there was no German programme? Or to pick Bohr’s brain on atomic physics? Or, according to Margrethe, to show off? Perhaps his motives were a superposition of all of these. No one knows what was said. This puzzle has intrigued historians ever since. 

Eighty years after that meeting, and 23 since Michael Frayn’s masterwork Copenhagen premiered at the National Theatre in London, award-winning director Polly Findlay and Emma Howlett in her professional directorial debut have revived a play that contains little action but much physics and food for thought.

The three actors orbit like electrons in an atom

Frayn’s nonlinear script is based on three possible versions of the same meeting in Copenhagen in 1941, which can be construed as three different scenarios playing out in the many-worlds interpretation of quantum mechanics. He describes it as the process of rewriting a draft of a paper again and again, trying to unlock more secrets. In the afterlife, the trio’s dialogue jumps back and forth in time, adding confusing memories and contradicting hypotheses. Delivered at pace, the narrative explores historical information and their personal stories.

The three characters reflect on how German scientists failed to build the bomb, even though they had the best start; Otto Hahn, Lise Meitner and Fritz Strassmann had discovered nuclear fission in 1939. But Frayn highlights how Hitler’s Deutsche Physik was hostile to so-called Jewish physics and key Jewish physicists, including Bohr, who later fled to Los Alamos in the US. Frayn’s Heisenberg reveals the disbelief he felt when he learnt about the destruction of Hiroshima on the radio. At the time he was detained at Farm Hall, not far from this theatre in Cambridge in the UK, together with other members of the Uranium Club. In an operation codenamed Epsilon, the bugged hall was used by the Allied forces to try to uncover the state of Nazi scientific progress.

The three actors orbit like electrons in an atom, while the theatre’s revolving stage itself spins. Superb acting by Philip Arditti and Malcolm Sinclair elucidates an extraordinary student–mentor relationship between Heisenberg and Bohr. The sceptical Mrs Bohr (Haydn Gwynne) steers the conversation and questions their friendship, cajoling Bohr to speak in plain language. Nevertheless, the use of scientific jargon could leave some non-experts in the audience behind. 

Although Heisenberg wrote in his autobiography that “it would be better to stop disturbing the spirits of the past,” the private conversation between the two physicists has stirred the interest of the public, journalists and historians for years. In 1956 the journalist Robert Jungk wrote in his much-debated book, Brighter than a Thousand Suns, that Heisenberg wanted to prevent the development of an atomic bomb. This book was also an inspiration for Frayn’s play. More recently, in 2001, Bohr’s family released some letters that Bohr wrote and never sent to Heisenberg. According to these letters, Bohr was convinced that Heisenberg was building the bomb in Germany.

To this day, the reason for Heisenberg’s visit to Copenhagen remains uncertain, or unknowable, like the properties of a quantum particle that’s not observed. The audience can only imagine what really happened, while considering all philosophical interpretations of the fragility of the human species. 

Witten reflects

Edward Witten

How has the discovery of a Standard Model-like Higgs boson changed your view of nature? 

The discovery of a Standard Model-like Higgs boson was a great triumph for renormalisable field theory, and really for simplicity. By the time the LHC was operating, attempts to make the Standard Model (SM) work without an elementary Higgs field – using a dynamical mechanism instead – had become rather convoluted. It turned out that, as far as one can judge from what we have learned so far, the original idea of an elementary Higgs particle was correct. This also means that nature takes advantage of all the possible building blocks of renormalisable field theory – fields of spin 0, 1/2 and 1 – and the flexibility that that allows. 

The other key fact is that the Higgs particle has appeared by itself, and without any sign of a mechanism that would account for the smallness of the energy scale of weak interactions compared to the much larger presumed energy scales of gravity, grand unification and cosmic inflation. From the perspective that my generation of particle physicists grew up with (and not only my generation, I would say), this is quite a shock. Of course, we lived through a somewhat similar shock a little over 20 years ago with the discovery that the expansion of the universe is accelerating – something that is most simply interpreted in terms of a very small but positive cosmological constant, the energy density of the vacuum. It seems that the ideas of naturalness that we grew up with are failing us in at least these two cases.

What about new approaches to the fine-tuning problem such as the relaxion or “Nnaturalness”?

Unfortunately, it has been very hard to find a conventional natural explanation of the dark energy and hierarchy problems. Reluctantly, I think we have to take seriously the anthropic alternative, according to which we live in a universe that has a “landscape” of possibilities, which are realised in different regions of space or maybe in different portions of the quantum mechanical wavefunction, and we inevitably live where we can. I have no idea if this interpretation is correct, but it provides a yardstick against which to measure other proposals. Twenty years ago, I used to find the anthropic interpretation of the universe upsetting, in part because of the difficulty it might present in understanding physics. Over the years I have mellowed. I suppose I reluctantly came to accept that the universe was not created for our convenience in understanding it.

Which experimental paths should physicists prioritise at this time?

It is extremely important to probe the twin mysteries of the cosmic acceleration and the smallness of the electroweak scale as thoroughly as possible, in order to determine whether we are interpreting the facts correctly and possibly to discover a new layer of structure. In the case of the cosmic acceleration, this means measuring as precisely as we can the parameter w (the ratio of pressure and energy), which equals –1 if the acceleration of the expansion is governed by a simple cosmological constant, but would be greater than –1 in most alternative models. In particle physics, we would like to probe for further structure as precisely as we can both indirectly, for example with precision studies of the Higgs particle, and hopefully directly by going to higher energies than are available at the LHC.

What might be lurking at energies beyond the LHC?

If it is eventually possible to go to higher energies, I can imagine several possible outcomes. It might become rather clear that the traditional idea of naturalness is not the whole story and that we have on our hands a “bare” Higgs particle, without a mechanism that would account for its mass scale. Alternatively, we might find out that the apparent failure of naturalness was an illusion and that additional particles and forces that provide an explanation for the electroweak scale are just beyond our current experimental reach. There is also an intermediate possibility that I find fascinating. This is that the electroweak scale is not natural in the customary sense, but additional particles and forces that would help us understand what is going on exist at an energy not too much above LHC energies. A fascinating theory of this type is the “split supersymmetry” that has been proposed by Nima Arkani-Hamed and others.  

It seems that the ideas of naturalness that we grew up with are now failing us 

There is an obvious catch, however. It is easy enough to say “such-and-such will happen at an energy not too much above LHC energies”. But for practical purposes, it makes a world of difference whether this means three times LHC energies, six times LHC energies, 25 times LHC energies, or more. In theories such as split supersymmetry, the clues that we have are not sufficient to enable a real answer. A dream would be to get a concrete clue from experiment about what is the energy scale for new physics beyond the Higgs particle. 

Could the flavour anomalies be one such clue?

There are multiple places that new clues could come from. The possible anomalies in b physics observed at CERN are extremely significant if they hold up. The search for an electric dipole moment of the electron or neutron is also very important and could possibly give a signal of something new happening at energies close to those that we have already probed. Another possibility is the slight reported discrepancy between the magnetic moment of the muon and the SM prediction. Here, I think it is very important to improve the lattice gauge theory estimates of the hadronic contribution to the muon moment, in order to clarify whether the fantastically precise measurements that are now available are really in disagreement with the SM. Of course, there are multiple other places that experiment could pinpoint the next energy scale at which the SM needs to be revised, ranging from precision studies of the Higgs particle to searches for muon decay modes that are absent in the SM. 

Which current developments in theory are you most excited about?

The new ideas about gravity and quantum mechanics that go under the rough title “It from qubit” are really exciting. Black-hole thermodynamics was discovered in the 1970s through the work of Jacob Bekenstein, Stephen Hawking and others. These results were fascinating, but for several decades it seemed to me – rightly or wrongly – that this field was evolving only slowly compared to other areas of theoretical physics. In the past decade or so, that is clearly no longer the case. In large part the change has come from thinking about “entropy” as microscopic or fine-grained von Neumann entropy, as opposed to the thermodynamic entropy that Bekenstein and others considered. A formulation in terms of fine-grained entropy has made possible new and more general statements, which reduce to the traditional ones when thermodynamics is valid. All this has been accelerated by the insights that come from holographic duality between gravity and gauge theory.

How different does the field look today compared to when you entered it?

It is really hard to exaggerate how the field has changed. I started graduate school at Princeton in September 1973. Asymptotic freedom of non-abelian gauge theory had just been discovered a few months earlier by David Gross, Frank Wilczek and David Politzer. This was the last key ingredient that was needed to make possible the SM as we know it today. Since then there has been a revolution in our experimental knowledge of the SM. Several key ingredients (new quarks, leptons and the Higgs particle) were unknown in 1973. Jets in hadronic processes were still in the future, even as an idea, let alone an experimental reality, and almost nothing was known about CP violation or about scaling violations in high-energy hadronic processes, just to mention two areas that developed later in an impressive way. 

6D Calabi–Yau manifolds

Not only is our experimental knowledge of the SM so much richer than it was in 1973, but the same is really true of our theoretical understanding as well. Quantum field theory is understood much better today than was the case in 1973. There really is no comparison.

Perhaps equally dramatic has been the change in our understanding of cosmology. In 1973, the state of cosmological knowledge could be summarised fairly well in a couple of numbers – notably the cosmic-microwave temperature and the Hubble constant – and of these only the first was measured with any reasonable precision. In the intervening years, cosmology became a precision science and also a much more ambitious science, as cosmologists have learned to grapple with the complex processes of the formation of structure in the universe. In the inhomogeneities of the microwave background, we have observed what appear to be the seeds of structure formation. And the theory of cosmic inflation, which developed starting around 1980, seems to be a real advance over the framework in which cosmology was understood in 1973, though it is certainly still incomplete.

Exploring the string-theory framework has led to a remarkable series of discoveries

Finally, 50 years ago the gulf between particle physics and gravity seemed unbridgeably wide. There is still a wide gap today. But the emergence in string theory of a sensible framework to study gravity unified with particle forces has changed the picture. This framework has turned out to be very powerful, even if one is not motivated by gravity and one is just searching for new understanding of ordinary quantum field theory. We do not understand today in detail how to unify the forces and obtain the particles and interactions that we see in the real world. But we certainly do have a general idea of how it can work, and this is quite a change from where we were in 1973. Exploring the string-theory framework has led to a remarkable series of discoveries. This well has not run dry, and that is one of the reasons that I am optimistic about the future.

Which of the numerous contributions you have made to particle and mathematical physics are you most proud of?

I am most satisfied with the work that I did in 1994 with Nathan Seiberg on electric-magnetic duality in quantum field theory, and also the work that I did the following year in helping to develop an analogous picture for string theory.

Who knows, maybe I will have the good fortune to do something equally significant again in the future.

