The 30th International Symposium on Lepton Photon Interactions at High Energies, hosted online by the University of Manchester from 10 to 14 January, saw more than 500 physicists from around the world engage with a broad science programme. The Lepton Photon series dates to the 1960s and takes place every two years. This edition was due to return to the UK for the first time in more than 50 years, with its original August time slot moved to January due to COVID-19 restrictions. The agenda was spread across the day to improve accessibility in different time zones. Posters were presented via pre-recorded videos, and three prizes were awarded following a public vote.
With 2022 marking the ten-year anniversary of the Higgs-boson discovery, it was appropriate that the conference kicked off with an experimental Higgs-summary talk. Both the ATLAS and CMS collaborations showcased their latest high-precision measurements of Higgs-boson properties and searches for physics beyond the Standard Model using the Higgs boson as a portal. ATLAS presented a new combination of the Higgs total and differential cross-section measurements in the two-photon and four-lepton channels, while CMS shared the first full Run-2 search for resonant di-Higgs production in several multi-lepton final states.
In flavour physics, the pattern of anomalies in rare leptonic and semi-leptonic processes continues to intrigue. Highlights in this area included new tests of lepton universality from LHCb in Λb0→Λc+ℓ–ν̄ℓ decays (ℓ = e, μ, τ), where the decay involving a τ lepton was observed for the first time, and from Belle in Ωc0→Ω−ℓ+νℓ decays, where the ratio of the e–μ final-state branching ratios was found to agree with the expectation of unity and where the μ decay was measured for the first time. Similar studies of rare leptonic decays are now also taking place in the charm sector, with the BESIII collaboration testing e–μ universality in a second decay mode and confirming agreement with the Standard Model. Participants also heard about the latest searches for the ultra-rare decay K→πνν̄ from KOTO, which targets the neutral-kaon mode, and from NA62, which now has 3.4σ evidence for the charged-kaon mode.
With the 2021 update on muon g-2 from Fermilab, and with the MEG-II, DeeMe and Mu3e experiments getting ready to search for muon-to-electron transitions, there is much excitement about charged-lepton physics. CP violation in beauty and charm remains a hot topic, with updates from LHCb, Belle and BESIII on D0 and Bs oscillations and the CKM angle γ. In all these areas, the theoretical community continues to push the boundaries to make improved predictions. Among other things, theorists presented the latest global fits of Wilson coefficients and reported several welcome developments in lattice QCD.
Highlights from the neutrino sector included the low-energy-excess search by MicroBooNE and the observation of solar neutrinos from the CNO cycle by Borexino. The latest results from the long-baseline experiments T2K and, more recently, NOvA are starting to hint at large CP-violating effects in neutrino oscillations.
A series of talks on dark-matter searches spanned collider experiments, direct detection and astrophysical signatures. Some interesting anomalies persist, such as the DAMA annual modulation and the XENON1T low-energy excess. These will be challenged by a suite of next-generation detectors, such as PandaX-4T, XENONnT, LZ and DarkSide-20k.
The conference also included a rich programme of talks covering astrophysics, with an emphasis on gravitational waves and multi-messenger astronomy. Hot off the press was a combined search for spatial correlations between neutrinos and ultra-high-energy cosmic rays, using data from the ANTARES, IceCube, Auger and TA collaborations, with no sign yet of a connection.
As well as many new results from experiments in operation, the conference included sessions devoted to R&D in accelerators, detectors, software and computing, covering both collider and non-collider experiments. With many new facilities proposed in the medium and long term, the technological challenges, which include power consumption, data rates and radiation tolerance, are immense and demand significant efforts in harnessing promising avenues such as high-temperature superconductors, quantum sensors and specialised computing accelerators. Common to all areas is the need to train and retain highly skilled people to lead these efforts in future.
Discussions around diversity, inclusion and outreach are a firm part of the Lepton Photon plenary programme. A lively panel discussion covered many aspects of the former two topics and ended with a key message to the whole community: be an ally and take an active stance in support of minorities. The conference ended with traditional reports from the IUPAP commission on particles and fields and from ICFA, followed by strategy updates from Snowmass and the African Strategy for Fundamental and Applied Physics. While Snowmass is an established process for regular updates of the US strategy for the field based on widespread community input both from the US and internationally, the African strategy is the first of its kind and is testament to the continent's ambition and growing importance in physics research. The next conference will take place in Melbourne in July 2023.
Ronald Shellard began his journey in physics in the 1970s at the University of São Paulo, where he took his undergraduate degree, and at the Institute of Theoretical Physics of São Paulo State University, where he completed his master’s in 1973. He received his doctorate, titled “Particle physics field theory, dynamical symmetry breakdown at the two loop and beyond”, from the University of California, Los Angeles in 1978 after also spending time at the University of California, Santa Barbara.
After a period working in theoretical particle physics, Shellard devoted himself to experimental and astroparticle physics. He joined the DELPHI collaboration at the former LEP collider at CERN in 1989, and in 1995 he joined the Pierre Auger Observatory, where he made an outstanding contribution both as a researcher and as an organiser of the Brazilian collaboration. Remaining in the astroparticle field, during the past decade he was also involved in the Cherenkov Telescope Array, the Large Array Telescope for Tracking Energetic Sources and the Southern Wide Field Gamma-Ray Observatory.
Shellard played a key role in efforts to make Brazil an official member of CERN
From 2009 to 2013, Shellard was vice president of the Brazilian Physical Society. He participated tirelessly in various initiatives to promote Brazilian physics, such as the establishment of the exchange programme with the American Physical Society, the strengthening of the Brazilian physics Olympiad, the in-depth study of physics and national development, the establishment of the internship programme for high-school teachers at CERN, and the initiative to create a professional master's degree in physics teaching. He had been a member of the Brazilian Academy of Sciences since 2017, director of the Centro Brasileiro de Pesquisas Físicas since 2015 and president of the Brazilian network of high-energy physics since 2019. He played a key role in efforts to make Brazil an official member of CERN – a process that appears to be reaching a successful conclusion, with the CERN Council voting in September 2021 to grant Brazil the status of Associate Member State, pending the signature of the corresponding agreement and its ratification by the Brazilian authorities. Active until a few days before he passed away on 7 December, Ron was very excited about this news and was making plans for the next steps of the accession procedure.
Ron Shellard had an innovative and sensibly optimistic spirit, with a comprehensive and progressive vision of the crucial role of physics, and science in general, in the progress of Brazilian society. He exerted a great influence on the formation of the research community in high-energy physics. He was the advisor to several graduate students and had a permanent commitment to the training of new scientists and to the dissemination and popularisation of science in the country.
Every seven to 10 years, the US high-energy physics community comes together to re-evaluate and update its vision of the field. These wide-ranging exercises, organised by the American Physical Society (APS)’s Division of Particles and Fields (DPF) since 1982, are now known as the Snowmass Community Studies on account of the final drafting having historically taken place in Snowmass, Colorado. They include all related disciplines that contribute to elementary particle physics and welcome the participation of physicists from outside the US.
Snowmass exists to identify the physics issues that should be addressed and possible approaches to pursuing them, but we do not seek to specify which projects should be carried out. That task is accomplished by a Particle Physics Project Prioritization Panel (P5), a subpanel of the US High Energy Physics Advisory Panel (HEPAP), which uses the Snowmass output to develop programmatic priorities based on specific budget scenarios and provides recommendations to US funding agencies. Snowmass 2013 and the subsequent 2014 P5 roadmap recommended a suite of new projects, including: the HL-LHC upgrade; DUNE/LBNF; a short-baseline neutrino programme; the PIP-II proton source upgrade; the Mu2e experiment; the LSST camera and DESI; the LUX-ZEPLIN and CDMS dark matter searches; preparation for a new cosmic-microwave-background explorer; and strong investment in R&D for future accelerators. With many of these projects now under construction, it is vital to prepare the next round of compelling US particle-physics initiatives.
In April 2020 we kicked off a new Snowmass study. Initially scheduled to conclude with a workshop at the University of Washington in Seattle in July 2021, the process was paused due to COVID-19. On 24 September, at a virtual “Snowmass Day” meeting, we declared the Snowmass process officially resumed, with the Seattle workshop scheduled for 17 to 26 July.
White papers describing ideas, proposals and projects are due by 15 March for discussion
The Snowmass 2021 study is divided into 10 “frontiers”: energy; neutrino physics; rare processes and precision measurements; cosmic; theory; accelerator; instrumentation; computation; underground facilities; and community engagement. Each frontier is led by two or three conveners and is divided into between six and 11 topical groups – with community development, demographics, and diversity and inclusion addressed across all frontiers. A Snowmass early-career organisation has also been formed to assist young physicists in contributing to the process. The whole exercise is overseen by a steering group, which includes the DPF chair line, and international representation is provided by an advisory group chosen by national and regional physics societies.
Informing Snowmass 2021 are many recent results: Higgs-boson properties obtained by ATLAS and CMS; the measurement of the angle θ13 in the neutrino mixing matrix; evidence for anomalies in B-meson decays from LHCb; and the tension between Fermilab’s measurement of muon g-2 and the Standard Model prediction. These topics will continue to be explored in current experiments. Snowmass 2021 and the latest European strategy update focus on what comes next.
Collider matters
In the Snowmass process, we collect all ideas, whether they are large or small, expensive or less so, require international collaboration or not, and are hosted in the US or elsewhere. One topic of intense interest worldwide is the next generation of colliders, both to study the Higgs boson with sub-percent level precision and to directly search for new phenomena in the multi-TeV regime. The proposed Higgs factories require some final development that could be completed in a few years, which would enable a decision on which machine to build, and the start of negotiations to fund it, as an international project. Machines to explore the multi-TeV terrain require significantly more R&D to develop and industrialise the necessary new technologies. We expect this Snowmass/P5 process to set the direction for US participation in this R&D effort and future construction projects. We also look forward to new experiments and upgrades to existing experiments in neutrino physics, rare decays and astrophysics, along with new R&D initiatives in detectors, computing, accelerators and theory.
White papers describing ideas, proposals and projects are due by 15 March 2022 for discussion at the Seattle meeting, where a draft report will be produced and then submitted to HEPAP and the APS in the fall. With hard work and good will, we expect to emerge from the Snowmass/P5 process with a grand vision for a vibrant US high-energy physics programme over the 10 years starting from 2025 and with a roadmap for large new initiatives that will come to fruition in the 2030s. Please join us and contribute your ideas to shaping our future!
The decision by CERN in 2010 to introduce a policy of geographical enlargement to attract new Member States and Associate Member States, including from outside Europe, marked a prominent step towards the globalisation of high-energy physics. It aimed to strengthen relations with countries that can bring scientific and technological expertise to CERN and, in return, allow countries with developing particle-physics communities to build capacity. From South Asia, researchers have made significant contributions to pioneering activities of CERN over the past decades, including the construction of the LHC.
The first CERN South Asian High Energy Physics Instrumentation (SAHEPI) workshop, held in Kathmandu, Nepal, in 2017, took place shortly after Pakistan (July 2015) and India (January 2017) became CERN Associate Member States, and followed similar regional approaches in Latin America and South-east Asia. Within the South Asia region, CERN has also signed bilateral international cooperation agreements with Bangladesh (2014), Nepal (2017) and Sri Lanka (2017). The second workshop took place in Colombo, Sri Lanka, in 2019. SAHEPI's third edition took place virtually on 21 October 2021, hosted by the University of Mauritius in collaboration with CERN. Its aim was to consolidate the dialogue from the first two workshops while strengthening the scientific cooperation between CERN and the South Asia region.
“SAHEPI has been very successful in strengthening the scientific cooperation between CERN and the South Asia region and reinforcing intra-regional links,” said Emmanuel Tsesmelis, head of relations with Associate Members and non-Member States at CERN. “SAHEPI provides the opportunity for countries to enhance their existing contacts and to establish new connections within the region, with the objective of initiating new intra-regional collaborations in particle physics and related technologies, including the promotion of exchange of researchers and students within the region and also with CERN.”
Rising participation
Despite its virtual mode, SAHEPI-3 witnessed the largest participation yet, with 210 registrants. Representatives from Afghanistan, Bangladesh, Bhutan, India, Maldives, Mauritius, Nepal, Pakistan, and Sri Lanka attended, with at least one senior scientist and one student from each country. The workshop brought together physicists and policy makers from the South Asia region and neighbouring countries, together with representatives from CERN. Societal applications of technologies developed for particle physics were key highlights of SAHEPI-3, explained Archana Sharma, senior advisor for relations with international organisations at CERN:
“In this decade, disruptive innovation underpinning the importance of science and technology is making a huge impact towards the United Nations Sustainable Development Goals. CERN plays its role at the forefront, whether it is advances in science and technology or dissemination of that knowledge with an emphasis on inclusive engagement. We see the percolation of this initiative with increasing engagement from the region in CERN programmes.”
Participants reviewed the status and operation of present facilities in particle physics, and the scientific experimental programme, including the LHC and its high-luminosity upgrade at CERN, while John Ellis captivated participants with his talk “Answering the Big Question: From the Higgs boson to the dark side of the Universe”. Sanjaye Ramgoolam topped off the workshop with a public lecture on “the simple and the complex” in elementary particle physics.
SAHEPI has been very successful in strengthening the scientific cooperation between CERN and the South Asia region and reinforcing intra-regional links
Emmanuel Tsesmelis
Country representatives presented several highlights of the ongoing experimental programmes in collaboration with CERN and other international projects. India's contributions across the ALICE experiment (such as the development of the photon multiplicity detector), its plans to join the IPPOG outreach group, its activities for the Worldwide LHC Computing Grid, industrial involvement and contributions to CMS – where it is the seventh-largest country in terms of the number of members – were presented. For Afghanistan, representatives described the participation of the country's first student in the CERN Summer Student School (2019) and the completion of master's degrees by two faculty members based on measurements at ATLAS. The country hopes to team up with particle physicists outside Afghanistan to teach online courses at the physics faculty at Kabul University, provide postgraduate scholarships to students and involve more female faculty members at ICTP – the International Centre for Theoretical Physics.
Pakistan shared its contributions to the LHC experiments as well as accelerator projects such as CLIC/CTF3 and Linac4, and its role in the tracker alignment of CMS and its Resistive Plate Chambers. Nepal's representatives described the development of supercomputers at Kathmandu University (KU) and acknowledged the donation agreement between KU and CERN, under which KU received servers and related hardware to set up a high-performance computing facility. Delegates from Sri Lanka highlighted the rising popularity of the CERN Summer Student Programme among university physics students pursuing honours degrees; the country also described its initiative of an island-wide online teacher training programme to promote particle physics. The representative from Bangladesh reported on the country's long tradition in theoretical particle physics and plans for developing its experimental particle-physics community in partnership with CERN. The Maldives and Bhutan continue to deepen their engagement with CERN, with Bhutan preparing to host the second South Asia science education programme in hybrid mode this year.
Strengthening relations
Chief guest Leela Devi Dookun-Luchoomun, the Vice-Prime Minister and Minister of Education, Tertiary Education, Science and Technology of Mauritius, informed the audience about the formation of a research and development unit in her ministry and gave her strong support to a partnership between CERN and Mauritius. The Vice-Chancellor of the University of Mauritius, Dhanjay Jhurry, expressed his deep appreciation of SAHEPI and indicated his support for future initiatives via a partnership between CERN and the University of Mauritius.
The workshop and the initiative to reinforce particle-physics capacity in the region also form part of broader efforts by CERN to emphasise the role of fundamental research in development, notably to advance the United Nations Sustainable Development Goals agenda. In this regard, discussions took place on a follow-up to the first-of-its-kind professional development programme for high-school teachers of STEM subjects from South Asia, held in New Delhi in 2019, with Bhutan volunteering to host the next event in 2023, pandemic permitting. A poster competition engaged students from South Asia, and three prizes were announced to encourage further participation in big-science projects and capacity building in the region.
The motivation and enthusiasm of SAHEPI participants were notable, and the efforts in support of research and education across the region were clear. Proceedings of the workshop will be presented to representatives of the governments of the participating countries to raise awareness at the highest political level of the growth of the community in the region and its value for broader societal development.
Discussions will follow in 2023 at SAHEPI-4, helping CERN continue to engage further with particle physics research and education across South Asia for the benefit of the field as a whole.
Seven years after the first direct detection of gravitational waves (GWs), particle physicists around the world are preparing for the next milestone in GW astronomy: the search for a cosmological stochastic GW background. Current and planned GW observatories cover roughly 12 orders of magnitude in frequency, from the nanohertz to the kilohertz regime, in which astrophysical models predict sizable GW signals from the mergers of compact objects such as black holes and neutron stars, as observed by the LIGO/Virgo collaborations. The universe is also expected to contain a randomly distributed GW background, which is yet to be detected. This could be the result of various known and unknown astrophysical sources that are too weak to be resolved individually, or of hypothetical processes in the very early universe, such as phase transitions at high temperatures. The most promising region in which to search for the latter is arguably the ultra-high-frequency (UHF) regime encompassing megahertz and gigahertz GWs, which is beyond the reach of current detectors. The detection of such a stochastic GW background could therefore offer a powerful probe of the early universe and of physics beyond the Standard Model.
On 12–15 October a virtual workshop hosted by CERN explored theoretical models and detector concepts targeting the UHF GW regime. Following an initial meeting at ICTP Trieste in 2019 and the publication of a Living Review on UHF GWs, the goal of the workshop was to bring together theorists and experimentalists to discuss feasibility studies and prototypes of existing detector concepts, and to review more recent proposals.
The wide range of detector concepts discussed demonstrates the rapid evolution of this field and shows the difficulty in choosing the optimal strategy. Tailoring “light shining through wall” experiments for GWs is one promising approach. In the presence of a static magnetic field, general relativity in conjunction with electrodynamics allows GWs to generate electromagnetic radiation at the same frequency, similar to the conversion of the hypothetical axion into photons. In this case, the bounds placed on “axion to photon” couplings, for example as determined by the CAST and OSQAR experiments at CERN or the ALPS experiments at DESY, can be recast as GW bounds.
The sheer variety of systems offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field
Another approach, echoing that of the very first GW searches in the late 1960s, is to detect the mechanical deformation induced by GWs at the base of resonant-bar detectors, which can be implemented in the UHF regime using centimetre-sized bulk acoustic wave devices common in radio-frequency engineering. Resonant microwave cavities are another approach to detecting interactions between GWs and electromagnetism, and have been explored in the past, for example by the MAGO collaboration at CERN (2004–2007), or proposed as a modified version of the ADMX experiment at the University of Washington. Further proposals include the precise measurement of optically levitated nanoparticles, transitions in Bose–Einstein condensates, mesoscopic quantum systems, cosmological detectors and magnon systems. The sheer variety of systems, the majority of which are much smaller and less costly than long-baseline interferometric detectors, offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field. Working groups set up during the workshop will investigate some of the most promising ideas in more detail in the coming months.
Complementing the discussion of detector concepts, theorists presented BSM models that predict violent processes in the early universe, which could source strong GW signals. These arise, for example, in some models of cosmic inflation, at the transition between cosmic inflation and the radiation-dominated universe, or from spontaneous symmetry-breaking processes. Since these processes occur isotropically everywhere in the universe, the expected signal is a diffuse GW background. Moreover, some relics of these processes, such as topological defects and primordial black holes, may have survived until the late universe and may still be actively emitting GWs.
The current sensitivity of all proposed and existing detector concepts is several orders of magnitude away from the expected cosmological GW signals. Given that the first laser-interferometer GW detectors built in the 1970s were eight orders of magnitude below the sensitivity of the currently operating LIGO/Virgo/KAGRA observatories, however, there is every reason to think that the search for UHF GWs is the beginning and not the end of a story.
The LHCb collaboration has made the first observation of the semileptonic baryon decay Λb0→Λc+τ–ν̄τ, and used it to carry out a new test of lepton-flavour universality. Presented on 10 January at the 30th Lepton Photon conference organised by the University of Manchester, the result brings a further tool to bear on the flavour anomalies reported by LHCb and other experiments in recent years.
Lepton-flavour universality (LFU) is the principle that the weak interaction couples to electrons, muons and tau leptons equally. Decays of hadrons to electrons, muons and tau leptons are therefore predicted to occur at the same rate, once differences in the lepton masses are taken into account.
During the past few years, physicists have seen hints that some processes might not respect LFU. One of the strongest comes from b→cℓ–ν̄ℓ (ℓ = μ, τ) transitions in B-meson decays, as quantified by the parameter R(D∗), which measures the ratio of the branching fractions of B→D∗τ−ν̄τ and B→D∗ℓ−ν̄ℓ. The combined deviation of R(D*) and R(D), as measured by the BaBar, Belle and LHCb collaborations, from precise Standard Model predictions amounts to around 3.4σ. R(J/ψ), which concerns the branching ratios of Bc+→J/ψτ+ντ and Bc+→J/ψμ+νμ, was also found by LHCb to be larger than expected, but only at the level of around 2σ. Another key test of LFU involves the flavour-changing neutral current (FCNC) quark transition b→sℓ+ℓ–, for which several channels suggest that electrons are produced at a greater rate than muons. The largest effect comes from the decay B+→K+e+e–, for which LHCb finds R(K) to lie 3.1σ from the Standard Model expectation.
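Written out explicitly, the ratios above are ratios of branching fractions that the Standard Model predicts precisely once the lepton-mass (phase-space) differences are accounted for, for example:

```latex
R(D^{*}) = \frac{\mathcal{B}(B \to D^{*}\tau^{-}\bar{\nu}_{\tau})}
                {\mathcal{B}(B \to D^{*}\ell^{-}\bar{\nu}_{\ell})} ,
\qquad
R(K) = \frac{\mathcal{B}(B^{+} \to K^{+}\mu^{+}\mu^{-})}
            {\mathcal{B}(B^{+} \to K^{+}e^{+}e^{-})} ,
\qquad \ell = e, \mu .
```

If LFU holds, any deviation of these ratios from their Standard Model values would signal new physics coupling differently to the lepton generations.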
Taken individually, none of these measurements is statistically significant, but together they present an intriguing pattern. New-physics models based on leptoquarks have been proposed as possible explanations for the anomalies observed in semileptonic B-meson decays and in FCNC reactions.
Baryons entered the fray in late 2019, when LHCb compared the rates of Λb0→pK–e+e– and Λb0→pK–μ+μ– decays. Although R(pK) also erred on the side of fewer muons than electrons, it was found to agree with the Standard Model within the limited statistics. The latest LHCb analysis, which compared the branching ratio of Λb0→Λc+τ–ν̄τ, from a sample of around 350 events selected from LHC Run 1 data, to that of Λb0→Λc+μ–ν̄μ measured by the former DELPHI experiment at LEP, found R(Λc+) = 0.242 ± 0.026 (stat) ± 0.040 (syst) ± 0.059 (ext), in good agreement (approximately 1σ) with the Standard Model prediction of 0.324 ± 0.004.
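As a quick sanity check of the quoted agreement, the pull of the measurement from the Standard Model prediction can be estimated by adding the statistical, systematic and external uncertainties in quadrature (a back-of-the-envelope sketch using only the numbers quoted above; it assumes the uncertainties are independent and Gaussian):

```python
import math

# LHCb measurement of R(Lambda_c+) and its uncertainties, as quoted above
r_meas = 0.242
err_stat, err_syst, err_ext = 0.026, 0.040, 0.059

# Standard Model prediction and its uncertainty
r_sm, err_sm = 0.324, 0.004

# Combine all uncertainties in quadrature (assumes independence)
total_err = math.sqrt(err_stat**2 + err_syst**2 + err_ext**2 + err_sm**2)

# Deviation of the measurement from the prediction, in standard deviations
significance = abs(r_sm - r_meas) / total_err
print(f"{significance:.1f} sigma")  # about 1.1 sigma, consistent with the quoted ~1 sigma
```

The external uncertainty, which dominates, stems from the DELPHI normalisation channel, so future direct LHCb measurements of the muonic mode should tighten this comparison considerably.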
R(D*) can be large and R(Λc+) small in one new-physics scenario, or R(D*) large and R(Λc+) even larger in another
Guy Wormser
Baryon decays provide constraints on potential violations of LFU that are complementary to those from meson decays, due to the different spin of the initial state. This allows constraints to be placed on possible new-physics scenarios, explains Guy Wormser of IJCLab, who led the LHCb analysis: "R(D*) can be large and R(Λc+) small in one new-physics scenario, or R(D*) large and R(Λc+) even larger in another. The spin of the accompanying hadron changes the way new physics couples into the reaction, and it also depends on the spin of the particle present in the new-physics model – usually a leptoquark, which can be a scalar, pseudoscalar, vector, axial vector or tensor. Our result excludes phase-space regions in some of these scenarios. In the future, a combined measurement of LFU violation – if it is confirmed – in mesons and baryons can therefore help to pin down the characteristics of the new-physics mediator."
The latest LHCb result concerning R(Λc+) is likely to trigger intensive discussions among theorists, says the collaboration, with future measurements of this and other “R” measurements using Run-2 and Run-3 data keenly anticipated.
The ALICE detector has undergone significant overhauls during Long Shutdown 2 to prepare for the higher luminosities expected during Runs 3 and 4 of the LHC, starting this year. Further upgrades of the inner tracking system and the addition of a new forward calorimeter are being planned for the next long shutdown, ahead of Run 4. A series of physics questions will still remain inaccessible after Runs 3 and 4, and major improvements in detector performance and the ability to collect an even greater integrated luminosity are needed to address them in Run 5 and beyond. The ideas for a heavy-ion programme for Runs 5 and 6 are part of the European strategy for particle physics. At the beginning of 2020, the ALICE collaboration formed dedicated working groups to work out the physics case, the physics performance and a detector concept for a next-generation heavy-ion experiment called "ALICE 3".
To advance the project further, the ALICE collaboration organised a hybrid workshop on 18 and 19 October, attracting more than 300 participants. Invited speakers on theory and experimental topics reviewed relevant physics questions for the 2030s, and members of the ALICE collaboration presented detector plans and physics performance studies for ALICE 3. Two key areas are understanding how thermal equilibrium is approached in the quark-gluon plasma (QGP) and precisely measuring its temperature evolution.
Restoring chiral symmetry
Heavy charm and beauty quarks are ideal probes to understand how thermal equilibrium is approached in the QGP, since they are produced early in the collision and are traceable throughout the evolution of the system. Measurements of azimuthal distributions of charm and beauty hadrons, as well as charm-hadron pairs, are particularly sensitive to the interactions between heavy quarks and the QGP. In heavy-ion collisions, heavy charm quarks are abundantly produced and can hadronise into rare multi-charm baryons. The production yield of such particles is expected to be strongly enhanced compared to proton-proton collisions because the free propagation of charm quarks in the deconfined plasma allows the combination of quarks from different initial scatterings.
Electromagnetic radiation is a powerful probe of the temperature evolution of the QGP. Since real and virtual photons emitted throughout the evolution of the system are not affected by the strong interaction, differential measurements of dielectron pairs produced from virtual photons allow physicists to determine the temperature evolution in the plasma phase. Given the high temperature and density of the quark-gluon plasma, chiral symmetry is expected to be restored. ALICE 3 will allow us to study the mechanisms of chiral symmetry restoration from the imprint on the dielectron spectrum.
New specialised detectors are being considered to further extend the physics reach
To achieve the performance required for these measurements and the broader proposed ALICE 3 physics programme, a novel detector concept has been envisioned. At its core is a tracker based on silicon pixel sensors, covering a large pseudo-rapidity range and installed within a new superconducting magnet system. To achieve the ultimate pointing resolution, a retractable high-resolution vertex detector is to be placed in the beampipe. The tracking is complemented by particle identification over the full acceptance, realised with different technologies, including silicon-based time-of-flight sensors. Additional specialised detectors are being considered to further extend the physics reach.
ALICE 3 will exploit completely new detector components to significantly extend the detector capabilities and to fully exploit the physics potential of the LHC. The October workshop marked the start of the discussion of ALICE 3 with the community at large and of the review process with the LHC experiments committee.
Within a particle accelerator, the surface of materials directly exposed to the beams interacts with the circulating particles and, in so doing, influences the local vacuum conditions through which those particles travel. Put simply: accelerator performance is linked inextricably to the surface characteristics of the vacuum beam pipes and chambers that make up the machine.
In this way, the vacuum vessel’s material top surface and subsurface layer (just a few tens of nm thick) determine, among many other characteristics, the electrical resistance of the beam image current, synchrotron light reflectivity, degassing rates and secondary electron yield under particle bombardment. The challenge for equipment designers and engineers is that while the most common structural materials used to fabricate vacuum systems – stainless steel, aluminium alloys and copper – ensure mechanical resistance against atmospheric pressure, they do not deliver the full range of chemical and physical properties required to achieve the desired beam performance.
Sputtering is one of the methods used to produce thin films by physical vapour deposition
Aluminium alloys, though excellent in terms of electrical conductivity, suffer from high secondary electron emission. On the latter metric, copper represents a better choice, but can be inadequate regarding gas desorption and mechanical performance. Stainless steel, the workhorse of vacuum technology thanks to its excellent mechanical and metallurgical behaviour, nonetheless lacks most of the required surface properties. The answer is clear: adapt the surface properties of these structural materials to the specific needs of the accelerator environment by coating them with more suitable materials, typically using electrochemical or plasma treatments. (For a review of electrochemical coating methods, see Surface treatment: secrets of success in vacuum science.)
Variations on the plasma theme
The emphasis herein is exclusively on plasma-based thin-film deposition, in which an electrically quasi-neutral state of matter (composed of positively and negatively charged particles) is put to work to re-engineer the physical and chemical properties of vacuum component/subsystem surfaces. A plasma can be produced by ionising gas atoms so that the positive charges are ions, and the negative ones are electrons. The most useful properties of the resultant gas plasma derive from the large difference in inertial mass between the particles carrying negative and positive charges. Owing to their much lower inertial mass, electrons are a lot more responsive than ions to variations of the electromagnetic field, leading to separation of charges and electrical fields within the plasma. What’s more, the particle trajectories for ions and electrons also differ markedly.
These characteristics can be exploited to deposit thin films and, more generally, to modify the properties of vacuum chamber and component surfaces. For such a purpose, noble-gas ions are extracted from a plasma and accelerated towards a negatively charged solid target. If the ions acquire enough kinetic energy (of the order of hundreds to thousands of eV), one of the effects of the bombardment is the extraction of neutral atoms from the target and their deposition on the surface of the substrate to be modified. This mechanism – called sputtering – is one of the methods used to produce thin films by physical vapour deposition (PVD), where film materials are extracted from a solid into a gas phase before condensing on a substrate.
Ions lost from the plasma are replenished by electron-impact ionisation of additional gas atoms. While the rate of ionisation improves with increasing gas density, an excessive gas density has a detrimental effect on the sputtered atoms (their trajectories are modified and their kinetic energy decreased by multiple collisions with gas atoms). The alternative is to increase the length of the electron trajectories by applying a magnetic field of several hundred gauss to the plasma.
Contrary to ions – which are affected minimally – electrons move around the lines of force of the magnetic field in longer helical-like curves, such that the probability of hitting an atom is higher. As electrons are sooner or later lost – either on the growing film or nearby surfaces – the plasma is refilled by secondary electrons extracted from the target (as a result of ion collisions). For a given set of parameters – among them target voltage, plasma power, gas pressure and magnetic flux density – the plasma ultimately attains stable conditions and a constant rate of deposition. Typical film thicknesses for accelerator applications range from a few tens of nm to 2–3 microns.
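The contrast between ion and electron behaviour in the magnetic field can be made concrete with a back-of-envelope gyroradius estimate, r = mv⊥/(|q|B). The field strength and particle energies below are illustrative assumptions, not values from the text:

```python
import math

# Gyro (Larmor) radius r = m * v_perp / (|q| * B) for a singly charged particle.
# Illustrative numbers only: B = 300 gauss (0.03 T), and 5 eV kinetic energy
# for both a plasma electron and an argon ion.
E_CHARGE = 1.602e-19          # elementary charge, C
M_ELECTRON = 9.109e-31        # electron mass, kg
M_ARGON = 39.95 * 1.661e-27   # argon-40 ion mass, kg

def gyroradius(mass_kg, energy_eV, b_tesla):
    """Gyroradius of a singly charged particle with the given kinetic energy."""
    v = math.sqrt(2 * energy_eV * E_CHARGE / mass_kg)  # speed from E = m v^2 / 2
    return mass_kg * v / (E_CHARGE * b_tesla)

B = 0.03  # 300 gauss in tesla
r_e = gyroradius(M_ELECTRON, 5.0, B)
r_ar = gyroradius(M_ARGON, 5.0, B)
print(f"electron: {r_e*1e3:.2f} mm, argon ion: {r_ar*1e3:.0f} mm")
```

At equal energy the radii scale as the square root of the mass ratio (a factor of roughly 270 for argon), so the electrons spiral tightly around the field lines on sub-millimetre orbits while the ions, with orbits of several centimetres, are barely deflected on the scale of a typical vacuum chamber.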
Unique requirements
The peculiarities of thin-film deposition for accelerator applications lie in the size of the objects to be coated and the purity of the coatings in question. Substrate areas, for example, range from a few cm² up to several m², in a great variety of 3D shapes and geometries. Large-aspect-ratio beam pipes that are several metres long or complicated multicell cavities for RF applications are typical substrates regularly coated at CERN. The coating process is implemented either in dedicated workshops or directly inside the accelerators during the retrofitting of installed equipment.
HiPIMS target geometries and coating parameters must be optimised for each family of accelerator components
The simplest sputtering configuration can be deployed when coating a cylindrical beam pipe. The target, which is made of a wire or a rod of the material to be deposited, is aligned along the longitudinal axis of the beam pipe. Argon is the most commonly used noble gas, at a pressure that depends on the cross-section – i.e. the smaller the diameter, the higher the pressure (a typical value for vacuum chambers that are a few centimetres in diameter is 0.1 mbar). The plasma is ignited by polarising the target negatively (at a few hundred volts) using a DC power supply while keeping the pipe grounded. It’s possible to reduce the pressure by one or two orders of magnitude if a magnetic field is applied parallel to the target (owing to the longer electron paths). In this case, the deposition technique is known as DC magnetron sputtering.
When the substrate is not a simple cylinder, however, the target design becomes more complicated. That’s because of the need to accommodate different target–substrate distances, while the angle of incidence of sputtered atoms on the substrate is also subject to change. As a result, the deposited film might have different thicknesses and uneven properties at different locations on the substrate (owing to dissimilar morphologies, densities and defects, including voids). These weaknesses have been addressed, in large part, over recent years with a new family of sputtering methods called high-power impulse magnetron sputtering (HiPIMS).
In HiPIMS, short plasma pulses (of the order of 10–100 μs) of high power density (kW/cm² regime) are applied to the target. The discharge is shut down between two consecutive pulses for a duration of about 100–1000 μs; in this way, the duty cycle is low (less than 10%) and the average power ensures there is no overheating and deformation of the target. The resulting plasma, though, is about 10 times denser (approximately 10¹³ ions/cm³) versus standard DC magnetron sputtering – a figure of merit that, thanks to a bias voltage applied to the substrate, ensures a higher fraction of ionised sputtered atoms are transported to the surfaces to be coated.
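The duty-cycle arithmetic is easy to check. The sketch below assumes a 50 μs pulse, a 950 μs off-time and an arbitrary peak power; none of these specific values are quoted in the text:

```python
# Back-of-envelope HiPIMS duty cycle and time-averaged power.
# Pulse parameters are illustrative, chosen within the ranges quoted
# in the text (10-100 us on, 100-1000 us off).
def duty_cycle(pulse_on_us, pulse_off_us):
    """Fraction of time the discharge is on."""
    return pulse_on_us / (pulse_on_us + pulse_off_us)

d = duty_cycle(50, 950)            # 50 us on, 950 us off -> 5% duty cycle
peak_power_kw = 500                # assumed peak power during the pulse
avg_power_kw = peak_power_kw * d   # average power the target must dissipate
print(f"duty cycle {d:.1%}, average power {avg_power_kw:.0f} kW")
```

The point of the low duty cycle is visible in the last line: even at very high peak power density, the target only has to dissipate a few percent of that figure on average, which keeps it from overheating and deforming.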
The impingement of such energetic ions produces denser films and reduces the columnar structure resulting from the deposition of sputtered atoms moving along lines of sight. As the bias voltage is not always a safe and practical solution, the CERN vacuum team has successfully tested the application of a positive pulse to the target immediately after the main negative pulse. The effect is an increase in energy of the ionised sputtered atoms, with equivalent results as per the bias voltage (though with a simpler implementation for accelerator components).
Owing to the variety of materials and shapes encountered along a typical (or atypical) beamline, the HiPIMS target geometries and coating parameters must be optimised for each distinct family of accelerator components. This optimisation phase is traditionally experimental, based on testing and measurement of “coupon samples” and then prototypes. In the last five years, however, the CERN team has reinforced these experimental studies with 3D simulations based on a particle-in-cell Monte Carlo/direct simulation Monte Carlo (PICMS/DSMC) code – a capability originally developed at the Fraunhofer Institute for Surface Engineering and Thin Films (IST) in Braunschweig, Germany.
Surface cleaning: putting plasmas to work
Notwithstanding their central role in thin-film deposition, plasmas are also used at CERN to clean surfaces for vacuum applications and to enhance the adherence of thin films. A case in point is the application of plasmas containing oxygen ions and free radicals (highly reactive chemical species) for the removal of hydrocarbons. In short: the ions and radicals are driven toward the contaminated surface, where they can decompose hydrocarbon molecules and form volatile species (e.g. CO and CO2) for subsequent evacuation.
It’s a method regularly used to clean beryllium surfaces (which cannot be treated by traditional chemical methods for safety reasons). If the impingement kinetic energy of the oxygen ions is about 100 eV, the chemical reaction rate on the surface is much larger than the beryllium sputtering rate, such that cleaning is possible without producing hazardous compounds of the carcinogenic metal.
Meanwhile, plasma treatments have recently been proposed for the cleaning of stainless-steel radioactive components when they are dismounted from accelerators, modified and then reinstalled. Using a remote plasma source, the energy of the plasma’s oxygen ions is chosen (<50 eV) so as to avoid sputtering of the component materials, thereby preventing radioactive atoms from entering the gas phase. The main difficulty here is to adapt the plasma source to the wealth of different geometries that are typical of accelerator components.
Plasma versatility
So much for the fundamentals of plasma processing, what of the applications? At CERN, the large-scale deployment of thin-film coatings began in the 1980s on the Large Electron–Positron (LEP) collider. To increase LEP’s collision energy to 200 GeV and above, engineering teams studied, and subsequently implemented, superconducting niobium (Nb) thin films deposited on copper (Cu) for the RF cavities (in place of bulk niobium).
This technology was also adopted for the Large Hadron Collider (LHC), the High Intensity and Energy ISOLDE (HIE ISOLDE) project at CERN and other European accelerators operating at fields up to 15 MV/m. The advantages are clear: lower cost, better thermal stability (thanks to the higher thermal conductivity of the copper substrate) and reduced sensitivity to trapped magnetic fields. The main drawback of Nb/Cu superconducting RF cavities is that the power lost per RF cycle grows exponentially with the accelerating electric field (owing to the resistivity and magnetic permeability of the Nb film). This weakness, although investigated extensively, has eluded explanation and mitigation for the past 20 years.
NEG coatings comprise a mixture of titanium, zirconium and vanadium
It’s only lately, in the frame of studies for the proposed electron–positron Future Circular Collider (FCC-ee), that researchers have shed light on this puzzling behaviour. Those insights are due, in large part, to a deeper theoretical analysis of Nb thin-film densification as a result of HiPIMS, though a parallel line of investigation involves the manufacturing of seamless copper cavities and their surface electropolishing. In both cases, the objective is the reduction of defects in the substrate to enhance film adherence and purity.
Related studies have shown that Nb films on Cu can perform as well as bulk Nb in terms of superconducting RF properties, though coating materials other than Nb are also under investigation. Today, for example, the CERN vacuum group is evaluating Nb3Sn and V3Si – both of which are part of the A15 crystallographic group and exhibit superconducting transition temperatures of about 18 K (i.e. 9 K higher than Nb). This higher critical temperature would allow the use of RF cavities operating at 4.3 K (instead of 1.9 K), yielding significant simplification of the cryogenic infrastructure and reductions in electrical energy consumption. Even so, intense development is still necessary before these coatings can really challenge pure Nb films – not least because A15 films are brittle, plus the coating of such materials is tricky (given the need to reproduce a precise stoichiometry and crystallographic structure).
Game-changing innovations
Another wide-scale application of plasma processing at CERN is in the deposition of non-evaporable-getter (NEG) thin-film coatings, specialist materials originally developed to provide distributed vacuum pumping for the LHC. NEG coatings comprise a mixture of titanium, zirconium and vanadium with a typical composition around 30:30:40, respectively. For plasma deposition of NEG films, the target (comprising three interlacing elemental wires) is pulled along the main axis of the beam pipes. Once the coated vacuum chambers are installed within an accelerator and pumped out, the NEG films undergo heating for 24 hours at temperatures ranging from 180 to 250 °C – a process known as activation, in which the superficial oxide layer and any contaminants are dissolved into their bulk.
The clean surfaces obtained in this way chemically adsorb most of the gas species in the vacuum system at room temperature – except for noble gases (which are chemically inert) and methane (for which small auxiliary pumps are necessary). The NEG-coated surfaces provide an impressively high pumping speed and, thanks to their cleanliness, a lower desorption yield when bombarded by electrons, photons and ions – and all this with minimal space occupancy. Moreover, owing to their maximum secondary electron yields (δmax) below 1.3, NEG coatings avoid the development of electron multipacting, the main cause of electron clouds in beam pipes (and related unfavourable impacts on beam performance, equipment operation and cryogenic heat load).
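Why a low δmax matters can be illustrated with a deliberately crude toy model in which each wall impact multiplies the electron population by an effective yield. The real multipacting threshold (around 1.3 rather than 1.0) depends on the beam structure and the energy spectrum of the impacting electrons, all of which this sketch ignores; the numbers below are purely illustrative:

```python
# Toy model of electron-cloud build-up: every wall impact multiplies the
# electron population by an effective secondary yield delta_eff. This is a
# drastic simplification of the real dynamics, but it shows the essential
# point: growth is exponential in the number of impacts, so the population
# explodes for delta_eff > 1 and dies away for delta_eff < 1.
def population_after(n_impacts, delta_eff, n0=1.0):
    """Electron population after n_impacts wall collisions."""
    return n0 * delta_eff ** n_impacts

for delta in (0.9, 1.0, 1.3):
    print(f"delta_eff = {delta}: population after 20 impacts = "
          f"{population_after(20, delta):.2f}")
```

Twenty impacts at an effective yield of 0.9 shrink the population by almost an order of magnitude, while a yield of 1.3 grows it by more than two orders of magnitude; keeping the surface yield low therefore keeps the avalanche from starting.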
More broadly, plasma processing of NEG coatings represents a transformative innovation in the implementation of large-scale vacuum systems. Hundreds of beam pipes were NEG-coated for the long straight section of the LHC, including the experimental vacuum chambers inserted in the four gigantic detectors. Beyond CERN, NEG coatings have also been employed widely in other large scientific instruments, including the latest generation of synchrotron light sources.
Of course, NEG coatings require thermal activation, so cannot be applied in vacuum systems that are unheatable (e.g. vacuum vessels that operate at cryogenic temperatures or legacy accelerators that may need retrofitting). For these specific cases, the CERN vacuum team has, over the past 15 years, been developing and iterating low-δmax carbon coatings composed mostly of amorphous carbon (a-C) with prevailing graphitic-like bonding among the carbon atoms.
Even though a-C thin films were originally studied for CERN’s older Super Proton Synchrotron (SPS), they are now the baseline solution for the beam screen of the superconducting magnets in the long straight section of the High-Luminosity LHC. A 100 nm-thick coating is deposited either in the laboratory for the new magnets (located on both sides of the ATLAS and CMS detectors) or in situ for the ones already installed in the tunnel (on both sides of LHCb and ALICE).
Production of denser plasmas will be key for future applications in surface treatments for accelerators
The in situ processing has opened up another productive line of enquiry: the possibility of treating the surface of beam screens (15 m long, a few cm diameter) directly in the accelerators with the help of mobile targets. The expectation is that these innovative coating methods for a-C could, over time, also be applied to improve the performance of installed vacuum chambers in the LHC’s arcs, without the need to dismount magnets and cryogenic connections.
Opportunity knocks
Looking ahead, the CERN vacuum team has plenty of ideas regarding further diversification of plasma surface treatments – though the direction of travel will ultimately depend on the needs of future studies, projects and collaborations. Near term, for example, there are possible synergies with the Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE), an accelerator R&D project based at CERN that’s investigating the use of plasma wakefields driven by a proton bunch to accelerate charged particles over short distances. Certainly, the production of denser plasmas (and their manipulation) will be key for future applications in surface treatments for accelerators.
Another area of interest is the use of plasma-assisted chemical vapour deposition to extend the family of materials that can be deposited. For the longer term, the coating of vacuum systems with inert materials that allow the attainment of very low pressures (in the ultrahigh vacuum regime) in a short timeframe (five years) without bakeout remains one of the most ambitious targets.
The reliability of the CERN vacuum systems is very much front-and-centre as the restart of the LHC physics programme approaches in mid-2022. The near-term priority is the recommencement of beam circulation in vacuum systems that were open to the air for planned interventions and modification – sometimes for several days or weeks – during Long Shutdown 2 (LS2), a wide-ranging overhaul of CERN’s experimental infrastructure that’s been underway since the beginning of 2019.
With LS2 now drawing to a close and pilot beam already circulated in October for a general check of the accelerator chain, it’s worth revisiting the three operational objectives that CERN’s engineering teams set out to achieve during shutdown: consolidation of the LHC dipole diodes (essential safety elements for the superconducting magnets); the anticipation of several interventions required for the High-Luminosity LHC (HL–LHC) project (the successor to the LHC, which will enter operation in 2028); and the LHC Injectors Upgrade project to enhance the injection chain so that beams compatible with HL–LHC expectations can be injected into CERN’s largest machine.
“The CERN vacuum team has made fundamental contributions to the achievement of the LS2 core objectives and other parallel activities,” notes Paolo Chiggiato, head of the CERN vacuum, surfaces and coatings group. “As such, we have just completed an intense period of work in the accelerator tunnels and our laboratories, as well as running and supporting numerous technical workshops.”
As for vacuum specifics, all of the LHC’s arcs were vented to the air after warm-up to room temperature; all welds were leak-checked after the diode consolidation (with only one leak found among the 1796 tests performed); while the vacuum team also replaced or consolidated around 150 turbomolecular pumps acting on the cryogenic insulation vacuum. In total, 2.4 km of non-evaporable-getter (NEG)-coated beampipes were also opened to the air at room temperature – an exhaustive programme of work spanning mechanical repair and upgrade (across 120 weeks), bakeout (spread across 90 weeks) and NEG activation (over 45 weeks). “The vacuum level in these beampipes is now in the required range, with most of the pressure readings below 10⁻¹⁰ mbar,” explains Chiggiato.
Close control
Another LS2 priority for Chiggiato and colleagues involved upgrades to CERN’s vacuum control infrastructure, with the emphasis on reducing single points of failure and the removal of confusing architectures (i.e. systems with no clear separation of function amongst the different programmable logic controllers). “For the first time,” adds Chiggiato, “mobile vacuum equipment was controlled and monitored by wireless technologies – a promising communication choice for distributed systems and areas of the accelerator complex requiring limited stay.”
Elsewhere, in view of the higher LHC luminosity (and consequent increased radioactivity) following LS2 works, the vacuum group developed and installed advanced radiation-tolerant electronics to control 100 vacuum gauges and valves in the LHC dispersion suppressors. This roll-out represents the first step of a longer-term campaign that will be scaled during the next Long Shutdown (LS3 is scheduled for 2025–2027), including the production of 1000 similar electronic cards for vacuum monitoring. “In parallel,” says Chiggiato, “we have renewed the vacuum control software – introducing resilient, easily scalable and self-healing web services technologies and frameworks used by some of the biggest names in industry.”
Success breeds success
In the LHC experimental area, meanwhile, the disassembly of the vacuum chambers at the beginning of LS2 required 93 interventions and 550 person-hours of work in the equipment caverns. Reinstallation has progressed well in the four core LHC experiments, with the most impressive refit of vacuum hardware in the CMS and LHCb detectors.
For the former, the vacuum team installed a new central beryllium chamber (internal diameter 43.4 mm, 7.3 m long), while 12 new aluminium chambers were manufactured, surface-finished and NEG-coated at CERN. Their production comprised eight separate quality checks, from surface treatment to performance assessment of the NEG coating. “The mechanical installation – including alignments, pump-down and leak detection – lasted two months,” explains Chiggiato, “while the bake-out equipment installation, bake-out process, post-bake-out tests and venting with ultrapure neon required another month.”
Thankfully, creative problem-solving is part of the vacuum team’s DNA
In LHCb, the team contributed to the new version of the Vertex Locator (VELO) sub-detector. The VELO’s job is to pick out B mesons from the multitude of other particles produced – a tricky task as their short lives will be spent close to the beam. To find them, the VELO’s RF box – a delicate piece of equipment filled with silicon detectors, electronics and cooling circuits – must be positioned perilously close to the point where protons collide. In this way, the sub-detector faces the beam at a distance of just 5 mm, with an aluminium window thinned down to 150 μm by chemical etching prior to the deposition of a NEG coating.
As the VELO encloses the RF box, and both volumes are under separate vacuum, the pumpdown is a critical operation because pressure differences across the thin window must be lower than 10 mbar to ensure mechanical integrity. “This work is now complete,” says Chiggiato, “and vacuum control of the VELO is in the hands of the CERN vacuum team after a successful handover from specialists at Nikhef [the Dutch National Institute for Subatomic Physics].”
Wrapping up a three-year effort, the vacuum team’s last planned activity in LS2 involves the bake-out of the ATLAS and CMS beampipe in early 2022. “There was no shortage of technical blockers and potential show-stoppers during our LS2 work programme,” Chiggiato concludes. “Thankfully, creative problem-solving is part of the vacuum team’s DNA, as is the rigorous application of vacuum best practice and domain knowledge accumulated over decades of activity. Ours is a collective mindset, moreover, driven by a humble approach to such complex technological installations, where every single detail can have important consequences.”
A detailed report on the CERN vacuum team’s LS2 work programme – including the operational and technical challenges along the way – will follow in the March/April 2022 issue of CERN Courier magazine.
Christian Day is a vacuum scientist on a mission – almost evangelically so. As head of the Vacuum Lab at Karlsruhe Institute of Technology (KIT), part of Germany’s renowned network of federally funded Helmholtz Research Centres, Day and his multidisciplinary R&D team are putting their vacuum know-how to work in tackling some of society’s “grand challenges”.
Thinking big goes with the territory. As such, the KIT Vacuum Lab addresses a broad-scope canvas, one that acknowledges the core enabling role of vacuum technology in all manner of big-science endeavours – from the ITER nuclear-fusion research programme to fundamental studies of the origins of the universe (Day is also technical lead for cryogenic vacuum on the design study for the Einstein Telescope, a proposed next-generation gravitational-wave observatory).
Here, Day tells CERN Courier about the Vacuum Lab’s unique R&D capabilities and the importance of an integrated approach to vacuum system development in which modelling, simulation and experimental validation all work in tandem to foster process and technology innovation.
What are the long-term priorities of the KIT Vacuum Lab?
Our aim is to advance vacuum science and technology along three main pathways: an extensive R&D programme in collaboration with a range of university and scientific partners; design and consultancy services for industry; and vacuum education to support undergraduate and postgraduate science and engineering students at KIT. It’s very much a multidisciplinary effort, with a staff team of 20 vacuum specialists working across physics, software development and the core engineering disciplines (chemical, mechanical, electrical). They’re supported, at any given time, by a cohort of typically five PhD students.
So what does that mean in terms of the team’s core competencies?
At a headline level, we’re focused around the two fundamental challenges in modern vacuum science: the realisation of a physically consistent description of outgassing behaviours for a range of materials and vacuum regimes; also the development of advanced vacuum gas dynamics methods and associated supercomputer algorithms.
As such, one of the main strengths of the KIT Vacuum Lab is our prioritisation of predictive code development alongside experimental validation – twin capabilities that enable us to take on the design, delivery and project-management of the most complex vacuum systems. The resulting work programme is nothing if not diverse – from very-large-scale vacuum pumping systems for nuclear fusion to contamination-free vacuum applications in advanced manufacturing (e.g. extreme UV lithography and solar-cell fabrication).
What sets the KIT Vacuum Lab apart from other vacuum R&D programmes?
Over the last 10 years or so, and very much driven by our contribution to the ITER nuclear fusion project in southern France, we have developed a unique and powerful software capability to model vacuum regimes at a granular level – from atmospheric pressure all the way down to extreme-high-vacuum (XHV) conditions (10⁻¹⁰ Pa and lower). This capability, and the massive computational resources that make it possible, are being put to use across all manner of advanced vacuum applications – quantum computing, HyperLoop transportation systems and gravitational-wave experiments, among others.
Early-career researchers often mistakenly think of vacuum as a somewhat old-fashioned service that they can buy off the shelf
The Vacuum Lab’s organising principles are built around “integrated process development”. What does that look like operationally?
It means we take a holistic view regarding the development of vacuum processes, which allows us to identify the main influences in the vacuum system and to map them theoretically or experimentally. An iterative design evolution must not only be based on efficient models; it must also be validated and parameterised by using experimental data from different levels of the process hierarchy. Experimental data are indispensable to evaluate the pros and cons of competing models and to quantify the uncertainties of model predictions.
In turn, the department’s research structure is set up to address elementary processes and unit functions within a vacuum system. When choosing a vacuum pump, for example, it’s important to understand how the pump design, underlying technology and connectivity will influence other parts of the vacuum system. It’s also necessary, though too often forgotten, for the end-user to understand the ultimate purpose of the vacuum system – the why – so that they can address any issues arising in terms of the vacuum science fundamentals and underlying physics.
What are your key areas of emphasis in fusion research right now?
Our vacuum R&D programme in nuclear fusion is carried out under the umbrella of EUROfusion, a consortium agreement signed by 30 research organisations and universities from 25 EU countries plus the UK, Switzerland and Ukraine. Collectively, the participating partners in EUROfusion are gearing up for the ITER experimental programme (due to come online in 2025), with a longer-term focus on the enabling technologies – including the vacuum systems – for a proof-of-principle fusion power plant called DEMO. The latter is still at the concept phase, though provisionally scheduled for completion by 2050.
As EUROfusion project leader for the tritium fuel cycle, I’m overseeing KIT’s vacuum R&D inputs to the DEMO fusion reactor – a collective effort that we’ve labelled the Vacuum Pumping Task Force and involving multiple research/industry partners. The vacuum systems in today’s nuclear fusion reactors – including the work-in-progress ITER facility – rely largely on customised cryosorption pumps for vacuum pumping of the main reaction vessel and the neutral beam injector (essentially by trapping gases and vapours on an ultracold surface). DEMO, though, will require a portfolio of advanced pumping concepts to be developed for ongoing operations, including metal-foil and mercury-based diffusion and ring pumps as well as high-capacity non-evaporable-getter (NEG) materials (see “Next-generation pump designs: from ITER to the DEMO fusion project”).
Next-generation pump designs: from ITER to the DEMO fusion project
By Yannick Kathage
When the ITER experimental reactor enters operation later this decade, nuclear fusion will be realised in a tokamak device that uses superconducting magnets to contain a hot plasma in the shape of a torus. Herein the fusion reaction between deuterium and tritium (DT) nuclei will produce one helium nucleus, one neutron and, in the process, liberate huge amounts of energy (that will heat up the walls of the reactor to be exploited in a power cycle for electricity production).
In this way, fusion reactors like ITER must combine large-scale vacuum innovation with highly customised pumping systems. The ITER plasma chamber (1400 m³), for example, will be pumped at fine vacuum (several Pa) against large gas throughputs in the course of the plasma discharge – the so-called burn phase for energy generation. There follows a dwell phase, when the chamber will be pumped down (for approximately 23 minutes) to moderately high vacuum (5 × 10⁻⁴ Pa), before initiating the next plasma discharge. Meanwhile, surrounding the plasma chamber is an 8500 m³ cryostat to provide a 10 mPa cryogenic insulation vacuum (required for the operation of the superconducting magnets).
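The dwell-phase figures imply a required effective pumping speed via the ideal pump-down law p(t) = p₀·exp(−St/V). The sketch below assumes constant pumping speed and no outgassing (a real system must over-dimension for both), and takes "several Pa" as 5 Pa:

```python
import math

# Effective pumping speed S needed for the ITER dwell phase, from the
# ideal pump-down law p(t) = p0 * exp(-S*t/V), i.e. S = (V/t) * ln(p0/p1).
# Assumes constant speed and no outgassing; p0 = 5 Pa is an assumption
# standing in for the "several Pa" quoted in the text.
V = 1400.0           # plasma-chamber volume, m^3
p0, p1 = 5.0, 5e-4   # start / target pressure, Pa
t = 23 * 60.0        # dwell time, s

S = (V / t) * math.log(p0 / p1)  # m^3/s
print(f"required effective pumping speed ~ {S:.1f} m^3/s")
```

The result, of order 10 m³/s (10,000 l/s) sustained at the chamber, gives a feel for why a battery of large cryosorption pumps operating in rotation is needed rather than a single conventional pump set.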
A key design requirement for all of ITER’s pumping systems is to ensure compatibility with tritium, a radioactive hydrogen isotope. Effectively, this rules out the use of elastomer seals (only metal joints are permitted) and the use of oil or lubricants (which are destroyed by tritium contamination). Specifically, the six torus exhaust systems are based on so-called discontinuous cryosorption pumps, cooled with supercritical helium gas at 5 K and coated with activated charcoal as sorbent material to capture helium (primarily), a mix of hydrogen isotopes and various impurities from the plasma chamber.
As with all accumulation pumps, these cryopumps must be regenerated by heating on a regular basis. To provide a constant pumping speed on the torus during the plasma pulse, it’s therefore necessary to “build in” additional cryopumping capacity – such that four systems are always pumping, while the other two are in regeneration mode. What’s more, these six primary cryopumps are backed by a series of secondary cryopumps and, finally, by dry mechanical rough pumps that compress to ambient pressure.
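The four-pumping/two-regenerating rotation described above amounts to a round-robin schedule across the six primary cryopumps. A minimal sketch, in which the pump labels and number of slots shown are purely illustrative:

```python
from collections import deque

# Illustrative round-robin rotation for six cryopumps: at any time,
# four pump while two regenerate, so the torus always sees a constant
# pumping speed. Pump names and slot count are hypothetical.
pumps = deque([f"CP{i}" for i in range(1, 7)])

def next_slot(pumps):
    """Advance the rotation: the two pumps at the head of the queue
    regenerate, the remaining four pump, then the queue rotates."""
    regenerating = list(pumps)[:2]
    pumping = list(pumps)[2:]
    pumps.rotate(-2)
    return pumping, regenerating

for slot in range(3):
    pumping, regen = next_slot(pumps)
    print(f"slot {slot}: pumping={pumping} regenerating={regen}")
```

Each call advances the schedule by one regeneration interval, so every pump periodically cycles through regeneration while four units are always online.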
Scaling up for DEMO
The operational principle of the cryosorption pump means that large quantities of tritium accumulate within the sorbent material over time – a safety concern that’s very much on the radar of the ITER management team as well as Europe’s nuclear regulatory agencies. Furthermore, this “tritium inventory” will only be amplified in the planned future DEMO power plant, providing real impetus for the development of new, and fully continuous, pumping technologies tailored for advanced fusion applications.
Among the most promising candidates in this regard is the so-called metal-foil pump (MFP), which uses a plasma source to permeate, and ultimately compress, a flux of pure hydrogen isotopes through a group V metal foil (e.g. niobium or vanadium) using an effect called superpermeation. The driving force here is an energy gradient in the gas species upstream and downstream of the foil (due to plasma excitation, but largely independent of pressure). It’s worth noting that the KIT Vacuum Pumping Task Force initiated development work on the MFP concept five years ago, with a phased development approach targeting “mature technical exploitation” of superpermeation pumping systems by 2027.
If ultimately deployed in a reactor context, the MFP will yield two downstream components: a permeated gas stream (comprising D and T, which will be cycled directly back into the fusion reactor) and an unprocessed stream (comprising D, T, He and impurities), which undergoes extensive post-processing to yield an additional D, T feedstock. It is envisaged that both gas streams, in turn, will be pumped by a train of continuously working rough pumps that use mercury as a working fluid (owing to the metal’s compatibility with tritium). As the DEMO plant will feature a multibarrier concept for the confinement of tritium, mercury can also be circulated safely in a closed-loop system.
One of those alternative roughing pump technologies is also being developed by the KIT Vacuum Pumping Task Force – specifically, a full stainless-steel mercury ring pump that compresses to ambient pressure. Progress has been swift since the first pump-down curve with this set-up was measured (in 2013) and the task force now has a third-generation design working smoothly in the lab, albeit with all rotary equipment redesigned to take account of the fact that mercury has a specific weight 13 times greater than that of water (the usual operating fluid in a ring pump).
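The “13 times” figure above can be checked against tabulated densities; at fixed geometry and rotor speed, the hydraulic power of a liquid-ring pump scales roughly with the working-fluid density (a simplifying assumption, not a KIT design figure):

```python
# Rough scaling check for the mercury ring pump. At fixed geometry
# and rotor speed, the hydraulic power of a liquid-ring pump scales
# roughly with the density of the working fluid - a simplifying
# assumption for illustration, not a KIT design calculation.
rho_water = 998.0    # kg/m^3 at ~20 degC
rho_hg = 13546.0     # kg/m^3 at ~20 degC

ratio = rho_hg / rho_water
print(f"density ratio mercury/water: {ratio:.1f}")
# → density ratio mercury/water: 13.6
```

That order-of-magnitude jump in fluid loading is why all the rotary equipment had to be redesigned rather than simply adapted from a water-ring pump.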
Hydrogen impacts
While mercury-based vacuum pumping is seen as a game-changer exclusively for fusion applications, it’s evident that the MFP is attracting wider commercial attention. That’s chiefly because superpermeation works only for the hydrogenic species in the gas mixture being pumped – thereby suggesting the basis of a scalable separation functionality. In the emerging hydrogen economy, for example, it’s possible that MFP technology, if suitably dimensioned, could be implemented to continuously separate hydrogen from the synthesis gas of classical gasification reactions (steam reforming) and at purities that can otherwise only be achieved via electrolytic processes (which require huge energy consumption).
Put simply: MFP technology has the potential to significantly reduce the ecological footprint associated with the mass-production of pure hydrogen. As such, once the MFP R&D programme achieves sufficient technical readiness, the KIT Vacuum Lab will be seeking to partner with industry to commercialise the associated know-how and expertise.
Yannick Kathage is a chemical engineering research student in the KIT Vacuum Lab.
How does all that translate into the unique facilities and capabilities within the KIT Vacuum Lab?
Our work on fusion vacuum pumps requires specialist domain knowledge regarding, for example, the safe handling of mercury, as well as how to manage, measure and mitigate the radioactivity hazard associated with tritium-compatible vacuum systems. We have set up a dedicated mercury lab, for example, to investigate the fluid dynamics of mercury diffusion pumps, as well as a test station to optimise their performance at a system level.
Many of the other laboratory facilities are non-standard and not found anywhere else. Our Outgassing Measurement Apparatus (OMA), for example, uses the so-called difference method for high-resolution measurements of very low levels of outgassing across a range of temperatures (from ambient to 570 K). The advantage of the difference method is that a second vacuum chamber, which is identical to the sample chamber, is used as a reference in order to directly subtract the background outgassing rate of the chamber.
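The difference method described above amounts to subtracting the reference chamber’s pressure rate-of-rise (the background) from that of the identical sample chamber. A minimal sketch, in which every number is hypothetical rather than an OMA specification:

```python
# Minimal sketch of the difference method for outgassing measurement:
# the pressure rate-of-rise in an identical empty reference chamber
# (the background) is subtracted from that of the sample chamber.
# All values are hypothetical, not OMA specifications.
V = 0.01             # chamber volume in m^3 (assumed)
A_sample = 0.05      # sample surface area in m^2 (assumed)

dpdt_sample = 2.0e-6   # Pa/s, rate of rise with sample (assumed)
dpdt_ref = 0.5e-6      # Pa/s, background rate of rise (assumed)

# Specific outgassing rate: q = V * (net dp/dt) / A
q = V * (dpdt_sample - dpdt_ref) / A_sample
print(f"specific outgassing rate: {q:.2e} Pa m^3 / (s m^2)")
# → specific outgassing rate: 3.00e-07 Pa m^3 / (s m^2)
```

The direct subtraction is what lets the method resolve very low outgassing rates: the chamber’s own background contribution cancels out rather than having to be characterised separately.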
Meanwhile, our TransFlow facility allows us to generate fluid flows at different levels of rarefaction, and across a range of channel geometries, to validate our in-house code development. Even TIMO, our large multipurpose vacuum vessel – a workhorse piece of kit in any vacuum R&D lab – is heavily customised, offering temperature cycling from 450 K down to 4 K.
What about future plans for the KIT Vacuum Lab?
A significant expansion of the lab is planned over the next four years, with construction of a new experimental hall to house a 1:1 scale version of the vacuum section of the DEMO fuel cycle. This facility – the catchily titled Direct Internal Recycling Integrated Development Platform Karlsruhe, or DIPAK – will support development and iteration of key DEMO vacuum systems and associated infrastructure, including a large vacuum vessel to replicate the torus – a non-trivial engineering challenge at 30 tonnes, 7 m long and 3.5 m in diameter.
How do you attract the brightest and best scientists and engineers to the KIT Vacuum Lab?
The specialist teaching and lecture programme that the vacuum team provides across the KIT campus feeds our talent pipeline and helps us attract talented postgraduates. Early-career researchers often think – mistakenly – that vacuum is somehow old-fashioned and a “commoditised service” that they can buy off-the-shelf. Our educational outreach shows them otherwise, highlighting no shortage of exciting R&D challenges to be addressed in vacuum science and technology – whether that’s an exotic new pumping system for nuclear fusion or a low-outgassing coating for an accelerator beamline.
The multidisciplinary nature of the vacuum R&D programme certainly helps to broaden our appeal, as does our list of high-profile research partners spanning fundamental science (e.g. the Weizmann Institute of Science in Rehovot, Israel), the particle accelerator community (e.g. TRIUMF in Canada) and industry (e.g. Leybold and Zeiss in Germany). Wherever they are, we’re always keen to talk to talented candidates interested in working with us.