Topics

Digging deeper into invisible Higgs-boson decays

ATLAS figure 1

Studies of the Higgs boson by ATLAS and CMS have observed and measured a broad spectrum of its production and decay mechanisms. Its relatively long lifetime and small expected width (4.1 MeV, compared with the GeV-range decay widths of the W and Z bosons) make the Higgs boson a sensitive probe for small couplings to new states that could measurably distort its branching fractions. The search for invisible or as-yet undetected decay channels is thus highly relevant.
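
To set the scale (a textbook conversion, not a number from the ATLAS analyses), the width corresponds to a lifetime of

\[
\tau_H = \frac{\hbar}{\Gamma_H} \approx \frac{6.6 \times 10^{-22}\,\text{MeV s}}{4.1\,\text{MeV}} \approx 1.6 \times 10^{-22}\,\text{s},
\]

roughly a thousand times longer than that of the W or Z boson, whose GeV-scale widths imply lifetimes of order 10⁻²⁵ s.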

Dark-matter (DM) particles created in LHC collisions would have no measurable interaction with the ATLAS detector and thus would be “invisible”, but could still be detected via the observation of missing transverse momentum in an event, similarly to neutrinos. The Standard Model (SM) predicts the Higgs boson to decay invisibly via H → ZZ* → 4ν in only 0.1% of cases. However, this value could be significantly enhanced if the Higgs boson decays into a pair of (light enough) DM particles. Thus, by constraining the branching fraction of Higgs-boson decays to invisible particles it is possible to constrain DM scenarios and probe other physics beyond the SM (BSM).
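
For reference, the missing transverse momentum is the standard momentum imbalance in the plane transverse to the beams,

\[
\vec{p}_T^{\,\text{miss}} = -\sum_{i\,\in\,\text{visible}} \vec{p}_{T,i},
\]

so a Higgs boson that recoils against visible activity (a jet, a photon or a leptonically decaying Z boson) while decaying to DM particles leaves a large imbalance in the detector.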

The ATLAS collaboration has performed comprehensive searches for invisible decays of the Higgs boson considering all its major production modes: vector-boson fusion with and without additional final-state photons, gluon fusion in association with a jet from initial-state radiation, and associated production with a leptonically decaying Z boson or a top quark–antiquark pair. The results of these searches have now been combined, including inputs from Run 1 and Run 2 analyses. They yield an upper limit of 10.7% on the branching ratio of the Higgs boson to invisible particles at 95% confidence level, with an unprecedented expected sensitivity of 7.7%. The result is used to extract upper limits on the spin-independent DM–nucleon scattering cross section for DM masses smaller than about 60 GeV in a variety of Higgs-portal models (figure 1). In this range, and for the models considered, invisible Higgs-boson decays are more sensitive than direct DM–nucleon scattering experiments.

ATLAS figure 2

An alternative way to constrain possible undetected decays of the Higgs boson is to measure its total decay width ΓH. Combining the observed value of the width with measurements of the branching fractions to observed decays allows the partial width for decays to new particles to be inferred. Directly measuring ΓH at the LHC is not possible as it is much smaller than the detector resolution. However, ΓH can be constrained by taking advantage of an unusual feature of the H → ZZ(*) decay channel: the rapid increase in available phase space for the H → ZZ(*) decay as the ZZ invariant mass approaches the 2mZ threshold counteracts the mass dependence of Higgs-boson production. Furthermore, this far “off-shell” production above 2mZ has a negligible ΓH dependence, unlike “on-shell” production near the Higgs-boson mass of 125 GeV. Comparing the Higgs-boson production rates in these two regions therefore allows an indirect measurement of ΓH. Although some assumptions are required (e.g. that the relation between on-shell and off-shell production is not modified by BSM effects), the measurement is sensitive to the value of ΓH expected in the SM. Recently, ATLAS measured the off-shell production cross-section using both the four-charged-lepton (4ℓ) and two-charged-lepton-plus-two-neutrino (2ℓ2ν) final states, finding evidence for off-shell Higgs-boson production with a significance of 3.3σ (figure 2). By combining the previously measured on-shell Higgs-boson production cross-section with the off-shell one, ΓH was found to be 4.5 +3.3 −2.5 MeV, which agrees with the SM prediction of 4.1 MeV but leaves plenty of room for possible BSM contributions.
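
Schematically, and under the stated assumption that BSM effects do not modify the on-shell/off-shell relation, the on-shell rate depends on the production and decay couplings and on the total width, whereas the off-shell rate depends on the couplings alone:

\[
\sigma_{\text{on}} \propto \frac{g_{\text{prod}}^{2}\,g_{\text{dec}}^{2}}{\Gamma_H},
\qquad
\sigma_{\text{off}} \propto g_{\text{prod}}^{2}\,g_{\text{dec}}^{2}
\;\;\Rightarrow\;\;
\Gamma_H = \Gamma_H^{\text{SM}}\,\frac{\mu_{\text{off}}}{\mu_{\text{on}}},
\]

where μ_on and μ_off are the measured signal strengths (rates relative to the SM expectation) in the two regions.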

This sensitivity will improve thanks to the new data to be collected in Run 3 of the LHC, which should more than triple the size of the Run 2 dataset.

Design principles of theoretical physics

“Now I know what the atom looks like!” Ernest Rutherford’s simple statement belies the scientific power of reductionism. He had recently discovered that atoms have substructure, notably that they comprise a dense positively charged nucleus surrounded by a cloud of negatively charged electrons. Zooming forward in time, that nucleus ultimately gave way further when protons and neutrons were revealed at its core. A few stubborn decades later they too gave way, with our current understanding being that they are composed of quarks and gluons. At each step a new layer of nature is unveiled, sometimes more, sometimes less numerous in “building blocks” than the one prior, but in every case delivering explanations, even derivations, for the properties (in practice, parameters) of the previous layer. This strategy, broadly defined as “build microscopes, find answers”, has been tremendously successful, arguably for millennia.

Natural patterns

While investigating these successively explanatory layers of nature, broad patterns emerge. One of these is known colloquially as “naturalness”. This pattern asserts that, in reversing the direction and going from a microscopic theory, “the UV completion”, to its larger-scale shell, “the IR”, the values of parameters measured in the latter are, essentially, “typical”. Typical, in the sense that they reflect the scales, magnitudes and, perhaps most importantly, the symmetries of the underlying UV completion. As Murray Gell-Mann once said: “everything not forbidden is compulsory”.

So, if some symmetry is broken by a large amount by some interaction in the UV theory, the same symmetry, in whatever guise it may have adopted, will also be broken by a large amount in the IR theory. The only exception to this is accidental fine-tuning, where large UV-breakings can in principle conspire and give contributions to IR-breakings that, in practical terms, accidentally cancel to a high degree, giving a much smaller parameter than expected in the IR theory. This is colloquially known as “unnaturalness”.

There are good examples of both instances. There is no symmetry in QCD that could keep a proton light; unsurprisingly, it has a mass of the same order as the dominant mass scale in the theory, the QCD scale, mp ~ ΛQCD. But there is a symmetry in QCD that keeps the pion light. The only parameters in the UV theory that break this symmetry are the light quark masses. Thus, the pion mass-squared is expected to be around m²π ~ mq ΛQCD. Turns out, it is.
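
A back-of-envelope check of this scaling, with illustrative numbers rather than a precise chiral-perturbation-theory calculation: taking mq of a few MeV for the light quarks and ΛQCD of a few hundred MeV,

\[
m_\pi \sim \sqrt{m_q\,\Lambda_{\text{QCD}}} \sim \sqrt{(7\,\text{MeV})(300\,\text{MeV})} \approx 45\,\text{MeV},
\]

within a factor of a few of the measured 135 MeV – natural, in the sense above – while no such suppression protects the proton, which sits at 938 MeV, right at the QCD scale as expected.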

There are also examples of unnatural parameters. If you measure enough different physical observables, observations that are unlikely on their own become possible in a large ensemble of measurements – a sort of theoretical “look-elsewhere effect”. For example, consider the fact that the Moon almost perfectly obscures the Sun during a total solar eclipse. There is no symmetry which requires that the angular size of the Moon should almost match that of the Sun to an Earth-based observer. Yet, given many planets and many moons, this will of course happen for some planetary systems.
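
The coincidence is easy to quantify with standard astronomical numbers (quoted here purely for illustration):

\[
\theta_{\text{Moon}} \approx \frac{2 \times 1737\,\text{km}}{3.84 \times 10^{5}\,\text{km}} \approx 0.52^{\circ},
\qquad
\theta_{\text{Sun}} \approx \frac{2 \times 6.96 \times 10^{5}\,\text{km}}{1.496 \times 10^{8}\,\text{km}} \approx 0.53^{\circ},
\]

an agreement at the few-per-cent level with no symmetry behind it.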

However, if an observation of a parameter returns an apparently unnatural value, can one be sure that it is accidentally small? In other words, can we be confident we have definitively explored all possible phenomena in nature that can give rise to naturally small parameters? 

From 30 January to 3 February, participants of an informal CERN theory institute “Exotic Approaches to Naturalness” sought to answer this question. Drawn from diverse corners of the theorist zoo, more than 130 researchers gathered, both virtually and in person, to discuss questions of naturalness. The invited talks were chosen to expose phenomena in quantum field theory and beyond which challenge the naive naturalness paradigm.

Coincidences and correlations

The first day of the workshop considered how apparent numerical coincidences can lead to unexpectedly small parameters in the IR as the result of selection rules that do not obviously follow from a symmetry, known as “natural zeros”. A second set of talks considered how, going beyond quantum field theory, the UV and IR can potentially be unexpectedly correlated, especially in theories containing quantum gravity, and how this correlation can lead to cancellations that are not apparent from a purely quantum-field-theory perspective.

The second day was far-ranging, with the first talk unveiling some lower-dimensional theories of the sort one more readily finds in condensed-matter systems, in which “topological” effects lead to constraints on IR parameters. A second discussed how fundamental properties, such as causality, can unexpectedly impose constraints on IR parameters. The last demonstrated how gravitational effective theories, including those describing the gravitational waves emitted in binary black-hole inspirals, have their own naturalness puzzles.

The ultimate goal is to now go forth and find new angles of attack on the biggest naturalness questions in fundamental physics

Midweek, alongside an inspirational theory colloquium by Nathaniel Craig (UC Santa Barbara), the potential role of cosmology in naturalness was interrogated. An early example made famous by Steven Weinberg concerns the role of the “anthropic principle” in the presently measured value of the cosmological constant. However, since then, particularly in recent years, theorists have found many possible connections and mechanisms linking naturalness questions to our universe and beyond.

The fourth day focussed on the emerging world of generalised and higher-form symmetries, which are new tools in the arsenal of the quantum field theorist. Speakers discussed how the naturalness of IR parameters may arise as a consequence of these recently uncovered symmetries, in cases where it would otherwise be obscured from view within a traditional symmetry perspective. The final day studied connections between string theory, the swampland and naturalness, exploring how the space of theories consistent with string theory leads to restricted values of IR parameters, which potentially links to naturalness. An eloquent summary was delivered by Tim Cohen (CERN).

Grand slam

In some sense the goal of the workshop was to push back the boundaries by equipping model builders with new and more powerful perspectives and theoretical tools linked to questions of naturalness, broadly defined. The workshop was a grand slam in this respect. However, the ultimate goal is to now go forth and use these new tools to find new angles of attack on the biggest naturalness questions in fundamental physics, relating to the cosmological constant and the Higgs mass.

The Standard Model, despite being an eminently marketable logo for mugs and t-shirts, is incomplete. It breaks down at very short distances and thus it is the IR of some more complete, more explanatory UV theory. We don’t know what this UV theory is, but it apparently makes unnatural predictions for the Higgs mass and cosmological constant. Perhaps nature isn’t unnatural and generalised symmetries are as-yet hidden from our eyes, or perhaps string theory, quantum gravity or cosmology has a hand in things? It’s also possible, of course, that nature has fine-tuned these parameters by accident; however, that would seem – à la Weinberg – to point towards a framework in which such parameters are, in principle, measured in many different universes. All of these possibilities, and more, were discussed and explored to varying degrees.

Perhaps the most radical possibility, the most “exotic approach to naturalness” of all, would be to give up on naturalness altogether. Perhaps, in whatever framework UV completes the Standard Model, parameters such as the Higgs mass are simply incalculable, unpredictable in terms of more fundamental parameters, at any length scale. Shortly before the advent of relativity, quantum mechanics, and all that have followed from them, Lord Kelvin (attribution contested) once declared: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. The breadth of original ideas presented at the “Exotic Approaches to Naturalness” workshop, and the new connections constantly being made between formal theory, cosmology and particle phenomenology, suggest it would be similarly unwise now, as it was then, to make such a wager.

We can’t wait for a future collider

Imagine a world without a high-energy collider. Without our most powerful instrument for directly exploring the smallest scales, we would be incapable of addressing many open questions in particle physics. With the US particle-physics community currently debating which machines should succeed the LHC and how we should fit into the global landscape, this possibility is a serious concern. 

The good news is that physicists generally agree on the science case for future colliders. Questions surrounding the Standard Model itself, in particular the microscopic nature of the Higgs boson and the origin of electroweak symmetry breaking, can only be addressed at high-energy colliders. We also know the Standard Model is not the complete picture of the universe. Experimental observations and theoretical concerns strongly suggest the existence of new particles at the multi-TeV scale. 

The latest US Snowmass exercise and the European strategy update both advocate for the fast construction of an e⁺e⁻ Higgs factory followed by a multi-TeV collider. The former will enable us to measure the Higgs boson’s couplings to other particles with an order of magnitude better precision than the High-Luminosity LHC. The latter is crucial to unambiguously surpass exclusions from the LHC, and would be the only experiment where we could discover or exclude minimal dark-matter scenarios all the way up to their thermal targets. Most importantly, precise measurements of the Brout–Englert–Higgs potential at a 10 TeV-scale collider are essential to understand what role the Higgs plays in the origin and evolution of the universe. 

We haven’t yet agreed on what to build, where and when. We face an unprecedented choice between scaling up existing collider technologies or pursuing new, compact and power-efficient options. We must also choose between centering the energy frontier at a single lab or restoring global balance to the field by hosting colliders at different sites. Our choices in the next few years could determine the next century of particle physics. 

Snowmass community workshop

The Future Circular Collider programme – beginning with a large circular e⁺e⁻ collider (FCC-ee) with energies ranging from 90 to 365 GeV, followed by a pp collider with energies up to 100 TeV (FCC-hh) – would build on the infrastructure and skills currently present at CERN. A circular e⁺e⁻ machine could support multiple interaction points, produce higher luminosity than a linear machine for energies of interest, and its tunnel could be re-used for a pp collider. While this staged approach has driven success in our field for decades, scaling up to a circumference of 100 km raises serious questions about feasibility, cost and power consumption. As a new assistant professor, I am also deeply concerned about gaps in data-taking and timescales. Even if there are no delays, I will likely retire during the FCC-ee run and die before the FCC-hh produces collisions. 

In contrast, there is a growing contingent of physicists who think that a paradigm shift is essential to reach the 10 TeV scale and beyond. The International Muon Collider collaboration has determined that, with targeted R&D to address engineering challenges and make design progress, a few-TeV μ⁺μ⁻ collider could be realised on a 20-year technically limited timeline, and would set the stage for an eventual 10 TeV machine. The latter could enable a mass reach equivalent to a 50–200 TeV hadron collider, in addition to precision electroweak measurements, with a lower price tag and significantly smaller footprint. A muon collider also opens the possibility to host different machines at different sites, easing the transition between projects and fostering a healthier, more global workforce. Assuming the technical challenges can be overcome, a muon collider would therefore be the most attractive way forward.

Assuming the technical challenges can be overcome, a muon collider would be the most attractive way forward

We are not yet ready to decide which path is optimal, but we are already time-constrained. It is increasingly likely that the next machine will not turn on until after the High-Luminosity LHC. The most senior person today who could reasonably participate is only roughly 10 years into a permanent job. Early-career faculty, who would use this machine, are experienced enough to have well-informed opinions, but are not senior enough to be appointed to decision-making panels. While we value the wisdom of our senior colleagues, future colliders are inherently “early-career colliders”, and our perspectives must be incorporated. 

The US must urgently invest in future collider R&D. If other areas of physics progress faster than the energy frontier, our colleagues will disengage, move elsewhere and might not come back. If the size of the field and expertise atrophy before the next machine, we risk imperilling future colliders altogether. We agree on the physics case. We want the opportunity to access higher energies in our lifetimes. Let’s work together to choose the right path forward.

Stanisław Jadach 1947–2023

Stanisław Jadach, an outstanding theoretical physicist, died on 26 February at the age of 75. His foundational contributions to the physics programmes at LEP and the LHC, and to the proposed Future Circular Collider at CERN, have significantly helped to advance the field of elementary particle physics and its future aspirations.

Born in Czerteż, Poland, Jadach graduated in 1970 with a master’s degree in physics from Jagiellonian University. There, he also defended his doctorate, received his habilitation degree and worked until 1992. During this period, part of which Poland spent under martial law, Jadach took trips to Leiden, Paris, London, Stanford and Knoxville, and formed collaborations on precision theory calculations based on Monte Carlo event-generator methods. In 1992 he moved to the Institute of Nuclear Physics Polish Academy of Sciences (PAS) where, receiving the title of professor in 1994, he worked until his death. 

Prior to LEP, all calculations of radiative corrections were based on first- and, later, partially second-order results. This limited the theoretical precision to the 1% level, which was unacceptable for the experiments. In 1987 Jadach solved that problem in a single-author report, inspired by the classic work of Yennie, Frautschi and Suura, featuring a new calculational method for any number of photons. It was widely believed that soft-photon approximations were restricted to many photons with very low energies and that it was impossible to relate, consistently, the distributions of one or two energetic photons to those of any number of soft photons. Jadach and his colleagues solved this problem in their papers in 1989 for differential cross sections, and later in 1999 at the level of spin amplitudes. A long series of publications and computer programmes for re-summed perturbative Standard Model calculations ensued. 

Most of the analysis of LEP data was based exclusively on the novel calculations provided by Jadach and his colleagues. The most important concerned the LEP luminosity measurement via Bhabha scattering, the production of lepton and quark pairs, and the production and decay of W and Z boson pairs. For the W-pair results at LEP2, Jadach and co-workers intelligently combined separate first-order calculations for the production and decay processes to achieve the necessary 0.5% theoretical accuracy, bypassing the need for full first-order calculations for the four-fermion process, which were unfeasible at the time. Contrary to what was deemed possible, Jadach and his colleagues achieved calculations that simultaneously took into account QED radiative corrections and the complete spin–spin correlation effects in the production and decay of two tau leptons. He also had success in the 1970s in novel simulations of strong-interaction processes.

After LEP, Jadach turned to LHC physics. Among other novel results, he and his collaborators developed a new constrained Markovian algorithm for parton cascades, with no need to use backward evolution and predefined parton distributions, and proposed a new method, using a “physical” factorisation scheme, for combining a hard process at next-to-leading order with a parton cascade, which is much simpler and more efficient than alternative methods.

Jadach was already updating his LEP-era calculations and software towards the increased precision of FCC-ee, and was co-editor and co-author of a major paper delineating the need for new theoretical calculations to meet the proposed collider’s physics needs. He co-organised and participated in many physics workshops at CERN and in the preparation of comprehensive reports, starting with the famous 1989 LEP Yellow Reports.

Jadach, a member of the Polish Academy of Arts and Sciences (PAAS), received the most prestigious awards in physics in Poland: the Marie Skłodowska-Curie Prize (PAS), the Marian Mięsowicz Prize (PAAS), and the prize of the Minister of Science and Higher Education for lifetime scientific achievements. He was also a co-initiator and permanent member of the international advisory board of the RADCOR conference.

Stanisław (Staszek) was a wonderful man and mentor. Modest, gentle and sensitive, he did not judge or impose. He never refused requests and always had time for others. His professional knowledge was impressive. He knew almost everything about QED, and there were few other topics in which he was not at least knowledgeable. His erudition beyond physics was equally extensive. He is already profoundly and dearly missed.

Vittorio Giorgio Vaccaro 1941–2023

Accelerator physicist Vittorio Giorgio Vaccaro passed away after a short illness on 11 February 2023 in his hometown of Naples, Italy. 

Vittorio graduated in 1965 from the University of Naples Federico II. He soon moved to CERN as a fellow, where he remained from 1966 to 1969, contributing to the design and commissioning of the first high-intensity hadron collider, the Intersecting Storage Rings. At CERN, Vittorio introduced the concept of beam-coupling impedance to model the instabilities that were experienced above transition energy, writing a seminal report (Longitudinal instability of a coasting beam above transition, due to the action of lumped discontinuities), in which he described for the first time the action of discontinuities in the transverse section of a beam pipe as an impedance. His theory, which, after his initial intuition, he developed together with Andy Sessler, Alessandro G Ruggiero and many other colleagues, has become a fundamental tool in the design of particle accelerators. 

In 1969 he returned to his alma mater in Naples as professor of electromagnetic fields at the faculty of engineering, and continued teaching until he retired. He created an accelerator-physics team in association with INFN within the faculty of physics, and throughout his career remained closely connected to CERN, which he visited regularly and to which he sent many of his students. 

Vittorio collaborated on practically all the studies and accelerator projects in Europe, from the CERN machines to DAFNE, the European Spallation Source and HERA-B at DESY. The group in Naples became, thanks to him, a reference in the world of accelerators for the development of the theory of beam-coupling impedance of accelerator components and the associated bench measurements. From the mid-1990s he became increasingly interested in the development of linear accelerators for proton therapy, participating in a large collaboration with the TERA foundation, CERN and INFN. In 2003 he led a new collaboration between the University of Naples and several sections of INFN, which produced the first linac module at 3 GHz capable of accelerating protons from a 30 MeV cyclotron.

In 2019 Vittorio was awarded the IPAC Xie Jialin Award for outstanding work in the accelerator field “For his pioneering studies on instabilities in particle-beam physics, the introduction of the impedance concept in storage rings and, in the course of his academic career, for disseminating knowledge in accelerator physics throughout many generations of young scientists”.

It is difficult to find the words to recall Vittorio’s immense human qualities, his deep culture and his profound humanity. Several of his students are now scattered around the world, continuing his efforts to propose technical solutions to accelerator-physics problems based on a deep understanding of the phenomena of beam instability. Vittorio was moved by a sincere passion for science, and an irresistible curiosity for everything and everyone around him, which always brought him to approach anyone with an open and friendly spirit. 

We will deeply miss a passionate mentor and colleague, his wide knowledge, energy, friendship and humanity. 

A game changer for CERN

Patrick Geeraert

How did the idea for Science Gateway come about?

I was on detachment at the European Southern Observatory (ESO) in Garching when I was called back to CERN in 2017. The idea for a flagship education and outreach project was already quite advanced, and since I had triggered the construction of ESO’s Supernova planetarium and visitor centre during my mandate as director of administration, the CERN Director-General (DG) thought I could build on this for CERN. There had been various projects for buildings based around the Globe in the past, but they never quite took off. However, the then-new directorate wanted to create a new space for education and outreach targeting the general public of all ages. The DG also made it clear that a large auditorium for CERN events should be part of any plan, and that the entire construction should be financed by donations. I started to work on the concept.

The Italian architect Renzo Piano had visited CERN independently and fell in love with our values. When he left, he said: “If one day I can do something for you, don’t hesitate.” A few months later he proposed to design the building. In June 2018 he showed us his first mockup, the “space station” design you see today. It crossed the Route de Meyrin and encroached on land designated for agricultural use on the north side and on the CERN kindergarten on the south side. The design complicated matters, but on the other hand it was really inspiring. My first thought was that the budget I had would not be sufficient, because what is expensive when you do construction is the facades, and here we had five buildings, complicated ones, with some parts suspended. But it was so original, so much in the DNA of CERN, that we thought, okay, let it be five. 

What will be in the buildings? 

There are three “pavilions” and two “tubes”. On the north side of the Science Gateway, we have a 900-seat auditorium where we can host large CERN meetings such as collaboration weeks, as well as hiring the venue out. It’s modular, so we can split it into up to three different rooms and host independent events if needed. This element of the building caused most of the headaches. The second pavilion will house the reception, shop and restaurant. On the upper floor we have the two large lab spaces, where we will have two school groups at a time. Between the restaurant and the auditorium we have a natural amphitheatre where we can also hold events. 


Science Gateway

Then we enter the two tubes straddling the Route de Meyrin, which are exhibition areas. The first is about CERN – engaging visitors with accelerators, detectors, data acquisition and IT, etc. In the second tube, one half is a journey back to the Big Bang and the other is about open questions such as dark matter, dark energy, extra dimensions and such topics, where we will have art pieces to engage visitors. The third pavilion is an exhibition about the quantum world. The bridge linking the buildings is 220 m long and you can walk from one side to the other unimpeded.

How was the construction managed, and when will the building be open to the public? 

The first problem was that the north side of the Science Gateway, previously a temporary car park, was on agricultural land. We had to reclassify that piece of land for it to be authorised to build on, which is extremely complicated in Geneva. The process usually takes at least 10 years, if it is successful at all, and we got it done in one. We had a very constructive process with our host authorities, whom I would like to thank warmly for their support, and the Renzo Piano team had made a case with drawings and models to help communicate our vision. We got the building permit in September 2019 and launched a procurement process for the construction and for the scenographers regarding the exhibitions. In November 2020 we signed the contract with the construction companies and they started to erect the site barracks at the end of 2020. The construction is due to be completed this summer. It was an extremely aggressive schedule, made more difficult by the pandemic and factors relating to Russia’s invasion of Ukraine. The inauguration will very likely be in the first week of October, with the first visitors arriving the next day. I would like to thank all of CERN’s departments and services for the competent and dedicated work they have contributed to the success of this project.

Who is the Science Gateway for? 

The main objective is to inspire the next generation to engage in STEM (science, technology, engineering, mathematics) studies and careers. To do that, first you need to have a programme for different age ranges. Whereas traditionally we target 16 years and above, Science Gateway will start with workshops for visitors as young as five. The exhibitions are suited to all ages above eight. Ideally, we want to engage visitors before they reach high school because that’s typically when girls start to think that STEM subjects are not for them. Another important audience is parents, so Science Gateway is also geared towards families, showing adults what it means to be a scientist along with presenting diverse role models. The exhibits and installations are developed by a mix of in-house and outside expertise. For the labs, we rely on our education team, which has the experience of S’Cool LAB, but now that we have extended the age range of our audiences, we will also work closely with, for instance, the LEGO Foundation, one of our donors, which is very strong in education programmes for children aged 5 to 12. Finally, Science Gateway is an opportunity for us to engage with VIPs and decision makers, to build support for fundamental research and explain its impact on society.

How many visitors do you expect?

A lot! Currently we have more than 300,000 requests for guided tours per year and we can only satisfy about half of them. Of those 300,000, more than 70% come from more than 800 km away. The Science Gateway will allow us to welcome up to 500,000 people per year, which is more than 1000 per day on average. We will continue to attract schools and visitors from all CERN member states and beyond, that’s for sure, and increase capacity for hands-on lab activities in particular. We also expect many more local visitors. Entry will be free, and we will be open to visitors all year, every day except Mondays. The Science Gateway will only be closed on 24, 25 and 31 December, and 1 January. Groups of 12 or more have to book in advance, but individuals and families can just show up on the day and access the auditorium, exhibition tubes, restaurant and the quantum-world pavilion. On the campus, they will also find temporary exhibitions in the Globe, and Ideasquare will also propose activities. Visitors can book a guided tour in the morning for that same day. Guided tours will remain at the same level as today, and we are trying to reduce pressure on existing restaurants on the Meyrin site with the new Science Gateway restaurant.

How is the Science Gateway funded?

The construction, landscaping, exhibitions and everything you will see in the building on day one are all funded from donations, with the main ones coming from the Stellantis Foundation and a private foundation in Geneva. CERN is very grateful to all donors for their generosity. It’s about CHF 90 million in total, with some donors sponsoring particular exhibits or spaces. For the operations, the cost is estimated at around CHF 4 million per year. This will be funded from a mix of income from the infrastructure (for example, the shop, restaurant, parking and auditorium) and some limited CERN budget. The operational costs are for staffing in addition to maintenance of the equipment, cleaning and maintaining the forest that surrounds the building. 

What is the operational model?

A Science Gateway operations group has been created from the former visits service. With the exception of a small increase in industrial services contracts and two fellows, there are basically no recruitments. We will heavily rely on volunteers, from members of the personnel to users and other people linked with CERN. We already have a pool of guides who provide on average 16,000 hours per year on guided tours and we need to double that amount to ensure the Science Gateway operates as required. We will encourage more people to become guides and start training in July. We want to emphasise that, in addition to the rewards of engaging visitors with CERN’s science, this experience will be useful to their professional lives. We are also considering giving certificates and possibly accreditations. Ideally we should have about 650 guides each giving 48 hours per year. 

What is the environmental philosophy behind Science Gateway?

We want to pass on the message that we’re sustainable. We’ll be carbon neutral when we are in the operations phase, and solar panels on the roof of the three pavilions will produce much more energy than we need, with 40% going back into the CERN grid. The use of geothermal probes was explored but had to be abandoned due to local geology. Heating and cooling will be provided by heat exchangers powered by our solar panels. In the restaurant we will avoid single-use plastics, and lights will be dimmed in the evening and switched off at night. There will also be a charge for parking to encourage visitors to come by public transport. We wanted to show the link between science and nature, and that’s why we have the forest, with 400 trees and 13,000 shrubs.

How does it feel to see the project coming to completion?

When we started discussions six or so years ago, I thought I had less than a 10% chance of success because the project was so ambitious and had to be completely funded by donations. However, it was strongly supported by the directorate, which was also very active in raising funds. The fact that it was to be built on agricultural land was another factor. There were more reasons for it to fail than to succeed. But the challenge was worth it. The phase during which we were doing the design of the construction with the architects was really interesting. I think we had 50 different versions, trying to define a design that would fit both the architects’ vision and our programme. With the construction, things start to become less fun. But we are almost there now and the Science Gateway will be a game changer for CERN, so I’m pretty proud of it. I had planned to retire at the end of the construction, but now I’ve decided to stay a bit longer and see the first steps of CERN’s new big baby. 

Event celebrates 50 years of Kobayashi–Maskawa theory

Quarks change their flavour through the weak interaction, and the strength of the flavour mixing is parametrised by the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which is an essential part of the Standard Model. This year marks the 60th anniversary of Nicola Cabibbo’s paper describing the mixing between down and strange quarks. It also marks the 50th anniversary of the paper by Makoto Kobayashi and Toshihide Maskawa, published in February 1973, which explained the origin of CP violation by generalising the quark mixing to three generations. To celebrate the magnificent accomplishments of quark-flavour physics during the past 50 years and to discuss the future of this important topic, a symposium was held at KEK in Tsukuba, Japan, on 11 February, attracting about 150 participants from around the globe, including Makoto Kobayashi himself.
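
In the Standard Model the charged-current weak interaction couples up-type to down-type quarks through the unitary 3 × 3 CKM matrix,

\[
V_{\text{CKM}} =
\begin{pmatrix}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{pmatrix},
\]

which, for three generations, contains exactly one irreducible complex phase. This phase is the source of CP violation identified by Kobayashi and Maskawa; with only two generations, all phases can be rotated away by field redefinitions.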

Opening the event, Masanori Yamauchi, director-general of KEK, summarised the early history of Kobayashi–Maskawa (KM) theory and the ideas to test it as a theory of CP violation. He recalled his time as a member of the Belle collaboration at the KEKB accelerator, including the memorable competition with the BaBar experiment at SLAC during the late 1990s and early 2000s, which finally led to the conclusion that KM theory explains the observed CP violation. Kobayashi and Maskawa shared one half of the 2008 Nobel Prize in Physics “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”.

The scientific sessions were initiated by Amarjit Soni (BNL), who summarised various ideas for measuring CP violation in cascade decays of B mesons, including the celebrated papers by A I Sanda and co-workers in 1980–1981, which gave a strong motivation to build B factories. Stephen Olsen (Chung Ang University), who was one of the leaders of the Belle collaboration, looked back at the situation in the early 1980s when B-meson mixing was first observed, and emphasised the role of the accelerator physicists who achieved the 100-fold increase in luminosity that was necessary to measure CP angles. Adrian Bevan (Queen Mary University of London) added a perspective from the BaBar experiment, while the more recent impressive developments at the LHCb experiment were summarised by Patrick Koppenburg (Nikhef).

Theoretical developments remain an integral part of quark-flavour physics. Matthias Neubert (University of Mainz) gave an overview of the theoretical tools developed to understand B-meson decays, which include heavy-quark symmetry, heavy-quark effective field theory, heavy-quark expansion and QCD factorisation, and Zoltan Ligeti (LBNL) summarised concurrent developments of theory and experiment to determine the sides of the CKM triangle. Lattice QCD also played a central role in the determination of the CKM matrix elements by providing precision computation of non-perturbative parameters, as discussed by Aida El-Khadra (University of Illinois).

There are valuable lessons from the KM paper when applied to the search beyond the Standard Model

The B sector is not the only place where CP violation is observed. Indeed, it was first observed in kaon mixing, and important pieces of information have been obtained since then. A number of theoretical ideas dedicated to the study of kaon CP violation were discussed by Andrzej Buras (Technical University of Munich), and experimental projects were overviewed by Taku Yamanaka (Osaka University).

There are still unsolved mysteries around quark-flavour physics. The most notable is the origin of the fermion generations, which may only be understood by accumulating more data to find any discrepancy with the Standard Model. SuperKEKB/Belle II, the successor of KEKB/Belle, plans to accumulate 50 times more data in the coming decades, while LHCb will continue to improve the precision of measurements in hadronic collisions. Nanae Taniguchi (KEK) reported the current status of SuperKEKB/Belle II, which has been in physics operation since 2019 and has already broken peak-luminosity records in e⁺e⁻ collisions. Gino Isidori (University of Zurich) gave his view on the possible shape of physics to come. “There are valuable lessons from the KM paper, which are still valuable today, when applied to the search beyond the Standard Model,” he concluded. 

As a closing remark, Makoto Kobayashi reminisced about the time when he built the theory as well as the time when the KEKB/Belle experiment was running. “I was able to watch the development of the B factory so closely from the very beginning,” he said. “I am grateful to the colleagues who gave me such a great opportunity.”

Majorana neutrinos remain at large

Majorana Demonstrator cryostat

Neutrinoless double-beta decay (0νββ) remains as elusive as ever, following publication of the final results from the Majorana Demonstrator experiment at SURF, South Dakota, in February. Based on six years’ monitoring of ultrapure ⁷⁶Ge crystals, corresponding to an exposure of 64.5 kg × yr, the collaboration has confirmed that the half-life of 0νββ in this isotope is greater than 8.3 × 10²⁵ years. This translates to an upper limit on the effective neutrino mass mββ of 113–269 meV, and complements a number of other 0νββ experiments that have recently concluded data-taking. 
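
The translation from a half-life limit to an effective-mass limit follows the standard 0νββ rate formula for light-neutrino exchange; the quoted range of mββ reflects the spread among nuclear matrix-element calculations:

\[
\left[ T_{1/2}^{0\nu} \right]^{-1} = G^{0\nu} \left| M^{0\nu} \right|^{2} \frac{m_{\beta\beta}^{2}}{m_{e}^{2}},
\]

where G0ν is a calculable phase-space factor, M0ν the nuclear matrix element and me the electron mass.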

Whereas double-beta decay is known to occur in several nuclides, its neutrinoless counterpart is forbidden by the Standard Model. That’s because it involves the simultaneous decay of two neutrons into two protons with the emission of two electrons and no neutrinos, which is only possible if neutrinos and antineutrinos are identical “Majorana” particles such that the two neutrinos from the decay cancel each other out. Such a process would violate lepton-number conservation, possibly playing a role in the matter–antimatter asymmetry in the universe, and be a direct sign of new physics. The discovery that neutrinos have mass, which is a necessary condition for them to be Majorana particles, motivated experiments worldwide to search for 0νββ in a variety of candidate nuclei.

Germanium-based detectors have an excellent energy resolution, which is key to resolving the energy of the electrons emitted in potential 0νββ decays. The Majorana Demonstrator is also located 1.5 km underground, with low-noise electronics and ultrapure in-house-grown electroformed copper surrounding the detectors to shield them from background events. Despite a lower exposure, the collaboration was able to achieve limits similar to those of the GERDA experiment at Gran Sasso National Laboratory, which set a lower limit on the ⁷⁶Ge 0νββ half-life of 1.8 × 10²⁶ yr. Also among the projects of the collaboration is an ongoing search for the influence of dark-matter particles in the decay of metastable ¹⁸⁰ᵐTa – nature’s rarest isotope. Although no hints have been found so far, the search has already improved the sensitivity of dark-matter searches in nuclei significantly. 

The search has already improved the sensitivity of dark-matter searches in nuclei significantly

Other experiments, such as KamLAND-ZEN and EXO-200, use ¹³⁶Xe to search for 0νββ. While the former recently set the most stringent limit of 2.3 × 10²⁶ yr and is ongoing, the latter arrived at a value of 3.5 × 10²⁵ yr with a total ¹³⁶Xe exposure of 234.1 kg × yr based on its full dataset. Searches at Gran Sasso with CUORE using a 1 t × yr exposure of ¹³⁰Te led to a half-life limit of 2.2 × 10²⁵ yr, and at CUORE’s successor, CUPID-0, which used ⁸²Se with a total exposure of 8.82 kg × yr, of the order of 10²³ yr.

Having demonstrated the required sensitivity for 0νββ detection in ⁷⁶Ge, the designs of the Majorana Demonstrator and GERDA have been incorporated into the next-generation experiment LEGEND-200, which uses high-purity germanium detectors surrounded by liquid argon. The experiment, based at Gran Sasso, started operations last spring and could have initial results later this year, says co-spokesperson Steven Elliott (LANL): “Once all the detectors are installed, we plan to run for five years, while the next stage, LEGEND-1000, is proceeding through the DOE Critical Decision process. We hope to begin construction in summer 2026, with first data available early next decade.”

Neutrino pheno week back at CERN

Supernova 1987A

Since its inception in 2013, the CERN Neutrino Platform has evolved into a worldwide hub for both experimental and theoretical neutrino physics. Besides its multifaceted activities in hardware development – including most notably the ProtoDUNE detectors for the international long-baseline neutrino programme in the US – the platform also hosts a vibrant group of theorists.

From 13 to 17 March this group once again hosted the CERN Neutrino Platform Pheno Week, after a COVID-related hiatus of more than three years. With about 100 in-person participants and 200 more on Zoom, the meeting has become one of the largest in the field – a testament to the ever-growing popularity of neutrinos among particle physicists, even though neutrinos are the most elusive among all known elementary particles.

Talks at the March event reflected the full breadth of the subject, with the first days devoted to novel theoretical models explaining the peculiar relations observed among neutrino masses and mixing angles, and to understanding the way in which neutrinos interact with nuclei. The latter topic is particularly complex, given the vast range of energies at which neutrinos are studied – from non-relativistic cosmic-background neutrinos with sub-meV energies to PeV-scale neutrinos observed in neutrino telescopes. An especially popular topic was the possibility of discovering physics beyond the Standard Model in the neutrino sector. In fact, because of their ability to mix with hypothetical “dark sector” fermions – that is, fermions potentially related to the physics of dark matter, or even dark matter itself – neutrinos offer a unique window to new physics.

The second part of the workshop was devoted to the neutrino’s role in astrophysics and cosmology. “There’s actually a two-way relationship between neutrinos and the cosmos,” explained invited speaker John Beacom (Ohio State University). “On the one hand, astrophysical and cosmological observations can teach us a lot about neutrino properties. On the other, neutrinos are unique cosmic messengers, and from observations at neutrino telescopes we can learn fascinating things about stars, galaxies and the evolution of the universe.” In recent years, for instance, neutrinos have allowed physicists to shed new light on the century-old problem of where ultra-high-energy cosmic rays come from. And the next galactic supernova – an event that happens on average every 30 to 100 years – will be a treasure trove of new information, given that we expect to observe tens of thousands of neutrinos from such an event. At the same time, cosmology sets the strongest upper limits on the absolute scale of neutrino masses, and with the next generation of cosmological surveys we have every expectation of achieving an actual measurement of this quantity. This is interesting because neutrino oscillations, while establishing that neutrinos have non-zero mass, are only sensitive to differences of squared masses, not to the absolute mass scale.
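
This limitation is manifest in the standard two-flavour oscillation probability, which depends only on the squared-mass difference Δm²:

\[
P(\nu_\alpha \to \nu_\beta) = \sin^{2} 2\theta \, \sin^{2}\!\left( \frac{1.27\,\Delta m^{2}\,[\text{eV}^{2}]\; L\,[\text{km}]}{E\,[\text{GeV}]} \right),
\]

for a baseline L and neutrino energy E, so adding a common offset to all three neutrino masses leaves every oscillation measurement unchanged.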

The programme of the Neutrino Platform Pheno Week closed with a tour of the ProtoDUNE experiments, giving the mostly theory-oriented audience an impression of how the magnificent machines testing our theories of the neutrino sector are being developed and assembled.

New superconducting technologies for the HL-LHC and beyond

The python

The era of high-temperature superconductivity started in 1986 with the discovery, by IBM researchers Georg Bednorz and Alex Müller, of superconductivity in a lanthanum barium copper oxide. This discovery was revolutionary: not only did the new, brittle superconducting compound belong to the family of ceramic oxides, which are generally insulators, but it had the highest critical temperature ever recorded (up to 35 K, compared with about 18 K in conventional superconductors). In the following years, scientists discovered other cuprate superconductors (bismuth–strontium–copper oxide and yttrium–barium–copper oxide) and achieved superconductivity at temperatures above 77 K, the boiling point of liquid nitrogen (see “Heat is rising” figure). The possibility of operating superconducting systems with inexpensive, abundant and inert liquid nitrogen generated tremendous enthusiasm in the superconducting community. 

Several applications of high-temperature superconducting materials with a potentially high impact on society were studied. Among them, superconducting transmission lines were identified as an innovative and effective solution for bulk power transmission. The unique advantages of superconducting transmission are high capacity, very compact volume and low losses. This enables the sustainable transfer of up to tens of GW of power at low and medium voltages in narrow channels, together with energy savings. Demonstrators have been built worldwide in conjunction with industry and utility companies, some of which have successfully operated in national electricity grids. However, widespread adoption of the technology has been hindered by the cost of cuprate superconductors. 

Critical temperature of superconductors

In particle physics, superconducting magnets allow high-energy beams to circulate in colliders and provide the stronger fields that detectors need to handle higher collision energies. The LHC is the largest superconducting machine ever built, and the first to employ high-temperature superconductors at scale. Realising its high-luminosity upgrade and possible future colliders is driving the use of next-generation superconducting materials, with applications stretching far beyond fundamental research.

High-temperature superconductivity (HTS) was discovered at the time when the conceptual study for the LHC was ongoing. While the new materials were still in a development phase, the potential of HTS for use in electrical transmission was immediately recognised. The powering of the LHC magnets (which are based on the conventional superconductor niobium titanium, cooled by superfluid helium) requires the transfer of about 3.4 MA of current, generated at room temperature, in and out of the cryogenic environment. This is done via devices called current leads, of which more than 3000 units are installed at different underground locations around the LHC’s circumference. The conventional current-lead design, based on vapour-cooled metallic conductors, imposes a lower limit (about 1.1 W/kA) on the heat in-leak into the liquid helium. The adoption of the HTS BSCCO 2223 (bismuth–strontium–calcium copper oxide ceramic) tape – operated in the LHC current leads in the temperature range 4.5 to 50 K – enabled thermal conduction and ohmic dissipation to be disentangled. Successful multi-disciplinary R&D, followed by prototyping at CERN and then industrialisation, with series production of the approximately 1100 LHC HTS current leads starting in 2004, resulted in both capital and operational savings (avoiding one extra cryoplant and saving about 5000 l/h of liquid helium). It also encouraged wider adoption of BSCCO 2223 current-lead technology, for instance in the magnet circuits for the ITER tokamak, which benefited from a collaboration agreement with CERN on the development and design of HTS current leads.
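
A back-of-envelope estimate (illustrative, combining the figures above with the latent heat of liquid helium, about 2.6 kJ per litre, rather than a project figure) shows the scale of the saving had all 3.4 MA been fed through conventional leads at the 1.1 W/kA limit:

\[
3.4\,\text{MA} \times 1.1\,\frac{\text{W}}{\text{kA}} \approx 3.7\,\text{kW at 4.5 K},
\qquad
\frac{3.7\,\text{kW}}{2.6\,\text{kJ/l}} \approx 1.4\,\frac{\text{l}}{\text{s}} \approx 5000\,\frac{\text{l}}{\text{h}},
\]

consistent with the liquid-helium economy quoted above for the HTS solution.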

MgB2 links at the HL-LHC 

The discovery of superconductivity in magnesium diboride (MgB2) in 2001 generated new enthusiasm for HTS applications. This material, classified as a medium-temperature superconductor, has remarkable features: a critical temperature (39 K) some 30 K higher than that of niobium titanium, a high current density (to date, at low and medium magnetic fields) and, crucially, the ability to be industrially produced as round multi-filamentary wire in long (km-scale) lengths. These characteristics, along with a cost that is intrinsically lower than that of other available HTS materials, make it a promising candidate for electrical applications.

At the LHC the current leads are located in the eight straight sections. For the high-luminosity upgrade of the LHC (HL-LHC), scheduled to be operational in 2029, the decision was taken to locate the power converters in new, radiation-free underground technical galleries above the LHC tunnel. The distance between the power converters and the HL-LHC magnets spans about 100 m and includes a vertical path via an 8 m shaft connecting the technical galleries and the LHC tunnel. The large current to be transferred across such a distance, the need for compactness, and the search for energy efficiency and potential savings led to the selection of HTS transmission as the enabling technology.

Complex cabling

The electrical connection, at cryogenic temperature, between the HL-LHC current leads and the magnets is performed via superconducting links based on MgB2 technology. MgB2 wire is assembled in cables with different layouts to transfer currents ranging from 0.6 kA to 18 kA. The individual cables are then arranged in a compact assembly that constitutes the final cable feeding the magnet circuits of either the HL-LHC inner triplets (a series of quadrupole magnets that provides the final focusing of the proton beams before collision in ATLAS and CMS) or the HL-LHC matching sections (which match the optics in the arcs to those at the entrance of the final-focus quadrupoles), and the final cable is incorporated in a flexible cryostat with an external diameter of up to 220 mm. The eight HL-LHC superconducting links are about 100 m long and transfer currents of about 120 kA for the triplets and 50 kA for the matching sections at temperatures up to 25 K, with cryogenic cooling performed with helium gas.

The R&D programme for the HL-LHC superconducting links started in around 2010 with the evaluation of the MgB2 conductor and the development, with industry, of a round wire with mechanical properties enabling cabling after reaction. Brittle superconductors, such as Nb3Sn – used in the HL-LHC quadrupoles and also under study for future high-field magnets – need to be reacted into the superconducting phase via heat treatments, at high temperatures, performed after their assembly in the final configuration. In other words, those conductors are not superconducting until cabling and winding have been performed. When the R&D programme was initiated, industrial MgB2 conductor existed in the form of multi-filamentary tape, which was successfully used by ASG Superconductors in industrial open MRI systems for transporting currents of a few hundred amperes. The requirement for the HL-LHC to transfer current to multiple circuits for a total of up to 120 kA in a compact configuration, with multiple twisting and transposition steps necessary to provide uniform current distribution in both the wires and cables, called for the development of an optimised multi-filamentary round wire. 

Carried out in conjunction with ASG Superconductors, this development led to the introduction of thin niobium barriers around the MgB2 superconducting filaments to separate MgB2 from the surrounding nickel and avoid the formation of brittle MgB2–Ni reaction layers that compromise electro-mechanical performance; the adoption of higher-purity boron powder to increase current capability; the optimisation of the fraction of Monel (a nickel–copper alloy used as the main constituent of the wire) in the 1 mm-diameter wire to improve mechanical properties; the minimisation of filament size (about 55 µm) and twist pitch (about 100 mm) for the benefit of electro-mechanical properties; the addition of a copper stabiliser around the Monel matrix; and the coating of tin–silver onto the copper to ensure the surface quality of the wire and a controlled electrical resistance among wires (inter-strand resistance) when assembled into cables. After successive implementation and in-depth experimental validation of all improvements, a robust 1 mm-diameter MgB2 wire with the required electro-mechanical characteristics was produced. 

REBCO tape and cables

The next step was to manufacture long unit lengths of MgB2 wire via larger billets (the assembled composite rods that are then extruded and drawn down into a long wire). The target unit length of several kilometres was reached in 2018, when series procurement of the wire was launched. In parallel, different cable layouts were developed and validated at CERN. This included round MgB2 cables in a co-axial configuration rated for 3 kA and for 18 kA at 25 K (see “Complex cabling” figure). While the prototypes made at CERN were 20 to 30 m long, the cable layout incorporated, from the outset, characteristics to enable production via industrial cabling machines of the type used for conventional cables. Splice techniques as well as detection and protection aspects were addressed in parallel with wire and cable development. Both technologies are strongly dependent on the characteristics of the superconductor, and are of key importance for the reliability of the final system. 

The first qualification at 24 K of a 20 kA MgB2 cable produced at CERN, comprising two 20 m lengths connected together, took place in 2014. This followed the qualification at CERN of short-model cables and other technological aspects, as well as the construction of a dedicated test station enabling the measurement of long cables operated at higher temperatures in a forced flow of helium gas. The cables were then industrially produced at TRATOS Cavi via a contract with ICAS, in a close and fruitful collaboration that enabled the requirements identified during the R&D phase to be met while operating heavy industrial equipment. The complexity of the final cables called for a multi-step process using different cabling, braiding and electrical-insulation lines, together with a corresponding quality-assurance programme. The first industrial cables, 60 m long, were successfully qualified at CERN in 2018. Final prototype cables of the type needed for the HL-LHC (for both the triplets and the matching sections) were validated at CERN in 2020, when series production of the final cables was launched. As of today, the full series of about 1450 km of MgB2 wire – the first large-scale production of this material – and five of the eight final MgB2 cables needed for the HL-LHC have been produced.

Superconducting wire and cables are the core of a superconducting system, but the system itself requires a global optimisation, achieved via an integrated design. Following this approach, the challenge was to investigate and develop, in industry, long and flexible cryostats for the superconducting links with enhanced cryogenic performance. The goal was to achieve a low static heat load (< 1.5 W/m) into the cryogenic volume of the superconducting cables while adopting a design – a two-wall cryostat without an intermediate thermal screen – that simplifies the cooling of the system, improves the mechanical flexibility of the links and eases handling during transport and installation. This development, which ran in parallel with the wire and cable activities, achieved the desired results and, after an extensive test campaign at CERN, the technology was adopted. Series production of these cryostats is taking place at Cryoworld in the Netherlands.
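
The quoted figures allow a simple check of the static heat-load budget per link:

```python
# Simple check of the static heat-load budget for one link, using the
# < 1.5 W/m design target and the ~100 m link length quoted in the text.
heat_load_per_metre_W = 1.5  # upper bound from the cryostat design target
link_length_m = 100.0        # approximate length of an HL-LHC link

total_static_heat_load_W = heat_load_per_metre_W * link_length_m
print(f"Static heat load into one link: < {total_static_heat_load_W:.0f} W")
# -> Static heat load into one link: < 150 W
```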

The optimised system minimises the cryogenic cost of cooling: a superconducting link carries, from the tunnel to the technical galleries, just enough helium gas to cool the resistive section of the current leads, delivering it at the temperature (about 20 K) for which the leads are optimised. In other words, the superconducting link adds no cryogenic cost to the refrigeration of the system. The links, which are rated for currents up to 120 kA, are sufficiently flexible to be transported, like conventional power cables, on drums about 4 m in diameter, and can be pulled manually, without major tooling, during installation (see “kA currents” image). The challenge of dealing with the thermal contraction of the superconducting links, which shrink by about 0.5 m when cooled to cryogenic temperature, was also addressed: an innovative solution, which takes advantage of bends in the routing and is compatible with the fixed position of the current-lead cryostat, was validated with prototype tests.
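
The quoted shrinkage corresponds to an integrated thermal contraction of about 0.5%, as the short check below makes explicit:

```python
# The ~0.5 m shrinkage of a ~100 m link on cool-down corresponds to an
# integrated thermal contraction of ~0.5%, which the bends in the
# routing must absorb, since the current-lead cryostat is fixed.
link_length_m = 100.0   # approximate link length quoted in the text
contraction_m = 0.5     # quoted shrinkage on cool-down

print(f"Integrated thermal contraction: {contraction_m / link_length_m:.1%}")
# -> Integrated thermal contraction: 0.5%
```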

Novel HTS leads

Whereas MgB2 cables transfer high DC currents from the 4.5 K liquid-helium environment in the LHC tunnel to about 20 K in the new HL-LHC underground galleries, a different superconducting material is required to carry the current from 20 to 50 K, where the resistive part of the current leads bridges to room temperature. To cope with the system requirements, novel HTS current leads based on REBCO (rare-earth barium copper oxide) superconducting tape – a material still in a development phase at the time of the LHC study – have been conceived, constructed and qualified to perform this task (see “Bridging the gap” image). Compact, round REBCO cables provide, over a short (few-metre) length, the electrical transfer from the MgB2 cables to 50 K, after which the resistive part of the current leads finally brings the current to room temperature. In view of the complexity of handling the REBCO conductor, the corresponding R&D was done at CERN, where a dedicated, complex cabling machine was also constructed.
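
A schematic summary of this thermal chain, using the approximate temperature boundaries quoted above (room temperature taken here as ~293 K), might look as follows:

```python
# Schematic summary of the thermal chain described above (temperatures
# are the approximate boundaries quoted in the text; room temperature
# is taken here as ~293 K).
thermal_chain = [
    ("MgB2 superconducting link",    4.5,  20.0),  # tunnel to galleries
    ("REBCO current-lead section",  20.0,  50.0),  # HTS bridge
    ("Resistive lead section",      50.0, 293.0),  # up to room temperature
]

for stage, t_cold_K, t_warm_K in thermal_chain:
    print(f"{stage:30s} {t_cold_K:5.1f} K -> {t_warm_K:5.1f} K")
```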

Cable assembly

While REBCO tape is procured from industry, the challenges encountered during the development of the cables were many. Specific issues associated with the tape conductor – for example, electrical resistance internal to the tape and the dependence of electrical properties on the temperature and cycles applied during soldering – were identified and solved with the tape manufacturers. A conservative approach, imposing zero critical-current degradation of the tape after cabling, was implemented. The lessons learnt from this development are also instrumental for future projects employing REBCO conductors, including the development of high-field REBCO coils for future accelerator magnets.
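
A minimal sketch of what such a “zero degradation” acceptance test could look like in code, with hypothetical critical-current values and measurement tolerance:

```python
# Hypothetical sketch of the "zero critical-current degradation"
# acceptance criterion: a cabled REBCO tape is accepted only if its
# critical current Ic is unchanged, within measurement uncertainty,
# with respect to the value measured before cabling. The Ic values
# and the 2% tolerance below are illustrative, not project figures.
def tape_accepted(ic_before_A: float, ic_after_A: float,
                  tolerance: float = 0.02) -> bool:
    """Accept only if Ic after cabling is within tolerance of Ic before."""
    return ic_after_A >= ic_before_A * (1.0 - tolerance)

print(tape_accepted(ic_before_A=250.0, ic_after_A=249.0))  # True: no degradation
print(tape_accepted(ic_before_A=250.0, ic_after_A=230.0))  # False: degraded
```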

The series components of the HL-LHC cold-powering systems (superconducting links with their corresponding terminations) are now in production, with the aim of having all systems available and qualified in 2025 for installation in the LHC underground areas during the following years. Series production and industrialisation were preceded by the completion of R&D and technological validation at CERN. Important milestones were the test of a sub-scale 18 kA superconducting link connected to a pair of novel REBCO current leads in 2019, and the tests, in 2020, of full cross-section, 60 m-long superconducting lines of the types needed for the LHC triplets and matching sections.

The complex terminations of the superconducting links involve two types of cryostat, containing, at the 20 K end, the HTS current leads and the splices between the REBCO and MgB2 cables and, at the 4.2 K end, the splices between the niobium–titanium and MgB2 cables. A specific development in the design was to increase compactness and enable the cryostat with the current leads to be connected to the superconducting link at the surface, prior to installation in the HL-LHC underground areas (see “End of the line” figure). The series production of the two cryostat terminations is taking place via collaboration agreements with the University of Southampton and Uppsala University.

Relocating the current leads by means of superconducting links brings a number of advantages. These include freeing precious space in the main collider ring, which becomes available for other accelerator equipment, and the ability to locate powering equipment and associated electronics in radiation-free areas. The latter relaxes radiation-hardness requirements for the hardware and eases access for personnel to carry out the various interventions required during accelerator operation.

Cooling with low-density helium gas also makes electrical transfer across long vertical distances feasible. The ability to transfer high currents from underground tunnels to surface buildings – as initially studied for the HL-LHC – is therefore of interest for future machines, such as the proposed Future Circular Collider at CERN. Flexible superconducting links can also be applied to “push–pull” arrangements of detectors at linear colliders such as the proposed CLIC and ILC, where flexible powering lines can simplify, and reduce the time needed for, the exchange of experiments sharing the same interaction region.

An enabling technology

Going beyond fundamental research in physics, superconductivity is an enabling technology for transferring gigawatts of power across long distances. The main benefits, in addition to far higher power-transmission capacity, are compact size, low total electrical losses, minimal environmental impact and more sustainable transmission. HTS offers the possibility of replacing resistive high-voltage overhead lines, operated across thousands of kilometres at voltages reaching about 1000 kV, with lower-voltage lines laid underground with a much smaller footprint.

Long-distance power transmission using hydrogen-cooled MgB2 superconducting links, potentially associated with renewable energy sources, has been identified as one of the leading routes towards a future sustainable energy system. Since hydrogen is liquid at 20 K – a temperature at which MgB2 is superconducting – large amounts can be stored and used as a coolant for superconducting lines, acting at the same time as energy vector and cryogen. In this direction, CERN participated, at a very early stage of the HL-LHC superconducting-link development, in a project launched by Carlo Rubbia as scientific director of the Institute for Advanced Sustainability Studies (IASS) in Potsdam. Around 10 years ago, joint CERN–IASS research culminated in the record demonstration of the first 20 kA MgB2 transmission line operated at liquid-hydrogen temperature. This activity continued with a European initiative called BestPaths, which demonstrated a monopole MgB2 cable system operated in helium gas at 20 K; the system was qualified in industry for 320 kV operation and at CERN for 10 kA, proving a 3.2 GW power-transmission capability. This initiative involved European industry and France’s transmission-system operator. In Italy, the INFN has recently launched a project called IRIS based on similar technology (see CERN Courier January/February 2023 p9).
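
The quoted BestPaths figures combine via the elementary relation P = VI, as the short check below shows; the 1000 kV comparison uses the overhead-line voltage mentioned earlier.

```python
# The BestPaths demonstrator was qualified for 320 kV and 10 kA, which
# combine via P = V * I into the quoted 3.2 GW transmission capability.
voltage_V = 320e3   # qualification voltage (320 kV)
current_A = 10e3    # qualification current (10 kA)

power_W = voltage_V * current_A
print(f"Transmission capability: {power_W / 1e9:.1f} GW")      # -> 3.2 GW

# Carrying the same power at the ~1000 kV of the highest-voltage
# overhead lines would require only ~3.2 kA, at the cost of much
# higher voltages and larger line footprints.
print(f"Current at 1000 kV: {power_W / 1000e3 / 1e3:.1f} kA")  # -> 3.2 kA
```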

In addition to transferring power across long distances with low losses and minimal environmental impact, the development of high-performance, low-cost, sustainable and environmentally friendly energy-storage and production systems is a key challenge for society. The use of hydrogen can diversify energy sources, as it significantly reduces greenhouse-gas emissions and environmental pollution during energy conversion. In aviation, alternative propulsion systems are being studied to reduce CO2 emissions and move towards zero-emission flight. Scaling up electric propulsion to larger aircraft is a major challenge, and superconducting technologies are a promising solution as they can increase the power density in the propulsion chain while significantly lowering the mass of the electrical-distribution system. In this context, a collaboration agreement has recently been launched between CERN and Airbus UpNext, and the construction of a demonstrator of superconducting distribution in aircraft called SCALE (Super-Conductor for Aviation with Low Emissions), based on the HL-LHC superconducting-link technology, has begun at CERN.

CERN’s experience in superconducting-link technology is also of interest to large data centres, with a collaboration agreement between CERN and Meta under discussion. The possibility of locating energy equipment remotely from the servers, of transferring large amounts of power efficiently in a compact volume, and of meeting sustainability goals by reducing carbon footprints is motivating a global re-evaluation of conventional systems in light of the potential of superconducting transmission.

Such applications demonstrate the virtuous circle between fundamental and applied research. The requirements of fundamental exploration in particle physics have led to the development of increasingly powerful and sophisticated accelerators. In this endeavour, scientists and engineers engage in developments initially conceived to address specific challenges, often requiring a multi-disciplinary approach and collaboration with industry to transform prototypes into mature technology ready for large-scale application. Accelerator technology is a key driver of innovation that can also have a wider impact on society, and the superconducting-link system for the HL-LHC project is a shining example.
