Karel Šafařík 1953–2024

Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024.

Karel graduated in theoretical physics in Bratislava, Slovakia (then Czechoslovakia) in 1976 and worked at JINR Dubna for over 10 years, participating in experiments in Serpukhov and doing theoretical studies on the phenomenology of particle production at high energies. In 1990 he joined Collège de France and the heavy-ion programme at CERN, soon becoming one of the most influential scientists in the Omega series of heavy-ion experiments (WA85, WA94, WA97, NA57) at the CERN Super Proton Synchrotron (SPS). In 2002 Karel was awarded the Slovak Academy of Sciences Prize for his contributions to the observation of the enhancement of the production of multi-strange particles in heavy-ion collisions at the SPS. In 2013 he was awarded the medal of the Czech Physical Society.

As early as 1991, Karel was part of the small group who designed the first heavy-ion detector for the LHC, which later became ALICE. He played a central role in shaping the ALICE experiment, from the definition of physics topics and the detector layout to the design of the data format, tracking, data storage and data analysis. He was pivotal in convincing the collaboration to introduce two layers of pixel detectors to reconstruct decays of charm hadrons only a few tens of microns from the primary vertex in central lead–lead collisions at the LHC – an idea considered by many to be impossible in heavy-ion collisions, but that is now one of the pillars of the ALICE physics programme. He was the ALICE physics coordinator for many years leading up to and including first data taking. Over the years, he also made multiple contributions to ALICE upgrade studies and became known as the “wise man” to be consulted on the trickiest questions.

Karel was a top-class physicist, with a sharp analytical mind, a legendary memory, a seemingly unlimited set of competences ranging from higher mathematics to formal theory, and from detector physics to high-performance computing. At the same time he was a generous, caring and kind colleague who supported, helped, mentored and guided a large number of ALICE collaborators. We miss him dearly.

Günter Wolf 1937–2024

Günter Wolf

Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86. He significantly shaped and contributed to the research programme of DESY, and knew better than almost anyone how to form international collaborations and lead them to the highest achievements.

Born in Ulm, Germany in 1937, Wolf studied physics in Tübingen. At the urging of his supervisor Helmut Faissner, he went to Hamburg in 1961 where the DESY synchrotron was being built under DESY founder Willibald Jentschke. Together with Erich Lohrmann and Martin Teucher, he was involved in the preparation of the bubble-chamber experiments there and at the same time took part in experiments at CERN.

The first phase of experiments with high-energy photons at the DESY synchrotron, in which he was involved, had produced widely recognised results on the electromagnetic interactions of elementary particles. In 1967 Wolf seized the opportunity to continue this research at the higher energies of the recently completed linear accelerator at Stanford University (SLAC). He became the spokesperson for an experiment with a polarised gamma beam, which provided new insights into the nature of vector mesons.

In 1971, Jentschke succeeded in bringing Wolf back to Hamburg as senior scientist. He remained associated with DESY for the rest of his life and became a leader in the planning, construction and analysis of key DESY experiments.

Together with Bjørn Wiik, as part of an international collaboration, Wolf designed and realised the DASP detector for DORIS, the first electron–positron storage ring at DESY. This led to the discovery of the excited states of charmonium in 1975 and thus to the ultimate confirmation that quarks are particles. For the next, larger electron–positron storage ring, PETRA, he designed the TASSO detector, again together with Wiik. In 1979, the TASSO collaboration was able to announce the discovery of the gluon through its spokesperson Wolf, for which he, together with colleagues from TASSO, was awarded the High Energy and Particle Physics Prize of the European Physical Society.

Wolf’s negotiating skills and deep understanding of physics and technology served particle physics worldwide

In 1982 Wolf became the chair of the experiment selection committee for the planned LEP collider at CERN. His deep understanding of physics and technology, and his negotiating skills, were an essential foundation for the successful LEP programme, just one example of how Wolf served particle physics worldwide as a member of international scientific committees.

At the same time, Wolf was involved in the planning of the physics programme for the electron–proton collider HERA. The ZEUS general-purpose detector for experiments at HERA was the work of an international collaboration of more than 400 scientists, which Wolf brought together and led as its spokesperson for many years. The experiments at HERA ran from 1992 to 2007, producing outstanding results that include the direct demonstration of the unification of the weak and electromagnetic forces at high momentum transfers, the precise measurement of the structure of the proton, which is determined by quarks and gluons, and the surprising finding that there are collisions in which the proton remains intact even at the highest momentum transfers. In 2011 Wolf was awarded the Stern–Gerlach Medal of the German Physical Society, its highest award for achievements in experimental physics.

When dealing with colleagues and staff, Günter Wolf was always friendly, helpful, encouraging and inspiring, but at the same time demanding and insistent on precision and scientific excellence. He took the opinions of others seriously, but only a thorough and competent analysis could convince him. As a result, he enjoyed the greatest respect from everyone and became a role model and friend to many. DESY owes its reputation in the international physics community not least to people like him.

A call to engage

The European strategy for particle physics is the cornerstone of Europe’s decision-making process for the long-term future of the field. In March 2024 CERN Council launched the programme for the third update of the strategy. The European Strategy Group (ESG) and the strategy secretariat for this update were established by CERN Council in June 2024 to organise the full process. Over the past few months, important aspects of the process have been set up, and these are described in more detail on the strategy web pages at europeanstrategyupdate.web.cern.ch/welcome.

The Physics Preparatory Group (PPG) will play an important role in distilling the community’s scientific input and scientific discussions at the open symposium in Venice in June 2025 into a “physics briefing book”. At its meeting in September 2024, CERN Council appointed eight members of the PPG, four on the recommendation of the Scientific Policy Committee and four on the recommendation of the European Committee for Future Accelerators (ECFA). In addition, the PPG has one representative from CERN and two representatives each from the Americas and Asia.

The strategy secretariat also proposed to form nine working groups to cover the full range of physics topics as well as the technology areas of accelerators, detectors and computing. The work of these groups will be co-organised by two conveners, with one of them being a member of the PPG. In addition, an early-career researcher has been appointed to each group to act as a scientific secretary. Both the appointments of the co-conveners and of the early-career researchers are important to increase the engagement by the broader community in the current update. The full composition of the PPG, the co-conveners and the scientific secretaries of the working groups is available on the strategy web pages.

Karl Jakobs

The strategy secretariat has also devised guidelines for input by the community. Any submitted documents must be no more than 10 pages long and provide a comprehensive and self-contained summary of the input. Additional information and details can be submitted in a separate backup document that can be consulted by the PPG if clarification on any aspect is required. A backup document is not, however, mandatory.

A major component is the input from national high-energy physics communities, which is expected to be collected individually by each country and, in some cases, by region. The information collected from different countries and regions will be most useful if it is as coherent and uniform as possible when addressing the key issues. To assist with this, ECFA has put together a set of guidelines.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy process, including, but not limited to, particle colliders and collider detectors. These proposals are likely to vary in scale, anticipated timeline and technical maturity. In addition to studying the scientific potential of these projects, the ESG wishes to evaluate the sequence of delivery steps and the challenges associated with delivery, and to understand how each project could fit into the wider roadmap for European particle physics. In order to allow a straightforward comparison of projects, we therefore request that all large-scale projects submit a standardised set of technical data in addition to their physics case and technical description.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy

To allow the community to take into account and to react to the submissions collected by March 2025 and to the content of the briefing book, national communities are offered further opportunities for input: first ahead of the open symposium (see p11), with a deadline of 26 May 2025; and then ahead of the drafting session, with a deadline of 14 November 2025.

In this strategy process the community must converge on a preferred option for the next collider at CERN and identify a prioritised list of alternative options. The outcome of the process will provide the basis for the decision by CERN Council in 2027 or 2028 on the construction of the next large collider at CERN, following the High-Luminosity LHC. Areas of priority for exploration complementary to colliders and for other experiments to be considered at CERN and other laboratories in Europe will also be identified, as well as priorities for participation in projects outside Europe.

Given the importance of this process and its outcomes, I encourage strong community involvement throughout to reach a consensus for the future of our field.

Edoardo Amaldi and the birth of Big Science

Ugo Amaldi beside a portrait of his father Edoardo

Should we start with your father’s involvement in the founding of CERN?

I began hearing my father talk about a new European laboratory while I was still in high school in Rome. Our lunch table was always alive with discussions about science, physics and the vision of this new laboratory. Later, I learned that between 1948 and 1949, my father was deeply engaged in these conversations with two of his friends: Gilberto Bernardini, a well-known cosmic-ray expert, and Bruno Ferretti, a professor of theoretical physics at Rome University. I was 15 years old and those table discussions remain vivid in my memory.

So, the idea of a European laboratory was already being discussed before the 1950 UNESCO meeting?

Yes, indeed. Several eminent European physicists, including my father, Pierre Auger, Lew Kowarski and Francis Perrin, recognised that Europe could only be competitive in nuclear physics through collaborative efforts. All the actors wanted to create a research centre that would stop the post-war exodus of physics talent to North America and help rebuild European science. I now know that my father’s involvement began in 1946 when he travelled to Cambridge, Massachusetts, for a conference. There, he met Nobel Prize winner John Cockcroft, and their conversations planted in his mind the first seeds for a European laboratory.

Parallel to scientific discussions, there was an important political initiative led by Swiss philosopher and writer Denis de Rougemont. After spending the war years at Princeton University, he returned to Europe with a vision of fostering unity and peace. He established the Institute of European Culture in Lausanne, Switzerland, where politicians from France, Britain and Germany would meet. In December 1949, during the European Cultural Conference in Lausanne, French Nobel Prize winner Louis de Broglie sent a letter advocating for a European laboratory where scientists from across the continent could work together peacefully.

The Amaldi family in 1948

My father strongly believed in the importance of accelerators to advance the new field that, at the time, was at the crossroads between nuclear physics and cosmic-ray physics. Before the war, in 1936, he had travelled to Berkeley to learn about cyclotrons from Ernest Lawrence. He even attempted to build a cyclotron in Italy in 1942, profiting from the World’s Fair that was to be held in Rome. Moreover, he was deeply affected by the exodus of talented Italian physicists after the war, including Bruno Rossi, Gian Carlo Wick and Giuseppe Cocconi. He saw CERN as a way to bring these scientists back and rebuild European physics.

How did Isidor Rabi’s involvement come into play?

In 1950 my father was corresponding with Gilberto Bernardini, who was spending a year at Columbia University. There Bernardini mentioned the idea of a European laboratory to Isidor Rabi, who, at the same time, was in contact with other prominent figures in this decentralised and multi-centred initiative. Together with Norman Ramsey, Rabi had previously succeeded, in 1947, in persuading nine northeastern US universities to collaborate under the banner of Associated Universities, Inc., which led to the establishment of Brookhaven National Laboratory.

What is not generally known is that before Rabi gave his famous speech at the fifth assembly of UNESCO in Florence in June 1950, he came to Rome and met with my father. They discussed how to bring this idea to fruition. A few days later, Rabi’s resolution at the UNESCO meeting calling for regional research facilities was a crucial step in launching the project. Rabi considered CERN a peaceful compensation for the fact that physicists had built the nuclear bomb.

How did your father and his colleagues proceed after the UNESCO resolution?

Following the UNESCO meeting, Pierre Auger, at that time director of exact and natural sciences at UNESCO, and my father took on the task of advancing the project. In September 1950 Auger spoke of it at a nuclear physics conference in Oxford, and at a meeting of the International Union of Pure and Applied Physics (IUPAP), my father – one of the vice presidents – urged the executive committee to consider how best to implement the Florence resolution. In May 1951, Auger and my father organised a meeting of experts at UNESCO headquarters in Paris, where a compelling justification for the European project was drafted.

The cost of such an endeavour was beyond the means of any single nation. This led to an intergovernmental conference under the auspices of UNESCO in December 1951, where the foundations for CERN were laid. Funding, totalling $10,000 for the initial meetings of the board of experts, came from Italy, France and Belgium. This was thanks to the financial support of men like Gustavo Colonnetti, president of the Italian Research Council, who had already – a year before – donated the first funds to UNESCO.

Were there any significant challenges during this period?

Not everyone readily accepted the idea of a European laboratory. Eminent physicists like Niels Bohr, James Chadwick and Hendrik Kramers questioned the practicality of starting a new laboratory from scratch. They were concerned about the feasibility and allocation of resources, and preferred the coordination of many national laboratories and institutions. Through skilful negotiation and compromise, Auger and my father incorporated some of the concerns raised by the sceptics into a modified version of the project, ensuring broader support. In February 1952 the first agreement setting up a provisional council for CERN was written and signed, and my father was nominated secretary general of the provisional CERN.

Enrico and Giulio Fermi, Ginestra Amaldi, Laura Fermi, Edoardo and Ugo Amaldi

He worked tirelessly, travelling through Europe to unite the member states and start the laboratory’s construction. In particular, the UK was reluctant to participate fully. They had their own advanced facilities, like the 40 MeV cyclotron at the University of Liverpool. In December 1952 my father visited John Cockcroft, at the time director of the Harwell Atomic Energy Research Establishment, to discuss this. There’s an interesting episode where my father, with Cockcroft, met Frederick Lindemann, Baron Cherwell, a long-time scientific advisor to Winston Churchill. Cherwell dismissed CERN as another “European paper mill.” My father, usually composed, lost his temper and passionately defended the project. During the following visit to Harwell, Cockcroft reassured him that his reaction was appropriate. From that point on, the UK contributed to CERN, albeit initially as a series of donations rather than as the result of a formal commitment. It may be interesting to add that, during the same visit to London and Harwell, my father met the young John Adams and was so impressed that he immediately offered him a position at CERN.

What were the steps following the ratification of CERN’s convention?

Robert Valeur, chairman of the council during the interim period, and Ben Lockspeiser, chairman of the interim finance committee, used their authority to stir up early initiatives and create an atmosphere of confidence that attracted scientists from all over Europe. As Lew Kowarski noted, there was a sense of “moral commitment” to leave secure positions at home and embark on this new scientific endeavour.

During the interim period from May 1952 to September 1954, the council convened three sessions in Geneva whose primary focus was financial management. The organisation began with an initial endowment of approximately 1 million Swiss Francs, which – as I said – included a contribution from the UK known as the “observer’s gift”. At each subsequent session, the council increased its funding, reaching around 3.7 million Swiss Francs by the end of this period. When the permanent organisation was established, an initial sum of 4.1 million Swiss Francs was made available.

Giuseppe Fidecaro, Edoardo Amaldi and Werner Heisenberg at CERN in 1960

In 1954, my father was worried that if the parliaments didn’t approve the convention before winter, then construction would be delayed because of the wintertime. So he took a bold step and, with the approval of the council president, authorised the start of construction on the main site before the convention was fully ratified.

This led to Lockspeiser jokingly remarking later that council “has now to keep Amaldi out of jail”. The provisional council, set up in 1952, was dissolved when the European Organization for Nuclear Research officially came into being in 1954, though the acronym CERN (Conseil Européen pour la Recherche Nucléaire) was retained. By the conclusion of the interim period, CERN had grown significantly. A critical moment occurred on 29 September 1954, when a specific point in the ratification procedure was reached at which the assets had no institutional owner. During this eight-day period, my father, serving as secretary general, held them personally on behalf of the newly forming permanent organisation. The interim phase concluded with the first meeting of the permanent council, marking the end of CERN’s formative years.

Did your father ever consider becoming CERN’s Director-General?

People asked him to be Director-General, but he declined for two reasons. First, he wanted to return to his students and his cosmic-ray research in Rome. Second, he didn’t want people to think he had done all this to secure a prominent position. He believed in the project for its own sake.

When the convention was finally ratified in 1954, the council offered the position of Director-General to Felix Bloch, a Swiss–American physicist and Nobel Prize winner for his work on nuclear magnetic resonance. Bloch accepted but insisted that my father serve as his deputy. My father, dedicated to CERN’s success, agreed to this despite his desire to return to Rome full time.

How did that arrangement work out?

My father agreed but Bloch wasn’t at that time rooted in Europe. He insisted on bringing all his instruments from Stanford so he could continue his research on nuclear magnetic resonance at CERN. He found it difficult to adapt to the demands of leading CERN and soon resigned. The council then elected Cornelis Jan Bakker, a Dutch physicist who had led the synchrocyclotron group, as the new Director-General. From the beginning, he was the person my father thought would have been the ideal director for the initial phase of CERN. Tragically though, Bakker died in a plane crash a year and a half later. I well remember how hard my father was hit by this loss.

How did the development of accelerators at CERN progress?

The decision to adopt the strong focusing principle for the Proton Synchrotron (PS) was a pivotal moment. In August 1952 Otto Dahl, leader of the Proton Synchrotron study group, Frank Goward and Rolf Widerøe visited Brookhaven just as Ernest Courant, Stanley Livingston and Hartland Snyder were developing this new principle. They were so excited by this development that they returned to CERN determined to incorporate it into the PS design. In 1953 Mervyn Hine, a long-time friend of John Adams with whom he had moved to CERN, studied potential issues with misalignment in strong focusing magnets, which led to further refinements in the design. Ultimately, the PS became operational before the comparable accelerator at Brookhaven, marking a significant achievement for European science.

Edoardo Amaldi and Victor Weisskopf in 1974

It’s important here to recognise the crucial contributions of the engineers, who often don’t receive the same level of recognition as physicists. They are the ones who make the work of experimental physicists and theorists possible. “Viki” Weisskopf, Director-General of CERN from 1961 to 1965, compared the situation to the discovery of America. The machine builders are the captains and shipbuilders. The experimentalists are those fellows on the ships who sailed to the other side of the world and wrote down what they saw. The theoretical physicists are those who stayed behind in Madrid and told Columbus that he was going to land in India.

Your father also had a profound impact on the development of other Big Science organisations in Europe

Yes, in 1958 my father was instrumental, together with Pierre Auger, in the founding of what became the European Space Agency. In a letter written in 1958 to his friend Luigi Crocco, who was professor of jet propulsion at Princeton, he wrote that “it is now very much evident that this problem is not at the level of the single states like Italy, but mainly at the continental level. Therefore, if such an endeavour is to be pursued, it must be done on a European scale, as already done for the building of the large accelerators for which CERN was created… I think it is absolutely imperative for the future organisation to be neither military nor linked to any military organisation. It must be a purely scientific organisation, open – like CERN – to all forms of cooperation and outside the participating countries.” This document reflects my father’s vision of peaceful and non-military European science.

How is it possible for one person to contribute so profoundly to science and global collaboration?

My father’s ability to accept defeats and keep pushing forward was key to his success. He was an exceptional person with a clear vision and unwavering dedication. I hope that by sharing these stories, others might be inspired to pursue their goals with the same persistence and passion.

Could we argue that he was not only a visionary but also a relentless advocate?

He travelled extensively, talked to countless people, and was always cheerful and energetic. He accepted setbacks but kept moving forwards. In this connection, I want to mention Eliane Bertrand, later de Modzelewska, his secretary in Rome who later became secretary of the CERN Council for about 20 years, serving under several Directors-General. She left a memoir about those early days, highlighting how my father was always travelling, talking and never stopping. It’s a valuable piece of history that, I think, should be published.

Eliane de Modzelewska

International collaboration has been a recurring theme in your own career. How do you view its importance today?

International collaboration is more critical than ever in today’s world. Science has always been a bridge between cultures and nations, and CERN’s history is a testimony of what this brings to humanity. It transcends political differences and fosters mutual understanding. I hope CERN and the broader scientific community will find ways to maintain these vital connections with all countries. I’ve always believed that fostering a collaborative and inclusive environment is one of the main goals of us scientists. It’s not just about achieving results but also about how we work together and support each other along the way.

Looking ahead, what are your thoughts on the future of CERN and particle physics?

I firmly believe that pursuing higher collision energies is essential. While the Large Hadron Collider has achieved remarkable successes, there’s still much we haven’t uncovered – especially regarding supersymmetry. Even though minimal supersymmetry has not been borne out, I remain convinced that supersymmetry might manifest in ways we haven’t yet understood. Exploring higher energies could reveal supersymmetric particles or other new phenomena.

Like most European physicists, I support the initiative of the Future Circular Collider and starting with an electron–positron collider phase so as to explore new frontiers at two very different energy levels. However, if geopolitical shifts delay or complicate these plans, we should consider pushing hard on alternative strategies like developing the technologies for muon colliders.

Ugo Amaldi first arrived at CERN as a fellow in September 1961. Then, for 10 years at the ISS in Rome, he opened two new lines of research: quasi-free electron scattering on nuclei and atoms. Back at CERN, he developed the Roman pots experimental technique, was a co-discoverer of the rise of the proton–proton cross-section with energy, measured the polarisation of muons produced by neutrinos, proposed the concept of a superconducting electron–positron linear collider, and led LEP’s DELPHI Collaboration. Today, he advances the use of accelerators in cancer treatment as the founder of the TERA Foundation for hadron therapy and as president emeritus of the National Centre for Oncological Hadrontherapy (CNAO) in Pavia. He continues his mother and father’s legacy of authoring high-school physics textbooks used by millions of Italian pupils. His motto is: “Physics is beautiful and useful.”

This interview first appeared in the newsletter of CERN’s experimental physics department. It has been edited for concision.

Isospin symmetry broken more than expected

In the autumn of 2023, Wojciech Brylinski was analysing data from the NA61/SHINE collaboration at CERN for his thesis when he noticed an unexpected anomaly – a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. Instead of producing roughly equal numbers, he found that charged kaons were produced 18.4% more often. This suggested that the “isospin symmetry” between up (u) and down (d) quarks might be broken by more than can be accounted for by the differences in their electric charges and masses – a discrepancy that existing theoretical models would struggle to explain. Known sources of isospin asymmetry only predict deviations of a few percent.

“When Wojciech got started, we thought it would be a trivial verification of the symmetry,” says Marek Gaździcki of Jan Kochanowski University of Kielce, spokesperson of NA61/SHINE at the time of the discovery. “We expected it to be closely obeyed – though we had previously measured discrepancies at NA49, they had large uncertainties and were not significant.”

Isospin symmetry is one facet of flavour symmetry, whereby the strong interaction treats all quark flavours identically, except for kinematic differences arising from their different masses. Strong interactions should therefore generate nearly equal yields of charged K⁺ (us̄) and K⁻ (ūs), and neutral K⁰ (ds̄) and K̄⁰ (d̄s), given the similar masses of the two lightest quarks. NA61/SHINE’s data contradict the hypothesis of equal yields with 4.7σ significance.

“I see two options to interpret the results,” says Francesco Giacosa, a theoretical physicist at Jan Kochanowski University working with NA61/SHINE. “First, we substantially underestimate the role of electromagnetic interactions in creating quark–antiquark pairs. Second, strong interactions do not obey flavour symmetry – if so, this would falsify QCD.” Isospin is not a symmetry of the electromagnetic interaction as up and down quarks have different electric charges.

While the experiment routinely measures particle yields in nuclear collisions, finding a discrepancy in isospin symmetry was not something researchers were actively looking for. NA61/SHINE’s primary focus is studying the phase diagram of high-energy nuclear collisions using a range of ion beams. This includes looking at the onset of deconfinement, the formation of a quark–gluon plasma fireball, and the search for the hypothesised QCD critical point where the transition between hadronic matter and quark–gluon plasma changes from a smooth crossover to a first-order phase transition. Data is also shared with neutrino and cosmic-ray experiments to help refine their models.

The collaboration is now planning additional studies using different projectiles, targets and collision energies to determine whether this effect is unique to certain heavy-ion collisions or a more general feature of high-energy interactions. They have also put out a call to theorists to help explain what might have caused such an unexpectedly large asymmetry.

“The observation of the rather large isospin violation stands in sharp contrast to its validity in a wide range of physical systems,” says Rob Pisarski, a theoretical physicist from Brookhaven National Laboratory. “Any explanation must be special to heavy-ion systems at moderate energy. NA61/SHINE’s discrepancy is clearly significant, and shows that QCD still has the power to surprise our naive expectations.”

Cosmogenic candidate lights up KM3NeT

Muon neutrino

On 13 February 2023, strings of photodetectors anchored to the seabed off the coast of Sicily detected the most energetic neutrino ever observed, smashing previous records. The result was embargoed until the publication of a paper in Nature last month; the KM3NeT collaboration believes the neutrino may have originated in a novel cosmic accelerator, or may even be the first detection of a “cosmogenic” neutrino.

“This event certainly comes as a surprise,” says KM3NeT spokesperson Paul de Jong (Nikhef). “Our measurement converted into a flux exceeds the limits set by IceCube and the Pierre Auger Observatory. If it is a statistical fluctuation, it would correspond to an upward fluctuation at the 2.2σ level. That is unlikely, but not impossible.” With an estimated energy of a remarkable 220 PeV, the neutrino observed by KM3NeT surpasses IceCube’s record by almost a factor of 30.

The existence of ultra-high-energy cosmic neutrinos has been theorised since the 1960s, when astrophysicists began to conceive ways that extreme astrophysical environments could generate particles with very high energies. At about the same time, Arno Penzias and Robert Wilson discovered “cosmic microwave background” (CMB) photons emitted in the era of recombination, when the primordial plasma cooled down and the universe became electrically neutral. Cosmogenic neutrinos were soon hypothesised to result from ultra-high-energy cosmic rays interacting with the CMB. They are expected to have energies above 100 PeV (10¹⁷ eV); however, their abundance is uncertain as it depends on cosmic rays, whose sources are still cloaked in intrigue (CERN Courier July/August 2024 p24).

A window to extreme events

But how might they be detected? In this regard, neutrinos present a dichotomy: though outnumbered in the cosmos only by photons, they are notoriously elusive. However, it is precisely their weakly interacting nature that makes them ideal for investigating the most extreme regions of the universe. Cosmic neutrinos travel vast cosmic distances without being scattered or absorbed, providing a direct window into their origins, and enabling scientists to study phenomena such as black-hole jets and neutron-star mergers. Such extreme astrophysical sources test the limits of the Standard Model at energy scales many times higher than is possible in terrestrial particle accelerators.

Because they are so weakly interacting, studying cosmic neutrinos requires giant detectors. Today, three large-scale neutrino telescopes are in operation: IceCube, in Antarctica; KM3NeT, under construction deep in the Mediterranean Sea; and Baikal–GVD, under construction in Lake Baikal in southern Siberia. So far, IceCube, whose construction was completed over 10 years ago, has enabled significant advancements in cosmic-neutrino physics, including the first observation of the Glashow resonance, wherein a 6 PeV electron antineutrino interacts with an electron in the ice sheet to form an on-shell W boson, and the discovery of neutrinos emitted by “active galaxies” powered by a supermassive black hole accreting matter. The previous record-holder for the highest recorded neutrino energy, IceCube has also searched for cosmogenic neutrinos but has not yet observed neutrino candidates above 10 PeV.

Its new northern-hemisphere colleague, KM3NeT, consists of two subdetectors: ORCA, designed to study neutrino properties, and ARCA, which made this detection, designed to detect high-energy cosmic neutrinos and find their astronomical counterparts. Its deep-sea arrays of optical sensors detect Cherenkov light emitted by charged particles created when a neutrino interacts with a quark or electron in the water. At the time of the 2023 event, ARCA comprised 21 vertical detection units, each around 700 m in length. Its location 3.5 km deep under the sea reduces background noise, and its sparse layout over one cubic kilometre optimises the detector for neutrinos of higher energies.

The event that KM3NeT observed in 2023 is thought to be a single muon created by the charged-current interaction of an ultra-high-energy muon neutrino. The muon then crossed horizontally through the entire ARCA detector, emitting Cherenkov light that was picked up by a third of its active sensors. “If it entered the sea as a muon, it would have travelled some 300 km water-equivalent in water or rock, which is impossible,” explains de Jong. “It is most likely the result of a muon neutrino interacting with sea water some distance from the detector.”

The network will improve the chances of detecting new neutrino sources

The best estimate for the neutrino energy of 220 PeV hides substantial uncertainties, given the unknown interaction point and the need to correct for an undetected hadronic shower. The collaboration expects the true value to lie between 110 and 790 PeV with 68% confidence. “The neutrino energy spectrum is steeply falling, so there is a tug-of-war between two effects,” explains de Jong. “Low-energy neutrinos must give a relatively large fraction of their energy to the muon and interact close to the detector, but they are numerous; high-energy neutrinos can interact further away, and give a smaller fraction of their energy to the muon, but they are rare.”

More data is needed to understand the sources of ultra-high-energy neutrinos such as that observed by KM3NeT, where construction has continued in the two years since this remarkable early detection. So far, 33 of 230 ARCA detection units and 24 of 115 ORCA detection units have been installed. Once construction is complete, likely by the end of the decade, KM3NeT will be similar in size to IceCube.

“Once KM3NeT and Baikal–GVD are fully constructed, we will have three large-scale neutrino telescopes of about the same size in operation around the world,” adds Mauricio Bustamante, theoretical astroparticle physicist at the Niels Bohr Institute of the University of Copenhagen. “This expanded network will monitor the full sky with nearly equal sensitivity in any direction, improving the chances of detecting new neutrino sources, including faint ones in new regions of the sky.”

CERN gears up for tighter focusing

When it comes online in 2030, the High-Luminosity LHC (HL-LHC) will feel like a new collider. The hearts of the ATLAS and CMS detectors, and 1.2 km of the 27 km-long Large Hadron Collider (LHC) ring will have been transplanted with cutting-edge technologies that will push searches for new physics into uncharted territory.

On the accelerator side, one of the most impactful upgrades will be the brand-new final focusing systems just before the proton or ion beams arrive at the interaction points. In the new “inner triplets”, particles will slalom in a more focused and compacted way than ever before towards collisions inside the detectors.

To achieve the required focusing strength, the new quadrupole magnets will use Nb3Sn conductors for the first time in an accelerator. Nb3Sn will allow fields as high as 11.5 T, compared to 8.5 T for the conventional NbTi bending magnets used elsewhere in the LHC. As they are a new technology, an integrated test stand of the full 60 m-long inner-triplet assembly is essential – and work is now in full swing.

Learning opportunity

“The main challenge at this stage is the interconnections between the magnets, particularly the interfaces between the magnets and the cryogenic line,” explains Marta Bajko, who leads work on the inner-triplet-string test facility. “During this process, we have encountered nonconformities, out-of-tolerance components, and other difficulties – expected challenges given that these connections are being made for the first time. This phase is a learning opportunity for everyone involved, allowing us to refine the installation process.”

The last magnet – one of two built in the US – is expected to be installed in May. Before then, the so-called N lines, which enable the electrical connections between the different magnets, will be pulled through the entire magnet chain to prepare for splicing the cables together. Individual system tests and short-circuit tests have already been successfully performed and a novel alignment system developed for the HL-LHC is being installed on each magnet. Mechanical transfer function measurements of some magnets are ongoing, while electrical integrity tests in a helium environment have been successfully completed, along with the pressure and leak test of the superconducting link.

“Training the teams is at the core of our focus, as this setup provides the most comprehensive and realistic mock-up before the installations are to be done in the tunnel,” says Bajko. “The surface installation, located in a closed and easily accessible building near the teams’ workshops and laboratories, offers an invaluable opportunity for them to learn how to perform their tasks effectively. This training often takes place alongside other teams, under real installation constraints, allowing them to gain hands-on experience in a controlled yet authentic environment.”

The inner triplet string is composed of a separation and recombination dipole, a corrector-package assembly and a quadrupole triplet. The dipole combines the two counter-rotating beams into a single channel; the corrector package fine-tunes beam parameters; and the quadrupole triplet focuses the beam onto the interaction point.

Quadrupole triplets have been a staple of accelerator physics since they were first implemented in the early 1950s at synchrotrons such as the Brookhaven Cosmotron and CERN’s Proton Synchrotron. Quadrupole magnets are like lenses that are convex (focusing) in one transverse plane and concave (defocusing) in the other, transporting charged particles like beams of light on an optician’s bench. In a quadrupole triplet, the focusing plane alternates with each quadrupole magnet. The effect is to precisely focus the particle beams onto tight spots within the LHC experiments, maximising the number of particles that interact, and increasing the statistical power available to experimental analyses.
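
As a rough illustration of the alternating-gradient principle described above, the following sketch propagates a particle through a thin-lens quadrupole triplet using standard transfer matrices. The focal lengths and drift lengths are arbitrary, invented values, not HL-LHC parameters; the point is only that the combined system focuses in both transverse planes.

```python
import numpy as np

def thin_quad(f):
    """Thin-lens quadrupole of focal length f: focusing for f > 0, defocusing for f < 0."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

def drift(L):
    """Field-free drift of length L (metres)."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

# Illustrative focusing-defocusing-focusing triplet in the horizontal plane;
# the vertical plane sees the opposite gradient signs. Values are arbitrary.
f, L = 25.0, 10.0
horizontal = thin_quad(+f) @ drift(L) @ thin_quad(-f / 2) @ drift(L) @ thin_quad(+f)
vertical   = thin_quad(-f) @ drift(L) @ thin_quad(+f / 2) @ drift(L) @ thin_quad(-f)

# The (2,1) element of each transfer matrix acts like -1/f_effective:
# a negative value means net focusing in that plane.
print("1/f_eff horizontal:", -horizontal[1, 0])   # ~0.019 per metre -> focusing
print("1/f_eff vertical:  ", -vertical[1, 0])     # ~0.045 per metre -> focusing

# Track a particle entering with a 1 mm horizontal offset and zero angle
x_in = np.array([1e-3, 0.0])      # [position (m), angle (rad)]
print("horizontal state after triplet:", horizontal @ x_in)
```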

Nb3Sn is strategically important because it lays the foundation for future high-energy colliders

Though quadrupole triplets are a time-honoured technique, Nb3Sn brings new challenges. The HL-LHC magnets are the first accelerator magnets to be built at lengths of up to 7 m, and the technical teams at CERN and in the US collaboration – each of which is responsible for half the total “cold mass” production – have decided to produce two variants, primarily driven by differences in available production and testing infrastructure.

Since 2011, engineers and accelerator physicists have been hard at work designing and testing the new magnets and their associated powering, vacuum, alignment, cryogenic, cooling and protection systems. Each component of the HL-LHC will be individually tested before installation in the LHC tunnel. However, this is only half the story, as all components must be integrated and operated within the machine, where they will all share a common electrical and cooling circuit. Throughout the rest of 2025, the inner-triplet string will test the integration of all these components, evaluating them in terms of their collective behaviour, in preparation for hardware commissioning and nominal operation.

“We aim to replicate the operational processes of the inner-triplet string using the same tools planned for the HL-LHC machine,” says Bajko. “The control systems and software packages are in an advanced stage of development, prepared through extensive collaboration across CERN, involving three departments and nine equipment groups. The inner-triplet-string team is coordinating these efforts and testing them as if operating from the control room – launching tests in short-circuit mode and verifying system performance to provide feedback to the technical teams and software developers. The test programme has been integrated into a sequencer, and testing procedures are being approved by the relevant stakeholders.”

Return on investment

While Nb3Sn offers significant advantages over NbTi, manufacturing magnets with it presents several challenges. It requires high-temperature heat treatment after winding, and is brittle and fragile, making it more difficult to handle than the ductile NbTi. As the HL-LHC Nb3Sn magnets operate at higher current and energy densities, quench protection is more challenging, and the possibility of a sudden loss of superconductivity requires a faster and more robust protection system.

The R&D required to meet these challenges will provide returns long into the future, says Susana Izquierdo Bermudez, who is responsible at CERN for the new HL-LHC magnets.

“CERN’s investment in R&D for Nb3Sn is strategically important because it lays the foundation for future high-energy colliders. Its increased field strength is crucial for enabling more powerful focusing and bending magnets, allowing for higher beam energies and more compact accelerator designs. This R&D also strengthens CERN’s expertise in advanced superconducting materials and technology, benefitting applications in medical imaging, energy systems and industrial technologies.”

The inner-triplet string will remain an installation on the surface at CERN and is expected to operate until early 2027. Four identical assemblies will be installed underground in the LHC tunnel from 2028 to 2029, during Long Shutdown 3. They will be located 20 m away on either side of the ATLAS and CMS interaction points.

How to unfold with AI

Open-science unfolding

All scientific measurements are affected by the limitations of measuring devices. To make a fair comparison between data and a scientific hypothesis, theoretical predictions must typically be smeared to approximate the known distortions of the detector. Data is then compared with theory at the level of the detector’s response. This works well for targeted measurements, but the detector simulation must be reapplied to the underlying physics model for every new hypothesis.

The alternative is to try to remove detector distortions from the data, and compare with theoretical predictions at the level of the theory. Once detector effects have been “unfolded” from the data, analysts can test any number of hypotheses without having to resimulate or re-estimate detector effects – a huge advantage for open science and data preservation that allows comparisons between datasets from different detectors. Physicists without access to the smearing functions can only use unfolded data.
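
As a toy illustration of the forward-folding approach described above, a detector response matrix built from simulation can be applied to a truth-level histogram to obtain the prediction at detector level. All numbers, the exponential “theory” spectrum and the Gaussian smearing model below are invented for the example, and efficiency and acceptance effects are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(0.0, 10.0, 21)    # 20 bins of some observable (arbitrary units)

# Build a response matrix R[i, j] = P(reconstructed bin i | true bin j) from a
# Monte Carlo sample with a simple Gaussian smearing of width 0.5.
truth_mc = rng.exponential(scale=3.0, size=1_000_000)
reco_mc = truth_mc + rng.normal(0.0, 0.5, size=truth_mc.size)
R, _, _ = np.histogram2d(reco_mc, truth_mc, bins=[bins, bins])
R /= np.maximum(R.sum(axis=0, keepdims=True), 1.0)   # normalise each truth column

# Forward-fold a truth-level "theory" histogram to detector level, where it can
# be compared directly with the measured, detector-level distribution.
theory_truth, _ = np.histogram(rng.exponential(scale=3.0, size=100_000), bins=bins)
theory_detector_level = R @ theory_truth
```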

No simple task

But unfolding detector distortions is no simple task. If the mathematical problem is solved through a straightforward inversion, using linear algebra, noisy fluctuations are amplified, resulting in large uncertainties. Some sort of “regularisation” must be imposed to smooth the fluctuations, but algorithms vary substantively and none is preeminent. Their scope has remained limited for decades. No traditional algorithm is capable of reliably unfolding detector distortions from data relative to more than a few observables at a time.

In the past few years, a new technique has emerged. Rather than unfolding detector effects from only one or two observables, it can unfold detector effects from multiple observables in a high-dimensional space; and rather than unfolding detector effects from binned histograms, it unfolds detector effects from an unbinned distribution of events. This technique is inspired by both artificial-intelligence techniques and the uniquely sparse and high-dimensional data sets of the LHC.

An ill-posed problem

Unfolding is used in many fields. Astronomers unfold point-spread functions to reveal true sky distributions. Medical physicists unfold detector distortions from CT and MRI scans. Geophysicists use unfolding to infer the Earth’s internal structure from seismic-wave data. Economists attempt to unfold the true distribution of opinions from incomplete survey samples. Engineers use deconvolution methods for noise reduction in signal processing. But in recent decades, no field has had a greater need to innovate unfolding techniques than high-energy physics, given its complex detectors, sparse datasets and stringent standards for statistical rigour.

In traditional unfolding algorithms, analysers first choose which quantity they are interested in measuring. An event generator then creates a histogram of the true values of this observable for a large sample of events in their detector. Next, a Monte Carlo simulation models the detector response, accounting for noise, background modelling, acceptance effects, reconstruction errors, misidentification errors and energy smearing. A matrix is constructed that transforms the histogram of the true values of the observable into the histogram of detector-level events. Finally, analysts “invert” the matrix and apply it to data, to unfold detector effects from the measurement.
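
A continuation of the toy example above (same invented response matrix; the pseudo-data are an independent smeared sample) shows why the final inversion step is delicate: solving the matrix equation directly propagates statistical noise into large bin-to-bin fluctuations in the unfolded spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 10.0, 21)

# Response matrix from simulation, as in the previous sketch
truth_mc = rng.exponential(3.0, size=1_000_000)
reco_mc = truth_mc + rng.normal(0.0, 0.5, size=truth_mc.size)
R, _, _ = np.histogram2d(reco_mc, truth_mc, bins=[bins, bins])
R /= np.maximum(R.sum(axis=0, keepdims=True), 1.0)

# Pseudo-data: an independent truth sample, smeared by the same toy detector
truth_data = rng.exponential(3.0, size=20_000)
reco_data, _ = np.histogram(truth_data + rng.normal(0.0, 0.5, truth_data.size), bins=bins)

# "Unfold" by direct matrix inversion: in this toy the result fluctuates strongly
# from bin to bin around the true histogram, far beyond its Poisson uncertainties.
unfolded_naive = np.linalg.solve(R, reco_data.astype(float))
truth_hist, _ = np.histogram(truth_data, bins=bins)
print(np.c_[truth_hist, unfolded_naive.round(1)])
```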

How to unfold traditionally

Diverse algorithms have been invented to unfold distortions from data, with none yet achieving preeminence.

• Developed by Soviet mathematician Andrey Tikhonov in the late 1940s, Tikhonov regularisation (TR) frames unfolding as a minimisation problem with a penalty term added to suppress fluctuations in the solution.

• In the 1950s, the statistical physicist Edwin Jaynes took inspiration from information theory to seek solutions with maximum entropy, aiming to minimise bias beyond the data constraints.

• Between the 1960s and the 1990s, high-energy physicists increasingly drew on the linear algebra of 19th-century mathematicians Eugenio Beltrami and Camille Jordan to develop singular value decomposition as a pragmatic way to suppress noisy fluctuations.

• In the 1990s, Giulio D’Agostini and other high-energy physicists developed iterative Bayesian unfolding (IBU) – a technique similar to Lucy–Richardson deconvolution, which was developed independently in astronomy in the 1970s. An explicitly probabilistic approach well suited to complex detectors, IBU may be considered a forerunner of the neural-network-based technique described in this article.

IBU and TR are the most widely used approaches in high-energy physics today, with the RooUnfold tool started by Tim Adye serving countless analysts.

At this point in the analysis, the ill-posed nature of the problem presents a major challenge. A simple matrix inversion seldom suffices as statistical noise produces large changes in the estimated input. Several algorithms have been proposed to regularise these fluctuations. Each comes with caveats and constraints, and there is no consensus on a single method that outperforms the rest (see “How to unfold traditionally” panel).
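
For comparison, a minimal sketch of one such regularised method, iterative Bayesian unfolding as named in the panel above, is shown below. It reuses the toy response matrix and pseudo-data from the previous sketches, and the number of iterations plays the role of the regularisation strength. This illustrates the idea only; it is not the implementation used in RooUnfold.

```python
import numpy as np

def ibu(reco_data, R, prior, n_iter=4):
    """Iterative Bayesian (D'Agostini-style) unfolding.

    reco_data : detector-level histogram (length m)
    R         : response matrix, R[i, j] = P(reco bin i | truth bin j)
    prior     : starting guess for the truth spectrum (length n)
    n_iter    : number of iterations; fewer iterations means stronger regularisation
    """
    t = prior.astype(float).copy()
    eff = np.maximum(R.sum(axis=0), 1e-12)        # per-truth-bin efficiency
    for _ in range(n_iter):
        folded = np.maximum(R @ t, 1e-12)         # expected detector-level spectrum
        posterior = R * t / folded[:, None]       # P(truth j | reco i), via Bayes' theorem
        t = (posterior.T @ reco_data) / eff       # updated truth estimate
    return t

# Example call, reusing R and reco_data from the previous sketch and a flat prior:
# unfolded = ibu(reco_data, R, prior=np.full(R.shape[1], reco_data.sum() / R.shape[1]))
```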

While these approaches have been successfully applied to thousands of measurements at the LHC and beyond, they have limitations. Histogramming is an efficient way to describe the distributions of one or two observables, but the number of bins grows exponentially with the number of parameters, restricting the number of observables that can be simultaneously unfolded. When unfolding only a few observables, model dependence can creep in, for example due to acceptance effects, and if another scientist wants to change the bin sizes or measure a different observable, they will have to redo the entire process.

New possibilities

AI opens up new possibilities for unfolding particle-physics data. Choosing good parameterisations in a high-dimensional space is difficult for humans, and binning is a way to limit the number of degrees of freedom in the problem, making it more tractable. Machine learning (ML) offers flexibility due to the large number of parameters in a deep neural network. Dozens of observables can be unfolded at once, and unfolded datasets can be published as an unbinned collection of individual events that have been corrected for detector distortions as an ensemble.

Unfolding performance

One way to represent the result is as a set of simulated events with weights that encode information from the data. For example, if there are 10 times as many simulated events as real events, the average weight would be about 0.1, with the distribution of weights correcting the simulation to match reality, and errors on the weights reflecting the uncertainties inherent in the unfolding process. This approach gives maximum flexibility to future analysts, who can recombine them into any binning or combination they desire. The weights can be used to build histograms or compute statistics. The full covariance matrix can also be extracted from the weights, which is important for downstream fits.
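
A toy sketch of this representation is given below. The event variables and weights are invented, and the per-bin statistical variance from summed squared weights is a simplification that ignores correlations between the weights; the point is only that a later analyst can re-bin the unbinned, weighted result however they like.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "unfolded" result: simulated events carrying several observables each,
# plus one unfolding weight per event (around 0.1 if there are 10x more
# simulated events than real events, as in the example above).
n_sim = 200_000
events = {
    "pt":   rng.exponential(30.0, n_sim),   # invented observables, arbitrary units
    "eta":  rng.normal(0.0, 2.0, n_sim),
    "mass": rng.normal(91.0, 5.0, n_sim),
}
weights = rng.normal(0.1, 0.02, n_sim).clip(min=0.0)

def unfolded_histogram(observable, bins):
    """Re-bin the unbinned unfolded result for any observable and any binning."""
    vals = events[observable]
    counts, edges = np.histogram(vals, bins=bins, weights=weights)
    # Naive per-bin statistical variance: sum of squared weights in each bin
    # (ignores correlations between the weights).
    var, _ = np.histogram(vals, bins=bins, weights=weights**2)
    return counts, np.sqrt(var), edges

counts, errors, edges = unfolded_histogram("pt", bins=np.linspace(0.0, 150.0, 31))
```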

But how do we know the unfolded values are capturing the truth, and not just “hallucinations” from the AI model?

An important validation step for these analyses is to perform tests on synthetic data with a known answer. Analysts take new simulation models, different from the one being used for the primary analysis, and treat them as if they were real data. By unfolding these alternative simulations, researchers are able to compare their results to a known answer. If the biases are large, analysts will need to refine their methods to reduce the model-dependency. If the biases are small compared to the other uncertainties, then this remaining difference can be added into the total uncertainty estimate, which is calculated in the traditional way using hundreds of simulations. In unfolding problems, the choice of regularisation method and strength always involves some tradeoff between bias and variance.

Just as unfolding in two dimensions instead of one with traditional methods can reduce model dependence by incorporating more aspects of the detector response, ML methods use the same underlying principle to include as much of the detector response as possible. Learning differences between data and simulation in high-dimensional spaces is the kind of task that ML excels at, and the results are competitive with established methods (see “Better performance” figure).

Neural learning

In the past few years, AI techniques have proven to be useful in practice, yielding publications from the LHC experiments, the H1 experiment at HERA and the STAR experiment at RHIC. The key idea underpinning the strategies used in each of these results is to use neural networks to learn a function that can reweight simulated events to look like data. The neural network is given a list of relevant features about an event such as the masses, energies and momenta of reconstructed objects, and trained to output the probability that it is from a Monte Carlo simulation or the data itself. Neural connections that reweight and combine the inputs across multiple layers are iteratively adjusted depending on the network’s performance. The network thereby learns the relative densities of the simulation and data throughout phase space. The ratio of these densities is used to transform the simulated distribution into one that more closely resembles real events (see “OmniFold” figure).

Illustration of AI unfolding using the OmniFold algorithm
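
A minimal sketch of a single reweighting step of this kind is shown below, using scikit-learn’s MLPClassifier as a stand-in for the deep neural networks used in practice; the two-feature Gaussian “simulation” and “data” samples are invented. The classifier’s output is converted into per-event weights via the likelihood-ratio trick, so that the weighted simulation approximates the data. This illustrates the idea rather than reproducing the OmniFold implementation used by the experiments.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Invented detector-level features: the "data" differ slightly in mean and width
sim = rng.normal(loc=0.0, scale=1.0, size=(100_000, 2))
data = rng.normal(loc=0.2, scale=1.1, size=(50_000, 2))

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])   # 0 = simulation, 1 = data

# Train a classifier to distinguish simulation from data
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, early_stopping=True)
clf.fit(X, y)

# Likelihood-ratio trick: for classifier score s, p_data/p_sim is approximately
# s/(1-s), corrected for the different sample sizes of the two classes.
s = clf.predict_proba(sim)[:, 1].clip(1e-6, 1 - 1e-6)
weights = (s / (1.0 - s)) * (len(sim) / len(data))

# The weighted simulation should now resemble the data in this feature space
print("data mean:", data[:, 0].mean(),
      " weighted-sim mean:", np.average(sim[:, 0], weights=weights))
```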

As this is a recently developed technique, there are plenty of opportunities for new developments and improvements. These strategies are in principle capable of handling significant levels of background subtraction as well as acceptance and efficiency effects, but existing LHC measurements using AI-based unfolding generally have small backgrounds. And as with traditional methods, there is a risk in trying to estimate too many parameters from not enough data. This is typically controlled by stopping the training of the neural network early, combining multiple trainings into a single result, and performing cross-validation on different subsets of the data.

Beyond the “OmniFold” methods we are developing, an active community is also working on alternative techniques, including ones based on generative AI. Researchers are also considering creative new ways to use these unfolded results that aren’t possible with traditional methods. One possibility in development is unfolding not just a selection of observables, but the full event. Another intriguing direction could be to generate new events with the corrections learnt by the network built-in. At present, the result of the unfolding is a reweighted set of simulated events, but once the neural network has been trained, its reweighting function could be used to simulate the unfolded sample from scratch, simplifying the output.

CERN and ESA: a decade of innovation

Sky maps

Particle accelerators and spacecraft both operate in harsh radiation environments, extreme temperatures and high vacuum. Each must process large amounts of data quickly and autonomously. Much can be gained from cooperation between scientists and engineers in each field.

Ten years ago, the European Space Agency (ESA) and CERN signed a bilateral cooperation agreement to share expertise and facilities. The goal was to expand the limits of human knowledge and keep Europe at the leading edge of progress, innovation and growth. A decade on, CERN and ESA have collaborated on projects ranging from cosmology and planetary exploration to Earth observation and human spaceflight, supporting new space-tech ventures and developing electronic systems, radiation-monitoring instruments and irradiation facilities.

1. Mapping the universe

The Euclid space telescope is exploring the dark universe by mapping the large-scale structure of billions of galaxies out to 10 billion light-years across more than a third of the sky. With tens of petabytes expected in its final data set – already a substantial reduction of the 850 billion bits of compressed images Euclid processes each day – it will generate more data than any other ESA mission by far.

With many CERN cosmologists involved in testing theories of beyond-the-Standard-Model physics, Euclid first became a CERN-recognised experiment in 2015. CERN also contributes to the development of Euclid’s “science ground segment” (SGS), which processes raw data received from the Euclid spacecraft into usable scientific products such as galaxy catalogues and dark-matter maps. CERN’s virtual-machine file system (CernVM-FS) has been integrated into the SGS to allow continuous software deployment across Euclid’s nine data centres and on developers’ laptops.

The telescope was launched in July 2023 and began observations in February 2024. The first piece of its great map of the universe was released in October 2024, showing millions of stars and galaxies and covering 132 square degrees of the southern sky (see “Sky map” figure). Based on just two weeks of observations, it accounts for just 1% of the project’s six-year survey, which will be the largest cosmic map ever made.

Future CERN–ESA collaborations on cosmology, astrophysics and multimessenger astronomy are likely to include the Laser Interferometer Space Antenna (LISA) and the NewAthena X-ray observatory. LISA will be the first space-based observatory to study gravitational waves. NewAthena will study the most energetic phenomena in the universe. Both projects are expected to be ready to launch about 10 years from now.

2. Planetary exploration

Though planetary exploration is conceptually far from fundamental physics, its technical demands require similar expertise. A good example is the Jupiter Icy Moons Explorer (JUICE) mission, which will make detailed observations of the gas giant and its three large ocean-bearing moons Ganymede, Callisto and Europa.

Jupiter’s magnetosphere is roughly a million times greater in volume than Earth’s, trapping large fluxes of highly energetic electrons and protons. Before JUICE, the direct and indirect impact of high-energy electrons on modern electronic devices, and in particular their ability to cause “single-event effects”, had never been studied. Two test campaigns took place in the VESPER facility, which is part of the CERN Linear Electron Accelerator for Research (CLEAR) project. Components were tested with tuneable beam energies between 60 and 200 MeV, and average fluxes of roughly 10⁸ electrons per square centimetre per second, mirroring expected radiation levels in the Jovian system.

JUICE radiation-monitor measurements

JUICE was successfully launched in April 2023, starting an epic eight-year journey to Jupiter including several flyby manoeuvres that will be used to commission the onboard instruments (see “Flyby” figure). JUICE should reach Jupiter in July 2031. It remains to be seen whether test results obtained at CERN have successfully de-risked the mission.

Another interesting example of cooperation on planetary exploration is the Mars Sample Return mission, which must operate in low temperatures during eclipse phases. CERN supported the main industrial partner, Thales Alenia Space, in qualifying the orbiter’s thermal-protection systems in cryogenic conditions.

3. Earth observation

Earth observation from orbit has applications ranging from environmental monitoring to weather forecasting. CERN and ESA collaborate both on developing the advanced technologies required by these applications and ensuring they can operate in the harsh radiation environment of space.

In 2017 and 2018, ESA teams came to CERN’s North Area with several partner companies to test the performance of radiation monitors, field-programmable gate arrays (FPGAs) and electronics chips in ultra-high-energy ion beams at the Super Proton Synchrotron. The tests mimicked the ultra-high-energy part of the galactic cosmic-ray spectrum, whose effects had never previously been measured on the ground beyond 10 GeV/nucleon. In 2017, ESA’s standard radiation-environment monitor and several FPGAs and multiprocessor chips were tested with xenon ions. In 2018, the highlight of the campaign was the testing of Intel’s Myriad-2 artificial intelligence (AI) chip with lead ions (see “Space AI” figure). Following its radiation characterisation and qualification, in 2020 the chip embarked on the φ-sat-1 mission to autonomously detect clouds using images from a hyperspectral camera.

Myriad 2 chip testing

More recently, CERN joined Edge SpAIce – an EU project that will monitor ecosystems and track plastic pollution in the oceans from onboard the Balkan-1 satellite. The project will use CERN’s high-level synthesis for machine learning (hls4ml) AI technology to run inference models on an FPGA that will be launched in 2025.

Looking further ahead, ESA’s φ-lab and CERN’s Quantum Technology Initiative are sponsoring two PhD programmes to study the potential of quantum machine learning, generative models and time-series processing to advance Earth observation. Applications may accelerate the task of extracting features from images to monitor natural disasters, deforestation and the impact of environmental effects on the lifecycle of crops.

4. Dosimetry for human spaceflight

In space, nothing is more important than astronauts’ safety and wellbeing. To this end, in August 2021 ESA astronaut Thomas Pesquet activated the LUMINA experiment inside the International Space Station (ISS), as part of the ALPHA mission (see “Space dosimetry” figure). Developed under the coordination of the French Space Agency and the Laboratoire Hubert Curien at the Université Jean-Monnet-Saint-Étienne and iXblue, LUMINA uses two several-kilometre-long phosphorus-doped optical fibres as active dosimeters to measure ionising radiation aboard the ISS.

ESA astronaut Thomas Pesquet

When exposed to radiation, optical fibres experience a partial loss of transmitted power. Using a reference control channel, the radiation-induced attenuation can be accurately measured and related to the total ionising dose, with the sensitivity of the device governed primarily by the length of the fibre. Having studied optical-fibre-based technologies for many years, CERN helped optimise the architecture of the dosimeters and performed irradiation tests to calibrate the instrument, which will operate on the ISS for a period of up to five years.
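The conversion at the heart of that measurement can be written in a few lines. The sketch below is purely illustrative rather than the LUMINA flight algorithm: the fibre length and calibration coefficient are placeholder values, and it assumes a simple linear dose response, whereas a real calibration curve from irradiation tests is generally more complicated.

```python
# Illustrative only: convert a transmitted-power measurement, referenced to a
# control channel, into radiation-induced attenuation (dB) and then into an
# estimated total ionising dose. All numbers are placeholders.
import math

FIBRE_LENGTH_KM = 2.0          # longer fibre -> more attenuation per unit dose
CAL_DB_PER_KM_PER_GY = 5.0     # hypothetical calibration coefficient

def dose_from_powers(p_measured_mw: float, p_reference_mw: float) -> float:
    """Estimated total ionising dose (gray) from measured and reference powers."""
    ria_db = 10.0 * math.log10(p_reference_mw / p_measured_mw)  # induced loss in dB
    return ria_db / (CAL_DB_PER_KM_PER_GY * FIBRE_LENGTH_KM)

# Example: a 10% drop in transmitted power relative to the reference channel
# corresponds to about 0.046 Gy with these placeholder numbers.
print(dose_from_powers(0.90, 1.00))
```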

LUMINA complements dosimetry measurements performed on the ISS using CERN’s Timepix technology – an offshoot of the hybrid-pixel-detector technology developed for the LHC experiments (CERN Courier September/October 2024 p37). Timepix dosimeters have been integrated in multiple NASA payloads since 2012.

5. Radiation-hardness assurance

It’s no mean feat to ensure that CERN’s accelerator infrastructure functions in increasingly challenging radiation environments. Similar challenges are found in space. Damage can be caused by accumulating ionising doses, single-event effects (SEEs) or so-called displacement damage dose, which dislodges atoms within a material’s crystal lattice rather than ionising them. Radiation-hardness assurance (RHA) reduces radiation-induced failures in space through environment simulations, part selection and testing, radiation-tolerant design, worst-case analysis and shielding definition.

Since its creation in 2008, CERN’s Radiation to Electronics project has amplified the work of many equipment and service groups in modelling, mitigating and testing the effect of radiation on electronics. A decade later, joint test campaigns with ESA demonstrated the value of CERN’s facilities and expertise to RHA for spaceflight. This led to the signing, in 2019, of a joint protocol on radiation environments, technologies and facilities, which also covered radiation detectors, radiation-tolerant systems and components, and simulation tools.

CHARM facility

Among CERN’s facilities is CHARM: the CERN high-energy-accelerator mixed-field facility, which offers an innovative approach to low-cost RHA. CHARM’s radiation field is generated by the interaction between a 24 GeV/c beam from the Proton Synchrotron and a metallic target. CHARM offers a uniquely wide spectrum of radiation types and energies, the possibility to adjust the environment using mobile shielding, and enough space to test a medium-sized satellite in full operating conditions.

Radiation testing is particularly challenging for the new generation of rapidly developed and often privately funded “new space” projects, which frequently make use of commercial off-the-shelf (COTS) components. Here, RHA relies on testing and mitigation rather than radiation hardening by design. For “flip chip” configurations, which have their active circuitry facing inward toward the substrate, and for dense three-dimensional structures that cannot be directly exposed without compromising their performance, heavy-ion beams accelerated to between 10 and 100 MeV/nucleon are the only way to induce SEEs in the sensitive semiconductor volumes of the devices.

To enable the testing of highly integrated electronic components, ESA supported studies to develop the CHARM heavy ions for micro-electronics reliability-assurance facility – CHIMERA for short (see “CHIMERA” figure). ESA has sponsored key feasibility activities such as tuning the ion flux over a large dynamic range, tuning the beam size for board-level testing, and reducing the beam energy to maximise the frequency of SEEs while maintaining a penetration depth of a few millimetres in silicon.

6. In-orbit demonstrators

Weighing 1 kg and measuring just 10 cm on each side – a nanosatellite standard – the CELESTA satellite was designed to study the effects of cosmic radiation on electronics (see “CubeSat” figure). Initiated in partnership with the University of Montpellier and ESA, and launched in July 2022, CELESTA was CERN’s first in-orbit technology demonstrator.

Radiation-testing model of the CELESTA satellite

As well as providing the first opportunity for CHARM to test a full satellite, CELESTA offered the opportunity to flight-qualify SpaceRadMon, which counts single-event upsets (SEUs) and single-event latchups (SELs) in static random-access memory while using a field-effect transistor for dose monitoring. (SEUs are temporary errors caused by a high-energy particle flipping a bit, and SELs are short circuits induced by high-energy particles.) More than 30 students contributed to the mission development, partly within the framework of ESA’s Fly Your Satellite! programme. Built from COTS components calibrated in CHARM, SpaceRadMon has since been adopted by other ESA missions such as Trisat and GENA-OT, and could be used in the future as a low-cost predictive-maintenance tool to reduce space debris and improve space sustainability.
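The counting principle behind such memory-based monitors fits in a few lines. The sketch below is a hedged illustration rather than SpaceRadMon’s actual calibration: the per-bit cross-section and memory size are placeholders, and real devices require calibration for each particle species and energy.

```python
# Illustration of turning a count of SRAM bit flips into an estimated particle
# fluence and flux using a ground-calibrated per-bit upset cross-section.
# Every number here is a placeholder, not a SpaceRadMon calibration value.
SIGMA_BIT_CM2 = 1.0e-14    # hypothetical per-bit SEU cross-section (cm^2)
N_BITS = 8 * 1024**2 * 8   # e.g. 8 MB of monitored SRAM, in bits

def fluence_from_upsets(n_upsets: int) -> float:
    """Estimated fluence (particles per cm^2) from a counted number of SEUs."""
    return n_upsets / (SIGMA_BIT_CM2 * N_BITS)

def average_flux(n_upsets: int, seconds: float) -> float:
    """Average flux (particles per cm^2 per second) over the counting interval."""
    return fluence_from_upsets(n_upsets) / seconds

# Example: 42 upsets counted during one hour of monitoring.
print(fluence_from_upsets(42), average_flux(42, 3600.0))
```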

The maiden flight of the Vega-C launcher placed CELESTA on an atypical quasi-circular medium-Earth orbit in the middle of the inner Van Allen proton belt at roughly 6000 km. Two months of flight data sufficed to validate the performance of the payload and the ground-testing procedure in CHARM, though CELESTA will fly for thousands of years in a region of space where debris is not a problem due to the harsh radiation environment.

The CELESTA approach has since been adopted by industrial partners to develop radiation-tolerant cameras, radios and on-board computers.

7. Stimulating the space economy

Space technology is a fast-growing industry replete with opportunities for public–private cooperation. The global space economy will be worth $1.8 trillion by 2035, according to the World Economic Forum – up from $630 billion in 2023 and growing at double the projected rate for global GDP.

ESA and CERN look to support start-up companies and high-tech ventures in bringing to market technologies – whether spun off from space exploration or from particle physics – with positive societal and economic impacts (see “Spin offs” figure). The use of CERN’s Timepix technology in space missions is a prime example. The private company Advacam collaborated with the Czech Technical University to provide a Timepix-based radiation-monitoring payload, called SATRAM, for ESA’s Proba-V mission, which maps land cover and vegetation growth across the entire planet every two days.

The Hannover Messe fair

Advacam is now testing a pixel-detector instrument on JoeySat – an ESA-sponsored technology demonstrator for OneWeb’s next-generation constellation of satellites designed to expand global connectivity. Advacam is also working with ESA on radiation monitors for Space Rider and NASA’s Lunar Gateway. Space Rider is a reusable spacecraft whose maiden voyage is scheduled for the coming years, and Lunar Gateway is a planned space station in lunar orbit that could act as a staging post for Mars exploration.

Another promising example is SigmaLabs – a Polish startup founded by CERN alumni specialising in radiation detectors and predictive-maintenance R&D for space applications. SigmaLabs was recently selected by ESA and the Polish Space Agency to provide one of the experiments expected to fly on Axiom Mission 4 – a private spaceflight to the ISS in 2025 that will include Polish astronaut and CERN engineer Sławosz Uznański (CERN Courier May/June 2024 p55). The experiment will assess the scalability and versatility of the SpaceRadMon radiation-monitoring technology initially developed at CERN for the LHC and flight tested on the CELESTA CubeSat.

In radiation-hardness assurance, the CHIMERA facility is associated with the High-Energy Accelerators for Radiation Testing and Shielding (HEARTS) programme sponsored by the European Commission. Its 2024 pilot user run is already stimulating private innovation, with high-energy heavy ions used to perform business-critical research on electronic components for a dozen aerospace companies.

A word with CERN’s next Director-General

Mark Thomson

What motivates you to be CERN’s next Director-General?

CERN is an incredibly important organisation. I believe my deep passion for particle physics, coupled with the experience I have accumulated in recent years – including leading the Deep Underground Neutrino Experiment, DUNE, through a formative phase, and running the Science and Technology Facilities Council in the UK – has equipped me with the right skill set to lead CERN through a particularly important period.

How would you describe your management style?

That’s a good question. My overarching approach is built around delegating and trusting my team. This has two advantages. First, it builds an empowering culture, which in my experience provides the right environment for people to thrive. Second, it frees me up to focus on strategic planning and engagement with numerous key stakeholders. I like to focus on transparency and openness, to build trust both internally and externally.

How will you spend your familiarisation year before you take over in 2026?

First, by getting a deep understanding of CERN “from within”, to plan how I want to approach my mandate. Second, by lending my voice to the scientific discussion that will underpin the third update to the European strategy for particle physics. The European strategy process is a key opportunity for the particle-physics community to provide genuine bottom-up input and shape the future. This is going to be a really varied and exciting year.

What open question in fundamental physics would you most like to see answered in your lifetime?

I am going to have to pick two. I would really like to understand the nature of dark matter. There are a wide range of possibilities, and we are addressing this question from multiple angles; the search for dark matter is an area where the collider and non-collider experiments can both contribute enormously. The second question is the nature of the Higgs field. The Higgs boson is just so different from anything else we’ve ever seen. It’s not just unique – it’s unique and very strange. There are just so many deep questions, such as whether it is fundamental or composite. I am confident that we will make progress in the coming years. I believe the High-Luminosity LHC will be able to make meaningful measurements of the self-coupling at the heart of the Higgs potential. If you’d asked me five years ago whether this was possible, I would have been doubtful. But today I am very optimistic because of the rapid progress with advanced analysis techniques being developed by the brilliant scientists on the LHC experiments.

What areas of R&D are most in need of innovation to meet our science goals?

Artificial intelligence is changing how we look at data in all areas of science. Particle physics is the ideal testing ground for artificial intelligence, because our data is complex and there are none of the issues around the sensitive nature of the data that exist in other fields. Complex multidimensional datasets are where you’ll benefit the most from artificial intelligence. I’m also excited by the emergence of new quantum technologies, which will open up fresh opportunities for our detector systems and also new ways of doing experiments in fundamental physics. We’ve only scratched the surface of what can be achieved with entangled quantum systems.

How about in accelerator R&D?

There are two areas that I would like to highlight: making our current technologies more sustainable, and the development of high-field magnets based on high-temperature superconductivity. This connects to the question of innovation more broadly. To quote one example among many, high-temperature superconducting magnets are likely to be an important component of fusion reactors just as much as particle accelerators, making this a very exciting area where CERN can deploy its engineering expertise and really push that programme forward. That’s not just a benefit for particle physics, but a benefit for wider society.

How has CERN changed since you were a fellow back in 1994?

The biggest change is that the collider experiments are larger and more complex, and the scientific and technical skills required have become more specialised. When I first came to CERN, I worked on the OPAL experiment at LEP – a collaboration of less than 400 people. Everybody knew everybody, and it was relatively easy to understand the science of the whole experiment.

But I don’t think the scientific culture of CERN and the particle-physics community has changed much. When I visit CERN and meet with the younger scientists, I see the same levels of excitement and enthusiasm. People are driven by the wonderful mission of discovery. When planning the future, we need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics. Today we have an amazing machine that’s running beautifully: the LHC. I also don’t think it is possible to overstate the excitement of the High-Luminosity LHC. So there’s a clear and exciting future out to the early 2040s for today’s early-career researchers. The question is what happens beyond that? This is one reason to ensure that there is not a large gap between the end of the High-Luminosity LHC and the start of whatever comes next.

Should the world be aligning on a single project?

Given the increasing scale of investment, we do have to focus as a global community, but that doesn’t necessarily mean a single project. We saw something similar about 10 years ago when the global neutrino community decided to focus its efforts on two complementary long-baseline projects, DUNE and Hyper-Kamiokande. From the perspective of today’s European strategy, the Future Circular Collider (FCC) is an extremely appealing project that would map out an exciting future for CERN for many decades. I think we’ll see this come through strongly in an open and science-driven European strategy process.

How do you see the scientific case for the FCC?

For me, there are two key points. First, gaining a deep understanding of the Higgs boson is the natural next step in our field. We have discovered something truly unique, and we should now explore its properties to gain deeper insights into fundamental physics. Scientifically, the FCC provides everything you want from a Higgs factory, both in terms of luminosity and the opportunity to support multiple experiments.

Second, investment in the FCC tunnel will provide a route to hadron–hadron collisions at the 100 TeV scale. I find it difficult to foresee a future where we will not want this capability.

These two aspects make the FCC a very attractive proposition.

How successful do you believe particle physics is in communicating science and societal impacts to the public and to policymakers?

I think we communicate science well. After all, we’ve got a great story. People get the idea that we work to understand the universe at its most basic level. It’s a simple and profound message.

Going beyond the science, the way we communicate the wider industrial and societal impact is probably equally important. Here we also have a good story. In our experiments we are always pushing beyond the limits of current technology, doing things that have not been done before. The technologies we develop to do this almost always find their way back into something that will have wider applications. Of course, when we start, we don’t know what the impact will be. That’s the strength and beauty of pushing the boundaries of technology for science.

Would the FCC give a strong return on investment to the member states?

Absolutely. Part of the return is the science, part is the investment in technology, and we should not underestimate the importance of the training opportunities for young people across Europe. CERN provides such an amazing and inspiring environment for young people. The scale of the FCC will provide a huge number of opportunities for young scientists and engineers.

In terms of technology development, the detectors for the electron–positron collider will provide an opportunity for pushing forward and deploying new, advanced technologies to deliver the precision required for the science programme. In parallel, the development of the magnet technologies for the future hadron collider will be really exciting, particularly the potential use of high-temperature superconductors, as I said before.

It is always difficult to predict the specific “return on investment” of the technologies developed for big scientific research infrastructures. Part of the challenge is that some of the benefits might be 20, 30 or 40 years down the line. Nevertheless, every retrospective that has tried has demonstrated that you get a huge downstream benefit.

Do we reward technical innovation well enough in high-energy physics?

There needs to be a bit of a culture shift within our community. Engineering and technology innovation are critical to the future of science and critical to the prosperity of Europe. We should be striving to reward individuals working in these areas.

Should the field make it more flexible for physicists and engineers to work in industry and return to the field having worked there?

This is an important question. I actually think things are changing. The fluidity between academia and industry is increasing in both directions. For example, an early-career researcher in particle physics with a background in deep artificial-intelligence techniques is valued incredibly highly by industry. It also works the other way around, and I experienced this myself in my career when one of my post-doctoral researchers joined from an industry background after a PhD in particle physics. The software skills they picked up from industry were incredibly impactful.

I don’t think there is much we need to do to directly increase flexibility – it’s more about culture change, to recognise that fluidity between industry and academia is important and beneficial. Career trajectories are evolving across many sectors. People move around much more than they did in the past.

Does CERN have a future as a global laboratory?

CERN already is a global laboratory. The amazing range of nationalities working here is both inspiring and a huge benefit to CERN.

How can we open up opportunities in low- and middle-income countries?

I am really passionate about the importance of diversity in all its forms and this includes national and regional inclusivity. It is an agenda that I pursued in my last two positions. At the Deep Underground Neutrino Experiment, I was really keen to engage the scientific community from Latin America, and I believe this has been mutually beneficial. At STFC, we used physics as a way to provide opportunities for people across Africa to gain high-tech skills. Going beyond the training, one of the challenges is to ensure that people use these skills in their home nations. Otherwise, you’re not really helping low- and middle-income countries to develop.

What message would you like to leave with readers?

That we have really only just started the LHC programme. With more than a factor of 10 increase in data to come, coupled with new data tools and upgraded detectors, the High-Luminosity LHC represents a major opportunity for a new discovery. Its nature could be a complete surprise. That’s the whole point of exploring the unknown: you don’t know what’s out there. This alone is incredibly exciting, and it is just a part of CERN’s amazing future.
