
Superconductors for the energy frontier

Fill hundreds of copper tubes with a powder of niobium and tin, and then stack them in the form of a cylinder. Draw this out into a composite wire hundreds of kilometres long and barely a millimetre in diameter. Braid it into a rectangular cable and insulate it in fibreglass. Wind it into coils, bake for a week at precisely 650 °C and impregnate with resin. Assemble them with sub-millimetre precision under a compressive stress of one tonne per square centimetre, cool the magnet to a few kelvin and power it with tens of thousands of amps. This is not alchemy. This is a possible recipe for a Nb3Sn magnet.

Whether made of Nb3Sn or higher-performance superconductors, such devices promise to substantially improve the discovery potential of hadron colliders. Since their energy reach scales as the dipole field times the size of the tunnel, each additional tesla directly expands the energy frontier.

What makes these magnets unique is their compactness. Superconducting coils can carry a current density of order 500 A/mm², a factor 100 higher than what can be tolerated by copper with active cooling. A magnet based on superconductivity can therefore have coils that are narrower and lighter.

No application of superconductivity pushes this limit harder than an accelerator magnet. Larger coils mean larger magnets and an unaffordably large tunnel to accommodate them. Accelerator magnets must therefore be highly optimised in space and cost – the capsule hotels of superconductivity – and this extreme optimisation creates opportunities for spinoff applications, from lightweight motors for electric aircraft to power transmission beneath the pavement of a crowded metropolis. Superconducting accelerator devices have already paved the way for societal applications in medical imaging and advanced accelerators for cancer therapy, and the field continues to benefit from strong research synergies with fusion tokamaks, though their toroidal coils don’t need to push the limits of current densities in the same way.

Superconductors also save energy. At the LHC, more than a thousand niobium–titanium alloy (Nb–Ti) superconducting dipoles are powered by only 40 MW. This is much less than what is consumed by the LHC’s injectors.

As dipoles based on Nb-Ti superconductors are limited to a maximum achievable field of nearly 10 tesla, corresponding to an operational field of about 8 tesla with acceptable margins, accelerator physicists and engineers are exploring the use of better superconductors to roughly double their field. The options include Nb3Sn, which will soon be used in an accelerator for the first time at the HL-LHC, and “high temperature” superconductors that promise much higher performance and a simplified accelerator infrastructure. But dipoles are much more difficult to design than solenoids. Though 30 tesla solenoid magnets are already available on the market, no one has yet succeeded in building a 20 tesla dipole magnet.

Shear complexity

An accelerator dipole poses several challenges compared to a solenoid. While a solenoid’s current loops generate an axial magnetic field, a dipole must use vertically separated coils to generate a vertical magnetic field; for the same total coil thickness and current density, a solenoid can provide twice the field strength of a dipole; and the field distribution and the forces exerted on the coils are much more difficult to control. In a solenoid, electromagnetic forces are perpendicular to the conductor, but in a dipole they push the coil towards the midplane and outwards, with a two-dimensional distribution that includes shear stresses.
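As rough orientation for that factor of two, the ideal current-sheet estimates can be written as follows (a sketch assuming an infinitely long solenoid and a pure cos θ dipole, with J the overall coil current density and w the radial coil thickness – not the detailed design formulas used for real magnets):

\[
  B_{\text{solenoid}} \simeq \mu_0 J w,
  \qquad
  B_{\text{dipole}} \simeq \tfrac{1}{2}\,\mu_0 J w .
\]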

Superconductors for high-field accelerator magnets

The engineering challenge is increased by the need for dipoles to operate precisely during the ramp, when particles gain energy with every turn after being injected into the collider, requiring increasingly strong magnetic fields to bend them. To ensure that accelerator physicists can make tightly focused beams collide with high luminosity inside the experiments, the field must be uniform to better than one part in 10⁴ across two thirds of a dipole’s aperture as the field increases by up to a factor of 15. These challenges are not present in medical-imaging magnets or the toroidal coils used for fusion, which operate at constant current, though the fusion coils are subject to rapidly varying external magnetic fields.

In the context of the 2026 update to the European Strategy for Particle Physics (ESPP), advanced high-field dipole magnets would be needed by the hadron-collider phase of the Future Circular Collider (FCC-hh) and the proposed muon collider. Due to its exceptionally large and unstable beams, a muon collider would also require a kilometre-long channel of superconducting solenoids with alternating gradient, and a final superconducting cooling solenoid with a strength of roughly 40 tesla before the collider ring. These challenges are complementary to those of the FCC-hh, and the community is devoting significant research and development to this direction.

The targets initially set for the FCC-hh in 2014 were based on round numbers: a 100 km tunnel and a centre-of-mass energy of 100 TeV. This required 16 tesla dipoles, one or two tesla above what can be achieved with adequate margins and costs using present technology. After a decade of studies, the tunnel size was reduced to 91 km to fit geological constraints, and the field was brought down to 14 tesla, allowing a centre-of-mass energy of 85 TeV after some optimisation of the lattice. This 15% reduction in the centre-of-mass energy has had a major effect on the energy consumption of the collider, as synchrotron radiation is reduced by 50%. A similar tuning occurred for the LHC, which was initially imagined at 16 TeV with 10 tesla magnets rather than today’s 13.6 TeV and 8.1 tesla.
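A back-of-envelope check of these numbers is sketched below in Python, under the rough assumptions that the beam energy scales as the dipole field times the bending radius (taken proportional to the tunnel length) and that synchrotron radiation grows as the fourth power of the beam energy; these are illustrative scalings, not the official FCC-hh design calculation.

# Rough scaling check of the FCC-hh rescaling described above.
# Assumptions: E ~ B * rho (rho proportional to the tunnel length)
# and synchrotron radiation ~ E^4 at a comparable bending radius.
ref_field, ref_circ, ref_energy = 16.0, 100.0, 100.0   # 2014 baseline: tesla, km, TeV
new_field, new_circ = 14.0, 91.0                       # current baseline

naive_energy = ref_energy * (new_field / ref_field) * (new_circ / ref_circ)
print(f"naive B*rho scaling: {naive_energy:.0f} TeV "
      "(lattice optimisation recovers about 85 TeV)")
print(f"synchrotron scaling (85/100)^4 = {(85 / 100) ** 4:.2f}, "
      "roughly the quoted 50% reduction")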

The baseline design for the FCC-hh dipole magnets is Nb3Sn technology operated at 1.9 K, though the ESPP documents also note three other possibilities: hybrid magnets that substitute Nb–Ti for Nb3Sn in the lower field regions; operation at 4.5 K; and a high-temperature-superconductor option operating between 4.5 and 20 K with magnetic fields in the range 14 to 20 tesla.

The Nb3Sn path

Nb3Sn was discovered a few years before Nb-Ti and has the advantage of providing current densities in excess of 500 A/mm² up to 16 tesla (see “Superconductors for high-field accelerator magnets” figure). After 35 years of research, fields have now reached 14.5 tesla, close to the 15–16 tesla target needed to have magnets operating at 14 tesla in the FCC-hh with adequate margins (see “Niobium dipoles” figure). The main goal today is to produce a double-aperture short-model Nb3Sn magnet with all features specified in the FCC-hh design. This should be achieved by 2030 and then scaled up in length.

Niobium dipoles

A key challenge is to reduce the quantity of Nb3Sn, thereby lowering both the cost and hysteresis losses during field ramping. As the magnetic field changes, currents are induced within the superconducting filaments, leading to energy dissipation that must be carefully controlled. Minimising these losses is one reason for the complex, multi-filamentary architecture of superconducting wires. The smaller filaments of Nb-Ti can significantly reduce the losses, and Nb-Ti costs five to 10 times less than Nb3Sn.

A second engineering challenge is to achieve a mechanical structure capable of keeping the coil in compression during powering but not overstressing it. The stress limits of Nb3Sn are of the order of 200 MPa, and the required precompression for a 14 tesla dipole is about 150 MPa.

Another challenge of the low-temperature path would be logistical: the production of roughly 5000 tonnes of Nb3Sn. This corresponds to a 1 kA cable stretching from the Earth to the Moon, at a cost of several billion dollars. These numbers are an order of magnitude larger than what was needed for the Nb-Ti coils of the LHC.
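The Earth–Moon comparison can be sanity-checked in a few lines (a sketch assuming a mean Earth–Moon distance of about 384,000 km and a composite density of roughly 8 g/cm³ for the Nb3Sn conductor – both values are assumptions, not quoted in the article):

# Sanity check of the "1 kA cable to the Moon" comparison.
distance_m = 3.84e8      # Earth-Moon distance (assumed)
mass_kg = 5.0e6          # 5000 tonnes of Nb3Sn
density_kg_m3 = 8.0e3    # assumed density of the composite conductor

cross_section_mm2 = mass_kg / (distance_m * density_kg_m3) * 1e6
print(f"cable cross-section: {cross_section_mm2:.1f} mm^2")
print(f"current density at 1 kA: {1000 / cross_section_mm2:.0f} A/mm^2")

The result, of order 600 A/mm², is comparable to the coil current densities quoted earlier, so the comparison hangs together.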

Despite these challenges, Nb3Sn technology is now well established for small series, and will soon play a key role at the High-Luminosity LHC – the technology’s first use in a working accelerator, though for focusing beams rather than bending them (see “Nb3Sn quadrupoles” figure). But newer superconductors may well prove competitive.

The high-temperature path

In 1986, Johannes Georg Bednorz and Karl Alexander Müller announced the discovery of superconductivity above 35 K, something not foreseen by theory, and well above the boiling point of liquid helium. “High-temperature” superconductors (HTS) not only remain superconducting at high temperatures, in many cases above the boiling point of liquid nitrogen (though at 77 K HTS performance is not yet adequate for our needs), but also at high fields. HTS solenoids have been constructed with fields up to 40 tesla, and though the problem of degradation is not yet totally solved, progress has been outstanding.

Three families of superconducting conductors are currently available or emerging on the market: rare-earth barium copper oxides (REBCO), bismuth strontium calcium copper oxides (BSCCO) and iron-based superconductors (IBS).

Nb3Sn quadrupoles

REBCO is of strong interest in the world of fusion. Billions of dollars of investment have reduced the cost by more than an order of magnitude in the past decades. REBCO comes in tapes (see “Frontier superconductors” figure). A 12 mm-wide tape has a thickness of 0.1 mm and can carry 1500 A at 4.5 K, or about half that at 20 K. Coils with peak fields of 20 tesla have been built and tested for fusion applications, and private investors plan to build reactors that are much more compact than ITER, which is based on Nb3Sn technology.
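For comparison with the roughly 500 A/mm² quoted earlier for accelerator coils, those tape figures translate into an engineering current density as follows (a sketch using only the numbers given above):

# Engineering current density of a REBCO tape from the quoted figures.
width_mm, thickness_mm = 12.0, 0.1   # tape cross-section
current_4p5K = 1500.0                # amps at 4.5 K
current_20K = current_4p5K / 2       # "about half that at 20 K"

area_mm2 = width_mm * thickness_mm
print(f"J_e at 4.5 K: {current_4p5K / area_mm2:.0f} A/mm^2")
print(f"J_e at 20 K:  {current_20K / area_mm2:.0f} A/mm^2")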

Manufacturing REBCO coils is greatly simplified compared to Nb3Sn as the tape needs no heat treatment; but the technology used to wind the tapes is not easy to adapt for accelerator dipole magnets, which are radically different from the toroidal coils designed for tokamaks. The challenge here is not to develop a conductor for accelerator magnets, but to adapt our magnet designs to this amazing tape. There is a long way to go to the 15–16 tesla target, but the potential is huge, with progress being made in Europe, the US and China (see “HTS dipoles” figure).

HTS dipoles

And what of the other HTS superconductors? BSCCO has the great advantage of round wires, but must be treated at 800 °C and does not profit from synergies with fusion. At present, this path is only being pursued in the US, with achieved fields of just 1.8 tesla. IBS is being actively developed in China and Europe, but its current density has not yet matched the performance of REBCO, and the best results were obtained for tapes rather than wires.

HTS would allow operation at 20 K, with a simplification of the cooling scheme and a possible reduction in the energy consumption of the collider, though at 85 TeV half of the heat loads are due to synchrotron radiation, which does not depend on the operational temperature of the magnets. Moreover, REBCO tape has a single filament, as wide as the tape, and therefore the saving from the higher operational temperature could be compensated by larger heat losses. Estimating the energy balance is far from trivial: do not draw easy conclusions!

Optimal solution

Addressing these challenges is the work of the High Field Magnet (HFM) programme, an international collaboration of 15 institutes steered by CERN that was founded in 2021. HFM is exploring multiple designs to find the optimal solution, from the most classical to the more exotic, and novel ideas should be explored in parallel with the most conservative paths. Though there are major challenges ahead, solving them promises societal benefits via a number of diverse spinoff applications.

High-field magnets remain one of the hardest problems in applied superconductivity. The next decade will be decisive for understanding the feasibility and cost of the FCC-hh.

Execution mode

Going all the way back to Robert Wilson in the 1960s, some formidable figures precede you as Fermilab director…

Coming back to Fermilab is, for me, a little like coming home. My family and I moved to the United States in 1998, and Fermilab was the first place I worked in the Department of Energy (DOE) system. It was also a place where people really took me in. Fermilab, like many national laboratories, is built on the shoulders of giants – and Robert Wilson was one of them.

He got this huge site, more than 6000 acres, with a real vision for expansion and growth in science. He was also a genuine fan of architecture, truly inspired by it. Our Wilson Hall is a tribute to that. It echoes what people call the folding hands of Beauvais Cathedral in France. Having that building stand out from the prairie was a statement.

That’s Robert Wilson’s legacy at Fermilab: a science of statements and the ability to do things fast, effectively, things that people thought could not be done. So, honestly, sitting in that chair feels good.

Wilson’s 1969 Congressional testimony is one of the most celebrated defences of fundamental science. What do you make of his case today?

He told Congress that high-energy physics had to do with dignity and all the things that we really venerate and honour in our country. That is still true. Despite the strain on science funding and all the questions about whether we are spending money effectively, the government is still willing to invest more than five billion dollars at Fermilab over the next five to ten years. This feels almost contrarian to what you hear in the press. Yes, science is under pressure. But the commitment is there, for the very same reason Bob Wilson stated back then.

That said, I believe we carry a genuine responsibility to deliver to society. That has been the basis of the social contract since Vannevar Bush wrote Science, the Endless Frontier in 1945; the document that helped create the national laboratory system and agencies such as DOE, the National Science Foundation and NASA. I don’t expect every citizen to understand exactly what a neutrino does or why it matters. But the outcomes of science, and the technology we develop on the way, whether that’s AI, quantum information tools, electronics, those are things we have to deliver. It’s part of the social contract.

Then, under Leon Lederman, and driven forwards by figures like Helen Edwards, Fermilab expanded the world’s energy frontier with the Tevatron…

Helen Edwards is actually directly responsible for the fact that I’m in this country. It’s her fault, really. When I was a group leader at DESY in 1998, 37 years old, with two small kids and having just built a house in Germany, Helen walked into my office. She asked, “Norbert, what do you want to do with your future?” She was very direct and wouldn’t take no for an answer. I hesitated, and she said, “You need to think about this. You should go to the United States.” Six months later, I was at Fermilab.

She was undeterrable. If she had a mission, a North Star, there was no lab director, no government official, no one who could deflect her from it. She and Alvin Tollestrup, a name that doesn’t get talked about enough, developed the superconducting magnet technology under Leon Lederman’s leadership that made the Tevatron what it was. That technology later allowed DESY to build HERA and ultimately landed in the LHC at CERN.

Alvin could explain superconductor physics from first principles and very quickly come to how you wind a magnet and what fundamentally limits its performance. A physicist and a technologist at the same time. They were both giants. There’s no question about it.

You mentioned moving from Europe to the United States. How different were the two scientific cultures, in the late 1990s?

You sure you want to write about this? [chuckles] Before I left DESY, I went to the director, Björn Wiik. He was himself a visionary leader, the person behind the TESLA concept for superconducting RF. When he asked where I saw myself in five or ten years, I answered, “I want your job. I want to be a director.” He was very direct too. “You are only 35 years old,” he said. “To become a director in Europe, you have to look like me. You have to have grey hair and a beard.” I found that frustrating. But I think it was largely true at the time.

In the United States, age didn’t matter. Nationality didn’t matter. What mattered was: could I do it? A 39-year-old German, alongside a Canadian, Thom Mason, and the son of Croatian immigrants, Anthony Chargin, suddenly found themselves in charge of building one of the biggest science projects in the United States: the Spallation Neutron Source, inspired by a former South Korean accelerator physicist, Yanglai Cho. That’s a story you can’t make up. That is where my career really started.

The transition from Lederman to John Peoples coincided with both the golden age of the Tevatron and the era of the Superconducting Super Collider (SSC). What do those two directors, and that moment, tell us about leadership in big science?

I knew Leon well because I actually lived in his house. He had a place off-site, and when my family first arrived we had very little money, so he said: “You need a house. I have one.” And we moved in. He came by regularly, stored his Porsche in the garage, and we talked a great deal. I learned a lot from him.

He was the kind of person you simply liked. Everybody at Fermilab loved Leon. He was funny, extraordinarily smart and he had a vision for the laboratory. I asked him once why he stepped down after nine years as director. He told me, “If you are a lab director, you have to make important decisions, and with every decision you make, you lose 10 percent of your friends. After 10 decisions, they are all gone. That is when you step down.” That was a true Leon answer. But it reflected his deep understanding of what leadership really costs.

I deeply believe high-energy physics can again be a launchpad for open international collaboration

John Peoples was very different. He was hands-on, deeply involved in building the complex and the Antiproton Source. Where Leon was the beloved visionary, John was the builder who wanted to be involved. And he had two extraordinarily difficult jobs at the same time: managing the closure of the SSC in Texas, which you could see drain him, and running a programme that ultimately delivered the discovery of the top quark.

These were very different people, very different characters. I think every character has its time. That is as true at Fermilab as it is at CERN. You can tell the same story through CERN’s directors. We just lost one, Herwig Schopper, who was a phenomenal leader. He spoke openly about the sacrifices he and the laboratory had to make to get CERN going. And when you look at CERN 50 years later, that is still a defining legacy, with the 27-kilometre tunnel and the science that continues to come out of it.

What lessons does the abandonment of the SSC hold for the large-scale projects being discussed today?

The real lesson of the SSC isn’t the failure itself. It is about implementation. The days when you could go to a government and say your project costs this much, then come back the next year and ask for 20 percent more, and the year after that another 20 percent – those days are gone. That is not the world we live in, and at the scale of projects we are talking about today, it would not be responsible.

John understood that deeply. I have tried to carry it through my own career. On my watch, I will always be direct with our funding agencies about what I see as risks and what things actually cost. That is non-negotiable for me.

Fermilab then repositioned itself at the intensity frontier. How do you keep the laboratory aligned behind the Long-Baseline Neutrino Facility (LBNF) and the DUNE experiment?

You form a team, you focus the team and you execute. That sounds pretty mundane and simple. It is not. It is really hard. CERN went through something very similar under Robert Aymar with the LHC: the necessity to focus every resource and every engineering capability on one thing to make it happen.

I am a scientist, but also a project guy. I wake up every morning thinking about those five billion dollars. That is roughly eight hundred million a year. Three million dollars a day. My job is to organise a team that can responsibly and effectively deploy that every single day to build LBNF/DUNE.

When I spoke at my first all-hands meeting here, I laid out three bullet points, because nobody remembers more than three. First: beam at the DUNE far detector by 2031. Second: science at the High-Luminosity LHC and delivering on our commitments there. Third: develop science, technology and innovation for the benefit of society. Those are the three and everything flows from them.

I use the story of JFK visiting NASA and asking the janitor why he is there. The janitor says: “To put a man on the Moon.” That is the answer I want from everyone here. So I go around and ask people why they are here. And if I don’t get the answer I want, I ask again.

Neutrino physics is also receiving major investments in China and Japan, with JUNO already closing in on the neutrino mass hierarchy and Hyper-Kamiokande equipped to measure leptonic CP violation when it comes online. How does DUNE fit in that landscape?

We live in a world that is not the world of 20 or 30 years ago. We have to recognise that. But I deeply believe high-energy physics can again be a launchpad for open international collaboration.

The neutrino story is phenomenal for the US with the DOE’s support of the DUNE project. It is also great for CERN. The most significant large-scale investment CERN has made in an external experiment is in DUNE. And it goes both ways: Fermilab contributes significantly to the HL-LHC programme. That is one of the healthiest collaborations in the field, both at the personal level and at the level of laboratories and programmes.

In my world, it is better to make the wrong decision and correct it than to make no decision at all

As for competition among neutrino facilities, it’s healthy. It is all about what I call the three C’s: collaboration, cooperation, competition. Every scientific relationship works better when you are clear about which is which. There is competition with other neutrino experiments, of course, in the sense that whoever reaches an answer first gets the golden nugget. But there is also technology exchange, open science and the free sharing of knowledge. Both things are true.

When you look at the DUNE detector and the beam we are building, it will be, hopefully sooner than later, the most effective research instrument for this kind of science. It is nice to be number one. You never stay number one forever, but it is nice. CERN is number one in collider physics right now – a pretty good feeling. But you also have to deliver results.

How would you describe Fermilab’s culture right now?

Scientists are driven by curiosity. That hasn’t changed and it won’t. But when a large institution commits to building a major instrument, there is real tension between the broad research culture that develops over time and the laser focus that construction demands. Is there stress in the system? Yes, honestly, there is. The best thing you can do is recognise that, talk about it openly and make sure people can see the light at the end of the tunnel.

The people who love construction have a clear finish line. The researchers have an extraordinary instrument coming, and the conceptual and technical work they do now is their investment in what comes after. The two groups are not perpendicular to each other. A good instrument requires constant feedback from the science side on what it actually needs to deliver, but you also can’t have an infinite conversation about what to build while you are trying to finish building it. Finding that line is delicate, and I spent my life basically walking it. At the SNS, at LCLS-II, at ITER. You pick.

There is a saying I keep coming back to: culture eats strategy for breakfast. Getting the culture right will take time and requires healthy tension. But it also requires the willingness to make decisions. I am not afraid to make a decision. Sometimes the wrong one, and that’s fine, it needs to be corrected. But in my world, it is better to make the wrong decision and correct it than to make no decision at all.

Where should Fermilab position itself in the next chapter of global high-energy physics?

I wanna stretch my hand to Europe, and to CERN in particular. I am very proud of the connection between our two institutions, at the programmatic level and at the personal level. I think we need to continue discussing how to keep the world open for those that want to share our values and share our way of doing science. People like me should be able to come to the United States. People from here should be able to go to CERN. That’s the foundation of everything we do.

The most important tool you’ve never heard of

Jos Vermaseren

Jos, FORM has been at the heart of precision calculations for decades. But the story starts earlier, with Martinus Veltman (see “The pioneer” image). What was he trying to do?

Jos Vermaseren In 1963, Veltman was interested in the renormalisation of Yang–Mills theories. He wanted to check whether certain models produced unphysical infinities that could not be removed. These calculations are a lot of work: you don’t do that by hand. So he built himself a program, which he called Schoonschip, to do that calculation.

What was computing like in those days?

Vermaseren Very primitive by current standards. When Veltman started at CERN, they had a CDC 6600, which was for a while the biggest computer in the world. But you had to share it with maybe a few thousand people, so you had to wait for your program to come out (see “The first supercomputer” image). At Nijmegen University in the early 1970s, we had an IBM computer where you had to hand in your computer cards, then wait a few hours for output. If your program was big, it would only run during the night. Make a typo, and you’d find out the next day that nothing had happened. That kind of primitive computing was left behind when personal computers came in the 1980s. I bought an Atari ST in late 1985, and the fun part was that at Nikhef, the Dutch National Institute for Subatomic Physics, we had a CDC 173, but my Atari had more memory! That was quite amazing. Every decade, the computers became more powerful, and with that the calculations became larger. I’ve been involved in calculations where the intermediate formulas were terabytes big. That is kind of hard to imagine. But if you put in enough effort and enough checking, you still get the correct answer. There is simply no way you could ever do that by hand. No way. That’s why we absolutely need these algebra programs.

Martinus Veltman

Where did Schoonschip – I apologise for my pronunciation – fit in the landscape of early computer algebra?

Vermaseren Ah, Veltman did that intentionally to tease all the foreigners. [chuckles] There were already ideas about algebraic software in the 1960s – Feynman was suggesting something in the 1950s – but nothing really usable for physics calculations when Veltman started. Around the same time, Tony Hearn started with the REDUCE program, which was formally more elegant but less powerful. Those were the main players for a while, but they all had limitations. REDUCE wasn’t nearly as fast as Schoonschip and couldn’t handle very big expressions. Schoonschip’s limitation was that Veltman had written it in assembly, so you could only use it if you had the correct computer.

How did you enter this story?

Vermaseren I was very much used to Schoonschip and was quite a good programmer with it, but CDC computers were expensive and being phased out. So there I was, faced with the idea that I wouldn’t have Schoonschip any longer. I also wanted to make a giant system for doing automated calculations that would need computer algebra in a more flexible way than Schoonschip provided. If I needed new features, I’d have to go to Veltman and wait probably a year. Veltman had built in what he needed and was so nice to provide other people with his program. But if you get a free program, you shouldn’t come up with too many demands. The conclusion was that if I really wanted to make what I needed, I would need my own program.

The first supercomputer

Schoonschip had a couple of weak points. One was the sorting mechanism, which meant that with very large expressions, the program became outrageously slow. The handling of functions and function arguments was not flexible at all. And then there was the whole business of computer availability. I asked Nikhef management if they would allow me to take some time out to work on it, and they thought it was a good idea, so my back was covered.

This may resonate with early-career researchers who want to build long-lived tools today. What would you tell them?

Vermaseren You have to put in an enormous amount of time, and if you want to get a job in physics, you can only get credit for that if at the same time you use what you make for good calculations that draw attention. You need physics publications. If you go in as a postdoc to just write useful software, you have a problem, unless somebody has already promised you a decent job.

People like to count citations, and organisations usually look at citations in the first two years. But when you have a paper about a calculation, the opposite usually occurs. In the beginning you don’t get very many citations because people aren’t using it yet. I have a lot of papers that started with hardly anything, and then after a few years they pick up and keep growing. But for a postdoc, that is a disaster.

Thomas Gehrmann

Thomas Gehrmann I’d add to this that recognition for contributions to scientific software is usually underrated when evaluating a researcher’s performance. It’s not recognised at the same level as publications or plenary talks. We should really try to communicate to senior people making funding decisions the importance of the whole body of scientific output. Scientific software development is very useful to the community but much less easily quantifiable than citations.

Vermaseren Although for universities it is very nice to eventually have somebody there who generates a lot of citations and educates people to do big calculations, they just don’t recognise it. The world of theory software development needs more institutional support.

Thomas, can you describe FORM’s impact on particle physics?

Gehrmann FORM enabled calculations that would never have been possible with any other tool. At each given moment in time, ever since the inception of FORM version one in the late 1980s, early 1990s, the cutting-edge calculations were usually done with FORM. Many of these calculations were redone a few years later with other tools, but what had changed was that computers became more powerful, had more memory, more storage space and were faster, so you could also do similar calculations in Mathematica or Maple. However, FORM was always at the avant-garde of the calculations.

In groups that are performing multi-loop calculations, the first week’s task for a new student is usually: learn FORM on a simple example, compute the scattering matrix elements in FORM to get used to its environment. For students working on cutting-edge projects – the next loop on a scattering amplitude, the next order on a benchmark cross-section – it’s made clear from the very outset that FORM is the tool to be used, because it’s only with this tool that there’s a realistic chance to get through the project in a finite amount of time.

Can you give an example of a particularly important calculation?

Gehrmann The LHC is a proton–proton collider, but the hard scattering processes underlying the collisions are not proton–proton but collisions of quarks and gluons. To make precise predictions for anything you observe at the LHC, you need to know how quarks and gluons are distributed inside the proton. These parton distribution functions are extracted from combined fits to huge sets of data from different experiments at vastly different energy scales. I mean, from a 35 GeV electron beam at SLAC up to multi-TeV collisions at the LHC. That’s almost three orders of magnitude.

Parton distributions evolve with energy scales via the Altarelli–Parisi evolution equations: knowing the Altarelli–Parisi splitting functions to sufficient theoretical precision is one of the cornerstones enabling these fits. The calculation that enabled the current level of precision was done in the early 2000s by Jos and his collaborators Sven Moch and Andreas Vogt. It went alongside the development of FORM version three, and was a crucial result for the entire LHC physics programme.
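For orientation, these evolution equations take the schematic form (written generically at leading twist; the higher-order splitting functions actually computed in that work are far lengthier expressions):

\[
  \frac{\partial f_i(x,\mu^2)}{\partial \ln\mu^2}
  = \sum_j \int_x^1 \frac{\mathrm{d}z}{z}\,
    P_{ij}\!\bigl(z,\alpha_s(\mu^2)\bigr)\,
    f_j\!\Bigl(\frac{x}{z},\mu^2\Bigr),
\]

where the f_i are the parton distribution functions and the P_ij are the Altarelli–Parisi splitting functions whose higher-order terms were the target of that FORM calculation.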

Looking ahead to the High-Luminosity LHC and a potential FCC, how important is FORM’s continued development?

Gehrmann Both are extremely high-statistics, high-luminosity machines. They’ll give us measurements at a statistical precision never achieved before in a collider experiment. Researchers need to be empowered with proper tools to make the most of the physics, with a whole new generation of precision calculations. FORM has grown with the field, due both to the ingenious design choices Jos made at inception, when a lot was already conceived in a scalable fashion, and to continuous development addressing bottlenecks. It’s very hard to predict what will be the bottlenecks for High-Luminosity LHC calculations, and it’s even harder for the FCC. But they will require adaptations to how we do computer algebra. And, of course, committed developers.

Josh, you’ve been working on FORM 5. Why is a major release necessary now?

Joshua Davies

Joshua Davies Being able to release new versions helps convey to the community that there’s progress. Most users stick to a released version rather than rebuilding from GitHub. Being able to say “this is a new version with well-tested new features” is important for users to trust it for their work.

What are the major new features?

Davies The first is a Feynman graph generator built into FORM, from a collaborator of Jos, Toshiaki Kaneko. FORM now has an interface to this generator that lets you produce graphs from within the code without relying on external tools. It’s written in a more flexible way, which lets you add features or modify it much more easily than other tools. I also put in an interface that improved polynomial arithmetic performance. This is increasingly necessary now that people study processes with higher multiplicities or more mass scales. You end up with computations depending on many more variables than in the past.

Vermaseren The third main feature is the ability to have floating-point coefficients as opposed to rational numbers. Modern algorithms still can’t determine everything through normal calculations. You’re restricted to doing certain parts in arbitrary-precision floating point. But these capabilities have other good features. If you want to do a calculation for the LHC, in the end these run in Monte Carlo integration programs: you take a very big formula and sample it billions of times. But how numerically stable is that formula? If I have floating-point capability, I can figure out the numerical stability before I evaluate it billions of times in another program. I can determine whether I’ll run into disasters.

What does the future hold for FORM’s development?

Davies It seems unlikely that anyone is suddenly going to fund a permanent job where the main role is looking after FORM. But if we can foster an environment where postdocs or PhD students feel they can contribute and be recognised for it, and it helps them apply for their next position, this needs to be the way packages like FORM are developed. I’m a postdoc trying to apply for longer-term positions, but the future of FORM isn’t secure. I’ve put in a lot of effort, alongside Coenraad Marinissen and Takahiro Ueda, to get FORM to version five, but it’s not guaranteed people working on FORM will be able to continue.

Do we need a different institutional framework to support this kind of development?

Davies We need more recognition from the people who decide where funding goes for contributions to software work. On the experimental side, there are people whose job is the LHC software that goes into the analysis chain. We don’t really have this equivalent on the theory side. People work on software alongside their physics projects, and you always have to have physics results coming out if you want to continue to get jobs. No one can truly focus one hundred percent on the tools. What would really help is if contributing to a project like FORM was clearly recognised as a valuable scientific output in its own right, alongside physics papers. If young researchers felt that contributing to core tools genuinely strengthened their career prospects rather than putting them at risk, it would completely change how sustainable projects like this are.

FORM before meaning

Gehrmann This is exactly right. Over the years, it was crucial to have Jos as a developer in the background regularly talking to the community, getting feedback: “This is the current bottleneck we’re up against.” But that only worked because Jos could actually focus on it. We’ve been trying to improve community involvement over the past five years with dedicated workshops, bringing together developers with users pushing FORM to their limits and students coming into the field. This format has started to take off successfully. At these workshops, in the mornings the senior developers explain the internal structure of the code. And then in the afternoons people work on concrete exercises like bug fixes or small features, almost like a hackathon. But this is a bottom-up initiative. It needs a top-down approach to make the project sustainable and create career perspectives for FORM developers like Josh. I can only hail the visionary decisions Nikhef management made 40 years ago when they decided to leave Jos alone for a few years to develop version one. Without institutional recognition that creates actual career paths for theory software developers, we risk losing the very people who can secure FORM’s future – and with it, our ability to make the most of the next generation of colliders. 

The mystery of the little red dots

Every new instrument needs its mysteries, and no discovery of the James Webb Space Telescope (JWST) has been more surprising than the “little red dots” it discovered in the early universe. Four years after their discovery, their nature is still an open question, with new papers purporting to solve the mystery on an almost daily basis.

These unexpected objects came into view in JWST’s first data release in 2022 thanks to its sharp images and sensitivity in the near infrared. By summer of 2023, a number of discovery papers had been written about them, identifying three traits in common: they were compact in size, had unusual “V-shaped” spectra and they showed emission from high-velocity hydrogen gas. Due to their compact size and red colour in the rest frame, they were dubbed little red dots. A few appeared in every pointing of the JWST imaging camera NIRCam, accounting for a few percent of all known galaxies in the first billion years of cosmic time. The race was on to determine their nature.

Two options initially appeared possible, but both were extraordinary and required a very precise tuning of parameters to fit the observations: too-dense galaxies or too-massive supermassive black holes. In either case, the objects had to be enshrouded in a cocoon of dust.

Galaxies or black holes?

The first paper assumed they were very massive galaxies, with their stars all assembled less than a billion years after the Big Bang. In favour of the galactic hypothesis were the V-shaped spectra, which are difficult to model without invoking massive stars. The vertex of the V-shape resembles a “Balmer break”, which is produced by absorption by hydrogen atoms in the n = 2 level. Longward of the break, the optical continuum rises steeply toward the red, which this model attributed to the reddening of these stars by dust, with the UV being produced by starlight scattered out of the dust screen. However, the very high masses and early-universe star formation rates required by these models were difficult to reconcile with our understanding of the rate at which galaxies and their dark-matter halos assemble.

The first paper assumed they were very massive galaxies, with their stars all assembled less than a billion years after the Big Bang

The black-hole hypothesis was supported by evidence for very dense gas clouds moving at thousands of kilometres per second in the potential of a massive black hole. In this picture, surrounding dust would preferentially absorb ultraviolet light and re-emit it at longer wavelengths, producing the observed red colour. Though this explanation promised to alleviate the tension arising from the implied galaxy masses, it quickly became clear that these objects were not typical growing black holes. They were not detected in X-rays, nor did they show the characteristic 1000 K dust signature that is ubiquitous in actively accreting black holes. However, the most concerning piece of the black-hole interpretation was the implied black-hole masses. Applying local calibrations to the observed motion of gas in the little red dots implied black-hole masses of ten million to a billion suns, compared with galaxy masses of the same order – a stark contrast with local black holes, whose masses are roughly a thousandth of those of their host galaxies. Such overly massive black holes are hard to grow so far in advance of their galaxies, and they would also overproduce the total amount of black-hole mass created at such an early time.

Explaining their redness

Two major breakthroughs occurred in 2024 that clarified the nature of the little red dots. All the aforementioned models invoked heavy amounts of dust to suppress ultraviolet emission and produce the observed red colours. The conservation of energy implies that all the absorbed radiation should be re-emitted by the dust. However, multiple studies of populations and of luminous individual sources turned up non-detections of dust emission. These stringent limits on the far-infrared energy output were enough to conclusively rule out entire classes of models that invoke reddening by dust to explain the observed red colours.

At the same time, campaigns to observe the broad population of little red dots discovered a remarkable class of sources with very little ultraviolet emission and extreme Balmer breaks. These breaks could not be produced by anything resembling a stellar population we have observed before, and served as conclusive evidence that normal stars cannot be responsible for producing the optical emission in little red dots; the photoabsorption by hydrogen in the n = 2 energy state must nevertheless be a crucial physical aspect of the little red dots, even if it wasn’t happening in the atmospheres of massive stars.

Plausible scenarios

The challenge is therefore to explain the characteristic red colour of the little red dots without dust obscuration. Any successful model would also need a substantial reservoir of hydrogen around to cause the hydrogen absorption that looked like starlight, but wasn’t. One plausible scenario that could satisfy these requirements is very dense gas arranged quasi-spherically around the black hole. In this scenario, the black holes powering the little red dots could be significantly less massive than we had originally thought, when we had assumed that dust was obscuring most of the light from the growing black hole.

The task is to explain the characteristic red colour of the little red dots without dust obscuration

In this new picture, the little red dots are powered by black holes that are accreting at much higher rates than are typically seen at later times. A higher accretion rate implies greater luminosity for a given black-hole mass, and therefore we infer much lower black-hole masses, perhaps closer to a million suns, and much more aligned with the measured galaxy masses. As a side benefit, lower black-hole masses are much more natural for objects that are so prevalent, because the number of low-mass dark-matter halos and low-mass galaxies is much higher than the number of high-mass systems.
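The logic can be made concrete with a one-line estimate (a sketch assuming the source shines at a fixed fraction of the Eddington luminosity; the luminosity used below is purely illustrative, not a measured value for any little red dot):

# Inferred black-hole mass versus assumed Eddington ratio.
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (Eddington luminosity)

def inferred_mass(luminosity_erg_s, eddington_ratio):
    """Black-hole mass in solar masses if the source radiates at this Eddington ratio."""
    return luminosity_erg_s / (eddington_ratio * L_EDD_PER_MSUN)

L = 1e45  # erg/s, an illustrative bolometric luminosity
for ratio in (0.1, 1.0):
    print(f"Eddington ratio {ratio}: M ~ {inferred_mass(L, ratio):.1e} solar masses")

Raising the assumed Eddington ratio by a factor of ten lowers the inferred black-hole mass by the same factor.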

Astronomers are still arguing about how this dense gas is configured and accretes onto the black hole, and everyone has their favourite model. We do not know if the geometry of the system is completely spherical, or if we are seeing a mixed-phase medium where the viewing angle is an important parameter. These details matter, because if we can pin down the characteristic size and density of these gas envelopes, we may be able to infer more robust black-hole masses for the population. There has been some recent speculation that the little red dots may be marking the end stages of black-hole seed growth, in which case they could be a critical missing link in our understanding of the formation of the first black holes. However, without more concrete constraints on black-hole mass, we cannot know for sure. At the same time, we need a much better theoretical understanding of what makes little red dots so distinct from the more typical growing black holes we have studied for decades, and why that mode of growth becomes so much less common as the universe ages.

One thing we do know for sure: the more we learn about the little red dots, the more complex and unexpected they become. We are excited to see what new wrinkles arise as we enter our fifth year of JWST operations.

New directions for bent crystals

Soviet accelerator physicists were the first to bend particle beams using bent crystals. Under controlled conditions, the technique can produce beam deflections equivalent to those generated by magnetic fields of hundreds of tesla, far exceeding the limits of superconducting magnets.

Even more strikingly, genuinely enormous magnetic fields also arise in a more subtle way. At LHC energies, the electric fields between crystal planes are Lorentz-boosted into effective magnetic fields of hundreds to thousands of tesla in the rest frame of passing particles. This opens up some unique possibilities for particle physics: probes of new physics once limited to long-lived particles in conventional, orders-of-magnitude weaker magnets may now come within reach for short-lived baryons.

A bobsleigh on a track

Energy loss and multiple scattering are the fate of most charged particles in matter. If carefully aligned to particle trajectories, crystals can be an exception: as positively charged particles fly past nuclei in the planes of the crystal lattice, they experience an averaged electrostatic potential that channels them between the crystal planes. Provided they don’t have enough transverse energy to cross the potential barrier to a neighbouring crystal plane, the particles oscillate between the atomic planes like a bobsleigh on a track (see “Guided paths” figure). If the crystal is mechanically bent, the entire track curves, steering the particles along with it.

Guided paths

Crystal channelling was predicted in simulations by Robinson and Oen in 1963, experimentally confirmed the same year by Piercy, and given its theoretical foundation by Lindhard in 1965. The idea of using bent crystals for beam control was first proposed in 1976 by Tsyganov. Proof-of-principle experiments at JINR Dubna in 1979 demonstrated the channelling of 8.4 GeV protons, achieving deflections equivalent to an 81-tesla magnetic field, and practical applications followed soon after.
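The “equivalent magnetic field” in such statements follows from the usual magnetic-rigidity relation p[GeV/c] ≈ 0.3 B[T] ρ[m] (a sketch; the crystal length and bend angle below are illustrative values, not the actual 1979 geometry):

# Equivalent dipole field of a bent crystal via p [GeV/c] ~ 0.3 * B [T] * rho [m].
def equivalent_field(momentum_GeV, bend_angle_rad, crystal_length_m):
    bending_radius_m = crystal_length_m / bend_angle_rad
    return momentum_GeV / (0.3 * bending_radius_m)

# An 81 T equivalent field for 8.4 GeV protons corresponds to a bending
# radius of about 0.35 m:
print(f"rho = {8.4 / (0.3 * 81):.2f} m")
# A hypothetical 10 mrad bend over a 3.5 mm crystal gives the same field:
print(f"B_eq = {equivalent_field(8.4, 10e-3, 3.5e-3):.0f} T")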

A key modern application of bent crystals is the selective extraction of particles from the beam halo rather than the beam core, to produce a secondary beam. Crystal-based beam extraction was demonstrated up to 8.4 GeV at JINR in Dubna in 1984, then with higher energy protons at IHEP Protvino in 1989, and at CERN’s Super Proton Synchrotron (SPS) in 1993. Later in that decade, Fermilab’s Tevatron extracted beam particles using crystals at a record energy of 900 GeV.

Bent crystals are also used in modern accelerators’ collimation systems to deflect stray particles in the beam halo into shielding blocks that safely absorb them. The exploration of bent crystals for beam collimation began in the 1990s at Brookhaven National Laboratory and Fermilab, but the field underwent a step change in 2006 with the experimental observation of volume reflection at Petersburg Nuclear Physics Institute. This advance was enabled by new manufacturing techniques for high-quality bent silicon crystals. Predicted in the mid-1980s by Taratin and Vorobiev, volume reflection occurs when a particle is coherently deflected by the collective field of bent crystal planes without becoming trapped in a channel, effectively rebounding from the planar potential barrier.

Crystal clear

These breakthroughs motivated the UA9 Collaboration and experts in beam collimation to undertake a systematic programme of crystal-based beam manipulation at the SPS. This effort culminated in 2023, when crystal collimation became an operational reality at the LHC (see “Heavy-ion collimation” figure).

Heavy-ion collimation

This technique addressed a critical limitation of heavy-ion operation: conventional amorphous collimators fragment heavy nuclei into lighter ions, some of which escape the collimation system and can quench downstream superconducting magnets. Bent crystals, by contrast, coherently and deterministically steer beam halo particles onto dedicated absorbers. As a result, crystal collimation was demonstrated to reduce heavy-ion beam losses at LHC magnets by factors of 5 to 13 compared with standard collimation.

New frontiers

The success of TeV-scale beam collimation at the LHC laid the groundwork for another ambitious goal: using bent crystals in the LHC not just to steer beams, but also to probe the spin of short-lived particles. In the intense internal fields between crystal atomic planes, a particle’s spin behaves much like a spinning top in a gravitational field. Rather than simply tipping over, the top’s angular momentum rotates slowly – precesses – under the action of a torque. In close analogy, the magnetic moment of a relativistic particle traversing a bent crystal precesses under the torque generated by the effective magnetic field experienced in its rest frame.
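Quantitatively, for an ultrarelativistic particle channelled through a crystal bent by an angle θ_C, the spin rotation with respect to the momentum direction is governed by the anomalous part of the magnetic moment; a commonly quoted approximate expression (given here only as orientation, not as any collaboration’s own formula) is

\[
  \varphi \;\simeq\; \frac{g-2}{2}\,\gamma\,\theta_C ,
\]

where g is the gyromagnetic factor and γ the Lorentz factor, so the enormous γ of LHC-energy baryons compensates for the small bending angle.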

In 1992, the E761 collaboration used the fixed-target proton beam from the Tevatron to perform the first experimental demonstration of the effect by measuring the magnetic moment of the Σ+ hyperon (uus). This pioneering work used two 4.5 cm-long bent silicon crystals to induce spin precession, proving that the technique could effectively substitute for massive conventional magnets.

Bent crystals could open new frontiers in particle physics at the LHC

Bent crystals could open new frontiers in particle physics at the LHC. The TWOCRYST collaboration is exploring whether the technique can be extended to study the spin of short-lived charm baryons. The idea dates back to 1996, when Samsonov extended the E761 findings to charm baryons and demonstrated that despite their extremely short lifetimes, the intense effective fields of bent crystals could induce measurable spin precession. In 2016, Scandale and Stocchi proposed to use this technique to measure the magnetic dipole moments of charm baryons at the LHC.

The lightest charm baryon, the Λc+ (udc), has an extremely short lifetime of roughly 200 femtoseconds. Even at 1 TeV, it only travels a few centimetres before decaying. The magnetic fields needed to study its spin precession cannot be provided by conventional magnets, but are well within reach if bent crystals are used. If produced at a fixed target, a clean sample of its decays to a proton, a kaon and a pion can be obtained via tracking and invariant-mass reconstruction, with decay angles yielding spin information.
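The “few centimetres” follows directly from time dilation (a sketch using round values for the Λc+ mass and the roughly 200 fs lifetime quoted above):

# Lab-frame decay length of the Lambda_c+ at 1 TeV: L = gamma * beta * c * tau.
C = 2.998e8        # speed of light, m/s
TAU = 2.0e-13      # s, roughly the 200 fs lifetime quoted above
MASS_GEV = 2.29    # GeV, approximate Lambda_c+ mass

energy_GeV = 1000.0
gamma = energy_GeV / MASS_GEV          # beta is essentially 1 at these energies
decay_length_m = gamma * C * TAU
print(f"gamma ~ {gamma:.0f}, decay length ~ {100 * decay_length_m:.1f} cm")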

Such measurements promise a unique opportunity to explore QCD at the interface between heavy and light quarks. Measurements of charm-baryon spin precession would also provide exceptional sensitivity to a possible electric dipole moment – a potential signature of physics beyond the Standard Model. The ALADDIN (An LHC Apparatus for Direct Dipole moments INvestigation) experimental proposal aims to measure the electromagnetic dipole moments of charm baryons, the Λc+ and the Ξc+ (usc), using a double-crystal scheme in the LHC. In this concept, a first bent crystal extracts a small fraction of the LHC beam halo and guides 7 TeV protons onto a fixed target located inside the LHC vacuum pipe, producing, amongst other particles, the charm baryons of interest. The particles would then impinge on a second bent crystal, whose intense inter-planar fields would induce a measurable spin precession.

Such an experiment must deal with challenging demands on the crystal alignment. Channelling only occurs if particles enter a crystal within a narrow angular range, known as the Lindhard angle, which decreases with increasing beam energy. At TeV energies in the LHC, this angle is only a few microradians, meaning that misalignments far smaller than the width of a human hair over a metre are sufficient to suppress channelling entirely. This alignment will be particularly challenging for ALADDIN, which will rely on protons that have scattered off the primary collimators.
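The microradian scale follows from the Lindhard critical angle, θ_L ≈ √(2U₀/pv) (a sketch; the planar potential-well depth used here is an approximate value for silicon (110) planes and is an assumption, not a figure quoted in the article):

# Lindhard critical angle theta_L ~ sqrt(2*U0 / (p*v)), with p*v ~ E for
# ultrarelativistic protons.  U0 is an assumed Si(110) planar well depth.
from math import sqrt

U0_eV = 22.0
for energy_TeV in (0.45, 1.0, 7.0):
    pv_eV = energy_TeV * 1e12
    theta_rad = sqrt(2 * U0_eV / pv_eV)
    print(f"{energy_TeV:>4} TeV: theta_L ~ {theta_rad * 1e6:.1f} microradians")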

Double channelling

TWOCRYST was installed at Insertion Region 3 (IR3) in early 2025 (see “Halo extraction” figure). The experiment marks a significant leap in complexity compared to previous LHC crystal tests. Last year, the experiment successfully channelled LHC protons through two crystals (see “Double channelling” figure). These measurements marked the first controlled deployment of a double-crystal setup in the LHC, demonstrating the technique at 450 GeV, 1 and 2 TeV – a new world record, surpassing the 270 GeV achieved by the UA9 collaboration at the SPS and corresponding to an equivalent magnetic field of 600 tesla. Preliminary analyses of the recorded data indicate that more than 20% of protons were channelled successfully at 1 TeV.

Bent crystals have come a long way since the pioneering experiments at JINR Dubna in 1979. TWOCRYST’s demonstration of double-channelling at a record energy of 2 TeV represents an important step toward using the technique for precision particle-physics measurements with bent crystals at the LHC.

Measurements of spin precession have long played a central role in particle physics, providing deep insights into fundamental interactions and symmetries. The anomalous magnetic moments of the proton and neutron – measured in the 1930s and 1940s – remained unexplained for decades until the emergence of the quark model in the 1960s. While conventional magnet-based techniques remain highly effective for relatively long-lived particles such as the muon (CERN Courier March/April 2025 p21), particles as short-lived as charm baryons have so far remained experimentally inaccessible. The results from TWOCRYST suggest that bent crystals may allow the first direct experimental probe of electromagnetic dipole moments in charm baryons, opening a new window on QCD dynamics and offering a sensitive test for physics beyond the Standard Model.

A thousand anomalies hiding in plain sight

The Hubble Space Telescope has been observing the cosmos for more than 35 years, amassing hundreds of thousands of observations. Each image was taken with a specific scientific goal, yet every exposure contains far more than its intended target: background galaxies, foreground objects and unexpected phenomena scattered across the field of view. Systematic human inspection of the millions of source cutouts in the Hubble Legacy Archive is impossible – but artificial intelligence has now uncovered more than a thousand astrophysical anomalies hiding in plain sight.

The challenge of identifying rare signals amid overwhelming backgrounds will resonate with CERN Courier readers. At the LHC, experiments increasingly deploy anomaly detection methods to search for new physics beyond the Standard Model without fully specifying the signal in advance. Both fields face a shared problem: isolating rare events from billions of observations with minimal prior assumptions about the target. “Semi-supervised” approaches that marry sparse expert knowledge with vast unlabelled datasets may prove as valuable for collider data as they have for astronomical archives.

A new semi-supervised machine-learning framework developed at the European Space Agency in December 2025 has identified 1339 unique astrophysical anomalies spanning 19 distinct morphological classes (see “Six out of 1339” figure). Some 811 of these – roughly 60% – had no prior reference in the scientific literature, despite residing in data that has been publicly available for years. Many of the newly discovered objects are valuable additions to existing catalogues for which known examples are scarce, including collisional ring galaxies, galaxy mergers, jellyfish galaxies and gravitational lenses. Forty-three of the objects defied classification entirely and remain unidentified.

Semi-supervised learning

At the heart of this work lies a fundamental tension in modern astronomy: datasets are growing far faster than our ability to label them. Traditional supervised machine learning requires large, annotated training sets, but expert labelling of millions of images is prohibitively expensive. Semi-supervised learning offers a way forward. In this approach, a model learns simultaneously from a small set of human-labelled examples and a vastly larger pool of unlabelled data, extracting patterns from the abundant unlabelled images to compensate for the scarcity of annotations.

The challenge of identifying rare signals amid overwhelming backgrounds will resonate with CERN Courier readers

The new code we have developed generates provisional “pseudo-labels” when the model’s confidence exceeds a threshold, then enforces consistent predictions on augmented versions of the same images – crops, flips, inversions of the pixel values and so forth. This allows the model to leverage the statistical structure of millions of unlabelled cutouts without requiring a human to inspect each one. The algorithm then couples this semi-supervised backbone with human expertise: after each training cycle, the model ranks all images by anomaly score and a domain expert reviews the highest-ranked candidates, correcting misclassifications and confirming genuine anomalies. These newly labelled images feed the next training cycle. This human-in-the-loop design combines the pattern-recognition capabilities of deep learning with the domain knowledge of an astronomer, achieving an efficiency that neither could match alone.
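The pseudo-labelling and consistency step can be sketched as follows. This is an illustrative, FixMatch-style reconstruction under stated assumptions – a PyTorch classifier `model`, hypothetical `weak_aug` and `strong_aug` augmentation functions and a confidence threshold `tau` – and not the actual ESA code.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_lab, y_lab, x_unlab,
                         weak_aug, strong_aug, tau=0.95, lambda_u=1.0):
    """One training step: supervised loss on the few expert-labelled cutouts
    plus a consistency loss on unlabelled cutouts with confident pseudo-labels."""
    # Supervised term from the small labelled set
    loss_sup = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from weakly augmented views (crops, flips, inversions, ...)
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlab)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= tau  # keep only predictions above the confidence threshold

    # Enforce the same prediction on a strongly augmented view of each image
    logits_strong = model(strong_aug(x_unlab))
    loss_unsup = (F.cross_entropy(logits_strong, pseudo, reduction="none")
                  * mask.float()).mean()

    return loss_sup + lambda_u * loss_unsup
```

In the human-in-the-loop stage described above, the trained model would then score all cutouts and the highest-ranked candidates would be passed to an expert for labelling before the next training cycle.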

In our study, the entire process began with 128 standard astrophysical phenomena and three labelled anomalies for which finding further examples would be valuable: edge-on protoplanetary disks, young stellar objects whose disk, seen edge-on, shows strong emission from a high-energy jet and secondary emission in a striking butterfly shape. Through successive iterations the training set grew to 1400 images, at which point the model could flag anomaly types it had never been shown.

Community access

A search of this scale was made possible by ESA Datalabs, a collaborative science platform that provides researchers with direct access to ESA’s mission archives alongside computational resources – including GPU acceleration – through a browser-based environment. Rather than downloading terabytes of Hubble data, we brought our analysis code to where the data already resides. The full inference run across 99.6 million images completed in just 2.5 days on a single GPU, demonstrating that large-scale anomaly detection does not require vast computational resources, a consideration that matters as the community increasingly weighs the sustainability of data-intensive research.
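For context, a quick arithmetic check using only the numbers quoted above gives the implied per-image throughput:

```python
# Rough throughput implied by the figures above (illustrative arithmetic only)
images = 99.6e6                 # source cutouts processed
seconds = 2.5 * 24 * 3600       # 2.5 days of wall-clock time
print(f"~{images / seconds:.0f} cutouts per second on a single GPU")  # ~461/s
```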

The most abundant anomalies were galaxy mergers: 629 systems hosting tidal tails, bridges and other signatures of gravitational interactions that exist at the very limit of our detection power. We also found 140 candidate gravitational lenses and 39 gravitational arcs, where the warping of spacetime distorts background sources into characteristic arcs and rings. Mergers give us snapshots of hierarchical structure formation, while spacetime distortions provide direct tests of general relativity and enable dark-matter mapping on cosmological scales.

Even decades-old data can yield hundreds of new discoveries when the right tools are brought to bear

The model also independently recovered five previously catalogued quadruply lensed quasars in the Einstein cross configuration – a fourfold splitting of a distant quasar’s light by a foreground galaxy. That the model identified these without any lensed quasars in its training set validates its ability to generalise beyond the anomaly types it was explicitly taught. Fewer than 50 such systems are known, and each enables an independent “late universe” measurement of the Hubble constant; such measurements are invaluable given the persistent tension between values derived from the cosmic microwave background and the local distance ladder (CERN Courier March/April 2025 p28).

Among the genuinely new discoveries were two collisional ring galaxies – rare systems in which a violent galaxy interaction has driven a shockwave through the disc, triggering a burst of star formation. Thirty-five jellyfish galaxies, shaped by ram-pressure stripping in the intracluster medium, provide an excellent laboratory for understanding the relationship between a galaxy’s environment and its internal gas. Finally, 43 sources had morphologies that defied classification entirely – curved, distorted objects that fit none of the established categories and have been released to the community for further investigation.

With the Euclid space telescope now operational, and the Vera C. Rubin Observatory and Square Kilometre Array soon to follow, data volumes will dwarf Hubble’s archive by orders of magnitude. Our work shows that even decades-old data can yield hundreds of new discoveries when the right tools are brought to bear – and that AI-assisted discovery, guided by human expertise, is only just getting started.

Policymaking with data

James Robinson

In physics, as in life, it’s important to persevere in the face of setbacks. When James Robinson joined the ATLAS experiment at CERN in 2008, the Large Hadron Collider had just sputtered into life. “I remember the excitement of the initial startup and the disappointment when data taking was delayed for a year,” recalls Robinson. Over the next decade, Robinson built a career in experimental particle physics, analysing jets and soft-QCD events, convening subgroups, tuning Monte Carlo generators and helping measure luminosity.

By 2018, Robinson was beginning to ponder his professional priorities. “I didn’t really want to spend another three years writing grants and not having much time to do physics,” he says. Constant relocation was another strain. “It was really nice having the freedom to travel, but in your mid-thirties you start thinking maybe it’s time to settle in one location.”

Real-world research

That’s when he spotted an opening at the Alan Turing Institute, the UK’s national centre for data science and AI. The Institute is a research-led organisation that hires experts and academics to find solutions to real-world challenges and to advise UK public policy. The role Robinson initially applied for focused on advanced computing and AI strategy – one that would apply his academic skills and help develop his practical ones. “The Institute has a lot in common with CERN,” he says. “But I applied because of its larger focus on applications of research, rather than pure blue-sky work.”

Today, Robinson is the software engineering research lead in the Turing’s Environment and Sustainability programme, where teams of researchers, data scientists and engineers tackle urgent global challenges. “Right now we’re working with the Met Office on using AI to get faster and better weather predictions in the UK,” he explains. “For other projects, we also partner with African countries to improve forecasts in the global South, and model changes in Arctic and Antarctic sea ice, which is useful for everything from animal migrations to navigation.”

One of Robinson’s first projects was to model London’s air quality to inform the mayor’s office on pollution hot spots. “Traffic turned out to be the most important factor,” he says. “We could point to areas where we thought air quality was bad but under-measured, and the mayor’s office deployed mobile sensors to check. During COVID we even repurposed the project to monitor how busy London was coming out of lockdown. It felt really nice to see a project pivot quickly and directly feed into policy.”

Although the Turing Institute engages with government and public-sector partners, it isn’t a commercial consultancy. Each team decides which areas it would like to work in and which problems to focus on. Once a problem is identified, the next stage is to find the partner that will allow the team’s models to make the most impact. “We’re not here to build a slightly better algorithm for its own sake,” says Robinson. “We want to apply AI to make change in the real world.”

The Institute’s mission echoes the one that first drew Robinson to physics. “One of the big similarities with CERN is the sense that what you’re doing is worthwhile and good for the world,” he says. “It’s still research, but more applied. Improving the weather forecast that everyone sees on their phone – that’s easy to explain to your grandparents.”

Robinson, who had previously been part of decades-long, large-scale research projects at ATLAS, found it extremely satisfying to see the direct impact of his work. “At CERN you contribute a tiny part to a huge experiment,” he says. “Here I get to see a project from start to finish, and sometimes adapted straight into real-world decision making.”

Transferable skills

But was high-energy physics a good preparation for Robinson’s current career?

The answer is a resounding yes. Having done a PhD and two postdocs, he was used to flexible and adaptable timelines. “I was often handed a problem without a clear solution,” he recalls. “Sometimes we have to pivot quickly away from one idea or plan and dive straight into another. That ability to rethink and improve has transferred directly to Turing.”

A lack of formal technical qualifications also need not be a problem. “Many of us were self-taught programmers at CERN,” he says. “The fact you’ve done research, adapted and developed those skills is what matters.”

Collaboration is another common thread. “Like CERN, Turing is a meeting place for people from many different institutions,” he says. “No one can just order work to happen. You negotiate, you build consensus.”

But Robinson notes that applying for non-academic roles requires a shift in mindset. While academic CVs and cover letters are often long and detailed, applications for industry, consultancy or somewhere in between, like the Institute, may look different.

“Don’t go into the specifics of your ATLAS analysis because it won’t be directly relevant in industry,” says Robinson. “Show your research experience, but focus on the skills: problem-solving, collaboration, adaptability.”

But most importantly, make sure the values of the organisation you’re applying to align with your own. For Robinson, the Turing Institute was an obvious choice.

“I’m taking the same mindset I had at CERN and using it to make a difference you can see,” says Robinson. “That’s the rewarding part: turning data into something that genuinely helps people.”

The revolution ahead

Michael S Turner

Particle physics is the modern manifestation of the two-thousand-year quest to understand nature at the most fundamental level possible. That journey has not only deepened our understanding of the physical world but has also reaped enormous benefits for humanity, and is continuing to do so.

I have experienced two revolutions in this quest – the 1974 revolution in particle physics and the 1998 ΛCDM revolution that cemented the relationship between particle physics and cosmology. I am now anxiously awaiting a third. This one will deepen the connections between the quantum world of elementary particles and Einstein’s expanding universe by answering big questions about the origin of space, time and the universe as well as the unity of the particles and forces.

Powerful ideas, big surprises

In the early 1970s I was a graduate student at SLAC; it was an exciting and confusing time. Deep-inelastic scattering experiments at SLAC revealed free partons inside neutrons and protons, but they could not be knocked out. The SU(3) quark model successfully classified the elementary particles and predicted mass relations, but without any dynamics. There were powerful theoretical ideas – quantum field theory, the bootstrap, Regge trajectories, the eightfold way and scattering amplitudes – but no unifying picture.

In November 1974, the discovery of the J/ψ particle was announced. Seemingly overnight, the Standard Model of particle physics, with its SU(3) of colour (not flavour) and SU(2) × U(1) electroweak unification, was in place. All the pieces had been on the table earlier – Weinberg’s broken-symmetry model of the weak and electromagnetic interactions, Gross–Wilczek–Politzer’s asymptotic freedom, the GIM mechanism and evidence for quarks – but it was the discovery of the J/ψ that made it all gel.

The 1980s and 1990s were exciting as new connections between the inner space of elementary particles and the outer space of cosmology were identified – some involving my own research. Inflation and particle dark matter in the form of slowly-moving particles – cold dark matter – led to an expansive theory about the early evolution of the universe along with strong predictions, including a flat, critical density universe, formation of structure from the bottom up, and scale-invariant density perturbations that arose from quantum fluctuations.

But measurements of the matter density were coming up far short of the critical density, predictions for the large-scale distribution of matter didn’t fit the observations, and measurements of the age of the universe and the Hubble constant conflicted with a flat universe and possibly with each other. Amidst all the confusion, some thought the bubble of enthusiasm would burst.

We are ready for another revolution that transforms our view of matter, energy, space and time, but when?

Then, in early 1998, two supernova teams announced that the expansion of the universe is speeding up, not slowing down, and the missing piece of the puzzle had been found. ΛCDM quickly fell into place: a flat universe with cold dark matter accounting for a third of the critical density and the other two thirds in dark energy – something like a cosmological constant.

A bittersweet memory reminds me how fast things changed. My close friend and mentor, cosmologist David Schramm, was slated to debate whether the universe was flat with Jim Peebles in April 1998. David, who had the seemingly indefensible “flat” side of the debate, died tragically in a plane crash just weeks before the discovery of cosmic acceleration. When the debate took place and I subbed for David, the title had been changed to, “Cosmology solved?”

Here we are today, with two highly successful standard models that nonetheless raise profound questions about the fundamental nature of matter, energy, space and time, and an abundance of powerful theoretical ideas not yet fully exploited or even completely understood.

There are plenty of clues. The 125 GeV Higgs – who ordered that? The dark-matter particle, dark energy and neutrino mass are not part of the Standard Model and hint at deeper connections between inner and outer space. Recent results from DESI indicate that dark energy may be evolving and is not a cosmological constant. And there is the Hubble tension, which could be telling us something is missing, both in cosmology and particle physics.

On the hunt

But sensitive searches for the dark-matter particle, at the LHC and other colliders, in deep underground experiments and space observatories, have come up short. The Higgs has yet to reveal its secrets. And there has yet to be experimental evidence for the predictions of the powerful theoretical ideas of supersymmetry, grand unification and string theory, which must play a role in moving forward.

We are ready for another revolution that transforms our view of matter, energy, space and time, but when? Take it from a cosmologist: predicting the past is hard and predicting the future is even harder. Nonetheless, just to illustrate, I mention two possibilities, based upon two speculative papers I have written.

The first is the detection by LIGO of gravitational waves from an unexpected cosmological phase transition at a temperature of around 100 TeV; the second is the discovery that the observed CMB dipole is misaligned with that expected from large-scale structure and arises instead as a revealing relic of cosmic inflation. Either would shake things up and lead to additions, discoveries and connections. Moreover, I am confident that the real triggering event will be even more impactful and exciting.

The discovery frontier today is very broad, from table-top experiments to colliders to telescopes on the ground and in space, and big ideas abound. The world is waiting and watching. Now is the time to double down and to believe that the next result will be the one that ushers in the coming revolution in our understanding of matter, energy, space and time.

Eiffel honour for women physicists

When the Eiffel Tower opened for the 1889 Exposition Universelle, its girders bore in gold lettering the names of scientists who, Gustave Eiffel said, had honoured France since 1789. Every one of them was a man. Some 137 years later, on 26 January 2026, Anne Hidalgo, the mayor of Paris, accepted the nomination of 72 women scientists to join them.

The list spans nearly 250 years and multiple disciplinary domains. Many made important contributions to nuclear and particle physics, and several had close associations with strong partners to CERN such as the Centre national de la recherche scientifique (CNRS) and the Commissariat à l’énergie atomique et aux énergies alternatives (CEA).

Foremost among the women to be honoured is Polish–French physicist Marie Skłodowska Curie (1867–1934), who discovered polonium and radium, helping to establish radioactivity as an intrinsic property of atoms. She carried out systematic measurements of radioactive substances, determined radium’s atomic weight and developed methods to isolate radioactive elements from pitchblende. She shared the 1903 Nobel Prize in Physics and later won the 1911 Nobel Prize in Chemistry, becoming the first woman laureate and the only person to receive Nobel prizes in two different scientific fields.

A pioneer in X-ray spectroscopy, Yvette Cauchois (1908–1999) invented the Cauchois spectrometer, a curved-crystal spectrometer widely used for the analysis of X-rays and gamma rays. She introduced X-ray spectroscopy using synchrotron radiation to Europe and later studied the X-ray spectrum of the Sun.

A trailblazer for women physicists in Japan, nuclear physicist Toshiko Yuasa (1909–1980) studied the continuous spectrum of beta radiation emitted by artificial radioactive substances and developed her own double-focusing spectrometer. In 1955 she warned of the dangers of nuclear tests at Bikini Atoll. In the 1960s, promoted to senior research fellow at CNRS, she studied nuclear reactions using a synchrocyclotron.

Marie-Antoinette Tonnelat (1912–1980) worked on early unified theories that sought to connect gravity and electromagnetism. She served as director of research at CNRS.

Henriette Faraggi (1915–1985) introduced new techniques with photographic emulsions and directed the CEA Department of Nuclear Physics from 1972 to 1978. She also served as chair of the Nuclear Physics Commission of IUPAP and became the first woman elected president of the French Physical Society. Convinced early on of the importance of high-energy heavy-ion physics for studying quark–gluon plasma, she played a key role in the decision to build GANIL in Caen.

Cécile DeWitt-Morette (1922–2017) worked in quantum field theory and gravitation, and founded the Les Houches Summer School in 1951, which became a major international centre for theoretical physics training. She later contributed to path-integral methods in quantum theory.

Yvonne Choquet-Bruhat (1923–2025) placed Einstein’s field equations of general relativity on a firmer mathematical ground, showing how their behaviour follows from appropriate initial conditions. In 1979 she became the first woman elected as a full member of the Académie des Sciences.

A specialist in cosmic radiation, Lydie Koch (1931–2023) led stratospheric-balloon experiments to detect cosmic rays, contributed to the development of innovative germanium and silicon detectors for the HEAO-3 and COS-B satellites, and advanced X-ray and gamma-ray astronomy. She played a central role in the development of astrophysics at the CEA and was head of the Astrophysics Section from 1967 to 1979.

“It is time for this highly symbolic landmark to embrace the cause of equality between women and men, and to restore women to their rightful place on this monument dedicated to the glory of science and scientists,” said Hidalgo.

All that antimatters in the universe

Intersections

Applying the Standard Model (SM) to early cosmological times leads to an uninhabitable universe, with tiny and equal amounts of matter and antimatter. Yet the universe is habitable and the local universe strongly matter-dominated. Observations of the diffuse gamma-ray background and cosmic microwave background show no evidence for the presence of antimatter on large scales and rule out a matter–antimatter symmetric universe.

From 19 to 22 January, 80 particle physicists, astronomers and cosmologists gathered at CERN for the first “All that Antimatters in the Universe” workshop to explore the frontier between the laboratory and astrophysical perspectives on the matter–antimatter asymmetry of the universe.

Broad panorama

Julia Harz (Mainz University) reviewed a broad panorama of baryogenesis models in which physics beyond the SM produces a homogeneous matter excess within the first seconds after the Big Bang, before light elements are synthesised. She highlighted their features and potential tests and constraints, including direct searches at colliders such as the LHC and indirect probes such as searches for neutrinoless double-beta decay.

Questioning our assumptions about antimatter was a central thread of the workshop, with several presentations highlighting non-standard baryogenesis models that allow domains of antimatter to survive the Big Bang, as well as others in which antimatter is hidden in compact nuggets that could also constitute dark matter. A lively discussion explored how to hunt for these scenarios using astrophysical and cosmological observables. For example, spectral distortions of the cosmic microwave background could indicate energy injections from matter–antimatter annihilation in the early universe. Observations at the 21 cm wavelength offer another probe: these signals trace neutral hydrogen during the cosmic-dawn epoch, when the first stars and galaxies formed, and could reveal anomalous heating or ionisation patterns characteristic of antimatter annihilation.

Questioning assumptions about antimatter was a central thread of the workshop

The discrete symmetries of charge conjugation (C), parity (P) and time reversal (T) have been central to particle physics since the discovery that nature violates them individually, yet their combined action (CPT) appears to be preserved in all standard interactions. In a particularly sharp presentation, Gabriela Barenboim (University of Valencia) stressed that while much attention is devoted to the search for differences in the interactions between particles and antiparticles through CP-symmetry violation, the more fundamental possibility of CPT violation remains largely unexplored. Unlike CP violation, which can occur within the Standard Model, any breakdown of CPT symmetry would signal new physics and could manifest as differences in the intrinsic properties of particles and antiparticles, including their masses and lifetimes.

Leading stress-tests of CPT symmetry are now carried out at CERN’s Antimatter Factory (AF), whose experiments presented an array of impressive results at the workshop. Eric Hunter (CERN) highlighted the potential of boosting the yield of antihydrogen formation at the AF experiments, showing how this could improve our knowledge of antimatter physics enormously. Improved yields of antimatter replicas of naturally occurring matter-based atoms would enable higher precision tests of key electromagnetic transitions and gravitational interactions of antimatter.

Much attention went to antimatter in cosmic rays. Primary cosmic rays are particles accelerated at astrophysical sources such as supernova remnants and injected into the galaxy, whereas secondary cosmic rays are produced when those primaries collide with gas and dust in the interstellar medium. In standard galactic cosmic-ray models, antimatter is purely a secondary product of the interactions of primary cosmic rays with the interstellar medium. However, the AMS-02 experiment operating on the International Space Station has firmly established a positron excess requiring a primary source, possibly pulsars. AMS-02 antiproton data also show some anomalies, but uncertainties in the propagation models and interaction cross-sections remain large.

Mind the GAPS

Complementary searches for cosmic-ray antimatter are also carried out by balloon-borne experiments. Principal investigator Chuck Hailey (Columbia University) described how the GAPS balloon experiment, uniquely suited to probing low-energy antiprotons, antideuterons and antihelium, reported its first data from a 25-day flight completed in early 2026. What sets GAPS apart is its exploitation of the characteristic X-ray emission produced by short-lived bound states between antimatter nuclei and ordinary atoms, which provides excellent particle-identification and background-rejection capabilities.

The atmosphere at the workshop was excellent, with participants curious to learn from other communities and expand their horizons everywhere that antimatter matters in the universe, from the cosmos to the lab, via astrophysical systems. While antimatter still holds many mysteries, All that Antimatters in the Universe brought us one step closer to answering them.
