The physicist who fought war and cancer

The courage of his convictions

Joseph Rotblat’s childhood was blighted by the destruction visited on Warsaw, first by the Tsarist Army, followed by the Central Powers and completed by the Red Army from 1918 to 1920. His father’s successful paper-importing business went bankrupt in 1914, and the family became destitute. After a short course in electrical engineering, Joseph and a teenaged friend became jobbing electricians. A committed autodidact, Rotblat found his way into the Free University, where he studied physics under Ludwik Wertenstein. Wertenstein had worked with Marie Skłodowska-Curie in Paris and was the chief of the Radiological Institute in Warsaw as well as teaching at the Free University. He was the first to recognise Rotblat’s brilliance and retained him as a researcher at the Institute. Rotblat’s main research was neutron-induced artificial radioactivity: he was among the first to produce cobalt-60, which became a standard source in radiotherapy machines before reliable linear accelerators were available.

Chadwick described Rotblat as “very intelligent and very quick”

By the late 1930s, Rotblat had published more than a dozen papers, some in English journals after translation by Wertenstein; the name Rotblat was becoming known in neutron physics. The professor regarded him as the likely next head of the Radiological Institute and thought he should prepare by working outside Poland. Rotblat wanted to gain experience of the cyclotron and, although he could have joined the Joliot–Curie group in Paris, elected to go to Liverpool, where James Chadwick was overseeing a machine expected to produce a proton beam within months. He arrived in Liverpool in April 1939 and was shocked by the city’s filth. He also found the Scouse dialect of its citizens incomprehensible. Despite the trying circumstances, Rotblat soon impressed Chadwick with his experimental skill and was rewarded with a prestigious fellowship. Chadwick wrote to Wertenstein in June describing Rotblat as “very intelligent and very quick”.

Brimming with enthusiasm

Chadwick had formed a long-distance friendship with Ernest Lawrence, the cyclotron’s inventor, who kept him apprised of developments in Berkeley. At the time of Rotblat’s arrival, Lawrence was brimming with enthusiasm about the potential of neutrons and radioactive isotopes from cyclotrons for medical research, especially in cancer treatment. Chadwick hired Bernard Kinsey, a Cambridge graduate who had spent three years with Lawrence, to take charge of the Liverpool cyclotron; Kinsey befriended Rotblat. Liverpool had limited funding: Chadwick complained to Lawrence that the money “this laboratory has been running on in the past few years – is less than some men spend on tobacco.” Chadwick served on a Cancer Commission in Liverpool under the leadership of Lord Derby, which planned to bring cancer research to the Liverpool Radium Institute using products from the cyclotron.

James Chadwick

The small stipend from the Oliver Lodge fellowship enabled Rotblat to return to Warsaw in August 1939 to collect his wife, Tola, and bring her to England. She was recovering from acute appendicitis, and her doctors persuaded Joseph that she was not fit to travel. So he returned alone on the last train allowed to pass through Berlin before the Germans attacked Poland once more. Tola wrote her last letter to Joseph in December 1939. While he was in Warsaw, Rotblat confided in Wertenstein his belief that a uranium fission bomb was feasible using fast neutrons, and he repeated this argument to Chadwick when he returned to Liverpool. Chadwick eventually became the leader of the British contingent on the Manhattan Project and arranged for Rotblat to come to Los Alamos in 1944 while remaining a Polish citizen. Rotblat worked in Robert Wilson’s cyclotron group and survived a significant radiation accident, receiving an estimated dose of 1.5 J/kg (1.5 Gy) to his upper torso and head. The circumstances of his leaving the project in December 1944 were far more complicated than the moralistic account he wrote in The Bulletin of the Atomic Scientists 40 years later, but no less noble.

Tragedy and triumph

As Chadwick wrote to Rotblat in London, he saw “very obvious advantages” for the future of nuclear physics in Britain from Rotblat’s return to Liverpool. For one thing, “Rotblat has a wider experience on the cyclotron than anyone now in England,” and he also possessed “a mass of information on the equipment used in Project Y [Los Alamos] and Chicago.” Chadwick had two major roles in mind for Rotblat: the first was to revitalise the depleted Liverpool department and stimulate cyclotron research in England; the second was to collate the detailed data on nuclear physics brought back by British scientists returning from the Manhattan Project. In 1945, Rotblat discovered that six members of his family had miraculously survived the war in Poland – but, tragically, not Tola. His despair deepened after the news of the atomic bombs being used against Japan: he knew about the possibility of a hydrogen bomb, and remembered conversations with Niels Bohr in Los Alamos about the risks of a nuclear arms race. He made two resolutions: to campaign against nuclear weapons, and to leave academic nuclear physics and become a medical physicist, using his scientific knowledge for the direct benefit of people.

Joseph Rotblat
Robert Wilson

When Chadwick returned to Liverpool from the US, he found the department in a much better state than he expected. The credit for this belonged largely to Rotblat’s leadership; Chadwick wrote to Lawrence praising his outstanding ability, combined with a truly remarkable concern for the staff and students. Chadwick and Rotblat then agreed to build a synchrocyclotron in Liverpool. Rotblat selected the abandoned crypt of an unbuilt Catholic cathedral as the best site, since the local topography would provide some radiation protection. The post-war shortages, especially of steel, made this an extremely ambitious project. Rotblat presented a successful application to the Department of Scientific and Industrial Research for its largest university grant and, despite design and construction problems that sent costs spiralling, the machine was in active research use from 1954 to 1968.

With the encouragement of physicians at Liverpool Royal Infirmary, Rotblat started to dabble in nuclear medicine, imaging thyroid glands and treating haematological disorders. In 1949 he saw an advert for the chair in physics at the Medical College of St. Bartholomew’s Hospital (Bart’s) in London and applied. While Rotblat was easily the most accomplished candidate, his appointment was long delayed on spurious grounds: that he was over-qualified to teach physics to medical students, that he was likely to be a heavy consumer of research funds – and plain xenophobia. Bart’s was a closed, reactionary institution. There was a clear division between the Medical College, with its links to London University, and the hospital, where post-war teaching was suboptimal as it struggled to recover from the war and adjusted reluctantly to the new National Health Service (NHS). The Medical College, in Charterhouse Square, had been severely bombed in the Blitz and the physics department completely destroyed. Rotblat attempted to thwart his main opponent, the dean (described as “secretive and manipulative” in one history), by visiting the hospital and meeting senior clinicians and governors. There was also a determined effort, orchestrated by Chadwick, to retain him in the ranks of nuclear physicists.

When I interviewed Rotblat in 1994, he told me that Chadwick’s final tactic was to tell him that he was close to being elected as a fellow of the Royal Society, but if he took the position at Bart’s, it would never happen. Rotblat poignantly observed: “He was right.” I mentioned this to Lorna Arnold, the nuclear historian, who thought it was a shame. She said she would take it up with her friend Rudolf Peierls. Despite being in poor health, Peierls vowed to correct this omission, and the next year the Royal Society elected Rotblat a fellow at the age of 86.

Full-time medical physicist

Rotblat’s first task at Bart’s, when he finally arrived in 1950, was to prepare a five-year departmental plan: a task he was well-qualified for after his experience with the synchrocyclotron in Liverpool. With wealthy, centuries-old hospitals such as Bart’s allowed to keep their endowments after the advent of the NHS, he also became an active committee member for the new Research Endowment Fund that provided internal grants and hired research assistants. The physics department soon collaborated with the biochemistry, pharmacology and physiology departments that required radioisotopes for research. He persuaded the Medical College to buy a 15 MV linear accelerator from Mullard, an English electronics company, which never worked for long without problems.

Rotblat resolved to campaign against nuclear weapons and use his scientific knowledge for the direct benefit of people

During his first two years, in addition to the radioisotope work, he studied the passage of electrons through biological tissue and the energy dissipation of neutrons in tissue – the 1950s were a golden age for radiobiology in England, and Rotblat forged close relationships with Hal Gray and his group at the Hammersmith Hospital. In the mid-1950s, he was approached by Patricia Lindop, a newly qualified Bart’s physician who had also obtained a first-class degree in physiology. Lindop had a five-year grant from the Nuffield Foundation to study ageing and, after discussions with Rotblat, it was soon arranged that she would study the acute and long-term effects of radiation in mice at different ages. This was a massive, prospective study that would eventually involve six research assistants and a colony of 30,000 mice. Rotblat acted as the supervisor for her PhD, and they published multiple papers together. In terms of acute death (within 30 days of a high, whole-body dose), she found that mice that were one day old at exposure could tolerate the highest doses, whereas four-week-old mice were the most vulnerable. The interpretation of the long-term effects was much less clear-cut and provoked major disagreements within the radiobiology community. In a 1994 letter, Rotblat mused on the number of Manhattan Project scientists still alive: “According to my own studies on the effects of radiation on lifespan, I should have been dead a long time, having received a sub-lethal dose in Los Alamos. But here I am, advocating the closure of Los Alamos, Livermore and Sandia, instead of promoting them as health resorts!”

Patricia Lindop

In 1954, the US Bravo test obliterated part of Bikini Atoll and showered a Japanese fishing boat (Lucky Dragon No. 5) that was outside the exclusion zone in the Pacific with radioactive dust. American scientists realised that the weapon had massively exceeded its designed yield, and there was an unconvincing attempt to allay public fear. Rotblat was invited onto the BBC’s flagship current-affairs programme, Panorama, to explain to the public the difference between the original fission bombs and the H-bomb. His lucid delivery impressed Bertrand Russell, a mathematical philosopher and a leading pacifist in World War I, who also spoke on Panorama. The two became close friends. When Rotblat went to a radiobiology conference a few months later, he met a Japanese scientist who had analysed the dust recovered from Lucky Dragon No. 5. The dust comprised about 60% rare-earth isotopes, leading Rotblat to conclude that most of the explosive energy came from fission, not fusion. He wrote his own report – not based on any inside knowledge and despite official opposition – concluding that this was a fission–fusion–fission bomb and that his TV presentation had underestimated its power by orders of magnitude. Rotblat’s report became public just as the British Cabinet decided in secret to develop thermonuclear weapons. The government was concerned that the Americans would view this as another breach of security by an ex-Manhattan Project physicist. Rotblat’s reputation as a man of the political left grew within the conservative institution of Bart’s.

At the end of 1954, Russell made a radio address on the global existential threat posed by thermonuclear weapons, urging the public to “remember your humanity and forget the rest”. Six months later, Russell announced the Russell–Einstein Manifesto, with Rotblat among the signatories and the man Russell relied upon to answer questions from the press. The first Pugwash conference followed in 1957, with Rotblat as a prominent contributor. His active involvement, closely supported by Lindop, would last for the rest of his life, as he encouraged communication across the East–West divide and pushed for international arms-control agreements. Much of this work took place in his office at Bart’s. Rotblat and the Pugwash conferences shared the 1995 Nobel Peace Prize.

JUNO takes aim at neutrino-mass hierarchy

Compared to the quark sector, the lepton sector is the Wild West of the weak interaction, with large mixing angles and large uncertainties. To tame this wildness, neutrino physicists are set to bring a new generation of detectors online in the next five years, each roughly an order of magnitude larger than its predecessor. The first of these to become operational is the Jiangmen Underground Neutrino Observatory (JUNO) in Guangdong Province, China, which began data taking on 26 August. The new 20 kton liquid-scintillator detector will seek to resolve one of the major open questions in particle physics: whether the third neutrino-mass eigenstate (ν3) is heavier or lighter than the second (ν2).

“Building JUNO has been a journey of extraordinary challenges,” says JUNO chief engineer Ma Xiaoyan. “It demanded not only new ideas and technologies, but also years of careful planning, testing and perseverance. Meeting the stringent requirements of purity, stability and safety called for the dedication of hundreds of engineers and technicians. Their teamwork and integrity turned a bold design into a functioning detector, ready now to open a new window on the world of neutrinos.”

Main goals

Neutrinos interact only via the parity-violating weak interaction, which provides direct evidence only for left-handed neutrinos. As a result, right-handed neutrinos are not part of the Standard Model (SM) of particle physics. Since the SM generates fermion masses through a coupling of the Higgs field to a left-handed fermion and its right-handed counterpart of the same flavour, neutrinos are predicted to be massless – a prediction so far consistent with every direct neutrino-mass measurement yet attempted. Yet decades of observations of the flavour oscillations of solar, atmospheric, reactor, accelerator and astrophysical neutrinos have provided incontrovertible indirect evidence that neutrinos have tiny masses, below the sensitivity of current instruments to measure directly. Quantum interference between the mass eigenstates that compose the flavour eigenstates – the electron, muon and tau neutrinos – indicates that there must be a small mass splitting between ν1 and the slightly more massive ν2, and a larger splitting to ν3. But it is not yet known whether the mass eigenvalues follow a so-called normal hierarchy, m1 < m2 < m3, or an inverted hierarchy, m3 < m1 < m2. Resolving this question is the main physics goal of the JUNO experiment.
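The spectral interference that distinguishes the two hierarchies can be sketched with the standard three-flavour vacuum survival probability for reactor antineutrinos. This is a minimal illustration only: the mixing angles and mass splittings below are ballpark global-fit values rather than JUNO inputs, and matter effects and detector response are ignored.

```python
import numpy as np

# Ballpark oscillation parameters (illustrative, not JUNO's official inputs)
SIN2_T12 = 0.307        # sin^2(theta_12)
SIN2_T13 = 0.022        # sin^2(theta_13)
DM2_21 = 7.5e-5         # eV^2, m2^2 - m1^2
DM2_31_ABS = 2.5e-3     # eV^2, |m3^2 - m1^2|

def survival_probability(E_MeV, L_km=53.0, normal=True):
    """Vacuum P(anti-nu_e -> anti-nu_e) for a reactor antineutrino of energy E_MeV."""
    dm2_31 = DM2_31_ABS if normal else -DM2_31_ABS
    dm2_32 = dm2_31 - DM2_21
    E_GeV = E_MeV * 1e-3
    # Oscillation phases: Delta_ij = 1.267 * dm2_ij[eV^2] * L[km] / E[GeV]
    d21 = 1.267 * DM2_21 * L_km / E_GeV
    d31 = 1.267 * dm2_31 * L_km / E_GeV
    d32 = 1.267 * dm2_32 * L_km / E_GeV
    c4_13 = (1 - SIN2_T13) ** 2
    s2_2t12 = 4 * SIN2_T12 * (1 - SIN2_T12)
    s2_2t13 = 4 * SIN2_T13 * (1 - SIN2_T13)
    return (1
            - c4_13 * s2_2t12 * np.sin(d21) ** 2
            - s2_2t13 * ((1 - SIN2_T12) * np.sin(d31) ** 2
                         + SIN2_T12 * np.sin(d32) ** 2))

E = np.linspace(1.8, 8.0, 2000)            # typical reactor spectrum range, MeV
p_no = survival_probability(E, normal=True)
p_io = survival_probability(E, normal=False)
print(f"largest NO-IO difference: {np.max(np.abs(p_no - p_io)):.3f}")
```

Flipping the sign of Δm²₃₁ shifts the phase of the fast, θ₁₃-driven wiggles relative to the slow solar-term envelope, producing a percent-level difference in the measured spectrum – which is why JUNO needs such fine energy resolution to tell the two orderings apart.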

JUNO’s determination of the mass ordering is largely free of parameter degeneracies

“Unlike other approaches, JUNO’s determination of the mass ordering does not rely on the scattering of neutrinos with atomic electrons in the Earth’s crust or the value of the leptonic CP phase, and hence is largely free of parameter degeneracies,” explains JUNO spokesperson Wang Yifang. “JUNO will also deliver order‑of‑magnitude improvements in the precision of several neutrino‑oscillation parameters and enable cutting‑edge studies of neutrinos from the Sun, supernovae, the atmosphere and the Earth. It will also open new windows to explore unknown physics, including searches for sterile neutrinos and proton decay.”

Additional eye

Located 700 m underground near Jiangmen city, JUNO detects antineutrinos produced 53 km away by the Taishan and Yangjiang nuclear power plants. At the heart of the experiment is a liquid-scintillator detector inside a 44 m-deep water pool. A stainless-steel truss supports an acrylic sphere housing the liquid scintillator, as well as 20,000 20-inch photomultiplier tubes (PMTs), 25,600 three-inch PMTs, front-end electronics, cabling and magnetic compensation coils. All the PMTs operate simultaneously to capture scintillation light from neutrino interactions and convert it to electrical signals.

To resolve the extremely fine oscillation pattern that encodes the neutrino-mass hierarchy, the experiment must achieve an energy resolution of almost 50 keV for a typical 3 MeV reactor antineutrino. To attain this, JUNO had to push performance margins in several areas relative to the KamLAND experiment in Japan, previously the world’s largest liquid-scintillator detector.
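As a rough consistency check, ~50 keV at 3 MeV matches the benchmark figure often quoted for JUNO of about 3% resolution at 1 MeV, assuming a simple photon-statistics 1/√E scaling (an assumed simplification – the full resolution model contains additional terms):

```python
import math

a = 0.03          # assumed stochastic term: 3% relative resolution at 1 MeV
E = 3.0           # MeV, typical reactor antineutrino energy
# sigma_E / E = a / sqrt(E[MeV])  =>  sigma_E = a * sqrt(E)
sigma_E = a * math.sqrt(E)  # MeV
print(f"sigma at {E:.0f} MeV ≈ {sigma_E * 1e3:.0f} keV")  # prints: sigma at 3 MeV ≈ 52 keV
```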

“JUNO is a factor 20 larger than KamLAND, yet our required energy resolution is a factor two better,” explains Wang. “To achieve this, we have covered the full detector with PMTs with only 3 mm clearance and twice the photo-detection efficiency. By optimising the recipe of the liquid scintillator, we were able to improve its attenuation length by a factor of two to over 20 m, and increase its light yield by 50%.”

Go with the flow

Proposed in 2008 and approved in 2013, JUNO began underground construction in 2015. Detector installation started in December 2021 and was completed in December 2024, followed by a phased filling campaign. Within 45 days, the team filled the detector with 60 ktons of ultra‑pure water, keeping the liquid‑level difference between the inner and outer acrylic spheres within centimetres and maintaining a flow‑rate uncertainty below 0.5% to safeguard structural integrity.

Over the next six months, 20 ktons of liquid scintillator progressively filled the 35.4 m diameter acrylic sphere while displacing the water. Stringent requirements on scintillator purity, optical transparency and extremely low radioactivity had to be maintained throughout. In parallel, the collaboration conducted detector debugging, commissioning and optimisation, enabling a seamless transition to full operations at the completion of filling.

JUNO is designed for a scientific lifetime of up to 30 years, with a possible upgrade path allowing a search for neutrinoless double‑beta decay, says the team. Such an upgrade would probe the absolute neutrino-mass scale and test whether neutrinos are truly Dirac fermions, as assumed by the SM, or Majorana fermions without distinct antiparticles, as favoured by several attempts to address fundamental questions spanning particle physics and cosmology.

First oxygen and neon collisions at the LHC

In the first microseconds after the Big Bang, extreme temperatures prevented quarks and gluons from binding into hadrons, filling the universe with a deconfined quark–gluon plasma. Heavy-ion collisions between pairs of gold (¹⁹⁷Au⁷⁹⁺) or lead (²⁰⁸Pb⁸²⁺) nuclei have long been observed to produce fleeting droplets of this medium, but light-ion collisions remain relatively unexplored. Between 29 June and 9 July 2025, LHC physicists pushed the study of the quark–gluon plasma into new territory, with the first dedicated studies of collisions between pairs of oxygen (¹⁶O⁸⁺) and neon (²⁰Ne¹⁰⁺) nuclei, and between oxygen nuclei and protons.

“Early analyses have already helped characterise the geometry of oxygen and neon nuclei, including the latter’s predicted prolate ‘bowling-pin’ shape,” says Anthony Timmins of the University of Houston. “More importantly, they appear consistent with the onset of the quark-gluon plasma in light–ion collisions.”

As the quark–gluon plasma appears to behave like a near-perfect fluid with low viscosity, the key to modelling heavy-ion collisions is hydrodynamics – the physics of how fluids evolve under pressure gradients, viscous stresses and other forces. When two lead nuclei collide at the LHC, they create a tiny, extremely hot fireball where quarks and gluons interact so frequently they reach local thermal equilibrium within about 10⁻²³ s. Measurements of gold–gold collisions at Brookhaven’s RHIC and lead–lead collisions at the LHC suggest that the quark–gluon plasma flows with an extraordinarily low viscosity, close to the quantum limit, allowing momentum to move rapidly across the system. But it’s not clear whether the same rules apply to the smaller nuclear systems involved in light-ion collisions.

“For hydrodynamics to work, along with the appropriate quark-gluon plasma equation of state, you need a separation of scales between the mean free path of quarks and gluons, the pressure gradients and overall system size,” explains Timmins. “As you move to smaller systems, those scales start to overlap. Oxygen and neon are expected to sit near that threshold, close to the limits of plasma formation.”

Across the oxygen–oxygen and neon–neon datasets, the ALICE, ATLAS and CMS collaborations decomposed the transverse distribution of emitted particles into Fourier modes – a way to search for collective, fluid-like behaviour. Measurements of the “elliptic” and “triangular” Fourier components as functions of event multiplicity support the emergence of a collective flow driven by the initial collision geometry. The collaborations observe signs of energetic-probe suppression in oxygen–oxygen collisions – a signature of the droplet “quenching” jets in a way not observed in proton–proton collisions. Similar features appeared in a one-day xenon–xenon run that took place in October 2017.
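The idea behind the Fourier decomposition can be illustrated with a toy event sample: draw particle azimuthal angles from a distribution with a built-in elliptic modulation, then recover the coefficient from the angles alone. The input v₂ and event-plane angle Ψ₂ here are arbitrary illustrative choices, not measured values.

```python
import numpy as np

rng = np.random.default_rng(0)
V2_TRUE, PSI2 = 0.10, 0.7    # illustrative elliptic-flow coefficient and event plane
N = 200_000

# Accept-reject sampling from dN/dphi proportional to 1 + 2*v2*cos(2*(phi - Psi2))
phi = np.empty(0)
while phi.size < N:
    cand = rng.uniform(0.0, 2.0 * np.pi, N)
    accept = rng.uniform(0.0, 1.0, N) < (
        (1 + 2 * V2_TRUE * np.cos(2 * (cand - PSI2))) / (1 + 2 * V2_TRUE)
    )
    phi = np.concatenate([phi, cand[accept]])
phi = phi[:N]

# Second-order Q-vector: its angle estimates the event plane,
# and projecting the particles onto that plane estimates v2
Q = np.mean(np.exp(2j * phi))
psi2_est = np.angle(Q) / 2.0
v2_est = np.mean(np.cos(2.0 * (phi - psi2_est)))
print(f"recovered v2 = {v2_est:.3f} (input {V2_TRUE})")
```

Real analyses use multi-particle cumulants and event-plane resolution corrections to suppress non-flow correlations; this sketch only shows how a cos 2φ modulation of the particle distribution translates into a measurable v₂.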

These initial results are just a smattering of those to come

CMS compared particle yields in light-ion collisions to a proton–proton reference. After scaling for the number of binary nucleon–nucleon interactions, the collaboration observed a maximum suppression of 0.69 ± 0.04 at a transverse momentum of about 6 GeV, more than five standard deviations from unity. While milder than that observed for lead–lead and xenon–xenon collisions, the data point to genuine medium-induced suppression in the smallest ion–ion system studied to date. Meanwhile, ATLAS reported the first dijet transverse-momentum imbalance in a light-ion system. The reduction in balanced jets is consistent with path-length-dependent energy-loss effects, though apparently weaker than in lead–lead collisions.
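The binary-collision scaling CMS applies is encapsulated in the nuclear modification factor R_AA. The numbers in this minimal sketch are placeholders chosen to reproduce a 0.69 ratio, not the measured yields or the actual number of binary collisions:

```python
def r_aa(yield_aa: float, yield_pp: float, n_coll: float) -> float:
    """Nuclear modification factor: R_AA = yield_AA / (N_coll * yield_pp).

    R_AA = 1 means the ion-ion collision behaves like a superposition of
    independent nucleon-nucleon collisions; R_AA < 1 signals
    medium-induced suppression of energetic particles.
    """
    return yield_aa / (n_coll * yield_pp)

# Hypothetical per-event hadron yields at pT ~ 6 GeV (placeholder numbers)
print(f"{r_aa(yield_aa=6.9, yield_pp=1.0, n_coll=10.0):.2f}")  # prints 0.69
```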

In “head-on” collisions, ALICE, ATLAS and CMS all observed a neon–oxygen–lead hierarchy in elliptic flow, suggesting that, if a quark–gluon plasma does form, it exhibits the most pronounced “almond shape” in neon collisions. This pattern reflects the expected nuclear geometries of each species. Lead-208 is a doubly magic nucleus, with complete proton and neutron shells that render it tightly bound and nearly spherical in its ground state. Conversely, neon is predicted to be prolate, with its inherent elongation producing a larger elliptic overlap. Oxygen falls in between, consistent with models describing it as roughly spherical or weakly clustered.

ALICE and ATLAS reported a hierarchy of flow coefficients in light-ion collisions, with elliptic, triangular and quadrangular flows progressively decreasing as their Fourier index rises, in line with hydrodynamic expectations. Like CMS’s charged hadron yields, ALICE’s preliminary neutral pion yields exhibit a suppression at large momenta.

In a previous fixed-target study, the LHCb collaboration also measured the elliptic and triangular components of the flow in lead–neon and lead–argon collisions, observing the distinctive shape of the neon nucleus. As for proton–oxygen collisions, LHCb’s forward-rapidity coverage can probe the partonic structure of nuclei at very small values of Bjorken-x – the fraction of the nucleon’s momentum carried by a quark or gluon. Such measurements help constrain nuclear parton distribution functions in the low-x region dominated by gluons and provide rare benchmarks for modelling ultra-high-energy cosmic rays colliding with atmospheric oxygen.

These initial results are just a smattering of those to come. In a whirlwind 11-day campaign, physicists made full use of the brief but precious opportunity to investigate the formation of quark–gluon plasma in the uncharted territory of light ions. Accelerator physicists and experimentalists came together to tackle peculiar problems, such as the appearance of polluting species in the beams due to nuclear transmutation (see “Alchemy by pure light”). Despite the tight schedule, luminosity targets for proton–oxygen, oxygen–oxygen and neon–neon collisions were exceeded by large factors, thanks to high accelerator availability and the high injector intensity delivered by the LHC team.

“These early oxygen and neon studies show that indications of collective flow and parton-energy-loss-like suppression persist even in much smaller systems, while providing new sensitivity to nuclear geometry and valuable prospects for forward-physics studies,” concludes Timmins. “The next step is to pin down oxygen’s nuclear parton distribution function. That will be crucial for understanding the hadron-suppression patterns we see, with proton–oxygen and ultra-peripheral collisions being great ways to get there.”

Prepped for re-entry

Francesca Luoni

When Francesca Luoni logs on each morning at NASA’s Langley Research Center in Virginia, she’s thinking about something few of us ever consider: how to keep astronauts safe from the invisible hazards of space radiation. As a research scientist in the Space Radiation Group, Luoni creates models to understand how high-energy particles from the Sun and distant supernovae interact with spacecraft structures and the human body – work that will help future astronauts safely travel deeper into space.

But Luoni is not a civil servant for NASA. She is contracted through the multinational engineering firm Analytical Mechanics Associates, continuing a professional slingshot from pure research to engineering and back again. Her career is an intriguing example of how to balance research with industrial engagement – a holy grail for early-career researchers in the late 2020s.

Leveraging expertise

Luoni’s primary aim is to optimise NASA’s Space Radiation Cancer Risk Model, which maps out the cancer incidence and mortality risk for astronauts during deep-space missions, such as NASA’s planned mission to Mars. To make this work, Luoni’s team leverages the expertise of all kinds of scientists, from engineers, statisticians and physicists to biochemists, epidemiologists and anatomists.

“I’m applying my background in radiation physics to estimate the cancer risk for astronauts,” she explains. “We model how cosmic rays pass through the structure of a spacecraft, how they interact with shielding materials, and ultimately, what reaches the astronauts and their tissues.”

Before arriving in Virginia early this year, Luoni had already built a formidable career in space-radiation physics. After a physics PhD in Germany, she joined the GSI Helmholtz Centre for Heavy Ion Research, where she spent long nights at particle accelerators testing new shielding materials for spacecraft. “We would run experiments after the medical facility closed for the day,” she says. “It was precious work because there are so few facilities worldwide where you can acquire experimental data on how matter responds to space-like radiation.”

Her experiments combined measurement data with Monte Carlo simulations to compare model predictions with reality – skills she honed during her time in nuclear physics and still uses daily at NASA. “Modelling is something you learn gradually, through university, postgrads and research,” says Luoni. “It’s really about understanding physics, maths, and how things come together.”

In 2021 she accepted a fellowship in radiation protection at CERN. The work was different from the research she’d done before. It was more engineering-oriented, ensuring the safety of both scientists and surrounding communities from the intense particle beams of the LHC and SPS. “It may sound surprising, but at CERN the radiation is far more energetic than we see in space. We studied soil and water activation, and shielding geometries, to protect everyone on site. It was much more about applied safety than pure research.”

Luoni’s path through academia and research was not linear, to say the least. Having been an experimental physicist collecting data at GSI, and then an engineer helping physicists conduct their own experiments at CERN, Luoni is excited to be diving back into pure research, even if it wasn’t the field she originally intended.

Despite her industry–contractor title, Luoni’s day-to-day work at NASA is firmly research-driven. Most of her time is spent refining computational models of space-radiation-induced cancer risk. While the coding skills she honed at CERN apply to her role now, Luoni still experienced a steep learning curve when transitioning to NASA.

“I am learning biology and epidemiology, understanding how radiation damages human tissues, and also deepening my statistics knowledge,” she says. Her team codes primarily in Python and MATLAB, with legacy routines in Fortran. “You have to be patient with Fortran,” she remarks. “It’s like building with tiny bricks rather than big built-in functions.”

Luoni is quick to credit not just the technical skills but the personal resilience gained from moving between countries and disciplines. Born in Italy, she has worked in Germany, Switzerland and now the US. “Every move teaches you something unique,” she says. “But it’s emotionally demanding. You face bureaucracy, new languages, distance from family and friends. You need to be at peace with yourself, because there’s loneliness too.”

Bravery and curiosity

But in the end, she says, it’s worth the price. Above all, Luoni counsels bravery and curiosity. “Be willing to step out of your comfort zone,” she says. “It takes strength to move to a new country or field, but it’s worth it. I feel blessed to have experienced so many cultures and to work on something I love.”

While she encourages travel, especially at the PhD and postdoc stages in a researcher’s career, Luoni advises caution when presenting your experience on applications. Internships and shorter placements are welcome, but employers want to see that you have stayed somewhere long enough to really understand and harness that company’s training.

“Moving around builds a unique skill set,” she says. “Like it or not, big names on your CV matter – GSI, CERN, NASA – people notice. But stay in each place long enough to really learn from your mentors, a year is the minimum. Take it one step at a time and say yes to every opportunity that comes your way.”

Luoni had been looking for a way to enter space research throughout her career, building up a diverse portfolio of skills across her various roles in academia and engineering. “Follow your heart and your passions,” she says. “Without that, even the smartest person can’t excel.”

The puzzle of an excess of bright early galaxies

Since the Big Bang, primordial density perturbations have continually merged and grown to form ever larger structures. This “hierarchical” model of galaxy formation has withstood observational scrutiny for more than four decades. However, understanding the emergence of the earliest galaxies in the first few hundred million years after the Big Bang has remained a key frontier in the field of astrophysics. This is also one of the key science aims of the James Webb Space Telescope (JWST), launched on Christmas Day in 2021.

Its large, cryogenically cooled mirror and infrared instruments let it capture the faint, redshifted ultraviolet light from the universe’s earliest stars and galaxies. Since its launch, the JWST has collected unprecedented samples of astrophysical sources from within the first 500 million years after the Big Bang, utterly transforming our understanding of early galaxy formation.

Stellar observations

Tantalisingly, JWST’s observations hint at an excess of galaxies very bright in the ultraviolet (UV) within the first 400 million years, compared to expectations for early galaxy formation within the standard Lambda Cold Dark Matter model. Given that UV photons are a key indicator of young star formation, these observations seem to imply that early galaxies in any given volume of space were overly efficient at forming stars in the infancy of the universe.

However, extraordinary claims demand extraordinary evidence. These puzzling observations have come under immense scrutiny, both to confirm that the sources lie at the inferred redshifts and to check that they do not simply probe over-dense regions that might preferentially host galaxies with high star-formation rates. It could still be the case that the apparent excess of bright galaxies is cosmic variance – a statistical fluctuation caused by the relatively small regions of the sky probed by the JWST so far.

Such observational caveats notwithstanding, theorists have developed a number of distinct “families” of explanations.

UV photons are readily attenuated by dust at low redshifts. If, however, these early galaxies had ejected all of their dust, one might be able to observe almost all of the intrinsic UV light they produced, making them brighter than expected based on lower-redshift benchmarks.

Bias may also arise from detecting only those sources powered by rapid bursts of star formation that briefly elevate galaxies to extreme luminosities.

Extraordinary claims demand extraordinary evidence

Several explanations focus on modifying the physics of star formation itself, for example via “stellar feedback” – the energy and momentum that newly formed stars inject back into their surrounding gas, which can heat, ionise or expel it, slowing or shutting down further star formation. Early galaxies might have high star-formation rates because stellar feedback was largely inefficient, allowing them to retain most of their gas for further star formation, or perhaps because a larger fraction of gas was able to form stars in the first place.

While the relative number of low- and high-mass stars in a newly formed stellar population – the initial mass function (IMF) – has been mapped out in the local universe to some extent, its evolution with redshift remains an open question. Since the IMF crucially determines the total UV light produced per unit mass of star formed, a “top-heavy” IMF, with a larger fraction of massive stars compared to that in the local universe, could explain the observations.

Alternatively, the striking ultraviolet light may not arise solely from ordinary young stars – it could instead be powered by accretion onto black holes, which JWST is finding in unexpected numbers.

Alternative cosmologies

Finally, a number of works also appeal to alternative cosmologies to enhance structure formation at such early epochs, invoking an evolving dark-energy equation of state, primordial magnetic fields or even primordial black holes.

A key caveat of these observations is that redshifts are often inferred purely from broadband fluxes in different filters – a technique known as photometry. Spectroscopic data are urgently required, not only to verify the galaxies’ exact distances but also to distinguish between different physical scenarios such as bursty star formation, an evolving IMF or contamination by active galactic nuclei, where supermassive black holes accrete gas. Upcoming deep observations with facilities such as the Atacama Large Millimeter/submillimeter Array (ALMA) and the Northern Extended Millimeter Array (NOEMA) will be crucial for constraining the dust content of these systems and thereby clarifying their intrinsic star-formation rates. Extremely large surveys with facilities such as Euclid, the Nancy Grace Roman Space Telescope and the Extremely Large Telescope will also be crucial in surveying early galaxies over large volumes and sampling all possible density fields.
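The photometric technique described above can be illustrated with a toy fit: a flat spectrum cut off at the Lyman-alpha break, integrated through hypothetical top-hat filters and matched against a grid of trial redshifts. The break wavelength, filter bands and noiseless fluxes below are illustrative assumptions, not an actual survey configuration.

```python
import numpy as np

# Toy photometric-redshift fit: a flat spectrum with a sharp Lyman-alpha break
# (121.6 nm rest frame, zero flux blueward), seen through hypothetical filters.
BREAK_REST_NM = 121.6
BANDS_NM = [(1000, 1500), (1500, 2000), (2000, 2500), (2500, 3000)]  # illustrative

def band_fluxes(z):
    """Fraction of each band redward of the redshifted break."""
    brk = BREAK_REST_NM * (1 + z)
    return np.array([max(0.0, hi - max(lo, brk)) / (hi - lo) for lo, hi in BANDS_NM])

def photo_z(observed, z_grid=np.linspace(5, 15, 2001)):
    """Least-squares template fit over a redshift grid."""
    chi2 = [np.sum((band_fluxes(z) - observed) ** 2) for z in z_grid]
    return float(z_grid[int(np.argmin(chi2))])

obs = band_fluxes(11.0)   # noiseless toy "measurement" at z = 11
print(photo_z(obs))       # recovers 11.0
```

Real photometric-redshift codes fit libraries of galaxy templates with dust and emission lines, which is precisely why spectroscopic confirmation is needed to pin down the distances.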

Combining these datasets will be critical in shedding light on this unexpected puzzle unearthed by the JWST.

A step towards the Higgs self-coupling

ATLAS figure 1

A defining yet unobserved property of the Higgs boson is its ability to couple to itself. The ATLAS collaboration has now set new bounds on this interaction, by probing the rare production of Higgs-boson pairs. Since the self-coupling strength directly connects to the shape of the Higgs potential, any departure from the Standard Model (SM) prediction would have direct implications for electroweak symmetry breaking and the early history of the universe. This makes its measurement one of the most important objectives of modern particle physics.

Higgs-boson pair production is a thousand times less frequent than single-Higgs events, roughly corresponding to a single occurrence every three trillion proton–proton collisions at the LHC. Observing such a rare process demands both vast datasets and highly sophisticated analysis techniques, along with the careful choice of a sensitive probe. Among the most effective is the HH → bbγγ channel, where one Higgs boson decays into a bottom quark–antiquark pair and the other into two photons. This final state balances the statistical reach of the dominant Higgs decay to bottom quarks with the exceptionally clean signature offered by photon-pair measurements. Despite the small signal branching ratio of about 0.26%, the decay to two photons benefits from the excellent di-photon mass resolution and offers the highest efficiency among the leading HH channels. This provides the HH → bbγγ channel with excellent sensitivity to variations in the trilinear self-coupling modifier κλ, defined as the ratio of the measured Higgs-boson self-coupling to the SM prediction.
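The quoted ~0.26% branching ratio can be cross-checked from approximate Standard Model single-Higgs branching ratios (the values below are standard approximations for a 125 GeV Higgs, not taken from the ATLAS paper):

```python
# Cross-check of the ~0.26% branching ratio for HH -> bb + gamma gamma,
# using approximate SM single-Higgs branching ratios at m_H = 125 GeV.
BR_BB = 0.582        # H -> bb (approximate)
BR_GAMGAM = 0.00227  # H -> gamma gamma (approximate)

# Factor 2: either Higgs in the pair can be the one decaying to photons.
br_hh_bbgg = 2 * BR_BB * BR_GAMGAM
print(f"{br_hh_bbgg:.2%}")  # 0.26%
```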

In its new study, the ATLAS collaboration relied on Run 3 data collected between 2022 and 2024, and on the full Run 2 dataset, reaching an integrated luminosity of 308 fb–1. Events were selected with two high-quality photons and at least two b-tagged jets, identified using the latest and most performant ATLAS b-tagging algorithm. To further distinguish signal from background, dominated by non-resonant γγ+jets and single-Higgs production with H → γγ, a set of machine-learning classifiers called “multivariate analysis discriminants” was trained and used to filter genuine HH → bbγγ signals.

The collaboration reported an HH → bbγγ signal significance of 0.84σ under the background-only hypothesis, compared to a SM expectation of 1.01σ (see figure 1). At the 95% confidence level, the self-coupling modifier was constrained to –1.7 < κλ < 6.6. These results extend previous Run 2 analyses, delivering in this single channel a sensitivity comparable to the observed (expected) significance of 0.4σ (1σ) of the combined Run 2 results across all channels. The improvement is primarily due to the adoption of advanced b-tagging algorithms, refined analysis techniques yielding better mass resolution, and a larger dataset, more than double that of previous studies.

This result marks significant progress in the search for Higgs self-interactions at the LHC and highlights the potential of Run 3 data. With the full Run 3 dataset and the High-Luminosity LHC on the horizon, ATLAS is set to extend these measurements – improving our understanding of the Higgs boson and searching for possible signs of physics beyond the SM.

ALICE observes ρ–proton attraction

ALICE figure 1

The ALICE collaboration recently obtained the first direct measurement of the attraction between a proton and a ρ0 meson – a particle of particular interest due to its fleeting lifetime and close link to chiral symmetry breaking. The result establishes a technique known as femtoscopy as a new method for studying interactions between vector mesons and baryons, and opens the door to a systematic exploration of how short-lived hadrons behave.

Traditionally, interactions between baryons and vector mesons have been studied indirectly at low-energy facilities, using decay patterns or photoproduction measurements. These were mostly interpreted through vector–meson–dominance models developed in the 1960s, in which photons fluctuate into vector mesons to interact with hadrons. While powerful, these methods provide only partial information and cannot capture the full dynamics of the interaction. Direct measurements have long been out of reach, mainly because the extremely short lifetime of vector mesons – of the order of 1–10 fm/c – renders conventional scattering experiments impossible.

At the hadronic level, the strong force can be described as arising from the exchange of massive mesons, with the lightest among them, the pion, setting the interaction range to about 1.4 fm. For such a short-range effect to influence the products of a pp collision, the particles must be created close together and with low relative momentum, ensuring sufficient interaction time and a significant wavefunction overlap.

The ALICE collaboration has now studied this mechanism in high-multiplicity proton–proton (pp) collisions, at a centre-of-mass energy of 13 TeV, through femtoscopy, which examines correlations in the relative momentum (k*) of particle pairs in their rest frame. These correlations carry information on the size and shape of the particle-emitting source at k* below about 200 MeV, with any deviation of the correlation function from unity indicating the presence of short-range forces.

To study the interaction between protons and ρ0 vector mesons, candidates were reconstructed via the hadronic decay channel ρ0 → π+π−, identified from π+π− pairs within the 0.70–0.85 GeV invariant-mass window. Since the ρ0 decays almost instantly into pions, only about 3% of the candidates were genuine ρ0 mesons. Background corrections were therefore essential to extract the ρ0–proton correlation function, defined as the ratio of the relative-momentum distribution of same-event pairs to that of mixed-event pairs. The result is consistent with unity at large relative momenta (k* > 200 MeV), as expected in the absence of strong forces. At lower values, however, a suppression with significance of about four standard deviations clearly signals ρ0–proton final-state interactions (see figure 1).
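The same-event/mixed-event construction can be sketched numerically. The suppression model, sample sizes and momentum range below are invented for illustration and bear no relation to ALICE's measured interaction; only the ratio-of-histograms technique mirrors the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy femtoscopic correlation function: ratio of same-event to mixed-event
# relative-momentum (k*) distributions.
def kstar_samples(n, interacting):
    k = rng.uniform(0, 1000, size=n)  # k* in MeV
    if interacting:
        # mimic a final-state interaction depleting pairs at low k*
        keep = rng.uniform(size=n) > 0.3 * np.exp(-k / 150.0)
        k = k[keep]
    return k

edges = np.linspace(0, 1000, 21)
same, _ = np.histogram(kstar_samples(200_000, interacting=True), bins=edges)
mixed, _ = np.histogram(kstar_samples(200_000, interacting=False), bins=edges)

ratio = same / mixed
ratio /= ratio[10:].mean()  # normalise to unity at large k*, as in practice

print(ratio[0], ratio[-1])  # suppressed at low k*, consistent with 1 at high k*
```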

To interpret these results, ALICE used an effective field model based on chiral perturbation theory, which predicted two resonance states consistent with the formation of excited nucleon states. Because some pairs linger in these quasi-bound states instead of flying out freely, fewer emerge with nearly the same momentum. This results in a correlation suppression at low k* consistent with observations. Unlike photoproduction experiments and QCD sum rules, femtoscopy delivers the complete phase information of the ρ0–proton interaction. By analysing both ρ–proton and φ–proton pairs, ALICE extracted precise scattering parameters that can now be incorporated into theoretical models.

This measurement sets a benchmark for vector-meson–dominance models and establishes femtoscopy as a tool to probe interactions involving the shortest-lived hadrons, while providing essential input for understanding ρ–nucleon interactions in vacuum and describing the meson’s properties in heavy-ion collisions. Pinning down how the ρ meson behaves is crucial for interpreting dilepton spectra and the restoration of chiral symmetry, as differences between light quark masses become negligible at high energies. For example, the mass gap between the ρ and its axial counterpart, a1, comes from spontaneous chiral-symmetry breaking.

The measurement problem, measured

A century on, physicists still disagree on what quantum mechanics actually means. Nature recently surveyed more than a thousand researchers, asking about their views on the interpretation of quantum mechanics. When broken down by career stage, the results show that a diversity of views spans all generations.

Getting eccentric with age

The Copenhagen interpretation remains the most widely held view, placing the act of measurement at the core of quantum theory well into the 2020s. Epistemic or QBist approaches, where the quantum state expresses an observer’s knowledge or belief, form the next most common group, followed by Everett’s many-worlds framework, in which all quantum outcomes continue to coexist without collapse (CERN Courier July/August 2025 p26). Other views maintain small but steady followings, including pilot-wave theory, spontaneous-collapse models and relational quantum mechanics (CERN Courier July/August 2025 p21).

Fewer than 10% of physicists surveyed declined to express a view. Though this cohort might be expected to include proponents of the “shut up and calculate” school of thought, such an apparently dwindling band of uninterested working physicists may simply be undersampled.

Crucially, confidence is modest. Most respondents view their preferred interpretation as an adequate placeholder or a useful conceptual tool. Only 24% are willing to describe their preferred interpretation as correct, leaving ample room for manoeuvre in the very foundations of fundamental physics.

Neural networks boost B-tagging

LHCb figure 1

The LHCb collaboration has developed a new inclusive flavour-tagging algorithm for neutral B-mesons. Compared to standard approaches, it can correctly identify 35% more B0 and 20% more B0s decays, expanding the dataset available for analysis. This increase in tagging power will allow for more accurate studies of charge–parity (CP) violation and B-meson oscillations.

In the Standard Model (SM), neutral B-mesons oscillate between particle and antiparticle states via second-order weak interactions involving a pair of W-bosons. Flavour-tagging techniques determine whether a neutral B-meson was initially produced as a B0 or its antiparticle B̄0, thereby enabling the measurement of time-dependent CP asymmetries. As the initial flavour can only be inferred indirectly from noisy, multi-particle correlations in the busy hadronic environment of the LHC, mistag rates have traditionally been high.
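The oscillation itself follows a simple textbook form. Neglecting CP violation in mixing and the width difference (a standard simplification, not LHCb's full formalism), the probability that a meson born as a B0 decays as its antiparticle after proper time t oscillates with the mass splitting Δm:

```python
import math

# P(mixed, t) = (1 - cos(dm * t)) / 2, normalised to the surviving mesons.
DM_B0 = 0.51  # ps^-1, approximate B0 oscillation frequency (world average ~0.51)

def p_mixed(t_ps, dm=DM_B0):
    return (1 - math.cos(dm * t_ps)) / 2

print(p_mixed(0.0))              # 0.0: starts as a pure B0
print(p_mixed(math.pi / DM_B0))  # 1.0: fully oscillated half a period later
```

Flavour tagging supplies the initial-state label that makes this time-dependent pattern, and the CP asymmetries built on it, measurable.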

Until now, the LHCb collaboration has relied on two complementary flavour-tagging strategies. One infers the signal meson’s flavour by analysing the decay of the other b-hadron in the event, whose existence follows from bb̄ pair production in the original proton–proton collision. Since the two hadrons originate from oppositely charged, early-produced bottom quarks, the method is known as “opposite-side” (OS) tagging. The other strategy, or “same-side” (SS) tagging, uses tracks from the fragmentation process that produced the signal meson. Each provides only part of the picture, and their combination defined the state of the art in previous analyses.

The new algorithm adopts a more comprehensive approach. Using a deep neural network based on the “DeepSets” architecture, it incorporates information from all reconstructed tracks associated with the hadronisation process, rather than preselecting a subset of candidates. By considering the global structure of the event, the algorithm builds a more detailed inference of the meson’s initial flavour. This inclusive treatment of the available information increases both the sensitivity and the statistical reach of the tagging procedure.
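The defining feature of the DeepSets architecture is permutation invariance: a shared network embeds each track, the embeddings are summed, and a second network maps the pooled vector to a flavour score. The sketch below uses untrained random weights and invented feature dimensions purely to show the structure; it is not LHCb's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal DeepSets sketch: phi (shared per-track map), sum pooling, rho (head).
D_IN, D_HID = 4, 8  # per-track features (e.g. pT, eta, phi, charge), hidden size
W_phi = rng.normal(size=(D_IN, D_HID))
W_rho = rng.normal(size=(D_HID, 1))

def flavour_score(tracks):
    """tracks: (n_tracks, D_IN) array; returns a scalar in (0, 1)."""
    h = np.tanh(tracks @ W_phi)          # phi: shared per-track embedding
    pooled = h.sum(axis=0)               # sum pooling: order-independent
    logit = (pooled @ W_rho).item()      # rho: event-level head
    return 1.0 / (1.0 + math_sigmoid_arg(logit))

def math_sigmoid_arg(logit):
    return np.exp(-logit)

event = rng.normal(size=(12, D_IN))
shuffled = event[rng.permutation(12)]
print(np.isclose(flavour_score(event), flavour_score(shuffled)))  # True
```

Because the pooling is a sum, reordering the tracks cannot change the score, so the algorithm can ingest all reconstructed tracks without any preselected ordering.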

The model was trained and calibrated using well-established B0 and B0s meson decay channels. When compared with the combination of opposite-side and same-side taggers, the inclusive algorithm displayed a 35% increase in tagging power for B0 mesons and 20% for B0s mesons (see figure 1). The improvement stems from gains in both the fraction of events that receive a flavour tag and how often the tag is correct. Tagging power is a critical figure of merit, as it determines the effective amount of usable data. Therefore, even modest gains can dramatically reduce statistical uncertainties in CP-violation and B-oscillation measurements, enhancing the experiment’s precision and discovery potential.
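Tagging power combines the tagged fraction and the mistag rate into a single number: P = ε(1 − 2ω)², and P × N gives the effective number of perfectly tagged events in a sample of size N. The efficiencies and mistag rates below are illustrative placeholders, not LHCb's measured performance:

```python
# Flavour-tagging figure of merit: P = eps * (1 - 2*omega)^2, where eps is the
# fraction of events tagged and omega the mistag probability.
# Numbers are illustrative only.
def tagging_power(eps, omega):
    return eps * (1 - 2 * omega) ** 2

baseline = tagging_power(eps=0.75, omega=0.36)  # OS+SS-style combination
improved = tagging_power(eps=0.88, omega=0.35)  # more tags, slightly fewer mistags

print(f"relative gain: {improved / baseline - 1:.0%}")  # relative gain: 35%
```

The quadratic dependence on (1 − 2ω) is why even small reductions in the mistag rate translate into large gains in effective statistics.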

This development illustrates how algorithmic innovation can be as important as detector upgrades in pushing the boundaries of precision. The improved tagging power effectively expands the usable data sample without requiring additional collisions, enhancing the experiment’s capacity to test the SM and seek signs of new physics within the flavour sector. The timing is particularly significant as LHCb enters Run 3 of the LHC programme, with higher data rates and improved detector components. The new algorithm is designed to integrate smoothly with existing reconstruction and analysis frameworks, ensuring immediate benefits while providing scalability for the much larger datasets expected in future runs.

As the collaboration accumulates more data, the inclusive flavour-tagging algorithm is likely to become a central tool in data analysis. Its improved performance is expected to reduce uncertainties in some of the most sensitive measurements carried out at the LHC, strengthening the search for deviations from the SM.

Machine learning and the search for the unknown

CMS figure 1

In particle physics, searches for new phenomena have traditionally been guided by theory, focusing on specific signatures predicted by models beyond the Standard Model. Machine learning offers a different way forward. Instead of targeting known possibilities, it can scan the data broadly for unexpected patterns, without assumptions about what new physics might look like. CMS analysts are now using these techniques to conduct model-independent searches for short-lived particles that could escape conventional analyses.

Dynamic graph neural networks operate on graph-structured data, processing both the attributes of individual nodes and the relationships between them. One such model is ParticleNet, which represents the constituents of large-radius jets as graphs to identify N-prong hadronic decays of highly boosted particles and predict the parent particle’s mass. The tool recently aided a CMS search for the single production of a heavy vector-like quark (VLQ) decaying into a top quark and a scalar boson, either the Higgs or a new scalar particle. Alongside ParticleNet, a custom deep neural network was trained to identify leptonic top-quark decays by distinguishing them from background processes over a wide range of momenta. With this approach, the analysis achieved sensitivity to VLQ production cross-sections as small as 0.15 fb. Emerging methods such as transformer networks can provide even more sensitivity in future searches (see figure 1).

CMS figure 2

Another novel approach combined two distinct machine-learning tools in the search for a massive scalar X decaying into a Higgs boson and a second scalar Y. While ParticleNet identified Higgs-boson decays to two bottom quarks, potential Y signals were assigned an “anomaly score” by an autoencoder – a neural network trained to reproduce its input and highlight atypical features in the data. This technique provided sensitivity to a wide range of unexpected decays without relying on specific theoretical models. By combining targeted identification with model-independent anomaly detection, the analysis achieved both enhanced performance and broad applicability.
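The anomaly-score idea can be sketched with a linear autoencoder, whose optimum coincides with PCA; a trained deep autoencoder would replace the projection below. The data, dimensions and bottleneck size are invented for illustration and are not CMS's analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Anomaly scoring by reconstruction error: events the model reconstructs
# poorly (relative to the "background" it was fit on) receive high scores.
train = rng.normal(size=(5000, 10)) * 0.3
train[:, :3] += rng.normal(size=(5000, 3)) * 2.0  # background varies mostly in 3 directions

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
encoder = vt[:3]  # 3-dimensional bottleneck

def anomaly_score(x):
    z = (x - mean) @ encoder.T   # encode into the bottleneck
    recon = z @ encoder + mean   # decode back to feature space
    return float(np.sum((x - recon) ** 2))

background = train[0]
signal = rng.normal(size=10) * 3.0  # atypical event with power outside the bottleneck
print(anomaly_score(signal) > anomaly_score(background))  # True
```

Because the score penalises any structure the background model cannot compress, no specific signal hypothesis is needed, which is what makes the search model-independent.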

Searches at the TeV scale sit at the frontier where not only more and more data but also algorithmic innovation drives experimental discovery. Tools such as targeted deep neural networks, parametric neural networks (PNNs) – which efficiently scan multi-dimensional mass landscapes (see figure 2) – and model-independent anomaly detection are opening new ways to search for deviations from the Standard Model. Analyses of the full LHC Run 2 dataset have already revealed intriguing hints, with several machine-learning studies reporting local excesses – including a 3.6σ excess in a search for V′ → VV or VH → jets, and deviations up to 3.3σ in various X → HY searches. While no definitive signal has yet emerged, the steady evolution of neural-network techniques is already changing how new phenomena are sought, and anticipation is high for what they may reveal in the larger Run 3 dataset.
