Topics

Exotic flavours at the FCC

Half a century after its construction, the Standard Model of particle physics (SM) still reigns supreme as the most accurate mathematical description of the visible matter in the universe and its interactions. It was placed upon its throne by the many precise measurements made at the Large Electron Positron collider (LEP), in particular, and its rule was confirmed by the discovery of the Higgs boson at the Large Hadron Collider (LHC). CERN’s LEP/LHC success story, in which a hadron collider provided direct evidence for a new particle (the Higgs boson) whose properties were already partially established at a lepton collider, can serve as a blueprint for physics discoveries at a proposed Future Circular Collider (FCC) operating at CERN after the end of the LHC. 

Back in the late 1970s and early 1980s when the LEP/LHC programme was first proposed, the W and Z bosons mediating the weak interactions had not yet been observed, the top quark was considered a possible discovery, and the Higgs boson was regarded as a distant speculation. Precise studies of the W and Z, which were discovered in 1983 at the SPS proton–antiproton collider at CERN, were key items in LEP’s physics programme along with direct searches for the top quark, the Higgs boson and possible unknown particles. Even though the LEP experiments did not reveal any new particles beyond the W and Z, the unprecedented precision of their measurements revealed indirect effects (via quantum fluctuations) of the top and the Higgs, thereby providing indirect evidence for the SM mechanism of electroweak symmetry breaking. When the top quark was discovered at the Tevatron proton–antiproton collider at Fermilab in 1995, and the Higgs boson at the LHC in 2012, their masses were within the ranges indicated by precision measurements made at lepton colliders.

Layout of the Future Circular Collider at CERN

Nowadays, the hope is that the proposed FCC programme – comprising an electron–positron collider followed by a high-energy proton–proton collider in the same ~100 km tunnel – will repeat the LEP/LHC success story at an even higher level of precision and energy. The e⁺e⁻ FCC stage would reproduce the entire LEP sample of Z bosons within a couple of minutes, yielding around 5 × 10¹² Z bosons after four years of operation. In addition to allowing an incredibly accurate determination of the Z boson’s properties, Z decays would also provide unprecedented samples of bottom quarks (1.5 × 10¹²) and tau leptons (3 × 10¹¹). Potential increases in the FCC-ee centre-of-mass energy would also produce unparalleled numbers of W⁺W⁻ and top–antitop pairs close to their respective thresholds – important inputs for the global electroweak fit – as well as more Higgs bosons than promised by other proposed e⁺e⁻ Higgs factories.

Probing beyond the Standard Model

Analyses of FCC-ee data, combined with results from previous experiments at the LHC and elsewhere, would not only push our understanding of the SM to the next level but would also provide powerful indirect probes of possible physics beyond the SM, with sensitivities to masses an order of magnitude greater than those of the LHC. A possible subsequent proton–proton FCC stage (FCC-hh) operating at a centre-of-mass energy of at least 100 TeV would then provide unequalled opportunities to discover this new physics directly, just as the LHC made possible the discovery of the Higgs boson following the indirect hints from high-precision LEP data. Whereas the combination of LEP and the LHC explores the TeV scale both indirectly and directly, the combination of FCC-ee and FCC-hh would carry the search for new physics to 30 TeV and beyond.

The e⁺e⁻ stage of FCC would reproduce the entire LEP sample of Z bosons within a couple of minutes

However, for this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach. While the existence of dark matter and of neutrino masses already proves that the SM cannot be complete (and there is no shortage of theoretical ideas as to what extensions of the SM could account for them), these observations can be explained by new particles within a very wide mass range – possibly well beyond the reach of FCC-hh. Fortunately, intriguing hints of new physics in the flavour sector have accumulated in recent years, pointing towards beyond-the-SM physics that should be accessible to FCC.

B-decay anomalies

Within the SM, the charged leptons – electrons, muons and taus – all have very similar properties. They interact with the photon as well as the W and Z bosons in the same way, and differ only in their masses, which in the SM are represented as Yukawa couplings to the Higgs boson. It is therefore said that the SM (approximately) respects lepton-flavour universality (LFU), despite the seemingly large differences in charged-lepton lifetimes originating from phase-space effects. 

Flavour observables (i.e. processes resulting from rare transitions among the different generations of quarks and leptons), and observables measuring LFU in particular, are especially promising probes of the SM because they are strongly suppressed in the SM and thus very sensitive to new physics. In recent years, a coherent pattern of anomalies, all pointing towards the violation of LFU, has emerged. Two classes of fundamental processes giving rise to decays of B mesons – b → sℓ⁺ℓ⁻ and b → cτν – show deviations from the SM predictions.

In the flavour-changing neutral-current process b → sℓ⁺ℓ⁻, a heavy bottom quark undergoes a transition to a strange quark and a pair of oppositely charged leptons, which could be either electrons or muons. The ratios RK = Br(B → Kμ⁺μ⁻)/Br(B → Ke⁺e⁻) and RK* = Br(B → K*μ⁺μ⁻)/Br(B → K*e⁺e⁻), measured most precisely by the LHCb collaboration, are particularly interesting because their SM predictions are very clean. Since the muon and electron masses are negligible compared to the B-meson mass, the ratio of muon to electron decays should be close to unity according to the SM. However, intriguingly, LHCb has observed values significantly lower than one, and recently reported the first evidence for LFU violation in RK. These hints of new physics are supported by measurements of the angular observable P5′ in B⁰ → K*⁰μ⁺μ⁻ decays and the rate of Bs → φμ⁺μ⁻ decays. Importantly, all these observations can potentially be explained by the same new-physics interactions and are consistent with all other available measurements of processes involving b → sℓ⁺ℓ⁻ transitions. In fact, global fits of all available b → sℓ⁺ℓ⁻ data find a preference for new physics over the SM hypothesis at a level that hints at a possible discovery.
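
Written out explicitly (a standard definition rather than anything specific to one analysis), the ratios compare muon and electron final states and are predicted to be very close to unity in the SM:

R_K = \frac{\mathrm{Br}(B \to K \mu^+\mu^-)}{\mathrm{Br}(B \to K e^+e^-)} \overset{\mathrm{SM}}{\simeq} 1,
\qquad
R_{K^*} = \frac{\mathrm{Br}(B \to K^{*} \mu^+\mu^-)}{\mathrm{Br}(B \to K^{*} e^+e^-)} \overset{\mathrm{SM}}{\simeq} 1.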

Anomalous correlations

The second class of anomalies involves the charged-current process b → cτν, which is already mediated at tree level in the SM. The corresponding B-meson decays therefore occur with much higher probability and thus have larger branching ratios. However, the non-negligible tau mass leads to imperfect cancellations of the form factors in the ratio to electron or muon final states, and thus the resulting SM predictions are not as precise as those for RK and RK*. The most prominent examples of observables involving b → cτν transitions are the ratios RD = Br(B → Dτν)/Br(B → Dℓν) and RD* = Br(B → D*τν)/Br(B → D*ℓν). Here, the measurements of Belle, BaBar and LHCb consistently lie above the SM predictions, resulting in a combined tension of around 3σ. Importantly, as these processes happen quite frequently in the SM, a sizeable new-physics effect would be required to account for the corresponding anomaly.

With FCC-ee capable of producing 1.5 × 10¹² b quarks, the B anomalies could clearly be scrutinised further within a short period of running, assuming that LHCb, Belle II and possibly other experiments do confirm them. The large data sample would also allow physicists to study complementary modes that bear upon LFU but are more difficult for LHCb to measure, such as other “R” measurements involving neutral kaons. These measurements would be invaluable for pinning down the mechanism responsible for any violation of lepton universality.

Other possible anomalies

The B anomalies are just one exciting avenue that a “Tera-Z factory” like FCC-ee could explore further. The anomalous magnetic moment of the muon, aμ, can also be viewed as an exciting hint of new physics in the lepton sector. The Dirac equation predicts the muon’s gyromagnetic factor to be exactly two; its physical value is slightly higher owing to fluctuations at the quantum level. The very high precision of both the calculation and the measurement therefore makes aμ a powerful observable with which to search for new physics. A tension between the measured and predicted values of aμ has persisted since Brookhaven published its final result in 2006, and was recently strengthened by the Muon g–2 experiment at Fermilab, yielding an overall significance of 4.2σ when combined with the earlier Brookhaven data.
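
In the standard notation, the anomaly is the fractional shift of the muon’s gyromagnetic factor away from the Dirac value of two,

a_\mu \equiv \frac{g_\mu - 2}{2},

where the leading quantum correction is Schwinger’s \alpha/2\pi \approx 1.2 \times 10^{-3}; it is on top of this well-understood QED contribution that the measured and predicted values now differ with the 4.2σ significance quoted above.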

Effects of new physics on precision electroweak measurements

Various models have been proposed to explain the g–2 anomaly. They include leptoquarks (scalar or vector particles that carry colour, couple directly to a quark and a lepton, and arise in models with extended gauge groups) and supersymmetry. Such leptoquarks could have masses anywhere between the lower LHC limit of 1.5 TeV and about 10 TeV, putting them within the reach of FCC-hh, whereas a supersymmetric explanation would require a couple of new particles with masses of a few hundred GeV, possibly even within reach of FCC-ee. Importantly, any explanation involving heavy new particles would also lead to effects in Z → μ⁺μ⁻ decays, as both observables are sensitive to interactions with sizeable coupling strength to muons. FCC-ee’s large Z-boson sample could therefore reveal deviations from the SM predictions at the suggested level. Leptoquarks could also modify the SM prediction for the H → μ⁺μ⁻ decay rate, which will be measured very accurately at FCC-hh (see “Anomalous correlations” figure).

CKM under scrutiny

As the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes flavour violation in the quark sector, is unitary, the squared magnitudes of the elements in each row and in each column must add up to unity. This unitarity relation can be used to check the consistency of different determinations of CKM elements (within the SM) and thus also to search for new physics. Interestingly, a deficit in the first-row unitarity relation exists at the 3σ level. This can be traced back to the fact that the value of the element Vud, extracted from super-allowed beta decays, is not compatible with the value of Vus, determined from kaon and tau decays, given CKM unitarity. Intriguingly, this deviation can also be interpreted as a sign of LFU violation, since beta decays involve electrons while the most precise determination of Vus comes from decays with final-state muons.
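
Concretely, the first-row relation being tested reads

|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1,

where |V_{ub}|^2 is of order 10⁻⁵ and numerically negligible, so the test essentially pits Vud from super-allowed beta decays against Vus from kaon and tau decays.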

Here, a new-physics effect at the relative sub-per-mille level compared to the SM would suffice to explain the anomaly. This could be achieved by a heavy new lepton or a massive gauge boson affecting the determination of the Fermi constant, which parametrises the strength of the weak interactions. As the Fermi constant can also be determined from the global electroweak fit, for which Z decays are crucial inputs, FCC-ee would again be the perfect machine to investigate this anomaly, as it could improve the precision by a large factor (see “High precision” figure). Indeed, the Fermi constant could be determined directly to one part in 10⁵ from the enormous sample (> 10¹¹) of Z decays to tau leptons.
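
The reason the Fermi constant is such a sensitive lever is that, at leading order, it is fixed by the muon lifetime via

\frac{1}{\tau_\mu} \simeq \frac{G_F^2\, m_\mu^5}{192\,\pi^3},

so any heavy new particle that modifies the muon-decay amplitude even at the sub-per-mille level propagates directly into the extracted value of GF, and from there into the CKM-unitarity test and the electroweak fit.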

For this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach

FCC-ee’s extraordinarily large dataset would also enable scrutiny of a long-standing anomaly in the forward–backward asymmetry of Z → bb̄ decays. The LEP measurement of this asymmetry – which arises because the Z boson couples to left- and right-handed chiral states with different strengths – lies 2–3σ below the SM prediction. Although not yet significant, this anomaly may also be linked to new physics entering in b → s transitions.

Finally, a possible difference in the decay asymmetries of B → D*μν versus B → D*eν was recently reported by an analysis of Belle data. As in the case of RK, the SM prediction – that the difference between the muon and electron asymmetries should be zero – is very clean, and, like RD and RD*, this observable points towards new physics in b → c transitions and could be related via leptoquarks to the muon g–2. Once more, the great number of b quarks to be produced at FCC-ee, together with the clean environment of a lepton collider, would allow this observable to be determined with unprecedented accuracy.

Since all these anomalies point, to varying degrees, towards the existence of LFU-violating new physics, the question arises of whether a common explanation exists. There are several particularly interesting possibilities, including leptoquarks, new scalars and fermions (such as arise in supersymmetric extensions of the SM), new vector bosons (W′ and Z′) and new heavy fermions. In the overwhelming majority of such scenarios, a direct discovery of a new particle would be possible at FCC-hh. For example, it could discover leptoquarks with masses up to 10 TeV and Z′ bosons with masses up to 40 TeV, covering most of the mass range expected in such models.

Anomalies point to possible violations of lepton-flavour universality

A return to the Z pole and beyond

The LEP programme was extremely successful in determining the mechanism of electroweak symmetry breaking, in particular by measuring the properties and decays of the Z boson very precisely from a 17 million-strong sample. This allowed the Higgs mass to be predicted within a range in which it was later discovered at the LHC. The flavour anomalies could lead to a similar situation in the near future. In this case, the roughly 5 × 10¹² Z bosons that FCC-ee is designed to collect would not only be able to test the effects of new particles in precision electroweak observables, but would also, via Z decays into bottom quarks and tau leptons, provide a unique testing ground for flavour physics. As noted earlier, FCC-ee’s Z-pole run is also envisaged to be the first step in a broader electroweak programme encompassing large statistics at the WW and tt̄ thresholds, in addition to its key role as a precision Higgs factory.

Looking much further ahead to the energy frontier, FCC-hh would be able, in the overwhelming majority of scenarios motivated by the flavour anomalies, to discover a new particle directly. Furthermore, FCC-hh would allow for a precise determination of rare Higgs decays and of the Higgs potential, probing new-physics effects related to this sector, such as leptoquark explanations of the anomalous magnetic moment of the muon.

Pending the outcome of the FCC feasibility study recommended by the 2020 update of the European strategy for particle physics, the hope that the LEP/LHC success story could be repeated by FCC-ee/FCC-hh is well justified. While FCC-ee could be used to indirectly pin down the parameters of the model(s) of new physics explaining the flavour anomalies via precision electroweak and flavour measurements, FCC-hh would be capable of searching for the predicted particles directly. 

How the Sun and stars shine

Staring at the Sun

Each second, fusion reactions in the Sun’s core fling approximately 60 billion neutrinos onto every square centimetre of the Earth. In the late 1990s, the Borexino experiment at Gran Sasso National Laboratory in Italy was conceived to measure these neutrinos right down to a few tens of keV, where the bulk of the flux lies. The detector’s name means “little Borex” and refers to an earlier idea for a large experiment with a boron-loaded liquid scintillator, which was shelved in favour of the present, smaller and more ambitious detector. Rather than studying rare but high-energy 8B neutrinos from a little-followed branch of the proton–proton (pp) fusion chain, Borexino would target the far more numerous but lower energy neutrinos produced in the Sun by electron captures on 7Be.

The fusion reactions generating the Sun’s energy

Three decades after its conception, Borexino has far exceeded this goal thanks to the exceptional radiopurity of the experimental apparatus (see “Detector design” panel). Special care taken in construction and commissioning achieved a radiopurity about three orders of magnitude better than predicted, and 10 to 12 orders of magnitude below natural radioactivity. This has allowed the collaboration to probe the entire solar-neutrino spectrum, including not only the pp chain but also the carbon–nitrogen–oxygen (CNO) cycle. This mechanism plays a minor role in the Sun but becomes important for more massive stars, dominating the energy production and the synthesis of elements heavier than helium in the universe at large.

The heart of the Sun

The pp-chain generates 99% of the energy in the Sun: it begins when two protons fuse to produce a deuteron, a positron and an electron neutrino – the so-called pp neutrino (see “Chain and cycle” figure). Subsequent reactions produce light elements, such as 3He, 4He, 7Be, 7Li and 8B, as well as more electron neutrinos. In Borexino, the sensitivity to pp neutrinos depends on the amount of 14C in the liquid scintillator: with an end-point energy of 0.156 MeV compared with a maximum visible energy for pp neutrinos of 0.264 MeV, the beta decay 14C → 14N + β + ν̄ sets the detection threshold and the feasibility of probing pp neutrinos. The Borexino scintillator was therefore made using petroleum from very old and deep geological layers, to ensure a low content of 14C.

Detector design

Like many particle-physics detectors, Borexino has an onion-like design. The innermost layers have the highest radiopurity. The detector’s active core consists of 278 tonnes of pseudocumene (C9H12) scintillator. Into this is dissolved 2,5-diphenyloxazole (PPO) at a concentration of 1.5 grams per litre, which shifts the scintillation light to around 400 nm, where the sensitivity of the photomultipliers peaks. The scintillator is contained within a 125 μm-thick nylon inner vessel (IV) with a 4.5 m radius, made thin to reduce radiation emission from the nylon itself. In addition, the IV stops radon diffusing towards the core of the detector.

Borexino design

The IV is contained within a 7 m-radius stainless-steel sphere (SSS) that supports 2212 photomultiplier tubes (PMTs) and contains 1000 tonnes of pseudocumene as a high-radiopurity shielding liquid against radioactivity from the PMTs and the SSS itself. Between the SSS and the IV, a second nylon balloon acts as a barrier preventing radon and its progeny from reaching the scintillator. The SSS is contained in a 2400-tonne tank of highly purified water which, together with Borexino’s underground location, shields the detector from environmental radioactivity. The tank is also equipped with a muon detector to tag particles crossing the detector.

When a neutrino interacts in the target volume, energy deposited by the decelerating electron is registered by a handful of PMTs. The neutrino’s energy can be obtained from the total charge, and the hit-time distribution is used to infer the location of the event’s vertex. Recoiling electrons are used to tag electron neutrinos, and the combination of a positron annihilation and a neutron capture on hydrogen (an inverse beta decay) are used to tag electron antineutrinos.

Since individual solar-neutrino events cannot be distinguished from background events, the greatest challenge has been the reduction of natural radioactivity to unprecedented levels. In the early 1990s, the Borexino collaboration developed innovative techniques such as under-vacuum distillation, water extraction, ultrafiltration and sparging with ultra-high-radiopurity nitrogen to reduce radioactive impurities in the scintillator to 10⁻¹⁰ Bq/kg or better. An initial detector called the Counting Test Facility was built to demonstrate that such levels could be reached, publishing results for the key uranium, thorium and krypton backgrounds in 1995. Full data taking at Borexino began in 2007.

Since data-taking began in 2007, Borexino has measured, for the first time, all the individual fluxes produced in the pp-chain. In 2014 the collaboration made the first definitive observation of pp neutrinos, using a comparison with the predicted energy spectrum. In 2018 the collaboration performed, with the same apparatus, a measurement of all the pp-chain components (pp, 7Be, pep and 8B neutrinos), demonstrating the large-scale energy-generation mechanism in the Sun for the first time (see “Energy spectrum” figure). This spectral fit allowed the collaboration to directly determine the ratio between the interaction rate of 3He + 3He fusions and that of 3He + 4He fusions – a crucial parameter for characterising the pp chain and its energy production.

The simultaneous measurement of pp-chain neutrino fluxes also gave Borexino a unique window onto the famous “vacuum–matter” transition, whereby coherent forward scattering off solar electrons, via virtual W-boson exchange, modifies the neutrino-oscillation probabilities as neutrinos propagate through matter, with the effect growing as a function of energy. In 2018 Borexino measured the solar electron–neutrino survival probability, Pee, in the energy range from a few tens of keV up to 15 MeV (see “Survival probability” figure). This was the first direct observation of the transition from a low-energy vacuum regime (Pee ≈ 0.55) to a higher-energy matter regime in which neutrino propagation is dominantly affected by the solar interior (Pee ≈ 0.32). Borexino established the transition at the 98% confidence level.
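
In terms of the solar mixing angle θ12, and neglecting small θ13 corrections, the two regimes correspond approximately to

P_{ee}^{\mathrm{vacuum}} \simeq 1 - \tfrac{1}{2}\sin^2 2\theta_{12} \approx 0.57,
\qquad
P_{ee}^{\mathrm{matter}} \simeq \sin^2\theta_{12} \approx 0.31,

for sin²θ12 ≈ 0.31 – values close to those measured by Borexino and quoted above.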

CNO cycle

A different way to burn hydrogen, the CNO cycle, was hypothesised independently by Carl Friedrich von Weizsäcker and Hans Albrecht Bethe between 1937 and 1939. Here, 12C acts as a catalyst, and electron neutrinos are produced by the beta decays of 13N and 15O, with a small contribution from 17F. The maximum energy of CNO neutrinos is about 1.7 MeV. In addition to making an important contribution to the production of elements heavier than helium, this cycle is important for the nucleosynthesis of 16O and 17O. In massive stars it also extends into more complex reaction branches producing 18F, 18O, 19F, 18Ne and 20Ne.

Solar neutrinos and residual backgrounds

The sensitivity to CNO neutrinos in Borexino mainly comes from events in the energy range from 0.8 to 1 MeV. In this region, the dominant background comes from 210Bi, which is produced in the slow radioactive decay chain 210Pb (22 y) → 210Bi (5 d) + β + ν̄ → 210Po (138 d) + β + ν̄ → 206Pb (stable) + α. The 210Bi activity can be inferred from that of 210Po, which can be efficiently tagged using pulse-shape discrimination. However, convective currents in the liquid scintillator bring into the central fiducial mass 210Po produced by 210Pb, which is most likely embedded in the nylon containment vessel. In order to reduce convection, a passive insulation system and a temperature-control system were installed in 2016, significantly reducing the effect of seasonal temperature variations.

Thanks to these and other efforts, in 2020 Borexino rejected the null hypothesis of no CNO reactions by more than five standard deviations, providing the first direct proof of the process. The energy production via the CNO cycle, as a fraction of the solar luminosity, was measured to be 1.0 +0.4/–0.3 %, in agreement with the Standard Solar Model (SSM) prediction of roughly 0.6 ± 0.1% (which assumes the solar surface has a high metallicity – a topic discussed in more detail later). Given that luminosity scales as M⁴ and stellar number density as M^–2.5 for stars between one and 10 solar masses, the CNO cycle is thought to be the most important source of energy in massive hydrogen-burning stars. Borexino has provided the first experimental evidence for this hypothesis.

Probing solar metallicity using CNO neutrinos is of the utmost importance, and Borexino is hard at work on the problem

But, returning to the confines of our solar system, it’s important to remember that the SSM is not a closed book. Borexino’s results are thus far in agreement with its assumption of a protostar that had a uniform composition throughout its entire volume when fusion began (“zero-age homogeneity”). However, thanks to the ability of neutrinos to peek into the heart of the Sun, the experiment now has the potential to explore this assumption and weigh in on one of the most intriguing controversies in astrophysics.

The solar-abundance controversy

As stars evolve, the distribution of elements within them changes thanks to fusion reactions and convection currents. But the composition of the surface is thought to remain very nearly the same as that of the protostar, as it is not hot enough there for fusion to occur. Measuring the abundance of elements on a star’s surface therefore gives an idea of the protostar’s composition and is a powerful way to constrain the SSM. 

Solar-neutrino measurements

Currently, the best method to determine the surface abundance of elements heavier than helium (“metallicity”) uses measurements of photo-absorption lines. Since 2005, improved hydrodynamic calculations (which are needed to model atomic-line formation, and the radiative and collisional processes that contribute to excitation and ionisation) have indicated a much lower surface metallicity than previously thought. However, helioseismology observables differ by roughly five standard deviations from SSM predictions that use the new surface metallicity to infer the protostar’s composition, when the sound-speed profile, surface-helium abundance and the depth of the convective envelope are taken into account. Helioseismology implies that the zero-age Sun’s core was richer in metals than the present surface composition, suggesting a violation of zero-age homogeneity and a break with the SSM. This is the solar-abundance controversy, which emerged in 2005.

One possible explanation is that a late “dilution” of the Sun’s convective zone occurred due to a deposition of elements during the formation of the solar system. Were there to have been an accretion of dust and gas from the proto-planetary disc onto the central star during the evolution of the star–planet system, this could have changed the initial metallicity of the surface of the Sun – a hypothesis backed up by recent simulations that show that a metal-poor accretion could produce the present surface metallicity. 

As they are an excellent probe of metallicity, CNO neutrinos have an important role to play in settling the solar-abundance controversy. If Borexino were to measure the Sun’s present core metallicity, and by running simulations backwards prove that its surface metallicity must have been diluted right from its birth, this would violate one of the basic assumptions of the SSM. Probing solar metallicity using CNO neutrinos is, therefore, of the utmost importance, and Borexino is hard at work on the problem. Initial results favour the high-metallicity hypothesis with a significance of 2.1 standard deviations – a tentative first hint from Borexino that zero-age homogeneity may indeed be false.

The ancient question of why and how the Sun and stars shine finally has a comprehensive answer from Borexino, which has succeeded thanks to the detector’s extreme and unprecedented radio-purity – the hard work of hundreds of researchers over almost three decades.

Linacs to narrow radiotherapy gap

Number of people in African countries who have access to radiotherapy facilities

By 2040, the annual global incidence of cancer is expected to rise by more than 42% from 19.3 million to 27.5 million cases, corresponding to approximately 16.3 million deaths. Shockingly, some 70% of these new cases will be in low- and middle-income countries (LMICs), which lack the healthcare programmes required to effectively manage their cancer burden. While it is estimated that about half of all cancer patients would benefit from radiotherapy (RT) for treatment, there is a significant shortage of RT machines outside high-income countries.

More than 10,000 electron linear accelerators (linacs) are currently used worldwide to treat patients with cancer. But only 10% of patients in low-income countries and 40% in middle-income countries who need RT have access to it. Patients face long waiting times, are forced to travel to neighbouring regions, or face insurmountable expenditure to access treatment. In Africa alone, 27 out of 55 countries have no linac-based RT facilities. In those that do, the ratio of machines to people ranges from one machine per 423,000 people in Mauritius, through one per almost five million people in Kenya, to one per more than 100 million people in Ethiopia (see “Out of balance” image). In high-income countries such as the US, Switzerland, Canada and the UK, by contrast, the ratio is one RT machine per 85,000, 102,000, 127,000 and 187,000 people, respectively. To draw another stark comparison, Africa has approximately 380 linacs for a population of 1.2 billion, while the US has almost 4000 linacs for a population of 331 million.

Unique challenges

It is estimated that, to meet the demand for RT in LMICs over the next two to three decades, the current projected need of 5000 RT machines is likely to grow to more than 12,000. To put these figures into perspective, Varian, the market leader in RT machines, has a current worldwide installed base of 8496 linacs. While many LMICs provide RT using cobalt-60 machines, linacs offer better dose-delivery parameters and better treatment, without the environmental and potential terrorism risks associated with cobalt-60 sources. However, since linacs are more complex and labour-intensive to operate and maintain, their costs are significantly higher than those of cobalt-60 machines, both in terms of initial capital outlay and annual service contracts. These differences pose unique challenges in LMICs, where macro- and micro-economic conditions can influence the ability of these countries to provide linac-based RT.

The difficulties of operating electron guns

In November 2016 CERN hosted a first-of-its-kind workshop, sponsored by the International Cancer Expert Corps (ICEC), to discuss the design characteristics of RT linacs (see “Linac essentials” image) for the challenging environments of LMICs. Leading experts were invited from international organisations, government agencies, research institutes, universities and hospitals, and companies that produce equipment for conventional X-ray and particle therapy. The following October, CERN hosted a second workshop titled “Innovative, robust and affordable medical linear accelerators for challenging environments”, co-sponsored by the ICEC and the UK’s Science and Technology Facilities Council, STFC. Additional workshops have taken place in March 2018, hosted by STFC in collaboration with CERN and the ICEC, and in March 2019, hosted by STFC in Gaborone, Botswana (see “Healthy vision” image). These and other efforts have identified substantial opportunities for scientific and technical advancements in the design of the linac and the overall RT system for use in LMICs. In 2019, the ICEC, CERN, STFC and Lancaster University entered into a formal collaboration agreement to continue concerted efforts to develop this RT system. 

The idea of novel medical linacs is an excellent example of the impact of fundamental research on wider society

In June 2020, STFC funded a project called ITAR (Innovative Technologies towards building Affordable and equitable global Radiotherapy capacity) in partnership with the ICEC, CERN, Lancaster University, the University of Oxford and Swansea University. ITAR’s first phase was aimed at defining the persistent shortfalls in basic infrastructure, equipment and specialist workforce that remain barriers to effective RT delivery in LMICs. Clearly, a linac suitable for these conditions needs to be low-cost, robust and easy to maintain. Before specifying a detailed design, however, it was first essential to assess the challenges and difficulties RT facilities face in LMICs and in other demanding environments. In June 2021 the ITAR team published an expansive study of RT facilities in 28 African countries, compared with western hospitals, assessing variables in several domains both quantitatively and qualitatively (see “Downtime” figure). The survey builds on a related 2018 study on the availability of RT services, and barriers to providing such services, in Botswana and Nigeria, which examined the equipment-maintenance logs of linacs in those countries and at selected facilities in the UK.

Surveying the field

The absence of detailed data on linac downtime and failure modes makes it difficult to determine the exact impact of the LMIC environment on the performance of current technology. The ongoing ITAR design-development and prototyping process identified a need for more information on equipment failures, maintenance and service shortcomings, personnel, training and country-specific healthcare challenges from a much larger representation of LMICs. A further-reaching ITAR survey obtained the information needed to define design parameters and technological choices based on issues raised at the workshops. These include well-recognised factors such as ease and reliability of operation, machine self-diagnostics and a prominent display of impending or actual faults, ease of maintenance and repair, insensitivity to power interruptions, low power requirements and the consequent reduced heat production.

A standard medical linac

Based on the information from its surveys, ITAR produced a detailed specification and conceptual design for an RT linac that requires less maintenance, has fewer failures and offers fast repair. Over the next three years, under the umbrella of a larger project called STELLA (Smart Technologies to Extend Lives with Linear Accelerators) launched in June 2020, the project will progress to a prototype development phase at STFC’s Daresbury Laboratory. 

The design of the electron gun has been optimised to increase beam capture. This has the dual advantage of reducing both the peak current required from the gun to deliver the requisite dose and “back bombardment”. It also allows for simpler replacement of the electron gun’s cathode by trained personnel (current designs require replacement of the full electron gun or even the full linac). Electron-beam capture is limited in medical linacs because the pulses from the electron gun are much longer than the radiofrequency (RF) period, meaning electrons are injected at all RF phases. Some phases cause the bunch to be accelerated, while others result in electrons being reflected back to the cathode. In typical linacs, fewer than 50% of the electrons reach the target, and many of those arrive with lower energies. In high-energy accelerators, velocity bunching can be used to compress the bunch; in medical linacs, however, space is limited and the energy gain per cell is often well in excess of the beam energy. To allow velocity bunching in a medical linac, the first cell needs to operate at a low gradient, such that less space is required for bunching because the average beam velocity is much lower and the deceleration is less than the beam energy. By adjusting the lengths of the first and second cells, the decelerated electrons can re-accelerate on the next RF cycle and synchronise with the accelerated electrons, capturing nearly all the electrons and transporting them to the target without a low-energy tail. This is achieved using techniques originally developed for the optimisation of klystrons as part of the Compact Linear Collider project at CERN. By adjusting the cell-to-cell coupling, it is possible to run all the other cells at a higher gradient, similar to a standard medical linac, such that the total linac length remains the same (see “Strong coupling” figure).
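
To make the role of the low-gradient first cell concrete, the short Python sketch below computes the relativistic speed of an electron as a function of its kinetic energy. It is a minimal illustration using assumed, typical gun and cell energies, not part of the ITAR or STELLA design work.

```python
# Minimal illustrative sketch (not ITAR/STELLA design code): relativistic
# speed of an electron versus kinetic energy, showing why any rebunching
# must happen in the first, low-gradient cell. The energies below are
# assumed, illustrative values only.
import math

M_E_MEV = 0.511  # electron rest energy in MeV


def beta(kinetic_energy_mev: float) -> float:
    """Electron speed as a fraction of c for a given kinetic energy."""
    gamma = 1.0 + kinetic_energy_mev / M_E_MEV
    return math.sqrt(1.0 - 1.0 / gamma ** 2)


for e_kin in (0.025, 0.1, 0.5, 1.0, 5.0):  # MeV
    print(f"E_kin = {e_kin:5.3f} MeV  ->  v/c = {beta(e_kin):.3f}")

# Output: v/c rises from ~0.3 at typical gun energies to >0.99 above a few
# MeV. Once electrons are relativistic their RF phase hardly slips, so the
# phase manipulation that recovers reflected electrons has to happen early,
# at low energy and low gradient.
```

The point is simply that phase slippage, and hence rebunching, is only possible while the electrons are still slow; once they have gained a few MeV their speed is essentially fixed at c.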

Designing a Robust and Affordable Radiation Therapy Treatment System for Challenging Environments workshop participants

The electrical power supply in LMICs can often be variable, and protection equipment to isolate harmonics between pieces of equipment is not always installed, so it is critical to take this into account when designing the electrical system for RT machines. Doing so is relatively straightforward in itself, but it is not normally part of an RT machine design.

The failure of multi-leaf collimators (MLCs), which shape the radiation field so that the dose conforms to the tumour volume via many individually actuated leaves, is a major cause of linac downtime. Designing MLCs that are less prone to failure will play a key role in RT in LMICs, with studies ongoing into ways to simplify the design without compromising treatment quality.

Building a workforce

Making it simpler to diagnose and repair faults on linacs is another key area that needs improvement. Given the limited technical training of staff in some LMICs, when a machine fails it can be challenging for local staff to make repairs. In addition, degrading components can be missed by staff, leading to the loss of valuable time while spares are ordered. An important component of the STELLA project, led by the ICEC, is to enhance existing twinning programmes and establish new ones that provide mentoring and training to healthcare professionals in LMICs, building workforce capacity and capability in those regions.

ITAR linac cavity geometry

The idea to address the need for a novel medical linac for challenging environments was first presented by Norman Coleman, senior scientific advisor to the ICEC, at the 2014 ICTR-PHE meeting in Geneva. This led to the creation of the STELLA project, led by Coleman and ICEC colleagues Nina Wendling and David Pistenmaa, which is now using technology originally developed for high-energy physics to bring this idea closer to reality – an excellent example of the impact of fundamental research on wider society. 

The next steps are to construct a full linac prototype to verify the higher capture, as well as to improve the ease of maintaining and repairing the machine. The RT machine must then be manufactured for use in LMICs, which will require many practical and commercial challenges to be overcome. The aim of the STELLA project – to make RT truly accessible to all cancer patients – brings to mind a quote from the famous Nigerian novelist Chinua Achebe: “While we do our good works let us not forget that the real solution lies in a world in which charity will have become unnecessary.”

Space-based data probe neutron lifetime

Recent measurements of the neutron lifetime

The neutron lifetime is key to a range of fields, not least astrophysics and cosmology, where it enters the modelling of the synthesis of helium and heavier elements in the early universe. Its value, however, is uncertain: in recent years, discrepancies of up to 4σ between measurements of the neutron lifetime using different methods have presented a puzzle that particle physicists, nuclear physicists and cosmologists are increasingly eager to solve.

A recent measurement by the UCNτ experiment at the Los Alamos Neutron Science Center, the most constraining determination of the lifetime to date, further strengthens the discrepancy. The latest result, achieved using the so-called “bottle” method, is a neutron lifetime of 877.75 ± 0.28 (stat) +0.22/–0.16 (syst) s, whereas measurements using the “beam” method have consistently yielded longer lifetimes (see figure). While the beam method determines the lifetime by measuring the decay products of the neutron, the bottle method instead stores ultracold neutrons for a certain time before counting the survivors by direct detection. If not the result of some unknown systematic error, the discrepancy could be a sign of exotic physics whereby the longer lifetime obtained with the beam method stems from an unmeasured second decay channel.
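
Schematically, the two laboratory approaches probe different faces of the same exponential decay law,

N(t) = N_0\, e^{-t/\tau_n} \quad \text{(bottle: count surviving neutrons)},
\qquad
\frac{\mathrm{d}N_p}{\mathrm{d}t} = \frac{N_n}{\tau_n} \quad \text{(beam: count decay protons)},

so a second, unobserved decay channel would leave the proton rate untouched while shortening the survival time, making the beam-based lifetime come out longer than the bottle-based one – the pattern seen in the data.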

Escape detection

Astrophysics brings a third, independent method into play, based on the bombardment of planetary surfaces by galactic cosmic rays. This continual process liberates large numbers of high-energy neutrons, some of which escape directly into space while others approach thermal equilibrium with surface and atmospheric material, a proportion subsequently escaping into space where they eventually decay. The neutron lifetime can therefore be inferred by counting the neutrons remaining at different distances from their production site, using detectors positioned hundreds to thousands of kilometres above the surface. As the escaping neutron flux depends on a planet’s particular elemental composition at depths corresponding to the neutron mean free path (typically around 10 cm), neutron spectrometers have already been flown on several missions to explore planetary surface compositions.

A dedicated instrument on a future lunar mission could bring a crucial third independent tool to tackle the neutron lifetime puzzle

In 2020, using neutrons produced through interactions of cosmic rays with Venus and Mercury, a team from the Johns Hopkins Applied Physics Laboratory and Durham University demonstrated the feasibility of such a neutron-lifetime measurement. Now, using data from a lunar mission, the same team has provided the first results with uncertainties approaching those of lab-based experiments. Importantly, since it also counts surviving neutrons rather than their decay products, the space-based technique should yield the same lifetime as the bottle experiments.

For this latest study, the researchers used data from NASA’s Lunar Prospector taken during several elliptical orbits around the Moon in 1998. The orbiter carried two neutron detectors: one with a cadmium shield making it insensitive to slow, or thermal, neutrons, and one with a tin shield that allows it to measure thermal as well as higher-energy neutrons. The difference between the two count rates then provides the thermal-neutron flux. Combining this with the spacecraft position, the group deduced the thermal-neutron flux at different positions and distances from the Moon and fitted the data against a model that includes the production and propagation of thermal neutrons originating from interactions of cosmic rays with the lunar surface.

Surface studies

The highly detailed models account for neutron production from cosmic-ray interactions with the different elements of the lunar surface, and also for the varying composition of the surface in different regions. For the lifetime measurement, thermal neutrons were used because of their low velocities (a few km/s), which make their flux as a function of the distance to the surface (typically several hundred kilometres) more sensitive to the lifetime. The higher sensitivity comes at the cost of greater model complexity, however. For example, thermal neutrons cannot simply be modelled as travelling in straight lines: they are affected by lunar gravity, meaning that they not only arrive directly from the surface but can also enter the detector from behind as they follow elliptical orbits.
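
As a rough illustration of why the flux measured at a few hundred kilometres is sensitive to the lifetime, the toy Python sketch below computes the surviving fraction of thermal neutrons for a straight-line flight, using assumed, representative numbers and none of the gravitational or compositional modelling of the real analysis.

```python
# Toy illustration (not the published analysis): fraction of thermal
# neutrons surviving a straight-line flight from the lunar surface to a
# detector, ignoring the gravity and orbital effects the full model
# includes. The speed and altitude are assumed, representative values.
import math


def survival_fraction(altitude_km: float, speed_km_s: float, lifetime_s: float) -> float:
    """exp(-t/tau) for a flight time t = altitude / speed."""
    flight_time_s = altitude_km / speed_km_s
    return math.exp(-flight_time_s / lifetime_s)


for tau in (877.75, 887.0, 900.0):  # candidate neutron lifetimes in seconds
    frac = survival_fraction(altitude_km=500.0, speed_km_s=2.2, lifetime_s=tau)
    print(f"tau = {tau:6.2f} s  ->  surviving fraction ~ {frac:.4f}")

# A ~1% change in the assumed lifetime shifts the surviving fraction at this
# altitude by a few parts per mille - the kind of dependence the fit to the
# Lunar Prospector count rates exploits.
```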

The study found a lifetime of 887 ± 14 (stat) +7/–3 (syst) s. The systematic error stems mainly from uncertainties in the surface composition and its variations, from the lack of modelling of the temperature variation of the Moon’s surface (which affects the thermalisation process), and from uncertainties in the ephemerides (location) of the spacecraft. In future dedicated missions the latter two issues can be mitigated, while knowledge of the surface composition can be improved with additional studies. Indeed, the large statistical error arises because this was not a dedicated mission: the small data sample used was not even part of the science data of the original mission. The results are therefore highly promising, as they show that a dedicated instrument on a future lunar mission would provide a crucial third independent tool with which to tackle the neutron-lifetime puzzle.

One day in September: Copenhagen

The ghosts of Niels Bohr, Werner Heisenberg and Margrethe Bohr

“But why?” asks Margrethe Bohr. Her husband, Niels, replies “Does it matter my love now that we’re all three of us dead and gone?” Alongside Werner Heisenberg, the trio look like spirits meeting in an atemporal dimension, maybe the afterlife, under an eerie ring of light. Dominating an almost empty stage, they try to revive what happened on one day in September 1941, when Heisenberg, a prominent figure in Hitler’s Uranverein (Uranium Club), travelled to Nazi-occupied Denmark to visit his former mentor, Niels Bohr. 

Why did Heisenberg go to meet Bohr that day? Did he seek an agreement not to develop the bomb in Germany? Was he searching for intelligence on Allied progress? To convince Bohr that there was no German programme? Or to pick Bohr’s brain on atomic physics? Or, according to Margrethe, to show off? Perhaps his motives were a superposition of all of these. No one knows what was said. This puzzle has intrigued historians ever since. 

Eighty years after that meeting, and 23 since Michael Frayn’s masterwork Copenhagen premiered at the National Theatre in London, award-winning director Polly Findlay and Emma Howlett in her professional directorial debut have revived a play that contains little action but much physics and food for thought.

The three actors orbit like electrons in an atom

Frayn’s nonlinear script is based on three possible versions of the same meeting in Copenhagen in 1941, which can be construed as three different scenarios playing out in the many-worlds interpretation of quantum mechanics. He describes it as the process of rewriting a draft of a paper again and again, trying to unlock more secrets. In the afterlife, the trio’s dialogue jumps back and forth in time, adding confusing memories and contradicting hypotheses. Delivered at pace, the narrative explores historical information and their personal stories.

The three characters reflect on how German scientists failed to build the bomb, even though they had the best start: Otto Hahn, Lise Meitner and Fritz Strassmann had discovered nuclear fission in 1939. But Frayn highlights how Hitler’s Deutsche Physik was hostile to so-called Jewish physics and to key Jewish physicists, including Bohr, who later fled Denmark and ended up at Los Alamos in the US. Frayn’s Heisenberg reveals the disbelief he felt when he learnt about the destruction of Hiroshima on the radio. At the time he was detained at Farm Hall, not far from this theatre in Cambridge in the UK, together with other members of the Uranium Club. In an operation codenamed Epsilon, the Allied forces had bugged the hall to try to uncover the state of Nazi scientific progress.

The three actors orbit like electrons in an atom, while the theatre’s revolving stage itself spins. Superb acting by Philip Arditti and Malcolm Sinclair elucidates an extraordinary student–mentor relationship between Heisenberg and Bohr. The sceptical Mrs Bohr (Haydn Gwynne) steers the conversation and questions their friendship, cajoling Bohr to speak in plain language. Nevertheless, the use of scientific jargon could leave some non-experts in the audience behind. 

Although Heisenberg wrote in his autobiography that “it would be better to stop disturbing the spirits of the past,” the private conversation between the two physicists has stirred the interest of the public, journalists and historians for years. In 1956 the journalist Robert Jungk wrote in his debated book, Brighter than a Thousand Suns, that Heisenberg wanted to prevent the development of an atomic bomb. This book was also an inspiration for Frayn’s play. More recently, in 2001, Bohr’s family released some letters that Bohr wrote and never sent to Heisenberg. According to these letters, Bohr was convinced that Heisenberg was building the bomb in Germany.

To this day, the reason for Heisenberg’s visit to Copenhagen remains uncertain, or unknowable, like the properties of a quantum particle that’s not observed. The audience can only imagine what really happened, while considering all philosophical interpretations of the fragility of the human species. 

Witten reflects

Edward Witten

How has the discovery of a Standard Model-like Higgs boson changed your view of nature? 

The discovery of a Standard Model-like Higgs boson was a great triumph for renormalisable field theory, and really for simplicity. By the time the LHC was operating, attempts to make the Standard Model (SM) work without an elementary Higgs field – using a dynamical mechanism instead – had become rather convoluted. It turned out that, as far as one can judge from what we have learned so far, the original idea of an elementary Higgs particle was correct. This also means that nature takes advantage of all the possible building blocks of renormalisable field theory – fields of spin 0, 1/2 and 1 – and the flexibility that that allows. 

The other key fact is that the Higgs particle has appeared by itself, and without any sign of a mechanism that would account for the smallness of the energy scale of weak interactions compared to the much larger presumed energy scales of gravity, grand unification and cosmic inflation. From the perspective that my generation of particle physicists grew up with (and not only my generation, I would say), this is quite a shock. Of course, we lived through a somewhat similar shock a little over 20 years ago with the discovery that the expansion of the universe is accelerating – something that is most simply interpreted in terms of a very small but positive cosmological constant, the energy density of the vacuum. It seems that the ideas of naturalness that we grew up with are failing us in at least these two cases.

What about new approaches to the fine-tuning problem such as the relaxion or “Nnaturalness”?

Unfortunately, it has been very hard to find a conventional natural explanation of the dark energy and hierarchy problems. Reluctantly, I think we have to take seriously the anthropic alternative, according to which we live in a universe that has a “landscape” of possibilities, which are realised in different regions of space or maybe in different portions of the quantum mechanical wavefunction, and we inevitably live where we can. I have no idea if this interpretation is correct, but it provides a yardstick against which to measure other proposals. Twenty years ago, I used to find the anthropic interpretation of the universe upsetting, in part because of the difficulty it might present in understanding physics. Over the years I have mellowed. I suppose I reluctantly came to accept that the universe was not created for our convenience in understanding it.

Which experimental paths should physicists prioritise at this time?

It is extremely important to probe the twin mysteries of the cosmic acceleration and the smallness of the electroweak scale as thoroughly as possible, in order to determine whether we are interpreting the facts correctly and possibly to discover a new layer of structure. In the case of the cosmic acceleration, this means measuring as precisely as we can the parameter w (the ratio of pressure and energy), which equals –1 if the acceleration of the expansion is governed by a simple cosmological constant, but would be greater than –1 in most alternative models. In particle physics, we would like to probe for further structure as precisely as we can both indirectly, for example with precision studies of the Higgs particle, and hopefully directly by going to higher energies than are available at the LHC.

What might be lurking at energies beyond the LHC?

If it is eventually possible to go to higher energies, I can imagine several possible outcomes. It might become rather clear that the traditional idea of naturalness is not the whole story and that we have on our hands a “bare” Higgs particle, without a mechanism that would account for its mass scale. Alternatively, we might find out that the apparent failure of naturalness was an illusion and that additional particles and forces that provide an explanation for the electroweak scale are just beyond our current experimental reach. There is also an intermediate possibility that I find fascinating. This is that the electroweak scale is not natural in the customary sense, but additional particles and forces that would help us understand what is going on exist at an energy not too much above LHC energies. A fascinating theory of this type is the “split supersymmetry” that has been proposed by Nima Arkani-Hamed and others.  

It seems that the ideas of naturalness that we grew up with are now failing us 

There is an obvious catch, however. It is easy enough to say “such-and-such will happen at an energy not too much above LHC energies”. But for practical purposes, it makes a world of difference whether this means three times LHC energies, six times LHC energies, 25 times LHC energies, or more. In theories such as split supersymmetry, the clues that we have are not sufficient to enable a real answer. A dream would be to get a concrete clue from experiment about what is the energy scale for new physics beyond the Higgs particle. 

Could the flavour anomalies be one such clue?

There are multiple places that new clues could come from. The possible anomalies in b physics observed at CERN are extremely significant if they hold up. The search for an electric dipole moment of the electron or neutron is also very important and could possibly give a signal of something new happening at energies close to those that we have already probed. Another possibility is the slight reported discrepancy between the magnetic moment of the muon and the SM prediction. Here, I think it is very important to improve the lattice gauge theory estimates of the hadronic contribution to the muon moment, in order to clarify whether the fantastically precise measurements that are now available are really in disagreement with the SM. Of course, there are multiple other places that experiment could pinpoint the next energy scale at which the SM needs to be revised, ranging from precision studies of the Higgs particle to searches for muon decay modes that are absent in the SM. 

Which current developments in theory are you most excited about?

The new ideas about gravity and quantum mechanics that go under the rough title “It from qubit” are really exciting. Black-hole thermodynamics was discovered in the 1970s through the work of Jacob Bekenstein, Stephen Hawking and others. These results were fascinating, but for several decades it seemed to me – rightly or wrongly – that this field was evolving only slowly compared to other areas of theoretical physics. In the past decade or so, that is clearly no longer the case. In large part the change has come from thinking about “entropy” as microscopic or fine-grained von Neumann entropy, as opposed to the thermodynamic entropy that Bekenstein and others considered. A formulation in terms of fine-grained entropy has made possible new statements and more general statements which reduce to the traditional ones when thermodynamics is valid. All this has been accelerated by the insights that come from holographic duality between gravity and gauge theory.

How different does the field look today compared to when you entered it?

It is really hard to exaggerate how the field has changed. I started graduate school at Princeton in September 1973. Asymptotic freedom of non-abelian gauge theory had just been discovered a few months earlier by David Gross, Frank Wilczek and David Politzer. This was the last key ingredient that was needed to make possible the SM as we know it today. Since then there has been a revolution in our experimental knowledge of the SM. Several key ingredients (new quarks, leptons and the Higgs particle) were unknown in 1973. Jets in hadronic processes were still in the future, even as an idea, let alone an experimental reality, and almost nothing was known about CP violation or about scaling violations in high-energy hadronic processes, just to mention two areas that developed later in an impressive way.

6D Calabi–Yau manifolds

Not only is our experimental knowledge of the SM so much richer than it was in 1973, but the same is really true of our theoretical understanding as well. Quantum field theory is understood much better today than was the case in 1973. There really is no comparison.

Perhaps equally dramatic has been the change in our understanding of cosmology. In 1973, the state of cosmological knowledge could be summarised fairly well in a couple of numbers – notably the cosmic-microwave temperature and the Hubble constant – and of these only the first was measured with any reasonable precision. In the intervening years, cosmology became a precision science and also a much more ambitious science, as cosmologists have learned to grapple with the complex processes of the formation of structure in the universe. In the inhomogeneities of the microwave background, we have observed what appear to be the seeds of structure formation. And the theory of cosmic inflation, which developed starting around 1980, seems to be a real advance over the framework in which cosmology was understood in 1973, though it is certainly still incomplete.

Finally, 50 years ago the gulf between particle physics and gravity seemed unbridgeably wide. There is still a wide gap today. But the emergence in string theory of a sensible framework to study gravity unified with particle forces has changed the picture. This framework has turned out to be very powerful, even if one is not motivated by gravity and one is just searching for new understanding of ordinary quantum field theory. We do not understand today in detail how to unify the forces and obtain the particles and interactions that we see in the real world. But we certainly do have a general idea of how it can work, and this is quite a change from where we were in 1973. Exploring the string-theory framework has led to a remarkable series of discoveries. This well has not run dry, and that is one of the reasons that I am optimistic about the future.

Which of the numerous contributions you have made to particle and mathematical physics are you most proud of?

I am most satisfied with the work that I did in 1994 with Nathan Seiberg on electric-magnetic duality in quantum field theory, and also the work that I did the following year in helping to develop an analogous picture for string theory.

Who knows, maybe I will have the good fortune to do something equally significant again in the future.

Multidisciplinary CERN forum tackles AI

Anima Anandkumar

The inaugural Sparks! Serendipity Forum attracted 49 leading computer scientists, policymakers and related experts to CERN from 17 to 18 September for a multidisciplinary science-innovation forum. In this first edition, participants discussed a range of ethical and technical issues related to artificial intelligence (AI), which has deep and developing importance for high-energy physics and its societal applications. The structure of the discussions was designed to stimulate interactions between AI specialists, scientists, philosophers, ethicists and other professionals with an interest in the subject, leading to new insights, dialogue and collaboration between participants.

World-leading cognitive psychologist Daniel Kahneman opened the public part of the event by discussing errors in human decision making and their impact on AI. He explained that human decision making will always be biased – and therefore, in his definition, “noisy” – and asked whether AI could be the solution, while pointing out that AI algorithms might not be able to cope with the complexity of the decisions that humans have to make. Others speculated as to whether AI could ever reproduce human cognition, and whether the focus should shift from searching for a “missing link” to considering how AI research is actually conducted, by making the process more regulated and transparent.

Introspective AI

Participants discussed both the advantages and challenges associated with designing introspective AI, which is capable of examining its own processes and could be beneficial in making predictions about the future. Participants also questioned, however, whether we should be trying to make AI more self-aware and human-like. Neuroscientist Ed Boyden explored introspection through the lens of neural pathways, and asked whether we can design introspective AI before we understand introspection in brains. Following the introspection theme, philosopher Luisa Damiano addressed the reality versus fiction of “social-embodied” AI – the idea of robots interacting with us in our physical world – arguing that such a possibility would require careful ethical considerations. 

Many participants advocated developing so-called “strong” AI technology that can solve problems it has not come across before, in line with specific and targeted goals. Computer scientist Max Welling explored the potential for AI to exceed human intelligence, and suggested that AI can potentially be as creative as humans, although further research is required.

On the subject of ethics, Anja Kaspersen (former director of the UN Office for Disarmament Affairs) asked: who makes the rules? Linking military, humanitarian and technological affairs, she considered how our experience in dealing with nuclear weapons could help us deal with the development of AI. She said that AI is prone to “ethics washing”: the process of creating an illusory sense that ethical issues are being appropriately addressed when they are not. Participants agreed that we should seek to avoid polarising the community when considering risks associated with current and future AI, and suggested a more open approach to deal with the challenges faced by AI today and tomorrow. Skype co-founder Jaan Tallinn identified AI as one of the most worrying existential risks facing society today; the fact that machines do not consider whether their decisions are unethical demands that we consider the constraints of the AI design space within the realm of decision making.

Fruits of labour

The initial outcomes of the Sparks! Serendipity Forum are being written up as a CERN Yellow Report, and at least one paper will be submitted to the journal Machine Learning: Science and Technology. Time will tell what other fruits the serendipitous interactions at Sparks! will bring. One thing is certain, however: AI is already a powerful, and growing, tool for particle physics. Without it, the LHC experiments’ analyses would have been much more tortuous, as discussed by Jennifer Ngadiuba and Maurizio Pierini (CERN Courier September/October 2021 p31).

Future editions of the Sparks! Serendipity Forum will tackle different themes in science and innovation that are relevant to CERN’s research. The 2022 event will be built around future health technologies, including the many accelerator, detector and simulation technologies that are offshoots of high-energy-physics research.

Training future experts in the fight against cancer

The leading role of CERN in fundamental research is complemented by its contribution to applications for the benefit of society. A strong example is the Heavy Ion Therapy Masterclass (HITM) school, which took place from 17 to 21 May 2021. Attracting more than 1000 participants from around the world, many of whom were young students and early-stage researchers, the school demonstrated the enormous potential to train the next generation of experts in this vital application. It was the first event of the European Union project HITRIplus (Heavy Ion Therapy Research Integration), in which CERN is a strategic partner along with other research infrastructures, universities, industry partners, the four European heavy-ion therapy centres and the South East European International Institute for Sustainable Technologies (SEEIIST). As part of a broader “hands-on training” project supported by the CERN & Society Foundation, with an emphasis on capacity building in Southeast Europe, the event was originally planned to be hosted in Sarajevo but was held online due to the pandemic.

The school’s scientific programme highlighted the importance of developments in fundamental research for cancer diagnostics and treatment. Focusing on treatment planning, it covered everything needed to deliver a beam to a tumour target, including the biological response of cancerous and healthy tissues. Experts and young researchers from the Next Ion Medical Machine Study (NIMMS) group gave numerous presentations, ranging from basic concepts to discussions of open questions and plans for upgrades. Expert-guided practical sessions were based on the matRad open-source treatment-planning toolkit, developed by the German cancer research centre DKFZ for training and research. Several elements of the course were inspired by the International Particle Therapy Masterclasses.

Virtual visits to European heavy-ion therapy centres and research infrastructures were ranked by participants among the most exciting components of the course. There were also plenty of opportunities for participants to interact with experts in dedicated sessions, including a popular session on entrepreneurship by the CERN Knowledge Transfer group. This interactive approach had a big impact on participants, several of whom were motivated to pursue careers in related fields and to get actively involved at their home institutes. This future expert workforce will become the backbone for building and operating the future heavy-ion therapy and research facilities that are needed to fight cancer worldwide (see Linacs to narrow radiotherapy gap).

Further support is planned through upcoming HITRIplus schools on clinical and medical aspects, as well as through HITRIplus internships, which will give participants access to existing European heavy-ion therapy centres and the opportunity to contribute to relevant research projects.

Unrivalled precision on Z invisible width

The three regions used to extract the Z-boson invisible width

The LHC was built in the 27 km tunnel originally excavated for LEP, the highest-energy electron–positron collider ever built. Designed to study the carriers of the weak force, LEP left as its greatest legacy the accuracy with which it pinned down the properties of the Z boson. Among the highlights is the measurement of the Z boson’s invisible width and decay branching fraction, which was used to deduce that there are three, and only three, species of light neutrinos that couple to the Z boson. This LEP measurement of the Z-boson invisible width has remained the most precise for two decades.
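
The neutrino counting works schematically as follows (a simplified relation rather than the full LEP lineshape fit): assuming that each light neutrino species contributes equally, the number of species is the measured invisible width divided by the SM prediction for a single neutrino pair, conveniently normalised to the well-measured leptonic width,

\[
N_\nu \;=\; \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\ell\ell}}
\left(\frac{\Gamma_{\ell\ell}}{\Gamma_{\nu\bar{\nu}}}\right)_{\!\mathrm{SM}},
\]

where $\Gamma_{\ell\ell}$ is the Z partial width to a single charged-lepton flavour and $(\Gamma_{\nu\bar{\nu}}/\Gamma_{\ell\ell})_{\mathrm{SM}} \approx 1.99$; the LEP data yield a value of $N_\nu$ very close to three.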

This precise measurement of the Z-boson invisible width is the first of its kind at a hadron collider

In a bid to provide an independent and complementary test of the Standard Model (SM) in a new energy regime, CMS has performed a precise measurement of the Z-boson invisible width – the first of its kind at a hadron collider. The analysis uses the experimental signature of a very energetic jet accompanied by large missing transverse momentum to select events in which the Z boson decays invisibly into neutrinos. The invisible width is then extracted from the well-known relationship between the Z-boson coupling to neutrinos and its coupling to muons and electrons.
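
Schematically (a simplified sketch of this relationship, not the collaboration’s full likelihood model), the invisible width follows from the ratio of Z(→νν̄)+jets to Z(→ℓ⁺ℓ⁻)+jets event yields, corrected for selection efficiency and acceptance, multiplied by the precisely known leptonic partial width:

\[
\Gamma_{\mathrm{inv}} \;\simeq\;
\frac{N_{Z\to\nu\bar{\nu}+\mathrm{jets}}}{N_{Z\to\ell^{+}\ell^{-}+\mathrm{jets}}}\,
\frac{(\varepsilon A)_{\ell\ell}}{(\varepsilon A)_{\nu\bar{\nu}}}\;
\Gamma_{\ell\ell},
\]

where $\Gamma_{\ell\ell}$ is the Z partial width to a single charged-lepton flavour and $\varepsilon A$ denotes the product of selection efficiency and acceptance in each channel.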

While the production of a pair of neutrinos occurs through a pure Z interaction, the production of a pair of charged leptons can also proceed through a virtual photon. The contributions of virtual-photon exchange and of the interference between photon and Z-boson exchange were determined to be less than 2% for a dilepton invariant mass range of 71–111 GeV, and were accounted for so that the charged-lepton results could be compared directly with the Z-boson decay to neutrinos.

Figure 1 shows the missing-transverse-momentum distribution for the three key regions contributing to this measurement: the jets-plus-missing-transverse-momentum region, the dimuon-plus-jets region and the dielectron-plus-jets region. For the dilepton regions, selected muons and electrons are not included in the calculation of the missing transverse momentum. The dominant background in the jets-plus-missing-transverse-momentum region comes from a W boson decaying leptonically, and accounts for 35% of the events. Estimating this background with high accuracy is one of the key aspects of the measurement, and was performed by studying several exclusive regions in data that are designed to be kinematically very similar to the signal region but statistically independent.
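
The logic of such a data-driven estimate can be illustrated with a minimal transfer-factor calculation in Python (a generic sketch with placeholder numbers, not the CMS implementation): the observed yield in a W-enriched control region, after subtraction of the other backgrounds, is extrapolated to the signal region using the ratio of simulated W+jets yields in the two regions.

# Minimal sketch of a transfer-factor background estimate.
# All yields below are placeholder numbers for illustration, not CMS data.
n_data_cr     = 12000.0   # events observed in a W-enriched control region
n_other_mc_cr = 1500.0    # non-W backgrounds in the control region, from simulation
n_w_mc_cr     = 10200.0   # simulated W+jets yield in the control region
n_w_mc_sr     = 3400.0    # simulated W+jets yield in the signal region

# Transfer factor: ratio of simulated W+jets yields in the two regions.
transfer_factor = n_w_mc_sr / n_w_mc_cr

# Data-driven estimate of the W+jets background in the signal region.
n_w_est_sr = (n_data_cr - n_other_mc_cr) * transfer_factor
print(f"Estimated W+jets background in signal region: {n_w_est_sr:.0f} events")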

The invisible width of the Z boson was extracted from a simultaneous likelihood fit and measured to be 523 ± 3 (stat) ± 16 (syst) MeV. The 3.2% uncertainty in the final result is dominated by systematic uncertainties, with the largest contributions coming from the uncertainty in the efficiencies for selecting muons and electrons. In a fitting tribute to its predecessor, and a testament to the LHC entering a precision era of physics, this measurement from CMS is competitive with the LEP combined result of 503 ± 16 MeV and is currently the world’s most precise single direct measurement.

Charm-strange mesons probe hadronisation

Pb–Pb collision data

The ALICE collaboration has reported a new measurement of the production of Ds+ mesons, which contain a charm quark and an anti-strange quark, in Pb–Pb collisions collected in 2018 at a centre-of-mass energy per nucleon pair of 5.02 TeV. The large data sample and the use of machine-learning techniques for the selection of particle candidates led to increased precision on this important quantity.

D-meson measurements probe the interaction between charm quarks and the quark–gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions. Charm quarks are produced in the early stages of the nucleus–nucleus collision and thus experience the whole system evolution, losing part of their energy via scattering processes and gluon radiation. The presence of the QGP medium also affects the charm-quark hadronisation and, in addition to the fragmentation mechanism, a competing process based on charm–quark recombination with light quarks of the medium might occur. Given that strange quark–antiquark pairs are abundantly produced in the QGP, the recombination mechanism could enhance the yield of Ds+ mesons in Pb–Pb collisions with respect to that of D0 mesons, which do not contain strange quarks. 

ALICE investigated this possibility using the ratio of the yields of Ds+ and D0 mesons. The figure displays the Ds+/D0 yield ratio in central (0–10%) Pb–Pb collisions divided by the ratio in pp collisions, showing that the values of the ratio in the 2 < pT < 8 GeV/c interval are higher in central Pb–Pb collisions by about 2.3σ. The measured Ds+/D0 double ratio also hints at a peak for pT of 5–6 GeV/c. Its origin could be related to the different D-meson masses and to the collective radial expansion of the system with a common flow-velocity profile. In addition, hadronisation via fragmentation becomes dominant at high transverse momenta, and consequently the values of the Ds+/D0 ratio in Pb–Pb and pp collisions become similar.
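
Written out, the quantity compared between the two collision systems is the double ratio

\[
R \;=\;
\left(\frac{N_{D_s^+}}{N_{D^0}}\right)_{\mathrm{Pb-Pb}}
\Bigg/
\left(\frac{N_{D_s^+}}{N_{D^0}}\right)_{pp},
\]

evaluated in intervals of transverse momentum; values of R above unity indicate a relative enhancement of strange–charm mesons, as expected if charm quarks recombine with strange quarks from the medium.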

The measurement was compared with theoretical calculations based on charm-quark transport in a hydrodynamically expanding QGP (LGR, TAMU, Catania and PHSD), which implement strangeness enhancement and the hadronisation of charm quarks via recombination, in addition to fragmentation in the vacuum. The Catania and PHSD models predict a ratio that is almost flat in pT, while TAMU and LGR describe the peak at pT of 3–5 GeV/c.

Complementary information was obtained by comparing the elliptic-flow coefficient v2 of Ds+ and non-strange D mesons (D0, D+ and D*+) in semi-central (30–50%) Pb–Pb collisions. The Ds+-meson v2 is positive in the 2 < pT < 8 GeV/c interval with a significance of 6.4σ, and is compatible within uncertainties with that of the non-strange D mesons. These features of the data are described by model calculations that include the recombination of charm and strange quarks.
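
For reference, the elliptic-flow coefficient quoted above is the second harmonic of the azimuthal distribution of the produced D mesons relative to the symmetry plane of the collision,

\[
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\;
1 + 2\,v_2 \cos\!\big[2(\varphi - \Psi_2)\big] + \dots,
\]

where $\varphi$ is the azimuthal angle of the D-meson momentum and $\Psi_2$ the second-order symmetry-plane angle; a positive $v_2$ indicates that the mesons inherit the azimuthal anisotropy of the expanding medium.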

The freshly completed upgrade of the detectors and the harvest of Pb–Pb collision data expected in Run 3 will allow the ALICE collaboration to further improve these measurements, deepening our understanding of heavy-quark interactions and hadronisation in the QGP.
