The new labs and exhibitions in Science Gateway offer children as young as five and eight, respectively, the opportunity to have fun with science. Why would CERN target such young audiences? And what CERN-related content could possibly be accessible to such an age group?
CERN has traditionally tailored education and outreach material predominantly towards high-school students, in particular those already expressing an interest in science. For this age group, it is relatively easy to find overlaps between school curricula and work at CERN. Such visitors will continue to find engaging content in our exhibitions. However, if CERN is to connect to a broader section of the public and attract a more diverse cohort of future scientists, it needs to reach out beyond existing science fans, attracting younger audiences before stereotypes set in.
Positive contacts
Over the decades, communication best practice has evolved beyond the idea that to inspire children to choose a career in science, you just need to make it sound interesting. It is now recognised that multiple factors influence that choice. The Aspires research project at University College London, for example, has highlighted the importance of “science capital”, a notion based on the variety of positive contacts with science that children experience. This includes knowing people who work in science, talking about science with family and friends, doing science-based activities outside school and a generally positive attitude towards science within the family setting.
At school, careers information often comes once choices to drop science subjects have already been made. And without role models to identify with, or contact with science or science-related professions through family and friends, it can be extremely difficult for some students to imagine themselves as future scientists. Hence the drop, seen in many countries, in pupils expressing such aspirations from the end of primary education onwards. By offering younger students the opportunity to experiment and play in a scientific environment, Science Gateway seeks to counter this drop. In addition to the existing science-fan visitors, it aims to reach those with less science capital at home, so that children can discover new opportunities.
There is a slogan in the exhibitions world: “hands on, minds on”. A good exhibit creates memorable experiences that empower visitors to explore and engage, rather than simply transmitting knowledge in a unidirectional way. Science Gateway offers activities – such as designing a detector or collaborating to lower equipment into a cavern – where children are encouraged to think logically, and exhibits that encourage them to make their own deductions, helping them to become more confident that science is for them. Here the exhibition guides play a key role in encouraging interaction and play.
Sometimes in a hands-on science centre, one can have the impression that children are having so much fun racing from exhibit to exhibit that no real learning is taking place. This impression is countered by research showing that learning comes in a broad variety of forms. Informal learning experiences, such as those at Science Gateway, can have just as much impact as in-school learning.
The exhibitions offer a variety of different environments – playful areas and beautiful spaces, including artworks, that can be enjoyed by simply sitting back and reflecting. The exhibitions team has also collaborated with community groups to develop tactile content and ensure the exhibits are accessible to wheelchair users. Not every exhibit will be accessible to younger children or to the visually impaired, but throughout there is a spread of different experiences that gives everyone something to enjoy.
The ambition is for CERN to become a popular destination for a fun day out, attracting a broad section of the public, both those who might one day become scientists themselves and those who might never choose that path, but who are curious to explore the new buildings that have popped up in their local area. Successful outcomes can be as simple as visitors having fun in a scientific environment. This is a first step towards being open to scientific ideas and methods – a valid goal in today’s world of misinformation and distrust, where science is sometimes talked of as something you might or might not choose to believe in.
In 1826, the Swiss pedagogue and educational reformer Johann Heinrich Pestalozzi advocated for a natural and meaningful education through a holistic learning approach that engaged “the hands, head and heart”. One prime example of such an approach is found in science education, where experiments allow learners to experience scientific phenomena while manipulating ideas about experiments in their minds. Experiments are also associated with high affective value, as school students generally enjoy practical tasks and often rank them as preferred learning activities in school. As a result, experiments have long been considered an essential part of teaching the nature of science, and only very few science educators have questioned their necessity.
Consequently, it was long overdue for CERN to offer opportunities for visiting high-school students to get hands-on with particle physics. In 2014, CERN inaugurated its first particle-physics learning laboratory for high-school students. During its eight years of operations, “S’Cool LAB” gave nearly 40,000 visitors a unique opportunity to make discoveries independently, work scientifically and gain insight into modern science in the making.
A major factor in S’Cool LAB’s success was its connection to the latest thinking in physics education research. Interestingly, learning from hands-on experiments is (still) one of the central problems of physics education research. Even though students often enjoy doing experiments, various factors influence what and how much they learn from the exercise. To address this research gap, educational activities at S’Cool LAB were continually developed and improved through accompanying physics education research projects. For example, experimental tasks were designed to challenge scientifically inaccurate mental models (such as bar magnets having electrically charged poles) by allowing students to compare their predictions with surprising observations, thus fostering conceptual understanding. Moreover, empirical research based on questionnaires completed by students before and after lab workshops confirmed significant positive effects on high-school students’ interest in physics and their beliefs in their physics-related capabilities, as well as a surprisingly high correlation between these affective outcomes and students’ perceived level of cognitive activation. Remarkably, girls benefited more from S’Cool LAB with respect to their interest and self-beliefs. Consequently, the initial gender gap (with girls reporting slightly lower interest and self-beliefs than boys) was closed.
New incarnation
On 12 January 2023, excavators arrived to dismantle S’Cool LAB to make space for the new educational labs at CERN Science Gateway. Several considerations went into the design of the new labs. Firstly, they have a broader scope, catering not only to high-school students and their teachers but also to school students as young as five, as well as the general public. Indeed, Science Gateway offers regular workshops open to individual visitors, tourists and families. Moreover, workshops are adapted to different age groups and cover many different topics such as engineering challenges, different technologies, detection principles, or medical applications of particle physics. This diversity allows for better adaptation to the needs of students and teachers, who often prefer workshops that can be easily integrated into their science curriculum.
When designing labs for young learners, a critical choice involves balancing the level of openness and guidance. While open exploration is considered to be the ideal form of experimentation, young students can feel overwhelmed by the choices involved in developing research questions, designing experiments and interpreting evidence. At the same time, giving students a choice in their learning can foster a sense of ownership and autonomy, leading to increased engagement and motivation to explore topics of personal interest. Providing the right level of guidance and support is therefore crucial to meaningful experimentation and a key element of the education labs at Science Gateway. It helps students enjoy hands-on activities while freeing up mental capacity to process new information effectively. To help them prepare their students for the new lab workshops, teachers now receive detailed information about each workshop’s planned content, along with suggestions on how to integrate the experience at CERN into their classroom practice.
The impact of volunteers on students’ interest and self-beliefs was a striking result from physics education research at S’Cool LAB
Despite the variety of lab workshops offered, all activities are anchored in authentic CERN contexts and can even be linked to real objects and authentic equipment in the interactive exhibitions at Science Gateway. This approach helps foster students’ interest in science and provides them with an accurate image of science and scientists. For instance, one lab workshop for students aged 8–15 – the “Power of Air” – allows students to use 3D-printed components and toy balloons to investigate balloon hovercrafts on different surfaces, drawing connections with how engineers at CERN move massive slices of the LHC detectors via air pads.
Community input
To enhance the authenticity of lab workshops, volunteers from CERN’s scientific community accompany students during their learning process and engage in discussions about their findings. The impact of volunteers on students’ interest and self-beliefs was a striking result from physics education research at S’Cool LAB. Students were inspired by the enthusiasm displayed by their guides and appreciated the opportunity to ask questions in an enjoyable learning atmosphere. Therefore, the education labs at Science Gateway will continue to rely on volunteers to facilitate workshops and inspire the next generation of engineers and scientists. To address new challenges related to groups of very young learners, heterogeneous audiences, the diverse collection of lab workshops and the high volume of workshops held each year, a team of professional science educators provides continuous support and guidance to volunteers.
In conclusion, the educational labs at CERN Science Gateway have been designed to provide a wide range of hands-on learning experiences for learners of all ages. These labs aim to not only promote scientific understanding but also foster curiosity, interest and positive self-beliefs in students, empowering them to explore the world of science by demonstrating that science is for everyone.
At the European Physical Society Conference on High Energy Physics, held in Hamburg in August, the LHCb collaboration announced first results on the production of antihelium and antihypertriton nuclei in proton–proton (pp) collisions at the LHC. These promising results open a new research field, which up to now has been pioneered by ground-breaking work from the ALICE collaboration in the central rapidity interval |y| < 0.5. By extending the measurements into the so-far unexplored forward region 1.0 < y < 4.0, the LHCb results provide new experimental input for deriving the production cross sections of antimatter particles formed in pp collisions, which are not calculable from first principles.
LHCb’s newly developed helium-identification technique mainly exploits information from energy losses through ionisation in the silicon sensors upstream (VELO and TT stations) and downstream (Inner Tracker) of the LHCb magnet. The amplitude measurements from up to ~50 silicon layers are combined for each subdetector into a log-likelihood estimator. In addition, timing information from the Outer Tracker and velocity measurements from the RICH detectors are used to improve the separation power between heavy helium nuclei (with charge Z = 2) and lighter, singly charged particles (mostly charged pions). With a signal efficiency of about 50%, a nearly background-free sample of 1.1 × 105 helium and antihelium nuclei is identified in the data collected during LHC Run 2 from 2016 to 2018 (see figure, inset).
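Schematically, and purely as an illustration (the function and calibration parameters below are hypothetical, and Gaussian responses stand in for the calibrated detector templates that LHCb actually uses), such a per-layer combination might look like this in Python:

```python
import numpy as np

def helium_loglik_ratio(amplitudes, mu_he, sigma_he, mu_pi, sigma_pi):
    """Combine per-layer silicon amplitude measurements into a single
    helium-vs-pion log-likelihood discriminant. Illustrative only:
    a helium nucleus (Z = 2) deposits roughly four times the ionisation
    energy of a singly charged pion, since dE/dx scales as Z^2."""
    a = np.asarray(amplitudes, dtype=float)
    ll_he = -0.5 * np.sum(((a - mu_he) / sigma_he) ** 2) - a.size * np.log(sigma_he)
    ll_pi = -0.5 * np.sum(((a - mu_pi) / sigma_pi) ** 2) - a.size * np.log(sigma_pi)
    return ll_he - ll_pi  # large positive values favour the helium hypothesis
```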
The helium identification method proves the feasibility of new research fields at LHCb
As a first step towards a light-nuclei physics programme at LHCb, hypertritons are reconstructed via their two-body decay into an identified helium nucleus and a charged pion. The hypertriton (3ΛH) is a bound state of a proton, a neutron and a Λ hyperon that can be produced via coalescence in pp collisions. Such states provide experimental access to the hyperon–nucleon interaction through measurements of their lifetime and binding energy. Hyperon–nucleon interactions have significant implications for the understanding of astrophysical objects such as neutron stars. For example, the presence of hyperons in the dense inner core can significantly suppress the formation of high-mass neutron stars. As a result, there is some tension between the observation of neutron stars heavier than two solar masses and corresponding hypertriton results from the STAR collaboration at Brookhaven. ALICE seems to have resolved the tension between hypertriton measurements at colliders and neutron stars. An independent confirmation of the ALICE result has up to now been missing, and can be provided by LHCb.
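The kinematic core of this reconstruction is the standard two-body invariant mass. A minimal sketch, using approximate particle masses and leaving out the vertexing and selection steps of the real analysis:

```python
import numpy as np

M_HE3 = 2.80839  # GeV, helium-3 nuclear mass
M_PI = 0.13957   # GeV, charged-pion mass

def two_body_mass(p_he, p_pi):
    """Invariant mass (GeV) of a helium + pion candidate from the two
    3-momenta (GeV). A hypertriton signal peaks near 2.991 GeV."""
    e_he = np.sqrt(M_HE3 ** 2 + np.dot(p_he, p_he))
    e_pi = np.sqrt(M_PI ** 2 + np.dot(p_pi, p_pi))
    p_sum = np.add(p_he, p_pi)
    return np.sqrt((e_he + e_pi) ** 2 - np.dot(p_sum, p_sum))
```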
The invariant-mass distribution of hypertriton and antihypertriton candidates is shown in figure 1. More than 100 signal decays are reconstructed, with a statistical uncertainty on the mass of 0.16 MeV, similar to that of STAR. In a next step, corrections for efficiencies and acceptance obtained from simulation, as well as systematic uncertainties on the mass scale and lifetime measurement, will be derived.
The new helium identification method from LHCb summarised here proves the feasibility of a rich programme of measurements in QCD and astrophysics involving light antinuclei in the coming years. The collaboration also plans to apply the method to other LHCb Run 2 datasets, such as proton–ion, ion–ion and SMOG collision data.
About 90 physicists attended the sixth plenary workshop of the Muon g-2 Theory Initiative, held in Bern from 4 to 8 September, to discuss the status and strategies for future improvements of the Standard Model (SM) prediction for the anomalous magnetic moment of the muon. The meeting was particularly timely given the recent announcement of the results from runs two and three of the Fermilab g-2 experiment (Muon g-2 update sets up showdown with theory), which reduced the uncertainty of the world average to 0.19 ppm, in dire need of a SM prediction at commensurate precision. The main topics of the workshop were the two hadronic contributions to g-2, hadronic vacuum polarisation (HVP) and hadronic light-by-light scattering (HLbL), evaluated either with a lattice–QCD or data-driven approach.
Hadronic vacuum polarisation
The first one-and-a-half days were devoted to the evaluation of HVP – the largest QCD contribution to g-2, whereby a virtual photon briefly transforms into a hadronic “blob” before being reabsorbed – from e+e– data. The session started with a talk from the CMD-3 collaboration at the VEPP-2000 collider, whose recent measurement of the e+e–→ π+π– cross section generated shock waves earlier this year by disagreeing (at the level of 2.5–5σ) with all previous measurements used in the Theory Initiative’s 2020 white paper. The programme also featured a comparison with results from the earlier CMD-2 experiment, and a report from seminars and panel discussions organised by the Theory Initiative in March and July on the details of the CMD-3 result. While concerns remain regarding the estimate of certain systematic effects, no major shortcomings could be identified.
Further presentations from BaBar, Belle II, BESIII, KLOE and SND detailed their plans for new measurements of the 2π channel, which in the case of BaBar and KLOE involve large data samples never analysed before for this measurement. Emphasis was put on the role of radiative corrections, including a recent paper by BaBar on additional radiation in initial-state-radiation events and, in general, the development of higher-order Monte Carlo generators. Intensive discussions reflected a broad programme to clarify the extent to which tensions among the experiments can be due to higher-order radiative effects and structure-dependent corrections. Finally, updated combined fits were presented for the 2π and 3π channels, for the former assessing the level of discrepancy among datasets, and for the latter showing improved determinations of isospin-breaking contributions.
CMD-3 generated shock waves by disagreeing with all previous measurements at the level of 2.5–5σ
Six lattice collaborations (BMW, ETMC, Fermilab/HPQCD/MILC, Mainz, RBC/UKQCD, RC*) presented updates on the status of their respective HVP programmes. For the intermediate-window quantity (the contribution of the region of Euclidean time between about 0.4 and 1.0 fm, making up about one third of the total), a consensus has emerged that differs from e+e–-based evaluations (prior to CMD-3) by about 4σ, while the short-distance window comes out in agreement. Plans for improved evaluations of the long-distance window and isospin-breaking corrections were presented, leading to the expectation of new, full computations of the total HVP contribution, in addition to the BMW result, in 2024. Several talks addressed detailed comparisons between lattice-QCD and data-driven evaluations, which will allow physicists to better isolate the origin of the differences once more results from each method become available. A presentation on possible beyond-SM effects in the context of the HVP contribution showed that it seems quite unlikely that new physics can be invoked to solve the puzzles.
Light-by-light scattering
The fourth day of the workshop was devoted to the HLbL contribution, whereby the interaction of the muon with the magnetic field is mediated by a hadronic blob connected to three virtual photons. In contrast to HVP, here the data-driven and lattice-QCD evaluations agree. However, reducing the uncertainty by a further factor of two is required in view of the final precision expected from the Fermilab experiment. A number of talks discussed the various contributions that feed into improved phenomenological evaluations, including sub-leading contributions such as axial-vector intermediate states as well as short-distance constraints and their implementation. Updates on HLbL from lattice QCD were presented by the Mainz and RBC/UKQCD groups, as were results on the pseudoscalar transition form factor by ETMC and BMW. The latter in particular allow cross checks of the numerically dominant pseudoscalar-pole contributions between lattice QCD and data-driven evaluations.
It is critical that the Theory Initiative work continues beyond the lifespan of the Fermilab experiment
On the final day, the status of alternative methods to determine the HVP contribution was discussed, first from the MUonE experiment at CERN, then from τ data (by Belle, CLEOc, ALEPH and other LEP experiments). First MUonE results could become available at few-percent precision with data taken in 2025, while a competitive measurement would proceed after Long Shutdown 3. For the τ data, new input is expected from the Belle II experiment, but the critical concern continues to be control over isospin-breaking corrections. Progress in this direction from lattice QCD was presented by the RBC/UKQCD collaboration, together with a roadmap showing how, potentially in combination with data-driven methods, τ data could lead to a robust, complementary determination of the HVP contribution.
The workshop concluded with a discussion on how to converge on a recommendation for the SM prediction in time for the final Fermilab result, expected in 2025, including new information expected from lattice QCD, the BaBar 2π analysis and radiative corrections. A final decision for the procedure for an update of the 2020 white paper is planned to be taken at the next plenary meeting in Japan in September 2024. In view of the long-term developments discussed at the workshop – not least the J-PARC Muon g-2/EDM experiment, due to start taking data in 2028 – it is critical that the work by the Theory Initiative continues beyond the lifespan of the Fermilab experiment, to maximise the amount of information on physics beyond the SM that can be inferred from precision measurements of the anomalous magnetic moment of the muon.
High-energy heavy-ion collisions at the LHC exhibit strong collective flow effects in the azimuthal angle distribution of final-state particles. Since these effects are governed by the initial collision geometry of the two colliding nuclei and the hydrodynamic evolution of the collision, the study of anisotropic flow is a powerful way to characterise the production of the quark–gluon plasma (QGP) – an extreme state of matter expected to have existed in the early universe.
To their surprise, researchers on the ALICE experiment have now revealed similar flow signatures in small systems encompassing proton–proton (pp) and proton–lead (pPb) collisions, where QGP formation was previously assumed not to occur. The origin of the flow signals in small systems (and in particular whether the mechanisms behind these correlations share commonalities with heavy-ion collisions) is not yet fully understood. To better interpret these results, and thus to understand the limit of the system size that exhibits fluid-like behaviour, it is important to carefully single out possible scenarios that can mimic the effect of collective flow.
Anisotropic-flow measurements become more difficult in small systems because non-flow effects, such as the presence of jets, become more dominant. Thus, it is important to examine methods where non-flow effects are properly subtracted first. One of the methods, the so-called low-multiplicity template fit, has been widely used in several experiments to determine and subtract the non-flow elements.
The origin of the flow signals in small systems is not yet fully understood
The ALICE collaboration studied long-range angular correlations for pairs of charged particles produced in pp and pPb collisions at centre-of-mass energies of 13 TeV and 5.02 TeV, respectively. Flow coefficients were extracted from these correlations using the template-fit method in samples of events with different charged-particle multiplicities. The method accounts for the fact that the yield of jet fragments increases with particle multiplicity, and it allowed physicists to examine, for the first time, the assumptions made in the low-multiplicity template fit – demonstrating their validity, including with respect to a possible jet-shape modification.
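As a rough illustration of the template-fit idea (a schematic form, not the ALICE implementation; `y_lm`, `dphi` and `y_hm` below are hypothetical inputs), the high-multiplicity pair yield is modelled as a scaled low-multiplicity yield plus a Fourier flow modulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def template_model(y_lm):
    """Schematic low-multiplicity template fit:
    Y(dphi) = F*Y_LM(dphi) + G*(1 + 2*v22*cos(2*dphi) + 2*v33*cos(3*dphi)),
    where Y_LM carries the non-flow (jet-like) shape and the Fourier
    terms describe the long-range flow modulation."""
    def model(dphi, f_scale, g_ped, v22, v33):
        return (f_scale * y_lm(dphi)
                + g_ped * (1 + 2 * v22 * np.cos(2 * dphi)
                             + 2 * v33 * np.cos(3 * dphi)))
    return model

# Hypothetical usage, with y_lm an interpolated low-multiplicity yield
# and (dphi, y_hm) the measured high-multiplicity correlation:
# popt, pcov = curve_fit(template_model(y_lm), dphi, y_hm)
# v2 = np.sqrt(popt[2])  # assuming flow factorisation, v22 = v2 * v2
```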
Figure 1 shows the measurement of two components of anisotropic flow – elliptic (v2) and triangular (v3) – as a function of charged-particle multiplicity at midrapidity (Nch). The data show decreasing trends towards lower multiplicities. In pp collisions, the results suggest that the v2 signal disappears below Nch = 10. The results are then compared with hydrodynamic models. To accurately describe the data, especially for events with low multiplicities, a better understanding of initial conditions is needed.
These results can help to constrain the modelling of initial-state simulations, as the significance of initial-state effects increases for collisions resulting in low multiplicities. The measurements with larger statistics from Run 3 data will push down this multiplicity limit and reduce the associated uncertainties.
Quarks and gluons are the only known elementary particles that cannot be seen in isolation. Once produced, they immediately start a cascade of radiation (the parton shower), followed by confinement, when the partons bind into (colour-neutral) hadrons. These hadrons form the jets that we observe in detectors. The different phases of jet formation can help physicists understand various aspects of quantum chromodynamics (QCD), from parton interactions to hadron interactions – including the confinement transition leading to hadron formation, which is particularly difficult to model. However, jet formation cannot be directly observed. Recently, theorists proposed that the footprints of jet formation are encoded in the energy and angular correlations of the final particles, which can be probed through a set of observables called energy correlators. These observables record the largest angular distance between N particles within a jet (xL), weighted by the product of their energy fractions.
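A sketch of the two-point case (E2C), where xL reduces to the pairwise angular distance ΔR between jet constituents, might look as follows; the helper below is hypothetical, with transverse momentum used as the energy proxy:

```python
import numpy as np
from itertools import combinations

def e2c_histogram(pt, eta, phi, bins):
    """Schematic two-point energy correlator (E2C): histogram the pair
    distance x_L = DeltaR(i, j) over all constituent pairs of one jet,
    each pair weighted by the product of its energy (pT) fractions."""
    pt, eta, phi = map(np.asarray, (pt, eta, phi))
    pt_jet = pt.sum()
    hist = np.zeros(len(bins) - 1)
    for i, j in combinations(range(len(pt)), 2):
        dphi = np.arctan2(np.sin(phi[i] - phi[j]), np.cos(phi[i] - phi[j]))
        x_l = np.hypot(eta[i] - eta[j], dphi)
        k = np.searchsorted(bins, x_l, side="right") - 1
        if 0 <= k < hist.size:
            hist[k] += pt[i] * pt[j] / pt_jet ** 2
    return hist
```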
The CMS collaboration recently reported a measurement of the energy correlators between two (E2C) and three (E3C) particles inside a jet, using jets with pT in the 0.1–1.8 TeV range. Figure 1 (top) shows the measured E2C distribution. In each jet pT range, three scaling regions can be seen, corresponding to three stages in jet-formation evolution: parton shower, colour confinement and free hadrons (from right to left). The opposite E2C trends in the low and high xL regions indicate that the interactions between partons and those between hadrons are rather different; the intermediate region reflects the confinement transition from partons to hadrons.
Theorists have recently calculated the dynamics of the parton shower with unprecedented precision. Given the high precision of the calculations and of the measurements, the CMS team used the E3C over E2C ratio, shown in figure 1 (bottom), to evaluate the strong coupling constant αS. The ratio reduces the theoretical and experimental uncertainties, and therefore minimises the challenge of distinguishing the effects of αS variations from those of changes in quark–gluon composition. Since αS depends on the energy scale of the process under consideration, the measured value is quoted at the Z-boson mass scale: αS = 0.1229 with an uncertainty of 4%, dominated by theory uncertainties and by the jet-constituent energy-scale uncertainty. This value, which is consistent with the world average, represents the most precise measurement of αS using a method based on jet evolution.
Following a decision taken during the June session of the CERN Council to launch a technical design study for a new high-intensity physics programme at CERN’s North Area, a recommendation for experiment(s) that can best take advantage of the intense proton beam on offer is expected to be made by the end of 2023.
The design study concerns the extraction of a high-intensity beam from the Super Proton Synchrotron (SPS) to deliver up to a factor of approximately 20 more protons per year to ECN3 (Experimental Cavern North 3). It is an outcome of the Physics Beyond Colliders (PBC) initiative, which was launched in 2016 to explore ways to further diversify and expand the CERN scientific programme by covering kinematical domains that are complementary to those accessible to high-energy colliders, with a focus on programmes for the start of operations after Long Shutdown 3 towards the end of the decade.
CERN is confident in reaching the beam intensities required for all experiments
To employ a high-intensity proton beam at a fixed-target experiment in the North Area and to effectively exploit the protons accelerated by the SPS, the beam must be extracted slowly. In contrast to fast extraction within a single turn of the synchrotron, which utilises kicker magnets to change the path of a passing proton bunch, slow extraction gradually shaves the beam over several hundred thousand turns to produce a continuous flow of protons over a period of several seconds. One important limitation to overcome concerns particle losses during the extraction, foremost on the thin electrostatic extraction septum of the SPS but also along the transfer line leading to the North Area target stations. An R&D study backed by the PBC initiative has shown that it is possible to deflect the protons away from the blade of the electrostatic septum using thin, bent crystals. “Based on the technical feasibility study carried out in the PBC Beam Delivery ECN3 task force, CERN is confident in reaching the beam intensities required for all experiments,” says ECN3 project leader Matthew Fraser.
Currently, ECN3 hosts the NA62 experiment, which searches for ultra-rare kaon decays as well as for feebly-interacting particles (FIPs). Three experimental proposals that could exploit a high-intensity beam in ECN3 have been submitted to the SPS committee, and on 6 December the CERN research board is expected to decide which should be taken forward. The High-Intensity Kaon Experiment (HIKE), which requires an increase of the current beam intensity by a factor of between four and seven, aims to increase the precision on ultra-rare kaon decays to further constrain the Cabibbo–Kobayashi–Maskawa unitarity triangle and to search for decays of FIPs that may appear on the same axis as the dumped proton beam. Looking for off-axis FIP decays, the SHADOWS (Search for Hidden And Dark Objects With the SPS) programme could run alongside HIKE when operated in beam-dump mode. Alternatively, the SHiP (Search for Hidden Particles) experiment would investigate hidden sectors such as heavy neutral leptons in the GeV mass range and also enable access to muon- and tau-neutrino physics in a dedicated beam-dump facility installed in ECN3.
The ambitious programme to provide and prepare the high-intensity ECN3 facility for the 2030s onwards is driven in synergy with the North Area consolidation project, which has been ongoing since Long Shutdown 2. Works are planned to be carried out without impacting the other beamlines and experiments in the North Area, with first beam commissioning of the new facility expected from 2030.
“Once the experimental decision has been made, things will move quickly and the experimental groups will be able to form strong collaborations around a new ECN3 physics facility, upgraded with the help of CERN’s equipment and service groups,” says Markus Brugger, co-chair of the PBC ECN3 task force.
Innovation often becomes a form of competition. It can be thought of as a race among creative people, where standardized tools measure progress toward the finish line. For many who strive for technological innovation, one such tool is the vacuum gauge.
High-vacuum and ultra-high-vacuum (HV/UHV) environments are used for researching, refining and producing many manufactured goods. But how can scientists and engineers be sure that pressure levels in their vacuum systems are truly aligned with those in other facilities? Without shared vacuum standards and reliable tools for meeting these standards, key performance metrics – whether for scientific experiments or products being tested – may not be comparable. To realize a better ionization gauge for measuring pressure in HV/UHV environments, INFICON of Liechtenstein used multiphysics modelling and simulation to refine its product design.
A focus on gas density
The resulting Ion Reference Gauge 080 (IRG080) from INFICON is more accurate and reproducible when compared with existing ionization gauges. Development of the IRG080 was coordinated by the European Metrology Programme for Innovation and Research (EMPIR). This collaborative R&D effort by private companies and government research organizations aims to make Europe’s “research and innovation system more competitive on a global scale”. The project participants, working within EMPIR’s 16NRM05 Ion Gauge project, considered multiple options before agreeing that INFICON’s gauge design best fulfilled the performance goals.
Of course, different degrees of vacuum require their own specific approaches to pressure measurement. “Depending on conditions, certain means of measuring pressure work better than others,” explained Martin Wüest, head of sensor technology at INFICON. “At near-atmospheric pressures, you can use a capacitive diaphragm gauge. At middle vacuum, you can measure heat transfer occurring via convection.” Neither of these approaches is suitable for HV/UHV applications. “At HV/UHV pressures, there are not enough particles to force a diaphragm to move, nor are we able to reliably measure heat transfer,” added Wüest. “This is where we use ionization to determine gas density and corresponding pressure.”
The most common HV/UHV pressure-measuring tool is a Bayard–Alpert hot-filament ionization gauge, which is placed inside the vacuum chamber. The instrument includes three core building blocks: the filament (or hot cathode), the grid and the ion collector. Its operation requires the supply of low-voltage electric current to the filament, causing it to heat up. As the filament becomes hotter, it emits electrons that are attracted to the grid, which is supplied with a higher voltage. Some of the electrons flowing toward and within the grid collide with free-floating gas molecules circulating in the vacuum chamber, forming ions that then flow toward the collector; the measurable ion current in the collector is proportional to the density of gas molecules in the chamber.
“We can then convert density to pressure, according to the ideal gas law,” explained Wüest. “Pressure will be proportional to the ion current divided by the electron current, [in turn] divided by a sensitivity factor that is adjusted depending on what gas is in the chamber.”
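The gauge reading thus reduces to a one-line formula. A minimal sketch, with purely illustrative numbers rather than INFICON calibration values:

```python
def gauge_pressure(i_ion, i_emission, sensitivity):
    """Ionization-gauge pressure: p = (I_ion / I_e) / S, where S is the
    gas-specific sensitivity factor (here in 1/Pa). Illustrative only."""
    return (i_ion / i_emission) / sensitivity

# e.g. 1e-9 A ion current, 1e-3 A emission current, S ~ 0.1 /Pa (nitrogen-like)
p_pa = gauge_pressure(1e-9, 1e-3, 0.1)  # ~1e-5 Pa, i.e. HV/UHV territory
```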
Better by design
Unfortunately, while the operational principles of the Bayard–Alpert ionization gauge are sound and well understood, its performance is sensitive to heat and rough handling. “A typical ionization gauge contains fine metal structures that are held in spring-loaded tension,” said Wüest. “Each time you use the device, you heat the filament to between 1200 and 2000 °C. That affects the metal in the spring and can distort the shape of the filament, [thereby] changing the starting location of the electron flow and the paths the electrons follow.”
At the same time, the core components of a Bayard–Alpert gauge can become misaligned all too easily, introducing measurement uncertainties of 10 to 20% – an unacceptably wide range of variation. “Most vacuum-chamber systems are overbuilt as a result,” noted Wüest, and the need for frequent gauge recalibration also wastes precious development time and money.
With this in mind, the 16NRM05 Ion Gauge project team set a measurement uncertainty target of 1% or less for its benchmark gauge design (when used to detect nitrogen gas). Another goal was to eliminate the need to recalibrate gas sensitivity factors for each gauge and gas species under study. The new design also needed to be unaffected by minor shocks and reproducible by multiple manufacturers.
To achieve these goals, the project team first dedicated itself to studying HV/UHV measurement. Their research encompassed a broad review of 260 relevant studies. After completing their review, the project partners selected one design that incorporates current best practice for ionization gauge design: INFICON’s IE514 extractor-type gauge. Subsequently, three project participants – at NOVA University Lisbon, CERN and INFICON – each developed their own simulation models of the IE514 design. Their results were compared to test results from a physical prototype of the IE514 gauge to ensure the accuracy of the respective models before proceeding towards an optimized gauge design.
Computing the sensitivity factor
Francesco Scuderi, an INFICON engineer who specializes in simulation, used the COMSOL Multiphysics® software to model the IE514. The model enabled analysis of thermionic electron emissions from the filament and the ionization of gas by those electrons. The model can also be used to ray-trace the paths of generated ions toward the collector. With these simulated outputs, Scuderi could calculate an expected sensitivity factor, which is based on how many ions are detected per emitted electron – a useful metric for comparing the overall fidelity of the model with actual test results.
“After constructing the model geometry and mesh, we set boundary conditions for our simulation,” Scuderi explained. “We are looking to express the coupled relationship of electron emissions and filament temperature, which will vary from approximately 1400 to 2000 °C across the length of the filament. This variation thermionically affects the distribution of electrons and the paths they will follow.”
He continued: “Once we simulate thermal conditions and the electric field, we can begin our ray-tracing simulation. The software enables us to trace the flow of electrons to the grid and the resulting coupled heating effects.”
Next, the model is used to calculate the percentage of electrons that collide with gas particles. From there, the resulting ions can be ray-traced to follow their paths toward the collector. “We can then compare the quantity of circulating electrons with the number of ions and their positions,” noted Scuderi. “From this, we can extrapolate a value for ion current in the collector and then compute the sensitivity factor.”
INFICON’s model did an impressive job of generating simulated values that aligned closely with test results from the benchmark prototype. This enabled the team to observe how changes to the modelled design affected key performance metrics, including ionization energy, the paths of electrons and ions, emission and transmission current, and sensitivity.
The end-product of INFICON’s design process, the IRG080, incorporates many of the same components as existing Bayard–Alpert gauges, but key parts look quite different. For example, the new design’s filament is a solid suspended disc, not a thin wire. The grid is no longer a delicate wire cage but is instead made from stronger formed metal parts. The collector now consists of two components: a single pin or rod that attracts ions and a solid metal ring that directs electron flow away from the collector and toward a Faraday cup (to catch the charged particles in vacuum). This arrangement, refined through ray-tracing simulation with the COMSOL Multiphysics® software, improves accuracy by better separating the paths of ions and electrons.
A more precise, reproducible gauge
INFICON, for its part, built 13 prototypes for evaluation by the project consortium. Testing showed that the IRG080 achieved the goal of reducing measurement uncertainty to below 1%. As for sensitivity, the IRG080 performed eight times better than the consortium’s benchmark gauge design. Equally important, the INFICON prototype yielded consistent results during multiple testing sessions, delivering sensitivity repeatability performance that was 13 times better than that of the benchmark gauge. In all, 23 identical gauges were built and tested during the project, confirming that INFICON had created a more precise, robust and reproducible tool for measuring HV/UHV conditions.
“We consider [the IRG080] a good demonstration of [INFICON’s] capabilities,” said Wüest.
Entanglement is an extraordinary feature of quantum mechanics: if two particles are entangled, the state of one particle cannot be described independently from the other. It has been observed in a wide variety of systems, ranging from microscopic particles such as photons or atoms to macroscopic diamonds, and over distances ranging from the nanoscale to hundreds of kilometres. Until now, however, entanglement has remained largely unexplored at the high energies accessible at hadron colliders, such as the LHC.
At the TOP 2023 workshop, which took place in Michigan this week, the ATLAS collaboration reported a measurement of entanglement using top-quark pairs with one electron and one muon in the final state, selected from proton–proton collision data recorded during LHC Run 2 at a centre-of-mass energy of 13 TeV – opening new ways to test the fundamental properties of quantum mechanics.
Two-qubit system
The simplest system that gives rise to entanglement is a pair of qubits, as in the case of two spin-1/2 particles. Since top quarks are typically produced in top–antitop pairs (tt̄) at the LHC, they represent a unique high-energy example of such a two-qubit system. The extremely short lifetime of the top quark (about 10⁻²⁵ s, which is shorter than the timescales for hadronisation and spin decorrelation) means that its spin information is directly transferred to its decay products. Close to threshold, the tt̄ pair produced through gluon fusion is almost in a spin-singlet state, i.e. maximally entangled. By measuring the angular distributions of the tt̄ decay products close to threshold, one can therefore determine whether the tt̄ pair is in an entangled state.
For this purpose, a single observable, D, can be used as an entanglement witness. It can be measured from the distribution of cosφ, where φ is the angle between the charged-lepton directions in each of the parent top and anti-top rest frames, with D = −3⋅⟨cosφ⟩. The entanglement criterion is given by D = tr(C)/3 < −1/3, where tr(C) is the sum of the diagonal elements of the spin-correlation matrix C of the tt̄ pair before hadronisation effects occur. Intuitively, this criterion can be understood from the fact that tr(C) is the expectation value of the product of the spin polarisations, tr(C) = ⟨σ⋅σ̄⟩, with σ and σ̄ being the t and t̄ polarisations, respectively (classically tr(C) ≤ 1, since spin polarisations are unit vectors). D is measured in a region where the invariant mass of the tt̄ pair is approximately twice the top-quark mass, 340 < mtt̄ < 380 GeV, and the measurement is performed at particle level, after hadronisation effects occur.
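A minimal sketch of how the witness could be estimated from a set of per-event cosφ values (illustrative only; the actual analysis first applies a simulation-derived calibration, as described below):

```python
import numpy as np

def entanglement_witness(cos_phi):
    """Estimate the witness D = -3 * <cos(phi)> from an array of
    per-event cos(phi) values, where phi is the angle between the two
    charged-lepton directions in their parent top rest frames.
    D < -1/3 signals spin entanglement of the t-tbar pair."""
    cos_phi = np.asarray(cos_phi, dtype=float)
    d = -3.0 * cos_phi.mean()
    d_err = 3.0 * cos_phi.std(ddof=1) / np.sqrt(cos_phi.size)  # statistical only
    return d, d_err
```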
This constitutes the first observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement
The shape of the cosφ distribution is distorted by detector and event-selection effects, for which it must be corrected. A calibration curve connecting the value of D before and after event reconstruction is extracted from simulation and used to derive D from the corresponding measurement, which is then compared to predictions from state-of-the-art Monte Carlo simulations. The measured value, D = −0.547 ± 0.002 (stat.) ± 0.021 (syst.), lies well beyond 5σ from the non-entanglement hypothesis. This constitutes the first-ever observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement to date.
Apart from the intrinsic interest of testing entanglement under unprecedented conditions, this measurement paves the way to use the LHC as a novel facility to study quantum information. Prime examples are quantum discord, which is the most basic form of quantum correlations; quantum steering, which is how one subsystem can steer the state of the other one; and tests of Bell’s inequalities, which explore non-locality. Furthermore, borrowing concepts from quantum information theory inspires new approaches to search for physics beyond the Standard Model.
Electronics engineer Oscar Barbalat, who pioneered knowledge-transfer at CERN, died on 8 September 2023, aged 87.
Born in Liège, Belgium in 1935, Oscar joined CERN in 1961, working initially in the Proton Synchrotron (PS) radio-frequency (RF) group. At the time, the PS beam intensity was still below 109 protons per pulse and the beam-control system was somewhat difficult to master, even though the operations consisted mainly of striking internal targets at 24 GeV/c. The control system became increasingly complex when the PS slow-resonant extraction system of Hugh Hereward was put into service. As part of a team of expert accelerator physicists that included Dieter Möhl, Werner Hardt, Pierre Lefèvre and Aymar Sörensen, Oscar wrote a substantial FORTRAN simulation program to understand how the extraction efficiency depended on its numerous correlated parameters.
In the 1970s, the PS division set out to digitise the controls of all PS subsystems (Linac, PS Booster, RF, beam transport systems, vacuum system, beam observation, etc). These subsystems used independent control systems, which were based on different computers or operated manually. Oscar was tasked with devising a structured naming scheme for all the components of the PS complex. After producing several versions, in collaboration with all the experts, the fourth iteration of his proposed scheme was adopted in 1977. To design the scheme, Oscar used the detailed knowledge he had acquired of the accelerator systems and their control needs. His respectful and friendly but tenacious way with colleagues enabled him to explore their desires and problems, which he was then able to reconcile with the needs of the automated controls. Oscar was modest. In the acknowledgements of his naming scheme, he wrote: “This proposal is the result of numerous contributions and suggestions from the many members of the division who were interested in this problem and the author is only responsible for the inconsistencies that remain.”
On Giorgio Brianti’s initiative, following interest from the CERN Council’s finance committee, the “Bureau de Liaison pour l’Industrie et la Technologie” (BLIT) was founded, with Oscar in charge. His activity began in 1974 and ended on his retirement in 1997. His approach to this new task was typical of his and CERN’s collaborative style: low-key and constructive. He was eager to inform himself of details, and he had a talent for explaining technical aspects to others. It helped that he was well educated, with broad interests in people, science, technology and languages, as well as in cultural and societal matters. He built a network of people who helped him and whom he convinced of the relevance of sharing technological insights beyond CERN.
After more than 20 years developing this area, he summarised the activities, successes and obstacles in Technology Transfer from Particle Physics, the CERN Experience 1974–1997. When activities began in the 1970s, few considered the usefulness of CERN technologies outside particle physics as a relevant objective. Now, CERN prominently showcases its impact on society. After his retirement, Oscar continued to be interested in CERN technology-transfer, and in 2012 he became a founding member of the international Thorium Energy Committee (iThEC), promoting R&D in thorium energy technologies.
No doubt, Oscar was the pioneer of what is now known as knowledge transfer at CERN.